text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Taking aim at the wino-higgsino plane with the LHC
In this work we explore multiple search strategies for higgsinos and mixed higgsino-wino states in the MSSM and project the results onto the $(\mu,M_2)$ plane. Assuming associated production of higgsino-like pairs with a $W/Z$ boson, we develop a search in a channel characterized by a hadronically tagged vector boson accompanied by missing energy. We use as our template an ATLAS search for dark matter produced in association with a hadronically decaying vector boson, but upgrade the search by implementing a joint likelihood analysis, binning the missing transverse energy distribution, which greatly improves the search sensitivity. For higgsino-like states (more than 96% admixture) we find sensitivity to masses up to 550 GeV. For well-mixed higgsino-wino states (70-30% higgsino) we still find sensitivities above 300 GeV. Using this newly proposed search, we draw a phenomenological map of the wino-higgsino parameter space, recasting several complementary searches for disappearing tracks, soft leptons, trileptons, and hadronic diboson events in order to predict LHC coverage of the $(\mu,M_2)$ mass plane at integrated luminosities of up to $3\,\text{ab}^{-1}$. Altogether, the full run of the HL-LHC can exclude much of the"natural"($\mu,M_2<$ 500 GeV) wino-higgsino parameter space.
I. INTRODUCTION
The Large Hadron Collider (LHC) has probed deeply into the low-mass parameter space of supersymmetry (SUSY).Gluinos are bounded below 2 TeV and squark mass bounds are not much behind at 1.6 TeV [1][2][3][4].Despite progress in searches for color-charged states, however, bounds on weakly interacting SUSY particles are not strong.In particular, bounds on electroweakinos with compressed mass spectra (degenerate or nearly degenerate) leave many unconstrained regions of parameter space.
The choice of collider search for electroweakinos depends on the region of parameter space to be probed, and especially on the mass splitting between the lightest neutralino χ0 1 and lightest chargino(s) χ± 1 or the nextlightest neutralino χ0 2 .This splitting is in turn heavily dependent on the gauge-eigenstate composition of the light electroweakinos, which influences the nature of the search conducted.We suppose throughout this work that the bino with mass M 1 is decoupled and focus on the wino-higgsino mass plane.In this regime, for extremely wino-like particles, small mass splitting between charged and neutral states means there is sure to be a long-lived charged particle in events, motivating searches for long-lived charged tracks [21][22][23][24] or soft displaced tracks [25].For states with larger mass splittings between particles, on the other hand, searches with soft leptons may apply [26][27][28][29][30][31][32].But there is a large gap in this search space where electroweakinos that are predominantly higgsino-like -or a well-tempered mixture of wino-higgsino content -where there are no long-lived charged tracks, and small splittings ensure mass degenerate states are more likely to appear as invisible particles.This window covers a large region in the (µ, M 2 ) mass plane of fundamental parameters.In this case a new search strategy is needed to improve coverage of the electroweakino parameter space.
For intermediate mass splittings between χ0 1 and χ± 1 or χ0 2 , the chargino and second-to-lightest neutralino states may be produced and decay with products so soft as to be considered missing energy by the search.In this case it is possible to trigger on the decay of a single heavy vector boson produced in association with electroweakino pairs.We choose to search for the heavy boson(s) in a hadronically tagged channel, continuing a line of inquiry begun in [33,34] targeted at rare and hard-to-constrain SUSY signals. 1 Our hadronic mono-boson analysis is based on a search by the ATLAS Collaboration for jets accompanied by missing transverse energy (E miss T ) [36], which we extend by performing a joint-likelihood analysis using the E miss T distributions.Our strategy significantly improves the sensitivity of the original ATLAS search to electroweakino pair production and allows us to close an existing hole in the electroweakino parameter space not covered by other searches.
The aim of this work is to project the bounds from this mono-boson channel to the (µ, M 2 ) plane, and compare it with the other channels at the LHC today and in the future.To this end, alongside our own analysis, we reinterpret four existing analyses that are expected to be sensitive to wino-higgsino LSP scenarios.These searches are in channels characterized by disappearing tracks [23], soft leptons [28], three leptons accompanied by missing transverse energy [37], and two hadronically decaying vector bosons with missing energy [38].We present a phenomenological map of the wino-higgsino mass plane detailing which searches are most sensitive at present and for the projected 3 ab −1 HL-LHC run.We find complementarity between the searches, with the monoboson search able to cover a sizable region of parameter space.We expect the full run of the HL-LHC to probe or exclude almost all of the "natural" (small-µ) winohiggsino parameter space.This paper proceeds as follows.In Section II we review the masses and splitting of electroweakinos in the MSSM.Section III concerns the electroweakino parameter space and searches that cover its various regions.In Section IV we describe our hadronic mono-boson search strategy.Section V presents results of a sensitivity search for the HL-LHC.Section VI concludes.
II. WINO-HIGGSINO SPECTRA IN THE MSSM
We begin with a brief review of the spectrum of the electroweakinos relevant to our study.Concretely, since we are interested in higgsinos, higgsino-wino admixtures, and winos, we focus on the hierarchy µ, M 2 ≪ M 1 .A higgsino state corresponds to µ < M 2 , a well-mixed state to µ ∼ M 2 , and a wino state to µ > M 2 .In the higgsino limit, µ < M 2 , the eigenvalues of M χ0 may be approximated in order of increasing mass as [39] for µ > 0, a choice we adopt in this work.In the light wino limit (still with M 1 decoupled), the mass ordering in (1) changes from {1, 2, 3, 4} to {3, 1, 2, 4}.Similarly, for charginos in the higgsino limit, the mass eigenvalues are approximately with hierarchy flipped in the wino limit.Note that in the deep higgsino region, m χ0 1 ∼ m χ± ; as we will see, the mass difference between the lightest chargino and lightest neutralino, given by will play an important role in our search strategy.
In this work we are concerned with states that have a naturally small mass difference between charginos and neutralinos, as this is the most technically challenging part of the electroweakino parameter space to probe experimentally.Both wino-like and higgsinolike neutralinos feature some naturally small mass differences, with the former scenario exhibiting nearly degenerate { χ0 1 , χ± 1 } and the latter providing nearly triply degenerate { χ0 1 , χ± 1 , χ0 2 }.Therefore in this work we consider scenarios where M 1 is very large, leaving us with wino-or higgsino-like (light) neutralino parameter space.For the purposes of this work, we fix tan β = 10, which is a common choice but not particularly important for the electroweakino splittings: for instance, raising tan β to values as large as 100 shifts the physical masses by O(1) GeV but has negligible effects on the mass differences.This choice leaves us with only µ and M 2 as adjustable parameters.
The importance of small mass splittings in this analysis requires us to go beyond leading-order calculations of the electroweakino masses, since one-loop corrections to light masses can approach 10% of the leading-order results [40].We employ SPheno version 4.0.5 [41][42][43] to compute the mass spectra (and mixing matrices) for a large number of points in the (µ, M 2 ) plane.In Figure 1 we show the mass of the lightest neutralino χ0 1 in this plane for tan β = 10 and M 1 = 5 TeV.
Both the content of the lightest neutralino and the magnitude of the mass splitting between the lightest chargino and the lightest neutralino vary over the mass plane.In Figure 2a we show the higgsino content of the lightest neutralino over the (µ, M 2 ) plane for our benchmark values of M 1 and tan β.In Figure 2b we show the mass difference ∆m between the lightest chargino and the lightest neutralino in the same plane.We see that in the wino-like region with µ ≫ M 2 , the mass splitting between chargino and wino is very small, less than 1 GeV.
In the higgsino-like region, the mass splitting is still small on an absolute scale but varies from one to a few GeV.
There is also a well-mixed region in which the higgsino content varies from 30-70%.
The mass of the second-lightest neutralino χ0 2 also varies dramatically over the parameter space.Figure 3 shows the mass splitting between the lightest and secondlightest neutralinos; that is, m χ0 2 − m χ0 1 , in the (µ, M 2 ) plane.We see that for higgsino-like states the mass splitting is small.As we transition across the mass plane to well-mixed and wino-like states the mass splitting increases to O(100) GeV and more.As we will later see, the production and decay of χ0 2 will also greatly influence the neutralino searches.
III. PROBING (µ, M2) WITH MULTIPLE SEARCH STRATEGIES
We now consider the LHC phenomenology of the light electroweakinos in our parameter space.While lightest neutralinos χ0 1 invariably appear in the detector as invisible particles, the charginos may decay visibly or invisibly.For small mass differences, the chargino decay proceeds through an off-shell W , χ± 1 → χ0 1 + W ± * .Exactly how these decays appear in the detector, hence how best to probe the charginos experimentally, depends sensitively on the mass splitting.We identify three ∆m regimes: (A) Long-lived charginos, ∆m ≲ 1 GeV; (B) Charginos making soft leptons, ∆m ≳ 4 GeV; and (C) Invisible charginos, 1 ≲ ∆m ≲ 4 GeV; each best suited to a unique search strategy.In Figure 4 we show the parameter space plane for higgsino-and wino-like LSPs with M 1 = 5 TeV.In this plane we demarcate the chargino-neutralino mass splitting in order to sketch the parameter space best suited to the search strategies detailed below.We have also marked in orange the threshold above which the LSP is higgsino-like, which we define as greater than 96% higgsino content.
In the following discussion, we overview the search strategies in these three regimes.The details of the mono-boson search, which is our main result, will be explained in the next section.
A. Nearly degenerate charginos: the long-lived particle region For the smallest mass splittings, the decay products are very soft, so detecting production of pairs such as χ± 1 , i, j = 1, 2, cannot rely on hard leptons or jets.We see in Figure 4 that under the black dashed line the mass splitting between the lightest chargino and lightest neutralino is under 1 GeV.There is a portion of this winolike LSP parameter space where the lightest chargino lives long enough to produce a track in a detector, and so searches for long-lived tracks are expected to give the best mass bounds on LSPs.
An applicable search of this type was performed by the CMS Collaboration using L = 101 fb −1 of Run 2 data and published as CMS-EXO-19-010 [23].This search targets long-lived charged particles, like our charginos, exhibiting "disappearing" tracks that leave the interaction region but do not extend to the outermost region of the tracking detector.A track is defined to disappear if it has at least three missing outer hits in the tracker and if the total calorimeter energy within ∆R = 0.5 of the track is less than 10 GeV.This search applies to charged particles with lifetimes in the range τ ∈ [0.3, 333] ns (the low end of this range is self explanatory; the high end is a practical limit past which charged particles live too long and their tracks do not disappear before the edge of the tracker. In the absence of an excess, CMS imposed limits on chargino production in a few supersymmetric scenarios, including models with higgsino-and wino-like electroweakinos, the latter of which is appropriate for our analysis.This search has moreover been implemented within the MadAnalysis 5 (MA5) framework [44][45][46] and made available on the MA5 Public Analysis Database (PAD) [47,48].
In order to reinterpret the CMS results within our parameter space, we use MadGraph5_aMC@NLO (MG5_aMC) version 3.1.0[49] to produce a number of electroweakino pairproduction samples in the pink region of the (µ, M 2 ) plane depicted in Figure 4.These samples need to be relatively large, each containing 2.5 × 10 5 events, to maintain statistical control given the very low efficiencies characteristic of this search [50].We simulate showering and hadronization with Pythia 8 version 8.245 [51], which also handles the decays of the electroweakinos.We extract the electroweakino decay widths and branching fractions from SPheno version 4.0.5, mentioned in Section II as the generator of our mass spectra.The widths, like the masses, are accurate to one-loop order, which is crucial for e.g.χ± 1 decays to pions [52].To set the normalization of the samples, we use Resummino version 3.1.2[53] to compute the total cross sections of lightest chargino and/or LSP pair production for √ s = 13 TeV and √ s = 14 TeV at approximate nextto-next-to-leading-order accuracy in the strong coupling with threshold resummation at next-to-next-to-leading logarithmic accuracy (aNNLO + NNLL).These showered and hadronized event samples are then passed to MA5 version 1.9.60, which uses the Simplified Fast Detector Simulation (SFS) module [48] to simulate the response of the CMS detector and calls FastJet version version 3.3.3for object reconstruction [54].When MA5 is provided with the signal cross sections, it computes not only the upper limit at 95% confidence level (C.L.) [55] on the cross section of any bSM signal, given the efficiencies returned in each signal region of the analysis, but also the signal confidence level CL s of the particular signal given by the user, such that the signal is excluded if CL s = 0.05.The recasting capabilities of MA5 moreover include higher-luminosity estimates, which rescale the signal and background yields linearly with luminosity and rescale the yield uncertainties according to the user's preferences [56].We use this module to provide sensitivity estimates for the L = 3 ab −1 run of the HL-LHC.For this exercise, we use cross sections computed at a center-of-mass energy of √ s = 14 TeV, but we use the same √ s = 13 TeV event samples due to the significant computational resources required to produce the samples discussed here.The results of this reinterpretation, along with those described below, are discussed in Section V.
B. Locally maximal chargino splitting: the soft lepton region
In the region above the black dashed line the chargino-LSP splitting exceeds 1 GeV.Both higgsino-like and mixed wino-higgsino regions in this parameter space have small ∆m, but in the region where both µ and M 2 are small, the splitting attains a local maximum.For our benchmark with tan β = 10, the maximum mass difference is ∆m ∼ 6 GeV.In Figure 4, we have marked the ∆m = 4 GeV threshold with the black dotted line.In the region enclosed to the left of this curve, there may be soft but detectable leptons from chargino decay.Meanwhile, adjoining the same region in parameter space, the mass splitting m χ 0 2 − m χ 0 1 between the two lightest neutralinos becomes appreciable.On the plot we have marked with a dashed green line the region under which this mass splitting is greater than 8 GeV.Roughly between the line demarcating the higgsino-like LSP region and this green dashed line, we expect small but relevant lepton momentum from decays through offshell W/Z bosons.In this space, the electroweakino spectrum is still "compressed", but leptons resulting from χ0 2 decays -while quite soft -have enough momentum in principle to be detected at LHC.We therefore expect searches for events with soft leptons to impose non-trivial limits in this region.
One such soft-lepton search was carried out by CMS using L = 35.9fb −1 of Run 2 data and was published as CMS-SUS-16-048 [28].This search notably requires two leptons with transverse momentum p T < 30 GeV and, finding no excesses, was used to constrain several benchmark supersymmetric models with electroweakino mass splitting of O(1-10) GeV.One of the constrained scenarios features compressed higgsino-like electroweakinos, but a priori this analysis could be sensitive to well-mixed species.We therefore make use of the public implementation of this analysis in the MadAnalysis 5 PAD, according to a workflow similar to that discussed above for the disappearingtracks analysis, to reinterpret the soft-lepton search in our parameter space and to compute HL-LHC sensitivity estimates.
Before we move on, it is worth noting that searches for electroweakinos in final states with three leptons also analyze χ± 1 and χ0 2 production, with leptons resulting from the decay of these states to the LSP.Current trilepton analyses are capable of sensitivity in regions where the χ0 2 -χ0 1 mass splitting is as little as a few GeV, which overlaps with our soft-dilepton region [37].Therefore, as we demonstrate in Section V, trilepton searches are capable of imposing some limits in this area.
C. The invisible chargino region
In the case of a mass splitting just large enough for the charginos to decay promptly, but not large enough to produce hard particles to trigger on, both charginos and neutralinos are effectively invisible.In this parameter space we must rely on an alternate strategy: the production of light electroweakinos -recorded as missing transverse energy E miss T -along with an on-shell vector boson, pp → χ χ + W/Z.Here, the on-shell boson decays hadronically and may be tagged.This search is best suited for higgsino-like or higgsino-wino-like LSPs (those outside of the deep wino region) for the following reasons.
• In the higgsino-like region, there are three nearly degenerate electroweakinos { χ0 1 , χ0 2 , χ± 1 }.Due to the softness of their decay products, all three of these states may appear as invisible particles in the detector, and any pair of these particles may be produced in association with an on-shell W/Z boson.
• As we can see from Figures 2b and 3, as we move into the more well-mixed region the chargino remains mass degenerate with the lightest neutralino, but χ0 2 achieves greater mass splitting.As the mass splitting grows to O(10) GeV, χ0 2 is no longer an invisible particle and therefore the total production cross section for our invisible process plus a gauge boson is apparently diminished.
• But farther toward the wino region, where the χ0 2 splitting exceeds the mass of the W or Z, χ0 2 may decay to χ0 1 or χ± 1 through an on-shell vector boson.This gives us the process pp → χ0 with a hard vector boson radiated as a decay product in the final state.The jet(s) produced by hadronically decaying vector bosons should be correspondingly hard.
The red regions in Figure 4 ("E miss T + J") are therefore roughly where we expect a hadronic mono-boson search to set the best limits.In the well-mixed region, the monoboson analysis should complement not only the CMS soft-lepton search detailed above but also conventional searches in channels with more than one hadronic vector boson [38] or with multiple leptons [37,[57][58][59].A quantitative comparison verifying this notion is available in Section V.
We note here, in advance of our detailed discussion of the mono-boson analysis, that the most recent limits from conventional monojet searches have historically been weaker and have less coverage of the (µ, M 2 ) plane than those from this mono-boson analysis.For this discussion, we refer to monojet limits on direct pair production of electroweakinos, assuming that the squarks are sufficiently heavy that monojet limits on electroweakinos due to pair production of light squarks do not apply.The situation for light electroweakinos was discussed in [60] with respect to the Run 1 ATLAS monojet search [61] and in [34] for the most recent Run 2 ATLAS monojet search [62].However, in the time since [34] was released, both this ATLAS analysis and its CMS counterpart, the monojet subanalysis in CMS-EXO-20-004 [63], have been implemented in MadAnalysis 5, and moreover a thorough analysis of monojet constraints on higgsinos has been released very recently [64].While the ATLAS search remains weak, and monojet constraints on winos are expected to be superseded by disappearingtrack limits, the limits derived from a combination of the CMS monojet signal regions are competitive with our mono-boson limits for µ ≪ M 2 .We therefore discuss the interplay between mono-boson and monojet higgsino limits in greater detail in Section V.
Our mono-boson analysis upgrades an existing search by the ATLAS Collaboration [36] based on a partial LHC Run 2 dataset with integrated luminosity L = 36.1 fb −1 .Mono-boson searches were originally conceived for fermionic dark matter models, e.g.[65].This ATLAS search targets single on-shell hadronicallydecaying vector bosons, produced in association with invisible particles.The typical event topology features significant missing energy along with either ≥ 1 fat jet or ≥ 2 narrow jets.Here we discuss in greater detail the signals probed by this analysis before reviewing the ATLAS selections and detailing our enhanced analysis.
A. Compressed electroweakino pair + hadronic W/Z production at LHC For this search, we consider hadronic collider processes of the form pp → χ χ + V , where χ = { χ0 1 , χ0 2 , χ± 1 } and where V = {W ± , Z}. Figure 5 shows schematic diagrams of the relevant processes, which we enumerated in Section III.In such processes, the momenta of the visible decay products depend heavily on the hardness of the associated vector bosons, which in turn is dependent on the mass splitting between the LSP and the lightest chargino χ± 1 or second-lowest-mass neutralino χ0 2 .As established above, the regions of interest are the pure higgsino region, where χ0 1 is 96% higgsino or higher, and the well-tempered higgsino-wino region where χ0 FIG. 5: Representative parton-level diagrams for some channels considered in this work.
In Figure 6 we have plotted typical production cross sections for pairs of light electroweak gauginos produced in association with W/Z vector bosons in a slice of the higgsino-like parameter space.We specifically show the LHC production cross sections for √ s = 13 TeV as a function of µ with M 2 fixed at 1 TeV.These results are given at LO and aNNLO + NNLL, as discussed in Section III, and exhibit moderate K factors in the range K ∼ (1.1, 1.3), typical for such processes [66].We see that generically production with an associated W boson has the highest cross section.We have also included the cross section of associated production with a Higgs boson h in order to demonstrate that its rate is much smaller than the mono-V processes.The event selection criteria in the ATLAS mono-W/Z search are given in Table I.As mentioned above, this search looks for events with large missing transverse energy (E miss T ) that contain either a large-R jet (classified as merged topology) or two distinct narrow jets (resolved topology), with dijet invariant mass around that of the W/Z bosons.Jets are clustered according to the anti-k t algorithm [67] with radius parameter R = 1.0 (large-R) or R = 0.4 (narrow).In both topologies any events with reconstructed leptons are rejected.In order to suppress multijet backgrounds, the azimuthal separation between the E miss T vector and the large-R jet is required to be larger than 2π/3 in the merged topology; the same criteria applies in the resolved topology, with the large-R jet replaced by the two-highest-p T -jets system.In addition, the track-based missing transverse momentum ⃗ p miss T , defined as the negative vector sum of the transverse momenta of tracks with p T > 0.5 GeV and |η| < 2.5, is
Merged topology
Resolved topology required to be larger than 30 GeV and its azimuthal angle to be within π/2 of that of the calorimeter-based E miss T .In the resolved topology, the highest-p T jet is required to have p T > 45 GeV and the sum of p T of the two (three) leading jets is required to exceed 120 (150) GeV.
In addition to the above requirements, in the merged topology any b-tagged jet outside the large-R jet is rejected.The signal regions are further classified by the number of b-tagged jets (0 or 1) and the purity (defined in terms of p T requrements on the substructure variable D (β=1) 2 [68]) of the large-R jet to be tagged as originating from a hadronic vector boson decay.In both signal regions of the resolved topology, the angular separation ∆R = (∆ϕ)2 + (∆η) 2 between the two leading jets and the invariant mass m jj of the two leading jets is required to be smaller than 1.4 and within a range [65,105] GeV, respectively.
In the absence of a discovery, ATLAS imposes limits on an array of BSM scenarios that produce hadronic monoboson + E miss T signals, including exotic invisible Higgs boson decays and vector Z ′ + dark matter production.An elementary step would be to straightforwardly reinterpret the ATLAS results for electroweakino pairs within our realistic MSSM parameter space, as discussed above.But, as demonstrated in previous work [34], we can improve upon a simple recast by exploiting the E miss T distributions, which are provided by ATLAS for the observed data and fitted SM background processes. 2The backgrounds considered by ATLAS include t t production, SM W/Z + jets processes (both quite large), and diboson and single-t processes (much smaller).Of these, W/Z + jets is the dominant background in all signal regions requiring zero b-tagged jets -which are a priori most relevant to our electroweakino signals because bottom quarks only appear in ∼ 15% of the χ χ + Z events, themselves subdominant to χ χ + W ± ; viz. Figure 6.
To execute an analysis based on this ATLAS search, we generate χ χ+V events using MadGraph5_aMC@NLO version 2.7.2 and simulate showering and hadronization with Pythia 8 version 8.245 [51].
The signal normalizations are given by the cross sections discussed above.We use Delphes 3 version 3.4.2[69] as our detector simulator.We modify the default ATLAS Delphes card to include a collection of large-R jets in addition to the standard R = 0.4 jets.Pile-up is controlled by trimming from large-R jets all R = 0.2 sub-jets with p T below 5% of the original jet p T [70].The energy fractions of chargino tracks in the electromagnetic and hadronic calorimeters are set to zero since, in the model parameter space, the charginos decay too promptly to deposit energy in the calorimeters.To appropriately capture the physical transition from the higgsino-like region to the well-mixed region, where the neutralino splitting becomes too great for χ0 2 to be appropriately recorded as missing energy, we veto the production of the second lightest neutralino χ0 2 , at the generator level, wherever m χ0 2 − m χ0 1 > 8 GeV.Finally, since the selection criteria in the analysis [36] is adjusted such that the efficiency is 50% independent of jet p T [68], we treat half of the events with a large-R jet as high-purity (HP) events, and the rest are classified into the low-purity (LP) regions.The selections in Table I are imposed on our event samples, and their efficiencies computed, by an in-house C code used to call the ExRootAnalysis library.
The merged-topology high-purity signal region with zero b-tagged jets, 0b-HP, turns out to be most sensitive to our electroweakino signals.This is due in large part to its powerful suppression of the W/Z + jets backgrounds mentioned above.The 0b-HP selection is effective at cutting away these backgrounds because their missing energy is generated by leptonically decaying vector bosons, hence -for events passing the stringent E miss T > 250 GeV selection -the large-R jet requirement in the high-purity region can only be satisfied by accidental reconstruction from the QCD multijet background.Since we have found consistently, beginning with even earlier work [33], that the 0b-HP signal region gives the strongest bounds, we focus on this region in what follows.
We now return to the E miss T distributions, which are our point of departure from the original ATLAS analysis.In Figure 1 of our previous work [34], for illustrative purposes, we compared the E miss T distributions in the 0b-HP signal region for data and SM background to the χ χ + V signal in two higgsino-like LSP scenarios with µ = 200 GeV and µ = 500 GeV.The missing energy recorded in 0b-HP events is divided into eight bins of increasing width between 200 and 1500 GeV, with the last bin E miss T ∈ [800, 1500] GeV.To obtain the binned yields for those signals, additional selections corresponding to these E miss T bins were added to our analysis code at that time.Crucially, we found that the background E miss T falls more quickly than that of the higgsino signals.We now find similar behavior in wino-like LSP scenarios.This implies that more stringent cuts on E miss T may produce improved sensitivity to progressively heavier electroweakinos throughout the (µ, M 2 ) space with suitable mass splitting(s).
For this work, with the yields computed (including the E miss T binning) for our signals throughout the (µ, M 2 ) plane, we perform a joint-likelihood analysis assuming Poisson-distributed data and Gaussian backgrounds such that the likelihood function takes the form [71] The yield (data) in each bin i is m i .The signal yield according to an alternate hypothesis is s i with strength modifier µ.The background distribution in each bin is centered at ⟨b i ⟩ and has uncertainty σ b,i .We use the joint likelihood to compute the test statistic where b = b(µ) in Eq. ( 5) is the conditional maximumlikelihood (ML) estimator of the likelihood for a given µ and the pair (μ, b) are the unconditional ML estimators [72].The one-sided limit at 95% C.L. is then given in terms of (5) by where Φ is the cumulative distribution function of the normal distribution with zero mean and unit variance and n obs is the true number of events surviving the experimental selection [73].In addition to computing the sensitivity of the search given the real data, we make rough sensitivity projections for HL-LHC by rescaling the yields by a factor R(L) = L/(36.1 fb −1 ) and the (background) uncertainties by a factor R(L).We then compute the median significance for exclusion and discovery of our signal according to taking Z excl = 2 and Z disc = 5 as our exclusion and discovery thresholds.
V. RESULTS
We now present results as exclusions in the (µ, M 2 ) parameter space.Following our work in reference [34] we determine the statistical significance of the monoboson analysis by constructing a joint-likelihood function from our binned missing energy analysis within the 0b-HP signal region.The limits from this search and several others are displayed in Figure 7.In this figure the green contour lines show a few distinct lightest neutralino masses m χ0 1 .There is a shaded region in the background in which the mass difference between the lightest neutralino χ0 1 and the lightest chargino χ± 1 , first seen in Figure 2b, is between 1.5 GeV and 4.0 GeV.The red shaded region indicates limits from the mono-boson search at 95% C.L. in the mass plane for the original L = 36.1 fb −1 dataset, while the thin and thick red contours represent exclusion projections for (respectively) the full Run 2 dataset of L = 139 fb −1 and the HL-LHC with √ s = 14 TeV and L = 3 ab −1 .
In the upper region of this plot, for higgsino-like LSPs, the projected sensitivity hovers around 150 GeV for the Run 2 LHC dataset with integrated luminosity L = 139 fb −1 and is anticipated to reach over 550 GeV for the HL-LHC run with L = 3 ab −1 .
The 5σ discovery sensitivity for HL-LHC, which was calculated but is omitted from the plot for visual clarity, is around 300 GeV.As long as M 2 is sufficiently above µ, the lightest neutralino mass -and therefore the lower mass bound -is relatively independent of M 2 , since the neutralino maintains a sufficiently higgsino-like admixture.In the mixed wino-higgsino region, we project that the 139 fb −1 dataset has exclusion sensitivity up to m χ0 1 ∼ 200 GeV.The HL-LHC exclusion sensitivity reaches past 600 GeV for these well-mixed states, with 5σ discovery sensitivity at around 450 GeV.It is evident that the limit strengthens both in the pure higgsino region and in the well-mixed region, with a noticeable dip between these regions.This can be explained by considering that for fixed µ, as M 2 decreases, the mass difference between χ0 2 and χ0 1 (not shown here, but plotted in Figure 3) increases such that χ0 2 decay through off-shell gauge bosons with appreciably hard decay products no longer appear as invisible particles contributing to the search, while the corresponding decay through an on-shell vector boson illustrated in Figure 5 has not yet "turned on" sufficiently to contribute to the V + E miss T channel.
As mentioned in Section III, we wish to compare the sensitivity of the mono-boson search to long-lived track and soft-lepton searches that may be more powerful in parameter space with different electroweakino spectra.Limits from CMS-EXO-19-010 and CMS-SUS-16-048 are therefore included in Figure 7 as violet and blue regions/contours, respectively.These shaded regions, and all of their counterparts discussed below, denote observed limits.In analogy with the mono-boson search, shaded regions indicate current limits at 95% C.L. and solid curves represent HL-LHC projections computed using MadAnalysis 5 (viz.Section III).The logic discussed in that section is borne out in Figure 7: the two CMS searches constrain parameter space complementary to that probed by the mono-boson search.In particular, the mono-boson sensitivity gap between the higgsino-like and well-mixed regions, discussed just above, is filled to some extent by the soft dilepton analysis.Meanwhile, the long-lived track search is several times more powerful than the mono-boson analysis (as a function of M 2 ) in the wino-like region.It is worth noting that these searches cannot match the HL-LHC gains of the mono-boson search for the higgsino region, µ ≪ M 2 , on the basis of improved statistics, simply because their sensitivities are naturally limited to parameter space with suitable electroweakino mass splitting(s).This is also true of the long-lived/disappearing-track search in the mixed winohiggsino region, which is only sensitive to charginos with lifetimes exceeding τ = 0.3 ns -and are already well constrained with Run 2 data.
We next return to monojet constraints, first mentioned in Section III.Shaded in green on the left edge of Figure 7 are the strongest available monojet limits, which come from the 137 fb −1 monojet subanalysis of CMS-EXO-20-004 [63].These limits were very recently calculated for simplified pure-higgsino (LSP) parameter space in [64] (including one of the authors of this work) using the implementation of this analysis in the MadAnalysis 5 PAD and the statistical analysis package Spey [74].We focus on the CMS limits since they are much stronger than the available recast ATLAS limits: this is because CMS has published the correlations between signal regions for the background model in a simplifiedlikelihood framework, permitting the computation of a limit based on the signal region combination, whereas ATLAS provides no statistical information in [62] and the best limit comes only from the most sensitive individual signal region.For this work, we have mapped the CMS results onto the µ ≪ M 2 region of our parameter space, where the simplified pure-higgsino model provides a good approximation to the true mass spectrum.A similar analysis has yet to be carried out for pure-wino LSP models, but by comparison with the higgsino limits we expect monojet limits on winos to be superseded by disappearing-track bounds in most of the wino-like region.The higgsino monojet limits weaken rapidly as the splitting between light neutralinos m χ0 2 − m χ0 1 approaches 20 GeV due to vetoes on leptons with p T > 10 GeV, which can result from off-shell W/Z bosons in electroweakino decays.Ultimately, we find that the 137 fb −1 CMS monojet (observed) limits are stronger than the "true" 36.1 fb −1 mono-boson limits (recall these are shaded in red), excluding up to µ ≈ 200 GeV.Our projection shows that the improved mono-boson analysis takes back the lead when the yields are rescaled to 139 fb −1 to estimate the full Run 2 sensitivity.We therefore conclude that the mono-boson analysis remains superior to monojet searches -at least for higgsinos, for which these analyses compete to set the best limitswhen the datasets are of approximately equal size.Finally, as alluded to in Section IV, we demonstrate the complementarity between the searches detailed above, which explicitly target compressed spectra, and more conventional searches for electroweakino pair production.In light green we represent the observed limits from another 139 fb −1 ATLAS search, ATLAS-SUSY-2019-09 [37], which combines a search for final states with three leptons and missing transverse momentum with a previous 139 fb −1 search for soft-dilepton + E miss T final states [75].(This soft-dilepton analysis constitutes a significant update to the 35.9 fb −1 CMS soft-lepton search discussed above.)As explained in the previous section, this search topology results from soft decays of the chargino and second lightest neutralino.One of the scenarios considered by ATLAS contains compressed higgsino-like electroweakinos, so in the absence of a dedicated recast we perform a simple mapping from the physical plane (m χ0 2 , m χ0 2 − m χ0 1 ) presented by ATLAS onto our (µ, M 2 ) plane, using our spectra computed by SPheno as discussed in Section II.Exclusions from this search overlap with exclusions from the soft-lepton search and fade out as we enter the higgsino-like LSP region, where decay products become invisible; and as we approach the well-mixed region, where heavier neutralino and chargino production is mass suppressed.The mono-boson search dominates the trilepton exclusions for sizable µ, and presumably a combination of these search channels would increase constraints where the two searches are roughly equally powerful.Moving on, in orange we denote the space excluded by ATLAS-SUSY-2018-41, a search for pair-produced electroweakinos with two hadronically decaying vector bosons and missing energy [38].This search uses the full Run 2 dataset of L = 139 fb −1 and uniquely (among the analyses discussed in this work) presents results in the (µ, M 2 ) plane that can be included without further comment.It relies on the production and decay of heavy χ± 2 and χ0 3 states and has exclusion power where they are light enough to have sufficient production cross sections but heavy enough to produce a vector boson hard enough for a boosted tag upon decay.In the deep wino and higgsino regions, these electroweakinos are too heavy for sufficient production rates.
Altogether, we find that the mono-boson search should do the heavy lifting in the (µ, M 2 ) plane during the run of HL-LHC.Nevertheless, analyses of all types have a role to play in probing this space, with long-lived tracks searches covering the deep wino region, our mono-boson analysis offering excellent constraints through a wide coverage of the plane, and e.g. the hadronic diboson search filling gaps in the mono-boson analysis as µ begins to approach M 2 from above.Taken together, if no excess is measured, we project that these analyses can exclude much of the parameter space with µ ≲ 500 GeV and M 2 ≲ 500-750 GeV by the end of the LHC's high-luminosity run.This is of some interest, as the size of the µ parameter itself has long been proposed as a measure of the electroweak fine-tuning of supersymmetric scenarios as given by the minimization conditions of the Higgs potential [76,77].This measure of naturalness requires the µ term not exceed a few hundred GeV, so by this metric we find that our HL-LHC search will have the power to exclude the "natural" region of the MSSM.
VI. CONCLUSIONS
In this work we have explored multiple experimental handles on the relatively unconstrained wino-higgsino plane (µ, M 2 ).We have proposed a hadronic monoboson search with binned E miss T selections as an LHC channel sensitive to neutralinos with significant higgsino admixtures.We have reviewed how the light electroweakino states vary in mass and content in the (µ, M 2 ) plane and described how production processes relevant to the mono-boson search depend on the mass splittings between the χ0 1 , χ± 1 and χ0 2 states.We have also highlighted other search strategies -targeting events with soft leptons, events with long-lived tracks that disappear before the edge of the tracker, and searches with moderately heavy but producible electroweakino pairs -that constrain wino-higgsino parameter space complementary to that probed by the mono-boson search.
We have set limits based on our proposed strategy and from reinterpreted existing results using LHC Run 2 data, and we have performed a sensitivity study for the 3 ab −1 run of the HL-LHC.We have depicted these limits in a considerable portion of the (µ, M 2 ) plane.If no excess is seen, the mono-boson search has sensitivity to pure-or nearly-pure-higgsino LSPs of mass m χ0 1 ∼ 150 GeV and mixed wino-higgsino LSPs up to 300 GeV in the current data set.It also has the power to exclude almost all M 2 < 1 TeV for µ ∼ 120 GeV and all µ < 1 TeV for M 2 ∼ 250 GeV when combined with other recast search limits.At the HL-LHC, for M 2 ∼ 750 GeV, we project a lower bound of µ ≈ 400 GeV in the entire mass plane with exclusions (assuming no excess is observed) of higgsino-like neutralinos up to 550 GeV, and past 600 GeV in the well-mixed region.We also project 5σ discovery potential up to 300 GeV for a higgsino-like LSP and up to 450 GeV for mixed wino-higgsino LSP.
As hoped, we have found that the soft-lepton, disappearing-track, and boosted diboson searches are sensitive to (µ, M 2 ) parameter space in which the mono-boson analysis is weak, thus exhibiting useful complementarity.Exclusions from events with soft but detectable leptons and the diboson analysis fill a notable gap in the mono-boson analysis for low-mass LSPs between the higgsino-like and well-mixed regions, while wino-like long-lived charginos with M 2 ≲ 1 TeV are most strongly constrained by the disappearing-track search.We project that this complementarity will allow the HL-LHC to rule out vast swaths of "natural" (sub-TeV) winohiggsino parameter space in the absence of a discovery.
We expect these results to be somewhat robust with respect to the bino mass M 1 , which we mentioned in Section II was decoupled in our analysis.We know, for instance, that the specific choice of M 1 = 5 TeV can be relaxed to as low as 2 TeV with negligible effect on the electroweakino spectrum.The exclusions in Figure 7 will quantitatively change if M 1 is taken much lower, in the vicinity of µ or M 2 (whichever is heavier), but the picture will remain qualitatively the sameincluding which searches are most sensitive in general regions of the (µ, M 2 ) plane -as long as the bino is still heavier than the wino-higgsinos.Only when the bino is lighter than one or both of the higgsino or wino will the results cease to apply even qualitatively, so that a new (meta-)analysis will be required.Since scenarios with light binos naturally produce larger electroweakino mass splittings, we expect conventional searches, including for instance the trilepton search discussed in Section V, to dominate searches targeting compressed spectra and exclude much more parameter space.But we reiterate that an accurate and comprehensive picture can be painted in some future project analogous to the present work.
Even within the decoupled-bino paradigm discussed in this work, opportunities for further study are numerous.There may be opportunities for the study of mono-boson signatures of electroweakinos in which the boson decays leptonically.A search for a single leptonically decaying mass-reconstructed Z boson was previously proposed for dark matter and higgsino LSPs [60,78], and such strategies might be applied to the entire (µ, M 2 ) plane.Leptonic mono-W searches with a leptonic transversemass cut and a binned missing energy analysis might also provide a probe of the wino-higgsino plane.Such leptonic analyses might be interesting in light of the current excess [64,75] in events with soft lepton pairs.Finally, combinations of the analyses in this work could provide tighter constraints on the (µ, M 2 ) plane for the existing LHC dataset.
FIG. 4 :
FIG. 4: Search strategies in the (µ, M 2 ) plane based on mass difference.Also shown is the wino vs. higgsino content of the lightest neutralino.Recall from Figures 3 and 2b that m χ0 2 − m χ0 1 increases with µ but m χ± 1 − m χ0 1 does the opposite.
FIG. 7 :
FIG. 7: Performance projections for custom hadronic mono-W /Z search for the original 36.1 fb −1 dataset (red shaded) and for the full Run 2 and HL-LHC datasets (red curves) compared to existing searches for electroweakinos decaying to two soft leptons (CMS-SUS-16-048, blue) and with tracks disappearing in the silicon tracker (CMS-EXO-19-010, violet).Also included are conventional searches for electroweakino pair production in final states with two hard hadronic vector bosons (ATLAS-SUSY-2018-41, orange) and three leptons (ATLAS-SUSY-2019-09, light green), along with the strongest monojet + E miss T search (CMS-EXO-20-004, dark green).
TABLE I :
[36]t selection criteria in the mono W/Z search[36].The symbols j and J denote the small-R and large-R jets, respectively.{j i } are the small-R jets ordered (i = 1, 2, 3, . . . ) by their p T in decreasing order.Angles are defined in radians.See text for details. | 10,645.6 | 2023-09-13T00:00:00.000 | [
"Physics"
] |
A bibliometric analysis of interstitial cells of Cajal research
Objective The significance of interstitial cells of Cajal (ICC) in the gastrointestinal tract has garnered increasing attention. In recent years, approximately 80 articles on ICC have been published annually in various journals. However, no bibliometric study has specifically focused on the literature related to ICC. Therefore, we conducted a comprehensive bibliometric analysis of ICC to reveal dynamic scientific developments, assisting researchers in exploring hotspots and emerging trends while gaining a global perspective. Methods We conducted a literature search in the Web of Science Core Collection (WoSCC) from January 1, 2013, to December 31, 2023, to identify relevant literature on ICC. We employed bibliometric software, namely VOSviewer and CiteSpace, to analyze various aspects including annual publication output, collaborations, research hotspots, current status, and development trends in this domain. Results A total of 891 English papers were published in 359 journals by 928 institutions from 57 countries/regions. According to the keyword analysis of the literature, researchers mainly focused on “c-Kit,” “expression,” “smooth muscle,” and “nitric oxide” related to ICC over the past 11 years. However, with “SIP syncytium,” “ANO1,” “enteric neurons,” “gastrointestinal stromal tumors (GIST),” and “functional dyspepsia (FD),” there has been a growing interest in the relationship between ANO1, SIP syncytium, and ICC, as well as the role of ICC in the treatment of GIST and FD. Conclusion Bibliometric analysis has revealed the current status of ICC research. The association between ANO1, SIP syncytium, enteric neurons and ICC, as well as the role of ICC in the treatment of GIST versus FD has become the focus of current research. However, further research and collaboration on a global scale are still needed. Our analysis is particularly valuable to researchers in gastroenterology, oncology, and cell biology, providing insights that can guide future research directions.
Introduction
In 1889, the Spanish neuroanatomist Santiago Ramon y Cajal (1852-1934) made an initial groundbreaking discovery of small individual nerve ganglion cells in the gastrointestinal tissues of mammals.These cells were morphologically characterized as spindleshaped stellate cells and named "interstitial cells of Cajal (ICC)" (1).However, at that time, only morphological techniques were available to identify ICC, and their physiological functions remained elusive for many years.It was not until the 1990s that the tyrosine kinase receptor Kit (c-Kit), also known as CD117 or the stem cell factor receptor, was identified as the primary marker for ICC in pathological specimens (2,3).This discovery marked a significant breakthrough in the field.Over time, ICC has emerged as a focal point in the fields of physiology and medicine (4,5), with their role in the gastrointestinal tract and other organs attracting considerable attention due to their association with various diseases and physiological processes (6,7).ICC possesses distinctive ultrastructural features and electrophysiological properties, playing a crucial role in regulating smooth muscle contraction and coordinating gastrointestinal motility.They are commonly referred to as the "pacemaker cells" of the gastrointestinal tract (8).Research has shown that ICC display distinct functionalities across different subtypes (9).Intramuscular interstitial cells (ICC-IM) modulate neurotransmitter responses (10), and myenteric interstitial cells (ICC-MY) serve as pacemakers by generating slow waves that influence smooth muscle contractions (11).On the other hand, submucosal interstitial cells (ICC-SM) coordinate the regulation of secretions and reflexes within the mucosal and submucosal layers (12), while septal interstitial cells (ICC-SEP) function as a supportive network, maintaining the structural integrity of the gastrointestinal wall (13,14).However, as time passes, the complexity and diversity of ICC research have steadily increased, extending to multiple systems, such as the nervous, digestive, and urinary systems, among others (9,15,16).Consequently, there is a need for a comprehensive methodology to fully understand their developmental trajectories and impact.
Bibliometrics is a subfield of informatics that involves the use of mathematical and statistical methods for quantitative and qualitative analysis of published scholarly literature (17)(18)(19).Integrating temporal and spatial dimensions into bibliometric analysis can provide new insights into knowledge development and academic records (20).This approach focuses on countries, institutions, journals, authors, and keywords associated with a specific field of study, measuring the profile of the field, partnerships, and overall scholarly output.It aims to provide readers with objective view of trends and frontiers in the field with the aim of assessing the characteristics and trends of a specific research area (21,22).Furthermore, it helps to analyze the co-occurrence and impact of a given field, making it an indispensable tool for assessing the quality and impact of scholarly work (23).Despite the growing utilization of bibliometrics in various fields, there remains a notable gap in bibliometric research specifically focused on ICC.Addressing this gap, the present study capitalizes on the Web of Science™ Core Collection (WoSCC) to collect pertinent bibliometric data concerning related to ICC research from 2013 to 2023.Using CiteSpace and VOSviewer tools (24, 25), knowledge maps are generated to help illustrate scientific knowledge and various relationships, providing valuable insights for future research endeavors.
Data sources and search strategy
In this study, we utilized the comprehensively recognized and standardized WoSCC as our primary database for retrieving literature on ICC (26).Compared to Scopus, the accuracy of document type labels in WoSCC is higher (27).Our search strategy was (((TS = ("Interstitial cells of Cajal")) OR TS = ("Cajal cells"))), with a specified time frame from January 1, 2013, to December 31, 2023.No ethical approval was required as our research did not involve animal experiments or human trials.We selected research articles and reviews that met our inclusion criteria and were written in English to ensure professionalism and accuracy.Non-English articles, other document types, and publications outside the specified time range were excluded.The collected data included complete bibliographic records and citation information, all stored in plain text format.To maintain accuracy and timeliness, all retrieval and collection work was finalized on January 12, 2024, thus minimizing the impact of subsequent database updates.The primary objective of this study was to conduct a systematic literature search on a specific topic and establish stringent linguistic standards for subsequent analytical processes.A visual representation of the retrieval process is shown in Figure 1.
Data analysis
The CiteSpace software developed by Professor Chao-Mei Chen at American University can be used to visualize and analyze trends and patterns in the scientific literature, which can help to predict research trends in a particular discipline or field over a specific period of time (28,29).The parameters used for analysis in CiteSpace (version 6.1.R6) were as follows: a link retaining factor of 3.0; a time span from 2013 to 2023 with one slice per year; node types including Reference and Keyword; links with strength measured by cosine and scope within slices; and a selection criteria with a g-index scale factor of 25.In addition to CiteSpace, we used VOSviewer (version 1.6.16) to visualize and analyze the co-occurrence of countries, institutions, author distributions, and keywords.For co-occurrence analysis of keywords, adjustments were made using the Pajek program to enhance clarity of relationships.Geographical visualization of country and institution distribution was performed using Scimago Graphica.In addition, the History of Cite software (HisCite, v2.1) developed by Garfield was utilized for drawing citation maps to examine links between different scholarly works and to identify important publications.Burst detection of keywords and references was generated using CiteSpace.
Furthermore, we employed Microsoft Office Excel 2021 to conduct statistical analyses on the distribution of journals by publication year, authors, countries, institutions, citation rankings, and journal impact factors.We assessed the scientific impact of journals based on the 2023 Journal Citation Reports (JCR).This assessment specifically involved using the journal category Impact Factors (IF) and quartile rankings to gauge their significance in the field.
Analysis of publications
The annual publication output often serves as an indicator of the development status within a field (28).Figure 2 depicts the publication trends in this particular field from 2013 to 2023.Over this period, there has been a noticeable fluctuation in the number of publications.Nonetheless, overall, this field has consistently maintained a substantial level of scholarly output.These findings suggested that researchers demonstrated a lasting commitment to the investigation of ICC throughout the past 11 years.These findings not only highlight the level of activity in the field, but also demonstrate the interest and motivation of researchers to delve deeper into the complex issues involved in ICC investigations.
Analysis of countries/regions
A total of 891 publications on ICC were authored by 928 institutions from 57 countries/regions.The top 10 countries/regions and institutions in terms of productivity are detailed in Tables 1, 2. Our findings illustrated that China (231, 25.93%) exhibited the highest productivity, followed by the United States (224, 25.1%),South Korea (87, 9.76%), Germany (37, 4.15%), and Italy (36, 4.04%).The most active affiliated institutions included the University of Nevada (70, 7.86%), Mayo Clinic (52, 5.84%), Pusan National University (47, 5.27%), the University of Auckland (43, 4.83%), and McMaster University (30, 3.37%).Despite the numerical dominance of China and the United States, our analysis of the overall network strength among countries/regions and organizations shows that the United States and Mayo Clinic have significant influence in this area.This reflects the depth and leadership of these countries and organizations in research, cooperation, and influence in the field of ICC investigations.
Figure 3 illustrates a bibliometric map of geography, showcasing the collaborative authorship network among the top 10 countries with the highest number of published articles.Notably, China and the United States made the most significant contributions, followed by other noteworthy contributors such as South Korea, Germany, and Italy.In addition to the number of collaborations, the quality of collaboration between these countries is also noteworthy.In addition, articles in this field show a commendable level of citation quality, which reflects their far-reaching impact and scholarly value.
In Figure 4, a collaborative map portrays the relationships between countries/regions and institutions.The size of each node reflects the number of documents, while the thickness and color of the connecting lines indicate the extent of collaboration between them.It is evident that the University of Nevada and Mayo Clinic engage in close collaboration with numerous institutions.Furthermore, the University of Auckland and Vanderbilt University exhibited a robust The flowchart of literature selection.
partnership in terms of institutional collaborations.Several research institutions, including Pusan National University, McMaster University, and Seoul National University, actively participated in collaborative endeavors.
Distribution of journals
A total of 359 journals published the articles related to this research.An overview of the top 10 most relevant journals in terms of these publications is summarized in Table 3.The majority of these journals boast an IF of 3 or higher, and they predominantly fall into the Q3 category or higher according to the Journal Citation Reports (JCR) categorization.Notably, Neurogastroenterology and Motility, American Journal of Physiology-Gastrointestinal and Liver Physiology, and PLoS One are the top three journals that have published the greatest number of articles related to this field of study.This reflects the leading position of these journals in the field and their important role in promoting scholarly exchange and knowledge dissemination.
Journal co-citation is a method employed to examine citation relationships and influences among different academic journals.Its Annual trends in publications.
Li et al.
Distribution of authors
In this specific research domain, 4,115 researchers made notable contributions.Table 4 provides a comprehensive snapshot of the top 10 researchers who have made outstanding contributions and garnered significant citations.Notably, among these esteemed scholars, four were affiliated with the University of Nevada, while two were affiliated with the Mayo Clinic.Sanders, Kenton M., Kim, Byung Joo, and Farrugia, Gianrico emerge as three of the most prolific contributors in this field.
Co-citation analysis of authors refers to the situation where two authors' papers are cited by a third author simultaneously (17).The evaluation of researchers' co-citation patterns highlighted individuals who exerted a substantial impact on the field.Table 4 presents the top 10 most co-cited scholars.In terms of citation counts, Sanders, Kenton M. held the leading position with 1,480 citations, further reinforcing his prominence as the most prolific author in terms of publications.Following closely behind were Ward, Sean M. (960 citations), Farrugia, Gianrico (888 citations), Koh, Sang Don (671 citations), and Du, Peng (566 citations).
Notably, this research domain attracted considerable attention from multiple academic teams.Figure 6 illustrates the top 10 authors with the highest number of publications and citations.Each node However, the collaborative relationships between these teams are relatively limited, and there is significant room to optimize and make the best use of the available resources.
Keyword co-occurrence analysis
The keywords used in an article serve as indicators of the research theme and can effectively identify research hotspots and frontiers within a specific field.To achieve this, we utilized VOSviewer software to create a co-occurrence network visualization (Figure 7), which comprises 277 high-frequency keywords (occurring more than 5 times).This visualization offers a graphical representation of their relationships based on their co-occurrence frequencies in the literature.Furthermore, Table 5 presents a compilation of the top 30 keywords that are closely associated with ICC.In addition to the prominent term "Interstitial cells of Cajal, " keywords such as "c-Kit, " "expression, " "smooth muscle, " and others prominently appear in the abstracts and titles of articles.
The co-occurrence network classified keywords into distinct clusters, each distinguished by different colors.We identified 7 clusters.The red category primarily encompasses the physiological functions of ICC, such as their roles in intestinal transport and muscle activity.The green category mainly consists of molecular markers related to ICC.The blue category focuses on clinical studies related to ICC, with keywords such as "in-vivo" and "double blind, " suggesting a focus on evaluating the impact of therapeutic interventions on ICC function, especially in the context of diabetic gastroparesis.The light blue category primarily focuses on the electrophysiological aspects of ICC, involving keywords like "electrical rhythmicity" and "slow waves, " indicating a greater emphasis on the role of ICC in the electrical activity of the gastrointestinal tract and its importance as pacemaker cells The orange category mainly deals with the pathological identification of ICC, with keywords like "kit positive cells" and "human colon, " indicating research into ICC in different types of cells and the human colon.The yellow category primarily explores the relationship between ICC and gastrointestinal motility, involving keywords such as "smooth muscle" and "motility, " suggesting that research may focus on the role of ICC in regulating smooth muscle contraction and intestinal dynamics.The purple category mainly focuses on the biophysical properties of ICC, such as "nitric oxide" and "excitability, " suggesting that the role of ICC in intestinal excitatory and inhibitory signaling may be investigated.
Additionally, certain keywords displayed strong link strengths despite their lower frequencies, suggesting emerging research areas or topics that have not yet received extensive exploration but deserve The dual-map overlay of journals.closer attention from researchers.For instance, keywords such as "pacemaker activity" and "motility" exhibited lower occurrence frequencies but demonstrated significant link strengths, implying their relevance to ICC research, even though they may not have received substantial investigation thus far.
Analysis of HisCite literature
To analyze ICC research progression, we identified the top 20 articles based on Local Citation Score (LCS) from HisCite, as depicted in Figure 8 and Table 6.The LCS is the number of citations a paper receives in a given academic field, and the sum of its indices reflects its influence and visibility in that field (31).
Our analysis highlighted Sanders, Kenton M. 's work as having high citation rates, with a focus on reviewing interstitial cells in smooth muscle, particularly ICC and PDGFRα(+) cells in the gastrointestinal tract.These cells' roles in functions like pacemaker activity, slow wave propagation, neural transmission, and mechanosensitivity, as well as their potential link to gastrointestinal motility disorders, were central to this research.Over the 11 years studies have intensively explored ICC's structure, molecular and electrophysiological properties, and their significance in neurogastroenterology.Co-authorship network map.
Burst detection of keywords and references
Keywords serve as concise summaries of research content.By analyzing their development trends, we gained insights into the hotspots and focal points of specific research areas, which in turn informed research directions.The keyword burst analysis depicted in Figure 9 offered a broad overview of the evolution of ICC research hotspots over the past 11 years.
Specifically: 2013-2015: during this period researchers focused on ICC studies encompassing electrophysiology gene editing animal models and the diagnosis of related diseases 2016-2020: during this timeframe, there was a notable enhancement in the number of keywords, signifying an expanding range of research interests that covered diverse aspects, such as physiology, pathology, and regulation.
2021-2023: the primary emphasis during this period was on functional gastrointestinal disorders, the investigation of ANO1, and the interactions between ICC and enteric neurons.
As time progresses, the detection of citation bursts in reference literature provides insights into shifts in research focus."Burst strength" refers to the citation burst intensity of a document.This value indicates the extent to which the number of citations for a particular document increases suddenly over a given period of time, compared to the average level (32).Among the top 25 cited references
General information
Our research utilized quantitative analysis tools, specifically CiteSpace and VOSviewer, to conduct an extensive investigation of the literature pertaining to ICC over the past 11 years.In addition, we conducted a comprehensive review of the research accomplishments and progress made in this area.Our study meticulously measured key aspects, including the annual publication count, geographical distribution, author collaboration networks, institutional affiliations, and journal evaluations.In the period from Network of the top 20 ICC research articles based on LCS ranking.Each of the 20 circles in the figure represents a key literature reference, with the central number denoting its position in the database, corresponding to the ID sequence in Table 6.Circle size indicates citation frequency-larger circles correspond to higher citation rates.Arrows between circles show citation relationships.2013 to 2023, there were minor fluctuations in the quantity of scholarly articles in the ICC field.However, overall, there was a consistently high level of publishing activity.Each year, about 80 papers were published, highlighting the sustained and widespread interest in the ICC field.
A geographical analysis of country/region and institution distribution revealed active engagement in ICC research from numerous countries and regions.Particularly, the China, United States, and South Korea displayed exceptional performance, accounting for a combined contribution of 60.83% of the total published papers.It was worth noting and emphasizing that publications from the United States achieved notable citation counts, highlighting strong collaborative relationships with other nations and emphasizing the extensive research conducted by the United States in the realm of ICC.
Among the top 10 ranked institutions, three were from the United States, three from China, and two from South Korea.Notably, the University of Nevada stood out as the most productive institution, having published the highest number of papers.Additionally, the Mayo Clinic received the highest ranking in total link strength, further highlighting the potential for significant impact resulting from its research outcomes.The strong collaborative relationships between countries and institutions contributed to breaking down academic barriers and promoting the development of ICC-related research.
The assessment and ranking of cited journals held great importance for researchers, as they aided in quickly identifying the most suitable journals for manuscript submissions.Moreover, analyzing citation frequencies facilitated the identification of primary research directions within the field.Most journals within the top 10 rankings boasted an IF exceeding 4.0, indicating a relatively high research quality in this domain.•Noteworthy publications in esteemed journals such as the Journal of Physiology-London, Gastroenterology, Neurogastroenterology and Motility, and the American Journal of Physiology-Gastrointestinal and Liver Physiology demonstrated exceptional citation performance.Due to the potential impact of ICC research, these leading scholarly journals garnered significant interest from scholars.Therefore, closely monitoring these journals was crucial to remain abreast of the latest research advancements and discoveries.
Lastly, our analysis of author collaboration networks revealed that while several academic teams emerged within the ICC field, the level of collaboration among these teams was relatively low.We strongly encouraged different academic teams to actively enhance scholarly exchanges and engage in collaborative discussions, aiming to collectively delve deeper into the development of the ICC field.Such collaborative efforts had the potential to accelerate progress in this area.Among the top 10 authors, Kenton M. Sanders (52, 5.84%) has the highest number of publications, followed by Byung Joo Kim (41, 4.60%), and Gianrico Farrugia (29, 3.25%).This observation highlights the notable contributions these three authors have made to the ICC field.
Hotspots and Frontiers
Keyword analysis provides insights into the current trends within the research field.Keyword burst refers to the phenomenon where a particular keyword experiences a sudden increase in activity and appears frequently in academic literature within a specific academic field or topic.Utilizing co-occurrence analysis of keywords, we have identified the primary research directions and hot topics in ICC, shedding light on the changing development patterns within the thematic structure (35).The following are potential research frontiers.
Anoctamin 1 (ANO1) is an ion channel protein belonging to the TMEM16 family, which consists of 10 members in mammals (ANO1 to ANO10) (36,37).The Ca 2+ -activated Cl − channel ANO1 in ICC plays a crucial role in regulating pacemaker activity and responses to intestinal neurotransmitters (38,39).It is primarily expressed in epithelial cells, smooth muscle cells, and sensory neurons (39-41), forming calcium-activated chloride channels in the cell membrane, and is considered a more specific ICC marker than c-Kit, as it does not label mast cells (42,43).ANO1 has been shown to be significantly expressed in ICC and generate spontaneous transient inward currents in ICC (38,44), generating slow waves in intact gastrointestinal smooth muscle (36,44).Elevated intracellular Ca 2+ leads to the generation and propagation of pacemaker potentials that are amplified by activation of ANO1 channels (10,41).Abnormal expression or dysfunction of ANO1 is associated with the pathogenesis of various diseases, including diabetic gastroparesis, congenital megacolon, gastroesophageal reflux, and chronic constipation (45)(46)(47)(48).It has been shown that pharmacological inhibition or gene silencing of ANO1 can block slow waves in intestinal smooth muscle, reducing intestinal motility in patients with diarrhea (49).However, validating ANO1 as a therapeutic target remains a challenging task.Despite significant progress in understanding the distribution, expression, structure, and pathophysiological function of ANO1, selective modulators are urgently needed to validate its therapeutic potential.The enteric nervous system (ENS) is a complex network comprised of neurons and glial cells, commonly referred to as the "second brain" (50).The ENS possesses the capacity to regulate digestion, absorption, and defense, as well as exert influence over smooth muscle contractions, secretions, and gastrointestinal blood flow (12).It represents a substantial and intricate neural system within the intestines, capable of functioning autonomously from the central nervous system to coordinate gastrointestinal activities (51).ICC actively contributes to gastrointestinal motility and neural transmission (52).ICC effectively generates slow waves through their interaction with enteric neurons and smooth muscle fibers, playing a direct role in the regulation of gastrointestinal peristalsis and consequently in the overall functionality of the digestive system.Moreover, ICCs are significantly involved in enteric neural transmission and possess mechanoreceptor activity.They exhibit sensitivity to mechanical changes through the detection of intestinal stretch and contraction, enabling the adjustment of neural signals to adapt to various physiological and mechanical conditions, thus maintaining intestinal homeostasis.
Recent investigations have brought to light the interaction between ICC in the smooth muscle layer of the gastrointestinal tract and PDGFRα-positive cells (cells expressing platelet-derived growth factor receptor alpha) with SMC through gap junctions (55-57), Burst detection of references.(60).Neuronal transmission influences these transients, triggering activation of voltage-dependent calcium channels in the pacemaker ICC, maintaining depolarization during slow wave periods (38).These cells regulate peristalsis by activating calcium ion channels and ion channels, and are modulated by neuronal inputs (55).ICC also serve as pacemakers, generating slow waves that form the electrophysiologic basis of gastrointestinal motility.Any factor that causes morphological or functional changes in ICC or PDGFRα+ cells may affect the relative balance between ICC-ANO1-SMC and PDGFRα+ cells-SK3-SMC, resulting in abnormal gastrointestinal motility (61).Recent research proposes that the term "SIP genic" provides a more precise description of gastrointestinal dynamics regulation compared to the traditional term "myogenic" (60).The calcium handling mechanisms are central to mesenchymal cell function, yet the dynamics of these cells remain incompletely understood in gastrointestinal motility disorders.Nonetheless, the physiological and pathophysiological roles of these cells remain largely undefined in most cases.Further research is warranted to explore the forefront issues in this field.Gastrointestinal stromal tumors (GIST) are rare tumors that originate in the interstitial cells of Cajal (62).Two-thirds of adult patients have c-Kit mutations, while a small percentage have PDGFRA mutations (63,64).In recent years, the treatment of GIST has attracted much attention, mainly including targeted therapy and surgical resection.Imatinib has been widely used as first-line treatment for metastatic GIST, but patients usually experience disease progression after 2-3 years (65).Second-and third-line treatment options are limited, including sunitinib and regorafenib, but their efficacy is poor (66).Results from a large clinical trial indicated no significant difference in efficacy between avapritinib and regorafenib in patients with advanced GIST (67).A meta-analysis shows that neoadjuvant chemotherapy may improve 5 years overall survival, while local excision reduces hospitalization time (67).Recent studies have shown significant differences between GIST and wild-type (WT) KIT mutations, offering potential therapeutic perspectives and targets for overcoming imatinib resistance (69).Despite some progress in current therapeutic approaches, the outcome for certain GIST patients remains suboptimal.Therefore, it is necessary to further explore the potential role of ICC in the treatment of GIST, explore new therapeutic strategies and targets, to improve therapeutic efficacy and survival rates.
Functional dyspepsia (FD) is a chronic functional disorder originating from the gastroduodenal region, characterized by epigastric pain or burning sensation, postprandial fullness, or early satiety (70,71).The Rome IV criteria classify FD into two distinct subtypes: postprandial distress syndrome (PDS) and epigastric pain syndrome (EPS) (72).Although the pathophysiology of FD remains uncertain, ICC are recognized as key regulatory mediators and therapeutic targets for the condition (73,74).Several studies have suggested that both the quantity and dysfunction of ICC may contribute to the development of FD.Currently, treatment options for FD include dietary modifications, probiotics, antibiotics, acid suppressants, neuromodulators, prokinetics, and others, but none of these methods are consistently effective.Recent research has found that alterations in gut flora may play an important role in the pathogenesis of FD, especially changes in duodenal microbiome may be caused or contributed to by immune and neuronal dysregulation (71, 75,76).Restoring microbial homeostasis with probiotics has been shown to be effective in FD (77,78).Results of a meta-analysis suggest that spore-forming probiotics may be an effective treatment for FD patients, but more research is needed to validate their long-term efficacy and safety (79).However, the mechanisms by which gut microbiota influence gastrointestinal function and symptoms, and their association with ICC, remain unclear.Whether this process influences the amount and function of ICC may be a direction for future research.
Limitations
The application of bibliometrics relies on the availability of metadata, making the accuracy of the metadata a crucial factor (21).The analysis in this study was based on articles from the WoSCC database, while studies published in non-SCI journals or other databases were excluded, an omission that may have affected the accurate assessment of the study results.Also, we did not exclude the effect of journal self-citation rates, which may bias the results somewhat.It is important to note that CiteSpace and VOSviewer cannot completely replace systematic retrieval.Some bibliometric indicators, such as journal impact factors, may oversimplify the measurement of research outputs and fail to comprehensively reflect the research's quality, innovation, and social impact.However, despite these limitations, they are expected to have a minor effect on the overall results and are unlikely to significantly alter the main trends proposed in this paper.In conclusion, this study serves as a foundation for understanding the research topics, hotspots, and development trends in ICC.
Conclusion
In this study, we conducted a bibliometric analysis to review the trends, hotspots, and frontiers of research related to ICC over the past 11 years.Our study identified 891 publications on ICC, revealing influential countries, institutions, and authors who have made significant contributions in this field.Additionally, we focused on specific topics to investigate research trends.According to our analysis, the role of ICC in the treatment of GIST and FD, as well as the relationship between ANO1, SIP syncytium, enteric neurons, and ICC, may become important directions for future research.Our analysis is particularly valuable to researchers in gastroenterology, oncology, and cell biology, providing insights that can guide future research directions.
Recommendations
This study presents the comprehensive exploration of ICC and its intricate interplay within the gastrointestinal system, providing valuable insights into current research trends.The identification of key areas and cutting-edge frontiers establishes a solid foundation for future investigations which focuses on NO, c-Kit receptor, ANO1, and the enteric nervous system.The elucidation of ICC's role in gastrointestinal disorders underscores the significance of ICC, which is a potential therapeutic target.Further research on the physiological and pathophysiological aspects of ICC, especially in conditions like functional dyspepsia, holds great promise for advancing our understanding and developing targeted interventions.
FIGURE 1
FIGURE 1 principal objective is to aid researchers and academic institutions in comprehending the interconnections between journals and assessing their impact and quality.Upon analyzing Table3, it was clear that three academic journals received over 2000 citations each: Journal of Physiology-London, Gastroenterology, and Neurogastroenterology and Motility.Furthermore, among these journals, seven have an IF greater than 5, further highlighting their high level of academic quality and research influence.A dual-mapping overlay of journals is shown in Figure5, demonstrating the thematic distribution of academic journals, changes in citation trajectories, and shifts in research focus.The graph accurately depicts the distribution of individual academic journals and vividly shows the relationships between journals (where the colored paths represent citation relationships)(30).The left side of the figure represents the citing journals, while the right side represents the cited journals.Labels on the figure indicate the various disciplines covered by the journals, and the colored pathways highlight the citation relationships.Notably, four prominent pathways can be observed.Two orange citation pathways indicated that Molecular Biology and Genetics journals, as well as Health, Nursing, and Medicine journals, were frequently cited by Molecular/Biology/Immunology journals.Additionally, two green citation pathways demonstrated that Molecular Biology and Genetics journals, along with Health, Nursing, and Medicine journals, were frequently cited by Medicine/Medical/ Clinical journals.These findings not only contribute to understand the relationships among different academic journals, but also help to reveal intersections and trends between various subject areas, thus providing valuable insights into academic research and interdisciplinary collaboration.
FIGURE 3
FIGURE 3Geographic bibliometric map based on a network of co-authorship relationships in the top 10 countries in terms of number of articles published.
(Figure 10 )
in recent years, two highly cited articles stood out.One article, authored by Lee et al. (33), exhibited a burst strength of 9.11.Another article, written by Sung et al. (34), demonstrated a burst strength of 6.84.The groundbreaking study by Lee et al. utilized copGFP-labeled ICC mice and flow cytometry to successfully isolate populations of ICC from the mouse small intestine and colon, obtaining their transcriptome data.In addition, the authors constructed an interactive ICC genome browser based on the UCSC genome database, which provides a valuable reference for future functional studies.Sung et al. have made significant contributions to our understanding of the importance of enteric neural transmission in gastrointestinal motility and related mechanisms.The study shows that cholinergic nerve fibers are closely associated with interstitial cells of Cajal (ICC-IM) and mediate the electrical and mechanical responses to neural stimuli through the activation of the calciumactivated chloride channel anoctamin-1 (ANO1).Experimental results demonstrated that in wild-type mice, neural stimulation induced excitatory junction potentials and mechanical responses, whereas these responses were greatly reduced or eliminated in ANO1 knockout/downregulated mice.Furthermore, pharmacological blockade of ANO1 also inhibited these responses.The study further revealed that smooth muscle cells (SMC) express other receptors and ion channels associated with nerve stimulation.These findings highlight the deepening exploration of ICC by scholars, leading to the discovery of additional molecular mechanisms.
Ca 2+ -activated Cl − channel, coordinates contractility in mouse intestine by Ca 2+ transient coordination between interstitial cells of Cajal 34 5 241 Loss of interstitial cells of Cajal and patterns of gastric dysrhythmiain patients with chronic unexplained nausea of a T-type Ca 2+ conductance in interstitial cells of Cajal of the murine small intestine 28 8 55 Cell-specific deletion of nitric oxide-sensitive guanylyl cyclase reveals a dual pathway for nitrergic neuromuscular transmission in the murine fundus 26 9 312 Spontaneous Ca 2+ transients in interstitial cells of Cajal located within the deep muscular plexus of the murine small intestine 26 10 70 Interstitial cells of Cajal in the normal human gut and in Hirschsprung disease 26 11 332 Regulation of gastrointestinal smooth muscle function by interstitial cells 25 12 378 Conditional genetic deletion of Ano1 in interstitial cells of Cajal impairs Ca 2+ transients and slow waves in adult mouse 2+ transients in interstitial cells of Cajal defines slow wave duration 25 14 138 The possible roles of hyperpolarization-activated cyclic nucleotide channels in regulating pacemaker activity in colonic interstitial cells of Cajal 24 15 69 Differential expression of genes related to purinergic signaling in smooth muscle cells, PDGFRα-positive cells, and interstitial cells of Cajal in the murine colon 20 16 268 Nitrergic signalling via interstitial cells of Cajal regulates motor activity in the murine colon 20 17 479 The cells and conductance mediating cholinergic neurotransmission in the murine proximal stomach 19 18 331 Nitric oxide-induced oxidative stress impairs the pacemaker function of murine interstitial cells of Cajal during c-Kit signaling pathway contributes to loss of interstitial cells of Cajal in gallstone disease Li et al. 10.3389/fmed.2024.1391545Frontiers in Medicine 12 frontiersin.org
FIGURE 9 Burst
FIGURE 9Burst detection of keywords.
TABLE 1
The 10 countries/regions with the highest number of outputs and the highest degree of cooperation.
TABLE 2 The
10 institutions with the highest number of outputs and the highest degree of cooperation.
TABLE 3
The top 10 journals and co-cited journals.
TABLE 4
The top 10 authors and co-cited authors.
TABLE 5
The top 30 keywords related to ICC.
TABLE 6
The detailed information on the top 20 ICC research articles based on LCS. | 7,807.8 | 2024-05-20T00:00:00.000 | [
"Medicine",
"Biology"
] |
F air trade in B razil : current status , constraints and opportunities
Abstract
Introduction
S ocially oriented businesses aim to develop alternative ways to solve social and economic problems while new structures of production and marketing based on transparency and fairness are established (ALVES et al., 2016).The Fair Trade concept emerges from this concept.Fair trade businesses are profit oriented, just as for traditional trade businesses, although there is the perspective of fair distribution of profits throughout the production chain, which includes taking into account both producer social conditions and environmental preservation in the perspective of product valuation (FRETEL; SIMONCELLI-BOURQUE, 2003).Hence, fair trade is a concept that seeks to reinforce producer organization initiatives such as cooperatives, producer associations, and networks of small producers.At the same time, fair trade seeks to provide education to final consumers by informing them about product origin and production conditions (JAFFEE; KLOPPENBURG; MONROY, 2004).
Fair trade has been expanded worldwide as a formal certification that ensures final consumers about the transparency along the value chain both in relation to production and to the increased amount of available information for consumers.Currently, fair trade production initiatives exist in 74 countries, with 1,210 organizations benefiting the lives of approximately 1.5 million farmers, workers and their families (FAIRTRADE FOUNDATION, 2015).
The expansion of fair trade in Brazil is related to the willingness to strengthen the solidarity economy understood as an alternative concept to capitalism, by focusing on the centrality of business principles such as solidarity, cooperation and equality (SINGER, 2002;GAIGER, 2011).Governmental initiatives organized by Secretary of Solidarity Economy and Fair Trade, hosted by the Brazilian Ministry of Labor and Employment (SENAES/MTE, 2012) have a direct influence.The Brazilian System of Fair and Solidarity Trade (BSFST) was formally created in 2010 to promote development by encouraging social business throughout the country (MENDONÇA, 2011).By developing this system (BSFST), Brazilian policymakers show that fair trade in Brazil means more than a specific kind of commercialization, and includes solidarity and social system values (MENDONÇA, 2011).From a different perspective, while a solidarity economy is a relatively popular approach in Brazil, its commercialization is a big challenge that has led public policies to include fair (and solidarity) trade as a core value in public policies to ensure that producers will flourish (MENDONÇA, 2011).
Brazil suffered political changes in 2016, creating a delicate scenario for this movement since little is known in terms of current government social development and support alternatives for work and wealth creation.At the same time, civil society involved in the development of BSFST is optimistic in terms of public and private institutions representation in the development of Brazilian fair (and solidarity) trade (GOMES; MENDONÇA, 2016).In this context, there is the perception of an increased interest in understanding how fair trade organizations have arranged their business.At first analysis, we can see that although organizations follow fair trade principles, some are not formalized regarding certification processes (a very important issue in the Fair Trade movement).
Thus, considering the expansion of fair trade at the international level, we can observe that there is still a lack of information regarding this kind of organization in developing countries like Brazil (SILVA-FILHO;CANTALICE, 2011;MOURA;COMINI;TEODÓSIO, 2015).For this reason, the research question that motivates this study is: How is fair trade currently organized in Brazil and what are the constraints and opportunities involved?The study aims to identify the characteristics of fair trade organizations and investigate constraints and opportunities for these organizations in an emerging country.
To achieve our objective, we have analyzed data from a survey conducted by the Brazilian National Government which mapped all existing social businesses in Brazil, including fair trade organizations.In addition, fair trade organization leaders, experts, and representatives of international organizations engaged in fair trade in the country were also interviewed.
This paper is divided into six sections.After the introduction, we present a literature review of Fair Trade, its concept and history.In the third section we describe the method and in the fourth and fifth, the results and discussion.The final remarks are presented in the sixth section.
Fair trade: concept and background
According to França Filho (2001), a solidarity economy congregates two historically separated notions: initiative and solidarity.For this reason, there is a different understanding for economic activities where solidarity and sharing play a central role in the meaning of development.Thus, solidarity economy initiatives combine their economic activities with the educational and cultural nature of actions, enhancing the sense of community and commitment to the social collectivity in which they operate (GAIGER, 2009).
Fair trade is part of the solidarity economy concept, understood as an alternative to conventional trade, regarding the needs of people involved, contributing to sustainable development by offering better trading conditions and protecting workers' rights (FRETEL; SIMONCELLI-BOURQUE, 2003;JAFFEE;KLOPPENBURG;MONROY, 2004).
Fair trade offers farmers more advantageous trading conditions and seeks to provide these farmers the opportunity to enhance their lives and to continue generating income in rural areas.It also offers customers access to information and the opportunity to be part of an initiative that seeks to alleviate poverty through everyday purchases and consumption of more sustainable products (FAIRTRADE INTERNATIONAL, 2011).
Fair trade is typically understood as an alternative market system that aims to rectify historically inequitable terms of trade between the geopolitical North and South and foster more direct producer/consumer linkages (JAFFEE; KLOPPENBURG; MONROY, 2004).Origins of fair trade organizations date back to the 1940s and 1950s, when Christian missions established Non-Governmental Organizations (NGOs) in developed countries to sell handicrafts produced in poor countries of the Southern Hemisphere.In the 1950s and 1960s, commercialization was also done by mail and in solidarity groups through Alternative Trade Organizations (ATOs) (DORAN; NATALE, 2011;FRIDELL, 2004).
Although a consistent growth of fair trade organizations occurred in the 1970s and 1980s, low sales volume offered little aid to small farmers and artisans.Network growth was also hampered by poor access to consumers, who still perceived fair trade products as low quality, and limited to volunteer workers for sales and the inappropriate use of marketing tools.In response to these limitations, fair trade organizations decided that their inclusion in traditional markets was required.Therefore, a strategic reorientation was held in 1988, and fair trade labeling was launched, allowing fair trade products to be marketed along with traditional retail channels (FRIDELL, 2004).
In this first initiative, the idea was to certify and purchase products from small producers at relatively higher prices than those offered by the market.The producers' counterpart would be to preserve the environment and to establish the criteria of solidarity and democracy in their relationships (FRETEL;SIMONCELLI-BOURQUE, 2003).
The process of certification enhanced the perception of quality in the market as well as the accuracy of information provided to consumers, including about the social aspects involved in production.Through certification, fair trade has grown worldwide (DE PELSMACKER;JANSSENS, 2007;FRIDELL, 2004;REED, 2009;RENARD, 2005).
Business growth was linked to the establishment of fair trade labeling as well as professionalization of fair trade shops and the entrance of these products into the food industry (GENDRON; BISAILLON;RANCE, 2009;LOW;DAVENPORT, 2005;RENARD, 2005).However, market growth has also led to debates regarding Fair Trade's fairness (INGENBLEEK;REINDERS, 2013) and if growth could also bring negative consequences to maintenance of the status as an authentic alternative to free trade (STARICCO;PONTE, 2015).This inquiry might be related to another critical perspective, saying that social change actors can start to act similarly to what they oppose, which means in the fair trade case that mainstreaming can turn it into a new way of capitalist business (CHILD, 2015).Nevertheless, fair trade has consolidated in some markets as a benchmark for companies to develop and adopt standard fair rules (INGENBLEEK;REINDERS, 2013).
Fair trade in Brazil
Despite a worldwide evolution in fair trade, an organized movement for fair trade in Brazil did not begin until 2001.According to Brazilian Service of Support for Micro and Small Enterprises [SEBRAE, in Portuguese] (2007), at that time several members of NGOs, solidarity economy movements, family farms, business, government and service providers began to discuss issues related to fair trade.Since then, fair trade in Brazil has not been limited to exports of goods to developed countries, but the actors have also developed new forms of internal trade.
Labels and certifications are still little recognized by Brazilian consumers.This stems from the lack of regulation in recent decades that has allowed the emergence and dissemination of many labels.According to Vialli (2010), there are about 600 ecolabels in Brazil that advertise sustainable production, many of them simply selfdeclared by their own companies without external accreditation.Within this universe of labels, fair trade labeling is among the least known (HAMZA; DALMARCO, 2012).Even among those Brazilian consumers who know the meaning of labels and could easily identify them on the products, most of them did not notice it when making purchases (HAMZA; DALMARCO, 2012).
The international fair trade trajectory has happened in parallel to demands by social movements in Brazil that aim to combat the social inequalities and instability in farming and labor relations.It spawned what is called today "Fair and Solidarity Trade", which is defined as an alternative commercial flow, based on compliance with criteria of justice and solidarity in trade relations and recognition of the autonomy of Solidarity Economy Enterprises (GOMES; MENDONÇA, 2016; ZERBINI; PATEO; SÍGOLO, 2010).
In 2004, the Brazilian Service of Support for Micro and Small Enterprises (SEBRAE) started investing in fair trade in Brazil, focusing on micro and small businesses.In 2005, a project of fair and solidarity trade was initiated by partnership with specialized consultants and the support of NGOs that aimed to enhance small producer access to this market (SEBRAE, 2007).
In 2006 a discussion group for fair trade with representatives of governmental departments linked to the Agriculture Ministry, Family Agriculture and Agrarian Development Ministry, Solidarity Economy Secretary and other departments was created.The constitution process for the Brazilian System of Fair and Solidarity Trade (BSFST) started through this discussion group.
In 2014, worldwide sales of certified fair trade products increased to 6.24 billion dollars, representing an increase of 10% over 2013 (FAIRTRADE INTERNATIONAL, 2015).This growth is attributed, among other factors, to the support from big companies, such as Starbucks, which buys fair-trade certified coffee, and Cadbury and Nestle chocolates.
According to Zerbini, Pateo and Sígolo (2010), with the signing of Decree 7358 (BRASIL, 2010), the government started investing in initiatives for enhancing economic inclusion, promoting democracy and generating equitable development.This decree encourages financial investments in projects that improve organizational capacity, infrastructure, training, promotion of market access, expansion of the program of sustainable public procurement and pricing rules (ZERBINI; PATEO; SÍGOLO, 2010). www.revistaoes.ufba.br
Fair trade in Brazil: current status, constraints and opportunities
Therefore, the development of fair trade in Brazil has its peculiarities, such as its development as a public policy, but always followed by joint actions from civil society.This made it a stronger movement, able to survive to changes in the political scenario.Considering its peculiar characteristics, and the willingness to develop an internal market instead of just selling to developed countries, our analysis includes investigation of its main characteristics, current status, motivations, constraints that organizations face, as well as opportunities to expand this specific market.
Method
To achieve the main goal of this study, we conducted a two-step empirical research.In the first phase, we adopt a quantitative orientation research and present an analysis performed on a database collected by the Brazilian Ministry of Labor and Employment (MLE) from 2009 to 2013 to map the solidarity-based economy in Brazil, which includes fair trade organizations.In the second phase after this preliminary analysis, we conducted a qualitative research by visiting fair trade institutions and applying interviews to Brazilians experts in order to better comprehend previously collected quantitative data and to investigate their perceptions of the fair trade market in Brazil.Our intention was to highlight the way that these organizations are organized and the opportunities and constraints they face in Brazil.The complementarity between quantitative and qualitative research allow us to have a panoramic view about fair trade in Brazil and to also highlight some specificities that emerged from the in-depth interviews.
First step: a national survey on solidarity-based economy and fair trade
The quantitative step of the research was based on a survey produced by the Brazilian Ministry of Labor and Employment (MLE).The survey was conducted nationwide between the years 2010 and 2012.The main objective was to map all Solidarity Economy Enterprises (SEE) existing in Brazil.The survey considered those characterized as collective organizations (associations, cooperatives, production groups, exchange clubs) as SEE, whose members collectively manage the activities and the allocation of the results, and that perform economic activities of production of goods, commercialization and solidarity consumption (KUYVEN;KAPPES, 2013).
The contacts at SEEs were obtained from regional government databases and also from previous surveys carried out by the Ministry of Labor and Employment (MLE).This procedure aimed to identify the greatest number of projects and ensure that the results were representative of the SEEs operating in Brazil.Data collection was performed by researchers through on-site visits to interview enterprise representatives.
The researchers found 19,708 SEEs whose representatives were asked about: a) general enterprise characteristics; b) members' characteristics; c) economic activities performed; d) the labor situation of members and non-members; e) investments, access to financial resources and support; and f) enterprise management.
Among the 19,708 SEE mapped in the research, we specifically selected the organizations that identified themselves as making part of fair trade networks (n=277).We analyzed the characteristics of these 277 organizations in terms of activities performed, starting year, support received, commercialization market, origin of resources, reasons to create the organization, main achievements, and opportunities and challenges faced by the organization.Data analysis was performed by using descriptive statistics.
The first phase gave us a panoramic view about fair trade organizations in Brazil and helped us to prepare the analysis of the second phase of our research, which aimed at in-depth qualitative analysis of the data collected.
Second step: in-depth interviews with experts on fair trade in Brazil
In this phase, we visited fair trade organizations in Brazil and conducted indepth interviews based on the literature review.The interviews were conducted by the authors, recorded and transcribed for analysis.We also used secondary data, mainly reports and publications of fair trade organizations, to complement data collection.
Since our aim was to further understand the current scenario, as well as to delineate motivations and perspective for the internal market, we searched for experts on the subject, as well as fair trade practitioners in terms of product commercialization.Initially, we interviewed an academic expert on fair trade (PhD in Agricultural and Food Economics by University of Reading, UK), a consultant and designer in social projects and the project coordinator of Faces do Brasil.Additionally, we also included representatives of organizations that base their operations on fair trade principles, such as Girasol Cooperative in Porto Alegre, the Projeto Terra in São Paulo and two certified organizations, Justa Trama and Tênis.
Additionally, to update information on the development of Brazilian certification (BSFST), and the overall concept, we undertook a key interview with the main coordinator of Faces do Brasil in 2016.
Visits and interviews lasted from 25 minutes to 2 hours.Table 1 presents the respondent's profile, his/her function and a brief description of their respective organization.
Manager of Projeto Terra
The Projeto Terra Institute is a nonprofit social organization that began in 2002 to create opportunities for market access to artisan products originating from Brazilian projects of income generation and social inclusion and/or for products with ecological content.
Tênis (Campo Bom) Manager of Tênis
The interviewed is a sociologist and designer, who is the Tênis partner in Brazil.Tênis is a French organization that manages the chain that produces shoes and handbags exclusively using fair trade principles and agro-ecological cotton, natural rubber and vegetable leather imported from Brazil as raw materials.
Once we had data from all these interviews, based on a deductive orientation (MAYRING, 2000) we applied the content analysis (BARDIN, 2004).The content analysis allowed us to analyze and make specific inferences about fair trade development in Brazil, considering opportunities and constraints.
The use of multiple data sources allowed for triangulation of data, contributing to research validity and reliability (MAYRING, 2000).Thus, after completing the interviews and collecting secondary data, we described, analyzed and cross-referenced the information to reach conclusions.The next section presents the results of this research.
Results -fair trade in Brazil
In this section, we present the main findings.First, we present the characteristics of Brazilian fair trade organizations, based on the national survey developed by the Brazilian Ministry of Labor and Employment (MLE).Next, we describe fair trade's current status in Brazil, followed by constraints and opportunities that were identified and analyzed in the qualitative step of the research.
Characteristics of organizations linked to fair trade networks in Brazil
Results show that there is still a high level of informality in organizations related to fair trade networks in Brazil.Among the 277 organizations, more than 44% are not formally registered according to Brazilian law.Most of these organizations produce and commercialize (59.2%), or exclusively commercialize (24.9%) fair trade products.
Another characteristic is related to the different kinds of activities these organizations perform.Within the 111 different activities listed in the survey, we can stress agro-industrial production (27.5% of the organizations), handicrafts and souvenirs (25%), agro-industrial trade (9.1%), manufacturing and sale of apparel, clothing and garment (5.1%), and organizations for collective use of infrastructure, land and equipment (4.0%).The broad range of different activities performed by the organizations was unexpected considering that fair trade is still recent in Brazil when compared to international fair trade (MACHADO; PAULILLO; LAMBERT, 2008).
Most organizations were created since 2000 (72%), more specifically, 43.2% emerged since 2005, when Brazilian institutions and agencies started to support the creation of fair trade projects (SEBRAE, 2007;MENDONÇA, 2011).About 81% of surveyed organizations received some support or training along their development, in a wide range of areas such as managerial assistance, managerial, social and political continued from previous page education, and marketing planning.Almost 35% reported support from micro and small-business support services (e.g.SEBRAE) while 35.6% received support from NGOs (35.6%), 29.8% from municipalities, 22.7% from universities and 21.3% from state governments.
The increasing number of fair trade organizations in Brazil can also be explained by the fact that since 2004 the Federal Government has intensified social programs to improve social conditions and income generation in disadvantaged classes of workers.Many farmers and workers could organize their production independently, looking for new and more profitable economic activities.This relation can be identified since 41.5% of the organizations are predominantly composed by people that are beneficiaries of these Conditional Cash Transfers (CCT).
Another contextual fact is that Brazil experienced an increased purchasing power at the base of pyramid.The expansion of social programs to reduce hunger, inequality and poverty allowed millions of people to leave extreme poverty and to enter the consumer market.Nowadays, many households can experience consumption of goods that go beyond the satisfaction of basic needs (ARNOLD; JALLES, 2014).The results of the governmental initiatives can be seen since there was a significant growth in household per capita income since 2003, with 7% a year from 2003to 2009, in comparison to 1.3% from 1995to 2003(SOUZA, 2012)).
Organizations are predominantly small-sized: 37.4% have up to 10 members and 53.8% have up to 20 members.Only 11.4% have more than 100 members, but within this group, organizations vary from hundreds to thousands of members (the largest has 5,500 members).According to respondents, 70.7% of organizations generate enough income to remunerate members.However, it is noteworthy that almost a third of them (29.3%) are unable to pay members.
Motivations for establishing fair trade networks and origin of resources
The four reasons most cited by respondents for establishing their organizations are related to economic issues (see Table 2).Economic reasons come from the possibility of creating supplementary sources of incomes (57.4%), obtaining higher gains (44.4%), search for alternatives to unemployment (44.0%) and the development of a collective activity where all workers are owners (47.3%).Social reasons for creating the organizations were also cited by the respondents.The most reported reasons were the development of community capabilities (39.7%), social, philanthropic or religious motivations (34.7%) and the production of organic or green products (21.3%).In addition, respondents were also asked to inform about the origin of resources that supported project launch.A large percentage indicated the use of members' own resources (67.9%).Only 27.8% indicated the use of public funds and 23.8% reported the use of donations from international organizations or NGOs.It reveals that most organizations were created by member investments.Public funds and NGOs played a secondary role in financially supporting the creation of these organizations.
Commercialization market
Regarding market scope, most of the investigated organizations operate at the local (71.6%)or municipal level (76.4%).Only a small number commercialize products at the national (19.7%) or international level (10.7%) (see Table 3).Most organizations commercialize their products directly to consumers (91.8%).However, 30.9% reported that they also commercialize products to retailers or wholesalers; 24.9% commercialize to government agencies and 17.6% commercialize to private companies.Another 19.3% sell to other social businesses and 13.3% carry out product exchanges.This result shows that due to the limitations of the exclusive fair trade markets and the special certifications required to commercialize at the international level, many organizations connect with alternative buyers (e.g.private firms, government agencies), to sell their products.
An important characteristic of fair trade is that consumers agree to pay higher prices based on producers' dedication to preserving the environment and ensuring solidarity and democracy in their relationships (FRETEL; SIMONCELLI-BOURQUE, 2003).The organizations linked to fair trade networks in Brazil confirmed this characteristic is valued by consumers since 56.3% of them expressed environmental concern in the production of goods.Furthermore, 62.6% reported to be part of at least one social movement linked to sustainability, ecology, human rights, women empowerment or racial movements.
Main constraints, achievements and challenges faced by the organizations
One objective of the study was to analyze what were the difficulties that Brazilian organizations linked to fair trade networks faced to produce and sell their products.Among the surveyed organizations, 68.7% reported facing difficulties surviving in the market.Several problems were mentioned, such as the lack of working capital (cited by 55% of the organizations), inadequate physical infrastructure for marketing (46.9%) and difficulty in shipping (44.4%).The previously reported lack of legal registration is a problem for 24.4% of the organizations.Without formal contracts, they are not able to provide invoices and to operate in certain markets.Table 4 shows the percentage of organizations that indicated each of the difficulties listed.The last set of questions aimed to identify the main achievements and challenges faced by Brazilian organizations linked to fair trade networks.Group integration and successful collective action were highlighted as an achievement by 75.5% of respondents, followed by the generation of income or greater gains for members (70.4%).The development of self-management capabilities and the exercise of democracy was also a significant achievement according to 60.6% of the organizations.Besides this, organizations reported the social commitment of their members (54.9%) and benefits for their local communities (37.9%) as relevant attainments.
Despite these achievements, several challenges were cited by a high percentage of organizations (Table 5).Results show that a main challenge for most part of the organizations (74.7%) is reaching economic viability and generating adequate income for members.A slightly smaller percentage indicated the challenge of keeping the group together and working collectively (65.3%).Curiously, an even higher percentage of respondents indicated this item as an achievement.We infer that although many respondents noted group unity as an achievement, a union's continuity over time is a challenging and very complex task.The same reasoning refers to the ability to generate adequate income for shareholders, which was cited as a challenge by 74.4% of respondents, although previously it had also been indicated as an achievement by 70.4% of them (see Table 5).Fair trade in Brazil: perceptions from Brazilian experts After presenting data from the survey applied to organizations involved in fair trade networks, we present results from interviews with experts who are working on different fronts to promote the development of fair trade in Brazil.For this, we present a perspective of these respondents about the current situation of fair trade in Brazil.Finally, we also discuss the main constraints and opportunities for fair trade in the country.
Current status of fair trade
A major constraint for small organizations, communities and families working with artisan products is having access to markets.One of the key contributions of the organizations working to promote fair trade is precisely to enable small producer access to consumer market.Several consulting companies also facilitate producer access to markets.They are usually hired by companies that want to invest in fair trade and by other supportive organizations, such as the Brazilian Service of Support for Micro and Small Enterprises (SEBRAE).An example is Parceria Social (Social Partnership), which analyzes day-to-day processes for producers to act according to fair trade ideology.
In comparison with international fair trade, the main difference and specificity of the Brazilian case is its development based in public policies.In addition to the struggle to get state recognition for practices, fair trade organizations are responsible for applying resources in a systematic manner through projects and social programs (Coordinator of Faces do Brasil).In this sense, organizations such as Faces do Brasil develop strategies to expand fair trade more broadly, providing for more protagonism in trade relations, to increase the value of their products and services for households or commercial consumers.
Although there is little concern regarding certification in Brazil, it is expected that the Brazilian System of Fair and Solidarity Trade (BSFST) will contribute to its consolidation.BSFST is an important tool to overcome barriers faced by producers.The project coordinator of Faces do Brasil ratifies the result from the survey's descriptive phase (section 4.1), by stating that the main reason for producers to pursue fair trade certification is to increase their earnings, although they soon notice that the certification also transmits credibility in the production process.
So, initially, the motivation for producers to obtain certification is commercial.With the encouragement of Faces do Brasil and other organizations, producers realize the importance of the fair trade approach, which also focuses on social inclusion and environmental benefits.Over the long haul, producers are committed to not only follow principles in order to gain certification, but also to transmit the message and expand fair trade formally.
According to the consultant from Parceria Social and Brasil Social Chic, consumers often acquire products as an act of philanthropy.The manager of Tênis believes that motivations of fair trade consumers in Brazil are more social than ecological.However, it is believed that Brazil is under a process of development of consciousness about these products, since it is an incipient market in which fair trade started commercialization during the past 10 years, whereas in Europe (considered a mature market), it has been practiced for more than 60 years.In turn, the university researcher says that Brazilians associate the concept of fair trade with purchases made locally, as a way to strengthen local economies and have more direct contact with the producer.In this type of purchase, consumers pay the producer directly.
Respondents stress that there is a challenge in creation of a Brazilian consumer culture that fosters a change in the labeling process and in consumers who ask more information about the products.The project coordinator of Faces do Brasil believes that with support from the environmental movement, it is possible to create this culture and demonstrate the impacts of the consumption of fair trade products, comparing them with those of traditional commerce.
With respect to enterprise characteristics, there are varying profiles of gender, age and social reality according to historical foundations and even the area in which these groups are established.The consultant from the Parceria Social and Brasil Social Chic confirms what the survey described (section 4.1.1,Table 2) regarding motivations for starting a fair trade organization: "Fair trade is an opportunity for people, through their work, to live worthwhile in the correct way" (Consultant of Parceria Social and Brasil Social Chic).
Another issue involving fair trade in Brazil is competition with the traditional supply chain because a product may lose sales when it is analyzed based only on price.The fair price considers much more than raw materials, instead it includes the entire work time and efforts that were spent to make the product.Accordingly, the consultant for Parceria Social and Brasil Social Chic reports that producers often do not value their work because of problems with low self-esteem within the groups.Thus, the work of the consultancy is not only to embed the concepts of self-management and transparency.Further work is also necessary.The consultant for Parceria Social and Brasil Social Chic reports that a recent partnership began with a psychologist to try to reconcile the two objectives, fair trade and improved self-esteem, of those involved.
Transparency in the remuneration process is a key issue for fair trade.As reported by project coordinator of Faces do Brasil, all decisions are taken by the group and everything is decided cooperatively.Similarly, the distribution of profits is made annually through the financial surplus.The group decides whether it is better to redistribute, reinvest or conduct some alternative means of distribution.
Constraints on fair trade in Brazil
One of the bottlenecks identified by respondents is the lack of productive capacity of the groups directly affecting the supply of products.Regarding the importance given to the certification by the producers the biggest problem is the lack of recognition by Brazilian consumers.Regarding commercial issues, for the president of Justa Trama and the manager of Tênis, the main challenge is to persuade consumers to choose this type of sustainable consumption.The Tênis manager also believes that most consumers in Brazil do not care or are unaware of the payment flow within the supply chain.
With respect to the production side, the university researcher warned about the risk caused by how under the fair trade principle only small producers can be certified, claiming that this could halt growth in the sector.Although the international rules of fair trade certification are limited to small producers, under the Brazilian System of Fair and Solidarity Trade, producer size is not included as a specific criterion.What matters is whether the production is in accordance with the principles.
Another question indicated by the university researcher and partners of the Cooperative Girasol regards how fair trade products are not seen as high quality.In this sense, it is important that producers receive technical training.Otherwise, consumers might buy a product only as philanthropy, which is not the objective of fair trade.The purpose is to expand the consumer market in a sustainable manner to benefit a greater number of producers and chains.
Opportunities for fair trade in Brazil
In general, the perspective for the fair trade sector is good, according to the consultant for Parceria Social and Brasil Social Chic: "It's a very new market, where the gates have been opened".The president of Justa Trama also believes fair trade market is growing and that the development of the BSFST will help to expand the market.However, the university researcher and the partners of the Girasol Cooperative do not see good prospects for the fair trade market in Brazil: www.revistaoes.ufba.brFair trade in Brazil: current status, constraints and opportunities [...] the cost of certification is very high, it is a very restricted market, which depends on the demands of European consumers.It is not worth it for the small producer.He may seek other strategies for differentiation, such as not having to pay a high price for a certification, without any guarantee of a market (University Researcher).
Faces do Brasil aims to create a Brazilian fair trade consumer market.This would help to transform the view that "fair trade is that 'thing' where rich people buy from the poor", besides the fact that it would be better for the environment.To transform fair trade purchases from a charitable act into a politicized choice, it is important that Brazilian consumers recognize the higher value of a product produced following fair and solidarity principles and criteria.Although there are studies showing the increased concern Brazilians consumers have for sustainability and Corporate Social Responsibility (AKATU, 2013), there are well-known consumer barriers all over the world (MONT;PLEPYS, 2008).
In addition to producers, retailers are willing to work with fair trade.According to the consultant for Parceria Social and Brasil Social Chic, retailers who usually buy fair trade products are the "shops that already have a brand and a clientele with a greater purchasing power, since fair trade products are a little more expensive than others made in the traditional way".
With respect to communication, information must be transferred efficiently for consumers.Brasil Social Chic communicates by using a label with its logo, stating the origin and fair trade principles: "This product promotes social inclusion through job and income generation for crafts and sewing groups".
Expanding publicity is important in order to provide consumers with greater knowledge on the subject and to explain the work that exists within the fair trade supply chain.To reach a broad range of consumers, Brasil Social Chic began working on press relationships as a communication strategy, to appear in newspapers and magazines.The project coordinator of Faces do Brazil and the president of Justa Trama also reported that they are planning actions to promote trading.According to Hamza and Dalmarco (2012), dissemination of information in the media is a major factor in stimulating sustainable attitudes.
To follow and understand fair trade principles is critical for all groups involved.These principles must be firmly established, or, as the manager at Tênis states, they must be "in the companies' DNA".Although the concept has grown as a public policy, the trajectory of fair trade so far in Brazil has shown the relevant strength to keep moving forward, which can make it feasible even with political change.The participation of public and private entities to develop the BSFST, and the large number of actors involved in the development of an internal market, bring a positive perspective to fair trade in Brazil (Coordinator of Faces do Brasil).
Table 6 presents the main ideas and opinions of the interviewees concerning the issues outlined above.There is optimism about the growth of this type of trade in Brazil, although some constraints remain to be overcome.It is necessary that fair trade become better known and that consumers know and understand the impact of its consumption and its importance in the economy.It may be an interesting market for export, although it has risks (University Researcher).
Source: The authors.
Discussion
By analyzing fair trade in Brazil, we identified challenges and opportunities and the need for integrated actions in the following dimensions (GOMES; MENDONÇA, 2016).In economics, through the development of fair and solidarity chains for national and local commercialization; in education, to grow awareness in society and consumer market; and in politics, to ensure practices to enhance justice and social equality (GOMES; MENDONÇA, 2016).
As noted in this paper, some difficulties faced in Europe in the 1970s and 1980s in fair trade networks growth (FRIDELL, 2004) are currently happening in Brazil.Empirical data analyzed in this research shows that there is still much to be done in order to make fair trade products more valuable in the Brazilian market.Among the difficulties cited by Fridell (2004) there are the small size of the fair trade market, restricted consumer access, a public perception of fair trade products being low quality and an inadequate use of marketing tools.These barriers are still found in Brazil and should therefore be addressed.
Fair trade is mainly based on labels and certification, which create advantages (DE PELSMACKER;JANSSENS, 2007;FRIDELL, 2004;GENDRON;BISAILLON;RANCE, 2009;LOW;DAVENPORT, 2005;REED, 2009;RENARD, 2005).However, this certification process is time demanding, highly expensive and mainly required in international consumer markets.Vieira, Aguiar and Barcellos (2010) address the negative aspect of certification, traditionally made by third-party organizations that end up receiving payment from all links in the production chain.This is ratified by the continued from previous page www.revistaoes.ufba.brFair trade in Brazil: current status, constraints and opportunities university researcher, "the certifier gains from all links in the chain ... the producer has a cost... he [the certifier] gains from the importer, ... and still earns a percentage of sales in the grocery store".
Regarding this, with the new BSFST, there is the possibility of reducing the power of third-party certification in the fair trade supply chain.In this new system, production groups can supervise principle compliance by using participative guarantee systems, without depending on an external certifier.BSFST represent a political and economic project, since it institutionalizes the potential for social transformation of fair trade, as well as strengthening commercial relations based in principles that differ from the conventional ones (GOMES; MENDONÇA, 2016).
In addition, most Brazilian consumers still do not recognize certification nor are they concerned with product origin.Brazilian experts interviewed in this research confirmed this Brazilian consumer characteristic, which corroborates the findings of Hamza and Dalmarco (2012).Thus, the low number of certified organizations in Brazil can be explained by the lack of consumer interest and knowledge about fair trade.Because many fair trade products are more expensive as a result of the production process, their contents must be disclosed for consumers to realize the difference between fair trade and traditional products and be willing to pay for this difference.According to Gendron, Bisaillon and Rance (2009), market access should be expanded to increase the customer base.
According to result from interviews with Brazilian experts, most fair trade happens locally.This statement can be confirmed when compared to the survey's result, where more than 70% of the respondents assert that they commercialize their products in local markets.However, when producers commercialize in traditional markets, they are exposed to competitors from all over the world and in these cases, all fair trade orientation for fair income generation is not taken into account.
This result corroborates the studies by Loureiro and Lotade (2005), Doran (2009) and Doran and Natale (2011), since they argue that there is competition with the traditional market, and there is still much to learn about fair trade consumers.Low andDavenport (2005, 2009) highlight the importance of a proper dissemination of the message of fair trade as well as the need to include items that are produced and marketed in an ethical manner in consumers' daily shopping.
However, there are also good perspectives.The survey analyzed in this research has outlined that in the last few years public support institutions and agencies seeking to increase the market of fair trade have been created, improving fair trade organization management.This support is important to minimize legal and managerial challenges.Despite these efforts, it must be highlighted that most surveyed organizations are still young, small and facing severe difficulties in achieving economic viability and generating adequate income for members.
Public support for fair trade organizations should be maintained for a longer period of time in order to help them consolidate.It is especially important considering that many beneficiaries of Conditional Cash Transfer (CCT) programs become members of fair trade organizations to complement their income and overcome poverty.As the organizations consolidate and become economically viable, their members become less dependent on social programs.
Finally, we must highlight the social role that fair trade performs as well as its political nature, since actions should be undertaken to raise awareness of issues of global justice, development and inequality (CLARKE et al., 2007).Thus, we found that fair trade products are distinguished mainly by their origin, not only their social origin, considering the inclusion of communities in the formal economy and market, but also with respect to preserving the environment.The organizations linked to fair trade networks in Brazil confirmed this characteristic since 56.3% of them expressed environmental concern and 62.6% reported to be part of at least one social, political or racial movement.This confirms that a significant number of fair trade organizations comply with solidarity criteria and try to promote justice and popular empowerment (ZERBINI; PATEO; SÍGOLO, 2010).Vermeulen and Ras (2006) call attention to the challenge that the fair trade organizations face in their daily routine.Challenges arise because these organizations must include in their processes procedures that meet global demand and that promote positive impacts on the chain, in both social and environmental terms.This is perhaps a major step that remains to be taken by most organizations, and fair trade can be an alternative.
Conclusions
With this research, undertaken in two phases, first analyzing data from a survey with fair trade organizations and second with in-depth interviews with Brazilian experts, it was possible to outline the current scenario and identify constraints and opportunities for fair trade in Brazil.Despite the assertion of Fretel and Simoncelli-Bourque ( 2003) that fair trade aims to reduce intermediaries, in Brazil, this is not the main focus.Indeed, that is a difficult proposition because Brazilian producers continue to need support and assistance, mainly to increase access to consumer markets and to have better training in order to add greater value to products.Thus, as pointed out by Martins (2011), more intermediaries are often needed to expand the market and develop communities.
Consumers play a key role in the growth of fair trade.If consumer awareness happens, according to the new movement of ethical consumption reported by some authors (FERRAN; GRUNERT, 2007;GOIG, 2007), fair trade will be an attractive choice.On the other hand, fair trade supply chains should know their consumers better (DORAN, 2009;DORAN;NATALE, 2011;LOUREIRO;LOTADE, 2005).
Regarding the spread of fair trade, a wider dissemination of concepts, with support from society, business, universities and government, might help.The bottlenecks indicated by the university researcher for the growth of fair trade in Brazil are the high cost of certification and the restricted market, which was confirmed by the analyzed survey.
Thus, companies and consumers have an important role in the expansion and dissemination of fair trade as well as in maintaining sustainable management of the entire supply chain.
This research also raised a set of new issues to be addressed in future studies.The survey showed that a high number of Brazilian organizations linked to fair trade networks are composed by beneficiaries of social programs.Future studies could analyze if this membership reduces their dependence on social programs and helps them overcome poverty.This information may have important implications for public policies regarding social programs and the support of fair trade organizations.
Another issue refers to the performance of fair trade organizations according to the origin of resources.Researchers could analyze whether organizations created with members' own resources reach different results than those created with the financial support of NGOs or public funds.To find a good balance between public and private financing of fair trade projects could positively influence their development and consolidation.We also recognize that the survey has limitations, such as a lack of information on the certification of Brazilian fair trade organizations.Future surveys should consider this information in order to allow a better comprehension of the fair trade market in Brazil.
Table 1 -Respondents' profile. Organization and Location Function Description of the respondent and/or organization
Justa Trama operates on several fronts and in discussion groups about fair trade in Brazil and abroad.Justa Trama has fair trade certification and produces clothes with agro-ecological cotton.continued on the next page www.revistaoes.ufba.brFair
Table 3 -Commercialization market.
*Multiple answers were allowed ** The question was answered only by organizations that commercialize products.Source: SIES (2013). | 9,976.4 | 2017-09-30T00:00:00.000 | [
"Economics",
"Business"
] |
The Dilemma Economic Growth And Poverty Rate In Sulawesi
Economic growth is often cited as a significantly contributive factor reduction of the poverty rate. This study aims to investigate the economic growth and poverty among all areas within Sulawesi Island and to compare these two aspects among the island’s provinces. This study employs both comparative quantitative analysis to explore economic growth formulatively and qualitative manner for in depth analysis. The result reveals an escalation in both gross regional domestic product (henceforth regional GDP) and total population each year for the last ten years. However this situation is unable to boost the macro-economic growth; a reason for this condition is the population growth in the recent ten years possibly dominated by High birth rates. Yet, this condition does not lead to a drop in the demand for workforces, which implies that the number of the working-age population (which can help improve the regional per capita income) remains constant despite the population growth. Another possible factor of regional GDP escalation is the fact that the government policy, in its foreign cooperation implementation, does not contribute to the local workforces. Nevertheless, the rise in regional GDP is insignificant as it does not affect the local economic conditions. Hence, it proves that the fluctuation of economic growth does not affect the poverty rate.
INTRODUCTION
Poverty is among the mostcommon issue and the major concern of the government of all countries. In almost every developing country, the majority of its people has considerably low living standard to be compared with not only those in rich countries, but also the elites. One of the low living standard manifestations is the noticeably low income of its people, or in other words, poverty (Todaro as cited in Jayadi, 2016). This explains why poverty is considered a very serious problem in social aspect. Such notion is in line with Usman (2008) who explains the urgency to discuss the problem by determining the concept of poverty and exploring its benchmark, different concepts may lead to different benchmarks, followed by identifying dominant cultural and structural factors that causes poverty.
Purnama (2010) proposes ideas similar to Jayadi and Usman that poverty is among the main concerns in economic development. In its principle, economic development is intended to boost the welfare of society, increase income and promote economic growth in all development sectors, conceptualize optimum equitable development, expand labor and improve public live standard. Accelerated economic growth and equitable distribution of income are essential in attaining the overall development goals.
The government of Indonesia is aware of the national development plan's function among many efforts in achieving a just and prosperous society. On that ground, the government has directed several programs towards regional development specifically in areas that suffer from poverty every year. Regional development programs are integrated and continuous in nature, based on every priority and necessity of each region. Furthermore, the targets of the development programs have been determined in the long-term, mid-term, and annual national development goals. This notion implies that poverty decline is among the contributing factors of the national development success. Effectiveness in reducing the poverty rate is the main consideration in implementing a development strategy. Therefore, the effectiveness of poverty rate decline has been regarded as the key factor in deciding the focus or priority sector of the national development goals (Ravi, as cited in Purnama, 2010).
Economic growth is considered as a factor that significantly contributes in reducing the poverty rate. Talmera (2016), however, proposes different opinions regarding the theories of economic growth: the drawbacks of such theories are mostly the absence of a thorough discussion of the correlation between economic growth and income distribution. The theories rather imply that income inequality is getting worse if economic growth is increasing. It is because the accumulation of the income of the individual and private sector is basically crucial in raising capital for furthering the investment and economic growth that is in line with Harrod-Domar's theory of economic growth. According to the theory, saving and investment play a major role in maximizing economic growth. The problem is that not everyone can save their income. Allocating a budget for saving and investment is the privilege of the rich, while the poor save nothing as they spend their money on fulfilling their needs. Another factor widening the gap of income inequality is the fact that the poor have no access to get loans from a bank or credits to improve their welfare and quality of human resources. This situation in turn causes the unemployment rate and dependency burden continues to soar; to worsen things, such condition also wanes the national income.
The issues of income inequality had occurred in some provinces in Indonesia. A study by Soleh (2011) has identified that there is no guarantee that a high economic growth will lead to prosperous society, despite the fact that high economic growth is expected improve public welfare; a concrete example of this problem occurred in Papua Barat: a province with the highest economic growth (11.27% annually) among other provinces in Indonesia, yet its people live under poverty line, making the province the second poorest area (35.77%) right after Papua. The phenomenon depicts the fact that economic growth alone will not help the poor. Western parts of Indonesia has a relatively better economic condition, including the economic growth and poverty rate, compared to its eastern counterpart. In western Indonesia, economic growth is 5.45% per year, which is higher than the average national economic growth. The percentage of the poor population is 43%, while the population of poor in eastern Indonesia is 57%. In general, some areas in the eastern part of Indonesia are mostly underdeveloped.
Sukirno (as cited in Purnama, 2010) defines the term economic growth as improvement of economic activities that leads to the in crease of goods and services amount produced by people over time. The level of economic growth in a year (year-t) can be determined using the following formula: In consideration of the disparity between developed and underdeveloped provinces, economic growth functions as a benchmark to measure and cut poverty. This present study relied on the 2013-2017 time-series data by Statistics Indonesia (henceforth referred to as BPS). The objective of this study is to investigate the economic growth and poverty among all areas within Sulawesi Island and to compare these two aspects among the provinces on Sulawesi.
METHODOLOGY
The objective of this study is to investigate the economic growth and poverty among all areas within Sulawesi Island and to compare these two aspects in the provinces of Sulawesi. Focusing on secondary data, this research relied on online data from the website of Statistics Indonesia (BPS) such as data rate of poverty, while economic growth data acquisition from data PDRB. This study employs both comparative quantitative analysis to explore economic growth formulatively and qualitative manner for in depth analysis. Where g is level (percentage) of economic growth, PDRB 1 is income of regional GDP in the current year, and PDRB 0 is income of regional GDP in the previous year.
RESULTS AND DISCUSSION
The economic conditions in Sulawesi Island, similar to other provinces in Indonesia, head toward positive growth; this is seen in the improvement of macroeconomic conditions of the areas within the island. Potential and non-potential economic sectors of every province in Sulawesi also take part in enhancing the economic condition. In general, the potential economic sectors are the dominant The above regional GDP data are based on the production approach, in which the total value per year is the accumulation of the total of the added value of goods and services made (or produced) by every production unit within every province in Sulawesi Island. According to Table 1, provinces with the highest regional GDP are South Sulawesi, Central Sulawesi, and North Sulawesi. The regional GDP growth of South Sulawesi from 2013 to 2017 is 7.62 (2013) ; 7.54 (2014) ; 7.19 (2015) ; 7.42 (2016) ; 7.23 (2017) (Central Agency on Statistics, South Sulawesi, 2018). The data, however, do not represent a positive growth rate in Sulawesi. Central Sulawesi is the province that has shown the most sustained and robust regional GDP among other provinces in Sulawesi. The growth rate of regional GDP of Central Sulawesi is 14.65 (2013) ; 13.03 (2014) ; 19.20 (2015) ; 11.74 (2016) ;11.68 (2017) (Central Agencyon Statistics, South Sulawesi, 2018). businesses or sectors that absorb employers more. This sector can also be regarded as the high-growth sector that contributes to the domestic product, which represents the distinctive economy characteristics of an area. Provided in the following Table 1 is the regional GDP's current price depicting the economic conditionsof all provinces in Sulawesi from 2013 to 2017.
The data above are a depiction of the economic condition of Sulawesi based on its regional GDP. Regional GDP is, in general, the total of the added value of goods and services made (or produced) by every economic activity within an area in a certain period (Regional Development and Planning Board of Pakpak Bharat Regency -Division of Economy, 2013). There are three ways to calculate regional GDP; those are production method, income method, and expenditure method.
According According to the projection of population growth, it is estimated that the human population in Sulawesi will keep growing, As a developing country, Indonesia deals with issues regarding economic growth crisis and poverty, which leads to income inequality. Inspired by the theory of analysis by Karl Marx and Frederick Engels, the founder of the social-democracy theory, Nawawi (2009) opines that poverty is rather a structural problem, not an individual problem. Poverty is a result of injustice and discrepancy in society that prevents people from accessing resources or everything they need to enhance their life. The wider the gap between the Table 2 shows that the percentage of the poor population in Sulawesi, from reaching 19,934,000 by 2020 (Statistics Indonesia, 2014). Provided in the following figure is the percentage of the population growth in Sulawesi.
high-social class and low-social class people, the more the number of poor.
Income inequality and indigence, however, are not two different discourses despite the interrelation between these aspects. This notion reflects the argument by Sen (as cited in Wie, 1981) that income transfer of those in middle-class to upper-class worsens the level of inequality, but the situation does not affect those in the lower-class. The following table provides information about the percentage of poor in Sulawesi. (2017), is one of the benchmarks of the income distribution of people regardless of their social status. A Gini coefficient that is close to zero represents equality in wealth distribution. On the one hand, the ratio that is close to one means inequality.
Indonesia has posted improving the Gini ratio; it stays in 0.40 every year.
The poverty severity index, well-known as P2, illustrates the distribution of expenditure of poor (Statistics Indonesia, 2019). The higher the index, the wider the gap of the inequality of expenditure. According to the above data, Gorontalo is the province that has the highest population living Southeast Sulawesi, West Sulawesi, South Sulawesi, and North Sulawesi.
Although the Gini ratio of Indonesia was 0.30 back in the 2000s, the ratio significantly increased to 0.37 in early 2010, and it remained constant in 0.41 from 2011 to 2015. The data imply that the upper-class is the one that benefits much from economic growth rather than the lower-class.
The inequality severity index of every province in Sulawesi is provided in the following table.
in poverty among other provinces; it is represented by the average percentage of poverty severity index of 0.80%.
The following table is the percentage of economic growth and poverty rate in Sulawesi; the data were calculated using the formula of economic growth. Table 4 The Percentage of Poverty Severity Index of the Provinces (Urban Areas and Villages) Source: 2018 Table 5 shows a decline in the economic growth of all provinces in Sulawesi from 2014 to 2017. Economic growth caused by increased REGIONAL GDP is found to be insignificant on the improvement of the level of economic strata of society. On the other hand, the percentage of poor population fluctuates every year, meaning that advancement in economic growth does not affect the decline and rise in the percentage of poor.
The condition in North Sulawesi is a concrete example of the issue previously mentioned. It shows that the regional GDP and population growth in the province are growing in the last ten years. This situation, however, is unable to boost the macro-economic growth; a reason for this condition is the population growth in the recent ten years. High birth rates possibly dominate the increase in the number of the population. Yet, this condition does not lead to a drop in the demand for workforces. This implies that the number of the working-age population (which can help improve the regional per capita income) remains constant despite the population growth. Another factor causing the increase in regional GDP is the fact that the government policy, in its foreign cooperation implementation, does not contribute to the local workforces. Nevertheless, the rise in regional GDP is insignificant as it does not affect the local economic conditions. Sukirno (as cited in Purnama, 2010) claims that the quantity and quality of population and workforce are central to advance the economic growth of a region. Although the population continues to grow, poverty issues remain unsolved if there are no attempts to boost the quality of the human workforce and increasing the job opportunity. The idea by Sadono Sukirno, however, does not ensure that a rise or decline in the economic growth will be significant to the poverty rate since the theory demands in-depth analysis.
The discussion above represents the overall condition of economic growth in all provinces in Sulawesi, including Central Sulawesi, West Sulawesi, and Southeast Sulawesi. Gorontalo is the province with the worst poverty severity index, which stuck at 0.80% on average every year (see Table 4). In addition to the number of the population, the issue of indigence in the province is determined by the lack of capital goods and technology exposures. The social system and attitude of the people in Gorontalo that highly value its local Source: Processed Data (2019) Table 5 Percentage of Economic Growth in Sulawesi and Percentage of Poor wisdom are often cited as the factor contributing to the number of unfortunate in the province. Improvement in economic growth is supposed to bring a positive change to a region. Factors, such as a rise in regional GDP sectors, is also inevitable to elevate economic growth comprehensively. These sectors are expected to create more job opportunities, which in turn, can raise the consumption of people (this idea syncs to the poverty crisis). Ignoring poverty or no changes (even a decline) in economic growth does not lead to a break in the cycle of poverty. In conclusion, success in raising or sustaining the percentage of economic growth depends on land and other natural resources, quantity and quality of a population and human capital, capital goods and technology, as well as a social system and attitude of society.
CONCLUSIONS
The percentage of the economic growth in Gorontalo was 13.85% back in 2014. However, the percentage dropped by13.10% in the next year. In 2016 and 2017, the economic growth of the province declined to 11.26% and 8.97%, respectively. Gorontalo is the province with the most people living below the poverty line among other provinces in Sulawesi (average percentage of 17.51%). Back in 2014, the percentage of the economic growth in Central Sulawesi was 13.03 percent, and it increased by 19.20% in 2015. A drop in economic growth was inevitable in 2016 (decreased by 11.74%) and in 2017 (decreased by 11.68%). Central Sulawesi is the second-highest province with people living below the poverty line compared to other provinces in Sulawesi (average percentage of 14.32%).
Economic activity in Southeast Sulawesi tends to fluctuate over the years. The rise in economic growth in 2014 and 2015 was 10.67% and 11.56%, respectively. However, the improvement was insignificant. Later in 2016, the condition of economics in Central Sulawesi fell by 10.60%, yet it increased by 10.78 percent in 2017. The average percentage of poor in this province is 12.89%. In West Sulawesi, the economic growth in 2014 was 16.67%. The percentage of economic growth in the province dropped by 11.98% and 9.01% in 2015 and 2016, respectively. The percentage, however, increased by 10.18% during the next year, although it was not that significant. The population of poor in this province reaches 10.48%. In 2014, the condition of the economic growth of South Sulawesi was 15.14%. The percentage of the economic growth in North Sulawesi was 13.46% in 2014, and it later decreased by 12.99% in the following year. A decline in the percentage of economic growth still occurred in 2016 (10.31%) and 2017 (9.57%). | 3,922.8 | 2021-03-30T00:00:00.000 | [
"Economics"
] |
Composition and Financial Performance of Farmers' Cooperative Societies in Kericho County, Kenya
The main focus of this study was to analyse the relationship between board composition and the financial performance of farmers' cooperative societies in Kericho County, Kenya. The study was based on the Stakeholder Theory and adopted a correlational research design. The target population consisted of accounting officers, auditors, chief executive officers, directors, managers
INTRODUCTION
Farmers' cooperative societies are organisations formed by a group of farmers or agricultural producers who come together to achieve common goals and address common challenges in their agricultural activities.These cooperatives are based on the principles of cooperation, mutual help, and democratic decision-making (Singh et al., 2019).In most countries, agricultural cooperatives play a crucial role in the agricultural sector, particularly in empowering small-scale farmers and promoting sustainable agriculture practices.They provide a way for farmers to work together to overcome challenges that would be difficult to tackle individually and ensure a more equitable distribution of benefits within the agricultural community (Hakelius, 2018).
The development of agricultural cooperative societies dates back to the 18 th and 19 th centuries, and they have continued to evolve and adapt to the changing needs of farmers and the agricultural sector.In addition, their development has been shaped by the collective efforts of farmers, supportive government policies, and a commitment to cooperative principles and governance mechanisms (Mwebia, 2020;Singh et al., 2019).
According to da Silva et al. (2022), the modern cooperative movement has its roots in the 19 th century, with the establishment of the Rochdale Society of Equitable Pioneers in England in 1844, the first credit union by Friedrich Wilhelm Raiffeisen in Germany in 1864, and the passage of the Capper-Volstead Act in the USA in 1922 that granted agricultural cooperatives limited exemptions from antitrust laws, enabling farmers to collectively market their products and improve their bargaining power.
In Nigeria, the Marketing Cooperative Federation was founded in 1935 to address the exploitation of farmers by middlemen.Tanzania launched the Ujamaa policy in the 1960s, emphasising collective farming and the formation of agricultural cooperatives.While in Kenya, the Kenya Farmers Association was established in 1923 to help smallholder farmers improve their productivity and market access.
Despite the fact that agricultural cooperative societies are key drivers in economic development and enhanced livelihoods of farmers in rural areas, they have consistently experienced a decline in their financial performance globally.In Bangladesh financial performance of the cooperative societies in the sector that contributes approximately 35 per cent of the gross domestic product and creates employment of 60 per cent has been declining due to poor management, internal conflicts, or lack of accountability, leading to the deterioration of cooperative societies.
In Sweden, farmers' cooperative societies have enhanced the development of the economy, where the farmers' sector has created approximately 57,000 employment opportunities.The sector ranges from wheat farming, barley farming, milk production, and pig production (Grashuis & Su, 3 | This work is licensed under a Creative Commons Attribution 4.0 International License.2019).However, over the last two decades, the country witnessed a decline in the financial performance of the cooperative societies, which has been attributed to a lack of transparency and accountability in the management of the societies, insider control and an increased lack of member participation.
According to Odetola et al. (2015), cooperative societies in Nigeria significantly reduce poverty and contribute to capital formation among members, but despite this contribution, rural farmers do not get financial services from financial institutions because of bureaucratic management procedures.Adeyemi-Suenu (2014) attributed the decline in financial performance of farmers' cooperative societies and noncooperation by members in Nigeria to poor management and governance, including a lack of accountability and internal conflicts.
The Rwandan government recognises agricultural cooperative societies as among the main means of eradicating poverty in rural areas.The national administration has pumped financial resources into the farmers' cooperative societies to assist their operations (Emmanuel, 2021).However, some of the cooperatives have performed poorly, which has been linked to poor corporate governance arising from malpractices by the boards (Mubirigi et al., 2016).Twimukye (2017); Rwekaza and Mhihi (2016) reported poor corporate governance in the Tanzania and Uganda farmer cooperative societies contributes to the decline in their financial performance.
Farmers' cooperative societies are considered the most prominent in Kenya compared to other cooperative societies.These cooperative societies assist farmers in collecting, processing, storing, and selling the products of their members.The cooperative sector in Kenya contributes 51 per cent of the gross domestic product.It is reported that 26 per cent is directly linked to GDP while 25 per cent contributes indirectly.The cooperative sector exports over 65 per cent of its product, creating job opportunities for almost 40 per cent of the total population (GOK, 2019).However, according to Kenya Cooperative Society's yearbook (2020), farmers' cooperative societies in Kericho County have experienced a 10 percent drop in financial performance from 2018.
Problem Statement
Farmers' cooperative societies perform an essential role in the development and growth of farmers.This is through providing farmers with farmers' inputs, organising training programs, granting loans, and marketing their farm produce.Despite the significant contributions, they have yet to realise their full potential.According to Kenya Cooperative Society's yearbook (2020), farmers' cooperative societies in Kericho County have been posting a drop in performance by 10% since 2018, raising an alarm about the consistent decline in annual financial performance.Several studies ssuch as those of Al-Saidi et al., (2013); Madhani (2017); Martín and Herrero (2018) have been conducted on board composition and financial performance, however, the study findings have been inconsistent.It is unclear how the various components of board composition such as diversity, experience, and board size could influence the financial performance of farmers' cooperative societies.Few academic research studies have been conducted to determine the relationship between board composition and the financial performance of agricultural cooperative societies.Thus, this study sought to assess the relationship between board composition and the financial performance of farmers' cooperative societies in Kericho County, Kenya.The findings of this study will be significant to the management of farmer cooperative societies, policymakers and scholars interested in carrying out research on the relationship between board composition and the financial performance of farmers' cooperative societies.
Research Hypothesis
H01: There is no statistically significant relationship between board composition and the financial performance of farmers' cooperative societies in Kericho County, Kenya.
LITERATURE REVIEW Theoretical Review
The study was anchored on Stakeholder theory, which was first developed by Dr. F. Edward Freeman in 1984.The stakeholder theory postulates that the main responsibility of managers, including the board of directors, is to oblige to the stakeholders' interest in the best way possible, using the resources of the cooperative societies to increase the stakeholders' wealth through the enhancement of profits.In addition, the theorists assert that upholding such behaviour within the constraints of the legal frameworks and without fraud will be helpful for society as a whole.Freeman et al. (2004) noted that through stakeholder theory, there were expectations that firms would make efforts to mitigate conflicts among board members.Further, the theory incorporates all the interests of other parties that depend on the firm.
Mutuku (2016) adopted the theory while conducting an examination on the effects of effective corporate governance (CG) on Sacco's financial performance at Athi River in Machakos County.According to the research study, good corporate governance is meant to maximise the creation of wealth for the whole corporation.Stakeholders are individuals or any group who affect the achievement of organisational objectives.Therefore, organisations are affected by a set of interest groups, such as organisations that are board members.The board composition is core to the organisation's performance or success of the corporation.
Stakeholder theory has been criticised by some scholars, such as Blattberg and Charles (2004), who noted that stakeholder theory assumes the interest of several stakeholders who can be at best balanced against one another or compromised.Donaldson and Preston (1995) also argued that the theory failed to differentiate between various stakeholders and their essential contributions to the firm.Further, the theory is critiqued for being too imprecise both in descriptive capacity and its instrumental utility.Despite this criticism, various scholars have supported the theory of stakeholders by asserting that inside directors are more trustworthy to the firm's resources, thus leading to the improvement of a firm's performance because of information asymmetry (Nicholson & Kiel, 2007).In contrast, other scholars argued that internal directors have an in-depth understanding of the firm, which creates more awareness of the valuable materials that can be put into use to improve the firm's performance (Donaldson, 1990).Therefore, this theory supported the second objective on the relationship between boards' composition and the financial performance of farmers' cooperative societies financial performance.
Empirical Review
The research analysed previous studies that examined the influence of board composition on the financial performance of farmers' cooperative societies in Kericho County, Kenya.The reviewed studies included scholarly research, journal articles, and other publications.
Board Composition and Financial Performance
The Board of Directors is a crucial element in ensuring effective corporate governance within an organisation.As such, its composition must be responsive to key functions, including supervision and monitoring, preventing opportunistic behaviours by executives, and providing guidance to decision-makers to enhance the company's performance (Madhani, 2017).According to Al-Shammari and Al-Saidi (2013), Board composition is comprised of various indicators such as board diversity, board experience, and board size.An adequately constituted board is essential for achieving a company's goals and objectives, as highlighted by Al-hadal et al. (2020).This allows for increased efficiency and effectiveness in operations.Additionally, a wellconstituted board improves a company's image, thereby attracting stakeholders' trust and support.Dzingai and Fakoya (2017) conducted research on the Johannesburg Stock Exchange in South Africa to determine how the corporate governance structure of listed mining firms affects their financial performance.The study focused on agency theory and examined three specific objectives: board size, knowledge balance, and diversity.The researchers used integrated financial annual and sustainability reports from 2010 to 2015 in a survey design to gather data.The study utilised panel data analysis of the random effect model.Results showed that board size had a negative influence on return on assets.It can be noted that the research was limited to listed mining companies in South Africa and used a survey research design.This study's findings can not be generalised to the agricultural sector due to many reglualtions in the mining sector.Therefore, the current study was carried in farmers' cooperative societies using a correlational research design.Martín and Herrero (2018) studied how the composition of a business's board of directors can impact its performance, measured by Tobin's Q ratio and economic profitability.The study focused on three main aspects of board composition: diversity, board size, and experience.The research used a descriptive design and included data from 49 respondents between 2010 and 2015.Multiple regression techniques were used to analyse the data.The results showed that board experience did not have a significant impact on the firm's performance.However, board size and diversity had a positive influence on performance.The study used only one measure of performance, Torbin's Q, a market-based measure of financial performance.The current study used sales growth, profit increase and asset base increase to measure financial performance.
In a study conducted by George and Muiruri (2022), the connection between the financial performance of microfinance institutions and their corporate governance was examined in Inking Limited, Rwanda.The research focused specifically on the impact of diversity, board size, and CEO duality.Using a correlational research design, the study included 35 employees and 11 board members as its target population.Primary data was collected through face-to-face and self-administered questionnaires.Descriptive and inferential statistics were used to analyse the data, revealing a positive correlation between board size and financial performance, while diversity displayed an insignificant relationship with performance.The study's conclusion stated that board size is a crucial factor in improving corporate governance.It is worth noting that this study was performed on farmers' cooperative societies, whereas the previous study was conducted in financial institutions and due to different regulatory frameworks, the study's fisnidngs can bot be generalised to the agricultural sector.
Chemweno (2016) conducted research on the relationship between board diversity and the performance of NSE-listed firms.The study focused on board age, cultural diversity, and gender diversity and evaluated performance using return on assets.The research employed a quantitative research design, gathering data from 42 firms through secondary data obtained from their annual financial reports.Panel data estimation methods were used for analysis.The study concluded that board diversity had no statistically significant correlation with firms' performance.While Chemweno's study analysed NSE-listed firms, this study examines the performance of farmers' cooperative societies.Mutuku (2016) conducted a study in Machakos County, Kenya, that examined the connection between Corporate Governance and the financial performance of Athi River town.The study utilised shareholders, stakeholders, and agency theory.A descriptive research design was employed, with a population of 101 cooperative societies, and a sample size of 33 was chosen through stratified random selection.The semistructured questionnaire was used to collect primary information, which was analysed statistically and descriptively.The study discovered that board composition had a strong correlation with Sacco's financial performance.It should be noted that this study was conducted in farmers' cooperative societies and utilised a correlational research design, unlike the previous study conducted in Saccos, which adopted a descriptive research design.
Conceptual Framework
In Figure 1, board composition is the independent variable whose parameters are board diversity, board experience and board size.Financial performance is the dependent variable measured by sales growth, increase in profits and increase in asset base.
MATERIALS AND METHODS
This study adopted a correctional research design to establish the relationship between board composition and the financial performance of agricultural cooperative societies.This study's target population was 1,261 individuals comprised of the entire management of registered farmer cooperative societies, including accounting officers, auditors, CEOs, directors, employees, and managers from the 51 farmers' cooperative societies registered by the Ministry of cooperatives in Kericho County and that were operational during the period of study.A sample size of 303 respondents was determined scientifically using Yamane's (1967) formula.Data were obtained using structured questionnaires whose content, construct, face, and criterion validity were ensured through extensive literature review and consultation with subject experts in finance.The instrument's reliability was measured through a pilot study from the neighbouring Bomet County.The obtained data were analysed descriptively using means, frequencies, and standard deviation and inferentially using correlation and multiple regression analysis.The findings were presented using frequency tables.
Demographic Characteristics
Out of 303 questionnaires sent out, the researcher received 282, giving a response rate of 93%, which was sufficient for the study.The majority of respondents, 51.4%, were male, with 48.6% being female.In terms of age, 15.2% of respondents were under 30 years old, 22% were between 31 and 40 years old, 22.7% were between 41 and 50 years old, 24.1% were between 51 and 60 years old, and 16% were over 61 years old.Additionally, the study found that 32.6% of respondents had a diploma as their highest level of education, while 30.9% had a certificate, 19.1% had undergraduate or advanced degrees, and 17.4% had only completed high school.Finally, the study found that 39.7% of respondents had worked in a cooperative society for 6-10 years, 35.8% for less than 5 years, 14.5% for 11-15 years, and 9.9% for more than 16 years.This indicates that the respondents were able to answer the research questions accurately.
Descriptive Statistics
The study sought to evaluate the influence of board composition on the financial performance of farmers' cooperative societies.The participants were asked to rate their level of agreement on a scale of 1 to 5, with 1 being "Strongly Agree" and 5 being "Strongly Disagree" in response to various statements related to the variables being studied.
Independent Variable Dependent Variable
The results were analysed and presented using tables showing the responses' frequency, mean, and standard deviation.Chemweno (2016) and Martín and Herrero (2018), indicating that board size and diversity have a positive impact on performance.However, the results contradict those of Dzingai and Fakoya (2017), who found that board size had a negative effect.Additionally, George and Muiruri (2022) found that board diversity did not have a significant relationship with firm performance, which differs from the results of this study.
Financial Performance
The study sought to determine corporate governance's influence on the financial performance of farmers' cooperative societies.The frequencies, mean, and standard deviation of the findings are tabulated in Table 2.
CONCLUSION AND RECOMMENDATION
The study found that the composition of the board has a positive and significant impact on the financial performance of farmers' cooperative societies in Kericho County.The study found that board diversity, relevant experience, variations in director tenure, and board size all contribute positively to the financial performance of farmers' cooperative societies.The study recommends that farmers' cooperative societies prioritise diversity in their boards of directors for better decisionmaking.This diversity brings varied experiences and expertise to the board.The study also recommends that the board establish a fixed tenure for directors to ensure optimum service delivery to stakeholders.
Figure
Figure 1: Conceptual Framework
Table 5 : Analysis of Variance Model Sum of Squares df.
a. Dependent Variable: Financial Performance of Farmers' Cooperative Societies b.Predictor: (Constant), Board Composition,
Table 5
indicates that the F-statistic of the regression is significant (P<0.05),implying that the model applied significantly.Board
Table 6 :
The regression model in Table6indicates that Board Composition has a significant positive relationship with the Financial Performance of Farmers' Cooperative Societies (P<0.05).This implies that when all other factors are kept constant, a unit increase in board composition contributed 0.431 units to the rise in the financial performance of Farmers' Cooperative Societies (β =0.431, P<0.05).
a. Dependent Variable: Financial Performance of Farmers' Cooperative Societies | 4,040.4 | 2024-01-29T00:00:00.000 | [
"Business",
"Agricultural and Food Sciences",
"Economics"
] |
MatBED_B&C: A 3-dimensional biologically effective dose analytic approach for the retrospective study of gamma knife radiosurgery in a B&C model
The biological effect of irradiation is not solely determined by the physical dose. Gamma knife radiosurgery may be influenced by dose rate, beam-on-time, numbers of iso-centers, the gap between the individual iso-centers, and the dose‒response of various tissues. The biologically effective dose (BED) for radiosurgery considers these issues. Millions of patients treated with Models B and C provide a vast database to mine BED-related information. This research aims to develop MatBED_B&C, a 3-dimensional (3D) BED analytic approach, to generate a BED for individual voxels in the calculation matrix with related parameters extracted from Gammaplan. This approach calculates the distribution profiles of the BED in radiosurgical targets and organs at risk. A BED calculated on a voxel-by-voxel basis can be used to show the 3D morphology of the iso-BED surface and visualize the BED spatial distribution in the target. A 200 × 200 × 200 matrix can cover a greater range of the organ at risk. The BED calculated by MatBED_B&C can also be used to form BED-volume histograms to generate plan quality metrics, which will be studied in a retrospective study of gamma knife radiosurgery to guide future BED planning.• We develop MatBED_B&C to calculate the 3D BED in radiosurgical targets.• The BED of MatBED_B&C can visualize the BED spatial distribution profiles.• The BED of MatBED_B&C will generate plan quality metrics studied in a retrospective study.
a b s t r a c t
The biological effect of irradiation is not solely determined by the physical dose. Gamma knife radiosurgery may be influenced by dose rate, beam-on-time, numbers of iso-centers, the gap between the individual iso-centers, and the dose -response of various tissues. The biologically effective dose (BED) for radiosurgery considers these issues. Millions of patients treated with Models B and C provide a vast database to mine BED-related information. This research aims to develop MatBED_B&C, a 3-dimensional (3D) BED analytic approach, to generate a BED for individual voxels in the calculation matrix with related parameters extracted from Gammaplan. This approach calculates the distribution profiles of the BED in radiosurgical targets and organs at risk. A BED calculated on a voxel-by-voxel basis can be used to show the 3D morphology of the iso-BED surface and visualize the BED spatial distribution in the target. A 200 × 200 × 200 matrix can cover a greater range of the organ at risk. The BED calculated by MatBED_B&C can also be used to form BED-volume histograms to generate plan quality metrics, which will be studied in a retrospective study of gamma knife radiosurgery to guide future BED planning.
• We develop MatBED_B&C to calculate the 3D BED in radiosurgical targets.
• The BED of MatBED_B&C can visualize the BED spatial distribution profiles.
• The BED of MatBED_B&C will generate plan quality metrics studied in a retrospective study.
Background information
The biological effect of irradiation results from the combined action of radiation damage and repair kinetics. The biologically effective dose (BED), developed as a radiotherapy concept, has been adopted in gamma knife radiosurgery (GKRS). The BED model considers the factors associated with the repair of sublethal damage related to radiotherapy. In GKRS, these factors include the collimator size, the number of iso-centers used, dose rate, beam-on-time, the gap between the individual iso-centers, and the dose -response of tissues [1] . Consequently, the BED model for GKRS has been developed to consider the factors expressed as the following equations [2 , 3] : where Φ( Ξ, ) is expressed as: D T is the total physical dose. The / ratio reflects the dose response of various tissues. The equation combines the fast and slow components of repair (μ 1 and μ 2 ) in a partition model (partition coefficient c). d i , d j , and d k are the dose distributions of the i th, j th, and k th shots, and t i and t j are the irradiation durations of the i th and j th shots, respectively. t i and t j are the initiation times of the i th and jth shots, respectively. Thus, the value of t j -t i is equal to t i + t gap . t gap is the duration of the gap. Here, N is the number of iso-centers used. Eqs. (1) and (2) can be used to generate a BED for individual voxels. To reduce the calculation load, two approaches can be used. One method is extracting the physical dose distribution directly from the availability of the GammaPlan, as introduced by Hopewell et al. [1] . This strategy, as previously mentioned, is GammaPlan version dependant. However, when the availability of the research version of GammaPlan is lacking, significant computational load may make the BED calculation in each voxel problematic. The BED is calculated based on the original physical dose distribution on a voxel-by-voxel basis. Indeed, physical dose distribution calculation on the MR or CT images slice by slice formed a significant computational load. Accordingly, Jones et al. summarized simplified approaches for BED calculation based on higher and lower total treatment times [4] . Graffeo et al.'s study used a monoexponential fit equation to generate an estimated BED from treatment time and margin dose pairs [5] . These simplified approaches have the limitation of more systematic error than an actual BED calculated on a voxel-by-voxel basis. The report of Klinge et al. used a 31 × 31 × 31 matrix covering the selected region to calculate the BED for each voxel. However, only the BED on the prescription dose iso-surface was calculated [6] . Thus, the calculated BED did not represent a comprehensive BED distribution in the target. In addition, to calculate the BED distribution in the organ at risk, such as a pyramidal tract with a length of more than 150 mm, the 31 × 31 × 31 matrix may hardly cover the range. Therefore, comprehensive evaluations of the BED spatial distribution profiles still face challenges. Another calculation approach to reduce the calculation load is calculating the physical dose distribution using three-dimensional (3D) coordinate values, dose rate, and beam-on time of each shot extracted from the GammaPlan. However, this is version-independent. The latter approach calculates the dose falloff based on the maximum central radiation doses of each shot rather than directly extracting the physical dose distribution. In our study, we used the latter strategy.
Since a GKRS treatment plan devised by different physicians has endless permutations regarding the location and number of different iso-centers, even with the same prescription dose, using physical dose distribution to evaluate the quality of a treatment plan is insufficient. Consequently, BED combined the above-mentioned indicators into a single compound indicator in seeking to predict the treatment outcome. For high-dose, single-fraction treatments, for example, Smith et al. has reported the dose rate effects following Gamma Knife surgery for vestibular schwannomas [7] . Further, Villafuerte et al.'s report has shown the correlation between BED and local control after radiosurgery for acoustic neuromas [8] . Two studies of Tuleasca et al. also showed that BED was associated with linear tumour volume changes and hearing preservation after stereotactic radiosurgery for vestibular schwannomas [9 , 10] . Another study of Tuleasca et al. showed the correlation between GKRS BED and obliteration of unruptured arteriovenous malformation [11] . In addition, as mentioned above, Graffeo et al.'s studies used estimated BED as the variable to predict hypopituitarism after singlefraction pituitary adenoma radiosurgery and predict outcomes for acromegaly [5 , 12] . One major limitation of these studies is that the estimated BED did not consider the location (3D coordinate values) of different iso-centers. Notably, the iso-center location in the GKRS target is an important factor influencing the plan quality. Thus, we developed a 3D BED analytic approach to incorporate the 3D coordinate values of each GKRS iso-center into a BED computational model. It is also worth noting that the 3D BED model in our study is still a model just based on the hypothesis that BED or dose rate is a factor for SRS treatments. To refine the BED model and explore the guiding value of the model for GKRS, the first step is to use the model in the retrospective study and validate the hypothesis. This is our motivation for developing MatBED_B&C, a 3D BED analytic approach.
Gamma knife Models B and C are representative models in the era of stereotactic radiosurgery. In the context of the development of BED analysis methods, millions of brain disorder patients treated worldwide with Models B and C provide a vast database to mine BED-related information. Thus, it is essential to develop a BED analytic approach for the retrospective study of GKRS in the B&C model. MatBED_B&C is a 3-dimensional (3D) BED analytic approach to generate a BED for individual voxels. The parameters involving 3D coordinate values, dose rate, and beam-on time of each shot were extracted from the Gammaplan of the B&C model to generate our BED calculation matrix. These parameters were the output values of the GammaPlan with related versions of Models B and C. Thus, the software version had no significant impact on our calculation matrix. The MATLAB R2020a editor ( http://www.mathworks.com/products/matlab/ ) developed the 3D BED analytic approach for the GammaPlan of the B&C model. The workflow of BED analysis contains two main steps: (1) for the solver of the 3D dose profiles for the ellipsoid expansion coefficient, we deploy the Gaussian function to fit the dose profiles. The data for dose profiles can be acquired in TMR10 [13][14][15] or by using a radiochromic film dosimeter in the phantom [16] , in which the latter forms a curve in the dose profiles for individualized GKRS instruments. (2) A 3D BED calculation is deployed based on spatial distribution profiles of the physical dose. The essential step is to use the 3D coordinate values of each iso-center to calculate the dose distribution and BED distribution in a 200 ×200 ×200 matrix. For the B&C model, an iso-center is generated by a 4, 8, 14, or 18 mm collimator. Each collimator has an association between dose falloff and 3D coordinate values. Fig. 1 shows a diagram of the computational pipeline. First, we set the coefficient L of ellipsoid expansion to form an association between dose falloff and 3D coordinate values. Then, for simplicity, the iso-dose contours are calculated in an expanding ellipsoidal around the iso-center. As a result, the ellipsoid equation can be written as follows:
Solver of the 3D dose profiles for the ellipsoid expansion coefficient
and (x i , y i , z i ) are the 3D coordinate values of one dose-distribution point and iso-center for the i th irradiation, respectively. a, b, and c are the lengths of the semimajor axis in the x-direction, y-direction, and z-direction, respectively. A coefficient L of ellipsoid expansion is set as: The x a, y b , and z c are the abscissa values at the dose percentage in the x-direction, y-direction, and z-direction, respectively. In the dose profiles of an iso-center at the Leksell® coordinates (100, 100, 100) described in the TMR10 [13][14][15] , we can also use a radiochromic film dosimeter in the phantom to acquire data, forming a curve in the dose profiles for individualized GKRS instruments [16] . The reference point with coordinates (100,100,100) is the position of the central maximum dose. The coefficient of ellipsoid expansion, L, positively correlates with the distance between a point (x a , y b , z c ) on an ellipsoid and the reference point (100,100,100) . The ellipsoid expansion starts from the reference point and extends to cover the whole 3D space. The distance component in the x-direction, y-direction, and z-direction is (x a -100), (y b -100) , and (z c -100 On the one hand, to express the dose profiles of collimators, the relationship between dose falloff and L can be fitted by the Gaussian function of the MATLAB Curve Fitting Tool.
First, the ellipsoid volume can be calculated as follows: Substituting Eqs. (4) -(6) into Eq. (8) , when Eqs. (4) -(6) are multiplied, the FWHMx, FWHMy, and FWHMz elimination are completed. Thus, we achieve the following: where L becomes the geometric mean of the lengths of the semimajor axis in the x-direction, y-direction, and z-direction. As Eq. (8) shows, the geometric meaning (L) can indicate the radius of a sphere with an equal volume of ellipsoid expressed by Eq. (1) . However, as Fig. 1A and B show, a, b, c, and L can take negative values due to the purpose of fitting the two-sided dose falloff. Since the dose falloff is on two sides, not one side in the x, y, and z directions, the negative values of a, b, c, and L here express dose falloff in another direction.
Next, we adopt the Gaussian function of the MATLAB Curve Fitting Tool to fit the dose profiles. The curve in the dose profiles is used to achieve the following relations: Then, we substitute Eqs. (10) - (12) into Eq. (9) to achieve the following relation: Here, N is the number of terms (Gaussian function fitting dose falloff). Eqs. (10) On the other hand, the relationship between 3D coordinate values (x, y, z) and L is formed by Eq. (7) , which is achieved by substituting Eqs. (4) -(6) into Eq. (3) . Here, the implication of L is a correlation coefficient of gradually expanding an ellipsoid with 3D coordinate values as parameters ( Fig. 1C ). According to Eq. (7) , we obtain: Consequently, based on the relationships between dose falloff and L and between 3D coordinate values (x, y, z) and L, we obtain the relationship between dose falloff and 3D coordinate values (x, y, z). Finally, we calculate the physical dose distribution using the 3D coordinate values, dose rate, and beam-on time of each shot as follows.
BED calculation based on spatial distribution profiles of the physical dose
Then, the dose distribution is calculated as follows: d i , ̇ , and t i are the dose distribution, iso-center dose rate, and irradiation duration of the i th shot, respectively. A contour of one shot is expressed as an iso-dose of half the maximum dose. Thus, we visualize the spatial morphology and position of a shot using the 3D coordinate values of each iso-center of the B&C model during a retrospective study. If we input a 3D model of a radiosurgical target Fig. 2A ), we achieve the spatial relationship between the shot and the irradiated objects ( Fig. 2B ). Then, the iso-dose surface visualizes the total dose of all iso-centers ( Fig. 2C ). Accordingly, we calculate a BED for individual voxels based on physical dose using Eqs. (1) and ( (2) and visualize the spatial distribution of the BED in the radiosurgical target ( Fig. 2D ). Since the calculation grid size of the 200 × 200 × 200 matrix is 1 mm, the volume is calculated by summing the volume of voxels for the target. According to the 3D coordinate value of each voxel of the target, we generate BED-volume histograms according to the 3D BED distributions ( Fig. 2E and F). The visualization details for the shot, iso-dose surface, iso-BED surface, and formation of the BED-volume histogram are provided in Supplementary Material B. The BED-volume histograms can be used to obtain plan quality metrics such as percentage volume BED, iso-BED volume, conformity index, and gradient index. These BED metrics will be studied in a retrospective study of gamma knife radiosurgery to guide future BED planning. When required to evaluate BED distribution in the organs at risk, such as the optic pathway ( Fig. 3A ) or pyramidal tract ( Fig. 3B ) adjacent to the radiosurgical targets, MatBED_B&C can be used to show the spatial relationship between the BED iso-surface and the organs at risk ( Fig. 3C and D). The spatial range of the organ at risk may be much greater than that of the surgical target. The 200 × 200 × 200 matrix can cover the range of organs at risk. Accordingly, a BED-volume histogram scan was also generated ( Fig. 3E and F). The BED-volume histograms can be used to obtain plan quality metrics such as percentage volume BED, iso-BED volume, conformity index, and gradient index. These BED metrics will be studied in a retrospective study of gamma knife radiosurgery to guide future BED planning. Since the Perfexion/Icon models have the 4, 8, and 16 mm collimators, each of which has 8 sectors. Different combinations of the sectors make the calculation approach of dose distribution more challenging. Further work on the analytic method for Perfexion/Icon models is pending. This study's 4C model analytical approach provides a foundation for future studies.
Limitations and further works in this direction
In the modern GK models (Perfexion, Icon, and Esprit), the radiation units consist of a fixed conical collimation system and 192 60 Co sources equally distributed over 8 sectors in a cylindrical configuration. The three available collimation channels, labelled as 4, 8, and 16 mm, allow the use of composite shot [17] . It means there are numerous combinations of different sectors with different collimation channels. Therefore, the dose falloff profiles of each shot of the modern GK models are more obviously more complex than those of the B or C GK models. In contrast, the method of MatBED_B&C can directly calculate the dose falloff profiles of one shot caused by the 4, 8, 14, and 18 mm collimators of the B or C GK models without beam channel blocking. Under no channel blocking condition, the dose falloff profiles of each shot in the x-direction, y-direction, and z-direction approximately follow a normal distribution. Consequently, the Gaussian function can fit the dose profiles for MatBED_B&C. One limitation of MatBED_B&C is that it cannot calculate the dose profiles of shots with channel blocking. The computational pipeline of MatBED_B&C is also applicable to calculate the dose falloff profile of one collimator for Perfexion and Icon only if all 8 sectors open without using a composite shot. The current method does not apply to the composite shot of Perfexion and ICON with or without beam channel blocking. To solve this problem, further work is being carried out to calculate the dose falloff of each sector rather than each shot for Perfexion and ICON.
Conclusions
We developed MatBED_B&C, a GKRS 3D BED analytic approach for retrospective study with a reduced computational burden to generate a BED for individual voxels. The BED calculation is based on spatial distribution profiles of the physical dose calculated from the 3D coordinate values of each iso-center. The BED of MatBED_B&C can visualize the spatial relationship between the BED iso-surface and radiosurgical target or organs at risk and generate a BED-volume histogram.
Ethics statements
The institutional ethics committee approved the study and exempted informed consent.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 4,454.8 | 2023-08-01T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Genome Degradation in Brucella ovis Corresponds with Narrowing of Its Host Range and Tissue Tropism
Brucella ovis is a veterinary pathogen associated with epididymitis in sheep. Despite its genetic similarity to the zoonotic pathogens B. abortus, B. melitensis and B. suis, B. ovis does not cause zoonotic disease. Genomic analysis of the type strain ATCC25840 revealed a high percentage of pseudogenes and increased numbers of transposable elements compared to the zoonotic Brucella species, suggesting that genome degradation has occurred concomitant with narrowing of the host range of B. ovis. The absence of genomic island 2, encoding functions required for lipopolysaccharide biosynthesis, as well as inactivation of genes encoding urease, nutrient uptake and utilization, and outer membrane proteins may be factors contributing to the avirulence of B. ovis for humans. A 26.5 kb region of B. ovis ATCC25840 Chromosome II was absent from all the sequenced human pathogenic Brucella genomes, but was present in all of 17 B. ovis isolates tested and in three B. ceti isolates, suggesting that this DNA region may be of use for differentiating B. ovis from other Brucella spp. This is the first genomic analysis of a non-zoonotic Brucella species. The results suggest that inactivation of genes involved in nutrient acquisition and utilization, cell envelope structure and urease may have played a role in narrowing of the tissue tropism and host range of B. ovis.
Introduction
Although previous studies have supported the notion of Brucella as a monospecies genus with different biotypes [1], it is still largely accepted that the genus Brucella is divided into six species, named according to their preferential hosts. Each one of the species is hostadapted, but not host-restricted [2][3][4]. Four of the six Brucella species, namely B. melitensis, B. abortus, B. suis, and B. canis, are capable of causing human disease. B. melitensis and B. suis are the most pathogenic, whereas B. abortus is considered of moderate pathogenicity, and B. canis is considered of low pathogenicity for humans. There are no reports of human infections with B. ovis or B. neotomae [3]. In addition to the six classical Brucella species, Brucella has also been isolated from marine mammals, and two species, B. pinnipedialis and B. ceti have been proposed [5]. Brucella isolates from marine mammals can cause human infections, with one reported case of infection due to laboratory exposure [6], and two reported cases of natural infections resulting in neurological disease [7].
Brucella ovis was initially recognized in the beginning of the 1950's in New Zealand and Australia as a bacterial agent associated with epididymitis and abortion in sheep [8]. Since then this organism has been isolated in several countries [9], and is considered one of the most important causes of ovine infertility, with a significant economic impact on the sheep industry [10]. B. ovis has a worldwide distribution in areas where sheep are economically significant, with the exception of the Great Britain [9]. The prevalence in herds ranges from 9.1 to 46.7% [11], and the seroprevalence within positive herds varies between 2.1 to 67% [11][12][13][14].
B. ovis is stably rough, and it is one of the two classical Brucella species that do not have zoonotic potential. In sheep, the organism causes either clinical or sub-clinical chronic infections characterized by epididymitis, orchitis, male infertility, and occasionally abortion in pregnant ewes [15]. Sexually mature rams are more susceptible than young males [16]. However, B. ovis infection may affect males as young as 4 months old [9]. Natural transmission apparently occurs through mucosal membranes, and venereal transmission is significant when a female previously mated with an infected male copulates with a second susceptible male during the same period of estrus [17]. Upon invasion through mucosal membranes, B. ovis initially resides in local lymph nodes, which is followed by bacteremia and finally colonization of the genital tract around 30 days post infection [18]. The factors defining the genital tropism of this organism remain unknown.
Sequencing of B. melitensis, and B. suis genomes demonstrated a high level of similarity between the two genomes, with over 90% of the genes having more than 98-100% nucleotide identity [19]. Furthermore, comparison between these two species resulted in the identification of only 32 and 43 genes that were unique to B. melitensis and B. suis, respectively. This level of variability is remarkably low even when compared to variations between serotypes within the same bacterial species such as in serotypes Typhi and Typhimurium of Samonella enterica [20]. More recently, the complete genome sequence of B. abortus (strains 9-941 and 2308) became available confirming the striking similarity both among different species of Brucella and within the species B. abortus [21,22]. Comparisons between these three Brucella species revealed more than 94% identity at the nucleotide level. In addition, comparisons between the genomes of the two B. abortus strains that have been sequenced (9-941 and 2308) resulted in identification of only a small number of strain-specific deletions and polymorphisms [21]. The genetic similarity among Brucella species has been confirmed by whole genome hybridizations [23]. Together these studies support the original hybridization studies performed more than 20 years ago suggesting that Brucella is a monospecific genus from the genetic point of view [1]. Considering the high level of identity among Brucella species pathogenic to humans, the comparison of those species with a Brucella lacking the potential to cause human infections may bring new insights into host specificity and pathogenesis. In this study we present the genome analysis of the rough non-human-pathogen Brucella ovis and compare its genome with those of the zoonotic Brucella spp. Our analysis focused on two sets of genomic features (i) deletions and gene degradation that could potentially result in loss of virulence factors important for infection of hosts other than sheep, and (ii) B. ovis-specific genes that could contribute to genital tract tropism or cause a reduction in virulence for non-ovine hosts.
Results and Discussion
General features of the B. ovis genome The B. ovis type strain ATCC25840 (also known as 63/290 or NCTC10512) used for sequencing was isolated from a sheep in Australia in 1960 [8]. The genome of this strain consists of two circular chromosomes of 2,111,370 bp (Chromosome I; NCBI Accession Number NC_009505) and 1,164,220 bp (Chromosome II; NCBI Accession Number NC_009504), which are predicted to encode a total of 2890 proteins, 1928 on ChrI and 962 on ChrII ( Table 1). Features of the B. ovis genome are summarized in Table 1. Comparison with the sequenced genomes of B. suis (GenBank Accession numbers NC_004310 and NC_004311) [24], B. abortus (GenBank Accession numbers NC_007618 and NC_007624) [21,22] and B. melitensis 16M (GenBank Accession numbers NC_003317 and NC_003318) [19] shows a large degree of conservation, particularly in the % G+C content and size of the chromosomes. This comparison also revealed several speciesspecific differences, including regions missing from B. ovis relative to the other sequenced Brucella genomes and genes unique to B. ovis. These differences are listed in Table 1 and Figures 1-2, and are discussed below.
Comparison to B. suis, B. abortus, and B. melitensis proteomes There is a significant degree of overlap between the predicted proteomes of sequenced Brucella species. Comparative best-match blastp searches identified 2282 annotated proteins in B. ovis that are shared with B. suis, B. melitensis and B. abortus, 79% of the annotated proteome. Nonetheless, this analysis suggested that the B. ovis genome lacks 675, 610 and 539 protein coding genes annotated in B. suis, B. melitensis and B. abortus 2308, respectively (Table S3). To determine whether these genes are truly absent from the B. ovis genome they were compared to the B. ovis chromosomal sequences using blastn searches. Interestingly, good matches were found for 64, 125 and 18 genes thought to be lacking in B. ovis in the genomes of B. suis, B. melitensis and B. abortus 2308, respectively (Table S3). There are several possible reasons for these genes not being highlighted as shared using comparative best-match blastp searches, e.g., they may be pseudogenes in B. ovis and therefore, not part of the predicted proteome, they may be the products of duplications where only one duplication product is matched in best match searches or differences in gene annotation between the strains resulted in these sequences not being annotated as a protein coding gene in B. ovis. In total only 33 annotated protein coding genes in B. ovis are unique to this species.
Genomic rearrangements
Gradient figures were used to compare the chromosomes of B. ovis with those of the other sequenced Brucella genomes. No large inversions or rearrangements were observed in the B. ovis genome compared to the sequenced genomes of B. suis or B. melitensis ( Figure 1).
Genomic deletions
Chromosome I. Four deletions of 4 kb or larger were identified in the B. ovis genome, compared to its closest relative, B. suis. Chromosome I lacks a 15 kb region that encompasses B.
suis BR0966-BR0987. Interestingly, in B. suis, this region is inserted into the sequence of tRNAgly and is adjacent to a phage integraselike gene, features suggestive of horizontal gene transfer. This region contains the wboA glycosyl transferase gene, shown to be essential for lipopolysaccharide biosynthesis [25], and a second glycosyl transferase, as well as several hypothetical proteins. The lack of the two glycosyl transferases likely contributes to the rough LPS phenotype of B. ovis. These findings were in agreement with previous reports by Vizcaino et al and Rajashekara et al [23,26] showing the absence of these genes from B. ovis.
A second deletion on Chromosome I of 7745 bp encompasses BR1078-BR1083. This island in B. suis contains three hypothetical genes and two site specific recombinases and is flanked by two tRNA-Leu genes, of which one remains in B. ovis. A smaller deletion of 3954 bp on Chromosome I has led to loss of two genes (BR1852-BR1853) encoding a transcriptional regulator and a protein predicted to be a branched chain amino acid permease. This deletion is also associated with transposable elements, as these two genes in B. suis are flanked by copies of IS2020 [27], one of which remains in B. ovis.
Chromosome II. The 44.5 kb island in B. suis (BRA1074-BRA1113; [28]), containing four predicted ABC transport systems and three transcriptional regulators, is absent from the B. ovis genome. The presence of two copies of IS1239 flanking two pseudogenes in B. ovis suggests that this is the result of a loss of this region by recombination. The 18.3 kb IncP island on B. suis Chromosome II containing BRA0362-BRA0379, previously reported to be present in B. suis biovars 1-4, B. canis, B. neotomae, and marine mammal isolates but missing in B. melitensis [24,28], is also absent from the B. ovis genome, as reported by Lavigne and colleagues [28].
Unique genomic regions
Chromosome II, contains an island of 26.5 kb (BOV_A0482-BOV_A0515) with structure suggestive of a composite transposon (Fig. 2). This island likely overlaps the B. ovis-specific 21-kb SpeI fragment of the small chromosome identified previously by genome mapping [29]. Proteins encoded on the island include transposases, an ABC transporter, a putative hemagglutinin, and several hypothetical proteins. This region is present in 17/17 B. ovis strains tested, suggesting a high level of conservation within the species (Supplementary Table S2). Interestingly, the predicted product of BOV_A0497 exhibits similarity to antitoxins of toxinantitoxin maintenance systems [30], suggesting a possible selective pressure for maintenance of this genetic island in B. ovis. Genome sequence data and analysis of the island by PCR showed that this region is absent from the genomes of B. suis, B. abortus, B. melitensis, B. canis and B. neotomae (Table 2), suggesting initially that it may be specific to B. ovis. However, we detected this island in three marine isolates of Brucella from captive bottlenose dolphins [31,32]. The protein coding genes within this unique region constitute the majority of the unique protein coding genes in B. ovis (Supplementary Table S3).
Transposable elements
The genome of B. ovis contains 38 complete copies of IS711, confirming previous estimates obtained by hybridization [33]. 25 of the copies are located in Chromosome I, and 13 in Chromosome II. Interestingly, several of these copies appear to be expanded clonally, suggesting that they could be active in B. ovis. In 17 cases, IS711 is inserted in copies of a repeated sequence of the BruRS family [34]. Five of the IS711 copies appear to disrupt genes, which could be a contributing factor to the general process of genome degradation observed in B. ovis. A case of special interest is the B. ovis-specific island (see above), where a 25 kb region between two copies of IS711 in B. ovis is absent in all the other sequenced species, suggestive of either deletion by recombination between the two copies in the other Brucella species, or of acquisition of the region with duplication of IS711.
Pseudogenes
It has been proposed that the unique complement of pseudogenes in each of the Brucella species may contribute to their differing degrees of infectivity and host preference [21,22]. Interestingly B. abortus, which is less virulent for humans than B. Table 1). Similar to B. abortus, the B. ovis genome contains a large number of pseudogenes with 244 identified. The small chromosome contains more pseudogenes (125; 11.3%) than the large chromosome (119; 5.7%) ( Table 1). Of the 244 B. ovis pseudogenes, 40 are hypothetical or conserved hypothetical genes, 62 have predicted transport functions, 23 are defective transposases, and 14 are predicted to have regulatory functions. The finding that some pseudogenes in the B. melitensis flagellar region encode full-length proteins raises the question of whether some pseudogenes in B. ovis may also be functional [35].
Metabolism
Urease. Based on biotyping, B. ovis is urease-negative. However it contains the two urease clusters described in all the other sequenced Brucella genomes. Urease has been reported to be important for the ability of B. abortus [36] and B. suis [37] to survive passage through the stomach in the mouse model of infection. The importance of urease for oral infection is consistent with a lack of human infections observed with B. ovis, despite identification of this organism in the milk of infected ewes [38]. The ure1 cluster is the only one showing urease activity at least in B. abortus, B. melitensis, and B. suis [36,37], while the ure2 does not have any demonstrable urease activity. The ure1 cluster of B. ovis contains several point mutations that are predicted to result in conserved changes shared by at least one other urease positive species. However, UreC1 contains a 30 bp deletion that would cause a loss of 10 amino acids in UreC1. Although all the residues described to be important for enzymatic activity of UreC1 are conserved [39], this deletion must render the urease inactive. Moreover, complementation with the ureC1 gene from B. melitensis (Sangari, unpublished results) results in urease activity. Regarding the ure2 cluster, ureF2 (BOV_1316) and ureT (BOV_1319) are pseudogenes, while the remaining genes are conserved, suggesting that this cluster could encode an unknown activity.
Sugar metabolism. B. ovis is defective in oxidative metabolism of arabinose, galactose, ribose, xylose, glucose and erythritol [40]. Analysis of the genome sequence reveals a possible basis for these metabolic defects in B. ovis relative to the human pathogenic Brucella species. Several putative sugar transporters predicted to be functional in other Brucella species appear to be inactivated by frameshifts, point mutations or gene degradation in B. ovis. Further, pckA encoding phosphoenolpyruvate carboxykinase (BOV_2009) is inactivated by frameshift, which would affect the gluconeogenesis pathway and the ability of B. ovis to utilize pyruvate, amino acids, or glycerol as carbon sources.
Erythritol. Erythritol is the preferred carbon source of B. abortus [41,42]. Erythritol is metabolized in Brucella by the enzymes encoded in the eryABCD operon [43] Moreover, it has been recently described that the carbohydrate transport system located upstream of the ery operon constitutes the erythritol transport system in Rhizobium leguminosarum bv. Viciae [44], and that the operon immediately downstream also forms part of the erythritol catabolic pathway. Microarray experiments have revealed that these three operons are regulated by erythritol in B. abortus, reinforcing that the three clusters participate in erythritol catabolism (Sangari et al, unpublished). B. ovis does not oxidize erythritol, and it is not inhibited by its presence in the growth media. This is reflected in the genome by the presence of a stop codon in eryA (BOV_A0811) and a frameshift in eryD (BOV_A0814) that render them pseudogenes. In addition two genes of the putative ABC erythritol transport system, eryF and eryG (BOV_A805 and BOV_A806)) carry mutations that render them pseudogenes (a 2 bp deletion resulting in an premature stop codon). The mutation in eryD could have an effect in the over expression of all genes of the ery system. On the contrary, mutations in the transport genes and in eryA block the entry of erythritol in the cell and its phosphorylation, avoiding the accumulation of toxic intermediates and the depletion of ATP observed in the vaccine strain S19 [45]. The accumulation of mutations in these two clusters suggest that they may no longer be needed by B. ovis. The third cluster is intact, and the enzymes use substrates that are central (or core) carbohydrate metabolites such as dihydroxyacetone phosphate, glyceraldehydes-3-phosphate, and other three carbon compounds that can be produced after decarboxylation of erythritol and its derivatives. While it has been hypothesized that erythritol may serve as a carbon source during growth of B. abortus, B. suis, and B. melitensis in the placenta of their natural hosts, an analysis of deletion mutants in these models has not been reported. However, the lower incidence of abortion in sheep flocks infected with B. ovis compared to B. melitensis correlates with the inability of B. ovis to use erythritol as a carbon source. Mutants in eryB and eryC have been described to have a limited ability to grow in macrophages [46]. This limitation may well contribute to the limited virulence of B. ovis.
Glucose and Galactose. Unlike B. melitensis, B. abortus and B. suis, B. ovis is unable to grow on glucose or galactose as a primary carbon source [40]. Analysis of genes involved in uptake and utilization of these carbon sources reveals that B. ovis gluP (BOV_A0172), encoding a glucose/galactose transporter [47] is a pseudogene. B. abortus gluP mutants have a reduced ability to persist in the spleens of mice [48], suggesting that the ability to utilize these carbon sources may be important for systemic persistence of the human pathogenic Brucella species. However B. suis also lacks GluP and is able to oxidize glucose and galactose, suggesting that additional functions are lacking in B. ovis that contribute to utilization of these carbon sources. Several predicted sugar ABC transporters (BOV_1299-BOV1301, BOV_A0278-BOV_A0282, BOV_A0645-BOV_A0648 and BOV_A1083-BOV_A1086) of unknown specificity contain pseudogenes, which may potentially reduce the ability of B. ovis to take up glucose and galactose.
Ribose. Two components, a permease and an ATP binding protein of a putative ribose ABC transport system (BOV_A0936-BOV_A0937) are pseudogenes in B. ovis, suggesting that the inability of B. ovis to utilize ribose as a sole carbon source may be the result of a reduced capacity to take up ribose from the growth medium. Similarly, a periplasmic binding protein and an ATP binding protein of a predicted ABC transporter for xylose (BOV_A1055-BOVA1056) contain frameshifts, which may underlie the inability of B. ovis to utilize xylose [40].
Oxidase phenotype. B. ovis is the only Brucella species with a negative oxidase phenotype. Oxidase phenotype depends on the activity of cytochrome C oxidase which in B. suis is encoded by at least 7 genes, four of them organized in one operon, BR0363-BR0360 (ccoNOQP), together with BR0467 (coxB), encoding cytochrome c oxidase, subunit II, BR0468 (coxA) encoding cytochrome c oxidase, subunit I and BR0472 (coxC) encoding cytochrome c oxidase, subunit III. In B. ovis, the operon ccoNOQP (BOV_0376-BOV_0379), encoding the cb type cytochrome c oxidase is well conserved, except the gene ccoO (BOV_0378), which contains a frameshift generated by deletion of an A near its 59 end, is a pseudogene. The B. ovis coxB gene (BOV_0473) contains a frameshift that very probably inactivates the gene, while coxA (BOV_0474) differs only in two residues with the B. suis product. The last 6 amino acids of coxC (BOV_0478) are lost as result of a short deletion (56 bp) that fuses this product with the next, apparently unrelated Orf. A short repeated sequence (GGGGCGGC) at both ends of the deletion seems to be responsible for this rearrangement. These genomic differences may be responsible for the oxidase negative phenotype of B. ovis.
Nitrogen metabolism. Several genes encoded in the B. suis genome with inferred functions in nitrogen metabolism are not predicted to be functional in the B. ovis genome. The genes norB (BOV_A0225), encoding the large subunit of nitric oxide reductase, and nosX (BOV_A0256), a gene of unknown function in the operon encoding nitric oxide synthase, are pseudogenes in B. ovis. The third gene, narK (BOV_A0276), encoding a nitrite extrusion protein, is degenerate, as was found in the B. melitensis and B. abortus 2308 genomes [21]. Nitric oxide reductase is required for survival and persistence of B. suis in mice [49], suggesting that lack of this activity in B. ovis may contribute to its restricted tissue tropism and host range.
A locus that contributes to growth of E. coli on aspartate as a nitrogen source, xanthine dehydrogenase (BOV_0365-BOV_0367; [50]) contains a pseudogene in B. ovis, suggesting a further defect in nitrogen metabolism. Further, this locus could function in salvage pathways for purine nucleotides, which could contribute to nitrogen metabolism. A correlation between a defective purine nucleotide salvage pathway and reduced ability of B. ovis to survive intracellularly would be consistent with the importance of purine biosynthetic pathways for full virulence of B. abortus and B. melitensis [51,52]. Further, a homolog of Sinorhizobium meliloti fixI (BOV_0373), encoding an cation pump involved in symbiotic nitrogen fixation [53], is inactivated by a point mutation in the B. ovis genome. As the function of the fixGHI genes in Brucella spp. has not been determined, the biological significance of this gene inactivation for B. ovis is currently unknown.
Host-pathogen interactions
Lipopolysaccharide. B. ovis is known to have a rough LPS phenotype, which given the critical role of O-antigen in pathogenicity of B. abortus, B. suis and B. melitensis, likely contributes to its reduced pathogenicity for laboratory animals compared to other Brucella spp. [8,54,55]. As mentioned above, the wboA gene is missing from B. ovis, as well as a second, genetically linked glycosyltransferase. Pseudogenes with a possible function in LPS biosynthesis include a glycosyltransferase (BOV_A0475), an LpxA family acetyltransferase (BOV_A0367) and a putative undecaprenylphosphate alpha-N-acetylglucosamine transferase (BOV_A0371) of which the latter may also be involved in murein biosynthesis. While it is known that B. ovis LPS has a higher affinity for antimicrobial peptides than that of B. abortus, it is unknown whether differences in LPS structure compared to smooth Brucella species affect the interaction of B. ovis LPS with toll-like receptors of the innate immune system [56,57]. Type IV secretion system (T4SS). The T4SS, encoded by virB1-virB12, is an essential virulence factor in B. abortus, B. suis, and B. melitensis [48,[58][59][60]. The genes virB1-VirB12 are intact in the B. ovis genome, suggesting that this system is functional. Further, the gene encoding its transcriptional activator VjbR (BOVA_0110; [61]) is conserved. These findings are consistent with the detection of VirB5 and VirB8 expression in B. ovis [62]. Two co-regulated genes identified as encoding T4SS substrates, vceA (BOV_1577) and vceC (BOV_1003) are present in B. ovis, however it is currently unknown whether there may be additional T4SS effectors in other Brucella spp. that are absent from B. ovis [63]. A recent report showing that B. ovis strain PA is able to replicate in macrophages and HeLa cells at a rate similar to that of B. abortus suggests that its T4SS is functional [64].
Autotransporter proteins. Four predicted autotransporters have been identified in the sequenced Brucella genomes. Each of the sequenced Brucella genomes is predicted to encode a different combination of autotransporters [21], suggesting that none of these four proteins is essential for virulence, but that different combinations of autotransporters may contribute to observed differences in tissue tropism and host preference. The B. ovis genome is predicted to encode three functional autotransporters, corresponding to B. suis BR0072, BRA0173 and BRA1148, (BOV_0071, BOV_A0152 and BOV_A1053) while the fourth gene, BOV_1937, corresponding to BR2013, is degenerate. These proteins show a range in similarity (at the amino acid level) to their B. suis counterparts, from 99% between BRA1148 and BOV_A1053, to 86% similarity between BOV_A0152 and BRA0173. BR2013, designated omaA, is a pseudogene in both B. abortus and B. melitensis, and was noted to have a polymorphism with unknown functional effects in the genome of vaccine strain B. abortus 19 [65]. Since the product of this gene has been shown to contribute to persistence of B. suis in mice [66], it is possible that lack of a functional OmaA autotransporter in B. ovis may contribute to its limited tissue tropism and host range. However, the finding that all four autotransporter genes are predicted to be pseudogenes in B. melitensis, shows that they are not absolutely required for infectivity and transmission.
Immunomodulatory functions. In addition to the T4SS, several genes implicated in modulation of the immune response by Brucella spp., are well-conserved in B. ovis ATCC25840. These include btp1, encoding the TIR domain protein, an inhibitor of TLR2 signaling, the B cell mitogen encoded by prpA, and cyclic bglucan synthase [67][68][69][70].
Outer membrane proteins. The two component regulator BvrR/BvrS, which has been shown to be a master regulator of many virulence-associated functions [71,72], appears intact in B. ovis. However, a putative envZ osmosensor (BOV_A0412) is a pseudogene. Two genes shown to encode outer membrane proteins in other Brucella species omp2a (BOV_0632 [73]) and omp31 (BOV_1565; [74]), contain point mutations in B. ovis. For Omp31, the point mutation is predicted to lead to a truncation in the protein. It was found previously that the outer membrane of B. ovis is more susceptible to cationic peptides than a rough B. abortus mutant [56], suggesting that together with the defects in LPS biosynthesis discussed above, these defects in outer membrane components may further compromise the cell envelope stability of B. ovis making it less able to survive environmental stresses.
Perspective
The unique biology of Brucella ovis compared to the human pathogenic species appears to be in part the result of genome degradation. Worldwide, the majority of human brucellosis cases occur via ingestion of contaminated dairy products. B. ovis is not known to cause human infection, despite worldwide consumption of unpasteurized sheep's milk, where B. ovis has been detected [38]. Oral transmission, although feasible in experimental conditions, does not appear to be one of the main routes of infection for B. ovis, whereas passive venereal transmission via the ewe is the most important one [75]. This suggests that this species has lost the ability to infect via the oral route. One genomic change that may contribute to this loss of oral infectivity is the loss of an important virulence factor, urease, that is required for survival of stomach acidity by Brucella spp [36,37]. Urease has also been shown to contribute to the establishment of Actinobacillus pleuropneumoniae infection in pigs through the respiratory tract [76]. If this mechanism is also operative in Brucella spp., then it is possible that B. ovis is also deficient in establishing infections not only by the digestive route but also via aerosol inhalation, which are the two main routes of infection by human pathogenic Brucella spp. Additional genes that seem to be non-functional in B. ovis could also contribute to reduce the number of transmission routes compared to other Brucella species. A characteristic of B. ovis is its tropism for the ovine male genital tract, which presents as epididymo-orchitis [8]. Since other Brucella species, especially B. melitensis are known to cause epididymo-orchitis in human patients [77], the predilection of B. ovis to cause epididymo-orchitis in rams likely represents a loss of functions required to target to other tissues. However, due to the large number of genes in the B. ovis genome with unknown function, a gain of functions that allow for increased colonization of the male genital tract cannot be ruled out based on the genome sequence.
Genome sequencing and annotation
The complete genome sequence of Brucella ovis strain ATCC25840 was determined using the whole-genome shotgun method as previously described [78]. Physical and sequencing gaps were closed using a combination of primer walking, generation and sequencing of transposon-tagged libraries of large-insert clones, and multiplex PCR [79]. Identification of putative protein-encoding genes and annotation of the genome were performed as previously described [24]. An initial set of genes predicted to encode proteins was identified with GLIMMER [80]. Genes consisting of fewer than 30 codons and those containing overlaps were eliminated. Frame shifts and point mutations were corrected or designated 'authentic.' Functional assignment, identification of membrane-spanning domains, and determination of paralogous gene families were performed as previously described [24]. Sequence alignments and phylogenetic trees were generated using the methods described previously [24].
Trinucleotide composition
Distribution of all 64 trinucleotides (3-mers) was determined, and the 3-mer distribution in 1,000-bp windows that overlapped by half their length (500 bp) across the genome was computed. For each window, we computed the x 2 statistic on the difference between its 3-mer content and that of the whole chromosome. A large value for indicates the 3-mer composition in this window is different from the rest of the chromosome (minimum of two standard deviations). Probability values for this analysis are based on assumptions that the DNA composition is relatively uniform throughout the genome, and that 3-mer composition is independent. Because these assumptions may be incorrect, we prefer to interpret high x 2 values as indicators of regions on the chromosome that appear unusual and demand further scrutiny.
Comparative genomics
The B. ovis ATCC25840 genome was compared to the genomes of B. suis 1330, B. abortus 2308, B. abortus 9-941, and B. melitensis 16M (PATRIC), at the nucleotide level by suffix tree analysis using MUMmer [81], and the predicted B. ovis CDSs were compared with the gene sets from the other sequenced Brucella genomes by BLAST and by HMM paralogous family searches, as previously described [82].
Analysis of the B. ovis-specific island
Genomic DNA from B. ovis strains and other Brucella species (B. melitensis, B. suis, B. abortus, B. canis, B. neotomae, and B. pinnipedialis) were subjected to PCR amplification of 12 target sequences within the B. ovis-specific island. PCR reactions were performed using 13 mL of a commercial PCR mix (PCR Supermix, Invitrogen, USA), 0.75 mL of a 25 mM solution of each primer (Supplementary Table S1), and 1 mL of DNA (100 to 500 ng per reaction). Cycling parameters were denaturation at 95uC for 5 minutes; 35 cycles of denaturation (95uC for 1 min seconds), annealing (55uC for 1 min), and extension (72uC for 1 min); and a final extension at 72uC for 5 min. PCR products were resolved by agarose gel electrophoresis.
GenBank accession
The genomic sequence data are available at GenBank under accession numbers NC_009505 (Chromosome I) and NC_009504 (Chromosome II). | 7,249.2 | 2009-05-13T00:00:00.000 | [
"Biology",
"Agricultural And Food Sciences"
] |
Characterization of Hemerocallis citrina Transcriptome and Development of EST-SSR Markers for Evaluation of Genetic Diversity and Population Structure of Hemerocallis Collection
Hemerocallis spp. commonly known as daylilies and night lilies, are among the most popular ornamental crops worldwide. In Eastern Asia, H. citrina is also widely cultivated as both a vegetable crop and for medicinal use. However, limited genetic and genomic resources are available in Hemerocallis. Knowledge on the genetic diversity and population structure of this species-rich genus is very limited. In this study, we reported transcriptome sequencing of H. citrina cv. ‘Datonghuanghua’ which is a popular, high-yielding variety in China. We mined the transcriptome data, identified and characterized the microsatellite or simple sequence repeat (SSR) sequences in the expressed genome. From ∼14.15 Gbp clean reads, we assembled 92,107 unigenes, of which 41,796 were annotated for possible functions. From 41,796 unigenes, we identified and characterized 3,430 SSRs with varying motifs. Forty-three SSRs were used to fingerprint 155 Hemerocallis accessions. Clustering and population structure analyses with the genotypic data among the 155 accessions reveal broader genetic variation of daylilies than the night lily accessions which form a subgroup in the phylogenetic tree. The night lily group included accessions from H. citrina, H. lilioasphodelus, and H. minor, the majority of which bloom in the evening/night, whereas the ∼100 daylily accessions bloomed in the early morning suggesting flowering time may be a major force in the selection of night lily. The utility of these SSRs was further exemplified in association analysis of blooming time among these accessions. Twelve SSRs were found to have significant associations with this horticulturally important trait.
INTRODUCTION
Hemerocallis species are among the most popular ornamental crops worldwide because of the large, conspicuous flowers and their adaptation to a wide range of soils and climates. In Eastern Asian countries, some species, especially H. citrina (night lily, or long yellow daylily) are important vegetables or medicinal plants with a long history of cultivation. For example, the first reference of H. fulva (daylily) was found in a writing in the Chou Dynasty of (Hu, 1968;Kitchingnan, 1985). The unopened flower buds (fresh or dried) of the night lily are consumed as a special vegetable (also known as "golden needle vegetable"). The medicinal value of Hemerocallis species has received more attention in recent years. The flower buds seem to be enriched with antioxidants, such as stelladerol and caffeoylquinic acid derivatives Hsu et al., 2011;Lin et al., 2011;Jiao et al., 2016). These secondary metabolites are used to treat anxiety and swelling (Uezu, 1998;Gu et al., 2012;Yi et al., 2012;Du et al., 2014;Liu et al., 2014), and for other applications in modern medicine and biology (Venugopalan and Srivastava, 2015). Thus, Hemerocallis flowers have considerable potentials as "nutraceutical" or "functional" foods (Rop et al., 2011).
In addition to their economic and medical values, some features of Hemerocallis species are of particular biological significance. For example, the day and night lilies provide an excellent model for understanding the genetic and molecular mechanisms of flower opening and flower senescing. All known species in this genus show strict circadian rhythm of flowering with the rapid opening and withering that lasts only a few hours indicting precise regulation of floral death probably by a programmed cell death system (Hasegawa et al., 2006;Nitta et al., 2010;Hirota et al., 2012Hirota et al., , 2013). All Hemerocallis species exhibit self-incompatibility. As such, Hemerocallis was proposed as a future model system to study these biologically interesting phenomena (Rodriguez-Enriquez and Grant-Downton, 2012).
As an ornamental plant, extensive breeding efforts during the last century have resulted in varieties with different flower size, color, shape, scent and blooming time. In the US, over 83,000 daylily varieties have been registered in the American Hemerocallis Society online database 1 . Some morphological characteristics of daylily and night lily plants and flowers are exemplified in Figure 1. Despite the economic and biological importance as well as extensive breeding efforts on Hemerocallis species, not much has been accomplished on genetic studies in this genus. As demonstrated in numerous other horticulture crops, molecular markers are important tools for genetic research and breeding in Hemerocallis with many potential applications such as evaluation of genetic diversity and population structure of germplasm collection, development of linkage maps, gene and QTL (quantitative trait loci) mapping or cloning, as well as marker-assisted selection of horticulturally important traits in breeding. Nevertheless, limited work has been done in the development of molecular markers for Hemerocallis species. The natural range of Hemerocallis encompasses temperate and subtropical Asia, with the main center of diversity of the genus in China, Korea and Japan (Flora of China, 2000). However, reports on the genetic diversity and population genetic analysis of Hemerocallis especially for the collections in China are sporadic (e.g., Tomkins et al., 2001;Podwyszynska et al., 2009;Vendrame et al., 2009;Miyake and Yahara, 2010). In these early reports, molecular markers employed included random amplified polymorphic DNA (RAPD) and amplified fragment 1 http://www.daylilies.org/ length polymorphism (AFLP), each of which has limitations in practical uses (Varshney et al., 2007). Despite the increased use of single nucleotide polymorphism (SNPs), SSR markers remain popular in plant genetic and breeding studies because of their many desirable attributes, including hyper variability, a multi allelic nature, co-dominant inheritance, reproducibility, relative abundance, extensive genome coverage (including organellar genomes), relatively low cost, and amenability to high throughput genotyping (Parida et al., 2009). However, very few SSRs have been reported for Hemerocallis (e.g., Zhu et al., 2009). While there are several methods to develop SSR markers, transcriptome sequencing (RNA-Seq) has been an efficient and affordable way for large-scale, and genome-wide SSR discovery and marker development for various marker-based studies in non-model plants (e.g., Hudson, 2010;Strickler et al., 2012;Yuan et al., 2013;Zhang et al., 2013;Guo et al., 2016;Smithunna et al., 2016). Thus, the objective of the present study is to develop EST (expressed sequence tag)-SSR markers through transcriptome sequencing and examine their utility in evaluation of genetic diversity and population structure of our Hemerocallis collection. We also conducted association analysis to identify potential association of molecular markers with horticulturally important traits.
Plant Materials
One hundred and fifty-five Hemerocallis accessions were employed for genetic diversity analysis with 43 EST-SSRs. These accessions belong to at least 13 species from different geographic regions around the world including commercial varieties, landraces, and collections from the wild. The details of these 155 accessions are presented in Supplementary Table S1, and their geographic distributions are illustrated in Figure 2. The taxonomic status of more than half of the 155 accessions was labeled uncertain during collection (Hemerocallis spp. in Supplementary Table S1). All accessions were grown in the Hemerocallis Germplasm Resource Nursery located on the campus of Shanxi Agricultural University (Taigu, Shanxi Province, China). All plants used for sample collection have been grown for 3-5 years in the nursery. At full flowering stage, the fresh roots, flower buds, and leaves of the H. citrina cv 'Datonghuanghua' were collected, flash frozen in liquid nitrogen, and used for total RNA extraction and RNA-Seq.
RNA-Seq, Transcriptome Assembly, and Annotation
For RNA-Seq, total RNA was isolated from each sample with three plants pooled with a modified CTAB protocol (Tel-Zur et al., 1999). RNA quantity and quality were evaluated using the Agilent Bioanalyzer (Agilent Technologies, CA, United States). The integrity of the total RNA was also assessed through agarose gel electrophoresis. The cDNA library for RNA-Seq was prepared using Illumina Truseq TM RNA sample prep kit (Illumina, CA, United States) following manufacturer's protocols. The concentration and insert size of the library were assessed using Qubit2.0 and Agilent 2100. Pair-end (125 PE) Illumina high-throughput sequencing was performed with a Hi-Seq 2500 sequencing machine. All samples were sequenced in the same instrument (HWI-7001455), same run (316), same follow cell (HH7VHADXX), and same lane (2).
After resequencing, the adaptors and low quality reads were filtered from the raw reads. The clean reads were de novo assembled into contigs with an optimized k-mer length = 25 and group pairs distance = 300 (Grabherr et al., 2011) using the Trinity program 2 . Unigene sequences were aligned against databases of the Clusters of Orthologous Groups (COG) (Tatusov et al., 2000), Gene Ontology (GO) (Michael et al., 2000), Kyoto Encyclopedia of Genes and Genomes (KEGG) (Kanehisa et al., 2004), euKaryotic Orthologous Groups (KOG) (Eugene et al., 2004). The functions of unigenes were predicted by BLAST against the NCBI non-redundant protein (Nr), and Swiss-Prot databases at an E-value of 10 −5 . The resulting datasets were further aligned to the Protein family (PFAM) database with HMMER (E-value 10 −10 ). For further quantitative assessment of the completeness of the assembly and annotation, the H. citrina cv 'Datonghuanghua' transcriptome was subjected to BUSCO (Benchmarking Universal Single-Copy Orthologs) analysis (Simão et al., 2015) against the Viridiplantae_odb10, embryophyta_odb10, and liliopsida_odb10 databases.
Microsatellite Sequence Identification and EST-SSR Genotyping
SSR identification among the unigene sequences (>1 kb) was performed with the MIcroSAtellite (MISA) program 3 .
All microsatellites containing di-, tri-, tetra-, penta-, hexanucleotide, and compound motifs with more than five repeats were included. The Primer3 program 4 was used to design primers for the identified SSRs. Each designed primer pair was first evaluated with in silico PCR procedure 5 using unigene sequences as the template. Primers with multiple amplicons were filtered out.
Forty-three EST-SSR markers were selected for genotyping 155 Hemerocallis accessions. Genomic DNA was extracted from fresh leaves of each accession (pooled from five plants) using the CTAB method. A touch-down PCR procedure (Azhaguvel et al., 2009) was employed and the amplicons were separated using 9% non-denaturing polyacrylamide gel electrophoresis (PAGE) with 0.5 × TBE buffer for 1.5 h in 250V, and then visualized with silver staining.
Cluster and Population Structure Analyses
The SSR genotypic data were organized in a matrix in which 0 and 1 representing absent and present alleles, respectively (9 = missing data). The genetic distances were calculated through the SM coefficient by the Sim Qual procedure in NTSYSpc 2.10 ( Rohlf and Jensen, 1989). The dendrogram was constructed using the unweighted pair-group method with arithmetic mean (UPGMA) clustering and drawn by NTSYSpc 2.10. We also applied Neighbor Joining (NJ) clustering on the dataset, which was implemented in the MEGA5.05 software package. A model-based Bayesian clustering was applied to infer genetic structure and define the number of clusters (gene pools) in the dataset using STUCTURE v.2.3.4 (Pritchard et al., 2000). No prior information was used to define the clusters. Independent runs were done by setting the number of clusters (K) from 1 to 15. Each run was comprised of a burn-in length of 10,000 followed by 100,000 MCMC (Monte Carlo Markov Chain); each replicate at a particular K-value was repeated 20 times. An ad hoc statistic K based on the rate of changes in the log probability of data among successive K-values (Evanno et al., 2005) was calculated through Structure Harvester v.0.9.93 and used to estimate the most likely number of clusters (K). K was calculated using K = m[| L(K+1)-2L(K)+L(K-1)|]/s| L(K)|, where (L(K) is the logarithm of K, s is the standard deviation, and m is the mean.
Association Analysis for Blooming Time in 155 Hemerocallis Accessions
We performed association analysis of blooming time with SSR markers among the 155 accessions of the Hemerocallis collection.
During flowering season, the time of flower opening was continuously monitored real-time using a digital video camera (360 Smart Camera, D606, 360 China). The flowering time of each plant from each accession was recorded. The blooming time of at least five flowers per accession was collected. The blooming time of each accession is provided in Supplementary Table S1. In association analysis, the blooming time was treated a qualitative trait. Accessions with blooming time from 4:00 to 10:00 and 16:00 to 24:00 were assigned "0" and "1, " respectively.
Pairwise linkage disequilibrium (LD) and association analyses were performed using TASSEL4.3.6 (Bradbury et al., 2007;Dent et al., 2012), and calculation of LD and P-values. Association analysis was performed using both the general linear model (GLM, Q model) and the mixed linear model (MLM, Q+K model) in TASSEL. The comparison wise significance was computed using 1,000 permutations as implemented in GLM. The kinship matrix was generated by NTSYSpc 2.10 with option P3D for variance component estimation in MLM (Kang et al., 2008). Significance of marker-trait associations was determined at P ≤ 0.05 (Jaiswal et al., 2012). False discovery rate (FDR) was also used to detect true associations.
Gene Expression in Different Tissues of H. citrina cv. 'Datonghuanghua'
The expression level of blooming time-related gene c33464.graph_c0 in different tissues (fresh roots, flower buds and leaves) of H. citrina cv. 'Datonghuanghua' (H0006) was carried out with quantitative real-time PCR (RT-qPCR) on an ABI prism 7500 Fast Real-time PCR system 147 machine (Thermo Fisher Scientific Inc., Waltham, MA, United States). Sequences of the primers for c33464.graph_c0 were GGCGAATTAGTCTGGAAAGAACTAGG (forward primer) and TGTTATGTTCCTCGTCCGTCCAC (reverse primer). The H. citrina actin gene, HcACT, was used as the reference. The primers sequences for this gene were GAGCAAGGAAATCACGGCACT (forward primer) and GGAACCTCCAATCCAAACACTGTAC (reverse primer). RT-PCR procedure followed Hou et al. (2017). Each sample had three biological and three technical replications.
Transcriptome Sequencing, de novo Assembly, and Unigenes Annotation
After filtering of the raw sequencing data, nearly 56 million clean reads (14.15 Gb) were obtained from root, bud and leaf transcriptomes. Main RNA-Seq statistics for the three tissues are presented in Supplementary Table S2. In all three transcriptomes, >90% reads had Q30 or higher quality scores. Using the Trinity de novo assembly program, the ∼14.15 Gb high-quality reads were assembled into contigs, transcripts and unigenes. Main statistics of the assemblies are shown in Table 1. From 6,390,477 contigs (>25 bp in length), 164,723 transcripts were assembled (mean length 838 bp) representing 92,107 unigenes. The length distribution of these unigenes is illustrated in Figure 3. The transcripts with >500 bp in length accounted for 47.06% of all transcripts. There were 13,724,749 (76.9%), 15,731,521 (79.2%), and 14,841,142 (80.4%) reads from the roots, buds and leaves mapped to the assembly transcripts and unigenes, respectively.
In addition, 23,716 unigenes matched with the GO database were categorized into 51 functional sub-groups of the three main GO groups: cellular component, molecular function, and biological process ( Figure 5). The majority of the unigenes was assigned to biological processes (63,089, 43.7%), followed by cellular components (35,677, 37.2%), and molecular functions (27,719, 19.2%). Under the category of biological processes, metabolic process (15,976, 25.3%) and cellular process (13,749, 21.8%) were predominant. In the cellular component category, cell part (12,837, 23.9%) and cell (12,746, 23.8%) were the most abundant classes. As for the molecular function, catalytic activities (12,157, 43.9%) and binding (11,677, 42.1%) were the top two categories in numbers.
In KEGG pathway analysis of the 41,796 unigenes, 9,375 (22.4%) could be assigned to 117 pathways that belong to five main categories including metabolism (6,783), genetic information processing (3,013), cellular processes (495), environmental information processing (321), and organismal systems (305) (see details in Figure 6). The colchicine content is a very important trait for commercial production of night lily. Among annotated unigenes, 20 were involved in the isoquinoline alkaloids metabolic pathway that is related with colchicine biosynthesis. The category with the largest number of unigenes was metabolism, in which the most represented five pathways were oxidative phosphorylation (ko00190) (336, 4.9%), glycolysis/Gluconeogenesis (ko00010) (325, 4.8%), carbon fixation in photosynthetic organisms (ko00710) (306, 4.5%), purine metabolism (ko00230) (242, 3.6%), and starch and sucrose metabolism (ko00500) (228, 3.4%). We conducted BUSCO analysis to evaluate completeness of the transcriptome using the near-universal orthologous gene groups. The details are presented in Supplementary Table S3 and illustrated in Supplementary Figure S1. For example, among the 430 BUSCOs in the viridiplantae_odb10 database, H. citrina cv. 'Datonghuanghua' transcriptome had 388 (90.2%) complete single or duplicated copies, 27 were fragmented, and 15 were missing, which indicated that this transcriptome assembly had adequate coverage and quality to represent the H. citrina transcriptome for various downstream analyses.
Identification and Characterization of EST-SSRs in Datonghuahua Transcriptomes
Using MISA, we mined microsatellite-containing sequences from 13,019 unigenes with >1,000 bp in length. As a result, 3,430 potential SSRs were identified from 2,977 unigenes, 453 of which contained more than one SSR (detailed in Supplementary Table S4). Details of the 2,977 unigenes, associated SSRs, and motif statistics are provided in Supplementary Tables S5, S6, respectively. The sequences of these unigenes in fasta format are provided in Supplementary Table S7. Among those SSRs, those with the tri-nucleotide repeat motifs were the most abundant (54.29%, 1,862), followed by SSRs with dinucleotide (31.75%, 1,089), and tetra-nucleotide (3.24%, 111) motifs. Other types were rare. Among the di-and tri-nucleotide repeats, (GAA/TTC) n , and GA/TC) n or (AG/CT) n were the most common repeat motifs, respectively (Supplementary Table S6).
From the 2,977 SSR-containing unigene sequences, primers were designed for 3,577 SSRs. The primer sequences for each SSR and associated information are provided in Supplementary Table S8. From the list, we tested 192 experimentally in six accessions. Based on the preliminary data, we finally selected 43 SSRs that had clear, and unique amplicons with expected size. Information for the 43 EST-SSRs is provided in Table 3.
Genetic Diversity of Hemeracallis Germplasm Collection
The forty-three EST-SSRs were used to fingerprint 155 Hemeracallis accessions. A total of 396 alleles were detected Frontiers in Plant Science | www.frontiersin.org FIGURE 6 | Functional classification of the assembled unigenes based on Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis. 9,375 unigenes are annotated to five categories: metabolism, genetic information processing, cellular processes, environmental information processing and organic systems. There are a large number of pathways of metabolism and genetic information processing (in total 106 pathways) but the picture is limited, so the first five pathways with the largest number of each category are selected for plotting. The value to the right of each bar indicates the number of unigenes annotated in the pathway.
at the 43 marker loci. The number of alleles detected at each locus varied from 2 to 25 with an average of 9.21 per locus. The polymorphism information content (PIC) ranged from 0.907 to 0.259 with an average of 0.606 (Table 4). Mean Shannon's information index (I) was 1.294 (0.477-2.653). Mean observed heterozygosity (Ho) and mean expected heterozygosity (He) was 0.1757 (0.0000-0.5175) and 0.5935 (0.2104-0.9109). These indicated that the 43 SSRs provided rich genetic information (Table 4).
Pairwise genetic similarity coefficients were calculated among all samples which are provided in Supplementary Table S9. The mean genetic similarity coefficient among these accessions was 0.8642 (range: 0.7828-1.000). The least genetic similarity coefficient was 0.7928 between "Suqian 1-C" and US4 suggesting they are genetically the most distant among these accessions. Meanwhile, the genetic similarity coefficient between 'Datonghuanghua' (H0006) and 'Qiaotouhuanghua' (H0010) was 1.000 indicating they may actually belong to the same accession.
A UPGMA dendrogram of the 155 accessions based on 43 EST-SSR markers was constructed (Supplementary Figure S2). Table S1).
We also constructed a Neighbor-Joining (NJ) tree for the 155 accessions, which is presented in Figure 7. The groupings were largely the same as the UPGMA dendrogram (Supplementary Figure S2) with all night lily accession in one clade. These data suggested that night lily has narrower genetic base than daylily, and the night lily is likely the result of human selection from daylilies for human consumption during crop evolution.
Population Structure of 155 Hemeracallis Accessions
A model-based Bayesian clustering approach was used to analyze 155 accessions by STRUCTURE. The logarithm of the likelihood [Ln P(D)] on average continued to increase with increasing numbers of assumed subpopulations (K) from 2 to 20 ( Figure 8A). Difference between Ln P(D) values at two successive K-values became non-significant after K = 5. As the ad hoc statistic K preferentially detects the uppermost level of structure and gave the highest value at K = 2 ( Figure 8B; Lia et al., 2009;Emanuelli et al., 2013). Thus, K = 2 was considered as the most probable prediction for the number of subpopulations. Indeed, in the first round of structure analysis, the 155 accessions were split into two major clusters. Group I contained 53 accessions with an average Q value of 0.9867 (86.79% of them with Q > 0.98), and Group II contained 102 accessions with an average Q value of 0.9213 (78.43% of them with Q > 0.90) (Figure 8C and Supplementary Table S1). Group I consisted of 53 accessions, all of which were blooming at night (night lily group). Groups II consisted of 102 accessions, 97 accessions of them were blooming in the morning All number are the number of alleles detected by each marker among 155 accessions. PIC = polymorphism information content for each marker. I = Shannon's Information index (Lewontin 1972). Ho is observed heterozygosity, and He is expected heterozygosity that was computed using Levene 1949. Table S1). Some accessions were admixtures of the two groups.
Association Analysis of Blooming Time Among 155 Accessions
Blooming time is a very important horticultural trait for both daylily and night lily varieties. We recorded blooming time of all 155 accessions, and the data were presented in Supplementary Table S1. Among them, 96 bloomed (almost all daylily) in the morning (7-10 a.m.), and the rest (almost night lily) in the later afternoon or evening (4-11:30 p.m.). To confirm the utility of the EST-SSR markers developed in the present study, we conducted association analysis of blooming time using these markers. We first examined linkage disequilibrium (LD) in the Hemerocallis population. Of the 903 pairwise combinations generated for 43 the EST-SSR loci, 786 (87.0%) showed LD (Supplementary Table S10 and Supplementary Figure S3). At P ≤ 0.001, 74 pairs (8.2%) had mean r 2 of 0.4266 indicating strong LD. At P ≤ 0.001, the standardized disequilibrium coefficients (D') was 0.8435 which was close to 1.0 suggesting low chance of recombination between pairs of loci, which may be caused by the long-term asexual reproduction among the Hemerocallis collection. Association analysis was performed using both the general linear model (GLM, Q model) and the mixed linear model (MLM, Q+K model) in TASSEL. The results are presented in Table 5, and illustrated in Figure 9. In the GLM model, 12 markers showed significant association with blooming time, 8 of which had extremely high significance (P ≤ 0.01; mean r 2 = 0.0459) (Points above the blue line in Figure 9c). Similarly, under the MLM model, 10 markers were significantly associated with blooming time and 5 had extremely significant association (P ≤ 0.01; mean r 2 = 0.0560) (Points above the blue line in Figure 9d). The 10 markers with significant associations with the blooming time in both GLM and MLM were SAU00008, SAU00045, SAU00047, SAU00045, SAU00063, SAU00064, SAU00109, SAU00121, SAU00143, and SAU00150. Among them, SAU00109 had the strongest association (P << 0.001 in MLM model) that could explain 8.8% phenotypic variance (8.0% in GLM) ( Table 5). The predicted functions of the 12 unigenes associated with these SSRs are provided in Table 5. Both SAU00064 and SAU00109 were associated with the same unigenes c22731.graph_c0, which seems to encode a eukaryotic translation initiation factor 2D. Another unigene, c33464.graph_c0, associated with marker SAU00063 was annotated to encode a LHY-like protein, which is known to be regulated by circadian clock. The expression of c33464.graph_c0 in flower buds was significantly higher than that of roots and leaves in both RNA-Seq and RT-qPCR experiments (Figure 10). The unigene c47752.graph_c0 associated with SAU00008 was predicted to encode phosphatase 2C 35/65, and its expression level in roots was significantly different from that in flower buds and leaves.
DISCUSSION
In this study, 92,107 Hemerocallis transcriptomic unigenes were assembled using the Illumina Hi-Seq 2500 platform ( Table 1). The N50 length of these unigenes was 908 bp, and the average length was 575 bp. These results were comparable to what obtained in other recent studies in species of the Liliaceae family, such as Lilium regale (N50 = 920 bp, average length = 682 bp) (Cui et al., 2018). The number of unigenes and contained SSRs obtained from the present study (92,107 unigenes with 'Datonghuanghua'. The results show the mean ± SD (error bars) and were generated from three biological replicates. ** mean significantly difference, P ≤ 0.01. HcACT was used as an internal control for RT-qPCR.
3,430 SSRs) was also close the number identified in other three Liliaceae species (L. formolong, L. longiflorum, and L. longiflorum) (average unigenes = 72,256) (Biswas et al., 2018). These results suggested that closely related species share similar gene contents and distribution of SSR sequences in their genomes.
Fingerprinting is among the most popular uses of molecular markers. In Hemerocallis, hundreds of varieties have been released. Many varieties have the same names but may be morphologically different, or have the same appearances but with different names. Marker-based fingerprinting can help resolve these issues. In the study, the two landraces 'Malinhuanghua (H0029)' and 'Malinhuanghua2 (H0125)' had the same name (in Chinese), but have different geographic origins. Clustering analysis indicated that they were located in different clades (Figure 7 and Supplementary Figure S2). Therefore, these two accessions do not seem to share any common parent, but may have the same name by chance. The three accessions, 'Taiguxuancao (H0038), ' 'Stella de Oro 2 (H0077), ' and 'Stella de Oro 3 (H0139)' may have a similar situation. In addition, three accessions, H. minor Mill. (H0007) and H. minor Mill.2 (H2802) were all introduced from Qingyang, Gansu Province, China, but clustering analysis did not support their close relatedness. These apparent mis-identifications were probably because the original names of these foreign varieties were changed right after their introduction. This is also likely true for many accessions that were introduced from one place to another place inside China. Work in the present study presents a good example to show the power of molecular markers in variety identification and protection.
Both clustering analysis and STRUCTURE analysis place ∼50 accessions into one group (Figure 7 and Supplementary Figure S2) that bloom in the evening, while the rest of ∼100 accession all bloom in the morning (Supplementary Table S1).
Marker-based phylogenetic analysis revealed that the night lily accessions were more genetically related, which seems to be consistent with their geographical distribution. Accessions in the daylily group were collected from more geographically diverse locations (Supplementary Table S1). Based on our multiple observations (Ji et al., 2018), accessions in the daylily group are also morphologically more diverse than those in the night lily group. For example, the colors of petals and sepals of daylilies may vary from yellow, yellow-orange, orange, orangered, red, purple-red to purple. For flower size, the width variation coefficient was high in petals and sepals. Among them, the orange petal width variation coefficient was the highest of 42.15% and the average width was 35.90 mm (13.58-75.56 mm). Similarly, the variation coefficient of the orange sepal width was the highest at 38.74%, and the average width was 23.40 mm (13.20∼48.34 mm). The difference between stalk height and leaf width were highly significant. The inflorescences were rich in morphological variation ranging from mini inflorescence, extremely short branches, to large branched inflorescence, with varying amounts of blossoms. On the other hand, the 55 accessions in the night lily group (branches in red in Figure 7 and Supplementary Figure S2 were collected from nine provinces of China, all of which were diploids and are edible. For these accessions, both petals and sepals were yellow while the width of petals and sepals were very small, with the mean values 17.46 and 13.77 mm, respectively. As compared with accessions in the daylily group, the stalk height (main stem) of night lily group was in general higher (129.08 cm), the leaf was narrower (3.16 cm), and the average number of blossoms was 22.10. Fifty-three accessions were blooming at night, except for 'Panlonghua' (H0004) and 'Panlonghua 2' (H0111) that bloomed in the morning. The results from both marker analysis (Figure 7 and Supplementary Figure S2) and morphological observations indicate that the night lily group may have narrower genetic base than the daylily group accessions in this collection. This seems reasonable because the daylily group accessions had a more diverse geographic origin (Supplementary Table S1). Also, many of the night lily accessions may share common parents during their breeding and selection. From the crop evolution perspective, night lily was probably selected from the genetically more diverse daylily gene pool for vegetable use. During this process, blooming at night may be a main target of selection which is critical for vegetable use to collect the tender unopened flower bud during day time.
A few accessions were placed in different clades when different programs were used in clustering analysis, for example, 'Panlonghua' and 'Panlonghua 2' were clustered into night lily group by NTSYS and MEGA, which were in daylily group by STRUCTURE (Supplementary Table S1, Figures 7, 8, and Supplementary Figure S2). 'Panlonghua' and 'Panlonghua 2' were edible landraces, which were probably the hybrids between the night lily and the daylily. They displayed characteristics of both parents including morning blooming and the narrow perianth traits. 'Stella de Oro 2' (H0077), 'Xue Qiu Hong' (H0092), 'Beijing 6' (H0101), and 'Little Bumble Bee' (H0141) were daylily varieties that blossom at night, but they were clustered together with other daylily accessions that bloomed during the day. These observations suggest gene flow between the two cultivated groups during the long-term breeding and selection. Hermerocallis varieties are often recognized as either daylily or night lily based on their blooming time. Probably only a few genes with strong phenotypic effects are underlying the blooming time (Masahiro et al., 2006). In this study, we identified 12 SSRs with strong association with blooming time ( Table 5). The SSR marker SAU00063 with the strongest association is in the unigene (c33464.graph_c0) that was annotated to encode a LHYlike protein ( Table 5). The LHY (Late Elongated Hypocotyl) gene has been shown to regulate flower timing in a number of plants (e.g., Ramos-Sanchez et al., 2019;Zhang et al., 2019). Our work in the present study provides a foundation for gene discovery of flower timing in Hermerocallis. However, additional work is needed to validate this result.
CONCLUSION
This is the first report on the Hemerocallis transcriptome, and a large number of EST-SSR markers have been developed from this transcriptome, which will provide an excellent resource for researchers and breeders focusing on Hemerocallis. More importantly, our results showed that this strategy using highthroughput sequencing technology is feasible and extremely convenient and efficient in the genus Hemerocallis. The genes associated with circadian rhythm provide germplasm resources and genes basis for further study on flowering rhythm of this genus.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in NCBI under accession numbers SRR11610941, SRR11610942, and SRR11610943 (Submission ID: SUB7327030; BioProject ID: PRJNA628147).
AUTHOR CONTRIBUTIONS
SL designed and conducted the whole research. FJ collected the phenotype data and conducted the GWAS analysis. FH collected some accessions and developed the EST-SSR. HC analyzed the transcriptome data. QS ran the PAGE gel. GX and XK kept the Hemerocallis germplasm. YW advised the project and revised the manuscript. | 7,201.6 | 2020-06-11T00:00:00.000 | [
"Biology",
"Environmental Science",
"Agricultural and Food Sciences"
] |
A Human Computer Interactions Framework for Biometric User Identification
Computer assisted functionalities and services have saturated our world becoming such an integral part of our daily activities that we hardly notice them. In this study we are focusing on enhancements in Human-Computer Interaction (HCI) that can be achieved by natural user recognition embedded in the employed interaction models. Natural identification among humans is mostly based on biometric characteristics representing what-we-are (face, body outlook, voice, etc.) and how-we-behave (gait, gestures, posture, etc.) Following this observation, we investigate different approaches and methods for adapting existing biometric identification methods and technologies to the needs of evolving natural human computer interfaces.
Introduction
A number of core Human-Computer Interaction (HCI) methods and technologies lay out the foundation for the computer assisted functionalities and services that are penetrating our world up to the point that we deem them indispensable.However, like with familiar objects we use in customary tasks, we hardly notice them in our daily activities.In the world of changing HCI, the recognition and identification of humans engaged in the entailed interactions becomes a crucial issue both from the security point of view and as a vehicle for increasing computer awareness and adaptability for tailoring user-oriented services.In this study we focus on exploring possible enhancements of natural HCI by introducing unobtrusive user identification and recognition capabilities directly into the employed interaction models.These are weaved into the fabric of the interface, deeply integrated into the interaction model, and highly transparent for the user.Note that when natural human interfaces are used, there might be no keyboard, no mouse, and even no direct contact with any physical interface component.Consider for example accessing a Smartphone through its touch screen and a keyboard image on it vs. a physical keyboard, or consider playing a game using Microsoft's Kinect where there is no physical interface contact whatsoever.Yet, even in such situations, users are often identified by usernames and authenticated by passwords (or some "visual" versions of them).Obviously, it would be much better if user identification and consequent authentication were integrated into the HCI process in a natural an unobtrusive way so that they remain mostly transparent to the user.Natural identification among humans is most often based on biometric characteristics representing what-we-are (face, body outlook, voice, etc.) and how-we-behave (gait, gestures, posture, etc.).Following this observation, in our work we investigate different approaches and methods for adapting existing biometric identification methods and technologies to the needs of the novel HCI and evolving natural human computer interfaces.In particular we discuss the design and development of a specialized framework and a corresponding supportive environment for research and experimental work with a wide range of biometric identification methods in the context of natural HCI and related interfaces.
Natural User Identification Scenarios
In our earlier work [1] we have designed and implemented a user identification system employing a mouse with an embedded fingerprint scanner.The system provided for continuous identity tracking of the mouse operator that was completely transparent with respect to the employed interaction model.With the launch of iPhone5s we have a good example of a widely available button interface incorporating an embedded fingerprint scanner.Following this, one can envisage a keyboard with a fingerprint scanner gathering information from all its keys or even a general touch screen that will be able to scan fingertips when operated.All these examples illustrate the idea of natural user identification that occurs in the course of the normal use of an interface.Such identification and continuous re-identification requires no conscious effort by the user and is thus completely transparent to him.Although biometric identification employing fingerprints and other human characteristics based on what-we-are is highly reliable when properly implemented, it is still prone to various security threats.Some biometric footprints, for example, can be easily recovered from the environment (e.g.fingerprints) while others can be obtained by simple surveillance techniques (face photos, etc.)This increases the risk of biometric identity theft which is a serious problem since user biometrics cannot be changed at will.In our current work we aim to address some of these issues by giving precedence to biometric identification based on how-we-behave (gestures, posture, gait, etc.) rather than what-we-are.Using static handwriting and traditional signatures for human identification in graphology has a long history and is deemed to provide for reliable human identification.Modern multidimensional input devices add additional layers of security to this by collecting dynamic gesture data in the process of writing and/or signing, which makes forgeries extremely difficult.And, if really necessary, one can always change his/her signature (but not his/her fingerprints).That is why in this work we consider different methods and approaches for the collection of dynamic gesture data that can be integrated in various natural identification scenarios.
Framework Building Blocks and Components
The main target of the experimental framework that we build is to integrate various input methods and corresponding interactive interfaces into a modular system for gathering, processing, and analyzing motion and gesture data.Such a modular approach allows for coverage of a wide span of input methods and technologies starting with traditional mouse-based input and its enhancements with gesture dynamics [2], going through classic 2-DOF digitizers, tablets, and sign pads for 2D graphical input and their enhanced multiple-DOF versions capable to track the stylus azimuth, elevation, and applied forces [3].We also cover current touch screen based input both for singleand multi-touch surfaces [4].While there are many applications that employ tablets and electronic sign pads to authorize access to bank accounts or credit card purchases, they are all limited to obtaining a user signature on a specialized flat surface [5].The truly natural gesture-based user identification that we envisage, however, should be more flexible in respect to the user and thus allow for spatial 3D gestures.Our gesture identification framework should, therefore, facilitate the dynamic gathering of such 3D gesture data on multiple hardware platforms.In this respect we build upon our spatial motion tracking experience with Nintendo Wii remote controllers, Android Smartphones, and others.We have used Nintendo Wii remote controllers connected to a Windows PC by Bluetooth as spatial input devices in a Kanji writing game for elementary school pupil called Kanji Sports [6].A new, more advanced version of the Kanji Sports game for mobile devices is under development while a pilot implementation for Android Smartphones is already available for experiments [7].
Spatial Motion Tracking with Android Smartphones
A recent development of a general gesture tracking systems for authentication with Android Smartphones has been reported in [8].Such Smartphone based general gesture tracking systems employ the motion and orientation sensors, embedded in the device, to gather data in local device coordinates.Based on the obtained data a device-to-world transformation matrix is dynamically computed and applied for coordinate transformation to global (world) coordinates.After that, numerical integration is carried out to calculate the 3D orientations, corresponding velocities, and the time-dependent 3D movement path of the device.Each obtained paths is finally mapped to the set of sample gestures and the closest match is determined.Following this procedure, we have successfully conducted experiments with simple Smartphone movements in horizontal and vertical directions and with drawing of circles, triangles, and rectangles, etc.In respect to spatial gesture input, however, there is an inherent problem related to the fact that humans are simply not capable of writing a straight line in the air if no reference edge or a surface is provided in the vicinity.The reason is mainly related to the way our joints work e.g. an intentional straight vertical line drawn on an imaginary large screen in front of us appears as an arc in 3D due to our limited depth control capabilities.This implies that extracted spatial paths need some preprocessing before actual analysis could be done.We implement such preprocessing by projecting each path on a carefully selected surface e.g.imaginary screen on which the writing is mentally conducted.Projected data is then used for matching to the available gesture samples.
Kanji writing specificities
To avail of Kanji writing specificities we note that every Kanji employed in the Japanese writing system can be decomposed into a predetermined sequence of individual strokes.In fact Kanji handwriting rules prescribe lifting of the stylus at the end of every stroke which makes character decomposition and further analysis much more straightforward.In contrast, note that individual characters and even sequences of characters from the English alphabet are often written without lifting the stylus which obviously complicates handwriting analysis.The idea is therefore to decompose the Kanji handwriting into individual strokes which are later combined into Kanji characters, etc.This is actually the standard way surface-based mouse, tablet, and touch screen Kanji input is implemented.When it comes to a truly 3-dimensional input, however, it turns out that in the absence of a tangible reference surface, the sense of lifting the stylus is almost entirely lost.To address this issue we have conducted experiments with buttons and other dedicated sensors to detect beginnings and ends of strokes but this puts additional burden on the user and interferes with the natural interface usage patterns.A deeper look into this problem reveals that stroke beginnings and ends could actually be quite reliably detected based on the observed motion acceleration patterns.In fact, Kanji writing mental models well learned by all Japanese children at school appear to enforce different acceleration patterns for writing of strokes and for movements between strokes.Based on such observations we have implemented the Kanji-oriented motion tracking and stroke determination algorithm shown in Fig. 1 which encompasses natural stroke writing patterns and is thus posit to deliver superior extraction results.The outcome of the algorithm is a sequence of time-dependent stroke descriptions that need to be classified following the Japanese Kanji writing rules.We will illustrate the stroke description and classification process by referring to the Japanese Kanji character for the English word "tree" shown in Fig. 2(c).The character is comprised of four strokes namely a "horizontal", a "vertical", a "right-inclined", and a "left-inclined" as shown in Fig. 2(b).This textual description, however, does not indicate the directions in which the strokes should be drawn although such directions are an important part of the Kanji writing rules.To fully disambiguate the four stroke types and their directions we employ the encoding scheme shown in Fig. 2(a).Following this scheme the stroke sequence of the Kanji in Fig. 2(c) is expressed as 1, 7, 6, 8.Such unambiguously encoded stroke sequences and their extension, as discussed further on in this work, make the foundation of the handwritten Kanji analysis and recognition for biometric human identification and authentication that we develop.While a large number of Kanji can be written by using the simple strokes shown in Fig. 2 there are also other more complex combined strokes.For example, as shown in Fig. 3(a), the Kanji character for the English word of "mouth" looks like a simple square.One may think that it is constructed from four segments as shown in the top sequence of Fig. 3(b) but the Japanese writing rules prescribe that it should be written by three strokes, the second of which is a combination of a horizontal stroke followed by a vertical stroke without lifting the stylus essentially making a corner or an angle as shown in the bottom sequence.We will encode such combined strokes by simply putting together the codes of their comprising strokes in the right order e.g. a stroke encoded by 1 followed by a stroke encoded by 7 without lifting the stylus results to a combined stroke encoded by 17.In some cases either a simple stroke or a combined stroke could be acceptable.In Fig. 4, for example, the third stroke of the Kanji for a "person" could be a simple right-inclined 6 as shown in the top sequence.But it can also be a vertical 7 followed by a right-inclined 6 without lifting the stylus which makes to a combined stroke with code 76 as shown in the bottom sequence.This is reflected in the different possible and acceptable appearances of the corresponding Kanji character depending on the employed font.Note that a large variety of combined strokes built of two and more comprising strokes can be encoded following the above presented rules.In the discussion that follows, however, we will limit the scope to the strokes that are found in the Kanji set employed in our experiments.
Pilot Implementation and Experiments
The pilot software implementation based on the procedure outlined in Fig. 1 enables us to obtain extensive data fully describing the dynamics of the complex motions the mobile device is subjected to.While our ultimate objective is to use such data for natural user identification and authentication, the first set of experiments that we have conducted focuses on spatial Kanji input and its use as a complement or a substitute of the standard touch screen based user authentication essentially allowing users to authenticate by spatial gestures rather than via a touch screen.Our next step will be to employ spatial signatures for direct user identification and consequent authentication without the need of user names and passwords.For our experiments we have prepared a set of 10 Kanji characters as shown in Fig. 5.The set includes some very simple characters comprised of 2 or 3 strokes as well as some fairly complex ones with up to 7 strokes.The stroke types used in the writing of the selected Kanji subset are shown in Fig. 6.Employed strokes include the five simple stroke types with codes 0, 1, 6, 7, and 8 two of which are used most often i.e. 15 code 1 (horizontal) strokes and 10 code 7 (vertical) strokes.From the combined stroke types which are used less often, the employed ones are with codes 76, 17, 71, and 74.As indicated in Fig. 6, different recognition rates have been observed for different stroke types, those with codes 0 and 74 appearing to be the most difficult to draw.We have conducted extensive experiments with discriminating the 9 different stroke types from Fig. 6.A group of 17 test subjects was formed and each participant was asked to write "in the air" the selection of 10 different Kanji characters in Fig. 5 while holding an Android Smartphone in his hand as if it were a brush.While recognition failures were common in the initial writing attempts, repeated writing lead to significant improvements thus demonstrating a steep learning curve for this novel input interface.
Conclusion
Our framework implementation has been used so far for conducting experiments with 3D Kanji writing and various stroke-based Kanji games.We are continuing our research with more detailed analysis of the extensive 3D data that we collect, aiming to better understand the true dynamics of the 3D gestures that might lead to more advanced transparent user identification.
Fig. 2 .
Simple strokes and their codes.
Fig. 3 .
Combined strokes and their encoding
Fig. 5 .
Fig.5.The Kanji set employed in the experimental testing.
Fig. 6 .
Fig.6.Stroke patterns included in the testing Kanji set. | 3,420.2 | 2016-01-01T00:00:00.000 | [
"Computer Science"
] |
Competition between allowed and first-forbidden β decays and the r -process
. β − decay lifetimes are essential ingredients for r -process yield calculations. In N ≈ 126 r -process waiting point nuclei first-forbidden and allowed β decays are expected to compete. Recent experiments performed at CERN / ISOLDE showed that 207 , 208 Hg decay predominantly via first-forbidden decays. In addition, following on a high statistics study of the β + / EC decay of 208 At, it is suggested that the Z > 82, N < 126 nuclei provide an excellent testing ground for global calculations addressing the competition between first-forbidden and allowed β decays.
Introduction
Half of the nuclei heavier than iron were synthesised in the r process. This process is based on a succession of neutron captures and β − decays. Since heavy (A>150) r-process path nuclei still cannot be produced in laboratories, yield calculations have to rely on β decay half-lives predicted by theoretical calculations. In contrast to lighter mass regions, around neutron-number N=126 there is competition between (parity conserving) allowed and (parity changing) first-forbidden β decays [1,2]. First-forbidden (FF) transitions can be dominant, with profound implications on their half-lives and therefore on the r-process, specifically on the third r-process peak at A∼195 [3]. However, FF transitions are notoriously difficult to calculate. Here we present results obtained in β-decay studies on nuclei around 208 Pb in experiments performed at the ISOLDE Decay Station at CERN.
First-forbidden β decays
An ideal atomic nucleus to study the competition between allowed and forbidden β decays should have a small number of both positive and negative parity levels which could be populated in β decay, and they should have simple and well-understood wave functions. Quantum excitations in nuclei in the vicinity of doubly magic numbers have well defined wave functions. In addition, the 208 Tl nucleus (with one proton-hole and one neutron-particle outside the closed shells) is expected to have a small number of both positive and negative parity low-energy states. Indeed, shell model calculations predict two 0 + , five 1 + , as well as one 0 − and three 1 − states at excitation energies below the Q β =3.48 MeV value [4] (note, that in addition, a number of collective octupole states with negative parity are also expected above 2.6 MeV, however these could not be calculated within the used model space). Therefore, the decay of 208 Hg into 208 Tl provides an ideal testing ground for the study of the competition between allowed and FF β decays. This was studied at ISOLDE at CERN [5]. Three negative parity excited states, one 0 − and two 1 − , were populated directly in β decay. The FF decay probabilities with log f t values in the range of 5.2-6.0 are in line with systematics in this region, however at present these cannot be reliably calculated. In contrast none of the positive parity states were populated. This latter can be understood by considering the properties of the single proton and neutrons involved. The half-life of the 208 Hg ground-state was measured as T 1/2 =135 (10) [12][13][14][15][16]. This is not surprising, as even such state-of-the-art calculations are not expected to provide accurate single-particle energies, essential for nuclei close to closed shells where the β-decay strength is dominated by few transitions. There are some shell-model calculations available for N=126 nuclei [17,18], however these do not consider neutrons above the N=126 shell closure, therefore cannot provide a reliable lifetime estimate for 207,208 Hg.
It is difficult to populate heavy neutron-rich nuclei in the laboratory. It is much easier to produce proton rich ones. Recently, the β + /EC decay of 208 At into 208 Po was studied within a high statistics experiment [19]. A roughly equal role of allowed and FF decays were found. This can be understood considering the relevant shell model orbitals in the region. Since the FF decays populate mainly excited states at high energies, some older experiments suffered from the pandemonium effect. The role of the FF β decays in the region is illustrated in figure 1. Note that some nuclei have very low Q EC values and consequently there are few excited states in their daughter nuclei within the available energy window. The decays of 208 Po and 209 Po are peculiar; all β decays proceed via larger degree of forbiddenness due to the lack of daughter states with spin-parities consistent with FF and allowed β decays. The indicated small experimentally determined FF contribution for some Rn and At isotopes with larger Q EC values are most likely due to the pandemonium effect, with the real value expected to be much larger.
It is suggested that the neutron-deficient region "north-west" of 208 Pb provides an excellent testing ground for the competition between allowed and first forbidden β-decay calculations [19]. However, presently very few global calculations for β + /EC decays exists on this part of the nuclide chart. These predict half-lives of 105.7 s [7] and 2412 s [20], as compared to the experimental T 1/2 =5900(1100) s [21]. Note that [20] considers only allowed decays.
The ∆n=0 selection rule
The well-known selection rules for allowed β decay are: angular momentum change ∆I=0,1 and no parity change. However, there is one additional selection rule: the number of nodes in the radial wave function, n, of the parent and daughter states must be equal. The validity of this less known ∆n=0 selection rule in Gamow-Teller β decay was recently investigated [22] by studying the β decay of 207 Hg into 207 Tl. The level of forbiddenness of the ∆n=1 ν1g 9/2 → π0g 7/2 transition has been determined. From the non-observation of this decay at the level of <3.9×10 −3 %, corresponding to log f t>8.8 (95% confidence limit), one can conclude that the selection rule holds. This is the most stringent test of the ∆n=0 selection rule to date. Without this rule the lifetime of 207 Hg would be approximatively 7% shorter, while in the case of 209 Pb the effect is about ≈20%. This selection rule has little importance for nuclei close to stability, but is essential for the Z<82, N>126 r-process waiting point nuclei, lengthening their lifetimes.
Conclusions and outlook
In conclusion, recent ISOLDE results addressing the competition between allowed and firstforbidden β decays were reported. In particular it was found that both 207 Hg and 208 Hg decay via first-forbidden β decays into negative parity daughter states, despite the availability of positive-parity states. These were understood by examining the properties of individual neutrons and protons within the shell model approach. Following the study of the β + /EC decay of 208 At, it is suggested that the Z>82, N<126 nuclei, which can be studied in high yield experiments provide an excellent testing ground of the competition of FF and allowed decays. This region would merit modern global β decay calculations.The presented results on β decays are important for the detailed understanding of the nucleosynthesis of heavy elements produced in the rapid neutron-capture process. | 1,642.2 | 2022-01-01T00:00:00.000 | [
"Physics"
] |
Comprehensive Review of Advanced Machine Learning Techniques for Detecting and Mitigating Zero-Day Exploits
This paper provides an in-depth examination of the latest machine learning (ML) methodologies applied to the detection and mitigation of zero-day ex-ploits, which represent a critical vulnerability in cybersecurity. We discuss the evolution of machine learning techniques from basic statistical models to sophisticated deep learning frameworks and evaluate their effectiveness in identifying and addressing zero-day threats. The integration of ML with oth-er cybersecurity mechanisms to develop adaptive, robust defense systems is also explored, alongside challenges such as data scarcity, false positives, and the constant arms race against cyber attackers. Special attention is given to innovative strategies that enhance real-time response and prediction capabili-ties. This review aims to synthesize current trends and anticipate future de-velopments in machine learning technologies to better equip researchers, cy-bersecurity professionals, and policymakers in their ongoing battle against zero-day exploits.
Introduction
The realm of cybersecurity is in a constant state of flux, with new threats emerging as rapidly as the technologies designed to counter them [1].Among these threats, zeroday exploits stand out due to their nature and the level of risk they pose.A zero-day exploit takes advantage of a vulnerability that is unknown to those responsible for patching or mitigating the vulnerability, often leading to severe consequences before a fix can be applied [2,11].The detection and response to such exploits are paramount in maintaining digital security and integrity.Historically, zero-day ex-ploit detection has relied heavily on signaturebased methods and anomaly detection systems [3,12].However, the sophistication and evolution of these exploits have outpaced traditional security measures [4,13].This has ushered in an era where ma-chine learning (ML) plays a critical role in identifying and responding to these threats.ML's ability to learn from data and identify patterns makes it exceptionally well-suited to detect irregularities and potential threats that elude conventional de-tection systems [5].This paper reviews the application of machine learning in the detection and response to zero-day exploits.It delves into various ML techniques, ranging from simple regression models to complex neural networks, and examines their efficacy in recognizing and responding to unseen vulnerabilities and attacks.The discussion extends to the integration of ML with other cybersecurity measures, offering a holistic view of current and future security landscapes.The adaptation of ML in cybersecurity presents unique challenges, including the need for extensive and relevant training data, the risk of false positives and negatives, and the ongoing battle against adaptive adversaries [6,14].These challenges are explored in depth, providing a realistic understanding of the capabilities and limitations of ML in this context.In this rapidly advancing landscape, the evolution of machine learning tools offers a beacon of hope.These tools are not only capable of enhancing detection mechanisms but are also pivotal in developing proactive defines strategies that can anticipate and neutralize threats before they manifest.The versatility of ML algorithms, including both supervised and unsupervised learning models, provides a comprehensive framework for addressing the unique challenges posed by zero-day exploits.As cybersecurity threats become more complex and elusive, the traditional methods of detection and response prove inadequate.The adaptability of ML mod-ells, which can learn from new data without explicit reprogramming, makes them particularly effective against the dynamically changing tactics of cyber adversaries.This paper delves deeper into various machine learning techniques, from relatively simple regression models to complex neural networks, and examines their efficacy in recognizing and responding to unseen vulnerabilities and attacks.We also explore the synergy between ML and other cybersecurity measures, presenting a holistic view of current and potential future security landscapes.The increasing reliance on machine learning highlights its significance as a transformative tool in cybersecurity-ty, capable of not only detecting but also predicting and mitigating potential threats effectively.By integrating advanced machine learning techniques, cybersecurity systems can evolve from reactive to predictive, significantly enhancing their capability to secure digital assets against the ever-present danger of zero-day exploits.As we progress, this review aims to provide not only a thorough understanding of the current state of ML in zeroday exploit detection and response but also to offer insights into the technological advancements that are shaping the future of cybersecurity.The goal is to equip researchers, cybersecurity professionals, and policymakers with the knowledge and tools necessary to develop effective and adaptive security strategies in the face of evolving cyber threats.Figure 1 Illustrates the evolution of Zero-Day exploits and ML-Based detection effectiveness.As we progress, the paper aims to not only present a thorough understanding of the current state of ML in zeroday exploit detection and response but also to offer insights into future directions and potential innovations in this rapidly advancing field [7,15].The goal is to equip researchers, cybersecurity professionals, and policymakers with the knowledge to continue developing effective and adaptive security strategies in the face of evolving cyber threats.
Historical Background
The landscape of cybersecurity has been an arena of constant evolution, marked by an ongoing arms race between threat actors and defenders [16].The history of zero-day exploits, which are vulnerabilities unknown to software vendors or security teams until they are exploited, is deeply intertwined with the development of cybersecurity measures.In the early days of digital computing, security was a relatively minor concern, often limited to physical access control and basic password protection [17].As networked environments and the internet gained prominence in the late 20th century, the potential for wide-reaching digital attacks became apparent [18].The late 1990s and early 2000s witnessed a surge in the awareness of cybersecurity threats, with several highprofile incidents underscoring the need for more robust protection mechanisms.The term "zero-day" began to gain traction in the early 2000s, derived from the number of days a software vendor has been aware of the vulnerability.Initially, zero-day exploits were rare but highly effective, used primarily by advanced threat actors.The detection methods during this period were mostly reactive, relying on known vulnerability signatures and basic anomaly detection.As the complexity of software systems grew, so did the number and sophistication of vulnerabilities [19].This increase led to a paradigm shift in cybersecurity.Traditional methods, which relied heavily on signaturebased detection and predefined rule sets, were becoming increasingly inadequate.The dynamic and elusive nature of zero-day exploits necessitated a more proactive and adaptive approach.Enter machine learning.By the mid-2000s, machine learning began to emerge as a promising tool in cybersecurity.Its ability to learn from data, identify patterns, and make predictions made it well-suited to the task of detecting previously unknown threats.Early ML applications in cybersecurity were relatively basic, focusing on anomaly detection through statistical methods [20].However, the last decade has seen a rapid advancement in ML techniques, driven by the explosion of data and computational power.Deep learning, a subset of ML characterized by layers of neural networks, has shown particular promise in identifying complex patterns and Comprehensive Review of Advanced Machine Learning Techniques for Detecting and Mitigating Zero-Day Exploits anomalies indicative of zero-day exploits [21].The journey toward the adoption of machine learning in cybersecurity illustrates a shift from a primarily defensive posture to one that is both proactive and predictive.This shift has not only transformed security strategies but also the roles of those involved in cybersecurity defense mechanisms.Today, the focus is increasingly on developing systems that not only withstand attacks but anticipate and neutralize them before they can cause harm.This proactive approach is supported by advances in machine learning algorithms that can process and analyze vast datasets at speeds and accuracies that were unimaginable in the early days of cybersecurity.These capabilities are crucial in the fight against zero-day exploits, as they enable rapid response strategies that mitigate potential damage and fortify systems against future attacks.As we delve deeper into the integration of machine learning with cybersecurity, it becomes evident that this technology is not merely an addition to existing protocols but a fundamental transformation of the cybersecurity landscape.Today, ML is not just a supplementary tool but a core component of many modern cybersecurity systems, continuously learning and adapting to new threats.This historical perspective sets the stage for understanding the current state of ML in zero-day exploit detection and response, highlighting the journey from traditional security methods to the sophisticated, AI-driven approaches of today [22,23,24].
Related Work
The field of machine learning (ML) applied to zero-day exploit detection and re-ponse has seen significant developments in recent years [1].This section reviews the related work, focusing on various approaches and methodologies that researchers have employed to tackle the challenges posed by zero-day attacks.In their landmark study, Bilge and Dumitraş (2012) laid the groundwork for understanding the wide-spread nature of zero-day attacks and their impact on computer security [2].Their findings highlighted the limitations of traditional signature-based detection methods, which often fail to identify new and unknown vulnerabilities.Similarly, reports by Google and the Ponemon Sullivan Privacy Report (2020) have reinforced the notion that zero-day attacks represent a major threat in the cybersecurity domain [3].These studies underscore the urgent need for innovative detection methods capable of anticipating and mitigating attacks before they cause harm.Addressing this need, several researchers have turned to machine learning.Machine learning, with its ability to analyze and learn from data, presents a promising solution for detecting pat-terns and anomalies indicative of zero-day exploits.The effectiveness of ML-based methods, however, varies, with challenges in accuracy, recall, and uniformity against different types of attacks [4].The comprehensive review of ML-based zero-day attack detection approaches in these studies offers a critical comparison of various ML models, training and testing datasets, and their evaluation results, providing valuable insights into the state of the art in this field [5].A novel approach in the realm of ML-based cybersecurity is the use of Hardware-Supported Malware Detection (HMD) [6].By utilizing Machine Learning techniques applied to Hardware Performance Counter (HPC) data, researchers have been able to detect malware at the processor's microarchitecture level.This method, while efficient for known malware, faces challenges in detecting unknown (zero-day) malware in real-time [7].An ensemble learning-based technique using AdaBoost and Random Forest classifiers, as proposed in recent work, demonstrates a significant improvement in detecting zeroday malware with high accuracy and low false-positive rates.The concept of Zero-Day Intrusion Detection and Response Systems (ZDRS) represents a significant advancement in dealing with network security blind spots [8].Traditional full-packet storage methods are costly and inefficient for recognizing zero-day at-tacks.Recent innovations in ZDRS architecture, such as the first-N packet storage method and drill-down session metadata searching algorithms, have shown great promise [9].These methods significantly reduce data storage requirements while maintaining high detection rates, demonstrating a practical and efficient approach to managing zero-day threats [10].Network Traffic Analysis (NTA) plays a crucial role in supporting ML-based Network Intrusion Detection Systems (NIDS).By monitor-ing and extracting meaningful information from network traffic data, NTA enables the identification of significant features crucial for detecting zero-day attacks [25].The application of Benford's law to identify these key features represents an innovative approach to optimizing ML models for NIDS [26].Studies have shown that semi-supervised ML approaches, such as one-class support vector machines, are highly effective in detecting zero-day network attacks [27].An emerging area of research involves using social media data, such as information from Twitter to detect zero-day attacks.By applying ML techniques like word categorization and integrating tools like TensorFlow and the Natural Language Toolkit (NLTK), research-rs have been able to identify vulnerabilities and respond to zeroday attacks swiftly [28].This approach, which leverages publicly available information, marks a novel direction in preemptively addressing cybersecurity threats.Recent studies have focused on the development of adaptive machine learning models that can evolve in response to the changing nature of zero-day threats [29].Research in this area has explored the use of online learning algorithms and dynamic feature selection methods to ensure that the ML models remain effective as the attack patterns evolve [30].For instance, some studies have investigated the application of reinforcement learning, where the model continuously updates its strategy based on the feedback from the environment, effectively adapting to new types of zero-day exploits [31].Deep learning has increasingly been recognized as a potent tool in cyber threat intelligence for zero-day attacks [32].The use of deep neural networks, particularly in processing large volumes of unstructured data such as network logs and threat re-ports, has shown promise in extracting complex patterns and indicators of compromise that precede a zero-day attack [33].Research in this area has highlighted the use of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to analyze temporal and spatial patterns in data, offering advanced predictive capabilities.The integration of big data analytics with machine learning has been a significant area of research [34].Big data technologies offer the capability to process and analyze the vast amounts of data generated in network environments.When combined with ML, this approach enables a more comprehensive and detailed analysis, improving the detection of zero-day exploits.Several studies have focused on optimizing the data processing pipelines and ML algorithms to handle the scale and complexity of big data, thereby enhancing the detection accuracy and speed [35].Comparative studies of various machine learning algorithms have also been a crucial part of the literature.These studies provide insights into the strengths and weaknesses of different ML approaches, such as supervised vs. unsupervised learning, and the specific contexts in which they excel.For instance, some works have compared the performance of decision trees, support vector machines, and neural networks in detecting zero-day attacks, providing valuable guidelines for practitioners in selecting the appropriate algorithms based on their specific requirements and constraints [36].The role of human expertise in conjunction with machine learning has been explored in recent research.Human-inthe-loop approaches aim to combine the scalability and efficiency of ML models with the nuanced understanding and adaptability of human analysts [37].This collaborative approach has been shown to enhance the overall effectiveness of zero-day detection systems, especially in reducing false positives and providing contextual understanding of the alerts generated by ML models.Lastly, the application of machine learning techniques developed in other domains to the field of cybersecurity has been a growing area of interest [38].Techniques from areas such as natural language processing, image recognition, and anomaly detection in financial systems have been adapted to identify and respond to zero-day threats.These cross-domain applications underscore the versatility of ML and its potential to bring innovative solutions to the cybersecurity field.Table 1 presents the comparison of related work focusing on their primary objectives.The detection of zero-day exploits remains a formidable challenge in the cybersecurity domain due to the inherently covert and unexpected nature of such attacks.Recent research has notably advanced the scope and effectiveness of detection mechanisms, primarily through the integration of sophisticated machine learning techniques [42].Innovations in this area are particularly focused on enhancing the accuracy and speed of detection systems, allowing them to identify and react to potential threats before they can be exploited by attackers [43].One of the significant developments in this field is the application of deep learning models, which have proven to be particularly adept at pattern recognition tasks that are too complex for traditional algorithms.These models, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), excel in detecting subtle anomalies in data that may indicate a zero-day exploit [44].Their ability to continuously learn from new data and adjust their parameters accordingly without human intervention marks a critical step forward in autonomous cybersecurity systems.Furthermore, the use of unsupervised learning techniques has grown in importance, addressing the challenge of labeled data scarcity which is common in the context of zero-day threats [45].Techniques such as clustering and dimensionality reduction are being used to identify unusual patterns in large datasets that could suggest the presence of an exploit.This method allows security systems to develop a baseline of "normal" network behavior and flag deviations, which are often indicative of cybersecurity threats.Another noteworthy trend is the adaptation of existing machine learning methods to the specific requirements of cybersecurity [46].Transfer learning, for instance, has been employed to leverage data and learning achieved from one problem domain and apply it to another.This approach is particularly useful in the context of zero-day exploits where pre-existing models developed for similar tasks can be fine-tuned with minimal data from the cybersecurity domain, thereby speeding up the deploy-ment of effective detection systems [47].The integration of machine learning with other technologies has also enhanced detection capabilities [48].For instance, the combination of machine learning with blockchain technology for data integrity and traceability provides a robust framework for the detection of anomalies.Similarly, leveraging big data analytics enables the handling and analysis of vast amounts of network data in real time, which is crucial for timely detection of zero-day exploits.In conclusion, as machine learning techniques continue to evolve, their integration into zero-day exploit detection systems promises not only more reliable protection against these elusive threats but also a paradigm shift in how cybersecurity defenses are conceptualized and implemented.The ongoing research and development in this area highlight the dynamic nature of cybersecurity and the critical role of innovative machine learning approaches in shaping future defense mechanisms against increasingly sophisticated cyber-attacks.Table 1 represents the comparison of related work focusing on their primary objectives.The literature in the domain of ML for zero-day exploit detection and response demonstrates a dynamic and evolving field.From the foundational studies that highlighted the limitations of traditional methods to the latest innovations in hardware-supported detection and social media analysis, the journey of ML in this realm is marked by continuous advancements.While significant progress has been made, the challenges of accuracy, adaptability, and response to the ever-evolving nature of zero-day attacks remain.Future research is expected to focus on enhancing these aspects, further integrating ML into comprehensive cybersecurity solutions.
Methodology
This section details the methodology employed in conducting this comprehensive review.The objective is to provide a clear, reproducible approach for identifying, selecting, and analyzing relevant literature in the field of machine learning for zero-day exploit detection and response.Objective Clarification: Clearly define the objectives of the literature review.For example, understanding the evolution of machine learning in detecting zero-day exploits, comparing different ML approaches, or identifying challenges and future research directions.Scope Determination: Specify the boundaries of the review, including the types of publications considered (e.g., peer-reviewed papers, conference proceedings, industry reports), time frame, and any specific thematic or technological focus.
Search Strategy
Database Selection: List the databases and search platforms used to find relevant literature, such as IEEE Xplore, PubMed, Google Scholar, etc. Keyword Development: Describe how keywords and search terms were developed.Include the main keywords (e.g., "machine learning," "zero-day exploit," "cybersecurity") and any combinations or variations used in the search.Search Process: Outline the search process, including any filters or criteria applied to refine the search results, such as publication date range, language, or document type.
Selection Criteria
Inclusion and Exclusion Criteria: Define the criteria for including and excluding studies.This might involve the relevance to machine learning and zero-day exploits, the quality and credibility of the publication, and the specificity of the information to the review objectives.Screening Process: Explain the process of screening titles and abstracts to determine their relevance, followed by a full-text review for selected papers.
Data Extraction and Synthesis
Data Extraction: Detail the information extracted from each paper, such as authors, year of publication, research focus, ML techniques used, findings, and conclusions.Synthesis Approach: Describe how the extracted data was synthesized.This could involve thematic analysis, comparative analysis, or a narrative synthesis approach, depending on the nature of the review.
Quality Assessment
Assessment Criteria: Outline the criteria used to assess the quality of the included studies, such as methodological rigor, clarity of reporting, and relevance to the re-view's objectives.Assessment Process: Explain how each study was evaluated against these criteria.
Reporting and Presentation of Findings
Structure of the Review: Describe how the findings of the review are organized and presented.This could involve thematic grouping, chronological order, or classification based on the type of ML approach.Interpretation of Results: Explain how the results are interpreted in the context of the review's objectives and scope.
Results
The comprehensive review of literature in the field of machine learning for zero-day exploit detection and response reveals a dynamic and evolving landscape, marked by significant advancements and persistent challenges.This section synthesizes the key findings, weaving them into a coherent narrative that reflects the current state and future prospects of this critical domain.The journey of machine learning in cybersecurity has been characterized by a gradual shift from basic techniques to more sophisticated methods.Initially, research in this area was predominantly focused on utilizing elementary machine learning models such as decision trees and linear regression for the purpose of anomaly detection within network traffic.These early applications laid the groundwork for the integration of machine learning into cyber-security practices.Over time, there has been a noticeable progression towards the adoption of more complex algorithms.The past decade, in particular, has witnessed an accelerated shift towards deep learning models, including convolutional neural networks.These advancements signal a significant move toward data-driven approaches, capable of analyzing intricate patterns indicative of cyber threats.Comparative studies of various machine learning algorithms reveal a consensus regard-ing the superior performance of deep learning models, especially in identifying nu-anced and complex attack patterns.However, these advancements are not without their challenges.High false positive rates and the substantial requirement for train-ing data are recurrent themes in the literature, pointing to the ongoing need for refinement in these models.The application of machine learning in detecting zero-day exploits has been a focal point of many studies.Numerous papers have reported the successful deployment of machine learning models in detecting these elusive threats.These models have been commended for their ability to adapt and learn from the evolving patterns of attacks, a crucial capability given the unpredictable nature of zero-day exploits.Yet, limitations remain, particularly in the realms of real-time detection and adaptation to sophisticated and continuously evolving attack vectors.A prominent trend in the literature is the integration of machine learning with traditional security methods, giving rise to hybrid approaches.This blend of new and established techniques creates more robust and comprehensive defense systems, as evidenced by improved detection rates.However, this integration is not without its drawbacks, often raising concerns about the added complexity and manageability of these combined systems.In the realm of hardware-based detection, the use of machine learning has emerged as an innovative approach.Hardware-Supported Mal-ware Detection (HMD), which leverages hardware performance counters, has been shown to be effective in the early detection of threats.This method stands out for its ability to reduce computational overhead, thereby improving real-time detection capabilities.An emerging area of interest identified in the review is the use of public data sources, such as social media, for the early detection of zero-day threats.The application of machine learning algorithms to analyze data from platforms like Twitter represents an innovative strategy in the cybersecurity field.These approach-es have demonstrated notable success rates in early threat identification, highlighting the potential of public data in enhancing cybersecurity measures.Looking to-wards the future, the literature points to several potential developments.The use of reinforcement learning and the development of adaptive models that can evolve with the changing landscape of cyber threats are identified as promising areas for future research.Additionally, the adaptation of techniques from other fields, such as natural language processing, is poised to bring new perspectives and solutions to the challenges in this domain.However, key challenges remain prevalent across the reviewed studies.These include issues related to data scarcity, the complexity of algorithms, and the need for continual updates to the models to ensure their relevance and effectiveness.To address these challenges, the literature suggests a great-er need for collaborative efforts in data sharing and standardization, which could significantly enhance the effectiveness of machine learning-based cybersecurity solutions.In conclusion, the results from this comprehensive review highlight the significant role that machine learning has come to play in enhancing the capabilities of systems designed to detect and respond to zero-day exploits.While notable progress has been made, the field continues to grapple with challenges that necessitate ongoing research and innovation.This evolving landscape underscores the importance of continued exploration and development in machine learning applications to stay ahead in the everchanging realm of cybersecurity.The results from various studies underscore the critical role machine learning (ML) plays in the detection of zero-day exploits.Advances in this area have significantly enhanced the capability of cybersecurity systems to identify and respond to previously unknown threats effectively [39].This part of the discussion focuses on the detection aspects highlighted by recent research, detailing the performance of different ML approaches and the key advancements that have driven improvements in this crucial area.Recent research into the application of ML for zero-day exploit detection points to several key trends.Firstly, the evolution of deep learning techniques has been particularly impactful.These techniques, which leverage complex neural architectures, have demonstrated superior ability to parse through massive datasets and identify subtle, anomalous patterns that may indicate a security breach.Notably, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been at the forefront, offering promising results in terms of detection accuracy and speed, essential for combating zero-day threats that require immediate action [40].Moreover, the application of ensemble learning models, which combine multiple ML models to improve prediction accuracy, has shown considerable promise in zero-day exploit detection.By aggregating the predictive capabilities of various models, ensemble methods reduce the likelihood of false positives common challenge in the detection of zero-day exploits.This approach not only enhances the robustness of detection systems but also lends a degree of redundancy, ensuring that even if one model fails to detect an anomaly, others might succeed.An emerging area of interest is the use of semi-supervised and unsupervised learning models that excel in environments where labelled data is scarce.Zero-day exploits, by their nature, provide limited examples for training due to their novelty.Semi-supervised learning, which uses a small amount of labelled data along with a larger amount of unlabelled data, and unsupervised learning, which relies solely on the data's structure, are particularly suited to this task.These methodologies help develop models that can identify deviations from normal behaviour patterns, indicating potential zero-day exploits.Furthermore, the integration of artificial intelligence (AI) capabilities with traditional intrusion detection systems (IDS) has resulted in more sophisticated detection mechanisms.AI-enhanced IDS can dynamically adapt to new and evolving threat patterns, a critical requirement in the face of modern, sophisticated cyber-attacks.This adaptive capability is essential for maintaining the effectiveness of zero-day exploit detection systems in a landscape where attackers continually refine their methods.In conclusion, the integration of advanced machine learning techniques into cybersecurity infrastructures has markedly improved the detection of zero-day exploits.While challenges such as data scarcity and false positives persist, ongoing innovations in ML methodologies continue to push the boundaries of what can be achieved in cybersecurity defences [41].As these technologies evolve, they promise not only to enhance the security posture of organizations but also to trans-form the landscape of cybersecurity detection and response strategies fundamentally.Figures (3)(4)(5) represent the results of this review.
Conclusion
The extensive review of the current literature on machine learning (ML) applications in zero-day exploit detection and response culminates in a comprehensive understanding of the field's evolution, current state, and future directions.This paper has traversed a diverse array of methodologies and innovations, highlighting the significant strides made and the challenges that remain.The evolution from basic ML techniques to more sophisticated models, particularly deep learning, marks a significant advancement in cybersecurity capabilities.These techniques have progressively improved in their effectiveness to detect and respond to zero-day exploits, reflecting the dynamic nature of cyber threats and the need for equally dynamic defense mechanisms.The integration of ML with traditional cybersecurity approaches and hardware-supported systems has further enhanced detection capabilities, creating more robust and efficient systems.However, the journey is far from complete.The review has consistently highlighted ongoing challenges, such as high false positive rates, the need for extensive and relevant training data, and the difficulties in real-time detection and adaptation to sophisticated attack patterns.These challenges under-score the necessity for continued research and innovation in the field.An emerging trend, which warrants further exploration, is the utilization of public data sources, such as social media, for early detection of zeroday threats.This approach, coupled with the cross-domain application of ML techniques, presents new opportunities for innovative solutions in cybersecurity.The future of machine learning in zero-day exploit detection and response looks promising yet demanding.It calls for a collaborative approach that integrates advancements in technology with human expertise.The field must navigate the balance between technological advancement and practical implementation, ensuring that the solutions developed are not only theoretically sound but also practically applicable.As we look forward, the field is poised for a new era of innovation, where machine learning is not just a tool but a fundamental component of cybersecurity strategies.The need for adaptable, intelligent, and pro-active systems is more critical than ever in the face of increasingly sophisticated cyber threats.This review paper lays the groundwork for future research, providing a roadmap for the continued evolution and enhancement of machine learning applications in the fight against zero-day exploits.The journey is ongoing, and the pursuit of more effective, adaptive, and intelligent cybersecurity solutions remains a para-mount objective for researchers, practitioners, and policymakers alike.
Figure 1 .
Figure 1.Evolution of Zero-Day Exploits and ML-Based Detection Effectiveness.
Figure 2 .
Figure 2. The methodology used in this review.
Figure 3 .
Figure 3.The methodology used in this review.
Figure 4 .
Figure 4. Evolution of ML techniques and effectiveness in Zero-Day detection.
Figure 5 .
Figure 5. Increase in reported Zero-Day exploits and challenges -false positive rates.
Table 1
. comparison of related work focusing on their primary objectives (Part 2). | 6,745.2 | 2024-06-26T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
More Enlightened Than Thou: The Dangers of Idealizing Knowledge in the Disciplines
In the context of the university's current identity crisis, this essay analyzes the implications of creating a "post-disciplinary" university. A critical discussion of texts by Frank Lentricchia, James Sosnoski, and Judith Butler reveals the institutional and epistemological stakes of postdisciplinary views of the University structure. Against this backdrop, the essay concludes by showing how idealizing knowledge as separable from all strife leads to a systematic underestimation of the disciplines' importance for the philosophical, physical, and administrative structures that undergird the university and the disciplines themselves.
Revolution is the opiate of the intellectuals
As Samuel Weber notes in Institution and Interpretation there is a widespread identity crisis affecting myriad academic disciplines that indicates an ongoing attempt to reconceptualize "the academic division of labor itself" (Weber, x). One example of the attempt to re-think the strict division of academic work into various self-sustaining disciplines is the call for a "post-disciplinary" University. For the purposes of this essay, I will focus on the calls for a post-disciplinary academy by Frank Lentricchia, James Sosnoski, and Judith Butler. All three are intent on reforming the University by eliminating, surpassing or variously challenging the division of fields of knowledge into disciplines. All three identify the disciplines as the locus of essentialist assumptions about knowledge which departmentalize academic work in the name of autonomy1 and professionalism. What is missed in the arguments promoted by Lentricchia, Sosnoski, and Butler is the degree to which the disciplines are foundational to the idea of the University. In calling for a "post-disciplinary University" all three fail to adequately read the history of the University and mistakenly assume that the disciplines are separable from the University, when the disciplines are perhaps the definitive mark of the University. Each re-enacts the enlightenment idealization of knowledge by replaying the mind/body split. All three are able, therefore, to completely ignore the physical manifestations of the disciplinary structure, the University in all of its institutional manifestations, and argue that a reformation of the disciplines (mind) is sufficient to make the University (body) healthy again. This misreading of history is perpetuated in the call for a surpassing and/or negating of the disciplines represented by the use of the prefix "post" which, while intent on questioning the bounds of disciplines like history, ends up suggesting the teleological view of history that underwrites the legitimacy of history as a discipline in the first place. Finally, the failure to recognize the constitutive nature of the disciplines for the structure of the University prevents them from taking into account either the organizational posts of the university, the faculty chairs, administration and staff, or the physical posts which demarcate the university with a series of gates. Having correctly identified the disciplines as a key issue of the University, Lentricchia, Sosnoski, and Butler seriously underestimate the influence that the disciplinary ideal has on the University. In limiting their calls for reform to "the disciplines" (History, English, Philosophy), rather than raising the much broader question of disciplinarity, they are able to deploy the notion of the post-disciplinary as a panacea that would seem to address some nagging intellectual issues, specifically the essentialization of knowledge, without having to exercise themselves unduly about the greater ramifications of the disciplinary structure that underlies the University.
Lentricchia, unlike Sosnoski and Butler, does not champion the postdisciplinary directly. Rather, he has a champion, Kenneth Burke,2 and contents himself with retelling Burke's anti-disciplinary exploits. Burke, for Lentricchia, highlights the limits in the disciplines, most specifically in this case History: For more than fifty years, Burke . . . has been telling us that the conventional division of the humanities, with literature, philosophy, history, linguistics, and social theory . . . is all, at best a lie of administrative convenience, and at worst, a re-enforcement in our institutions of higher education of bourgeois-capitalist hegemony. (Lentricchia, Lentricchia characterizes Burke's career as a series of strategic engagements with the question of history by a "resistance to system [the system of the disciplines] and in particular by a resistance to the essentializing consequences of systematic thought" (121).3 Burke's approach to analyzing history is problematic only when historians try to bring him into the disciplinary setting of the academy. So where Burke generally ignores the disciplines, Lentricchia wants to import Burke into the academy as a model for University intellectuals who want to be postdisciplinary.Lentricchia argues that Burke "with his rare integration of the resources of technical, formalist criticism and social and political investigation . . . set[s] standards for the ideological role of intellectuals that contemporary theory would do well to measure itself by" (Lentricchia,147). Lentricchia is explicitly challenging us to go "beyond" the limits of disciplinary studies and to integrate, like Burke, divergent fields in our work. Burke is not the only model for post-disciplinarity, however, as announced by the title to the essay, "Reading History with Kenneth Burke"; to the extent that Burke has accomplished the resistance to disciplines outside the academy, Lentricchia will replay it "with Burke" inside the academy and, hence, casts himself as already starting to post the disciplines, by performing what he calls elsewhere4 "radical" acts of reading.
Sosnoski, like Lentricchia, finds the "ideals and goals of disciplinarity . . . no longer defensible intellectually or politically" (Sosnoski,7). Elaborating this argument in "Literary Study in a Post-Modern Era: Rereading its History," Sosnoski finds the humanities suffering from the "alienating disciplinary practices of examining, hiring, promoting, granting, and so on" which finally "marginalize and tokenize" the intellectual (Sosnoski,8). Casting himself as a "post-modern" and invoking a "Lyotardian" sense of history, he wants to critique the disciplines and move to a "post-disciplinary" inquiry in which the "political character of literary studies surfaces" (Sosnoski,28). By challenging the authorities which constitute and underwrite the disciplines, we could, it would seem, move beyond academic structures of essentialist representation to a post-modern inquiry that would not be totalizing; it would not ascribe to a disciplinary epistemology; it would concern subjects rather than objects; would not have as its generative principle a logic of consistency. (Sosnoski,26) The point of this is not, as Sosnoski reassures us, to challenge the institution per se, but to achieve a "reinstitutionalization of literary studies" (Sosnoski,28) that will increase the institutional power of the humanities visa-vis the sciences (28). The move to the "counter" or "post-disciplinary" would provide a basis for a literary intellectual who, freed from the constraints of essentializing approaches to knowledge and with a now visible investment in politics, is both more connected with the "world" and more powerful within the University.
Butler's invocation of the post-disciplinary, while only a scant few paragraphs in the introduction to Gender Troubles (compared to full articles by Lentricchia and Sosnoski), repeats several key claims made by them for the post-disciplinary. Butler, centrally concerned with the question of representing gender, finds a disturbing narrative of domestication at play in the representation of women in feminist theory: the development of a language that fully or adequately represents women has seemed necessary to foster the political visibility of women. This has seemed obviously important. (Butler,1) But the full and adequate representation of women required that "the qualifications for being a subject must first be met before representation can be extended" (Butler,2). Representation in this instance shares many characteristics with submission, as the assumption of the represented subject position requires aligning oneself with pre-existing definitions of subjectivity, even though many of those definitions are anathema to feminism. In order to resist this kind of submission, but without completely abandoning the power associated with representation, Butler calls for us to "participate in whatever network of marginal zones spawned from other disciplinary centers and which, together, constitute a multiple displacement of those authorities" (Butler, xiii). From such positions one can receive some recognition as well as challenge the disciplinary structure that underwrites the restrictive definition of the subject. Butler then argues that such representations of "the complexity of gender requires an interdisciplinary and post-disciplinary set of discourses in order to resist the domestication of gender studies and or women studies within the academy" (Butler, xiii). The "post-disciplinary" would resist the essentialization of the "subject," both in the sense of the topic of research and the subject position of the researcher. As in Lentricchia and Sosnoski it would also empower the researcher and enable her to "radicalize the notion of feminist critique" (Butler, xiii).
Absent from all three calls for the post-disciplinary, however, is a consideration of the history of the disciplines. The failure to recognize the history of the disciplines acts to reinforce the idealized status of disciplinary knowledge; the disciplines seems literally outside historical and material constraints. Such an oversight is not surprising in Butler's brief rhetorical statement, but odd in Lentricchia's essay given its lengthy meditation on history and astonishing in Sosnoski's essay, whose title, "Literary Study in a Post-Modern Era: Rereading its History" would seem to announce that he will in fact give us some history of literary studies. We do receive in Sosnoski a brief "history" of "The Concept of a Discipline," the title of the first section of his paper, which goes all the way back to 1972 and Stephen Toulmin's ruminations on the difference between a discipline and a profession.5 What is lost in this historical blindness is the inextricable link between Universities and the disciplinary structure. In The Conflict of the Faculties (1798) Kant presents the idea of the University as the idea of the disciplinary division of labor: Whoever it was that first hit on the notion of a university and proposed that a public institution of this kind be established, it was not a bad idea to handle the entire content of learning . . . by mass production, so to speak -by a division of labor, so that for every branch of the sciences there would be a public teacher or professor. (Kant,23) While Kant presents the idea of the disciplines as a "happy idea" the political motivations behind his argument are thinly veiled. As Derrida puts it: "Kant is well aware that he is in the process of justifying, in terms of reason, what was a de facto organization determined by the government of his day" ("Mochlos," 5). So Kant, one of the founders of the enlightenment, is in fact motivated to present knowledge as ideal because of the political realities of the late 18th century and seems far more aware of the politically contingent status of knowledge than our three authors. The disciplinary division he argued for was, following the debate over the founding of the University of Berlin, carried over as a key component of the German Education model that would have a huge influence on higher education in Germany and the United States. The founding of the idea of the University was, therefore, intricately bound up with the specific structure of the disciplines. The disciplinary structure was itself intricately bound up in the political expediencies of Prussian politics in the late 18th and early 19th century. It is not surprising that Lentricchia, Sosnoski, and Butler are suspicious of the disciplines given this political heritage; it is, however, a mistake to read the political liabilities of the disciplines as separate from those of the University.
By overlooking the foundational importance of the idea of disciplines, Lentricchia, Sosnoski, and Butler suspend the history of the University they would change. For example, in America throughout the 1800s, the German Education model made inroads, lost ground, had successes here and setbacks there, but generally came to be the dominant model of education by the early 1900s. Introducing along with the disciplines the related concepts of scholarship and publication, the development of resources such as libraries and laboratories, and the now ubiquitous idea of the lecture, the German model replaced a College centered system of higher education in which, [if] a college had a building, it had no students. If it had students, frequently it had no building. If it had either, then perhaps it had no money, perhaps no professors; if professors, then no president, if a president, then no professors. (Rudolph,47) The adoption of the University model made a major contribution to stabilizing these elements. The disciplinary model, with its emphasis on research and original scholarship, required teaching space, library space, research facilities and stable administrative structures. What the German model slowly replaced was an essentially medieval system in which all students took the same courses, predominated by Greek, Latin, and religious studies, were taught as much by tutors as by professors, and spent most of their class time in recitation and drills. In 1872 all Harvard students, for example, took the same four years of prescribed courses, no electives, no specialization. However, with the introduction of the German "disciplinary" model and a name change from Harvard College to Harvard University, "by 1897 the prescribed course of study at Harvard had been reduced to a year of freshman rhetoric" (Rudolph,294). The elective system allowed students to freely pursue studies in a University now structured by the disciplines and with a faculty that increased in the course of 30 years from 20 members under the old college system to more than 80 in 1897 not counting numerous lab and research assistants (Carnochan,12).
The historical misreading that sees the question of the disciplines as distinct from the question of the University allows Lentricchia, Sosnoski, and Butler to imagine change in the disciplines, even the elimination of disciplines, as taking place against the otherwise stable institutional structure of the University. This separation allows them to deploy the prefix "post" to signify a teleological movement in which the University as a whole is improved through upsetting the disciplines. At its most basic level, the post is used as a negation, a "displacement" (Butler) or "replacement" (Sosnoski), a doing away with something, in this case the disciplines, that we usually do not like, so that we can replace it with something, in the best tradition of ad-speak, "New and Improved." Deployed as a negation, the post works as a "radical break," of one period giving way to another markedly different period that fills our desire for an improved future. The post marks the beginning of the new, the breaking away from, in terms of the University, the disciplinary. This breaking away from the old is implicit in the desire for a new that the post-disciplinary represents. Enunciating the post, therefore, places us in the present, looking forward to a break that will leave behind the present we are currently in (the present of disciplines). This suggests to me a kind utopic hope for the possibility of "making it [in this case the University] new." While using post in this teleological sense, all three critics simultaneously muddy the historical waters further by invoking a much more Lyotardian6 sense of the post. Although the post seems primarily a call for a better future, it is also involved with redefining the present. Sosnoski's repeated claim of being a "post-modern," Lentricchia's performance of counterdisciplinary reading and Butler's affirmation of positions on the "critical boundaries" suggests that the post-disciplinary is in part already upon us. The newness of post-disciplinarity is comprehended in this sense by a looking back, in this case back to the disciplinary. In describing a hoped-for future, a past of the disciplinary is inscribed which is already being surpassed. The post, in other words, thrusts the present into the past. We incarnate ourselves, through a bit of historical sleight of hand, as the nolonger-disciplinary. The post functions as a negation that invokes a utopian future in which we have surpassed our present problems, and as a marker for where we are. The desire for a better future is partly fulfilled in this deployment of the post, in that we can have newness, partially problematized, in that the new has magically become now, the now of the disciplines, and the now is what Lentricchia, Sosnoski, and Butler were hoping to change. The mis-recognition of the formative role of the disciplines causes all three to try to recuperate changes that are already happening while not being sure if those changes are the joyous harbingers of the post-disciplinary University. Once the impulse to essentalize knowledge is shifted from the disciplines themselves to the disciplinary structures of the University, the inter-disciplinary that seemed before so inviting can be more clearly recognized as a different disciplinary alignment within the same essentializing structure of which we should be wary. In this case, Lyotard's sense that we are always still in the modern and therefore dealing with many of same old problems of modernity should serve as a reminder that the post-modern, like the post-disciplinary, is as troublesome as the modern. Whereas Lyotard warns us about the continual presence of the modern in the post-modern, Lentricchia, Sosnoski, and Butler endorse and perform the "new." Their performances end up conserving a traditional historical view, however, by re-enacting the morality play of the revolution.
The lack of historical articulation seems partially recuperable, however, through arguing that the changes in the disciplines are not synchronous, that while we may be partially past the disciplines, we are not yet in the post-disciplinary, and the changes that are currently taking place are therefore not part of the program. In this way it is possible to see Butler's call for us to take up our positions within the liminal spaces between disciplines as perhaps the beginning of the post that will carry us into a being in the new. Unfortunately, such a strategy remains problematic. The post as a looking forward, as a call for a reorganization of academic disciplines, does not function quite so smoothly because, among other reasons, the historical forces which are bringing about the changes in the University are not entirely under our control. The "post" is a product -a byproduct, really -of the historical development of international forces, specifically late capitalism, over which we have little say. Any change we might desire in the organization and culture of the University cannot escape the logic articulated by Benjamin that the underside of culture is blood, torture, death, and murder. We should not forget what Kant seemed so aware of in 1798, that the structure of the disciplines and therefore the University, is in large part an outgrowth of the political situation in which the University exists. Insofar as we find the disciplines a suspicious idea because they came about under the direct authority of a Prussian king, we should be wary of just what power is at work, if any, undermining the disciplines today.
The assumed stability of the University that allows for the teleological deployment of the prefix post, as we have seen, completely ignores the greater disciplinary structures of the University. Having separated the question of the disciplines from the question of the University, the three critics are free to ignore all other aspects of the University, including the administrative and physical "posts" that delimit the University as a unique space. Consider the faculty posts, the appointments and chairs that are occupied at various times by various people and that mark the organization within the University as various gates mark the boundaries of the University. Derrida's discussion of Cornell, for example, in the "University in the Eyes of its Pupils" is framed by his being given a new post there "as an Andrew Dickson White Professor-at-Large" (Derrida,5). Although he had lectured at Cornell a number of times, the post is literally a new position within and relative to the University: In this case the title with which your University has honored me at once brings me closer to you and adds to the anguish of a cornered animal. Was this inaugural address a well chosen moment to ask whether the University has a reason for being? (Derrida,5) Derrida's nervousness at being a cornered animal is that of the animal being domesticated (both the fear of capture and of looking forward to the perhaps inevitable slaughter that accompanies domestication). His nervousness derives from his new post, which despite being at-large, is an identity very much within the University. Just as Butler has argued it should, his representation within and by the University makes him nervous for his being. His questioning of the being of the University is an awkward issue at such an address because it questions both the University and his position as constituted by the institution. This tension also raises the question of whether or not one can ask the question of the being of the University from a post, even if it is, as in this case, an outpost, of the University. The attempted resistance to essentialist subjectivity which Butler and Sosnoski call for in the post-disciplinary seems to overlook the construction of being in the representations of those posts which, while they may be disassociated from all the disciplines, i.e. at-large, still contain all of the problems of representation.
Further, the attempt to stabilize the University as the scene for disciplinary change necessarily overlooks the rather complicated series of outposts, gates, and gatekeeping that mark the physical space of the University because idealized knowledge has no physical bounds. One remarks at this juncture that it is a tradition at Indiana University and at many other schools for classes, i.e., the class of 67, to leave tributes to the University, among which gates figure prominently. In this way, classes leave behind a tribute that marks both the integrity of the University as a delimitable space, and its permeability, witnessed both by the gates and by the fact that the class has indeed left. People move in and out of the University, carrying on a correspondence between the University and the "outside" literally through the post(s).7 The difference which the gates mark between the University and the not-University would at first seem to suggest the clear demarcation between the new and the old, the University and the not-University. The gates serve to differentiate the physical structure of the University which, if one is to think the University, one has "to think at one and the same time of the entire 'Cornellian' landscape -the campus on the heights, the bridges, and if necessary the barriers above the abyss" (Derrida,17). The calls for a post-disciplinary University did not take the structure, the stuff of the University and its surroundings8 into account at all. They left out a consideration of the space of the University entirely.
The correspondence the University carries on through its posts and gates emphasizes not the integrity of the University but its permeability. In Butler's call to occupy "marginal" zones between the disciplines, she overlooks the very real degree to which the University in its entirety can be seen as a kind of marginal zone. Derrida's pointing to the "bridges" of Cornell emphasizes the site of entrance and egress to the main campus, those liminal spaces which make it difficult to say just where the University begins and ends. Further, the advent of the Internet and other communication technologies further blurs the boundaries of the University as professors and students begin to work together across thousands of miles and almost completely outside the purview of University oversight. The valorizing of marginal spaces becomes somewhat suspect in terms of both its failure to challenge the integrity of the physical space of the University so important to its being and, simultaneously and quite paradoxically, in its failure to deal with the nature of the University as in some ways a definitively marginal space.9 Moreover, the call for a post-disciplinary University overlooks the traditional role of "gatekeeper" played by University professors organized along disciplinary lines. The very physical permeability of the University discussed above has been made possible to a large extent by reliance on careful monitoring of those who would pass through the gates. To return to Kant, it is the responsibility of "faculties," organized by departments with each having a "Dean," to admit to the university students seeking entrance from lower schools and, having conducted examinations, by its own authority to grant or confer the universally recognized status of "doctor." (Kant,23) In the role of gatekeepers, administrators now function to admit students on the undergraduate level. But at the graduate level, it is largely still professors who admit students, and at all levels it is departments that confer degrees and individual faculty members within those departments who do the evaluating. Therefore, the physical space demarcated by gates, the departments and professors functioning as gatekeepers to patrol that space, and the whole associated administrative apparatus combine to form the disciplinary milieu that is the University. The attempt to dissociate the disciplines from the University, were it to be seriously attempted, would undo the space of the University completely.
The systematic idealization of knowledge in the disciplines systematically ignores importance of disciplinary structures to the University overall. This leads to an overly simplistic notion of the post-disciplinary that obscures the complicity of the University with capitalism, promotes a teleological notion of history, a naive sense of the revolutionary new, and essentializes the University as a static structure to which all instabilities and marginalities are threatening. Most disturbingly perhaps, and here I return to Weber with whom I opened this essay, there is clearly a re-evaluation of "the idea and ideal of knowledge" (Weber, ix) taking place in Universities throughout the US. The nearly ahistorical call for the post-disciplinary, by obscuring the interrelationship between the disciplines, politics, structures, space, and administration of the University, makes it difficult to evaluate what the political and intellectual stakes of the ongoing changes are. While it is unlikely that we will be able to figure out what exactly is going on, the failure to recognize the importance of the disciplinary structure in its broadest connotations renders it impossible to evaluate what the ongoing developments in our Universities might mean.
Masters or Ph.D. Burke was also never a full-time employee of a University and was in many ways reclusive. Burke was therefore never invested in the disciplines either professionally or educationally. By and large, Burke simply ignored disciplinary concerns in his work. And when he did raise questions about the disciplinary division of learning in the University, as he does in Permanence and Change, it is in the context of raising questions about the University in general as a "bureaucratic" institution. Lentricchia symptomatically mis-reads Burke on this score both in "Reading History with Kenneth Burke" and Criticism and Social Change.
3Citing Burke as universally opposed to systematic thought is, at least, a bit perverse. One of Burke's main critical developments, Dramatism, was an attempt to provide a complete system to account for man's actions in history, art, and politics (See Grammar of Motives).
4In Criticism and Social Change, page 2. 5Throughout his essay Sosnoski's discussion of the disciplines shifts registers between Toulmin's abstract considerations of the concept of a discipline and the institutional manifestation of disciplines in departments. This allows him to avoid on the one hand the institutional history of the disciplines by appealing to Toulmin's universal category of the discipline and then, shifting back to the institutional setting, avoid Toulmin's idea that disciplines don't end, they just alter their configurations over time and hence can never be really "posted." 6It is in terms of looking forward and back that the post of Lyotard and his question of "When, then, is the postmodern" (80), comes into play: The postmodern would be that which, in the modern, puts forward the unrepresentable in presentation itself . . . Postmodern would have to be understood according to the paradox of the future (post) anterior (modo) (81).
In terms of the modern, as well as the disciplines, the post is not a beyond. The post is definitively "in" the modern, albeit in the "future" and/or in the "lead." But this lead which the post holds may fall behind, may fall literally into the past as the "essay (Montaigne) is postmodern" (81). Montaigne's formative experiments with the essay established the essay as a postmodern genre over four hundred years ago. So the post is not necessarily a looking to the future. Further, while we can certainly find the post in the past, in Lyotard it does not function as a negation. In what might be called a "first past the post" system, the post can be in the present, future or past; can in fact be passed up by the past [postmodern forms -for instance the essay again -influence newer forms -say the fragment -which are newer evolutions of the "original" postmodern form and yet which are modernisms -"the fragment (Anthaeneum) is modern" (Lyotard 80)]. While various postmodern forms may move towards the future and postmodernity may be "in the lead" it can never past the modern -in the sense of either thrusting it into the "past" or as a negation.
7This is a particularly complex notion ruminated on at length in Derrida's The Post Card as the movement and meaning of the physical postcard is overlaps and has similarities to but is not congruently homologous to the dissemination and meaning of the text of the postcard.
8The general importance of the University "setting" is evidenced by the prohibitions found at most Universities against altering landscapes or buildings.
9I think this is particularly clear at Indiana University. I.U., which is comprised of numerous of large buildings and striking gates, has also bled over into surrounding neighborhoods taking over a group of houses here, an individual house there, so that it is literally impossible to tell with any accuracy what the boundaries of the campus might be. Add to this a parking garage in the middle of downtown, a nuclear acceleration facility a number of miles out of town, and a new telescope being constructed by I.U. and several other Universities in Arizona, and it becomes less than clear just where I.U. is. | 6,963.6 | 1995-01-01T00:00:00.000 | [
"Philosophy",
"Education"
] |
Size- and morphology-dependent optical properties of ZnS:Al one-dimensional structures
Typical morphology substrates can improve the efficiency of surface-enhanced Raman scattering; the need for SERS substrates of controlled morphology requires an extensive study. In this paper, one-dimensional ZnS:Al nanostructures with the width of approximately 300 nm and the length of tens um, and micro-scale structures with the width of several um and the length of tens um were synthesized via thermal evaporation on Au-coated silicon substrates and were used to study their size effects on Raman scattering and photoluminescent spectra. The photoluminescence spectra reveal the strongest green emission at a 5 at% Al source, which originates from the Al-dopant emission. The Raman spectra reveal that the size and morphology of the ZnS:Al nanowires greatly influences the Raman scattering, whereas the Al-dopant concentration has a lesser effect on the Raman scattering. The observed Raman scattering intensity of the saw-like ZnS:Al nanowires with the width of tens nm was eight times larger than that of the bulk sample. The enhanced Raman scattering can be regarded as multiple scattering and weak exciton—phonon coupling. The branched one-dimensional nanostructure can be used as an ideal substrate to enhance Raman scattering.
Introduction
Surface-enhanced Raman scattering (SERS) spectroscopic methods are widely used to identify and provide structural information regarding molecular species at ultra-low concentrations (Kneipp et al. 1997(Kneipp et al. , 2006Nelayah et al. 2007). The efficiency of a SERS substrate is described by its enhancement factor (EF), which is quantitatively related to the ratio between the enhanced electric fields at the metallic surface where the Raman scattering molecule is located and the incident field away from the surface. To improve SERS, different types of techniques have been used to provide more focused ''hotspots'' and enhance large electromagnetic fields. Furthermore, studies have shown that SERS relies strongly on controlling the nano-and micro-scale morphology of metal nanostructured SERS substrates (Farcau and Astilean 2014); as the particles are placed in a highly ordered regular array the long-range photonic interactions can produce sharp resonances (Genov et al. 2004), at the same time nanoparticles exhibit shape anisotropy and posses multiple resonances. Therefore, the localized surface plasmon resonance has size, shape, material, and interparticle spacing effects. For example, designing 3D mesoscopic multipetal flowers assembled by metallic nanoparticles as SERS substrates to improve surface-enhanced Raman spectroscopy has been reported (Jung et al. 2014); using holographic laser illumination of a silver nanohole array, the authors observed dynamic placement of locally enhanced plasmonic fields (Ertsgaard et al. 2014).
Although there have been some discussions regarding enhanced Raman scattering, the mechanism behind it is still up for debate. Typically, the main mechanism of the enhanced Raman scattering is considered to be a combination of two effects: an electromagnetic effect as a result of an increase in the strength of the electric field at the molecule and a resonance-like charge transfer effect. These locally enhanced fields are called plasmonic ''hotspots'', which can produce up to 14-15 orders of magnitude enhancement of the surface plasmon compared to normal Raman scattering caused by the creation of an electromagnetic resonator on the surface of the nanomaterial (Camden et al. 2008). The charge transfer enhancement is typically less than two orders of magnitude (Haynes et al. 2005). In Kneipp et al. 1997, seven orders of SERS enhancement and multiresonance features in the entire visible frequency range were achieved by plasmonic hot-spot engineering by increasing the number of petals from four to eight. By studying the effects of the surface roughness, strong Raman scattering effects (Reyes et al. 2010) were found, which were ascribed to the polarizationdependent behavior of CdS. Furthermore, the enhanced Raman scattering and enhanced exciton-LOphonon coupling observed in the CdS ripple-like MBs (Zeng et al. 2014) were ascribed to larger exciton polarizations caused by surface defects via the Fröhlich interaction. The enhanced Raman scattering observed for ZnSe nanoparticles was ascribed to an aggregation of the ZnSe particles, resulting in dipolar interactions and improvement of the Fröhlich interaction (Li et al. 2010). Recently, a remarkable increase in the Raman sensitivity of optimized Si nanowire arrays was determined to result from surface multiple scattering, characterized by a large spatial extension (approximately fifty nm) (Bontempi et al. 2014). By studying the different concentrations of Mn doped into the CdS nanoparticles, the researchers found (Zhao et al. 2013) that the enhancement in the 1LO-phonon intensity was caused by the interstitial Mn dopants, which decrease the NC surface deformation potential because of the small dielectric constant of the metal, resulting in enhanced coupling between the LO phonon and the surface plasmon. Some people (Lin et al. 2004) ascribed the selective enhancement of Raman intensity to the presence of a gold catalyst, which was responsible for plasmon scattering at the ZnS/Au interface; however, the changes in the Raman intensity were not obvious for the ZnS powders in the absence of the gold catalyst.
There are many discussions on SERS, but most of them are based on the optical near-field intensity enhancement on the metallic (plasmonic) substrate; the mechanism of the enhanced Raman scattering from low-dimensional nonmetallic materials should be studied. Unlike photolithography (Perney et al. 2006), etching (Alvarez-Puebla et al. 2007, and spark discharge (Jung et al. 2014) methods which were used to prepare the SERS substrates, in this paper the simple thermal evaporation method was used to prepare one-dimensional (1D) periodic structures, that is, saw-like and firry leaf-like ZnS:Al nanowires, as well as normal and banana leaf-like ZnS:Al microbelts. Their lattice structures and dopant concentrations are discussed from XRD images and the Rietveld method. Raman scattering and photoluminescence measurements were used to study the effects of the sizes and morphologies on the optical properties, where the morphology-dependent Raman scattering was observed, at the same time, the mechanism of the enhanced Raman scattering was discussed.
Experimental section
ZnS:Al nanowires were deposited on Au-coated Si substrates in a horizontal tube furnace via thermal evaporation. The zinc sulfide powder [ZnS (99.999 %)] and aluminum powder [Al (99.99 %)] source materials were obtained from Aladdin (Shanghai, China) and Sinopharm Chemical Reagent Co, Ltd. (Shanghai, China), respectively. First, 0.3 g ZnS powders and different contents of Al powder were placed separately into two identical quartz boats, which were then placed at the center of the heating zone. Then, one Si (100) substrate coated with an Au film was placed obliquely downstream of the Al powder and subsequently ZnS powders at a distance of 11 cm. Before heating, the system was purged with 522 standard cubic centimeter per minute (sccm) of high-purity argon (Ar, 99.999 %) for 30 min. Then, the pressure was reduced to 7.5 9 10 -2 Torr for the duration of the reaction. Next, the furnace was heated to the desired temperature of 1050°C at a heating rate of 10°C/min and was maintained at this temperature for 30 min with a constant Ar flow of 50 sccm. After the system was cooled to room temperature, a whitecolored, wool-like product was deposited onto the silicon substrate. Finally, four types of ZnS:Al samples were obtained with atomic ratios of Al to ZnS equal to 3, 5, 7, and 10 % in the two initial quartz boats, which were labeled as #A1, #A2, #B1, and #B2, respectively.
The as-synthesized products were characterized via X-ray diffraction (XRD-7000, Shimadzu) using Cu Ka radiation (k = 0.15406 nm). The XRD data were collected in the 2h range of 10°-65°using a continuous scanning method at a scanning speed of 10°(2h)/min. The morphology and microstructure of each sample were observed using a field-emission scanning electron microscope (FESEM, s-4800II, Hitachi) equipped with an X-ray energy dispersive spectrometer (EDS) and a high-resolution transmission electron microscope (HRTEM, Tecnai F30, FEI). The absorption measurements were performed using a UV-Vis-NIR spectrophotometer (UV-Vis, Cary-5000, Varian) with an integrating sphere. The Raman measurements were performed using a Renishaw In Via equipment. The photoluminescence (PL) measurements were performed on a FLS920 instrument.
Structures and morphologies of the ZnS:Al samples
The XRD patterns of the ZnS:Al samples are shown in Fig. 1. To determine the unit cell parameters, the XRD patterns were analyzed by a simple Pawley refinement method using the Topas Academic V5.0 software. The XRD images show that nearly all of the peaks agree with the ZnS wurtzite structure (JCPDS 36-1450), except for a small peak at 32.8°. Furthermore, the ZnS NW exhibits a space group of P63 mc, in which the main peaks at 2H = 28. 52, 47.50, 26.89, 30.50, and 56.33°correspond to the directions of the (002), (110), (010), (011), and (112) planes, and the small peak at 32.8°agrees well with the (200) plane of ZnS zinc blende structure (JCPDS 65-0309). Because the (002) and (110) planes of the ZnS wurtzite structure overlap with the (111) and (022) planes of the ZnS zinc blende, the structures of the ZnS:Al samples can be regarded as a mixture of wurtzite and zinc blende structures with a dominant wurtzite ZnS. Additionally, the lattice constants were obtained from the Rietveld Refinement Treatment, as shown in Table 1. For the lattice constants, a = b = 3.8284 and c = 6.2573 Å were found for the undoped NWs; and a = 3.8264, 3.8259, 3.8255, 3.8250 Å and c = 6.2578, 6.2549, 6.2532, 6.2511 Å were found for #A1, #A2, #B1, and #B2, respectively. Compared to the undoped ZnS, the lattice constant a contracted by 0.197, 0.25, 0.286, and 0.342 %, and the constant c contracted by -0.054, 0.243, 0.408, and 0.623 % were found for samples #A1, #A2, #B1, and #B2, respectively. Except for the constant c of sample #A1, the lattice constants a and c decreased with increasing content of the Al source. A gradual decrease in the lattice parameters indicates that an increasing number of Al atoms are successfully incorporated into the ZnS host as the Al source content increases, which is consistent with the relations of Vegard's law, as shown in Fig. 2. With the increase of Al 3? ions, the lattice parameters of the ZnS:Al host decreases because the ionic radius of Al 3? (54 pm) is Al at the atomic ratios of Al to ZnS sources equal to 3 % (#A1), 5 % (#A2), 7 % (#B1), and 10 % (#B2) from the two initial quartz boats. Both the wurtzite structure (JCPDS 36-1450) and zinc blende structure (JCPDS 65-0309) are listed for reference smaller than that of Zn 2? (74 pm). Furthermore, an abnormal increase of the lattice constant c of sample #A1 can be regarded as the surface tensile induced by its typical morphology. Because the Al-dopant concentration is very low when using the thermal evaporation method and is hard to identify in the EDS because of its small quantities, the XRD Rietveld Refinement is used to provide evidence for the incorporation of atoms into the ZnS host. The quantitative elemental analysis from the EDS spectra for the ZnS:Al samples indicate that Zn and S are major elements; however, the S/Zn atomic ratio is less than one, indicating that sulfide vacancies are present in the ZnS:Al samples.
The morphologies and compositions of the products were characterized via FESEM, as shown in Figs. 3 and 4. Figure 3a, b represent the SEM images of sample #A1, whereas Fig. 3c, d represent the SEM images of sample #A2. For sample #A1, some of nanowires have a main bough and many needle-like twigs distributed alternatively along each side. The needle-like twig has a diameter of tens to one hundred nanometers at the base in contact with the bough, and the diameter of the tip of the needle is several to a few tens of nanometers. Additionally, there is a several to tens of nm separation between two neighboring needles, making the structure resemble a pine leaf. And some nanowires have a saw-like structure with a width of the order of tens of nanometers. For sample #A2, each NW resembles a firry leaf with a length ranging from tens to several hundred nanometers and a width of several hundred nm, but without the main bough and twigs, as shown in Fig. 3c, d. From Fig. 3 a-d one can find that the size of sample #A2 is larger than that of sample #A1. Sample #B1 has a microbelt shape with a width of 1-3 micrometers and a length in the tens of micrometers, as shown in Fig. 4c. Sample #B2 is shaped like a banana leaf and exhibits the same size as sample #B1 but with a greater thickness, as shown in Fig. 4d. Sample #B2 more closely resembles the bulk material.
Absorption and PL spectra of the ZnS:Al samples UV-Vis absorption spectra were obtained by measuring the optical absorption spectra on a UV-Vis spectrophotometer (Cary-5000) with an integrating sphere to guarantee a better measurement result, as illustrated in Fig. 5. The absorption spectra of samples #A1, #A2, #B1, #B2, and undoped ZnS semiconductor can be extracted from the formula (ahv) = c(hv -E g ) 1/2 , where c is a constant, E g is the band gap, and the direct band gaps estimated from a plot of (ahv) 2 versus the photon energy hv were equal to 3.626, 3.586, 3.612, 3.608, and 3.611 for undoped, #A1, #A2, #B1, and #B2, respectively. The band gap values show that the Al-doped samples have a slightly smaller band gap in comparison to the undoped sample, indicating that some Al atoms have been doped into the ZnS hosts. Sample #A1, with an atomic ratio of Al to Zn source equal to 3 at%, has the lowest band gap, which can be explained by an increase of surface defects in addition to the Al dopant because of the relatively smaller size and the large surface to volume ratio. The PL properties of the undoped and Al-doped nano-and micro-scale ZnS structures were investigated via PL excitation at 325 nm to further assess their doping and structure qualities, as shown in Fig. 6. For the undoped ZnS structure, a slightly asymmetric emission at 518 nm was observed, and the Gaussian fitting revealed a dominant emission at 514 nm with a weak emission at 550 nm. The peak at approximately 514 nm has previously been attributed to the S vacancies in ZnS nanowires (Zhang et al. 2005), or ascribed to the peak at 518 nm representing the transitions of electrons from S vacancy states to zinc vacancy states (Kumar et al. 2006). The peak at 550 nm originates from bulk defects (Lei et al. 2011). For the Al-doped ZnS, a broad and asymmetric emission spectrum at approximately 500 nm was observed, and the deconvoluted peaks at *490 and *540 nm can be identified using a Gaussian fitting, as shown in Fig. 6c-f. For the four Al-doped ZnS nanostructures, sample #A2 (with 5 at% Al content in the source) exhibited a stronger emission than the other three samples, which is consistent with other reports, in which the highest luminescence intensity corresponded to the Al-dopant concentration of 6 at% for the ZnS:Al films (Prathap et al. 2009). Herein, the enhanced emission at 493 nm can be ascribed to the Al-doped ZnS nanoparticles, presumably from the donor-acceptor pair (DAP) transition (Nagamani et al. 2012); the observation of which is considered to be an obvious evidence for the substitution of Al in the host lattice in ZnS:Al nanoparticles (Reddy et al. 2014). The green band at 530 nm may be related to some native defects of pure ZnS (Sotillo et al. 2012(Sotillo et al. , 2014, such as zinc vacancies (Mitsui et al. 1996, Chen et al. 2011) or S vacancies (Ye et al. 2004Jiang et al. 2012). A decrease in the emission intensity at 500 nm with Aldopant concentration can be explained by the formation of defect complexes (Al donors) that induces the structural disorder caused by an increase in non-radiative recombination (Prathap et al. 2009). Raman spectra of ZnS:Al nanostructures Raman spectral measurements of the samples were conducted using an excitation wavelength of 532 nm with a 3 % incident laser light power of 300 mW to study the effects of size and morphology on the Raman scattering spectra. The room-temperature Raman spectra for undoped and different Al-doped ZnS materials are shown in Fig. 7a. For all of the samples, a sharp peak at 348 cm -1 was clearly observed. For the sample #A1 in Fig. 7a, the following peaks were clearly identified: a mode at *217 cm -1 corresponding to the second-order longitudinal acoustic (2LA) phonon in ZnS (Cheng et al. 2009); a peak at 414 cm -1 associated with the LO?TA phonon mode; two peaks at 639 and 669 cm -1 attributed to the second-order optical phonon LO?TO and 2LO modes (Lin et al. 2004;Krol et al. 1988), respectively; two small peaks at 155 and 178 cm -1 assigned to disorderactivated second-order acoustic phonons (Scocioreanu et al. 2012); and a broad peak at 276 cm -1 , which consists of the unresolved A1(TO) and E1(TO) phonon modes at 273.3 cm -1 and the E2(TO) phonon mode at 286.0 cm -1 , as discussed in (Radhu and Vijayan 2011;Adu et al. 2006). Furthermore, for all samples, the peak at 348 cm -1 can be fitted using three Lorentzian curves, as shown in Fig. 7b, where the decomposition of the peak at 348 cm -1 from sample #A1 exhibits three peaks located at 347.60 (called as P1), 350.14 (P2), and 337.2 cm -1 (P3) with fullwidth-half-maxima (FWHM) of 4.377, 3.733, and 9.786 cm -1 , respectively. The peaks at 347.60 (P1) and 350.14 cm -1 (P2) can be regarded as the E 1 (LO) and A 1 (LO) vibration modes, in which the E 1 (LO) mode results from the lateral morphology of the ZnS nanowires parallel to the c-axis, as discussed in (Lu et al. 2005), whereas the peak at 337.16 cm -1 is assigned to a surface phonon mode (Kim et al. 2012). For the four samples, an identical decomposition was performed with nearly the same magnitude of the E 1 (LO) and A 1 (LO) phonons and their FWHMs. As shown in Table 2, the surface phonon was also found to be approximately 337 cm -1 , except for sample #B2 in which the value was red-shifted approximately 4 cm -1 and the FWHW was larger because of the disappearance of the surface effects for sample #B2. Compared to the Raman spectrum of bulk hexagonal ZnS (LO: 352 cm -1 ) (Brafman and Mitra1968;Nilsen 1969), the peaks of the first-order LO phonons from these ZnS nanowires exhibit a shift toward lower energy.
To study the changes of the Raman scattering intensity, we selected sample #B2 as a reference and determined that the ratios of the 1LO-integrated areas of #A1, #A2, and #B1 to that of #B2 are equal to 8.13, 3.56, and 2.52, respectively, indicating that sample #A1 exhibits greatly enhanced Raman scattering. Comparing samples #A1 and #A2, the greatest difference is in their size and morphological structures. For sample #A1, the pine leaf-like nanowires grow along a main bough with many needle-like twigs alternatively grown along the two sides. Additionally, there is a several to tens of nanometers separation between two neighboring needles, resulting that the diameter for the main bough and the twig is only several tens of nanometer. Even for the saw-like structure nanowires in sample #A1, the diameter is only approximately 100 nm. For sample #A2, although many ZnS:Al nanostructures grow along both sides of the z-direction in a firry leaf-like nanowire, there is no independent nanopattern; their sizes are far more larger than that of sample #A1, thus the surface to volume ratio decreases. Therefore, the greater enhancement of the Raman scattering observed from sample #A1 is mainly attributed to its typical morphology and small size of nanopatterns. The phenomenon of the effect of morphology on the Raman scattering was observed in (Fasolato et al. 2014), where the enhanced Raman spectra were attributed to the different cluster morphologies rather than to the different Ag nanoparticle diameters. Additionally, optimized high-density Si nanowire arrays have demonstrated a remarkable increase in Raman sensitivity compared to reference planar samples, which was explained as surface multiple scattering (Bontempi et al. 2014). Comparing the samples #A2, #B1, and #B2, the 1LO-integrated area gradually decreases, implying that as the Aldopant concentration increases, the Raman scattering intensity decreases. This observation contradicts the results (Zhao et al. 2013) of Mn-doped CdS nanoparticles, which showed that the Raman scattering intensity increased with increasing dopant content before decreasing. Fig. 7 a Raman spectra for samples #A1, #A2, #B1, and #B2; b Raman decomposition at 347 cm -1 fitted with three Lorentzian curves for sample #A1; the peaks at 347.60, 350.14, and 337.2 cm -1 are labeled as P1, P2, and P3, respectively Actually, surface-enhanced Raman scattering (SERS) will take place when the energy of the incident laser light is close to the surface plasmon energy in noble metal nanoclusters which are intentionally introduced in NC arrays, or the energy of the incident laser light matches that of interband electronic transitions in the NCs. For nanostructure materials both quantum effects and surface effects are more important, and size-and shape-dependent Raman effects will be more obvious. Theoretically, the calculations (Zenidaka et al. 2011) of the SERS based on the optical near-field intensity enhancement on the metallic (plasmonic) and the nonmetallic (Mie scattering) nanostructured substrates with two-dimensional (2D) periodic nanohole arrays indicated that both the intervoid distance and the void diameter influence the optical intensity enhancement. Experimentally, the sensitivity of the field enhancement in the hot spots to the distance of the particles, as well as the frequency and polarization of the excitation laser have been reported (KNEIPP et al. 2006). In our studies, the enhanced Raman scattering from interband electronic transitions can be excluded since all the PL spectra centered at approximately 500 nm, and the strongest PL intensity is not for #A1 but for #A2. As reported in (Hu et al. 2013), the coupling strength of the excitonphonon increases with increasing lateral size, which will result in the reduction of the Raman scattering. Therefore, the greatly enhanced Raman scattering for sample #A1 can be explained by the large surface multiple scattering and a surface polarization effect.
Conclusion
In this paper, different morphological ZnS:Al nanoand micro-scale structures were synthesized using a thermal evaporation method, and the lattice constants were analyzed by a simple Pawley refinement method using the Topas Academic V5.0 software. The results showed that the lattice constants decreased with increasing Al source contents, indicating that more Al atoms were doped into the ZnS host. The PL spectra showed that there is a slightly asymmetric emission at 518 nm for the undoped ZnS nanostructure, which is composed of a dominant emission at 514 nm and a weak emission at 550 nm using a Gaussian fitting, corresponding to the S vacancies and the bulk defects, respectively. For the Al-doped ZnS, the broad, asymmetric emission observed at 500 nm can be deconvoluted into two peaks at *490 and *540 nm using a Gaussian fitting, which correspond to the Aldoped ZnS and the native defects of pure ZnS, respectively. The strongest green emission corresponds to the Al-doped sample at 5 at% Al source. Finally, the Raman spectra showed that the size and the morphology have great influence on the Raman scattering, and that ZnS:Al one-dimensional branched nanowires with a small size demonstrate the greatest enhancement in Raman scattering. Compared to the size and morphology, the Al-dopant concentration had minimal effect on the Raman scattering, i.e., the Raman scattering intensity slightly decreases with an increase in Al-dopant concentration. At the same time, the enhanced Raman scattering from interband electronic transitions was not obvious. The enhanced Raman scattering can be regarded as multiple scattering and weak exciton-phonon coupling. The branched one-dimensional nanostrcuture can be used as an ideal substrate to enhance Raman scattering. But more detailed physics mechanism need to be further discussed. Our studies will help to obtain an efficient SERS substrate to improve surface-enhanced Raman spectroscopy. | 5,569.2 | 2015-04-18T00:00:00.000 | [
"Chemistry",
"Materials Science",
"Physics"
] |
Complex-valued Adaptive System Identification via Low-Rank Tensor Decomposition
Machine learning (ML) and tensor-based methods have been of significant interest for the scientific community for the last few decades. In a previous work we presented a novel tensor-based system identification framework to ease the computational burden of tensor-only architectures while still being able to achieve exceptionally good performance. However, the derived approach only allows to process real-valued problems and is therefore not directly applicable on a wide range of signal processing and communications problems, which often deal with complex-valued systems. In this work we therefore derive two new architectures to allow the processing of complex-valued signals, and show that these extensions are able to surpass the trivial, complex-valued extension of the original architecture in terms of performance, while only requiring a slight overhead in computational resources to allow for complex-valued operations.
I. INTRODUCTION
D EEP LEARNING (DL) and neural networks (NNs) [1] are among the most popular techniques of machine learning (ML), and are broadly used in signal processing, among other disciplines [2]- [5]. However, the umbrella of ML covers many other techniques, such as support vector machines [6], [7], kernel adaptive filters [8], [9], random forests [10], [11] and tensor-based estimators [12]. Although it has been shown that tensor-based methods can deliver on par, or even better performance than other methods [13], [14], and can be used in a variety of applications [12], [15]- [20], they are usually disregarded due to their high memory-and computational footprint needed to approximate a given system.
In an attempt to reduce complexity of tensor-based methods, we recently introduced [21] the combination of tensors with least mean squares (LMS) filters for system identification with minimal model knowledge, so called Wiener and Hammerstein models [22] or combinations thereof. We analyzed several of these combinations and came to the conclusion that the proposed methods cannot only outperform (or be on par with) architectures utilizing a single tensor or spline adaptive filters (SAFs), but significantly reduce complexity compared to these methods. However, a downside of the proposed architectures is, that they are only able to deal with real-valued input and output signals. Therefore, they are not suitable for a variety of signal processing and communications related problems.
In this work, we extend the theory of the tensor-LMS (TLMS) block, originally presented in [21] to allow complexvalued input and output signals. The presented theory can be trivially extended to all other architectures presented in [21] and hence is not repeated in this work. As will be shown, the resulting architectures, still keeping complexity at an absolute minimum, yield very good performance for the simulated scenario.
II. PRELIMINARIES AND NOTATION
Before reviewing the findings of [21] and presenting our extensions, this section briefly repeats the overall problem statement and mathematical notation used in [21], as it is also needed in the remainder of this work.
A. Preliminaries
Like the overall problem discussed in [21] (see Fig. 1), the aim is to approximate an unknown system with an adaptive filter by utilizing the same input signal and only observing the output . Naturally, the ideal output of the unknown system is subject to noise to yield the overall output = + . Besides updating the approximation of the system with each observed sample (i.e. one optimization step per timestep), the adaptive filter further assumes that the unknown system itself may not remain static over time [21].
B. Tensor Background
In this work, we adhere to the widely adopted definition of the term tensor as presented in [13], [14]. That is, a tensor may be represented as an -dimensional array, indexed by 1 , 2 , 3 , . . . , [21]. We denote a tensor by X, and, like The original tensor-LMS architecture from [21], which is only able to handle real valued data.
in [21], we use the notations ⊚, ⊛, ⊙ to refer to the outer (tensor) product, the Hadamard product and the Khatri-Rao product, respectively. A rank-1 tensor X of order (also called -way tensor), is the outer product of a collection of vectorsâ ∈ R ×1 , which can also be written as [21] X( 1 , 2 , . . . , withâ ( ) = A ( , 1). Further, any -way tensor X with a higher rank than one can be decomposed into a sum of rank-1 tensors [21] Additionally, the Hadamard product over all matrices A with ≠ ′ is defined as [21] The discretization used to obtain an index for the tensor input is given by the function [21] if Bins is even and with Δ denoting the discretization interval. Lastly, the superscripts (·) T , (·) H , (·) * denote the transpose, Hermitian transpose and conjugate, respectively.
III. TLMS -REVIEW
Before presenting the extensions proposed in this paper, this section reviews the TLMS approach presented in [21] and depicted in Fig. 2 to introduce the most important notations. As the name TLMS implies, this adaptive filter consists of a tensor followed by an LMS filter, hence it is suitable for Hammerstein-type problems (i.e. nonlinearity before linear block). The overall output of this system is denoted byˆ and allows to express the joint cost function as [21] denotes the tapped delay line (TDL) block and where is the desired TLMS output [21].
In order to derive an update for the coefficients A ′ of the tensor, the gradient of the cost function is approximated by [21] G TLMS, ′ , := z This approximation˜ (i.e. for A ′ , the time is omitted) is necessary to be able to take the derivative with respect to A ′ [21], as also can be seen in [23]. Therefore, the tensor update is given by which is evaluated for ′ = 1, . . . , and with where 0 ′ , − +1 −1× ∈ R ′ , − +1 −1× denotes a matrix with all elements being zero [21].
IV. COMPLEX-VALUED TLMS
In order to deal with complex-valued Hammerstein models (i.e. a nonlinearity followed by a linear filter) we propose three architectures based on the TLMS from [21]. For all architectures, the input is the same (and complex-valued), whereby the real and imaginary parts of this signal are first stacked on top of each other to yield Fig. 3. This vector is then discretized according to (6) and serves as an input to a two dimensional TDL (that is, two TDLs working on the rows of a matrix). The resulting output i of the TDL then serves as the input for the tensor(s). After this block, the three architectures differ in their operations, which is described in detail in the following.
Starting from the standard TLMS architecture, the first, obvious choice (denoted as TLMS-2R) is depicted in Fig. 3a and simply uses two realizations of the same architecture for the real and imaginary paths (denoted by the superscripts R and C), respectively. This straightforward concept basically results in the same equations as for the simple TLMS case [21]. However, this approach lacks the ability to utilize connections between the real and imaginary parts of the system, as they appear in complex multiplications.
To alleviate this drawback, the second architecture (TTLMS) utilizes a complex-valued LMS (CLMS) for the linear part of the system and two tensors for the real and imaginary parts of the nonlinear part (cf. Fig. 3b). This approach enables the CLMS to leverage the interplay of real and imaginary parts of the complex signal while the update of the tensors still requires only small adaptions compared to Sec. III, detailed in the following.
By re-defining the cost function as the update equation for the normalized CLMS becomes The two tensors are updated via for all ∈ [1, ], with While this representation may reduce repetition of blocks compared to the first case, the tensors are still not able to make full use of the complex gradient. The final architecture (CTLMS) shown in Fig. 3c, reduces to (mostly) the same architecture as shown in Fig. 2. The difference however is, that the input signal is split into its real and imaginary parts and the LMS as well as the tensor are now fully complexvalued in their outputs. This, of course, necessitates to derive new update equations for the tensor modeling the nonlinearity. This can be achieved by utilizing Wirtinger's calculus [24] and applying the complex chain rule to the cost function. The update for the complex-valued tensor becomes for all ∈ [1, ], with S ′ , according to (13). The CLMS weight update stays the same as in (16). This change now fully supports the complex domain without having to repeat filters (i.e. two tensor or LMS blocks).
In terms of normalization, the first two architectures utilize the same update for Ten as presented in [21], the normalization of the CLMS is straight forward as well and works as presented in (16) for both, TTLMS and CTLMS. In order to normalize the complex-valued tensor in the CTLMS architecture, the same principle as for a real-valued one is used, i.e. the error is approximated via the first order complexvalued Taylor expansion [25] +1 ≈ + follows directly from (20) and therefore, To maintain convergence of the algorithm [26], the norm of the error +1 has to be smaller or equal than the norm of the right side of equation (25). This can be achieved when Solving this equation for Ten , the normalization can be introduced by replacing Ten in (20) by The computational complexity in terms of additions, multiplications and divisions for all architectures is depicted in Table I. The complexity is the least for the first architecture, which just repeats the tensor and LMS blocks for both paths, and is highest with the fully complex-valued architecture. However, it is important to note that the fully complex implementation is able to leverage the full information present in the real and imaginary parts of all signals, while the other two architectures are not able to achieve this.
VI. SIMULATIONS
To evaluate the proposed models for their performance on a complex-valued system identification example, we chose the well-known case of transmitter (Tx) induced harmonics which can occur in 4G/5G cellular transceivers in the case of downlink carrier aggregation coupled with a non-ideal Tx power amplifier (PA). For more details on the exact signal model, the reader is referred to [27]. Additionally, this model simulates saturation behavior of the PA which might occur if the Tx signal power is close to the limit of the PA's dynamic range [21]. Therefore, the interference signal we want to estimate is = h T Dup z + , where h Dup ∈ C constitutes the complex-valued stop-band frequency response of a linear filter (the so-called duplexer), constitutes a noise term, are the complex-valued transmit samples after the PA and is modeled as colored noise, i.e. = −1 + √ 1 − 2 , with denoting complex-valued white Gaussian noise. The used evaluation metric is the mean squared error (MSE), defined as MSE dB = 10 log 10 where ( ) is the desired signal,ˆ ( ) is the estimate at time of the -th run, and is the total number of runs. For the simulations we chose a filter order of = 16, the memory, i.e. dimensionality , of all tensors is two (one dimension for the real-and imaginary parts of the input signal, respectively), the rank of all tensors has been chosen empirically and is set to = 10, and the length of the (C)LMS filters has been chosen to coincide with . The step-sizes for the tensors are Ten = 0.009, Ten = 0.009, Ten = 0.075 and for the (C)LMS LMS = 0.009, LMS = 0.005, LMS = 0.009, for the first, second and third architectures shown in Fig. 3, respectively, and all regularization parameters have been set to = 10 −12 . Lastly, the signal resides 10 dB above . The comparison of all three proposed architectures is shown in Fig. 4, where the simulation was repeated and averaged over = 20 different real-life duplexer fittings. It can be seen that the first architecture, which just repeats the processing pipeline of the original real-valued algorithm twice, performs the worst. Using a complex-valued LMS filter with two tensors already drastically improves performance, and as expected, the fully complex-valued architecture yields the best overall performance.
VII. CONCLUSION
In this paper we extended current state-of-the-art architectures for system identification via a joint tensor-LMS based framework to complex valued models. We proposed three different architectures that comply with complex-valued models. The first architecture simply repeats the estimation blocks (tensor and LMS) for the real and imaginary paths. While this is the most straight-forward approach, it yields poor performance as the two paths cannot interact with each other. To mitigate this problem for the linear subsystem, the LMS block has been replaced with a CLMS filter in our second architecture, which showed moderate improvements compared to the previous case. To fully leverage the complex valued approach, we finally proposed an architecture that models all sub-systems in a complex manner, i.e. via a complex-valued tensor and CLMS filter. This final solution significantly outperforms both other architectures in our considered application. | 3,084.2 | 2023-06-28T00:00:00.000 | [
"Computer Science"
] |
Anisotropic Phonon Bands in H-Bonded Molecular Crystals: The Instructive Case of α-Quinacridone
Phonons play a crucial role in the thermodynamic and transport properties of solid materials. Nevertheless, rather little is known about phonons in organic semiconductors. Thus, we employ highly reliable quantum mechanical calculations for studying the phonons in the α-polymorph of quinacridone. This material is particularly interesting, as it has highly anisotropic properties with distinctly different bonding types (H-bonding, π-stacking, and dispersion interactions) in different spatial directions. By calculating the overlaps of modes in molecular quinacridone and the α-polymorph, we associate Γ-point phonons with molecular vibrations to get a first impression of the impact of the crystalline environment. The situation becomes considerably more complex when analyzing phonons in the entire 1st Brillouin zone, where, due to the low symmetry of α-quinacridone, a multitude of avoided band crossings occur. At these, the character of the phonon modes typically switches, as can be inferred from mode participation ratios and mode longitudinalities. Notably, avoided crossings are observed not only as a function of the length but also as a function of the direction of the phonon wave vector. Analyzing these avoided crossings reveals how it is possible that the highest frequency acoustic band is always the one with the largest longitudinality, although longitudinal phonons in different crystalline directions are characterized by fundamentally different molecular displacements. The multiple avoided crossings also give rise to a particularly complex angular dependence of the group velocities, but combining the insights from the various studied quantities still allows drawing general conclusions, e.g., on the relative energetics of longitudinal vs transverse deformations (i.e., compressions and expansions vs slips of neighboring molecules). They also reveal how phonon transport in α-quinacridone is impacted by the reinforcing H-bonds and by π-stacking interactions (resulting from a complex superposition of van der Waals, charge penetration, and exchange repulsion).
INTRODUCTION
Hydrogen-bonded molecular crystals are appealing because hydrogen bonding improves the supramolecular ordering and promotes self-assembled growth. 1 This eases the control of parameters like molecular orientation. 2,3Quinacridone (5,12dihydroquinolino[2,3-b]acridine-7,14-dione), as a prototypical hydrogen-bonded organic semiconductor, has a long history of industrial application as an organic dye in pigment violet 19. 4 Beyond that, there are various proposed device applications 5 that make use of quinacridone's (QA) electrical and optical properties.These include organic light-emitting diodes, 6,7 fieldeffect transistors, 8 organic solar cells, 9 and photothermal evaporators. 10he application of hydrogen-bonded organic semiconductors (OSCs) in organic devices has been accompanied by a thorough experimental and theoretical investigation of their electronic and optical properties. 11,12In contrast, studies of the phonon properties of such materials have so far been limited to the measurement and simulation of vibrational spectra. 13,14ese solely probe the Brillouin zone center (Γ-point) and fail to reveal the often complex dispersion relations of phonons in H-bonded crystalline materials.Phonon properties of the entire 1st Brillouin zone, however, play a major role for quantities like the thermodynamic stability, 15 the heat capacity, and all transport properties.−28 In the context of charge transport, it has also been shown that a proper description of electron−phonon couplings in OSCs requires the consideration of phonons from the entire 1st Brillouin zone. 29Measuring the phonon band structures of molecular crystals is, however, quite challenging, as it typically requires large single crystals of fully deuterated materials. 30−35 More recently, inelastic X-ray scattering has been applied to determine parts of the dispersion of rubrene crystals, albeit still combined with a tremendous experimental effort. 36−41 Even more importantly, for the aforementioned deuterated naphthalene, they also accurately reproduce the measured phonon band structures. 15,39,42Simulations have the additional advantage that they provide access to quantities beyond phonon frequencies and their dispersion relation.These include, for example, vibrational eigenvectors, which allow an in-depth analysis of the phonon properties, as will be exploited below.
For the present study, we, thus, simulated the phonon properties of α-polymorph of quinacridone (α-QA) from firstprinciples by calculating the atomic force constants from ab initio forces within the harmonic approximation with phonopy's 43 finite difference scheme. 44The (very minor) impact of anharmonicities is discussed in the Methods Section and in much more detail in the Supporting Information.Our focus is on gaining a fundamental understanding of phonons in the low-frequency region (below 6 THz or 200 cm −1 ), which is of primary importance for heat transport. 45In this region, also the rigid translation and rotation modes with relatively strong coupling to electrons are found. 25or a study of the fundamental phonon properties of OSCs, α-QA is an ideal candidate due to the highly anisotropic bonding interactions in this material.Moreover, the molecular stacking motive in α-QA is comparably simple: it contains only one molecule in the unit cell with molecules arranged in stripes.These stripes are close to parallel to the molecular planes and form layers, which adopt a slipped π-stacking arrangement (see the next section).Consequently, one can identify directions with predominantly H-bonding, van der Waals (vdW), and π-stacking interactions.These directions are essentially parallel to the short and long molecular axes and perpendicular to the molecular planes.In passing we note that attractive dispersion forces play a dominant role in all directions, while π-stacking besides attractive charge penetration 46 primarily triggers exchange repulsion, as discussed in detail in ref 47.Nevertheless, we will stick to the above terminology (H-bonding, π-stacking, and vdW bonding directions) throughout the entire manuscript.
The possibility to reliably assign certain spatial directions to specific interaction mechanisms makes α-QA an ideal candidate for a fundamental analysis of the anisotropic coupling between molecules.This would not be possible for molecules in a herringbone arrangement, which is common for OSCs, or for the β-polymorph of QA, 48 where neighboring QA stripes run in different directions.For the electronic band structures, the particularly instructive situation in α-QA has already been exploited for studying peculiarities of the anisotropic electronic coupling between molecules. 12,47Here, we will study the impact of the α-QA structure on vibrational properties and the phonon band structure: first, we will trace the vibrations in the crystalline phase back to those of molecular QA, which explains how the molecular eigenmodes are modified by the crystalline environment.An in-depth analysis of the phonon band structure of α-QA then allows identifying how band dispersions (and, thus, phonon group velocities) are impacted by different types of bonding interactions.The discussion of the angular dependence of phonon frequencies (i.e., of angular band structures) then shows how vibrational eigenmodes change as a function of the direction of phonon propagation and what role avoided band crossings play in this context.This, finally, allows an analysis of the evolutions of the group velocities of low-frequency modes in general and acoustic phonons in particular.The latter are of distinct importance, as they crucially determine the thermal transport properties of the material.
STRUCTURE OF α-QA
As shown in Figure 1a, in the gas phase, the intramolecular conjugation between the benzene rings of QA is disturbed by the amine and carbonyl groups.This is largely overcome in the solid state through the formation of H-bonds between the molecules, which results in a particularly large reduction of the band gap. 49In α-QA, the molecules are arranged in onedimensional (1D) stripes (transparent yellow arrows in Figure 1b,d) with two H-bonds to each of the two neighboring molecules (green ellipses in Figure 1b).The centers of neighboring molecules within the stripes are connected by the a⃗ 1 + a⃗ 2 vector (with a⃗ 1 and a⃗ 2 being two of the real-space unit cell vectors; see Figure 1).As shown in Figure 1d, the short molecular axis is slightly inclined relative to the direction of the stripes.Nevertheless, the stripes as well as the short molecular axes are essentially aligned with the H-bonding direction in the α-QA crystal.The H-bonded stripes adopt a slipped π-stacking arrangement, with the slip between neighboring QA stripes amounting to 1.52 Å along the long and 0.98 Å along the short molecular axes.This results in a distance of 3.44 Å between the π-planes.In the a⃗ 3 direction, the molecules are tightly packed, with the close contact between hydrogens at the molecular ends leading to an interlocking of the molecules (see Figure 1c).This hinders translations of the QA layers in directions parallel to the a⃗ 1 , a⃗ 2 plane.Figure 1e shows the relation between the crystal axes (black arrows), the reciprocal lattice vectors (red arrows), and the QA molecule, with its axes explicitly shown by the purple arrows.These molecular axes, the reciprocal lattice vectors, and their respective intersecting points with the boundaries of the 1st Brillouin zone are shown in Figure 1f and will be referred to when discussing phonon band structures in Section 4.
METHODS
The crystal structure of the α-polymorph of QA 48 was obtained from the Cambridge Crystallographic Data Center. 50Starting from the experimental structure, the atomic positions and lattice vectors were relaxed employing periodic boundary conditions until the maximum residual force component fell below 10 −3 eV/Å.For this, the ab initio materials simulation package FHI-aims 51 (versions "201103" and "220829") was used, employing the Perdew−Burke−Ernzerhof functional (PBE) 52 in combination with the nonlocal many-body dispersion correction by Herman et al. (MBD-NL).This combination has been shown to accurately describe a large variety of molecular complexes and crystals, including systems with intermolecular O•••H− N bonding. 53For the geometry optimizations, a trust radius method enhanced BFGS algorithm was applied. 54,55The interatomic force constants needed to derive phonon properties were obtained from finite atomic displacements using a symmetry-assisted finite difference scheme employing a modified version of the Parlinski−Li−Kawazoe method, 44 as implemented in the phonopy package (v.2.9.3). 43To obtain a converged electronic structure of α-QA, reciprocal space was sampled by a 6 × 4 × 2 Γ-centered k-point grid in combination with default FHI-aims "tight" basis sets for each atomic species (for more details, see the Supporting Information).Converged phonon band structures in α-QA with the above settings require the considerations of 4 × 3 × 2 supercells (amounting to lattice vector lengths of 15.4, 19.3, and 29.3 Å), which after applying all symmetries resulted in 108 single-point calculations on displaced structures.A test of the convergence of the supercell size can be found in the Supporting Information (Section 8).As amplitude for the atomic displacements for calculating the force constant and dynamical matrices, we chose the default value of 0.01 Å. Systematically varying the amplitude of this displacement between 0.001 and 0.02 Å resulted in negligible frequency shifts and eigenvector deviations, i.e., the force constants from these finite difference calculations yield identical vibrational properties.Also for displacements of 0.05 Å, only rather minor differences were observed.This suggests that anharmonic effects have only a minor impact on the calculated phonon frequencies especially at low temperatures.The impact of thermal expansion on the vibrational properties of α-QA is not considered here.Still, it should be mentioned that also when employing the experimental, roomtemperature unit cell parameters instead of the optimized ones, no significant modifications of the vibrational properties of α-QA were observed.This is shown together with details on various convergence and (an)harmonicity tests in the Supporting Information.
In passing we note that the results of the above-described FHI-aims calculations employing the MBD-NL van der Waals correction are essentially equivalent to those calculated with the Vienna Ab initio Simulation Package, 56−58 employing Grimme's D3 correction with Becke Johnson damping 59 (see the Supporting Information).Especially, the latter approach has been thoroughly benchmarked for molecular crystals in the past and is known to excellently reproduce experimental data in the low-frequency region. 15,41honon band structures, group velocities, eigenvectors, and eigenmode displacements were calculated from the force constant matrices either directly with the phonopy package or employing our own python scripts, which make use of the phonopy−python interface. 43Eigenmode characteristics and labels were identified by generating and analyzing animations of the eigenmode displacements with the Ovito visualization tool (version 3.7.11). 60he vibrational properties of an isolated QA molecule were calculated with FHI-aims again using the PBE functional and the MBD-NL van der Waals correction.Here, instead of employing openboundary conditions, we opted for calculating the forces within the finite difference scheme for a single QA molecule in a (100 × 100 × 100) Å 3 cell with periodic boundary conditions.The enormous cell size serves to avoid couplings between periodic replicas.This approach was chosen for purely technical reasons, as it allows employing, for the nominally isolated molecule, the phonopy package using the same finite difference scheme (including symmetry considerations) as for the crystalline system.Furthermore, this approach ensures a consistent determination of eigenmodes in the molecule and in the crystal, which was useful for the projections of eigenmodes described below.To ensure the accuracy of the periodic approach for molecular vibrations, the latter were also computed for an isolated molecule in a finite difference scheme with an amplitude of the displacement of 0.0025 Å (as implemented in the script "get_vibrations.py"provided in the FHI-aims utilities).The two approaches yield equivalent results (see the Supporting Information).
Using the eigenmodes obtained from the periodic approach, we calculated the eigenvector overlap between vibrations of QA in the gas phase and in the crystalline α-polymorph.The overlap elements σ μν between vibrational mode μ in the gas phase and mode ν in the crystalline phase are given by the sum of the scalar products of the corresponding atomic eigenvectors Here, i is an index that runs over all equivalent atoms, N, in the two systems and e⃗ i,μ gas •e⃗ i,ν αQA is the scalar product of the corresponding atomic motions making up the eigenmodes.As the eigenvectors of gas phase vibrations and the vibrations in the crystals form complete bases, in which all α-QA eigenvectors can be described, one obtains = 1 2 and = 1 2 . The overlap elements were then used to identify equivalent vibrations in the crystal and in the isolated molecules.
For a further analysis of the phonon bands of crystalline α-QA, a series of additional quantities were calculated.The mode participation ratio, PR ν,q⃗ (with band index ν and wave vector q⃗ ), quantifies to which degree all atoms of the structure participate in the vibrational motion of a particular eigenmode.Modes with high participation ratios involve the motion of large parts of the structure and represent "collective" oscillations of the atoms.Here, we refer to them as delocalized modes.Conversely, modes with small participation ratios comprise isolated vibrations of only one or a few atoms (e.g., N-H stretching) and are referred to as localized modes.Thus, the participation ratio is related to the "degree of delocalization" of a mode.Notably, in that spirit, delocalized and localized modes exist not only in molecular crystals but also in molecules, and the degree of localization is essentially unaffected by the fact that in a periodic structure, a certain group of atoms by virtue of translational symmetry will move in every unit cell.The participation ratio is defined as 61 where N refers to the number of atoms in the unit cell and m i denotes the mass of atom i.As can be concluded from the definition, the participation ratio essentially measures, to what degree the massweighted atomic motions vary.The maximum value of unity (PR ν,q⃗ = 1) is obtained when all atoms within the unit cell perform the same motion.This, for example, occurs for pure translation modes, in which each atom of the molecule(s) oscillates with the same relative amplitude.The lower limit PR ν,q⃗ = 1/N is observed when only a single atom in the unit cell oscillates.
The "longitudinality" L ν,q⃗ of a mode is given by It measures for a specific mode ν at q⃗ , how parallel the wave vector q⃗ is to the atomic motions (associated with the eigenvectors e⃗ i,ν,q⃗ ) of all N atoms in the unit cell.Note that both the wave vector and the eigenvectors are normalized and only their directions influence L ν,q⃗ .As q⃗ determines the momentum of the respective phonon, L ν,q⃗ = 1 identifies a fully longitudinally polarized (acoustic) mode, while fully transversally polarized (acoustic) modes are characterized by L ν,q⃗ = 0.By replacing the (mode-dependent) q⃗ with a (fixed) vector parallel to a crystal or molecular axis, eq 3 can also be used to determine, to what extent a specific mode is polarized parallel to the chosen axis.
In general, group velocities are the derivatives of the phonon frequencies ω(ν, q⃗ ) (as the eigenvalues of the dynamical matrix) with respect to the wave vector q⃗ Considering that in simulations, reciprocal space is sampled in a discrete manner, the group velocities are here calculated from the expectation values of the q⃗ -derivatives of the dynamical matrix D(q⃗ ) for the respective eigenvectors of D(q⃗ ), as implemented in phonopy 43 Sound velocities were obtained by calculating the group velocities in the long-wavelength limit for small wave vectors q⃗ on a sphere with a constant radius |q⃗ | = 0.0044 Å −1 (corresponding to 1% of the length of the shortest reciprocal vectors) around Γ.
For visualization purposes, the structure of α-QA and displacements of molecular vibrations and crystalline eigenmodes were plotted with Ovito, 60 while for data analysis within Python, the libraries NumPy 64 and SciPy 65 were used.Phonon band structures and other vibrational properties were plotted using the Matplotlib 66 libraries.
Molecular Vibrations in QA and Their Relation to Γ-Point Phonons in α-QA
The vibrational eigenmodes of molecular crystals can be separated into intermolecular (external) and intramolecular (internal) modes.The former refer to motions of the molecules as rigid units and are determined by the arrangement of the molecules within the unit cell of the crystal.The latter are primarily comprised of deformations along the internal degrees of freedom of the molecule.Before analyzing the phonon bands in the crystalline α-phase, it is, therefore, useful to first consider the molecular vibrations of QA and to analyze how these eigenmodes are related to the intramolecular Γ-point modes of α-QA.Deviations from the molecular vibrations can then be associated with intermolecular interactions due to the different noncovalent forces at work in QA crystals.Regarding the intermolecular modes, the rigid translations of the QA molecule become the acoustic bands of α-QA, with vanishing frequency at the Γ-point.Conversely, the rotational molecular motions correspond to some of the lowest optical modes in the crystalline state.As shown in Figure 2, they are found in the same spectral region as the lowest four intramolecular modes.The latter are related to bending and torsion motions of the molecular backbones and their low frequencies can be traced back to the rather high flexibility of the comparably large QA molecules.
Here, bending mode refers to wave-shaped distortions of the molecular backbone, either along the normal to the molecular plane for out-of-plane (OP) modes (see, e.g., Figure 2b) or within the molecular plane for in-plane (IP) modes (see, e.g., Figure 2e).They are reminiscent of standing waves in a string and the number of nodes in the displacements increases with the order of the mode.This is illustrated for the 2nd-order OP bending mode in Figure 2d.Torsion modes comprise twisting motions of the rings around the long molecular axis.They can involve a torsional motion of the entire molecule like for the 1st-order torsion mode shown in Figure 2c, where the two halves of QA rotate in opposite directions.Alternatively, for certain modes, only individual rings are involved in the rotational motion, while the other rings hardly move at all (e.g., the ring-1−5 torsion, for which only the terminal rings rotate).The evolution of the frequency of such modes as a function of chain length has been discussed in ref 67 for the series of acenes based on analogies to simple classical oscillators.Finally, the last mode displayed in Figure 2a at 7.05 THz (235 cm −1 ) corresponds to a uniform, longitudinal stretching of the backbone.At (significantly) higher frequencies (not shown in Figure 2), we enter the realm of "classical" molecular vibrations commonly discussed in literature.These modes are often restricted to specific sections of the molecule or primarily concern the motion of specific chemical groups or atomic species (like C−H and C−C stretching and bending modes).Therefore, higher frequency modes already at the molecular level are usually characterized by rather low participation ratios, although up to ∼20 THz, there exist also modes in which the participation ratios reach values around 0.75, for example, for the cases of mode 27 at 14.09 THz (470.1 cm −1 ) and of mode 31 at 15.68 THz (529.1 cm −1 ) (see Figure S6 and Table S5 in the Supporting Information).
In the molecular crystal, the modes corresponding to rotations around the normal to the molecular plane, around the short molecular axis, and around the long molecular axis are eigenmodes 4, 5, and 7, respectively (eigenmodes 1−3 are the acoustic modes).Regarding the intramolecular modes, their order from the molecular system prevails also in the crystal at the Γ-point.For all vibrations displayed in Figure 2, the associated frequencies are shifted to higher values in the crystalline phase.Qualitatively, this can be explained by intermolecular interactions making the corresponding molecular deformations energetically more costly.Notably, such a shift to higher frequencies is observed also for nearly all higher frequency modes except for certain modes localized on atoms directly affected by the formation of the H-bonds (modes 89, 90, 107, 108; see Supporting Information Table S5).The particularly large shifts in the low-frequency region raise the question to what extent the vibrations in the crystal can be directly associated with molecular vibrations, i.e., to what extent the eigendisplacements in α-QA are comparable to those of the isolated molecule.
This can be analyzed by calculating the eigenvector overlap matrix between the eigenmodes of α-QA and those of the isolated QA molecule employing eq 1.The resulting overlap matrix in Figure 3a shows how similar the lowest 12 modes of molecular QA and α-QA are (with a value of 1, denoting a perfect match; the actual values are listed also in Tables S5 and S6 in the Supporting Information).As mentioned above, the molecular translations form the basis of the acoustic phonons of α-QA.One of these acoustic modes in α-QA perfectly matches the molecular translation perpendicular to the plane of QA; the other two, at least at the Γ-point (where they are degenerate), turn out to be a superposition of the short-axis and long-axis translation of QA.
Hybridization effects also occur among the optical bands: for example, mode 5 is dominated by a short-axis rotation (σ = 0.93), which is mixed with a molecular 2nd-order bending mode (σ = 0.32).The mode hybridization can also be inferred from the displacement pattern of mode 5 in α-QA shown in Figure 3b: the displacement of the peripheral atoms is clearly reduced compared to a pure rotation (cf., Figure S5), which is consistent with an admixture of the molecular 2nd-order OP bending mode from Figure 2d.The minor contributions of other molecular rotations (around the normal axis with σ = 0.129 and around the long axis with σ = 0.118) reflect the fact that the rotation does not occur exactly around the short molecular axis.The reason for the hybridization of the two modes despite their rather different frequencies is the close molecular packing in α-QA: the above-described interlocking of molecules in neighboring QA layers (see Figure 1c) impedes a simple, rigid rotation of the QA molecules around their short molecular axes.Two other examples of modes arising from a hybridization of inter-and intramolecular vibrations are modes 7 and 9.Both modes are a superposition of the long-axis rotation and again the 2nd-order OP bending.While for mode 7, the rotation component dominates (σ = 0.89 vs 0.46 for the 2nd order bending), the opposite holds true for mode 9. There, a stronger 2 nd -order bending (σ = 0.82) and a weaker long-axis rotation (σ = 0.43) are combined with a nonnegligible short-axis rotation (σ = 0.36).The latter again indicates that the rotational axis is not exactly aligned along one of the molecular axes (which is best seen in the animation of the mode in the Supporting Material).Mode 7 is visualized in Figure 3c,d.In contrast to what would be expected for a rotation around the long molecular axis, certain atoms like the nitrogens and their neighbors are hardly displaced.A similar observation is made for mode 9, albeit here the oxygens and their neighbors remain essentially stationary (see Figure 3e,f).This behavior is attributed to the peculiar situation of the Nand O-atoms caused by the formation of the H-bridges.The above hybridizations illustrate, how intermolecular bonding as well as the crystalline packing cause a modification of the vibrational displacement patterns, which is best described as an intermode hybridization on the basis of the molecular eigenmodes.
Overview of the Phonon Band Structure of α-QA
In solids containing a quasi-infinite number of atoms, Γ-point vibrations reveal only a small part of the vibrational properties.Therefore, it is useful to analyze phonon band structures.They display vibrational frequencies as a function of the wave vector q⃗ , which quantifies the phase shift between the displacements in neighboring unit cells.The bands are then usually plotted along high-symmetry directions in reciprocal space.Figure 4a shows such a phonon band structure for α-QA in the low-frequency region with phonon modes colored according to their participation ratios.As discussed in Section 3, these quantify how coherently the atoms move for each vibrational eigenmode and their analysis eases the identification of similar displacement patterns along the bands.The Γ-vibrations discussed in the previous section (cf., Figures 2a and 3a) are labeled by their numbers and by blue circles (internal modes), red squares (rotational modes), and a green triangle (marking the acoustic modes).In α-QA with only one molecule in the unit cell, the latter are the only translational modes.They are of particular relevance, as they are responsible for sound propagation, they often dominate thermal conductivities, and their slopes are intimately related to the material's elastic constants. 68,69Especially, at low q⃗ -values, the acoustic modes display a high participation ratio owing to their translational character (i.e., all molecules move to the same degree and inphase in neighboring unit cells).
At larger q⃗ -values, the bands experience a number of avoided crossings that occur when bands approach, hybridize, and eventually "repel" each other.This happens when the eigenvectors associated with the two bands transform equally under the symmetry operations of the crystal.Due to the low symmetry of the α-QA crystal (space group 1̅ ), this applies to all pairs of bands, which rules out any band crossings and, thus, 2a and 3a).This illustrates the evolution of modes characterized by specific patterns of atomic motions in the region of avoided crossings between the bands.maximizes the number of avoided crossings in α-QA.Notably, after the avoided crossing, the character of the mode is often transferred from one band to the other.This is, for example, illustrated by the evolution of the participation ratios in Figure 4a (and more clearly in the zooms in Figure 4b−d), where high participation ratios in the course of an avoided crossing often switch from a lower to a higher band.Therefore, following the q⃗ -dependent evolution of modes with high participation ratios often yields rather smooth trends, even though individual bands often have a (more or less) pronounced kink close to avoided crossing, as shown, e.g., in Figure 4d, with the character of the lowest mode transitioning from band 3 to band 5.
The first optical band, which is associated with an intermolecular mode, has a Γ-point frequency of 1.07 THz (mode 4 in Figure 4a).It primarily corresponds to a rotation around the axis normal to the molecular plane.Together with the next optical band (mode 5 in Figure 4a), it participates in various avoided crossings with the acoustic bands, as can be seen in more detail in Figure 4b,c.Also, higher-lying bands participate in avoided crossings, as will be discussed in more detail below.Most of the higher bands display a reduced albeit still significant dispersion.All of these observations are strongly reminiscent of the situation in the more commonly studied pentacene crystals. 67They consist of molecules of a rather similar size and shape as QA.The 1st Brillouin zone of pentacene, however, contains twice as many bands due to two molecules in the unit cell.A closer inspection of the acoustic bands reveals another distinct difference: in pentacene, the slopes of the longitudinal acoustic band(s) in the ΓX and ΓY directions are very similar; 67 conversely, in α-QA, the band in the ΓY direction is much steeper, testifying to a much higher phonon group velocity of the longitudinal acoustic band in that direction.This can be explained by the different structures of the two materials: in pentacene, due to the herringbone arrangement of the molecules, bonding in the XΓY plane is rather isotropic and mostly of van der Waals type (with quadrupole contributions).Conversely, in α-QA, the ΓY direction is close to the direction of the short molecular axis of the QA molecule (see Figure 1e,f) and thus also close to the direction of the H-bonding interaction.In the spirit of a coupled harmonic oscillator model, a high dispersion in this direction (and, thus, a high group velocity) can be associated with a high coupling force constant.This is fully consistent with the notion that the resistance to longitudinal deformations is particularly high in α-QA due to the reinforcing hydrogen bonds.
The above discussion requiring the (repeated) mentioning of similarities of directions already suggests that the conventional approach of analyzing phonon band structures along high-symmetry directions in reciprocal space might not be ideal for α-QA.In fact, as illustrated in Figure 1e,f, in α-QA, these directions neither coincide with the real-space lattice vectors of the crystal (and, thus, with the 3D arrangement of the molecules) nor with the most relevant geometric directions within the molecules.The latter comprise the short and long molecular axes and the normal to the π-plane, where it should be stressed again that unique identification of these directions is possible only because of the simple structure of α-QA with only a single molecule in the unit cell.In fact, as will be shown below, only in the "molecular" directions, purely longitudinal and transverse acoustic phonons are found and, in these directions, also the maximum group velocities occur.This makes the three "molecular" axes ideally suited for the further analysis of acoustic and optical phonons and suggests that the bands should be plotted also along these directions.
Acoustic Bands of α-QA
Such a plot is contained in Figure 5b, where we now color the bands according to their degree of longitudinality (see Section 3; eq 3).As a complementary graph, Figure 5a illustrates the longitudinality of the modes in the conventional, highsymmetry directions.In the following discussion, we will first focus on the acoustic bands at low q⃗ values, while the impact of the observed avoided crossings will be discussed later.
The longitudinality of the bands serves as a measure of how parallel atomic displacements are to the wave vector for each mode.Notably, Figure 5a shows that along the high-symmetry directions, a unique identification of a single longitudinal acoustic band is not possible.Especially along the ΓX path, two acoustic bands with similar but only intermediate degrees of longitudinality exist.A mixed band character is also found along the ΓY and ΓZ paths; although in these two directions, the longitudinal character dominates for the highest-energy acoustic band (at least at low q⃗ values).Still, also the longitudinality of the second band is not negligible.In fact, a closer analysis of the atomic motions derived from the eigenmodes of the highest acoustic band reveals that it is dominated by a rigid displacement of the QA molecule parallel to the short molecular axis for both the ΓX and ΓY directions (see Figure S16b in the Supporting Information).
As argued already at the end of the previous section, a much clearer situation should emerge when analyzing vibrations with q⃗ -vectors parallel to one of the molecular axes.This is indeed the case, as one can see in Figure 5b: now the highest band in all three directions has a purely longitudinal character.For example, the longitudinality in the ΓN direction (i.e., perpendicular to the planes of the QA molecules) is 0.99 close to the Γ-point and after two avoided crossings still amounts to 0.96 at N (for the band highlighted with a "*").Notably, as mentioned previously for the participation ratio, also the longitudinality switches between bands at the avoided crossings with low-lying optical bands.At small q⃗ -vectors, similarly high degrees of longitudinality are observed for the third acoustic band also along the ΓS and ΓL paths (i.e., for q⃗ parallel to the short and long molecular axes).For these directions, the longitudinality of the respective bands, however, decreases toward the Brillouin zone boundary, where the phonon modes hybridize after partaking in avoided crossings with relatively large gaps (see SI Figure S16e,f).It is worthwhile mentioning that the purely longitudinal character of the highest acoustic band in ΓN, ΓS, and ΓL directions means that for the corresponding eigenmodes, the molecules rigidly move perpendicular to the π-plane, parallel to the short molecular axis, and parallel to the long molecular axis, respectively.This now allows a less ambiguous explanation for why bands are particularly steep in certain directions: for example, the longitudinal acoustic band is steepest along the ΓS path (slope of 57 THzÅ = 5.7 nm/ps).The associated displacement along the short molecular axis correlates well with the direction of the reinforcing H-bonds, which stiffens the corresponding vibrations and thus results in particularly high phonon group velocities (see also below).As another example, in the ΓL and ΓS directions, the lowest band with the smallest dispersion is the transverse acoustic band associated with a displacement of the molecules parallel to the plane Figure 6.Angular-dependent band structures of α-QA in the long-wavelength limit (for |q⃗ | = 0.0044 Å − 1 , which is 1% of the shortest reciprocal lattice vector) for three circular band paths.The radial values cover a frequency range between 0 and 0.04 THz (1.33 cm −1 ; all plots use the same frequency scale).The frequencies of the three acoustic modes are plotted for wave vectors parallel to the molecular plane (panels a, d, g, j; left column) in the plane spanned by the long molecular axis and the normal to the π-plane (panels b, e, h, k; central column) and in the plane spanned by the short molecular axis and the normal to the π-plane (panels c, f, i, l; right column).The arrows denote the direction of the normal to the πplane (red arrow), the direction of the short molecular axis (green), and the direction of the long molecular axis (blue).These directions are labeled for each angular band structure in panels (a)−(c).The coloring in the top row indicates the longitudinality of the modes (panels a, b, c), while in the following rows, it quantifies the degree to which the displacements associated with specific modes are parallel to the normal to the molecular plane (panels d, e, f), to the short molecular axis (panels g, h, i), and to the long molecular axis (panels j, k, l), respectively (see also color codes at the right end of each row).normal (see Figure S16).This suggests that the smallest restoring force constant is found for a displacement of the molecules parallel to the π-stacking direction, where one is mostly dealing with a combination of van der Waals and electrostatic attraction (due to charge penetration effects) 46 and Pauli repulsion. 47Consistently, also the longitudinal acoustic band in the ΓN direction (which is described by the same displacement) displays the lowest dispersion (amounting to 32 THzÅ = 3.2 nm/ps).
The fundamentally different character of the longitudinal acoustic bands in the n⃗ , s⃗ , and l ⃗ directions raises the question whether there is a gradual angle-dependent change of the molecular displacement directions within a single band or whether the order of the bands with constant character changes at certain angles.To understand that, it is useful to study the band structure as a function of the direction of q⃗ while keeping the length of the wave vector, |q⃗ |, fixed.Such angular-dependent band structures are rarely explored.Here, they are plotted close to the Γ-point in Figure 6 for a |q⃗ |-value of 0.0044 Å −1 , which is 1% of the shortest reciprocal lattice vector.The plots show the phonon frequencies of the first three acoustic bands on a scale between 0 and 0.04 THz along circles in reciprocal space.These circles comprise angulardependent bands for q⃗ -vectors within the molecular plane (i.e., in the LΓS plane containing the long and short molecular axis; left column of Figure 6), q⃗ -vectors in the NΓL plane (i.e., in the plane spanned by the long molecular axis and the normal to the π-plane; central column of Figure 6), and q⃗ -vectors in the SΓN plane (i.e., in the plane spanned by the short molecular axis and the normal to the π-plane; right column of Figure 6).The plotted frequencies are identical in each column of Figure 6, but their color-coding varies: In the first row, the longitudinality of the modes is shown, while the following rows quantify the degree to which the modes correspond to displacements along the normal to the π-plane (red color code), along the short molecular axis (green color code), and along the long molecular axis (blue color code).
The angular band structures reveal several characteristics of the acoustic phonon bands: (i) there is a pronounced anisotropy of all bands with (local) maxima of the phonon energies occurring along the l ⃗ , s⃗ , and n⃗ directions, especially for the outermost (highest frequency) acoustic band.The global frequency maximum occurs in the s⃗ direction, consistent with the previous discussion of maximum band dispersions.For the lower bands, local maxima also occur in the vicinity of avoided crossings.(ii) The outermost band is always the one with the highest degree of longitudinality.This is the consequence of avoided crossings (including switches in band character) occurring at angles, at which a specific type of mode (long-axis, short-axis, or normal displacement) is no longer the most longitudinal one.(iii) Longitudinalities close to one are observed typically only rather close to the l ⃗ , s⃗ , and n⃗ directions.(iv) In these directions, the longitudinality of the lowerfrequency modes is essentially zero (i.e., they possess a purely transverse character), while away from the special directions, bands with mixed character are found.(v) The frequency splitting at avoided crossings varies significantly.This results in pronounced differences in the evolutions of the apparent nature of the different bands in different planes of reciprocal space.
Especially for the directions plotted in the second and third columns of Figure 6, it appears that avoided crossings due to hybridization effects prevent bands from maintaining a specific character, in analogy to the situation depicted in Figures 4 and 5.For the azimuthal band structure plotted in the first column of Figure 6 (i.e., when q⃗ lies in the molecular plane), the situation is somewhat different, especially for the higher frequency bands.For them, the gaps at the avoided crossings are marginal.This means that for the corresponding q⃗ directions, there is hardly any hybridization between longand short-axis displacements.Due to the marginal frequency gaps, one might, in fact, get the impression that one is dealing with crossing bands that maintain their character: two dumbbell-shaped outer bands characterized by long-and short-axis displacements and a third inner band for which the molecules are displaced perpendicular to the π-plane.The two outer bands appear to intersect at an angle of 52°, where the longitudinality of both bands adopts a value of 0.707 (=√2/ 2).However, for symmetry reasons, there must be avoided crossings also for these bands even though they are hardly visible in Figure 6.This is confirmed by a zoom into the corresponding q⃗ -region that is shown in the Supporting Information (Figure S18).Therefore, strictly speaking, one is dealing with a continuous highest frequency band, whose character abruptly changes between long-and short-axis displacement at the positions of the avoided crossings.Similarly, also for the band structures plotted in the NΓL and SΓN planes, the character of the highest energy band switches between either normal, long-, or short-axis displacement depending on the wave-propagation direction but in a much more gradual fashion.Concomitantly, there are larger frequency splitting at the avoided crossings.
Optical Bands in α-QA and Their Involvement in Avoided Crossings
As the next step, it is worthwhile to provide a more in-depth discussion of the optical bands.They are mostly characterized by participation ratios noticeably smaller than those of the acoustic modes.This can be understood from the fact that they correspond to vibrations for which significant parts of the molecules move comparably little.Somewhat higher participation ratios in Figure 4a are found only for short-and normal axis rotations and for the in-plane bending mode (modes 4, 5, and 10 at the Γ-point).As mentioned above, the participation ratio is a useful guide for following specific mode characters throughout the bands in cases in which strong mode hybridizations occur in the vicinity of avoided crossings.At these points also the participation ratio changes rather abruptly, which is a strong indication for a fundamentally changed nature of the bands in question.As an example, band 4, which at Γ corresponds to a rotation around the plane normal, partakes in two avoided crossings approximately halfway toward X, as shown in Figure 4a,b.There, it hybridizes with the two acoustic bands 2 and 3.At small values of |q⃗ |, band 4 is comparably flat, while in response to the avoided crossing bands 4 and 5 at higher |q⃗ |-values adopt rather sizable dispersions.Rather, at large values of |q⃗ |, close to X, band 2 becomes comparably flat (Figure 4a).This can be attributed to its increasingly optical character (as can be inferred from the low associated participation ratio, an assessment that is confirmed by an inspection of the eigenmodes).Similar effects occur at many points in reciprocal space, where strongly dispersing bands with acoustic displacement characteristics approach an optical band and undergo an avoided crossing at which the band characters switch.−72 The situation can become rather complex, as can be illustrated for the mode dominated by a rotation around the short molecular axis.Close to Γ, such vibrations form band 5 (with a Γ-point frequency of 1.40 THz).At the X-point in Figure 4a, after some avoided crossings including switching of band characters, the corresponding vibration appears as mode 8 at 3.45 THz (as shown in more detail in Figure S17).Conversely, along ΓY (see Figure 4c) and ΓZ (see Figure 4d), the negative dispersion and avoided crossings result in this vibration appearing at 0.98 THz at Y (as band 1) and at 0.65 THz at Z (as band 3).
Notably, avoided crossings also occur between purely optical bands, which shall be exemplified for bands 7, 8, and 9 along ΓZ: as discussed above, when comparing molecular vibrations with Γ-point phonons in the crystal, at the Γ-point, bands 7 (Figure 3c,d) and 9 (Figure 3e,f) are closely related and consist of superpositions (Figure 3a) of a long-axis rotation and a 2ndorder OP bending motion.Conversely, band 8 corresponds to the first-order torsion.Along the ΓZ path, the bands switch their character multiple times such that the long-axis rotation, which dominates band 7 at the Γ-point, becomes band 8 at Z, while the 2nd order OP bending, which is dominant for band 9 at Γ, becomes band 7 at Z (see Figure S17b).Concomitantly, the first-order torsion switches from band 8 at Γ to band 9 at Z. Interestingly, as a consequence of rehybridizations at avoided crossings, the original Γ-point hybridizations of the eigenmodes for bands 7 and 9 disappear at Z, such that there the displacements can be directly correlated with individual molecular eigenmodes of QA.Similar decouplings of the longaxis rotation and the 2nd order OP bending modes are also observed at X and Y.
Group Velocities
Phonon-related transport characteristics are, to a large extent, determined by the group velocities, v⃗ ν,q⃗ , of the phonons with the indices referring to the band number, ν, and the wave vector, q⃗ .This becomes apparent, for example, for the thermal conductivity tensor κ αβ , which (employing the Boltzmann transport equation in the relaxation-time approximation) is given by 73,74 Here, V is the unit cell volume and N q refers to the number of considered q⃗ -points in the sampling of reciprocal space, while the C ν,q⃗ are the mode-specific heat capacities and the τ ν,q⃗ are the respective phonon lifetimes.The mode-specific heat capacities are derived from the phonon energies and from the phonon occupations in thermodynamic equilibrium and are given by 73 with the mode eigenfrequencies ω ν,q⃗ and the temperature T.
Considering that the group velocities are the local slopes of the phonon band structures, their values for the different bands can be estimated already from the band plots in the band structures in Figures 4 and 5 and from the absolute values of the phonon frequencies in the angular-dependent phonon bands in Figure 6.For their more quantitative assessment, we pursued two approaches: first, group velocities of the acoustic modes were calculated in the long-wavelength limit for small wave vectors close to the Γ point in analogy to the angulardependent phonon band structures in Figure 6.Second, group velocities were calculated for all vibrational modes on a dense q⃗ -mesh sampling the entire 1st Brillouin zone.Then, they were weighted according to their temperature-dependent occupation based on their C ν,q⃗ -values to assess their potential contribution to quantities like thermal conductivity.In the long-wavelength limit, (i.e., close to Γ), the acoustic bands exhibit perfectly linear dispersion and their slopes with respect to q⃗ do not change with |q⃗ |.They rather depend on the direction of q⃗ , as can be inferred from the section on angulardependent band structures.To illustrate that, the norms of the group velocities |v⃗ g | are plotted for the three acoustic bands (again at |q⃗ | = 0.0044 Å −1 ) and in all spatial directions in Figure 7. .Note that to better visualize the angular dependences, the total ranges of v g vary between the different plots as indicated in the color scales to the right.
The structures of these plots are rather complex.The reason for that are the avoided crossings discussed already in the context of the angle-dependent band structures.They result in abrupt changes of |v⃗ | as a function of the q⃗ direction, resulting in nearby regions with group velocities significantly above and significantly below the average values.For band 3 (the mostly longitudinal band), for which |v⃗ | is plotted in Figure 7a, one can still see certain trends: the highest group velocities with 5.7 nm/ps (5.7 km/s) are found for phonon propagation directions parallel to the short molecular axis (green arrow in Figure 7a); another local maximum is found along the long molecular axis (blue arrow).Between these directions, one observes a kink in the evolution of |v⃗ | consistent with the above-discussed very "pointy" avoided crossing of the two highest frequency bands in the left column of Figure 6.The particularly high group velocity along the short molecular axis can again be attributed to the particularly strong interactions in the H-bonding direction.Conversely, in the π-stacking direction (red arrow in Figure 7a), one observes a very low group velocity.A more in-depth analysis reveals that the minimum group velocities are found approximately halfway between that direction and the direction parallel to the long molecular axis (i.e., approximately half way between the red and blue arrows).As shown already in Figure 6, for the mostly transverse bands (bands 1 and 2), the repeated avoided crossings result in distortions of the angular-dependent band structures with sometimes multiple local frequency minima and maxima and particularly pronounced local frequency variations.This then translates into the extremely complex variations of the group velocities in the polar plots of Figure 7b,c for which an identification of clear trends appears futile.
As the final step, we analyze the group velocities of the optical phonons together with the acoustic phonons in the entire 1st Brillouin zone.As a detailed discussion of all modes would explode the contents of the current manuscript, we opted for "collectively" analyzing their contributions to the thermal conductivity.According to eq 6, the key phonon properties that determine thermal transport are the modespecific heat capacities, the phonon group velocities, and the phonon lifetimes.The latter are inherently anharmonic quantities and their determination requires the knowledge of third-and possibly even higher-order force constants. 16,73As strictly anharmonic quantities, they are out of the scope of the present manuscript (and clearly beyond what is currently computationally accessible for ab initio methods applied to systems as complex as α-QA).Rather, we will consider the mode-resolved contributions to the thermal conductivity that depend purely on harmonic properties.They are given by the dyadic product of the group velocities weighted by the respective mode contributions to the heat capacity (with the latter accounting for thermal occupation of the phonon modes).This yields what we refer to as the harmonic contributions to the thermal anisotropy tensor, η αβ (in analogy to what was done in ref 15 To homogeneously sample reciprocal space in the calculation of η ν,q⃗ αβ , phonon frequencies and group velocities were calculated on a 56 × 34 × 15 q⃗ -mesh.Summing over the tensors for all modes weighted by the phonon lifetimes gives the thermal conductivity tensor, while merely summing over the η ν,q⃗ αβ yields a proportional quantity, if all phonon lifetimes were the same.While the latter is usually not the case, η αβ still provides a first impression of which phonons have harmonic properties that would make them relevant for thermal transport.Moreover, our preliminary tests suggest that at least in α-QA, there is no pronounced anisotropy of the phonon lifetimes, such that the anisotropy of η αβ yields at least a first hint toward the anisotropy of the thermal conductivity. Therefore, a polar plot of a projection of η αβ (the sum over all mode contributions described in eq 8) calculated for a phonon occupation found at a temperature of 300 K is shown in Figure 8a.The projections are obtained by multiplying the tensor from both sides with a unit vector pointing in a specific direction.Thus, the plotted quantity describes the contribution to the component of the heat flux in the given direction for a temperature gradient in the same direction.The angular dependence of the projection of η αβ is much smoother than that of the group velocities of the individual bands in Figure 7.We primarily attribute that to the averaging over several bands and, even more importantly, over the entire Brillouin zone.It, however, also has to be mentioned that when sampling reciprocal space on the surface of a sphere (like for Figure 7), a significantly denser q⃗ -grid can be chosen than for a threedimensional (3D) sampling of the entire 1st Brillouin zone (like for Figure 8).Nevertheless, also when considering all bands and the entire 1st Brillouin zone, several of the original results from the analysis of the acoustic group velocities at small |q⃗ | are recovered, like a particularly high value of η αβ in the direction of the short molecular axis, s⃗ , an intermediate value along the long molecular axis, l ⃗ , and the smallest value perpendicular to the molecular plane, n⃗ .
To understand that in more detail, the contributions of individual phonons in these directions were also analyzed.For that, the mode thermal anisotropy tensors were transformed to align their axes with s⃗ , l ⃗ , and n⃗ (η ν,q⃗ xy ≔ x⃗ T •η⃡ •y⃗ for x, y = n, s, l).The diagonal elements of the mode contribution tensors are shown in Figure 8b for η ν,q⃗ nn , in panel (c) for η ν,q⃗ ss , and in panel (d) for η ν,q⃗ ll .They provide additional insight, why phonon transport should be particularly efficient in the H-bonding direction (with 63 nm 2 /ps 2 as the highest contribution of η ν,q⃗ ss ), intermediate parallel to the long molecular axis (40 nm 2 /ps 2 as the highest contribution of η ν,q⃗ ll ), and smallest in the π-stacking direction (20 nm 2 /ps 2 as the highest contribution of η ν,q⃗ nn ). Figure 8b−d shows that the contributions of the acoustic phonons (more specifically, of all phonons with high participation ratios) to ( ) are particularly large in the direction of the short molecular axis.This becomes especially apparent in panel (c) of Figure 8, where particularly high values of η ss are found at low frequencies for modes displaying a bright yellow shading (denoting a high participation ratio of the respective modes).The contributions of the related bands are clearly reduced in the directions perpendicular to the molecular plane (Figure 8d) and parallel to the long molecular axis (Figure 8b).Notably, in the latter direction, there are rather sizable contributions from modes with low participation ratios between 1 and 4 THz.This is consistent with the data in Figure 4, where the bands in that spectral region are characterized by rather low participation ratios in the ΓZ direction (which is close to l ⃗ ), which once more illustrates that, when combining the different elements discussed in this manuscript, a consistent picture emerges.
SUMMARY AND CONCLUSIONS
In summary, due to its comparably simple structure, the αpolymorph of QA is ideally suited for studying the intricacies of phonon band structures of highly anisotropic organic semiconductors.As a useful starting point for such an analysis, we identify, how the eigenmodes of the molecules translate into Γ-point vibrations in the molecular crystals.Interestingly, intermolecular rotational modes occur in the same frequency range as the lowest-energy intramolecular backbone-bending vibrations.The order of the intramolecular modes that are found for the isolated molecule is largely maintained in the molecular crystal, although one generally observes a shift to higher frequencies in the latter case as a consequence of intermolecular interactions.These interactions, especially in the form of a geometric interlocking of neighboring molecules, can also be identified as driving forces for mode hybridization effects in α-QA already for Γ-point phonons.
The situation becomes significantly more complex when considering the phonon bands in the entire 1st Brillouin zone.There, the low symmetry of α-QA causes a multitude of avoided crossings.They are often accompanied by abrupt changes of the character of the individual bands, which can be directly inferred from analyzing mode participation ratios and mode longitudinalities.Avoided crossings not only arise from hybridizations between acoustic and optical modes but also repeatedly occur between (sometimes multiple) optical modes.An analysis of the character of the acoustic modes reveals that bands with unambiguous longitudinal or transverse character do not occur in the high-symmetry directions of reciprocal space.In fact, they are found only for phonon wave vectors parallel to the long and short molecular axes and the plane normal of individual QA molecules.The reason for that is that acoustic phonons are characterized by displacements that are parallel to these "molecular" directions.This means that, at least in α-QA, the crystalline packing has only rather little impact on the displacement directions of the molecules when exciting acoustic phonons.
Thus, our observation that the highest frequency acoustic mode always displays the highest degree of longitudinality implies that the nature of that mode must change as a function of the direction of q⃗ .Analyzing angular-dependent band structures (i.e., phonon frequencies as a function of the direction of q⃗ rather than of its length) shows that this change in the nature of the involved phonons occurs abruptly again as a consequence of avoided band crossings (now upon changing the direction of the phonon wave vector).
Due to the multiple avoided band crossings, the angular dependence of the group velocities of the acoustic bands (in the long-wavelength limit) adopts a particularly complex structure.One observes massive local changes of the group velocities with kinks and/or local minima and maxima upon only slightly varying the direction of q⃗ .Still, there are certain trends that prevail throughout all studied quantities: (mostly) longitudinal modes are associated with the largest band dispersions and, consequently, with the largest group velocities independent of the wave-propagation direction and the associated dominant type of molecular displacement.This suggests that longitudinal deformations (i.e., compressions and expansions) are consistently energetically more costly than transverse deformations (i.e., slips of neighboring molecules), despite the highly anisotropic structure of α-QA with welldefined π-stacking and H-bonding directions.Still, the bonding anisotropy has a profound impact on the absolute magnitude of the band dispersions and group velocities (even when averaging over all thermally occupied modes) with the highest values found for deformations in the H-bonding direction and the smallest values occurring for displacements perpendicular to the molecular planes.This is consistent with the notion that bonding interactions are reinforced by H-bonds, while perpendicular to the π-plane, the combination between van der Waals attraction, charge penetration, and (typically rather strong) exchange repulsion results in a much more shallow bonding potential.
Overall, the considerations in the present manuscript not only portray the power of the toolbox available for analyzing phonon band structures in complex materials.They also show that despite a large number of phonon bands in complex materials and the multitude of avoided crossings resulting from the often-low symmetries of organic semiconductors, atomistic simulations allow gaining an in-depth understanding of the properties of phonons as key quasi-particles of crystalline materials.In many cases, these phonon properties can eventually even be traced back to a more intuitive understanding of bonding in molecular crystals.Still, at this stage, the development of dependable structure-to-property relationships for the phonon properties of organic semiconductors is in its infancy, but we are confident that the current study forms a firm basis for future investigations.
■ ASSOCIATED CONTENT Data Availability Statement
The data underlying this study will be made openly available at the time of publication of the article in the NOMAD repository at https://doi.org/10.17172/NOMAD/2023.05.19-5.The data is also openly available on the repository of Graz University of Technology at https://doi.org/10.3217/zznj9-hd255.
Animations of the low-frequency Γ-point vibrations of α-QA.A document describing the following aspects: unit cell parameters of α-QA; details on the used basis sets and other electronic structure code settings; a comparison of experimental and calculated hydrogen bonding distances in several QA polymorphs, for PBE and the hybrid functional PBE0; convergence tests for the k-grid and the basis set; details on calculating vibrations of isolated molecules; visualization of the molecular eigenmodes; comparison of the frequencies and participation ratios for all molecular and Γ-point vibrations and their eigenvector overlap; test of the impact of the employed van der Waals correction; tests concerning supercell convergence, where the phonon band structure is presented for additional high-symmetry paths; a variety of tests elucidating the role of anharmonicities; an analysis of displacement types of acoustic phonons along both the high-symmetry paths and the molecular axes; additional details on the avoided crossings in both the high symmetry and the angular band structures (PDF) Gamma_mode_animations (ZIP)
Figure 1 .
Figure 1.Panel (a) shows the chemical structure of an isolated QA molecule, while panels (b)−(d) illustrate the molecular packing in the crystalline α-polymorph viewed in the directions perpendicular to the π-plane, along the short molecular axis, and along the long molecular axis, respectively.The lattice vectors a⃗ 1 , a⃗ 2 , and a⃗ 3 and the unit cell are shown in panels (b)−(d), and the atoms are colored using the code shown in the inset of panel (c).The green ellipses in panel (b) denote the atoms partaking in the H-bonding and the transparent yellow arrows in panels (b) and (d) highlight the directions of the H-bonded QA stripes.The relation between lattice vectors (black arrows), reciprocal lattice vectors (red arrows), the direction normal to the molecular plane n⃗ , the short, and the long molecular axes s⃗ and l ⃗ is shown in panel (e).The 1st Brillouin zone and the intersection with the molecular axes (purple) and the reciprocal lattice vectors (red) are illustrated in panel (f).The intersection points are labeled; they are the points referred to in Figures 4a, 5, and 6.
Figure 2 .
Figure 2. (a) Comparison between the low-frequency modes of the QA molecule and Γ-point frequencies in the α-QA crystal.(Largely) equivalent vibrations are connected by dashed lines, while details on the nature of the different types of vibrations can be found in the main text and are illustrated in panels (b)−(e): (b) 1st-order out-of-plane (OP) bending, (c) 1st-order torsion, (d) 2nd-order OP bending, and (e) 1st-order in-plane (IP) bending modes of molecular QA.The green arrows indicate (on a relative scale) how strongly the atoms are displaced when keeping the center of mass of the molecule fixed in space.The displacements of all eigenmodes contained in panel (a) can be seen in Figure S5 in the Supporting Information.
Figure 3 .
Figure 3. (a) Eigenvector overlap matrix of the first 12 molecular and α-QA vibrational modes.Values close to 1 imply strong similarity between molecular and crystalline modes, and elements below 1 imply hybridized modes.(b) Displacement pattern of mode 5 (purple) in α-QA relative to the equilibrium positions of the molecule and its periodic replica in the a 2 + a 3 direction.The arrows indicate the relative motion of the atoms; (c, d) pattern of atomic motions of mode 7 (yellow) and (e, f) of mode 9 (blue) in α-QA viewed from different directions.
Figure 4 .
Figure 4. (a) Phonon band structure of α-QA along the most important high-symmetry directions in the 1st Brillouin zone (see Figure 1f) showing the relation between vibrational frequencies (ordinate) and wave vectors (abscissa).Frequencies are presented in units of THz (left axis) and in wavenumbers (right axis) in panels (a)−(d).Phonon bands are colored according to the mode participation ratio (eq 2).For the sake of visibility, a different color code was chosen for panel (a) than for the other panels.Particularly, relevant sections along each band structure path are presented as close-ups in panels (b)−(d).The locations of the close-ups in reciprocal space are indicated by black rectangles in panel (a).The wave vectors of the highlighted sections are shown in reduced coordinates ξ of the reciprocal lattice vectors.Numbers in panels (b)−(d) denote the nature of the displacements characterizing a specific band.The labeling follows the order of specific modes at the Γ-point (cf., Figures2a and 3a).This illustrates the evolution of modes characterized by specific patterns of atomic motions in the region of avoided crossings between the bands.
Figure 5 .
Figure 5. Low-frequency phonon band structures of a-QA colored according to the longitudinality of the modes, which ranges between 0 and 1 (see the color bar at the top).Phonon frequencies are plotted for wave vectors along (a) high-symmetry paths of the 1st Brillouin zone and (b) along paths parallel to the molecular axes (see 1st Brillouin zone in Figure 1f).The asterisk in (b) indicates the longitudinal acoustic mode at N.
Figure 7 .
Figure 7. Angular dependence of the norms of the group velocities |v⃗ g | denoted as v g for the three acoustic phonon bands in the longwavelength limit (for |q⃗ | = 0.0044 Å −1 ).Panel (a) shows the situation for the highest frequency, mostly longitudinal acoustic band, while panels (b) and (c) show the situations for bands 2 and 1, respectively.The lattice vectors a⃗ 1 , a⃗ 2 , and a⃗ 3 are denoted by the black arrows, while the red, green, and blue arrows denoted the direction of the molecular plane normal n⃗ , the short s⃗ , and the long axis l ⃗ .Note that to better visualize the angular dependences, the total ranges of v g vary between the different plots as indicated in the color scales to the right.
Figure 8 .
Figure 8.(a) Polar plot of the harmonic contributions to the thermal anisotropy tensor summed over all modes projected onto unit vectors.As a consequence of the latter, the displayed quantity relates a temperature gradient in a specific direction to the component of the heat flux in that direction.The lattice vectors a⃗ 1 , a⃗ 2 , and a⃗ 3 are denoted by the black arrows, while the red, green, and blue arrows denote the direction of the molecular plane normal n⃗ , the short s⃗ , and the long axis l ⃗ .(b−d) Harmonic contributions to the thermal anisotropy tensor η αβ in directions perpendicular to the molecular plane, η nn (b); parallel to the short molecular axis, η ss (c); and parallel to the long molecular axis, η ll (d).The data points are colored according to the respective mode participation ratios. | 14,866.6 | 2023-05-26T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Matrix analyses of pharmaceutical products for the years 2017 to 2019 among public health facilities in Hadiya zone, Ethiopia: a cross-sectional descriptive study
Background To date, global healthcare spending becomes a primary concern, and pharmaceutical costs are the main drivers. The issue is more pressing in developing countries like Ethiopia. However, there is a scantiness of comprehensive data on inventory control practices in health facilities. This study, therefore, aimed to assess the criticality, financial value, and consumption patterns of pharmaceuticals using inventory matrix analyses and explore the related challenges. Methods A cross-sectional study supplemented with qualitative assessments was carried out from December 2020 to January 2021 in public health facilities. Three hospitals and 14 health centers were proportionally selected using a simple random sampling technique. Self-administered questionnaires and review of logistics documents and databases like Dagu-Facility were used to obtain the quantitative data. The data were analyzed using excel spreadsheets and SPSS version 23. We gathered the qualitative data through face-to-face in-depth interviews. Results The facilities spent 66,312,277.0 Ethiopian birrs to procure 518 pharmaceuticals between 2017 and 2019. Of the total products, 68 (13.1%) belonged to class A and 353 (68.1%) belonged to class C. Among 427 items identified by VEN analysis, 202 (47.3%) were vitals, and 201 (47.1%) were essential products making the highest proportions. Cross-tabulations of ABC and VEN showed that 230 (53.9%) items formed category I, representing 84.3% of total expenditures. Sterile surgical gloves #7.5, amoxicillin capsules, examination gloves, and 40% dextrose injection were among the top-ten high-value closing inventories, accounting for 21% of class X items. The fast-moving items were the most prevalent in all years, accounting for more than 45%, and shared the maximum expenditure, up to 90%. Scarcity of infrastructure and skilled human resources, shortage of pharmaceuticals and problems with suppliers, and management issues were the major challenges in the health facilities. Conclusion Most of the items identified by ABC-VEN and FSN-XYZ were Category one, i.e., mainly vital costly products and a few fast-moving items with high closing inventory values, respectively, suggesting close supervision. However, several issues became impediments. Hence, facilities should alleviate the bottlenecks and monitor the stock status to prevent theft and stock out.
Background
Pharmaceuticals are essential components of health care systems, and at the same time, they share substantial health care expenses. The high percentage of medicines spending, especially in countries with limited resources, is paid out of individuals' pockets. This imposes financial burdens on patients and creating an additional problem on policymakers [1,2]. On the other hand, health is a fundamental human right that can also be realized through access to essential pharmaceuticals [3]. However, one-third of the world's population lacks an opportunity to obtain essential medicines, and it is more severe in low-and middle-income countries [4], causing patients to suffer from even minor illnesses [5]. From the systematic review by Tewuhibo D et al., in Ethiopia, the average availability of essential drugs from 2003 to 2019 did not exceed 75% [6].
In this sense, the assessment of supply chain systems, including inventory control, is crucial to discern current expenditures for pharmaceutical products and patterns of consumption to design effective policies aimed at cinching universal access to essential medicines. An Effective Supply Chain (ESC) ensures the sustainable availability of quality, safe, and effective pharmaceutical products [7,8]. ESC can be achieved when proper pharmaceutical selection, quantification, procurement, and use are carried out taking into account consumption rates, clinical significance, and product costs [1,7,9].
To this end, several inventory control mechanisms exist to assess the clinical and financial implications of pharmaceutical consumption patterns [10][11][12]. These include; ABC analysis (always better control); This technique helps classify pharmaceutical items based on their financial value into A (few high priced items), B (medium level of moderately priced items), and C (a large number of low priced items).VEN analysis; classifies pharmaceutical products according to their criticality for health services and the prevention of death or disability into Vital, Essential, and Non-Essential. The other techniques are FSN and XYZ analyses which classify inventory items based on their consumption rates (fast, slow, and non-moving) and closing inventory values (XYZ) [10,12]. Nonetheless, each technique has its limitations, and thus the combination of them helps to take advantage of the specific techniques [10]. ABC-VEN matrix gives results based on the economic as well as the critical value of pharmaceuticals concurrently [13]. This matrix is a strong tool for a critical appraisal of pharmaceutical product use and assists in containing the value for essential medicine by permitting the expenditure on indispensable items [14]. Cross-tabulating XYZ and FSN provides insight for pharmaceutical usage or patterns of movement and their closing inventory values [15]. However, the poor inventory control system in health care institutions in developing countries, including Ethiopia, becomes a driver for insufficient or surplus pharmaceuticals, leading to depletion or expiration of stocks and inappropriate budget expenditure [16][17][18]. The study conducted in Kenya revealed that there was a huge mismatch between the criticality of the product (VEN) and expenditure-based classification (ABC) of pharmaceutical items [19]. FSN-XYZ matrix analysis done in Ethiopia showed that an unexpected amount of budget (20%) was disbursed on high-cost and non-moving pharmaceutical items [20]. Moreover, the study conducted in northern Ethiopia showed that a significant amount of the annual budget (75.86%) was spent on very few pharmaceutical products (17%), which needs management review and engagement in the whole process of the hospital's supply chain decisions [21].
Though few studies were conducted on inventory control practice using matrix analyses in some regions of Ethiopia, almost all have limitations. Most of them addressed ABC-VEN alone in the same types of health institutions, e.g., [7,[21][22][23]. The others studied at a supplier agency [11] or in a single zone [20]. Yet all of them lacked triangulation with qualitative data to provide a deep and complete understanding of the phenomenon. Furthermore, despite differences in regional administration, no study has been conducted in the Hadiya zone of the Southern Nations, Nationalities, and Peoples' Region (SNNPR). This study evaluated pharmaceutical inventories in hospitals and health centers using comprehensive techniques and qualitative data as a supplement. Thus, besides the clinical importance and drug expenditure, it will add information, such as the closing inventory values of pharmaceuticals with various consumption rates and associated bottlenecks to the existing body of literature. Hence, this study aimed to assess the criticality, financial value, and consumption patterns of pharmaceuticals using inventory matrix analyses techniques (ABC-VEN and FSN-XYZ) and also explore challenges in public health facilities of the Hadiya zone Ethiopia, with the goal of better managerial actions.
Study area and period
The study was conducted from December 15, 2020, to January 15, 2021, in selected public health facilities in the Hadiya zone, the SNNPR, Ethiopia. The zone covers an area of 3542.66 square kilometers and accounts for 3.8% of the total area of the region. It has 17 administrative districts. According to the Ethiopian Central Statistical Agency of 2007 last report, the population of the Hadiya zone was 1,243,776 [24]. Access to healthcare in the zone is ensured through public and private health care systems. Currently, there are 376 public health facilities, including four hospitals, 61 health centers, and 311 health posts, in the zone. A total of 1918 healthcare professionals from diverse backgrounds serve the facilities. These include 135 medical doctors, 95 health officers, 495 nurses, 96 midwives, 81 pharmacists, 218 laboratory specialists, 131 druggists, 629 health extension workers, 28 environmental health specialists, seven anesthesiologists, and three biomedical engineers.
Study design
A cross-sectional study supplemented with qualitative data was conducted to evaluate and explore pharmaceuticals inventory control and underlying challenges in healthcare facilities. The qualitative data were used in the discussion part to complement the quantitative findings.
Source and study population
The source population comprised all health facilities in the Hadiya zone, pharmaceuticals, health professionals working in those health facilities, good reception and issuance vouchers Models, records of closing inventory, and a local database, i.e., Health Commodity Management Information System (HCMIS) or the Dagu Facility [25]. The researchers used models 22, HCMIS, documents of closing inventory, and health professionals to source the intended data. The study populations were selected public hospitals and health centers, and pharmaceuticals managed and purchased by the facilities from 2017 to 2019.
Public hospitals and health centers that started operating three years ahead of the data collection period were included. Under Ethiopia's current health care structure, health posts act as dispensaries and are supplied directly by health centers [26]. Therefore, we excluded them from the study. Health professionals with service years of greater than three years in the health facility were recruited in both the quantitative and qualitative study. All drugs labeled as program commodities [27], including antiretroviral, anti-TB drugs, malarial commodities, vaccines, and family planning products were also left out from this study. In Ethiopia, these products are purchased at the central level by Ethiopian Pharmaceuticals Supply Agency (EPSA). As medical pieces of equipment are durables, we omitted them from the list.
Sampling procedures of health facilities
First, the Hadiya zone was randomly selected from 15 zones in the SNNPR [28] using a lottery method. Then the health facilities (HFs) were selected from the area. We used the USAID delivery project recommendation [29] to determine the sample size of the HFs. It recommends taking at least 15% of the total facilities to increase the power of generalization. Accordingly, the calculation gives about 10 health facilities taking the current total number of public hospitals and health centers (65 HFs) into account. However, to address all the 17 districts of the zone, we planned to randomly select one facility from each district. Three out of four hospitals were chosen considering the service year. The health centers were chosen from the districts where no hospitals were selected. Pharmacy heads and store managers were considered experts in this area and thus participated in qualitative data collection. In general, 19 participants (15 males and 4 females) were involved, and the sample size depended on the information saturation, i.e., the interviews ceased when similar issues seem to be repeated.
Data collection procedures
Pretested data extraction formats, developed based on previous literatures, were used to collect the desired information. The facility and participants' profile data were obtained from store managers and pharmacy heads available during the visit and volunteered to participate in the study. The annual consumptions and the respective value of each pharmaceutical product for three consecutive years, September 2017 to September 2019, were used in the ABCanalysis. These data were obtained by reviewing issuance documents (Model 22) used during the specified years. The researchers also extracted the data from Dagu-facility (HCMIS) software for facilities using the program. The clinical staff and standard treatment guidelines were consulted to determine the VEN category. The data for the XYZ analysis were obtained by reviewing the three-year closing inventory files and the facilities' HCMIS. The Fast, Slow, and Nonmoving (FSN) items identification was undertaken based on the frequency of pharmaceutical issuance per annum from the main store to different departments within the HFs. The data for this analysis were acquired from Model 22 and electronic record (HCMIS). The qualitative data were collected through face-to-face in-depth interviews. The questions were comprehensive and probing types. To ensure consistency throughout the interviews, the principal investigator moderated the discussion. Each interview lasted an average of 20 min and was audio-recorded using a smartphone. A local language, Amharic, was used at the interviewees' convenience.
Data analysis
SPSS was used to analyze socio-demographic and facility-related variables, and the ABC, VEN, XYZ, and FSN matrixes were analyzed using Microsoft Excel. Below are brief descriptions of the procedures for the matrix analyses.
ABC analysis
The total quantities of each pharmaceutical issued in the last three years, from September 2017 to September 2019, were compiled. The annual monetary value of each product was computed by multiplying a unit price by the quantity of each product. The percentage of value attributed to specific pharmaceuticals was determined and arranged in descending order. The next step was calculating the cumulative percentage for a discrete item and classifying them into A, B, and C classes based on the following thresholds. A class: about 10% of the items account for around 70% of the total values, B class: Consist nearly 20% of the items and takes roughly 20% of the total annual expenditure, and C class: includes around 70% of the items representing only 10% of the total annual value [9].
VEN analysis
Based on the clinicians' grading and treatment guidelines, the VEN analysis looked at the health impact of pharmaceuticals in the health facilities. The products used to prevent or treat serious illnesses and couldn't be substituted for other products were considered vital (V). To ensure consistency in reporting, items identified as vital in a particular institution remained the same in all facilities, even if they were essential in other settings. Pharmaceuticals somehow substituted and used against less severe but significant illnesses were classified under essential (E) items. Products indispensable for healthcare provision, but their absence didn't interrupt the services were regarded as non-essential (N) products [30].
ABC-VEN matrix analysis
we cross-tabulated the ABC and VEN results to generate different couples and groups of the pharmaceuticals. Accordingly, nine pairs of items were formed and divided into three categories. The first category comprised mainly the expensive and vital products (AV + AE + AN + BV + CV), the second category included most of the less costly and essential items (BE + BN + CE), and the third category contained the cheapest non-essential products (CN) [9,13].
XYZ analysis
The closing inventory values for each fiscal year, 2017-2019, were sorted and arranged in descending order, and the cross-ponding cumulative percentage was computed. The items were then grouped into the X, Y, and Z classes based on the following cutoff points; the first 70% of the total inventory value corresponds to X class, the next 20% are of Y class, and the last 10% of the value corresponds to the Z class [10].
FSN analysis
There is no common guiding principle for classifying commodities as fast-moving (F), slow-moving (S), and non-moving (N) [31]. However, for this particular study, we arranged pharmaceuticals procured from 2017-2019 in descending order based on their average frequency of issuance in a specific year. The resulting values were used to classify the items based on thresholds stipulated in previous studies. Ultimately, the products issued on average ≥ 15, 5-15, and < 5 times comprised F, S, and N items, respectively [20,31].
FSN-XYZ matrix analysis
The FSN and XYZ items coupling was done through a cross-tabulation on an excel spreadsheet. Nine couples were formed and then classified under three categories. Category I commonly comprised the fast and high-value items, including FX, FY, FZ, SX, and NX. The second category constituted SY, SZ, and the last contained the nonmoving and low-value items (NZ). At the last, the quantitative findings were summarized using tables and figures [10].
A thematic analysis technique was used to analyze the qualitative results. The records were transcribed to the English language by the authors and verified by an expert from Jimma University. After the rehearsal of the texts, variables were coded manually in a word document to identify appropriate themes. The themes were described in narrative form, followed by quoting the opinions of some respondents. Finally, the findings were used at the discussion phase to support the quantitative results.
Data quality assurance
We trained quantitative data collectors for one hour on the data collection process and how to acquire the intended information. And the investigators oversaw their daily activities by checking the completeness of the data extraction formats. The formats were developed by examining various types of literature and were evaluated by relevant and experienced researchers to maintain content validity. Moreover, a pre-test was conducted in two randomly selected health facilities in the Hadiya Zone, where both were excluded from the actual study. After the test, we reviewed each questionnaire for changes in responses for different administrations under similar circumstances and determined the variations. All questions produced consistent results and, therefore, were reliable. Concerning the qualitative assessment, all researchers participated in the transcription and compilation of the data. The researchers have experience in qualitative and quantitative research and are all pharmacists with master's degrees in pharmaceutical supply chain management. Besides, a qualitative research expert was also involved in verifying the transcription.
Characteristics of respondents and health facilities
We visited fourteen health centers and three hospitals, and all of them were volunteers making a response rate of 100%. The self-administered questionnaire showed that only 5 out of 17 health institutions had a pharmacy manager (three hospitals and two health centers) at the time of the visit. Out of the total facilities, only 2 (11.8%) and 6 (35.3%) of them had electronic records and functional DTC, respectively. Eight (47.1%) facilities had drug formulary or standard treatment guidelines. Only 4(23.5%) of the facilities got adequate support from the top-level management. In terms of human resources, the health facilities had 89 employees under the pharmacy department serving in various positions such as dispensaries, store manager, and pharmacy headship. Pharmacy heads (five) and store managers (eighteen) participated in the study. Of the participants, 20 (87.0%) were pharmacy professionals, and 17 (73.9%) had diplomas. Twelve of them (52.2%) had more than five years of work experience. Sixteen (69.6%) respondents received IPLS training. Only nine respondents (39.1%) were satisfied with their current job (Table 1).
ABC analysis
Ethiopian Birr (ETB) 66,312,277.0 spent on 518 items between 2017 and 2019 in the health institutions of the Hadiya zone. Of the total items consumed in three consecutive years, 68 products (13.1%) belonged to class A, accounting for ETB 46,556,646.2 (70.2%). B class contained 97(18.7%) items taking ETB 13,417,858.3 (20.2%) of the total pharmaceuticals expenditure. Three hundred fifty-three (68.1%) products with a value of ETB 6,337,772.4 constituted C class items ( Table 2). Out of the 68 class A items, Sodium chloride(N/S)-0.9% IV infusion, Glove examination latex medium size, and Tetanus Antitoxin 1500 IU in 1 ml Ampoule contributed 15.2% of the total expenditures (Fig. 1).
VEN analysis
Of the items identified through VEN analysis (n = 427), 202(47.3%) of them were vitals accounting for 64% of the drug costs. Two hundred one (47.1%) of the pharmaceuticals belonged to essential items and they took about 33.9% of the total annual budget. There was a slight variation in the proportion of vital items in hospitals (40.8%) and health centers (37.4%). However, non-essential items were more prevalent in health centers than hospitals accounting for 11.7% and 5.4% of the total products, respectively (Table 2). Table 3).
FSN analysis
Of the 518 pharmaceuticals issued in the health institutions, fast-moving items accounted for the highest proportions. Ultimately, in 2017 they attributed to 45.5%, and in 2018 and 2019, they represented 46.5% of the issues. More than 80% of the total expenditure was spent on these products over the three years. On the other hand, slow-moving items were as prevalent as the fast-moving items in the last two years, 2018 and 2019 (Table 4).
XYZ analysis
From the XYZ analysis, most of the closing inventories were Z items comprising about 40%, followed by Y accounting for 30% of the items, and they represented 9.5% to 20.4% of closing stock values ( Table 4). The X class had low annual proportions, but it accounted for the highest stock values, around 70% every year. As shown in Fig. 2, the first four mostly stocked high-value items, including sterile surgical gloves # 7.5, Amoxicillin Capsules 500 mg, examination latex gloves medium size, and 40% Dextrose injections, accounted for approximately 21% of X class values (Fig. 2). (Table 4).
Qualitative results
The ages of the participants ranged from 25 to 40 years old, and they had service experiences of 3 to 12 years in their current position. Most of them were pharmacy professionals with a bachelor's degree and a diploma. The interviews also included three clinical nurses from the health centers working as storekeepers. The interviews revealed various bottlenecks in ABC, VEN, XYZ, and FSN inventory control practices in health facilities. The findings were summarized into the following four themes, depending on the characteristics of the data.
Scarcity of infrastructures
Like human resources, sufficient physical assets are mandatory for effective pharmacy practices, including inventory activities. In contrast, the interviewees revealed that health institutions, especially most health centers, faced shortages of premises, including offices, separate toilets for staff, power outages, and water scarcity. In some institutions, there was difficulty in regular physical inventory counting to identify the status of pharmaceuticals due to narrow and non-standardized storage facilities. Even irrelevant items such as stationary materials and detergents might be stored along with pharmaceuticals in the free spaces in the storerooms. As a result, employees get challenged to sort out, plan, and control their inventories properly and provide data for informed decision-making. The purchasing of pharmaceuticals is mainly carried out based on trends and needs that have emerged from the different departments within the facilities. Only a few facilities had the experience to analyze their products based on their value, importance, and movement for better decisions. One of the interviewees elaborated as follows;
Shortage of skilled human resources
The scarcity of qualified and appropriate human resources in almost all institutions became an obstacle to performing pharmaceutical and logistics activities. Most health centers had only two pharmacy experts; one acted as a store manager and the other as a dispenser. Even in three surveyed health facilities, nurses worked on behalf of pharmacy professionals (as pharmacy heads). A store manager at one of the health centers explained the problem as follows; "It is a good practice to conduct pharmaceutical matrix analyses to improve inventory management. In reality, it is a difficult task in a firm where the scantiness of skilled human resources is a problem. Even, in our facility, non-pharmacy professionals might be assigned to perform dispensing practices during the annual leave, sick leave, or maternity leave of the pharmacists. "(Male store manager, four years of service experience).
Another storekeeper at a hospital added,
Supplier-related issues and unavailability of pharmaceuticals
The lack of some but critical pharmaceuticals at the Ethiopia Pharmaceutical Supply Agency (EPSA) had led to procurement from private suppliers for costly prices. Even, sometimes they may not be available at private wholesales. On the other hand, EPSA's resistance in supplying the intended types of medicines in the required amount and its pushing practices of irrelevant and or near expiry items to the medical institutions further complicated inventory control systems. A pharmacy head of one health center described; ''When we place orders with the Ethiopian Pharmaceuticals Supply Agency, we should collect surplus products, though they are not part of our demands, to get the intended services. This increases the inventory level that ties up a budget and occupies substantial spaces of the storerooms. The agency also issues near expiry products that may later expire in the health facilities, especially if they are slow-moving items." (Head of the pharmacy, service experience of four years).
Another nurse store manager explained; "This year, ceftriaxone 1 g and 0.5 g injections have been out of the market for a long time, and now it is bought for 70 and 50 Ethiopian birrs, which is much higher than previous costs, 12.5 and 9.53 birrs, respectively. " (Female store manager, service experience of five years).
Administration related issues
The negligence, lack of commitment, and attitudes gabs towards pharmaceutical services by senior management and committees in health facilities negatively influenced the morale work of pharmacists. In most of the health institutions, the drug therapeutic committees (DTCs) were not functional for selections and procurements of pharmaceuticals. Physical counts of inventories are hardly conducted regularly, the majority of the facilities counted at the end of the fiscal year. Decisions on pharmaceuticals need the involvement of appropriate professionals. Nevertheless, the discussions and judgments on pharmacy activities in the hospitals and health centers usually ignore relevant professionals. Consequently, the majority of pharmacists and druggists were unsatisfied with their current job and lack the motivation to take additional responsibilities. One of as store managers elucidated the problem as follows,
Discussion
Inventory control through diversified techniques is a key to the containment of operational costs and the sustainability of pharmaceutical services [32]. A variety of matrix analyses were used in the present study to classify pharmaceuticals in terms of cost, criticality, and stock movements, thereby determining the efficiency of health institutions in budget utilization and inventory management. ABC analysis revealed that of the 518 pharmaceutical products purchased in the past three years, from 2017 to 2019, 68 (13.1%) belong to class-A and 97 (18.7%) belong to class-B representing ETB 46,556,646.2 (70.2%) and ETB 13,417,858.33 (20.2%) of Total Pharmaceuticals Expenditure (TPE), respectively. Despite the highest proportion of class-C items, the value of class-A products is still high, surpassing three and a half folds the prices of B items. It's an indication of weak inventory monitoring at the facilities, as also evidenced by qualitative data. In reality, class-A items are required in small quantities and need strict control, including frequent counting and better forecasts [33]. Nevertheless, the interviewees revealed that most facilities rarely conducted regular physical inventories due to inadequate workplaces, lack of qualified human resources, and poor administrative support. The results are comparable to reports from Turkey, India, and Arbaminch, Ethiopia [23,30,34], but slightly different from the study by Abdelmonim Ahmed H et al. of Sudan [35]. Class B requires moderate regulation, while Class C requires only minimal order and procurement controls, and these actions can be undertaken by middle and lower-level managers, respectively [21]. The share of class-C items in the current study was about 68.14%, which is encouraging.
The present study showed that from 427 pharmaceuticals used in the facilities and identified through VEN analysis, the essential items and vitals took the highest percentages accounting for 47.1% and 47.3%, respectively. Surprisingly, these results are promising despite poor inventory control practices, negative perceptions of facility administrations, human resource constraints, and related issues highlighted in the qualitative data. It's also far better than a report from Sudan and India [35,36] concerning V items, with only 28%and 33% reports, respectively but comparable in terms of the essentials. It can be a phenomenon associated with procurement based on demand from wards and units of the institutions in the present study. Vital items are lifesaving and cannot be substituted, and require continuous availability and reasonable safety stock. Its absence even for a day is unacceptable and can lead to death and disability of patients and complications of illnesses [37].
The ABC-VEN Matrix dictates the financial and clinical implications of pharmaceutical use. In this study, items classified as Category I (AV + AE + AN + BV + CV) were 230 (53.9%) and consumed 84.3% of the TPE. Category B contained 176 (41.2%) items representing 15.4% of TPE. The less expensive but vitals (CV) and essential items (EC) with percentages of 29% and 28.8%, respectively, were the highest proportions in categories I and II. From the overall findings, category I consumed an enormous percentage of TPE. Hence, more attention and strict management control are required, as these elements are either costly or vital [21]. In contrast, unlike clinical services, the top administration of the facilities had negligence, lack of commitment, and poor perception towards pharmaceutical services, as per information obtained from key informants. These will create a favorable environment for theft and wastage of those expensive or vital products [38,39].
FSN analysis classifies items based on the rate of consumptions/issuances from the medical store to the various wards, clinics, and units within the facility. It supports timely actions to remove dead stock and prevent accumulation [40]. In the present study, the fastmoving items were the most prevalent in all years, accounting for more than 45%, and shared the maximum TPE, up to 90%. And most of the top ten of these items in the facilities were medical supplies, but health centers also had medicines. Such products need strict monitoring as their absence compromises health care delivery and even leads to dangerous consequences if they are lifesaving ones [40,41]. The slow-moving items showed an increment in 2018 and 2019, accounting for 46.3% and 49.3% of the issues, respectively. It's suggestive of a weak monitoring system, for example, less frequency of stock-taking in the facilities, probably because of shortage of skilled human resources, staff dissatisfaction, and scantiness of electronic recording tools. The slow and non-moving items should be at a minimum level to reduce the length of a budget tie-up to products, minimize pharmaceutical waste, and a long-term occupation of a warehouse [38,42]. The proportion of F items is comparable to a report from West Arsi, Ethiopia (35.3%), and India (46.0%) [20,43].
XYZ analysis identifies pharmaceuticals that have a high, medium, and low stock value at the end of a fiscal year and helps to determine the amount of budget linked to products at the same time [44]. Class X pharmaceuticals need special attention as they share a high stock value, which can be used for other purposes. Class Y items take an average stock value and require modest control. Substantial portions of products, ranging from 31.7% to 49.5%, in this study were in categories Y and Z, representing a minute fraction of the stock's values. In contrast, a few class-X items, accounting for less than 26.7% of the closing inventories, claimed approximately 71.1% of total stock values on average. The current study facilities outperformed those reported by Reddy DK et al. [7], where only 10% of X items accounted for 59% of total values in tertiary care hospitals in India. The disparity could be due to the variety of items and settings studied, with the latter study focusing on cardiac medicines in a single facility.
According to the FSN-XYZ matrix analysis, category one (FX, FY, FZ, SX, and NX) had the highest proportion in all years, with a minimum of 40.1% in 2017. This class also shared the maximum TPE, up to 84.5% where the FX items contributed the most, about 60%. As these items are expensive and in high demand, stock outs are more likely. According to evidence, more than 80% of Ethiopia's pharmaceutical needs are met through imports from international suppliers, which may result in longer lead times [45]. Thus, the facility administration, EPSA, regional health bureaus, and the Federal Ministry of Health (FMOH) should strengthen the supportive supervision to address the issues such as infrastructure, staff commitment and satisfaction, and administrative supports through diverse platforms and sharing real-time information with various stakeholders, including nongovernment organizations. The findings are similar to the study by Jobira et al. [20] in health care institutions in the West Arsi region, Ethiopia.
This study had limitations in that a few of the facilities used HCMIS software for information management while the majority used paper-based tools. The data obtained from these two groups of facilities may not be consistent. As some facilities failed to include the price of the pharmaceuticals on their reports (Model 22), we used average prices for the matrix analyses.
Conclusions
Overall, the results showed that most of the items identified by ABC-VEN and FSN-XYZ were in the first category in both cases, i.e., mainly expensive but life-saving and a few fast-moving with high closing inventory values, respectively. This category could likely be superior, as procurement practice in most facilities was based on clinicians' suggestions. The pharmaceutical selection and procurement could be improved to avail affordable but still lifesaving items if the facilities had functional DTC and tools like formulary manuals, standard treatment guidelines, and essential medicine lists. Otherwise, it will not be easy to conduct bulk procurement as the prescribers' needs may not be known in advance. However, in the present study, more than half of the institutions lacked the tools and functional DTCs. These items are highly susceptible to theft and depletion and, therefore, require strict controls such as frequent counting, use of lockable cabinets, and, most importantly, automation of inventory control systems. Unfortunately, the current institutions had several problems, including a shortage of qualified personnel, and inadequate electronic registration system, a lack of staff motivation and commitment, and weak administrative support. Therefore, health facilities should work to mitigate the challenges and put in place adequate controls, such as audits, frequent supervision, and maintain low buffer stock to prevent locking up of budget to these items. | 7,536.2 | 2022-02-07T00:00:00.000 | [
"Medicine",
"Economics"
] |
Environmental Character: Environmental Feelings, Sentiments and Virtues
An argument is made that to further develop the field of environmental virtue ethics it must be connected with an account of environmental sentiments. Openness as both an environmental sentiment and virtue is presented. This sentiment is shown to be reflected in the work of Barbara McClintock. As a virtue it is shown to a mean between arrogance and the disvaluing of individuals, a disposition to be open to the natural world and the values found there. Further development of EVE is then shown to require a connection with an account of environmental wisdom.
Environmental wisdom is needed to give direction and inform decisions along the way of living an environmentally good life.Coupled with a full account of what kind of person EVE would have us be, there is needed an account of an environmentally wise person, someone who has a sophisticated understanding of the [natural] world and is adept at living a morally satisfying life informed by that understanding.Such a full account of environmental wisdom would be beyond the scope of this paper.This paper will focus on the connection between environmental sentiments and environmental virtues.I will argue that it can be shown how the two are related and an account of a distinctive environmental sentiment of a feeling towards natural entities can complement the environmental virtue of openness.
Environmental Emotions and Sentiments
It is commonly held that being able to experience moral emotions is a sign of a person able to function morally well.It is further held that a person who can experience moral emotions such as guilt or shame, admiration, disgust, remorse, regret, outrage, and sympathy has the necessary psychological make up that is part of being a morally good person.To lack the ability to frequently experience these feelings indicates a person of a poor moral character.Thus we claim that a psychopath or sociopath is not morally a good person.When a person has developed a moral generalize feeling for the good over time, we hold that such a person has developed moral sentiments.These can be understood as higher-order feelings, motivational dispositions that incline a person towards the good (Rawls 1971, p. 479).We consider people to be morally good when they have internalized moral norms and are motivated to act out of a sense of what is good, rather than a sense of punishment or reward.There are a variety of moral sentiments that people may feel as motivating them: duty, honour, caring, nobility, to name some.In the spirit of trying to characterize what makes up the moral psychology of an environmentally good person, we can ask what might be some distinctive environmental sentiments, emotional dispositions that regularly motivate people to engage in environmentally good actions.Unfortunately, there has been little work connecting moral feelings and moral sentiments to environmental concerns (see : Partridge 1996;Steverson 2003;Callicott 1990;Colins & Barkdull 1995;Bowles 2008;Fieser 1993).
One such environmental sentiment involves a positive feeling towards natural entities or towards Nature as an interconnected whole.It is a feeling not just of love and admiration of natural entities, but a feeling of being in touch with the entities as they are in themselves without reference to any further human utility.This is a feeling of openness to any value, objective or subjective, intrinsic or inherent, anthropocentric or non-anthropocentric that a natural entity may have.It is a response to the organism or entity for what it can convey to us.It is a feeling that is captured by the fundamental sense of the word pathos.While today the word primarily refers to expression of pity or sympathy, pathetic in a root sense involves the capacity to feel what there is to feel from another, whether person or any other entity.While there is no specific word that refers to the kind of feeling, the closest word might be "openness." This is the feeling identified by Jay McDaniel (1986) in his discussion of the attitudes towards natural entities in the work of the geneticist Barbara McClintock.In McClintock there was an "appreciative and intuitive apprehension of an organism" (McDaniel 1986, p. 44).This 'feeling for the organism' is a manifestation of the environmental virtue of openness.In this case it is an openness to each plant in her research fields, an openness to what each plant had to say.
This openness led to a three-fold apprehension of the organism.First, such openness reveals any organism in Nature as unique.No two organisms are exactly alike and openness to them allows one to experience and appreciate the inherent or intrinsic worth of each creature.There are always going to be more things that we can learn from each individual organism if we can keep from sliding into the arrogant posture that says "If you've seen one redwood, you've seen them all."Second, openness to natural entities can reveal organisms as mysterious others.Such feeling carries with it the awareness that there is always something wholly other to each organism.The possibility of each creature being only partially knowable by science stands as a direct contrast to the arrogant attitude that sees living beings as fully knowable.The otherness of each organism is recognizable only by having an openness to the 'otherness' of each creature.Arrogance is blind to the fundamental mystery of the other and the uniqueness of each creature would dissolve in the attitude that rejects the distinctiveness of each creature.Third, openness reveals each creature as a fellow subject, a locus of intrinsic value, worth and respect.This feeling for the organism involves an openness to the goals, concerns, and intrinsic needs of any or all creatures over and beyond the instrumental value they might have for humans.
The focus on the individual organism can be broadened to include a feeling for or an openness to ecosystems, natural areas or events, or to Nature as a whole.Such an openness reveals a basic interconnectedness between all natural entities.It challenges the idea of individual entities as fundamentally atomic, isolated, and unconnected things.The arrogant attitude of a person who thinks he or she is truly removed from the consequences of his or her action is called into question by this feeling for basic interconnectedness.To be open to Nature is to recognize that a person is one subject among others interwoven in a net of relations.At a fundamental level I am, as Ortega y Gasset pointed out, myself and my circumstance.While contemplating Lake Solitude in the Rocky Mountains Holmes Rolston, III asks: "Does not my skin resemble this lake surface?Neither lake nor self has independent being: both exist in dynamic interpenetration across a surface designed for passage and exchange, as well as for delimination and individuation" (Rolston III,p. 224).Note how, in his openness to the lake and its surroundings, Rolston does not lose his sense of self completely, recognizing that an essential feature of living is to be in symbiosis, to stand in reciprocal relation to other natural entities.Nature provides to the 'open' person an experience of distinction as well as connection.Of this feature of Nature Rolston says: To her teasing, her relentless stimulus, we respond in a 'standing out,' an existence, where the I is differentiated from the Not-I.The environment moves; we are moved, but then reaction is elevated into agentive action.Ecological prodding brings forth the ego."The landscape thinks itself in me, and I am its consciousness" (Cezanne) (Rolston III,p. 225).
Through an openness to wildness the self can discover itself in this intimate relation with Nature.But also in this encounter with wildness Rolston discovers the mysterious other mentioned earlier.In his openness to Nature he discovers not only a connection but an opposition that offers a chance to realize a needed complement to himself that can no longer be found in civilized life.Of this experience of existing in the natural world Rolston says that with this there is a communion, but of opposites.The medium that a person is in and of, he or she is also over and against.When the person encounters a world different from himself, he faces a centrifugal wildness which, if unresisted, will disintegrate his centripetal self, but which, if withstood, may be incorporated and domesticated.To travel into the wilderness is to go into what one is not, so that in returning to and turning from its natural complement, mind grasps itself.I encounter an other of which I have the greatest need.Thus journey here is an odyssey of the spirit traveling afar to come to itself (Rolston III,p. 225).
The danger to the self is what makes the closed person, the environmentally arrogant person, so readily willing to destroy the natural world.Perceiving perhaps at an unconscious level the fundamental challenge to the self, the environmentally arrogant person is willing to destroy the threat to self rather than take to challenge to confront the mysterious other and return to the domesticated world with an expanded sense of self.
And in this encounter the mysterious other is revealed not to be a mere object, but another subject with its own telos.An end which has its own intrinsic worth to be respected, whether it be a single corn seedling or an entire wilderness area.Through the feeling of openness to nature a person thrives in between the impoverished self-closed to the enriching encounters possible with nature and the disvalued self that refuses to place itself within Nature as a valued subject among others, unwilling to say "I" to the "Thou" of nature.
From Environmental Sentiments to Environmental Virtues
When this environmental sentiment has been internalized in a person to the degree that responding this way to natural entities has become an integral trait of a person's character we can speak of openness becoming an environmental virtue (Aristotle 1962. So far I have been speaking of distinctive environmental virtues.One immediate challenge is to show that there are such virtues, and that EVE is not merely an extension of traditional virtue ethic thinking into environmental concerns.This objection has been raised and answered before (Frasz 2004).But another defence can come from looking at the way Aristotle himself objectively grounds virtues and by showing that his grounding is incomplete and, when expanded can allow for distinctive environmental virtues.
As argued by Martha Nussbaum (1988, pp. 32-39; French, Uehling, & Wettstein 1988) 1 , Aristotle in his Ethics (1962, ch. 2.7, 3.5, 6) begins with "a characterization of a sphere of universal experience (what Nussbaum calls 'grounding experiences') and choice, and introducing the virtue name as the name (as yet undefined) of whatever it is to choose appropriately in that area of experience."He then will flesh out his account what the term means and why it is the better account than rival accounts.
Aristotle identifies first various spheres of human experience where people have to make various choices.These are areas that contain experiences that are common to more or less any human life.They involve things that most people have to address in living their lives as human beings.For example, it is a feature of human life that we can choose how to respond to danger and threats to our life, we can act in different ways.Courage would refer to those appropriate responses and actions, while cowardice and foolhardiness refer to inappropriate responses and actions.Second, he asks, what is to choose and respond well within such a sphere?or hat is it to choose or act in poor, deficient, or excessive way in such a sphere?His "thin" list of virtuous terms reflect whatever it is to be stably disposed to act appropriately in that sphere.We may or may not have a good word such responses or actions.His "thick" account is his reasoned defence of one particular account of a virtue and why it is to preferred over some other account.
What Environmental Virtue Ethics adds to this account is to describe the missing "environmental" grounding experiences of human life and then "thicken" the definitions of the virtues of the virtues by showing how particular new definitions provide an understanding of better ways to act or choose appropriately in these spheres.Different accounts of "green" virtues can be 1 Nussbaum points out that traditionally for various spheres of human life there are identified particular virtues.In the sphere of fear of important damages, especially death, there is the virtue of courage, in the management of personal property, where others are concerned there is the need for generosity, etc.These are objective spheres of human concern that transcend relativistic cultural concerns.seen as different answers to how to live well or in an excellent way with the natural world.We can assess the definitions as different solutions in light of how well they deal with the complexities of living in harmony with natural world.Some EVE theorists have already extended traditional virtues into these environmental grounding experiences.2What I argue for here is a distinctive "environmental" virtue, that of openness as a way or habit of character in responding to the "Other" of natural entities of all kinds.
An environmental virtue is a perfection of one's character regarding the relationships between self and the natural world, a person with a good environmental character should be able to flourish more fully in such relationships and in personal development.Having environmental virtues such as openness and leading an environmentally good life makes flourishing in a biotic community more likely.I follow James Liszka when he describes a flourishing life as one that allows you to enjoy the real pleasures of life, to engage in highly qualitative relationships with others, to attain a certain amount of wealth in a respectable way, and to reach a certain well-deserved status and recognition, but all within the context of virtuous living (Liszka 1998, p. 238).
When this openness is part of one's disposition to respond to the natural work and its members, one is more likely to establish the kind of qualitative relationships Liszka describes.
One way to describe the nature of this characteristic of openness is a negative way, by describing what it is not.i.e., by describing the extremes that characterize the vices associated with this virtue.On one end of the scale is the arrogance of overbearing self-importance.This trait closes a person off to any experience of value or worth in nature other than seeing it merely have an instrumental value for achieving personal ends.It is also a fixed perspective on the world, one that cannot see beyond oneself and what matters to oneself.It is the arrogance that one has all the answers, all the insights, and all the means for achieving a full human life.What disturbs us in an arrogant person is the 'closed-ness' of his or her views, the narrow or closed-mindedness of a person who is incapable or unwilling to consider a different viewpoint.As intolerance or narrow-mindedness towards other persons is considered a vice, an extension of this trait towards other natural entities is an environmental vice.We consider a person who is emotionally closed off to Nature as spiritually dead, incapable of appreciation of natural things except in terms of resources solely for human ends.And as such a person may very well live a life of limited love and affection towards other people, and may see them only insofar as they can contribute to one's own needs.Would not a person who is closed off emotionally to natural entities also live a shallow life of limited love?While there is no guarantee that being open to Nature will also manifest itself in openness to other people, it can be argued that someone who is more open to other people as they are in themselves, could be more likely to expand this sense of openness to nonhumans because there are fewer boundaries between the person and other beings.And as the arrogant person is less likely to consider the consequences of an action except as they impact on him or herself, the environmentally arrogant person, one who is closed off to natural entities as they are in themselves, is less likely to consider the environmental effects or consequences of actions towards nature.It is widely agreed that this insensitivity to environmental effects has led to the environmental crisis facing us today.And contributing to this crisis is the arrogance of perceiving nature only in instrumental terms, to be closed off to natural entities as they are in themselves.It is also the insensitivity to the feelings of nonhuman entities which manifests itself in uncaring cruelty in the way animals are used for food, sport, or research.The arrogant attitude is blind to this value for it can conceive of other creatures only as instrumental objects, not as subjects in their own right of a life.To be able instead to say "Thou" to another creature requires that I be open to the distinctiveness of that organism as subject of a life3 .The environmental virtue of openness is necessary to be able to intuitively apprehend this subjectivity in others.Consequently, openness fosters respect for each organism as a subject with intrinsic value to it.And the honest recognition of subjectivity requires that one be open to subjectivity in other organisms and in oneself as well.Arrogance is one end of the scale and is a vice that has been widely noted and discussed.On the other end of the scale is a trait less widely discussed but still a vice nonetheless.It is the trait of an excessive lack of self or of worth of the individual in environmental matters.This trait becomes manifested in those environmental ethics that propose to extend moral consideration to some or all living things.Sufficient it now to indicate that this attitude is something that is often mentioned in discussions of environmental fascism.This discussion is examined in (Callicott 2013, pp. 225-226).
By focusing on these extremes of environmental openness it is possible to delimit the range of what would be an acceptable scope to a 'proper' openness of natural entities.But there is also a need to specifically develop the features of this virtue.In a positive sense 'openness' is an environmental virtue that establishes an awareness of oneself as part of the natural environment, as one natural thing among others.A person who manifests this trait would be one who is neither closed off to the humbling effects of Nature, nor one who loses all sense of individuality when confronted with the vastness and sublimity of Nature.Such a person is capable of feeling a response triggered by natural events, who is able to let Nature speak to him or her.It is a receptiveness to natural entities as they are in themselves.We value openness to other people as an esteemed quality of character since it fosters feelings of love and appreciation for other persons.I believe that as this quality is developed between persons, when it is coupled to an understanding of human beings existing within Nature, that openness to Nature will be more likely to result.Furthermore, I hold that fostering such an openness to natural entities can reinforce a sense of openness to other persons4 , a quality which we already esteem and value.To the challenge of why should people develop this virtue I reply that not only will it develop an attitude that values Nature for its own sake, but will also contribute to the development of qualities of openness to other people.If a critic is willing to admit to the value of openness between persons, I reply that that character trait can be fostered by activities that 'open up' a person to Nature.One such activity is the experience of wilderness solitude.
When the experience of community with other persons is replaced by the experience of solitude in wilderness, then the best possibility exists for encounter with the natural order of things.Again, there is no guarantee that such an awareness will, indeed, occur.But such a condition of being alone in wilderness does make such an encounter more likely.The experience of wilderness solitude, whether by a wilderness lake, in a wild cave, or among the peaks of a mountain range, can open up a person to an awareness of nature of things, as they are for themselves, as they exist independent of human needs or concerns.One goes to wilderness to encounter the natural order, to discover one's place within this order, to be open to what Nature has to say.In this encounter one discovers that one is both self and circumstance, that the self is a semi-permeable membrane.One is neither closed off to Nature nor does one lose the sense of self-worth.And returning from the wilderness to the community of other persons, one can have the further appreciation of persons as other others who exist in a community of interdependence with roots in the natural order.But in order to fully appreciate all the members of this mixed biotic community, it is necessary that a person be able to first feel what Nature has to reveal.When responding with this environmental sentiment becomes a disposition of one's character, then a person has begun to cultivate openness as an environmental virtue.
Further Work In for a Full Theory of Environmental Character
This paper represents an approach to complete EVE by showing a connection with the environmental virtues that are associated with environmental feelings/sentiments.However, a full theory of EVE will need to develop what these are and how they connect to environmental traits of character making up the bulk of current work on EVE.Furthermore, a full theory of EVE must also provide an account of the nature of strength of character regarding these virtues.While it is good to feel a desire to care for nature and a concern that this desire be reflected in the behaviours that make up one's character.Yet to make sure this concern is reflected in action requires an account of how strength of will plays a role in maintaining environmentally grounded action, even sacrifice.Such an account will also have to explain the phenomena of weakness of will when attempts are made to adopt a better, environmentally sensitive set of actions and modify one's character to reflect environmental concerns.While autonomy is often held necessary for doing the right thing, how this self-legislating stance can be connected to an environmental sense of the interconnectedness of all things remains still undone.But even if a theory of EVE provides account of sentiments, moral strength and virtuous dispositions that together all form a basis for an environmentally informed moral character, it will remain incomplete.To be complete an account of a good environmental character must provide an account of the wisdom needed to articulate and guide an environmentally good life.Without an account of environmentally informed wisdom, an environmentally good person will remain morally blind.Environmental wisdom is needed to give direction and inform decisions along the way of living an environmentally good life.Coupled with a full account of what kind of person EVE would have us be, there is needed an account of an environmentally wise person, someone who has a sophisticated understanding of the [natural] world and is adept at living a morally satisfying life informed by that understanding.What is further needed is an account of an environmentally informed account of wisdom and the way it infuses the character of an environmentally wise person (One initial attempt at providing such an account of "green" wisdom is found in : Frasz 2011).This addition of an environmental account of wisdom gives an environmental spin on a well-worn metaphor: environmentally grounded moral sentiments makes us desire a worthwhile destination, but an environmental sound vision tell us where to look.Moral strength may provide us with the will power to do what is necessary to go the distance to attain this environmentally informed destination, but environmentally grounded judgment tells us when it is best to exercise strength of will in environmental situations.Environmental virtues keep us on the path and from developing environmental vices that cause us to stray from it, but an environmentally grounded understanding of deliberation shows us how to best use the path to get to that destination.With this addition of a green understanding of wisdom, it will be possible to have the necessary environmental grounding that we would want for an environmentally good character.
Environmental Character: Environmental Feelings, Sentiments and Virtues
Abstract: An argument is made that to further develop the field of environmental virtue ethics it must be connected with an account of environmental sentiments.Openness as both an environmental sentiment and virtue is presented.This sentiment is shown to be reflected in the work of Barbara McClintock.As a virtue it is shown to a mean between arrogance and the disvaluing of individuals, a disposition to be open to the natural world and the values found there.Further development of EVE is then shown to require a connection with an account of environmental wisdom. | 5,695.2 | 2016-09-01T00:00:00.000 | [
"Philosophy"
] |
Hybrid-Scale Hierarchical Transformer for Remote Sensing Image Super-Resolution
: Super-resolution (SR) technology plays a crucial role in improving the spatial resolution of remote sensing images so as to overcome the physical limitations of spaceborne imaging systems. Although deep convolutional neural networks have achieved promising results, most of them overlook the advantage of self-similarity information across different scales and high-dimensional features after the upsampling layers. To address the problem, we propose a hybrid-scale hierarchical transformer network (HSTNet) to achieve faithful remote sensing image SR. Specifically, we propose a hybrid-scale feature exploitation module to leverage the internal recursive information in single and cross scales within the images. To fully leverage the high-dimensional features and enhance discrimination, we designed a cross-scale enhancement transformer to capture long-range dependencies and efficiently calculate the relevance between high-dimension and low-dimension features. The proposed HSTNet achieves the best result in PSNR and SSIM with the UCMecred dataset and AID dataset. Comparative experiments demonstrate the effectiveness of the proposed methods and prove that the HSTNet outperforms the state-of-the-art competitors both in quantitative and qualitative evaluations.
Introduction
With the rapid progress of satellite platforms and optical remote sensing technology, remote sensing images (RSIs) have been broadly deployed in civilian and military fields, e.g., disaster prevention, meteorological forecast, military mapping, and missile warning [1,2].However, due to hardware limitations and environmental restrictions [3,4], RSIs often suffer from low-resolution (LR) and contain some intrinsic noise.Upgrading physical imaging equipment to improve resolution is often plagued by high costs and long development cycles.Therefore, it is of utmost urgency to explore the remote sensing image super-resolution (RSISR).
Single-image super-resolution (SR) is a highly ill-posed visual problem which aims to reconstruct high-resolution (HR) images from corresponding degraded LR images.To this end, many representative algorithms have been proposed, which can be roughly divided into three categories, i.e., interpolation-based methods [5,6], reconstruction-based methods [7,8], and learning-based methods [9,10].The interpolation-based methods generally utilize different interpolation operations, including bilinear interpolation, bicubic interpolation, and nearest interpolation, to estimate unknown pixel value [11].These methods are relatively straightforward in practice, while the reconstructed images lack essential details.In contrast, reconstruction-based methods improve image quality by incorporating prior information of the image as constraints into the HR image.These methods can restore high-frequency details with the help of prior knowledge, while they require substantial computational costs, making it difficult for them to be readily applied to RSIs [12].Learning-based approaches try to produce HR images by learning the mapping relationship established between external LR-HR image training pairs.Compared with the aforementioned two lines of methods, learning-based methods achieve better performance and become the mainstream in this domain due to the powerful feature representation ability provided by convolutional neural networks (CNNs) [13].However, learning-based methods generally adopt the post-upsampling framework [14], which solely exploits low-dimensional features while ignoring the discriminative high-dimensional feature information after the upsampling process.
In addition to utilizing nonlinear mapping between LR-HR image training pairs, the self-similarity of the image is also employed to improve the performance of SR algorithms.Self-similarity refers to the property of similar patches appear repeatedly in a single image and is broadly adopted in image denoising [15,16], deblurring [17], and SR [18][19][20].Selfsimilarities are also an intrinsic property in RSIs, i.e., internal recursive information.Figure 1 illustrates the self-similarities in RSIs.One can see that the down-scaled image is on the left, and the original one is on the right.Similar highway patches with green box labels appear repeatedly in the same scale image, while the roof of factories with red box labels appear repeatedly across different scales, and these patches with similar edges and textures contain abundant internal recursive information.Previously, Pan et al. [21] employed dictionary learning to capture structural self-similarity features as additional information to improve the performance of the model.However, the sparse representation of SR has a limited ability to leverage the internal recursive information within the entire remote sensing image.
single-scale similarities cross-scale similarities In this paper, we propose a Hybrid-Scale Hierarchical Transformer Network (HSTNet) for RSISR.The HSTNet can enhance the representation of the high-dimensional features after upsampling layers and fully utilize the self-similarity information in RSIs.Specifically, we propose a hybrid-scale feature exploitation (HSFE) module to leverage the internal similar information both in single and cross scales within the images.The HSFE module contains two branches, i.e., a single-scale branch and a cross-scale branch.The former is employed to capture the recurrence within the same scale image, and the latter is utilized to learn the feature correlation across different scales.Moreover, we designed a cross-scale enhancement transformer (CSET) module to capture long-range dependencies and efficiently model the relevance between high-dimension and low-dimension features.In the CSET module, the encoders are used to encode low-dimension features from the HSFE module, and the decoder is used to utilize to fuse the multiple hierarchies high-/low-dimensional features so as to enhance the representation ability of high-dimensional features.To sum up, the main contributions of this work are as follows: 1.
We propose an HSFE module with two branches to leverage the internal recursive information from both single and cross scales within the images for enriching the feature representations for RSISR.
2.
We designed a CSET module to capture long-range dependencies and efficiently calculate the relevance between high-dimension and low-dimension features.It helps the network reconstruct SR images with rich edges and contours.
3.
Jointly incorporating the HSFE and CSET modules, we formed the HSTNet for RSISR.
Extensive experiments on two challenging remote sensing datasets verify the superiority of the proposed model.
CNN-Based SR Models
Dong et al. [22] pioneered the adoption of an SR convolutional neural network (SR-CNN) that utilizes three convolution layers to establish the nonlinear mapping relationship between LR-HR image training pairs.On the basis of the residual network introduced by He et al. [23], Kim et al. [24] designed a very deep SR convolutional neural network (VDSR) where residual learning is employed to accelerate model training and improve reconstruction quality.Lim et al. [25] built the enhanced deep super-resolution model to simplify the network and improve the computational efficiency via optimizing the initial residual block.Zhang et al. [26] designed a deep residual dense network in which the residual network with dense skip connections is used to transfer intermediate features.Benefiting from the channel attention (CA) module, Zhang et al. [27] presented a deep residual channel attention network to enhance the high-frequency channel feature representation.Dai et al. [28] designed a second-order CA mechanism to guide the model to improve the ability of discriminative learning ability and exploit more conducive features.Li et al. [29] proposed an image super-resolution feedback network (SRFBN) in which a feedback mechanism is adopted to transfer high-level feature information.The SRFBN could leverage high-level features to polish up the representation of low-level features.
Because of the impact of spatial resolution on the final performance of many RSI tasks, including instance segmentation, object detection, and scene classification, RSISR also raises significant research interest.Lei et al. [30] proposed a local-global combined network (LGC-Net) which can enhance the multilevel representations, including local detail features and global information.Haut et al. [31] produced a deep compendium model (DCM), which leverages skip connection and residual unit to exploit more informative features.To fuse different hierarchical contextual features efficiently, Wang et al. [32] designed a contextual transformation network (CTNet) based on a contextual transformation layer and contextual feature aggregation module.Ni et al. [33] designed a hierarchical feature aggregation and self-learning network in which both self-learning and feedback mechanisms are employed to improve the quality of reconstruction images.Wang et al. [34] produced a multiscale fast Fourier transform (FFT)-based attention network (MSFFTAN), which employs a multiinput U-shape structure as the backbone for accurate RSISR.Liang et al. [35] presented a multiscale hybrid attention graph convolution neural network for RSISR in which a hybrid attention mechanism was adopted to obtain more abundant critical high-frequency information.Wang et al. [36] proposed a multiscale enhancement network which utilizes multiscale features of RSIs to recover more high-frequency details.However, the CNN-based methods above generally employ the post-upsampling framework that directly recovers HR images after the upsampling layer, ignoring the discriminative high-dimensional feature information after the upsampling process [14].
Transformer-Based SR Models
Due to the strong long-range dependence learning ability of transformers, transformerbased image SR methods have been studied recently by many scientific researchers.Yang et al. [37] produced a texture transformer network for image super-resolution, in which a learnable texture extractor is utilized to exploit and transmit the relevant textures to LR images.Liang et al. [38] proposed SwinIR by transferring the ability of the Swin Transformer, which could achieve competitive performance on three representative tasks, namely image denoising, JPEG compression artifact reduction, and image SR.Fang et al. [39] designed a lightweight hybrid network of a CNN and transformer that can extract beneficial features for image SR with the help of local and non-local priors.Lu et al. [40] presented a hybrid model with a CNN backbone and transformer backbone, namely the efficient superresolution transformer, which achieved impressive results with low computational cost.Yoo et al. [41] introduced an enriched CNN-transformer feature aggregation network in which the CNN branch and transformer branch can mutually enhance each representation during the feature extraction process.Due to the limited ability of multi-head self-attention to extract cross-scale information, cross-token attention is adopted in the transformer branch to utilize information from tokens of different scales.
Recently, transformers have also found their way into the domain of RSISR.Lei et al. [14] proposed a transformer-based enhancement network (TransENet) to capture features from different stages and adopted a multistage-enhanced structure that can integrate features from different dimensions.Ye et al. [42] proposed a transformer-based super-resolution method for RSIs, and they employed self-attention to establish dependencies relationships within local and global features.Tu et al. [43] presented a GAN that draws on the strengths of the CNN and Swin Transformer, termed the SWCGAN.The SWCGAN fully considers the characteristics of large size, a large amount of information, and a strong relevance between pixels required for RSISR.He et al. [44] designed a dense spectral transformer to extract the long-range dependence for spectral super-resolution.Although the transformer can improve the long-range dependence learning ability of the model, these methods do not leverage the self-similarity within the entire remote sensing image [45].
Overall Framework
The framework of the proposed HSTNet is shown in Figure 2. It is built by the combination of three kinds of fundamental modules, i.e., a low-dimension feature extraction (LFE) module, a cross-scale enhancement transformer (CSET) module, and an upsample module.Specifically, the LFE module is utilized to extract high-frequency features across different scales, and the CSET module is employed to capture long-range dependency to enhance the final feature representation.The upsample module is adopted to transform the feature representation from a low-dimensional space to a high-dimensional space.
Given an LR image I LR , a convolutional layer with a 3 × 3 kernel is utilized to extract the initial feature F 0 .The process of shallow feature extraction is formulated as where f sf (•) represents the operation of the convolutional operation and F 0 is the shal- low feature.As shown in Figure 3, the LFE module consists of five basic extraction (BE) modules, and each BE module contains two 3 × 3 convolution layers and one hybrid-scale feature exploitation (HSFE) module.As the core component of the BE module, the HSFE module is proposed to model image self-similarity.The whole low-dimensional feature extraction process is formulated as where f i lfe (•) and F i LFE represent the operation of ith LFE module and its output.After the three cascaded LFE modules, a subpixel layer [46] is adopted to transform low-dimensional features into high-dimensional features, which is formulated as where F up represents the high-dimension feature and Subpixel(•) denotes the function of the subpixel layer.The low-dimension features F 1 LFE , F 2 LFE , and F 3 LFE and the high-dimension feature F up are fed into three cascaded CSET modules for feature hierarchical enhancement.To reduce the redundancy of the enhanced features, a 1 × 1 convolution layer is employed to reduce the feature dimension.The complete process including the enhancement and dimension reduction is formulated as where f i cset (•) and F i CSET represent the operation of ith CSET module and its output, respectively.Finally, one convolution layer is employed to obtain SR image I SR from the enhanced features.A conventional L 1 loss function was employed to train the proposed HSTNet model.Given a training set , the loss function is formulated as: where
Hybrid-Scale Feature Exploitation Module
To explore the internal recursive information in single-scale and cross-scale, we propose an HSFE module.Figure 4 exhibits the architecture of the HSFE module, which consists of a single-scale branch and a cross-scale branch.The single-scale branch aims to capture similar features within the same scale, and a non-local (NL) block [47] was utilized to calculate the relevance of these features.The cross-scale branch was applied to capture recursive features of the same image at different scales, and an adjusted non-local (ANL) block [45] was utilized to calculate the relevance of features between two different scales.Single-scale branch: As depicted in Figure 4, we built the single-scale branch to extract single-scale features.Specifically, in the single-scale branch, several convolutional layers are applied to capture recursive features, and an NL block is employed to guide the network to concentrate on informative areas.As shown in Figure 4a, an embedding function is utilized to mine the similarity information as where i is the index of the output position, j is the index that enumerates all positions, and x denotes the input of the NL block.W θ and W ϕ are the embeddings weight matrix.The non-local function is symbolized as The relevance between x i and all x j can be calculated by pairwise function f (•).The feature representation of x j can be obtained by the function g(•).Eventually, the output of the NL block is obtained by where W φ is a weight matrix.The convolution layer following the NL block transforms the input into an attention diagram, which is then normalized with a sigmoid function.In addition, the main branch output features are multiplied by the attention diagram, where the activation values for each space and channel location are rescaled.
Cross-scale branch: As depicted in Figure 4, the cross-scale branch is utilized to perform cross-scale feature representation.Specifically, the input of the HSFE module is considered the basic scale feature, which is symbolized as F b in .To exploit the internal recursive information at different scales, the downsampled scale feature F d in is formulated as where f s down (•) denotes the operation of downsampling with scale factor s. Two contextual transformation layers (CTLs) [48] are employed to extract feature with two different scales F b in and F d in .To align the spatial dimension of the features in different scales, the downsampled feature is firstly upsampled with the scale factor of s. x b and x b represent the output of the basic scale and the downsampled scale through the two CTLs, and this process is formulated as where f ctl (•) denotes the operation of two CTLs and f s up (•) represents the operation of upsample with scale factor s.
Similar to the single-scale branch, an ANL block [45] was introduced to exploit the feature correlation between two different scales RSIs.As shown in Figure 4b, the ANL block is improved compared to the NL block, and they have different inputs.Thus, z i in Equation ( 8) for ANL block can be rewritten as In the cross-scale branch, we employ the ANL block to fuse multiple scale features, therefore fully utilizing the self-similarity information.The HSFE module can be formulated as where F in is the input of the HSFE module and F out is the output of the HSFE module.f sin (•) and f cro (•) are the operation of the single-scale branch and cross-scale branch, respectively.
Cross-Scale Enhancement Transformer Module
The cross-scale enhancement transformer module is designed to learn the dependency relationship across long distances between high-dimension and low-dimension features and enhance the final feature representation.The architecture of the CSET module is shown in Figure 5a.Specifically, we introduced the cross-scale token attention (CSTA) module [41] to exploit the internal recursive information within an input image across different scales.Moreover, we use three CSET modules to utilize different hierarchies of feature information.Figure 5a illustrates in detail the procedure of feature enhancement using CSET-3 module as an example.
Transformer encoder: The encoders are used to encode different hierarchies of features from LFE modules.As shown in Figure 5a, the encoder is mainly composed of a multi-headed self-attention (MHSA) block and a feed-forward network (FFN) block, which is similar to the original design in [49].The FFN block contains two multilayer perceptron (MLP) layers with an expansion ratio r and a GELU activation function [50] in the middle.Moreover, we adopted layer normalization (LN) before the MHSA block and FFN block, and employed a local residual structure to avoid the gradient vanishing or explosion during gradient backpropagation.The entire process of the encoder can be formulated as where f mhsa (•), f ln (•), and f f f n (•) denote the function of the MHSA block, layer normaliza- tion, and FFN block, respectively.F i EN is the intermediate output of the encoder.F i LFE and F i EN are the input and output of the encoder in the ith CSET module.
Transformer decoder: The decoders are utilized to fuse high-/low-dimensional features from multiple hierarchies to enhance the representation ability of high-dimensional features.As shown in Figure 5a, the decoder contains two MHSA blocks and a CSTA block [41].With the CSTA block, the decoder can exploit the recursive information within an input image across different scales.The operation of the decoder can be formulated as where f csta (•) denotes the process of the CSTA block and F up is the output of Encoder-4.Each CSET module has two inputs, and the composition of the inputs is determined by the location of the CSET module.
where t and s represent the stride and token size.To improve efficiency, T s is replaced by T a , and T l is tokenized with a larger token size and overlapping.Numerous large-size tokens can be obtained by overlapping, enabling the transformer to actively learn patch recurrence across scales.
To effectively exploit self-similarity across different scales, we computed cross-scale attention scores between tokens in both T s and T l .Specifically, the queries 2 , and values v s ∈ R n× d 2 were generated from T s .Similarly, the queries and values v l ∈ R n × d 2 were generated from T l .The reorganized triples q s , k l , v l and q l , k s , v s were obtained by swapping their key-value pairs to each other.Then, the attention operation was executed using the reorganized triples.It should be noted that the projection of attention operations reduces the last dimension of queries, keys, and values in T l from d to d/2.Subsequently, we re-projected the attention results of T l to the dimension of n × d then transformed to the dimension of n × d 2 .Finally, we concatenated the attention results to obtain the output of the CSTA block.
Experiments 4.1. Experimental Dataset and Settings
We evaluate the proposed method on two widely adopted benchmarks [30,31,51], namely the UCMecred dataset [52] and AID dataset [53], to demonstrate the effectiveness of the proposed HSTNet.
UCMerced dataset: This dataset consists of 2100 images belonging to 21 categories of varied remote sensing image scenes.All images exhibit a pixel size of 256 × 256 and a spatial resolution of 0.3 m/pixel.The dataset is divided equally into two distinct sets, one comprising 1050 images for training and the other for testing.
AID dataset: This dataset encompasses 10,000 remote sensing images, spanning 30 unique categories.In contrast to the UCMerced dataset, all images in this dataset have a pixel size of 600 × 600 and spatial resolution of 0.5 m/pixel.A selection of 8000 images from this dataset was randomly chosen for the purpose of training, while the remaining 2000 images were used for testing.In addition, a validation set consisting of five arbitrary images from each category was established.
To verify the generalization of the proposed method, we further adapted the trained model to the real-world images of Gaofen-1 and Gaofen-2 satellites.We downsampled HR images through bicubic operations to obtain LR images.Two mainstream metrics, namely peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), were calculated on the Y channel of the YCbCr space for objective evaluation.They are formulated as where L represents the maximum pixel, and N denotes the number of all pixels in I SR and I HR .
where x and y represent two images.σ xy symbolizes the covariance between x and y. u and σ 2 represent the average value and variance.k 1 and k 2 denote constant relaxation terms.Multi-adds and model parameters were utilized to evaluate the computational complexity [32,54].In addition, the natural image quality evaluator (NIQE) was adopted to validate the reconstruction of real-world images from Gaofen-1 and Gaofen-2 satellites [55].
Implementation Details
We conducted experiments on remote sensing image data with scale factors of ×2, ×3, and ×4.During training, we randomly cropped 48 × 48 patches from LR images and extracted ground-truth references from corresponding HR images.We also employed horizontal flipping and random rotation (90 • , 180 • and 270 • ) to augment training samples.Table 1 presents the comprehensive hyperparameter setting of the cross-scale enhancement transformer (CSET) module.We adopted the Adam optimizer [56] to train the HSTNet with β 1 = 0.9, β 2 = 0.99, and = 10 −8 .The initial learning rate was set to 10 −4 , and the batch size was 16.The proposed model was trained for 800 epochs, and the learning rate decreased by half after 400 epochs.Both the training and testing stages were performed using the PyTorch framework, utilizing CUDAToolkit 11.4, cuDNN 8.2.2, Python 3.7, and two NVIDIA 3090 Ti GPUs.
Qualitative Evaluation
To further verify the advantages of the proposed method, the subjective results of SR images reconstructed by the aforementioned methods are shown in Figures 6 and 7. Figure 6 shows the reconstruction results of the above methods for the UCMerced dataset by taking "airplane" and "runway" scenes as examples.Figure 7 shows the visual results of the "stadium" and "medium-residential" scenes in the AID dataset.In general, the SR results reconstructed by the proposed method possess sharper edges and clearer contours compared with other methods, which verifies the effectiveness of the HSTNet.
Results on Real Remote Sensing Data
Real images acquired by GaoFen-1 (GF-1) and GaoFen-2 (GF-2) satellites were employed to assess the robustness of the HSTNet.The spatial resolutions of GF-1 and GF-2 are 8 and 3.2 m/pixel, respectively.Three visible bands are selected from GF-1 and GF-2 satellite images to generate the LR inputs.The pre-trained DCM [31], ACT [41], and the proposed HSTNet models for the UCMerced dataset are utilized for SR image reconstruction.Figures 8 and 9 demonstrate the reconstruction results of the aforementioned methods on real data in some common scenes including river, factory, overpass, and paddy fields.One can see that the proposed HSTNet can obtain favorable results.Compared with DCM [31] and ACT [41], the reconstructed images of the proposed HSTNet achieved the lowest NIQE scores in all the four common scenes.Although the pixel size of these input images is different from the LR images in the training set, which are 600 × 600 and 256 × 256 for real-world images and training images, respectively, the HSTNet can still achieve good results in terms of visual perception qualities.It verifies the robustness of the proposed HSTNet.
Ablation Studies
Ablation studies with the scale factor of ×4 were conducted on the UCMerced dataset to demonstrate the effectiveness of the proposed fundamental modules in the HSTNet model.HSTNet achieves the highest PSNR and SSIM when utilizing three LFE and five HSFE modules.When employing three LFE and eight HSFE modules, the model has the largest number of parameters and computation, and its performance is not optimal.Therefore, considering the performance of the model and the computational complexity, we adopted three LFE and five HSFE modules in the proposed method.The results confirm the effectiveness of the LFE and HSFE modules in the proposed model, as well as the rationality of the number of LFE and HSFE modules.
Effects of the HSFE module:
We devised the HSFE module in the proposed LFE module to exploit the recursive information inherent in the image.We conducted further ablation studies by substituting the HSFE module with widely used feature extraction modules in SR algorithms, namely RCAB [27], CTB [48], CB [58], and SSEM [45] to validate the effectiveness of the HSFE module.Among them, SSEM [45] is also used to mine scale information.As presented in Table 6, the HSFE module outperforms the other feature extraction modules in terms of PSNR and SSIM, demonstrating its effectiveness in feature extraction.Meanwhile, it is also competitive in terms of parameter quantity and computational complexity.
Number of CSET modules:
The CSET module is designed to learn the dependency relationship across long distances between features of different dimensions.To validate the effectiveness of the proposed CSET modules, we conducted ablation experiments using varying numbers of CSET modules.Table 7 proves that the configuration of three CSET modules performs the best in terms of PSNR and SSIM.The features of low-dimension space are transmitted more to the high-dimension space, reducing the difficulty of optimization and facilitating the convergence of the deep model.The aforementioned results demonstrate the effectiveness of the CSET module in enhancing the representation of high-dimensional features.
Effects of the CSTA block: The CSTA [41] block is introduced to enable the CSET module to utilize the recurrent patch information of different scales in the input image.To verify the effectiveness of the CSTA module, we analyzed the composition of the transformer.Table 8 presents the comparative results of two different transformers.It proves that the CSTA block is beneficial to improve the performance of the HSTNet.
Conclusions and Future Work
In this paper, we present a hybrid-scale hierarchical transformer network (HSTNet) for remote sensing image super-resolution (RSISR).The HSTNet contains two crucial components, i.e., a hybrid-scale feature exploitation (HSFE) module and a cross-scale enhancement transformer (CSET) module.Specifically, the HSFE module with two branches was built to leverage the internal recurrence of information both in single and cross scales within the images.Meanwhile, the CSET module was built to capture long-range dependencies and effectively mine the correlation between high-dimension and low-dimension features.Experimental results on two challenging remote sensing datasets verified the effectiveness and superiority of the proposed HSTNet.In the future, more efforts are expected to simplify the network architecture and design a more effective low-dimensional feature extraction branch to improve RSISR performance.
Figure 1 .
Figure 1.Illustration of self-similarities in RSIs with single-scale (green box) and cross-scale (red box).
Figure 2 .Figure 3 .
Figure 2. Architecture of the proposed HSTNet for remote sensing image SR.
Figure 4 .
Figure 4. Architecture of the proposed HSFE module.
F i DE and F i DE represent the intermediate outputs of the decoder.F i CSET represents the output of ith CSET module.CSTA block: The CSTA block[41] is introduced to utilize the recurrent patch information of different scales in the input image.The feature information flow of the CSTA module is illustrated in Figure5b.Specifically, the input token embeddings T ∈ R n×d of the CSTA block are split into T a ∈ R n× d 2 and T b ∈ R n× d 2 along the channel axis.Then, T s ∈ R n× d 2 including n tokens from T a and T l ∈ R n ×d including n tokens by rearranging T b are generated.The number of tokens in T l can be set to n
Figure 5 .
Figure 5. Architecture of the CSET module.
Table 1 .
Parameter setting of the CSET module in the HSTNet.
Table 2 .
Comparative results for the UCMerced dataset and AID dataset.The best and the second-best results are marked in red and blue, respectively.
Table 3 .
Average PSNR of per-category for UCMerced dataset with the scale factor of ×3.The best and the second-best results are marked in red and blue, respectively.
Table 4 .
Average PSNR of per-category for AID dataset with the scale factor of ×4.The best and the second-best results are marked in red and blue, respectively.
Table 5 presents a comparative analysis of varying quantities of LFE and HSFE modules.It indicates that when adopting two LFE and 2 HSFE modules, the model has the smallest number of parameters and computation, but the model has the lowest PSNR and SSIM values.The results indicate that the proposed
Table 5 .
Ablation analysis of the number of LFE and HSFE modules (the best result is highlighted in bold).
Table 6 .
Ablation analysis of different feature extraction modules in LFE module (the best result is highlighted in bold).
Table 7 .
Ablation analysis of different feature extraction modules in the LFE module (the best result is highlighted in bold).
Table 8 .
Ablation analysis of the CSTA block.The best performances are highlighted in bold. | 6,736.8 | 2023-07-07T00:00:00.000 | [
"Environmental Science",
"Computer Science",
"Engineering"
] |
BIOCHEMICAL EVALUATION OF “Coriandrumsativum” L. (CORIANDER)
K.A. Mahamane 1 , P.P. Ahire 2 , V.B. Kadam 3 and Y. D. Nikam 3 . Department of Botany and research Centre, K.T.H.M. College, NASHIK-422002. ...................................................................................................................... Manuscript Info Abstract ......................... ........................................................................ Manuscript History
Materials and methods: -
The required species of Coriander was collected from the local market and was used for preparing the dry powder. Before sun drying it was separated into leaf, stem & root. The fresh material was used for chlorophyll estimation. The remaining was dried & converted into powder form. This particular procedure was carried out in seasonal format i.e., summer, monsoon & winter respectively. Quantitative estimation of TotalCarbohydrates: -Carbohydrates were estimated by methods suggested by McGready (1950), and Nelson (1941). Reagents: - Somogy's reagent (4 gm. CuSO4 +24 gm. anhydrous Na2CO3 +16 gm. Na-K tartarate (Rocheette salt) + 180gm Anhydrous Na2SO4. Nelson aresenomolybdatereagent: -(24gm (NH4)6MO7O24, 4H2O Ammonium molybdate) + (3gm Na2SO4, 7H2O). Both solutions were mixed and incubated at 370C for 24 hours before use and they were stored in brown bottle. Standard sugar solution was prepared by dissolving 10 mg glucose in 100 ml distilled water.
Procedure: -1gm. of sample was crushed with 10ml 80% ethanol in mortar and pestle by adding acid free sand, and then filtered through Wathman filter paper. The filter and residue were collected separately.
The alcohol residue was taken in 250 ml in conical flask. 150ml distilled water and 5ml conc. HCL were added in it. Hydrolysed for 30 minutes and cooled to room temperature. Na2Co3 was added bit-by bit until the extract became neutral (pH=7). The extract was filtered. Residue was discarded. Total volume of filtered was served as a sample for starch. First filtrate was taken in conical flask and condensed on water bath up to 2-3 minutes then distilled water was added to the filtrate, and then filtered, after mixing residue was discarded and the volume of filtrate was served for reducing sugar. 20 ml of this filtrate was taken in 150 ml conical flask, 2ml of conical flask; 2ml of conc. HCl was added to it and corked. It was then hydrolysed for 30 minutes and cooled at room temperature. Na2CO3 was added bit-by -bit until the extract became neutral (pH=7). Then this extract filtered and residue discarded. The final volume of the filtrate was measured. It was served as a sample for total sugar. 0.5 ml of aliquot sample was taken in each test tube and 1 ml of Somogy's reagent was added in it. All test tubes were placed in boiling water bath for 30 minutes, cooled the tubes to room temperature and 1ml of aresenomolybdate reagent which is poisonous was added to it. The content was mixed thoroughly. Then the content was diluted to a volume of 10ml and its absorbance measured OD at 560 nm in spectrophotometer.
Quantitative estimation of Fats: -
A small quantity of free acids is usually present in oils along with the triglycerides. The free fatty acid content is known as acid number/acid value. It increases during storage. The keeping quality of oil therefore relies upon the free fatty acid content.
Procedure: -Dissolve 1-10g of oil or melted fat in 50ml of the neutral solvent in a 250ml conical flask. Add a few drops of phenolphthalein. Titrate the content against 0.1N potassium hydroxide. Shake constantly until pink color which persists for fifteen seconds is obtained.
Result &Discussion: -Total Carbohydrates: -
The total carbohydrate content of leaves, stem & root were usually higher in summer as compared to winter and monsoon.
1911
The range of total carbohydrate content of leaves was 6.25mg/g dry wt. to 4.66 mg/g dry wt. where in summer accumulation of total carbohydrates was (6.25 mg/g) than in winter (4.72 mg/g) and in monsoon it was found lowest (4.66 mg/g).
Where as in stem it ranged from 5.80 mg/g to 5.42 mg/g dry wt., in Monsoon it was recorded lowest (5.42 mg/g) & highest in summer (5.80 mg/g) whereas modest in winter (5.74mg/g).
The range of total carbohydrate in root was ranged from 6.87mg/g to 6.39mg/g dry wt., in summer it was highest (6.87 mg/g) compared to monsoon (6.39 mg/g) & winter (6.75 mg/g).
Fats: -
The fats content of leaves was found in the range of 0.084 to 0.010 mg/g dry wt., in summer it was highest (0.084mg/g) compared to winter (0.032 mg/g) and monsoon (0.010mg/g).
Where as in stem it ranged from 0.092 to 0.017 mg/g dry wt. in monsoon it was lowest (0.017 mg/g), in winter was modest (0.050 mg/g) & in summer was highest (0.092 mg/g). The concentration of fats was highest in root as compared to leaf & stem. It ranged from 0.093 to 0.025 mg/g dry wt., in summer it was highest (0.093 mg/g), in winter it was modest (0.087 mg/g) & in monsoon was lowest (0.025 mg/g). The comparison can be seen in the following table no.2. | 1,146.6 | 2016-10-31T00:00:00.000 | [
"Chemistry"
] |
Wasserstein distance and metric trees
We study the Wasserstein (or earthmover) metric on the space $P(X)$ of probability measures on a metric space $X$. We show that, if a finite metric space $X$ embeds stochastically with distortion $D$ in a family of finite metric trees, then $P(X)$ embeds bi-Lipschitz into $\ell^1$ with distortion $D$. Next, we re-visit the closed formula for the Wasserstein metric on finite metric trees due to Evans-Matsen \cite{EvMat}. We advocate that the right framework for this formula is real trees, and we give two proofs of extensions of this formula: one making the link with Lipschitz-free spaces from Banach space theory, the other one algorithmic (after reduction to finite metric trees).
Introduction
Embeddings of metric spaces, especially discrete metric spaces like graphs, into the Banach spaces ℓ 1 or L 1 , form a well-established part of metric geometry, with applications ranging from computer science to topology: we refer to [Na18], part I of [DL97] or Chapter 1 in [Os13].In this paper we will be concerned with embeddings of Wasserstein spaces, that we now recall.
Let (X, d) be a metric space and let P 1 (X) be the space of probability measures µ on X with finite first moment, i.e.X d(x 0 , x) dµ(x) < +∞ for some (hence any) base-point x 0 ∈ X.For compact X, the space P 1 (X) coincides with the space P (X) of all probability measures on X.
The Wasserstein metric is a distance function on P 1 (X).Intuitively, given µ, ν ∈ P 1 (X), the distance W a(µ, ν) represents the amount of work necessary to transform µ into ν.More precisely, a probability measure π ∈ P (X × X) is a coupling between µ and ν if its marginals are µ and ν, i.e. µ(A) = π(A × X) and ν(A) = π(X × A) for any Borel subset A ⊂ X.And the Wasserstein distance W a(µ, ν) is defined as W a(µ, ν) = inf X×X d(x, y) dπ(x, y) : π coupling between µ and ν .
Note that X embeds isometrically in P 1 (X) by x → δ x (the Dirac mass at x). See Chapter 5 in [Sa15] or Chapter 7 in [Vi15] for more on the Wasserstein distance, also called Kantorovitch-Rubinstein distance or earthmover distance (EMD) in computer science papers.We denote by W a(X) the space P 1 (X) endowed with the Wasserstein distance, and call it the Wasserstein space of X.For a coupling π, the cost of π is the quantity X×X d(x, y)dπ(x, y).
Let Y = (Y i , d i ) i∈I be a finite family of metric spaces.We say that a metric space (X, d) embeds stochastically in Y with distortion D ≥ 1 if there exists nonnegative numbers (p i ) i∈I summing up to 1, and maps f i : X → Y i (for each i ∈ I) such that: • Each f i is non-contracting, i.e. for every x, y ∈ X we have d i (f i (x), f i (y)) ≥ d(x, y).
• For every x, y ∈ X we have The first aim of this paper is to prove the following result; Theorem 1.1.Assume that the finite metric space (X, d) embeds stochastically with distortion D into a family of finite metric trees.Then W a(X) embeds bi-Lipschitz into ℓ 1 with distortion at most D.
Here, by a metric tree, we mean a tree T = (V, E) endowed with a positive weight function w : E → R >0 : e → w e .For x, y ∈ V we denote by [x, y] the set of edges on the unique path from x to y and we endow V with the distance d T (x, y) = e∈[x,y] w e .
We learned Theorem 1.1 from the paper [IT03] by P. Indyk and N. Thaper, who get a less precise O(D) for the distortion of the embedding into ℓ 1 , and provide a rather frustrating comment that prompted our desire to provide a direct proof of Theorem 1.1. 1 It was shown by J. Fakcharoenphol, S. Rao and K. Talwar [FRT04] that any finite metric space on n points embeds stochastically with distortion O(log n) into a family of finite metric trees (and this bound is optimal).Using this it was shown by F. Baudier, P. Motakis, G. Schlumprecht and A. Zsak (Corollary 8 in [BMSZ20]) that, for X a finite metric space on n points, the lamplighter metric space La(X) embeds into ℓ 1 with distortion O(log n) = O(log log |La(X)|).Using the same result from [FRT04], our Theorem 1.1 immediately implies: Corollary 1.2.For any finite metric space X on n points, the Wasserstein space W a(X) embeds bi-Lipschitz into ℓ 1 with distortion at most O(log n).
Combining with the isometric embedding X → W a(X) : x → δ x , we get as corollary a celebrated result by J. Bourgain [Bo85] 2 Corollary 1.3.Any finite metric space on n points, embeds bi-Lipschitz into ℓ 1 with distortion at most O(log n).
It turns out that on finite metric trees there is a remarkable closed formula for the Wasserstein distance.It originated in papers in computer science in 2002 and probably earlier: see Charikar [Ch02], for measures supported on the leaves of the tree3 .For general probability measures on a finite metric tree, the formula appears in a paper in biomathematics (see section 2 in S.N.Evans and F.A. Matsen [EM12]).We believe it deserves to be better known in mathematical circles.To understand it, let T = (V, E) be a metric tree, fix a base-vertex x 0 ∈ V (so that T appears as a rooted tree).Any edge e ∈ E separates T into two half-trees, and we denote by T e the set of vertices of the half-tree NOT containing x 0 : if we view the tree as hanging from the root, T e is the subtree hanging below the edge e.
Theorem 1.4.Let T = (V, E) be a finite, rooted metric tree.Then for µ, ν ∈ P (V ): This formula has numerous implications: first, the RHS is independent of the choice of the root; second, it shows that the Wasserstein metric on T is a L 1 -metric (see lemma 2.4 below).
Our second aim in this paper is to give two new proofs of Theorem 1.4.The first one advocates that the right framework for Theorem 1.4 is real trees: by exploiting a connection with the theory of Lipschitz-free spaces from Banach space theory, we will extend the result to metric trees with countably many vertices.The second proof is by double inequality: the inequality W a(µ, ν) ≥ e∈E w e |µ(T e ) − ν(T e )| follows by considering the canonical embedding of the tree into ℓ 1 and its barycentric extension to P 1 (V ).The converse inequality is proved by first reducing to finite metric trees and, for those, given µ, ν ∈ P (V ), by providing an algorithmic construction of a coupling π with V ×V d(x, y) dπ(x, y) = e∈E w e |µ(T e ) − ν(T e )|.The paper is organized as follows.In section 2 we prove Theorem 1.1, taking Theorem 1.4 for granted.Sections 3 and 4 present our two proofs of Theorem 1.4, suitably generalized to metric trees with countably many vertices (see Theorem 3.3).Finally the Appendix provides a comparison between various σ-algebras of sets on a real tree, that appeared in the literature.
The second lemma was suggested to us by F. Baudier.
Lemma 2.2.If the finite metric space (X, d) embeds stochastically into Y = (Y i , d i ) i∈I with distortion D, then W a(X) embeds stochastically into (W a(Y i )) i∈I with distortion D.
Proof For i ∈ I, let p i ≥ 0 and f i : X → Y i be realizing the stochastic embedding with distortion D of X into Y.Consider then (f i ) * : P (X) → P (Y i ) : µ → (f i ) * (µ).We claim that the stochastic embedding with distortion D of W a(X) into the family (W a(Y i )) i∈I is realized by the p i 's and the (f i ) * 's; to see this, we check the two points in the definition of a stochastic embedding.Fix µ, ν ∈ W a(X).
where the first inequality follows from the fact that f i is non-contracting.So (f i ) * is non-contracting as well.
• Let π ∈ P (X × X) be a coupling between µ and ν such that where the second inequality follows from the fact that the f i 's provide a stochastic embedding.This concludes the proof.
Combining lemmas 2.1 and 2.2 we immediately get: To prove Theorem 1.1, in view of Corollary 2.3, it is therefore enough to observe: Proof Fix a root x 0 ∈ V and, for any edge e ∈ E, let T e ⊂ V be defined as in the Introduction.The map is an isometric embedding of W a(T ), by Theorem 1.4.This concludes the proof of Theorem 1.1 (taking Theorem 1.4 for granted).
3 First proof of Theorem 1.4
Lipschitz-free spaces
For a metric space (X, d) with a base-point x 0 ∈ X, we denote by Lip 0 (X) the Banach space of Lipschitz functions on X vanishing at x 0 , endowed with the Lipschitz norm.The space Lip 0 (X) has a canonical pre-dual, called the Lipschitzfree space of X (see e.g.Chapter 2 in [We99], Chapter 10 in [Os13]) and denoted by F (X): it is the closed linear subspace of the dual space Lip 0 (X) * generated by the point evaluations δ x (x ∈ X \ {x 0 }).
For µ ∈ P 1 (X), the linear form f → X f (x) dµ(x) defines an element of the dual Lip 0 (X) * : this way we get an embedding of W a(X) into Lip 0 (X) * .When X is a complete separable metric space, it can be shown that this is actually an isometric embedding of W a(X) into F (X) (see Theorem 1.13 in [OO19] or section 2 in [NS07]).
Real trees
A real tree (T, d) is a geodesic metric space which is 0-hyperbolic in the sense of Gromov.For x, y ∈ T , we denote by [x, y] the segment between x and y, i.e. the unique arc joining them.A point x ∈ T is a branching point if T \ {x} has at least 3 connected components; we denote by Branch(T ) the set of branching points of T .Fix a base-point x 0 ∈ T .For x ∈ T , we set so letting T hang from the root x 0 , the set T x is the part of T lying below x.
Following A. Godard [Go10]) we say that a subset A ⊂ T is measurable if, for every x, y ∈ T , the set A∩[x, y] is Lebesgue-measurable in [x, y].On the σ-algebra G of measurable subsets, there is a unique measure λ such that λ([x, y]) = d(x, y): we call λ the length measure4 .It is defined as follows: for S a segment in T , let λ S denote Lebesgue measure on S.Then, for where R is the set of subsets of T that can be expressed as finite disjoint unions of segments.
For A a closed subset of T containing x 0 , still following [Go10] we define a function Assume from now on that the real tree T is complete and separable.Then by the previous sub-section, for A a closed subset of T containing Branch(T ), the space W a(A) isometrically embeds into L 1 (A, µ A ).This embedding is not written explicitly in [Go10]; by making it explicit we get a closed formula for the Wasserstein distance on closed subsets of real trees.
Proposition 3.1.Let (T, d) be a complete, separable real tree and let A be a closed subset of T containing Branch(T ).For µ, ν ∈ W a(A) we have: (3) Proof By the proof of Theorem 3.2 in [Go10], the map is an isometric isomorphism which is weak * -weak * continuous, so its transpose Φ * realizes the desired isometric isomorphism F (A) → L 1 (A, µ A ). Denoting by χ [x,y] the characteristic function of the interval [x, y], the previous formula may be re-written: For ν ∈ W a(A), we compute Φ * (ν).For g ∈ L ∞ (A, µ A ), we have As the measure µ A is σ-finite (here we use that the tree T is separable), we may appeal to Fubini: Since this holds for every g ∈ L ∞ (A, µ A ) we deduce that, for almost every x ∈ A: Equation (3) follows.
Remark 3.2.When A = T , Proposition 3.1 becomes, for T a complete separable real tree and µ, ν ∈ W a(T ): When T is the geometric realization of a finite metric tree, equation (4) appears as equation ( 5) in [EM12]; the proof is different.
Theorem 3.3.Let T = (V, E) be a rooted metric tree with countably many vertices.Then for µ, ν ∈ P 1 (V ): Proof Let x 0 be a base-point in X. Composing β with a translation in E, we may assume that β(x 0 ) = 0.Then, as β(x) − β(y) ≤ C • d(x, y) for any x, y ∈ X, we get To check that β is C-Lipschitz, observe that for µ, ν ∈ P 1 (X) and π a coupling between µ and ν, we have: The result follows by taking the infimum over all couplings π.Remark 4.2.Observe that, if β in Proposition 4.1 is bi-Lipschitz, in general its extension β is not.Indeed take E = R, and let X ⊂ R be any subset with at least 3 elements, the inclusion β : X → R is isometric, but β is not even injective.
Let T = (V, E) be a metric tree; we denote by χ [x,y] the characteristic function of the set of edges in [x, y].There is a well-known isometric embedding β : V → ℓ 1 (E, w) : x → χ [x 0 ,x] (it is hard to locate the first appearance of this embedding in the literature: we learned it from [Ha79]).By Proposition 4.1, we extend it to a 1-Lipschitz map β : P 1 (V ) → ℓ 1 (E, w).Ultimately we will see that β is isometric.For the moment we prove: Proposition 4.3.Let T = (V, E) be a metric tree.For µ, ν ∈ P 1 (V ): Proof The inequality follows from Proposition 4.1, we focus on the equality.But So it it enough to prove that β(µ)(e) = µ(T e ).So we compute: Our aim now is to prove that the inequality in Proposition 4.3 is actually an equality, i.e for metric trees T = (V, E) with countably many vertices we wish to prove the reverse inequality (5) Theorem 1.13 of [OO19] implies that the set of finitely supported probability measures is dense in (P 1 (V ), W a). Of course W a(•, •) : P 1 (V ) × P 1 (V ) → R is continuous, and is continuous too, as an immediate consequence of Proposition 4.3.So to show (5) we may restrict for finitely supported measures i.e. we may restrict to finite metric trees.
An algorithm for finite metric trees
Let T = (V, E) be a finite metric tree.Recall from the proof of theorem 3.3 that if e ∈ E is an edge, we write e + and e − its two extremities chosen so d(x 0 , e + ) < d(x 0 , e − ), moreover if v, w ∈ V , we say that w is a descendant of v if v ∈ [x 0 , w] (notice that a vertex is its own descendant) and we say that w is a child of v -and that v is the parent of w -if w is a descendent of v and [v, w] = {w, v}.If v ∈ V we write T v the half tree with set of vertices the set of all descendants of v, hence T x 0 = T and if e ∈ E then T e = T e − .
To show that we provide an algorithm which transforms a probability measure µ ′ , initially set to µ into ν.In parallel, this algorithm keeps track of a variable (here a matrix) π ′ = (π ′ (x, y)) x,y∈V := (π ′ x,y ) x,y∈V that, all the way through the running of the algorithm, provides a coupling between µ and µ ′ .When the algorithm stops we will have µ ′ = ν and the cost of the coupling π ′ will be e∈E w e | µ(T e ) − ν(T e ) |.This algorithm runs in two phases; intuitively speaking the first phase brings up (towards the root) the excess of mass from those subtrees T e with µ(T e ) > ν(T e ), and the second phase let that mass fall (towards the leaves) in the subtrees T e with µ(T e ) < ν(T e ).Still intuitively, for every vertex x, π ′ x,y is the mass attributed by µ ′ to x coming from y; the coupling remembers where the mass comes from.We consider that the vertices of T are numbered with 1, 2, ..., n := |V | (e.g. in such a way that given two vertices that are at distinct depths in the tree, the deeper one is associate to a lower number than the other).The algorithm is such that it moves first the mass coming from vertices with a low number.
end for M ← 0 % This variable is used just for the proof % Phase (1): for N depth level, from the deeper up to 1: for all T e subtree whose root e − is at depth N: % Loop (*) % "we bring (µ ′ (T e ) − ν(T e )) up one level": for N depth level from 0 to the deepest level in the tree−1: for all T subtree whose root r is at depth N: let s 1 , ..., s n be the sons of r if µ ′ (r) > ν(r): We must now prove that the algorithm works as intended, that is π ′ is always a coupling between µ and µ ′ , and when the algorithm terminates we have µ ′ = ν and the cost of Proof A probability measure on a tree T is determined by the measure attributed to all subtrees T e .To see that µ ′ = ν when the algorithm terminates, we thus show that µ ′ (T e ) = ν(T e ) for all subtree T e : • If µ(T e ) = ν(T e ) then neither phase (1) nor (2) modifies µ ′ (T e ) = µ(T e ) = ν(T e ) (even though the distribution may vary).
• If µ(T e ) > ν(T e ) then phase (1) removes the adequate quantity of mass from µ ′ (T e ) so that once phase (1) is over we have µ ′ (T e ) = ν(T e ).Then phase (2) does not change the quantity µ ′ (T e ) = ν(T e ) (even though it could change the distribution on that subtree).
• If µ(T e ) < ν(T e ) phase (1) does not change the quantity µ ′ (T e ) = µ(T e ) (even though it could change the distribution on that subtree).We write µ N for the measure µ ′ after all subtrees whose root is at depth N have been treated by phase (2) (N going from 0 to the deepest level in the tree −1).
Then we proceed by induction on N, assuming e + is at depth N. The initial step consists in seeing that µ N =0 , the measure µ ′ just after phase (1), is a probability measure on T ; ν being one too it follows µ N =0 (T ) = ν(T ) = 1.For the induction step, we write e − = v 1 , ..., v m the children of e + and assume that (induction hypothesis) for all i = 1, ..., m: Since phase (1) is over µ N (T v i ) ≤ ν(T v i ), hence µ N (e + ) ≥ ν(e + ).Then, phase (2) of the algorithm modifies: And: Then we have the desired fact for i = 1.
Eventually when the algorithm stops µ ′ = ν.µ ′ and π ′ are modified only during loops (*) and (**), we write µ M and π M for the values of µ ′ and π ′ after M rounds through loops (*) or (**).Then π M = (π M (x, y)) x,y∈V := (π M x,y ) x,y∈V is a coupling between µ and µ M : just after initialization it is clear that π ′ = π 0 is a coupling between µ and µ 0 = µ, if follows by induction that π M is a coupling between µ and µ M (treating separately the case where moving from M to M + 1 is done during phase (1) and the case where this move is done during phase (2)).About the cost of the coupling, if moving from M to M + 1 is done during phase (1), in loop (*) we have: Where we used that d(s, j) − d(r, j) = d(s, r) and that every i such that π M r,i contributes to the sum i<j (d(s, j) − d(r, j))π M r,i is such that d(s, i) − d(r, i) = d(s, j) − d(r, j) = d(s, r) ≥ 0: Each vertex i such that π M r,i = 0 is (non strictly) below r in the rooted-tree (since π M r,i is the mass µ M in r coming from i; we let the reader check it formally).Then those vertices i such that π M r,i contributes to the sum i<j (d(s, i) − d(r, i))π M r,i are (non strictly) below r and for those we have d(s, i) ≥ d(r, i) and then d(s, i) − d(r, i) = d(s, r) since s is the father of r.By definition of j, π M r,j = 0 and thus j is (non strictly) below r, hence d(s, j) − d(r, j) = d(s, r) ≥ 0. If moving from M to M + 1 is done during phase (2), in loop (**), we conclude similarly that the cost of the coupling is increased by x • d(s i , r).During phase (1) excess measure is always brought up one level at the time, in the loop (*) we thus always have x = µ ′ (T ) − ν(T ) = µ(T ) − ν(T ).And phase (1) brings up excess measure exactly from those subtrees T e with µ(T e ) > ν(T e ).During phase (2) measure is always brought down one level at the time, in the loop (**) we thus always have And phase (2) brings down adequate quantity of measure exactly in those subtrees T e with µ(T e ) < ν(T e ).Since just after initialization π ′ has null cost, the cost of it at the end of the algorithm is thus e∈E w e |µ(T e ) − ν(T e )|.
Remark 4.4.Let (X, d) be a Polish metric space.For µ, ν ∈ P 1 (X), we have from the Kantorovich-Rubinstein duality: where the supremum is taken over all 1-Lipschitz functions f : see Theorem 1.3 in [Vi15]; see also [Ed10] for a short proof.We observe that, for a finite metric tree, our second proof of Theorem 1.4 does not appeal to Kantorovich-Rubinstein duality (in contrast e.g. with the proof in [EM12]).
5 Appendix: σ-algebras on real trees Let (T, d) be a real tree.Apart from Godard's construction from [Go10] of the σ-algebra G recalled above, we are aware of other constructions of σ-algebras on T and of corresponding length measures: • The σ-algebra S generated by segments, see [Va90].
• The Borel σ-algebra B generated by open subsets, see [EPW06] for compact real trees, then for locally compact real trees in [AEW13].
All these constructions have in common that the length measure of a segment [x, y] is exactly d(x, y).In order to clarify the relation between S, B and G, we also introduce the σ-algebras B 0 generated by open balls (so that B 0 ⊂ B) and S obtained by completing S with respect to λ-negligible subsets.
The following proposition explains our choice to work with Godard's σ-algebra G.
Proposition 5.1.Let T be a real tree.The inclusion S ⊂ G follows from the fact that G is complete, as can be seen from the definitions.
2. The equality B 0 = B holds in every separable metric space (any open set being then a countable union of open balls).
To prove the inclusion G ⊂ S, we consider the subset T 0 =: ∪ x,y∈T ]x, y[ and its complement L = T \ T 0 : the latter is the set of leaves of T .For every segment [x, y] we have L ∩ [x, y] ⊂ {x, y}, so that L ∈ G; moreover λ(L) = 0 by equation (2).
Then we take A ∈ G.To show A ∈ S we use separability: let D be a countable subset of T .It is easy to see that T • = ∪ x,y∈D ]x, y[ which implies that T • is S-measurable, as well as its complement L. On the one hand A ∩ L ⊂ L and λ(L) = 0, then A ∩ L is λ-negligible and thus S-measurable.
On the other hand A ∩ T • ∈ S since A∩]x, y[∈ S for all x, y ∈ T because the σ-algebra of Lebesgue-measurable subsets is the completion of the σalgebra generated by sub-intervals i.e.A∩]x, y[∈ S.This concludes the proof.
1.
We have S ⊂ B 0 ⊂ G and S ⊂ G.2. IfT is separable then B 0 = B and S = G Proof 1.To show that S ⊂ B 0 , fix x, y ∈ T and let (z n ) n>0 be a dense sequence in [x, y].Then the equality [x, y] = m≥1 n>0 B(z n , 1/m) shows that [x, y] ∈ B 0 .Now, let B be an open ball in T .For any x, y ∈ T , the intersection B ∩ [x, y] is convex in [x, y], so it is a sub-interval in [x, y].In particular B ∩ [x, y] is Lebesgue-measurable in [x, y], so B ∈ G. | 6,339.2 | 2021-10-05T00:00:00.000 | [
"Mathematics"
] |
Even-Order Neutral Delay Differential Equations with Noncanonical Operator: New Oscillation Criteria
: The main objective of our paper is to investigate the oscillatory properties of solutions of differential equations of neutral type and in the noncanonical case. We follow an approach that simplifies and extends the related previous results. Our results are an extension and reflection of developments in the study of second-order equations. We also derive criteria for improving conditions that exclude the decreasing positive solutions of the considered equation
Introduction
The aim of this work is to provide new criteria for testing the oscillation of solutions to the even-order neutral differential equation (NDE) r(f) (υ(f) + p(f)(υ(τ(f)))) (n−1) γ + q(f)υ γ (ρ(f)) = 0, (1) in the noncanonical case, that is, where f ≥ f 0 , n ≥ 4 is an even integer, and γ is a quotient of odd positive integers.
Since the time of Newton, differential equations (DE) are still used to understand and model phenomena.Applied phenomena and problems are constantly increasing as a result of the development of all different branches of science.Delay differential equations (DDE) are DE that take into account the time memory of a solution, and therefore are a more suitable method for modeling different phenomena.However, the problem of finding solutions to the equations resulting from the modeling of phenomena stands as an obstacle to understanding and studying these phenomena.Therefore, the qualitative theory contributes greatly to solving this problem, so that the qualitative properties of equations can be studied without finding their solutions.One of the branches of the qualitative theory is oscillation theory, which is the theory that deals with the oscillatory, non-oscillatory, asymptotic behavior and the distribution of zeros for solutions of DE.
Fourth-order DEs often appear in mathematical models of many biological, chemical, and physical phenomena.Structure deformation, elasticity issues, and soil settlement are examples of these uses.The existence or nonexistence of oscillatory solutions is one of the most crucial topics that arise when investigating mechanical problems.Oscillatory muscle movement is one of the models given by a fourth-order equation with a delay that might arise as a result of a muscle's inertial pregnancy, see [1].Fourth-order equations arise in the theory of numbers, which is unusual, see [2].
NDEs appear as models for many phenomena and problems in applied science, including automatic control, population dynamics, mixing liquids, and vibrating masses attached to an elastic bar (see [3]).In particular, neutral equations of second order are of considerable interest in biology to study the self-balance of the human body and in robotics to construct biped robots (see [4]).The applications of these equations have largely stimulated the study of qualitative properties, in particular the theory of oscillation.
Recently, there has been a great interest in the oscillation theory of DDEs, see .This interest greatly contributed to the development of different new techniques and approaches that significantly improved the oscillation criteria, especially for second-order DE.Furthermore, this development has also resulted in many open and interesting issues.
We find that the study of the oscillatory behavior of the NDE in the canonical case (y 0 (f 0 ) = ∞) has attracted the greatest interest regarding the verification and study by many techniques and methods.However, it is possible to find some studies on the oscillation of even-order delay (nonneutral) DE with a noncanonical operator in the works by Baculikova et al. [5], Zhang et al. [6][7][8], and, recently, Moaaz et al. [9,10].
In 2011, Zhang et al. [6] presented the conditions of oscillation for DDE converge to zero, where α ≤ γ and α is a ratio of odd positive integers.Later, in 2013, Zhang et al. [7] established three independent conditions for oscillation of (2).In [8], under four independent conditions, Zhang et al. obtained some oscillation criteria for (2) when n = 4 and α = γ.The results in [8] differ from the results in [6,7] in that they apply to ordinary DE.By using the comparison theorems, Baculikova et al. [5] investigated the oscillatory propertiws of (2).
The two most common methods for studying oscillation of DDEs are: Riccati substitution and comparison with first-order equations.For NDEs in the noncanonical case, Li and Rogovchenko [18] obtained criteria for oscillation of the NDE (1) using the principle of comparison.They mainly relied on the assumption of the existence of the functions ξ i that fulfill the restrictions: and in addition the delay functions were restricted by the conditions Using these conditions, they were able to compare (1) with three DDEs of first order, and thus obtained three independent conditions to check oscillation.
NDEs of higher order have not received the attention of researchers compared to equations with delay (not neutral) or equations of the second order.The studies that deal with the oscillatory behavior of higher-order neutral equations have many interesting open issues as well as additional restrictions on the functions.
This paper aims to study the oscillatory behavior of solutions to the NDE (1) by using several different methods.The motive of this study is to develop and improve the oscillation criteria for solutions of higher-order DE by: -obtaining oscillation criteria for (1) without requiring the existence of the unknown functions ξ i , and without requiring the condition in (3); -reducing the number of conditions that are sufficient to verify the oscillation of all solutions; -obtaining criteria that apply to ordinary DE.
Preliminaries
In this part, we provide definitions and elementary results in the literature that help us to present the main results.
and υ satisfies (1) on [f * , ∞).To facilitate calculations, we will denote the corresponding function of the solution υ by z Definition 2. A solution of any DDE is called oscillatory if it is neither positive nor negative, ultimately; otherwise, it is called nonoscillatory.
Lemma 1.
[Lemma 2.2.1, [19]] Suppose that w ∈ C n ([f 0 , ∞), (0, ∞)), and w with derivatives up to order κ − 1 of constant sign, and w , where L 2 > 0, L 1 and L 3 are constants.Then, the maximum value of H on R at υ Remark 1.In this work, we consider only the solutions which are not identically zero, eventually.Further, all functional equations and inequalities are supposed to hold eventually.
Simplified Criteria for Oscillation
When studying the oscillatory properties of solutions of DDEs, it is known that positive solutions must be classified according to the sign of their derivatives.Now, we assume that υ is an eventually positive solution of (1).Then, from the definition of z, we have that z(f) > 0, eventually.From the DE in (1) and taking into account that q(f) > 0, we have that r(f) z (n−1) (f) γ is a nonincreasing function.Furthermore, according to Lemma 1, we obtain the following three cases, eventually: (1): In order to facilitate the presentation of results, we will use the following notations: The set of all eventually positive solutions of (1); The set of all solutions υ ∈ U + with z satisfying case (i) above, i = 1, 2, 3; and and for k = 0, 1, . . ., n − 2, eventually.
Proof.Assume, for the sake of contradiction, that (1) has a nonoscillatory solution.
Without loss of generality, we let υ ∈ U + .Thus, υ belongs to one of the classes U + i , i = 1, 2, 3. Using Theorem 2.1 in [20], we find that the condition ( 8) is in opposition to υ ∈ U + 2 .So, υ belongs to either U + 1 or U + 3 .First, let us assume that υ ∈ U + 3 .From Lemma 4, (4) and ( 5) hold.Using (5) with k = n − 2, we obtain that Hence, from the definition of z and the above inequality, we see that Then, from this inequality and (1), we obtain Now, by integrating (10) twice from f 1 to f, we have Taking into account the monotonicity of r(f) z (n−1) (f) γ , we obtain that Then, from (4) with k = n − 2, we arrive at which in view of (11) implies that From (9) and the above inequality, we obtain that z (n−2) (f) → −∞ when f → ∞.This is a contradiction with the positivity of z (n−2) (f).
Proof.Assume, for the sake of contradiction, that (1) has a nonoscillatory solution.
Without loss of generality, we let υ ∈ U + .As in the proof of Theorem 1, using the condition (8), we have that υ belongs to either U + 1 or U + 3 .First, let us assume that υ ∈ U + 3 .In a similar fashion as in the proof of Theorem 1, we can arrive at (10).Then, integrating (10) from f 1 to f, we find Since z (f) < 0 and ρ(f) ≤ f, we find that From ( 4), with k = n − 2, in Lemma 4, we have that Combining ( 16) and ( 17), we conclude that This implies that which contradicts (14).Now, let us assume that υ ∈ U + 1 on [f 1 , ∞), where f 1 ≥ f 0 .From ( 14), there exists a
and thus
Letting f → ∞, we obtain that (13).As in the proof of Theorem 1, we arrive at (12).
Proof.Assume, for the sake of contradiction, that (1) has a nonoscillatory solution.
Without loss of generality, we let υ ∈ U + .As in the proof of Theorem 1, using the condition (8), we have that υ belongs to either U + 1 or U + 3 .First, let us assume that υ ∈ U + 3 .In a similar fashion as in the proof of Theorem 1, we can arrive at (15).From Lemma 4, (4) holds.By (4) with k = n − 3 and (15), we find that Hence, we note that w := z is a positive solution of the delay differential inequality It follows from Theorem 1 in [21] that there is also a positive solution of the DDE By Theorem 2 in [22], we find that the condition (18) guarantees oscillation of all solutions of (19), which is a contradiction.
Finally, let us assume that υ ∈ U + 1 .Since the condition ( 13) is necessary for the validity of (18), the proof of this part is exactly the same as in Theorem 1.Thus, the proof is complete.
Improved Criteria Ensure That U
In what follows, we present a finer estimate for the ratio z(ρ(f))/z(f) that will allow us to improve the previous results and apply the new results when (20) holds.
Lemma 5. Assume that υ ∈ U + 3 .If there exist a f 1 ≥ f 0 and a constant β ≥ 0 such that Proof.Assume that υ ∈ U + 3 .Proceeding as in the proof of Theorem 2, we arrive at (16).From Lemma 4, we have that (4) holds for k = n − 3, that is, From ( 16) and ( 23), we obtain Now, we see that It follows from ( 21) that d df z(f) This completes the proof.
Proof.Assume the contrary, that υ ∈ U + 3 for f ≥ f 1 ≥ f 0 .Using Lemma 5, we obtain that (22) holds, and then As in the proof of Theorem 2, we arrive at (15).From ( 15) and ( 26), we get that By replacing ( 16) by (27), and completing the proof as in Theorem 2, we arrive at a contradiction with (25).This completes the proof.
In what follows, using the comparison principle with a second-order DDE, we infer a criterion which ensures that U + 3 = ∅.
Conclusions
The oscillatory properties of a class of NDEs were studied.To extend the evolution of the study of the second-order DDEs, we presented criteria with only two conditions that ensure the oscillation of the considered equation.Due to the importance of the conditions that exclude solutions in class U + 3 , we improve these conditions by finding a better estimate of the ratio z(ρ(f))/z(f).We establish more than one criterion for oscillation, and these new criteria are distinguished by taking into consideration the impact of delay functions, as well as not needing to define additional functions and conditions.Extension of our results to neutral equations of odd order will be interesting as one of the future issues.There are several studies concerned with the oscillatory properties of solutions of fractional DE, see [26][27][28][29].It is also interesting to extend the approach taken in this work to study the oscillatory properties of fractional DEs.
Author Contributions:
Conceptualization, O.M and F.M.; Formal analysis, O.M. and B.A.; Investigation, F.M. and D.A.; Methodology, F.M. and D.A.; writing-original draft preparation, D.A.; writing-review and editing, O.M. and B.A. All authors have read and agreed to the published version of the manuscript.Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number PNURSP2022R216, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.Institutional Review Board Statement: Not applicable.Informed Consent Statement: Not applicable.Data Availability Statement: Not applicable. | 3,019.8 | 2022-06-02T00:00:00.000 | [
"Mathematics"
] |
Mechanical Behavior of Five Different Morse Taper Implants and Abutments with Different Conical Internal Connections and Angles: An In Vitro Experimental Study
The present study evaluated the mechanical behavior of five designs of Morse taper (MT) connections with and without the application of loads. For this, the detorque of the fixing screw and the traction force required to disconnect the abutment from the implant were assessed. A total of 100 sets of implants/abutments (IAs) with MT-type connections were used, comprising five groups (n = 20/group): (1) Group Imp 11.5: IA sets with a cone angulation of 11.5°; (2) Group SIN 11.5: with a cone angulation of 11.5°; (3) Group SIN 16: with a cone angulation of 16°; (4) Group Neo 16: with a cone angulation of 16°; and (5) Group Str 15: with a cone angulation of 15°. All sets received the torque recommended by the manufacturer. After applying the torque, the counter torque of the fixing screws was measured in ten IA sets of each group without the application of cyclic loads (frequencies ≤ 2 Hz, 360,000 cycles, and force at 150 Ncm). The other ten sets of each group were subjected to cyclic loads, after which the detorque was measured. Afterwards, the force for disconnection between the implant and the abutment was measured by traction on all the samples. The untwisting of the abutment fixation screws showed a decrease in relation to the initial torque applied in all groups. In the unloaded samples, it was found to be −25.7% in Group 1, −30.4% in Group 2, −36.8% in Group 3, −29.6% in Group 4, and −25.7% in Group 5. After the applied loads, it was found to be −44% in Group 1, −43.5% in Group 2, −48.5% in Group 3, −47.2% in Group 4, and −49.8% in Group 5. The values for the IA sets were zero for SIN 16 (Group 3) and Neo16 (Group 4), both without and with loads. In the other three groups, without loads, the disconnection value was 56.3 ± 2.21 N (Group 1), 30.7 ± 2.00 N (Group 2), and 26.0 ± 2.52 N (Group 5). After applying loads, the values were 63.5 ± 3.06 N for Group 1, 34.2 ± 2.45 N in Group 2, and 23.1 ± 1.29 N in Group 5. It was concluded that in terms of the mechanical behavior of the five designs of MT IA sets, with and without the application of loads, the Imp 11.5, SIN 11.5, and Srt 15 groups showed better results compared to the SIN 16 and Neo 16 groups, showing that lower values of cone angulation increase the friction between the parts (IA), thus avoiding the need to maintain the torque of the fixing screw to maintain the union of the sets.
Introduction
The technique of replacing missing teeth with implant-supported dentures has become highly used and predictable due to many years of research and development, boasting high long-term survival rates [1,2].However, several studies have shown that late failures in implant rehabilitation can occur predominantly during the first few years [3][4][5][6].Complications with the dental implant can happen due to infections of the peri-implant tissues and occlusal overload, affecting the local peri-implant bone tissue and, in the worst-case scenario, leading to consequent implant mobility and loss [4,7].On the other hand, complications associated with the implant-abutment (IA) junction are also frequent, such as loosening and fracture of the abutment fixation screw, cracks, and/or superficial fractures between the metallic structure and the layers of coating material (acrylic or ceramic) [8,9].The IA junction is one of the sites most susceptible to failure.This area receives the functional forces of occlusion, which are distributed across the dental implant platform and bone.Hence, any tension or deformation of the prosthesis caused by misfit or local instability could lead to technical complications [10].
In screwed fixed crowns, a frequent problem is the control of the appropriate torque according to the screw and connection types in order to apply adequate preload.It is the torque applied to the fixation screw that develops a compressive tightening force between the IA sets, maintaining the stability of the components [11,12].The correct torque method is required to determine the preload to be imposed on the sets, but some preload loss (dispersion) is expected [13,14].The torque value applied to the screw has a unique relationship with the preload generated by the screw on the apical third of the implant, which is directly influenced by the design of the abutment/implant interface, the type of retention screw, and the torque value applied [13].These factors interfere with the biomechanical stability of the IA interface, which is still dependent on the tolerance between components, the freedom for rotation, and the adjustment precision [15].However, preload values are determined by the manufacturer, considering the strength limits of the screw of each system.
Due to mechanical problems, new designs for the prosthetic interface have been developed, highlighting the internal connection to the implant, such as the internal hexagon (HI) and the Morse taper (MT).Originally created by the mechanical tool industry, the term Morse taper describes a fitting mechanism where two elements seek close contact through friction.This mechanism occurs when the conical pillar is installed in a conical cavity, generating friction that becomes significant due to the parallelism of these two structures.In this type of connection, where the Morse angle is determined according to the mechanical properties of each material, there is a relationship between the angle values and the friction of the parts [16].
New designs for MT implant systems were developed with the aim of improving the IA union, meeting the needs of prosthetic rehabilitation, and improving the mechanical behavior during the application of functional masticatory loads, given the hypothesis that there is no difference in the mechanical behavior of one-or two-piece abutments [17].In the MT connection, the abutment is joined to the implant by contact (friction), with mechanical locking between the implant cone and the prosthetic component cone.This type of locking allows the abutment to achieve good stability even when losing part of the torque (preload) applied, reducing the possibility of micromovement during functional loads, not overloading the screw, and reducing the incidence of loosening and/or fracture [8,9,18].The IA connection influences how the system fails, with each system's characteristics being a relevant factor for clinical indication [18,19].Furthermore, the stress concentration tends to decrease when the internal surface area of the system increases [20].
During the application of the cyclic load on the abutment installed on the MT implant, intrusion and/or settlement occurs at the IA interface, increasing the interlocking between the IA set [15,21,22].Thus, more close contact between the surfaces of the two pieces will occur through juxtaposition until there is no more displacement.Therefore, this better fit between the pieces favors joint action, improving the distribution of chewing forces that affect the IA set [23].The micromovements between the implant and the prosthetic component could lead to the formation of microgaps at that IA junction.Therefore, the greater overlap between the parts and the consequent increase in the Morse effect are positives for the IA interface's sealing and the IA assemblies' stability [24].
Currently, there are different designs of MT fittings on the market between implant brands.They vary in terms of the extent of contact between the abutment and the implant and the angle of the walls.Thus, the present study sought to evaluate the mechanical behavior of five different MT connection designs with and without the application of loads.For this, the detorque of the fixing screw and the traction force required to disconnect the abutment from the implant were evaluated.The positive hypothesis was that the increased angulation in the design of the cone that joins the pieces (IA) would reduce the friction, increasing the need for the fixing screw to keep the pieces together; the null hypothesis was that no difference would be observed for different angulations for the parameters tested.
Materials and Methods
One hundred IA sets of internal Morse taper connections (n = 100) with five different types of design were tested.They formed 5 experimental groups (n = 20 sets per group): In Group 1-Imp.11.5 (G1), DuoCone implants and BaseT abutments were used (Implacil De Bortoli, São Paulo, Brasil).The implants used were 4 mm in diameter and 9 mm in length.The internal connection had a double cone separated by a 12-position index between the two cones, with an angulation of 11.5 • .In Group 2-SIN 11.5 (G2), Strong SWC implants and Pilar Universal abutments were used (Sistema Nacional de Implantes, São Paulo, Brazil).The implants were 3.8 mm in diameter and 10 mm in length.The internal connection had a cone and a hexagonal index in its final portion, with an angle of 11.5 • .In Group 3-SIN 16 (G3), Strong SW implants and Pilar Universal abutments were used (Sistema Nacional de Implantes, São Paulo, Brazil).The implants used were 3.8 mm in diameter and 10 mm in length.The internal connection had a cone and a hexagonal index in its final portion, with an angle of 16 • .In Group 4, Neo 16 (G4), Helix GM implants and titanium base abutments (Neodent/Straumann, Curitiba, Brazil) were used.The implants used were 4 mm in diameter and 10 mm in length.The internal connection had a cone and a hexagonal index in its final portion, with an angle of 16 • .In Group 5-Str 15 (G5), Helix GM implants and Regular CrossFit ® abutments (Straumann, Basel, Switzerland) were used.The implants were 4.1 mm in diameter and 10 mm in length.The internal connection featured a mechanical locking friction adjustment connection with four grooves and 15 • angulation.Figure 1 shows a schematic image of the internal connection of the IA sets.All abutments used were two-piece ones (abutment and fixation screw).Figure 2 shows images of the IA sets used in the present study.All abutments used were two-piece ones (abutment and fixation screw).Figure 2 shows images of the IA sets used in the present study.All abutments used were two-piece ones (abutment and fixation screw).Figure 2 shows images of the IA sets used in the present study.Initially, all IA assemblies were joined and torqued to the values recommended by each manufacturer: 20 Ncm (G1), 10 Ncm (G2), 20 Ncm (G3), 20 Ncm (G4), and 35 Ncm (G5).After 10 min, all sets were retorqued [21,22].After one hour, 10 IA sets from each group were taken to computerized torquemeter equipment (CME-30Nm, Técnica Industrial Oswaldo Filizola Ltd.a., São Paulo, Brazil) to measure the torque values of the fixing screw.Then, the fixing screws were completely unscrewed, and each set was taken to a universal machine (model AME-5kN; Técnica Industrial Oswaldo Filizola Ltd.a., São Paulo, Brazil) with a 0.5 mm/min crosshead speed for tensile testing, measuring the force required to separate the parts (implant and abutment).
Fifty cylindrical polyurethane blocks were cut to 30 mm in diameter and 30 mm in height, with one side being cut at an angle of 30 ± 2° (Figure 3a).Ten implant samples from each group were inserted into the blocks, leaving 3 mm outside the base for all the implants.The implant position (angulation and level) was determined according to recommendations from the International Organization for Standardization (ISO) 14.801:2016 [25].To insert the implants, each cylindrical block was positioned on an angled table of Initially, all IA assemblies were joined and torqued to the values recommended by each manufacturer: 20 Ncm (G1), 10 Ncm (G2), 20 Ncm (G3), 20 Ncm (G4), and 35 Ncm (G5).After 10 min, all sets were retorqued [21,22].After one hour, 10 IA sets from each group were taken to computerized torquemeter equipment (CME-30Nm, Técnica Industrial Oswaldo Filizola Ltd.a., São Paulo, Brazil) to measure the torque values of the fixing screw.Then, the fixing screws were completely unscrewed, and each set was taken to a universal machine (model AME-5kN; Técnica Industrial Oswaldo Filizola Ltd.a., São Paulo, Brazil) with a 0.5 mm/min crosshead speed for tensile testing, measuring the force required to separate the parts (implant and abutment).
Fifty cylindrical polyurethane blocks were cut to 30 mm in diameter and 30 mm in height, with one side being cut at an angle of 30 ± 2 • (Figure 3a).Ten implant samples from each group were inserted into the blocks, leaving 3 mm outside the base for all the implants.The implant position (angulation and level) was determined according to recommendations from the International Organization for Standardization (ISO) 14.801:2016 [25].To insert the implants, each cylindrical block was positioned on an angled table of 30 • to perform drilling using the drill sequence indicated for each implant brand, thus obtaining the ideal load direction (Figure 3b).
The implants were then inserted into the blocks.The abutments were positioned on them, receiving the corresponding torque and retorque.A metallic hemisphere was similarly fabricated for all sets and cemented on the abutments.To manufacture the hemispheres, a CAD/CAM (computer-aided design/computer-aided manufacturing) system was used, with one abutment from each group being scanned and the shape of the hemisphere adapted onto each abutment; then, the hemispheres were printed in wax.Finally, all hemispheres were included in a coating and cast in cobalt chromium.To cement the hemispheres, a temporary cement (Temp-Bond, Kerr, USA) with low adhesion was used, enabling the de-cementation of the hemispheres without affecting the stability of the abutments.Then, a force was applied from a plane surface to avoid reducing the magnitude of the applied load (Figure 4a-c).
sphere adapted onto each abutment; then, the hemispheres were printed in wax.Finally, all hemispheres were included in a coating and cast in cobalt chromium.To cement the hemispheres, a temporary cement (Temp-Bond, Kerr, USA) with low adhesion was used, enabling the de-cementation of the hemispheres without affecting the stability of the abutments.Then, a force was applied from a plane surface to avoid reducing the magnitude of the applied load (Figure 4a-c).was used, with one abutment from each group being scanned and the shape of the hemisphere adapted onto each abutment; then, the hemispheres were printed in wax.Finally, all hemispheres were included in a coating and cast in cobalt chromium.To cement the hemispheres, a temporary cement (Temp-Bond, Kerr, USA) with low adhesion was used, enabling the de-cementation of the hemispheres without affecting the stability of the abutments.Then, a force was applied from a plane surface to avoid reducing the magnitude of the applied load (Figure 4a-c).The specimens were kept submerged in water at 37 ± 2 • C during the test.The test was carried out at frequencies of ≤ 2 Hz.Then, 360,000 cycles of mechanical loading with controlled nonaxial force at 150 Ncm were applied using a mechanical cycler (Biocycle V2, BioPDI, São Carlos, Brazil).After applying the cyclic loads, the samples were taken to the computerized torquemeter equipment (CME-30Nm, Tecnica Industrial Oswaldo Filizola Ltd.a., São Paulo, Brazil) to measure the detorque values of the fixing screw.Therefore, the fixing screws were completely unscrewed, and each set was taken to a universal machine (model AME-5kN; Técnica Industrial Oswaldo Filizola Ltd.a) with a 0.5 mm/min crosshead speed for tensile testing, measuring the tensile strength value (TSv) necessary to separate the parts (implant and abutment).
G*Power software (Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) was used to calculate the effect sizes, considering the F tests (fixed effects, omnibus, oneway), of the five groups with means and standard deviations, with 10 samples each.In the first situation (without loading) at a 5% level of significance and 95% power, considering the maximum level of standard deviation as 1.44 Ncm, the effect size generated was 4.26.In the second situation (with loading) at a 5% level of significance and 95% power, considering the maximum level of standard deviation as 1.5 Ncm, the effect size was 2.58.
Statistical Analysis
To analyze the torque of the fixing screws with and without load application, the values obtained were used to calculate the percentage of loss in relation to the initial torque applied, which were the values recommended by the manufacturers.GraphPad Prism 5.01 software (GraphPad Software Inc., San Diego, CA, USA) was used for the statistical analysis of the torque values for the fixing screws and the traction needed to disconnect the IA assemblies; they were considered statistically significant if p < 0.05.After the Kolmogorov-Smirnov test for normality, Bonferroni's multiple comparison tests detected possible differences between groups.
Results
The assessment of behavior by comparing different designs of internal conical connections provides an indication of their behavioral characteristics during their placement for a masticatory function.It can contribute to a successful outcome after the implant's clinical placement.The results showed important differences between the groups evaluated.The detorque of the screws for the abutment fixation showed considerable loss in all groups compared to the initial torque applied.Table 1 presents the applied torque values, following the recommendation of the manufacturers, the measured detorque values, and the percentage value calculated in each group in both the tested conditions (without and with load application).Without the application of loads, the data collected showed significant differences between the groups Impl 11.5 (G1) versus SIN 16 (G3) (p = 0.0045) and SIN 16 (G3) versus Str 15 (G5) (p = 0.0039); the other groups did not show statistically significant differences.After applying cyclic loads, the Kolmogorov-Smirnov test showed a normal distribution, and no differences were detected among the groups (ANOVA p = 0.2697).In the traction test, for the disconnection of the IA sets, higher values were found in the abutment groups with angulations of 11.5 • (G1 and G2) and 15 • (G5) compared to the groups with angulations of 16 • (G3 and G4) before and after the application of cyclic loads.Table 2 presents the values measured in the five groups without and with the application of loads.Statistical comparisons were carried out among the three groups that presented values greater than zero for tensile strength (Imp 11.5 [G1], SIN 11.5 [G2], and Str 15 [G5]).Table 3 presents the statistical analysis achieved.
Discussion
Abutments have been used to connect dental implants with the prosthetic superstructure, permitting patient rehabilitation.Typically, they are screwed to the dental implant, like the abutments used in this study.They can be customized or prefabricated and made of titanium, zirconium oxide, gold, or aluminum oxide (the material of the abutments herein used) [26,27].There are two types of IA connections: internal and external.This depends on whether the geometric feature extends above or below the platform's coronal surface.In internal taper-connected implant systems, the tightening torque depends not only on the screw height but also on the wedging effect caused by the subsidence of the tapered abutment, while the internal inclination of the device mainly bears the load.Therefore, the load on the abutment screw is lower compared to the external connection.The MT system promotes torque amplification through friction between the internal implant surface and the tapered abutment, resulting in high stability [28,29].Moreover, some authors [30] compared the prevalence of screw loosening between external IA connections and internal connections, with respective results of 4.8% and 1.2% over a 5-year follow-up.Thus, the goal of this study was to assess the mechanical behavior (the detorque of the fixing screw and the traction force required to disconnect the abutment from the implant) of five different MT (internal) connection designs with and without the application of loads.
Comparing different designs of internal conical connections can help predict the masticatory function behavior due to the loss of screw torque, which might be caused by the axial and nonaxial forces that the implant superstructure is subject to.According to the mechanical principles of levers, nonaxial forces might be provoked by the implant's prosthetic part, which must be considered and adequately observed by clinicians; in this study, a similar superstructure was used in all the groups to avoid variations.In this study, we measured the initial torque, detorque, and tensile strength values of five different MT implants with and without loads applied.Our results showed important differences among the groups evaluated.The distortion of the screws for the abutment fixation in all groups led to considerable torque loss compared to the initial torque.Therefore, without the application of loads, there were significant differences between Impl 11.5 (G1) versus SIN 16 (G3) and SIN 16 (G3) versus Str 15 (G5).According to the mechanical principles of screws, the application of torque causes stretching and tension, which create preload forces on the screw [31].This force also refers to the longitudinal axial force generated between the threads of the abutment screw and the internal components of the implant [32].It should be maintained and minimized to prevent loosening of the connection [33].
It is important to remember that the higher the number of cycles used, the greater the initial torque loss value [34].Thereby, this analysis is highly important owing to screw loosening or screw fractures being the most common technical complications of implant-supported prosthetic rehabilitation [30,35].
The pre-tightening force was positively related to the screw-tightening torque value [13].Furthermore, in this study, we kept the torque values recommended by the manufacturers, specifically 20 Ncm (G1), 10 Ncm (G2), 20 Ncm (G3), 20 Ncm (G4), and 35 Ncm (G5).The ideal pre-stress reported in the literature is approximately 60-80% of the material yield strength [36].Only 10% of the torque is converted into preload force, and the remaining 90% is used to overcome the friction between mating surfaces [31,32].Even in the absence of external force, the preload loss is observed within the first 2-3 min [29,33] or 15 h after tightening [13].
Screw loosening happens when the external separation force exerted on the IA connection is greater than the clamping force holding the implant and abutment tightly together [36].Any forces applied to the implant superstructure are followed by tensile and compressive stresses at the IA connections [36], resulting in micromovements between the IA connection, which might terminate with screw torque loss complications [23,24].The incidence of abutment screw loosening for single crowns is 12.7%, and it is 6.7% for splinted crowns [37].The most common clinical complications of abutment screw loosening include gingivitis and screw fractures [38].To reduce these problems, several solutions have been proposed, including retightening abutment screws after initial tightening and applying increased torque [39].Despite numerous studies, the exact cause of abutment screw loosening is unknown [40,41].It may be attributed to insufficient tightening torque, incorrect implant positioning, an inappropriate occlusal surface or crown anatomy, poor abutment and crown fit, microleakage at the IA interface, inappropriate screw design/materials, and occlusal forces.Any oversizing can be considered a cause of abutment screw loosening [38].
Bagegni et al.'s [42] results contrasted with an in vitro study [43] that found that the mechanical strength of the screwless MT (3 • ) IA connection is lower than that of the screw-retained ones (all samples of the screwless Morse taper implants failed to survive the planned dynamic loading (1.2 × 10 6 loading cycles); they all fractured after less than 100,000 loading cycles when a load of 120 N was applied at 30 • to the long axis of the implants).In Bagegni et al.'s study [42], all the screwless Morse taper implants survived 1.0 × 10 6 cycles with a load of 100 Ncm.The only explanation to justify this high level of variation is that the implants in Ugurel et al.'s study [43] had a thinner implant headwall.Therefore, the screw-retained internal groups in that same study [43] exhibited early abutment and/or screw fractures.Those results were also different compared with the screw-retained implant group in Bagegni et al.'s study [42].In our study, we included only screw-retained MT implants; we used a regular standard for frequencies of ≤2 Hz and 360,000 cycles of mechanical loading with a controlled nonaxial force of 150 Ncm.We did not find fractures, and the detorque with loadings was similar among all the groups, although the G3 (SIN 16) group without loads had a higher and more significant detorque than G1 and G5.These findings agree with Bagegni et al.'s results.
In Ebadian et al.'s study [44], the abutment screw detorque that used the minimum mean abutment screw torque value was found for the group with three 30 Ncm abutment screw torques, which underwent five-minute intervals of mechanical cycling; therefore, this methodology was different to that applied in this study.Moreover, the torque used here was recommended by the manufacturers and was lower than that applied in Ebadian et al.'s study [44] (30 Ncm), except for G5 (35 Ncm).So, as reported in the literature [44], less screw loosening is expected to occur when 35 Ncm of torque is applied to the abutment screw in this implant system.Nonetheless, after applying cyclic loads, no differences were detected among all the groups (p = 0.2697), neither with nor without load.
The tensile strength was statistically compared among the three groups that obtained values greater than zero (G1, G2, and G5).The values achieved for G3 and G4, both groups with 16 • angulation, demonstrated that for this type of analysis, no tensile resistance was observed (the implant and abutment were easily separated).The angles for these groups were, respectively, 11.5 • , 11.5 • , and 15 • .The worst result for this parameter among them was found in Group 1 (G1), with significantly lower results compared to G2 and G5 without and with loads (p < 0.0001 and p < 0.0002), and G5 had the best results without and with loads (p = 0.0009 and p = 0.0002).
The results obtained in this study should be carefully analyzed; their interpretation and clinical application should be approached with caution, given the differences in biological behavior between in vitro and in vivo scenarios.Moreover, the groups here studied had different Morse taper connection properties, besides the angulation, such as different abutment heights.The higher the IA contact surfaces, the higher might be the influence on the results.On the other hand, one of the implants with a greater IA contact surface (Neo 16 [G4] group) had no favorable outcomes, whereas the other group with more IA contact (Str 15 [G5]) did achieve good outcomes.New studies must be carried out to better evaluate this parameter.
Conclusions
It was possible to confirm the positive hypothesis proposed in this study and to conclude that the design of the internal connection influences the detorque of the fixing screw and the tensile strength of the IA sets.The implants with smaller angulations, 11.5 • and 15 • (groups Imp 11.5, SIN 11.5, and Str 15), showed better results compared to the implants with greater angulations, 16 • (groups SIN 16 and Neo 16).These results demonstrate that lower values of cone angulation increase the friction between the parts (IA), hence avoiding the need to maintain the torque of the fixing screw to maintain the union of the sets.
Figure 1 .
Figure 1.Schematic images of the internal connection and angulation of each IA set used.
Figure 1 .
Figure 1.Schematic images of the internal connection and angulation of each IA set used.
Figure 1 .
Figure 1.Schematic images of the internal connection and angulation of each IA set used.
Figure 2 .
Figure 2. IA sets for the 5 groups evaluated in the present study, sorted from the left to the right according to group.
Figure 2 .
Figure 2. IA sets for the 5 groups evaluated in the present study, sorted from the left to the right according to group.
Figure 3 .
Figure 3. (a) A cylindrical block of polyurethane was cut to 30 mm in diameter and 30 mm in height, with one side being cut at an angle of 30 ± 2°.(b)The cylindrical block was positioned on an angled table at 30° to perform drilling using the drill sequence indicated for each implant brand, obtaining the ideal load direction.
Figure 4 .
Figure 4. (a) Implants were placed into the blocks.(b) Abutments were positioned on the implants, receiving the corresponding torque and retorque.(c) A metallic hemisphere was cemented on the abutments.
Figure 3 .
Figure 3. (a) A cylindrical block of polyurethane was cut to 30 mm in diameter and 30 mm in height, with one side being cut at an angle of 30 ± 2 • .(b)The cylindrical block was positioned on an angled table at 30 • to perform drilling using the drill sequence indicated for each implant brand, obtaining the ideal load direction.
Figure 3 .
Figure 3. (a) A cylindrical block of polyurethane was cut to 30 mm in diameter and 30 mm in height, with one side being cut at an angle of 30 ± 2°.(b)The cylindrical block was positioned on an angled table at 30° to perform drilling using the drill sequence indicated for each implant brand, obtaining the ideal load direction.
Figure 4 .
Figure 4. (a) Implants were placed into the blocks.(b) Abutments were positioned on the implants, receiving the corresponding torque and retorque.(c) A metallic hemisphere was cemented on the abutments.
Figure 4 .
Figure 4. (a) Implants were placed into the blocks.(b) Abutments were positioned on the implants, receiving the corresponding torque and retorque.(c) A metallic hemisphere was cemented on the abutments.
Table 1 .
Initial recommended torque (RT), measured detorque (DT), and the percentage of torque loss calculated for each group with and without load.
Table 2 .
Mean and standard deviation of the tensile strength values for each group with and without load.Values are in Newtons (N).
Table 3 .
Statistical comparison for the tensile strength values of the groups that presented values greater than zero with and without loads. | 6,769 | 2024-06-28T00:00:00.000 | [
"Engineering",
"Medicine"
] |
A New Watermarking Algorithm for Scanned Grey PDF Files Using Robust Logo and Hash Function
This paper deals with the development and assessment of a watermarking technique which is suitable for scanned PDF documents. The watermark will serve two purposes. The first one is a logo to protect the copyright ownership. This watermark should be invisible and secure and can be extracted even if the document has gone through slight image manipulations. The second watermark will be used to authenticate the document. A slight editing in the document will change the second watermark and indicate forgery. The algorithm was tested successfully on a variety of scanned documents and the performances of the algorithm were assessed.
INTRODUCTION
Watermarking algorithms are used to insert digital data or digital signatures in the original media file to prove the owner's identity of that file and prevent copyright violation.Several commercial companies around the world offer copyright protection services to their customers.The inserted watermark can be visible (Wang-2009) where it can be seen by anyone who is viewing the file, or it can be imperceptible and invisible where it can be only detected by the one who created the watermark using some decoding algorithms.For imperceptible watermark, there is a need for it to be robust so that it cannot be destroyed or lost by modifying the digital media file.There is another requirement for watermarking for copyright protection which is that the algorithm should be blind.That means that the original media file is not needed to extract the watermarking information.However, in non-blind techniques, the original file is needed to extract the embedded watermark (Al-Mansouri-2012).Moreover, Watermarking techniques can be applied either in the frequency domain or in the spatial domain.Frequency domain techniques proved to be more immune and survive different attacks.In contrast, spatial domain watermarks are more sensitive and fragile and can be used to authenticate the copyright of the watermarked file (Al-Gindy-2007).
The distortion in the watermarked file after the watermarking process is analysed and assessed objectively using the peak signal to noise ratio (PSNR).In addition, the watermarking effect is assessed subjectively by viewing the watermarked file (Wang-2004).
This paper introduces a way to watermark PDF files containing grey images in both frequency and spatial domain by converting the PDF file into an image file.The watermark will be inserted in the spatial domain using hash function and in the frequency domain using the Discrete Cosine Transform (DCT).Five sections are included in this paper.Section 2 discusses the algorithm for embedding the watermark signals.Section 3 illustrates the extraction process.Section 4 demonstrates the results of watermarking and the effects of watermarking on the original PDF file and the extracted watermark.Finally, section 5 concludes the work.
EMBEDDING THE WATERMARK SIGNALS
The watermarked PDF file will have two watermarks.One of them is robust and used for copyright protection.The first watermark is inserted in the frequency domain of the converted PDF file using DCT coefficients.The second watermark is fragile and used for authentication and discovering changes in the watermarked file.This watermark is inserted in the spatial domain of the file using the least significant bit (LSB) method.The fragile 2 watermark is generated by using SHA-256 hash function (Cannons-2004).Fig. 1 represents a block diagram for the operation of the algorithm.
DCT algorithm
The PDF file is first converted into an image file.Then the image is divided into 8x8 blocks and then converted into the frequency domain using 2D DCT.The image is then screened to determine the best low frequency coefficients to insert the bits of the logo.If the logo is smaller than the number of available blocks then it can be repeated several times.The chosen coefficients in each block are the first four coefficients other than the DC component as illustrated in the zig-zag method.The insertion process in the coefficients is done by using Even/Odd technique.Fig. 2 shows a diagram that summarizes the DCT embedding process.The equation for the Even/Odd process is shown as follows:
Hash function algorithm
The process of embedding the hash-key is shown in Fig. 3.
Step 1: the watermarked image, with the robust watermarks using DCT, is divided into two parts.One of them is the whole image excluding the first row.The other part is the selected row.
Step 2: The hash-key using SHA256 is extracted from the first divided part of the image that is represented in Fig. 4 by region 1.Region 2 represents the first row of the DCT watermarked image.Then, the extracted Hash-key from region 1 is converted to binary number to have 256 binary bits.
Step 3: The extracted 256 bits are inserted in the first row of the DCT watermarked image.In fact they are inserted in the first 256 pixels of the first row.The method that is used is inserting the hash-key using LSB method in the spatial domain.
Step 4: The image is reconstructed by combining the row with the rest of the image.This results in an image that is watermarked with robust logos and fragile hash-key.Finally, the image is converted back into PDF file. 3
Hash function extraction
The process of extracting the hash key begins with converting the watermarked PDF file into an image file.Then, the first row of the image is cropped and taken to extract the embedded hash-key using the LSB process in the spatial domain.The hash-key is compared to the embedded one that was extracted from region 1 in Fig. 4. If they are equal, the file is authenticated.
DCT extraction algorithm
After extracting the hash-key, the robust logo watermark is extracted from the multi watermarked PDF file.It is first converted into an image file.Then it is divided into 8x8 blocks and converted into frequency domain using DCT.The watermark logos are extracted using the following equation: → if odd, then w(i, j) = 1 → if even, then w(i, j) = 0 In the previous equation, Q points to the nearest quantization value and ∆ shows the scaling factor.
Then the logos are extracted, but one logo is needed.So, they are summed to give one logo after deciding a threshold as equation 3 shows: W(i, j) = w1(i, j) + w2(i, j) + w3(i, j) + w4(i, j) (3) if W(i, j) ≥ 3 → W(i, j) = 1 if W(i, j) < 3 → (i, j) = 0 Fig. 6 shows the diagram for the DCT extraction process of the robust watermark.
RESULTS
The new algorithm was implemented and tested on a PDF file containing a grey image of Lena and a PDF file containing a scanned Ottoman Painting.After converting the PDF files into an image files, the image file of Lena was found to have the dimensions of 512x512 and the image of the Painting had the dimensions of 1244x972.The inserted logos were numbered as four logos, each one of them has the dimensions of 64x64.Moreover, the hash-key was embedded in the border and extracted from the hash part of the image (region 1) using SHA-256 hash function.Fig. 7 and Fig. 8 show the original PDF file and the converted into image of Lena and the Painting respectively.
The Peak Signal to Noise Ratio (PSNR) is used to show the difference between the original file and the watermarked one.Moreover, the following scaling factors were tested: 4 and 8.The used scaling factor in this paper is 4. Tables 1 and 2 show some analysis.The obtained Hash-key using SHA-256 Hash function represents a number with a size of 64 hexadecimal.Table 3 shows the Hash-key extracted from region 1 in Fig. 4. Any small or big change or attack on the watermarked file will change the Hash-key of the region 1, making it different from the hash-key inserted in the border.
Table IV shows the difference in the hash-key.Table 4 shows that any change or attack on the watermarked image will cause a change in the hash-key that is regenerated from the image.The robustness of the logo watermarks has been tested under different types of attacks such as JPEG compression and cropping.That means if the PDF file was compressed or cropped, the watermark will still survive the attack.Fig. 9 and Fig. 10 show the extracted watermark that survives different degrees of the JPEG compression for the Lena and Paining files respectively.Moreover, Fig. 12 shows the extracted watermark after cropping the PDF files.That means the files have been cropped four times, each time the files were cropped from different quarter as shown if Fig. 11.
CONCLUSION
The new algorithm introduces a method of digital watermarking PDF files that uses two different types of watermark signals to embed them in the document.One is robust and is used to prove the ownership of the PDF file.The second watermark is fragile and changed by any type of attacks imposed on the PDF file.This type of watermarking is used to authenticate the file and to detect attacks on the file.The algorithm was tested successfully on several PDF files.
Fig 2 :
Fig 2: The DCT embedding In the previous equations, f k (i, j) indicates an (8x8) blocks of the original image and F k (u, v) is its Discrete Cosine Transform (DCT).w(i, j) is the watermarked image.Moreover, Q e represents an even quantization while Q o represents an odd quantization to the nearest integer number.The H k points to the chosen coefficient locations and ∆ is the quantization scaling factor.
Fig 3 :Fig 4 :
Fig 3: The embedding of the hash-key
Fig 5 :Fig 6 :
Fig 5: The extraction process of the hash-key
Fig 9 :
Fig 9: Extracted Watermark logo from different compression degrees from Lena
Fig 10 :
Fig 10: Extracted Watermark logo from different compression degrees from Painting
Table 1 :
Performance of multi-watermarked file
Table 2 :
Performance of multi-watermarked file with different scaling factors
Table 4 :
change in the Hash-key | 2,272.2 | 2014-03-01T00:00:00.000 | [
"Computer Science"
] |
Time-Resolved Spectroscopy of Fluorescence Quenching in Optical Fibre-Based pH Sensors
Numerous optodes, with fluorophores as the chemical sensing element and optical fibres for light delivery and collection, have been fabricated for minimally invasive endoscopic measurements of key physiological parameters such as pH. These flexible miniaturised optodes have typically attempted to maximize signal-to-noise through the application of high concentrations of fluorophores. We show that high-density attachment of carboxyfluorescein onto silica microspheres, the sensing elements, results in fluorescence energy transfer, manifesting as reduced fluorescence intensity and lifetime in addition to spectral changes. We demonstrate that the change in fluorescence intensity of carboxyfluorescein with pH in this “high-density” regime is opposite to that normally observed, with complex variations in fluorescent lifetime across the emission spectra of coupled fluorophores. Improved understanding of such highly loaded sensor beads is important because it leads to large increases in photostability and will aid the development of compact fibre probes, suitable for clinical applications. The time-resolved spectral measurement techniques presented here can be further applied to similar studies of other optodes.
Introduction
Sensing of physiological parameters such as pH [1][2][3][4][5][6][7][8], oxygen [2,3,[8][9][10][11], glucose [12][13][14], and lactate [15] using fluorescence spectroscopy offers a sensitive technique that has the potential for clinical diagnosis [16,17]. This is achieved by measurement of the fluorescence emission from an analyte-sensitive fluorophore, often with comparison to an insensitive reference reporter. However, the fluorescent emission spectra can be highly dependent on the environment; for instance, it can be affected by pH, polarity, temperature, and proximity to other fluorophores or quenching agents. For in vivo applications, optical fibre sensor systems (optodes) are desirable because they can be placed into remote areas of the human body [18]. The approaches reported include fibre intrinsic interferometers [5], a variety of responsive coatings [2,3,6], and nano or microparticles attached onto the end of the optical fibre [4,8,19,20]. In the latter cases, these sensors often require a high loading density to provide sufficient signal and sensitivities, which we show here may lead to interesting quenching dynamics [21][22][23]. Recently, we demonstrated a new architecture for fibre-based in situ pH and oxygen ratiometric sensing [8], with the sensors based on amino-modified silica microspheres covalently conjugated to fluorophores placed into pits etched on the ends of multi-core optical fibres. Using this approach, we reported pH and oxygen sensing in an ex vivo lung perfusion model with an accuracy of 0.02 pH units and 0.6 mg/L dissolved oxygen sensitivity [8]. Interestingly the optical properties displayed by the silica microspheres with high fluorophore loading densities were intriguing and lead to the detailed studies reported here, which are pertinent to all optical sensing platforms using immobilised fluorescent reporters.
The silica microspheres were loaded with the fluorophore carboxyfluorescein to form so-called "sensor beads". These were assembled into the etched cores of an optical fibre building the full sensing optode. The use of immobilised carboxyfluorescein (FAM) as a pH indicator in chemical sensors is common, however, while fluorescein and its derivatives exhibit high quantum yields and show pH sensitivity, they are also prone to self-quenching which affects their fluorescence emission [24]. Thus, we reported that high loading density gave an unusual fluorophore response, notably an inverse response to pH (compared to previous reports [24]), but improved sensor stability [8]. Here we studied these highly loaded fluorophore sensors using time-resolved single photon detection methods to explicitly observe the effects of different loadings.
Fibre-based steady-state fluorescence intensity measurements can struggle to separate effects of concentration-dependent signals, photobleaching, and fluorescence from both the optical fibre and the biological samples. Time-resolved fluorescence spectroscopy (TRFS) and fluorescence lifetime imaging microscopy (FLIM) offer the potential to overcome most of these issues [25]. Fluorescence lifetime offers contrast between fluorophores, is not typically affected by photobleaching, and is independent of the concentration of fluorophores (with the exception of at high packing density where quenching effects between the individual fluorophores are no longer negligible as shown here). Beyond this, fluorescence lifetime measurement can provide insight into the relaxation dynamics in excited systems as effected by the fluorophore environment.
Several methods of fluorescence lifetime spectroscopy exist, and are differentiated into time-domain techniques, such as time-correlated single photon counting (TCSPC), and frequency-domain techniques, such as phase fluoroscopy. Here we applied TRFS with TCSPC to study the spectral and time-resolved response of our pH-sensing optode with sensor beads deposited on each multimode core of the multicore fibre. We sampled all 19 cores serially in an automated process to build one measurement set, varying fluorophore loading or external conditions by simply placing the optode into differing buffers prior to measurements. A steady-state commercial spectrometer in combination with a custom-built time-resolved spectrometer recorded the multi-dimensional fluorophore emission characteristics. The intensity, spectral, and time-resolved signals in combination provide insight into the dynamic fluorophore emission processes of such highly loaded beads. Specifically, the advantage of the TRFS with TCSPC study presented here is the ability to investigate such multi-dimensional data, beyond that normally observed in optode development. The study results and techniques used here are relevant to the understanding of any sensors based on high fluorophore loading-particularly relevant for compact optical fibre sensors in which limited space is likely to lead to high fluorophore loading regimes.
Fluorescent Silica Microspheres as Fibre-Based pH Sensors
The protocol for fabrication of the fibre-based pH sensing optode was adapted from Choudhary et al. [8] and is shown in Figure 1.
Sensors 2020, 20, x FOR PEER REVIEW 3 of 12 loading-particularly relevant for compact optical fibre sensors in which limited space is likely to lead to high fluorophore loading regimes.
Fluorescent Silica Microspheres as Fibre-Based pH Sensors
The protocol for fabrication of the fibre-based pH sensing optode was adapted from Choudhary et al. [8] and is shown in Figure 1. Biotech GmbH & Co., Steinfurt, Germany) were covalently coupled to carboxyfluorescein before being deposited into the etched cores of a 19-core multicore fibre (MCF). Scanning electron microscope (SEM) images of the sensing optode and a bead (functionalised microsphere) are shown. (B) Experimental setup used for the investigation of the response of the fluorescence emission to bead loading density and pH changes. The setup comprised a pulsed laser source, a coupling and collection system, and either a steady state spectrometer or a time-resolved spectrometer (Key: L-lens, BPbandpass filter, DM-dichroic mirror, G-transmission grating, LP-longpass filter). (C) The distal end sampling of the fluorescence emission was achieved by inserting the sensing optode into buffer solutions at various pHs and automatically scanning across each of the 19 cores at the proximal end. (D) Two-dimensional data of spectrally and time-resolved fluorescence emission from the excited fluorophores.
The silica microspheres were forced into the etched cores by pushing the fibre tip into the microsphere powder such that individual microspheres were forced into the individual cores. Loose microspheres were removed by rinsing thoroughly and wiping with tissue. The microspheres once settled into an etched core are spheres in approximately parabolic pits, and as such were seated firmly. Repeated rubbing on tissue or immersion into the pH buffer had no effect on dislodging the beads. Further assessment of loading repeatability and stability was performed in Choudhary et al. [8], while the later results confirm a reliable response between fibre cores.
Experimental Setup
The fluorescence characteristics-intensity, spectral line shape, and lifetime-of the pH-sensing beads were investigated with a steady-state and a time-resolving spectroscopy system (see Figure 1B). Although the light source and the optical system for coupling and collection were identical, the spectrometers differed. A 485 nm pulsed laser diode (LDH-D-C-485 with PDL 800-D driver, PicoQuant, Berlin, Germany) capable of pulsed or continuous wave (CW) operation was used for excitation and coupled into the coupling-and-collection optical system based around a dichroic beam splitter (Thorlabs, Ely, UK). For the steady state setup, the laser was operated in CW mode, triggered to operate for 100 ms at 10 µW, and the spectra recorded with a simultaneously triggered commercial spectrometer (QE-Pro VIS, Ocean Optics now Ocean Insight, Largo, FL, USA).
For time-resolved measurements the laser was operated in pulsed mode at 20 MHz repetition rate. The time-resolved spectrometer was based on a 256 × 1 pixels complementary metal-oxide-semiconductor (CMOS) single-photon avalanche diode (SPAD) line sensor which allows fast histogramming of arriving photons with time-correlation single photon counting (TCSPC) triggered by the laser source [20,26,27]. The TCSPC capable CMOS SPAD line sensor detects single photons and generates histograms according to their arrival time for 256 pixels simultaneously, each correlated to a different wavelength. An average power of 2 µW was used with an integration time of 10 s to ensure a single-photon regime, avoiding pile-up effects, and recording sufficient signal (photons) for a quantitative analysis.
The measurements were taken in both modalities (steady state and time-resolved fluorescence) with the fibre dipped into a phosphate buffer system at various pHs (5.8, 6.2, 6.5, 7.2, 7.8, 8.1, 8.5) in a randomised order, as indicated in Figure 1C. The pH buffers were checked before measurements with a commercial pH meter (Mettler Toledo, Columbus, OH, USA). For each measurement the sensing optode was placed into the pH buffer for 30 s to stabilise before measurement. Light was coupled consecutively into each of the 19 cores automatically (see Figure 1C) via a motorized x-y-z stage (Nanomax, Thorlabs, Ely, UK). For the steady state experiment the triggered 100 ms measurements were performed automatically at each core. The time-resolved measurement was manually initiated. Three complete steady-state measurement series (of 7 randomised pH buffers) were performed, followed by a time-resolved measurement series, then a final, fourth, steady-state measurement series.
Data Analysis
For the steady-state measurement, the spectra were background subtracted via the OceanView software (OceanOptics, now Ocean Insight, Largo, FL, USA). Empty cores were omitted, as were cores with extremely weak signals indicating the microsphere was not well seated. The remaining spectra were compared by evaluating the integrated fluorescence intensity and the intensity-weighted mean wavelength. The variance between cores provided the statistics shown in later figures. Figure 1D shows an example of the two-dimensional data of the spectrally and time-resolved fluorescence emission from an excited fluorophore. To derive the fluorescence lifetimes from the measured decay curves, a non-linear curve fitting method was used and optimised to reduce the chi-squared error. To avoid distortion at the beginning of the decay, a tail fit was performed omitting the first time bin. The instrument's response function was not deconvolved, resulting in slightly increased derived lifetimes (this does not affect the conclusions because all of the compared lifetimes would be affected in the same way). The more diluted fluorophores fitted well with a single exponential function whereas the more densely loaded beads (Si@FAM-A and Si@FAM-B) were found to fit best with a double exponential function and a single lifetime derived as an amplitude-weighted function of both fluorescence lifetimes [23,28]. Sensors 2020, 20, x FOR PEER REVIEW 5 of 12 compared by evaluating the integrated fluorescence intensity and the intensity-weighted mean wavelength. The variance between cores provided the statistics shown in later figures. Figure 1D shows an example of the two-dimensional data of the spectrally and time-resolved fluorescence emission from an excited fluorophore. To derive the fluorescence lifetimes from the measured decay curves, a non-linear curve fitting method was used and optimised to reduce the chisquared error. To avoid distortion at the beginning of the decay, a tail fit was performed omitting the first time bin. The instrument's response function was not deconvolved, resulting in slightly increased derived lifetimes (this does not affect the conclusions because all of the compared lifetimes would be affected in the same way). The more diluted fluorophores fitted well with a single exponential function whereas the more densely loaded beads (Si@FAM-A and Si@FAM-B) were found to fit best with a double exponential function and a single lifetime derived as an amplitudeweighted function of both fluorescence lifetimes [23,28]. Labelling references the dilution of the fluorophore during loading onto the sensors as described in Section 2.1, such that Si@FAM-E is a substantially lower fluorophore density than Si@FAM-A. To quantify changes in fluorescence response observed in Figure 2A,B, the integrated intensities and the mean wavelengths are plotted versus the loading dilution for all cores in Figure 2C,D, respectively. Fluorescence intensity significantly decreased with increasing bead loading (moving left on the axis), and the spectral line shapes were red-shifted and broadened [24]. It was apparent that intermediate Labelling references the dilution of the fluorophore during loading onto the sensors as described in Section 2.1, such that Si@FAM-E is a substantially lower fluorophore density than Si@FAM-A. To quantify changes in fluorescence response observed in Figure 2A,B, the integrated intensities and the mean wavelengths are plotted versus the loading dilution for all cores in Figure 2C,D, respectively. Fluorescence intensity significantly decreased with increasing bead loading (moving left on the axis), and the spectral line shapes were red-shifted and broadened [24]. It was apparent that intermediate bead loadings (Si@FAM-C and Si@FAM-D) exhibited the greatest fluctuations in intensity between cores.
Changes in the Fluorescence Emission with Bead Loading Density
The calculated mean wavelength quantifies changes in the spectral line shape of the whole optode. This includes the red-shift and broadening, but also an increasing protrusion from the fluorescent background of the fibre at longer wavelength when fluorescence signals from the beads are of lower intensity, e.g., Si@FAM-A and Si@FAM-B. This will influence the trend shown in Figure 2D. However, as can be seen, broadening of the peak of the bead fluorescence is clear.
In Figure 2E the normalised fluorescence decays for each bead loading are displayed and Figure 2F shows the fitted fluorescence lifetimes across all cores. A significant decrease in fluorescence lifetime for beads with greater loading can be seen. Additionally, we observe a shorter rise time for these beads.
The observed features of decreasing fluorescence intensity, spectral broadening, and decreasing fluorescence lifetime with increasing bead loading indicate a quenching effect between the fluorescent molecules. On the highly loaded beads (e.g., Si@FAM-A), the fluorescent molecules are in such close proximity that intermolecular self-quenching occurs. According to the manufacturer, the microspheres have 1 × 10 9 free NH units on their surface to which fluorophore molecules can attach.
If fully occupied, the spacing between molecules is~0.5 nm. At this spacing additional depopulation paths of the fluorescence states occur, such as intersystem crossing or Förster resonance energy transfer (FRET), resulting in fluorophore quenching in which carboxyfluorescein is both donor and acceptor. To be explicit, the fluorophores in close proximity form a coupled excitation energy level system with increased degeneracy. This results in increased decay routes after photon absorption which can allow the excited fluorophore to silently relax (with reduced photon emission). Similarly, due to the degeneracy of relaxation mechanisms, this occurs on a short timescale, thus exhibiting decreased fluorescence lifetime. Of the emissive decay routes, there are more available with lower energy separation, resulting in spectral broadening (red-shifts). The observed dependencies in Figure 2 correlate well with the expected increased coupling between fluorophores as bead loading increased. The most explicit observation is the fluorescent lifetime, reducing from values expected for fluorescein derivatives at low loading (Si@FAM-E), to a short lifetime below the resolution of our system (likely well below 1 ns), indicating a heavily coupled or quenched system (Si@FAM-A). It is interesting to note the prolonged rise time and the flattened peak in Figure 2E for the less quenched fluorophores. This is not included in the definition of the fluorescence lifetime and therefore the fit to the decay, but indicates greater change in the dynamics than we enumerate in Figure 2F.
Some variation in fluorophore signal strength is expected from bead size variations, and therefore seating, of the microspheres in the pits of the fibre. However, it is notable that in Figure 2C the intermediate loading densities (Si@FAM-C and Si@FAM-D) have greater amplitude fluctuations between cores; this is also observed in Figure 2E,F for the lifetime. This represents the fluorophore being partially quenched, and therefore in a more unstable state sensitive to slight fluctuations in surroundings.
Photostability
Fluorescein and its derivatives are affected by photobleaching, which limits their utility for repeated measurements or monitoring in vivo. To assess the photostability of the quenched fluorophores, the averaged fluorescence intensities over all cores for pH 7.2 between four intensity-based measurement series are shown in Figure 3. Importantly, the time-resolved measurements were performed between the third and fourth series with an average power of 2 μW at 20 MHz repetition rate with pulse widths of <100 ps, which leads to a peak power of ~1 mW during each pulse. Photobleaching is a non-linear effect, where a multiply excited fluorophore becomes permanently changed. As such, even with lower average powers, short higher peak power pulses have a much stronger photobleaching effect than constant low power illumination. In addition, the time-resolved measurement took place over 10 s for each core, resulting in notable potential for photobleaching compared to the steady state measurements (100 ms duration per core, 10 μW).
In Figure 3 we observe that the majority of beads showed a reasonably stable fluorescence during the three repeats of steady state measurements, with only more notable changes observed in the fourth measurement. However, the greatest variation is visible in the less densely loaded beads, Si@FAM-E, which shows significant photobleaching over the measurements. This is in direct agreement with results presented in Choudhary et al. [8] (Supplementary Information), in which signal degradation is examined during repeated illumination.
An interesting response is observed in Figure 3 for the fourth measurement set (after more aggressive pulsed illumination). Here the less densely loaded Si@FAM-E exhibited notable bleaching as might be expected, but upon moving to higher loadings the intensity is observed to increase after intense illumination. This is perhaps most notable for intermediate loading regimes (e.g., Si@FAM-B and Si@FAM-C).
Limited bleaching under normal illumination is consistent with more numerous relaxation paths present for the quenched fluorescence molecules. The increased signal after intense illumination shows that the destruction of some fluorophores results in an effective reduction of the loading density and hence the quenching effect. The outcome is an increase in fluorescence intensity.
Response to pH
The pH sensitivity of the functionalised microspheres at various bead loadings was accessed in pH buffers in a physiologically relevant range (pH 5.8 to 8.6) in randomised order over three repetitions. Carboxyfluorescein is a known pH sensitive molecule and the expected behaviour is that the fluorescence intensity increases with increasing pH [24]. Figure 4A compares the response to pH by normalizing the fluorescence intensity changes to the value at pH 7.2. Importantly, the time-resolved measurements were performed between the third and fourth series with an average power of 2 µW at 20 MHz repetition rate with pulse widths of <100 ps, which leads to a peak power of~1 mW during each pulse. Photobleaching is a non-linear effect, where a multiply excited fluorophore becomes permanently changed. As such, even with lower average powers, short higher peak power pulses have a much stronger photobleaching effect than constant low power illumination. In addition, the time-resolved measurement took place over 10 s for each core, resulting in notable potential for photobleaching compared to the steady state measurements (100 ms duration per core, 10 µW).
In Figure 3 we observe that the majority of beads showed a reasonably stable fluorescence during the three repeats of steady state measurements, with only more notable changes observed in the fourth measurement. However, the greatest variation is visible in the less densely loaded beads, Si@FAM-E, which shows significant photobleaching over the measurements. This is in direct agreement with results presented in Choudhary et al. [8] (Supplementary Information), in which signal degradation is examined during repeated illumination.
An interesting response is observed in Figure 3 for the fourth measurement set (after more aggressive pulsed illumination). Here the less densely loaded Si@FAM-E exhibited notable bleaching as might be expected, but upon moving to higher loadings the intensity is observed to increase after intense illumination. This is perhaps most notable for intermediate loading regimes (e.g., Si@FAM-B and Si@FAM-C).
Limited bleaching under normal illumination is consistent with more numerous relaxation paths present for the quenched fluorescence molecules. The increased signal after intense illumination shows that the destruction of some fluorophores results in an effective reduction of the loading density and hence the quenching effect. The outcome is an increase in fluorescence intensity.
Response to pH
The pH sensitivity of the functionalised microspheres at various bead loadings was accessed in pH buffers in a physiologically relevant range (pH 5.8 to 8.6) in randomised order over three repetitions. Carboxyfluorescein is a known pH sensitive molecule and the expected behaviour is that the fluorescence intensity increases with increasing pH [24]. Figure 4A compares the response to pH by normalizing the fluorescence intensity changes to the value at pH 7.2. The experiments were all performed at room temperature. However, the responses to pH of fluorescein may be expected to vary with temperature, and as such calibration would instead be performed at ~37 °C for anticipated in vivo use to match the core body temperature.
The less densely loaded Si@FAM-E beads displayed a factor of 2 change in intensity across the range (matching that expected from the literature [24]) decreasing with lower pH. However, large error bars show the variation between cores and across measurement repetitions. As the pH buffer order was randomised, and notable photobleaching has been observed for this sensor (Figure 3), the large variability in response is expected. The more highly loaded beads exhibited a very different response, inverting the usual response with increasing fluorescence intensity at lower pH, as first observed in Choudhary et al. [8]. The previously observed stability of these beads (Figure 3) in combination with repeatability between the cores results in relatively small error bars and a reliable pH sensor.
Changes in the spectral line shape due to pH responses were quantified with the mean wavelength (see Figure 4B). The higher loaded beads showed a red-shift compared to those with a lower loading, increasing at higher pH (most apparent for Si@FAM-A and Si@FAM-B). The beads with the highest loading exhibited a fluorescent lifetime below that of our measurement system (time resolution ~400 ps and Instrument Response Function (IRF) ~1 ns [1,2]. Lower loaded beads exhibited the expected longer lifetimes, however, Figure 4C,D highlight that the fluorescence lifetime of an intermediate fluorophore bead loading changes in an interesting way in response to pH. A decrease in lifetime is observed at higher pH, where normally an increased intensity signal would also be expected, but here a decrease is seen. The explicit observation of decreasing lifetime at higher pH is The experiments were all performed at room temperature. However, the responses to pH of fluorescein may be expected to vary with temperature, and as such calibration would instead be performed at~37 • C for anticipated in vivo use to match the core body temperature.
The less densely loaded Si@FAM-E beads displayed a factor of 2 change in intensity across the range (matching that expected from the literature [24]) decreasing with lower pH. However, large error bars show the variation between cores and across measurement repetitions. As the pH buffer order was randomised, and notable photobleaching has been observed for this sensor (Figure 3), the large variability in response is expected. The more highly loaded beads exhibited a very different response, inverting the usual response with increasing fluorescence intensity at lower pH, as first observed in Choudhary et al. [8]. The previously observed stability of these beads (Figure 3) in combination with repeatability between the cores results in relatively small error bars and a reliable pH sensor.
Changes in the spectral line shape due to pH responses were quantified with the mean wavelength (see Figure 4B). The higher loaded beads showed a red-shift compared to those with a lower loading, increasing at higher pH (most apparent for Si@FAM-A and Si@FAM-B). The beads with the highest loading exhibited a fluorescent lifetime below that of our measurement system (time resolution~400 ps and Instrument Response Function (IRF)~1 ns [1,2]. Lower loaded beads exhibited the expected longer lifetimes, however, Figure 4C,D highlight that the fluorescence lifetime of an intermediate fluorophore bead loading changes in an interesting way in response to pH. A decrease in lifetime is observed at higher pH, where normally an increased intensity signal would also be expected, but here a decrease is seen. The explicit observation of decreasing lifetime at higher pH is consistent with increased quenching occurring, reducing both emission intensity and lifetime when an isolated fluorophore would have been more active. This is also consistent with the observed increased red-shift (spectral broadening) seen at higher pH.
Ratiometric Dual Fluorophore Optode
The ratiometric dual fluorophore optode was developed in the same way as described for the carboxyfluorescein optode with the silica beads loaded with two fluorophores (carboxyfluorescein (FAM) and tetramethylrhodamine (TAMRA) in a ratio of 300:1). The carboxyfluorescein loading corresponds to that of Si@FAM-A [8]. Although the fluorescence intensity of carboxyfluorescein changes withpH, tetramethylrhodamine does not and could thus be used as an in-measurement reference point to improve pH measurements. However, in a highly coupled system the situation is more complex. Figure 5 investigates the changes in fluorescence spectra and lifetime of the dual fluorophore loaded beads, Si@FAM-TAMRA, to investigate the coupling.
Sensors 2020, 20, x FOR PEER REVIEW 9 of 12 consistent with increased quenching occurring, reducing both emission intensity and lifetime when an isolated fluorophore would have been more active. This is also consistent with the observed increased red-shift (spectral broadening) seen at higher pH.
Ratiometric Dual Fluorophore Optode
The ratiometric dual fluorophore optode was developed in the same way as described for the carboxyfluorescein optode with the silica beads loaded with two fluorophores (carboxyfluorescein (FAM) and tetramethylrhodamine (TAMRA) in a ratio of 300:1). The carboxyfluorescein loading corresponds to that of Si@FAM-A [8]. Although the fluorescence intensity of carboxyfluorescein changes with pH, tetramethylrhodamine does not and could thus be used as an in-measurement reference point to improve pH measurements. However, in a highly coupled system the situation is more complex. Figure 5 investigates the changes in fluorescence spectra and lifetime of the dual fluorophore loaded beads, Si@FAM-TAMRA, to investigate the coupling. Further calibration of signal amplitude response to pH is provided in Choudhary et al. [8]. Figure 5A compares the fluorescence spectra of the dual fluorophore beads with those that were loaded individually with carboxyfluorescein, Si@FAM, and tetramethylrhodamine, Si@TAMRA. The spectral regions where each fluorophore is dominant (chosen for the greatest contrast in lifetime) are highlighted and were used for the time-resolved analysis. Figure 5B shows the spectral changes of the combined fluorescence system in response to pH. The intensities of the characteristic tetramethylrhodamine emissions changed strongly, and the carboxyfluorescein region relatively little. Figure 5C shows the pH dependency of the normalised fluorescence decays, and Figure 5D shows the fitted fluorescence lifetimes, from the carboxyfluorescein (blue) and tetramethylrhodamine (red) dominated regions of the spectra. The Further calibration of signal amplitude response to pH is provided in Choudhary et al. [8]. Figure 5A compares the fluorescence spectra of the dual fluorophore beads with those that were loaded individually with carboxyfluorescein, Si@FAM, and tetramethylrhodamine, Si@TAMRA. The spectral regions where each fluorophore is dominant (chosen for the greatest contrast in lifetime) are highlighted and were used for the time-resolved analysis. Figure 5B shows the spectral changes of the combined fluorescence system in response to pH. The intensities of the characteristic tetramethylrhodamine emissions changed strongly, and the carboxyfluorescein region relatively little. Figure 5C shows the pH dependency of the normalised fluorescence decays, and Figure 5D shows the fitted fluorescence lifetimes, from the carboxyfluorescein (blue) and tetramethylrhodamine (red) dominated regions of the spectra. The carboxyfluorescein dominated decays show a short fluorescence lifetime similar to that seen in the highly loaded carboxyfluorescein beads, e.g., Figure 2E Si@FAM-A. The tetramethylrhodamine dominated region shows a longer lifetime for tetramethylrhodamine (expected fluorescence lifetime of~2.3 ns), which decreases with increasing pH. Again, we noticed the further difference in the rising dynamics of the fluorescence decay for the carboxyfluorescein and tetramethylrhodamine dominated regions, not represented in the fitted lifetime.
These results describe a classic FRET pair where carboxyfluorescein is donor and tetramethylrhodamine acceptor, loaded at high density and therefore exhibiting a joint response to environmental changes [21,23,29]. As previously discussed for the case of highly coupled carboxyfluorescein systems, the response here describes energy transfer and quenching occurring. The carboxyfluorescein changes with pH are being transferred to the tetramethylrhodamine intensity responses [30,31]. At high pH, the fluorescein more actively quenches the tetramethylrhodamine, exhibiting reduced intensity and lifetime at the longer wavelength region. Furthermore, the coupled system also pulls up the lifetime of the carboxyfluorescein dominated region. Explicit combined spectral and lifetime evaluation offers insight into such FRET pairs and could be used for the study of other complex systems.
Conclusions
Fibre-based endoscopic fluorescence sensors have the potential for clinical in vivo application to enhance the understanding of physiological parameters in disease pathology. The need for miniaturised optodes introduces a requirement for high fluorophore density due to the small practical volume. An example of this, based on silica microspheres, was thoroughly characterised here in a spectrally and time-resolved manner. These investigations describe the opportunity for optical fibre time-resolved spectroscopy to provide insight into fluorophore response, especially in the case of densely loaded fluorophores on the ends of optical fibres. However, these techniques are also applicable to investigation of spectral fluorescence dynamics of other samples, such as endogenous tissues.
Here fluorophores were attached covalently to amino-modified silica microspheres that were seated into the etched cores of a 19 core MCF. For five fibres with beads with differing loadings, all cores were characterised utilising an automated alignment system to provide ensemble measurements. We observed decreasing fluorescence intensity, spectral red-shift, and decreasing fluorescence lifetime with increasing bead loading density, which explicitly demonstrates the quenching effect due to energy transfer between the fluorescent molecules. The most densely loaded beads exhibited stable fluorescence across multiple measurement series in comparison to those with reduced loading densities, indicating a limited photobleaching effect under normal illumination as numerous relaxation paths are present for the quenched fluorescence molecules. The fluorescence pH response of these microsensors was evaluated in the physiologically relevant region from pH 5.8 to 8.6. The most densely loaded beads showed a complex quenching effect which reversed the normal influence of pH on fluorescein intensity, decreasing intensity and lifetime with increasing pH.
Finally, we investigated a ratiometric microsensor loaded with carboxyfluorescein and tetramethylrhodamine, which also exhibited strong energy transfer between the fluorophores of a classic FRET pair. Here the observed response to pH shows decreasing tetramethylrhodamine fluorescence intensity with increasing pH, and a converging fluorescence lifetime of both fluorophores with increasing pH due to the coupling between the molecules. The time-resolved features attributable to the distinct fluorophores are separable through the spectral capabilities of our detection system.
Miniaturised fibre-optic based sensors for physiological parameters could have the potential to aid clinical diagnostics in the future, providing key information about physiological parameters. However, a high loading density of the reporter fluorophore is desirable to achieve sufficient sensitivity. This leads to altered fluorescence emission and dynamics that can be traced back to quenching effects. Our results indicate that the sensors work better in these highly loaded regimes and the findings presented in this work are applicable to the development, design, and engineering of next generation fibre-optic biosensors [7,8,32,33] for in vivo analysis of physiological parameters. | 7,288.4 | 2020-10-27T00:00:00.000 | [
"Chemistry",
"Engineering",
"Medicine"
] |
HalluciNet-ing Spatiotemporal Representations Using a 2D-CNN
: Spatiotemporal representations learned using 3D convolutional neural networks (CNN) are currently used in state-of-the-art approaches for action-related tasks. However, 3D-CNN are notorious for being memory and compute resource intensive as compared with more simple 2D-CNN architectures. We propose to hallucinate spatiotemporal representations from a 3D-CNN teacher with a 2D-CNN student. By requiring the 2D-CNN to predict the future and intuit upcoming activity, it is encouraged to gain a deeper understanding of actions and how they evolve. The hallucination task is treated as an auxiliary task, which can be used with any other action-related task in a multitask learning setting. Thorough experimental evaluation, it is shown that the hallucination task indeed helps improve performance on action recognition, action quality assessment, and dynamic scene recognition tasks. From a practical standpoint, being able to hallucinate spatiotemporal representations without an actual 3D-CNN can enable deployment in resource-constrained scenarios, such as with limited computing power and/or lower bandwidth. We also observed that our hallucination task has utility not only during the training phase, but also during the pre-training phase.
The power of 3D-CNNs comes from their ability to attend to the salient motion patterns of a particular action class. In contrast, 2D-CNNs are generally used for learning and extracting spatial features pertaining to a single frame/image; thus, by design, they do not take into account any motion information and, therefore, lack temporal representation power. Some works [16][17][18][19] have addressed this by using optical flow, which will respond at all pixels that have moved/changed. This means the optical flow can respond to cues both from the foreground motion of interest, as well as the irrelevant activity happening in the background. This background response might not be desirable since CNNs have been shown to find short cuts to recognize actions not from the meaningful foreground, but from background cues [20,21]. These kinds of short cuts might still be beneficial for action recognition tasks but not in a meaningful way, that is, the 2D network is not actually learning to understand the action itself, but rather the contextual cues and clues. Despite these shortcomings, 2D-CNNs are computationally lightweight, which makes them suitable for deployment on edge devices. In short, 2D-CNNs have the advantage of being computationally less expensive, while 3D-CNNs extract spatiotemporal features that have more representation power. In our work, we propose a way to combine the best of both worlds-rich spatiotemporal representation with low computational cost. Our inspiration comes from the observation that given even a single image of a scene, humans can predict how the scene might evolve. We are able to do so because of our experience and interaction in the world, which provides a general understanding of how other people are expected to behave and how objects can move or be manipulated. We propose to hallucinate spatiotemporal representations as computed by a 3D-CNN, using a 2D-CNN, utilizing only a single still frame (see Figure 1). The idea is to force a 2D-CNN to predict the motion that will occur in the next frames, without ever having to actually see it. Contributions: We propose a novel multitask approach, which incorporates an auxiliary task of approximating 3D-CNN representations using a 2D-CNN and a single image. It has the following benefits: • Conceptually, our hallucination task can provide a richer, stronger supervisory signal that can help the 2D-CNN to gain a deeper understanding of actions and how a given scene evolves with time. Experimentally, we found our approach to be beneficial in the following computer vision tasks: 1. Action recognition (actions with short-and long-term temporal dynamics).
Scene recognition.
Furthermore, we also found hallucination task to be useful during the following: 1. The pretraining phase.
2.
The training phase.
• Practically, approximating spatiotemporal features, instead of actually computing them, is useful for the following: 1.
Limited compute power (smart video camera systems, lower-end phones, or IoT devices).
2.
Limited/expensive bandwidth (Video Analytics Software as a Service (VA SaaS)), where our method can help reduce the transmission load by a factor of 15 (need to transmit only 1 frame out of 16).
Many computer vision efforts in areas such as automated (remote) physiotherapy (action quality assessment), which are targeted for low-income groups, make use of 3D-CNNs. It is more likely that a low income demographic would have devices with low computational resources and restricted communication resources, which are not suitable to run 3D-CNNs; in these cases, we can just hallucinate spatiotemporal representations, instead of using actual 3D-CNNs and a large number of frames.
Related Work
Our work is related to predicting features, developing efficient/light-weight spatiotemporal network approaches, and distilling knowledge. Next, we briefly compare and contrast our approach to the most closely related works in the literature.
Capturing information in future frames: Many works have focused on capturing information in future frames [16][17][18][22][23][24][25][26][27][28][29][30]. Generating future frames is a difficult and complicated task, and usually requires the disentangling of background, foreground, lowlevel and high-level details, and modeling them separately. Our approach to predicting features is much simpler. Moreover, our goal is not to a predict a pixel-perfect future, but rather to make predictions at the semantic level.
Instead of explicitly generating future frames, works such as [16][17][18][19] focused on learning to predict the optical flow (very short-term motion information). These approaches, by design, require the use of an encoder and a decoder. Our approach does not require a decoder, which reduces the computational load. Moreover, our approach learns to hallucinate features corresponding to 16 frames, as compared to motion information in two frames. Experiments confirm the benefits of our method over optical flow prediction.
Bilen et al. [29] introduced a novel, compact representation of a video called a "dynamic image", which can be thought of as a summary of full videos in a single image. However, computing a dynamic image requires access to all the corresponding frames, whereas HalluciNet requires processing just a single image.
Predicting features: Other works [27,31,32] proposed predicting features. Our work is closest to [32], where the authors proposed hallucinating depth using the RGB input, whereas we propose hallucinating the spatiotemporal information. Reasoning about depth information is different from reasoning about spatiotemporal evolution.
While these works aim to address either reducing the visual evidence or developing a more efficient architecture design, our solution to hallucinate (without explicitly computing) spatiotemporal representations using a 2D-CNN from a single image aims to solve both the problems, while also providing stronger supervision. In fact, our approach, which focuses on improving the backbone CNN, is complementary to some of these developments [42,43].
Best of Both Worlds
Since humans are able to predict future activity and behavior through years of experience and a general understanding of "how the world works", we would like to develop a network that can understand an action in a similar manner. To this end, we propose a teacher-student network architecture that asks a 2D-CNN to use a single frame to hallucinate (predict) 3D features pertaining to 16 frames. Let us consider the example of a gymnast performing her routine as shown in Figure 1. In order to complete the hallucination task, the 2D-CNN should do the following: • Learn to identify that there's an actor in the scene and localize her; • Spatially segment the actors and objects; • Identify that the event is a balance beam gymnastic event and the actor is a gymnast; • Identify that the gymnast is to attempt a cartwheel; • Predict how she will be moving while attempting the cartwheel; • Approximate the final position of the gymnast after 16 frames, etc.
The challenge is understanding all the rich semantic details of the action from only a single frame.
Hallucination Task
The hallucination task can be seen as distilling knowledge from a better teacher network (3D-CNN), f t , to a lighter student network (2D-CNN), f s . The teacher, f t , is pretrained and kept frozen, while the parameters of the student, f s , are learned. Mid-level representations can be computed as follows: where F T is the T-th video frame. The hallucination loss, L hallu encourages f s to regress φ s to φ t by minimizing the Euclidean distance between φ s and φ t : Multitask learning (MTL): Reducing computational cost with the hallucination task is not the only goal. Since the primary objective is to better understand activities and improve performance, hallucination is meant to be an auxiliary task to support the main action-related task (e.g., action recognition). The main task loss (e.g., classification loss), L mt , is used in conjunction with the following hallucination loss: where λ is a loss balancing factor. The realization of our approach is straightforward, as presented in Figure 1.
Stronger Supervision
In a typical action recognition task, a network is only provided with the action class label. This may be considered a weak supervision signal since it provides a single high-level semantic interpretation of a clip filled with complex changes. More dense labels at lower semantic levels are expected to provide stronger supervisory signals, which could improve action understanding.
In this vein, joint actor-action segmentation is an actively pursed research direction [44][45][46][47][48]. Joint actor-action segmentation datasets [49] provide detailed annotations, through significant annotation efforts. In contrast, our spatiotemporal hallucination task provides detailed supervision of a similar type (though not exactly the same) for free. Since 3D-CNN representations tend to focus on actors and objects, 2D-CNN can develop a better general understanding about actions through actor/object manipulation. Additionally, the 2D representation is less likely to take shortcuts-ignoring the actual actor and action being performed, and instead doing recognition based on the background [20,21]-as it cannot hallucinate spatiotemporal features, which mainly pertain to the actors/foreground from the background.
Prediction Ambiguities
In general, the prediction of future activity with a single frame could be ambiguous (e.g., opening vs. closing a door). However, a study has shown that humans are able to accurately predict immediate future action from a still image 85% of the time [27]. So, while there may be ambiguous cases, there are many other instances where causal relationships exist and the hallucination task can be exploited. Additionally, low-level motion cues can be used to resolve ambiguity (Section 4.4).
Experiments
We hypothesize that incorporating the hallucination task is beneficial by providing a deeper understanding of actions. We evaluate the effect of incorporating the hallucination task in the following settings: • (Section 4.1) Actions with short-term temporal dynamics. • (Section 4.2) Actions with long-term temporal dynamics.
Choice of networks: In principle, any 2D-or 3D-CNNs could be used as student or teacher networks, respectively. Noting the SOTA performance of 3D-ResNeXt-101 [3] on action recognition, we choose to use it as our teacher network. We considered various student models. Unless otherwise mentioned, our student model is VGG11-bn, and pretrained on the ImageNet dataset [50]; the teacher network was trained on UCF-101 [51] and kept frozen. We named the 2D-CNN trained with the side-task hallucination loss as HalluciNet, and the one without hallucination loss as (vanilla) 2D-CNN, while the HalluciNet direct variant, which directly uses hallucinated features for the main action recognition task.
Which layer to hallucinate? We chose to hallucinate the activations of the last bottleneck group of 3D-ResNeXt-101, which are 2048-dimensional. Representations of shallower layers will have higher dimensionality and will be less semantically mapped.
Implementation details: We used PyTorch [52] to implement all of the networks. Network parameters were optimized using an Adam optimizer [53] with a beginning learning rate of 0.0001. λ in Equation (4) was set to 50, unless specified otherwise. Further experiment specific details are presented with the experiment. The codebase will be made publicly available.
Performance baselines: Our performance baseline was a 2D-CNN with the same architecture, but was trained without hallucination loss (vanilla 2D-CNN). In addition, we also compared the performance against other popular approaches from the literature, specified in each experiment.
Actions with Short-Term Temporal Dynamics
In the first experiment, we tested the influence of the hallucination task for general action recognition. We compared the performance with two single frame prediction techniques: dense optical flow prediction from a static image [17], and motion prediction from a static image [19].
Datasets: The following action recognition datasets were considered.
1. UCF101 [51] is an action recognition dataset of realistic in-the-wild action videos, collected from YouTube, having 101 action categories. With 13,320 videos from 101 action categories, UCF101 provides the largest diversity in terms of actions and with the presence of large variations in camera motion, object appearance and pose, object scale, viewpoint, cluttered background, illumination conditions, etc., and it is the most challenging dataset to date. The action categories can be divided into five types: (1) human-object interaction; (2) body motion only; (3) human-human interaction; (4) playing musical instruments; and (5) sports.
2.
HMDB-51 [54] is collected from various sources, mostly from movies, and a small proportion from public databases such as the Prelinger archive, YouTube and Google videos. This dataset contains 6849 clips divided into 51 action categories, each containing a minimum of 101 clips. The action categories can be grouped in five types: (1) general facial actions; (2) facial actions with object manipulation; (3) general body movements; (4) body movements with object interaction; and (5) body movements for human interaction. Since HMDB-51 video sequences are extracted from commercial movies as well as YouTube, it represents a fine multifariousness of light conditions, situations and surroundings in which the action can appear, captured with different camera types and recording techniques, such as points of view.
In order to be consistent with the literature, we adopted their experimental protocols. Center frames from the training and testing samples were used for reporting performance, and are named UCF-and HMDB-static, as in the literature [19].
We summarize the performance on the action recognition task in Table 1a. We found that on both datasets, incorporating the hallucination task helped. Our HalluciNet outperformed prior approaches [17,19] on both the UCF-101 and HMDB-51 datasets. Moreover, our method has an advantage of being computationally lighter than [19], as it does not use a flow image generator network. Qualitative results are shown in Figure 2. In the successes, ambiguities were resolved. The failure cases tended to confuse semantically similar classes with similar motions, such as FloorGymnastics/BalanceBeam or Kayaking/Rowing. To evaluate the quality of the hallucinated representations themselves, we directly used those representations for the main action recognition task (HalluciNet direct ). We noticed that the hallucinated features had strong performance, improved on the 2D-CNN, and, in fact, performed best on the UCF-static. (a)
UCF-Static HMDB-Static
App stream [19] 63.60 35.10 App stream ensemble [19] 64.00 35.50 Motion stream [19] 24.10 13.90 Motion stream [17] 14.30 04.96 App + Motion [19] 65.50 37.10 App + Motion [17] 64. 50 Next, we used the hallucination task to improve the performance of recent developments, TRN [43] and TSM [42]. We used Resnet-18 (R18) as the backbone for both, and implemented single, center segment, 4-frame versions of both. For TRN, we considered the multiscale version. For TSM, we considered the online version, which is intended for real-time processing. For both, we sampled 4 frames from the center 16 frames. We used λ = 200. Their vanilla versions served as our baselines. The performance on UCF101 is shown in Table 1b.
We also experimented with a larger, better base model, Resnet-50. In this experiment, we trained using all the frames, and not only the center frame; during testing, we averaged the results over 25 frames. We used λ = 350. Results on UCF101 are shown in Table 1c. Figure 2. Qualitative results. The hallucination task helps improve performance when the action sample is visually similar to other action classes, and a motion cue is needed to distinguish them. However, sometimes, HalluciNet makes incorrect predictions when the motion cue is similar to that of other actions, and dominates over the visual cue. Please zoom in for a better view.
2D-CNN HalluciNet
Finally, in Table 2, we compare our predicted spatiotemporal representation, HalluciNet direct , and the actual 3D-CNN. Hallucinet improved upon the vanilla 2D-CNN, though well below the actual 3D-CNN. However, the performance trade-off resulted in only 6% of the computational cost of the full 3D-CNN. We also observed a reduction in the data needed to be transmitted. Here, our 3D-CNN used 112 × 112 pixel frames as input (mean image size: 3.5 KB), while our 2D-CNN used 224 × 224 pixels (mean image size: 13.94 KB) as input. In this case, we observed a reduction of more than 4 times. In cases where 3D-CNNs use a larger resolution input, we can expect a much larger reduction. For example, if the 3D-CNN uses frames of resolution of 224 × 224 pixels, then the data transmission would be reduced by 16 times.
Actions with Long-Term Temporal Dynamics
Although we proposed hallucinating the short-term future (16 frames), frequently, actions with longer temporal dynamics must be considered. To evaluate the utility of short-term hallucination in actions with longer temporal dynamics, we considered the task of recognizing dives and assessing their quality. Short clips were aggregated over longer videos using an LSTM, as shown in Figure 3.
Dive Recognition
Task description: In Olympic diving, athletes attempt many different types of dives. In a general action recognition dataset, such as UCF101, all of these dives are grouped under a single action class: diving. However, these dives are different from each other in subtle ways. Each dive has the following five components: (a) position (legs straight or bent); (b) starting from arm stand or not; (c) rotation direction (backwards, forwards, etc.); (d) number of times the diver somersaulted in air; and (e) number of times the diver twisted in air. Different combinations of these components produce a unique type of dive (dive number). The dive recognition task comprises predicting all five components of a dive using very few frames.
Why is this task more challenging? Unlike general action recognition datasets, e.g., UCF-101 or kinetics [2], the cues needed to identify the specific dive are distributed across the entire action sequence. In order to correctly predict the dive, the whole action sequence needs to be seen. To make the dive classification task more suitable for our HalluciNet framework, we asked the network to classify a dive correctly using only a few regularly spaced frames. In particular, we truncated a diving video to 96 frames and showed the student network every 16th frame, for a total of 6 frames. Note that we are not asking our student network to hallucinate the entire dive sequence; rather, the student network is required to hallucinate the short-term future in order to "fill the holes" in the visual input datastream.
Dataset: The recently released diving dataset MTL-AQA [6], which has 1059 training and 353 test samples, was used for this task. The average sequence length is 3.84 s. Diving videos are real-world footage collected from various FINA events. The side view is used for all the videos. Backgrounds and the clothing of athletes vary.
Model: We pretrained both our 3D-CNN teacher and 2D-CNN student on UCF-101. Then, the student network was trained to classify dives. Since we would be gathering evidence over six frames, we made use of an LSTM [55] for aggregation. The LSTM was single-layered with a hidden state of 256D. The LSTM's hidden state at the last time step was passed through separate linear classification layers, one for each of the properties of a dive. The full model is illustrated in Figure 3. The student network was trained end-to-end for 20 epochs using an Adam solver with a constant learning rate of 0.0001. We also considered HalluciNet based on Resnet-18 (λ = 400). We did not consider R50 because it is much larger, compared to the dataset size.
The results are summarized in Table 3a, where we also compare them with other stateof-the-art 3D-CNN-based approaches [6,56]. Compared with the 2D baseline, Hallucinet performed better on 3 out of 5 tasks. The position task (legs straight or bent) could be equally identifiable from a single image or clip, but the number of twists, somersaults, or direction of rotation are more challenging without seeing motion. In contrast, HalluciNet could predict the motion. Our HalluciNet even outperformed 3D-CNN-based approaches that use more frames (MSCADC [6] and Nibali et al. [56]). C3D-AVG outperformed HalluciNet, but is computationally expensive and uses 16× more frames. Table 3. (a) Performance (Accuracy in % reported) comparison on dive recognition task. #Frames represents the number of frames that the corresponding method sees. P, AS, RT, SS, TW stand for position, arm stand, rotation type, number of somersaults, and number of twists. (b) Performance (Spearman's rank correlation in % reported) on AQA task. (a)
Dive Quality Assessment
Action quality assessment (AQA) is another task that can highlight the utility of hallucinating spatiotemporal representations from still images, using a 2D-CNN. In AQA, the task is to measure, or quantify, how well an action was performed. A good example of AQA is that of judging Olympic events, such as diving, gymnastics, figure skating, etc. Like the dive recognition task, in order to correctly assess the quality of a dive, the entire dive sequence needs to be seen/processed.
Dataset: MTL-AQA [6], the same as in Section 4.2.1. Metric: Consistent with the literature, we report Spearman's rank correlation (in %). We followed the same training procedure as in Section 4.2.1, except that for the AQA task, we used L2 loss to train, as it is a regression task. We trained for 20 epochs with Adam as a solver and annealed the learning rate by a factor of 10 every 5 epochs. We also considered HalluciNet based on R18 (λ = 250).
The AQA results are presented in Table 3b. Incorporating the hallucination task helped improve AQA performance. Our HalluciNet outperformed C3D-SVR and was quite close to C3D-LSTM and MSCADC, although it saw 90 and 10 fewer frames, respectively. Although it does not match C3D-AVG-STL, HalluciNet requires significantly less computation.
Dynamic Scene Recognition
Dataset: Feichtenhofer et al. introduced the YUP++ dataset [58] for the task of dynamic scene recognition. It has a total of 20 scene classes. Samples from this dataset encompass a wide range of conditions, including those arising from natural within scene category differences, seasonal and diurnal variations as well as viewing parameters. For each scene class in the dataset, there are 60 color videos, with no two samples for a given class taken from the same physical scene. Half of the videos within each class were acquired with a static camera and half were acquired with a moving camera, with camera motions encompassing pan, tilt, zoom and jitter.
The use of this dataset to evaluate the utility of inferred motion was suggested in [19]. In the work by Feichtenhofer, 10% of the samples were used for training, while the remaining 90% of the samples were used for testing purposes. Gao et al. [19] formed their own split, called static-YUP++.
Protocol: For training and testing purposes, we considered the central frame of each sample.
The first experiment considered standard dynamic scene recognition using splits from the literature and compared them with a spatiotemporal energy based approach (BoSE), slow feature analysis (SFA) approach, and temporal CNN (T-CNN). Additionally, we also considered versions based on Resnet50 and predictions averaged over 25 frames. As shown in Table 4a, HalluciNet showed minor improvement over the baseline 2D-CNN and outperformed studies in the literature. T-CNN might be the closest for comparison because it uses a stack of 10 optical flow frames; however, our HalluciNet outperformed it by a large margin. Note that we did not train our 3D-CNN on the scene recognition dataset/task, and used a 3D-CNN trained on the action recognition dataset, but we still observed improvements.
Using Multiple Frames to Hallucinate
As previously discussed, there are situations (e.g., door open/close) where a single image cannot be reliably used for hallucination. However, motions cues coming from multiple frames can be used to resolve ambiguities.
We modified the single frame HalluciNet architecture to accept multiple frames, as shown in Figure 4. We processed frame F j and frame F j+k (k > 0) with our student 2D-CNN. In order to tease out low-level motion cues, we did ordered concatenation of the intermediate representations, corresponding to frames F j and F j+k . The concatenated student representation in the 2-frame case is as follows: where φ l s is the student representation from frame F l as in Equation (2). This basic approach can be extended to more frames, as well as multi-scale cases. Hallucination loss remains as a single frame case (Equation (3). In order to see the effect of using multiple frames, we considered the following two cases:
Two-frame baseline (HalluciNet(2f)). We set k = 3, to give the student network f s access to pixel changes in order to tease out low-level motion cues.
We trained the networks for both the cases, using the exact same procedure and parameters as in the single frame case, and observed the hallucination loss, L hallu , on the test set. We experimented with both kinds of actions-with short-term and longterm dynamics.
Results for short-term actions are presented in Table 5a for UCF101. We saw a reduction in hallucination loss by a little more than 3%, which means that the hallucinated representations were closer to the true spatiotemporal representations. Similarly, there was a slight classification improvement, but with a 67% increase in computation time.
The long-term action results are presented in Table 5b for MTL-AQA. Like with shortterm actions, there was an improvement when using two frames. The percent of reduction in L hallu was better than the short-term case, and dive classification was improved across all components (except AS, which was saturated). Discussion: Despite the lower mean hallucination error in the short-term case, the reduction rate was larger for the long-term actions. We believe this is due to the inherent difficulty of the classification task. In UCF-101, action classes are more semantically distinguishable, which makes it easier (e.g., archery vs. applying_makeup) to hallucinate and reason about the immediate future from a single image, while in the MTL-AQA dive classification case, the action evolution can be confusing or tricky to predict from a single image. An example is trying to determine the direction of rotation-it is difficult to determine if it is forward or backward with a snapshot devoid of motion. Moreover, differences between dives are more subtle. The tasks of counting somersaults and twists need accuracy up to half a rotation. As a result, short-term hallucination is more difficult-it is difficult to determine if it is a full or half rotation. While the the two-frame HalluciNet can extract some low-level motion cues to resolve ambiguity, the impact is tempered in UCF-101, which has less motion dependence. Consequently, there is comparatively more improvement in MTL-AQA, where motion (e.g., speed of rotation to distinguish between full/half rotation) is more meaningful to the classification task.
Utility of Hallucination Task in Pretraining
To determine if the hallucination task positively affects pretraining, we conducted an experiment on the downstream task of dive classification on the MTL-AQA dataset. In Experiment Section 4.2.1 the backbone network was trained on the UCF-101 action classification dataset; however, the hallucination task was not utilized during that pretraining. Table 6 summarizes the results of pretraining with and without the hallucination for dive classification. The use of hallucination during pretraining provided better initialization for both the vanilla 2D-CNN and HalluciNet, which led to improvements in almost every category besides rotation (RT) for HalluciNet. Additionally, HalluciNet training had the best performance for each dive class, indicating its utility both in pretraining network initialization and task-specific training.
Conclusions
Although 3D-CNNs extract richer spatiotemporal features than the spatial features from 2D-CNNs, this comes at a considerably higher computational cost. We proposed a simple solution to approximate (hallucinate) spatiotemporal representations (computed by a 3D-CNN), using a computationally lightweight 2D-CNN with a single frame. Hallucinating spatiotemporal representations, instead of actually computing them, dramatically lowers the computational cost (only 6% of 3D-CNN time in our experiments), which makes deployment on edge devices feasible. In addition, by using only a single frame, rather than 16, the communication bandwidth requirements are lowered. Besides these practical benefits, we found that the hallucination task, when used in a multitask learning setting, provides a strong supervisory signal, which helps in (1) actions with short-and long-term dynamics, (2) dynamic scene recognition (non-action task), and (3) improving pretraining for downstream tasks. We showed the hallucination task across various base CNNs. Our hallucination task is a plug-and-play module, and we suggest future works to leverage the hallucination task for action as well as non-action tasks. | 6,675.2 | 2021-09-08T00:00:00.000 | [
"Computer Science"
] |
A Fading Tolerant Phase-Sensitive Optical Time Domain Reflectometry Based on Phasing-Locking Structure
: The demand for phase-sensitive optical time domain reflectometry ( ϕ -OTDR), which is capable of reconstructing external disturbance accurately, is increasing. However, ϕ -OTDR suffers from fading where Rayleigh backscattering traces (RBS) have low amplitude and may be lower than the noise floor. Therefore, signal-to-noise ratio (SNR) is reduced. In conventional coherent ϕ -OTDR, an acoustic optical modulator (AOM), which consists of an RF driving source and an acousto-optic crystal, is commonly used to generate optical pulses and frequency shifts. Since RF driving and external modulation signals come from an independent oscillation source, every intermediate frequency (IF) trace has a different phase bias. Therefore, it is difficult to average the IF signals directly for noise reduction. In this paper, a coherent ϕ -OTDR system based on phase-locking structure was proposed. This structure provided a clock homologous carrier signal, a modulation signal and a data acquisition (DAQ) trigger signal. Then, moving average methods were taken on IF signals before phase demodulating to reduce the overall noise floor of the system. This new ϕ -OTDR is more tolerant to fading, which can provide higher accuracy for vibration reconstruction. The frequency response range of vibration was as low as 1Hz, and a 25dB improvement of SNR was achieved. light through a 50:50 optical coupler (OC2) for beating. A balanced photoelectric detector (BPD) with a 350 MHz bandwidth converted the beat frequency optical signal into an electrical signal while eliminating DC and common components, which was recorded using a data acquisition card with a 1.25 GHz/s sampling rate. In our experimental system, the spatial resolution was less than 40 m, and the sensing distance was about 700 m. Since we were mainly demonstrating phase-locking structures, we did not pay attention to the spatial resolution and the sensing distance. In practical applications, these two parameters could be
Introduction
As one of the most important branches of distributed optical fiber sensing (DOFS), there has been a growing interest in phase-sensitive optical time domain reflectometry (ϕ-OTDR) in recent years. Due to its desirable characteristics such as high sensitivity, long-scale monitoring range [1] and immunity to electromagnetic interference, ϕ-OTDR has already been widely used in many applications, including but not limited to intrusion detection [2,3], pipeline surveillance [4], dancing of transmission cable and seismic wave detection [5][6][7], where vibrational disturbances are the primary cues for detection. In recent years, it has also been innovatively proposed for use in the field of optical biosensing [8,9]. ϕ-OTDR originates from OTDR but utilizes highly coherent lasers with narrow linewidths as light sources. Compared with traditional OTDRs, which focus on measuring the intensity change of Rayleigh backscatter trace (RBS) along the fiber, ϕ-OTDR detects the optical phase variation of RBS for its definite linear relationship with the external variation. By demodulating the phase variation of RBS, external disturbance could be quantitatively reconstructed using ϕ-OTDR [10].
However, ϕ-OTDR is susceptible to fading [11], which causes drastic amplitude fluctuation on RBS traces. Within fading areas, RBS intensity is close to or even lower than the noise floor. The signal-to-noise ratio (SNR) in these areas is not good enough to correctly demodulate the external variation, leading to a high rate of distortion. For fading suppression, several methods have been investigated. According to the interferential characteristic, a differential phase shift pulsing technology was used to suppress fading by applying two optical pulses [12,13]. A 0-π binary phase shift process was applied on the second half part of the optical pulses. Based on the above operation, a phase stitching method was introduced in phase demodulation to get a higher amplitude, and the probability of fading was reduced. Then, time-gated digital optical frequency domain reflectometry (TGD-OFDR) based on optical intensity modulators (IM) was proposed, which was targeted for frequency domain measurement but could be exploited as a ϕ-OTDR [14]. The IM had a large modulator bandwidth and created both positive and negative frequency components that could be fully used to suppress fading while the spatial resolution remained unchanged. Moreover, because the distributions of fading areas were different for independent optical frequencies, a frequency division multiplexing (FDM) method was studied. In 2013, Pan et al. proposed a ϕ-OTDR with a three optical frequency source [15]. A ϕ-OTDR using optimum tracking over multiple probe frequencies was then proposed [16]. Nevertheless, these structures were relatively complex, came at a high cost and required additional components.
Since the RBS intensity conforms to the property of the Rayleigh distribution [17,18], it can be considered to avoid fading areas before phase demodulation. Therefore, the phase demodulation reference positions can be optimized using a short measurement time. For example, the RBS signal is divided into several intervals, so the reference intervals with the maximum amplitude intensity can be optimized as much as possible to avoid fading. However, this method is usually only suitable for the reconstruction of high-frequency vibration signals. Up to now, the detection frequency of ϕ-OTDR has mostly been in the tens of Hz to MHz level. However, the detection for some applications such as transmission cable dancing (generally less than 3 Hz) or material craking below 10 Hz is difficult [19,20]. In recent years, researchers have only begun to study how to improve the low-frequency response capability of ϕ-OTDR. It is necessary to continuously measure low-frequency signals for a long time to correctly reconstruct the signals. Moreover, due to the drift of the phase bias of the intermediate frequency (IF) curves, the extraction of low-frequency signals is more difficult, leading to poor low-frequency sensing performance by ϕ-OTDR [21].
The fading can usually be relieved by averaging various RBS traces over time directly for noise reduction [22,23]. As the noise floor decreases, SNR increases so that the ϕ-OTDR system can extract the vibration signal with more fidelity, even in fading areas. However, in conventional coherent ϕ-OTDR, the carrier signal inside the acoustic optical modulator (AOM), external modulation signal and data acquisition (DAQ) card trigger signals come from an independent oscillation source. Thus, every RBS trace has a different phase bias. In 2016, He et al. used a self-mixing signal demodulation scheme to eliminate the influence of phase bias drift but lacked the extraction of phases to restore changes in external disturbances [24]. In 2017, an embedded pattern recognition method for ϕ-OTDR with analog down conversion and digital I/Q demodulation was proposed to classify and identify external disturbance events. Similarly, it could not avoid the phase bias drift and affected the accuracy of demodulation and pattern recognition [25]. Then, a clock homologous I/Q demodulation based on phase-locking structure was proposed. However, the system only provided the clock homologous modulation signal of AOM and the chopper signal, which could not completely solve the problem of phase bias drift [26]. In addition, the aim of this phase-locking structure was to eliminate the influence of residual frequency during phase demodulation rather than the negative influence of fading. Due to the drift of phase bias, the correlation between the RBS traces gradually decreased. Averaging the original heterodyne intermediate frequency (IF) curves did not provide much improvement on the SNR. Hence, it was difficult to directly average several IF signals for noise reduction to relieve fading.
In this paper, we present a novel, coherent ϕ-OTDR system based on a phase-locking structure. This structure provides a clock homologous carrier signal, a modulation signal and a DAQ trigger signal. The moving average method is taken on IF signals before phase demodulating to reduce the overall noise floor of the system. SNR in fading areas is improved for enhancement of phase demodulation fidelity. The proposed system is more tolerant to fading, which can provide higher accuracy for vibration reconstruction.
Principles of the Phase-Locking ϕ-OTDR
In a heterodyne detection coherent ϕ-OTDR system, the photocurrent, i het , of a photo detector after band-pass filtering can be expressed as where E R and E LO are the electric field of RBS light and the optical local oscillator (OLO), respectively. ∆ω is the frequency shift provided by a modulator, and ϕ(t) is the phase of the RBS. We can simplify it to where A IF is the magnitude of the IF electric field. The changes in the backscattered phase signal are extremely nonlinear [27]. Hence, it is common to use the phase difference between two specific points for vibration sensing. The phase of RBS light is specific to the location along the fiber. We can extract phase signals in any zone along the fiber by selecting reference points before and after that area and analyze the phase changes between these two points. Figure 1 shows how an external vibration induces extra stress on the fiber and results in a change in the optical path length (OPL). Two segments of fiber, R 1 and R 2 , with an interval length of L, are selected as the reference points. The length change ∆L is directly related to the change in relative phase between these two segments. Any external vibration within segments R 1 and R 2 changes the phase of the RBS light. Therefore, any disturbance event that occurs with the two segments can be reconstructed by demodulating the phase difference ∆ϕ.
λ is the wavelength of optical pulse, and n is the refractive index of the optical fiber.
Electronics 2021, 10, 535 3 of 13 SNR. Hence, it was difficult to directly average several IF signals for noise reduction to relieve fading.
In this paper, we present a novel, coherent φ-OTDR system based on a phase-locking structure. This structure provides a clock homologous carrier signal, a modulation signal and a DAQ trigger signal. The moving average method is taken on IF signals before phase demodulating to reduce the overall noise floor of the system. SNR in fading areas is improved for enhancement of phase demodulation fidelity. The proposed system is more tolerant to fading, which can provide higher accuracy for vibration reconstruction.
Principles of the phase-locking φ-OTDR
In a heterodyne detection coherent φ-OTDR system, the photocurrent, ℎ , of a photo detector after band-pass filtering can be expressed as where and are the electric field of RBS light and the optical local oscillator (OLO), respectively. ∆ is the frequency shift provided by a modulator, and ( ) is the phase of the RBS. We can simplify it to where is the magnitude of the IF electric field. The changes in the backscattered phase signal are extremely nonlinear [27]. Hence, it is common to use the phase difference between two specific points for vibration sensing. The phase of RBS light is specific to the location along the fiber. We can extract phase signals in any zone along the fiber by selecting reference points before and after that area and analyze the phase changes between these two points. Figure 1 shows how an external vibration induces extra stress on the fiber and results in a change in the optical path length (OPL). Two segments of fiber, 1 and 2 , with an interval length of L, are selected as the reference points. The length change ∆ is directly related to the change in relative phase between these two segments. Any external vibration within segments 1 and 2 changes the phase of the RBS light. Therefore, any disturbance event that occurs with the two segments can be reconstructed by demodulating the phase difference ∆ . Here, every IF signal trace can be directly sampled and quantized. Averaging is commonly used for reducing system noise to relieve fading. If a certain number of adjacent periods of IF signal traces are averaged, the overall noise power is decreased and SNR is greatly improved, according to the characteristics of random noise. Next, we introduce a moving average method to decrease the noise floor. Supposing that there are IF traces set of M samples: where r i represents the ith IF trace. If the window width of the moving average is N, the averaged traces set is where The current averaged result r i is the average value of ith and the previous N-1 raw IF traces.
In a ϕ-OTDR system, AOM is a common modulator that shapes continuous-wave (CW) light into optical pulses, which consists of driving sources and acousto-optic crystals. The sinusoidal signal launched by the radio frequency (RF) source inside the driver is multiplied with the external modulation signal through the RF mixer to output the electrical acousto-optic modulated driving signal, which is the chopper for the sinusoidal signal. By applying a periodical on/off external modulation signal to the AOM diving source, an amplitude-modulated (AM) RF signal with a pulse envelope can be generated and then gained by an amplifier to increase its power level for driving the acousto-optic crystal. The AM RF signal is then converted into ultrasonic field changing within the acousto-optic crystal due to the acousto-optic effect [28], as shown in Figure 2. The refractive index of the acousto-optic medium changes as the sound field changes, and the power of the output optical signal also changes accordingly.
is the wavelength of optical pulse, and n is the refractive index of the optical fiber.
Here, every IF signal trace can be directly sampled and quantized. Averaging is commonly used for reducing system noise to relieve fading. If a certain number of adjacent periods of IF signal traces are averaged, the overall noise power is decreased and SNR is greatly improved, according to the characteristics of random noise. Next, we introduce a moving average method to decrease the noise floor. Supposing that there are IF traces set of M samples: where i represents the th IF trace. If the window width of the moving average is N, the averaged traces set is The current averaged result ′ is the average value of th and the previous N-1 raw IF traces.
In a φ-OTDR system, AOM is a common modulator that shapes continuous-wave (CW) light into optical pulses, which consists of driving sources and acousto-optic crystals. The sinusoidal signal launched by the radio frequency (RF) source inside the driver is multiplied with the external modulation signal through the RF mixer to output the electrical acousto-optic modulated driving signal, which is the chopper for the sinusoidal signal. By applying a periodical on/off external modulation signal to the AOM diving source, an amplitude-modulated (AM) RF signal with a pulse envelope can be generated and then gained by an amplifier to increase its power level for driving the acousto-optic crystal. The AM RF signal is then converted into ultrasonic field changing within the acoustooptic crystal due to the acousto-optic effect [28], as shown in Figure 2. The refractive index of the acousto-optic medium changes as the sound field changes, and the power of the output optical signal also changes accordingly. However, since the carrier signal, external modulation signal and DAQ trigger signal come from an independent oscillation source, each optical pulse has a random initial phase bias, so that every IF signal trace also has a different initial phase bias that varies over time. The initial phase bias changes continuously so that the correlation between several consecutive IF traces decreases with time. The weak correlation was verified by simulation. Two groups of sinusoidal signals with random noise were constructed to simulate the reference signals, and the SNR of that was −5 dB. One group represented the signals with the same initial phase bias, and the other group contained a continuously changing phase bias between adjacent periods. Simulation results for these two groups (each group included 50 curves) of sinusoidal signals with averaging are shown in Figure 3. Every curve was generated for 400 ms at a 250 M/s sampling rate. Figure 3a,b represent one trace of a simulated sinusoidal signal with random noise. Figure 3c shows that with the same initial phase bias, the correlation between several consecutive traces is strong. Although the SNR is not good, we can see that all traces show similar shapes. Conversely, the correlation with the continuously changing phase bias is weak so that the supposition of nearby traces is totally mussed, as shown in Figure 3d. After averaging, the result of the first group still has a sinusoidal shape, as shown in Figure 3e, while the amplitude of the averaging result for the second group is degraded. From Figure 3, it can be concluded that if the curves with weak correlation are averaged, the intensity will be too low to acquire a good SNR.
In order to express every IF signal trace with the same phase bias and considering the influence of noise, Equation (2) could be rewritten as where and I(i) is the ith IF signal trace, A IF is amplitude of the effective signal and n i is overall noise and fluctuates randomly. Therefore, SNR before averaging is where σ 2 is the variance of noise. If the averaging number is N, the averaged IF signal is Then, the mean variance of noise after averaging is Therefore, SNR after averaging is In ideal conditions, SNR can be increased by N.
the correlation with the continuously changing phase bias is weak so that the supposition of nearby traces is totally mussed, as shown in Figure 3d. After averaging, the result of the first group still has a sinusoidal shape, as shown in Figure 3e, while the amplitude of the averaging result for the second group is degraded. From Figure 3, it can be concluded that if the curves with weak correlation are averaged, the intensity will be too low to acquire a good SNR. In order to express every IF signal trace with the same phase bias and considering the influence of noise, Equation (2) could be rewritten as where ( )= cos( ( )) and ( ) ′ is the ith IF signal trace, is amplitude of the effective signal and is overall noise and fluctuates randomly. Therefore, SNR before averaging is To make it possible to directly move the average on the nearby IF signals, the initial phase bias must be kept the same. Therefore, a phase-locking structure is proposed, as shown in Figure 4. The RF source generates a RF signal and launches it into the mixer and the synchronous pulse signal generator, respectively. The synchronous pulse signal generator takes it as a clock trigger signal so as to generate an external modulation signal Electronics 2021, 10, 535 7 of 13 and a DAQ trigger signal. The RF signal and the external modulation signal are mixed by the mixer and passed through to form the AM RF signal. This structure makes it possible for the carrier signal, modulation signal and DAQ trigger signal to be synchronized so that the initial phase bias of every IF signal is the same. Therefore, this phase-locking structure provides a clock homologous carrier signal, a modulation signal and a DAQ trigger signal.
Electronics 2021, 10, 535 7 of 13 the initial phase bias of every IF signal is the same. Therefore, this phase-locking structure provides a clock homologous carrier signal, a modulation signal and a DAQ trigger signal.
Experimental Setup
The proposed phase-locking coherent φ-OTDR is shown in Figure 5. A narrow, linewidth CW laser (HAN's Laser Module) was used as the light source, whose center wavelength and linewidth were 1550 nm and 15 kHz, respectively, and its output power was 15 dBm. The CW light was split into two paths, a sensing path and an OLO path, through a 90:10 optical coupler (OC1). AOM shaped the sensing light into a narrow optical pulse with a frequency shift ∆ of 200 MHz. The optical pulse had a repetition rate of 1 kHz, and the pulse width was 100ns. An erbium-doped fiber amplifier (EDFA) amplified the optical pulse to enhance intensity and launched it into the sensing fiber via a circulator. A cylindrical PZT (piezoelectric transducer) actuator, which was stimulated with a low-frequency sinusoidal wave, was implanted at the end of a 220 m sensing fiber. The length of fiber wrapped on the PZT was about 40 m. The driving voltage range in our experiment was 2 V, so that the PZT was stretched by 11.2 nm in the axial direction and by 7.22 nm in the radial direction. Then, the other 370 m fiber was located on the far end of the sensing fiber. The RBS returning from the sensing fiber was combined with OLO light through a 50:50 optical coupler (OC2) for beating. A balanced photoelectric detector (BPD) with a 350 MHz bandwidth converted the beat frequency optical signal into an electrical signal while eliminating DC and common components, which was recorded using a data acquisition card with a 1.25 GHz/s sampling rate. In our experimental system, the spatial resolution was less than 40 m, and the sensing distance was about 700 m. Since we were mainly demonstrating phase-locking structures, we did not pay attention to the spatial resolution and the sensing distance. In practical applications, these two parameters could be better.
Experimental Setup
The proposed phase-locking coherent ϕ-OTDR is shown in Figure 5. A narrow, line-width CW laser (HAN's Laser Module) was used as the light source, whose center wavelength and linewidth were 1550 nm and 15 kHz, respectively, and its output power was 15 dBm. The CW light was split into two paths, a sensing path and an OLO path, through a 90:10 optical coupler (OC1). AOM shaped the sensing light into a narrow optical pulse with a frequency shift ∆ω of 200 MHz. The optical pulse had a repetition rate of 1 kHz, and the pulse width was 100ns. An erbium-doped fiber amplifier (EDFA) amplified the optical pulse to enhance intensity and launched it into the sensing fiber via a circulator. A cylindrical PZT (piezoelectric transducer) actuator, which was stimulated with a low-frequency sinusoidal wave, was implanted at the end of a 220 m sensing fiber. The length of fiber wrapped on the PZT was about 40 m. The driving voltage range in our experiment was 2 V, so that the PZT was stretched by 11.2 nm in the axial direction and by 7.22 nm in the radial direction. Then, the other 370 m fiber was located on the far end of the sensing fiber. The RBS returning from the sensing fiber was combined with OLO light through a 50:50 optical coupler (OC2) for beating. A balanced photoelectric detector (BPD) with a 350 MHz bandwidth converted the beat frequency optical signal into an electrical signal while eliminating DC and common components, which was recorded using a data acquisition card with a 1.25 GHz/s sampling rate. In our experimental system, the spatial resolution was less than 40 m, and the sensing distance was about 700 m. Since we were mainly demonstrating phase-locking structures, we did not pay attention to the spatial resolution and the sensing distance. In practical applications, these two parameters could be better. According to the phase-locking module structure described in Figure 4, we customized a phase-locking module to provide a driving signal for the AOM crystal and the trigger signal for the DAQ card respectively. In order to experimentally verify the difference of phase extraction between the phase-locking φ-OTDR and conventional φ-OTDR, we also used the traditional AOM driver and temporarily disconnected the transmission of the synchronous signal source to the DAQ card. In this situation, the DAQ card and the driver source worked separately with their own reference source. With the experimental setup shown in Figure 5, we recorded the original IF signal traces from the conventional and the phase-locking φ-OTDR, respectively. Data from both setups were collected continuously for 200 periods, which corresponded to 0.2 s, as shown in Figure 6. The curves were disorderly, and their initial phase bias was changing randomly as shown in Figure 6a. On the other hand, every IF trace from the phase-locking structure also had the same initial phase bias as shown in Figure 6b. According to the phase-locking module structure described in Figure 4, we customized a phase-locking module to provide a driving signal for the AOM crystal and the trigger signal for the DAQ card respectively. In order to experimentally verify the difference of phase extraction between the phase-locking ϕ-OTDR and conventional ϕ-OTDR, we also used the traditional AOM driver and temporarily disconnected the transmission of the synchronous signal source to the DAQ card. In this situation, the DAQ card and the driver source worked separately with their own reference source. With the experimental setup shown in Figure 5, we recorded the original IF signal traces from the conventional and the phase-locking ϕ-OTDR, respectively. Data from both setups were collected continuously for 200 periods, which corresponded to 0.2 s, as shown in Figure 6. The curves were disorderly, and their initial phase bias was changing randomly as shown in Figure 6a. On the other hand, every IF trace from the phase-locking structure also had the same initial phase bias as shown in Figure 6b. According to the phase-locking module structure described in Figure 4, we customized a phase-locking module to provide a driving signal for the AOM crystal and the trigger signal for the DAQ card respectively. In order to experimentally verify the difference of phase extraction between the phase-locking φ-OTDR and conventional φ-OTDR, we also used the traditional AOM driver and temporarily disconnected the transmission of the synchronous signal source to the DAQ card. In this situation, the DAQ card and the driver source worked separately with their own reference source. With the experimental setup shown in Figure 5, we recorded the original IF signal traces from the conventional and the phase-locking φ-OTDR, respectively. Data from both setups were collected continuously for 200 periods, which corresponded to 0.2 s, as shown in Figure 6. The curves were disorderly, and their initial phase bias was changing randomly as shown in Figure 6a. On the other hand, every IF trace from the phase-locking structure also had the same initial phase bias as shown in Figure 6b. To test and evaluate the coherent ϕ-OTDR system based on a phase-locking structure with a direct averaging method, we applied a 1 Hz sinusoidal driving signal on the PZT to create low-frequency vibration on the sensing fiber. We demodulated the amplitude and phase of the IF recorded signals and drew the waterfall diagram of the amplitude of the signals for 25 s in a specific part of the testing fiber, as shown in Figure 7. Warm color stands for high amplitude, while cool color areas present low amplitude, which forms the fading areas. The numerical range on the color bar indicates the amplitude voltage range is 0-2 V. The amplitude of RBS dropped into the fading areas is usually low; therefore, a reference point with a weak amplitude has a great probability of fading, so we picked the cool color positions A and B, respectively before and after the PZT, to retrieve the induced sinusoidal vibration signal form phase signal. Then, a comparison on the fading tolerance between the conventional and phase-locking ϕ-OTDR schemes could be carried out. To test and evaluate the coherent φ-OTDR system based on a phase-locking structure with a direct averaging method, we applied a 1 Hz sinusoidal driving signal on the PZT to create low-frequency vibration on the sensing fiber. We demodulated the amplitude and phase of the IF recorded signals and drew the waterfall diagram of the amplitude of the signals for 25 s in a specific part of the testing fiber, as shown in Figure 7. Warm color stands for high amplitude, while cool color areas present low amplitude, which forms the fading areas. The numerical range on the color bar indicates the amplitude voltage range is 0-2 V. The amplitude of RBS dropped into the fading areas is usually low; therefore, a reference point with a weak amplitude has a great probability of fading, so we picked the cool color positions A and B, respectively before and after the PZT, to retrieve the induced sinusoidal vibration signal form phase signal. Then, a comparison on the fading tolerance between the conventional and phase-locking φ-OTDR schemes could be carried out. The moving average method can be regarded as a low-pass filter. Then, the relationship between the number of moving average filtering points and the cutoff frequency can be written as where s is the repetition rate of the optical pulse, is the cutoff frequency and N is the number of moving average filtering points. Since the fundamental frequency of the vibration signal is 1 Hz, we keep the larger or at least equal to 4.4 Hz for performance evaluation [29] by tuning the N from 0 to 100 under 1 kHz of s . Figure 8 illustrates the waveforms of the reconstructed vibration signals of approximately 24 s from the conventional and phase-locking φ-OTDR with different N, respectively. Because of fading, there were significant errors in the vibration signal reconstruction. In phase-locking coherent φ-OTDR, as N changed from 0 to 100, according to the experiment, the best reconstructed signal was obtained when N = 50. Significantly, phase demodulation errors caused by fading were corrected. The vibration signal was well reconstructed. Moreover, it is common to smooth the final phase signal using the moving average The moving average method can be regarded as a low-pass filter. Then, the relationship between the number of moving average filtering points and the cutoff frequency can be written as
Results and Discussion
where f s is the repetition rate of the optical pulse, f CO is the cutoff frequency and N is the number of moving average filtering points. Since the fundamental frequency of the vibration signal is 1 Hz, we keep the f CO larger or at least equal to 4.4 Hz for performance evaluation [29] by tuning the N from 0 to 100 under 1 kHz of f s . Figure 8 illustrates the waveforms of the reconstructed vibration signals of approximately 24 s from the conventional and phase-locking ϕ-OTDR with different N, respectively. Because of fading, there were significant errors in the vibration signal reconstruction. In phase-locking coherent ϕ-OTDR, as N changed from 0 to 100, according to the experiment, the best reconstructed signal was obtained when N = 50. Significantly, phase demodulation errors caused by fading were corrected. The vibration signal was well reconstructed. Moreover, it is common to smooth the final phase signal using the moving average method after phase demodulating in conventional coherent ϕ-OTDR. However, it would be difficult to overcome errors caused by fading even with 50 times averaging. The last two waveforms respectively describe the phase results before and after averaging with the same window width in a conventional coherent ϕ-OTDR system. Obviously, errors in phase demodulation cannot be removed completely.
Results and Discussion
Electronics 2021, 10, 535 10 of 13 method after phase demodulating in conventional coherent φ-OTDR. However, it would be difficult to overcome errors caused by fading even with 50 times averaging. The last two waveforms respectively describe the phase results before and after averaging with the same window width in a conventional coherent φ-OTDR system. Obviously, errors in phase demodulation cannot be removed completely. To evaluate the performance the phase demodulation results under different schemes and averaging numbers, we obtained the power spectrums through Fourier transformation of the time domain demodulation result, as shown in Figure 9. The level of the noise floor could be reduced by nearly 25 dB, and the vibration at 1 Hz could be significantly distinguished. For the repetition frequency of 1 kHz, the moving averaging method with window width N = 50 applied to the IF traces of adjacent periods improved SNR to a great extent and eliminated the negative effects of phase demodulation errors introduced by fading, especially for measurements on low-frequency bands. To evaluate the performance the phase demodulation results under different schemes and averaging numbers, we obtained the power spectrums through Fourier transformation of the time domain demodulation result, as shown in Figure 9. The level of the noise floor could be reduced by nearly 25 dB, and the vibration at 1 Hz could be significantly distinguished. For the repetition frequency of 1 kHz, the moving averaging method with window width N = 50 applied to the IF traces of adjacent periods improved SNR to a great extent and eliminated the negative effects of phase demodulation errors introduced by fading, especially for measurements on low-frequency bands.
The best choice for the window width N still needs to be further studied. We used the mean square error (MSE) to evaluate the degree of fitting under different N values [30]. The smaller the MSE, the more perfect the vibration reconstructed. As Figure 10 shows, MSE first decreased by increasing N. The best result was obtained from N = 50, which was 0.0318. After N = 50, MSE increased rapidly. The best choice for the window width N still needs to be further studied. We used the mean square error (MSE) to evaluate the degree of fitting under different N values [30]. The smaller the MSE, the more perfect the vibration reconstructed. As Figure 10 shows, MSE first decreased by increasing N. The best result was obtained from N = 50, which was 0.0318. After N = 50, MSE increased rapidly. We can assume there is an optimal N which obtains the best vibration reconstruction effect for every repetition rate of the optical pulse. Obviously, there is a need for optimization in the selection of N. It may be simultaneously related to the SNR of the original IF signals, the frequency of the vibration signal and the repetition rate of the optical pulse light, which needs to be considered in the application. The best choice for the window width N still needs to be further studied. We used the mean square error (MSE) to evaluate the degree of fitting under different N values [30]. The smaller the MSE, the more perfect the vibration reconstructed. As Figure 10 shows, MSE first decreased by increasing N. The best result was obtained from N = 50, which was 0.0318. After N = 50, MSE increased rapidly. We can assume there is an optimal N which obtains the best vibration reconstruction effect for every repetition rate of the optical pulse. Obviously, there is a need for optimization in the selection of N. It may be simultaneously related to the SNR of the original IF signals, the frequency of the vibration signal and the repetition rate of the optical pulse light, which needs to be considered in the application. We can assume there is an optimal N which obtains the best vibration reconstruction effect for every repetition rate of the optical pulse. Obviously, there is a need for optimization in the selection of N. It may be simultaneously related to the SNR of the original IF signals, the frequency of the vibration signal and the repetition rate of the optical pulse light, which needs to be considered in the application.
Conclusions
In this paper, we have proposed a coherent phase-locking ϕ-OTDR system using a moving averaging method. Compared with conventional clock homologous ϕ-OTDR, the proposed system provides a clock homologous carrier signal, a modulation signal and a data acquisition trigger signal. This system makes it possible for moving averaging methods to be taken on IF signals before phase demodulating to reduce the overall noise floor of the system. Furthermore, it offers practical technology to improve the SNR of the ϕ-OTDR system, especially for measurements on low-frequency bands. | 8,241.8 | 2021-02-25T00:00:00.000 | [
"Physics"
] |
Numerical Model Related to Impact Fluid / Solid Under the action of an Electric Field
The electrowetting is an area of significant interest. Many experimental studies have been conducted to find the relationship that binds the physical parameters of said phenomenon: the percentage of the white area inside the pixels of different sizes depending on the applied voltage etc. Our study is to develop a CFD model to validate the experimental results for the behavior of fluids (oil colored) within the reflector screens under the action of electric field.
Introduction
Electrowetting has become one of the most widely used for manipulating small amounts of liquids on surfaces under the influence of an electric field. This phenomenon is used in many industrial processes such as microfluidic "lab-on-a-chip", the devices (plans) of adjustable lenses to modify the convergence so in the industry of display screens. Our study is to develop a CFD (Computational Fluid Dynamics) method based on the VOF (Volume of Fluid) to follow the behavior of the liquid (in our case we choose the oil) inside a pixel under action of an electric field.
Mathematical Model
When a drop is placed on top of an electrode and the latter is then charged, the contact angle between the liquid surface (the drop) and the solid surface (electrode) is reduced. This is called electrowetting. Lippmann equation [1] reflects the change in the interfacial tension between the solid SL and the liquid as a function of the voltage V of the drop by the equation (1): Where, C is the specific capacity of the dielectric layer. From Young's equation (2) which describes the relationship between the contact angle and the interfacial tension SL between the interface solid-gas, solid-liquid and liquid-gas in the line of contact at the triple point (see Figure 1) This is an Open Access article distributed under the terms of the Creative Commons Attribution License 2.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
:
The dielectric constant of the dielectric layer.
t: The thickness of the dielectric layer.
Numerical Model
A fundamental method to solve two-phase flow is the volume of fluid method (VOF), which was developed by Hirt and Nichols (1981). VOF method is a fixed mesh with the interface between immiscible fluids is modeled by a characteristic function (called volume fraction) [2].
The model equations
The Equations of mass and momentum conservation (4) and (5) for each phase are given by [3]: Where V is the velocity vector, P is the pressure, and the FSF is the surface force vector, μ is the viscosity and density . The mixture density is calculated as follows: Where αk is the volume fraction of the liquid. Any other property of the mixture is calculated as follows: Where: α k =0 : The cell is empty. The interface between the two phases was followed by solving the continuity equation for the function of the volume fraction: The surface tension has been modeled as a regular variation of the capillary pressures through the interface.
Where n is the surface normal, n is the unit normal of the curvature. The surface normal n was evaluated in cells containing the interface and requires knowledge of the amount of volume of fluid present in the cell.
To calculate the electric potential in every area, the second order discretized form of the Laplace equation is solved at the beginning of each time step iteration [4].
Geometry and boundary conditions
We chose to study the problem in two dimensions. For this, the geometry is a square of dimension 1x1m 2 . We used a structured grid square of size 5 x 10 -4 using the preprocessor Gambit 2.2.30.For the boundary conditions we used two conditions: the input condition is defined as a pressure inlet and the side walls are declared type Wall. Where the electrode is fixed between the insulator (Teflon) and the substrate is a white rectangle 15nm thick. The liquids used in this study are dodecane and water. A drop of dodecane whose radius is 10 m is defined in the solver Ansys
Results and Discussion
VOF method available in the code Ansys Fluent calculation allowed us a better observation of the drop profile and from the equation of fluid dynamics. The movement of the drop is observed when a voltage of 15V is applied to the electrode of time ranging from t = 0 to t = 2.16 e -2 s (FIG. 4).
It is clear that, at different t, the drop of oil contracts until the contact angle at the triple point reaches its maximum value. | 1,089.2 | 2013-03-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Successive Approximation of Nonlinear Confidence Regions (SANCR)
. In parameter estimation problems an important issue is the approximation of the confidence region of the estimated parameters. Especially for models based on differential equations, the needed computational costs require particular attention. For this reason, in many cases only linearized confidence regions are used. However, despite the low computational cost of the linearized confidence regions, their accuracy is often limited. To combine high accuracy and low computational costs, we have developed a method that uses only successive linearizations in the vicinity of an estimator. To accelerate the process, a principal axis decomposition of the covariance matrix of the parameters is employed. A numerical example illustrates the method.
Introduction
To simplify the notation, we consider a nonlinear model f (t, θ), with θ ∈ R n and t ∈ R, which does not depend on an additional (dynamical) system.We assume that f is differentiable with respect to θ and continuous with respect to t.
We consider the approximation of a confidence region about parameter values estimated by nonlinear least squares.The parameters are estimated by using experimental data y i in some given points t i with i = 1, . . ., m.The observed values contain unknown errors e i that we assume additive, so the response variable can be modeled by where θ true is the unknown true value of the parameters.Therefore, the least squares estimator θ is the value that solves the following problem<EMAIL_ADDRESS>where S(θ) is the residual sum of squares We assume that the model is correct and that the errors are normal, independent and identically distributed (iid) random variables with zero mean and variance σ 2 , i.e. e i ∼ N (0, σ 2 ).The confidence regions are here interpreted (from the frequentistic perspective [15]) as the regions in the parameter space covering the true value of the parameters θ true , in large samples, with probability approximately 1 − α.
The use of linearized confidence regions with nonlinear algebraic models has been extensively treated in literature, see for example [1,2,7,12,9,17].In particular, it has been shown that confidence regions derived for the linear case can be used in linearized form also for nonlinear models, but in many cases with limited accuracy [19].Furthermore, there are approximation techniques for nonlinear models that are not based on linearizations [11,3,20,18].
To simplify the exposition, in this work we consider an algebraic model, but the method can be used for more complex models.In fact, the problem to approximate nonlinear confidence regions for implicit models, i.e. models based on a system of (differential) equations has been considered from different points of view and for different kind of applications by several authors.To cite only few of them, see the work [19] and the references therein for the design under uncertainty, [21] for an application to ground water flow, [14] for ecological systems, and [16] for additional examples.Newly, it has been presented a method based on second-order sensitivity for the approximation of nonlinear confidence regions applied to ODE based models [13].It has been shown that higher order sensitivities give a higher accurate approximation of the confidence regions than methods using only the first order sensitivities.
With this work we show that the approximation using only linearized confidence regions can be substantially improved by a systematic successive application of linearizations, in the following called Successive Approximation of Nonlinear Confidence Regions (SANCR) method.We show results for the case with only two model parameters.An extension to more than two parameters is technically straightforward and could be partially parallelized, but the effect of successive linearizations in more than two (parameter space) dimensions has yet to be studied in this framework.
This paper is organized as follows i) In Section 2 we report the two methods on which our approach is based; ii) In Section 3 we describe the new method; iii) In Section 4 we show a numerical realization of the SANCR method.
Linearized confidence region and likelihood ratio test
As explained above, there are several methods to approximate (nonlinear) confidence regions.Our method is based on the following two approaches [20].
For a given estimator θ of the parameter θ, we consider: (i) The method derived from the likelihood ratio test (LR) from which it follows where L is the likelihood function and γ 2 is the confidence level.(ii) The method based on the Wald test that leads to the linearized confidence regions (CL): where Cov is the estimated covariance matrix of the parameters.There are several approximations of Cov [19], we use the one based on the Jacobian J of f : where The level γ 2 = χ 2 1−α,n is given by the 1 − α percentile of the chi-square distribution with n degrees of freedom in case σ 2 is known, and it is It has been proved [8] that these two confidence regions are asymptotically equivalent, but far from the asymptotic behavior, i.e. in case of a small number of data, they perform differently as presented in [19].Additionally, our method show the limitation of linearized confidence regions based only on (6).
One of the major goals in defining the confidence regions is the reduction of the costs associated to their computation.From the perspective of the computational costs, the method CL is cheap since it needs only one evaluation of the covariance matrix at the parameter value θ, while the method LR is much more expensive because it is based on the evaluation of the functional S in an adequately high number of points θ in the vicinity of θ to produce a contour.In addition, the extension of the confidence region is not known a priori.In practice, the number of function evaluations needed for the method LR is in the order of several thousands, for example in our case with two parameters we use a grid of 10 4 points for the method LR.
On the contrary, as indicated in the expression (7), the covariance matrix can be evaluated at the cost of building the Jacobian J. Therefore, the major computational costs for the method CL are given by the computation of the derivatives of the model f with respect to the parameters.Thus, we have few computations of a linearized model for the method CL while many thousand computations of a nonlinear model are needed for the method LR.Unfortunately, the accuracy of these two methods is inversely related to their computational costs, with the CL method being much more inaccurate if the model is highly nonlinear.We remind that both methods are only asymptotically exact for linear models and their quality decreases far from the asymptotic behavior.
Therefore, a compromise between computational costs and precision is highly required for many practical applications especially in case the model is based on differential equations.To this aim we established a new method combining low computational costs and high accuracy.
Successive linearizations of nonlinear confidence regions
The SANCR method is based on the use of successive linearizations of the confidence region, starting from the estimated parameter value θ (see expression ( 2)) combined with the likelihood ratio test (5) as explained below examplarily for a model with two parameters.
The likelihood ratio test is used to check whether a point belongs or not to the approximate nonlinear confidence region.Instead of testing all points in the vicinity of θ we use an educated guess, i.e. the likelihood ratio test is performed only on few points lying on the contour of the linearized confidence regions.In fact, linearized confidence regions are ellipsoids in the parameter space and the directions of the semi-axis are defined by the eigenvectors of the covariance matrix as can be deduced by the quadratic form (6). Note that the covariance matrix has dimension n × n, where n is the number of parameters to estimate.Therefore, starting from θ we determine the directions of the principal axes and their length which is given by i = γ λ i , where λ i is the eigenvalue corresponding to the i th eigenvector.We perform the likelihood ratio test for the extreme points of the semi-axes, see points θ A , θ B , θ C , θ D in Figure 1.
Let be θ A the first point to be processed.If this point passes the test, i.e. if the following condition is fulfilled it is considered for the construction of the confidence region and the procedure continues along the second axis.On the contrary, if the point θ A does not pass the test, it is discarded and a new candidate in the same direction θθ A is chosen.
A new point θ A along the selected semi-axis is taken by scaling 1 by a factor α < 1 as shown in Figure 2(a).This procedure is repeated with a new likelihood ratio test and possibly a rescaling (reducing α) until a point that satisfies the test Fig. 1: Definition of the points to perform the likelihood ratio test for two parameters.is found.Once this point, say θ new , has been found, we linearize the confidence region around this new point.To this aim we calculate the Jacobian J(θ new ) (see (8)) and the covariance Cov(θ new ) (see (7)).
After performing the eigendecomposition of the new covariance matrix, the principal axes might have changed direction due to the nonlinearity of the model, see Figure 2(b).Following the new principal directions, we can analogously find the next candidate points belonging to the confidence region, i.e. the points θ new,A , θ new,C and θ new,D , see Figure 2(b).The point θ new,B is not considered because it is the opposite extremal point of the same principal axis.In fact, instead of taking θ new,B , we perform the same procedure starting from θ B to approximate the confidence region in the direction θθ B .Therefore, this procedure is repeated along all principal axes considering both directions.
Stopping criterion
The search along one principal axis is stopped if the distance of the next accepted point, let's say θ new,A , to the previous one is less than a given tolerance then the point θ new,A is retained to define the nonlinear confidence region, see Figure 3.
Contour approximation
The countour of the nonlinear confidence region is approximated by connecting all retained points, in our case θ new,A , θ new,C , θ new,D , θ C , θ D and θ B .These points are linearly connected as shown in Figure 3(b).
Numerical results
As an example the following model is considered where the parameter θ 1 and θ 2 are estimated by the nonlinear least squares method.To simulate the parameter estimation process we have applied perturbed data generated using the "true" values of the parameters according to the following model response: where e i is a random variable distributed as N (0, σ 2 ).The Table 1 indicates the values θ true and σ 2 used in the calculations, and the least squares estimated values θ found by minimizing S(θ) for a realization of the observations y i .Additionally, the Table 2 includes the measurement positions t i .One stopping criterion of the SANCR method is that the distance of two successive candidates is smaller than a given tolerance T OL, see (9).We have used T OL = 0.15.
To evaluate the results of our approach we compare it with a Markov Chain Monte Carlo (MCMC) method described in [10] using the associated MCMC toolbox for Matlab.In fact, an alternative way to perform a statistical analysis of nonlinear models is the use of the Bayes's theorem [5].Bayesian inference is not the focus of our work, therefore we refer for example to [6] for a presentation of the Bayesian approach.Since the MCMC method does not allow to easily define a stopping criterion to assure convergence, we have set to 5 • 10 6 the number of model evaluations in the MCMC code.
In Figure 4 the approximations of the confidence region using the four methods can be qualitatively compared.The blue dots (for the colors see the electronic version) are the points of the MCMC method.The cyan ellipse is the linearized confidence region of the method CL.The green curve is the confidence region approximated by the method LR and the red curve is the confidence region approximated by the SANCR method.
One can observe that the linearized confidence region CL is much smaller than the MCMC approximation and that it is not centered in it.The SANCR method is an approximation of the confidence region defined by the method LR obtained at a much lower computational cost than the method LR itself.The computational costs are reported in tables 3 and 4. The method CL is very cheap with only one evaluation of the nonlinear model and the evaluations of the sensitivities with respect to the two parameters, but its quality is not satisfactory.The SANCR method uses 59 function evaluations and 42 ellipses.The latter correspond to 84 sensitivity evaluations according to the number of two parameters.The LR and the MCMC methods have been used here with 10 4 , respectively 5 • 10 6 , model evaluations.
Table 1 :
Parameters and variance
Table 4 :
Derivatives computations of the four methods SANCR CL LR MCMC Fig. 4: Confidence region approximated by the four methods. | 3,085.6 | 2015-06-29T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
Effect of Primary Cable Position on Accuracy in Non-Toroidal-Shaped Pass-Through Current Transformer
Non-toroidal-shaped primary pass-through protection current transformers (CTs) are used to measure high currents. Their design provides them with a big airgap that allow the passing of several cables per phase though them, which is the main advantage versus toroidal types, as the number of CTs required to measure the whole phase current is drastically reduced. The cables passed through the transformer window can be in several positions. As the isolines of the magnetic field generated by the primary currents are centered in the cables, if these cables are not centered in the transformer window, then the magnetic field will be non-uniform along the transformer core. Consequently, local saturations can appear if the cables are not properly disposed, causing the malfunction of the CT. In this paper, the performance of a non-toroidal-shaped protection CT is studied. This research is focused on the influence of the cable position on possible partial saturations of the CT when it is operating near to its accuracy limit. Depending on the cable position, the ratio of the primary and secondary currents can depart from the assigned ratio. The validation of this phenomenon was carried out via finite element analysis (FEA), showing that partial transformer core saturations appear in areas of the magnetic core close to the cable. By applying FEA, the admissible accuracy region for cable positioning inside the CT is also delimited. Finally, the simulation results are ratified with experimental tests performed in non-toroidal protection CTs, varying the primary cables’ positions, which are subjected to currents up to 5 kA, achieving satisfactory results. From this analysis, installation recommendations are given.
Introduction
Current transformers (CTs) are used in alternating current (AC) systems to measure extremely high currents, as the direct connection of a measurement device is not possible.One of their many applications is providing current measurements to protective relays.In order to ensure the proper operation of these relays, a dependable current measurement is needed.For this reason, there are standards that define the performance of these transformers [1][2][3].
Mainly in power system applications, when the currents are very high, several cables per phase are used.In this situation, the use of non-toroidal-shaped CTs is very convenient [4].This is because, in high-voltage and/or high-power applications, the cables need to maintain a distance between them, and this type of CT is able, due to its geometry, to comprise all the cables of the same phase.Consequently, the number of CTs utilized per phase is reduced drastically.A theorical example of a non-toroidal CT embracing five single-phase parallel wires can be found in Figure 1.However, depending on their phenomena) requires a compensation method [8,16,19].
Despite the abovementioned phenomena, another source of measurement errors is the position of the primary cable relative to the magnetic core [20,21].This effect is seldom considered in toroidal-shaped CTs [21,22], as it seems negligible compared to that commented on previously, if the cable orientation has the same direction as the normal vector of the CT radial section.Additionally, in [23], three equal Rogowski coils are tested, measuring the same cable and primary current at the same time in order to validate the measurements.From this study, it can be observed that the error between centered and displaced cables is lower than 1%.In [24], the position effect of the primary cable in toroidal yokeless Hall effect current transducers is analyzed.In this study, the error increases when The main cause of accuracy loss in CTs is the saturation of their magnetic core.The causes of this saturation are frequently related to the harmonic content of the primary current [6].As current standards do not contemplate testing CTs at frequencies higher than 50 or 60 Hz, a proposal to extend their testing to higher frequencies was presented [7].
Notwithstanding this, several approaches to detecting, correcting, or compensating for magnetic core saturation effects have been proposed [8][9][10][11][12][13][14][15][16][17].In [8], the causes of CT saturation are also linked to a large amplitude of fault currents and possible direct current (DC) components.Additionally, they implement an Improved S-Transform that allowed them to detect the saturation time and then to estimate the unsaturated AC.The primary current reconstruction from the saturation time estimation has been performed with multiple methods [9][10][11].In [12], the inrush currents are also considered as a possible source of CT saturation.The study proposes placing CTs at both sides, primary and secondary, of a power transformer.Then, the Fréchet distance algorithm is used in both CTs, and afterwards, these are compared to each other to identify possible saturations.The comparison of CT currents at both sides of a power transformer can also be carried out with wavelet decomposition methods [13].Zheng et al. [14] used a feature extracted from secondary current waveform histograms to distinguish whether faults were external or internal.There are other methods, such as that developed by Cavallera et al. [15], that utilize additional stray flux sensors close to the iron core to detect partial saturations in the CT if a certain threshold value is exceeded.In [16], the saturation detection is carried out by implementing an extended Kalman filter to obtain a model that allows primary unsaturated current reconstruction.Data-driven methods can also be considered to solve this issue.In [17], a deep learning approach is focused on CT saturation monitoring, based on historical and continuous monitoring data.Also, in [18], an artificial neural network (ANN) is trained to detect CT saturations during electric faults by monitoring an entire period of the secondary current waveform.Finally, the possible CT saturation due to the presence of a DC component in the primary current (because of transients or steady-state phenomena) requires a compensation method [8,16,19].
Despite the abovementioned phenomena, another source of measurement errors is the position of the primary cable relative to the magnetic core [20,21].This effect is seldom considered in toroidal-shaped CTs [21,22], as it seems negligible compared to that commented on previously, if the cable orientation has the same direction as the normal vector of the CT radial section.Additionally, in [23], three equal Rogowski coils are tested, measuring the same cable and primary current at the same time in order to validate the measurements.From this study, it can be observed that the error between centered and displaced cables is lower than 1%.In [24], the position effect of the primary cable in toroidal yokeless Hall effect current transducers is analyzed.In this study, the error increases when the cable is separated from the transducer center up to 6.5%.This error is attenuated by increasing the number of magnetic sensors around the magnetic isoline of the transducer, i.e., increasing the radial symmetry of the transducer.
However, the cable position acquires higher importance in non-toroidal-shaped ones designed for multiple and separate cable current measurement.This is because the wire can cause partial saturations in the CT's iron core if it is not well positioned.This effect also affects Hall effect-based magnetic sensors, in which a greater error is seen when more ellipticals are disposed among them, surrounding the primary cable [25].
Despite the position of the cable not having previously been studied in iron-core non-toroidal CTs, the main objectives of this paper are focused on the empirical analysis of the systematic error caused by this effect.Therefore, the main contribution of the study is to propose a test with which it is possible to detect partial saturations caused by the cable position in the non-toroidal CT prior to its commissioning.Thus, the correction of possible saturations during the operation of the CT is already characterized up to its accuracy limit.The analysis of this effect is performed using simulations, finite element methods (FEMs), and experimental tests carried out with non-toroidal CTs measuring up to 5 kA currents circulating through the primary cables.The results will allow us to make some recommendations to be taken into consideration in industrial applications when utilizing non-toroidal CTs.These recommendations will mainly be focused on detecting zones where primary cable positions will cause local saturations in the CTs exceeding their admissible maximum composite error.
The paper is organized as follows.In Section 2, the characteristics of the considered transformers are presented.Therefore, the simulation setup and results are presented in Section 3. Afterwards, the experimental testing arrangements and results are given in Section 4. Section 5 discusses the results obtained in the previous Sections, and finally, Section 6 summarizes the paper, highlighting the main conclusions of this research.
Materials and Methods
The performance of CTs is based on the well-known Ampère's law; i.e., the circulation of a magnetic field along a closed loop is equal to the total current passing through the loop: where H is the magnetic field, dl is the length differential of the magnetic circuit, and N•i is the total current (current per wire multiplied by the number of wires).As the variables involved in this equation do not depend on material properties, it is a geometrical relationship.If there is only one cable with a circular cross section, then the system has radial symmetry.Therefore, isolines of the magnetic field are circumferences centered on the cable.In this configuration, the magnitude of this magnetic field can be easily calculated as follows: with r being the radius of the considered circumference.In a CT, there are two magnetic field sources, namely, the primary current (i 1 ) and secondary current (i 2 ).If the secondary winding is uniformly wired around the magnetic core, then this core is a magnetic field isoline.This is due to the winding symmetry around the core cross section.So, it is an isoline for any core shape.In the case of toroidal CTs, if the primary cable is centered on the transformer window, then the magnetic nucleus is an isoline of the magnetic field created by the current carried by this primary cable.However, if the primary cable is displaced from the center, then the isolines are not concentric with the magnetic material anymore.Thus, some H isolines will be closed across the air gap in the farthest part of the CT.This implies that the magnetic flux density (B) will be more concentrated in the core part where the cable is closer.On the other hand, B will be lower in the core part farther from the cable, as some part of the flux lines will be concatenated through the air, becoming stray flux, Φ σ .This concept is plotted in Figure 2a.However, it is common that in conventional toroidal CTs, the air gap concatenated in the magnetic circuit will not be large enough to reach considerable flux unbalances [22].In the case of non-toroidal current transformers, this is no longer the case, as the airgap inside the transformer is not negligible (see Figure 2b).
the magnetic material anymore.Thus, some H isolines will be closed across the air gap in the farthest part of the CT.This implies that the magnetic flux density (B) will be more concentrated in the core part where the cable is closer.On the other hand, B will be lower in the core part farther from the cable, as some part of the flux lines will be concatenated through the air, becoming stray flux, Φσ.This concept is plotted in Figure 2.a.However, it is common that in conventional toroidal CTs, the air gap concatenated in the magnetic circuit will not be large enough to reach considerable flux unbalances [22].In the case of non-toroidal current transformers, this is no longer the case, as the airgap inside the transformer is not negligible (see Figure 2b).The performance of a CT depends on preserving the same saturation state along its measurement range.This is usually accomplished by avoiding saturation; i.e., the combined magnetic field caused by both windings has a magnetic flux density low enough to avoid saturation.This usually happens in design conditions, i.e., when the primary cable is centered on the transformer window.
Therefore, to evaluate the effect of different primary cable positions on non-toroidal CT accuracy, a protection current transformer sensor was built according to IEC 61869-2 [2]. Figure 3 shows the secondary winding and iron core of a non-toroidal CT sensor during its manufacturing process without its casing.Additionally, its characteristics are summarized in Table 1.This non-toroidal CT sensor type will be utilized as a case study in order to evaluate the effects of the primary winding position on its performance.The performance of a CT depends on preserving the same saturation state along its measurement range.This is usually accomplished by avoiding saturation; i.e., the combined magnetic field caused by both windings has a magnetic flux density low enough to avoid saturation.This usually happens in design conditions, i.e., when the primary cable is centered on the transformer window.
Therefore, to evaluate the effect of different primary cable positions on non-toroidal CT accuracy, a protection current transformer sensor was built according to IEC 61869-2 [2]. Figure 3 shows the secondary winding and iron core of a non-toroidal CT sensor during its manufacturing process without its casing.Additionally, its characteristics are summarized in Table 1.This non-toroidal CT sensor type will be utilized as a case study in order to evaluate the effects of the primary winding position on its performance.concentrated in the core part where the cable is closer.On the other hand, B will be lowe in the core part farther from the cable, as some part of the flux lines will be concatenated through the air, becoming stray flux, Φσ.This concept is plotted in Figure 2.a.However, i is common that in conventional toroidal CTs, the air gap concatenated in the magneti circuit will not be large enough to reach considerable flux unbalances [22].In the case o non-toroidal current transformers, this is no longer the case, as the airgap inside the trans former is not negligible (see Figure 2b).The performance of a CT depends on preserving the same saturation state along it measurement range.This is usually accomplished by avoiding saturation; i.e., the com bined magnetic field caused by both windings has a magnetic flux density low enough t avoid saturation.This usually happens in design conditions, i.e., when the primary cabl is centered on the transformer window.
Therefore, to evaluate the effect of different primary cable positions on non-toroida CT accuracy, a protection current transformer sensor was built according to IEC 61869- [2]. Figure 3 shows the secondary winding and iron core of a non-toroidal CT sensor dur ing its manufacturing process without its casing.Additionally, its characteristics are sum marized in Table 1.This non-toroidal CT sensor type will be utilized as a case study in order to evaluate the effects of the primary winding position on its performance.
Simulations
The performance of the CT was simulated using Ansys Maxwell [26] FEA software (ANSYS 2020 R2).In these simulations, several primary cable positions inside the transformer core were considered.
Simulation Setup
With this aim, a model of the transformer core was built.Figure 4 shows the transformer core (see the cross section, the area of which is S = 1.43 cm 2 , on the left).The right side of this figure also shows a 2D projection of this core.The core has 154 mm long straight segments joined by semicircles with a 92 mm inner radius and 105 mm outer radius.
Simulations
The performance of the CT was simulated using Ansys Maxwell [26] FEA software (ANSYS 2020 R2).In these simulations, several primary cable positions inside the transformer core were considered.
Simulation Setup
With this aim, a model of the transformer core was built.Figure 4 shows the transformer core (see the cross section, the area of which is S = 1.43 cm 2 , on the left).The right side of this Figure also shows a 2D projection of this core.The core has 154 mm long straight segments joined by semicircles with a 92 mm inner radius and 105 mm outer radius.
The transformer core is made of M-15 steel alloy [27].Figure 5 shows its magnetizing curve.According to [2], the saturation point is calculated as the point when an increment ΔB = 10% requires an increment of ΔH = 50%.This value is reached at a flux density of B = 1.7 T.
To build the FEA model, not only the magnetic core but also the electric parameters should be defined.As the transformer ratio is 500/1 A and its primary has only one turn, secondary winding has around 500 turns homogeneously distributed along the magnetic core (see the turns' disposition in Figure 3).The secondary winding is made of copper with 0.5 mm 2 section.This implies 0.8 mm diameter.Afterwards, regarding the primary cable, the FEA simulations were performed with 240 mm 2 cross section copper cables.This implies a 9 mm diameter.The transformer core is made of M-15 steel alloy [27].Figure 5 shows its magnetizing curve.According to [2], the saturation point is calculated as the point when an increment ∆B = 10% requires an increment of ∆H = 50%.This value is reached at a flux density of B = 1.7 T. To complete the model, a burden resistance, Rload = 1 Ω, is connected to the secondary winding of the CT.This resistance represents the rated burden of the current transformer (1 VA).Therefore, the primary winding is fed through a current source.Figure 6 shows the electric scheme considered for the FEA model.This type of non-toroidal transformer To build the FEA model, not only the magnetic core but also the electric parameters should be defined.As the transformer ratio is 500/1 A and its primary has only one turn, secondary winding has around 500 turns homogeneously distributed along the magnetic core (see the turns' disposition in Figure 3).The secondary winding is made of copper with 0.5 mm 2 section.This implies 0.8 mm diameter.Afterwards, regarding the primary cable, the FEA simulations were performed with 240 mm 2 cross section copper cables.This implies a 9 mm diameter.
To complete the model, a burden resistance, R load = 1 Ω, is connected to the secondary winding of the CT.This resistance represents the rated burden of the current transformer (1 VA).Therefore, the primary winding is fed through a current source.Figure 6 shows the electric scheme considered for the FEA model.This type of non-toroidal transformer has an elongated window that allows several cables in its primary circuit.For this reason, if only one cable is used, then it can be in several positions in the window.In all simulations, the current source that feeds the primary winding corresponds to the CT accuracy limit; e.g., in the CT of Table 1, it corresponds with 10 times I 1N , 5000 A root mean square (RMS).It is modeled as a source current according to Equation (3) as follows: To complete the model, a burden resistance, Rload = 1 Ω, is connected to the sec winding of the CT.This resistance represents the rated burden of the current trans (1 VA).Therefore, the primary winding is fed through a current source.Figure 6 the electric scheme considered for the FEA model.This type of non-toroidal trans has an elongated window that allows several cables in its primary circuit.For this if only one cable is used, then it can be in several positions in the window.In all tions, the current source that feeds the primary winding corresponds to the CT a limit; e.g., in the CT of Table 1, it corresponds with 10 times I1N, 5000 A root mean (RMS).It is modeled as a source current according to Equation (3) as follows:
Simulation Results
A first set of simulations was conducted with the primary cable in different po which are summarized in Figure 7. Figure 8 shows the magnetic flux densities in th netic core for these positions.Figure 8a plots the magnetic flux density when the p cable is in the centered position.The maximum flux density is 1.283 T, lower th saturation flux density (see Figure 5).In this simulation, B has symmetry in the X-a the Y-axis.It can also be appreciated that both laterals of the iron core have slightl than in the upper and lower parts (ΔB = 0.094 T).This phenomenon is caused by ference in reluctance for both axes.However, the symmetry in the magnetic circ the low B level makes the measurement error negligible.
The following Figure 8b shows the magnetic flux densities in the magnetic cor the primary cable is in the top position (65 mm above the center of the transform dow).In this case, the maximum flux density increases up to 1.642 T, close to, bu than, the saturation flux density.Figure 8c shows the magnetic flux densities in th netic core when the primary cable is in the bottom position (65 mm below the ce the transformer window).The maximum B is 1.533 T. In both simulations ΔB in the direction is considerably higher than the centered case, ΔB = 0.285 T, which will aff
Simulation Results
A first set of simulations was conducted with the primary cable in different positions, which are summarized in Figure 7. Figure 8 shows the magnetic flux densities in the magnetic core for these positions.Figure 8a plots the magnetic flux density when the primary cable is in the centered position.The maximum flux density is 1.283 T, lower than the saturation flux density (see Figure 5).In this simulation, B has symmetry in the X-axis and the Y-axis.It can also be appreciated that both laterals of the iron core have slightly less B than in the upper and lower parts (∆B = 0.094 T).This phenomenon is caused by the difference in reluctance for both axes.However, the symmetry in the magnetic circuit and the low B level makes the measurement error negligible.
Sensors 2024, 24, x FOR PEER REVIEW 7 of 15 current measurement error more.However, the maximum estimated B is still inside the saturation limits.Finally, Figure 8d shows the magnetic flux densities in the magnetic core when the primary cable is in the right position (120 mm to the right of the center of the transformer window).The maximum flux density is 1.782 T, higher than the saturation flux density (see Figure 4) and also higher than in the vertical displacement simulations (compared with Figure 8b,c).Figure 8e shows the magnetic flux densities in the magnetic core when the primary cable is in the left position (120 mm to the left of the center of the transformer window).The maximum flux density again has the same value (Bmax = 1.782T) as in its analogous simulation (see Figure 8d).In these cases, the cable has an excessive eccentricity from the ideal case.
In all cases, higher magnetic flux density levels are reached in the areas closest to the primary; meanwhile, the farthest areas have the lower flux density levels, as some of the H isolines are closed through the air and not through the iron core.This phenomenon directly affects the accuracy of the CT performance.Therefore, the higher the distance from the CT center the primary winding is, the worse the CT performance will be.Additionally, it must be remarked that the results obtained have bilateral symmetry attending to the X-and Y-axes, respectively.Consequently, analyzing only one quadrant is enough.In order to evaluate the zone where primary cable positions provide the required accuracy, the positions shown in Figure 9 were additionally simulated.In this Figure, the white area corresponds to the airgap, the gray section to the iron core, and the orange one to the casing.These consist of a mesh of cable position variations every Δx = 20 mm and Δy = 20 mm in the first quadrant of the air gap due to its symmetry.These increments allow most of the airgap to be covered.Additionally, they produce enough points in the mesh to study the composite error behavior in the region, and the simulations do not require extensive computation resources due to the number of points calculated not being high.The results of these simulations (secondary current RMS values) are shown in Table 2.In this Table, the composite error is also given.It was calculated using Equation (4) according to [2], as follows: where Ɛc is the composite error, Rt is the transformer current ratio, I1 is the primary current (in RMS), i1 and i2 are the instantaneous primary and secondary currents, respectively, and T is the current wave period.The following Figure 8b shows the magnetic flux densities in the magnetic core when the primary cable is in the top position (65 mm above the center of the transformer window).In this case, the maximum flux density increases up to 1.642 T, close to, but lower than, the saturation flux density.Figure 8c shows the magnetic flux densities in the magnetic core when the primary cable is in the bottom position (65 mm below the center of the transformer window).The maximum B is 1.533 T. In both simulations ∆B in the Y-axis direction is considerably higher than the centered case, ∆B = 0.285 T, which will affect the current measurement error more.However, the maximum estimated B is still inside the saturation limits.
Finally, Figure 8d shows the magnetic flux densities in the magnetic core when the primary cable is in the right position (120 mm to the right of the center of the transformer window).The maximum flux density is 1.782 T, higher than the saturation flux density (see Figure 4) and also higher than in the vertical displacement simulations (compared with Figure 8b,c).Figure 8e shows the magnetic flux densities in the magnetic core when the primary cable is in the left position (120 mm to the left of the center of the transformer window).The maximum flux density again has the same value (B max = 1.782T) as in its analogous simulation (see Figure 8d).In these cases, the cable has an excessive eccentricity from the ideal case.
In all cases, higher magnetic flux density levels are reached in the areas closest to the primary; meanwhile, the farthest areas have the lower flux density levels, as some of the H isolines are closed through the air and not through the iron core.This phenomenon directly affects the accuracy of the CT performance.Therefore, the higher the distance from the CT center the primary winding is, the worse the CT performance will be.Additionally, it must be remarked that the results obtained have bilateral symmetry attending to the Xand Y-axes, respectively.Consequently, analyzing only one quadrant is enough.
In order to evaluate the zone where primary cable positions provide the required accuracy, the positions shown in Figure 9 were additionally simulated.In this figure, the white area corresponds to the airgap, the gray section to the iron core, and the orange one to the casing.These consist of a mesh of cable position variations every ∆x = 20 mm and ∆y = 20 mm in the first quadrant of the air gap due to its symmetry.These increments allow most of the airgap to be covered.Additionally, they produce enough points in the mesh to study the composite error behavior in the region, and the simulations do not require extensive computation resources due to the number of points calculated not being high.The results of these simulations (secondary current RMS values) are shown in Table 2.In this table, the composite error is also given.It was calculated using Equation (4) according to [2], as follows: where ϵ c is the composite error, R t is the transformer current ratio, I 1 is the primary current (in RMS), i 1 and i 2 are the instantaneous primary and secondary currents, respectively, and T is the current wave period.From these results, it is obvious that the primary cable position has a huge effect on the current transformer accuracy reaching errors in the current measurement of more than 25% when the cables are in the farthest position from the center of the CT.The maximum admissible composite error for this CT is 5%, and it is possible to observe the limiting zone that provides the required accuracy.Figure 9 depicts this zone.
Experimental Tests
The testing procedure follows the direct testing procedure described in IEC 61869-2 [2] for protection current transformers of the P and PR class.
Experimental Setup
With this aim, the circuit in Figure 6 was built.Figure 10 shows the experimental test Figure 9. Simulated mesh of primary cable positions with ∆x = ∆y = 20 mm.The primary cable position zone for admissible accuracy performance in the non-toroidal CT (green points correspond to a low enough ratio error, and red points correspond to an excessive ratio error; orange area, casing; gray area, iron core; white area, airgap; green area, admissible error zone of primary winding positioning).
From these results, it is obvious that the primary cable position has a huge effect on the current transformer accuracy reaching errors in the current measurement of more than 25% when the cables are in the farthest position from the center of the CT.The maximum admissible composite error for this CT is 5%, and it is possible to observe the limiting zone that provides the required accuracy.Figure 9 depicts this zone.
Experimental Tests
The testing procedure follows the direct testing procedure described in IEC 61869-2 [2] for protection current transformers of the P and PR class.
Experimental Setup
With this aim, the circuit in Figure 6 was built.Figure 10 shows the experimental test bench.The laboratory workbench uses a current source (1) capable of feeding high currents to the transformer primary (2).Several cables (2) with enough ampacity were placed in the CT (3).By feeding them sequentially, the cable position influence was studied.Two CTs (3), with parameters corresponding with those shown in Table 1, were placed near each other to corroborate that the measurement was the same in both.In the secondary current of each CT (3), a resistor (4) with R load = 1 Ω was installed.Finally, several multimeters (5) and an oscilloscope (6) were used as signal acquisitors to monitor I 1 , U 2 , and I 2 in both CTs.
Sensors 2024, 24, x FOR PEER REVIEW 10 of 15 current of each CT (3), a resistor (4) with Rload = 1 Ω was installed.Finally, several multimeters ( 5) and an oscilloscope (6) were used as signal acquisitors to monitor I1, U2, and I2 in both CTs.Afterwards, tests were conducted, controlling the transformer primary current up to its maximum current, I1 = 5 kARMS, where its accuracy limit was 5% (see Table 1).Additionally, the secondary voltage, U2, and current, I2, were measured.Several tests were performed, changing the position of the primary cable in the X-axis and in the Y-axis manually.Then, the extreme positions in both directions were evaluated.To keep the primary wires in the center, a non-ferromagnetic mount was introduced into the airgap of the CT (seen on the left side of Figure 10).
Experimental Results
First of all, the CT saturation performance was examined by monitoring the I2 and U2 of the CT without cables in the airgap.Thus, the saturation curve U-I was obtained by varying the U2 voltage with a controlled AC voltage source and measuring both I2 and U2.The resultant curve can be seen at Figure 11.From this Figure, the saturation knee current can be obtained according to [2] at Iexc,k = 72.8[mA] for a voltage Uexc,k = 26.1 [V].At this point, in [2], it is stated that the composite error should be lower than the stablished limit.Afterwards, tests were conducted, controlling the transformer primary current up to its maximum current, I 1 = 5 kA RMS, where its accuracy limit was 5% (see Table 1).Additionally, the secondary voltage, U 2 , and current, I 2 , were measured.Several tests were performed, changing the position of the primary cable in the X-axis and in the Y-axis manually.Then, the extreme positions in both directions were evaluated.To keep the primary wires in the center, a non-ferromagnetic mount was introduced into the airgap of the CT (seen on the left side of Figure 10).
Experimental Results
First of all, the CT saturation performance was examined by monitoring the I 2 and U 2 of the CT without cables in the airgap.Thus, the saturation curve U-I was obtained by varying the U 2 voltage with a controlled AC voltage source and measuring both I 2 and U 2 .The resultant curve can be seen at Figure 11.From this figure, the saturation knee current can be obtained according to [2] at I exc,k = 72.8[mA] for a voltage U exc,k = 26.1 [V].At this point, in [2], it is stated that the composite error should be lower than the stablished limit.It can be calculated using Equation (5), as follows: where ALF is the accuracy limit factor (ALF = 10 in this CT).In this case, the composite error is ε c = 0.728%, which is inside the limits (ε c < 5%) and validates the correct operation of the non-toroidal CT according to the standards.Then, a composite error of approximately ε c = 5% is expected for 10 times I 1N .In order to verify the influence of the position on the performance of the transformer, five different positions for the primary cable were tested.The primary cables have been fed at I1 = 5000 [ARMS], which implies a current of 1250 [ARMS] through each cable in parallel.The total number of cables across the airgap of the transformer is four, as shown in Figure 10.The positions correspond to the cable centered in the window and in the extreme bottom, top, left, and right of the CT's air gap window.The positions were addressed by displacing the cables manually to these extremes.The aforementioned Figure 7 (in Section 3) shows these positions.These positions have been chosen in order to validate the behavior observed in the simulations, where the upper and lower cases should not experience saturation, while the right and left extremes should.
Figures 12 and 13 show the secondary current, I2, waveforms compared with the primary current in terms of the secondary current, I1/Rt.Both currents were measured with an oscilloscope, and their RMS values were estimated with ammeters.On the one hand, Figure 12 shows the comparison among positions in the time domain.From this Figure, it can be seen that the centered position I2 measurement is the closest to the I1•Rt current.The top and bottom I2 measurements are also close to the centered one.However, as previously estimated in the FEA simulations, the left and right I2 measurements do not fit well the I1/Rt current.On the other hand, Figure 13 shows a comparison between I2 and I1/Rt for the second quadrant positions (left and top); it can be seen that the farthest positions from the In order to verify the influence of the position on the performance of the transformer, five different positions for the primary cable were tested.The primary cables have been fed at I 1 = 5000 [A RMS ], which implies a current of 1250 [A RMS ] through each cable in parallel.The total number of cables across the airgap of the transformer is four, as shown in Figure 10.The positions correspond to the cable centered in the window and in the extreme bottom, top, left, and right of the CT's air gap window.The positions were addressed by displacing the cables manually to these extremes.The aforementioned Figure 7 (in Section 3) shows these positions.These positions have been chosen in order to validate the behavior observed in the simulations, where the upper and lower cases should not experience saturation, while the right and left extremes should.
Figures 12 and 13 show the secondary current, I 2 , waveforms compared with the primary current in terms of the secondary current, I 1 /R t .Both currents were measured with an oscilloscope, and their RMS values were estimated with ammeters.On the one hand, Figure 12 shows the comparison among positions in the time domain.From this figure, it can be seen that the centered position I 2 measurement is the closest to the I 1 •R t current.The top and bottom I 2 measurements are also close to the centered one.However, as previously estimated in the FEA simulations, the left and right I 2 measurements do not fit well the I 1 /R t current.On the other hand, Figure 13 shows a comparison between I 2 and I 1 /R t for the second quadrant positions (left and top); it can be seen that the farthest positions from the center have higher saturation values, and, as a consequence, lower accuracy levels.
Discussion
The performance of our CT was evaluated through FEA simulations and experi-
Discussion
The performance of our CT was evaluated through FEA simulations and experi- From the previous measurements, the composite error was calculated using Equation ( 4).The RMS measured values corresponding to the positions shown in Figure 7 are collected in Table 3.The primary current I 1 was set close to the accuracy limit, i.e., 5000 A RMS , while the I 2 was collected.As expected from Figures 12 and 13, the smaller ε c was given for the centered position of the cables.The top and bottom positions were still inside the accuracy limits.However, the left and right positions produced enough saturation in the iron coil to exceed the accuracy limits of the non-toroidal CT.
Discussion
The performance of our CT was evaluated through FEA simulations and experimental tests.The simulations analyzed different critical positions (x,y extreme positions inside the CT airgap and the center, see Figure 8) and a mesh of positions (see Figure 9).The tests have also been made in five extreme positions (center, top, bottom, left, and right).In the cases where there are tests and simulations, the results show a reasonable agreement, producing higher saturations in the left and right extremes.This fact validates the simulations.As a consequence, the remaining simulations allow the definition of a zone where the assigned accuracy is obtained and a zone where the primary cable should not be placed.
These results are coherent with Ampère's law because, in a non-toroidal-shaped CT, the magnetic core is not a radial isoline for the magnetic field created by the primary current.Therefore, the further from the center the primary cable is, the greater the differences between the primary and secondary magnetic fields.So, the possibility of partial saturation in the iron core also increases.For this reason, the ratio between primary and secondary currents changes, increasing current measurement errors.Regarding the correct zone of operation, it is remarkable that this corresponds to a circle tangent to the straight parts of the inner core.
Despite partial saturation being the main phenomenon that distorts the current measurement output of the CT, its quantification depends on the iron core specifications.These specifications can vary among different CT designs.For this reason, in order to detect possible saturation phenomena in non-toroidal CTs in real facilities, a high-current injection test must be performed before commissioning.This test must be performed with the cables in the same position as they will be in the facility, and only I 1 and I 2 measurements are required.Thus, partial saturation will be detected by comparing both measurements, and the composite error can also be estimated for different current levels.
Finally, special attention must be given to the i 1 waveform, which must be the same in the tests as in the facility once the CT is commissioned.Despite the high-current source emulating the current waveform of the primary cable of the facility, some inherent errors in the composite error estimation can appear from the facility noises, for example, due to power electronics, affecting the robustness of the method.These noises will produce additional magnetic field, which can lead to additional partial saturations.As a consequence, the estimated composite error in the tests will be lower than the error once the cable is installed in the facility.For this reason, the closer the tested i 1 waveform to the facility i 1 , the more accurate the composite error estimation.
Conclusions
Non-toroidal-shaped primary pass-through CTs are used to measure the phase current in applications where several cables per phase, which require isolation distance, are installed.The performance of a non-toroidal-shaped current transformer for different positions of the primary cable was tested and simulated.These tests show that the accuracy is strongly dependent on the primary cable position.Ampère's law reasoning and finite element simulations show that this result is a consequence of partial saturation in the transformer core.Therefore, the optimum performance is achieved when the primary cable is centered in the circle-delimited transformer window due to it being less prone to causing partial saturations in the CT.The further the cables from the center, the higher the composite error.Accordingly, an accuracy area was empirically defined.However, each type of non-toroidal CT must be tested individually, as this zone depends on the transformer design and primary cable current conditions.
The results of the simulations and the experimental test suggest that these current transformers should be tested using a high-current source prior to their commissioning.The high-current source must feed the CT at its maximum current, where it reaches the accuracy limit.It is strongly recommended that the same cable position be used in the tests and in the electric plant where it will be installed.In other words, the current distribution during the tests should be the same as in the operation of the current transformer.In this way, the operators could monitor possible local saturations during the tests and quantify the composite error before CT commissioning, which must never exceed its accuracy limit.Additionally, special attention should be paid to the short-circuit current, as the saturation problem will be achieved in these operation conditions.
Figure 2 .
Figure 2. Current transformer theoretical flux line distribution for a non-centered primary cable: (a) Toroidal shape, lower local saturation, (b) non-toroidal shape, higher local saturation.
Figure 3 .
Figure 3. Case study: manufactured non-toroidal CT iron core and secondary winding.
Figure 2 .
Figure 2. Current transformer theoretical flux line distribution for a non-centered primary cable: (a) Toroidal shape, lower local saturation, (b) non-toroidal shape, higher local saturation.
Figure 2 .
Figure 2. Current transformer theoretical flux line distribution for a non-centered primary cable: (a Toroidal shape, lower local saturation, (b) non-toroidal shape, higher local saturation.
Figure 3 .
Figure 3. Case study: manufactured non-toroidal CT iron core and secondary winding.
Figure 3 .
Figure 3. Case study: manufactured non-toroidal CT iron core and secondary winding.
Figure 4 .
Figure 4. Magnetic core layout and dimensions.Figure 4. Magnetic core layout and dimensions.
Figure 4 .
Figure 4. Magnetic core layout and dimensions.Figure 4. Magnetic core layout and dimensions.
Figure 6 .
Figure 6.Electrical scheme considered in simulations.
Figure 6 .
Figure 6.Electrical scheme considered in simulations.
Figure 7 .
Figure 7. Simulated cable positions inside the non-toroidal CT.
Figure 7 .
Figure 7. Simulated cable positions inside the non-toroidal CT.
Figure 8 .
Figure 8. Simulations of the magnetic flux density in the CT transformer.(a) Primary cable centered in the transformer window (0 mm, 0 mm); (b) primary cable on the upper part of the transformer window (0 mm, 65 mm); (c) primary cable on the lower part of the transformer window (0 mm, 65 mm); (d) primary cable on the right part of the transformer window (120 mm, 0 mm); (e) primary cable on the left part of the transformer window (−120 mm, 0 mm).
Figure 9 .
Figure 9. Simulated mesh of primary cable positions with Δx = Δy = 20 mm.The primary cable position zone for admissible accuracy performance in the non-toroidal CT (green points correspond to a low enough ratio error, and red points correspond to an excessive ratio error; orange area, casing; gray area, iron core; white area, airgap; green area, admissible error zone of primary winding positioning).
Figure 11 .
Figure 11.CT saturation curve.Voltage U2 and Current I2 of the secondary winding with the primary in open circuit.
Figure 11 .
Figure 11.CT saturation curve.Voltage U 2 and Current I 2 of the secondary winding with the primary in open circuit.
Sensors 2024 , 15 Figure 12 .
Figure 12.Secondary wave currents, i2, corresponding to the five tested positions inside the air gap of the CT and the primary current wave converted to the secondary current of the CT, Rt•i1.
Figure 13 .
Figure 13.Observation of the CT saturation for the centered, top, and left primary cable positions comparing i1/Rt and i2.
Figure 12 . 15 Figure 12 .
Figure 12.Secondary wave currents, i 2 , corresponding to the five tested positions inside the air gap of the CT and the primary current wave converted to the secondary current of the CT, R t •i 1 .
Figure 13 .
Figure 13.Observation of the CT saturation for the centered, top, and left primary cable positions comparing i1/Rt and i2.
Figure 13 .
Figure 13.Observation of the CT saturation for the centered, top, and left primary cable positions comparing i 1 /R t and i 2 .
Table 2 .
CT-FEA simulation results of I2 and Ɛc for different meshed positions.
Table 2 .
CT-FEA simulation results of I 2 and ϵ c for different meshed positions.
Table 3 .
Current transformer test results for different primary cable positions.
Table 3 .
Current transformer test results for different primary cable positions.
Table 3 .
Current transformer test results for different primary cable positions. | 10,383.8 | 2024-08-26T00:00:00.000 | [
"Engineering",
"Physics"
] |
Complete Chloroplast Genomes of Acanthochlamys bracteata (China) and Xerophyta (Africa) (Velloziaceae): Comparative Genomics and Phylogenomic Placement
Acanthochlamys P.C. Kao is a Chinese endemic monotypic genus, whereas Xerophyta Juss. is a genus endemic to Africa mainland, Arabian Peninsula and Madagascar with ca.70 species. In this recent study, the complete chloroplast genome of Acanthochlamys bracteata was sequenced and its genome structure compared with two African Xerophyta species (Xerophyta spekei and Xerophyta viscosa) present in the NCBI database. The genomes showed a quadripartite structure with their sizes ranging from 153,843 bp to 155,498 bp, having large single-copy (LSC) and small single-copy (SSC) regions divided by a pair of inverted repeats (IR regions). The total number of genes found in A. bracteata, X. spekei and X. viscosa cp genomes are 129, 130, and 132, respectively. About 50, 29, 28 palindromic, forward and reverse repeats and 90, 59, 53 simple sequence repeats (SSRs) were found in the A. bracteata, X. spekei, and X. viscosa cp genome, respectively. Nucleotide diversity analysis in all species was 0.03501, Ka/Ks ratio average score was calculated to be 0.26, and intergeneric K2P value within the Order Pandanales was averaged to be 0.0831. Genomic characterization was undertaken by comparing the genomes of the three species of Velloziaceae and it revealed that the coding regions were more conserved than the non-coding regions. However, key variations were noted mostly at the junctions of IRs/SSC regions. Phylogenetic analysis suggests that A. bracteata species has a closer genetic relationship to the genus Xerophyta. The present study reveals the complete chloroplast genome of A. bracteata and gives a genomic comparative analysis with the African species of Xerophyta. Thus, can be useful in developing DNA markers for use in the study of genetic variabilities and evolutionary studies in Velloziaceae.
INTRODUCTION
Velloziaceae is a monocotyledonous family of flowering plants consisting of five genera and c. 250 species (Mello-Silva et al., 2011;Behnke et al., 2013). It is classified under the small but morphologically diverse order Pandanales together with Cyclanthaceae, Pandanaceae, Stemonaceae, and Triuridaceae (Angiosperm Phylogeny Group, 2009;Chase et al., 2016). Basing on its generic limits and distributional patterns, it is one of the most interesting plant families that occur in Africa mainland, Madagascar, Arabian Peninsula, and South America (Porembski and Barthlott, 2000;Ibisch et al., 2001;Alves and Kolbek, 2010). Three genera occur in South America, of which two are endemic to Brazil (Barbacenia Vand., Vellozia Vand.) and the third occurs in the Andean region (Barbaceniopsis L.B.Sm.). A fourth genus, Xerophyta Juss., grows in tropical Africa, Arabian Peninsula and Madagascar (Gardens, 2016), and the fifth genus, Acanthochlamys P.C. Kao, is endemic to China, native to Tibet and Sichuan (Ibisch et al., 2001;Mello-Silva et al., 2011;Bao-Chun, 2017). Most species of Velloziaceae occur in the tropical regions of South America, and c.70 species occur in the Old world (Behnke et al., 2013). The plant family mainly consists of shrubs and herbs having stems with persistent leaf-sheaths and fibrous root structure (Beentje, 1994). It is the largest lineage of resurrection plants among angiosperms with its species having varying degrees of desiccation tolerance (Alcantara et al., 2015). This is because its species display different strategies to desiccation, with some of species being able to completely avoid desiccation (Alcantara et al., 2015).
The family is one of the classical examples of "Taxonomic nightmares" among plants, due to its floral similarities and the huge variabilities in the morphological features in terms of leaf form, size and life forms among others. Despite the unquestionable uniqueness of the family, there is still serious lack of phylogeographic synthesis about its species. This is because the vast majority of the available studies lack a phylogenetic perspective, and the information generated has been regarded as having little relevance for historical biogeography of both the New World and the Paleotropical species of Velloziaceae. In addition, only few species within the family have had their whole chloroplast genome sequenced including Xerophyta viscosa (Farrant et al., 2015), Xerophyta spekei , and reported through a short communication showing their length and gene contents of their cp genomes. However, a comprehensive comparative analysis of these chloroplast genomes is still lacking.
Xerophyta Juss. is a genus that consists of small to large perennial herbs and shrubs, naturally occurring in Africa, Madagascar, and the Arabian Peninsula. Most species of this genus have evolved an adaptation to lose their chlorophyll and terminate the process of photosynthesis during periods of extreme drought hence are extremely desiccation tolerant plants (Tuba et al., 1996). Hence, it has been used in the experimental studies on desiccation tolerance (Deeba and Pandey, 2017). In the same vain, Acanthochlamys bracteata P.C. Kao is a dwarf perennial herb found in the grassland nearby bushland of xerophytic valley of China (Deeba and Pandey, 2017). This species was previously classified under the monotypic family Acanthochlamydaceae (Deeba and Pandey, 2017), however, basing on TrnL and rbcL genes sequence data, it was transferred into the family Velloziaceae (Salatino et al., 2001;Mello-Silva et al., 2011). Additionally, morphological shared characters which are similar in form, structure, and origin, mostly persistent leaves, nucellus, tripartite stem cortex, and phloem tube among others, supported its inclusion into Velloziaceae (Mello-Silva et al., 2011). Morphology, pollen structures and biochemistry have played an important role in the grouping of plants into different taxa . However, more emphasis has to be placed on molecular systematics to help understand the morphologically and biochemically similar plants through the genome-wide analysis of their chloroplast.
Systematics and phylogeny, since its inception, has boosted classification and understanding of the evolutionary relationships among plants through genomic analysis Khan et al., 2019). Furthermore, breeding of drought tolerant crops is key to curb the climate change effects and the growing human population (Dai, 2011;Farrant et al., 2015). Chloroplasts are not only useful in photosynthesis, but also a major genetic system together with the nucleus and the mitochondria (De Las Rivas et al., 2002;Daniell et al., 2016;Moon, 2018;Chen et al., 2019;Khan et al., 2019;Li et al., 2019;Zhou et al., 2019). Due to its highly conserved nature, slow rate of nucleotide substitution and its maternal heredity, Chloroplast DNA (cpDNA) has been widely used in genomics to study plant phylogeny thus an important and informative source for taxonomic and phylogenetic studies (Palmer, 1987;Sale et al., 1993;Lee et al., 2014;Moon, 2018;Wang et al., 2018;Konhar et al., 2019;Li et al., 2019;Liu et al., 2019;Zhang et al., 2019;Oulo et al., 2020). The Plastome is circular, having a quadripartite structure and varies from 120kb to 170 kb, having small single copy (SSC) and Large single copy (LSC) regions, divided by two inverted repeats (IRa and IRb) (Zhao et al., 2015;Wang et al., 2018;Zhou et al., 2019). Plastome phylogenomics has led to tremendous advancements in NGS (next-generation sequencing) technologies hence genome sequencing is currently easier, faster and cheaper (Daniell et al., 2016;Li et al., 2017;Konhar et al., 2019;Khan et al., 2019;Liu et al., 2019). However, despite these advancements in sequencing technologies, there are still few plants that have had their chloroplast genome sequenced . Additionally, regardless of the uniqueness of Velloziaceae, there is still paucity of information available on the whole chloroplast genomes comparison. This present study reveals the sequenced chloroplast genome of A. bracteata and a performed phylogenetic analysis to validate its placement; together with X. spekei (MN663122) and X. viscosa (NC_043880) from the NCBI database. Additionally, we briefly discuss the morphological comparison between Xerophyta and Acanthochlamys. This will help in understanding the species in the family and also provide genetic resources for further analyses on the taxonomy and phylogeny of the Velloziaceae.
DNA Extraction and Sequencing
The fresh green leaves of A. bracteata were collected from Luhuo Sichuan at an altitude of 3045 m, China. They were sampled and immediately dried using silica gel in plastic bags (Chase and Hills, 1991). The voucher specimens were stored in the herbarium at Wuhan Botanical Garden, CAS (HIB) (China) with the voucher number DX-0006. 0.5g of the silica dried leaves was used for the DNA extraction using modified cetyltrimethylammonium bromide (CTAB) protocol (Doyle, 1991). Sequencing was done using illumina paired end technology platform at the Novogen Company in Beijing, China.
Genome Assembly and Annotation
After filtering the low-quality data and adaptors, the obtained clean data was assembled using Get Organelle version 1.7.4 software (Jin et al., 2020b), and then manually corrected. Gene annotation was done using Plastid Genome Annotator (PGA) (Qu et al., 2019) using the plastome of X. spekei as the reference genome. Geneious prime and GeSeq online tool 1 (Tillich et al., 2017), was used to manually edit and correct annotations. The circular chloroplast genome map was drawn using the Organelle Genome DRAW (OGDRAW) software (Greiner et al., 2019). The divergence of A. bracteata, X. spekei and X. viscosa species genomes was determined using mVISTA (Frazer et al., 2004) in the glocal alignment algorithm (shuffle-LAGAN mode) and using A. bracteata as the reference genome.
Analysis of Repeats and Codon Usage
Long repeat sequences (forward, reverse, complimentary, and palindromic) in the genome sequence were identified using REPuter online program (Kurtz et al., 2001). Locations and sizes of the repeat sequence were visualized with a minimal standard of: (1) minimum repeat size of 30bp, (2) a hamming distance of 3, (3) 90% or greater identity. Tandem repeats in the 3 species of Velloziaceae; X. viscosa, X. spekei, and A. bracteata cp genomes were identified using the Tandem repeat finder (Benson, 1999) with inbuilt alignment parameters. Simple Sequence Repeats (SSRs) analysis was done using the Perl script Microsatellite (MISA) 2 (Thiel et al., 2003), considering a nucleotide size of 1 to 6 base pairs and a threshold of 10, 5, 5, 3, and 3 for mono-, di-, tri-, tetra-, penta-, and hexa-nucleotides, respectively. The codon bias (RSCU) in the three species was conducted using MEGA7 software (Kumar et al., 2016).
Nucleotide Diversity and Substitution Rate Analysis
To assess the nucleotide diversity (Pi) in the complete Plastome of the three species, A. bracteata was compared with the species X. spekei and X. viscosa. The complete chloroplast genome (cpDNA) sequences were aligned using MAFFT in-built in phylosuite . A sliding window analysis of window length of 600 bp and step size of 200 bp, was used in the DnaSP to estimate the nucleotide diversity values of each gene (Rozas et al., 2017). Protein-coding genes of A. bracteata, X. spekei, and X. viscosa were extracted using Phylosuite, aligned using MAFFT and Ka/Ks rates for each gene estimated using the Ka/Ks calculator (Zhang et al., 2006). Selection pressure within the shared genes of the eleven species of the order Pandanales was evaluated using PAML v4.7 (Yang, 2007), executed in the EasyCodeML software (Gao F. et al., 2019).The dN/dS ratio of the species of the order Pandanales (Pandanus tectorius, Carludovica palmata, Stemona tuberosa, Stemona mairei, Stemona japonica, Croomia pauciflora, Croomia heterosepala, Croomia japonica, X. spekei, X. viscosa, and A. bracteata) was also calculated based on four site specific models (M0 vs. M3, M1a vs. M2a, M7 vs. M8 and M8a vs. M8) with likelihood ratio test (LRT) threshold of p < 0.05 to show highly variable sites in the genome. The protein-coding genes were aligned in correspondence to their amino acids and selection pressures on the genes analyzed using both ω and LRTs values. We estimated the interspecific genetic distance with MEGA X using Kimura two-parameter (K2P) model 3 (Kumar et al., 2018).
Phylogenetic Analysis
To understand the phylogenetic relationship of A. bracteata P.C. Kao with other species of the Order Pandanales, maximum likelihood (ML) and Bayesian inference (BI) trees were reconstructed. We generated 59 individual plastid gene files representing the shared protein-coding genes. Other representatives of the Velloziaceae (Vellozia sp., Xerophyta elegans, Barbacenia involucrata, Barbaceniopsis castillonii) were sampled from the gene bank based on a previous study (Soto Gomez et al., 2020). The 59 shared protein-coding genes of 55 species representatives from orders; Pandanales, Dioscoreales, and Liliales, were used to reconstruct the phylogeny using Elaeis guineensis as an outgroup based on previous study (Liu et al., 2012) (Supplementary Table 6). All the 55 species were subjected to MAFFT alignment, and the phylogenetic relationships estimated using the ML and BI analyses done using the IQ-Tree and MrBayes, respectively, integrated in Phylosuite Table 6). Model Finder (Kalyaanamoorthy et al., 2017) was used to find the best model using Bayesian Information Criterion (BIC). The model of best-fit for Bayesian analysis was GTR + F + I + G4, while that of IQ-tree was GTR + F + R3. The models GTR + I + G4 + F and GTR + F + R3 was run for 1000 replicates using ultrafast bootstraps.
Complete Chloroplast Genomes
The complete chloroplast sequence of A. bracteata was deposited in the GenBank database (Accession No. MW727487). All the three species' whole chloroplast genome; A. bracteata, X. spekei, and X. viscosa exhibited a spherical quadripartite nature ( Figure 1), with sizes of A. bracteata, X. spekei, and X. viscosa cp genomes being 153,843 bp, 155,235 bp , and 155,498 bp, respectively, similar to most angiosperm Plastomes (Daniell et al., 2016). The cp genomes consist of Inverted repeats (IRs) (IRa and IRb) each with a length ranging from 27,022-27,110 bp within the three species. The Large Single-Copy (LSC) region in the three species showed length ranging from 81,919 to 83,813 bp and Small Single-Copy (SSC) region (17,387-17,880 bp) ( Table 1). The LSC and SSC regions are separated by the IRs. Generally, the gene constituent of the chloroplast genome is approximately between 120 and 140 genes that are always actively involved in photosynthesis, transcription and translation processes (Gu et al., 2019). All the genes annotated in the cp genomes of the three species ranged between 129 and 132 genes, including 37-38 tRNAs and 8 rRNAs. The guaninecytosine (GC) content of the three chloroplast genomes showed no significant difference, however, A. bracteata had a slightly lower GC content of 37.4% of the genome. The regions (LSC and SSC) had no considerable differences in the GC content in the three species. However, the IR regions showed a higher GC content of 42.6%. This is due to the presence of the rRNA and tRNA genes which occupy greater area than the proteincoding genes within the inverted repeat regions ( Table 2). This phenomenon has also been shown in previous studies (Talat and Wang, 2015;Chen et al., 2016). The genomes contain protein-coding genes ranging from 84 or 85 in number, transfer RNA genes (tRNA) ranges from 37 or 38 in number, and 8 rRNA genes (Table 1). A significant number of genes occur in the LSC and SSC regions. However, 17 genes are recurrent in the inverted repeat (IRa and IRb) regions. These include six coding genes (ndhB, rpl2, rps12, rpl23, rps7, ycf2); and the non-coding include seven transfer RNA species (trnA-UGC, trnI-CAU, trnI-GAU, trnH-GUG, trnN-GUU, trnR-ACG, and trnV-GAC) and four ribosomal RNA species (rrn4.5,
Repeat Analysis
Chloroplast repeats are important genetic resources that play a key role in the genome recombination and rearrangement (Lee et al., 2014). They are useful in the study of population genetics and biogeographic studies (Xie et al., 2018). In the current study, repeat analysis revealed that the Plastomes of the three species contained varied number of repeats (i.e., Palindromic, Forward, and Reverse repeats). The repeat analysis of A. bracteata revealed palindromic repeats (28), forward repeats (9) and reverse repeats (1). Out of which 9 palindromic, 8 forward and 1 reverse repeat are have a length of between 20 and 40 bp (Figure 2). In X. spekei there were only 18 palindromic repeats and 11 forward repeats with no reverse repeats. On the other hand, X. viscosa repeat analysis revealed 16 palindromic repeats, 11 forward repeats and 1 reverse repeat. Basing on the type of repeats, A. bracteata and X. viscosa showed similarity as compared to X. spekei. However, in terms of the number and length, there is a variation in the three species.
Microsatellites are small repeating units (1-6 nucleotide) within a genome nucleotide sequence (Shukla et al., 2018). They exhibit polymorphism and are usually dominantly expressed at the species level hence used as DNA markers for the population and evolutionary studies (Vendramin et al., 1999;Deguilloux et al., 2004;Piya et al., 2014;Redwan et al., 2015;Gao et al., 2018). In this study, we analyzed the presence, type, and allocation of SSRs in the cp genomes of A. bracteata, X. spekei, and X. viscosa. Mono-, di-, tri-, tetra-, and hexa-nucleotides types of SSRs were detected in the chloroplast genome of the three species. 90 SSRs were detected in A. bracteata cp genome. Comparably, 59 and 53 microsatellites were revealed in X. spekei and X. viscosa, respectively ( Figure 3A). The mono-nucleotide repeats reported 58.91% of the total SSRs which made them the most abundant type of SSRs within the three species' cp genomes. Their numbers vary from 52 in A. bracteata, 36 in X. spekei and 31 in X. vsicosa, followed by tetra-nucleotide repeats (17.82%), di-nucleotide repeats (13.37%), tri-nucleotide repeats (6.93%). Penta-nucleotide repeats (2.97%) were the least abundant and were only present in the species A. bracteata. The genes within the chloroplast genomes are always highly conserved, however, the microsatellite abundance varies among the species . A/T mononucleotide repeats were highest in number in all the cp genomes of the three species (Figure 3B). Our findings are similar to other studies that show that A/T repeats were the most abundant . However, this varies among species with other studies recording di-nucleotides and tri-nucleotides as most abundant (Wang et al., 2017;Xie et al., 2018). Thus, this shows that SSRs are vital for understanding intrageneric and intergeneric variations within A. bracteata and its close relatives in Africa and South American species.
Our results show that SSRs within these chloroplast genomes are mostly comprised of poly-adenine (Poly-A) and polythymine (Poly-T) repeats. Hence, they contribute much to the AT abundance of the three species cp genome. The coding sequences also had SSRs mostly composing of the mono-nucleotide A/T which accounts for only 9.9%. This means that SSRs are mostly located in the non-coding regions. This trends have been shown in previous several studies (Rajendrakumar et al., 2007;Gandhi et al., 2010). These SSRs can be used to develop specific markers, which can be key in the study of systematics and evolution of the family.
Codon Usage
Codon usage is an essential feature for gene expression in both eukaryotes and prokaryotes genomes due to its strong correlation to protein and mRNA levels genome-wide (Lyu and Liu, 2020). It is the fact that different organisms vary in their synonymous codons rates of occurrences in their protein-coding sequences, meaning that some codons are rarely used while other codons are frequently used in a particular organism. Based on the proteincoding genes in the three species: A. bracteata, X. spekei, and X. viscosa, 51,281, 51,745, and 51,832 codons, respectively, were identified. Methionine and Tryptophan amino acids are encoded by a single codon. Other amino acids showed obvious codon usage bias. On average, the most abundant amino acids in the three species were leucine (A. bracteata 51341; 10.01%, X. spekei 5260; 10.17%, X. viscosa 4953; 9.56%) whereas the least abundant amino acid was Cysteine (A. bracteata 1144; 2.23%, X. spekei 1073; 2.07%, X. viscosa 1154; 2.23%). The relative synonymous codon usage (RSCU) analysis showed that A. bracteata (32 codons), X. spekei (33 codons), and X. viscosa (33 codons) were >1, indicating a codon bias in the amino acids (Figure 4). Most (28 codons) of these preferred codons in the three species, ended in an A or U. Codon usage bias is a product of selection and mutation factors (Xu et al., 2011;Liu et al., 2018). Hence, the
Nucleotide Diversity (Pi) and Selection Pressure Analysis
The average Pi value of the three species of Velloziaceae was found to be 0.03501. IR regions showed lower nucleotide diversity indicating that they were quite conserved than the LSC and SSC regions (Figure 5). The nucleotide diversity (Pi) showed values ranging from 0.00111 to 0.14000 in the shared protein-coding genes. 20 regions showed values of Pi > 0.1000. These results indicated insignificant variations within these genome regions. However, high variations (Pi value > 0.1000) were found in these regions; psbK-psbI, trnQ-UUG/trnS-GCU, atpA, atpF, rps2, psbD/psbC, ndhK/ndhC, atpB/rbcL, ndhD, ndhG/ndhI, trnA-UGC, ycf1. The Large Single-Copy region (LSC) recorded most of the highly diverse regions.
Non-synonymous/Synonymous mutation ratio (Ka/Ks) is comprehensively effective in the detection of selection pressures in proteins or fragments of DNA sequences in plant species (Wang et al., 2010;Gao L. Z. et al., 2019;Xu and Wang, 2021). It is key in analyzing the evolutionary pressures within the genome. Synonymous substitutions are likely to occur in most protein-coding regions compared to non-synonymous substitutions (Rono et al., 2020). The synonymous substitutions leaves the amino acid unchanged as opposed to the nonsynonymous substitutions which changes the sequence of the amino acid. In the current study, the estimated Ka/Ks ratios of the 73 protein-coding genes of A. bracteata, computed against the close relatives X. spekei and X. viscosa are shown in the line graphs below (Figure 6). The mean Ka/Ks ration for all the genes was 0.26. For protein-coding genes in the A. bracteata plastome, Ka/Ks ratios was majorly between 0 and 1. This suggest that the majority of the genes in the A. bracteata plastome were probably under purifying selection. The ccsA, ndhG, psbL, and ycf2 gene families had higher Ka/Ks ratios compared to most of the rest of the proteincoding genes in the plastome. Two genes (ycf2 and ndhG) showed a Ka/Ks ratio greater than 1 (Supplementary Table 2) in the pairwise comparisons, showing that they may have undergone some evolutionary pressures. Genes with Ka/Ks > 0.5 were ccsA, ndhB, ndhF, psbL, psbN, ycf1, and ycf2 in the FIGURE 6 | The non-synonymous-to-synonymous substitution (Ka/Ks) ratios of 73 coding genes of (A) Acanthochlamys bracteata plastome relative to Xerophyta viscosa and (B) Acanthochlamys bracteata plastome relative to Xerophyta spekei. Note: Y-axis represents the Ka/Ks values (ratios) while the X-axis represents the protein-coding genes within the chloroplast genomes.
computation against X. spekei; whereas in the estimation against X. viscosa, the genes included ccsA, ndhB, ndhF, ndhG, ndhK, psbL, psbN, rpl32, and ycf2 (Supplementary Table 2). In both estimations of the A. bracteata, the least significant Ka/Ks value (0.00) was found mostly in the genes involved in the processes of photosynthesis (psaC, ndhE, petG, psbJ, petL, petN, psaJ, psbC, psbE, psbF, psbI, psbM, and psbT), selfreplicating genes (rpl14, rpl36, rps7, rps12) and hypothetical chloroplast reading frames (ycf3) indicating significant purifying selection. Similar results were reported for other cp genomes (Li et al., 2015). These three species occur in more or less similar habitat growing on inselbergs in China and Africa hence a conjecture that functional genes in the chloroplast genomes played a big role in the adaptations in these strenuous environments.
The same protein-coding genes in the order Pandanales were used to detect sites of positive selection within their genomes. Four models (M0 vs. M3, M1a vs. M2a, M7 vs. M8 and M8a vs. M8) were compared in this analysis. Comparative model of M7 vs. M8 was positive in determining the LRT p value of >0.05 and the strength of positive selection of the genes. Bayes Emperical Bayes (BEB) and naïve empirical Bayes (NEB) analysis was only shown in model M8. Most genes showed no significant positive selection (p-value > 0.05) except eight genes with high posterior probabilities found in the BEB test (rps16, atpF, atpH, rpoC2, psaA,atpB, rbcL, accD) and NEB test (rps16, atpI, rpoC2, psbZ, rps14, ndhC, rbcL, accD) (Supplementary Tables 3, 4). Most of these genes that contained highly positively selected sites were mostly related to functions of photosynthesis and self-replication. Studies have linked these to codon sites with high posterior sites to be under positive selection pressure (Xie et al., 2018). Using K2P model in MEGA, the average interspecific genetic distance in the 77 PCGs for the 11 species in the Order Pandanales was estimated. On average, the K2P interspecific genetic distance was calculated to be 0.0831 (Figure 7 and Supplementary Table 5). The least K2P values were detected in psbL gene (0.0050) and the highest was in the gene ccsA (0.1374).
We compared the border structure of the three cp genomes in detail to identify the IR expansion or contraction (Figure 8). IR regions contained rpl2 and trnH genes. The location of the ndhF gene was in the SSC region, however, in the X. spekei it was located at the border junction of IRb and SSC region. The ycf1 gene is a pseudogene found at the junction of IRa and SSC region. In X. spekei and X. viscosa, the rps19 was located in the LSC region together with rpl22 gene, however, in A. bracteata the rps19 extended into the IRb with a 9 bp. There was a notable difference in the inverted repeat regions of the three species. The distance between psbA and the JLA junction varies from 111 bp to 117 bp. Inverted repeat regions in land plants' cp genomes vary greatly (Khayi et al., 2020). However, studies show that they are the most conserved regions of the chloroplast genome (Asaf et al., 2020). Contraction and expansion of the IR regions leads to the size differences in the Plastomes (Mo et al., 2020). IRs are thought to stabilize the plastome with studies showing that Plastomes that lack one or all IRs are less stable in terms of their genome arrangements than the species genome that have the IRs (Jin et al., 2020a,b).
To understand structural characteristics of the cp genomes of the three species of Velloziaceae, whole sequence alignment was conducted using the annotation of A. bracteata as a reference (Figure 9). These three species all belong to the family Velloziaceae. The gene number, order and orientation were relatively conserved, although some highly divergent regions were found. The results show that all the three species cp genomes were 70% similar. High genetic inconsistency, however, occurred in the single-copy (LSC and SSC) regions compared to the IR regions. Similarly, non-coding regions also had higher gene variations than in the coding regions. The LSC and SSC regions are more divergent than the two IR regions. In addition, within the LSC and the SSC regions, the noncoding regions are more divergent than the coding regions. This kind of phenomenon has been shown in other studies . The most highly divergent regions include psbI-trnS(GCU), trnS(GCU)-trnG(UCC), ycf3-trnT(UGU), matK, psbK, ycf2, ndhF, rpl32, ycf1, ndhE, ndhD, ndhA, rps4, trnH-psbA, trnG-psaA, atpB-rbcL, and ndhF-rpl23. The IR regions were highly conserved in terms of gene order and abundance. However, at the border of the IR and single-copy regions there were notable significant differences. Hence, Velloziaceae Plastomes were quite well conserved, with few variations detected (Figure 9). Variations in the size of the genome, expansion and contraction of the IR junctions were the major differences in the 3 cp genomes structure. DNA barcodes are sections of DNA sequences with a high mutation rate that can be useful to identify a species in a given taxonomic group (Rousseau-Gueutin et al., 2015;Xu et al., 2015;Zhou et al., 2016). These major regions of variations can be important markers for barcoding and studies on the evolution within the species of Velloziaceae.
Phylogenetic Analysis
Plastomes are important for explaining intra-and interspecific evolutionary histories with recent studies showing significant power in phylogenetic, evolution, and molecular systematic studies (Asaf et al., 2018). The cp genomes that have sufficient variable sites have shown to be useful in solving the phylogenetic relationships (Ma et al., 2014;Carbonell-Caballero et al., 2015). The phylogeny of the Velloziaceae was reconstructed using 59 shared protein-coding genes from the orders; Pandanales, Liliales and Dioscoreales, to evaluate the position of Acanthochlamys and the Xerophyta species. The three selected orders (Liliales, Pandanales, and Dioscoreales) were clustered into three different clades. In the reconstructed phylogeny (Figure 10), all the families for the order Pandanales were Cyclanthaceae, Pandanaceae, Stemonaceae, and Velloziaceae with exception of Triuridaceae which has lost most of its genes over time. The families in the Order Pandanales form a monophyletic group at this current taxon sampling analysis. Acanthochlamys bracteata was the early diverged in the family Velloziaceae based on this analysis because it was sister to the rest of the Velloziaceae species. Xerophyta spekei was sister to the clade that consisted of Xerophyta elegans and Xerophyta viscosa with a high support (100). Barbacenia involucrata L.B.Sm. was sister to the group [Vellozia sp. + Barbaceniopsis castillonii (Hauman) Ibisch] which was different to the previous analysis that used three genes (Soto Gomez et al., 2020). However, the tree topology of the Velloziaceae clade was similar to the same previous study that performed the analysis using a 12-gene mitochondrial data set. Pandanaceae and Cyclanthaceae clustered together and this clade was sister the clade formed by Velloziaceae and Stemonaceae. The phylogenetic relationship of the taxa in this study was consistent to the previous study that combined 18S rDNA (mitochondrial), Nuclear atpA, matR, and nad1bc intron dataset (Mennes et al., 2013) and showed that Velloziaceae was sister to the other families in the order Pandanales, whereas, Stemonaceae was related to the clade (Pandanaceae + Cyclanthaceae; which are sisters). However, in our study we excluded family Triuridaceae from the analysis due to the huge number of genes lost from its cp genome. This phylogenetic tree topology therefore showed a close relationship between the taxa. Furthermore, the three species of Velloziaceae analyzed, A. bracteata and the two Xerophyta species (X. spekei, and X. viscosa) clustered together in the same clade, hence revealing a closer relationship of these species. This is similar to an early phylogenetic inference of the family Velloziaceae using the Chloroplast trnL-F sequence which supported a close relationship between Acanthochlamys and other Velloziaceae hence its inclusion into the family (Salatino et al., 2001). From this phylogenetic analysis, shared genes could be as well provide more reliable phylogenetic insights for species that have undergone genome-wide rearrangement and gene losses. The phylogenetic analyses done so far are largely increasing our understanding of the evolutionary relationship among species in Velloziaceae. Although our results clarified the phylogenetic relationships of the seven Velloziaceae together with species of Order Pandanales, more cp genome of the family and Order need to be sequenced and analyzed to completely understand the phylogeny. Low taxon sampling may produce inconsistencies in the topology of the tree (Leebens-Mack et al., 2005).
Morphological Comparison
Integrating morphology into phylogenetic analyses is important as it reveals suites of phenotypic novelties that characterize molecular classification hence assists systematists come up with species and clades (Lee and Palci, 2015). In this section we review the morphological importance as specifically used in the classification of A. bracteata P.C. Kao. Velloziaceae are xeromorphic and sometimes tree-like monocots with persistent leaf-sheath (Stevens, 2002). Despite the discordance in the treatment of the family Velloziaceae, morphological characters have always provided a foundation in the classification of the family. Velloziaceae are xerophytes adapted to inselbergs which generally favors their endemism (Behnke et al., 2013). The taxonomic history of Velloziaceae is linked mainly to its floral characters, stamen and stigma (Mello-Silva et al., 2011), however, this was misleading because of its variations among the taxa. The anatomical characters that were first used to classify A. bracteata were very unique among other monocotyledonous plants including the eustele in rhizome, protostele in root and leaf-stem compound structure in scape. Due to this unique structure in the vascular bundles, it was classified in a separate family Acanthochlamydaceae. However, basing on the morphological and molecular phylogenetic studies, even in this present study molecular analysis, shows a close relationship to the Velloziaceae. This in turn brought into lime-light the close relationship between the Handgun mountains and the African tropical regions, and its classification into family Velloziaceae. Acanthochlamys is clearly sister to but morphologically and anatomically different from the rest of the family, with exception of its sieve tube plastids which seem rather similar. The family Velloziaceae is supported currently by at least four-character states: Persistent leaves, presence of abscission zone, two phloem strands and violet tepals (Mello-Silva et al., 2011). In this section, the morphological characters summary among the African genera and the Asian genera are summarized in Table 4 above. Morphological summary of the three species of the family, despite their geographical occurrence differences, fit the classification into similar taxa.
CONCLUSION
The complete chloroplast genome of the A. bracteata was reported and comparative and phylogenetic analyses with the two species from genus Xerophyta revealed similarities in their genomic structure and composition. Additionally, it provided valuable genetic information for further studies on the three species, A. bracteata, X. spekei, and X. viscosa, in terms of chloroplast sequence variations, assembly and evolution.
Valuable genetic resources such as SSRs, large repeats and variable loci can be used as genetic markers important for barcoding. Additionally, to understand the sequence divergences in terms of phylogeny, the genetic markers can perhaps be used in the phylogenetic tree analysis upon further analyses. Furthermore, since these species are desiccation and drought tolerant, the genetic markers can be used in the agricultural sector with broad studies on the species compatibility in breeding research.
GENOME SEQUENCE DATA
The complete chloroplast sequences Acanthochlamys bracteata and other species used in the study are available in GenBank, https://www.ncbi.nlm.nih.gov/(Supplementary Table 6).
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material.
AUTHOR CONTRIBUTIONS
VW, XD, G-WH, Q-FW, and RG participated in design of the study can carried out the experiment. XD and G-WH collected the materials. VW, MO, EM, GO, MG, PK, and J-XY contributed in data analysis and draft manuscript writing. VW, XD, G-WH, MG, MO, and EM revised the draft manuscript. All the authors read and approved the final version of the manuscript. | 7,848.4 | 2021-06-14T00:00:00.000 | [
"Biology"
] |
Radiative corrections to the lepton flavor mixing in dense matter
One-loop radiative corrections will lead to a small difference between the matter potentials developed by $\nu_\mu^{}$ and $\nu_{\tau}$ when they travel in a medium. By including such radiative corrections, we derive the exact expressions of the corresponding effective mass-squared differences and the moduli square of the lepton flavor mixing matrix elements $|\widetilde U_{\alpha i}^{}|^2$ (for $\alpha=e,\mu,\tau $ and $i=1,2,3$) in matter in the standard three-flavor mixing scheme and focus on their asymptotic behaviors when the matter density is very big (i.e., the matter effect parameter $A\equiv 2\sqrt{2} G_{\rm F}^{} N_e^{}E$ is very big). Different from the non-trivial fixed value of $|\widetilde U_{\alpha i}^{}|^2$ in the $A\to \infty$ limit in the case without radiative corrections, we get $|\widetilde U_{\alpha i}^{}|^2=0~{\rm or}~1$ under this extreme condition. The radiative corrections can significantly affect the lepton flavor mixing in dense matter, which are numerically and analytically discussed in detail. Furthermore, we also extend the discussion to the $(3+1)$ active-sterile neutrino mixing scheme.
Introduction
In 1978, Wolfenstein firstly pointed out that when neutrinos travel in matter, the coherent forward scattering of them with electrons and nucleons must be considered and the induced matter potentials will change the neutrino oscillation behaviors [1]. In 1985, Mikheev and Smirnov put forward that the effective mixing angle can be significantly amplified in matter (such as inside the sun) even if the corresponding mixing angle in vacuum is small. This is the wellknown Mikheev-Smirnov-Wolfenstein (MSW) effects, which successfully explain the flavor conversion behaviors of solar neutrinos in the sun [2]. Such matter effects have been proved very important in a number of reactor, solar, atmospheric, accelerator neutrino oscillation experiments aiming to accurately extract the intrinsic neutrino oscillation parameters in vacuum [3]. A lot of efforts have been made to make the neutrino oscillation probabilities in matter more intuitive [4][5][6][7][8][9][10][11][12][13][14][15][16][17]. The language of renormalization-group equation was also introduced to describe the effective neutrino masses and flavor mixing parameters in matter [18][19][20][21]. In this paper, we mainly focus on the neutrino flavor mixing in very dense matter, which has been discussed in Refs. [22][23][24][25]. We further include the radiative corrections in this connection.
In the standard three-flavor mixing scheme, the Hamiltonian responsible for the propagation of neutrinos in matter can be expressed as where E is the neutrino beam energy, m i (for i = 1, 2, 3) and U stand respectively for neutrino masses and Pontecorvo-Maki-Nakagawa-Sakata (PMNS) lepton flavor mixing matrix, m i (for i = 1, 2, 3) and U denote effective neutrino masses and PMNS matrix in matter, respectively, and V α (α = e, µ, τ ) represent the matter potentials arsing from charged-and neutralcurrent coherent forward scattering of ν α with electrons, protons and neutrons in matter. Considering the one-loop radiative corrections to V α , we have [26] where G F is the Fermi coupling constant, α is the fine-structure constant; N e , N p and N n denote the number density of electrons, protons and neutrons in matter, respectively; m τ and m W are the masses of τ lepton and W boson, respectively; and sin 2 θ W ≡ 1 − m 2 W /m 2 Z with m Z being the Z boson mass. To be more intuitive, Eq. (1) can be rewritten as where ∆ ij ≡ m 2 i −m 2 j , ∆ ij ≡ m 2 i − m 2 j , A = 2 √ 2G F N e E, B = m 2 1 −m 2 1 −2EV µ , and I denotes a 3 × 3 identity matrix. According to Eq. (2) and assuming N e = N p = N n , ǫ ≃ 5 × 10 −5 is a small quantity but matters a lot in dense matter (i.e., A is very big).
On the other hand, given the implications of extra light sterile neutrinos in short-baseline neutrino oscillation experiments [27], we extend our discussion to the scheme of (3 + 1) flavor mixing with one more sterile neutrino ν s . The corresponding Hamiltonian describing the propagation of neutrinos in a medium turns out to be where V and V denote the 4 × 4 lepton flavor mixing matrix in vacuum and matter, respectively, A ′ = −2EV µ = √ 2G F N n E, ∆ 41 = m 2 4 − m 2 1 , and ∆ 41 = m 2 4 − m 2 1 with m 4 and m 4 being the sterile neutrino mass in vacuum and matter, respectively. The existence of an extra sterile neutrino can make the neutrino flavor mixing in dense matter very different from the standard three-flavor mixing scheme.
The remaining parts of this paper are organized as follows. In section 2, we include the radiative corrections and derive the corresponding expressions of ∆ ij , | U αi | 2 and U αi U * βi in matter in the standard three-flavor mixing scheme. The asymptotic behaviors of ∆ ij and | U αi | 2 and neutrino oscillations in dense matter are analytically and numerically investigated in detail. In section 3, we extend our discussion to the (3 + 1) flavor mixing scheme. Section 4 is devoted to a brief summary.
The standard three-flavor mixing scheme
In the standard three-flavor mixing scheme, the exact formulas of ∆ ij without radiative corrections have been given in Refs. [28][29][30]. Considering radiative correction effects, we derive the eigenvalues of H m in Eq. (3) and express the two independent effective masssquared differences ∆ ij (for ij = 21, 31) as for the case of normal mass ordering (NMO) with m 1 < m 2 < m 3 ; or for the case of inverted mass ordering (IMO) case with m 3 < m 1 < m 2 , where The unitarity conditions of U and the sum rules derived from H m and H 2 m , constitute a set of linear equations of U αi U * βi (for α, β = e, µ, τ and i = 1, 2, 3): where A αβ stand for the (α, β) element of the matter potential matrix A ≡ Diag{A, 0, Aǫ}. Taking α = β and solving Eq. (9), we obtain with where α = e, µ, τ and i, j, k = 1, 2, 3. Similarly, in the case of α = β, U αi U * βi can be derived from Eq. (9): with where (α, β, γ) run over (e, µ, τ ) and n = m = 1, 2. Note that U α1 U * β1 , U α2 U * β2 and U α3 U * β3 for α = β constitute the effective Dirac leptonic unitarity triangle in the complex plane. From Eq. (12), it is straightforward to check that the Naumov relation J ∆ 21 ∆ 31 ∆ 32 = J ∆ 21 ∆ 31 ∆ 32 [31] still holds, where J and J are the Jarlskog invariants [32] in vacuum and in matter, respectively, with ε αβγ and ε ijk being three-dimension Levi-Civita symbols. The only difference due to the radiative corrections in the exact formulas of ∆ ij , | U αi | 2 and U αi U * βi above is the appearance of the term A τ τ = Aǫ. By setting ǫ = 0, one can turn off the radiative corrections and get the corresponding expressions of ∆ ij , | U αi | 2 and U αi U * βi in the previous literature [30,[33][34][35]. With the help of Eqs. (5), (6) and (12), we can directly write out the probabilities of the ν α → ν β (for α, β = e, µ, τ ) oscillations in matter where α, β = e, µ, τ ; i, j = 1, 2, 3; and L is the neutrino oscillation length. Note that the results in Eqs. (5)- (15) are only valid for a neutrino beam. When it comes to an antineutrino beam, we need to do the replacements U → U * and A → −A. According to the exact expressions of ∆ ij , | U αi | 2 and U αi U * βi in Eqs. (5), (6) (10) and (12), we study the neutrino flavor mixing in dense matter in the standard three-flavor mixing scheme. Both neutrinos and antineutrinos with the normal or inverted mass ordering (i.e., cases (NMO, ν), (IMO, ν), (NMO, ν) and (IMO, ν)) will be considered separately. Numerically, we take the standard parametrization of U, and input the best-fit values of (θ 12 , θ 13 , θ 23 , δ, ∆ 21 , ∆ 31 ) in Ref. [36]: Analytically, we treat ∆ 21 /A, ∆ 31 /A and ǫ as small quantities and make perturbative expansions of ∆ ij and | U αi | 2 . Thus the analytical approximations in this section only apply to the range A ≫ ∆ 31 .
(NMO, ν)
Let us first consider the case of a neutrino beam with normal mass ordering. The corresponding evolution of ∆ ij and | U αi | 2 with the matter effect parameter A are illustrated in the upper left panel of Fig. 1 and Fig. 2, respectively. We find that the radiative corrections Figure 1: In the standard three-flavor mixing scheme, the illustration of how the effective neutrino mass-squared differences ∆ 21 and | ∆ 31 | evolve with the matter effect parameter A in the cases with or without radiative corrections, where the best-fit values of (θ 12 , θ 13 , θ 23 , δ, ∆ 21 , ∆ 31 ) in Ref. [36] have been input.
may significantly affect the values of ∆ ij and | U αi | 2 only if A is big enough (for example, A > 1 eV 2 ). This can be revealed more clearly by expanding the exact expressions of ∆ ij and | U αi | 2 in terms of ∆ 21 /A, ∆ 31 /A and ǫ. Only keeping the first order of these quantities, with or without radiative corrections, where the best-fit values of (θ 12 , θ 13 , θ 23 , δ, ∆ 21 , ∆ 31 ) in Ref. [36] have been input.
we simplify ∆ ij in Eq. (5) as According to Eq. (17), it becomes clear that if A is big enough, ∆ 21 will increase with A instead of taking a fixed value in the case without radiative corrections. The reason why the radiation corrections to ∆ 31 are not significant is just that the much smaller Aǫ term appears in the next-to-leading order with the leading order being A. By performing perturbative expansions of Eq. (10) in terms of ∆ 21 /A, ∆ 31 /A and ǫ and only keeping the leading order, | U αi | 2 are approximately expressed as This means that the neutrino flavor mixing in dense matter can be approximately described by only one degree of freedom, as having been pointed out in Ref. [24]: where θ ∈ [0, π/2] and Similarly, the neutrino oscillation probability P αβ in Eq. (15) can be approximately written as where ∆ 21 is taken from Eq. (17) and sin 2 2θ = 4| U µ1 | 2 (1 − | U µ1 | 2 ) with | U µ1 | 2 being taken from Eq. (19). Note that Eq. (22) is similar to Eq. (8) in Ref. [25] except that we include radiative corrections in θ and ∆ 21 . In order to numerically test the accuracies, we define the absolute error of P αβ as ∆ P αβ = |( P αβ ) Exact − ( P αβ ) Approximate |, where ( P αβ ) Exact stand for the exact results of P αβ and ( P αβ ) Approximate represent the approximate results of P αβ . The absolute errors of P αβ in Eq. (22) with different L/E and A/∆ 31 are demonstrated in Fig. 3. Similar to the case without radiative corrections discussed in Ref. [25], the analytical expressions of P αβ in Eq. (22) are accurate enough in most of the parameter space. For the upper left part in each subgraph of Fig. 3, we need to keep higher orders of ∆ 21 /A, ∆ 31 /A and ǫ, or just make perturbative expansions in terms of ∆ 21 /A and ǫ to improve the accuracies of P αβ .
To be more explicit, if the Aǫ term is not bigger than the ∆ 21 term in Eq. (19), we can further simplify | U αi | 2 (for αi = µ1, µ2, τ 1, . This is equivalent to the asymptotic values of | U αi | 2 (for αi = µ1, µ2, τ 1, τ 2) in the A → ∞ limit when radiative corrections are not taken into account (the blue dashed line in Fig. 2). As the increase of A, the Aǫ term in Eq. (19) becomes non-negligible. If the Aǫ term and ∆ 31 term are of the same order, the relation can be derived. This means θ = arctan(| U µ2 |/| U µ1 |) will decrease with the increase of A due to the existence of the radiative correction parameter ǫ. In the A → ∞ limit, it is easy to infer from Eqs. (19) and (21) that | U αi | 2 trivially take 0 or 1 and θ is approaching zero, implying that all the three flavors do not oscillate into one another. Thus it makes no sense to discuss lepton flavor mixing in this extreme case. Considering the four cases (NMO, ν), (IMO, ν), (IMO, ν) and (IMO, ν) separately, we summarize the corresponding analytical expressions of ∆ ij (for ij = 21, 31) and U in the A → ∞ limit in Table 1 while the other three cases will be discussed later.
Given a neutrino beam with inverted mass ordering, the evolution of ∆ ij and | U αi | 2 with A are illustrated in the lower left panel of Fig. 1 and Fig. 4, respectively. Note that there is no intersections between ∆ 21 and ∆ 31 in cases (IMO, ν) and (IMO, ν) in Fig. 1 with | ∆ 31 | = − ∆ 31 being shown in fact. In the case (IMO, ν), we also note that ∆ 21 > | ∆ 31 | holds when the matter effect parameter A is big enough. Analytically, expanding Eq. (6) in ∆ 21 /A, ∆ 31 /A and ǫ directly leads to with ξ being defined in Eq. (18). Consistent with Fig. 1, ∆ 31 approaches −Aǫ instead of a constant value in the A → ∞ limit. To understand the asymptotic behaviors of | U αi | 2 in the A → ∞ limit shown in Fig. 4, we expand Eq. (10) and get So one can use only one parameter to approximately describe lepton flavor mixing, 22), where the best-fit values of (θ 12 , θ 13 , θ 23 , δ, ∆ 21 , ∆ 31 ) in Ref. [36] and ǫ ≃ 5 × 10 −5 [26] have been input.
where θ ∈ [0, π/2] and The analytical approximations of P αβ in Eq. (15) turn out to be where ∆ 31 comes from Eq. (24) and sin 2 2θ = 4| U µ1 | 2 (1 − | U µ1 | 2 ) with | U µ1 | 2 coming from Eq. (25). For simplicity, we do not show the accuracies of Eq. (28), which are very similar to the case (NMO, ν) in Fig. 3. Comparing Eq. (28) with Eq. (22), we find that it is impossible to discriminate the normal mass ordering from the inverted mass ordering from neutrino oscillations if the matter density is very big. We also notice that if the ∆ 21 term and Aǫ term in Eq. (25) are of the same order, one can omit them and get . This corresponds to the fixed values of | U αi | 2 (for αi = µ1, µ3, τ 1, τ 3) in the A → ∞ limit if radiative corrections are not taken into account (the blue dashed line in Fig. 4). As the increase of A, the Aǫ term will gradually dominate and the neutrino oscillation behaviors can be very sensitive to A. In the limit of A → ∞, θ approaches π/2 and there will be no neutrino oscillation phenomenon.
(NMO, ν)
Considering an antineutrino beam with normal mass ordering, we make the replacements A → −A and U → −U * in Eqs. (5) and (10), and draw the corresponding evolution of ∆ ij and | U αi | 2 with A in the upper right panel of Fig. 1 and Fig. 5, respectively. Note that we always have ∆ 31 > ∆ 21 in this case although the difference between them is too small to be shown clearly in Fig. 1 if A is big enough. The radiative corrections to both ∆ 21 and ∆ 31 are very small, which can be analytically understood. By performing perturbative expansions, ∆ ij (for ij = 21, 31) in Eq. (5) are reduced to From Eq. (29), it is clear that the leading order of ∆ 21 and ∆ 31 is A and the radiative corrections in the next-to-leading order do not matter a lot. The only difference between Eq. (30) (the expression of ξ in the case (NMO, ν)) and Eq. (18) (the expression of ξ in the case (NMO, ν)) is the sign of ǫ. Similarly, | U αi | 2 can be expanded as namely, with or without radiative corrections, where the best-fit values of (θ 12 , θ 13 , θ 23 , δ, ∆ 21 , ∆ 31 ) in Ref. [36] have been input.
If A is small enough, we can ignore the smaller terms of ∆ 21 and Aǫ in Eq. (31), and This is consistent with the fixed values of | U αi | 2 (for αi = µ2, µ3, τ 2, τ 3) in the A → ∞ limit if radiative corrections are not included (the blue dashed line in Fig. 5). If the term Aǫ too big to be abandoned, the neutrino flavor mixing can be significantly affected by A. In the A → ∞ limit, | U αi | 2 trivially take 0 or 1 and θ approaches π/2, leading to no neutrino oscillations.
(IMO, ν)
Similarly, in the case (IMO, ν), i.e. an antineutrino beam with inverted mass ordering, we illustrate the evolution of ∆ ij and | U αi | 2 in the lower right panel of Fig. 1 and Fig. 6, respectively. Through making perturbative expansions, ∆ ij and | U αi | 2 can be approximately expressed as Figure 6: In the standard three-flavor mixing scheme, the illustration of how | U αi | 2 (for α = e, µ, τ and i = 1, 2, 3) evolve with the matter effect parameter A in the case (IMO, ν) with or without radiative corrections, where the best-fit values of (θ 12 , θ 13 , θ 23 , δ, ∆ 21 , ∆ 31 ) in Ref. [36] have been input. and where ξ has been defined in Eq. (30). From either Fig. 1 where θ ∈ [0, π/2] and The corresponding analytical approximations of P αβ are the same as Eq. (22) . This coincides with the fixed values of | U αi | 2 (for αi = µ1, µ2, τ 1, τ 2) in the A → ∞ limit if radiative corrections are not included (the blue dashed line in Fig. 6). If the term Aǫ is too big to be omitted, it will affect the neutrino flavor mixing a lot. In the A → ∞, θ approaches π/2 and no neutrino oscillations between ν e , ν µ and ν τ will happen.
Specifically, if Aǫ is negligible in Eqs. (54) and (57), one may abandon the smaller terms of ∆ 21 and ∆ 31 , and get where |V µ4 | 2 = cos 2 θ 14 sin 2 θ 24 , |V τ 4 | 2 = cos 2 θ 14 cos 2 θ 24 sin 2 θ 34 from the parametrization of V in Eq. (53). This is equivalent to the asymptotic behaviors of ∆ 21 and | V αi | 2 (for αi = µ1, µ2, τ 1, τ 2) in very dense matter in the case without radiative corrections (i.e. the blue dashed lines in Fig. 8). Due to the typical value θ 34 = 0 inputted in Fig. 8, we get By choosing a non-zero value of θ 34 , the neutrino flavor mixing can be very different. With the increase of A, the Aǫ term in Eqs. (54) and (57) will become dominate. In the A → ∞ limit, With | V αi | 2 taking 0 or 1 (or θ s → 0 or π/2), there will be no neutrino oscillation between the four flavors and it makes no sense to discuss lepton flavor mixing.
Summary
With the coming of the precision measurement era of neutrino physics, we are committed to digging the underlying physics behind the lepton flavor mixing [40] and on the other hand to conducting cosmological and astronomical researches with neutrinos being a good probe. As preliminarily discussed in Ref. [25], it is possible to explore the density and size of a hidden compact object in the universe by observing its effects on the neutrino flavor mixing. In this paper, we point out that radiative corrections to the matter potentials can significantly affect the neutrino flavor mixing in dense matter. Considering the standard three-flavor mixing scheme with radiative corrections, we derive the exact expressions of the effective neutrino mass-squared differences ∆ ij , the moduli square of the nine lepton flavor mixing matrix elements | U αi | 2 , the vector sides of the Dirac leptonic unitarity triangles U αi U * βi in a medium. From these exact formulas, the neutrino flavor mixing in dense matter are numerically and analytically discussed. Different from the fixed value of | U αi | 2 in dense matter in the case without radiative corrections, | U αi | 2 can be very sensitive to the value of A and trivially approach 0 or 1 in the A → ∞ limit if radiative corrections are taken into account. When it comes to the (3 + 1) flavor mixing scheme, the neutrino flavor mixing will be very different from the standard three-flavor scheme if A is big enough but not infinite. However it is meaningless to discuss the lepton flavor mixing in both schemes in the A → ∞ limit. | 5,349.4 | 2020-02-01T00:00:00.000 | [
"Physics"
] |
Impulsive Disturbances on the Dynamical Behavior of Complex-Valued Cohen-Grossberg Neural Networks with Both Time-Varying Delays and Continuously Distributed Delays
1Key Laboratory of Fluid and Power Machinery, Ministry of Education, Xihua University, Chengdu 610039, China 2Key Laboratory of Automotive Measurement, Control and Safety, Sichuan Province, Xihua University, Chengdu 610039, China 3National Traction Power Laboratory, Southwest Jiaotong University, Chengdu 610031, China 4School of Technology, Xihua University, Chengdu 610039, China 5Institute of Information Research, Southwest Jiaotong University, Chengdu 610031, China
Introduction
Recently, several kinds of complex-valued neural networks have been proposed and attracted researchers' attention due to the broader range of their applications in electromagnetics, quantum waves, optoelectronics, filtering, speech synthesis, remote sensing, signal processing, and so on [1].Complexvalued neural networks are quite different from real-valued neural networks, and they are not only the simple extension of real-valued systems due to two main aspects.Complexvalued neural networks have more complicated properties than real-valued neural networks because the neuron states, connection matrices, self-feedback functions, and activation functions are all defined in a complex number domain, which is the reason why existing research methods applied for studying the dynamical behavior of the real-valued neural networks cannot be used to study the complex-valued neural networks directly.Besides, complex-valued neural networks can solve some problems that cannot be solved with their realvalued counterparts.For example, the exclusion OR (XOR) problem and the detection of symmetry problem cannot be solved with a single complex-valued neuron with the orthogonal decision boundaries, which reveals the patent computational power of complex-valued neurons [1,2].
Due to the finite switching speed of amplifiers, time delay inevitably exists in neural networks.It can cause oscillation 2 Complexity and instability behavior of systems.As pointed out in [9], constant fixed time delays in the models of delayed feedback systems serve as a good approximation in simple circuits having a small number of cells.Although we consider that the time delays arise frequently in practical systems, it is difficult to measure them precisely.Up until now, there have been some results about the stability of complex-valued neural networks with time-varying delay (see [6,7,13,19,22,23] and the references therein).It is well known that a neural network usually has a spatial nature due to the presence of an amount of parallel pathways of a variety of axon sizes and lengths; it is desired to model them by introducing continuously distributed delays over a certain duration of time such that the distant past has less influence compared with the recent behavior of the state [2].Therefore, it is necessary and accepted to study a neural networks model with both time-varying delays and continuously distributed delays, such as in [2,9,10,15,[24][25][26].The existing results on the dynamical behavior analysis for the neural networks models with the above mixed delays were mainly with respect to real-valued neural networks.Xu et al. [9] considered a class of complex-valued Hopfield neural networks with mixed delays and obtained some sufficient conditions for ensuring the existence, uniqueness, and exponential stability of the equilibrium point of the system.Song et al. [2] investigated the stability problem for a class of impulsive complex-valued Hopfield neural networks with the above mixed delays.
The impulsive disturbances are also likely to exist in the system of neural networks and can affect the dynamical behaviors of the system states, the same as the time delays effect.For instance, in the implementation of electronic networks, the states of the neural networks are subject to instantaneous perturbations and experience abrupt changes at certain instants, which may be caused by the switching phenomenon, frequency change, or other sudden noises.This phenomenon of instantaneous perturbations to the system exhibits an impulsive effect [2,12,13,[27][28][29].The authors in [10] investigated the stability problem for a class of impulsive complex-valued neural networks with both timevarying delays and distributed delays.By applying the vector Lyapunov function method and the mathematical induction method, some sufficient conditions were obtained for judging the exponential stability of the systems and showed the impulsive disturbance on the convergence rate of the system.In [29], the authors considered a class of fractional-order complex-valued neural networks with constant delay and impulsive disturbance.By using the contraction mapping principle, comparison theorem, and inequality scaling skills, some sufficient conditions were obtained for ensuring the existence, uniqueness, and global asymptotic stability of the equilibrium point of the system.
The model of Cohen-Grossberg neural networks was proposed by Cohen and Grossberg in 1983 [30].It has been widely applied within various engineering and scientific fields such as neurobiology, population biology, and computing technology [31,32].Very recently, the authors in [21] considered a class of complex-valued Cohen-Grossberg neural networks with constant delay and studied the global asymptotic stability by separating the model into its real and imaginary parts.As pointed out in [2,7], this required the existence, continuity, and boundedness of the partial derivatives of the activation functions about the real and imaginary parts of the state variables, which impose restrictions on the applications of the obtained results.Under the assumption that the activation functions only need to satisfy the Lipschitz condition, the authors in [32] removed the mentioned restrictions on the activation functions and studied a class of Cohen-Grossberg complex-valued neural networks with only time-varying delays.As far as we know, there is no result related to the dynamical behavior analysis for complex-valued Cohen-Grossberg neural networks with mixed delays and impulsive disturbances.
Based on the above analysis, in this paper, we will investigate the dynamical behavior for a class of impulsive complex-valued Cohen-Grossberg neural networks with time-varying delays and continuously distributed delays.In this paper, advantages and contributions can be listed as follows.
(1) Both impulsive disturbances and mixed delays are considered in the complex-valued Cohen-Grossberg neural networks.The studied model is more universal than the existing ones.(2) The activation functions have not been separated into their real and imaginary parts, and the selffeedback functions are nonlinear functions.(3) Different from the study method in [7], the existence and uniqueness of the equilibrium point of the system are analyzed by using the corresponding property of -matrix and the theorem of homeomorphism mapping.(4) The sufficient conditions established by the vector Lyapunov function method to ensure the global exponential stability of equilibrium point are expressed in terms of simple forms of inequalities, which are easy to be checked in practice.
Model Description and Preliminaries
To make reading easier, the following notations will be used throughout this paper.Let C denote a complex number set, N denote a natural number set, and R denote a real number set.Let || = √(Re()) 2 + (Im()) 2 be the module of complex number, where Re() and Im() are the real part and the imaginary part of the complex number , respectively.For complex number vector z ∈ C , let |z| = (| 1 |, | 2 |, . . ., | |) be the module of the vector z; here, (⋅) denotes the transpose of vector.Let ‖z‖ ∞ = max 1≤≤ {| |} and ‖z‖ 1 = ∑ =1 | | be the ∞-norm and 1-norm of the vector z, respectively.
Remark 3. The authors in [21,32] have studied a class of complex-valued Cohen-Grossberg neural networks and obtained some important stability results.It is supposed that the self-feedback functions are linear functions in [21,32].That is to say, the self-feedback functions are supposed to be () with > 0, = 1, 2, . . ., .Obviously, Assumption 2 in this paper is with more generality than that in the mentioned papers.Remark 5.The choice of the activation function is the main challenge in the dynamical behavior analysis of complexvalued neural networks compared to the study of real-valued neural networks.The complex-valued activation functions were supposed to need explicit separation into a real part and an imaginary part in [3,5,[8][9][10]18].However, this separation is not always expressible in an analytical form.In this paper, the complex-valued activation functions only need to satisfy the Lipschitz condition.Assumption 6.It is supposed that the amplification function ℎ ( ()) is with a lower boundary; that is to say, there exists a positive real number such that the inequality ℎ ( ()) ≥ > 0 holds, = 1, 2, . . ., .
Remark 7. The amplification functions of complex-valued Cohen-Grossberg neural networks were supposed to be with upper bounds and lower bounds in [21].Besides, the authors assumed that the amplification functions needed to be separated into real parts and imaginary parts.Details can be found in Assumption 6 in [21].In this paper, the complexvalued amplification functions only need to be with lower bounds.
Remark 9.The impulsive effect is introduced into (1) as the disturbance to the system, which results in the negative impact on the convergence speed of the equilibrium point of the system.The low bound may not be discussed in this case.If the impulsive disturbance is so weak that | ( + )| < | ( − )|, then it will lead to a positive impact on the convergence speed of the equilibrium point of the system.This means the convergence rate of the equilibrium point of the system with impulsive disturbances will be faster than that of the system without them.Therefore, only the upper bound for the impulsive intensity is discussed in this paper.
To proceed with our results, we quote the following lemmas in the proof of the theorems in this paper.
(iii) There exists a positive vector ∈ R such that A > 0.
Lemma 11 (see [10]).If H(z) is a continuous function on C and satisfies the following conditions:
Proof.The proof of the theorem is separated into two steps.
Step 1. Firstly, the existence and uniqueness of the equilibrium point z # of (1) will be proved by using the corresponding properties of homeomorphism and -matrix.Define a map H(z) = [ 1 (z), 2 (z), . . ., (z)] associated with (1) with the following forms: It is well known that if H(z) is a homeomorphism on C , then (1) has a unique equilibrium point z # obviously.
A We prove that the map H(z) is univalent injective on C under Assumptions 2 and 4.
From inequalities (3), it can be concluded that the following inequalities hold: According to Lemma 10, we can obtain that the matrix Q is -matrix, where = , = 1, 2, . . ., .Moreover, because inequalities (5) hold, we know that there exists a sufficient small positive number > 0 such that the following inequalities hold: It is assumed that there exist u, v ∈ C with u ̸ = k, such that (u) = (v), = 1, 2, . . ., ; that is, Taking absolute value on both sides of (7) Furthermore, inequalities (9) can be rewritten as follows: Obviously, Q|u−k| ≤ 0. Because Q is an -matrix, we get det Q > 0 and Q −1 exists.Furthermore, it can be concluded that |u − k| = 0 (i.e., u = k).To sum up, the map H(z) is univalent injective on C , = 1, 2, . . ., .
Let H (z) = (z) − (0), where (0 Multiplying by the conjugate complex number of on both sides of (11), we get Taking the conjugate operation on both sides of (12), we have Taking the summation operation on both sides of ( 12) and ( 13) and considering Assumptions 2 and 4, we obtain Namely, Combining A and B above, we know that H(z) is a homeomorphism on C .So, (1) has a unique equilibrium point z # .
Step 2. In this section, the global exponential stability of the equilibrium point z # under impulsive disturbances will be proved by applying the vector Lyapunov function method and the mathematical induction method.
It is assumed that the following inequalities hold: When = , according to Assumption Due to ≥ 1, inequalities (24) can be changed into the following forms: Furthermore, we can conclude that the following inequalities hold: If inequalities (27) This is a contradiction with the assumption + ( ) ≥ 0. So, inequalities (24) hold.
Based on the idea of the mathematical induction method, we have It follows from the condition of the theorem = lim →∞ sup(2 ln /( − −1 )) that ≤ exp(0.5( − −1 )), ∈ .Substituting it into inequalities (30), we obtain Furthermore, we have where Γ = √ 2 max / min .According to Definition 1, the zero solution of system ( 17) is globally exponentially stable.That is to say, the equilibrium point z # of system ( 1) is also globally exponentially stable.
To sum up, it can be concluded from Steps 1 and 2 that system (1) has a unique equilibrium point z # , and the equilibrium point is globally exponentially stable with exponential converge rate 0.5( − ).The proof is completed.Remark 13.Although there have been various methods for studying the diverse complex-valued neural networks, the scalar Lyapunov function method combined with the LMI method is nearly the most popular method to research the stability problem and synchronization problem (see [3,4,11,13,20,29,33]).The continuously distributed delays were not considered in the mentioned references.Mixed time delays in the model of complex-valued neural networks make the system become an infinite-dimensional interconnected system.Using the vector Lyapunov function method used in this paper can avoid discussing the convergence of the candidate scalar Lyapunov function, which is extremely difficult to prove in most cases.
From Theorem 12, we can directly obtain corresponding corollaries for guaranteeing the existence, uniqueness, and global exponential stability of the equilibrium point of system (33) and system (35) described as follows.
If there are no continuously distributed delays in (1), that is, when P = 0, the corresponding system is with the following forms: Then, (33) has a unique equilibrium point z # for arbitrary external input J ∈ C , and the equilibrium point z # is globally exponentially stable with the convergence rate 0.5( − ).
Similarly, if there are no time-varying delays in (1), that is, when B = 0, the corresponding system is with the following forms: Then, (35) has a unique equilibrium point z # for arbitrary external input J ∈ C , and the equilibrium point z # is globally exponentially stable with the convergence rate 0.5( − ).
When ℎ ( ()) = 1 in system (1), model ( 1) is changed into impulsive complex-valued Hopfield neural networks with time-varying delays and continuously distributed delays, which can be described as follows: It is easy to obtain sufficient conditions for ensuring the existence, uniqueness, and global exponential stability of the equilibrium point of system (37).
Corollary 16.Suppose that Assumptions 2, 4, and 8 are satisfied.If there exist positive constants > > 0 and a series of positive constants such that the following inequalities hold, here = lim →∞ sup(2 ln /( − −1 )), ∈ , Then, (35) has a unique equilibrium point z # for arbitrary external input J ∈ C , and the equilibrium point z # is globally exponentially stable with the convergence rate 0.5( − ).
The same as the preceding analysis, we can easily obtain the corresponding criteria for guaranteeing the stability of the equilibrium point of impulsive complex-valued Hopfield neural networks with only time-varying delays or continuously distributed delays.Therefore, we omit the similar works here.
When there is no impulsive disturbance in model ( 1), the complex-valued Cohen-Grossberg neural networks with time-varying delays and continuously distributed delays can be described as follows: All variables and functions in model (39) are the same as in system (1).Next, we will establish some sufficient conditions for judging the dynamical behavior of the equilibrium point z # of system (39).
According to the proof of Step 2 in Theorem 12, we can conclude that the equilibrium point of system (39) is globally exponentially stable, and the convergence rate is 0.5.The proof is completed.
When there are only time-varying delays or continuously distributed delays in system (39), it is easy to obtain the corresponding criteria for guaranteeing the global exponential stability of the equilibrium point of the complex-valued Cohen-Grossberg neural networks.We omit the similar works here.
Remark 18. Separating the model of complex-valued neural networks into its real and imaginary parts is a routine method (e.g., [3,[8][9][10]18]).The complex-valued activation functions were supposed to be with existence, continuity, and boundedness of the partial derivatives of the activation functions about the real and imaginary parts of the state variables, which impose restrictions on the applications of the obtained results.Assumptions 1 and 2 in [34] deviate from the boundedness and differentiability assumption for activation functions.In future works, we will furthermore study the dynamical behavior of complex-valued neural networks with a lower level of conservation of assumption conditions, including activation function, self-feedback function, and amplification function.
Numerical Examples
In this section, we will give three examples with numerical simulations to demonstrate the correctness of the above results.
Example 1.Consider a class of two-order system described by (1) with the following assumptions.
From the above computing analysis, the assumption conditions in Theorem 12 are satisfied.According to Theorem 12, it can be concluded that the equilibrium point of (1) under the above assumptions is existent, unique, and globally exponentially stable, and the exponential convergence rate is 3.89.
The numerical simulations of the above system are shown in Figures 1-4.Figures 1 and 2 show the state curves of the real parts and imaginary parts of neuron states, respectively.Figure 3 shows the modulus of state curves under the condition that there is no impulsive disturbance in the system.Figure 4 shows the modulus of state curves under the condition that there exist impulsive disturbances in the system.From the simulation results, it can be seen that the equilibrium point of the system is existent, unique, and stable.Remark 19.In [3,10], under the assumption that the activation functions satisfied boundedness and analyticity, some sufficient conditions were obtained to guarantee the stability of the equilibrium point.As pointed out in [7], if the activation function only needs to satisfy the condition of global Lipschitz, the restriction for the activation function in this paper is weaker than Assumption 1 in [3,10].Besides, model (1) in this paper includes the models studied in there.By calculation, we have = 0.14, 1 = 15, 2 = 16, 1 = 0.5, 2 = 0.6, 1 = 0.5, 2 = 1, = 0.4.Taking the above parameters into inequalities (3), we obtain 1 (−2 According to Theorem 12, it can be concluded that the equilibrium point of (1) under the above assumptions is existent, unique, and globally exponentially stable, and the exponential convergence rate is 1.43.
The numerical simulation of the system is shown in Figure 5.It can be seen from the simulation result that the equilibrium point of the system is existent, unique, and stable.].Obviously, it is known from Lemma 10 that the matrix Q is an -matrix.According to Theorem 17, the equilibrium point of system (39) is with existence, uniqueness, and global exponential stability under the assumption conditions above.
The numerical simulation of the system is shown in Figure 6.It can be seen from the simulation result that the equilibrium point of the system is existent, unique, and stable, which verifies the correctness of Theorem 17.
Conclusions and Future Directions
This paper has studied the dynamical behavior for a class of impulsive disturbance complex-valued Cohen-Grossberg neural networks with both time-varying delays and continuously distributed delays.Based on the idea of the vector Lyapunov function method, some sufficient conditions have been established for ensuring the existence, uniqueness, and global exponential stability of the equilibrium point of the system by using the corresponding properties of -matrix and homeomorphism mapping.Not only are the established criteria easy to be verified, but also they improve existing results.Three numerical examples with simulation results have been given to illustrate the effectiveness of the obtained results in this paper.
It is well known that the synchronization problem of chaotic neural networks can be translated into the stability problem of the corresponding error system of driving system and driven system.There have been some literatures concerning the analysis of synchronization control for some complex-valued neural networks with time delays by using the idea of adaptive control [33,35] and sliding mode control [36].In future works, we will furthermore investigate the synchronization problems of complex-valued chaotic neural networks with a lower level of conservation of assumption conditions.
Remark 20 .
According to results of calculation and simulation, we find that the convergence rate in Example 2 is slower than that in Example 1 due to the larger amplification function and time delays.The correctness of Theorem 12 is verified by the comparison between Examples 1 and 2. | 4,741.8 | 2017-01-01T00:00:00.000 | [
"Mathematics"
] |
MiR-320 inhibits the growth of glioma cells through downregulating PBX3
Background MiR-320 is downregulated in multiple cancers, including glioma and acts as tumor suppressor through inhibiting tumor cells proliferation and inducing apoptosis. PBX3 (Pre-B cell leukemia homeobox 3), a putative target gene of miR-320, has been reported to be upregulated in various tumors and promote tumor cell growth through regulating MAKP/ERK pathway. This study aimed to verify whether miR-320 influences glioma cells growth through regulating PBX3. Methods Twenty-four human glioma and paired adjacent nontumorous tissues were collected for determination of miR-320 and PBX3 expression using RT-qPCR and western blot assays. Luciferase reporter assay was performed to verify the interaction between miR-320 and its targeting sequence in the 3′ UTR of PBX3 in glioma cells U87 and U251. Increased miR-320 level in U87 and U251 cells was achieved through miR-320 mimic transfection and the effect of which on glioma cells growth, proliferation, cell cycle, apoptosis and activation of Raf-1/MAPK pathway was determined using MTT, colony formation, flow cytometry and western blot assays. PBX3 knockdown was performed using shPBX3 and the influence on MAPK pathway activation was evaluated. Results MiR-320 downregulation and PBX3 upregulation was found in glioma tissues. Luciferase reporter assays identified miR-320 directly blinds to the 3′ UTR of PBX3 in glioma cells. MiR-320 mimic transfection suppressed glioma cells proliferation, and induced cell cycle arrest and apoptosis. Both miR-320 overexpression and PBX3 knockdown inhibited Raf-1/MAPK activation. Conclusion MiR-320 may suppress glioma cells growth and induced apoptosis through the PBX3/Raf-1/MAPK axis, and miR-320 oligonucleotides may be a potential cancer therapeutic for glioma.
Background
Glioma is one of the most common forms of neural malignancy and is a highly infiltrating, aggressive brain cancer with no available curative treatment [1]. Despite therapeutic advances, the 5-year survival rate of patients with low-grade gliomas (World Health Organization [WHO] grade I and II) is approximately 30 to 70%, whereas the median survival duration of patients with glioblastoma multiforme (GBM) (grade IV) ranges from 9 to 12 months [2,3]. Thus, it is quite urgent to investigate the mechanisms underlying the development and progression of glioma in order to identify sensitive and specific early biomarkers for diagnosis and prognosis.
MicroRNAs (miRNAs) are a class of short, endogenous, non-coding RNA molecules that function as post-transcriptional gene regulators through binding to complementary sequences in the 3′UTRs of target mRNA transcripts [4,5]. Growing evidence has shown that aberrant expression of miRNAs are involved in the progression and development of human cancers, either as oncogenes or tumor suppressors [6,7]. Previous investigations have indicated that miR-320 is involved in the development of several human tumors [8][9][10][11][12]. Dong et al. found miR-320 showed significantly low expression in glioblastoma patients [13], however, the exact role of miR-320 in glioma occurrence and development remains unknown.
Open Access
Biological Research *Correspondence<EMAIL_ADDRESS><EMAIL_ADDRESS>2 Central Laboratory, Provincial Hospital Affiliated to Shandong University, 544 Jingwu Road, Jinan 250021, China Full list of author information is available at the end of the article Pre-B-cell leukemia homeobox (PBX) refers to a family of transcription factors, including PBX1, PBX2 and PBX3. PBX3 has been continuously reported to be associated with tumor growth and progression. Li et al. found PBX3 was an important cofactor of HOXA9 in leukemogenesis [14]. HOXA/PBX3 knockdown impaired leukemia growth and sensitized cells to standard chemotherapy [15]. Recently, PBX3 was reported to be upregulated in gastric cancer and to regulate tumor cell proliferation [16]. Han et al. demonstrated PBX3 promoted migration and invasion of colorectal cancer cells via activation of MAPK/ERK signaling pathway [17]. However, no data exist concerning the role of PBX3 in the progression of glioma. In addition, as a putative target gene of miR-320, whether miR-320 functions through regulating PBX3 remains unknown.
In the present research we identified PBX3 was regulated by miR-320 in glioma cells. MiR-320 overexpression suppressed glioma cells proliferation and induced cell cycle arrest and apoptosis. Either miR-320 overexpression or PBX3 knockdown induced inactivation of MAPK pathway.
Ethics statement
This study was approved by the hospital ethics committee, and written informed consent was obtained from all of the patients.
Clinical specimens
Twenty-four human glioma tissues, including eleven lowgrade gliomas (two grade I and eleven grade II tumors) and thirteen high-grade gliomas (five grade III and eleven grade IV tumors), were obtained from the department of neurosurgery of provincial hospital affiliated to Shandong University. The glioma specimens were verified and classified according to the WHO classification of tumors by two experienced clinical pathologists. All tissue samples resected during surgery were immediately frozen in liquid nitrogen for subsequent total RNA extraction.
Cell culture
U87 and U251 glioma cell lines were purchased from Cell bank of chinese academy of sciences (Shanghai, China). Cells were cultured in Dulbecco's modified eagle's medium (DMEM; Sigma Aldrich, St. Louis, MO, USA) supplemented with 10% fetal bovine serum. The cultures were maintained at 37 °C in a humidified atmosphere with 5% CO 2 .
Transfection of oligonucleotides and plasmid vectors
MiR-320 mimics, DNA template oligonucleotide corresponding to PBX3 and their control oligonucleotides were obtained from Genepharma (Shanghai, China). All of the above sequences were inserted into the BglII and HindIII enzyme sites of pSUPER.retro vector, respectively. The transfection were performed using Lipofectamine ™ 2000 (Invitrogen, USA) according to the instructions provided by the manufacturer.
Luciferase assays
The 3′-UTR of PBX3 was amplified and cloned downstream of firefly luciferase coding region in the pMir-Report vector (pMir-REPORTTM; Ambion Life Technologies). Mutations were introduced into the potential miR-320 binding sites using the QuikChange site-directed mutagenesis kit (Stratagene, Agilent, San Diego, CA, USA). Firefly luciferase reporters, Renilla luciferase pRL-TK vector (used as internal control, Promega, USA) and miR-320 mimics were co-transfected into the U87 and U251 cells. Cells were collected 36 h after transfection and assayed for luciferase activity using the Dual-Luciferase Reporter Assay System (Promega Corporation).
Cell cycle analysis
U87 and U251 cells were harvested 48 h post-transfection and washed with ice-cold phosphate buffered saline (PBS). After fixed in 70% ethanol for 24 h, the cells were re-suspended in 200 μl of PBS containing 50 μg/mL of propidium iodide (Sigma), 10 μg/mL RNase A, 0.1% sodium citrate and 0.1% Triton X-100, incubated for 1 h at 37 °C in the dark. Cell cycle was analyzed immediately using the flow cytometer (Millipore Guava).
Cell proliferation assays
The cell proliferation were determined by MTT and colony formation assays. For MTT assay, U87 and U251 cells with a concentration of 2 × 10 3 cells per well were seeded into 96-well plates for 24 h before transfection. For quantitation of cell viability, 25 μl of MTT stock solution (KeyGEN, China) was added to each well and followed an incubation at 37 °C for 4 h at 1, 2, 3, 4, 5, 6 and 7 days after transfection. Absorbance value of each well was measured spectrophotometrically at 570 nm. For the colony formation assay, cells transfected with miR-320 mimics were placed into 6-well plate and maintained in media containing for 2 weeks. Colonies were fixed with methanol and stained with 0.1% crystal violet (Sigma, St. Louis, Mo). Visible colonies were manually counted.
Cell apoptosis assays
U87 and U251 cells were plated in six-well plates and transfected with miR-320 mimics. For measurement of cell apoptosis, the cells were harvested 48 h post-transfection and incubated with Annexin V/Propidium Iodide (Sigma-Aldrich). The apoptotic cells were detected and quantified using flow cytometry (Becton-Dickinson) according to the manufacturer's instructions.
Western blot assay
Total protein was extracted using RIPA lysis buffer (Cell Signaling, USA). Concentrations of each protein sample were determined by BCA Assay Kit (KeyGEN, China). Equal amounts of proteins in each sample were separated by 10% sodium dodecyl sulfatepolyacrylamide gel electrophoresis (SDS-PAGE) and transferred to Fig. 1 Altered expression of miR-320 and PBX3 in glioma tissues. a and b Measurement of miR-320 and PBX3 expression in glioma and adjacent tissues of 24 patients using RT-qPCR. c A representative result of western blot analysis for PBX3 protein expression in glioma and adjacent tissues. dStatistical results of western blot analysis of more than 5 randomly selected paired tissues. * P < 0.05, *** P < 0.001, compared with adjacent tissues polyvinylidene fluoride (PVDF) membranes (Millipore Corporation).
Statistical analysis
All experiments were performed at least three times, and all samples were analyzed in triplicates. Data are represented as mean ± SD. Statistical difference between each group was assessed by Student's t test and ANOVA analysis using SPSS 12.0 software. A P < 0.05 was considered to be statistically significant.
MiR-320 expression was down-regulated, while PBX3 was up-regulated in glioma tissues
MiR-320 and PBX3 expression in glioma tissues and adjacent healthy tissues was determined using qRT-PCR. The results showed miR-320 was significantly reduced in glioma tissues in comparison with that of adjacent healthy tissues, while PBX3 expression was detected to be significantly increased in glioma tissues (Fig. 1a, b). Western blot analysis also showed that PBX3 protein expression was significantly increased in glioma tissues compared with adjacent health tissues (Fig. 1c, d).
miR-320 directly targets PBX3 in U87 and U251 cells
Bioinformatics approaches suggested the gene encoding PBX3 is a putative target gene of miR-320. To determine whether PBX3 expression was regulated by miR-320 in glioma cells, U87 and U251 cells were transfected with miR-320 mimic, and the expression of miR-320 and PBX3 was determined using qRT-PCR and western blot assays. The results showed miR-320 expression was significantly increased by miR-320 mimics, while PBX3 expression was significantly reduced by miR-320 mimics (as shown in Fig. 2a-c). Luciferase reporter assay showed that over-expression of miR-320 led to a marked decrease of Renilla luciferase activity, which was specifically abolished by the mutation of the corresponding anti-seed sequence in 3′ UTR of PBX3 (Fig. 2d). These results suggested that miR-320 directly modulate PBX3 expression by direct binding.
MiR-320 suppressed glioma cells proliferation through inducing cell cycle arrest at G0/G1 phase
To investigate the biological function of miR-320 in glioma cells, exogenous miR-320 expression by mimic transfection was performed initially. Transfection of Fig. 2 MiR-320 regulates PBX3 expression in glioma cells. a miR-320 expression in U87 and U251 cells following transfection with miR-320 mimics or NC, and miR-320 expression was significantly increased in cells transfected with miR-320 mimics in comparison with that with NC. b PBX3 expression in U87 and U251 cells following transfection with miR-320 or NC, and PBX3 expression was significantly reduced by miR-320 compared with NC. c Western blot of PBX3 in U87 and U251 cells transfected with miR-320 mimics or NC, and protein expression of PBX3 was significantly reduced. d Computer prediction of miR-320 binding sites in the 3′UTR of PBX3 gene. Luciferase reporter assays that cells were transfected with 100 ng of wild-type 3′-UTR-reporter or mutant constructs with 100 nM of the miR-320 mimic or NC. ***P < 0.001, compared with negative control mimics significantly suppressed cell proliferation rates and colony formation abilities in both glioma cell lines, U87 and U251 (Fig. 3a-c). We then explored the effect of miR-320 expression on cell cycle progression by flow cytometry methods. Compared to the cells transfected with control mimics, glioma cells with miR-320 transfection showed redistributed cell cycle progression with an increased proportion of cells in G0/G1 arrest (Fig. 3d, e).
MiR-320 induced apoptosis in glioma cells
The effect of miR-320 mimics on gliomas cell apoptosis was examined using Annexin V and PI double staining assay and western blotting analysis of the caspase-3 protein. Cells were harvested at 48 h after transfection and apoptosis was assessed by FCM. Compared with the control group, the proportion of apoptotic cells in the miR-320 mimic-transfected group was significantly higher Fig. 3 MiR-320 mimics influences the proliferation of U87 and U251. a Growth curves of U87 and U251 cells transfected with miR-320 or NC. b and c U87 and U251 cells transfected with miR-320 mimics were cultured for 2 weeks. Colonies were fixed with methanol and stained with 0.1% crystal violet. Visible colonies were manually counted. d Flow cytometric analysis of the indicated glioma cells transfected with NC or miR-320. e Cell cycle distribution of U87 and U251 cells transfected with miR-320 mimics or NC. *P < 0.05, **P < 0.01, ***P < 0.001, compared with negative control (at the same time point) (Fig. 4a, b). In addition, miR-320 transfection resulted in caspase-3 activation evidenced by the increased protein level of cleaved caspase-3 (Fig. 4c), suggesting miR-320 mediated glioma cell apoptosis is caspase enzyme dependent.
MiR-320 overexpression or PBX3 knockdown inhibits MAPK pathway activation in glioma cells
Several studies have shown that Raf-1/ERK pathway functions as a switch determining cell fate, including proliferation, differentiation, apoptosis, survival and oncogenic transformation [7], [18][19][20]. In our research, miR-329 mimics is found to significantly decrease the phosphorylation levels of Raf-1, p38, ERK1/2, ERK5 and JNK in both U87 and U251 cells (Fig. 5a, b). To address whether miR-320 functions through targeting PBX3, PBX3 knockdown was performed using shPBX3 and the effect of which on the activation of Raf-1, p38 and ERK1/2 was detected. As shown in Fig. 5c, d, PBX3 knockdown significantly reduced the increased phosphorylation level of Raf-1, p38 and ERK1/2. Taken together, the results presented indicated miR-320 may suppress glioma cell growth through targeting PBX3 and regulating MAPK pathway.
Discussion
Numerous studies have focused the role of miR-320 in tumor pathogenesis and progress. Wu et al. found miR-320 suppressed tumor angiogenesis driven by vascular endothelial cells in oral cancer by silencing neuropilin 1 [21]. In addition, miR-320 was demonstrated to inhibit osteosarcoma cell proliferation by directly targeting fatty acid synthase [22]. In this study, we found that miR-320 expression was downregulated in glioma tissues. Overexpression of miR-320 suppressed glioma cell proliferation, induced cell cycle arrest and apoptosis. With former relevant researches, the present study suggested that miR-320 may acts as a tumor suppressor.
We identified PBX3 was regulated by miR-320 in glioma cells. Overexpression of PBX3 has been associated with many kinds of malignancies, including gastric cancer [16], colorectal cancer [23], prostate [24] and leukemic [14]. Han et al. found PBX3 is targeted by multiple miRNAs, and is sufficient and necessary for the acquisition and maintenance of tumour-initiating cells (TIC) properties [25]. In the same study, they demonstrated PBX3 drives an essential transcriptional programme, activating the expression of genes critical for hepatocellular carcinoma (HCC) TIC stemness including Fig. 4 MiR-320 mimics induces apoptosis in U87 and U251 cells. a Annexin V/PI dual staining for U87 and U251 cells transfected with miR-320 mimics or NC. b Quantitative analysis of cell cytometry results expressed as percentage of the total number of cells counted. c A representative result of western blot analysis for caspase-3 protein expression in glioma cells transfected with miR-320 mimics or NC. ***P < 0.001, compared with negative control CACNA2D1, EpCAM, SOX2 and NOTCH3 and the expression of CACNA2D1 and PBX3 mRNA is predictive of poor prognosis for HCC patients [25]. In this research, we found that PBX3 was overexpressed in glioma tissues and was regulated by miR-320, suggested PBX3 may participate in the glioma inhibition function of miR-320. Han et al. found PBX3 was upregulated in colorectal cancer tissues, and over-expression of PBX3 promoted tumour metastasis, both in vitro and in vivo [26]. Li et al. found PBX3 was overexpressed in gastric cancer specimens and cell lines, and positively correlated with disease severity and tumor cell proliferation and invasion [16]. In addition, Han et al. demonstrated high level of PBX3 expression was correlated with the invasive potential of colorectal cancer cells, and significantly associated with lymph node invasion, distant metastasis, advanced TNM stage and poor overall survival of patients [17]. They also found ectopic expression of PBX3 in low metastatic cells was shown to promote migration and invasion [17]. Taken together, PBX3 may be a clinically relevant oncoprotein and a promising therapeutic target of these cancers.
However, the molecular mechanisms responsible for the tumor promoting effect of PBX3 are largely unknown. Han et al. found upregulation of phosphorylated extracellular signal-regulated kinase (ERK)1/2 was one of the targeted molecules responsible for PBX3-induced colorectal cancer cell migration and invasion [17]. Hence, we speculated that PBX3 might also promote glioma by activating MAPK/ERT pathway. Our results showed that either miR-320 mimics transfection or PBX3 knockdown significantly reduced the phosphorylation levels of Raf-1, p38, ERK1/2, ERK5 and JNK were in U87 and U251 cells. The findings suggested miR-320 and PBX3 modulated MAPK pathway may contribute to their effect on the proliferation and apoptosis of glioma cells. The activation of Raf-1 initiates MiR-320 overexpression or PBX3 knockdown inhibits MAPK pathway activation in glioma cells. a and c After transfection of miR-320 mimics or shPBX3, the activation of Raf-1, p38, ERK1/2, ERK5 and JNK was determined using western blotting assay. b and d Statistical results of western blot analysis of three independent experiments. *P < 0.05, **P < 0.01, ***P < 0.001, compared with negative control a MAPK cascade that comprises a sequential phosphorylation of the dual-specific MAPK kinases (MAP2K1/ MEK1 and MAP2K2/MEK2) and the extracellular signalregulated kinases (MAPK3/ERK1 and MAPK1/ERK2). The cascade can promote NF-κB activation and inhibit signal transducers involved in motility (ROCK2), apoptosis (MAP3K5/ASK1 and STK3/MST2), proliferation and angiogenesis (RB1). In addition, Raf-1 can also protect cells from apoptosis by translocating to the mitochondria where it binds Bcl-2 and displaces BAD 29 . PBX3 knockdown significantly suppressed Raf-1 phosphorylation, suggesting PBX3 might aggravate glioma through promoting the activation of Raf-1, and subsequent Raf-1 mediated MAPK cascade and apoptosis inhibition.
In summary, our current data demonstrated that miR-320 is downregulated in glioma tissues and inversely correlates with PBX3 expression. Over expression of miR-320 inhibited glioma cell proliferation and induced cycle arrest and apoptosis. PBX3 was negatively regulated by miR-320 in glioma cells. Either miR-320 overexpression or PBX3 knockdown suppressed the phosphorylation of Raf-1, p38, ERK1/2, ERK5 and JNK. The results suggested miR-320 might functions through the PBX3/Raf-1/MAPK axis, and miR-320 oligonucleotides might be a potential cancer therapeutic for glioma.
Authors' contributions YRZ conceived and deigned the whole study. CCP and HG performed the study and they were major contributors in writing the manuscript. NZ and QG analyzed and interpreted the patient data. YYS contributed to literature research. YRZ reviewed the manuscript. All authors read and approved the final manuscript. | 4,215.4 | 2017-09-21T00:00:00.000 | [
"Biology",
"Medicine"
] |
Optical Storage Properties in Cast Films of an Azopolymer
In this paper we discuss the properties of optically induced birefringence in DR19-MDI cast films that may be used in optical storage applications. The selection of DR19-MDI cast films was based on a comparative study of optical storage properties of Langmuir-Blodgett (LB) films from various azopolymers. DR19-MDI possesses a high residual fraction of optical birefringence and good environmental stability, which was corroborated by the data from optical storage experiments. DR19-MDI cast films maintain a reasonable level of birefringence after the initial decay due to chromophore relaxation, thus making them promising candidates for optical storage devices.
Introduction
Azoaromatic dyes have been widely investigated over the last few years due to their potential use in a variety of optical devices employing nonlinear optics, surface relief gratings and optically induced birefringence 1 .The latter process arises from the reversible trans-cis-trans photoisomerization and resulting molecular orientation, and allows organic polymers containing azo chromophores to be used in storing information optically 2 .When excited by linearly polarized laser light, azobenzene chromophores suffer trans-cis-trans isomerization, accompanied by molecular reorientation.Through the hole-burning mechanism, an excess of chromophores is formed in the direction perpendicular to the laser polarization after a large number of isomerization cycles.This causes birefringence in the film structure, thus representing a WRITE mechanism.As the light source is switched off, some molecular relaxation occurs, but a considerable number of molecules remain oriented.The stable birefringence pattern corresponds to the STORE step, which can be detected by measuring the change in transmittance of a weak probe beam that passes through crossed polarizers (READ).This birefringence pattern can be completely erased by heating or overwriting the test spot with circularly polarized light (ERASE).Figure 1 shows a schematic representation of the orientation process 3 .In order to improve the optical storage features in azopolymers, a large number of fundamental and technology-driven studies have been made, by exploiting the rich variety of organic compounds and material processing methods available.The first possibility in this engineering process concerns the chemical synthesis, where different functional groups can be attached to azobenzene molecules, improving photochemical characteristics such as the isomerization rate 4 .Moreover, the azo group can be attached in different polymer backbones either in the main-chain or as a side chain 5 , allowing the mechanical and processing properties of conventional polymers to be combined with the optical properties of azo groups.Extra control of the material properties can be achieved using film forming techniques such as the Langmuir-Blodgett (LB) 6 or layer-bylayer (LBL) 7 methods.
In an effort to establish materials with the desired optical storage properties, we have instituted a project to investigate a series of films from different azopolymers, particularly for obtaining experimental information to guide the development of optical storage devices.Four polymeric systems were found to provide good results in terms of optical storage and stability: polyurethane prepared from DR19 dye and methylene-diisocyanate (DR19-MDI), the methacrylic homopolymer of DR13 dye (HPDR13), copolymers of hydroxyethylmethacrylate and the methacrylic derivative of the DR13 dye (HEMA-DR13) and another polyurethane from DR19 dye with isophorone-diisocyanate (DR19-IPDI) 6,[8][9][10] .The chemical structures of these polymers are shown in Fig. 2.These studies were performed with LB films to achieve better control of thickness and surface uniformity imparted by the LB method 11 .Guesthost systems from some polymers and chromophores were also studied.However, these latter systems are not suitable for applications owing to their small residual birefringence 12 .The low storage efficiency is typical of azoaromatic dyes in guest-host films, where the chromophores are not covalently attached to the polymer chain, resulting in an excessive freedom for rotational diffusion.The optical storage properties of LBL films of a series of anionic dyes and azopolymers alternated with different polycations were also investigated.In this kind of systems, the writing process takes much longer than in the spin-coated or cast films, due to the electrostatic and H-bonding interactions, which hamper the photoisomerization and molecular orientation 7 .
The studies involving the four systems in Fig. 2 pointed to a clear relation between the optical storage properties and the structure of the polymer to which the azobenzene group is attached.Since the writing and relaxation times for azopolymers are usually in the range of a few seconds, and are basically inherent to the mechanism of orientation, the following discussion will be focused on the maximum and the residual fraction of the induced birefringence.These two parameters are the most important for applications.Table 1 summarizes some optical storage features of LB films from the four azopolymers, and a full discussion of these results is given in Ref. 13.It is readily seen that the residual fraction for DR19-MDI is higher than for other films, owing to the polymer rigidity, which is reflected in T g values.The maximum induced birefringence (∆n) also depends on the polymer rigidity.The higher the polymer T g , the lower the maximum induced birefringence.Of course, this value also depends on the chromophore concentration that is related to the chosen polymer.
On the basis of these results, DR19-MDI may be considered a good candidate for optical storage applications due to its high residual fraction in addition to environmental stability.One barrier to be overcome though is the low birefringence, which is 0.026 for a LB film.Better performance may be achieved with thicker films, produced by casting or spin-coating instead of the LB method.It is known that the birefringence of a LB film is usually higher than in a spin-coated film 14 , but the much larger thickness of the latter film can compensate and lead to an appreciable transmission signal.As a consequence, a higher light phase variation appears, which is important for optical storage devices.In this work we present data for DR19-MDI cast films, including the optical storage characteristics, writing/erasing stress tests and liability (relaxation) of the stored information.
Film Preparation and Characterization
The polymer synthesis was carried out as described in Ref. 8. The films were cast onto glass substrates, previously cleaned with the RCA 15 method.Initial attempts to produce thick films were made using volatile solvents, such as dichloromethane and chloroform.However, due to the low polymer solubility, it was impossible to obtain thick and visually uniform films.Thick, uniform films were obtained by using a high boiling point solvent, N-methy l-2-pyrrolidone (NMP).After several experiments, the optimized condition found was a 10/90 w/w solution (polymer /NMP), dried at 50 °C for 4 h.The film was characterized by UV-vis.absorption spectroscopy with a Hitachi-U2001 spectrophotometer.
The UV-vis.spectrum of a cast DR19-MDI film is shown in Fig. 3, which also includes the spectrum for DR19-MDI in a NMP solution.The absorption maximum of the film is
Optical Storage Experimental Setup
The optical storage experiments were performed by producing the optically induced birefringence in the film using a diode-pumped frequency doubled, linearly polarized Nd:YAG continuous laser at 532 nm (writing beam) with a polarization angle of 45° with respect to the polarization orientation of the probe beam (reading beam).The power of the writing beam is varied to study the time and amplitude of the optically induced birefringence.A low-power He-Ne laser light at 632.8 nm passing through crossed polarizers was used as the reading beam to measure the induced birefringence in the sample.Figure 4 presents a diagram of the experimental setup.The optically induced birefringence can be determined by measuring the probe beam transmission ( ) according to: where λ is the wavelength of the incident radiation, d is the film thickness, I 0 is the incident beam intensity and I is the intensity after the second polarizer.
Results and Discussion
The result of a typical optical storage experiment with a DR19-MDI cast film is presented in Fig. 5, in which a writing power of 7 mW was used.Before the writing beam was switched on, there was no transmission of the probe beam that passes through the film and crossed polarizers.This indicates the random orientation of the chromophores.When the writing beam was switched on at point A, the transmis-sion increased and reached saturation in ca.7 min.Such an increase in transmission is related to the induced birefringence due to the chromophore orientation.At point B the writing beam was switched off and the transmission decreased sharply, reducing to about 85% of the saturation value (point C) in about 4 min.This high residual rate, which is close to the one obtained for this polymer in the LB film 12 after the same time span, is one of the most important factors for using this material in optical storage applications.Noticeable also in Fig. 5 is the transmission level obtained with this film, about 13%, which allows its usage for applications.This high transmission value could be obtained due to the higher thickness (d = 3 µm) of the cast film.Such a thicker film is hardly obtained by other film forming techniques, such as the Langmuir-Blodgett method.
The influence of the writing beam power has also been studied, as shown in Fig. 6.The maximum induced birefringence increases with the laser power up to 8 mW or so, after which saturation is reached (curve A).The characteristic time to induce the birefringence, obtained by fitting an exponential function to the writing sequence experimental data, decreases drastically with increasing writing beam power, again up to 8 mW (curve B).
The remaining birefringence fraction, corresponding to the ratio between the maximum induced birefringence and that obtained at point C (after 4 min) in Fig. 5, is about 80% of the maximum birefringence for DR19-MDI film.This value is independent of the pump laser power, as shown Fig. 7.
If the sample were left to relax, the residual fraction shown in Fig. 7 corresponding to the transmission in the point C of Fig. 5 would remain in an appreciable value for several days.That is to say, the induced birefringence is stable.In order to the viability of the DR19-MDI as an optical storage device, in Fig. 8 we present a curve of the birefringence relaxation process taken during 50 h.Before the relaxation could be measured, the birefringence was induced using the writing beam (532 nm) operating at 6.5 mW for about 500 s.Although the sample absorption at the reading beam wavelength (632.8 nm) is very small, as the experiment takes a long time, it can also induce molecular orientation perpendicularly to its polarization direction, changing the birefringence already induced by the writing beam.In order to avoid this problem, the reading beam was blocked between two consecutive measurements.Any probe beam power fluctuation was canceled by using a reference detector.Figure 8 shows that after 2 h of relaxation the birefringence has reduced to 65 % of the maximum value.However, as shown in Fig. 8 this birefringence level stabilizes and remains the same for 2 days.Considering that no precaution was taken to protect the film from environ- mental changes and from ambient light, the results of Fig. 8 indicate that the optically written information in this polymer can be used for storage purposes.Besides, multiple writing and erasing test (writing/erasing stress) was performed at room temperature on DR19-MDI film (results not shown here).The same birefringence levels were reached for every cycle, which again shows that this material is suitable for optical storage applications.
Conclusions
The procedures for preparing uniform, thick cast films from DR19-MDI were optimized.Photoinduced birefringence in these films were studied as a function of the writing laser power and the characteristic time to in- duce the birefringence decreases with increasing writing beam power while the maximum induced birefringence increased with the power and begins to saturate with 8 mW.The remaining birefringence fraction is about 80% of the maximum birefringence for DR19-MDI film and is independent of the writing laser power.This value is relatively stable, decreasing to 65% after 2 h and remaining stable for more than 50 h.This relative stability and rather higher values of residual birefringence allied to the good film forming properties make DR19-MDI suitable for optical storage applications.
Figure 3 .
Figure 3. UV-vis.spectra of a DR19-MDI film (solid line) and a solution of the same polymer in NMP (dotted line).
blue shifted in comparison to that in solution, indicating Htype aggregation of azobenzene chromophores the film.Such a behavior is similar to that presented in Ref. 8.
Figure 5 .
Figure 5. Writing sequence for a MDI cast film.The writing beam power was 7 mW.
Figure 6 .
Figure 6.Dependence on the laser power for the amplitude of the induced birefringence (a) and writing time (b) for a DR19-MDI cast film.The lines are drawn to guide the eyes.
Figure 7 .
Figure 7. Residual fraction as a function of the writing beam laser power. | 2,907.8 | 2003-06-01T00:00:00.000 | [
"Physics"
] |
Evidence in the Japan Sea of microdolomite mineralization within gas hydrate microbiomes
Over the past 15 years, massive gas hydrate deposits have been studied extensively in Joetsu Basin, Japan Sea, where they are associated primarily with active gas chimney structures. Our research documents the discovery of spheroidal microdolomite aggregates found in association with other impurities inside of these massive gas hydrates. The microdolomites are often conjoined and show dark internal cores occasionally hosting saline fluid inclusions. Bacteroidetes sp. are concentrated on the inner rims of microdolomite grains, where they degrade complex petroleum-macromolecules present as an impurity within yellow methane hydrate. These oils show increasing biodegradation with depth which is consistent with the microbial activity of Bacteroidetes. Further investigation of these microdolomites and their contents can potentially yield insight into the dynamics and microbial ecology of other hydrate localities. If microdolomites are indeed found to be ubiquitous in both present and fossil hydrate settings, the materials preserved within may provide valuable insights into an unusual microhabitat which could have once fostered ancient life.
Results
Growth, morphology and depth-related changes. The Joetsu Basin dolomitic aggregates are easily distinguished from other minerals by their size, with diameters ranging from ~10 μm to ~150 μm (avg. diam. = 40 μm), and their distinctive morphologies e.g. "dumbbell pairs", "chains", or branching "cauliflower growths" (Fig. 3a-d). These distinctive morphologies can be seen to be formed from conjoined microdolomite spheres. Dark-coloured material is present at the cores of the microdolomites, and broken chains reveal interconnected internal porous regions (Fig. 3e,f). Macroscopically, the microdolomite appears as a very fine white or light-yellow powder, as the conjoined growth-patterns rarely exceed several grains and are sufficiently dispersed within the hydrate matrix that spheroidal aggregates do not merge to form crusts, veins, or other larger cemented mineralisations (Fig. 2c).
Although individual dolomite samples show some variation in the abundance of single grains, dumbbells, or cauliflower-shaped aggregates, the abundance of these morphologies does not change with depth. There is, however, an observable depth-related change in the outer texture and size of the aggregates. Hydrates sampled from <20 metres below seafloor (mbsf) contain aggregates of rough spheroidal angular dolomite rhombs (Fig. 3c). The rhombs are ~5 μm across, and form spheroidal aggregates ~15-20 μm in diameter. There is a transition to smoother surface textures between 20 mbsf and 30 mbsf, such that dolomite rhombs sampled at >30 mbsf have intergrown dolomite surfaces (Fig. 3d), comprising hexagonal plates or shield shapes ~10-15 μm across organized in spheroidal aggregates ranging from ~30-150 μm in diameter.
Non-biogenic microdolomite aggregates with similar spheroidal and dumbbell morphologies have been produced in the laboratory at temperatures of >40 °C through direct precipitation from a gel of magnesium-rich amorphous calcium carbonate (MgACC) which quickly transforms to spheroidal proto-dolomite and eventually undergoes dewatering to produce microdolomite 17,18 . During this transformation, Mg:Ca ratios increase from 0.65 in pure MgACC to 1.00 in stoichiometric dolomite, where Mg:Ca is the molar ratio of the two corresponding elements in the dolomite. In order to see if the Joetsu Basin dolomites show any systematic change, Mg:Ca was determined through Rietveld refinement of the x-ray diffraction patterns 19 and by applying the equation of Turpin et al. 20 . Grain diameter was determined through microscopy 21 (Supplementary Tables S1, S2). In general, both the grain size and average diameter of grains increase with depth (Fig. 4a). The smallest average grain diameter is 19 µm (UTNE at 16mbsf) while the largest average is 114 µm (UTCW at 88mbsf). At 22 µm, the Joetsu Knoll sample (JK at 30mbsf) is also quite small despite being much deeper than other small dolomite aggregates. The lowest Mg:Ca ratio is 0.76 (UTCW 12mbsf), while the highest is 0.99 (UTCW 86mbsf). There are, however some dolomites with Mg:Ca > 0.95 at depths as shallow as 20mbsf.
Stable isotopic composition. Stable carbon isotope ratios can potentially indicate the carbon source of the microdolomites, particularly in the Joetsu Basin where the primary carbon pools have distinct isotopic compositions. The δ 13 C values for the microdolomites (Fig. 4b,d) are all positive, which is significantly different from the negative values associated with MDACS in the area 8,9 . Generally, dolomites which have positive δ 13 C values are the result of a carbon source related to methanogenesis which has subsequently undergone evaporation 22 www.nature.com/scientificreports www.nature.com/scientificreports/ reported δ 13 C values for dissolved inorganic carbon (DIC) in the interstitial waters of Joetsu Basin sediments range from +18.7‰ to +28.5‰VPDB at Joetsu Knoll and −4.9‰ to +41.4‰ at UTCW 15 ; at both areas, the least positive values are shallow sediments near the sulphate methane transition (SMT), due to anaerobic oxidation of methane (AOM) whereas the most positive values are in the deeper sediments. Presumably the source of this DIC, which is enriched in 13 C relative to 12 C, is a deep source of residual organic that has undergone methanogenesis over long periods of time 3,4 . Porewaters with negative δ 13 C values for DIC are related to a combination of gas hydrate dissociation and AOM 8,24 . During growth gas hydrate incorporates pore fluids resulting in residual porewaters with negative δ 18 O values 12,13,25 . The removal of porewater and the resulting dehydration is observed as salinity anomalies within interstitial water throughout surrounding hydrate locales of the Joetsu Basin 11,14,15 . Japan Sea bottom-waters in the Joetsu Basin are cold at 0.4 °C and the geothermal gradient at Umitaka Spur and Joetsu Knoll is 105 mK/m 26 . The temperature of the deepest samples at 90 mbsf, which is just above the gas hydrate stability zone (GHSZ) would be expected to be 9.9 °C. The δ 18 O value for dolomite in equilibrium with seawater down to this depth would range from +6.4‰ to +4.1‰VPDB 27 (Supplementary Table S2). The dolomites show δ 18 O values to the left of equilibrium with seawater plotted as a solid line (Fig. 4c) due to isotopic fractionation between oxygen in the interstitial water and water-bound oxygen in the growing hydrate 13,28 . Both UTNE and JK dolomites show a greater degree of depletion of 18 O than UTCW. If the extent of disequilibrium is taken as an indicator of the rate of growth, then most of the rapid growth of hydrates occurred at depths less than 20 mbsf. As hydrate is buried, the values again approach those of thermal equilibrium with seawater. Unlike the δ 13 C values, the δ 18 O values of MDACs in the UTCW area show considerable overlap with those values observed for the microdolomites (Fig. 4d). The larger, www.nature.com/scientificreports www.nature.com/scientificreports/ deeper microdolomites show less positive values, particularly at UTNE, indicating that hydrate growth at depth continues and in doing so takes up water from the fluid inclusions in which the microdolomites also continue to grow.
Non-hydrocarbon gases incorporated in hydrates.
The Joetsu Basin hydrates contain both hydrogen sulphide and carbon dioxide 5,7 . Significant amounts of H 2 S (up to 10.8 mL/L-CH 4 ) were found in hydrates between 10 mbsf and 20 mbsf. These depths coincide with the formation of small dolomite grains (Fig. 4e) and it may be that the presence of high sulphide concentrations in the hydrate contributed to the initial stages of microdolomite precipitation 29 . Carbon dioxide is also present, and the δ 13 C values for CO 2 reach minimum values over the same interval (Fig. 4f) then gradually become more positive with depth as they reach equilibrium with the surrounding DIC-pool. The composition of non-hydrocarbon gases within the hydrate therefore seems to be influenced by AOM and sulphate reduction to some degree 5 , but only significantly so at depths of less than 20 mbsf. This is not the case for the microdolomites, as the δ 13 C of the UTNE microdolomites seems only to be influenced by AOM at shallow depths and not at all at UTCW (Fig. 4b). Assuming the aforementioned temperature gradient, we calculated Δ 13 C dolomite-CO2 as a function of depth (Supplementary Table S2) 30 . Calculating the equilibrium values for the microdolomites indicates a high degree of isotopic disequilibrium with the CO 2 for the hydrates from both UTNE and UTCW at <20 mbsf (Fig. 4g). Even though AOM may influence the δ 13 C of CO 2 in the hydrate, the primary source of carbon in the microdolomites must be 13 C-enriched DIC, which can ultimately only derive from methanogenesis at greater depths or from some form of microbial activity which also produces CO 2 within saline fluid inclusions in the gas hydrate.
Oils, brines, and other impurities excluded during hydrate growth. It has been shown that rapid hydrate growth, at least in the case of synthetic hydrates, can lead to the temporary formation of encapsulated pockets of brine or finely dispersed saline fluid inclusions 31 . The Joetsu Basin hydrates recovered from the UTCW sites have yellow hydrate which, when dissociated, yields yellow oil which is emulsified in the clear hydrate water and microdolomite which settles to the bottom (Fig. 2c). The recognition of distinct insoluble oil and water phases is important because when trapped in pockets and veins, water-in-oil emulsions both stabilize brines providing microbial habitats 32 and could potentially serve as a spherical-template for the formation of mineral precipitates such as the spheroidal microdolomites. Evidence for microbial processes can be found in the chemical composition of the oils which exhibit alteration consistent with the subsurface degradation of petroleum 33 . Specifically, the higher carbon-number n-alkanes and steranes which are generally present in undegraded oil are notably absent, and instead biodegradation-products such as the 25-norhopanes have been formed from regular hopanes, and the oils relatively enriched in recalcitrant components such as asphaltene ( Supplementary Fig. S3 and Supplementary Table S4). The proportion of asphaltene, here taken as a more refractory organic component, becomes greater relative to total extractible organic matter (EOM) with depth, indicating that the biodegradation of the labile compounds is ongoing during burial especially in the upper 30mbsf (Fig. 4h) and is generally accompanied by an increase in the diameter of the microdolomite grains. Similarly, C 29 25-norhopane, which is formed directly from the biodegradation of C 30 hopane increases with depth ( Fig. 4i) and the amount of C 30 hopane decreases relative to asphaltene. 5). This may reflect that hydrate growth is initially rapid in shallow sediments and leads to less-ordered dolomites. However, during hydrate burial the dolomites grow slowly until the overall Mg:Ca ratio approaches 1 (Fig. 4a). Prior to analysis, and during the sample preparation, it was also noticed that some of the grains still contain fluid inside ( Supplementary Fig. S2) which, if left to dry, formed secondary minerals on the surfaces of the polished microdolomites. EPMA showed that these grains have high Na and Cl in the cores, presumably trapped saline water.
internal microbial content of dolomite grains. Epifluorescence microscopy of DNA-stained microdolomites indicated high concentrations of microbial DNA in two samples from shallow depths (less than 30 mbsf), and lesser concentrations in two samples beneath 30 mbsf. Despite differences in DNA-concentrations, both shallow and deep microdolomites could yield sufficient extractible DNA for 16S rRNA phylogenetic analysis (Supplementary Table S5). Epifluorescence microscopy showed that DNA lined the inner surfaces and cores of the spheres (Fig. 6), strongly indicating that phylogenetic information for the microdolomites pertains to their internal microhabitat.
It is notable that the microdolomites mostly lack sulphate reducing bacteria and ANME archaea that are associated with gas hydrate mounds and shallow sites of methane seepage [34][35][36] . A single sample (J20R 18.5 mbsf) had 0.5% archaea and 0.5% δ-proteobacteria (possible sulphate reducing bacteria), but aside from this the interior of the microdolomites appears to be a microhabitat distinct from that typically found at shallow sites of methane seepage. Instead, phylogenetic analysis suggests the microdolomites grew in a microhabitat similar to that hosted by deep gas hydrates 37 and by marine oil spills 38 . For example, Sphingomonadales is present in all samples, and there is a notable predominance of α-proteobacteria in the deepest microdolomites (Rhizobiales makes up 50% of the microbial abundance in J25R 53.91 mbsf, compared to 17.9% for other α-proteobacteria in shallower samples). Both Sphingomonadales and Rhizobiales are reported oil-degraders.
However, there are differences to the communities reported from marine oil spills: β-proteobacteria, including Burkholderiales, which commonly occur in oxic-seawater near oil seep sites 38 are absent, while γ-proteobacteria are of low abundance in shallow samples and absent in deeper samples. The low abundance of γ-proteobacteria and greater relative abundance of α-proteobacteria in the deepest samples is consistent with α-proteobacteria supplanting γ-proteobacteria during the later stages of petroleum degradation, after the lighter substrates have been removed 39 . (2020) 10:1876 | https://doi.org/10.1038/s41598-020-58723-y www.nature.com/scientificreports www.nature.com/scientificreports/ From the perspective of the formation of microdolomites, perhaps the most striking difference between the deep and shallow microbial communities is the abundance of Bacteroidetes; in particular Flavobacteriia which is present in the shallower samples (48.3% at 12.04 mbsf and 25.3% at 18.25 mbsf) and completely absent from the deeper samples. Because Flavobacteriia breaks down complex macromolecules including oils 39 , and produces extracellular polymeric substances that initiate the formation of spherulitic microdolomite 40 , Flavobacteriia likely plays a key role initiating the formation of microdolomites at shallow depths. Some strains of Flavobacteriia have a light-yellow colour 39 and this may account for the yellow colour of some microdolomites and oils. Because both hydrate and sediment in Joetsu Basin samples contain oil, the sediments also contain some similar organotrophs 41 , as do sediments above deep gas hydrate on the Pacific side of Japan 42 , yet the formation of authigenic carbonates in the sediments appears to be predominantly a consequence of shallow ANME and SRB.
Of final note is the Phylum Firmicutes, including Class Bacilli, which is present in the shallow microdolomites, but more generally associated with hypersaline anoxic environments 37 . The Phylum Firmicutes is present at 11.8% in one shallow sample (J20R 18.25 mbsf), but the Class Bacilli is found only in trace amounts in both shallow samples and not at all in the deeper samples. In one of the deeper microdolomite samples (J25R 57.91 mbsf), the Phylum Firmicutes, Class Thermodesufovibrionia makes up 9.8% of the microbial distribution and its presence is most likely associated with the degradation of long-chain alkanes and fatty acids 43 .
Discussion
Authigenic carbonates are often associated with gas hydrates and gas chimneys, but the microdolomites in the residue from dissociated Joetsu Basin hydrate differ from MDACs in several respects. The first major difference is mineralogy. The MDACs found at the Umitaka Spur and Joetsu Knoll gas chimneys all comprise aragonite or high-Mg calcites 8,9 , and occur as concretions or nodules of cemented sediment. To date, microscopic observation of the MDACs in Joetsu Basin has revealed the development of micritic carbonate and carbonitic microspar on sediment surfaces, but until this study had not found spherulitic aggregates of microdolomite. Hydrate Ridge www.nature.com/scientificreports www.nature.com/scientificreports/ located offshore Oregon is similar the Joetsu Basin Sites in a number of regards, including the presence of yellow oil-containing hydrate 44 , and is known to host carbonate "clathrites" that developed in very close proximity to massive hydrate and which also consist of high-Mg calcitic and aragonitic sediment cementation 25 similar to the Joetsu Basin MDACs 8,9 . In contrast, the carbonate present inside of the Joetsu Basin hydrates is exclusively microdolomite. The second difference is in the microbial communities associated with the carbonate growth which, in shallow marine sediments, is related to the availability of porewater sulphate. Such settings, which are external to the hydrate surfaces, give rise to a consortium of ANME Archea and SRB as has been documented in Hydrate Ridge 35 . Given the absence of available sulphate inside of fluid inclusions within the hydrate, the microbial content preserved in the microdolomites is lacking in ANME and SRB. Although methane is available in abundance within the hydrate, the microbial communities inside of the Joetsu Basin microdolomites are primarily organotrophs which rely on complex macromolecules as a metabolic source. Other key differences between conventional MDACs and the microdolomites, such as stable isotopic composition and growth habit, are intimately tied to the first two differences. The effect of anaerobic methane oxidation, for example, does not seem to influence the δ 13 C of the microdolomites, even though it has produced very negative δ 13 C values in the nearby MDACs (Fig. 4d). The spheroidal growth habit observed with the microdolomites which appears to be the consequence of extracellular polymeric substances produced by Flavobacteriia, is also not observed in nearby MDACs.
At first glance, the hydrate microdolomite grains seem perhaps similar to oolitic carbonates found in fossil hydrocarbon seep settings in Japan 45 and elsewhere 46 . As with Joetsu Basin MDACs, however, these oolitic growths are related to the production of dissolved inorganic carbon (DIC) through microbial activity carried out by ANME and SRB and have negative δ 13 C values. Also, the microdolomites differ significantly in size, not exceeding 200 μm in diameter, whereas the oolitic growths which developed in the aforementioned seep settings are about an order of magnitude larger in diameter and often consist of acicular aragonite which coats pre-existing grains.
By considering the microdolomites as a microhabitat or micro-environment, better analogues can be found based on chemical characteristics, rather than facile comparisons to geological settings. For example, highly www.nature.com/scientificreports www.nature.com/scientificreports/ evaporative settings may develop saline anoxic lagoons or sabkha environments where carbonate authigenesis is locally mediated by the extracellular polymeric substances produced though microbial activity 22,40 . Spheroidal microdolomite aggregates of similar size and growth habit have been shown to develop in Lagoa Vermelha and Brejo do Espinho, Brazil 47,48 . Petroleum seepage sites in Kuwait (Eocene and Quaternary sediments) have been shown to host hydrocarbon-related microdolomites 49 . Microbially-derived spheroidal microdolomites have also been reported, along with evidence of previous gas hydrate occurrences, in the Tertiary-age seep sites of Monferrato, Italy 50 . In these settings, the formation of dolomite is favoured at high salinities, petroleum provides the organic substrates, and microbial processes which ensue both oxidise large organic molecules and mediate the formation of dolomite. In the case of Joetsu Basin, a hypersaline micro-environment would be generated by the rapid growth of the surrounding hydrate 11,12,15 coupled with the drawdown of water and the formation of saline water inclusions, and with hydrocarbons seeping up from the underlying petroleum system 3,4 .
Our results suggest the development of a microbiome inside of the saline fluid inclusions that form during rapid growth of massive hydrates within gas chimneys. This rapid hydrate growth has led to the exclusion of oily, saline pockets inside of the hydrate. During the exclusion, a water-in-oil emulsion may form which limits the migration of saline waters out of the hydrate and provides a suitable medium for the biodegradation of oils to occur. The uptake of water by the growing hydrate further concentrates the residual brines, which reach saturation with respect to dolomite, while increasing the concentration of dissolved nutrients available to microorganisms.
Organotrophic microbes, in particular Flavobacteriia, metabolize the oils, producing extracellular polymeric substances that are suitable for the formation of spherulitic microdolomite 40,51 . In addition, the same organotrophic microbes produce DIC which is enriched in 13 C, similar to the DIC which has migrated upward from depth, along with the oils and methane. Dolomite precipitation initiates around extracellular polymeric substances excreted by the microbes and, since the microbial activity was centred around the oil-covered water droplets, saline water becomes trapped inside microdolomites. Although microbial activity appears to be focused at depths less than 30mbsf, the microdolomites continue to grow as they are buried with the hydrate, increasing in diameter over time and developing dolomite coatings with higher Mg:Ca ratios.
Although the presence of microdolomites has only recently been observed in Joetsu Basin, future research will focus on whether or not they are present in other environments, such as shallow permafrost or deep pore-filling hydrates. Given that the conditions leading to shallow hydrate growth are not unique to the Japan Sea, it is also likely that this unusual microbiome is present in other settings, including permafrost hydrate, other offshore hydrate settings, and areas that preserve sedimentological evidence of fossil gas seeps. Given that spheroidal microdolomites can potentially encapsulate seawater and organic matter inside, such grains may potentially preserve valuable information regarding ancient life.
Methods
Sample collection. The contents of the clear polycarbonate liners were inspected as soon as the core arrived on deck. Where massive hydrate was observed, 10-20 cm whole-round sections were cut immediately and preserved in liquid nitrogen for on-shore research. Remaining core was cut into 1 m sections for core description, including intervals which also included nearly pure hydrate.
Identical procedures were used to disassociate gas hydrate on-shore and shipboard; the outer hydrate portions removed, 20-30 cc of clean hydrate dissociated in 50 cc syringe (preformed in a fume hood due to hydrogen sulphide), and evolved gases collected in aluminium polymer bags with Teflon stopcocks. The remaining liquid phase was agitated with a vortex stirrer, transferred to a 50 cc tube and centrifuged. A Pasteur pipette was used to collect oil residues from the surface, the liquid phase was decanted and a solid phase at the bottom recovered, rinsed and cleaned. (Residue was centrifuged twice with 18 MΩ deionized water, final rinse with ethanol prior to drying at 40 °C).
X-Ray Diffractometry (XRD).
The hydrate-residue had a grain-size that was sufficiently fine that it did not need to be ground for powder XRD. XRD was carried out on a Rigaku Ultima IV with the following settings: rotation: 40 RPM, 2θ-range: 5° -85°, 2θ-step size: 0.02°, scan step speed: 0.8 s, Generator: 40 kV and 30 mA. The stoichiometric ratio of Mg:Ca in the microdolomites was determined from the XRD patterns using the method of Turpin 20 (Table S1). Mineralogical abundances were determined through Reitveld Refinement (Table S2) using Profex software 19 . Of the 39 samples, 28 had >15% dolomite and were taken for further analysis (11 had a high marine clay-content and were not analysed further).
Microscopy. External features of uncoated microdolomites were observed with a Keyence VE-8800 Real
Surface-View Scanning Electron Microscope (3 kV to 8 kV). Internal details of the microdolomites were observed using a Keyence Digital Microscopy System and polished resin blocks. Resin blocks were prepared by embedding the microdolomite in Luff Araldite resin (Polysciences), placing the resin under low-vacuum to remove air-bubbles and, once hardened, polishing with plastic lapping film (3M). This procedure was carried out with dry lapping film to avoid dissolution of grain materials. To image the internal presence of microbial DNA, the microdolomite samples were sectioned with a diamond-blade microtome and stained with SYBR green. Confocal epifluorescence microscopy was done with an Olympus BX51 fluorescence microscope equipped with an Olympus DP71 charge-coupled device (CCD) camera.
Elemental mapping and electron probe microanalysis. Elemental mapping was carried out on Pd/Pt-coated polished samples using JEOL Superprobe (XA-8230) with the following instrument parameters; Spot mode, AccV = 15 kV, ProbC = 50 nA, dwell time = 60 msec/pixel. Samples were polished as described above the Scientific RepoRtS | (2020) 10:1876 | https://doi.org/10.1038/s41598-020-58723-y www.nature.com/scientificreports www.nature.com/scientificreports/ microscopy of internal details. Elemental mapping provided intensity values that were mapped using ImageJ software 52 . For quantitative determination of the ratio Mg:Ca, analyses were taken at a 2 μm frequency along linear transects that crossed grain surfaces. The following calibration standards were scanned before and after each transect to correct for instrument drift and permit the calculation of relative intensities: CaSiO 3 for calcium and MgO for magnesium.
Stable isotopic composition of microdolomites. The carbon and oxygen stable isotopic the microdolomites was determined using a ThermoFisher GC-MS (MAT 243) equipped with a GasBench system. A small amount of powdered microdolomite (200-300 μg) was digested overnight in 100% H 3 PO 4 at 80 °C. The resultant CO 2 gas was then injected into the IRMS automatically for carbon and isotope analysis. The δ 13 C and δ 18 O values were calibrated using NBS19. A dolomite acid fractionation factor for δ 18 O was calculated according to Rosenbaum and Sheppard 53 .
Hydrate gas composition. The gas from hydrate dissociation was collected in polymer-coated aluminium gas-collection bags (GL Sciences) Gas standards were prepared from 99.9% CH 4 (GL Science) and the methane concentration was determined by injecting 0.5 mL of hydrate gas into a GC-FID (Shimadzu GC14B) with an alumina column (60/80, GL Science). A second sample split of 0.5 mL was injected into a GC-FPD (Shimadzu GC-2014S) equipped with a β, β-ODPN 25% Uniport HP 6/80 column in order to determine H 2 S concentration. Calibration standards were prepared from a 10% H 2 S gas (Takachiho Chemical Industrial). The results of the two analyses were used to calculate a molar ratio of H 2 S/CH 4 . Stable isotopic analysis of the δ 13 C of CO 2 in the dissociated hydrate was carried out by first transferring gas into a vacuum chamber with a Pfeiffer Prisma 200 quadrupole mass spectrometer to analyse gas composition. Samples with less than 10% air contamination (based on O 2 content), were then passed through a liquid-nitrogen cold trap in order to separate CO 2 . Carbon isotopes were then measured for the CO 2 using an IsoPrime 100 mass spectrometer. oil analysis. Biomarker analysis was carried out on a 6890N Network GC system interfaced to a 5975 inert mass selective detector with a PTV injector. Oil was extracted from aliquots oily hydrate water by liquid-liquid extraction. Extractions were performed by shaking water in the presence of dichloromethane and the products of each stage were combined. A quantitation standard of 5β-cholane was added. Extracts were analysed by GC-MS using the following method: PTV injector (300 °C) operating in splitless mode; GC temperature program was as follows; 60 °C to 120 °C at 20 °C/min then from 120 °C to 290 °C at 4 °C/min. The column was a Greyhound GC-5 (HP-5 equivalent phase; 30 m length, 250 µm ID and 0.25 µm film thickness). The MS was operated in sim mode (less than 20 ions & dwell time less than 40 ms). Compounds were identified by reference to well-characterized samples of biodegraded oil from the Niger Delta 54 . Surface enhanced Raman spectroscopy of asphaltene was done using a BWTek i-Raman Pro fitted with a 532 nm light source and mounted on 20 × video-microscope and employing a gold-coated glass substrate. A small aliquot of extract and asphaltene-quantitation standards dissolved in dichloromethane were absorbed on the gold surface. Surface enhanced Raman spectra were collected by accumulating 1000 spectra over a 300 ms duration in the range 200-2000 cm −1 , with the 1200 to 1800 cm −1 region used. Laser spot size was approximately 2-4 μm and laser power was 40-60% (<13 mW delivered to the sample). Quantification was performed using the procedure in Bowden and Taylor 55 ; deconvolution with entire spectra of asphaltene and interfering compounds and quantification of asphaltene. Microbial content of microdolomites. As previously described 56 , 0.1 g of crushed microdolomite sample was incubated at 65 °C for 30 min in 150 μL of alkaline solution (pH 13.5, 75 μL of 0.5 N NaOH, and 75 μL of TE buffer including 10 mM Tris-HCl and 1 mM EDTA). Following centrifugation at 5,000 g for 30 s at room temperature, the supernatant was neutralized with 750 μL of TE buffer and 150 μL of 1 M Tris-HCl (pH 6.5). The DNA was concentrated by ethanol precipitation from the DNA-bearing solutions (pH 7.0-7.5) and the DNA precipitate was dissolved in 50 μL of TE buffer (pH 8.0). Sequencing and phylogenic analysis was carried out with an Illumina MiSeq sequencer. Using the primers Uni530F and Uni907R 57 , including TruSeq adapter sequences and 7-mer index 58 16S rRNA gene sequences was amplified by PCR using LA taq polymerase (Takara-Bio, Inc., Japan) for Illumina MiSeq sequencing. Thermal cycling was performed with 35 cycles of denaturation at 96 °C for 20 s, annealing at 58 °C for 45 s, and extension at 72 °C for 120 s. The PCR products were subjected to electrophoresis on 1.5% agarose gels and purified using the MinElute Gel Extraction Kit (Qiagen). The purified PCR products were mixed and used as templates for sequencing by MiSeq Genome Analyzer using MiSeq Reagent Nano Kit v2 with 500 cycles following the manufacturer's instructions (Illumina, USA). Raw reads were processed using QIIME2 59 for Phylotype composition analyses, including quality assessment, quality trimming, chimera detection and OTU clustering (97% cut-off). Initial taxonomic assignment was determined using a BLASTn-based similarity search against a nucleotide collection consisting of sequences from GenBank, European Molecular Biology Laboratory (EMBL), DNA Data Bank of Japan (DDBJ), and Reference Sequence (RefSeq) 60 . | 6,722.8 | 2020-02-05T00:00:00.000 | [
"Geology"
] |
Time-Varying SAR Interference Suppression Based on Delay-Doppler Iterative Decomposition Algorithm
Narrow-band interference (NBI) and Wide-band interference (WBI) are critical issues for synthetic aperture radar (SAR), which degrades the imaging quality severely. Since some complex signals can be modeled as linear frequency modulated (LFM) signals within a short time, LFM-WBI and NBI are mainly discussed in this paper. Due to its excellent energy concentration and useful properties (i.e., auto-terms pass through the origin of Delay-Doppler plane while cross-terms are away from it), a novel nonparametric interference suppression method using Delay-Doppler iterative decomposition algorithm is proposed. This algorithm consists of three stages. First, we present signal synthesis method (SSM) from ambiguity function (AF) and cross ambiguity function (CAF) based on the matrix rearrangement and eigenvalue decomposition. Compared with traditional SSM from Wigner distribution (WD), the proposed SSM can synthesize a signal faster and more accurately. Then, based on unique properties in Delay-Doppler domain, a mask algorithm is applied for interference identification and extraction using Radon and its inverse transformation. Finally, a signal iterative decomposition algorithm (IDA) is utilized to subtract the largest interference from the received signal one by one. After that, a well-focused SAR imagery is obtained by conventional imaging methods. The simulation and measured data results demonstrate that the proposed algorithm not only suppresses interference efficiently but also preserves the useful information as much as possible.
Introduction
Synthetic aperture radar (SAR) has become an important instrument for earth mapping and been widely utilized in both military surveillance and civilian exploration.However, well-focused SAR images are always prominently corrupted by untargeted interference caused by natural or man-made factors, especially the interferences whose frequencies fall into the frequency spectrum of useful signals [1,2].Although the two-dimensional matched filter has an inherent ability in interference suppression, interferences with stronger power will defocus the image and degrade the imaging quality seriously [3,4].
Since the existence of interference would seriously degrade the quality of SAR imagery, interference detection and suppression have been paid increasing attention in the SAR community.In terms of ratio of the interference bandwidth to the useful signal, interference is generally categorized into two groups: narrowband interference (NBI, the ratio is smaller than 1%) and wideband interference (WBI, the ratio is greater than 1%).Compared with WBI, NBI is much easier to deal with.To obtain high-quality images, the NBI suppression algorithm can mainly be classified into two classes: parametric and non-parametric methods.In parametric approaches, the interferences are usually modeled as a summation of complex sinusoidal waves and their parameters, such as amplitude, frequency and phase, are estimated according to the least-mean square method [5][6][7] or maximum likelihood criteria [8].Then, NBI can be reconstructed and subtracted from the received echoes.However, without prior knowledge, model mismatch will lead to an inaccurate estimated of the NBI and result in great degradation of the suppression performance.Additionally, it is difficult to obtain accurate high-dimensional parameters for non-stationary NBI.The other is the non-parametric method, including the notch filtering method [9][10][11][12][13] and the eigen-subspace projection method [14].These methods avoid the complicated NBI modeling or multiparameter estimation.Notch filtering method is one of the most widely used interference suppression techniques in practice.Several excellent works are carried out on notch filter design, such as minimax optimization based IIR notch filter [10], Sliding DFT Phase Locking Scheme [11], adaptive filter [12,13], etc.They employ spectral estimation to distinguish the interference and design a proper filter to remove it in the frequency domain.These methods work based on the assumption that only a fraction of frequency bins of NBI are overlapped with that of useful signals.However, the frequency notch filtering method may cause the discontinuity of signal spectrum and further lead to the loss of useful signal.Thus, the signal-to-noise ratio (SNR) of the SAR image may degrade seriously.The eigen-subspace projection method projects echoes onto the interference subspace and the signal subspace, respectively.This approach has good performance for stationary interference suppression, while the suppression would lead to significant signal loss when time-varying interference exists.In addition, independent component analysis [15][16][17], empirical mode decomposition [18], sparsity and low-rank method [19], and the other excellent works [20][21][22] are proposed to cope with the time-varying interference with little useful signal loss.
By contrast, the WBI removal approaches are more complicated, since its bandwidth is large and occupies a large portion of the bandwidth of the useful SAR signal.The best choices of WBI suppression method are mainly based on multi-antenna, beamforming and space-time adaptive processing techniques, in which adaptive nulling is formed in the direction of the interference [23,24].However, the hardware cost and complexity become more unaffordable, as the number of antennas increases [25].Moreover, these methods perform poorly to counter jamming entering from the main lobe.Another WBI suppression framework is based on the analysis of the time-frequency (TF) properties of interference.TF based interference suppression methods can also be generally categorized into two groups: parametric and nonparametric approaches.In parametric methods, WBI is modeled as polynomial phase signal, whose parameters can be estimated by the fractional Fourier transform (FrFT) [26], polynomial phase transform (PPT) [27], high-order ambiguity function (HAF) [28], product high-order ambiguity function (PHAF) [29] and other time-frequency analysis-based parameter estimation methods [30][31][32][33].Similar to the parametric methods in NBI suppression, model mismatch and parameter estimation error may lead to great degradation of the SAR image quality.Nonparametric methods mainly focus on TF characteristic analysis and TF filter design to remove the WBI and preserve the SAR signal.These methods usually assume that the WBI is concentrated in the TF domain.In this paper, we mainly focus on the linear frequency modulated (LFM) WBI (LFM-WBI), since some complex signal can be modeled as an LFM signal within a short time.Due to its good concentration at each time slice, short time Fourier transform (STFT) [34][35][36][37][38][39][40][41] has been shown as an effective tool for time-varying WBI suppression.The energy of interference concentrates in a few frequency bins in the instantaneous frequency spectrum at each time slice.However, its TF resolution varies with the bandwidth of signal, i.e., the larger the bandwidth is, the worse the TF resolution will be.Compared with STFT, Wigner distribution (WD) [42][43][44] is more effective for time-varying signal analysis due to its excellent energy concentration.For multiple components, WD's bilinear transformation always produces unwanted cross-terms, which may severely impede auto-terms identification.Furthermore, the traditional signal synthesis method (SSM) from WD is quite time consuming and inaccurate.Ambiguity function (AF) [32,45] has the same energy concentration as WD does.Similarly, due to its bilinear transformation, AF will also suffer from the identifiability problem when dealing with multi-component signal.However, AF analysis tool has a useful property that auto-term passes through the origin of the AF plane and the cross-terms are away from it [32,45].By making full use of this property, we propose an efficient interference suppression algorithm.First, we present an SSM from AF and cross AF (CAF).Then, a binary mask based on the Radon transform (RT) and its inverse transform is constructed for auto-terms extraction and cross-terms suppression, which overcomes the cross-terms identifiability issue in WD.Finally, an AF-CAF based iterative decomposition method (IDM) (AF-CAF-IDA) is presented to decompose a signal by subtracting the largest reconstructed interference component from the received signal one by one.After that, well-focused SAR imaging results can be obtained by conventional imaging methods.The proposed algorithm has three advantages: (1) both AF and CAF have excellent energy concentration; (2) auto-terms of interferences are more easily identified and extracted in the AF and CAF domains due to their useful properties; and (3) SSM from AF and CAF is faster and more accurate than the traditional SSM from WD. Thanks to these advantages, the proposed algorithm not only suppresses interference but also preserves the useful information as much as possible.The performance analyses of the simulated data and measured data demonstrate that the proposed algorithm outperforms the notch filtering method and the TF filtering method.
The paper is organized as follows.After the Introduction, the mathematical model of the received signal is given in Section 2. To identify and suppress the interference efficiently, AF-CAF-IDA based interference suppression algorithm is proposed in Section 3. The experimental analysis is illustrated in Section 4. Finally, conclusions are drawn in Section 5.
For readability, the main abbreviations used in this paper are listed in Table 1.
Mathematical Model of Received Signal
Assume that the SAR system transmits P pulses, and each received echo, during a pulse repetition time, consists of N range samples, then the useful signals with interference and noise can be modeled by where S n is the useful signal, I n is the interference, N n is the additional noise, and n = 1, 2, . . ., N is the fast time.
For NBI, its frequency spectrum usually concentrates within a narrow frequency bins, which can be written as [4] where a l , f l and ϕ l denote the amplitude, frequency and phase of the l-th NBI, respectively, and L represents the number of NBI.
For WBI, LFM-WBI is mainly discussed in this paper, which can be expressed as [36] where γ l is the l-th chirp rate of LFM-WBI.
Figure 1a shows the frequency spectrum of received signal with NBI and LFM-WBI.It is obvious that NBI is easily captured according to its amplitude changes in frequency domain, while the spectrum of LFM-WBI distributes along the whole frequency band, which increases difficulties in interference identification and suppression.For STFT, the interferences are concentrated along the straight lines in the TF plane, which occupies much smaller frequency bins at each time slice than in the frequency domain, as illustrated in Figure 1b.For WD, it has a good energy concentration property for LFM signals.However, for multiple components, an unwanted cross-terms problem may severely impede auto-terms identification, as shown in Figure 1c.Although the cross-terms also occur in the AF plane, AF has a useful property that the auto-terms pass through the origin of AF plane and the cross-terms are away from it, as illustrated in Figure 1d.Based on this property, the auto-terms of interference are easily identified.Thus, an interference suppression algorithm in AF plane is proposed in the following.
where l γ is the l-th chirp rate of LFM-WBI.
Figure 1a shows the frequency spectrum of received signal with NBI and LFM-WBI.It is obvious that NBI is easily captured according to its amplitude changes in frequency domain, while the spectrum of LFM-WBI distributes along the whole frequency band, which increases difficulties in interference identification and suppression.For STFT, the interferences are concentrated along the straight lines in the TF plane, which occupies much smaller frequency bins at each time slice than in the frequency domain, as illustrated in Figure 1b.For WD, it has a good energy concentration property for LFM signals.However, for multiple components, an unwanted cross-terms problem may severely impede auto-terms identification, as shown in Figure 1c.Although the cross-terms also occur in the AF plane, AF has a useful property that the auto-terms pass through the origin of AF plane and the cross-terms are away from it, as illustrated in Figure 1d.Based on this property, the auto-terms of interference are easily identified.Thus, an interference suppression algorithm in AF plane is proposed in the following.
Interference Suppression Algorithm Using AF-CAF-IDA
In this section, AF-CAF-IDA based interference suppression is introduced.This algorithm consists of three stages: (1) SSM from AF-CAF; (2) binary mask construction for interference identification and extraction; and (3) IDA using SSM and mask construction.In the following, interference suppression including SSM and binary mask construction is discussed in details.
SSM from AF-CAF for Mono-Component Signal
Assume a discrete mono-component signal n x with N samples in length, and its vector form is denoted as , ,..., N x x x = x .To obtain its AF, symmetric instantaneous autocorrelation function (SIAF) of n x is calculated first, which can be expressed as * ( , ) where * ( ) ⋅ is the conjugate operator.Matrix form of ( , ) x R n m is written as For the convenience of description, Figure 2 gives the structure of x R with N being 5.The AF of n x can be obtained by calculating the one-dimensional (1-D) inverse Fast Fourier Transform (IFFT) of each row of x R , defined by AF ( , ) 2 ( , )
Interference Suppression Algorithm Using AF-CAF-IDA
In this section, AF-CAF-IDA based interference suppression is introduced.This algorithm consists of three stages: (1) SSM from AF-CAF; (2) binary mask construction for interference identification and extraction; and (3) IDA using SSM and mask construction.In the following, interference suppression including SSM and binary mask construction is discussed in details.
SSM from AF-CAF for Mono-Component Signal
Assume a discrete mono-component signal x n with N samples in length, and its vector form is denoted as x = [x 1 , x 2 , . . . ,x N ].To obtain its AF, symmetric instantaneous autocorrelation function (SIAF) of x n is calculated first, which can be expressed as For the convenience of description, Figure 2 gives the structure of R x with N being 5.The AF of x n can be obtained by calculating the one-dimensional (1-D) inverse Fast Fourier Transform (IFFT) of each row of R x , defined by According to above operations, the received echo can be transformed into AF plane.If AF x (m, u) is known, x n can be reconstructed by following steps.
First, calculate matrix R x by 1-D Fast Fourier Transform (FFT), which is expressed as Here, we consider another matrix R with its elements defined by R(n 1 , n 2 ) = x n 1 x * n 2 , where 1 ≤ n 1 , n 2 ≤ N. If matrix R is given, the original signal x can be obtained by eigenvalue decomposition (EVD), expressed as [42] where λ i and u i represent the eigenvalue and corresponding eigenvectors, respectively.The matrix of R has only one nonzero eigenvalue, with which the original signal x can be recovered as follows: where φ is the constant phase.The above signal synthesis method is under the assumption that R is known.However, according to Equation ( 7), only matrix R x can be obtained from the AF.Fortunately, compared with R, the relationship between R and R x can be summarized as follows: the elements in the m-th row of R x are located at 2m-th auxiliary diagonal of R, as shown in Figure 2. Based on this property, a new matrix Reven can be constructed by matrix rearrangement, defined by From Equation (10), it is clear that half of elements are zeros.To obtain the remaining elements in R, CAF transform is introduced.Consider another discrete signal y n , the symmetric instantaneous cross-correlation function (SICF) between x n and y n is calculated by Its matrix form can be written as From Equation ( 12), the CAF of x n and y n can be obtained by calculating the 1-D IFFT of each row of R x,y , defined by Similarly, if CAF x,y (m, u) is given, R x,y can be calculated by 1-D FFT, depicted as To obtain the remaining elements in the (2m − 1)-th auxiliary diagonal of R, we let y n = x n+1 , then R x,y can be reconstructed, as shown in Figure 2. Compared with R, we can draw another conclusion that the elements in the m-th row of R x,y are located at (2m − 1)-th auxiliary diagonal of upper triangular matrix in R. The elements in the (2m − 1)-th auxiliary diagonal of lower triangular matrix in R can be easily obtained, due to its characteristic of Hermitian matrix (i.e., R(n, m) = R * (m, n)).According to this property, another new matrix Rodd can be constructed by matrix rearrangement, defined by Therefore, R can be obtained by combining Reven with Rodd .Then, the original signal x can be reconstructed by EVD.
Compared with traditional SSM from WD, the major difference is the construction of matrix.In the traditional SSM from WD, an interpolation to WD is necessary, and all elements in the matrix R need to be calculated by discrete Fourier transform (DFT) one by one.In the SSM from AF-CAF, the matrix of R can be obtained by rearranging the matrix from the AF and CAF, and the elements in R can be calculated by FFT.Thus, it is time saving.Here, we take a signal with 512 samples in length as an example to check calculation time and signal energy loss.Results show that the calculation time of traditional synthesis method is 3.5 times of that of SSM from AF-CAF.The eigenvalues obtained by SSM from AF-CAF are shown in Figure 3a.There is only one nonzero eigenvalue with its energy being 512.Compared with traditional signal synthesis method, there are several nonzero eigenvalues, and the largest one is 503.4,as illustrated in Figure 3b.The signal energy loss is caused by the interpolation to the WD.Therefore, SSM from AF-CAF is more accurate.).According to this property, another new matrix ˆodd R can be constructed by matrix rearrangement, defined by Therefore, R can be obtained by combining ˆeven R with ˆodd R .Then, the original signal x can be reconstructed by EVD.
Compared with traditional SSM from WD, the major difference is the construction of matrix.In the traditional SSM from WD, an interpolation to WD is necessary, and all elements in the matrix R need to be calculated by discrete Fourier transform (DFT) one by one.In the SSM from AF-CAF, the matrix of R can be obtained by rearranging the matrix from the AF and CAF, and the elements in R can be calculated by FFT.Thus, it is time saving.Here, we take a signal with 512 samples in length as an example to check calculation time and signal energy loss.Results show that the calculation time of traditional synthesis method is 3.5 times of that of SSM from AF-CAF.The eigenvalues obtained by SSM from AF-CAF are shown in Figure 3a.There is only one nonzero eigenvalue with its energy being 512.Compared with traditional signal synthesis method, there are several nonzero eigenvalues, and the largest one is 503.4,as illustrated in Figure 3b.The signal energy loss is caused by the interpolation to the WD.Therefore, SSM from AF-CAF is more accurate.).According to this property, another new matrix ˆodd R can be constructed by matrix rearrangement, defined by Therefore, R can be obtained by combining ˆeven R with ˆodd R .Then, the original signal x can be reconstructed by EVD.
Compared with traditional SSM from WD, the major difference is the construction of matrix.In the traditional SSM from WD, an interpolation to WD is necessary, and all elements in the matrix R need to be calculated by discrete Fourier transform (DFT) one by one.In the SSM from AF-CAF, the matrix of R can be obtained by rearranging the matrix from the AF and CAF, and the elements in R can be calculated by FFT.Thus, it is time saving.Here, we take a signal with 512 samples in length as an example to check calculation time and signal energy loss.Results show that the calculation time of traditional synthesis method is 3.5 times of that of SSM from AF-CAF.The eigenvalues obtained by SSM from AF-CAF are shown in Figure 3a.There is only one nonzero eigenvalue with its energy being 512.Compared with traditional signal synthesis method, there are several nonzero eigenvalues, and the largest one is 503.4,as illustrated in Figure 3b.The signal energy loss is caused by the interpolation to the WD.Therefore, SSM from AF-CAF is more accurate.
SSM from AF-CAF for Multi-Component Signal
In this section, the SSM from AF-CAF for multi-component signal is presented.Consider a multi-component signal where subscript i denotes the i-th component of signal.The SIAF of x n can be written as Substituting Equation ( 17) into Equation ( 6) yields where AF x i ,a (m, u) and AF x i x j ,c (m, u) represent the auto-terms and cross-terms, respectively.Suppose that the cross-terms have been eliminated completely, the masked AF (MAF) is equal to the sum of AFs of individual components, which can be expressed as follows: Substituting Equation (19) into Equation ( 7), we have From Equation (20), it is obvious that inverse MAF equals to the sum of the inverse AFs of individual components.After that, the matrix rearrangement is applied to reconstruct the matrix Reven .
Similarly, the SICF of x n and y n (y n = ∑ i=1 y i,n ) can be expressed as The CAF of x n and y n can be given by where CAF x i y i ,a (m, u) and CAF x i y j ,c (m, u) represent the auto-terms and cross-terms, respectively.Assume that the cross-terms in CAF domain are removed clearly; the mask CAF (MCAF) also equals the sum of the CAFs of individual components, expressed as The inverse CAF of Equation ( 23) is given by It can be seen that inverse MCAF equals the sum of the inverse CAFs of individual components and the matrix Rodd can be obtained by matrix rearrangement.After combining Reven and Rodd , R can be rewritten as follows: After EVD, the multi-component signal can be synthesized by
Binary Mask Construction for Signal Extraction and Cross-Terms Suppression
The analysis above is based on the assumption that the cross-terms are removed clearly, otherwise Equation (26) would no longer hold.In this section, the mask algorithm is presented to suppress cross-terms.In the AF and CAF domain, the auto-terms of multi-LFM component signal have two properties: Property 1. Auto-terms have line-like features in the AF and CAF domains.Property 2. Auto-terms pass through the origins of the AF and CAF domains, while the cross-terms are away from the origins.
For detailed discussion and proof, please refer to Appendix A.
According to these two properties, an interference identification and cross-term suppression algorithm is proposed.Based on Property 1, Radon transform (RT) is utilized to integrate the auto-terms along the straight line and the integral value exhibits a distinct peak in the Radon AF (RAF) plane.The RT is defined by where δ(•) is the delta function, and ρ and α represent the polar distance and polar angle, respectively.Based on Property 2, auto-terms in the RAF plane are certainly located at zeros-polar distance slice (i.e., ρ = 0).According to these properties, auto-terms of interference can be integrated and identified at ρ = 0 slice in polar distance and polar angle (ρ − α) domain, as shown in Figure 4.If the amplitude at ρ = 0 slice jumps with peaks much larger than the mean value, then one can conclude that the data may be contaminated by interferences.In addition, the inverse RT is utilized to extract the interference and suppress the cross-terms.
SAR Imaging with AF-CAF-IDA Based Interference Suppression
According to the aforementioned analysis, AF-CAF-IDA is proposed to identify and suppress the interference.For simplicity, the flowchart of SAR image formation from raw data contaminated by interferences using the proposed scheme is shown in Figure 5.The detailed procedures can be summarized as the following steps: Step 1: Extract the p-th azimuth sample data n x , where 1 n N ≤ ≤ , 1 p P ≤ ≤ , N and P represent the total samples in fast time and slow time, respectively.
Step 2: Transform n x into the RAF domain to detect whether the interference exists.If interference exists, AF-CAF-IDM is utilized to suppress interference, as shown in dashed area of Figure 4; otherwise.go to Step 3. The AF-CAF-IDM consists of following steps: (1) Calculate the MAF and MCAF of received signal according to mask algorithm.
(2) Rearrange the IFFT of MAF and MCAF and recovery the signal via EVD, and then estimate the parameter φ by solving the following equation: The processes of mask algorithm in the AF and CAF can be depicted as follows: (1) Calculate the AF (AF(m, u)) and CAF (CAF(m, u)) of the received signals, respectively.
(3) Compare the elements of IR(0, α max ) with zero, and then binary mask can be expressed as follows: (4) Extract the interference in the AF and CAF plane as follows: For simplicity, a flowchart of interference identification and extraction using mask algorithm is shown in Figure 4.After interference, identification and extraction, the interference can be recovered by SSM from MAF and MCAF.
SAR Imaging with AF-CAF-IDA Based Interference Suppression
According to the aforementioned analysis, AF-CAF-IDA is proposed to identify and suppress the interference.For simplicity, the flowchart of SAR image formation from raw data contaminated by interferences using the proposed scheme is shown in Figure 5.The detailed procedures can be summarized as the following steps: To demonstrate the validity of AF-CAF-IDA, a chirp signal (i.e., useful signal) with NBI and LFM-WBI is generated.The signal can be modeled by where where 1 ( ) S n , 2 ( ) S n and 3 ( ) S n represent the chirp signal, NBI and LFM-WBI, respectively.( ) N n is additive Gaussian noise with the signal-to-noise-ratio (SNR) being 5 dB. Figure 6a shows the frequency spectrum of ( ) S n .In the presence of the interferences, the chirp signal is seriously contaminated by NBI and LFM-WBI.Figure 6b shows the WD of ( ) S n , where NBI and LFM-WBI are well concentrated in TF plane with cross-terms.However, it is difficult to distinguish the autoterms and cross-terms in the WD domain.Figure 6c gives the AF of the received signal.It is obvious that the auto-terms of chirp signal, NBI and LFM-WBI pass through the origin of the AF plane, and cross-terms are away from it.Furthermore, the auto-terms of chirp signal is highly overlapped with that of LFM-WBI, since they have the same chirp rates.Therefore, two cases should be considered in this example.Step 1: Extract the p-th azimuth sample data x n , where 1 ≤ n ≤ N, 1 ≤ p ≤ P, N and P represent the total samples in fast time and slow time, respectively.
Step 2: Transform x n into the RAF domain to detect whether the interference exists.If interference exists, AF-CAF-IDM is utilized to suppress interference, as shown in dashed area of Figure 4; otherwise.go to Step 3. The AF-CAF-IDM consists of following steps: (1) Calculate the MAF and MCAF of received signal according to mask algorithm.
(2) Rearrange the IFFT of MAF and MCAF and recovery the signal via EVD, and then estimate the parameter φ by solving the following equation: (3) Subtract the synthesized component from the received signal and iterate above steps until all interferences in the p-th azimuth sample data are suppressed.
Step 3: Let p = p + 1; if p is less than or equal to P, iterate above steps to ensure interferences at each azimuth gate are completely eliminated.Finally, a well-focused SAR image is obtained by the conventional radar imaging algorithm.
To demonstrate the validity of AF-CAF-IDA, a chirp signal (i.e., useful signal) with NBI and LFM-WBI is generated.The signal can be modeled by where where S 1 (n), S 2 (n) and S 3 (n) represent the chirp signal, NBI and LFM-WBI, respectively.N(n) is additive Gaussian noise with the signal-to-noise-ratio (SNR) being 5 dB. Figure 6a shows the frequency spectrum of S(n).In the presence of the interferences, the chirp signal is seriously contaminated by NBI and LFM-WBI.Figure 6b shows the WD of S(n), where NBI and LFM-WBI are well concentrated in TF plane with cross-terms.However, it is difficult to distinguish the auto-terms and cross-terms in the WD domain.Figure 6c gives the AF of the received signal.It is obvious that the auto-terms of chirp signal, NBI and LFM-WBI pass through the origin of the AF plane, and cross-terms are away from it.Furthermore, the auto-terms of chirp signal is highly overlapped with that of LFM-WBI, since they have the same chirp rates.Therefore, two cases should be considered in this example.between the synthesized LFM-WBI and original component.It is clear that the synthesized component fits well with the original one.After subtracting the LFM-WBI from the rest signal, the desired chirp signal is obtained.Figure 6k,l presents the WD of the chirp signal and its real part, respectively.From this example, we can conclude that AF-CAF-IDA is effective for interference suppression with only a small loss of signal.
Experimental Analysis
The above sections have addressed interference suppression based on AF-CAF-IDA theory.In this section, we demonstrate the effectiveness of the interference suppression algorithm by dealing with SAR data.Case (1): Mono-component signal synthesis.After conducting the mask algorithm, the greatest signal (i.e., NBI) is extracted, as shown in Figure 6d.By performing the EVD, only one large eigenvalue corresponding to the NBI is obtained, as illustrated in Figure 6e.In Figure 6f, a comparison between the reconstructed signal and the original NBI is presented, which implies that the synthesized signal is in good agreement with the original signal.It is obvious that the NBI is eliminated cleanly after subtracting reconstructed signal from the original signal, as shown in Figure 6g.
Case (2): Multi-component signal synthesis.After performing the mask algorithm, the MAF of the rest signal is equal to the sum AFs of chirp signal and LFM-WBI, as illustrated in Figure 6h.In Figure 6i, there are two large eigenvalues corresponding to the chirp signal and LFM-WBI, respectively, which is consistent with Equation (25). Figure 6j gives the comparison of the real part between the synthesized LFM-WBI and original component.It is clear that the synthesized component fits well with the original one.After subtracting the LFM-WBI from the rest signal, the desired chirp signal is obtained.Figure 6k,l presents the WD of the chirp signal and its real part, respectively.From this example, we can conclude that AF-CAF-IDA is effective for interference suppression with only a small loss of signal.
Experimental Analysis
The above sections have addressed interference suppression based on AF-CAF-IDA theory.In this section, we demonstrate the effectiveness of the interference suppression algorithm by dealing with SAR data.
Comparison between Interference Suppression Algorithms
In this part, comparisons of the frequency-notch filtering, TF filtering and AF-CAF-IDA are provided.The simulated NBI and LFM-WBI are added to the real SAR data to verify the validity of AF-CAF-IDA.After range compression, Figure 7a,b presents the frequency spectrum and TF spectrogram of SAR echo with interferences, respectively.It is clear that the frequency spectrum of LFM-WBI occupies many frequency bins.If the frequency-domain notch filter is adopted, the useful information, which is overlapped with the LFM-WBI in the frequency spectrum, are also removed simultaneously, as shown in Figure 7a,d.The NBI and LFM-WBI are concentrated along the straight lines in the TF plane with only a fraction of frequency bins occupied at each time slices, as shown in Figure 7b.The filtering in the TF plane can suppress the interferences with smaller loss of the useful signal than frequency notch does, and its frequency spectrum and TF spectrogram are given in Figure 7e,f.Figure 7g,h illustrates the results of AF-CAF-IDA based interference suppression algorithm.In this figure, it is observed that interferences are eliminated completely while the useful signal is preserved well.To make a quantitative evaluation of the proposed algorithm, signal distortion ration (SDR) is introduced, which is defined by [36] SDR = 10 log 10 where d(n) represents the signal after interference suppression and d 0 (n) denotes the original signal without interference.The SDR of these three algorithms are calculated and the results are listed in Table 2.In this table, it is noted that the signal losses of frequency-notch filtering and TF filtering are greater than that of the proposed algorithm, which implies that AF-CAF-IDA can not only suppresses the interference but also preserve the signal energy as much as possible.
where ( ) 2. In this table, it is noted that the signal losses of frequency-notch filtering and TF filtering are greater than that of the proposed algorithm, which implies that AF-CAF-IDA can not only suppresses the interference but also preserve the signal energy as much as possible.
Results of Measured Data
In this section, measured SAR data contaminated by serious interference are utilized.To present the image quality improvement of proposed algorithm, two metrics of SNR and contrast in the image domain [36] are introduced in the following discussion.The SNR is defined as In this section, measured SAR data contaminated by serious interference are utilized.To present the image quality improvement of proposed algorithm, two metrics of SNR and contrast in the image domain [36] are introduced in the following discussion.The SNR is defined as where y i represents the i-th prominent scatterer and ŷj denotes the j-th pixel of the surrounding region.N 1 and N 2 denote the number of prominent scatterer pixels and noise pixels, respectively.A greater SNR results in a better focused image.Another metric is image contrast.For a P × N SAR image (P is the number of pixels in range, and N is the number in the azimuth), its contrast can be defined as mean z p,n 2 (35) where z p,n represents the (p-th, n-th) pixel in the SAR image; mean() denotes the mean value of the signal.From Equation (35), the greater the contrast D is, the better the image quality well be.
(1) Result of NBI Suppression: Figure 8a presents the SAR image with NBI, where the bright lines overshadow the important features, such as the ships and fields.Figure 8b presents the SAR image after frequency-domain notch filtering.Although the majority of interference energy in frequency bins has been suppressed, the useful information, which is located in the same bins, is also removed clearly.Due to significant portion loss of useful signal, the SNR of the imagery may degrade.Figure 8c shows the imaging result after NBI suppression using TF filtering.Compared with Figure 8b, only a small fractional of useful signal is lost and image contours have more clarity.However, it is clearly seen that the targets are defocused.Figure 8d presents the SAR image after AF-CAF-IDA.It is observed that the image is well focused and bright lines have been suppressed efficiently.In addition, the SNR and contrast D for three interference suppression algorithms are calculated.In Table 3, AF-CAF-IDA is with greater SNR and contrast than TF filter and notch filter methods, which not only suppresses the NBI but also preserves the useful information as much as possible.( ) mean denotes the mean value of the signal.From Equation ( 35), the greater the contrast D is, the better the image quality well be.
(1) Result of NBI Suppression: Figure 8a presents the SAR image with NBI, where the bright lines overshadow the important features, such as the ships and fields.Figure 8b presents the SAR image after frequency-domain notch filtering.Although the majority of interference energy in frequency bins has been suppressed, the useful information, which is located in the same bins, is also removed clearly.Due to significant portion loss of useful signal, the SNR of the imagery may degrade.Figure 8c shows the imaging result after NBI suppression using TF filtering.Compared with Figure 8b, only a small fractional of useful signal is lost and image contours have more clarity.However, it is clearly seen that the targets are defocused.Figure 8d presents the SAR image after AF-CAF-IDA.It is observed that the image is well focused and bright lines have been suppressed efficiently.In addition, the SNR and contrast D for three interference suppression algorithms are calculated.In Table 3, AF-CAF-IDA is with greater SNR and contrast than TF filter and notch filter methods, which not only suppresses the NBI but also preserves the useful information as much as possible.(2) Result of WBI Suppression: Figure 9a shows the image with LFM-WBI, where most portion of the scene in the image is covered due to existence of LFM-WBI. Figure 9b,c presents the imaging result after frequency-domain notch filtering and TF filtering, respectively.Because the majority of useful signal is interfered and filtered, the image is contaminated by noise and we can only see blurred image contours.Figure 9d shows the imaging result after AF-CAF-IDA.A clear image with (2) Result of WBI Suppression: Figure 9a shows the image with LFM-WBI, where most portion of the scene in the image is covered due to existence of LFM-WBI. Figure 9b,c presents the imaging result after frequency-domain notch filtering and TF filtering, respectively.Because the majority of useful signal is interfered and filtered, the image is contaminated by noise and we can only see blurred image contours.Figure 9d shows the imaging result after AF-CAF-IDA.A clear image with village, mountain, river and fields is presented.Furthermore, the results of SNR and contrast for these three interference suppression algorithms are listed in Table 4, indicating the advantages of the proposed algorithm over the others.To sum up, AF-CAF-IDA based interference algorithm can effectively suppress LFM-WBI with the useful signal being kept well.effectively suppress LFM-WBI with the useful signal being kept well.
Conclusions
In this paper, the Delay-Doppler distributions of time-varying SAR interferences (i.e., NBI and LFM-WBI) are analyzed via AF and CAF for the first time.By making full use of the properties in Delay-Doppler domain, a nonparametric interference suppression method based on the Delay-Doppler iterative decomposition algorithm is proposed.This algorithm has three advantages: (1) Compared with STFT, time-varying interference analysis in the Delay-Doppler domain has an excellent energy concentration property.(2) Compared with WD, AF has a unique property that the auto-terms pass through the origin of Delay-Doppler domain while cross-terms are away from it.Therefore, masking in the AF can solve the cross-terms identifiability problem and are very suitable for interference identification and extraction.(3) Since the matrix rearrangement and FFT are utilized, SSM from AF and CAF is more accurate and faster than the traditional SSM from WD. Thanks to these advantages, the proposed algorithm not only suppresses interference but also preserves the useful information as much as possible.The performance (such as SDR, SNR and Contrast) analyses of the simulated data and measured data demonstrate that the proposed algorithm outperforms the notch filtering method and the TF filtering method.
Conclusions
In this paper, the Delay-Doppler distributions of time-varying SAR interferences (i.e., NBI and LFM-WBI) are analyzed via AF and CAF for the first time.By making full use of the properties in Delay-Doppler domain, a nonparametric interference suppression method based on the Delay-Doppler iterative decomposition algorithm is proposed.This algorithm has three advantages: (1) Compared with STFT, time-varying interference analysis in the Delay-Doppler domain has an excellent energy concentration property.(2) Compared with WD, AF has a unique property that the auto-terms pass through the origin of Delay-Doppler domain while cross-terms are away from it.Therefore, masking in the AF can solve the cross-terms identifiability problem and are very suitable for interference identification and extraction.(3) Since the matrix rearrangement and FFT are utilized, SSM from AF and CAF is more accurate and faster than the traditional SSM from WD. Thanks to these advantages, the proposed algorithm not only suppresses interference but also preserves the useful information as much as possible.The performance (such as SDR, SNR and Contrast) analyses of the simulated data and measured data demonstrate that the proposed algorithm outperforms the notch filtering method and the TF filtering method.
Figure 2 .
Figure 2. Matrix rearrangement from AF and CAF.
Figure 3 .
Figure 3. Eigenvalues by the SSM from AF-CAF and traditional SSM from WD method: (a) SSM from AF-CAF; and (b) traditional SSM from WD.
Figure 2 .
Figure 2. Matrix rearrangement from AF and CAF.
Figure 2 .
Figure 2. Matrix rearrangement from AF and CAF.
Figure 3 .
Figure 3. Eigenvalues by the SSM from AF-CAF and traditional SSM from WD method: (a) SSM from AF-CAF; and (b) traditional SSM from WD.
Figure 3 .
Figure 3. Eigenvalues by the SSM from AF-CAF and traditional SSM from WD method: (a) SSM from AF-CAF; and (b) traditional SSM from WD.
Figure 4 .
Figure 4.The flowchart of interference identification and extraction using mask algorithm.
Figure 4 .
Figure 4.The flowchart of interference identification and extraction using mask algorithm.
Figure 6 .
Figure 6.Interference suppression based on AF-CAF-IDA: (a) frequency spectrum of the original signal; (b) WD of the original signal.(c) AF of the original signal; (d) NBI extraction from the original signal; (e) eigenvalues obtained from masked signal (NBI); (f) real part of the reconstructed NBI; (g) AF of the rest signal; (h) MAF of the rest signal; (i) eigenvalues obtained from masked signal (LFM-WBI and chirp signal); (j) real part of the reconstructed LFM-WBI; (k) WD of the chirp signal; and (l) real part of the chirp signal.
Figure 6 .
Figure 6.Interference suppression based on AF-CAF-IDA: (a) frequency spectrum of the original signal; (b) WD of the original signal.(c) AF of the original signal; (d) NBI extraction from the original signal; (e) eigenvalues obtained from masked signal (NBI); (f) real part of the reconstructed NBI; (g) AF of the rest signal; (h) MAF of the rest signal; (i) eigenvalues obtained from masked signal (LFM-WBI and chirp signal); (j) real part of the reconstructed LFM-WBI; (k) WD of the chirp signal; and (l) real part of the chirp signal.
n represents the signal after interference suppression and ( ) 0 d n denotes the original signal without interference.The SDR of these three algorithms are calculated and the results are listed in Table
Figure 7 .
Figure 7.Comparison between interference suppression methods: (a) frequency spectrum of signal with interferences; (b) spectrogram of signal with interferences; (c) frequency spectrum of signal after frequency-domain notch filtering; (d) spectrogram of signal after frequency-domain notch filtering; (e) frequency spectrum of signal after TF filtering; (f) spectrogram of signal after TF filtering; (g) frequency spectrum of signal with LFM-WBI after AF-CAF-IDA; and (h) spectrogram of signal with LFM-WBI after AF-CAF-IDA.
Figure 7 .
Figure 7.Comparison between interference suppression methods: (a) frequency spectrum of signal with interferences; (b) spectrogram of signal with interferences; (c) frequency spectrum of signal after frequency-domain notch filtering; (d) spectrogram of signal after frequency-domain notch filtering; (e) frequency spectrum of signal after TF filtering; (f) spectrogram of signal after TF filtering; (g) frequency spectrum of signal with LFM-WBI after AF-CAF-IDA; and (h) spectrogram of signal with LFM-WBI after AF-CAF-IDA.
p-th, n-th) pixel in the SAR image;
Figure 8 .
Figure 8. Imaging results of NBI suppression: (a) image of raw data; (b) image after frequency-domain notched filtering; (c) image using the TF filtering; and (d) image using AF-CAF-IDA.
Figure 8 .
Figure 8. Imaging results of NBI suppression: (a) image of raw data; (b) image after frequency-domain notched filtering; (c) image using the TF filtering; and (d) image using AF-CAF-IDA.
Figure 9 .
Figure 9. Imaging results of WBI suppression: (a) image of raw data; (b) image after frequencydomain notched filtering; (c) image using the TF filtering; and (d) image using AF-CAF-IDA.
Funding:
This research was funded by National Nature Science Foundation of China (NSFC) under Grants 61701414, 61801390, 61601372 and 61601373.This work was also supported by Science and Technology Fund Project, Postdoctoral Innovation Talent Support Program under Grant BX201700199 and China Postdoctoral Science Foundation under Grant 2018M631123 and 2017M623240.
Figure 9 .
Figure 9. Imaging results of WBI suppression: (a) image of raw data; (b) image after frequency-domain notched filtering; (c) image using the TF filtering; and (d) image using AF-CAF-IDA.
Remote Sens. 2018, 10, x FOR PEER REVIEW 6 of 19 upper triangular matrix in R .The elements in the (2m − 1)-th auxiliary diagonal of lower triangular matrix in R can be easily obtained, due to its characteristic of Hermitian matrix (i.e., * = Remote Sens. 2018, 10, x FOR PEER REVIEW 6 of 19 upper triangular matrix in R .The elements in the (2m − 1)-th auxiliary diagonal of lower triangular matrix in R can be easily obtained, due to its characteristic of Hermitian matrix (i.e., * =
Table 2 .
Evaluation Metrics of Three Algorithms of Interference Suppression.
Table 2 .
Evaluation Metrics of Three Algorithms of Interference Suppression.
Table 3 .
Evaluation Metrics of Three Algorithms of NBI Suppression.
Table 3 .
Evaluation Metrics of Three Algorithms of NBI Suppression.
Table 4 .
Evaluation Metrics of Three Algorithms of LFM-WBI Suppression.
Table 4 .
Evaluation Metrics of Three Algorithms of LFM-WBI Suppression. | 10,169.6 | 2018-09-18T00:00:00.000 | [
"Engineering"
] |
Oxidative stress enhances the therapeutic action of a respiratory inhibitor in MYC‐driven lymphoma
Abstract MYC is a key oncogenic driver in multiple tumor types, but concomitantly endows cancer cells with a series of vulnerabilities that provide opportunities for targeted pharmacological intervention. For example, drugs that suppress mitochondrial respiration selectively kill MYC‐overexpressing cells. Here, we unravel the mechanistic basis for this synthetic lethal interaction and exploit it to improve the anticancer effects of the respiratory complex I inhibitor IACS‐010759. In a B‐lymphoid cell line, ectopic MYC activity and treatment with IACS‐010759 added up to induce oxidative stress, with consequent depletion of reduced glutathione and lethal disruption of redox homeostasis. This effect could be enhanced either with inhibitors of NADPH production through the pentose phosphate pathway, or with ascorbate (vitamin C), known to act as a pro‐oxidant at high doses. In these conditions, ascorbate synergized with IACS‐010759 to kill MYC‐overexpressing cells in vitro and reinforced its therapeutic action against human B‐cell lymphoma xenografts. Hence, complex I inhibition and high‐dose ascorbate might improve the outcome of patients affected by high‐grade lymphomas and potentially other MYC‐driven cancers.
18th Oct 2022 1st Editorial Decision 18th Oct 2022 Dear Dr. Amati, Thank you again for submitting your work to EMBO Molecular Medicine. We have now heard back from three referees who agreed to evaluate your manuscript. As you will see from the reports below, the referees acknowledge the potential interest of the study. However, they raise a series of concerns, which we would ask you to address in a major revision of the manuscript. I think that the referees' recommendations are relatively straightforward, so there is no need to reiterate their comments. In particular, Referee #2 was concerned that most of the experiments were performed in an in vitro context, and we would ask you to strengthen the in vivo relevance for at least some of the key findings. Referee #3' major comments #4 and #5 need to be carefully addressed.
We would welcome the submission of a revised version within three months for further consideration. Please note that EMBO Molecular Medicine in principle only allows a single round of revision. As acceptance or rejection of the manuscript will depend on another round of review, your responses should be as complete as possible.
EMBO Molecular Medicine has a "scooping protection" policy, whereby similar findings that are published by others during review or revision are not a criterion for rejection. Should you decide to submit a revised version, I do ask that you get in touch after three months if you have not completed it to update us on the status.
We are aware that many laboratories cannot function at full efficiency during the current COVID-19/SARS-CoV-2 pandemic and have therefore extended our "scooping protection policy" to cover the period required for a full revision to address the experimental issues. Please let me know should you need additional time, and also if you see a paper with related content published elsewhere.
Please read below for important editorial formatting and consult our author's guidelines for proper formatting of your revised article for EMBO Molecular Medicine.
I look forward to receiving your revised manuscript.
Sincerely, Jingyi
Jingyi Hou Editor EMBO Molecular Medicine ***** When submitting your revised manuscript, please carefully review the instructions that follow below. We perform an initial quality control of all revised manuscripts before re-review; failure to include requested items will delay the evaluation of your revision.
We require: 1) A .docx formatted version of the manuscript text (including legends for main figures, EV figures and tables). Please make sure that the changes are highlighted to be clearly visible.
3) A .docx formatted letter INCLUDING the reviewers' reports and your detailed point-by-point responses to their comments. As part of the EMBO Press transparent editorial process, the point-by-point response is part of the Review Process File (RPF), which will be published alongside your paper. 4) A complete author checklist, which you can download from our author guidelines (https://www.embopress.org/page/journal/17574684/authorguide#submissionofrevisions). Please insert information in the checklist that is also reflected in the manuscript. The completed author checklist will also be part of the RPF. 5) Please note that all corresponding authors are required to supply an ORCID ID for their name upon submission of a revised manuscript.
6) It is mandatory to include a 'Data Availability' section after the Materials and Methods. Before submitting your revision, primary datasets produced in this study need to be deposited in an appropriate public database, and the accession numbers and database listed under 'Data Availability'. Please remember to provide a reviewer password if the datasets are not yet public (see https://www.embopress.org/page/journal/17574684/authorguide#dataavailability).
In case you have no data that requires deposition in a public database, please state so in this section. Note that the Data Availability Section is restricted to new primary data that are part of this study. 7) For data quantification: please specify the name of the statistical test used to generate error bars and P values, the number (n) of independent experiments (specify technical or biological replicates) underlying each data point and the test used to calculate p-values in each figure legend. The figure legends should contain a basic description of n, P and the test applied. Graphs must include a description of the bars and the error bars (s.d., s.e.m.). See also 'Figure Legend' guidelines: https://www.embopress.org/page/journal/17574684/authorguide#figureformat 8) At EMBO Press we ask authors to provide source data for the main and expanded view figures. Our source data coordinator will contact you to discuss which figure panels we would need source data for and will also provide you with helpful tips on how to upload and organize the files. 9) Our journal encourages inclusion of *data citations in the reference list* to directly cite datasets that were re-used and obtained from public databases. Data citations in the article text are distinct from normal bibliographical citations and should directly link to the database records from which the data can be accessed. In the main text, data citations are formatted as follows: "Data ref: Smith et al, 2001" or "Data ref: NCBI Sequence Read Archive PRJNA342805, 2017". In the Reference list, data citations must be labeled with "[DATASET]". A data reference must provide the database name, accession number/identifiers and a resolvable link to the landing page from which the data can be accessed at the end of the reference. Further instructions are available at . -For the figures that you do NOT wish to display as Expanded View figures, they should be bundled together with their legends in a single PDF file called *Appendix*, which should start with a short Table of Content. Appendix figures should be referred to in the main text as: "Appendix Figure S1, Appendix Figure S2" etc.
-Additional Tables/Datasets should be labeled and referred to as Table EV1, Dataset EV1, etc. Legends have to be provided in a separate tab in case of .xls files. Alternatively, the legend can be supplied as a separate text file (README) and zipped together with the Table/Dataset file. See detailed instructions here: . 11) The paper explained: EMBO Molecular Medicine articles are accompanied by a summary of the articles to emphasize the major findings in the paper and their medical implications for the non-specialist reader. Please provide a draft summary of your article highlighting -the medical issue you are addressing, -the results obtained and -their clinical impact.
This may be edited to ensure that readers understand the significance and context of the research. Please refer to any of our published articles for an example.
The authors combine in vitro studies, xenografts and metabolic analyses to discover that IACS+MYC deregulation disrupt redox homeostasis, sensitizing cells to the action of pro-oxidant drugs, like ascorbate. The combination of IACS and ascorbate proves to be very effective at killing different B cell non-Hodgkin lymphoma cell lines in vitro, and also in xenograft studies, which are also extended to two different, primary derived xenografts of double-hit lymphoma. These studies refine a previous report by this same group (Donati et al, Mol Oncol 2022;16).
MYC deregulation is thought to cause important metabolic changes in cells, some of which represent new dependencies. These metabolic dependencies could be used as points of entry for new therapeutic strategies. Exploiting these vulnerabilities offers an opportunity to treat MYC-driven cancers, and particularly those that are refractory to current therapies, like certain aggressive B cell lymphomas (i.e. double-hit lymphomas). Such unmet medical need makes new potential therapeutic combinations, like the one proposed in this manuscript, of particular interest.
The results of this study show that IACS-010759 increases oxidative stress by disrupting redox homeostasis. This response is in part compensated in cells overexpressing MYC by shuttling glucose toward the generation of NADPH via the pentose phosphate pathway. Inhibiting this pathway with specific drugs, or using vitamin C (ascorbate, a pro-oxidant), tilts the balance and reduces cell viability. These observations are all supported by a thorough and careful study on metabolites and experiments that take advantage of selective compounds that enhance of relieve this cellular response (e.g. NAC, BSO). Overall, the message is that MYC deregulation sensitizes cells to the inhibition of oxidative phosphorylation by different means. Links between MYC activity and oxidative stress were previously reported, and the current manuscript is consistent with this idea.
The (perhaps) most compelling piece of data is the fact that a rational (data-based) combination of ascorbate and IACS-010759 effectively kills B-cell non-Hodgkin lymphoma cell lines (aggressive Burkitt and Diffuse Large cell lymphoma) and patient-derived xenografts. The process by which authors dissect the pathway and reason the use of specific drug combinations is a strength of this study. The tumor study results are quite relevant, because of their clinical potential. But conceptually, what seems lost as the data flow, is the connection to MYC deregulation. The IACS+Ascorbate combination seems to kill everything, and despite the results in the FL5.12 MycER cell line, which are in some cases subtle, there is no strong evidence to say for sure that this is something specific to cells with MYC deregulation. So maybe the message of the manuscript, and the title, should be mindful of that.
Some additional pints would also warrant revision.
Major comments: 1-Metabolic changes equivalent to the ones proposed here upon MYC deregulation also occur in normal B cells upon activation with cytokines+/-BCR crosslink (metabolism is normally rewired toward aerobic glycolysis). A prediction is that the combination of IACS+ascorbate could be also toxic to normal B cells in this setting. The xenograft experiments using immunodeficient mice cannot assess potential toxicities in normal B cell counterparts, which may be relevant if this drug combination were to advance to preclinical studies.
2-
The authors imply a connection between OXPHOS and the response to the different drug combinations, but this is not really sustained in the cell line data, in part because of the lack of metabolic data on all the cell lines used in these experiments. Only Karpas 422 would fit the OxPHOS category described by Shipp and cols in Monti et al, Blood 2005, but the synergistic effect of the IACS+Ascorbate combination in this cell line is perhaps the less compelling (Fig. 5A). Other cell lines used here belong to different gene expression groups and the evidence for an OXPHOS metabolic makeup is less clear. The two double-hit PDX models are also of unclear metabolic profile. Although the authors showed in the past that OXPHOS and MYC expression seem to positively correlate, it is a bit of a stretch to assume that this happens in all cases.
3-Fig 6C, D:
In these PDX xenograft studies, the size of the error bars does not seem to support the existence of statistically significant differences, particularly for DFBL-69487. An alternative statistical test, different from a One-Way ANOVA comparing endpoints (day 14), may help resolve this dicordance. Fig.1 are somewhat subtle. One can infer that the authors are using Nrf2 here as an indirect reporter of oxidative stress, but this is a MYC target (also mentioned in the text) that may be important to help cells cope with oxidative stress. The fact that IACS reduces Nrf2 levels would suggest some kind of mechanistic connection. Does depletion of Nrf2 by genetic means (e.g. RNAi) alter cell survival in cell with MYC deregulation? Does IACS add anything to that response? Fig 1D: the increase in superoxide levels in response to IACS is variegated (2 outliers, 2 replicates with much lower response). Were these independent experiments? Maybe a larger number of replicates would help discern if the response is or not homogeneous, and if there are any true outliers here. Figure 5A and Suppl Fig 4 would indicate that ascorbate at relatively high doses is toxic to most of the cell lines tested, except for FL5.12 cells. There is some additional toxicity provided by the combination with IACS, but most of the effect seems to be driven by ascorbate. However, this effect is lost in vivo, at least from what can be inferred from the xenograft studies. Any reason for this discrepancy? Is this technical? 7- Fig. 4A: The graph shows that in absence of ascorbate (=0), cell viability is already reduced in the IACS+4-OHT group (to about 70%). While this is expected, the authors mention that they used 135nM IACS, which, looking at the IACS only curve, doesn't seem to have any effect on viability. This would suggest some effect of MYC deregulation, which is not reflected in the text. Also, the slope of the IACS+4-OHT curve is different from the IACS only one, suggesting some kind of interaction. Can the authors discuss this more carefully?
6-
Minor comments: 8-The changes in cell viability in all experiments, and particularly when combining MYC deregulation (4-OHT) and IACS seem to have limited penetrance, this is, only kill a fraction of cells. Any idea why this is the case? Do longer times lead to further reductions in viability, or are there always a fraction of cells not affected? 9-Some studies seem to indicate that tamoxifen can promote oxidative stress in different cell types, e.g. in murine hepatocytes and also in breast cancer cells (e.g. Nazarewicz et al, Cancer Res 2007; 67 (3)). Potential off target effects could be addressed by including controls with FL5.12 parental cells (without MYC-ER) exposed to 4-OHT.
10-Page 7: looks like when mentioning Supplementary Fig 2A, Novelty is limited by many previous studies on this compound from this group and others.
The medical impact as it stands today is limited, to my knowledge the compound has not advanced to the clinic.
Model systems. Most of the study is done in vitro, this is limiting with respect to tumor metabolism. Only the final conclusion (Vit C and IACS) is tested in vivo. I am not a metabolism expert, but this would raise concerns for me.
Referee #2 (Remarks for Author): The authors use a series of pharmacological single agent and combination studies to show that MYC expressing lymphoma cell lines are sensitive to a ETC I inhibitor (IACS010759) and that various (more or less characterized) compounds that purport to target glucose metabolism, or redox signals alter sensitivity to the IACS compound. The study builds on a series of previous studies by this group and others on the ETC 1 inhibitory compound, but it appears to pursue a new angle related to redox and the role of Vit. C in drug synergy through the production of ROS.
I would be curious to learn more about the compound and its selectivity and the introduction should provide a better explanation. I would also be curious to learn more about the selective activity against MYC expressing tumor cells. Does MYC directly increase ETC 1 activity? Does MYC increase expression or (ribosomal?) translation of the ETC1 proteins?
Experimentally, the study is interesting although somewhat minimal and only the final conclusion (synergy of IACS and Vit C) has been tested in more than one cell line or in vivo.
Given the uncertain specific of compounds that are used to elicit biological effect, the study would be improved if key data were supported using genetic studies: E.g., the study relies heavily on compounds that directly/indirectly influence ROS biology (NAC, VitC, PPP inhibition etc). the conclusions would be strengthened by experiments that show effects of NRF2 activation or inhibition (e.g. by modulating Keap1 levels).
The drug synergy shown in Figure 4 seems modest (or the presentation is hard to interpret). A more intuitive comparison of IC50 data might be more convincing. The in vivo synergy data showing tumor volumes look good. I am less sure about the imaging studies.
Some results remain speculative and inconclusive. E.g., compounds (of unclear specificity) are used to block the PPP shunt. This will alter glucose levels and increase glycolysis may bypass the ETC1 inhibition. Alternatively, blocking PPP will affect NADPH and redox potential of cells. What is the relevant role of these effects in cells in vitro and in tumor model (physiological glucose) in vivo?
Referee #3 (Comments on Novelty/Model System for Author): Note to editor: this is a rather fundamental paper, some aspects require more rigorous data, and the direct medical impact cannot be predicted as yet (impossible question in my opinion) Referee #3 (Remarks for Author): Donati et al employ an inducible system for expresion of Myc in a B cell line to investigate the synthetic lethality between Myc overexpression and inhibition of ETC complex I by IACS-010759 -a compound currently in clincial trials. They report that MYC hyperactivation and ETC inhibition disrupt redox homeostasis, leading to oxidative stress and apoptosis. By combining IACS-010759 with pro-oxidant drugs such as ascorbate, the efficacy can be further enhanced.
The work is a direct extension of a related recent paper by the group (Donati et al Mol Oncol 2022) where Myc overexpression combined with Bcl2 inhibition were investigated. The current work complements the previous paper by showing that Myc makes cells vulnerable for oxidative stress, which is in itself not a novel observation. The authors do provide novel mechanistic insight in order to make this paper novel and relevant. In addition, though all mechanistic experiments are done with just 1 genetically manipulated cell line, authors do make the transition to additional B cell lines and patient derived PDX in Figure 6, which is commendable. Nevertheless several aspects should be improved and/or clarified to make the article more coherent and impactful.
Major comments 1. Figure 1B: This essential WB data require quantification of multiple experiments, as the change in Nrf2 levels in the nucleus is not clear. Why does OHT+IACS not result in increased Nrf2, if the signature of Nrf2 is present regardless of IACS+/-? In addition protein markers/ kD should be indicated, especially as there is controversy on the size of Nrf2 by western blot. 2. Figure 2 E: in the text is mentioned there is an abrupt increase of 405/488. If t0 is defined per each cell, then many of them have this abrupt increase much before (2h) they die. Can these differences between cells be explained/discussed? 3. Figure 4: Which kind of ROS does ascorbate generate? Are IACS and ascorbate inducing two ROS generating pathways, or do they generate the same kind of ROS species but at higher levels? It would be informative to perfom H2O2 and superoxide measurements as in Fig. 1 C and 2 with the 2 compounds + combination. 4. The role of glucose in the culture medium is important in determining the effects of IACS -as presented in the previous paper. Authors should make more clear why and what is new here in relation to glucose. The current article should be an independent body of work; proposed is to show new data on the role of glucose concentration in media and subsequent effects of IACS, ascorbate etc. -see also next point. 5. Since ferric iron chelator deferoxamine (DFX) fully prevented cell death in double-treated cells ( Figure 4C), the question arise to what extent the processes studied may involve a ferroptotic component. IS there synergy with ferroptosis inducers/prevention by ferroptosis inhibitors? What is the role of GPX4? In Discussion authors make some assumptions about that, suggesting future investigations, but these aspects should be included here, to increase significance and impact.
Authors' Response
We thank the Referees for their constructive comments, which have significantly helped us improving the impact and clarity of our manuscript. Our point-by-point rebuttal and a description of the changes implemented in our work are provided below. For full information of the Referees, we include here some additional experiments performed during the revision process ( Figure R1).
Referee #1 (Comments on Novelty/Model System for Author):
The data is quite compelling, with regards to the effectiveness of the drug combinations. These drug combinations were designed through a rationalities approach, which is a strength of the manuscript. However, a lingering problem is that the initial experiments, and the main theme of the manuscript, stress the idea that these combinations were meant to target cells with MYC deregulation and with and OXPHOS metabolic switch.
We previously reported that OxPhos-and MYC-related transcriptional programs are closely correlated in DLBCL, and that MYC hyperactivation sensitizes cells to the OxPhos inhibitor IACS-010759 (Donati et al, 2022). Here, we clarify that this MYC-dependent sensitization to IACS-010759 depends on cooperation in causing oxidative stress, and not on increased OxPhos metabolic dependency. In fact, most of our mechanistic experiments were performed in FL MycER cells, which maintain a glycolytic energy metabolism regardless of exogenous MYC activation. We have amended our text to clarify this concept: "independently from OHT treatment, energy production in FL MycER cells was mainly glycolytic (Supplemental Figure 3B)".
The final result would suggest that the outcomes may very well be MYC-independent, and more widespread or inespecific.
Referee #1 (Remarks for Author):
In this manuscript, Donati et al. investigate the molecular mechanisms that underlie the synthetic lethal interaction between MYC deregulation and IACS-010759, a specific inhibitor of the electron transport chain (ETC) complex I. These studies refine a previous report by this same group (Donati et al, Mol Oncol 2022;16). 2 ascorbate. The combination of IACS and ascorbate proves to be very effective at killing different B cell non-Hodgkin lymphoma cell lines in vitro, and also in xenograft studies, which are also extended to two different, primary derived xenografts of double-hit lymphoma. These studies refine a previous report by this same group (Donati et al, Mol Oncol 2022;16).
MYC deregulation is thought to cause important metabolic changes in cells, some of which represent new dependencies. These metabolic dependencies could be used as points of entry for new therapeutic strategies. Exploiting these vulnerabilities offers an opportunity to treat MYC-driven cancers, and particularly those that are refractory to current therapies, like certain aggressive B cell lymphomas (i.e. double-hit lymphomas). Such unmet medical need makes new potential therapeutic combinations, like the one proposed in this manuscript, of particular interest.
The results of this study show that IACS-010759 increases oxidative stress by disrupting redox homeostasis. This response is in part compensated in cells overexpressing MYC by shuttling glucose toward the generation of NADPH via the pentose phosphate pathway. Inhibiting this pathway with specific drugs, or using vitamin C (ascorbate, a pro-oxidant), tilts the balance and reduces cell viability. These observations are all supported by a thorough and careful study on metabolites and experiments that take advantage of selective compounds that enhance of relieve this cellular response (e.g. NAC, BSO). Overall, the message is that MYC deregulation sensitizes cells to the inhibition of oxidative phosphorylation by different means. Links between MYC activity and oxidative stress were previously reported, and the current manuscript is consistent with this idea.
It is true that previous reports established the links between MYC and oxidative stress: our manuscript fully acknowledges these previous links and -as also noted by the Referee in the next paragraph -builds upon this concept to pinpoint an important therapeutic mechanism-of-action.
The (perhaps) most compelling piece of data is the fact that a rational (data-based) combination of ascorbate and IACS-010759 effectively kills B-cell non-Hodgkin lymphoma cell lines (aggressive Burkitt and Diffuse Large cell lymphoma) and patient-derived xenografts. The process by which authors dissect the pathway and reason the use of specific drug combinations is a strength of this study. The tumor study results are quite relevant, because of their clinical potential. But conceptually, what seems lost as the data flow, is the connection to MYC deregulation. The IACS+Ascorbate combination seems to kill everything, and despite the results in the FL5.12 MycER cell line, which are in some cases subtle, there is no strong evidence to say for sure that this is something specific to cells with MYC deregulation. So maybe the message of the manuscript, and the title, should be mindful of that.
The sensitizing effects of MYC activation in FL MycER cells shown in our original submission (formerly Fig. 4A) were indeed somewhat subtle, yet fully reproducible. To illustrate this effect in a clearer manner, the experiment was repeated using a shorter time of ascorbate treatment (6h instead of 12h). Moreover, in addition to FL MycER we used BaF MycER , a distinct B-cell line that was also described in our previous work (Donati et al., 2022). The new results clearly show increased effectiveness of the IACS/ascorbate treatment in OHT-induced cells (Fig. 4A).
Moreover, the Referee is right in noting that IACS-010759-induced sensitization to the prooxidant activity of ascorbate is not strictly dependent on MYC hyperactivation. This is now explicitly noted in our Results section: "In FL MycER cells, in which a broader concentration range of ascorbate was tested, the highest concentrations of this vitamin allowed killing by IACS-010759 in the absence of OHT priming." However, this is precisely where the added relevance to MYCdriven tumors lays, based on the mechanistic aspects reported in our work. Briefly here: Killing by IACS is clearly potentiated by MYC overexpression, as reported in our previous work (Donati et al., 2022), and confirmed here. (ii.) In this work, we demonstrate that this is due to the combined oxidative stress induced by MYC and IACS. (iii.) Ascorbate exploits this mechanism to further strengthen IACS' anti-tumoral activity.
Altogether, the title of our manuscript appropriately conveys the therapeutic potential of this drug combination to treat MYC-overexpressing B-cell lymphomas.
Some additional pints would also warrant revision.
Major comments: 1-Metabolic changes equivalent to the ones proposed here upon MYC deregulation also occur in normal B cells upon activation with cytokines+/-BCR crosslink (metabolism is normally rewired toward aerobic glycolysis). A prediction is that the combination of IACS+ascorbate could be also toxic to normal B cells in this setting. The xenograft experiments using immunodeficient mice cannot assess potential toxicities in normal B cell counterparts, which may be relevant if this drug combination were to advance to preclinical studies.
The Referee is correct in pointing this out, and we have now added a paragraph covering this issue in our Discussion: "Similar to what observed after ectopic MycER activation (Donati et al., 2022) (Supplemental Fig. 3A), mitogenic stimulation of B-cells coordinately potentiates glycolysis and mitochondrial respiration (e.g. Caro-Maldonado et al, 2014) as well as ROS production (Wheeler & Defranco, 2012). Thus, we cannot a priori exclude that a pro-oxidant therapeutic regimen such as IACS-010759 and ascorbate may be toxic for activated B-cells. However, we note that high-dose ascorbate has already proven safe and tolerable in a clinical setting, either alone or in association with platinum-based and other ROS-producing chemotherapeutic agents (Bottger et al, 2021). Moreover, high-dose ascorbate reinforced anti-cancer immunotherapy in multiple solid tumor models (Magri et al, 2020), implying that it does not impair -or rather may favor -anti-cancer immunity: it will be of high interest to address whether the same may be true in combination with IACS-010759 or other mitochondrial inhibitors." We shall also emphasize here that, while treatment of our experimental animals with IACS-010759 and ascorbate yielded no overall toxicity, the potential effects on normal activated B-cells (or other activated cell types) are a common caveat of effective anti-cancer therapies. As also outline by Referee 3, "direct medical impact cannot be predicted as yet (impossible question in my opinion)" and is objectively beyond the scope of the present pre-clinical study.
2-The authors imply a connection between OXPHOS and the response to the different drug combinations, but this is not really sustained in the cell line data, in part because of the lack of metabolic data on all the cell lines used in these experiments. Only Karpas 422 would fit the OxPHOS category described by Shipp and cols in Monti et al, Blood 2005, but the synergistic effect of the IACS+Ascorbate combination in this cell line is perhaps the less compelling (Fig. 5A). Other cell lines used here belong to different gene expression groups and the evidence for an OXPHOS metabolic makeup is less clear. The two double-hit PDX models are also of unclear metabolic profile. Although the authors showed in the past that OXPHOS and MYC expression seem to positively correlate, it is a bit of a stretch to assume that this happens in all cases.
This is an important point to be clarified, and we thank the Referee for bringing it up. Indeed, as we shall explain if further detail below, the relevance of our findings goes beyond the metabolic classification of tumors as "OxPhos".
Our previous data unraveled the sensitization of MYC overexpressing cells to two distinct mitochondrial inhibitors, tigecycline (D'Andrea et Ravà et al, 2018) and IACS-010759 (Donati et al., 2022). These results, together with the positive correlation between MYC-and OxPhos-associated gene expression signatures in DLBCL patient datasets (Donati et al., 2022) led us to the hypothesis that MYC-overexpressing lymphomas were more dependent on proficient OxPhos, and thus could be selectively targeted with mitochondrial inhibitor-based therapies. However, here we present observations clarifying that MYC-driven sensitization to mitochondrial inhibitors is not due to increased reliance on OxPhos for energy production, but rather to increased sensitivity to the oxidative stress caused by these drugs.
Consistent with the above, the synergistic anti-cancer activity of IACS and ascorbate is not limited to the OxPhos CCC subtype, which has been associated with reliance upon OxPhos as main energy source (Caro et al, 2012). Accordingly, we had already stated in the Introduction that "this mechanism does not strictly depend on the reliance of tumor cells upon OxPhos, and can be exploited to further enhance killing of MYC-overexpressing cells by combining IACS-010759 with other pro-oxidant drugs". In the Results, we conclude: "In summary, IACS-010759 and ascorbate synergized in vitro to kill MYC-overexpressing mature B-cell neoplasms, regardless of their origin and molecular subtype". Then again, in the Discussion: "In the present work, we clarify that the MYC-mediated sensitization to IACS-010759 is brought about by a critical accumulation of oxidative stress, rather than increased reliance on OxPhos for energy metabolism" followed by "MYC-induced sensitization to IACS-010759 did not depend upon OxPhos-driven ATP production, as was instead the case for IACS-010759 mediated killing of glycolysis-deficient cells (Molina et al., 2018)" and finally "This combination also showed synergy in BL and DLBCL lymphoma cell lines of multiple molecular subtypes, not restricted to the "OxPhos" category (Supplemental Figure 5B)."
3-Fig 6C, D:
In these PDX xenograft studies, the size of the error bars does not seem to support the existence of statistically significant differences, particularly for DFBL-69487. An alternative statistical test, different from a One-Way ANOVA comparing endpoints (day 14), may help resolve this dicordance.
The Referee states that "the size of the error bars does not seem to support the existence of statistically significant differences". Please note that, as indicated in the legend, the bars represent the standard deviation (SD), not the standard error (SE) of the sample. There is no discordance between the graph and the result of the one-way ANOVA, which remains the most powerful test available to detect significant differences between means.
In the Results, we have now added the following "Similar effects were obtained with two DHLderived patient-derived xenografts (PDX) (Townsend et al, 2016), injected systemically in NSG mice and monitored by whole-body bioluminescence ( Figure 6C, D). Note that one of the PDX tumors, PDX-69487, showed a remarkable resistance to IACS-010759 alone even if used at a higher dose; nonetheless, as with all other xenografts, the combination did cause a significant reduction in tumor growth relative to untreated controls." Altogether, our data fully support the conclusions given in the text for the response to ascorbate and/or IACS-010759, for both cell line-and PDX-based xenografts ( Figure 6). Fig.1 are somewhat subtle. One can infer that the authors are using Nrf2 here as an indirect reporter of oxidative stress, but this is a MYC target (also mentioned in the text) that may be important to help cells cope with oxidative stress. The fact that IACS reduces Nrf2 levels would suggest some kind of mechanistic connection. Does depletion of Nrf2 by genetic means (e.g. RNAi) alter cell survival in cell with MYC deregulation? Does IACS add anything to that response?
4-The changes in Nrf2 levels shown in
To clarify the role of Nrf2 in modulating the response to the oxidative stress linked to MycER activation and IACS treatment, we employed CRISPR-Cas9 engineering on our FL MycER cells and derived KO clones lacking either Nrf2 or its negative regulator Keap1. While these experiments allowed us to critically reassess the quality of the anti-Nrf2 antibody used in our original submission, they did not provide the additional molecular or phenotypic insight that would have formally been needed for inclusion in our revised manuscript. The data are included here for the Referees ( Figure R1) and described in the following two subsections: 4A -NRF2 IMMUNOBLOTTING: The immunoblot performed to confirm Nrf2 ablation in Nrf2 KO clones (Fig. R1A) revealed a nonspecific band recognized by the antibody (clone D1Z9C, Cell Signaling Technology). This nonspecific band is predominant and runs very close to the real Nrf2 band, which is essentially undetectable in our cells without prior treatment with the proteasome inhibitor MG132 (Fig. R1A). Most problematically, the two bands fail to be reproducibly resolved in most SDS-PAGE gels (Fig. R1B, top). Given this problem with clone D1Z9C, we probed our blots with anti-Nrf2 polyclonal antibody (PA5-27882, ThermoFisher), with which we could confirm increased Nrf2 levels in MG132-treated cells (Fig. R1B), as well as in Keap1 KO clones, as compared to untreated parental cells (Fig. R1C).
We conclude that the immunoblot presented in Fig. 1B of our original manuscript (shown again here: Fig. R1D, top) cannot be trusted to represent a specific Nrf2 signal. Moreover, immunoblotting of equivalent subcellular fractions from OHT-and IACS-treated FL MycER cells with the polyclonal anti-Nrf2 antibody did not detect a specific signal above background noise, not even upon prolonged exposures (Fig. R1D, bottom).
On a formal basis, given the above results, we conclude that immunoblotting on cell fractions is not a reliable means to monitor Nrf2 activity. We thus removed this experiment (formerly Fig. 1B) from our manuscript. We are truly thankful to the Referees for having prompted us to produce the information that led to this decision. Most importantly, this impacts in no way on the initial observation made on our work, namely the identification of the Nrf2-mediated oxidative stress response as the top OHT-responsive pathway in FL MycER cells ( Figure 1A). All the experiments that followed from this observation remain fully valid.
4B -PHENOTYPIC ANALYSIS OF NRF2 AND KEAP1 KO CLONES:
Having derived Nrf2 and Keap1 knockout (KO) FL MycER clones, we addressed the response of those cells to OHT and IACS treatment.
Six Nrf2 KO clones were tested (Fig. R1E), revealing incongruent changes, with drug sensitivities ranging from reduced (KO #1, #2) to unchanged (KO #3, #6) to increased (KO #4, #5) relative to parental cells. We surmise that clonal variability predominated over the effects of Nfr2 loss in those clones. Of note here, KO efficiencies did not allow us to work with polyclonal populations, and transduction of FL MycER cells with Nrf2 shRNAs achieved only partial knockdown, precluding this strategy as an alternative to the KO. In conclusion, we did not find definitive evidence for an involvement of the Nrf2 pathway in cell survival after MYC-activation and/or IACS-010759 treatment and thus decided to only show the activation of the Nrf2 pathway as an indirect readout of oxidative stress.
Unlike for Nrf2 KO, phenotypic assessment of 4 Keap1 KO FL MycER clones consistently showed increased resistance to IACS-010759 (Fig. R1F). A possible explanation for the apparent contradiction between the results obtained with Nrf2 and Keap1 KO could be the existence of non-canonical, Nrf2-independent functions of Keap1 (Kopacz et al, 2020). Specifically, while the Keap1-Nrf2 pathway responds to moderate oxidative stress, the Keap1-Pgam5 pathway is activated by heavy oxidative damage to induce oxeiptosis, a ROS-induced mitochondrial pathway of cell death (Holze et al, 2018). The eventual involvement of this mechanism in the effects of IACS-010759 is an intriguing possibility, but way too preliminary to make a formal point here, and must therefore be the subject of detailed studies. Fig 1D: the increase in superoxide levels in response to IACS is variegated (2 outliers, 2 replicates with much lower response). Were these independent experiments? Maybe a larger number of replicates would help discern if the response is or not homogeneous, and if there are any true outliers here.
5-
The superoxide values shown in Fig. 1C (1D in the previous version) are indeed from independent experiments. The same is true for H2O2 measurements in Fig. 1B. This is now fully clarified in the legend, "Each point in the graphs in B and C is from an independent biological replicate, each representing the average of thousands of events (single cells) in a distinct cell population, normalized to the untreated condition".
Even though the relative increase of superoxide level induced by IACS is different among the biological replicates, a consistent induction of this ROS species is evident in all of them, with no significant impact of OHT. Similar results and variability for IACS treated cells were obtained in subsequent experiments aimed at quantifying O2 •induced by IACS and ascorbate (Supplemental Fig. 4B). Figure 5A and Suppl Fig 4 would indicate that ascorbate at relatively high doses is toxic to most of the cell lines tested, except for FL5.12 cells. There is some additional toxicity provided by the combination with IACS, but most of the effect seems to be driven by ascorbate. However, this effect is lost in vivo, at least from what can be inferred from the xenograft studies. Any reason for this discrepancy? Is this technical?
6-
These in vitro assays (now in Fig. 5A and Suppl. Fig. 5A) demonstrate synergistic interactions between ascorbate and IACS in defined concentration ranges in all the cell lines tested, regardless of their differential cytotoxic activity as single agents. It is therefore inexact to deduce, as done here by the Referee, that "most of the effect seems to be driven by ascorbate". We have amended our description in the Results, in order to better emphasize this concept: "… ascorbate also increased IACS-010759 mediated killing in these cells, with the two drugs displaying significant synergistic effects within defined concentration ranges ( Figure 5A, B)".
Regarding the in vivo results, the daily dose of ascorbate (4 g/kg) was initially selected from a published protocol (Chen et al, 2008) and was maintained as it did not show any obvious toxic effect for the animals. While the effective local concentrations reached in our in vivo experiments remain to be determines, the fact that ascorbate enhances the anti-tumoral activity of IACS-7 010759 is conclusively demonstrated in our four tumor models (Fig. 6A-D). Altogether, there is no discrepancy here: our in vivo and in vitro data with IACS and ascorbate are consistent, and fully support the conclusions drawn in the text. Fig. 4A: The graph shows that in absence of ascorbate (=0), cell viability is already reduced in the IACS+4-OHT group (to about 70%). While this is expected, the authors mention that they used 135nM IACS, which, looking at the IACS only curve, doesn't seem to have any effect on viability. This would suggest some effect of MYC deregulation, which is not reflected in the text. Also, the slope of the IACS+4-OHT curve is different from the IACS only one, suggesting some kind of interaction. Can the authors discuss this more carefully?
7-
Detailed comments on this figure were provided above under the general remarks. We shall add here that the conditional toxicity of IACS-010759 after exogenous MYC activation in FL MycER cells was documented in our previous study (Donati et al., 2022) and is fully consistent with the data reported here: no killing by IACS alone, but partial killing (at this particular concentration) in OHTtreated cells.
We hypothesized that the observed difference in the slope of the viability curves between IACS and OHT+IACS in Fig. 4A might result from excessive oxidative stress induced by ascorbate at higher concentrations and at a relatively late time point (12h), which eventually exceeded that induced by IACS and overwhelmed cellular redox defenses. This experiment was repeated at a shorter time point (6h) in 2 different MycER-expressing cell lines: as discussed above, the new results showed consistently increased toxicity for the combination in OHT-primed cells (Fig. 4A).
Minor comments: 8-The changes in cell viability in all experiments, and particularly when combining MYC deregulation (4-OHT) and IACS seem to have limited penetrance, this is, only kill a fraction of cells. Any idea why this is the case? Do longer times lead to further reductions in viability, or are there always a fraction of cells not affected? "Limited penetrance" seems a somewhat inappropriate concept here: the point is that enhanced cell killing, even if quantified as a partial effect over a defined period of time (as inherent to any viability measurement), may be sufficient to achieve significant anti-tumoral effects if it supersedes cell proliferation. Eventually, preclinical in vivo data are the only means inform of the potential therapeutic window provided by a given drug combination, as clearly confirmed in our work for IACS and ascorbate.
9-Some studies seem to indicate that tamoxifen can promote oxidative stress in different cell types, e.g. in murine hepatocytes and also in breast cancer cells (e.g. Nazarewicz et al, Cancer Res 2007; 67 (3)). Potential off target effects could be addressed by including controls with FL5.12 parental cells (without MYC-ER) exposed to 4-OHT.
This control was provided in our previous study (Donati et al., 2022), where we showed that OHT priming sensitized FL MycER but not parental FL5.12 cells to killing by IACS-010759. Hence, the ontarget action of OHT was established, and does not need to be re-addressed here.
10-Page 7: looks like when mentioning Supplementary Fig 2A, C, the authors may have been referring to Suppl . Fig 2 C,D.
8 This is true and has been corrected in the text (now Suppl. Fig. 3). We thank the Referee for spotting the mistake.
11-Lactate M1 is not shown in the scheme in Fig 3A
What is shown in Fig. 3A is a schematic summary of the relevant glucose metabolic pathways, not the tracing experiment. We acknowledge that as originally written in our text, this was prone to confusion, and thank the Referee for pointing this out. We have now rewritten the text as follows: "This decreased PPP flux would also be expected to suppress the production of lactate from glucose passing through the PPP before re-entering glycolysis ( Figure 3A), measurable as lactate M1 in our tracing experiment: while apparent in our data, this effect remained below statistical significance (Supplemental Figure 3E)."
Referee #2 (Comments on Novelty/Model System for Author):
I find the exclusive use of various compounds (of unknown selectivity) to interrogate biology quite limiting. key conclusions should be tested with more defined genetic tools.
This concern is addressed below.
Experiments are technically well done.
Novelty is limited by many previous studies on this compound from this group and others.
We must firmly disagree with the notion that the novelty of our work is limited by previous studies. Instead, our findings significantly extend these studies, making new and important points in the field: It is true that anti-cancer response to OxPhos inhibition with IACS-010759 has been described in multiple papers, including our own regarding sensitization by oncogenic MYC (Donati et al., 2022). Nonetheless, we would like to point out the novelty of our findings regarding the disruption of redox equilibrium as the major mechanism of action for of IACS-010759, as well as the role of this mechanism in MYC-induced sensitization. Within this work, we have also exploited these findings for the rational design of a new combinatorial therapy (i.e. IACS-010759 + ascorbate) in MYC-associated lymphomas.
The medical impact as it stands today is limited, to my knowledge the compound has not advanced to the clinic.
The Referee is right in pointing this out, and this is precisely why we deem our findings timely and important from a clinical standpoint. Altogether, our data make a strong case for the combination of drugs targeting the ETC (of which IACS-010759 can be taken as a paradigm here) with ascorbate or other drugs that potentiate killing of tumor cells with a mechanism-driven rationale.
Considering the specific case of IACS-010759, our work offers new perspectives that may guide further clinical developments. This is a key aspect, which is developed in a dedicated paragraph in our Discussion: "The combinatorial action of IACS-010759 and ascorbate unraveled here might prove to be relevant in diverse clinical settings. First, etc..." Model systems. Most of the study is done in vitro, this is limiting with respect to tumor metabolism. Only the final conclusion (Vit C and IACS) is tested in vivo. I am not a metabolism expert, but this would raise concerns for me.
We understand and agree with the concerns regarding anti-cancer mechanisms in vivo. Indeed, we had already tried to assess oxidative damage by immunohistochemical analysis of specific biomarkers, including 8-hydroxydeoxyguanosine and 4-hydroxynonenal. These experiments did not yield conclusive results, owing mainly to the non-quantitative nature of the assay. More detailed in vivo studies (such as metabolic profiling, etc…) are not readily accessible, and beyond the scope of the present study.
This notwithstanding, we would like to point out that the pro-oxidant in vivo effects of high-dose ascorbate are well documented in the literature, as mentione in our text: "Parenteral administration of a high dose of ascorbate (vitamin C) has been shown to have pro-oxidant and anti-cancer activity in preclinical models, etc…". Finally, the same objection could be made regarding the specific inhibition of mitochondrial complex I by IACS-010759 in vitro (Donati et al., 2022;Molina et al, 2018), for which no tractable in vivo biomarker was described so far.
Referee #2 (Remarks for Author):
The authors use a series of pharmacological single agent and combination studies to show that MYC expressing lymphoma cell lines are sensitive to a ETC I inhibitor (IACS010759) and that various (more or less characterized) compounds that purport to target glucose metabolism, or redox signals alter sensitivity to the IACS compound. The study builds on a series of previous studies by this group and others on the ETC 1 inhibitory compound, but it appears to pursue a new angle related to redox and the role of Vit. C in drug synergy through the production of ROS.
I would be curious to learn more about the compound and its selectivity and the introduction should provide a better explanation.
As mentioned in the Introduction, and as rigorously characterized in the original report (Molina et al., 2018), IACS-010759 is a specific inhibitor of ETC complex I. There is really not much else to be explained about this, and documenting details of the previous study would be beyond the scope in our text.
Would there be formal reasons to suspect off-target effects or alternative targets of the compound behind the biological effects described in our work, this would of course become a key point of our discussion. However, there are no such alternative targets documented, nor do we have any reason to suspect their existence based on the available data. While this type of considerations will always be relevant in pharmacological studies, raising them here without a formal reason to do so would provide no added value to our study.
I would also be curious to learn more about the selective activity against MYC expressing tumor cells.
This was documented in detail in our previous study (Donati et al., 2022): as written in our Introduction this had shown that "a specific inhibitor of ETC complex I, IACS-010759 (Molina et al., 2018), selectively killed MYC-overexpressing cells by inducing intrinsic apoptosis (Donati et al., 2022)". Indeed, that was the basis for the follow-up study presented here.
Does MYC directly increase ETC 1 activity? Does MYC increase expression or (ribosomal?) translation of the ETC1 proteins?
There is ample evidence in the literature for the importance of MYC in promoting mitochondrial gene expression and biogenesis. We have added a sentence to clarify this at the beginning of the second paragraph in our Introduction: "Multiple studies linked MYC to mitochondrial biogenesis and activity (Li et al, 2005;Morrish & Hockenbery, 2014;Wolpaw & Dang, 2018), in particular via activation of nuclear genes encoding the mitochondrial RNA polymerase POLRMT (Oran et al, 2016) or mitochondrial ribosomal proteins (D'Andrea et al., 2016), leading to enhanced respiratory activity (Donati et al., 2022)". Etc… This is a relevant point to be made in as background information in our work, and the thank the Referee from bringing it up. Yet, we believe that reviewing this aspect in further detail is unnecessary.
Experimentally, the study is interesting although somewhat minimal and only the final conclusion (synergy of IACS and Vit C) has been tested in more than one cell line or in vivo.
As mentioned in our reply to Referee #1, besides FL MycER cells we now present data in a second Bcell line BaF MycER . The new results clearly confirm the effects of ascorbate in reinforcing IACSinduced killing (Fig. 4A). For further detail, please refer to our reply to Referee #1.
The above notwithstanding, we must disagree with the Referee on the "somewhat minimal" nature of the in vitro data included in our original submission. In fact, besides the data in FL MycER cells (and now also BaF MycER ), some of the key pharmacological interactions were also demonstrated in the DoHH2 and Ramos lymphoma cell lines. In particular: i. Our data in in FL MycER cells showed that (quoting our test) "the inhibitor of GSH synthesis buthionine sulfoximine (BSO) enhanced killing ( Figure 2D). This effect of BSO in potentiating the cytotoxic action of IACS-010759 was confirmed in two MYC-rearranged human lymphoma cell lines, DoHH2 and Ramos, derived from a double-hit and a Burkitt's lymphoma (BL), respectively (Supplemental Figure 2A)." ii. "To confirm the importance of the oxidative PPP for the selective killing of MYCoverexpressing cells by IACS-010759, we inhibited G6pd and Pgd ( Figure 3A) with dehydroepiandrosterone (DHEA) and 6-aminonicotinamide (6AN), respectively" etc… iii. "Finally, either DHEA or 6AN also potentiated killing by IACS-010759 in human MYCrearranged lymphoma cells lines (Supplemental Figure 3H).
Concerning the in vivo studies, we have the same comment here as written above in reply to Referee 1: We understand and agree with the concerns regarding anti-cancer mechanisms in vivo. Indeed, we had already tried to assess oxidative damage by immunohistochemical analysis of specific biomarkers, including 8-hydroxydeoxyguanosine and 4-hydroxynonenal. These experiments did not yield conclusive results, owing mainly to the non-quantitative nature of the assay. More detailed in vivo studies (such as metabolic profiling, etc…) are not readily accessible, and beyond the scope of the present study.
Given the uncertain specific of compounds that are used to elicit biological effect, the study would be improved if key data were supported using genetic studies: E.g., the study relies heavily on compounds that directly/indirectly influence ROS biology (NAC, VitC, PPP inhibition etc). the conclusions would be strengthened by experiments that show effects of NRF2 activation or inhibition (e.g. by modulating Keap1 levels).
We agree that genetic models can provide important evidence against off-target and other confounding effects that may affect pharmacological studies. Indeed, to confirm the involvement of the PPP pathway, we targeted Pgd, as described in our text: "We then sought to confirm these results in a genetic model of PPP impairment obtained by ablation of Pgd through CRISPR-Cas9 targeting. Of note, all of the Pgd KO FL MycER clones obtained were heterozygous, with residual Pgd protein expression (Supplemental Figure 3I), consistent with the essential nature of this gene, as defined in the Broad Institute Dependency Map (DepMap) portal (Ghandi et al, 2019). This notwithstanding, these Pgd-targeted clones showed increased sensitivity to IACS-010759 following OHT priming (Supplemental Figure 3J)".
The above notwithstanding, we would like to emphasize here the compelling nature of our pharmacological data. In particular, the use of different pharmacological treatments to modulate the cells' antioxidant defenses (see Fig. 2 and 3) concordantly pointed to oxidative stress as the mediator of IACS-010759 selective cytotoxic activity, makes it highly unlikely that the results shown are products of off-target effects of any single drug.
Regarding the role of the Nrf2-Keap1 pathway, we targeted the Nrf2 and Keap1 loci in FL MycER cells and addressed response of the resulting KO clones to MycER activation and IACS treatment (Fig. R1E, F). Please refer to our detailed response to Referee #1 (point 4) for a description of these results.
The drug synergy shown in Figure 4 seems modest (or the presentation is hard to interpret). A more intuitive comparison of IC50 data might be more convincing.
The figure has been updated with new data obtained with a shorter ascorbate treatment (6h instead of 12h), which more convincingly show increased sensitivity to the IACS/ascorbate combination after MycER activation (Fig. 4A). Moreover, the results were also reproduced in BaF MycER cells.
The in vivo synergy data showing tumor volumes look good. I am less sure about the imaging studies.
In vivo luminescence provides a quantitative measure of tumor load, provided as bilateral femur radiant efficiency (Fig. 6, legend; Materials and methods), and differences among treatment groups were assessed with the most appropriate statistical test (see also answer to Referee #1, point 3). We thus see no reason to question the relevance of the effects reported by imaging of the PDX-derived tumors (Fig. 6C, D), given also their consistency with those scored by subcutaneous tumor volume with the lymphoma cell lines Ramos and DoHH2 (Fig. 6A, B).
Some results remain speculative and inconclusive. E.g., compounds (of unclear specificity) are used to block the PPP shunt. This will alter glucose levels and increase glycolysis may bypass the ETC1 inhibition. Alternatively, blocking PPP will affect NADPH and redox potential of cells. What is the relevant role of these effects in cells in vitro and in tumor model (physiological glucose) in vivo?
While we reiterate the improbability of incurring in off-target effects from different drugs, all coherently pointing toward redox homeostasis as critical for the cytotoxic action of IACS in MYC overexpressing cells, we now provide a genetic validation for our model. Specifically, FL MycER clones where the Pgd gene was ablated showed increased sensitivity to IACS treatment after OHT-priming (Suppl. Fig. 3J).
Regarding the first scenario outlined here by the Referee, one would expect that increased compensatory glycolysis due to PPP inhibition would lead to increased resistance to IACS-010759 after MycER activation, but our results show that the opposite is true. In fact, PPP inhibition leads to increased killing by IACS-010759 in these cells, owing to disruption of redox homeostasis (Fig. 3E, F). Moreover, we now provide evidence that, upon pharmacological inhibition of the PPP pathway, IACS treatment is cytotoxic for OHT-primed FL MycER cells even at a higher glucose concentration (Suppl. Fig. 3G), which would normally prevent IACS-induced cell death (Donati et al., 2022). The importance of glucose-dependent glutathione regeneration through the PPP for the selective activity of IACS is underscored by the results obtained upon total inhibition of glucose metabolism with 2DG: treating the cells with 2DG induces a cytotoxic response of IACS irrespective of previous MycER activation (Fig. 3G, Suppl. Fig. 3G). This result is most compatible with an unavoidable energy crisis due to concurrent suppression of OxPhos and glycolysis.
Referee #3 (Comments on Novelty/Model System for Author):
Note to editor: this is a rather fundamental paper, some aspects require more rigorous data, and the direct medical impact cannot be predicted as yet (impossible question in my opinion) This is an important point, and we thank the Referee for bringing it up. Our study provides an innovative mechanism-based therapeutic concept, with pre-clinical proof-of-principle: whether this may ultimately have a direct impact in patients may only be assessed in tailored clinical studies, as generally true for this type of studies.
Referee #3 (Remarks for Author):
Donati et al employ an inducible system for expresion of Myc in a B cell line to investigate the synthetic lethality between Myc overexpression and inhibition of ETC complex I by IACS-010759 -a compound currently in clincial trials. They report that MYC hyperactivation and ETC inhibition disrupt redox homeostasis, leading to oxidative stress and apoptosis. By combining IACS-010759 with pro-oxidant drugs such as ascorbate, the efficacy can be further enhanced. The work is a direct extension of a related recent paper by the group (Donati et al Mol Oncol 2022) where Myc overexpression combined with Bcl2 inhibition were investigated. The current work complements the previous paper by showing that Myc makes cells vulnerable for oxidative stress, which is in itself not a novel observation. The authors do provide novel mechanistic insight in order to make this paper novel and relevant. In addition, though all mechanistic experiments are done with just 1 genetically manipulated cell line, authors do make the transition to additional B cell lines and patient derived PDX in Figure 6, which is commendable.
While it is true that all the mechanistic experiments were done in the FL MycER model, a mouse Bcell line engineered for conditional super-activation MYC, we would like to point out that we present important confirmatory results in two MYC-rearranged lymphoma cell lines, DoHH2 and Ramos. In particular, those data reinforce the conclusion that the anti-tumoral activity of IACS-010759 is potentiated by hampering antioxidant defenses (Suppl. Fig. 2A and 3H; see also the reply to Referee 2). Moreover, we have added a dose response curve showing ascorbatedependent potentiation of cell killing by IACS-010759 in BaF MycER (Fig. 4A).
Nevertheless several aspects should be improved and/or clarified to make the article more coherent and impactful.
Major comments 1. Figure 1B: This essential WB data require quantification of multiple experiments, as the change in Nrf2 levels in the nucleus is not clear. Why does OHT+IACS not result in increased Nrf2, if the signature of Nrf2 is present regardless of IACS+/-? In addition protein markers/ kD should be indicated, especially as there is controversy on the size of Nrf2 by western blot.
As discussed above, a number of control experiments were performed, as a result of which we finally decided to remove the Nrf2 blot. Please refer to our reply to Referee #1 (point 4) for a detailed explanation.
2. Figure 2 E: in the text is mentioned there is an abrupt increase of 405/488. If t0 is defined per each cell, then many of them have this abrupt increase much before (2h) they die. Can these differences between cells be explained/discussed?
As originally written in our text, our time-lapse data showed that "an abrupt fall in GSH availability (as revealed by the increase in 405/488 nm fluorescence ratio) regularly preceded death in double-treated cells ( Figure 2E, Movie S1)." We believe this conclusion to be appropriate, accounting for the fact that other biological parameters, which are stochastic by nature, are likely to determine the onset of apoptosis following the drop in GSH. We have added a new sentence following to account for this phenomenon: "The observed time-window between the drop in GSH and cell death was variable, ranging from few minutes to hours: this should be considered the product of a series of stochastic parameters etc…" 3. Figure 4: Which kind of ROS does ascorbate generate? Are IACS and ascorbate inducing two ROS generating pathways, or do they generate the same kind of ROS species but at higher levels? It would be informative to perfom H2O2 and superoxide measurements as in Fig. 1 C and 2 with the 2 compounds + combination.
As suggested by the Referee, we tested the effects of ascorbate treatment, alone and in combination with IACS, on O2 •and H2O2 production. Our results showed that: i. "…ascorbate treatment rapidly induced high levels of H2O2 in both the cytoplasm and mitochondria, with IACS-010759 co-treatment further enhancing this effect in the cytoplasm (Supplemental Figure 4A)"; ii. "… somewhat counterintuitive, ascorbate blunted superoxide production in IACS-010759treated cells (Supplemental Figure 4B), which seem at odds with its pro-oxidant effects. A possible explanation for this result could be that superoxide is being scavenged by ascorbate radicals (Nishikimi, 1975;Scarpa et al, 1983) at a rate similar to that achieved with dihydroethidium (Zhao et al, 2003), the fluorescent probe used for superoxide quantification".
In our previous study, we showed that selective killing by IACS of OHT treated FL MycER cells occurred only in reduced glucose medium, but was not associated with disruption of energy homeostasis (Donati et al., 2022). These findings are summarized in our Results, within the first paragraph of the section entitled "Glucose and the pentose phosphate pathway maintain redox homeostasis in IACS-010759 treated cells": "In all cell types examined so far, including OHT-treated FL MycER cells, the cytotoxic action of IACS-010759 was suppressed by excess glucose in the culture medium […] We previously reported that IACS-010759-induced cell death in OHT-primed cells was not associated with ATP reduction and energy impairment in OHT-primed cells (Donati et al., 2022). Altogether, these observations imply a distinct metabolic requirement for glucose -other than sustaining glycolysis for ATP production -in blocking the cytotoxic action of IACS-010759." The present study complements and extends those findings by revealing that IACS-induced cell death is associated with disruption of redox homeostasis, an effect that high glucose opposes by boosting NADPH production through the PPP pathway. This is extensively documented in the same section of our Results, ending with the following conclusion: "Altogether, the above data show that glucose protects MYC-overexpressing cells from IACS-010759-induced killing by sustaining NAPDH production through the oxidative phase of the PPP ( Figure 3A), ensuring the regeneration of GSH required to maintain redox homeostasis." Regarding the combination of IACS and ascorbate, the synergism between the two drugs is unaffected by glucose concentration (see Fig. 4C and Suppl. Fig. 4E) due to the fast kinetic of oxidative damage seen with the combination (see Fig. 4B and below the answer to point 5).
5. Since ferric iron chelator deferoxamine (DFX) fully prevented cell death in double-treated cells ( Figure 4C), the question arise to what extent the processes studied may involve a ferroptotic component. IS there synergy with ferroptosis inducers/prevention by ferroptosis inhibitors? What is the role of GPX4? In Discussion authors make some assumptions about that, suggesting future investigations, but these aspects should be included here, to increase significance and impact.
The point raised by the Referee regarding the potential involvement of ferroptosis in the response to the IACS/ascorbate combination, and more generally targeting oncogenic MYCexpressing cells with ferroptosis inducers, is indeed of high interest. We have now added data showing that ascorbate treatment was sufficient to induce lipid peroxidation (Fig. 4D). As written in our Results: "given the importance of iron in the combinatorial effects of IACS-010759 and ascorbate, we investigated the involvement of ferroptosis, a form of regulated cell death initiated in response to lipid peroxidation by iron-generated ROS (Jiang et al, 2021). Indeed, ascorbate treatment caused a marked increase in lipid peroxidation ( Figure 4D); while IACS-010759 had no effect alone, it showed a tendency (albeit below statistical significance) to reinforce the effect of ascorbate. Altogether, the above results suggests that the potentiation of IACS-010759-induced cell death by ascorbate was contributed by ferroptosis." And in our Discussion: "Remarkably, IACS-010759 and ascorbate synergized in killing MYCoverexpressing B-cells, owing most likely to the cooperative induction of oxidative damage, including lipid peroxidation and ferroptosis." To address the role of GPX4, as requested by the Referee, we tested RSL3, an inhibitor of the lipid peroxide scavenging enzyme GPX4 (Yang et al, 2014). However, unlike ascorbate, RSL3 alone showed strong toxicity in FL MycER cells, regardless of OHT or IACS-010759. While showing that GPX4 is required for cell survival in normal growth conditions, this result does not contribute any additional insight regarding its contribution to cell killing by IACS-010759 and ascorbate. We thus decided not to include this experiment in our paper.
Minor comments 1. Fig 1A Which part of these data is new and which was already published? In the previous paper Donati, 2022 also RNAseq analysis of OHT-treated versus not-treated are shown but a diferent time point. In there only the 1st 3 pathways are shown and Nrf2 does not appear. Please explain.
The RNA-seq data used here are the same as in our previous study (Donati et al., 2022), thus reflecting exactly the same samples and time-points. However, the analysis presented here is different from that in our previous study, which we have now made clearer in our text (see the first paragraph of the Results, the legend to Fig. 1A and Materials and Methods). Briefly here, in our previous paper we employed Gene set enrichment analysis to compare the differentially expressed genes (DEG) from our samples to signatures from the MSigDB Hallmark collection (Liberzon et al, 2015), which does not contain a Nrf2 target signature. Instead, Fig. 1A of the present manuscript shows the result of the analysis performed with the Qiagen Ingenuity Pathway Analysis (IPA) software, which uses its own collection of curated signatures, among which that of the Nrf2 pathway.
2. Fig3C graph appears to be missing, only one representative histogram is shown.
This figure plots the 405/488 nm ratio from each single event (cell), which due to the bimodal value distribution from OHT/IACS treated samples we deemed more informative than a graph plotting the average values.
3. The synergy between Myc, IACS and ascorbate in Figures 5 and S4; these figures are of too low resolution.
Indeed, we realized after submission that the integration in the word file downgraded the resolution of the figures, which we now provide at the original resolution (note that Figure S4 has become S5). 4. Figure 6; Y-axes in B, D do not have a legend.
We have modified the figures to include a legend. Figure legends should be placed at the end of the main manuscript file. Please check "Author Guidelines" for more information: https://www.embopress.org/page/journal/17574684/authorguide#figureformat 3) In the main manuscript file, please do the following: -Correct/answer the track changes suggested by our data editors by working from the attached document.
-Add up to 5 keywords.
-In M&M, provide the antibody dilutions that were used for each antibody.
-Please rename "Competing Interest" to "Disclosure Statement & Competing Interests". We updated our journal's competing interests policy in January 2022 and request authors to consider both actual and perceived competing interests. Please review the policy https://www.embopress.org/competing-interests and update your competing interests if necessary. -Author contributions: Please specify author contributions in our submission system. CRediT has replaced the traditional author contributions section because it offers a systematic machine-readable author contributions format that allows for more effective research assessment. You are encouraged to use the free text boxes beneath each contributing author's name to add specific details on the author's contribution. More information is available in our guide to authors: https://www.embopress.org/page/journal/17574684/authorguide#authorshipguidelines -In data availability statement please remove information of previous published datasets. If no data produced within this study are deposited in public repositories, please add the sentence: "This study includes no data deposited in external repositories". Please check "Author Guidelines" for more information. https://www.embopress.org/page/journal/17574684/authorguide#availabilityofpublishedmaterial 4) Supplemental information: Please rename the file to "Appendix" and add table of content on the first page. Rename figures to Appendix Figure S1 etc. and update the callouts in the main manuscript text. 5) Movie: Please rename the movie file to Movie EV1, zipp the legend with the movie file and update the callout in the main text. 6) Funding: Please place the information about funding to "Acknowledgments". 7) The Paper Explained: Please provide a summary of the study structured as followed: PROBLEM -the medical issue you are addressing, RESULTS -the results obtained, and IMPACT -their clinical impact. Please refer to any of our published primary research articles for an example. Please check "Author Guidelines" for more information. https://www.embopress.org/page/journal/17574684/authorguide#researcharticleguide 8) Synopsis: Every published paper now includes a 'Synopsis' to further enhance discoverability. Synopses are displayed on the journal webpage and are freely accessible to all readers. They include separate synopsis image and synopsis text. -Synopsis image: Please provide a striking image or visual abstract as a high-resolution jpeg file 550 px-wide x (250-400)-px high to illustrate your article.
-Synopsis text: Please provide a short standfirst (maximum of 300 characters, including space) as well as 2-5 one sentence bullet points that summarise the paper as a .doc file. Please write the bullet points to summarise the key NEW findings. They should be designed to be complementary to the abstract -i.e. not repeat the same text. We encourage inclusion of key acronyms and quantitative information (maximum of 30 words / bullet point). Please use the passive voice.
-Please check your synopsis text and image before submission with your revised manuscript. Please be aware that in the proof stage minor corrections only are allowed (e.g., typos). 9) For more information: This space should be used to list relevant web links for further consultation by our readers. Could you identify some relevant ones and provide such information as well? Some examples are patient associations, relevant databases, OMIM/proteins/genes links, author's websites, etc... 10) Source data: Please upload source data as clearly labelled one (zipped) file per figure. 11) As part of the EMBO Publications transparent editorial process initiative (see our Editorial at http://embomolmed.embopress.org/content/2/9/329), EMBO Molecular Medicine will publish online a Review Process File (RPF) to accompany accepted manuscripts. This file will be published in conjunction with your paper and will include the anonymous referee reports, your point-by-point response and all pertinent correspondence relating to the manuscript. Let us know whether you agree with the publication of the RPF and as here, if you want to remove or not any figures from it prior to publication. Please note that the Authors checklist will be published at the end of the RPF. 12) Please provide a point-by-point letter INCLUDING my comments as well as the reviewer's reports and your detailed responses (as Word file).
I look forward to reading a new revised version of your manuscript as soon as possible.
Zeljko Durdevic
Zeljko Durdevic Editor EMBO Molecular Medicine *** Instructions to submit your revised manuscript *** *** PLEASE NOTE *** As part of the EMBO Publications transparent editorial process initiative (see our Editorial at https://www.embopress.org/doi/pdf/10.1002/emmm.201000094), EMBO Molecular Medicine will publish online a Review Process File to accompany accepted manuscripts.
In the event of acceptance, this file will be published in conjunction with your paper and will include the anonymous referee reports, your point-by-point response and all pertinent correspondence relating to the manuscript. If you do NOT want this file to be published, please inform the editorial office at<EMAIL_ADDRESS>5) The paper explained: EMBO Molecular Medicine articles are accompanied by a summary of the articles to emphasize the major findings in the paper and their medical implications for the non-specialist reader. Please provide a draft summary of your article highlighting -the medical issue you are addressing, -the results obtained and -their clinical impact. This may be edited to ensure that readers understand the significance and context of the research. Please refer to any of our published articles for an example. 6) For more information: There is space at the end of each article to list relevant web links for further consultation by our readers. Could you identify some relevant ones and provide such information as well? Some examples are patient associations, relevant databases, OMIM/proteins/genes links, author's websites, etc... 7) Author contributions: the contribution of every author must be detailed in a separate section. 8) EMBO Molecular Medicine now requires a complete author checklist (https://www.embopress.org/page/journal/17574684/authorguide) to be submitted with all revised manuscripts. Please use the checklist as guideline for the sort of information we need WITHIN the manuscript. The checklist should only be filled with page numbers were the information can be found. This is particularly important for animal reporting, antibody dilutions (missing) and exact values and n that should be indicted instead of a range. 9) Every published paper now includes a 'Synopsis' to further enhance discoverability. Synopses are displayed on the journal webpage and are freely accessible to all readers. They include a short stand first (maximum of 300 characters, including space) as well as 2-5 one sentence bullet points that summarise the paper. Please write the bullet points to summarise the key NEW findings. They should be designed to be complementary to the abstract -i.e. not repeat the same text. We encourage inclusion of key acronyms and quantitative information (maximum of 30 words / bullet point). Please use the passive voice. Please attach these in a separate file or send them by email, we will incorporate them accordingly.
You are also welcome to suggest a striking image or visual abstract to illustrate your article. If you do please provide a jpeg file 550 px-wide x 400-px high.
10) A Conflict of Interest statement should be provided in the main text 11) Please note that we now mandate that all corresponding authors list an ORCID digital identifier. This takes <90 seconds to complete. We encourage all authors to supply an ORCID identifier, which will be linked to their name for unambiguous name identification.
Currently, our records indicate that the ORCID for your account is 0000-0002-2958-1799.
Please click the link below to modify this ORCID: Link Not Available 12) The system will prompt you to fill in your funding and payment information. This will allow Wiley to send you a quote for the article processing charge (APC) in case of acceptance. This quote takes into account any reduction or fee waivers that you may be eligible for. Authors do not need to pay any fees before their manuscript is accepted and transferred to our publisher.
*Additional important information regarding Figures
Each figure should be given in a separate file and should have the following resolution: Graphs 800-1,200 DPI Photos 400-800 DPI Colour (only CMYK) 300-400 DPI" The system will prompt you to fill in your funding and payment information. This will allow Wiley to send you a quote for the article processing charge (APC) in case of acceptance. This quote takes into account any reduction or fee waivers that you may be eligible for. Authors do not need to pay any fees before their manuscript is accepted and transferred to our publisher. ***** Reviewer's comments ***** Referee #1 (Comments on Novelty/Model System for Author): Developing new treatment strategies against MYC-driven cancers is a pressing need. These cancers are typically aggressive and refractory to first-line therapies, .e.g. the case of double-hit lymphomas. To solve this problem, Donati et al turn their attention to metabolic vulnerabilities imposed by MYC deregulation in cancer cells, which they show can be exploited for therapy. Although the data presented in this manuscript relies on work on a limited number of mouse and human cell lines, as well as xenograft studies in a couple of PDX lines, the thorough molecular studies presented here make a very strong case for specific drug combinations that could be easily moved into preclinical and early phase clinical trials.
Referee #1 (Remarks for Author): The revised version of this manuscript by Donati et al. has been strengthened by thorough discussions and additional experimental evidence. The old and new data extend previous findings (Donati et al, Mol Oncol 2022;16) and thoroughly investigate the molecular mechanisms underlying the sensitivity of cell lines with MYC deregulation to a combination of IACS-010759, a specific inhibitor of the electron transport chain (ETC) complex I, and ascorbate or inhibitors of NADPH production through the PPP. The data is well supported with studies in different cell lines and xenograft-PDX lines, as well as a combination of pharmacological and also gene KO experiments to confirm the specificity of the drug effects. These studies make a strong case for using this or similar drug combinations in future clinical studies; and provide a wealth of data and mechanistic insights that should help rationalize this drug combination or additional combinations, and further refine these for future clinical testing. I do not have major comments to the new version. All my prior concerns were answered, and the new experimental data included in this revised version of the manuscript -i.e., the use of additional cell lines and the investigation into Nrf2, as well as the detailed discussions in the rebuttal and the main text, are greatly appreciated.
Referee #2 (Comments on Novelty/Model System for Author):
The authors have addressed my concerns.
Referee #2 (Remarks for Author): Thank you for revisions. The manuscript is clearly enhanced.
Referee #3 (Remarks for Author): Th eauthors did a thorough job of adressong my (and the other reviewers) comments. There are two minor (text) issues remaining that may be clarified: 1.-In using the 13-C glucose tracing authors used [1,2-13C]-glucose, to discriminate between straight glycolysis and the bypass via the PPP. This could be clarified in the text a bit better (page7/8), especially since the data in fact do not quite confirm their first assumption, making this section difficult to understand. 2. Authors have added new data on the possibel contribution of ferroptosis; the data do not allow a straight yes or no but altogether it is now more informative. The last sentence of that section in Results seems in (syntax) error, please correct: Altogether, the above results suggest that the potentiation of IACS-010759-induced cell death by ascorbate was contributed by ferroptosis. 3) In the main manuscript file, please do the following: -Correct/answer the track changes suggested by our data editors by working from the attached document. Done.
-Add up to 5 keywords.
We introduced 5 keywords as instructed.
-In M&M, provide the antibody dilutions that were used for each antibody. Done.
-Please rename "Competing Interest" to "Disclosure Statement & Competing Interests". We updated our journal's competing interests policy in January 2022 and request authors to consider both actual and perceived competing interests. Please review the policy https://www.embopress.org/competing-interests and update your competing interests if necessary.
Done.
13th Apr 2023 2nd Authors' Response to Reviewers -Author contributions: Please specify author contributions in our submission system. CRediT has replaced the traditional author contributions section because it offers a systematic machine-readable author contributions format that allows for more effective research assessment. You are encouraged to use the free text boxes beneath each contributing author's name to add specific details on the author's contribution. More information is available in our guide to authors: https://www.embopress.org/page/journal/17574684/authorguide#authorshipguidelines Done.
-In data availability statement please remove information of previous published datasets. If no data produced within this study are deposited in public repositories, please add the sentence: "This study includes no data deposited in external repositories". Please check "Author Guidelines" for more information. https://www.embopress.org/page/journal/17574684/authorguide#availability ofpublishedmaterial Done: as instructed, we specified that "This study includes no data deposited in external repositories". 4) Supplemental information: Please rename the file to "Appendix" and add table of content on the first page. Rename figures to Appendix Figure S1 etc. and update the callouts in the main manuscript text.
Done. 5) Movie: Please rename the movie file to Movie EV1, zipp the legend with the movie file and update the callout in the main text.
Done. 6) Funding: Please place the information about funding to "Acknowledgments". Done.
7) The Paper Explained: Please provide a summary of the study structured as followed: PROBLEM -the medical issue you are addressing, RESULTS -the results obtained, and IMPACT -their clinical impact. Please refer to any of our published primary research articles for an example. Please check "Author Guidelines" for more information. https://www.embopress.org/page/journal/17574684/authorguide#researcharticleguide Done. 8) Synopsis: Every published paper now includes a 'Synopsis' to further enhance discoverability. Synopses are displayed on the journal webpage and are freely accessible to all readers. They include separate synopsis image and synopsis text. -Synopsis image: Please provide a striking image or visual abstract as a high-resolution jpeg file 550 px-wide x (250-400)-px high to illustrate your article.
-Synopsis text: Please provide a short standfirst (maximum of 300 characters, including space) as well as 2-5 one sentence bullet points that summarise the paper as a .doc file.
Please write the bullet points to summarise the key NEW findings. They should be designed to be complementary to the abstract -i.e. not repeat the same text. We encourage inclusion of key acronyms and quantitative information (maximum of 30 words / bullet point). Please use the passive voice.
-Please check your synopsis text and image before submission with your revised manuscript. Please be aware that in the proof stage minor corrections only are allowed (e.g., typos).
Done. 9) For more information: This space should be used to list relevant web links for further consultation by our readers. Could you identify some relevant ones and provide such information as well? Some examples are patient associations, relevant databases, OMIM/proteins/genes links, author's websites, etc...
The following is the corresponding authors' institutional website: https://www.research.ieo.it/research-and-technology/principal-investigators/oncogenestranscription-and-cancer/ 10) Source data: Please upload source data as clearly labelled one (zipped) file per figure.
Done. 11) As part of the EMBO Publications transparent editorial process initiative (see our Editorial at http://embomolmed.embopress.org/content/2/9/329), EMBO Molecular Medicine will publish online a Review Process File (RPF) to accompany accepted manuscripts. This file will be published in conjunction with your paper and will include the anonymous referee reports, your point-by-point response and all pertinent correspondence relating to the manuscript. Let us know whether you agree with the publication of the RPF and as here, if you want to remove or not any figures from it prior to publication. Please note that the Authors checklist will be published at the end of the RPF.
We agree with the publication of our detailed response, including the figures. 12) Please provide a point-by-point letter INCLUDING my comments as well as the reviewer's reports and your detailed responses (as Word file).
Provided with the present letter. ***** Reviewer's comments ***** Referee #1 (Comments on Novelty/Model System for Author): Developing new treatment strategies against MYC-driven cancers is a pressing need. These cancers are typically aggressive and refractory to first-line therapies, .e.g. the case of doublehit lymphomas. To solve this problem, Donati et al turn their attention to metabolic vulnerabilities imposed by MYC deregulation in cancer cells, which they show can be exploited for therapy. Although the data presented in this manuscript relies on work on a limited number of mouse and human cell lines, as well as xenograft studies in a couple of PDX lines, the thorough molecular studies presented here make a very strong case for specific drug combinations that could be easily moved into preclinical and early phase clinical trials.
Referee #1 (Remarks for Author): The revised version of this manuscript by Donati et al. has been strengthened by thorough discussions and additional experimental evidence. The old and new data extend previous findings (Donati et al, Mol Oncol 2022;16) and thoroughly investigate the molecular mechanisms underlying the sensitivity of cell lines with MYC deregulation to a combination of IACS-010759, a specific inhibitor of the electron transport chain (ETC) complex I, and ascorbate or inhibitors of NADPH production through the PPP. The data is well supported with studies in different cell lines and xenograft-PDX lines, as well as a combination of pharmacological and also gene KO experiments to confirm the specificity of the drug effects. These studies make a strong case for using this or similar drug combinations in future clinical studies; and provide a wealth of data and mechanistic insights that should help rationalize this drug combination or additional combinations, and further refine these for future clinical testing.
I do not have major comments to the new version. All my prior concerns were answered, and the new experimental data included in this revised version of the manuscript -i.e., the use of additional cell lines and the investigation into Nrf2, as well as the detailed discussions in the rebuttal and the main text, are greatly appreciated.
Referee #2 (Comments on Novelty/Model System for Author): The authors have addressed my concerns.
Referee #2 (Remarks for Author): Thank you for revisions. The manuscript is clearly enhanced.
Referee #3 (Remarks for Author): Th eauthors did a thorough job of adressong my (and the other reviewers) comments. There are two minor (text) issues remaining that may be clarified: 1.-In using the 13-C glucose tracing authors used [1,2-13C]-glucose, to discriminate between straight glycolysis and the bypass via the PPP. This could be clarified in the text a bit better (page7/8), especially since the data in fact do not quite confirm their first assumption, making this section difficult to understand.
We have inserted a new paragraph break and explanatory section in our text, which now reads as follows: "Lactate is the end product of glycolysis: by using the [1,2-13 C]glucose tracer, lactate produced by glucose that passed directly through glycolysis can be distinguished from that produced by glucose processed through the PPP ( Figure 3A): the former would be quantified as lactate M2 isotopomer, and the latter as lactate M1. Given the decreased PPP flux in IACS-010759 treated cells, we expected a reduced production of lactate M1: etc…" 2. Authors have added new data on the possibel contribution of ferroptosis; the data do not allow a straight yes or no but altogether it is now more informative. The last sentence of that section in Results seems in (syntax) error, please correct: Altogether, the above results suggest that the potentiation of IACS-010759-induced cell death by ascorbate was contributed by ferroptosis.
We have modified this final sentence, which now reads as follows: "Altogether, the above results suggest that ferroptosis contributes to the potentiation of IACS-010759-induced cell death by ascorbate." We are pleased to inform you that your manuscript is accepted for publication and is now being sent to our publisher to be included in the next available issue of EMBO Molecular Medicine.
Please read below for additional IMPORTANT information regarding your article, its publication and the production process.
Congratulations on your interesting work, | 20,144.6 | 2023-05-09T00:00:00.000 | [
"Medicine",
"Biology",
"Chemistry"
] |
The High-Redshift Gas-Phase Mass-Metallicity Relation in FIRE-2
The unprecedented infrared spectroscopic capabilities of JWST have provided high-quality inter-stellar medium (ISM) metallicity measurements and enabled characterization of the gas-phase mass-metallicity relation (MZR) for galaxies at z ≳ 5 for the first time. We analyze the gas-phase MZR and its evolution in a high-redshift suite of FIRE-2 cosmological zoom-in simulations at z = 5 − 12 and for stellar masses M ∗ ∼ 10 6 − 10 10 M ⊙ . These simulations implement a multi-channel stellar feed-back model and produce broadly realistic galaxy properties, including when evolved to z = 0. The simulations predict very weak redshift evolution of the MZR over the redshift range studied, with the normalization of the MZR increasing by less than 0 . 01 dex as redshift decreases from z = 12 to z = 5. The median MZR in the simulations is well-approximated as a constant power-law relation across this redshift range given by log( Z/Z ⊙ ) = 0 . 37 log( M ∗ / M ⊙ ) − 4 . 3. We find good agreement between our best-fit model and recent observations made by JWST at high redshift. The weak evolution of the MZR at z > 5 contrasts with the evolution at z ≲ 3, where increasing normalization of the MZR with decreasing redshift is observed and predicted by most models. The FIRE-2 simulations predict increasing scatter in the gas-phase MZR with decreasing stellar mass, in qualitative agreement with some observations
INTRODUCTION
The mass-metallicity relation (MZR) is the observed positive correlation between a galaxy's stellar mass and its metallicity (Lequeux et al. 1979;Tremonti et al. 2004).There are both stellar and gas-phase versions of the MZR, which relate stellar mass to stellar metallicity and interstellar medium (ISM) metallicity, respectively.Throughout this work we focus on the gas-phase MZR.The MZR and its evolution have been observed extensively across wide ranges of redshift and stellar mass (e.g., Erb et al. 2006;Lee et al. 2006;Zahid et al. 2011Zahid et al. , 2012;;Henry et al. 2013a,b;Maier et al. 2014;Steidel et al. 2014;Yabe et al. 2014;Sanders et al. 2015;Guo et al. 2016).Zahid et al. (2013) characterized the observed evolution of the MZR from z = 0 − 2.3, noting that, for a given stellar mass, metallicity tends to increase as redshift decreases.Previously, relatively small samples of galaxies have been able to confirm the existence of the MZR up to z ∼ 3 (e.g., Maiolino et al. 2008;Mannucci et al. 2009).
Recently, new observations from the James Webb Space Telescope (JWST) have greatly expanded the physical regimes where the MZR has been probed, both in mass and in cosmic time.For example, Nakajima et al. (2023) characterize the evolution of the MZR for 4 < z < 10 using metallicity measurements of 135 galaxies identified by JWST in this redshift range.Curti et al. (2023a) analyze the gas-phase metallicities of 146 highredshift (3 < z < 10) galaxies observed by JWST, 80 of which were also present in the sample from Nakajima et al. (2023).Bunker et al. (2023) use strong-line ratios to constrain the metallicity of GN-z11 at z ∼ 10.6.Gas-phase metallicities have been derived for a number of other high-redshift JWST targets via direct methods (e.g., MACS0647-JD at z = 10.165;Hsiao et al. 2024, galaxies in JWST Early Release Observations at z ∼ 8; Curti et al. 2023b, and 9 sources in the sight line of MACS J1149.5+2223 at z = 3 − 9; Morishita et al. 2024).It is imperative that these unprecedented advances in observations of the MZR at high redshift and low stellar mass be met with detailed theoretical predictions in the newly probed regimes.
The MZR and its evolution have been studied at high redshift in a number of different simulation codes, such as IllustrisTNG (Torrey et al. 2019), FirstLight (Langan et al. 2020), SERRA (Pallottini et al. 2022), AS-TRAEUS (Ucci et al. 2023), and FLARES (Wilkins et al. 2023).FirstLight, ASTRAEUS, and FLARES predict weak or no evolution in the MZR for z ≳ 5.The Feedback in Realistic Environment (FIRE) project 1 is a set of cosmological zoom-in simulations that resolve the multiphase ISM of galaxies and implement detailed models for star formation and stellar feedback (Hopkins et al. 2014(Hopkins et al. , 2018(Hopkins et al. , 2023)).Ma et al. (2016) characterized the MZR in the first generation of FIRE simulations from z = 0 to z = 6.A number of previous works (e.g., Ma et al. 2016;Torrey et al. 2019;Langan et al. 2020) invoke gas fractions to explain the redshift evolution or lack of redshift evolution in the MZR.Recently, Bassini et al. (2024) analyze the evolution of the MZR for z = 0 − 3 in FIREbox (Feldmann et al. 2023), a cosmological volume simulation that uses FIRE-2 physics.
In this work, we present the MZR predicted from a high-redshift suite of FIRE-2 simulations.We measure the MZR at redshifts 5 ≤ z ≤ 12 and we provide fitting formulae describing the redshift evolution of the MZR.We show that our model is in agreement with recent high-redshift observations of the MZR made by JWST.We compare our best-fit MZR with previous simulationbased models across this redshift range.This work analyzes the same suite of FIRE-2 simulations as done in two recent papers by Sun et al. (2023a,b), which focus on bursty star formation and its implications for the highredshift ultraviolet luminosity function (UVLF) and survey selection effects in the context of JWST.Yang et al. (2023) also analyzed some galaxies from this suite of simulations and showed that metallicities derived from mock observations of emission lines from individual HII regions of FIRE-2 galaxies are in 1σ agreement with JWST and ALMA observations.This paper is organized as follows.In Section 2, we describe the high-z suite of FIRE-2 simulations used in 1 See FIRE project website: http://fire.northwestern.edu.
this paper and the methods used to analyze the gasphase MZR.In Section 3, we present the resulting highz MZR in FIRE-2 simulations.We compare our results with new observations made by JWST and with previous theoretical models of the MZR derived from FIRE-1 and other simulations.Finally, in Section 4 we summarize the key conclusions from this work and discuss potential future work on high-z metallicity scaling relations.
Throughout this work we adopt a standard flat ΛCDM cosmology with cosmological parameters consistent with Planck Collaboration et al. (2020).We define a galaxy's gas-phase metallicity to be the mass-weighted mean metallicity of all gas particles within 0.2R vir of the galaxy's center.All log functions are base 10, except when written as ln (natural logarithm).
Simulations
The simulations analyzed in this paper are cosmological zoom-in simulations from a FIRE-2 high-redshift suite originally presented by Ma et al. (2018aMa et al. ( ,b, 2019)).All simulations in this suite were run using the GIZMO code (Hopkins 2015).The hydrodynamic equations are solved using GIZMO's meshless finite-mass (MFM) method.The 34 particular simulations analyzed in this paper are the z5m12a-e, z5m11a-i, z5m10a-f, z5m09ab, z7m12a-c, z7m11a-c, z9m12a, and z9m11a-e runs.The names of these simulations denote the final redshift that they were run down to (z fin = 5, 7, or 9) and the main halo masses (ranging from M h ≈ 10 9 − 10 12 M ⊙ ) at these final redshifts.Baryonic (gas and star) particles have initial masses m b = 100 − 7000M ⊙ (simulations with more massive host galaxies have more massive baryonic particles).Dark matter particles are more massive by a factor of Ω DM /Ω b ≈ 5. Gravitational softenings are adaptive for the gas (with minimum Plummerequivalent force softening lengths ϵ b = 0.14-0.42physical pc) and are fixed to ϵ * = 0.7-2.1 physical pc and ϵ DM = 10 − 42 physical pc for star and dark matter particles, respectively.
A full description of the baryonic physics in FIRE-2 simulations is given by Hopkins et al. (2018), while more details on this specific suite of FIRE-2 simulations are discussed in Ma et al. (2018aMa et al. ( ,b, 2019)).Here, we briefly review the aspects of the simulations most pertinent to our MZR analysis.
FIRE-2 simulations track the abundances of 11 different elements (H, He, C, N, O, Ne, Mg, Si, S, Ca, Fe).Metals are returned via multiple stellar feedback processes, including core collapse and type Ia supernovae as well as winds from O/B and AGB stars.Star particles in the simulations represent stellar populations with a Kroupa IMF (Kroupa 2001) and with the stellar evolution models from STARBURST99 (Leitherer et al. 1999).These simulations also include sub-grid modelling for turbulent diffusion of metals to allow for chemical exchange between neighboring particles.The implementation and effects of the sub-grid turbulent diffusion model in FIRE simulations are described in Colbrook et al. (2017) and Escala et al. (2018).
Analysis
We analyze galaxies in each simulation at snapshots from z = 5 to z = 12, with integer redshift increments.In addition to the main, most massive galaxy, each simulation's zoom-in region captures numerous other, less massive, galaxies.The coordinates of galaxy centers and virial radii are taken from Amiga Halo Finder (AHF) catalogs (Gill et al. 2004;Knollmann & Knebe 2009).Halos are defined using the redshift-dependent overdensity parameter from Bryan & Norman (1998).The galaxies used in our analysis are filtered based on the following criteria.Galaxies must have a minimum stellar mass within 0.2R vir M * > 10 6 M ⊙ , a non-zero gas mass (M gas > 0) within the same radius, and the halo must have a minimum virial mass given by M vir ≥ 10 9 M ⊙ .We exclude satellite galaxies and subhalos from our analysis as their properties can be significantly influenced by their host galaxy.Finally, we filter out any galaxies that are "contaminated" by low-resolution dark matter particles residing within 1R vir of their center.After applying these cuts, the number of galaxies in our sample at different redshifts ranges from N gal = 106 − 300.
This work focuses on the MZR for gas-phase metallicity, which reflects the current chemical composition of a galaxy's ISM.We leave the analysis of stellar metallicities, which provide information on the integrated chemical enrichment history of galaxies, as a subject for future work.We define a galaxy's gas-phase metallicity to be the mass-weighted mean metallicity of all gas particles within 0.2R vir of the galaxy's center.We calculate a galaxy's stellar mass as the total mass of all star particles within 0.2R vir of the its center.We choose 0.2R vir as the outer boundary for our galaxies rather than the commonly used 0.1R vir due to the tendency of high-redshift galaxies to have more expansive stellar populations relative to their virial radii as compared to galaxies at lower redshift.This is consistent with the radial cut used by Sun et al. (2023a,b) in their analysis of the same simulation suite.
In Appendix C we consider alternative definitions and gas cuts for calculating gas-phase metallicities.This includes weighting by gas particles' SFR property (rather than by mass) when calculating metallicity, setting the outer boundary of galaxies to be 0.1R vir (rather than 0.2R vir ), and introducing a temperature cut of T gas < 10 4 K on gas particles used to calculate metallicity.The SFR-weighting scheme has only a minor impact on our calculated MZR and likely introduces a bias by removing galaxies with no star-forming gas from our sample since, according to the fundamental metallicity relation, galaxies with lower star formation rate will tend to have higher metallicities at fixed stellar mass (Ellison et al. 2008;Mannucci et al. 2010;Marszewski et al. in prep.).We also show that our MZR is relatively insensitive to the different radial and temperature cuts applied to gas particles in our analysis.
RESULTS
In this section we present the MZR for our analyzed galaxies.Gas-phase metallicities are given in units of log(Z/Z ⊙ ), where Z is the total metal mass fraction and we have adopted the solar metallicity, Z ⊙ = 0.02, from Anders & Grevesse (1989).With this value of Z ⊙ , we can convert to units of oxygen abundance (often more relevant for comparing with observations) using the calibration presented in Appendix B of Ma et al. (2016), log(Z/Z ⊙ ) = 12 + log(O/H) − 9.00. (1) This calibration was obtained by fitting metallicity against oxygen abundance in FIRE-1 simulated galaxies and may be subject to systematic uncertainties originating from supernovae rates, metal yields from different enrichment processes, and the fiducial solar metallicity used in FIRE simulations.
Mass-Metallicity Relation
At each redshift analyzed, we separate the data into 5 stellar mass bins of equal width.These stellar mass bins span from the minimum stellar mass of galaxies included in our sample (M * = 10 6 M ⊙ ) to the maximum stellar mass of galaxies in our sample at that redshift.We then calculate linear best fit models between the median values of log(Z/Z ⊙ ) and log(M * /M ⊙ ) in different stellar mass bins.For purposes of fitting, the median metallicity value of each stellar mass bin is given equal weight.We provide a fitting formula across our redshift range (z ∈ [5, 12]) by allowing the slope and y-intercept of the linear fit to vary with (1 + z), using the form: where Table 1.Best fit parameters for three different evolution models of the MZR using the form given by equation 2. For the first model, both the slope and normalization of the MZR are allowed to vary smoothly with (1 + z).The second model fixes the slope of the fit but allows for the normalization to vary.The third model fixes both the normalization and slope across our entire redshift range (z = 5 − 12).
To investigate the importance of the redshift evolution of the slope and normalization of our fits, we also provide fitting formulae where we fix combinations of these parameters across our redshift range.In particular we present one version of the fit where we fix the slope of the MZR by setting α = 0 and another version of the fit where we fix both the slope and normalization by setting α = β = 0. Our best-fit parameters for all three versions of the fit are given in Table 1.The MZR for each analyzed galaxy along with the medians of each stellar mass bin and the three versions of the best-fit lines are plotted for all analyzed redshifts in Figure 1.
The agreement between our redshift-evolving fits and our fit with no evolution suggests weak redshift evolution of the MZR across our redshift range.Our Slope and Normalization Evolution fit is characterized by a decrease in slope and increase in normalization of the MZR with decreasing redshift.The effects of these changes, however, largely offset one another across our stellar mass range.From the Normalization Evolution version of our fit we find that, when holding the slope constant, the normalization of the MZR changes by only ≈ 0.01 dex over our entire analyzed redshift range (z = 5 − 12).The evolution found in either of these models is within the level of inherent uncertainty in our data, as evidenced by their agreement with the No Evolution model.We therefore conclude that the MZR in our simulations is characterized by weak to no redshift evolution for z ≳ 5.
We also find that the scatter of our relation tends to decrease with increasing stellar mass.For example, our lowest two stellar mass bins, centered below 10 7.5 M ⊙ , have scatters between 0.6 − 1.2 dex, while stellar mass bins centered above 10 8.5 M ⊙ typically have scatter less than 0.5 dex.Therefore, FIRE-2 simulations predict large scatter in the MZR at low stellar mass.We hypothesize that this is, in part, due to galaxies becoming less bursty relative to their stellar mass as their stellar mass increases.This explanation would be consistent with the fundamental metallicity relation (FMR) where the star-formation rate (or gas-fraction) acts as a secondary predictor for metallicity (Ellison et al. 2008;Mannucci et al. 2010).According to the FMR, smaller variance in star-formation rates at a given stellar mass would result in smaller scatter in the metallicities at that stellar mass.The scatter we predict at the lowmass end is much larger than predicted by Bassini et al. (2024) in their analysis of FIREbox simulations at z ≤ 3, but it is not inconsistent with that study as the mass range where we predict increased scatter is below the mass limit of their analysis (M * ∼ 10 8 M ⊙ ).There is also likely some redshift evolution of the scatter between z = 5 and z = 3.We verified using FIRE-2 zoom-in sim-ulations evolved to z = 0 that the scatter in the MZR at z = 3 increases significantly below the lower stellar mass limit used by Bassini et al. (2024) but is in agreement with their results above this limit.We do not find a clear redshift evolution trend of the scatter of the MZR at fixed stellar mass over our redshift range in FIRE-2.
Comparison with Observations
The advent of JWST has allowed for the measurement of significant samples of ISM metallicities at z ≥ 5 for the first time.JWST surveys have already made many metallicity measurements available for galaxies at much earlier redshift than previously feasible (e.g., Curti et al. 2023a;Nakajima et al. 2023).Hsiao et al. (2024) measure the metallicity of MACS0647-JD using the direct, T e -based method at z = 10.165 and Bunker et al. (2023) present a metallicity measurement of the galaxy GN-z11 using strong-line ratios at z = 10.6, further demonstrating the observational power of JWST.With additional large samples of metallicity measurements from JWST on the way, it is timely that we verify the results of our simulations against current observations and make predictions for future observations.Here, we compare our best-fit MZR to observations already made available from JWST surveys.2020).These findings support the notion that the MZR evolves significantly for z ≲ 3 and evolves weakly or not at all for z ≳ 3. Nakajima et al. (2023) do not find significant evidence for evolution of the MZR over the redshift range z = 4 − 10.
Increasing scatter of the MZR with decreasing stellar mass is found in several observational works at lower redshift (e.g., Zahid et al. 2012 at z ≤ 0.1, Guo et al. 2016 at 0.5 ≤ z ≤ 0.7), qualitatively matching the trend we find in FIRE-2 at higher redshift.More recently, Li et al. (2023) find an increase in the scatter at the low-mass end of the MZR using a sample of 51 dwarf galaxies observed by JWST at z = 2 − 3.They quote intrinsic scatters in their MZR of 0.16−0.18dex and 0.23 dex for stellar mass bins at 10 8 − 10 9 M ⊙ and 10 7 M ⊙ , respectively.
As shown in Figure 2, there are apparent quantitative inconsistencies between the scatter predicted by our model and the scatter observed by Curti et al. (2023a) and Nakajima et al. (2023).At lower stellar masses (M * /M ⊙ ≲ 10 8.5 ), the scatter in the simulations is larger than that of the observations.In part, this may be a result of observational samples at high redshift having a selection bias toward more luminous, actively starforming galaxies (e.g., Sun et al. 2023a).According to the fundamental metallicity relation, galaxies with low star formation rates that are absent from observational samples would have systematically higher metallicites at a given stellar mass.This explanation is consistent with our analysis since the increased scatter in our MZR is largely due to a small number of very metal rich galaxies.Additionally, this could be a result of strong line measurements of metallicity, which constitute the majority of these observational samples, systematically underestimating the scatter in the MZR.Strong line measurements are susceptible to predicting systematically low scatter since they rely on ratios between specific strong emission lines and do not take into account potential scatter in other parameters used to infer metallicities from these ratios.For example, it is possible for two different galaxies to have different metallicities yet have the same line ratio (e.g, if the ionization state of the gas is different).A particular strong line calibration would then imply a single metallicity.At higher stellar masses (M * /M ⊙ ≳ 10 8.5 ), where the predicted MZR is very narrow, the scatter is smaller in simulations than in observations.This could be because of the uncertainties present in observational methods (e.g., noise in raw observational data and various uncertainties in converting from line fluxes to metallicities) that are not present in the analysis of simulations.These uncertainties could result in a larger apparent scatter of the MZR in a regime where the relation is particularly tight in nature.
Comparison with Other Theoretical Models
Some other cosmological and seminumerical simulation projects have analyzed the MZR at high redshift.We compare our work on the MZR in high-redshift FIRE-2 simulations to other theory-based work and discuss predictions for future observations.Figure 3 shows comparisons between the best-fit model presented in this work and other theoretical and simulation-based models.
The weak time evolution of the MZR we find at 5 ≤ z ≤ 12 is consistent with previous results from FIRE-1 simulations reported by Ma et al. (2016), who predicted a flattening of the evolution of the MZR at high redshift.However, our best-fit model predicts metallicities approximately 0.3-0.4dex higher than their best fit derived from FIRE-1 simulations.Bassini et al. (2024) find significant evolution in the MZR from z = 0 − 3 in the FIREbox simulation, a full cosmological volume simulation that uses the physics of FIRE-2.In particular, the gas-phase metallicity at fixed stellar mass is found to in-crease with decreasing redshift over the range z = 0 − 3. Our best-fit MZR at z = 5 is similar in normalization and slope to the FIREbox results at z = 3.We therefore conclude that the evolution of the MZR in FIRE-2 simulations is characterized by metallicity increasing with decreasing redshift for z ≲ 3 and by little to no evolution for z ≳ 3 at fixed stellar mass.A more complete comparison between this work and previous work on the MZR in FIRE simulations is provided in Appendix A.
Weak evolution of the MZR past z ≳ 5 has been found in some other simulation projects as well.Langan et al. ( 2020) report a slight increase in mean metallicity with increasing redshift in FirstLight simulations from z = 5 − 8.They do not, however, find this trend to be significant at a level beyond the intrinsic scatter of their data.Similarly, Ucci et al. (2023) characterize the MZR in ASTRAEUS, a seminumerical simulation project, as having effectively no redshift evolution from z = 5 − 10.The median metallicity of their low-mass galaxies (M * ∼ 10 7 M ⊙ ) decreases very slightly (∼ 0.15 dex) from z = 10 to z = 5, while metallicities in their high-mass range (M * ∼ 10 9 M ⊙ ) remain nearly constant over the redshift range.Wilkins et al. ( 2023) also find no significant evolution in the MZR for z ≥ 10 in FLARES.Torrey et al. (2019) report that, for IllustrisTNG simulations from z = 2 − 10, the normalization of the MZR decreases with increasing redshift while the slope is not a strong function of redshift.The evolution of the normalization in their simulations within their redshift range is given by d log(Z)/dz ≈ −0.064.This evolution is stronger than the evolution found in FIRE-2 and other simulation projects.
Interpretation using Analytic Models
Multiple explanations for the weak evolution of the MZR beyond z ≳ 5 have been put forth.Torrey et al. (2019) find that the evolution of the MZR from z = 2 to z = 10 is explained by the evolution of the gas fraction through the gas-regulator model.The regulator model gives an approximate equilibrium metallicity of the form (e.g., Lilly et al. 2013) where y = M Z /M ⋆ (often assumed to be y = 0.02) is the metal yield (the mass of metals returned to the ISM per unit mass in formed, long-lived stars), Z acc is the metallicity of accreted gas, η = Ṁwind /SFR is the mass loading factor of galactic winds, where SFR is a galaxy's star formation rate, and f gas is the version of the galactic gas fraction given by M gas /M ⋆ .However, Bassini et al. (2024) show that the evolution of the gas fraction does not drive the evolution of the MZR in FIREbox at lower redshifts (z = [0, 3]).Rather, an evolution in both the mass-loading factor and the metallicities of inflows and outflows at fixed stellar mass drive the decrease of the MZR with increasing redshift up to z = 3.In Appendix B we show there is substantial redshift evolution in the median gas fractions of our analyzed galaxies with redshift.This fact combined with the very weak evolution of the MZR over the same redshift range implies that the weak evolution of the high-redshift MZR is likely not explained solely by the evolution of the gas fraction in the gas-regulator model.Other works have used idealized "closed box" or "leaky box" models to explain the weak evolution of the MZR at high redshift (e.g., Langan et al. 2020;Ma et al. 2016).In the "closed box" model, metallicity is given by while in the "leaky box" model, where fgas is the version of the gas fraction given by M gas /(M ⋆ + M gas ) and y eff is the effective metal yield which is often calibrated to make the "leaky box" model best fit the data.In Appendix B we present median values of fgas across our redshift range in different stellar mass bins.The difference between y and y eff quantifies the net impact of inflows and outflows on a galaxy's metallicity.From this picture it has been argued that the weak evolution of the high-redshift MZR is due to values of fgas that have saturated to unity and/or that evolve weakly at high redshift.Langan et al. (2020) find that a "leaky box" model with y eff = 0.002 is able to explain the weak evolution of the MZR predicted by First-Light simulations.However, with an effective yield that is an order of magnitude lower than the intrinsic stellar yield (y ≈ 0.02), this model concedes that the impact of inflows and outflows is crucial.Ma et al. (2016) also found that a "closed box" model systematically overpredicts metallicities in FIRE-1 simulations, also due to the large effects of inflows and outflows (i.e.y eff is significantly smaller than y).Thus, weakly evolving gas fractions are at best an incomplete explanation for the weak evolution of the MZR at high redshift.It would be interesting to perform an analysis similar to that done by Bassini et al. (2024) to quantify explicitly the effects of gas fractions, inflow/outflow metallicities, and massloading factors on the evolution of the MZR.
DISCUSSION AND CONCLUSIONS
We characterize the high-redshift gas-phase MZR in FIRE-2 simulations.We find that the MZR from z = 5 − 12 in these simulations is an approximately constant power-law relation given by log(Z/Z ⊙ ) = 0.37 log(M * /M ⊙ ) − 4.3.Weak evolution of the highredshift MZR has been found in numerous other simulations (e.g., FirstLight (Langan et al. 2020), ASTRAEUS (Ucci et al. 2023), FLARES (Wilkins et al. 2023)), with stronger evolution being found in IllustrisTNG (Torrey et al. 2019).Combining our work with the analysis of the MZR in FIREbox (also run with the FIRE-2 code) from Bassini et al. (2024), we find that the normalization of the MZR in FIRE-2 decreases by ∼ 0.4 dex from z = 0 − 3 and evolves weakly for z ≳ 3.
Our best-fit MZR is in good agreement with early measurements of the MZR at z ≳ 5 made by JWST.In particular, our results are in agreement with the mean stellar mass and redshift binned results from highredshift surveys presented by Curti et al. (2023a) and Nakajima et al. (2023).These same simulations have also been tested against the JWST UVLF by Sun et al. (2023a,b).These agreements validate FIRE-2 as a useful predictive tool for future observations as JWST continues to expand the probed parameter region of metallicity scaling relations.
We also predict increasing scatter in the gas-phase MZR with decreasing stellar masses.This effect may be attributable to galaxies becoming more bursty as their stellar mass decreases.
Future work will explore the existence of the fundamental metallicity relation (FMR) in FIRE simulations.In addition to stellar mass, the FMR suggests gas mass fraction or star formation rate as secondary predictors for metallicity (Ellison et al. 2008;Mannucci et al. 2010).A comprehensive study on the effects of evolving gas fractions, mass loading factors, and inflow/outflow metallicities on ISM metallicities, similar to the work done by Bassini et al. (2024) at z = 0 − 3, would be valuable to explain the evolution or lack of evolution in metallicity scaling relations at high redshift.Beyond gas-phase metallicities, which probe the current enrichment conditions of the ISM, future work may also investigate stellar-phase metallicity scaling relations, which provide information on integrated chemical enrichment histories of galaxies.Future analysis of the scaling relations for individual metal species tracked in FIRE simulations will allow us to make predictions for observations of individual chemical abundances made by JWST.Finally, another emerging area of study which our simulations could inform is the measurement of metallicity gradients (radial dependence of metallicity) at high redshift.Metallicity gradients serve as a probe for the larger processes that drive galaxy evolution in the highredshift regime.These processes include large galactic inflows or merger events that drive bursts of star formation, generating strong feedback capable of flattening metallicity gradients, disrupting galaxy kinematics, and driving outflows.Studying the dispersal of metals from galaxies into the intergalactic medium (IGM) will allow us to draw connections between IGM metallicity and galaxies during the epoch of reionization probed by JWST (Bordoloi et al. 2023).
A. COMPARISON WITH PREVIOUS FIRE WORK
In Figure 4, we compare our MZR results with previous results from FIRE simulations.First, we compare to the MZR in FIRE-1 zoom-in simulations from Ma et al. (2016).As in Figure 3, we find a significant (∼ 0.3-0.4dex) offset between the MZR in this work and that from FIRE-1 presented by Ma et al. (2016) at z = 5.For the comparison shown, we repeated our analysis of the MZR matching the temperature and radial cuts on the gas particles included in our analysis with those made by Ma et al. (2016) for consistency.These cuts include all gas with temperature below 10 4 K within 0.1R vir of the galaxy's center (in the main body of the paper, we included all gas within 0.2R vir ).The change in cuts on the gas does not appreciably change our results and the offset between the FIRE-1 result and this work remains.
We additionally compare this work with a recent study of the MZR by Bassini et al. (2024) based on the FIREbox simulation, run with the FIRE-2 code like our zoom-in simulations, but analyzed over z = 0 − 3.While we apply slightly different cuts (they consider all gas within 0.1R vir ), we find close agreement between the MZR at our lowest redshift analyzed (z = 5) and their highest redshift analyzed (z = 3).This agreement suggests the absence of significant evolution in the MZR in FIRE-2 simulations for z ≳ 3. The FIREbox data also allow us to compare FIRE-2 vs. FIRE-1 runs at different redshifts.We find that the FIRE-1 fit from Ma et al. (2016) is significantly offset from the FIREbox result at z = 3 (0.15-0.4 dex below).However, the offset between FIRE-1 and FIRE-2 largely vanishes by z = 0.This is reassuring because the comparison between FIRE-1 and FIRE-2 in Hopkins et al. (2018), which found no major differences in the stellar mass-halo mass relation between the two sets of simulations, focused on z = 0.This suggests that some aspects of the cosmic baryon cycle which determines the enrichment of galaxies differ significantly between FIRE-1 and FIRE-2 at high redshift, even though the stellar masses and galaxy metallicities converge to broadly consistent values by z = 0.It is beyond the scope of this work to fully investigate the cause of this offset as FIRE-2 implemented a large number of improvements and changes that could impact the metal enrichment of the ISM as well as the driving of inflows and outflows.Changes made between the FIRE-1 and FIRE-2 codes include the introduction of a more accurate hydrodynamic solver, a supernova feedback scheme that more accurately conserves momentum, and updated metal yields (see Hopkins et al. 2018 for an exhaustive list and full descriptions of improvements).Here, temperature and radial cuts are made on the high-redshift FIRE-2 galaxies to match those made by Ma et al. (2016) in analyzing FIRE-1 galaxies.Our z = 5 fit is in good agreement with the z = 3 result from FIREbox (run with the FIRE-2 code) presented by Bassini et al. (2024) (shown in orange), indicating weak evolution of the MZR in FIRE-2 galaxies for z ≳ 3. The fit from FIRE-1 simulations (shown in purple) remains consistently 0.3-0.4dex below this work at z = 5 and 0.15-0.4dex below the FIREbox result at z = 3.The agreement between FIREbox and FIRE-1 at z = 0 demonstrates that this offset between FIRE-1 and FIRE-2 runs is only present at higher redshift.
B. EVOLVING GAS FRACTIONS Previous works have used galaxies' gas fractions to explain the evolution of the MZR using either gas-regulator models or "closed/leaky box" models.In the gas-regulator model we define the gas fraction to be f gas = M gas /M ⋆ , where M gas and M ⋆ are the total gas mass and stellar mass within 0.2R vir of the center of a galaxy, respectively.Previous works have used the redshift evolution of galaxies' gas fractions to explain the evolution of the MZR for z ≲ 3.5.However, Bassini et al. (2024) show that evolving gas fractions are not responsible for the evolving MZR in this redshift range in the FIREbox simulation.Rather, the redshift dependence on the metallicities of gas inflows and outflows as well as the evolution of the mass loading factor drive the evolution of the MZR at lower redshifts.Other works have cited weak evolution of gas fractions at high redshift as being responsible for weak evolution of the high-redshift (z ≳ 3.5) MZR (e.g., Torrey et al. 2019).The left panel of Figure 5 presents stellar mass-binned gas fractions from our simulations that vary substantially with redshift and in a mass-dependent way, indicating that gas fractions alone cannot explain the weakly evolving high-redshift MZR.
In the "closed box" or "leaky box" models, we define a second version of the gas fraction as fgas = M gas /(M ⋆ +M gas ).The right panel of Figure 5 shows values of fgas in our simulated galaxies.The saturation of fgas to unity and/or its weak evolution at high redshift has been used to explain the weak evolution in the MZR for z ≳ 3.5 (e.g., Ma et al. 2016;Langan et al. 2020).However, both of these works find that, in order for their simulation data to be well-fit by a "leaky box" model, they must use an effective stellar yield (y eff ) that is much smaller than the intrinsic stellar yield (y = 0.02).The significantly smaller value of y eff implies that there is a large net impact of inflows and outflows on metallicities.Thus, the weak evolution of the MZR at high redshift cannot be attributed to saturated or weakly evolving values of fgas alone and must take into account the larger effects of inflows and outflows.percentiles.Empty squares represent the median gas fractions of stellar mass bins that contain fewer than 5 galaxies.Solid lines present best fits to the median gas fractions.At some stellar masses, median gas fractions show large fluctuations between different redshifts.The systematic redshift evolution is especially clear for the definition on the left.
C. ALTERNATIVE CALCULATIONS OF METALLICITIES
Throughout this work we define a galaxy's gas-phase metallicity to be the mass-weighted mean metallicity of all gas particles within 0.2R vir of the galaxy's center.Here, we investigate the effects of using alternative cuts on the gas particles included in our analysis.We consider a smaller radial cut on our galaxy (r gas < 0.1R vir ).We also present a version of our analysis with radial and temperature cuts that exactly match those used by Ma et al. (2016) in their analysis of the MZR in FIRE-1 (r gas < 0.1R vir and T gas < 10 4 K).Applying either of these cuts does not appear to have a significant effect on our resulting MZR.We ultimately elect to use r gas < 0.2R vir as our radial cut due to the tendency of high-redshift galaxies to have more spatially extended stellar populations relative to their virial radii as compared to galaxies at lower redshift.We also choose to not include a temperature cut on our gas as this cut would eliminate a significant number of galaxies from our sample.
We also consider weighting gas particles by their star formation rate property rather than by their mass when calculating a galaxy's metallicity.This SFR-weighting scheme does not have a significant impact on our calculated MZR and likely introduces a bias by removing galaxies with no star-forming gas from our sample.We therefore elect to use mass-weighted metallicities.A comparison between the MZR calculated using our fiducial method and the MZR calculated using the alternative cuts and the SFR-weighting method is shown in Figure 6. .The gas-phase MZR in our simulations at z = 8 calculated using different weighting schemes and cuts on the gas included.The solid lines show our best fits to the medians, allowing for redshift evolution of the slope and normalization.The median metallicities of stellar mass bins are shown by squares with error bars representing the 16th and 84th percentiles.Note that, while the medians are shown at z = 8 only, the fits are calculated across all redshifts analyzed (z = 5 − 12), and thus, may be offset from the medians shown.Our fiducial method, where metallicity is calculated as the mass-weighted mean metallicity of all gas particles within 0.2Rvir, is shown in black.The method where metallicities are calculated as the SFR-weighted mean metallicity of all gas particles within 0.2Rvir, is shown in blue.The mass-weighted method only considering gas particles within 0.1Rvir is shown in red.The mass-weighted method including a temperature cut (Tgas < 10 4 K) and only considering gas particles within 0.1Rvir is shown in green.Different weighting schemes and cuts on the gas do not appear to significantly change our resulting MZR with all models differing from our fiducial model by ≲ 0.1 dex across our stellar mass range.
Figure 1 .
Figure1.The evolution of the gas-phase MZR in FIRE-2 simulations from z = 5 − 12.The solid black, solid red, and dashed blue lines show our best fits to the medians for the Slope and Normalization Evolution, Normalization Evolution, and No Evolution models, respectively.Values for the slope (A) and normalization (B) are provided for each model at the bottom right of each redshift panel in their corresponding color.Metallicities for individual galaxies are shown in orange.The median metallicities of stellar mass bins for stellar mass bins containing at least five galaxies are shown by solid black squares with error bars representing the 16th and 84th percentiles.Empty black squares represent the median metallicities of stellar mass bins that contain fewer than 5 galaxies.Both the slope and normalization of the MZR are approximately constant over this redshift interval.
Figure 2 .
Figure 2. Comparison between our best-fit and recent high-redshift observations of the MZR made by JWST.We center our comparisons at z = 5 (left panel) and z = 8 (right-panel).As in Figure 1, solid black squares show the median metallicities of stellar mass bins with error bars representing the 16th and 84th percentiles.Our model shows good agreement with the stellar-mass-binned mean values from both Nakajima et al. (2023) (orange squares and diamonds) and Curti et al. (2023a) (green squares).Measurements of individual galaxies are represented by dots, including 3 JWST Early Release Observations at z ∼ 8 presented by Curti et al. (2023b) (shown in blue), 9 measurements at z = 3 − 9 presented by Morishita et al. (2024) (shown in red), the MACS0647-JD galaxy presented by Hsiao et al. (2024) at z ∼ 10.16 (shown in brown), and the GN-z11 galaxy presented by Bunker et al. (2023) at z ∼ 10.6 (shown in magenta).
Figure 3 .
Figure 3.Comparison between our best-fit model and previous simulation-based work on the MZR.Results are shown for IllustrisTNG(Torrey et al. 2019), FirstLight(Langan et al. 2020), ASTRAEUS(Ucci et al. 2023), FLARES(Wilkins et al. 2023) for the redshift and stellar mass ranges at which they were reported.Additionally, the MZR fit for FIRE-1 data fromMa et al. (2016), given in the range z = 0 − 6, has been extrapolated and plotted across our redshifts of interest.With the exception of IllustrisTNG, all models shown here are consistent with weak evolution of the MZR over this redshift interval.fraredSpectrograph (NIRSpec) Instrument of 135 and 146 galaxies, respectively, primarily using strong line methods.There are 80 galaxies that are originally presented byNakajima et al. (2023) which are also included in the analysis byCurti et al. (2023a).The sample fromNakajima et al. (2023) includes 10 direct, T e -based measurements.Curti et al. (2023a) include 3 direct, T e -based measurements, originally presented byCurti et al. (2023b), as well as the measurement of GN-z11 byBunker et al. (2023), all of which we compare to individually.Galaxies analyzed inCurti et al. (2023a) have redshifts in the range 3 < z < 10 and stellar masses in the range 10 6.5 < M * /M ⊙ < 10 10 while galaxies inNakajima et al. (2023) have redshifts 4 < z < 10 and stellar masses 10 7 < M * /M ⊙ < 10 10 .Each stellar mass-binned mean metallicity from these two works is in reasonably good agreement with our best-fit MZR.However, both works report a smaller slope in the observed MZR (A = 0.17±0.03fromCurti et al. 2023a and A = 0.25 ± 0.03 fromNakajima et al. 2023).Morishita et al. (2024) provide 9 additional gas-phase metallicity measurements of galaxies with redshifts 3 < z < 9 made via the direct method.A comparison between the bestfit model presented in this work and recent high-z JWST observations of the MZR is shown in Figure2.Additionally,Curti et al. (2023a) report a significant difference in normalization between the high-redshift MZR and the
Figure 4 .
Figure 4. Comparison between the MZR at z = 5 from the high-redshift suite of FIRE-2 simulations analyzed in this paper and other FIRE work on the MZR.Here, temperature and radial cuts are made on the high-redshift FIRE-2 galaxies to match those made byMa et al. (2016) in analyzing FIRE-1 galaxies.Our z = 5 fit is in good agreement with the z = 3 result from FIREbox (run with the FIRE-2 code) presented byBassini et al. (2024) (shown in orange), indicating weak evolution of the MZR in FIRE-2 galaxies for z ≳ 3. The fit from FIRE-1 simulations (shown in purple) remains consistently 0.3-0.4dex below this work at z = 5 and 0.15-0.4dex below the FIREbox result at z = 3.The agreement between FIREbox and FIRE-1 at z = 0 demonstrates that this offset between FIRE-1 and FIRE-2 runs is only present at higher redshift.
Figure 5 .
Figure5.The gas fractions, fgas = Mgas/M⋆ (left) and fgas = Mgas/(M⋆ + Mgas) (right), in different stellar mass bins for z = 5 (blue), z = 7 (green), z = 9 (yellow), and z = 12 (red).Individual galaxies are shown by dots.The median gas fractions for stellar mass bins containing at least five galaxies are shown by solid squares with error bars representing the 16th and 84th percentiles.Empty squares represent the median gas fractions of stellar mass bins that contain fewer than 5 galaxies.Solid lines present best fits to the median gas fractions.At some stellar masses, median gas fractions show large fluctuations between different redshifts.The systematic redshift evolution is especially clear for the definition on the left.
Figure6.The gas-phase MZR in our simulations at z = 8 calculated using different weighting schemes and cuts on the gas included.The solid lines show our best fits to the medians, allowing for redshift evolution of the slope and normalization.The median metallicities of stellar mass bins are shown by squares with error bars representing the 16th and 84th percentiles.Note that, while the medians are shown at z = 8 only, the fits are calculated across all redshifts analyzed (z = 5 − 12), and thus, may be offset from the medians shown.Our fiducial method, where metallicity is calculated as the mass-weighted mean metallicity of all gas particles within 0.2Rvir, is shown in black.The method where metallicities are calculated as the SFR-weighted mean metallicity of all gas particles within 0.2Rvir, is shown in blue.The mass-weighted method only considering gas particles within 0.1Rvir is shown in red.The mass-weighted method including a temperature cut (Tgas < 10 4 K) and only considering gas particles within 0.1Rvir is shown in green.Different weighting schemes and cuts on the gas do not appear to significantly change our resulting MZR with all models differing from our fiducial model by ≲ 0.1 dex across our stellar mass range. | 10,521 | 2024-03-13T00:00:00.000 | [
"Physics"
] |
Dense U-Net for Limited Angle Tomography of Sound Pressure Fields
: Tomographic reconstruction allows for the recovery of 3D information from 2D projection data. This commonly requires a full angular scan of the specimen. Angular restrictions that exist, especially in technical processes, result in reconstruction artifacts and unknown systematic measurement errors. We investigate the use of neural networks for extrapolating the missing projection data from holographic sound pressure measurements. A bias flow liner was studied for active sound dampening in aviation. We employed a dense U-Net trained on synthetic data and compared reconstructions of simulated and measured data with and without extrapolation. In both cases, the neural network based approach decreases the mean and maximum measurement deviations by a factor of two. These findings can enable quantitative measurements in other applications suffering from limited angular access as well.
Introduction
Tomographic reconstruction of projection fields has been used in many areas for decades. It is an established technique for 3D imaging and measurement in medicine [1][2][3][4][5], geoscience [6], for studying combustion [7] or materials science [8]. Another important field is in acoustic research, where it is becoming increasingly important with modern technology to make complex acoustic phenomena visible for the first time [9].
In this field, especially the noninvasive volumetric measurement of sound pressure fields, there is a suitable application for tomographic reconstructions [10]. For investigations of local phenomena, for instance in sound dampers such as the bias-flow-liners in aircraft turbines, it is necessary to perform measurements within a flow channel in order to mimic the conditions of real applications [11,12]. However, in a flow channel, it is not possible to measure parallel to the flow direction without disturbing the flow. This results in a limitation of the angular scan range available for the measurement. Often, the angular scan range is limited from 180 • at ideal conditions to about 100 • due to technical facilities [13].
Tomographic reconstruction algorithms such as filtered back projection require a measurement from different scan angle positions in the range of 180 • [9]. Limitations of the angular scanning range result in artifacts such as diagonal lines in the local reconstruction field and disappearance of sharp edges outside the scan angle range occurs [1][2][3]14,15]. Furthermore, an unknown systematic measurement error occurs, i.e., the absolute values of the reconstruction are strongly distorted [16,17]. To improve the reconstruction result, additional prior knowledge about the measurement object must be integrated into the reconstruction process. This could be iterative approaches such as algebraic reconstruction technique [18] or total variation [18]. They need a lot of computing power but can use estimations as initial value for the reconstruction. Errors in initial estimation can distort the complete reconstruction.
The hypothesis in this publication is that neural networks can be used to extrapolate data from missing scan angles to significantly improve reconstruction results. This enables the use of standard algorithms for the tomographic reconstruction. There are several existing deep learning based approaches employing neural networks to directly reconstruct local data. They replace the established standard reconstruction methods, such as filtered back projection, and achieve good results, for instance with sparse angle tomography [19][20][21][22][23]. Synthetic data are used for training and validation. Finally, reconstruction improvements of real limited angle sound pressure measurements are presented.
Sound Pressure Measurements
A standard method for quantitative sound pressure measurements is the detection by condenser microphones [24]. This allows for point-like measurement. The extension to a measurement field of one, two or three dimensions is possible with a matrix arrangement of microphones. By using beam forming, a two-dimensional arrangement of microphones is sufficient for a three-dimensional reconstruction [25]. However, measurements with microphones are invasive and distort the sound pressure field [26]. Furthermore, microphones typically feature diameters of several millimeters and have a direction characteristic, limiting the spatial resolution to several millimeters and distorting measurements based on relative sound source position [27]. An alternative, noninvasive measurement principle is the measurement of the transfer function between two microphones. This function then describes the acoustic behavior between the two measurement points but offers no spatial resolution [28].
A noninvasive approach based on laser interferometric vibrometry (LIV) can be used for sound pressure measurement. The LIV measures the integral, sound-induced refractive index variation along the optical path [29]. If the measurement is also performed from different angles, the reconstruction of the local sound field for a point in the measurement volume can be calculated by tomographic reconstruction algorithms. This principle can be extended to a measurement plane or volume by using a camera as detector. At highspeed camera-based laser interferometric vibrometers (CLIV), each camera pixel performs a simultaneous line integral measurement [30].
CLIV
When a laser beam passes through an acoustically active volume, the phase of the light wave changes as a function of the sound wave due to the sound-induced change of the refractive index. This is called the optoacoustic effect [9].
Consider a simple LIV set-up, a plane wave sound source and a photo detector. The laser beam travels through a known constant distance l. Let the plane sound wave excite the medium along l. Over time the sound wave creates areas of higher density and areas of lower density in dependence of the current spatial and temporal sound pressure wave propagation. This oscillation is given by the frequency and amplitude of the sound wave. The changes in the light intensity are detected as a phase shift [9]. The intensity signal I(t) of the laser light can be described as: depending on the modulation depth V and the phase shift ∆Θ. The intensity signal oscillates with the instant frequency where f B represents the carrier frequency and ∆ f (t) the frequency shift. A fluctuation of the sound field results in a fluctuation of the frequency shift in dependency of the laser wavelength λ Laser and the time derivativeL = dL/dt of the optical path length i.e., the line integral over the refractive index n along the laser beam [10]. The Gladstone-Dale theorem [30] provides the mathematical link between the refractive index n(z, t) and the density ρ of the medium: with the material dependent Gladstone-Dale constant G. Using Equations (2)-(5) and the assumption of an adiabatic change of state, the one-dimensional acousto-optical relationship between the instant frequency f I (t) and the time derivativeṗ(z, t) = dp(z, t)/dt of the sound pressure along the optical path can be established: where κ is the adiabatic exponent, n 0 and p 0 are the refractive index and the pressure under reference conditions, respectively. This equation is valid for a projection line in the measurement volume. Thus, the integral sound pressure can be estimated from the detected intensity signals instant frequency f I (t). In order to detect negative and positive frequency shifts, i.e., heterodyne detection, and to define a measurement range for f I (t), the laser light has to be modulated with a carrier frequency f B [7]. This can be achieved by using an acousto-optical modulator (AOM). Finally, the photo detector has to be fast enough to capture the high-frequency fluctuations f B + ∆ f . The extension of this technique is the matrix measurement of many projections with a camera. Figure 1 shows the camera-based laser interferometer. This enables a threedimensional tomographic measurement of local sound pressure fields with a spatial resolution of 31.5 µm at a sampling rate of 120 kHz.
Tomographic Reconstruction
The optical measurement of sound pressure can only be done in projection. This means that a two-dimensional tomographic reconstruction must be performed. The solution of this inverse problem can be done using the filtered back projection.
The tomographic reconstruction is performed as follows: Let the measurement of a projection be a vector to R p (x , α, t) in a transformed plane (x , y ) from an angular perspective of the rotation angle α as shown in Figure 2. Using a total number of N x scan lines and N α angular scans these vectors give the sinogram with N x · N α dimensions (see Figure 3). Mathematically, a sinogram is the forward operation of the Radon transformation. Thus, a solution of the inverse problem by applying the inverse Radon transformation is needed for the reconstruction of the sound field with the filter function h(x ). Due to the integral operation of the inverse transformation, low-frequency components are weighted higher than high-frequency components. With the filter function this weighting error can be corrected during the reconstruction. This leads to a sharper image and ensures an absolute value reconstruction [9]. Different types of filters can be considered as a filter function. For example, there is the Ram-Lak filter with a linear increase of the amplitude over the frequency, the well-known Hamming filter or the Hanning filter [31]. While the Ram-Lak filter is sensitive to noise, it has provided the best absolute value reconstruction results, which was tested on simulated data. Thus, the Ram-Lak filter will be used for all reconstructions on simulated and measured data in the following investigations for a fair comparison. There are prerequisites in the measurement setup to reconstruct a local field. For example, the field to be measured must be scanned from different directions in an angular range of 180 • and high angular resolution [10]. The angle resolution depends on the crucial spatial resolution of the local field. Sparse angular sampling can result in aliasing [10]. Furthermore, a stationary and spatially closed field is required [9]. If these prerequisites are violated, strong artifact formation and additional information loss of the local absolute values will occur. A high angular limitation results in high artifact formation also known as missing cone artifacts [32]. This is caused by an incorrect frequency weighting, as well as the systematic reconstruction deviation of the absolute amplitude, caused by numeric instabilities, compared to the model field [17].
Deep Learning
Neural networks differ from analytical approaches in one fundamental aspect. Analytical approaches use a deterministic mathematical model to represent the transformation equation from input to output. On the other hand, neural networks are modeled by an adaptive mathematical equation. This equation consists of an input vector X, a weight vector W and a bias b [16]: Equation (8) represents a neuron k, which exhibits a nonlinear activation function f . This again produces a nonlinear transformation of the input signal. Weights are parameters updated by error propagation, also known as backpropagation. Neurons are usually grouped into layers, which can be distinguished into input, hidden and output layers. The connection model between layers can vary. So, comprehensible task processing can be achieved. The most common layer topologies are perceptron layers as well as convolutional layers. This last type is particularly efficient for tasks regarding image processing [19,33,34]. The type of neural network is determined by its architecture. This refers to the size, type and connection model that exists between the layers. The implementation of convolutional layers in neural networks has become a crucial factor for image processing tasks in recent years [19,35]. An example of this, is the Unet in Figure 4, top. Later, the Dense Unet has shown improvement ( Figure 4, overall) by replacing convolutional layers with denseblocks [19]. Here, a denseblock consists of a set of convolutional layers concatenated using a skip-layer strategy (see Figure 4, bottom). Similar to a convolutional layer, dense blocks have their own hyperparameters, such as the growth rate k and the number of repetitions l. The growth rate k refers to the feature layers that are calculated from the previous step. Then, l indicates how many times k feature layers will be concatenated. With the use of dense blocks in every convolutional step, there are more connections between every neuron in the step. Thus, the overfitting can be reduced [36]. The disadvantage is a larger number of hidden layers, which requires more computing power and increases the training time.
Experimental Setup and Measurement Execution
The aim was to enable a complete reconstruction of the local sound pressure under a limited incomplete angle. For this purpose, a full angle projection measurement of the sound field was performed with the CLIV system proposed in [9] as a reference. The sound field to be investigated was limited by a translucent PMMA cylinder of 100 mm diameter to allow a angular scan range of 180 • and shield the reference beam of the interferometer from the propagating sound wave in the measurement object. An approximately plane sound wave was generated by a speaker mounted to the top end of the tube. Thus, the sound wave propagates perpendicular to a hexagonal array of 169 Helmholtz resonators at the bottom of the cylinder (see Figure 5). Each Helmholtz resonator is cylindrical, has a resonator volume of 1.41 cm 3 and a hole aperture of 2 mm diameter (see Figure 6). This results in a resonator frequency of 1479 Hz. The measurement of the local sound field was performed at the resonator frequency. The maximum effective lateral range of the CLIV system is 25 mm with a spatial resolution of 31.5 µm. In order to perform a measurement of the entire cylinder, the measurement was divided into seven, 2 cm wide lateral areas, which were stitched in post-processing. The measurement procedure starts with setting the lateral position. After that follows the measurement of each scan angle with a resolution of 0.1 • in a angular scan range of 180 • . When a scan is completed, the next lateral position is scanned.
The measuring time per single angle scan is 1s. In addition, there is a camera hardwarerelated memory cycle of approx. 2 min, which is decisive for the total measurement time per scan point. The measurement volume is located 31.5 µm above the resonator surface. This allows for the largest possible local sound pressure changes and avoids reflections from the resonator surface that affect the measurement result. The measuring system is capable of recording a pixel area of 64 pixels in height with the necessary frequency. However, only a reduced height of 16 px was used in order to speed up the measuring process.
Methodology
To investigate the performance of the neural network, a projection measurement of the sound field above the hexagonal measurement object was performed using CLIV. The reconstruction of the fully measured angular scan range will later serve as a validation object.
The goal is to provide a complete reconstruction of the local sound pressure under constrained angle. Therefore, an additional computational step was introduced with the neural network to extrapolate the missing information. The complete process is described in Figure 7. For validation and comparison of the reconstruction of the sound pressure data, angular information was removed from the full measurement to create a constrained angle sinogram. These data were fed into the neural network. Due to the complex structure of the network, there are computational limitations. The pixel resolution of the projection data at input as well as output is limited to 256 angle measurements at 1024 individual projection lines per lateral plane. Hence, the remaining spatial resolution after the extrapolation process is 97.6 µm. Only the missing angle information is reconstructed by the neural network. Using the original angle constrained data, the complete sinogram can be assembled. However, the extrapolated part of the sinogram has a different angular resolution than the measured part. This means that resampling of the full sinogram must be performed.
Neuronal Network Training
In order to enable extrapolation, the neural network must be trained. For this purpose, a training data set was created from 60,000 synthetically generated sound pressure fields. These sound pressure fields were modeled after a tomographic measurement of a Kundt's tube at an angular access of 180 • (see Figure 5). The measured model is a hexagonal array of Helmholtz resonators with a circular resonator aperture. The size of the resonator holes (r x ; r y ), the number of resonators n holes , the distance between the resonators (x dist ; y dist ), the amplitude above the resonators (a hole ) and the orientation of the start scan angle (α) were varied. This is superimposed by a static sound pressure throughout the cylinder (offset). In the region of the apertures there is a Gaussian reduction of the sound pressure. The parameter range of the randomly varied quantities is listed in Table 1. By randomly choosing the above parameters, an asymmetric pattern is guaranteed for each image. Only in case of such asymmetric modeling an angular scan of 180 • is necessary. Extrapolation of the neural network by repetition and mirroring of already existing areas of the input data can thus be excluded. For training, 80% of the data set was used. The remaining data were used as a validation data set. A total of 24 epochs were trained to converge the loss function to a acceptable remaining loss and thus, the extrapolation results can not be substantially improved. Additionally, the problem of overfitting is minimal. The overfitting and thus the difference of the loss function between training data set and validation data set after 24 epochs was less than 0.3%.
The training results are shown in Figure 8. Good reconstruction results have been obtained. Mean squared error (MSE) and structural similarity (SSIM) [37] were used as comparison values. The MSE is calculated according to the following formula: Starting with the 2nd epoch, an MSE lower than 0.1% is reached. In epoch 7, there is a sudden increase of the error to 0.27% in the validation data set. This can be explained by the NADAM optimization algorithm used. The algorithm does not run into a local minimum, but tries to reach the global minimum. Thus, due to abrupt changes in the optimized weights, the error may increase abruptly [38]. After 24 epochs, an error of the MSE of less than 0.005% was reached. The impact of an overfitting effect is negligible after 24 epochs. The same result can be seen for SSIM, which reaches a value of 99.4% after 24 epochs of training. The training took 11 h on a high performance workstation with an Nvidia TITAN RTX GPU. After training the extrapolation process took in summary 35 s for one extrapolation process. This time is mainly limited by the loading and saving process of the data.
Synthetic Data
A synthetic sound pressure field was generated according to the parameters presented in Table 1 (2. line). The first row of Figure 9 shows the sinograms with full scanning angle (a), limited scanning angle (LA) (b) and the neuronal network based extrapolation (c). The second row shows corresponding local data computed by filtered back projection (d-f). The bottom row describes the relative error referred to the full angle filtered back projection tomographic reconstruction. The local error for limited angle and neural network reconstruction are depicted in (h,i). Boxplots at (g) show the corresponding error distribution.
The modeled constant sound pressure field in the cylinder (Figure 9i)) results in a quadratic function in the sinogram over the projection axis. In the angular axis, the function is constant. Each hole produces a reduction in the total amplitude on each measurement in the angular scan range. If several resonators are in succession in the direction of projection, the reduction in amplitude is summed. The hole position has a sinusoidal curve over the scanned angle axis. These correlations were learned without the possibility of mirroring or repeating certain parts of the already given range by training the neural network.
It can be seen that a reconstruction using filtered back projection of the measurement data with full angular scan range leads to an artifact-free reconstruction result, only higher noise emissions are visible. On the contrary, when reconstructing with a limited angular scan range, clear artifacts can be seen. Individual resonators have an elliptical shape and become blurred. This leads to an inseparability of individual resonators if they are close to each other. Furthermore, an incorrect absolute value reconstruction occurs, which deviates both downwards and upwards. Locally, especially between neighboring resonators, excessive sound pressure values are reconstructed and the values within a hole are calculated as too low. The sinogram extrapolated by means of neural network shows a considerable qualitative improvement in the reconstruction. The resonators regain their circular shape and are separable. However, a low-pass effect can be seen at the hole edges in the extrapolated angular regions.
For a quantitative comparison between limited angle reconstruction and neural network reconstruction, the relative local deviation towards the full angle reconstruction was calculated and is presented in Figure 9h,i. In the comparison of the two reconstructions, a clear reduction of the relative error for the neural network reconstruction can be seen. Large areas of the limited angle reconstruction have an error above 10% whereas the error of the neural network reconstruction is larger than 10% only in the area of large hole clusters. In the region of hole accumulations there are high spatial frequencies due to edges. However, when extrapolating the missing angular regions, there is a spatial low-pass effect. Thus, the high spatial frequencies are missing in the sinogram. The filtered back projection additionally attenuates low spatial frequencies. This amplifies the noise in the entire loose sound field. Especially in the area of high spatial frequencies, this leads to an incorrect reconstruction.
For better comparability of the overall results, the distribution of the magnitude relative error over the middle part of the sound field in the area (35 mm × 35 mm) was calculated and is shown in Figure 9g. This avoids a distortion of the comparison due to edge effects.
It can be seen that the error of the reconstruction using neural network was significantly reduced from mean value 9.8% to 4.6% compared to the reconstruction without extrapolation. Especially the maximum error could be reduced by more than 10%. With this approach an enhancement of local reconstruction results by a factor of 2 can be reached at an limited angular scan range of [40, 140] • . In a further step, the impact of complexity in the synthetically processed pressure fields was investigated. Therefore, seven different pressure fields with different number of resonators were generated. After the neural network extrapolation and tomographic reconstruction, the relative error distribution to the full angle reconstruction is shown in Figure 10. For a better understanding, the used model field is shown above every error distribution plot. To give a better overview, the number of trained data sets with the same number of holes, meaning the same complexity distribution is shown as well.
For a very simple model with only few resonators, the neural network reconstruction distribution error is a factor of 2 higher than reconstructions with higher complexity starting at nine holes. We assume that this effect is caused by a small number of trained data sets with a corresponding number of holes, because the focus of the training data is on a higher number of holes. At an average number of holes of 9 to 80 resonators, the neural network reconstruction distribution errors are very low and are below 4.5%. For 122 resonators and above, the neural network reconstruction distribution error increases again and is at a comparable level to that at low complexity. We assume that the resolution of the sinogram is to low in relation to its complexity. This results in averaging errors within a pixel and consequently in reconstruction errors compared to limited angle filtered back projection. The neural network always shows improvement over the limited angle reconstruction, even if only a limited number of training sets existed. However, the increasing uncertainty for lowly represented training sets indicates that network training always needs to be performed on structures similar to the specimen.
Measurement Data
For validation the neural network, trained on synthetic data, was employed on experimental data. The measurements, shown here, were performed at the resonance frequency of the Helmholtz resonators at f R = 1479 Hz. Some resonators were deactivated by filling resonator volume with liquid, to avoid a hexagonal rotational symmetry.
The first row of Figure 11 shows In Figure 11, artifacts due to stitching the measurements can be seen. This results in horizontal lines in the sinograms (a-c) and circles in the reconstructions (d-f). Furthermore, strong distortions for radii above 45 mm due to the optical aberrations induced from the glass cylinder are apparent.
In the full angle reconstruction (d), a reduction of the sound pressure by 0.4 Pa can be seen above each active resonator. The resonators have a circular contour. The deactivated resonators can also be seen clearly. There is no characteristic minimum in the area above the resonator. That means this area has no local acoustic damping effect. The limited angle reconstruction (e) shows similar artifacts to the reconstruction of synthetic data, additionally to the measurement artifacts, mentioned in the full angle reconstruction. Line artifacts, as well as amplitude errors, are present. The contours of the resonators are elliptical. The amplitude over the resonators reconstructed with limited angle deviates by 0.23 Pa compared to the full angle reconstruction.
The sinogram of the neural network extrapolated data (c) shows a hard transition from the extrapolated angular scan range. There is a spatial low-pass effect in the extrapolation. However, this reconstruction (f) also shows a significant improvement compared to the limited angle reconstruction. The contour of the resonator is round and artifacts are strongly attenuated. The amplitude over the resonators reconstructed with neural network extrapolation deviates by 0.05 Pa compared to the full angle reconstruction.
The comparison of the limited angle and neural network with the full angle reconstruction shows that, overall, a significant reduction in relative error distribution can be achieved with the neural network extrapolated sinogram. Especially, in a close range around the Plexiglas wall an error reduction with neural network reconstruction is noticeable. Local errors surrounding the position of local resonators, are present in Figure 11h. This means every hole has a significant local sound pressure error compared to the full angle reconstruction. In neural network comparison (i), there is no error surrounding the position of local resonators. Thus, it can be assumed that the reconstruction of a single hole is almost similar compared to the full angle reconstruction.
The associated distribution error is presented in Figure 11g. The mean error can be reduced from 3.5% to 2.6% and maximum error can be reduced from 22.13% to 11.34%. The reconstruction improvement is lower than the synthetic model. We assume this is caused by measurement artifacts and deviation of the model used for physical boundary conditions.
Summary and Outlook
Limited angle tomography is important for many applications, especially at technical processes but suffers from artifacts and unknown measurement deviations. The hypothesis of the approach of extrapolation of non-scannable angular regions through an neural network with only synthetic data sets used for training can be confirmed. We show that we can reduce the MSE by a factor of 2.24 and the maximum error by a factor of 2.22 on synthetic data and reduce the MSE by a factor of 1.93 and the maximum error by 1.95 when evaluating measurement data. The dense Unet was applied to real measurement data acquired with a high-speed camera-based laser interferometric vibrometer on a bias flow liner model. The measurements validate the approach, demonstrating the big potential for this technology to generate a paradigm shift in limited angle tomography. Using the neural network for extrapolation only, is a simple extension and easier to implement compared to using neural networks for the complete tomographic reconstruction. Established signal processing only needs to be extended with this technique and has not to be changed completely.
Further investigation is required into the translation of this technique to other specimen and structures especially regarding their sparseness and complexity. In the next steps, the approach can be further improved. We want to use 3D data for extrapolation. By doing so, the measurement artifacts could be minimized, resulting in more appropriate information to feed in the neural network for extrapolation. In addition, more complex structures have to be trained. Finally the approach will be applied at real flow channel measurements.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to its large amount. It is stored locally and will be made accessible via cloud sharing after request. | 6,612.2 | 2021-05-17T00:00:00.000 | [
"Physics"
] |
Relationship between Chinese and International Crude Oil Prices: A VEC-TARCH Approach
Many studies focus on the impact of international crude oil price volatility on various economic variables in Chinawith a hypothesis that international crude oil price affected Chinese crude oil price first and then other economic variables. However, there has been little research to explore whether or not international and Chinese oil market are integrated. This study aims to investigate the relationship between Chinese and international crude oil prices by VAR and VEC-TARCHmodels. It was found that the two crude oil markets have been integrated gradually. But the impact of external shocks on the Chinese crude oil market was stronger and the Chinese crude oil price was sensitive to changes in international crude oil price, implying that the centrally controlled oil market in China is less capable of coping with external risk. In addition, the volatility of both Chinese and international crude oil prices was mainly transmitted by prior fluctuation forecast and the impact of external shocks was limited, demonstrating that in both cases volatility would disappear rather slowly. Furthermore, Chinese and international crude oil markets have established a stable relationship. When the direction of external shocks on the two variables’ respective stochastic term was consistent, the impact on the two variables’ joint volatility was aggravated and vice versa.
Introduction
There is an extensive body of literature analyzing the impact of oil price fluctuations on the Chinese economy [1].Positive oil price shocks had negative effects on Chinese macroeconomics [2].The global oil price affected both China's economic growth and inflation whereas China's economic activity failed to affect the world oil price [3].Besides, oil price increase negatively affected output and investment [4].China's imports and exports, which are important for the Chinese economy, are also correlated with the oil price [5,6].
The impact level of oil price shocks on Chinese industries may be different.Negative oil price shocks had strong influences on Chinese grains, metals, petrochemicals, and oil fats [7].On the other hand, a positive oil price shock leads to significant profit increases for the Chinese petroleum and natural gas extraction industry but negative influence on petroleum processing industry [8].Moreover, the oil price volatility depresses the oil company index and may increase the speculations in the mining index and petrochemicals index in China [9].In contrast to metals and grains, petrochemicals and oil fats indices responded to global oil price shocks [10].
Oil price shocks can affect stock returns in China [11][12][13], and the correlations between oil price shocks and stock returns are systematically time-varying [14].There is a cointegration relationship between oil prices and Chinese stock prices at the disaggregated sector level and there are some structural breaks in the interaction between them [15].Higher oil prices may cause lower stock prices whereas positive shocks to oil market-specific demand resulted in higher stock price [16].In addition, the impact of international oil prices on emerging economies' stock markets is different [17].China's stock returns are correlated only with the expected volatility in world oil prices [18].
Parts of the literature pay attention to the relationship between oil prices and various price indices.Oil price shocks lead to a contemporaneous increase in consumer and producer price indices [19].However, the impact of the WTI (West Texas Intermediate) crude oil price shocks on import 2 Mathematical Problems in Engineering price index, producer price index, and retail price index weakens gradually [20].
On the other hand, the increasing oil demand in China may exert an impact on the international crude oil price [21][22][23][24].The world oil price rise could partly contribute to Chinese oil demand [25].The demand from emerging markets such as China has become a significant factor in the world oil pricing system [26].The cumulative impact of real G3 (USA, Eurozone, and Japan) real M2 (broad money) shocks on real oil prices is small while the impact of China's real M2 on the real price of crude oil is large and statistically significant [27].However, China's crude oil imports do not significantly affect Brent price changes and there is no solid evidence to illustrate that dramatic fluctuations of international oil price have effect on China's crude oil imports [28].
Why and how did international oil price influence the Chinese economy and various macroeconomic variables?Obviously, the most sensitive economic indicator to international oil price change should be the domestic crude oil price.If the Chinese crude oil market was integrated with the international crude oil market, other macroeconomic variables in China would be impacted by the international crude oil price through the domestic crude oil price.In turn, if it were isolated, it would be impossible to set up a linkage between international crude oil price and domestic macroeconomic indicators.That is, international oil price shocks usually transmit to the Chinese domestic oil price first and then impact other macroeconomic indicators subsequently.Thus, it is important to explore the relationship between Chinese and international crude oil prices.
The contribution of this paper is that we can compare the commodity price fluctuation risk between market-oriented and partially regulated economies in the perspective of crude oil markets.As we know, the oil markets in Western countries such as the USA or Great Britain are highly marketoriented while the Chinese are chiefly controlled by the central government, specifically the National Development and Reform Commission.Since 1998, the Chinese government has made attempts to reform its crude oil pricing mechanism [29,30], which was demonstrated in the Chinese crude oil and refined oil price reform plan released by the National Development and Reform Commission on June 3, 1998.The final goal of the oil price reform was to gradually integrate its crude oil market with the international market.However, the crude oil industry, especially the upper-stream oil industry, is still controlled by a few oligopolies.Besides, unlike other highly market-oriented commodities, crude oil pricing rights belong to the central government.The crude oil market in China is full of bureaucratic control.On the other hand, whether initiatively or passively, the Chinese crude oil market has become gradually connected with the world oil market.Notably, crude oil is an important industrial raw material and the volatility of its price will influence economic growth.As the Chinese economy develops rapidly, the demand for crude oil in the country has increased rapidly as well.A remarkable fact is that China's domestic crude oil's external dependence increases gradually.As the China Statistical Yearbook showed, the crude oil import in 2014 was roughly 310 million tons, increasing by 9.4% from the previous year.China's crude oil's external dependence rate has grown gradually in recent years; in 2007, 2008, 2009, 2010, 2011, 2012, 2013, and 2014, the external dependence rates were 47.2%, 49.8%, 52%, 54.8%, 56.5%, 56.8%, 57.3%, and 59.5%, respectively.Compared to the highly market-oriented oil market, whether the government-controlled crude oil market is more capable of resisting the market risk is a meaningful issue, and the features of the Chinese crude oil market give us an appropriate chance to explore the problem.
The remainder of this paper is organized as follows.Section 2 briefly reviews the evolvement of the Chinese crude oil price.Section 3 introduces the methodology and the data.Section 4 presents the results for the Chinese and international crude oil market.
Evolution of the Chinese Crude Oil Price Mechanism
During the era of planned economy in China, namely, before 1978, the government set almost every commodity's price, including crude oil price.As a result, China's crude oil market was completely isolated from the international crude oil market.After 1981, keeping pace with the central government's reform of the planned economic system, two pricing systems appeared concerning the crude oil price, which was known as the double pricing system (DPS).One kind of prices was determined by the central government while the other was spontaneously adjusted by the demand and supply of crude oil.However, the centrally controlled price still dominated the market until 1993 when Chinese crude oil was self-sufficient.As a result, during this period, the Chinese crude oil price was still isolated from the international crude oil market.After 1993, with the rapid economic development in China, the demand for crude oil increased greatly and domestic crude oil supply could not fulfill this demand.From this time, China has changed from a crude oil exporting country to a net importer in the international crude oil market.Consequently, China's economic development was gradually affected by the volatility of the international crude oil price.By 1998, the Chinese government explicitly declared the reform of the crude oil pricing mechanism aiming to integrate its oil market with the international market.The domestic crude oil settlement price in China was then composed of a benchmark price and premium.The government, based on Singapore FOB price plus freight, insurance, and tariff, determined the former and the buyers and sellers negotiated the latter.The standard retail price in various regions was announced by the central government and the two oil monopoly giants, known as Sinopec and PetroChina, could set up a retail price within a 5% floating range based on the standard retail price in 1998.This floating range increased to 8% in 2001.However, the standard retail price has been mastered by government.
Methodology and Data
3.1.VAR Model.This study investigates empirically the interrelationship between Chinese and international crude oil prices in the framework of Vector Autoregressive (VAR) and Multivariate General Autoregressive Conditional Heteroskedasticity (MGARCH) models, which are both widely used in the economic literature.The VAR model was firstly applied to macroeconomics to explore the interrelationship of economic variables [31].A basic -lag VAR model including endogenous variables (in this study equals 2) was defined as follows: where 1 and 2 refer to Chinese and international crude oil prices, respectively, and is the model order. 1 and 2 refer to the corresponding regression equations' stochastic terms with ( ) = 0. 1 and 2 are both white noise series with ( and ( 1 2 ) = .Before we start to utilize the VAR model to investigate the relationship between the crude oil prices in China and in the world, one of the premises is that they do impact each other; that is to say, both of them should be treated as endogenous variables of each other.The Granger causality model [32], which tests causal relationship between time series variables, could be employed to explore whether they affected each other statistically.Essentially, the Granger causality model is a nonstructured VAR model.
The VAR model is a dynamic and endogenous system, so the impact of external shocks on all variables in the system is dynamic as well.As some of the parameters in the VAR model are usually not statistically significant, the impulse response function and variance decomposition (together called "innovation accounting") are often adopted to further analyze an external shock's dynamic impact.A shock to the th variable not only directly affects the th variable but is also transmitted to all other endogenous variables through the dynamic (lag) structure of the VAR model.An impulse response function traces the effect of a one-time shock to one of the innovations on current and future values of the endogenous variables 1 and 2. It can be written as ( + | )/ , where ≥ 0 and denotes 1 or 2. Obviously, an impulse response function reflects the impact of a shock in the th variable on the expected value of + [33].
Variance decomposition can measure the contribution of various structural shocks.The moving average forms of (1) read as follows: Assuming that there is no serial correlation between 1 and 2 in the same period, the variances of ( 2) are given by var var 2 1 and 2 2 are the variances of the stochastic terms in (1).Evidently, the right two terms' respective proportion to the left term in (3) gives the corresponding variance contribution (the same applies to (4)).
VEC-TARCH Model.
When setting up a regression model of time series variables, one of the key assumptions is homoscedasticity of the stochastic error term.However, economic time series usually exhibit the feature of volatility clustering, which means that the variance of the stochastic term is variable.In such case, a MGARCH model may be more suitable.There are various MGARCH models including the VEC [34], CCC [35], BEKK [36], and DCC models [37] to describe the conditional variance-covariance matrix in a MGARCH model.The conditional variance-covariance matrix should satisfy the positive definiteness condition.In addition, the number of parameters to be estimated increases rapidly as the dimension of the variables increases.Fortunately, in the previous study there are only two variables, so we just adopt the common VEC model.Actually, the VEC model is based on the univariate GARCH model [38] and is its direct generalization: We proceed by formulating a VEC model for the residuals 1 and 2 in (1) for the Chinese and international crude oil prices.The VEC model, shown in (5), is usually used to simulate two variables.In (5), the conditional variancecovariance matrix is a linear function of the lagged squared errors and the lagged values of the elements of itself. 0 is a constant coefficient matrix whereas and are coefficient matrices of lagged squared errors and lagged values of , respectively.Both matrices and are symmetric so as to reduce the number of estimated parameters [34].⊗ is Hadamard product operator, representing the corresponding term's multiplication.Since a GARCH(1, 1) model can already describe a vast number of time series variables, the values of both and are restricted to one in this study.In addition, when the coefficients of the VEC model were estimated, the matrix 0 is limited to being a scalar.In order to reflect the coefficients' economic meanings clearly in the matrices , 0 , , and , (5) is rewritten in the form of a set of equations as follows: The second and third term on the right-hand side in ( 5), (6), and ( 7) are referred to as ARCH and GARCH terms, respectively.As both and are symmetric matrices, 12 = 21 and 12 = 21 .Moreover, as 0 is a scalar, ℎ 12 = ℎ 21 .Equations ( 6) and ( 7) are the corresponding GARCH(1, 1) models of the Chinese and international crude oil prices. 11 and 11 measure the impact of external shocks and China's domestic crude oil price's conditional variance in the last period on the current price volatility, while 22 and 22 measure that of the international crude oil price.Equation ( 8) is the GARCH(1, 1) model of the conditional covariance for the Chinese and international crude oil prices. 12 and 12 measure the impact of the joint stochastic term's shocks and the conditional covariance of Chinese and international crude oil prices in last period on the current joint volatility.The squared errors 2 follow a heteroskedastic ARMA(1, 1) process.The autoregressive root which governs the persistence of volatility shocks is the sum of and .In many applications, this root is very close to 1 so that shocks die out rather slowly.
A VEC-TARCH model is formulated by the combination of TARCH and VEC models.The TARCH model serves to explore whether positive and negative news' impacts on current volatility are symmetric or not [39] and reads as follows: where −1 is a dummy variable.When −1 < 0, −1 = 1; that is, negative external news influences the current conditional variance as ∑ =1 + , whereas when −1 > 0, −1 = 0; that is, the influence of positive external news in the last period is ∑ =1 .Obviously, when < 0, the impact of positive news is greater than that of negative news and vice versa.
The VEC-TARCH model adds another term based on (5), which is similar to the second term on the right-hand side of (9).Here, −1 is a virtual variable matrix.When the corresponding stochastic term or covariance in the last period is negative, equals one; else it equals zero.The elements of the coefficient matrix 1 reflect the asymmetric impact of positive and negative news on the current volatility.
Data.
In this study, the crude oil price in China was represented by the Daqing crude oil spot price, while the international crude oil price was represented by that of West Texas Intermediate (WTI), which is influential to the international crude oil market.The sample period was from January 1991 to October 2011, which includes 250 monthly observations.In terms of oil reserves and annual output, the Daqing oil field is the biggest in China and this is the reason
Empirical Results and Discussion
Many time series are nonstationary, implying that their means are nonconstant.In this case, the results of modeling these nonstationary variables may be inaccurate.Therefore, it is necessary to test whether or not time series are stationary before setting up an empirical model.We carried out an augmented Dickey-Fuller (ADF) test with the null hypothesis that the Chinese or international crude oil price has a unit root.Both ADF tests for the dependent variables 1 and 2 include an intercept term and linear trend with zero lag.Least squares are used to compute the coefficients and statistics of the ADF test equations based on the 248 observations from March 1991 to October 2011.Following Schwarz' information criterion, the results showed that the statistics of the ADF test equations of 1 and 2 were −13.4866 and −10.1360, respectively, which both are larger than the test's critical values of −3.9955 at 1% significance level.This implies that we can reject the null hypothesis that the Chinese or international crude oil price has a unit root; that is, 1 and 2 were stationary at the significance level of 0.01.
Granger Causality Test.
Before setting up an empirical model of Chinese and international crude oil prices, we should identify the interrelationship between the two variables, that is, whether one's impact on the other is endogenous or exogenous.The Granger causality test can help to statistically infer a causal relationship between Chinese and international crude oil prices.The result of the Granger causality test is sensitive to the model orders (maximum lags); so we tested the relationship of the two variables when lagged from one to six.As is shown by the statistics in Table 1, the null hypothesis that "the Chinese crude oil price did not Granger-cause the international crude oil price" was always rejected.At the same time, the null hypothesis that "the international crude oil price was not Granger-cause of the Chinese crude oil price" was rejected as well when lagged from one to six time steps.Taken together, Chinese and international crude oil prices were mutual Granger-causes of each other, illustrating that they did affect each other empirically, which is inconsistent with previous results that China had little impact on the volatility of the international crude oil markets [40].
High external oil dependence in China implies its strong demand for crude oil from the international markets.China has been the second largest oil consuming country since 2003 and in 2008 it became the second largest oil importing country.Roughly 60% of the total oil consumption in China was imported in 2014.With further development of marketization in China, the domestic crude oil price could reflect its supply and demand situation to some extent, which would transmit to the international crude oil market, thereby affecting the international oil price.Moreover, the benchmark price in the Chinese crude oil market was determined based on the international crude oil price (specifically, the Singapore crude oil price from 1998 to 2001 and the average price of the New York, Rotterdam, and Singapore crude oil futures markets after 2001).
VAR Model.
On the basis of the above analysis, we know that the Chinese and international crude oil prices had constituted a mutual influencing endogenous system.Therefore, an unstructured VAR model is employed to further investigate their dynamic relationship.
The first step to set up the VAR model is to determine the lag order.As is shown in Table 2, the optimal lag was three according to the LR, SC, and HQ criteria while it was four following the FPE and AIC information criteria.The lag four statistics for FPE and AIC only slightly differed from the lag three statistics.As a result, the order of the VAR model was chosen as three.
Impulse Response Function.
The economic meaning of the coefficients in the VAR model is not evident and some of them are usually not statistically significant.Furthermore, the VAR model is a dynamic system and an external shock to any endogenous variable will affect all the variables in the model.Therefore, impulse response function and variance decomposition are commonly utilized to investigate the variables' relationship in a VAR model.The impulse response function reflects all the endogenous variables' time-varying response to any variable's external shocks.The dashed line in Figure 1 shows the ±2 standard deviation error band, representing the 5% level of significance.In order to transform the impulse, we choose the Cholesky decomposition method, which utilizes the inverse of the Cholesky factor of the residual covariance matrix to orthogonalize the impulses, imposes an ordering of the variables in the VAR model, and attributes all of the effect of any common component to the variable that comes first in the VAR system.
When the crude oil price in China was affected by an external shock of one unit standard deviation, it would be positively influenced itself for about seven months (Figure 1(a)).The impact on its price from the first to fifth month was positive.In the first month, the price increased drastically by 4.77 dollars while from the second to the fifth month its price rose by 0.79, 0.75, 0.19, and 0.37 dollars, respectively.In the sixth and seventh period, Chinese crude oil prices decreased but the magnitude was small.An external shock to the crude oil price in China positively affected the international crude oil price for roughly eight months.The international crude oil price in the second period increased by 1.96 dollars and fluctuated in the following seven periods, but the magnitudes were small as well (Figure 1(b)).Similarly, when the international crude oil price was affected by an external shock of one unit standard deviation, it grew by about 2.55 dollars in the first month and then fluctuated slowly from the second to the seventh month (Figure 1(c)).From the eighth period on, it was restored to the level before the shocks.An external shock to the international crude oil price positively affected the Chinese crude oil price for approximately four months, increasing by about 2.89, 1.43, 1.08, and 0.99 dollars from the first to the fourth period, respectively (Figure 1(d)).The Chinese crude oil price was sensitive to the international crude oil price shocks and experienced a relatively long increase in response to the rise of the international crude oil price.
In brief, regardless of whether an external shock acted on the Chinese or international crude oil price, the magnitude of the Chinese crude oil price volatility in the first period overweighed that of the international crude oil price, suggesting that the crude oil price in China was more sensitive to external shocks than the international one.A possible explanation is that China's domestic oil market mechanism Mathematical Problems in Engineering needed to be improved further to enhance its buffering capacity to external shocks.Consequently, the partially planned economies' mechanisms in crude oil markets seemed to be more vulnerable to external risk than market-oriented mechanisms.In addition, the reason why the crude oil price in China would experience a relatively long increase after an external shock to the international oil price is that it was determined by the average price of the New York, Rotterdam, and Singapore futures markets of the last month starting from October 2001, which were all closely related to the West Texas Intermediate crude oil price.
Variance Decomposition.
Based on the results of the Granger causality test and the impulse response function, we know that the relationship between the Chinese and international crude oil prices was mutual and dynamic.Through the VAR model, current Chinese crude oil price volatility would affect the Chinese and international crude oil prices in the next period, then influencing the third period's prices, and so forth.Variance decomposition can measure the contribution of structural shocks of all the endogenous variables to any analyzed variable in the VAR model.In other words, it is possible to assess the relative significance of different structural shocks.The results of variance decomposition were estimated by the Cholesky decomposition method and shown in Figure 2. In terms of the international crude oil price, the relative contributions of the Chinese and international crude oil prices were 56.19% and 43.81%, respectively, in the first period (Figure 2(a)).Then the contribution of the Chinese crude oil price gradually increased to 65.08% in the fourth period.Since the fifth period, the two variables' respective contribution to structural shocks from the international crude oil price tended to stabilize.In short, the contribution of the Chinese crude oil price to the international crude oil price was stronger.A possible interpretation of this finding is that the internal stabilization mechanism of the Chinese crude oil price was less stable than that of the international crude oil price.No matter the origin of the initial external shock, the results of the impulse response function showed that the Mathematical Problems in Engineering Chinese crude oil price was affected more deeply.In addition, the impact of the Chinese crude oil price volatility on itself was stronger than that of the international crude oil price.As a result, the international crude oil price's relative influence on itself was smaller.When it came to the Chinese crude oil price, as is shown in Figure 2(b), in the second period, the contributions of the Chinese and international crude oil prices were 85.87% and 14.13%, respectively, and in the third period 86.04% and 13.96%.From the fourth period on, Chinese crude oil price's impact on itself stabilized at about 85%, whereas that of the international crude oil price was approximately 15%.In summary, the Chinese crude oil price's impact on itself was stronger and that of the international crude oil price was limited, suggesting that China's crude oil market needs to be more open and fully competitive to attain the goal of integrating the domestic crude oil market with the international market.When the Chinese crude oil market was affected by an external shock, it should ask for a solution from the international crude oil market.The volatility of both of the Chinese and international crude oil prices was primarily affected by the previous conditional variance forecast.In other words, their current volatility was mainly originated from previous volatility transmission, implying that both oil prices' volatility had the feature of endogeneity.That is to say, both China's and international crude oil markets had a respective endogenous volatility transmission mechanism.The last period's external shock did also affect their current volatility.In addition, the shocks to the volatility of both Chinese and international crude oil prices would disappear rather slowly, suggesting that the risk was increasing and the uncertainty of the crude oil market was growing both in China and in the world.Similarly, the current joint volatility of the Chinese and international crude oil prices was mainly affected by prior conditional covariance, implying that a stable relationship had been established between Chinese and international crude oil markets.
VEC-TARCH Model.
The elements of the dummy matrix −1 reflect whether the volatility of the Chinese and international crude oil prices was symmetrical or not.The value of 11 was −0.1002 and deviated from zero statistically significantly, manifesting that the volatility of the Chinese crude oil price was asymmetric and that positive news would exacerbate volatility more greatly than negative news.In China's crude oil market, pricing was under the central government's dominance.When there were rising signals in the international crude oil market, the current crude oil price in China tended to rise as well; when there were declining signals, the crude oil price in China tended to remain at the current level because the monopolistic petroleum enterprises in China may exert an influence on the government's decision.The value of 22 was negative and showed that prior positive news impacted the international crude oil price more than negative news.In addition, the value of 12 was negative and highly significant, which illustrates that when the product of the prior stochastic terms of the Chinese and international crude oil prices was positive, it affected the current joint volatility more than when it was negative.Because China's crude oil market was integrated with the international oil market, when the direction of the two variables' respective stochastic terms' external shock was consistent-that is, the product of the prior stochastic terms of Chinese and international crude oil prices was positive-the impact on the two variables' joint volatility was aggravated.On the other hand, when the direction of both was inconsistent, the impact on the two variables' current joint volatility counteracted.
Conclusions
This study first utilized the Granger causality model to test for a statistical causal relationship between the Chinese and international crude oil prices.Then, the impulse response function and variance decomposition were adopted to trace the effect of a one-time shock to one of the innovations on current and future values of the endogenous variables and the contribution of endogenous variables' structural shocks, respectively.Finally, a VEC-TARCH model was used to explore the two variables' volatility relationship and asymmetry.The main conclusions are as follows.
The Chinese and international crude oil prices were of reciprocal causation.The two crude oil markets have gradually linked with each other.However, because of the special crude oil pricing mechanism in China, an external shock's impact on the Chinese crude oil price was stronger and the Chinese crude oil price was sensitive to the change of the international crude oil price.Though the two crude oil markets affected each other, the buffering capacity of the international crude oil price to external shocks was stronger than that of China.As a result, the impact of shocks of the Chinese crude oil price on the international crude oil price was limited, whereas that of shocks of the international crude oil price on the Chinese crude oil price was larger.In addition, both the variance decomposition of Chinese and international crude oil price indicated that the former structural shocks' contribution was greater than the latter, further suggesting that the Chinese crude oil price was more sensitive to external shocks than the international crude oil price.
The volatility of the crude oil price both in China and in the world was affected by external shocks and their fluctuation transmission in the last period.But the corresponding coefficient of the previous conditional variance was far greater than that of the prior external shock, implying that the volatility of both Chinese and international crude oil prices was mainly transmitted by prior fluctuation and the external shocks' impact was limited.Furthermore, the joint volatility of the two variables was primarily influenced by their previous conditional covariance, reflecting that the Chinese crude oil market and international crude oil market were gradually integrated and that a stable relationship between them had been established.Finally, the volatility of the crude oil price both in China and in the world showed the feature of asymmetry and positive news' shocks on current volatility were larger than those of negative news.When the directions of the two variables' respective stochastic terms' external shocks were consistent, the impact on the two variables' joint volatility was aggravated, whereas when inconsistent, the impact on the two variables' current joint volatility was counteracted.
Figure 1 :
Figure 1: Response of (a, c) 1 and (b, d) 2 to shocks of the (a, b) Chinese and (c, d) international crude oil price.Solid (dotted) lines denote the mean innovation (±2 standard deviations).
Table 1 :
Granger causality test between 1 and 2. " * * * ," " * * ," and " * " denote the significance at the 1%, 5%, and 10% level, respectively.whythe Daqing crude oil spot price was chosen to denote the Chinese crude oil price.All data were collected from the US Energy Information Administration.
Table 2 :
VAR lag order selection criteria." * " indicates the model order selected by the corresponding criterion.LR, FPE, AIC, SC, and HQ are the sequential modified LR test statistic (each test at 5% level), final prediction error, Akaike information criterion, Schwarz information criterion, and Hannan-Quinn information criterion, respectively.
Table3shows the estimated coefficients of the VEC-TARCH model.The values of 11 and 11 were 0.0746 and 1.0074 and deviate from zero at the significance level of 0.01, suggesting that the conditional variance of the Chinese crude oil price was affected by both the stochastic term in the last period and the prior conditional variance.In other words, the Chinese crude oil price volatility was influenced by both the last period's external shock and the last period's forecast variance based on past information.Moreover, 11 was much smaller than 11 , implying that the Chinese crude oil price volatility was mainly affected by its previous volatility transmission.The sum of 11 and 11 was 1.082, just outnumbering one, which indicated that the shocks to the volatility of the Chinese crude oil price would die out slowly.Similarly, the values of 22 and 22 were 0.0989 and 0.9927 and significant, indicating that the international
Table 3 :
Estimated coefficients of the VEC-TARCH model between 1 and 2. " * * * ", " * * " and " * " denote the significance by statistic at the 1%, 5% and 10% level, respectively.crude oil price's conditional variance was affected by both the stochastic term in the last period and the prior conditional variance.The sum of 22 and 22 was very close to one, illustrating that the shocks to international crude oil price volatility would die out slowly as well.The values of 12 and 12 were 0.0464 and 1.0277, respectively, implying that the two variables' joint volatility was affected by both the stochastic term's volatility in the last period and the prior conditional covariance.In addition, 12 was far lower than 12 , demonstrating that the two variables' joint volatility was primarily affected by the prior joint volatility.The sum of 12 and 12 was also close to one, which demonstrated that the shocks to the two variables' joint volatility would disappear as well. | 7,854 | 2015-11-16T00:00:00.000 | [
"Economics"
] |
Spoken language change detection inspired by speaker change detection
Spoken language change detection (LCD) refers to identifying the language transitions in a code-switched utterance. Similarly, identifying the speaker transitions in a multispeaker utterance is known as speaker change detection (SCD). Since tasks-wise both are similar, the architecture/framework developed for the SCD task may be suitable for the LCD task. Hence, the aim of the present work is to develop LCD systems inspired by SCD. Initially, both LCD and SCD are performed by humans. The study suggests humans require (a) a larger duration around the change point and (b) language-specific prior exposure, for performing LCD as compared to SCD. The larger duration requirement is incorporated by increasing the analysis window length of the unsupervised distance-based approach. This leads to a relative performance improvement of 29.1% and 2.4%, and a priori language knowledge provides a relative improvement of 31.63% and 14.27% on the synthetic and practical codeswitched datasets, respectively. The performance difference between the practical and synthetic datasets is mostly due to differences in the distribution of the monolingual segment duration.
I. INTRODUCTION
Spoken language diarization (LD) is a task to automatically segment and label the monolingual segments in a given multilingual speech signal.The existing works towards LD are very few (Sitaram et al., 2019).The majority of them use phonotactic (i.e. the distribution of sound units) based approaches (Chan et al., 2004;Lyu et al., 2013;Spoorthy et al., 2018).The development of LD using a phonotactic-based approach requires transcribed speech utterances.The same is difficult to obtain as most of the languages present in the code-switched multilingual utterances are resource-scare in nature (Sitaram et al., 2019;Spoorthy et al., 2018).Even though, there exist some transfer learning approaches that adapt the phonotactic models of the high resource language to obtain the models for the low resource language, may end up with performance degradation if both the languages are not from the same language group (Sitaram et al., 2019).Further, LD is effortless for humans, especially for known languages, and challenging for machines.Hence there is a need for exploring alternative approaches for LD.
Speaker diarization (SD) is a task to automatically segment and label the mono-speaker segments for a given multispeaker utterance, which is well explored in the literature.Though there exist differences in the information that needs to be captured to perform LD and SD tasks, there exist many similarities like the features apa<EMAIL_ADDRESS>b<EMAIL_ADDRESS>proximating the vocal tract resonances that have been successfully used for the modeling of both speaker and language-specific phonemes (Carrasquillo et al., 2002;Li et al., 2013;Liu et al., 2021).Furthermore, most of the approaches used for spoken language identification (LID) are inspired by the approaches used for the speaker identification/verification (SID/SV) task (Richardson et al., 2015;Snyder et al., 2018).In addition to that most of the successful LID systems that are borrowed from SID/SV literature do not require transcribed speech data (Li et al., 2013;Snyder et al., 2018).Alternatively LID systems developed using the phonotactic approach require transcribed speech data.This motivates a close association study between the LD and SD tasks and may be exploited to come up with approaches for LD.
The SD field has evolved mainly in two ways: (1) change point detection followed by clustering and boundary refinement, and (2) fixed duration segmentation followed by i-vector/ embedding vector extraction, clustering, and boundary refinement (Moattar and Homayounpour, 2012;Park et al., 2022;Tranter and Reynolds, 2006).(Bredin et al., 2017;Dawalatabad et al., 2020;Hogg et al., 2019;Park et al., 2022) reported that initial change point detection improved overall SD performance.Thus this study focuses on the development of spoken language change detection (LCD) through a comparative analysis between LCD and speaker change detection (SCD).The available SCD approaches can be broadly classified into two groups: (1) distance-based unsupervised approach and (2) modelbased supervised approach (Moattar and Homayounpour, 2012;Park et al., 2022).The distance-based approach applies hypothesis testing (either coming from a unique speaker or not) for predicting the speaker change to the speaker's specific features extracted from the speech signal with sliding consecutive windows (Moattar and Homayounpour, 2012;Park et al., 2022).Following this approach, many feature extraction techniques like excitation source (Dhananjaya and Yegnanarayana, 2008;Sarma et al., 2015), fundamental frequency contour (Hogg et al., 2019), etc., and distance metrics like Kullback-Leibler (KL) divergence (Siegler et al., 1997), Bayesian information criteria (BIC) (Chen et al., 1998), KL2 (Siegler et al., 1997), generalized likelihood ratio (GLR) (Gish et al., 1991) and information bottleneck (IB) (Dawalatabad et al., 2020) are proposed in the literature.Generally, the performance of the distance-based unsupervised approach degrades with variation in environment and background noise (it may predict false changes), hence to resolve the issue supervised modelbased approaches are proposed in the literature (Moattar and Homayounpour, 2012;Park et al., 2022).In the early days, the proposed approaches model individual speakers using the Gaussian mixture model and universal background model (GMM-UBM) (Barras et al., 2006;Moattar and Homayounpour, 2012), hidden Markov model (HMM) (Meignier et al., 2006), etc, but nowadays, using the deep learning framework the approach predicts the speaker change by discriminating between the speaker change segments (neighborhood of the speaker change point) with no change segments (Moattar and Homayounpour, 2012;Park et al., 2022).However, the modelbased approach smooths the output evidence and may lead to miss detection of the change points (Moattar and Homayounpour, 2012).In addition to that training of the supervised model requires labeled speech data from a similar environment/recording condition, speaking style, language, etc., making the system development complicated.Therefore the distance-based unsupervised approaches are more popular and widely used for SCD tasks (Dawalatabad et al., 2020;Moattar and Homayounpour, 2012;Park et al., 2022).
Even though the available SCD frameworks look simple to adopt, there are challenges in doing so.Fig. 1 (a) and (b), show the time domain speech signals corresponding to the utterance having a speaker change and a language change, respectively.By listening and observing the time domain representation of both utterances, the identified speaker/language change points are manually marked.From the time domain signal, it is very difficult to locate both the speaker and language change points.Fig. 1 (c) and (d) show the spectrogram of both utterances.From the spectrogram, it can be observed that around the speaker change the formant structure shows significant variation, whereas around language change the structure is intact.When the speaker changes, the vocal tract system information changes and hence the variation in the formant structure.However, the structure of the formant frequencies remains intact during language change as the single speaker is speaking both languages.It is interesting to note that humans discriminate between spoken languages without knowing the detailed lexical rules and phonemic distribution of the respective languages.Of course, humans need to have prior exposure to the languages (Li et al., 2013).Humans may exploit the long-term phoneme dynamics to discriminate between languages.Therefore, the language change may be detected by capturing the long-term language-specific spectral-temporal dynamics.This may represent valid phoneme sequences and their combinations to form syllables and subwords of a language.
Based on the need to exploit the long-term spectrotemporal evidence, it can be hypothesized that the LCD by human/machine may require more neighborhood duration around the change point than the SCD.In addition, LCD may also benefit from prior exposure to respective languages.A human subjective study that focuses on language/speaker change detection is set up for validating the same.
For automatic detection of language change, the initial studies are performed using the available unsupervised distance and the supervised model-based SCD approaches.The model-based approaches include GMM-UBM, i-vector, and x-vector.Based on the experimental results for LCD and SCD, appropriate modifications will be done to each framework for improving the performance of the LCD task.
The main contribution of this work are summarized as follows: (a) by observing the spectro-temporal representation around the speaker and language change, it is hypothesized that detecting language change, requires a larger duration around the change point and a priori knowledge of the language as compared to detecting a speaker change.The same hypothesis is confirmed by the human subjective study, (b) the SCD frameworks are used as initial baselines to perform LCD and their performances are analyzed, and (c) these frameworks are further refined to improve the performance of LCD.
II. DATABASE SETUP
This section provides a brief description of the database used in this study.
Initially, the studies have been performed with synthetically generated code-switch and multi-speaker utterances.For generating the utterances, we have used the Indian institute of technology Madras text-to-speech (IITM-TTS) corpus (Baby et al., 2016).The IITM-TTS corpus consists of speech data recordings from native speakers of 13 Indian languages.For each native language, two speakers (a male and a female) recorded their utterances in their native language and English.In this study for synthesizing the code-switch utterances, a female speaker speaking her native language Hindi, and her second language English is considered.For each language, the first 5 hours of data are used for training purposes.The rest of the monolingual utterances are stitched randomly for generating code-switched utterances.Altogether, 4000 utterances are generated having one to five language change points.The average monolingual segment duration of the generated code-switch utterances for Hindi and English languages are approximately 6.5 and 5.2 secs, respectively.The generated dataset is termed as TTS female language change (TTSF-LC) corpus.Similarly, for generating speaker change utterances by keeping the language identical, we have used English speech utterances from native Hindi and Assamese female speakers.The average mono-speaker segment duration of the generated utterances are 5.19 and 4.86 secs, respectively.The generated dataset is termed as TTS female speaker change corpus (TTSF-SC).
Finally, for generalizing the obtained observations, the experiments are performed on the standard LCD corpus.Microsoft code-switched challenge task-B (MSC-STB) dataset is used.The dataset has development and training partitions that consist of code-switched utterances and language tags (each 200 msec) from three language pairs: Gujarati-English (GUE), Tamil-English (TAE), and Telugu-English (TEE).The approximate duration of each language in the training and development set is 16 and 2 hours, respectively.The detail about the database can be found at (Diwan et al., 2021).
III. HUMAN SUBJECTIVE STUDY FOR LANGUAGE AND SPEAKER CHANGE DETECTION
An experimental procedure has been set up, where each human subject is exposed to a pool of utterances that may or may not have a language/speaker change.The human subjects are asked to mark, if there exists a language/speaker change or not.The utterances are classified into five groups.Each group is represented with approximate duration considered in terms of the number of voiced frames (NVF) taken around the true/false change point.The true change point refers to the actual change points of the selected utterances.The selected utterances are split around the change point to generate the mono-language/speaker utterance.The false change point represents the centered voiced frame's start location of the given mono-language/speaker utterance.The voiced frame is decided by taking 6% of the average short time frame energy (computed with a frame size of 20 msec and a frameshift of 10 msec) of a given utterance as a threshold (Rabiner, 1978).The 30 mono-speaker utterances are generated by splitting the selected 15 utterances around the true change point.Out of 30, with respect to duration, the largest 15 has been chosen for this study.The same procedure has been followed to generate the mono-lingual utterances using the selected codeswitched utterances belonging to the HIE, BEE, TAE, and TEE language pairs.However, there is an exception for the utterances belonging to BEA and TAM, as the utterances have a speaker change along with the language change.Hence for a fair comparison, the monolingual utterances for these cases are synthesized, such that they also have a speaker change, i.e.BEB, ASA, MAM, and TAT, respectively.After that, each utterance S(n) is masked by considering x number of voiced frames (NVF-x) from the left and right of the true/false change point.According to the value of x, the masked utterances are grouped into five different groups, termed NVF-10, NVF-20, NVF-30, NVF-50, and NVF-75.To avoid abrupt masking, a Gaussian mask G(n) with appropriate parameters is multiplied with the utterances to obtain the masked utterance The masked signal is passed through an energy-based endpoint detection algorithm to obtain the final masked utterance (Rabiner, 1978).The detailed procedure of the masked utterance generation is attached in the supplementary 1 , and also the generated utterances are available at https://github.com/jagabandhumishra/HUMAN-SUBJECTIVE-STUDY-FOR-LCD-and-SCD.
The listening experiment is conducted with 18 subjects.Out of them, 13 number of the subjects are male and 5 are female.The selected subjects are from the 20 − 30 years age group.The subjects have no prior exposure to the voice samples of the speakers used in this study.However, the subjects are comfortable with English, and for other languages, the comfortability varies.To know the language comfortability, each of the subjects is asked to provide a language comfortability score (LCS) from zero to three for each pair of languages.
The listening study is conducted with 390 utterances (i.e 240 for LCD and 150 for SCD).The LCD task is separate from SCD, hence conducted in two different sessions, and also the subjects are well rested so that they don't have listener fatigue.A graphical user interface (GUI) has been designed to perform the listening study.For a specific LCD/SCD study, all the masked utterances are presented to the listener in a random order, irrespective of their segment duration.If a listener is unable to provide the response for one-time playing, s/he is allowed to play the utterance multiple times.Our objective here is to observe, how correctly humans recognize the speaker and language change by listening to the utterances coming from the five different groups.Hence, the responses recorded in (Sharma et al., 2019) for analyzing the talker change detection ability of humans are used here.Three kinds of responses have been recorded, these are (1) language/speaker change detected or not (2) the number of times replayed (NR), and (3) response time (RT).RT is the time duration taken by a subject to provide his/her response, after listening to the full utterance.The RT is computed by subtracting the respective utterance duration (UD) from the total duration (TD) (i.e.RT = T D − U D).The TD is the duration taken by a subject (i.e. from pressing the play button to pressing the yes/no button) to provide his/her response.
For a given subject, there are three kinds of performance measures computed in this study: (1) average detection error rate (DER) (2) average number of times replayed (N R), and (3) average response time (RT ).The DER is defined in Eq. 1, where N is the total number of trials, F A is the number of false language/speaker change utterances, marked as true by the subject and F R is the number of true language/speaker change utterances, marked as false by the subject, respectively.The DER measure defines the inability of the subject to detect language/speaker change.The N R provides an estimation of the average number of replays required for the subject to mark their response comfortably.Similarly, the RT provides an estimation of the average duration required for the subject to perceive the language/speaker change, after listening to the respective utterances.A higher value of the performance measures indicates the inability of the human subject to perceive the language/speaker change and vice versa.
After performing both the LCD and SCD experiments, the subject-specific, DER, N R, and RT are computed with respect to NVF.The distributions of the obtained DER with respect to the NVF are depicted in Fig. 2(a).It can be seen that the DER values are smaller for the SCD than for the LCD, regardless of the NVF.This suggests that human subjects are more comfortable with detecting the switching of speakers than language.Furthermore, as the NVF increases from 10 to 75, the DER decreases for both SCD and LCD.The For observing the effect of language comfortability on detecting language change, the responses of the human subjects are considered for the group NVF-50 and NVF-75 that have the median of DER lesser than 0.25 (assuming sufficient duration from either side).With respect to the LCS, the responses are segregated into four groups.The group segregation with respect to language comfortability is done as 0: very low, 1: lower medium, 2: medium, and 3: excellent, respectively.The obtained DER distribution with respect to LCS is depicted in Fig. 4. From the figure, it can be observed that the DER values are decreases with an increase in LCS.This concludes that a priori knowledge of languages helps people to better discriminate between languages.The objective of this section is to perform LCD tasks inspired by the existing unsupervised distancebased SCD framework.In general, the SCD task is performed by computing and threshold the distance contour obtained between the features of the sliding analysis window with a fixed length N .The basic block diagram of the approach is depicted in Fig. 5. First feature vectors are extracted from the speech signal and then energybased voice activity detection (VAD) is performed to obtain the voiced frame indices.The voiced frame indices are stored for future reference and the feature vectors corresponding to the voiced frames are used for further processing.The voiced feature vectors are used with two consecutive windows having a fixed length to model two different Gaussian distributions (g a and g b ).The divergence distance contour is obtained through the entire scan of the given test utterance by sliding the analysis window with a frame, as mentioned in Eq. 2. The evidence contour is then smoothed with the hamming window with length (h l ).The smoothed contour is then used for peak detection, with a peak-picking algorithm having a minimum peak distance parameter called γ.The higher value of γ reduces the number of detected peaks and vice-versa.For reducing the number of false change points, an approach of deriving a threshold counter proposed in (Lu and Zhang, 2002) and mentioned in Eq 3 is used here.Finally, the change frame is obtained by comparing the strength of the detected peaks with the threshold contour.The change point's actual frame index and sample location are obtained by using the stored voiced frame locations. (2) Initially, we used the TTSF-SC dataset for designing and tuning the hyperparameters of the SCD system.Out of 4000 test utterances, the first 100 utterances are used to tune the hyperparameters.It has been observed that the performance is optimal by considering α = 1, γ equal to 0.9 times the analysis window length, and 150 as the analysis window length.Keeping the methodology and hyperparameters identical, the TTSF-LC and MSCSTB dataset is used to perform the LCD task.For evaluating the performance, the commonly used performance measures for event detection tasks, i.e. identification rate (IDR), false acceptance rate (FAR), miss rate (MR), and mean deviation (D m ) are used here (Mishra et al., 2021;Murty and Yegnanarayana, 2008).The performances of both tasks are tabulated in Table I.
From the results, it can be observed that the performance of the SCD in terms of IDR is 84.1%, whereas the performance of the LCD in terms of IDR is 51.2%.The reduction in performance may be due to two reasons, (1) the used MFCC features may fail to capture languagespecific discriminative evidence, and (2) the hyperparameters, mostly the analysis window length, are tuned for SCD and may not be appropriate for LCD.Hence to understand the issue a study is carried out by varying the features and analysis window length around the change point.The most used features in literature for language identification (LID) tasks, i.e.MFCC, LPCC, SDC, and PLP are considered here.The objective here is to observe the language discriminative ability of the features by considering a fixed number of voiced frames (NVF), x around the change point and compare it with the speaker discrimination ability of the MFCC feature.This study will help us to reason out the performance degradation of LCD as compared to SCD.Further, the observation will also help us to optimally decide the feature and analysis window length for performing LCD.
For performing the study, the TTSF-SC and TTSF-LC dataset is considered.Out of 4000 test utterances, the utterances having only one change point are selected.The number of utterances selected for speaker change and language change is 799 and 836, respectively.For observing the discrimination ability, the idea here is to observe the distributional difference between the true and false distances.The true distances are the KL divergence distance between the x number feature vectors from either side of the ground truth change point.Similarly, the false distance is computed by placing the change point randomly anywhere in the mono-language/ speaker segments.The procedure of computing the true and false distances is also depicted in Fig. 6.For observing the duration effect on the discrimination, the value x is considered as 10, 20, 30, 50, 75, 100, 150, 200, 250, and 300, respectively.For a given x value, the ANOVA test is conducted between the obtained true and false distances.
The obtained F-statistics values of the ANOVA test are depicted in Fig. 7.
From the figure, it can be observed that the Fstatistics values increase with an increase in NVF and saturate after a certain number of voiced frames, and started decreasing after that.A similar observation has also been observed in the case of the LCD and SCD study by humans.However, in case humans' performance doesn't degrade with an increase in NVF.This may be due to the inability of the Gaussian (assumption of statistical independence) to model the speaker and language spectral dynamics and leading to the increase of the class-specific variance in the distance distribution.
Using the MFCC feature, the F-statistic values of the SCD are higher than the LCD irrespective of the NVF.Further, it can also be observed that the discrimination ability (in terms of F-statistics) of the LCD follows the SCD with an increase in the NVF.Furthermore, it has also been observed that the highest F-statistics values obtained for speaker and language change study are at 150 and 200, respectively.In addition to this, for language change study, the MFCC features provide better F-statistics value, followed by PLP, LPCC, and SDC.For clear observation, the distance distribution of the MFCC feature to perform SCD and the MFCC and PLP features to perform LCD with NVF of 50, 150, 200, and 250 is depicted through box plots in Fig. 8. From the box plots, it can also be noticed that the speaker and language discrimination saturates at NVF 150 and 200, respectively.Though the boxplots look to have better discrimination, the increase in inter-class variance leads to a decrease of the F-statistics values.Furthermore, the discrimination ability of the MFCC is better compared to PLP, as the separation between the true and false distance distribution of the MFCC feature is higher than the PLP feature for LCD at NVF equal to 200.This motivates us to consider the MFCC feature with the analysis window length of 200 for performing LCD for the TTSF-LC dataset.The performance of the LCD task with modified analysis window length is tabulated in Table I.
The table shows that the performance in terms of IDR, FAR, and MDR follows the observations noticed with respect to the F-statistics.The performance obtained for the TTSF-LC dataset with MFCC feature (considering analysis window length 200) is 66.1% in terms of IDR, providing a relative improvement of 29.1% and followed by the IDR of 64.06% using PLP feature.Similar observations also have been reported using the MSCSTB dataset, where the performance in terms of IDR improved relatively with 2.72%, 2.85%, and 1.63% by considering the analysis window length of 160, 180, and 170 for GUE, TAE, and TEE language pairs, respectively.The analysis window length 160, 180, and 170 are decided greedily by evaluating the performance by considering the analysis window length from 100 to 250 with a shift of 10 on the first 100 test trails.Hence, this justifies the hypothesis that the requirement of relatively higher duration information to perform LCD than SCD.
V. LANGUAGE CHANGE DETECTION BY MODEL-BASED APPROACH
The SCD and LCD by human suggest that prior exposure to the language make human more efficient in detecting language change.This motivates extracting the statistical/embedding vectors from the trained machine learning (ML)/ Deep learning (DL) framework and using them to perform change detection tasks.The detailed procedure is explained in the following subsections.
A. Model-based change detection framework
The block diagram of the model-based change detection framework is depicted in Fig. 9. From the training data, initially, MFCC+∆ + ∆∆ are computed, and voiced feature vectors are selected for further processing by using VAD.The voiced feature vectors are used to train the statistical models like the universal background model (UBM), adaptation model, Total variability matrix (T matrix), and DL model like TDNNbased x-vector models.The statistical vectors like u/a/ivectors are extracted using trained UBM/adapt model/ T-matrix, respectively.The u-vector and a-vectors are computed by computing the zeroth order statistics from the UBM and adapt model, respectively.The zeroth order statistics are computed using Equation 4, where i ranges from 1 ≤ i ≤ M , M is the number of mixture components, x j are the MFCC features and T is the number of voiced frames.The u-vectors are the M dimensional vectors extracted using the UBM model, whereas the avectors are the concatenation of the M dimensional vec- tors, extracted from the class-specific adapt models.The i-vectors are extracted as mentioned in (Dehak et al., 2010).Similarly, the x-vectors are extracted from the trained TDNN-based x-vector model.Both the statistical/ embedding vectors are computed by considering N number voiced feature vectors as analysis window length.The extracted vectors are then used to train the linear discriminate analysis (LDA), within class covariance normalization (WCCN) matrix, and the probabilistic LDA (PLDA) model.
During testing, the feature vectors are extracted from the code-switched utterances.After that using the VAD labels, with a fixed number of voiced frames the statistical/embedding (S/E) vectors are extracted using the trained models.The S/E vector extraction and the distance contour for each test utterance are computed using Eq. 5.Where x i s' are the voiced feature vectors, ψ(.) is the distance computation function and F(.) is the mapping function from the feature space to S/E vector space.
The distance contour is then smoothed using a hamming window with length (h l ).The h l is considered as 1/δ times N .The peaks of the smoothed contour are computed and the magnitude of peaks greater than the threshold contour is considered as the change points.
B. Experimental Setup
The TTSF-SC dataset is used for SCD, whereas TTSF-LC and MSCSTB are used for performing LCD tasks.The 39 dimensional MFCC+∆ + ∆∆ feature vectors are computed from the speech signal with 20 msec and 10 msec as window and hop duration, respectively.The voiced frames are decided by considering the frame energy that is greater than the 6% of the utterance's average frame energy.The UBM and adapt models are trained with a cluster size of 32.The dimensions of the u/a/i-vectors are 32, 64 and 50, respectively.The recipe from the speech brain is used to train and extract the 512 dimension x-vectors (Ravanelli et al., 2021).For the speaker-specific study, the x-vectors are trained without dropout and L2 normalization, whereas for the languagespecific study, dropouts of 0.2 in the second, third, fourth, and sixth layers are used along with L2 normalization.
During training, the speaker/language-specific voiced feature vectors are used to extract the S/E vectors dis-jointly with a fixed N , whereas during testing the S/E vectors are extracted with a sample frameshift.All the models have been trained for 20 epochs.For TTSF-LC the optimal N is decided experimentally as 200 and for TTSF-SC N is considered as 50.After training, by observing the validation loss and accuracy the model corresponding to the 15 th and 11 th epoch is chosen for the language and speaker-specific study, respectively.Similarly, for MSCSTB, x-vector models for each language pair are trained.After training for 100 epochs, by observing the validation loss and accuracy the model belonging to the (54 th , 29 th , and 26 th ) epochs for N = 200 and (25 th , 80 th , and 18 th ) epochs for N = 50 are chosen for GUE, TAE, and TEE language pairs, respectively.
For TTSF-LC and TTSF-SC, the extracted embedding vectors are normalized without having LDA and WCCN.The normalized vectors are used for modeling the PLDA and computing the distance contour for LCD and SCD tasks.Using the MSCSTB dataset, it is observed that performing LDA, and WCCN along with using cosine kernel distance instead of PLDA distance contour improves the change detection performance.This may be due to the nature of the datasets.The TTSF-LC and TTSF-SC are the studio recording of read speech, whereas the MSCSTB is the conversation recording in the office environment.
C. Language discrimination by statistical/embedding vectors
The aim here is to observe the discrimination ability of the extracted S/E vectors for language discrimination, by synthetically emulating the CS scenario.The TTSF-LC, where the same speaker is speaking two languages is considered for this study.The training partition is used to train the UBM, adapt, T-matrix, and TDNN- FIG. 10. t-SNE feature distribution between the Hindi (H) and English (E) (a) MFCC features, (b) u-vector, (c) a-vector, (d) i-vector, and (e) x-vector.Within and Between language PLDA score distribution, with EER of (f) 28.5, (g) 17.35,(h) 12.55, and (i) 3.6 for u, a, i, and x-vector, respectively.
based x-vector model.From the test partitions, two utterances are selected, one from each language, spoken by a speaker.Using the selected utterances the MFCC+∆+ ∆∆ features and the S/E vectors are extracted and projected in two dimensions using t-SNE (Maaten and Hinton, 2008).The two-dimensional representations are depicted in Fig. 10(a-e).From the figure, it can be observed that the overlapping between the languages reduces by moving from the feature space to the S/E vector space.This shows, like human subjects, prior exposure to the languages through ML/DL models helps in better discrimination.Furthermore, among the S/E vectors, the overlap between the languages is least in the x-vector space, followed by the i-vector, adapt, and UBM posterior space.This is due to the ability of the modeling techniques to capture the language-specific feature dynamics.
For strengthening the observation, the features are extracted from the test utterances and pooled together with respect to a given language.The pooled feature vectors are randomly segmented with a context of 200 and used to extract the S/E vectors.The extracted S/E vectors are paired to form 2000 within a language (WL) and 2000 between language (BL) trails.The WL and BL vector pairs are compared using the PLDA scores.Fig. 10 (f-i) shows boxplots of the PLDA score distribution of the WL and BL pairs.From the box plot distribution, it can be observed that, between the WL and BL, the overlap of PLDA scores distribution reduces with improvement in the modeling techniques from UBM to x-vector.
In the change point detection task, the aim is to get a sudden change in the distance contour, when there exists a change in language.That can be achieved if the contour (negative of PLDA score) variation is less in WL and provide a sudden change in the contour for BL pairs.Hence for ensuring this, the PLDA score distribution between the WL and BL should be maximized.Keeping this into account, the equal error rate (EER) has been used as an objective measure, where the WL and BL trials are termed false scores and true scores, respectively.The obtained EER for UBM/adapt/i-vector and x-vector are 28.5, 17.35, 12.55, and 3.6, respectively.Hence as per the discrimination ability, the change point detection study has been carried out using i/x-vectors as the representations of the speaker and language.
D. Experimental Results
Initially, the change detection study is conducted with TTSF-SC and TTSF-LC using i/x-vectors as the speaker/language representation.The discrimination ability and the LCD/SCD study suggest that the x-vector is a better representation of the speaker/language than the i-vector.Therefore, the LCD task on the MSCSTB dataset is conducted by considering x-vectors as language representations.
The experimental results are tabulated in Table II.The performance obtained in terms of IDR on SCD task using i-vector and x-vector is 87.75% and 92.27%, respectively.Similarly, for LCD tasks the performances on TTSF-LC are 80.58% and 87.01%, respectively.As evidenced by the language discrimination study, the performance of LCD provides a relative improvement of 21.9% and 31.63%using i-vectors and x-vectors, over the best performance achieved on the unsupervised distancebased approach, respectively.This justifies the claim that, like humans, the performance of the LCD can be improved by incorporating language-specific prior information through computational models.
The performance of the LCD task on the MSCSTB dataset using x-vectors as language representation with considering N as 200 (same as TTSF-LC) is 46.56%, 49.91% and 47.13% in terms of IDR for the GUE, TAE, and TEE partitions, respectively.The performance provides a relative improvement of 5.6%, 2.3%, and 4.2%.However, the improvement is small as compared to the improvement achieved using TTSF-LC data.This may 11.From the figure, it can be observed that the median of the monolingual segment duration in the case of TTSF-LC for primary and secondary language are (5.54 and 4.9) seconds, and for MSCSTB is (1.46 and 0.51), (1.54 and 0.41), (1.61 and 0.41) seconds for GUE, TAE, and TEE partition, respectively.Further, it has been observed that language discrimination is better by considering N equal to 200 (i.e.approx. 2 seconds).Hence, due to the monolingual segment duration of the MSCSTB dataset being smaller than the considered analysis window duration resulting in smoothing on the resultant distance contour, and leads to an increase in the MDR.Therefore, the alternative is to reduce the analysis window length, but that may affect the language discrimination ability of the x-vectors.
A study is performed for observing the trade-off between the analysis window length and language discrimination ability.The language discrimination test and the LCD task are performed using the GUE partition of the MSCSTB dataset by reducing the analysis window length from 200 to 50.The results of the LCD task are tabulated in Table III.The cosine score distribution of the x-vectors' WL and BL pairs after the LDA and WCCN projection with varying the analysis window length are depicted in Fig. 12. From the be observed that with decreasing in N , the performance of the LCD task improves, and achieved the best performance of 54.74% at N equals to 50.Hence the change detection performance is computed with N equal to 50 for GUE, TAE, and TEE language pairs and tabulated in Table II.However, the relative performance improvement by incorporating language-specific prior exposure through the x-vector model is not as expected as in the TTSF-LC dataset.This is due to the language discrimination ability of the x-vectors reducing with the decrease in N .From Fig. 12, it can be observed that the overlap between the WL and BL score distribution increases with a decrease in the value of N .As an objective measure, the computed EER for N equals to 200, 150, 100, 75 ,and 50 are 7. 1, 9.8, 12.8, 19.8 and 29.2, respectively.
VI. DISCUSSION
The human-based LCD and SCD study suggests that the language requires more neighborhood information as compared to the speaker for comfortable discrimination.Further, prior exposure to the languages helps humans to better discriminate between the languages.Motivated by this, it is hypothesized that the performance of LCD by machine can be improved with the (a) incorporation larger duration analysis window (N ) and (b) languagespecific exposure through computational models.
In the unsupervised distance-based approach, it has been observed that the performance of the LCD improves by increasing the value of N .The optimal N value for the SCD study is 150.Considering the same value of N , the LCD task is carried out for both TTSF-LC and MSCSTB datasets, and performances are tabulated in Table IV.In the case of the MSCSTB dataset, the average IDR values with respect to all three language pairs are tabulated.Motivating by the LCD/SCD study by humans, the N value is increased and the obtained optimal N value for the LCD with TTSF-LC is 200.Similarly, the optimum N value for MSCSTB is 160, 180, and 170 for the GUE, TAE, and TEE , respectively.The performance with the optimal N value for TTSF-LC and MSCSTB is 66.1% and 46.02%, which provides a relative improvement of 29.1%, and 2.4%, respectively.These observations justify the claim that the performance of the LCD by machines can be improved by increasing the analysis window duration.
Furthermore, as hypothesized from the subjective study, the incorporation of language-specific exposure through computational models improves LCD performance.The i/x-vector models have been trained, which essentially capture the language-specific cepstral dynamics.It has been observed that with the x-vector approach, the obtained performance is 87.01%for TTSF-LC and 52.59% in terms of IDR, which provides a relative improvement of 31.63% and 14.27% over the performance of the unsupervised distance-based approach.Similarly, for the SCD task using the TTSF-SC dataset, the performance provides a relative improvement of 9.71%.Comparing the performance of LCD and SCD on synthetic data, it can be observed that the improvement is more significant on LCD than the SCD.This concludes, like human subjective study, in an ideal condition (only speaker/language variation and keeping other variations limited), the requirement model-based approach is more significant on LCD than the SCD.
It is also observed that in the LCD task, the performance improvement on MSCSTB data is limited as compared to the improvement achieved on the synthetic TTSF-LC dataset.This is due to the difference in the mono-lingual segment duration.The trade-off between the analysis window duration and the language discrimination ability shows that the discrimination ability improves with an increase in analysis window duration.At the same time during change detection, as the monolingual segment duration can possibly be lesser than 500 msec (approx.50 voiced frames), considering a larger analysis window leads to degrading in performance by smoothening the evidence contour (leads to an increase in MDR).Hence to overcome this issue, (1) need to achieve significant language discrimination with the N value as small as possible, and (2) need to develop a framework whose performance will be least affected/independent with the variations of the analysis window duration.
VII. CONCLUSION
In this work, we performed LCD using the available frameworks for SCD.From the subjective study, it is observed that humans require comparatively larger neighborhood information around the change point as compared to the speaker.It is also observed that prior language-specific exposure improves the performance of the LCD task.In the unsupervised distance-based approach, the incorporation of larger neighborhood information improves the LCD performance by relatively 29.1% and 2.4% on the synthetic TTSF-LC and the practical MSCSTB dataset, respectively.Similarly, incorporating language-specific prior information through the computational models provides a relative improvement of 31.63% and 14.27% over the unsupervised distance-based approach.
It has also been observed that the practical data set does not perform as expected like synthetic data.This is due to the distributional difference in the monolingual segment duration on both datasets.The MSCSTB dataset consists of the monolingual segments having a duration lesser than 0.5 secs, and for better language discrimination the required duration is about 2 secs (about 200 voiced frames).Hence it is challenging to decide on the analysis window duration.The larger duration smooths the evidence contour and increases the MDR, whereas a smaller duration of 0.5 secs is not able to provide appropriate language discrimination.Therefore, our future attempts will try to develop a better framework, which can provide better language discrimination on a small duration, and also plan to come up with a change detection framework, whose performance should be independent/less affected by the variations of the analysis window duration.itY), Govt. of India, for supporting us through different projects.
FIG. 1 .
FIG. 1.(a) and (c) Two speaker time domain speech signal and its spectrogram, respectively.(b) and (d) Two languages (Bilingual) time domain speech signal and its spectrogram, respectively.
FIG. 2. (a) DER distributions of the subjects, (b) F-Statistics (F Stat) values of the ANOVA test between the DER distributions of LCD (L) and SCD (S) study, respectively .
FIG. 5 .
FIG. 5. Basic block diagram of the change detection framework for unsupervised distance-based approach
FIG. 6 .
FIG. 6. Distance computation around the true and false change point of an utterance, (a) true change point and (b), (c) false change points, fl d, and tr d are false and true distances, respectively.
FIG. 7. ANOVA test F-statistics (F Stat) values obtained between the true and false KL divergence distances for speaker/language change study with varying the number of voiced frames (NVF).
FIG. 9 .
FIG. 9. Block diagram for the model-based change detection study.
TABLE I .
Perforance of LCD and SCD with the unsupervised distance-based approach.A: with N = 150 (tuned for SCD) and B: with the optimal N value (tuned for LCD).
TABLE II .
Performance of LCD and SCD by model-based approaches, S: statistical i-vector, E: embedding based x-vector, N: analysis window length.
TABLE IV .
Performance comparison, RI: relative improvement, A: with N = 150 (tuned for SCD), B: with the optimal N value (tuned for LCD), and C: x-vector based approach. | 9,441.6 | 2023-02-10T00:00:00.000 | [
"Computer Science"
] |
An Integrated Source and Channel Rate Allocation Scheme for Robust Video Coding and Transmission over Wireless Channels
A new integrated framework for source and channel rate allocation is presented for video coding and transmission over wireless channels without feedback channels available. For a fixed total channel bit rate and a finite number of channel coding rates, the proposed scheme can obtain the near-optimal source and channel coding pair and corresponding robust video coding scheme such that the expected end-to-end distortion of video signals can be minimized. With the assumption that the encoder has the stochastic information such as average SNR and Doppler frequency of the wireless channel, the proposed scheme takes into account robust video coding, channel coding, packetization, and error concealment techniques altogether. An improved method is proposed to recursively estimate the end-to-end distortion of video coding for transmission over error-prone channels. The proposed estimation is about 1–3 dB more accurate compared to the existing integer-pel-based method. Rate-distortion-optimized video coding is employed for the trade-o ff between coding e ffi ciency and robustness to transmission errors.
INTRODUCTION
Multimedia applications such as video phone and video streaming will soon be available in the third generation (3G) wireless systems and beyond. For these applications, delay constraint makes the conventional automatic repeat request (ARQ) and the deep interleaver not suitable. Feedback channels can be used to deal with the error effects incurred in image and video transmission over error-prone channels [1], but in applications such as broadcasting services, there is no feedback channel available. In such cases, the optimal trade-off between source and channel coding rate allocations for video transmission over error-prone channels becomes very important. According to Shannon's separation theory, these components can be designed independently without loss in performance [2]. However, this is based on the assumption that the system has an unlimited computational complexity and infinite delay. These assumptions are not satisfied in delay-sensitive real-time multimedia communications. Therefore, it is expected that joint considerations of source and channel coding can provide performance improvement [3,4].
Most of the joint source and channel coding (JSCC) schemes have been focusing on images and sources with ideal signal models [4,5]. For video coding and transmission, many works still keep the source coding and channel coding separate instead of optimizing their parameters jointly from an overall end-to-end transmission point of view [6,7]. Some excellent reviews about robust video coding and transmission over wireless channels can be found in [8,9]. In [10], a JSCC approach is proposed for layered video coding and transport over error-prone packet networks. It presented a framework which trades video source coding efficiency off for increased bitstream error resilience to optimize the video coding mode selection with the consideration of channel conditions as well as error recovery and concealment capabilities of the channel codec and source decoder, respectively. However, the optimal source and channel rate allocation and corresponding video macroblock (MB) mode selection have to be selected through simulations over packet-loss channel models. In [11], a parameterized model is used for the analysis of the overall mean square error (MSE) in hybrid video coding for the error-prone transmission. Models for the video encoder, a bursty transmission channel, and error propagation at the video decoder have been combined into a complete model of the entire video transmission system. However, the model for video encoder involves several parameters and the model is not theoretically optimal because of the use of random MB intramode updating, which does not consider the different motion activities within a video frame to deal with error propagation. Furthermore, the models depend on the distortion-parameter functions obtained through ad hoc numerical models and simulations over specific video sequences, which also involves a lot of simulation efforts and approximation. The authors of [12] proposed an operational rate-distortion (RD) model for DCT-based video coding incorporating the MB intra-refreshing rate and an analytic model for video error propagation which has relatively low computational complexity and is suitable for realtime wireless video applications. Both methods in [11,12] focus on the statistical model optimization for general video sequence, which is not necessarily optimal for a specific video sequence because of the nonstationary behavior across different video sequences.
In this paper, we propose an integrated framework to obtain, the near-optimal source and channel rate allocation, and the corresponding robust video coding scheme for a given total channel bit rate with the knowledge of the stochastic characteristics of the wireless fading channel. We consider the video coding error (quantization and mode selection of MB), error propagation, and concealment effects at the receiver due to transmission error, packetization, and channel coding in an integrated manner. The contributions of this paper are the following. First, we present an integrated system design method for wireless video communications in realistic scenarios. This proposed method takes into account the interactions of fading channel, channel coding and packetization, and robust video coding in an integrated, yet simple way, which is an important system design issue for wireless video applications. Second, we propose an improved video distortion estimation which is about 1-3 dB peak signal-to-noise ratio (PSNR) more accurate than the original integer-pel-based method (IP) in [13] for half-pel-based video coding (HP), and the computational complexity in the proposed method is less than that in [13].
The rest of the paper is organized as follows. Section 2 describes first the system to be studied, then the packetization and channel coding schemes used. We also derive the integrated relation between MB error probability and channel coding error probability given the general wireless fading channel information such as average signal-to-noise ratio (SNR) and Doppler frequency. Section 3 presents the improved end-to-end distortion estimation method for HPbased video coding. Simulations are performed to compare the proposed method to the IP-based method in [13]. Then we employ RD-optimized video coding scheme to optimize the end-to-end performance for each pair of source and channel rate allocation. Simulation results are shown in Section 4 to demonstrate the accuracy of the proposed endto-end distortion estimation algorithm under different channel characteristics. Conclusions are stated in Section 5.
PROBLEM DEFINITION AND INTEGRATED SYSTEM STRUCTURE
The problem to be studied is illustrated in Figure 1 which can be specified by five parameters (r, r c , ρ, f d , F): r is the total channel bit rate, r c is the channel coding rate, ρ is the average SNR at the receiver, f d is the Doppler frequency of the fading channel targeted, and F is the video frame rate. H.263 [14] is used for video coding. A video sequence denoted as is the pixel spatial location and l = 1, . . . , L is the frame index, is encoded at the bit rate r s = r × r c b/s and the frame rate F f/s with the MB error probability P Mb = f (ρ, f d , r c ) that will be detailed next. The resulted H.263 bitstream is packetized and protected by forward error correction (FEC) channel coding with the coding rate r c . The resulted bitstream with rate r b/s is transmitted through wireless channels characterized by ρ and f d . The receiver receives the bitstream corrupted by the channel impairment, then reconstructs the video sequencẽ f s l after channel decoding, H.263 video decoding, and possible error concealment if residual errors occur. The end-toend MSE between the input video sequence at the encoder and the reconstructed video sequence at the decoder is defined as (1) For the video system in Figure 1, there are two tasks to be performed with the five given system parameters (r, r c , ρ, f d , F). First, we need to decide how to allocate the total fixed bit rate r to the source rate r s = r × r c to minimize the end-to-end MSE of the video sequence. Furthermore, the video encoder should be able to, for a source/channel rate allocation (r s , r c ) with residual channel decoding failure rate denoted as p w (r c ), select the coding mode and quantizer for each MB to minimize the end-to-end MSE of the video sequence. The goal is to obtain the source/channel rate pair (r * s , r * c ) and the corresponding robust video coding scheme to minimize (1).
In practical applications, there are only finite number of source/channel pairs available. We can find the robust video encoding schemes for each rate pair (r s , r c ) that minimizes (1) and denote the minimal end-to-end MSE obtained as D * E (r s , r c ), then the optimal source/channel rate pair (r * s , r * c ) and the corresponding video coding scheme can be obtained as For each pair (r s , r c ), we use RD-optimized video coding scheme to trade off between the source coding efficiency and robustness to error propagation. An improved recursive method which takes into account the interframe prediction, error propagation, and concealment effect is used to estimate the end-to-end MSE frame by frame. In this paper, the wireless fading channel is modeled as a finite-state Markov chain (FSMC) model [15,16,17], and the Reed-Solomon (RS) code is employed for forward error coding.
Modeling fading channels using finite-state Markov chain
Gilbert and Elliott [15,16] studied a two-state Markov channel model, where each state corresponds to a specific channel quality. This model provides a close approximation for the error rate performance of block codes on some noisy channels. On the other hand, when the channel quality varies dramatically such as in a fast Doppler spread, the twostate Gilbert-Elliott model becomes inadequate. Wang and Moayeri extended the two-state model to an FSMC model for characterizing the Rayleigh fading channels [17]. In [17], the received SNR is partitioned into a finite number of intervals. Denote by 0 = A 0 < A 1 < A 2 < · · · < A K = ∞ the SNR thresholds of different intervals, then if the received SNR is in the interval [A k , A k+1 ), k ∈ {0, 1, 2, . . . , K − 1}, the fading channel is said to be in state S k . It turns out that if the channel changes slowly and is properly partitioned, each state can be considered as a steady state, and a state transition can only happen between neighboring states. As a result, a fading channel can be represented using a Markov model if given the average SNR ρ and Doppler frequency f d .
Performance analysis of RS code over finite-state Markov channel model
RS codes possess maximal minimum distance properties which make them powerful in correcting errors with arbitrary distributions. For RS symbols composed of m bits, the encoder for an RS(n, k) code groups the incoming bitstream into blocks of k information symbols and appends n − k redundancy symbols to each block. So the channel coding rate is r c = k/n. For an RS(n, k) code, the maximal number of symbol errors that can be corrected is t = (n − k)/2 . When the number of symbol errors is more than t, RS decoder reports a flag to notify that the errors are uncorrectable. The probability that a block cannot be corrected by RS(n, k), denoted as a decoding failure probability p w (n, k), can be calculated as where P(n, m) denotes the probability of m symbol errors within a block of n successive symbols. The computation of P(n, m) for FSMC channel model has been studied before (see [16,18]).
Packetization and macroblock error probability computation
We use baseline H.263 video coding standard for illustration. H.263 GOB/slice structure is used where each GOB/slice is encoded independently with a header to improve resynchronization. Denoting by N s the number of GOB/slice in each frame, the RS(n, k) code block size n (bytes) is set to such that each GOB/slice is protected by an RS codeword in average, where x is the smallest integer larger than x. No further alignment is used. In case of decoding failure of an RS codeword, the GOBs (group of blocks) covered by the RS code will be simply discarded and followed by error concealment. If a GOB is corrupted, the decoder simply drop the GOB and performs a simple error concealment as follows: the motion vector (MV) of a corrupted MB is replaced by the MV of the MB in the GOB above. If the GOB above is also lost, the MV is set to zero, then the MB is replaced by the corresponding MB at the same location in the previous frame. To facilitate error concealment at the decoder when errors occur, the GOBs which are indexed by even numbers are concatenated together, followed by concatenated GOBs indexed by odd numbers. By using this alternative GOB organization, the neighboring GOBs are normally not protected within the same RS codeword. Thus, when a decoding failure occurs in one RS codeword, the neighboring GOBs will not be corrupted simultaneously, which helps the decoder to perform error concealment using the neighboring correctly received GOB. In order to estimate the end-to-end distortion, we need to model the relation between video MB error probability P MB (n, k) and RS(n, k) decoding failure probability p w (n, k), that is, Since no special packetization or alignment is used, one RS codeword may contain part of one GOB/slice or overlap more than one GOB/slice. It is difficult to find the exact relation between P MB (n, k) and p w (n, k) because the length of GOB in each frame is varying. Intuitively, α should be between 1 and 2. Experiments are performed to find the suitable α. Figure 2 shows the experiment results of RS codeword failure probability and GOB error probability over Rayleigh fading channels. It turns out that α ≈ 1.5 is a good approximation in average. For a source and channel code pair (r s , r c ) or RS(n, k), the channel code decoding failure probability p w (n, k) can be derived from ρ and f d as described in Sections 2.1 and 2.2, then we have the corresponding video MB error probability P MB (n, k) from (5). Based on the derived MB error rate P MB (n, k), a recursive estimation method and an RD-optimized scheme are employed to estimate the minimal end-to-end MSE of the video sequence and obtain the corresponding optimized video coding scheme, which is to be described in detail in the next section.
OPTIMAL DISTORTION ESTIMATION AND MINIMIZATION
We first describe the proposed distortion estimation method for both HP-and IP-based video coding over error-prone channels. Simulations are performed to demonstrate the improved performance of the proposed method. Then an RD framework is used to select the coding mode and quantizer for each MB to minimize the estimated distortion, given the source rate r s , P MB which is derived as in Section 2, and the frame rate F.
Optimal distortion estimation
Recently, modeling of error propagation effects have been considered in order to optimally select the mode for each MB to trade off the compression efficiency and error robustness [11,13,19]. In particular, a recursive optimal per-pixel estimate (ROPE) of decoder distortion was proposed in [13] which can model the error propagation and quantization distortion more accurately than other methods. But the method in [13] is only optimal for the IP-based video coding. For the HP case, the computation of spatial cross correlation between pixels in the same and different MBs is needed to obtain the first and second moments of bilinear interpolated HPs, the process is computationally prohibitive. Most of the current video coding use the HP-based method to improve the compression performance. We propose a modified recursive estimate of end-to-end distortion that can take care of both IP-and HP-based video coding. The expected end-to-end distortion for the pixel f s l at s = (x, y) in frame l is Assuming thatẽ s l is an uncorrelated random variable with a zero mean, which is a reasonable assumption when P MB is relatively low as will be shown in the simulations later, we have We derive a recursive estimate of E{(ẽ s l ) 2 } for intra-MB and inter-MB as follows.
Intramode MB
The following three cases are considered.
(1) With the probability 1 − P MB , the intra-MB is received correctly and thenf s l =f s l . As a result,ẽ s l = 0.
(2) With the probability (1−P MB )P MB , the intra-MB is lost but the MB above is received correctly. Denoting by v c = (x c , y c ) the MV of the MB above, two cases of error concealment are considered depending on whether v c is at the HP location or not.
The clipping effect is ignored in the computation. (ii) If v c is at the HP location, without loss of generality, assume that v c = (x c , y c ) is at HP location that is interpolated from four neighbouring IP locations with MVs: (3) With the probability P 2 MB , both the current MB and the MB above are lost. The MB in the previous video frame at the same location is repeated, that is, v c = 0 = (0, 0): Combining all of the cases together, we have the following results.
(1) If v c is at the IP location, then (2) If v c is at the HP location in both x and y dimensions, then The cases when v c has only x c or y c at the HP location can be obtained similarly.
Intermode MB
(2) If v = (x, y) is at the HP location in both x and y dimensions, the prediction is interpolated from four pixels with MVs: The results of the other MB loss cases are the same as that of the intra-MB. We have the following two results.
(2) If v and v c are at the HP location in both x and y dimensions, then The encoder can use the above procedures to recursively estimate the expected distortion d s l in (7), based on the accumu-lated coding and error propagation effects from the previous video frames and current MB coding modes and quantizers.
To implement the HP-based estimation, the encoder needs to store an image for E{(ẽ s l ) 2 }; for the locations in which either x or y is at HP, the value is obtained by scaling the sum of the neighboring two values by 1/4, and for locations in which both x and y are at HP precision, it is obtained by scaling the sum of the neighboring four values by 1/16. It should be noted that the scaling by 1/4 or 1/16 can be done by simple bitshift. Both IP-and HP-based estimations need the same memory size to store either two IP images, E{ f l } and E{ f 2 l }, or one HP image, E{(ẽ s l ) 2 }, but E{(ẽ s l ) 2 } requires smaller bitwidth/pel since it is an error signal instead of a pixel value. The HP-based computational complexity is less than the IP-based method since it only needs to compute E{(ẽ s l ) 2 } instead of computing both E{ f l } and E{ f 2 l } in the IP-based estimate.
We now compare the accuracy of the proposed HP-based estimation to the original IP-based method (ROPE) in [13]. In the simulation, each GOB is carried by one packet. So the packet loss rate is equivalent to the MB error probability P MB . A memoryless packet loss generator is used to drop the packet at a specified loss probability. QCIF sequences Foreman and Salesman are encoded by the Telenor H.263 encoder with the intra-MB fresh rate set to 4, that is, each MB is forced to be intramode coded if it has not been intracoded for consecutive four frames. The HP-and IP-based estimates are compared to the actual decoder distortion averaged over 50 different channel realizations.
In Figure 3a, the sequence Foreman of 150 frames is encoded with HP motion compensation at a bit rate of 300 Kbps, frame rate of 30 f/s, and MB loss rate of 10%. In Figure 3b, the sequence Salesman is encoded in the same way.
It can be noted that the HP-based estimation is more accurate to estimate the actual distortion at the decoder compared to the IP-based estimation. Figure 4 also shows the average PSNR of the 150 coded frames with respect to MB loss rates from 5% to 20%. When MB loss rate is as small as 5%, the HP-based estimation is almost the same as the actual distortion, while the IP-based method has about 3 dB difference.
The results is as expected since there is about 2-4 dB PSNR difference between HP-and IP-based video coding efficiency given the same bit rate. As the MB loss rate increases as large as 20%, the HP-based estimation is about 1 dB better than the actual distortion, while the IP-based estimation is about 2 dB worse. So the HP-based method is still 1 dB more accurate than the IP-based method. The reason is that the error propagation effects play a more significant role when MB loss rate gets larger, so the coding gain of the HP-based motion compensation is reduced. Also, the assumption in HPbased method that the transmission and propagation errors are not correlated and zero mean may become loose. For practical scenarios, it is demonstrated that the HP-based estimation outperforms the original IP-based method by about 1-3 dB.
Rate-distortion-optimized video coding
The quantizer step size and code mode for each MB in a frame is optimized by an RD framework. [q i, j,l , m i, j,l ] ∈ C the encoding vector for b i, j,l , where C = Q × M is the set of all admissible encoding vectors. For each source/channel pair (r s , r c ), we have the corresponding P MB (n, k) from (5). The encoder needs to determine the coding mode and quantizer for each MB in total L frames to minimize the end-to-end MSE D E (r s , P MB ) of the video sequence, which is defined as where R l is the number of bits used to encode frame l, its maximal value is denoted as R max l = r s /F + ∆ l which is the maximal number of bits available to encode frame l provided by a frame level rate control algorithm with average r s /F and buffer related variable ∆ l . Moreover D l (R l , P MB ) is the estimated end-to-end MSE of frame l, l = 1, 2, . . . , L, which can be obtained as where D(c i, j,l , P MB ) is the end-to-end MSE of MB b i, j,l using encoding vector c i, j,l and D(c i, j,l , P MB ) can be computed from d s l as Since there is dependency between neighboring interframes because of the motion compensation, the optimal solution of (17) has to be searched over C H×V ×L , which is computationally prohibitive. We use greedy optimization algorithm, which is also implicitly used in most JSCC video coding methods such as [10,11,13], to find the coding modes and quantizers for MBs in frame l that minimize D l (R l , P MB ), then find coding modes and quantizers for MBs in frame l +1 that minimize D l+1 (R l+1 , P MB ) based on the previous optimized frame l, and so on. The optimal pair (r * s , r * c ) and the corresponding optimal video coding scheme can be found such that The goal now is to optimally select the quantizers and encoding modes on the MB level for a specific MB error rate P MB and frame rate R max l to trade off the source coding efficiency and robustness to error. The notation of P MB and (r s , r c ) is dropped from now on unless needed.
The optimal coding problem for frame l can be stated as subject to Such RD-optimized video coding schemes have been studied for noiseless and noisy channels recently [19,20,21,22,23,24]. Using Lagrangian multiplier, we can solve the problem by minimizing where λ ≥ 0. For video coding over error-prone channels, GOB coding structure is used for H.263 video coding over noisy channels with each GOB encoded independently. Therefore, if the transmission errors occur in one GOB, the errors will not propagate into other GOBs in the same video frame.
For video coding over noiseless channels, the independent GOB structure leads to the fact that the optimization of (23) can be performed for each GOB separately. However, when considering RD-optimized video coding for noisy channels, the MB distortion D i, j (c i, j , P MB ) depends not only on the mode and quantizer of the current MB but also on the mode of the MB above to take into account error concealment distortion. Therefore, there is a dependency between neighboring GOBs for this optimization problem. We use greedy optimization algorithm again to find the solution by searching the optimal modes and quantizers from the first GOB to the last GOB in each frame.
SIMULATION RESULTS
We first use a simple two-state Markov chain model for simulation to show the performance of the integrated source and channel rate allocation and robust video coding scheme, where the given channel stochastic knowledge is accurate. Then simulations over Rayleigh fading channel is performed to verify the effectiveness of the proposed scheme for practical wireless channels.
Two-state Markov chain channel
Simulations have been performed using base mode H.263 to verify the accuracy of the proposed integrated scheme. In the simulations, the total channel signaling rate r equals 144 kbps, which is a typical rate provided in the 3G wireless systems. Video frame rate is F = 10 f/s. The video sequence used for simulation is Foreman in QCIF format. RS code over GF (2 8 ) is used for FEC. The channel coding rate used are {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}. The source and channel coding rates r s , r c and the corresponding RS code (n, k) are listed in Table 1. A two-state Markov channel model [16] is used, where the state transition is at the RS symbol level. The two states of the model are denoted by G (good) and B (bad). In state G, the symbols are received correctly (e g = 0) whereas in state B, the symbols are erroneous (e b = 1). The model is fully described by the transition probabilities p from state G to state B, and q from state B to state G. We use the probabil-ity of state B: and the average bursty length: which is the average number of consecutive symbol errors to model the two-state Markov model [11,16]. The simulations are performed through the following steps.
(i) For each channel coding rate r c (or RS(n, k)) in each column of Table 1, the RS code decoding failure rate p w (n, k) is computed using (3) for a given two-state Markov channel model. The results for different r c and channel models are shown in Table 2. The average estimated PSNR, PSNR E , of video signals is used to measure the performance: where PSNR l E r s , r c = 10 log 10 is the estimated average PSNR between the original frame l and the corresponding reconstruction at the decoder using the pair (r s , r c ), and D * E (r s , r c ) is the minimal estimated endto-end MSE from (17) where PSNR (n,l) S (r s , r c ) is the PSNR between the original frame l and the corresponding reconstruction at the decoder in the nth simulation using the source/channel rate pair (r s , r c ). Figure 5a shows the average estimated PSNR E of the optimal rate allocation and robust video coding for different channel code rates when the symbol error rate is P B = 0.01 and the bursty length L B = 16 symbols, and the corresponding average simulated PSNR S is of 50 times video transmission. Figure 5b also shows the same comparison when the symbol error rate is P B = 0.05. It can be noted that the estimated PSNR E , which is obtained at the encoder during RDoptimized video encoding, matches the simulated PSNR S very well. The optimal source and channel rate pair can also be found through Figures 5a and 5b for different channel characteristics. The corresponding channel decoding failure rate of the optimal channel coding rates in Figures 5a and 5b are 0.018 and 0.034, respectively.
We also compare the performance when the knowledge of channel model used at video encoder does not match the real channel used in simulations. Figure 6 shows two cases of channel mismatch. In Figure 6a, the video stream, which is encoded based on P B = 0.01 and L B = 16 two-state Markov channel, is simulated using two-state Markov channel with P B = 0.01 and L B = 8. The simulated average PSNR is bet-ter than the average PSNR estimated at the encoder during encoding because the channel model used in estimation is worse than the model used in simulation. On the other hand, when the video stream, which is encoded based on P B = 0.01 and L B = 8 two-state Markov channel, is simulated using two-state Markov channel with P B = 0.01 and L B = 16, the simulated average PSNR is much worse than the average PSNR estimated at the encoder as shown in Figure 6b. Furthermore, the optimal source and channel coder pair obtained at the encoder is not optimal when the channel condition used in simulation is worse than the channel information used at the encoder. This simulation result suggests that the optimal rate allocation and video coding should be focused on the worse channel conditions for broadcasting services.
Rayleigh fading channel
The simulation over the Rayleigh fading channel is also performed to verify the effectiveness of the proposed scheme over realistic wireless channels. In the simulation, QPSK with coherent demodulation is used for the sake of simplicity. The channel is a frequency-nonselective Rayleigh fading channel. An FSMC with K = 6 states is used to model the Rayleigh fading channel. The SNR thresholds for the K states are Figure 6: Average PSNR obtained in channel mismatch cases. (a) Error burst is shorter than that used in estimation. (b) Error burst is longer than that used in estimation.
selected in such a way that the probability that the channel gain is at state s k , k = 0, 1, . . . , K − 1, is , The FSMC state transition is described at the RS codeword symbol level (8-bit RS symbol) with the assumption that the four QPSK modulation symbols within an RS codeword symbol stay in the same FSMC state. Given the average SNR ρ and the Doppler frequency f d , we can obtain the parameters such as steady state probability p k , RS symbol error probability e k , and state transition rates [17]. Then following the procedures described in Section 2.1, we are able to analyze the RS code performance over Rayleigh fading channels. Table 3 shows the estimated RS code decoding failure probability using FSMC model and the simulation values when the SNR is 18 dB and the Doppler frequency is 10 Hz and 100 Hz, respectively. The RS codeword error rate obtained by the FSMC matches the simulation results very well when f d is 10 Hz. When f d is 100 Hz, the FSMC-based estimate is not as accurate as the results when f d is 10 Hz, but is still within acceptable range compared to the simulated values. Figure 7a shows the average estimated PSNR E and simulated PSNR S of the video coding after optimal rate allocation and robust video coding for different channel code rates when the SNR is 18 dB and f d is 10 Hz. Figure 7b also shows the comparison when the f d is 100 Hz. Even though it can be noted that there are about 1 dB difference between the estimated PSNR E and the simulated PSNR S , the near-optimal source and channel rate allocation (or the channel code rate r c ) obtained from the estimation (0.8 and 0.5 as shown in Figure 7) still has the maximal simulated end-to-end PSNR over Rayleigh fading channels. The simulation results verify the effectiveness of the proposed scheme to obtain the optimal source and channel coding pair when given a fixed total bit rate for wireless fading channels.
Experiments are also performed when the knowledge of channel Doppler frequency used at the video encoder does not match the actual Doppler frequency used in simulations. Figure 8 shows two cases of channel mismatch. In Figure 8a, the video bitstream which is encoded based on f d = 10 Hz is simulated over fading channels with Doppler frequency Figure 8b, the video bitstream which is encoded based on f d = 100 Hz is simulated over fading channels with Doppler frequency of 100 Hz and 10 Hz, separately. In both scenarios, the video quality would be better if the actual condition in terms of MB loss rate is smaller than the knowledge used at the encoder, and would be worse otherwise. Furthermore, the optimal source and channel coder pair obtained at the encoder is not optimal when the channel condition used in simulation is worse than the channel information used at the encoder. This simulation result again suggests that the optimal rate allocation and video coding should be focused on the worse channel conditions for broadcasting services.
CONCLUSION
We have proposed an integrated framework to find the nearoptimal source and channel rate allocation and the corresponding robust video coding scheme for video coding and transmission over wireless channels when there is no feedback channel available. Assuming that the encoder has the stochastic channel information when the wireless fading channel is modeled as an FSMC model, the proposed scheme takes into account the robust video coding, packetization, channel coding, error concealment, and error propagation effects altogether. This scheme can select the best source and channel coding pair to encode and transmit the video signals. Simulation results demonstrated the optimality of the rate allocation scheme and accuracy of end-to-end MSE estimation obtained at the encoder during the process of robust video encoding. | 8,146.6 | 2004-01-01T00:00:00.000 | [
"Computer Science",
"Business"
] |
Re-Thinking the Very Concept of Peace
The appreciation of Peace, the promotion of its values, and the efforts for its attainment as the only way to cope with horrifyingly destructive dimensions of the war we are facing with on a daily basis since so many long years all across the world urges nowadays to the extreme. This necessity appears to such an extent, and with such intensity, as to having been transformed more than ever in one of the most dominant catchphrases of political, social, intellectual and practical discourses of our violent times, a ubiquitous topic within universities, governments, civil societies and other non-governmental organizations and institutions. There are large pacifist movements which are facing off ever more actively against the war. There is also an ever more active engagement of many intellectuals and artists poised to face off against the hawkish and bellicose aesthetics we were facing with up to last two or three decades in most Western countries by a constructive bolstering and promotion of a peaceable and pacifistic aesthetics. By the 1970s the new discipline of peace studies, embracing the history and philosophy of peace, was well establish.1 Since 1980 there is even a university dedicated to Peace studies, the United Nations mandated “University for Peace”, with its main campus in Costa Rica, which is launching its programs and establishing its centers around the world. About 30 years ago will faced and will be very active well known the CPP, Concerned Philosophers for Peace is the largest, most active organization of professional philosophers in North American involved in the analysis of the causes of violence and prospects for peace. And, many philosophers and thinkers are engaged in the international peace dialog and a large number of separated initiatives that have involving a significant number and pages of essays and conferences on philosophy of war and on the Philosophy of Peace, too.
Introduction
The peacemaking role of Philosophy in the actual world conflicts
Starting point
Blessed are the peacemakers, for they will be called sons of God.
In the last two or three decades various theories as to how world peace could be achieved have been proposed.However, we have no intention to embark in this paper upon a defense or assessment of any of them, neither to articulate any theory or ideology in regards.Rather than a mere partisan approach to this or to that pacifistic and anti-bellicistic theory or ideology whatsoever, for such a crucial question as peace the activation of a from-the-roots analytical approach is what we necessarily need.And this is what we intend to do here.By examining analytically, from a philosophical viewpoint, the very concept of Peace, we aim to show the essential role Philosophy could play, and pricelessly constructive contribution it could give to what it can be considered as one of the greatest challenges faced by the societies of our time -that is, the prevailing of Peace and Reconciliation over War and Conflict.
Large-scale pacifistic and anti-bellicistic movements are nowadays a matter-of-fact, and, beyond any doubt, they are all positive.Nevertheless, what it is at stake in an issue as that of the peace is from a philosophical point of view of such a crucial importance as to not permitting us to limit ourselves to, and to be pleased with just highlighting the existence of pacifistic feelings and bemoaning attitudes against the war.However important these feelings and emotive responses may be, they are still not sufficient in themselves to make peace possible, nor to make it prevail over war and conflict.War and conflict often come into being and are moved on by forces essentially other than emotive ones.
Rather than emotional feelings and attitudes, the achievement of peace in a world that is tremendously devastated by wars and conflicts needs a multisided and multilevel radical change.And it is precisely at this very level that Philosophy can play its immense role and give its priceless constructive contribution, inasmuch as the human world rests upon, conforms to, and manifests the very conception we have of it.This premise applies both to its current and future states, to its actuality and potentiality.Our actual world is such, and not other, because such, and not other, is the very conception we have of it.There is absolutely no other alternative manner to change our world except by modifying and by transforming the conception we have of it.Rather than emotional feelings and bemoaning attitudes against the war, all we need today is first and foremost a from-the-roots change of the very conception of peace upon which most of actual large-scale pacifistic and antibellicistic movements are being based on.
Most of people do not love peace as such, for its being peace; most of people want peace because they have nowadays a sense of horror towards the horror of the war.It is this fear and horror of the war that constrains most of people to appreciate peace; the greater the sense of fear and horror of war, the greater the appreciation of peace.The result of all this is nothing but the depauperation of the concept of peace and of its nature on at least three genealogically and structurally interweaved levels: semantic, existential and ontological.
As for the first level, the concept of peace appears reduced to a mere negation of the war conceived as external physical violence only, leaving thus out other forms of violence, such as the spiritual, psychological, metaphysical, epistemological, symbolic, discursive, structural, cultural, anthropological, racial, political, economical, ecological, etc.
As for the second level, it is an inevitable consequence of the first level; once reduced to a mere negation of the war conceived as external physical violence only, peace becomes synonym of apathos, of inaction and lethargy, while war stands as synonym of pathos, of action and liveliness.
As far as the third level is concerned, it is an inevitable consequence of the second level; once reduced to apathos, to inaction and lethargy, peace appears a secondary, mediated reality, pertaining to the sphere of qualities and values, while the war appears a primary, initial reality, pertaining to the sphere of being; the first being thus artificial, suffering the everchanging equation of being and nonbeing, whilst the second is natural, enjoying the never-changing fullness of being.This triple depauperation of the concept of peace is at the heart of Western culture since its very beginnings.The war, in the broadest and most literal sense of the word, forms the core of the very éthos of Homeric poems, which are beyond any doubt the fountain-head of the entire Western culture.The Greek term "pólemos", which signifies the "war", even as a hypostatized being, as "daímon", shares the very same etymology with "pólis", the "city-state", regarded as the very "éthos", the "dwelling place", the "living space" of the human beings 1 , expressing thus clearly enough the view that war is a pure natural state, and, besides this, the political view that the "living space" of man (pólis, from which the term "politics" derives) is built on through war rather than through peace 1 .Besides these ontological and existential nuances, the war was the source of the aesthetic experience of the archaic Greek society.Sharing the same etymology of the verb "chaíro", meaning "to be pleased", "to be satisfied", "to rejoice", and of the substantive "cháris", signifying the "grace", the "beauty", even as hypostatized being, as "daímon", the Greek term "chármi" does not refer to the "fighting" only, but first and foremost to the "pleasure received and experienced from the fighting" 2 .And the greatest pleasure received and experienced in and from fighting was what ancient Greeks used to name "kléos", a "commemoration", a "remembrance through song and poetry"the core of their cultural and moral éthos. 3is very idea was embraced and adopted from the successive generations and culture, to which belongs philosophy.For Heraclitus, the war and conflict was the very principle of all that it is, of the existence as a whole."We must know that war (pólemos) is common to all and strife is justice, and that all things come into being through strife necessarily" 4 ."War is the father and king of all things, it shows some as gods, some as men; it makes some freemen and others slaves." 5 For Sophocles the pure natural state of human being is that of "tò deinótaton", "the most open of the beings to suffer from and make use of the violence".This bellicistic cosmology and ontology are present -explicitly or implicitly -in most of modern philosophies; these "war philosophies", as Karl Popper defines them, can be gathered in three main groups: the biologicalvitalistic (Stirner, Darwin, Spencer, Nietzsche, Freud, Ortega y Gasset, Walzer, Dockrill etc.), the mechanistic-utilitarianistic (Machiavelli, Hobbes, Bacon, Locke, Grotius, Rousseau, Orinde etc.), and the dialectic-historicistic (Hegel, Marx, Lenin, Clauserwitz, Strauss etc.).
In spite of the differences, they all share the same conceptual view about the war.That is, the war is quite natural, profoundly biological, and practically unavoidable; that the war and violence represent the vitality through which the life overcomes itself, and by this, it generates its new possibilities.
Immanuel Kant reflection on the war and peace have extended over more than forty years, beginning from 1755 until to the first edition of Zum ewigen Frieden (Perpetual Peace), 1795, and Anthropologie in Pragmatischer Hinsicht ( known as Anthropology), at 1798, that was the final extended work entirely from Kant's own hand. 6We can see the evolution of ideas on the war and the peace through notable estimates of evils and benefits of wars, to the suggestion of a federation of nations and the perpetual peace.Kant thought that perpetual peace was an ideal to be approached but not completed.In Kant's language, it is an "ideal incapable of realization." 7It's interesting the interpretation of Hegel, in Philosophy of Right, on Kant's idea of perpetual peace as an ideal toward which mankind should approximate. 8gel had claimed more than once that antagonism is the very core of the dialectic self-evolution of the Objective Spirit or Objective Mind, and more, according to what make evident Karl Popper, Hegel thought that "war is not a common and abundant evil but a rare and precious good" 9 .
More, it will be no right to charge the Marks' paradigm as the first "war philosophy", or "class struggle philosophy" ; that's enough to come here a piece of the Marx' letter (1852) to J. Weydemeyer, in New York: "…no credit is due to me for discovering the existence of classes in modern society or the struggle between them.Long before me bourgeois historians had described the historical development of this class struggle and bourgeois economists, the economic economy of the classes" Whereas, in Nietzsche's words, "the war is a condition to the life's generation".After few years Bertrand Russell have been tried to classified the wars in the his articles Ethics of the War (1915) 1 , and so-called the class struggles would be the Wars of Principles; we can make the difference between the class struggles and the civil war, or to the violent revolution.
Peace must be understood some dimensions: personal, social, national, international and global, etc.So, we can remember the ancient cultures and meanings of peace, such as goddess of peace Irene in the Greek tradition which leads to material well-being, Salam that expresses a wish for peace interpersonal dimension of relations, etc. 2 What we need nowadays is precisely the from-the-roots change of the very conception of peace and its respective nature.And this can not be but a challenging task of Philosophy, which is called on today, more than ever, to embark on that very path that Heidegger defined paradigmatically as a "metanoic evolution".More than to fostering just pacifistic and antibellicistic feelings and attitudes, the actual task of Philosophy is to deracinate what it has itself planted, the triple supremacy of the war over the peace.It is its duty and mission towards the very future of the humanity to eliminate the semantic dependency of the concept of peace on that of the war; it is its duty and mission towards the very future of the humanity to eliminate the reduction of the concept of peace to a mere negation of the war conceived as an external physical violence; it is its duty and mission towards the very future of the humanity to eliminate even the reduction of the war to a mere external physical violence only; it is its duty and mission towards the very future of the humanity to work on for the conception of peace as a vital state, as it did for the conception of war.Finally, it is its duty and mission towards the very future of the humanity to work on for the conception and the development of a new anthropology -an anthropology that could pave the way for a possible semantic and existential supremacy of peace over the war, as far as the war will continue to exist as a reality in its own, same as the peace, and in such a case, when we choose one of two alternatives, we will choose peace for the peace sake, or war for the war sake, without any "because of" in-between.And to do this, Philosophy must work on for the conception of human being as personal, that is, as an opened, co-existential, relational and dialogic being.And to continue with conception of reason and wisdom, so on.It's a very crucial role to revision the idea of reason, to revitalize the notion of wisdom and philosophy.As we can select by professor Jenny Teichman: "The inimical "appearance of reason," which permeates contemporary philosophy, is the result of Western man abandoning his search for wisdom, distorting his "philosophical vocabulary," and "pervert(ing) the meaning of the noetic symbols."" 3 Certainly it is an extremely difficult task, but not impossible.And we have no doubt that Philosophy can achieve it successfully.For our human world rests upon, conforms to, and manifests the very conception we have of it.And this premise applies both to its current and future states, to its actuality and potentiality.Our world will not be such as it is, but other, because other, and not such as it is, will be the very conception we will have of it. | 3,295.6 | 2017-05-19T00:00:00.000 | [
"Political Science",
"Philosophy"
] |
Development of a high-sensitivity 80 L radon detector for purified gases
Development of a high-sensitivity 80 L radon detector for purified gases K. Hosokawa1, A. Murata1, Y. Nakano2, Y. Onishi1, H. Sekiya2,3, Y. Takeuchi1,3,∗, and S. Tasaka4 1Department of Physics, Kobe University, Kobe, Hyogo 657-8501, Japan 2Kamioka Observatory, Institute for Cosmic Ray Research, The University of Tokyo, Higashi-Mozumi, Kamioka, Hida, Gifu, 506-1205, Japan 3Kavli Institute for the Physics and Mathematics of the Universe, Todai Institutes for Advanced Study, The University of Tokyo, Kashiwa, Japan 277-8583 (Kavli IPMU, WPI) 4Information and Multimedia Center, Gifu University, Gifu 501-1193, Japan ∗E-mail<EMAIL_ADDRESS>
Introduction
The noble gas 222 Rn is continuously generated from the decay chain of the 238 U series.It has a halflife of 3.82 days and potentially dissolves into water, xenon, argon, and so on.It can be a serious background source for underground experiments.
For example, in the Super-Kamiokande experimental area in Kamioka mine, the radon concentration is about 3000 and 50 Bq/m 3 in summer and winter, respectively, because of the wind direction [1].Thus, if one conducts a direct dark-matter detection experiment by searching for the annual modulation signal [2,3], the radon concentration around the detectors is required to be controlled and kept low.
A higher-sensitivity radon assay would be one of the essential techniques to improve the sensitivity of the underground experiments.Therefore, we have developed a new high-sensitivity radon detector for purified gases, such as xenon, argon, and air.
High-sensitivity radon detector for purified gases
We developed a high-sensitivity radon detector (70 L radon detector) [4] for underground experiments in Kamioka.The principle techniques in radon detection are the electrostatic collection of the daughter nuclei of 222 Rn and the energy measurement of their alpha decays with a PIN photodiode.More than 90% of 218 Po atoms tend to become positively charged [5].They will be collected on PTEP 2015, 033H01 K. Hosokawa et al. the surface of the PIN photodiode by the electric field and alpha decays in the decay chain will be observed.
In order to lower the background level of the 70 L radon detector and measure the radon concentration in purified gases like the noble gas xenon, we have developed a new radon detector (80 L radon detector) [6].In the 80 L radon detector, knife-edge flanges with metal gasket were used instead of the acrylic plate and Viton O-rings that were used in the 70 L radon detector.
Due to the structure, the lowest vacuum level of the 70 L detector was 10 kPa.After replacing the parts (acrylic plate and O-rings), the vacuum level limit of the 80 L radon detector became less than 10 −4 Pa.
The detector geometry has also been modified from the 70 L radon detector to apply the same "Grade MC" electrochemical buffing used in another previous radon detector [7].The volume of the detector vessel increased from 70 L to 80 L. The standard value of the supplied high voltage for electrostatic collection was also changed in the 80 L radon detector.
These development works were done collaboratively among Kamioka Observatory, Gifu University, and Kobe University.
Performance of the 80 L radon detector
In order to use the 80 L radon detector in underground particle physics experiments, several basic performance elements were measured with calibration systems that were prepared in Kobe University and in Kamioka Observatory.
Calibration system
Figure 1 shows a schematic diagram and a picture of the calibration system for the 80 L radon detector in Kobe.The calibration system in Kobe consists of a refrigerator to control absolute humidity, a circulation pump, a dew point meter, a mass flow controller, a 222 Rn source, and an 80 L radon detector.A PYLON RNC 226 Ra with 78.3 Bq radioactivity was used as the 222 Rn source.The accuracy of the radioactivity of the radon source is ±4% according to specification of the device.
The calibration system in Kamioka consists of almost the same devices, but an ionization chamber was also used as an alternative device to estimate the radon concentration in the system.We assigned ±5% uncertainty as the accuracy of the ionization chamber from the variation of the output values under the same measurement conditions.The details of the calibration system in Kamioka will be explained in a future publication (Y.Nakano et al., manuscript in preparation).
A different 80 L radon detector from that in Kobe was used in Kamioka.We carried out several measurements in both Kobe and Kamioka, then compared the results.We assigned ±10% uncertainty as the systematic difference between the different 80 L detectors.Within this uncertainty, the results from both sites were consistent.
Energy spectrum
Figure 2 shows a typical pulse height distribution from alpha decays of 222 Rn daughter nuclei.Since 214 Po exists at a lower stream of the decay chain, the radon detector has a higher collection efficiency for 214 Po than 218 Po.Furthermore, alpha-rays emitted from 212 Bi decay in a Th-series decay chain could overlap with the 218 Po signal region.Thus, the 214 Po peak was used to measure 222 Rn concentration.For example, in this measurement, the analog-to-digital converter (ADC) channels from 169 to 179 contain 90% of signals in the 214 Po peak and were selected as the 214 Po signal region to obtain the integrated count rate in count/day.The supplied voltage value and the settings of the amplifier PTEP 2015, 033H01 K. Hosokawa et al. affect the peak position.In each measurement condition, this pulse height distribution was checked, then the appropriate ADC region was used for the integration window.
Calibration factor
The calibration factor (CF) in (count/day)/(mBq/m 3 ) was used to obtain the radon concentration in mBq/m 3 from the integrated 214 Po count rate in count/day.Using the calibration system, the dependence of the calibration factor on the supplied high-voltage value and/or humidity of the purified gases was obtained.
High-voltage dependence
Reverse bias minus high voltage was supplied to the PIN photodiode of the 80 L radon detector.An electric field between the photodiode and stainless steel vessel was formed by the supplied high voltage.The high-voltage value affects the electrostatic collection efficiency of the 222 Rn daughter nuclei.Thus, the calibration factor depends on the supplied high-voltage value.The high-voltage dependences of the calibration factors were measured with the calibration systems in both Kobe and Kamioka.The result is shown in Fig. 3.The calibration data were taken with the Rn source and G2-grade Ar gas (>99.9995vol.%) or G1-grade air (impurity <0.1 vol.ppm) at atmospheric pressure.The averaged absolute humidities for Ar and air were 0.22 g/m 3 and 0.0021 g/m 3 , respectively.The supplied high-voltage values were changed from −0.2 kV to −2.0 kV.The calibration factor rises with higher supplied voltage value, as expected.The dashed line shows 30% detection efficiency of the 80 L radon detector for 214 Po, assuming 90% efficiency of the ADC integration region.Note that the maximum detection efficiency would be 50% considering the direction of the alpha decays on the photodiode surface.This means that 60% of the radon daughter nuclei were collected by the electrostatic field at the 30% detection efficiency line.The same line is also drawn in Fig. 5. [4].The air data were taken in Kamioka, and others were taken in Kobe.For details on the dashed line and the error bars, see Fig. 3.
4/7
We choose −2.0 kV as the standard high-voltage value for Ar and air.For Xe, we choose −1.0 kV as the standard value, since a discharge around the feed-through of the PIN photodiode was observed in some calibration runs at −2.0 kV.
Humidity dependence
Concerning the electrostatic collection method, the neutralization effect of 218 Po atoms is known [8].Therefore, humidity dependence calibrations were conducted.
Using a refrigerator, we controlled the dew point temperature, i.e., the absolute humidity.Figure 4 shows an example of the calibration run.In this measurement, the radon source was bypassed from the calibration system after supplying radon gas to the system.Therefore, both the radon decay and the dependence of the integrated 214 Po count can be seen in the plot.
The calibration factor as a function of absolute humidity is shown in the 80 L radon detector.G1-grade air, special-grade Ar (99.999%), and purified Xe were used for these measurements.The air data were taken in Kamioka, and others were taken in Kobe.We estimated the total systematic uncertainties in these calibration factor measurements as 12%.The sources of the systematic uncertainties were the following: difference in the 80 L detectors 10%, accuracy of the radon concentration 5%, accuracy of the dew point meter 2%, and accuracy of the total volume estimation of the calibration system 2%.Regarding the accuracy of the radon concentration, we took the larger uncertainties from the ionization chamber.
Background level
Figure 6 shows the pulse height spectrum of 132.5 days' live-time background data in Kobe.After evacuating the 80 L radon detector, purified air was filled to atmospheric pressure level, then the inlet and outlet valves were closed.The supplied high-voltage value was −2.0 kV in this background run.The integrated count in the 214 Po signal region during the background run was 107 count.Therefore, the obtained background level of the 80 L radon detector was 0.81 ± 0.08 (stat.only) count/day.
The background level of the 70 L radon detector for air was 2.4 ± 1.3 count/day [4].The background level of the 80 L radon detector has been lowered by a factor of 3 from the 70 L radon detector.
In Fig. 6, some noise hits were observed during the background run, at around ∼0.02 count/day/bin level.However, if we just apply the calibration factor for purified air, the corresponding radon concentration during the background run becomes 0.37 ± 0.05 (stat.only) mBq/m 3 .
Conclusion
A new high-sensitivity radon detector (80 L radon detector) has been developed.The basic performances of the 80 L radon detector have been studied.
From the high-voltage dependence of the 80 L radon detector, we chose −2.0 kV as the standard voltage to supply for the measurements with purified air and argon.For purified xenon, we chose −1.0 kV as the standard voltage.
As the background level of the 80 L radon detector, we obtained 0.81 ± 0.08 (stat.only) count/day with purified air and −2.0 kV high-voltage value.This count rate is about one-third of the 70 L radon detector's background level.Owing to the improved hermeticity of the 80 L radon detector and the improved electropolishing, the background level was lowered.
The background level of the 80 L radon detector corresponds to a 0.37 ± 0.05 (stat.only) mBq/m 3 radon concentration.This provides a high-sensitivity radon assay in µBq/m 3 concentration in the application of this 80 L radon detector in underground particle physics experiments.
Fig. 1 .
Fig. 1.Schematic diagram of the 80 L radon detector calibration system in Kobe.A picture of the system is also shown.
Fig. 4 .Fig. 5 .
Fig. 4.An example of the variation of the integrated count rate of 214 Po in the 80 L radon detector with different dew point values.The solid red line shows a fit to the data at −60 • C with the expected radon decay constant.The vertical dashed lines indicate the regions used to obtain the CF.
Fig. 5 .Fig. 6 .
Fig.6.Pulse height spectrum in the background run with purified air under 1 atmospheric pressure and −2.0 kV high-voltage value in Kobe.ADC channel 137-147 was used as the 214 Po signal region.The huge peak at around 50 ADC count is due to 210 Po, which was accumulated during the calibration measurements. | 2,644.6 | 2015-03-01T00:00:00.000 | [
"Physics"
] |
Social and investment redistribution – the trend of the future
Russia is going through a stage of becoming accompanied by complex internal problems. They are associated with the development of a legal framework and reconsidering the new type of economy, the negative impact of global crises and evident challenges in the distribution of the money supply. An active social policy of the state is aimed at the population strata in need of support: pensioners, disabled people, children. But the gap between low-income and rich groups is widening, and the middle stratum as a social component has practically disappeared. This situation suggests the need to search for tools to improve the prosperity of the population. Besides, low-efficiency investment systems of large players in the sector with state participation are registered, which requires the search for additional sources of investment and ways of their rational use with proper control. Which, in its turn, will reset the load from the state
Introduction
Many countries all over the world are addressing the issue of social equality, especially those that have experienced rapid economic growth.
And it was this fact that demonstrated the growing gap between the rich and the poor, between actively-developing types of business and uncertainty of infrastructure. The importance of considering these issues is addressed in the studies of Chinese economists, who indicate that "without the support of effective social policy, economic growth cannot be sustainable" [1].
The search for opportunities to stabilize social phenomena drew our attention to the model developed by John M. Keynes, which combines several significant parameters of society affecting growth of the population's prosperity.
The scientist deduced several laws which are also effective in times of crisis. He determined the role of the state and the degree of responsibility. Keynesian work showed the possibility of combining microeconomic indicators into a single macro-system. And even though the theory in question has many opponents, there are also followers. They do not only support the derived postulates but also actively develop them. According to M.
Milgate, the work of J. M. Keynes "is a valuable reference for economists and researchers interested in the interrelation of capital and employment" [2].
Modern scientists are also making attempts of model-building in search of evidence of the effectiveness and rationality of the proposed innovations.
Alberto Bucci and Alberto Russo [3] speak about the influence of finance on long-term development with the proper organization of its direction and its orders. Tobias Götze and Marc Gürtler discuss the role of securities in the insurance system [4].
Our research is aimed at finding ways and means to raise prosperity, primarily of the employable population having low incomes and personnel of large industrial companies with state participation.
The choice of the corporation was obvious. The transport industry is one of the most problematic, but at the same time it is highly-demanded for our living. Russian Railways is one of the largest companies experiencing some difficulties with the investment system. The study has revealed significant gaps the elimination of which may have a positive effect on the growth of income of the company's employees and its capitalization.
Micro-economic opportunities are not involved, the implementation of which is constrained by the institutional-recessionary economy, management conservatism and inconsistency of economic interests.
We believe that in this case a mechanism for the redistribution of the corporate level uniting the interests of the corporation and the personnel is prospective. The proposed mechanism is based on the model of J.M. Keynes, transforming the concept of the macrolevel to the micro-level of Russian Railways (JSCo RZD).
Materials and methods
Adhering to the generally accepted rules of scientific and practical research, we used empirical methods -observation, comparison, classification.
Theoretical methods included formal logic, and specifically modelling. Also, we employed statistical analysis, exploration of developments and approaches to solving the stated problem by domestic and foreign schools and practical solutions based on the experience of foreign and domestic companies.
Results
Nowadays, investments as an economic category cannot be considered only from a theoretical point of view.
The need and prospects for transforming the macro-and microeconomic environment dictate a rational approach and actualize the development of investment policy, its mechanism and tools and its implementation within specific industries and companies.
In the course of our study we identified several main reasons for the low efficiency of investment activity of JSC "Russian Railways". Among these we include: -on the part of the state -the negative impact of state policy (state monopoly, problems of economic literacy), legislative barriers (limited sources, lack of benefits, lack of compensation for losses and negative earnings), lack of mutual responsibility; -within the company -lack of targeted focus of investment projects, limited tools for the investment process, imperfect forms and methods of engaging investment resources, lack of awareness and public control.
This division is relative because all the reasons are interrelated and co-dependent, their combination aggravates the situation, which is expressed, primarily, in a shortage of investment resources.
With relatively large-scale investments into Russian Railways, there is a clear deficit of them, which hinders the necessary progressive development of the transport holding, including the innovative one [5].
The main sources (inhouse and government) are limited by profit, the possibilities of state financing and the directions of their use (a significant share provides simple reproduction, and has the nature of subsidies). It is necessary to search for an additional source, innovative for the domestic economy, a mechanism and tools for corporate investment, mediated by redistribution processes that combine economic interests of the subjects of accumulation, transformation, investment use and distribution of the increased value.
The classical and Keynesian SI savings-investment models solve problems and provide macroeconomic equilibrium.
At the same time, the methodology and tools for its implementation can be used at the level of a macro-forming company, which is Russian Railways. In our opinion, the transfer of the macro-level economic philosophy to the micro-level will allow not only to universalize the mechanism of social and investment redistribution but also to enrich it with processes that take into account the specific nature of the leading transport company.
In the redistributive process we are considering social savings are the basic component of the investment and savings process, which is most fully considered and substantiated by J.M. Keynes.
As a part of income, savings surpassing consumption are an integral part of an active investment process.
The saved part of the income is not only associated with the investment activity of the population but also through individual instruments has a significant impact on the volume of production and the level of unemployment.
The accumulation process depends not only on the level of income but also on the motives of behaviour dictated by the ability to foresee the economic situation in the future.
The motives and factors affecting the choice in favour of savings were studied by Adrian Furnham and Helen Cheng [6].
Redistribution is a set of numerous tools for the interaction of all actors in the distribution process: owners and users of production factors connected by economic and social interests.
The more complex economic relations are and the more developed the economic system is, the more widespread and effective are the mechanisms and instruments of redistribution. In our case, these are the tools for transforming social savings into investments and, as a sign of their effectiveness, the growth of welfare and capitalization.
Transforming the economics philosophy of redistribution of macrolevel into microlevel is a logical continuation of the development of social and investment redistribution.
State participation as a stimulator of investment activity and a regulator of social savings has not been realized yet.
We should note that potential investors are not sufficiently informed and aware of their opportunities to gain additional income and other benefits from the manifestation of investment activity.
Similar problems occur in our country and in some other countries likewise. It is confirmed in the study "Individual Investor Ownership and the News Coverage Premium" by Paul Marmora [7].
A low level of general economic and financial literacy entails adverse outcomes for all subjects of investment and redistribution relations, limiting investment activities, restraining the development of the economy, hindering the growth of effective instruments: -consumers of financial services: citizens and corporations; -institutions providing financial services; -the state regulating the financial market.
Attempts are made to implement educational programs through further training as part of the extension of competencies for students of high schools.
As for adults, citizens of pre-retirement age have been involved after the pension reform was launched. The information environment focuses on familiarization with the legal framework -complex and obscure [8].
There are positive aspects of this process, in our opinion, which is the emergence of involving and motivating training programs which have been developed and implemented by the interested banking structures in the recent two years, such as Sberbank, VTB, Tinkoff and insurance parties included in their group.
Involving corporate workers and population into economic development must provide overcoming low level of financial literacy and understanding the processes of redistribution and its socio-economic, institutional and informational characteristics and also methodological approaches.
Insufficient activity coupled with a lack of basic knowledge of the financial market, a lack of desire to take responsibility, supported by the low standard of living of the population, are aggravated by an inconsistent reform system on the part of the state, a lack of effective means of consumer protection; weak regulation of the insurance market, including the pension market, and a paucity of motivational and stimulating instruments.
All this contributes to citizens' distrust in the existing financial institutions and government agencies. As a result, the population is actually excluded from the number of economic entities -potential investors.
The results of diagnosing the investment problems of the Russian transport company have outlined some areas of study of the practising schemes and tools of social, structural, organizational-managerial and banking-financial properties.
Attention has been drawn to the corporate social responsibility (CSR), which originated in the 50s-70s of the previous century and affects different issues, including social security of personnel and sponsorship of social protection.
The CSR implementation program has become widespread, its level being the designation of the company's status in the world ranking. CSR includes several items. Its influence on different risks and optimality of investment is studied on the example of corporations in the Asia-Pacific region [9].
The study of foreign experience and the emerging domestic practice of engaging temporarily free funds from the population through the institutions of social and investment redistribution makes it possible to systematize its instrumental elements into accumulation and transformation subsystems and to substantiate the relevance of the development of the mechanism within the company.
Studies in the field of engaging part of the population's income into the economic development of different countries have shown that substantial results were achieved due to the reservoir-investment role of savings, the share of which in personal income, for example, in Japan equals 20%, in Germany, France, Italy equals 11-12 %.
In the redistribution process, personal savings account for 1/3 of all capital investments. As a reverse effect, there is a decline in production in the post-crisis period: studies note that a decrease by 1% in production is associated with a decrease in investment by 2.4%.
Trends in the development of investment business and the totality of existing financial and non-financial instruments, management of investment processes are revealed.
In the American practice, where the state stimulates a high level of consumption, and, consequently, the share of savings is relatively low, the investment business is mobile, multi-structure, and the stock market is primary in active attracting of savings.
In the EU countries, particularly in Germany, an active conductor of savings in investments is the banking and financial sector, through which savings are channelled to investment-attractive sectors of the economy.
At the same time, all national models demonstrate rationality, regulatory security and, on the part of the state, an ability to control and support accumulation institutions and the direction of transforming the unused part of the population's income into the future development of the economy. In the Asian practice, where the share of savings is much higher, there is a clear pattern of government regulation and planning.
These conclusions will help to find options of efficient solutions when choosing incentives, motives and tools, and an effective working mechanism of social and investment redistribution on the scale of a large corporation.
As a result of studying the international experience of using promising and effective instruments, we have outlined the Block of Savings and Investment Income, the Block of Savings Accumulation, and the Block of Savings Transformation.
The Block of Savings and Investment Income is designated as the determining one. It contains the key corporate components: the company itself, which is in need of constant and massive investments; a source of constant investment -savings of employees (population); the possibility of forming corporate structures (pension fund, insurance company, investment fund, corporate bank) -under the general management and organization of savings and investment processes.
The instruments included in this block are extremely clear and profitable: interest rate on the deposit; investment dividends; the opportunity to choose an income-generating pension plan, lucrative insurance services and special conditions of the banking sector, including favourable lending; an expanded range of stimulating social services (nonmaterial).
The widest group of institutions is represented by the Block of Savings Accumulationthis is a common international practice that indicates the interest of the state and business in attracting private investment to decrease the burden [10].
The degree of development, efficiency, structural features depend on national approaches and legislative frameworks.
Affiliation is exclusive and characterized by elements of corporate and/or industryspecific form. The accumulation institutions include the securities market, investment banks, universal banks, development banks, stock and guarantee banks, public pension funds, private pension funds, corporate pension funds, savings banks such as pension, insurance, construction ones; insurance entities, insurance companies. The direction and affiliation of institutions determine the set of tools and the direction for the development. The tools for substantiating the possibility of accumulating savings for investment purposes are the state target strategy, state incentives, state support, state control, regulatory support, tax preferences, information consulting. State participation directly depends on investment policy and legislation, but progressive systems are mobile and promptly respond to challenges and threats. The tools described above are the macro level, the basic component of the corporate micro-level efficiency. A modern feature of this stage is state support for national economic systems of a new format and the urgent need for their formation [11].
The Block of Savings Transformation into investments is limited to the stock market, investment companies, investment banks which run investment business.
The same institutions can carry out direct investments, providing that they are merged with the pension, insurance institutions, insurance funds, etc. The instruments include securities, investment lending, direct investment, risk redistribution, corrective linking mechanisms, mobile changes in legislative conditions. We should draw attention to the absence of management companies, which does not mean their absence in the savings and investment structures. They develop under certain conditions, and those functions which contribute to the development of investor relations are legally defined, in contrast to the functions of Russian management companies, whose activities will be discussed in the next paragraph. The international experience of the institutions of this block is of interest because the states are guided by the trends and forecasts of the investment business, which are expressed in loyal legislative reforms.
Foreign experience and the emerging domestic practice of engaging temporarily free funds from the population through the institutions of social and investment redistribution made it possible to systematize its instrumental elements into subsystems and substantiate the relevance of the development of the mechanism within the company.
So the authors (Reshetnikova et al. (2019)), to have a high degree of enterprise financial security is considered «to be able to develop and implement the financial strategy independently following the objectives of the overall corporate strategy and the principles of corporate governance in conditions of uncertain" [12].
The domestic practice of corporate redistribution of a social and investment nature is represented primarily by corporate plans for non-state pension insurance, some relatively voluntary insurance, certain areas of mutual investment funds that bring them closer to the low-income stratum of the population, banking programs for loyal and VIP clients, instrumental elements of corporate social responsibility.
Exploring the possibility of involving low-income strata of the population into investment process is not new and has been studied by some foreign authors [13].
The corporate mechanism of social and investment redistribution of Russian Railways is algorithm-driven by the purpose; tasks; organizational and functional principles all of which unite the logistics of the functions of intercorporate institutions by means of its instruments (directors, accumulators, transformers, recipients, guarantors) and formalizing the methodological model of "income-savings-investment-income" [14].
A subsystem toolkit has been determined that corresponds to the elements of the system of corporate institutions (corporate pension fund, insurance company, investment fund, universal bank, management company), the strategic partnership of which is based on the interdependent control of collective investments [15], distribution of investment profits and guarantors of the mechanism (state, corporation, society), ensured by the improvement of the relevant law principles on the macro-and micro-level, and providing universality of the system of corporate social and investment redistribution [16].
The element of the corporate mechanism of social and investment redistribution in the figure 1. Thus, the SI macroeconomic model adapted to the corporate conditions of Russian Railways, in our opinion, is fully justified and looks promising in solving the main taskconvergence of the economic interests of its subjects (growth in prosperity of the personnel and capitalization of the holding). | 4,103.4 | 2021-01-01T00:00:00.000 | [
"Economics"
] |
Biochar-Mediated Control of Phytophthora Blight of Pepper Is Closely Related to the Improvement of the Rhizosphere Fungal Community
Biochar is a new eco-material with the potential to control soilborne diseases. This study explored the relationship between the rhizosphere fungal community and the suppression of Phytophthora blight of pepper in the context of time after biochar application. A pot experiment was conducted and rhizosphere soils were sampled to determine the biochar-induced soil chemical properties, fungal community composition, and abundance of biocontrol fungi. The biochar-enriched fungal strains were screened by the selective isolation method, and their control effects against Phytophthora blight of pepper were determined using a pot experiment. Biochar treatments effectively inhibited pathogen growth and controlled the disease, with biochar applied immediately before planting (BC0) having greater effects than that applied 20 days before planting (BC20). Compared to the control, biochar-amended rhizosphere soils had a higher pH, available nutrient content, and fungal richness and diversity. Moreover, biochar treatments significantly increased the abundance of potential biocontrol fungi. The proliferation in BC0 was stronger as compared to that in BC20. Several strains belonging to Aspergillus, Chaetomium, and Trichoderma, which were enriched by biochar amendment, demonstrated effective control of Phytophthora blight of pepper. Canonical correspondence and Pearson’s correlation analysis showed that a high content of soil-available nutrients in biochar treatments was favorable to the proliferation of beneficial fungi, which was negatively correlated with both the abundance of Phytophthora capsici and disease severity. In conclusion, biochar-mediated improvement in the fungal community suppressed the Phytophthora blight of pepper. The biochar application time had a great impact on the control effect, possibly due to the short-term proliferative effect of the biochar on biocontrol fungi.
INTRODUCTION
Biochar, the solid by-product of biomass pyrolysis, features stable aromatic carbon structures, large surface areas, and high contents of certain nutrients and organic carbon (Sohi et al., 2010;Lehmann et al., 2011). In addition to sequestering carbon and reducing greenhouse gas emissions, biochar application can improve the soil pH, moisture retention, physical structure, nutrient status, and biological properties, which in turn enhance plant growth and health under biotic and abiotic stress conditions (Sohi et al., 2010;Egamberdieva et al., 2016;Nair et al., 2017;Meng et al., 2019).
Recent studies have shown that biochar application can effectively control soilborne plant diseases caused by pathogenic fungi and bacteria, such as Fusarium oxysporum, Rhizoctonia solani, and Ralstonia solanacearum (Jaiswal et al., 2014(Jaiswal et al., , 2015Elmer, 2016;Zhang et al., 2017;Gao et al., 2019;Chen et al., 2020). Our previous study first reported that the addition of biochar to soil resulted in a good control effect of Phytophthora blight of pepper, caused by the pathogenic oomycete Phytophthora capsici L. (Wang et al., 2017). However, the mechanisms remain unknown. Biochar-induced soil chemical properties are closely associated with the control of diseases caused by soilborne bacteria (Zhang et al., 2017;Gao et al., 2019;Chen et al., 2020), but whether it is conducive to the control of diseases caused by oomycetes needs to be explored.
Biochar has a high C/N ratio and can provide a habitat that is conducive to colonization by fungal hyphae (Lehmann et al., 2011). In addition, many fungi can proliferate by degrading the organic components within biochar (Anyika et al., 2015). Therefore, biochar application is beneficial for soil fungal growth. This has been confirmed by a significant increase in fungal abundance and changes in fungal communities following biochar amendment (Bamminger et al., 2014;Yao et al., 2017;Zhang et al., 2019;Wu et al., 2020). In addition, several researchers observed the enrichment of potential biocontrol fungi, such as Trichoderma and Paecilomyces, in biochar-amended soil (Hu et al., 2014;Elmer, 2016;Zhang et al., 2019). Thus, we hypothesized that biochar-mediated improvement of the rhizosphere fungal community, especially the enrichment of biocontrol fungi, is closely related to the suppression of soilborne diseases. Until now, no in-depth studies have analyzed the association between biochar-mediated disease control and the fungal community. Jaiswal et al. (2017Jaiswal et al. ( , 2018 determined the fungal and bacterial communities in response to biochar and established a relationship between the bacterial community and disease suppression, but did not establish a relationship between the fungal community and disease suppression. There are many studies showing the influence of feedstock, pyrolysis temperature, and application dose of biochar on their disease control effects (Jaiswal et al., 2014(Jaiswal et al., , 2015Lu et al., 2016;Wang et al., 2017;Chen et al., 2020), but not one focused on the biochar application time. The application of biochar just before planting may maximize the control effect because of the short-term effect of biochar on soil biological properties (Farrell et al., 2013;Jiang et al., 2016). However, application a few days before planting to form disease-suppressing soil microflora may help improve the control effect. Therefore, we speculate that the biochar application time has a significant influence on its control effect and that this is caused by the time-dependent influence of the biochar on the soil microbial community.
Our aims for this study were as follows: (i) to analyze whether the biochar-mediated control of Phytophthora blight of pepper is related to an improved fungal community; (ii) to study whether the response of soil chemical properties to biochar amendment contributes to the improvement of the rhizosphere fungal community and suppression of Phytophthora blight of pepper; and (iii) to explore the association between the biochar application time and the rhizosphere fungal community as well as disease suppression. Therefore, we investigated the function of biochar in shaping soil chemical properties, fungal community composition, abundance of biocontrol fungi, pathogen abundance, and disease severity under different application times. Moreover, the screening and verification of biocontrol agents were performed to ascertain the relationship between biochar-enriched biocontrol fungi and disease suppression by biochar. The results of our work are expected to provide a practical approach for the utilization of biochar to control soilborne diseases and provide theoretical support for the further enhancement of plant disease control with biochar application.
Soil and Biochar Preparation
The experimental soil had a sandy loam texture. It was collected from the top 20-cm layer in a pepper greenhouse in Huangma town, Jiangsu Province, China. The basic chemical characteristics of the soil were as follows: pH, 7.44; electrical conductivity (EC), 1,879 µs/cm; total N, 3.1 g/kg; total P, 2.0 g/kg; total K, 12.6 g/kg; and organic matter, 29.4 g/kg.
The biochar was made from corn stalk based on the methods of Xie et al. (2013) and was passed through a 40-mesh screen. The basic characteristics of the biochar were as follows: pH, 9.73; EC, 5,763 µs/cm; total C, 490 g/kg; total H, 23 g/kg; total O, 146 g/kg; total N, 17.5 g/kg; ash, 324 g/kg; organic matter, 286 g/kg; available P, 2.2 g/kg; and available K, 24.7 g/kg.
Pot Experiment
Three treatments were established: (i) soil incubated for 20 days without biochar (CK); (ii) CK soil amended with biochar at a rate of 13.3 g/kg just before planting (BC0); and (iii) CK soil amended with biochar at a rate of 13.3 g/kg 20 days before planting (BC20). Treatments were prepared in triplicate and incubated at ambient temperatures of 15-30 • C. The soil moisture content was maintained approx. 20% during incubation. All of the treatments were mixed with a P. capsici zoospore suspension at a density of 100 zoospores per gram soil at the end of the incubation period. Each treatment was repeated three times, and each replicate had 20 pots (12 cm × 15 cm, diameter × height). Each pot contained 600 g of soil and one 5-week-old pepper plant. The pots were randomly arranged and incubated inside the greenhouse for 45 days, as mentioned above.
The disease index was recorded every 15 days using a 0-4 scale according to Wang et al. (2019). Disease index = [ (number of infected plants of a specific scale × the specific scale/(4 × (total number of plants))] × 100%.
Sample Collection and DNA Extraction
Before planting, the soil samples were harvested from all of the replicates. At 15, 30, and 45 days after planting, five plants were randomly dug out from each replicate and the rhizosphere soil was harvested. The soil samples were passed through a 10-mesh screen. One portion of each sample was air dried and then stored at 4 • C for the later analysis of pH, EC, organic matter, available P, and available K; the other portion was stored at −70 • C and used for the later extraction of total DNA and the determination of nitrate N. In addition, the soil samples harvested at day 30 after planting were used for the screening of biochar-enriched biocontrol fungi and the determination of the fungal community composition. Soil total DNA was isolated using a FastDNA SPIN Kit (MP Biomedicals).
Analysis of Soil Chemical Properties
Soil pH and EC (soil/water = 1:5, w/v) were measured using a pH meter (Mettler-Toledo FE20, Mettler-Toledo Instrument Factory, Shanghai, China) and an EC meter (DDS307, Shanghai Jingke Instrument Factory, Shanghai, China), respectively. Organic matter, nitrate N, available P, and available K were assayed according to the methods of Wang et al. (2014).
Illumina MiSeq Sequencing and Analysis
Thirty days after planting, rhizosphere soil DNAs were extracted and then subjected to Illumina MiSeq sequencing by Majorbio Co., Ltd. (Shanghai, China). The primers ITS1F (5 -CTT GGT CAT TTA GAG GAA GTA A-3 ) and ITS2R (5 -GCT GCG TTC TTC ATC GAT GC-3 ) were used for the PCR assay of the fungal internal transcribed spacer (ITS) region of the target ribosomal gene (Bokulich and Mills, 2013). The PCR products were sequenced using the MiSeq sequencing platform, and all sequences were clustered into operational taxonomic units (OTUs) at 97% identity using UPARSE (Kõljalg et al., 2013). The fungal community was characterized in terms of the number of OTUs, Shannon index, coverage, and the richness estimators Chao1 and ACE (abundance-based coverage estimation) using the mothur software (Schloss et al., 2009). OTUs were classified using the UNITE ITS database (Kõljalg et al., 2013). Principal coordinate analysis (PCoA) based on the weighted UniFrac distances was performed to determine the differences in the fungal community composition. Canonical correspondence analysis (CCA) was performed to determine the relationships between the soil chemical properties and the fungal community composition.
Screening and Assessment of Biochar-Enriched Biocontrol Fungi
Rhizosphere soil samples for the CK and BC0 treatments at 30 days after planting were collected as described above. Ten grams of the soil sample was added to 90 ml 0.85% saline solution and shaken for 30 min at 160 rpm. After performing 10-fold dilutions, 100 µl of the diluted suspensions (10 −2 and 10 −3 ) was plated on potato dextrose agar (PDA) containing 30 mg L −1 rifampin in 15 replicates per dilution and incubated at 26 • C for 3-7 days. When fungal colonies appeared on the PDA plates, Trichoderma, Chaetomium, Penicillium, and Aspergillus spp. strains induced by the BC0 treatment were picked and inoculated onto new PDA plates to obtain pure strains. Morphological features such as colony, mycelium, and spore characteristics were used to avoid replication of the strains. The remaining strains were identified by amplification and sequencing of the respective ITS regions. The nucleotide sequences were analyzed using a similarity search against the GenBank database (Zheng et al., 2011).
The strains were cultivated using potato dextrose broth and then their mycelia were harvested by filtration through sterile gauze. Homogeneous suspensions of propagules were obtained by washing the mycelia with sterile distilled water and homogenizing the washed mycelia with a homogenizer. The experimental soil was mixed with a P. capsici zoospore suspension at a density of 100 zoospores per gram soil. The inoculated soil was divided into fungal treatment groups and one control. The fungal inoculation rate was 10 g fresh mycelia/kg soil. The soil inoculated with the pathogen only was set as the control. Each treatment was replicated three times, and there were 15 pepper plants for each replicate. One 5-weekold pepper plant was grown in a pot containing 600 g of soil.
All plants were acclimated in the greenhouse for 45 days, as mentioned above. The disease severity was determined at 15 and 30 days after planting. The disease index was assessed as described previously. The biocontrol efficacy was assessed using the following equation: Biocontrol efficacy = [(disease index under control − disease index under fungal treatment)/disease index under control] × 100%.
Data Analysis
All data were analyzed with the SPSS 19.0 software package (IBM, United States). Statistical significance between the treatments was tested with one-way analysis of variance (ANOVA). p < 0.05 was considered statistically significant.
Effect on Disease Severity and Pathogen Abundance
As shown in Figure 1A, the disease indices of all treatments continued to increase with longer growing period. Biochar amendment significantly alleviated the disease development rate and disease indices compared to CK. In addition, the disease index of BC0 was reduced compared to that of BC20. The disease indices of BC0 and BC20 on day 15 were reduced by 91 and 72% FIGURE 1 | Temporal effects of biochar applied at different times on the disease index (A) and abundance of Phytophthora capsici (B) during the growing period. Bars represent average values ± standard error. Different letters above columns indicate significant differences (p < 0.05) according to one-way ANOVA (n = 3). compared to those of CK, and those on day 45 were reduced by 63 and 35%, respectively. This suggested that biochar amendment reduced disease development and that the control effect of BC0 was stronger than that of BC20.
The pathogen abundance in each treatment decreased by day 15 and then increased by day 30, followed by a gradual decrease on day 45 ( Figure 1B). The pathogen abundance under the biochar treatments was significantly reduced relative to that under CK, and that under BC0 was significantly reduced compared to that under BC20 during the entire growing period. In comparison, pathogen abundance under BC0 and BC20 on day 15 after planting was reduced by 95 and 58%, respectively, and by 50 and 34%, respectively, on day 45 after planting. Notably, biochar-induced suppression of P. capsici diminished over time.
Chemical Properties
The soil chemical properties changed to varying degrees after amendment with biochar and incubation for 20 days (Supplementary Table S1). Biochar amendment markedly increased the EC and contents of organic matter, available P, and available K, which were correlated with the high contents of ash, organic matter, available P, and available K in the biochar.
Fungal Abundance and Community Composition
The qPCR results suggested a significantly reduced abundance of total fungi and a significantly increased abundance of C. globosum, Aspergillus, Penicillium, and Trichoderma 20 days after the amendment of the soil with biochar (Figures 2A,B). In addition, the results of high-throughput sequencing indicated a reduced relative abundance of Cladosporium and Emericella. Furthermore, biochar significantly increased the relative abundance of Chaetomium, Funneliformis, Penicillium, and Trichoderma (Supplementary Figure S1). Therefore, although biochar amendment significantly reduced the total fungal abundance after 20 days of incubation, it increased the abundance of potential biocontrol fungi.
Chemical Properties
Both biochar treatments significantly increased the soil pH, EC, and contents of organic matter, available P, and available K compared with CK during the entire growing period (Figure 3). The difference between biochar treatments, however, was not significant. The promotion of pH, available P, and available K by biochar amendment gradually declined with extended planting time. Compared to CK, BC0 and BC20 had 1.36 and 1.11% higher pH values before planting, which decreased by 0.64 and 0.54% at 45 days after planting, respectively. Similarly, the available P content increased significantly by 19.13 and 21.38% before planting in BC0 and BC20, but at 45 days after planting, the differences were only 9.02 and 3.40%, respectively. The available FIGURE 2 | Effect of biochar amendment on the abundance of total fungi (A) and potential rhizosphere-associated biocontrol fungi (B) after incubation for 20 days. Bars represent the standard error of each mean. Different letters above columns indicate significant differences (p < 0.05) according to one-way ANOVA (n = 3).
K contents were 259 and 271% higher in BC0 and BC20 before planting and were still 161 and 155% higher than CK at 45 days after planting, respectively. Thus, biochar-induced changes in soil nutrient qualities and chemical properties were only slightly affected by the time after biochar application, but more obviously affected by the planting time.
Fungal Richness and Diversity
The community richness indices, OTUs, Chao1, and ACE values were higher under the biochar treatments than under CK, and significant differences in the OTUs and Chao1 values were observed at 30 days after planting ( Table 1). The Shannon diversity indices of the biochar treatments were higher than those of the CK, but significant difference was not observed. No significant difference in the richness and diversity indices was observed between BC0 and BC20.
Fungal Community Composition
A comparison of the relative abundance of the different fungal genera with an abundance of >1% revealed significant differences with respect to the different treatments at 30 days after planting (Figure 4). The dominant genera were , Cephaliophora (7.82-19.66%), Chaetomium (3.84-14.35%), Pseudaleuria (2.67-9.12%), and Penicillium (5.62-6.30%), followed by several other genera including Emericella, Aspergillus, Trichoderma, etc. PCoA clearly revealed that biochar amendment shifted the rhizosphere fungal community. In addition, the fungal community of BC0 was clearly separated from that of BC20 (Figure 5).
Groups 1 and 2 in Table 2 included the fungal genera that were higher and lower in relative abundance, respectively, in response to the biochar treatments. Significantly higher relative abundance of Chaetomium and Trichoderma and significantly lower abundance of Alternaria and Cladosporium were observed under the biochar treatments than under CK. Biochar treatments also enriched the genera Conocybe, Paecilomyces, and Arthrographis and depleted Cephaliophora and Aspergillus. In addition, BC0 and BC20 had different effects on several fungal genera, as observed for group 3. BC0 significantly increased the relative abundance of Mortierella by 17.25% and reduced those of Pseudaleuria, Emericella, and Pyrenochaetopsis by 7.16, 10.45, and 2.09%, respectively, compared with BC20.
Fungal Abundance
The qPCR results showed that the fungal abundance increased significantly under BC20 compared with that under CK after pepper planting (Figure 6A). BC0 resulted in a consistently higher fungal abundance during the entire growing period than CK. Fungal abundance under BC0 was higher than that under BC20 at 30 and 45 days after planting, but the differences were not significant. In addition, the differences in fungal abundance between the CK and biochar treatments gradually decreased, suggesting a diminishing biochar-mediated proliferative effect on rhizosphere fungi over time.
Figures 6B-E indicate that the abundance of C. globosum, Aspergillus, and Penicillium under BC20 subsequently increased slightly and then deceased rapidly, but remained higher than that under CK. BC0 showed a strong enrichment of these biocontrol fungi during the entire growing period. The biocontrol fungal abundance under BC0 was higher or significantly higher than under BC20 at 30 and 45 days after planting. Compared to CK, the abundance of C. globosum, Aspergillus, Penicillium, and Trichoderma increased 0.46-, 4.56-, 2.48-, and 5.37-fold, respectively, under BC0 and by 1.14-, 0.12-, 0.37-, and 0.44fold, respectively, under BC20 at 45 days after planting. Thus, the shorter the duration of biochar in the soil, the stronger the increase in the abundance of biocontrol fungi it induced. Table 3 shows that the abundance of P. capsici correlated positively (p < 0.01) with the disease index. The abundance of P. capsici correlated negatively (p < 0.01) with the abundance of C. globosum, Aspergillus, and Penicillium as well as with Trichoderma at p > 0.05. Similarly, disease severity correlated negatively (p < 0.01) with the abundance of C. globosum, Aspergillus, and Penicillium as well as with the abundance of Trichoderma at p < 0.05. In addition, the total fungal abundance correlated negatively with both the disease index and abundance of P. capsici at p < 0.05. These correlations suggested that the abundance of total fungi, C. globosum, Aspergillus, Penicillium, and Trichoderma may be the important factor that suppresses the pathogen and controls Phytophthora blight of pepper in response to biochar amendment.
Biocontrol Efficacy of Biochar-Enriched Biocontrol Fungi
Two Aspergillus strains, three Chaetomium strains, three Penicillium strains, and three Trichoderma strains enriched by BC0 were screened by comparing their colony characteristics on PDA plates from CK and BC0. The results of the molecular identification are shown in Supplementary Table S2. Two Aspergillus (AS1 and AS2), two Chaetomium (CH1 and CH3), three Penicillium (PE1, PE2, and PE3), and two Trichoderma (TR1 and TR3) strains were confirmed to be enriched in response to biochar amendment according to their colonizations in the rhizospheres with and without biochar amendment (data not shown).
In the pot experiment, nine antagonistic fungi showed major variations in the reduction of disease severity (Table 4). At 15 days after planting, Penicillium PE1 had no control effect. The control efficacies of the other eight fungal strains were in the range of 18.27-59.62%. In particular, Aspergillus AS1 had the highest control efficacy of 59.62%, followed by 51.92% and 45.19% for Trichoderma TR3 and TR1, respectively, and 44.23% for Chaetomium CH1. The disease indices of all treatments increased with extended planting time, while the control efficacies of the biocontrol strains continued to decline. At 30 days after planting, the control efficacy of Trichoderma TR3 was 30.53%, representing the best biocontrol strain, followed by 24.43% for Aspergillus AS1. In addition, the control efficacies of two Chaetomium strains were 17.37 and 19.08%, while of the Penicillium strains, only PE3 had a control efficacy of 15.27%. In short, the Aspergillus and Trichoderma strains had the best control effect, followed by the Chaetomium strains, and then the Penicillium strains. The abundance of P. capsici and the disease indices of all treatments showed the same trend, i.e., a high disease index indicated high pathogen abundance. All seven biocontrol strains (except PE1 and PE3), which showed a control effect, were associated with a significantly lower abundance of P. capsici Pseudaleuria 9.12 ± 3.30a 2.67 ± 1.23b 9.83 ± 1.88a Emericella 1.07 ± 0.18b 0.84 ± 0.35b 11.29 ± 5.46a Pyrenochaetopsis 0b 0b 2.09 ± 1.62a Different letters after the values in the same row indicate significant differences (p < 0.05) according to one-way ANOVA (n = 3). Group 1 includes the fungal genera with higher relative abundances under the biochar treatments. Group 2 includes fungal genera with lower relative abundances under the biochar treatments. Group 3 includes fungal genera that responded differently to the biochar treatments.
than the control. The pathogen abundance among the seven treatments with similar control effects did not differ significantly.
DISCUSSION
Biochar demonstrated its possibility as a control agent against Phytophthora blight of pepper, supporting similar control effects of soilborne diseases observed in previous studies (Jaiswal et al., 2017;Zhang et al., 2017;Gao et al., 2019;Jaiswal et al., 2019;Chen et al., 2020). Furthermore, biochar applied before planting showed a significantly higher control effect than that applied 20 days before planting, indicating its declining control effect with prolonged application time prior to pepper planting. We assessed the chemical properties, fungal community composition, and abundance of potential biocontrol fungi in the rhizosphere soil, providing novel insights into the mechanisms underlying the biochar-mediated control of soilborne diseases. Supporting previous studies (Gul et al., 2015;Yao et al., 2017;Zhang et al., 2017;Chen et al., 2020), biochar amendment directly affected the soil chemical properties, in particular soil pH, EC, and the contents of organic matter, available P, and available K. Elemental P and K can promote sugar and protein metabolism in plants, stimulate root growth, accelerate root absorption, and effectively alleviate root diseases . Zhang et al. (2017) and Chen et al. (2020) reported that biochar-induced changes in the soil chemical properties were favorable for beneficial microorganisms, but unfavorable for Ralstonia and bacterial wilt. Similarly, in our study, the increases in pH, EC, and contents of available P and available K caused by biochar amendment were conducive to increasing the relative abundance of beneficial fungi, such as Trichoderma, Chaetomium, Penicillium, Emericella, and Paecilomyces (Supplementary Figure S2). In addition, the soil EC and the contents of organic matter, available P, and available K correlated significantly and negatively with the abundance of pathogens and disease severity (Supplementary Table S3), indicating a significant association between the soil chemical properties and disease suppression in response to biochar amendment. As there was little difference between BC0 and BC20 in terms of soil chemical properties, the difference in the disease control effect between these two biochar treatments was most likely due to the differences in the microbial properties within the rhizosphere. Fungal abundance significantly increased in the biocharamended soils, which was in agreement with the results of Bamminger et al. (2014) and Yao et al. (2017). Although this stimulation was weakened with extended planting time, fungal abundance in the biochar-amended soils was still greater than that in unamended soils during the growing period. Zhang et al. (2017), Jaiswal et al. (2018 indicated that biochar amendment positively influences fungal richness and diversity. Supporting that, significantly higher fungal richness and diversity indices were found in the biochar-amended soils. Many studies have documented that soil microbial richness and diversity and the ratio of pathogens to fungi may contribute to soil disease suppression (Bender et al., 2016;Frac et al., 2018;Huang et al., 2018;Saleem et al., 2019), suggesting that the lower disease severity in the biochar treatments could be partly linked to the increased fungal abundance as well as fungal richness and diversity. However, the difference between BC0 and BC20 was small, indicating that the significant difference in disease control between the two biochar treatments was mostly due to the differences in the fungal community composition.
High-throughput sequencing revealed that the relative abundance of certain fungal genera significantly decreased or increased in response to biochar amendment. The relative abundance of Chaetomium, Paecilomyces, Penicillium, and Trichoderma was higher in the biochar-amended soils. These four genera have been reported to be linked to the promotion of plant growth, production of antibiotic compounds, induction of plant defenses, and suppression of soilborne disease (Howell, 2003;André and Schmoll, 2010;Sibounnavong et al., 2011;Khan et al., 2012;Prakob et al., 2012;Bladt et al., 2013;Shanthiyaa et al., 2013;Saldajeno et al., 2014). Another important finding was that the relative abundance of Alternaria and Cephaliophora, which may cause plant disease (Sweta et al., 2014;Mukesh et al., 2017), decreased under the biochar treatments, suggesting that biochar could suppress crop pathogens. The differences noted between the BC0 and BC20 treatments, such as a higher abundance of Chaetomium and a lower abundance of Cephaliophora, suggest that the BC0-induced fungal community may have a higher soilborne disease suppression ability.
Consistent with the high-throughput sequencing results, the qPCR results showed that the abundance of C. globosum, Penicillium, and Trichoderma significantly increased in the biochar-amended soils. However, the higher abundance of Aspergillus in the biochar treatments was in contrast to the results of the high-throughput sequencing. This may be related to the accuracy and adaptability of the two methods (Murray et al., 2011). The increased abundance of Aspergillus, Penicillium, and Trichoderma was probably related to the degradation of polycyclic aromatic hydrocarbons in the biochar via the production of enzymes (Anyika et al., 2015;Al-Hawash et al., 2018), which was in agreement with other studies (Jaiswal et al., 2017(Jaiswal et al., , 2018Vecstaudza et al., 2018). Several authors have reported that C. globosum, Aspergillus, Penicillium, and Trichoderma could control plant diseases caused by Phytophthora spp. and suppress these pathogens (Fang and Tsao, 1995;Kang et al., 2005;Shanthiyaa et al., 2013;Widmer, 2014). Correlation analyses indicated that the biochar-mediated disease suppression was closely associated with the proliferation of C. globosum, Aspergillus, Penicillium, and Trichoderma. The biochar-mediated proliferation of biocontrol fungi may contribute to reducing the abundance of P. capsici due to the production of antibiotic compounds with activities against pathogens (Park et al., 2005;André and Schmoll, 2010;Bladt et al., 2013). The Aspergillus, Chaetomium, and Trichoderma strains enriched by the biochar exhibited strong suppression of P. capsici and Phytophthora blight of pepper. However, the Penicillium strains, except one, showed little control efficacy and failed to reduce the disease. Thus, the enrichment of Aspergillus, Chaetomium, and Trichoderma probably contributed much to the disease suppression effects of biochar. We assumed that soil incubated with biochar for 20 days could form a soil microbial community that is adverse to soilborne pathogens, thereby improving the soil disease suppression. However, although the abundance of potential biocontrol fungi was high under BC20 before planting, the effect of this treatment on Phytophthora blight of pepper was profoundly reduced compared to that of BC0. The proliferation of pathogens and plant morbidity are time-dependent processes. In the mid-and late growing periods, the abundance of biocontrol fungi under BC0 was markedly increased relative to that under BC20, which is probably an important reason for the higher control effect of BC0.
Overall, our results indicate that biochar-induced improvement in the rhizosphere fungal community, especially the increased abundance of total fungi and beneficial fungi as well as the augmented fungal richness and diversity, conferred inhibition of P. capsici and Phytophthora blight of pepper. Biochar-mediated improvement of soil chemical properties had positive effects on beneficial soil fungi and disease suppression. The disease control effect of biochar was significantly weakened with a prolonged period between the application and planting, which may be largely explained by the short-term promoting effect of biochar amendment on the abundance of biocontrol fungi, such as Aspergillus, Chaetomium, and Trichoderma. Further work is required to pinpoint more biochar-enriched microorganisms that contribute to disease suppression and to elucidate the contribution of each of these microorganisms on controlling soilborne disease.
DATA AVAILABILITY STATEMENT
The original DNA sequence data were deposited in the National Center for Biotechnology Information (NCBI) with accession number SRP224915.
AUTHOR CONTRIBUTIONS
GW, YM, and RG designed the study. GW performed the experiments and was involved with writing the manuscript. YM, RG, and HC contributed to revising the manuscript. JL participated in the greenhouse and lab work. RG helped with sequence data analysis. All authors contributed to the article and approved the submitted version. | 6,960.8 | 2020-07-08T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Stepping up ELISpot: Multi-Level Analysis in FluoroSpot Assays
ELISpot is one of the most commonly used immune monitoring assays, which allows the functional assessment of the immune system at the single cell level. With its outstanding sensitivity and ease of performance, the assay has recently advanced from the mere single function cell analysis to multifunctional analysis by implementing detection reagents that are labeled with fluorophores (FluoroSpot), allowing the detection of secretion patterns of two or more analytes in a single well. However, the automated evaluation of such assays presents various challenges for image analysis. Here we dissect the technical and methodological requirements for a reliable analysis of FluoroSpot assays, introduce important quality control measures and provide advice for proper interpretation of results obtained by automated imaging systems.
Introduction
After its first description in 1983 [1], the solid-phase enzyme-linked immunospot (ELISpot) assay became one of the most commonly used immune monitoring assays in basic and translational research in many fields of immunology [2]. In addition to numerous research applications, it is nowadays also applied in clinical trials to search for biomarkers indicative of the success of immunotherapeutic interventions [3], or even as a diagnostic tool [4]. The importance of the ELISpot assay is further underlined by the wealth of conducted and ongoing projects that aim at the quality of assay performance and its reproducibility across laboratories [5][6][7][8][9][10][11]. Established ELISpot harmonization guidelines have reduced the variability in reported results [12], and proficiency panel testing is now available for any laboratory seeking an external validation of their assay conduct [13].
The widespread use of ELISpot over the past decades can mainly be attributed to its outstanding sensitivity, ease of implementation, and robustness. Further, the technique itself varies only little, if at all, for the analysis of a great variety of cytokines, antibodies, and other secreted proteins, e.g., chemokines or apolipoproteins [14,15]. Hence it was only a question of time until the polyfunctional analysis capabilities of ELISpot were addressed by looking at the simultaneous secretion of two analytes in one well. This was originally attempted with the establishment of dual color enzymatic ELISpot assays [16]. Plates were coated and developed with two antibody pairs with affinities for different cytokines, e.g., IFNɣ and IL-2 [17]. Spots were developed using two different combinations of enzymes and substrates (e.g., alkaline phosphatase with a blue colored spot-forming chromogenic substrate for the first analyte, and horseradish peroxidase with a red spot-forming substrate for the second). Such assays opened the possibility to determine the number of cells secreting either one of the two analytes or both analytes simultaneously by using spot color to differentiate between the three cell populations. Double secreting cells would produce purple colored spots while single secreting cells would give blue or red spots. However, it became evident early on that such spots are difficult to interpret [2,16] as their color may range across all shades of purple, from completely red to completely blue. Specifically the problem lies in a bias towards single secretion for dual colored spots with a considerable difference in size and/or intensity, where a weaker spot in one color is easily obscured by a stronger spot in a second color ( Figure 1). This can lead to an underestimation of the number of double spots, but also an underestimation of the total number of spots in each color as faint double spots may be missed completely.
The introduction of the FluoroSpot assay [18,19], with visualization of spots by fluorophores instead of enzyme and substrate combinations, led to an improvement in the detection of double stained spots [19]. But still, at that time assay analysis by automated spot-counting systems was based on spot color. Only with the introduction of a new, two level image analysis technology, an unambiguous differentiation between single and double (or even triple) stained spots became possible. We provide here a detailed description of the improved analysis technology for accurate FluoroSpot evaluation, technical requirements related to fluorophores and the automated imaging system, important controls for deeming an imaging system suitable for FluoroSpot evaluation, and an assessment of results obtained with automated evaluation of FluoroSpot assays. Color bias in colorimetric dual color ELISpot assay. Evaluation of enzymatic dual color ELISpot assays is solely based on spot color, with single cytokine spots being red or blue, and dual cytokine spots theoretically purple. Dual spots with high intensity in only one color risk being misinterpreted as single spots as the stronger spot will obscures the weaker one. This will result in an underestimate of the number of dual secreting cells as well as an underestimate of the total number of spots for the weaker color.
Experimental Section
This section is written in compliance with the MIATA guidelines for transparent reporting [20]. The Sample: Peripheral blood mononuclear cells (PBMC) were prepared by Ficoll density centrifugation of buffy coats obtained from heparinized blood from two anonymous healthy blood donors at Karolinska Hospital, Solna, Sweden. The time frame between blood draw and PBMC isolation averaged 20 h. PBMC were frozen at −80°C in 20% fetal calf serum (FCS)/10% dimethyl sulfoxide (DMSO) using a Mr. Frosty device, and transferred and stored in liquid nitrogen until further use (within < 1 year). PBMC were thawed and washed twice before left to rest for two hours at 37°C and 5% CO2. Cell concentration and viability were determined by the Guava ViaCount assay (Guava Technologies, Hayward, CA, USA). Viability was >90%.
For IFNɣ/IL-22/IL-17A triple FluoroSpot, 300,000 PBMC were seeded per well and incubated over two nights with or without Candida albicans extract (20 µg/mL), considering the slower secretion kinetics especially for IL-17A with the given stimulating agent. On day three the cells were washed away as described above, and anti-IFNɣ Data Acquisition: Analysis and counting of spots were done with a FluoroSpot reader system (iSpot Spectrum, AID, Strassberg, Germany) with software version 7.0, build 14790, where fluorescent spots were counted utilizing separate filters for FITC, Cy3, and Cy5. Camera settings (Exposure and Gain) were adapted for each filter to obtain high quality spot images preventing over-or underexposure. Fluorophore-specific spot parameters were defined using spot size, spot intensity and spot gradient (fading of staining intensity from center to periphery of spot); and a spot separation algorithm was applied for optimal spot detection. Parameters were fine-tuned comparing negative control wells (PBMC alone) and wells containing CEF-stimulated PBMC. Additional images were taken by a Zeiss ELISpot reader (Carl Zeiss, Inc., Thornwood, NY, USA) equipped with a double band fluorescence filter for FITC and Rhodamine for fluorescent signal detection in one image by color discrimination, using KS ELISpot 4.9 software.
Data and Response determination: Raw data can be made available upon request. No statistical analysis or response determination was performed for this study.
Lab Operations: The experiments were performed in accordance with established Standard Operating Procedures (SOPs) and with validated reagents in an investigative laboratory that operates in accordance with ISO 9001/ISO 13485. The laboratory participates in ELISpot proficiency panels conducted by the Cancer Immunotherapy Consortium (CIC), the Association for Cancer Immunotherapy (CIMT) and Immudex.
The Multi-Dimensional FluoroSpot Analysis
The introduction of a new approach for FluoroSpot analysis was driven by the need to overcome the limitations of spot analysis by color (Table 1). Analysis by color can lead to either falsely defined single color spots (the fainter spot is simply not distinguishable due to overlay by the stronger spot), or to falsely defined dual colored spots when the larger spot partially overlays a smaller spot that is located in its periphery, but which is produced by a different cell ( Figure 2). Further, spots of very high intensity (e.g., for Cy3, typically dark orange to red with most broadband filters) may appear with a yellow center, wrongly implying a dual colored spot ( Figure 2). Lastly, since camera settings cannot be set individually for two fluorophores when using a dual band filter, spots of one color (e.g., green spots for FITC) may appear as weak signals while spots of another color (e.g., orange-red spots for Cy3) may be overexposed (Figure 2).
Figure 2.
Challenges of broadband filter analysis of FluoroSpot plates. PBMC were stimulated with the CEF peptide pool (A) or anti-CD3 (B) and tested simultaneously for IFNɣ and IL-2 secretion using FITC (for IFNɣ) and Cy3 (for IL-2) fluorophores. Images were taken with a Zeiss reader utilizing a dual band filter for FITC and Rhodamine. FITC signals are weak and small, while Cy3 signals are strong and larger, overlaying smaller FITC spots (B), leading to false low counts for green spots. In panel (A), the blue arrow points to a yellow spot that is likely to be caused by dual secretion of cytokine by one cell (see insert for close up). Similar yellow signals can also be found in the center of larger, high intensity red spots (green arrows), leading to false high dual spot counts (see insert for close-up). The challenges of such one level FluoroSpot analysis are illustrated on the right. Two level FluoroSpot analysis. PBMC were stimulated with CEF peptide pool and tested simultaneously for IFNɣ and IL-2 secretion using FITC (for IFNɣ) and Cy3 (for IL-2) fluorophores. Images were taken with an AID Imaging Analyzer utilizing narrow band filters for each fluorophore. Panel (A) demonstrates the two separate images taken of the same well (Level one analysis). Spots are analyzed for e.g., spot size, spot intensity and location. Images are then superimposed and a location algorithm is applied that allows the identification of spots resulting from single and dual cytokine-producing cells based on the exact location parameters of each spot center (Level two analysis, panel (B)).
The key innovation introduced for FluoroSpot plate evaluation is a two level analysis of every well (Figure 3). On the first level, separate images of each analyte/fluorophore are acquired and analyzed. In case of a dual color FluoroSpot, two separate images are taken; in case of triple color FluoroSpot, three separate images are taken etc. Importantly, camera settings (e.g., Exposure, Gain) can be adjusted for every analyte/fluorophore to compensate for different fluorescent intensities.
Two prerequisites are essential for successful FluoroSpot evaluation: 1. Narrow band filters with specific excitation and emission wavelength range for each fluorophore to avoid bleed-over between different fluorophores ( Figure 4); 2. Software features for the identification of the exact position of each spot in a two dimensional coordinate system.
Figure 4. Excitation and Emission ranges for selected narrow band filters.
A selection of narrow band filters as used in the AID Imaging Analyzer for the evaluation of FluoroSpot assays is depicted. Of note, these filters provide filtration on two levels: 1. Filtering of the incoming excitation light to prevent excitation of fluorophores with partial spectral overlap, and 2. Filtering of emitted signals to obtain defined spot images.
At level one, the analysis of each image obtained per well is done separately with analyte/fluorophore specific parameters as defined by the user following lab-specific or SOP-defined steps (also see Materials and Methods section, Data Acquisition).
During the second level of analysis, the positions of counted spots in the separate images are compared by a location algorithm. If spots in different images (for different fluorophores) of the same well have identical positions as defined by the location of the spot center they are detected as multi-stained spots, otherwise they are counted as single stained spots of the respective analyte/fluorophore (Figure 3). For this approach, it is important that the automated reader system i. locks in the plate at a specifically designated well position, and ii. allows an automated switch of filters as defined by the fluorophores used in the assay. The imaging software applied should further allow the definition of maximum acceptable space between centers of spots detected for different fluorophores for the identification of multi-colored spots. In other words, the different kinetics of cytokine secretion (related to onset, speed, duration, strength) as well as cell movement during incubation may cause a minimal shift of spot centers for a given analyte (typically in the µm range) that needs to be taken into consideration for cells secreting more than one analyte upon stimulation. Technical causes have to also be considered, which can, for example, be related to the image alignment. Rebhahn et al. have addressed this topic in detail previously [21]. The authors show that, likely due to the reasons given above, a spot is not perfectly round. Different circularity values for overlaying spots, as determined by measurements of the radial variability (distance of the pixels with highest intensity [namely the spot center mass] to all pixels
AID FluoroSpot Filters
Excitation filter Emission filter distributed in its periphery), consequently lead to slight shifts of spot centers. They further address the random overlay of spots by determining a coincidence limit using matched and unmatched image overlays. In our experiments, it was empirically determined that for medium sized spots as apparent in Figures 3,5-7, a pixel shift with the AID Spectrum reader of app. five pixels (roughly 15 µm) works well for capturing the true dual secretors while keeping the rate of random overlay at a minimum.
Fluorophores and Testing of Imaging Systems for Signal Bleed-Over
Currently there are only a limited number of fluorophores used in FluoroSpot, and their selection is mainly driven by the goal of obtaining efficient signal strength and stability while being able to prevent bleed-over signals, also see [22]. The most commonly used fluorophores are FITC, Cy3 and Cy5. A significant overlap of excitation and emission spectra exists for these three fluorophores. However, the use of narrow band filters for the excitation as well as emission can efficiently prevent bleed-over ( Figure 4). It is advisable to test if the automated imaging system chosen for FluoroSpot analysis does indeed prevent signal bleed-over and with that offers accurate plate analysis capabilities. This is efficiently achieved, for example, by using only one fluorophore per well for the detection of cytokine secretion, but evaluating each well with all filters. No signal should be detected when using any filter other than the one designated for the used fluorophore ( Figure 5).
The True Meaning of Color in FluoroSpot Analysis
It is important to realize that -different to colorimetric ELISpot assays and FluoroSpot analysis by color (see Figures 1 and 2) -spot color in the captured image is of no importance to FluoroSpot analysis. Fluorescent signals are separated by filters (as discussed above), and image color will only affect the graphical visualization of data. The analysis of FluoroSpot assays as described for this two level technology uses gray values of spots for their identification, and is based on spot parameters as spot size, intensity and position in the first level of analysis. In imaging processing it is a common approach to calculate the intensities of objects across a picture by converting it to a grayscale. The translation of an 8-bit grayscale image into a binary image gives 256 possibilities (from 0 to 255, with 0 being black and 255 being white). Image color in FluoroSpot assays has a pure visualization aspect and colors may be substituted by the software user based on personal preferences to provide a clearer and more intuitive presentation of different spot types or cell populations (see Figures 6 and 7). Caution is advised for existing reports about more than three or four color Fluorospot capabilities, which may simply refer to the different subpopulations detectable, which are marked with different colors by the operator using appropriate software features (see also Figure 7E).
Discussion
The introduction of detection systems using fluorophores for the ELISpot technique has opened the door to the successful analysis of multiple secreted analytes within one well. For data acquisition, existing imaging systems have to be equipped with a strong light source and special filters. It became evident rather fast that evaluation of FluoroSpot plates based on color differentiation of spots provided insurmountable challenges. The solution was found in a two level analysis approach during which a separate image is taken for each fluorophore, the image evaluated for fluorophore-specific spots and the spot location recorded. The separate images for the same well are then checked for spots with the same coordinates to define single and dual (or more) secretors. This approach has already been successfully applied in various fields, like Tuberculosis research [23], HIV research [24], research related to other infectious diseases [25], as well as vaccination and translational research [26][27][28]. One of the key elements of this evaluation approach is the use of narrow band filters for excitation and emission. The commonly used fluorophores FITC, Cy3 and Cy5 exhibit significant spectral overlap, but by use of specific filters bleed-over between signals can be reduced to inconsequential levels. Such filters are widely available nowadays, and various FluoroSpot imagers can be readily expanded for the simultaneous use of multiple filters. Hence, the hardware required for complex FluoroSpot analysis does not limit the FluoroSpot technique to the detection of only two analytes.
In addition, the large antibody binding capacity of FluoroSpot plates (roughly 100 µg/well) generally allows the expansion of the number of tested cytokines per well, considering the average use of only 1-2 µg per well of coating antibody for a specific analyte. A review of the chemical features of the PVDF membranes applied for ELISpot and FluoroSpot assays has been given elsewhere [29], and demonstrates the effective binding mechanism of proteins by the PVDF membrane if pretreated with Ethanol, and their stabilization for prolonged times when the coating procedure is followed by the addition of proteins (as done by "blocking" as included in most Elispot protocols, or the addition of stabilizer solutions, as done for commercially available pre-coated plates). A hypothetical situation where a region on the membrane efficiently binds one capture mAb but not the two other capture mAbs, resulting in single stained spots rather than dual or triple stained spots, is highly unlikely. If the membrane displayed such drastic variation in protein binding across a well, results within e.g., triplicates would display unusual high variation. Elsewhere in this issue, Dillenbeck et al. describe a comparison between results obtained from single and triple FluoroSpot for IFN-ɣ, IL-17A and IL-22. The two methods yielded a high correlation with similar spot numbers as well as spot intensities and quality. This too supports the excellent binding capacity of PVDF membranes for multiple capture mAbs.
Currently there are commercial kits available that test the secretion pattern for three cytokines in one well simultaneously. Such triple cytokine FluoroSpot approach is reviewed elsewhere in this issue (Dillenbeck et al.). When looking at multiple secretion patterns in FluoroSpot with more than two cytokines, it becomes evident that plate evaluation with a two level approach is essential for accurate analysis. The analysis of secretion patterns for three cytokines in one well will reveal seven sub-populations of cells (three single secretors, three combinations of dual secretors, and one triple secretor) (Figure 7), which will be impossible to separate by spot color. Furthermore, for every additional fluorophore introduced the number of potential subpopulations will roughly double, which in the case of a fourth analyte will require the software to identify 15 individual subpopulations (four single secretors, six combinations of dual secretors, four combinations of triple secretors, and one quadruple secretor). Hence the future of FluoroSpot has the potential to offer an extensive amount of data related to cytokine secretion patterns, while being performed in a similar straight-forward format as the conventional single color ELISpot.
The main current limitation for more complex FluoroSpot assays is the availability of high quality detection reagents, with high sensitivity and photostability. The same principles used for the analysis of two and three color FluoroSpot assays can be adapted to handle additional analytes in assays with more colors. Considering that a FluoroSpot assay evaluating two or three analytes simultaneously is principally as easy and straightforward to perform as a single color enzymatic ELISpot assay and exhibits comparable sensitivity, it can only be a question of time until complex FluoroSpot analysis will enter the market on a wide scale [30]. The principles for data acquisition and analysis have already been developed, as described here, and are in place for further expansion of the FluoroSpot technology.
Author Contributions
SJ, MR, TD designed research; SJ and TD performed research and analyzed the data; SJ, MR, TD wrote the paper. All authors read and approved the final manuscript.
Conflicts of Interest
SJ is the president of ZellNet Consulting, an ELISpot consulting company. MR is employed by Autoimmun Diagnostika GmbH, a company providing automated imagers for ELISpot and FluoroSpot analysis. TD is employed by Mabtech, a company providing ELISpot and FluoroSpot kits. | 4,790.8 | 2014-11-27T00:00:00.000 | [
"Biology"
] |
Autoencoder based Anomaly Detection and Explained Fault Localization in Industrial Cooling Systems
Anomaly detection in large industrial cooling systems is very challenging due to the high data dimensionality, inconsistent sensor recordings, and lack of labels. The state of the art for automated anomaly detection in these systems typically relies on expert knowledge and thresholds. However, data is viewed isolated and complex, multivariate relationships are neglected. In this work, we present an autoencoder based end-to-end workflow for anomaly detection suitable for multivariate time series data in large industrial cooling systems, including explained fault localization and root cause analysis based on expert knowledge. We identify system failures using a threshold on the total reconstruction error (autoencoder reconstruction error including all sensor signals). For fault localization, we compute the individual reconstruction error (autoencoder reconstruction error for each sensor signal) allowing us to identify the signals that contribute most to the total reconstruction error. Expert knowledge is provided via look-up table enabling root-cause analysis and assignment to the affected subsystem. We demonstrated our findings in a cooling system unit including 34 sensors over a 8-months time period using 4-fold cross validation approaches and automatically created labels based on thresholds provided by domain experts. Using 4-fold cross validation, we reached a F1-score of 0.56, whereas the autoencoder results showed a higher consistency score (CS of 0.92) compared to the automatically created labels (CS of 0.62) -- indicating that the anomaly is recognized in a very stable manner. The main anomaly was found by the autoencoder and automatically created labels and was also recorded in the log files. Further, the explained fault localization highlighted the most affected component for the main anomaly in a very consistent manner.
INTRODUCTION
Malfunctions or even a failure of refrigeration systems are a risk with very high damage potential for food wholesalers. In the course of Industry 4.0 and digitization, sensors and instrumentation drive the central forces of innovation. New potentials for monitoring and machine learning based predictive maintenance of the cold stores are opening up. However, the training and updating of such machine learning based models poses several challenges. First, damage and outage reports, which represent the required ground-truth for a supervised learning task, are not yet collected in a systematic and consistent manner. Second, a refrigerating system is a large, complex system including several hundreds of sensors presenting a widely varying data domain such as temperature, vibration or engine speed (Weerakody, Wong, Wang, & Wendell, 2021). Third, the sensors are often retrofitted or upgraded successively in the course of digitization. Therefore the question remains open how a machine learning model can be scaled from one component to an entire system or several systems.
The challenge of missing ground truth data and therefore learning useful representations with little or no supervision is a key challenge in machine learning. In the context of pre-dictive maintenance, unsupervised learning has shown to be successful in identifying system failures without supervision. However, identifying the root cause of the failure employing unsupervised learning procedures remains an open task. For this reason we transform our unsolvable task into a closely related solvable task, which aims to achieve a similar benefit and business impact. Therefore, we have started with a requirements analysis. The outcomes of the requirements analysis showed us that the localization of the affected sensors, associated with the root cause of the detected failures are an important step. As the affected system is very complex including several hundreds of sensors, the maintenance employee can save valuable time and effort if the affected sensors and thereby the affected subsystem can be localized. Therefore we propose an algorithm for fault localization in large cooling systems with no supervision or little supervision.
C1:
We define a real-world learning task based on industrial requirements and provide a 18 months ground truth data set for an entire cooling system unit including 34 sensor signals. C2: We provide an autoencoder based anomaly detection workflow suitable for multivariate and increasingly upgraded time series data. C3: Our workflow includes an algorithm for explained fault localization based on the individual reconstruction error for each sensor signal. C4: Our workflow includes a root cause analysis enabled by integrated expert knowledge. C5: Our workflow was compared against automatically created labels showing an F1-score of 0.56 and a consistency score of 0.92. The explained fault localization highlighted the most affected component in a very consistent manner.
RELATED WORK
Industrial cooling systems (ICS) are widely deployed in large supermarkets and storage warehouses to preserve perishables with a global market valued to over USD 5 billion (Reportlinker, 2020). ICS are subject to faults, such as compressor failures and bearing damage, that can degrade the operational efficiency and even result in their breakdown. Accurate and timely detection of faults and degradation is critical to prevent food spoilage, customer inconvenience, maintenance costs, and other related losses. Automated fault diagnosis in ICS has been explored for many years (Grimmelius, Klein Woud, & Been, 1995), ranging from Kalman-filter based methods (Yang, Rasmussen, Kieu, & Izadi-Zamanabadi, 2011), random forest (Kulkarni, Devi, Sirighee, Hazra, & Rao, 2018) and neural network based approach. AI based predictive maintenance is estimated to decrease breakdowns by up to 70 % and lowers maintenance costs by 25 % (Deloitte, 2017) in the next years.
Fault classification is typically based on supervised learning (Ismail Fawaz, Forestier, Weber, Idoumghar, & Muller, 2019;Dempster, Petitjean, & Webb, 2020;Z. Wang, Yan, & Oates, 2017;Karim, Majumdar, Darabi, & Chen, 2018) and required a large number of ground truth data, often lacking in industrial applications. Furthermore, these supervised approaches are often difficult to scale and difficult to transfer to other, similar systems (Kemnitz, Bierweiler, Grieb, von Dosky, & Schall, 2021;Heistracher, Jalali, Strobl, et al., 2021). As the affected system is very complex including several hundreds of sensors, the maintenance employee can save valuable time and effort if the affected sensors and thereby the affected subsystem can be localized. Further, explaining and quantifying the individual contribution helps increase trust and interpretability (Grezmak, Wang, Sun, & Gao, 2019). The knowledge about individual localization and contribution can be combined with expert knowledge and thereby enables root cause analysis.
Therefore we propose an end-to-end autoencoder based workflow for fault localization in large cooling systems with no supervision or little supervision. This proposed workflow is inspired by heat and saliency maps (Simonyan, Vedaldi, & Zisserman, 2014) applied in computer vision (Goebel et al., 2018).
INDUSTRIAL REQUIREMENTS
Hauser aims to build a machine learning based monitoring, alarm, and remote maintenance systems for industrial refrigeration systems, which are deployed in hundreds of locations around the world. Since many of these systems are of the same type, a model-driven approach that could predict damages or outages to cold storage's, would scale from a business perspective and could also be offered as a service to customers. The following requirements result from this vision: It is important to start with an approach that is expandable. Each step should add substantial business value.
Overall Workflow
We propose a workflow for preprocessing, anomaly detection, explained fault localization and root cause identification of multivariate time series data, see Fig. 1. For anomaly detection, we use a LSTM autoencoder. We identify system failures using a threshold on the total reconstruction error RE total (S) Eq. (7) of sensor signals S := [S 1 , . . . , S n ]. For fault localization, we compute the individual reconstruction error RE ind (S i ) Eq. (9) for each sensor signal S i allowing us to identify the signals that contribute the most to the total reconstruction error RE total (S). Provided an expert knowledge "look-up table", we can thus locate the affected subsystem in the cooling system.
Preprocessing
Data preprocessing is a crucial task in machine learning pipelines (Leukel, González, & Riekert, 2021). Here, data preprocessing denotes the process of preparing raw data for the machine learning model including data cleaning, signal selection, resampling, missing values treatment and data normalization. In industrial systems, data preprocessing is a major challenge due to its high dimensionality and complex structure (Bekar, Nyqvist, & Skoogh, 2020). In industrial cooling systems, we find numerous components with various data sources including several hundreds of sensors, a widely varying data domain, and retrofitted and upgraded sensors requiring distinct approaches in the mentioned preprocessing steps.
In the following, we want to present our approach to data preprocessing in an industrial cooling system with time series data. Let S i denote the i th sensor signal in the cooling system, 1 ≤ i ≤ n. The data is acquired without a defined frequency, that is, the data is not available at the same frequency and each sensor signal S i is recorded with an individual frequency or irregularily.
Data Cleaning
In the course of digitization, sensors are often retrofitted or upgraded successively. We therefore removed early time periods lacking sensor signals considered important by Hauser experts.
Signal Selection
The industrial cooling system consists of numerous components with a wealth of data sources providing a huge amount of sensor signals. The massive amount of data is a key challenge in data preprocessing (Bekar et al., 2020). In order to decrease the number of signals, we calculated the correlation coefficient and removed highly correlated signals (Nahian et al., 2021). Let X and Y denote random variables with covariance cov(X, Y ) and standard deviation σ X , σ Y respectively. Then, the correlation coefficient ρ X,Y is given by Extracting information from the timestamp can increase the quality of prediction models (Latyshev, 2018). Thus, we added the signals month, time (hour) and weekday to our observations.
Resampling
The acquired data consists of irregularly sampled time series data. With the growth of multi-sensor systems, the preprocessing of irregular time series data is becoming increasingly important (Weerakody et al., 2021). Due to numerous components with various data sources including several hundreds of sensors and a widely varying data domain, we cannot expect to find all sensor signals sampled at a constant sampling rate with common timestamps. For modeling, however, we need to resample the data to a regular frequency. Let x i t denote a observed value of sensor signal S i at time t. Let r denote a regular sampling rate. We then obtain new timestamps T by equidistant time intervals of length r. Depending on the signal type, we compute a new valuex i t of signal S i at timestamp t ∈ T . In general, the acquired sensor signals represent numerical values, e.g. measuring temperature, vibration or engine speed. However, the cooling system also includes signals of boolean values giving information about the system's health and up-counting signals giving information about the pause time of components in the system. When the value is counting, the minimum value is the most representative one, as it was the initial position of the counter. And using the maximum boolean value, was the same result as using the logical OR, which means that it is zero if and only if all samples of the corresponding interval are zero. In the general case, we simply average over all observed values x i u in the interval t, t + r , that is, In the case of boolean values x i ∈ {0, 1}, we definê and in the case of constantly (by c ∈ N) increasing values x i ∈ {k + c|k ∈ N}, we definê
Missing Values Treatment
Resampling irregularly sampled time series data will result in missing values for one or more sensor signals S i at a given timestamp t ∈ T . Simple statistical techniques include forward-filling and zero imputation (Weerakody et al., 2021). We used the fill-forward method for missing values. Before the first appearance of a value, we initialized a default value by computing the median of all available values.
Feature Representation
In contrast to the acquired signals in the cooling system, let features define the representation of data feed into the machine learning model. Let w denote the window size. Applying a window of size w to the sensor signals S i at timestamp t, the features F t at timestamp t are given by For modeling, we reshaped the features and obtained where F j with j = iw + k corresponds tox i t+k .
Data Normalization
Using min-max normalization, we standardized the features by scaling each feature individually to the range (0, 1). The training data is scaled, and then the scaling parameters are applied to the test data. In the domain of machine learning, normalization plays a key role in the preprocessing of data including variables of different scale. In normalization, each variable is scaled individually to the range (0, 1) avoiding a variable dominating the machine learning model.
Anomaly Detection Model
We applied a LSTM autoencoder with the following settings: an input layer of size 1 × 370 (37 signals and a window of size 10), the first encoding layer with output size 1 × 370, the second encoding layer with output size 1 × 185, a repeat vector with output size 1 × 185, the first decoding layer with output size 185, the second decoding layer with output size 370, and a time distributed layer of size 1 × 370, see For hyper-parameter tuning, we used Bayesian optimization. Turner et al. (Turner et al., 2020) demonstrated decisively the benefits of Bayesian optimization over random search and grid search for tuning hyperparameters of machine learning models. Bayesian optimization benefits from previous evaluations of hyper-parameter configurations by including past hyper-parameter configurations in the decision of choosing the next hyper-parameter configuration. Therefore, it avoids unnecessary evaluations of the expensive objective function and requires fewer iterations to find the best hyper-parameter configuration.
We searched for the sampling rate, number of layers, dropout rate, activation function, optimizer, learning rate and batch size. We obtained a sampling rate of 60 seconds, 2 autoencoder layers, a dropout rate of 0.2, the tanh activation function, the rmsprop optimizer, a learning rate of 0.001, and a batch size of 16.
Relative Reconstruction
Error (%) Figure 3. Graphical Abstract of Paper, including 1) anomaly detection (identify system failures using a threshold on the total reconstruction error of all sensor signals), 2) explained fault localization (compute the the individual reconstruction error for each sensor signal and identify affected signals), 3) and root cause analysis (locate the affected subsystem based on a look-up table and the affected signals).
Explained Fault Localization
We propose an end-to-end autoencoder based workflow for fault localization in a large cooling system not providing ground truth data. The total reconstruction error is given by Eq. (7). Motivated by heat and saliency maps (Simonyan et al., 2014) applied in computer vision (Goebel et al., 2018), we derive from the individual reconstruction error Eq. (9) how much each sensor signal S i contributes to the total reconstruction error Eq. (7). Thus the individual reconstruction error Eq. (9) can be used to identify affected signals and thereby locate the affected subsystem.
The total reconstruction error of feature F t at timestamp t is given by where is the prediction of feature F t at timestamp t.
We define the individual reconstruction error of signal S i , 1 ≤ i ≤ n, at timestamp t by wherex i t+k andx ′i t+k correspond to F j and F ′ j with j = iw + k respectively.
In a k-fold cross validation approach, we compute for each test dataset l, 1 ≤ l ≤ k, a threshold where µ l and σ l are the mean and the standard deviation of the total reconstruction error RE total on the remaining training data and c is a constant found by plotting ROC curves, see Fig. 5.
In algorithm 1, we give the pseudocode for the Fault Localization algorithm. The algorithm takes the threshold T , the feature F t at timestamp t, the predicted feature F ′t at timestamp t and the number of significant signals m, 1 ≤ m ≤ n, as an argument, and returns either the m most significant signals S * or the empty set. We compute the total reconstruction error RE total of feature F t at timestamp t based on Eq. (7). If the total reconstruction error RE total exceeds the threshold T , we determine the m most significant signals S * ⊂ {S 1 , S 2 , . . . , S n }. Therefore, we compute the individual reconstruction error RE ind [i] based on Eq. (9), i = 1 . . . n. Then, we select iteratively the signal idx that yields the highest individual reconstruction error RE ind [idx], append signal idx to S * and remove the selected signal from further calculations by setting its value to zero. The method APPEND takes a set and an element as an argument and appends the element to the set. When the iteration terminates, set S * contains exactly the m most significant signals, that is, the signals with the highest individual reconstruction error. If the total reconstruction error RE total does not exceed the threshold T , S * is the empty set. Finally, the set S * is returned.
Algorithm 1: Fault Localization
Input: threshold T , feature F t at timestamp t, predicted feature F ′t at timestamp t, number of significant signals m Output: significant signals S * ⊂ {S 1 t , S 2 t , . . . , S n t } with #S * = m RE total := 1
Root Cause Analysis and Integrated Expert Knowledge
The proposed workflow for fault detection and root cause identification -the identification of the affected sensor signals S i -allows us to locate the affected subsystem. Each sensor signal S i is assigned to a component in the cooling system. The component can then be used to determine the root-cause and failure type. Domain knowledge and physical connections are stored in the system using a look-up table.
Over the years, the experts have collected which sensors are typically associated with a root-cause. In our case, a physicsaware look up table, see Fig. 3, was created by three domain experts with several years of maintenance experience. While all previous steps in the workflow are fully automatic -and salable to other components -the look-up table will remain component specific and will always require a manual step.
Data Set Description
The acquired data consists of 34 irregularly sampled sensor signals in an industrial cooling system including numerous components with various data sources and a widely varying data domain, measured over a period of 18 months. The cooling system is divided into numerous components, and each of them then again divided into several sub-components. Each sensor signal is assigned to a sub-component in the cooling system. We added 3 additional timestamp signals including month, time (hour) and weekday to our observations resulting in a total of 37 signals. By resampling the data to a regular frequency of 60 seconds, we obtained a time series dataset of 37dimensional data samples. In general, the acquired sensor signals represent numerical values, e.g. measuring temperature, vibration or engine speed. However, the cooling system also includes signals of boolean values giving information about the system's health and up-counting signals giving information about the pause time of components in the system.
In the course of digitization, sensors are often retrofitted or upgraded successively and additional sensors are integrated in the system posing challenges in the preprocessing and updating of the machine learning model. In general, the number of measured data points greatly increases over the measurement period of 18 months for each sensor signal. For missing signals, we computed the mean and standard deviation and sampled from the corresponding normal distribution.
The dataset will be made public available, however, it will be anonymized due to privacy reasons.
Experimental Setup
Due to the steady increase in the number of data points in the system over 18 months, data acquired in the first 10 months is not representative and thus inadequate for testing. Therefore, we restricted our test data to the last 8 months. We evaluated the machine learning workflow in two different cross validation approaches. In scenario 1, we performed a conventional 4-fold cross validation procedure on the last 8 months (datasets 7-10). In scenario 2, we performed a 4-fold cross validation approach on the last 8 months (datasets 7-10), including a basic training dataset consisting of the first 10 months. We partitioned the data of the last 8 months into 4 equally sized folds, each fold receiving 54 days of acquired data. For each fold k, we performed the following steps: Fold k is held out for testing, and the remaining 3 folds are used for training, in scenario 2 the training data includes the basic dataset. The training data is preprocessed and prepared for modeling using the preprocessing steps described in section 4.2. We then train the LSTM autoencoder described in section 4.3 for 50 epochs with early-stopping on the training data. Finally, we test the autoencoder on the held-out test data.
Ground Truth provided by Automatically Created Labels
We tested our proposed workflow for anomaly detection, fault localization and root cause analysis with thresholds based on expert knowledge, derived from PLC-system, failure log files, system sheets and documentation. A programmable logic controller (PLC) is a device used to control a machine or industrial system. The thresholds have been repeatedly evaluated in extensive feedback discussions with several domain experts and statistically confirmed. In case of absence of expert knowledge, we derived thresholds based on the 98% confidence interval. We obtained for each signal a threshold allowing us to provide automatically created labels. In the preprocessing procedure, after data cleaning, signal selection, resampling, missing values treatment but before data normalization, we compared the input features with the derived thresholds and obtained for each timestamp and signal a label (healthy -0, anomalous -1) enabling us to define a label for each sample We called a timestamp t + j, 0 ≤ j ≤ 9, anomalous if at least 10 signals of all 37 signals were anomalous at that timestamp, that is, at least 10 of all valuesx 1 t+j ,x 2 t+j , . . . ,x 37 t+j exceed their thresholds. Finally, we called a sample F t with window size w := 10 anomalous if t+ j was an anomalous timestamp for all 0 ≤ j ≤ 9. Then, we applied a smoothing filter to the labels, and labelled a sample F t anomalous if the smoothed value was greater or equal to 0.5. Fig. 4 shows the labels of data over a time period of 8 months (datasets 7-10 in Fig. 7 and Fig. 8) and compares the results of the corresponding models to the automatically created labels (ground truth).
Evaluation Metrics
In order to validate our model performance, we computed evaluation metrics including the F1 score, precision and recall for both scenarios, see Table 2. Further, we included ROC curves plotting the false-positive rate against the truepositive rate in several threshold settings, see Fig. 5. The true-positive rate is also known as sensitivity or recall. The ROC curves also helped us find the thresholds Eq. (10). Inspired by the imaging domain, we used the Jaccard index to measure the similarity of machine learning and threshold derived anomalies. The Jaccard index measures the similarity of two datasets comparing their intersection and union. Fig. 6 shows a part of Fig. 4. We use automatically created labels as ground truth references, however, these labels are also affected by precision and recall errors. We observe that the labels are not consistent in the occurrence of an anomaly. In order to measure the reliability of the system, we define a consistency score indicating the consistency of an anomaly over time, see Table 2. It is based on the assumption that, in the real-world, errors often persist consistently over a longer period of time. Further, we define the consistency score κ for a model as where A := a (e a N − s a 1 ).
Experimental Results
We tested our workflow for anomaly detection, fault localization and root cause analysis described in section 4.4 and 4.5 in scenario 1 and scenario 2. Scenario 1 refers to the conventional 4-fold cross validation procedure on the last 8 months (datasets 7-10). Scenario 2 refers to the 4-fold cross validation approach on the last 8 months (datasets 7-10) including a basic training dataset consisting of the first 10 months. Fig. 7 shows the results for anomaly detection in scenario 1. Fig. 8 shows the results for anomaly detection in scenario 2. The plot shows the total reconstruction error Eq. (7) and threshold computed according to Eq. (10) on the respective test dataset over a period of 54 days. Setting the sampling rate r := 60 (seconds) and the window size w := 10, we obtain 7776 (6 × 24 × 54) data samples over a period of 54 days. In both scenarios, the autoencoder detected an anomaly in dataset 8 around data sample 12000 and an anomaly in dataset 9 around data sample 16000. However, in scenario 2, the autoencoder, trained on the training data including the basic dataset, detected the anomaly in dataset 9 slightly earlier than in scenario 1. Therefore, we suggest that including the data acquired in the first 10 months increases the performance of the autoencoder. Fig. 9 shows the results for fault localization in scenario 2 on dataset 8. Fig. 10 shows the results for fault localization in scenario 2 on dataset 9. The plot shows the individual reconstruction error Eq. (9) of an anomalous data sample for all 37 signals, absolutely and relatively with the respective threshold Eq. 10. The color indicates if the individual reconstruction error of signal S i exceeds the threshold. Sensor 36 Component 13 5 Please note that the data is anonymized due to privacy reasons. Table 3 shows the results for root cause analysis in scenario 2 on dataset 8. Table 4 shows the results for root cause analysis in scenario 2 on dataset 9. The explained fault localization and root cause analysis highlighted the most affected component in a consistent manner.
CONCLUSION AND FUTURE WORK
We provided a 18 months dataset of multivariate time series data for an industrial cooling system including 34 sensor signals and automatically created labels based on thresholds derived from expert knowledge and PLC-system. We presented our machine learning workflow for anomaly detection and explained fault localization suitable for multivariate and increasingly upgraded time series data. Therefore, we presented our preprocessing steps and provided an algorithm for explained fault localization and a root cause analysis enabled by integrated expert knowledge. Sensor 31 Component 12 3 6.03 Sensor 32 Component 12 3 Please note that the data is anonymized due to privacy reasons.
We performed a conventional 4-fold cross validation approach over a time period of 8 months and a 4-fold cross validation approach including a basic dataset of 10 months and compared the model results to automatically created labels based on thresholds provided by domain experts. Using 4-fold cross validation, we reached a F1-score of 0.56, whereas the model results showed a higher consistency score (CS of 0.92) compared to the automatically created labels (CS of 0.62) -indicating that the anomaly is recognized in a very stable manner. The automatically created labels, however, detected anomaly earlier. The main anomaly was found by the model and ground truth, and was also recorded in the log files. Further, the explained fault localization highlighted the most affected component for the main anomaly in a very consistent manner.
A limitation of this work is the comparison of our model results to automatically created labels as ground truth references. These labels are also affected by precision and recall errors. Still, automatically created labels were the best available reference for this work and are frequently considered ground truth for other real-world applications. However, a strength of our study is that we also created and provided consistency scores indicating the consistency of an anomaly over time for autoencoder results and automatically created labels. We believe that this score will also be helpful in the future for evaluation using real-world data missing ground truth. It is based on the assumption that, in the real-world, errors often persist consistently over a longer period of time.
In the future, we would like to further expand root cause analysis and increase the transparency of our proposed workflow. Further, we aim to investigate whether the increase in reconstruction error and exceeding the anomaly thresholds can be predicted for the user. A further step is to bring the developed end-to-end model into a productive environment.
Our work shows that scalable anomaly detection based on AI and explained fault localization is feasible for multivariate and increasingly upgraded time series data. Our proposed workflow provides a satisfying performance regarding the F1 score. We were also able to show that a root cause analysis enabled by integrated expert knowledge can be carried out without supervised learning highlighting the most affected component in a consistent manner. The integrated expert knowledge enabled ground truth references. We are sure that our results can also be transferred to other applications and will enable the monitoring of large systems in the future.
DATA AVAILABILITY STATEMENT
The dataset will be shared upon reasonable request. Please contact the senior author of the paper. Please cite this paper if you are using the dataset. | 6,868.4 | 2022-06-29T00:00:00.000 | [
"Computer Science"
] |
Mathematical modelling of liquid meniscus shape in cylindrical micro-channel for normal and micro gravity conditions
Mathematical model of liquid meniscus shape in cylindrical micro-channel of the separator unit of condensing/separating system is presented. Moving liquid meniscus in the 10 μm cylindrical microchannel is used as a liquid lock to recover the liquid obtained by condensation from the separators. The main goal of the liquid locks to prevent penetration of a gas phase in the liquid line at the small flow rate of the condensate and because of pressure fluctuations in the vapor-gas-liquid loop. Calculation of the meniscus shape has been performed for liquid FC-72 at different values of pressure difference gas liquid and under normal and micro gravity conditions.
Introduction
In recent decades, the use of two-phase flows for cooling of electronic components, such as computer chips, power electronics, converter chips and inverters in hybrid cars, powerful lasers, light-emitting diodes, etc., has developed significantly. One of the promising solutions allowing to remove high heat fluxes is technology using processes with phase change, for example evaporation of a thin liquid film moving in a flat micro-channel under the action of a gas flow [1,2]. Often it is of fundamental importance for the film to use an inert (non-condensable) gas. The vapour produced during evaporation is mixed with noncondensable gas. To operate in a closed loop, the vapour should be condensed and the resulting liquid separated from the gas. Further liquid and gas has to be separately sent again to the evaporator for cooling. Thus, the creation of a condensing/separating system is one and the most important task for improving the efficiency of the two-phase cooling system. Specifically it plays a central role in thermal control systems for space application [3,4].
Concept of condenser with liquid separators is shown in Fig. 1. To separate a condensate from a vapor-gas flow two separators are used. Liquid flows from condenser to the slot of the first separator by the tangential stress on the interface and is continuously pumped out by the pump. The gas phase is not completely free of vapour and liquid at this stage. The inert gas flow that contains some residuals of vapour and liquid droplets passes through a second separator. This device is used to extract from the gas phase a very small liquid droplets created in the condenser but transported by the inert gas flow beyond the first separator. The so-called "liquid lock" is used to recover the liquid obtained by condensation from the separators. The main goal of this work is development of a mathematical model of liquid meniscus shape in cylindrical microchannel of the separator unit of condensing/separating system. Moving liquid meniscus is used as the liquid locks to prevent penetration of a gas phase in the liquid line at the small flow rate of the condensate and because of pressure fluctuations in the vapour-gas-liquid loop.
Mathematical model
The motion of a two-phase flow in a separator is described by the Navier-Stokes equations. The liquid-gas interface is assumed to be stable, since the process is stationary. Assuming that the flow rate of liquid pumped from the separator is relatively small, it can be considered that the velocities of liquid and gas are zero.
Under gravity meniscus shape is not round and it is deformed by hydrostatic pressure.
Let find meniscus shape in this case. Wetting contact angle is equal . It means that angle between tangent plane to liquid -gas interface and tangent plane to solid surface in contact point is equal .
Pressure difference between liquid and gas is given by Laplas equation: In this case r m <<R 2 . It allows to neglect the radial curvature of the interface and pass on two dimensional problem.
Let x m is distance from the beginning the channel widening to meniscus, r m is the meniscus radius, R s is the radius of separator wall, h is a half of separator gap thickness, R in -inner radius of separator chamber. Real sizes in separator are following: R s =10 mm, R in =13.35 mm and h=0.005 mm, Fig.2. The meniscus radius r m obey the following system of equations: Excluding from this system coordinates of the contact point x, y one obtain an expression for radius r m of the meniscus: Let pressure difference pis known. At gravity condition it satisfy to the expression dp m here p=p l -p g is measured value with detector pressure difference between liquid and gas, Z dp is coordinate z of the pressure difference detector sensor location. Hydrostatic pressure in the gas phase is neglected.
Value of coordinate z of the interface can be expressed through polar angle , counting from the positive direction of the axis x in the plane zx, sin z r (4) where r is a radius vector from separator centre r=R in -x m (5) Substitution (2) and (4) Let m x is distance from the beginning the channel widening to meniscus, m r is the meniscus radius, s R is the radius of separator wall, h is a half of separator gap thickness, in Rinner radius of separator chamber. Real sizes in separator are following:
Results
Meniscus shape at different values of pressure difference liquid -gas, given at the level of separator centre is shown in Fig. 3. Solid circumference is interior boundary of separator gap. Calculation of equation (8) is performed for liquid FC72 and Z dp =0. One can use dimensionless criterion K dp =gR in /p for evaluation of gravitation influence on meniscus shape. At K dp <1 the gravitation influence on meniscus shape is weak and meniscus shape is close to circumference. At K dp of order of unity and higher meniscus is flattened from below by the gravity , Fig 3. Fig. 3. Meniscus shape at different values of pressure difference liquid -gas.
38
One can use dimensionless criterion in dp gR K p for evaluation of gravitation influence on meniscus shape. At 1 dp K the gravitation influence on meniscus shape is weak and meniscus shape is close to circumference. At dp K of order of unity and higher meniscus is flattened from below by the gravity , Fig 4.3a),b). Mathematical model of liquid meniscus shape in cylindrical micro-channel of the separator unit of condensing/separating system has been developed. Moving liquid meniscus in the 10 µm cylindrical micro-channel is used as a liquid lock to recover the liquid obtained by condensation from the separators. The main goal of the liquid locks to prevent penetration of a gas phase in the liquid line at the small flow rate of the condensate and because of pressure fluctuations in the vapor-gas-liquid loop. Calculation of the meniscus shape has been performed for liquid FC-72 at different values of pressure difference gas -liquid and under normal and micro gravity conditions. | 1,600.2 | 2017-01-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Tracing X-ray-induced formation of warm dense gold with Boltzmann kinetic equations
In this paper, we report on the Boltzmann kinetic equation approach adapted for simulations of warm dense matter created by irradiation of bulk gold with intense ultrashort X-ray pulses. X-rays can excite inner-shell electrons, which triggers creation of deep-lying core holes. Their relaxation, especially in heavier elements such as gold (atomic number Z=79\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$Z= 79$$\end{document}) takes complicated pathways, involving collisional processes, and leading through a large number of active configurations. This number can be so high that solving a set of evolution equations for each configuration becomes computationally inefficient, and another modeling approach should be used instead. Here, we use the earlier introduced ’predominant excitation and relaxation path’ approach. It still uses true atomic configurations but limits their number by restricting material relaxation to a selected set of predominant pathways for material excitation and relaxation. With that, we obtain time-resolved predictions for excitation and relaxation in X-ray irradiated bulk of gold, including the respective change of gold optical properties. We compare the predictions with the available data from high-energy-density experiments. Their good agreement indicates ability of the Boltzmann kinetic equation approach to describe warm dense matter created from high-Z materials after their irradiation with X rays, which can be validated in future experiments.
Introduction
Highly excited matter created after intense X-ray radiation is an object of intense experimental studies with high power laser sources, in particular, with free electron lasers (FELs), see e.g., [7,9,21]. The experiments trace non-equilibrium mechanisms of plasma formation, also through a transient state of warm dense matter [12,22,24]. Ultrashort duration of FEL pulses also makes it possible to probe the unexplored regime of electronic thermalization which takes up to several tens of femtoseconds since the beginning of X-ray exposure. Further dedicated experiments are underway. However, the analysis of such experimental results requires development of theoretical tools able to describe plasma formation under strong non-equilibrium conditions. For example, modeling of the ionization dynamics can con-veniently be done in many applications with a continuum approach [5,17,25]. With such an approach, evolution equations in phase-space are formulated for distributions of electrons, atoms and ions. They are solved on a phase-space grid. As computational costs depend only on the grid size, one then avoids a direct scaling of the computational costs with particle number, O(N 2 ), typical for particle approaches. This feature of continuum models allows one to also treat large samples efficiently.
For the simulation of the early non-equilibrium stages of the sample evolution, full kinetic equations should be applied. They follow sample evolution during this stage, delivering the information on transient electron and ion distribution. Such equations should treat every active atomic configuration appearing during sample excitation and relaxation.
In this paper, we report on the application of the Boltzmann kinetic equation approach for simulations of bulk gold irradiated with X-ray pulses, performed with Boltzmann equation solver combined with the 'predominant excitation and relaxation path' approach (PERP) introduced in [32]. The Boltzmann equation solver was originally designed to describe the evolution of 'plasma-like' samples composed of atoms, ions and free electrons [25]. Therefore, the bulk Au is assumed to be initially composed of (unbound) atoms. After X-ray excitation, ions and free electrons appear. Whereas for samples irradiated with VUV radiation the number of active atomic configurations is small, as the incoming photons can excite only outer-shell electrons, for samples excited with X-rays this number rapidly increases, as the Xrays can excite inner-shell electrons. This triggers the creation of deep-lying core holes, which complex relaxation, especially in heavier elements, involves a large number of active configurations. To illustrate, in carbon (Z = 6) the total number of possible atomic configurations, with 6 electrons distributed on levels (1s, 2s, 2p), is 27. It is then still possible to solve the set of evolution equations for all the configurations. The same can be done for other light elements. However, already for noble gas atom, neon (Z = 10), the total number of atomic configurations is 63. For argon (Z = 18), it amounts to 1323. This steep increase of the configuration number gives a clear limitation for kinetic simulations of high-Z materials, if including all active configurations.
To overcome this difficulty, the existing approaches such as, e.g., [4] use a superconfiguration approach, i.e., they do not follow the evolution of individual configurations but, instead, use a simplified set of 'averaged' configurations [11,14]. In [32], we introduced an alternative approach which still uses true atomic configurations but limits their number by restricting the sample relaxation to the predominant excitation and relaxation pathways. Applying this approach in what follows, we will obtain time-resolved predictions for excitation and relaxation of X-ray-irradiated bulk of gold, including the change of gold optical properties in response to an X-ray pulse.
The paper is organized as follows. First, we will recall kinetic equation formalism for X-ray-irradiated samples. Especially, simulations of irradiated bulk material will be discussed. With this scheme, we will follow Xray irradiation of bulk gold, leading to the formation of warm dense matter. Bulk gold has been selected due to many available theoretical and experimental data on its behavior in the warm dense matter regime (see, e.g., [16]). The results obtained for electronic and optical properties of the irradiated sample will then be discussed, and compared to the available data from highenergy-density experiments. Finally, we will present our conclusions and outlook.
Description of Boltzmann equation solver
For the description of X-ray excited bulk Au, we have applied our solver of Boltzmann kinetic equations developed in [6,[25][26][27][28][29][30][31][32]. The classical Boltzmann equations originate from the reduced N-particle Liouville equations and include only single-particle phasespace densities of ions and free electrons in the sample. The equations then model all systems as samples built of atoms/ions and of free electrons represented by their classical phase-space densities. In the atomistic approximation, the resulting kinetic equations for the electron distribution in phase-space, ρ (e) , as well as the kinetic equations for various Au ion configurations, ρ (i,j) are formulated as follows: for electrons, and: for ions, with the index, i = 0, . . . , N J , describing ion charge (where N J is the highest charge state present in the system), and the index, j = 0, . . . , N C (i), being the active configuration number (where N C (i) is the maximal number of ion configurations considered for a given ith ion charge). Here, m is the electron mass, and M is the ion mass.
In the most general case, the electromagnetic force acting on a single electron, , with e being the magnitude of electron charge, has two components, the electric one, E (r, t) and the magnetic one, B(r, t). They describe the interaction of charges within the sample with the external laser field as well as mutual electromagnetic interactions between charges. In our specific case of X-ray irradiation, the magnetic component of the electromagnetic interaction, B(r, t), as well as the driving effect of the laser field, E (r, t) on charged particles can be neglected (for details, see [25]). The force then reduces to the component describing mutual electrostatic interactions between electrons and ions in the sample. This component is a non-local function of electron and ion densities and was described in detail in [25] (Eq. (5) therein).
The kinetic equations follow non-equilibrium evolution of the emerging free-electron and ion densities due to photoexcitations, Auger decays of core holes and the subsequent electronic collisional processes such as elastic electron-ion collision, electron impact ionization and three-body recombination. The respective collision terms, Ω (e) and Ω (i,j) , then describe the change of the electron and ion densities, respectively, due to: (i) the creation of the secondary electrons and highly charged ions via photo-and collisional ionizations of atoms and ions, and Auger decays of core holes, (ii) elastic and inelastic collisions of electrons and ions, (iii) recombination processes, and (iv) short range electronelectron scattering. The cross sections and rates of those processes are derived in the atomistic approximation including the interaction of isolated atoms with impact particles, and implemented within the two and three body Boltzmann collision integrals (for details see, e.g., [3,20]). The cross sections and rates of atomic processes induced by X-ray photons are obtained with the XATOM code [10,18] based on Hartree-Fock-Slater scheme. The impact ionization cross sections (and the respective recombination rates) are calculated with Lotz formulas [13]. The short range electron-electron scattering are modeled with the Fokker-Planck collision integral [20]. The Pauli blocking is not included, as the electron system is assumed to be classical in this model.
Details on the collision terms can be found in Refs. [25]. If the collision terms are neglected, Boltzmann equations, Eqs. (1-2), reduce to the Vlasov equation [20] describing the evolution of a collisionless plasma. Initial input of Eqs. (1-2) is given by atomic density function, which represents the neutral sample.
Both the collisionless part of the equations and the collision terms conserve per construction the number of particles and the total energy in the system. This has been proven also for the numerical algorithms applied [25]. The numerical conservation of particle number and energy have been checked by dedicated tests in all numerical implementations of the Boltzmann Eqs.
The results obtained with the first ('core') version of the Boltzmann equation solver were published in [25]. The sample studied there was a noble-gas cluster irradiated with VUV/XUV radiation. The system of equations contained only ground-state ion configurations as the incoming photons excited electrons from valence levels only. The inverse bremsstrahlung process was also included as it contributed to the electron heating in this photon energy regime. On the femtosecond timescales considered, electron recombination was neglected. Further tests and applications of the code to the similar systems (weakly bonded noble gas clusters) then followed in Refs. [26][27][28]. The observables studied there were usually ion time-of-flight spectra.
An extension of the code was performed in [29], where the three-body recombination (in solids known as the Auger recombination) was added to the code. In [30], the Boltzmann equation solver was successfully applied for the analysis of electron spectra. It followed the evolution of the electron distribution through its nonequilibrium stage toward local thermodynamic equilibrium, without making any 'instantaneous thermalization' assumption, quite frequently used at that time (see, e.g. [8]). This ability encouraged the first application of the code in the context of plasma and WDM studies: in [6] the code was used to investigate femtosecond thermalization of electrons created after X-ray irradiation of hydrogen. In [31], for the first time heterogeneous samples, atomic clusters, containing atoms of two noble gases, were successfully studied.
The Boltzmann equation solver, originally prepared to follow non-equilibrium evolution of finite systems (e.g., atomic clusters), was adapted in [32] to study creation of plasma from bulk materials irradiated by Xrays. Irradiation with high fluence X-ray pulses enables to achieve transiently an almost homogeneous ionization dynamics within a large volume inside the irra-diated solid, provided wide beam focusing and thickness of the target layer comparable to the absorption depth of the X-rays used [23]. When compared with the simulations of finite samples, for which particle distributions in full phase space should be followed, the spatially-uniform ionization dynamics within the bulk material allows for a significant simplification of kinetic equations. One can then assume that both electron and ion distributions are approximately uniform within the bulk, which implies that the evolution of these distributions is position-independent: Consequently, the quasineutrality condition is naturally imposed on the sample, as at each space point the number of created electrons and ions is identical. The net electrostatic force responsible for mutual interactions of electrons and ions is then equal to zero. The equations still account for the fast electron thermalization, as the short-range electron-electron interactions remain unaffected.
Further, any directionality imposed by the initial photoionization process is quickly lost, due to a large number of collisional ionization events with isotropic emission of secondary electrons, occurring soon after Xray exposure begins. In this way, one can with a good accuracy restrict to the isotropic component of electron and ion distributions [25]. The evolution of the distributions only in momentum space is then followed. These simplifications, as well as the net electrostatic force equal to zero, extensively speed up calculations, as no adaptive stability condition for the evolution in real and momentum space, restricting computational timesteps, is then necessary. The simplified equations can then be conveniently applied to analyze ionization dynamics within a bulk material after its X-ray irradiation (for further details see [32]).
In parallel, the extension of the code applicability to hard X-ray regime was performed. To circumvent the 'bottleneck' of very high number of active configurations involved in the excitation and relaxation of an Xray-irradiated sample, in [32], an alternative 'predominant excitation and relaxation path' (PERP) approach was proposed. It still uses true atomic configurations but limits their number by restricting the sample relaxation to the predominant relaxation paths, determined by largest cross sections and transitions rates. The current scheme includes the most probable photoionization and the most probable Auger decay from each configuration within the path. Consistently with the treatment of only the predominant photoinduced processes, we included into the code the predominant collisional ionization processes (from the outermost shell of all considered atoms and ions) and the corresponding three-body recombination rates. Let us emphasize that including collisional processes extends the number of active configurations by those that can be obtained from each 'photoinduced' atomic configuration by a sequence of collisional ionization or recombination from outermost atomic shells. This forms a ladder of additional active configurations with charges from 0 to the highest one allowed in the system for each active configuration obtained through a photo-or Auger ionization. The reliability of the scheme was tested by performing respective calculations for a bulk material consisting of light atoms (carbon) and comparing their results with a full calculation including all relaxation paths. Later, these results were also benchmarked with an independent Molecular Dynamics calculations and found to be in a good agreement [1]. The selection of the predominant excitation and relaxation paths is now performed automatically by a dedicated code. This allows to apply the approach also to heavy elements, and at any X-ray photon energies.
Application of Boltzmann equations to follow evolution of X-ray-irradiated Au layer into warm-dense-matter state
The Boltzmann equation solver was originally designed to follow ultrafast electron and ion dynamics on a few 100 fs timescale. As ion masses were much larger than electron masses, the recoil momenta of the ions during electron-ion collisions could be were neglected on those ultrashort timescales. Therefore, the ions remained 'cold', i.e., they retained their initial kinetic energy throughout the entire (a few 100 fs long) simulation.
However, on longer simulation timescales the energy exchange between ions and electrons and the resulting electron-ion thermalization cannot be neglected. For the purpose of the current project, the original Boltzmann equation solver has been extended so as to also include the electron-ion coupling, modeled here with the Spitzer rate [20]. With this extension, the code can now be used on picosecond timescales. However, as the electron-ion coupling in bulk Au is weak, it does not visibly affect the sample evolution on a picosecond timescale. It should be also emphasized that the plasma code per construction does not include band structure. It describes a sample as an ensemble of unbound atoms, and models their interaction with X-rays (including subsequent Auger decays), and with the Auger and secondary electrons, using atomistic cross sections and rates.
However, due to the presence of numerous electrons and ions in the dense plasma, the ions cannot be treated any longer as isolated particles. In order to account for this effect, we introduce the respective continuum lowering [15], which can be estimated with the code XATOM for plasmas within a wide regime of electron density and temperature [10,19]. We then calculate the excitation energy for each level in respect to the lowered continuum. This yields 6s electrons to be delocalized (free) in ions already under the ambient conditions (see [16], Fig. 3a therein) and after X-ray excitation. Consequently, the 6s level is then the limiting energy level splitting the domains of localized (bound) and delocalized (free) electrons. The initial state of the irradiated sample, the neutral Au, is then an 'ensemble' composed of atomic Au +1 ions with delocalized 6s electrons as indicated by experiments [16].
The code can treat atomic Au ions emerging after X-ray excitation up to a charge of +9. The number of photoinduced transition between 144 various ion configurations of Au considered in the PERP approach [32] is ∼140. As mentioned earlier, their cross sections and rates were obtained with the XATOM code [10,18].
Results for electronic and optical properties of X-ray irradiated gold
We applied our code to a test case, using a set of pulse parameters available at the FLASH experimental facility for high-energy-density experiments. We assumed that X-ray pulses with photon energy of 245 eV and FWHM pulse duration of 60 fs irradiated a 30-nm-thick layer of Au. The thickness of the Au layer is comparable with the attenuation length of 245 eV photons in bulk gold (∼37 nm) which prevents forming a strongly nonuniform distribution of X-ray energy absorbed within the layer. Pulse intensity (of a Gaussian temporal profile) was adjusted so as to yield an average dose of ∼3 eV/atom which corresponded to a deposited dose of 1.6 MJ/kg.
During the irradiation of the Au layer with 245 eV X-ray photons, 4f-shell electrons are predominantly excited, leaving behind 4f holes with lifetime of ∼6.6 fs. The 4f core hole is later filled with an electron, most probably from 5d shell, and another 5d Auger electron is emitted. Further photoionization of the 4f core hole configuration is also possible to a double core hole state. Photoinduced transitions occur further but at the same time the released electrons start to ionize Au ions through impact ionization or to recombine with the ions. The complex ionization dynamics leads to a fast increase of the average ion charge and fast energy exchange within the electronic system, establishing a local thermodynamic equilibration on tens femtosecond timescale, as it will be shown below.
Under the current X-ray pulse conditions, the model predicts a presence of atomic ions of maximal charge +4. Au +1 ions build the ground-state configuration (with delocalized 6s electrons). For Au +2 ions, we have 2 active configurations, for Au +3 and Au +4 ions we have 3 active configurations each. The total number of active configurations with charges up to +4 is 9, and the number of the respective photoinduced transitions involved is 8. Within the PERP approximation used in the Boltzmann equation solver, established for the current experimental conditions, subdominant rates for other transitions were consistently neglected. The list of the selected active configurations (of charge up to +4) is given in Appendix A. In Fig. 1, we show the corresponding average ionization degree of the bulk gold (per atom, Fig. 1a) and the relative charge content, i.e., the number of specific Au ions (summing contributions from all active ionic configurations) normalized to the initial total number of Au atoms (Fig. 1b) as a function of time. Please note that in this picture already the 'neutral' bulk contains only ions, Au +1, which are not included in the calculation of < Z >. We show our predictions on timescales up to 200 fs after the maximum intensity of FEL pulse (i.e., time zero), when ionization process is rapid. At later times, there is not much change in the plotted quantities.
The average charge < Z > per atom increases shortly after the exposure start (Fig. 1a). It saturates at around 100 fs to increase later slightly due to electron production in long-timescale Auger decays. At longer times, also three body recombination starts to play a role, making the relaxation dynamics even more complex. The largest relative ion content is observed for Au +1 with one (delocalized) 6s electron (neutral bulk Au). The ion content for Au+2 with delocalized 6s electron is the second largest. Au+2 charge states can be achieved either after a core hole excitation 4f or 5p, or a photoor impact ionization from level 5d. The ion content Au +3 constitutes only a small fraction of the overall number of ions and originates either from Auger decay of core hole (here 4f -5d 5d), or from a photoionization or the impact ionization of Au +2 ion. Au +4 ions are created in a similar way (see Appendix B).
In Fig. 2, we show the prediction on the kinetic electron temperature and transient electron-ion collision time, τ , calculated selfconsistently from the actual electron-ion collision rates in the sample, involving transient electron and ion distributions. The kinetic electron temperature is obtained as 2/3 of the total kinetic energy of all free electrons above the 6s level (calculated in respect to the 6s level) divided by the number of free electrons above the 6s level. The factor of 2/3 originates from a standard definition of kinetic temperature in 3D hot electron gas: E kinetic = 3/2 kT kinetic . When electron gas becomes thermalized, kinetic temperature equals to the Maxwell-Boltzmann temperature. These predictions do not include the contribution from the delocalized 6s electrons, which cannot be extracted within the framework of our model.
Kinetic electron temperature corresponds initially to the temperature of the emitted photoelectrons (∼93 eV) after the excitation of a 4f electron (Fig. 2a). Due to fast energy exchange among numerous secondary electrons, the electron distribution quickly thermalizes, and the system enters a local thermodynamic equilibrium regime at ∼50 fs, when kinetic temperature almost stops to change. This thermalization is also reflected in the predicted electron-ion collision time defined as an inverse of the total electron-ion collision rate normalized per one electron (derived in the same way as described in Section 2.3 of [4]). The calculation of the total electron-ion collision rate takes into account the total collisional cross section, including the scattering of hot electrons on ions in any possible atomic configuration included in the model, i.e., all free electronbound electron collisions. The collision time strongly increases, following the decrease of the kinetic temperature. It peaks around −35 fs, i.e., at the same time, when the kinetic electron temperature reaches its minimum. Later, after the thermalization of free electrons, it saturates at the value of ∼ 1.2 fs, in a very good agreement with the equilibrium value measured in Ref. [16] after relaxation of optically excited Au. In Fig. 3, we show the evolution of transient freeelectron-energy distribution, n e (E) (normalized per electron) as a function of energy. Snapshots for the times −90 fs, −50 fs, 0 fs (FEL pulse maximum), 50 fs and 90 fs are shown. The normalized distribution is multiplied by the actual value of the transient ionization degree, < Z >, i.e., the number of free electrons above the 6s level divided by the total number of atoms (Fig. 1a), in order to show the increase of the free electron density per atom in the sample. The transient electron energy distributions are compared with the corresponding Maxwell-Boltzmann (M-B) distributions calculated, using the transient kinetic electron temperature, and multiplied by the value of the transient ionization degree. Initially, the transient electron energy distributions show a significant deviation from the M-B distributions both in the low and in the high energy range, where the contribution of photo-and Auger electron peaks to the spectra at ∼100 eV is visible. After time zero, the latter contribution continuously decreases and the low energy part of the transient spectra approaches the M-B distribution. This shows that thermalization of the electrons progresses toward a local thermodynamics equilibrium. The M-B curves for times 50 fs and 90 fs are almost overlapping, coinciding with the respective transient free-electron-energy distributions. This confirms that the thermalization has been completed by those times.
The free-electron density and electronic temperature can be used to obtain predictions on the transient optical properties of WDM Au. For the conversion, we use the Drude model described in [16], including contributions of interband transitions. The scheme is the following: using known experimental data on gold from [16], we can identify the initial value of the complex dielectric function, ε(ω, t 0 ). Taking the Eq. (1a) and (1b) from [16], we arrive at the expression: As we know the both components of the complex dielectric function ε(ω, t 0 ) (from the experimental data), as well as the Drude terms, ε f 1 (ω, t 0 ), ε f 2 (ω, t 0 ), calculated with the equilibrium value of the collision time and the initial plasma frequency (obtained with 6s electron density), we can calculate ε ib 1 (ω, t 0 ) and ε ib 2 (ω, t 0 ) from Eq. (4). Please note that in this way the interband contributions remain temperature-independent which may be a potential source of error. However, the analysis provided in [16] indicates that the temperature dependence is rather weak under the conditions of the actual simulation.
The Drude model applied here a parameter-free model, as it uses the electron-ion collision time calculated on-the-fly from the Boltzmann equations. For the initial condition, let us recall that within the atomistic picture, the 'cold' Au corresponds to an ensemble of Au+1 ions with the delocalized 6s electrons, i.e., the valence electron density of 'cold' bulk Au (with 1 valence electron per atom) is then equal to the number density of Au.
Below we plot the DC conductivity, σ 0 as a function of time, σ 0 = ω 2 p τ/4π, where ω p is the electron plasma frequency, and τ is the electron-ion collision time (Fig. 4). As our atomistic calculation cannot include the contribution of the delocalized 6s electrons per se, at early times (up to −80 fs) we use an ambient value of τ for gold from [2], and switch to τ from Fig. 2b, collision time, Fig. 2b), until free electrons thermalize (∼50 fs) (cf. Fig. 2) and then saturates at an equilibrium value of 1.94 × 10 16 1/s which is in a very good agreement with the equilibrium value of ∼ 2 × 10 16 1/s measured in Ref. [16].
The predictions on transient optical properties, reflectivity and transmissivity are shown in Fig. 5. They were obtained with the optical probe of wavelength: (a) 500 nm (2.48 eV), and (b) 900 nm (1.38 eV). The 500 nm pulse probes bulk Au directly above the 'cold' interband transition threshold (∼ 2 eV), where interband contributions dominate. The 900-nm probe pulse restricts to the regime dominated by intraband contributions. Both regimes can be tested with our Drude model which includes both the interband and intraband contributions. In Fig. 5, we show predictions obtained with the Drude model, using the transient electronion collision time, τ, from Fig. 2b (green curves). As in case of DC conductivity, at early times (up to −80 fs) we use an equilibrium value of τ for gold from [2], and switch to τ from Fig. 2b, as soon as the transient electron-ion collision time approaches this equilibrium value.
Generally, both the initial increase or decrease of transmissivity and reflectivity follow the increase of the FEL pulse intensity. The corresponding extrema of optical properties are slightly delayed in respect to the time zero. This is due to the still developing electron cascades at this time. Only after the cascading stops, optical properties stop to change. The predictions show also a strong effect of the finite collision time which is especially pronounced for reflectivity with intraband contributions only (for 900-nm probe pulse). Neglecting collisions induces a reverse behavior of reflectivity, observed also for other probe pulse wavelengths (not shown). Further validations of these predictions require a dedicated time resolved experiment. The respective efforts are in progress.
Conclusions and outlook
The predictions on the evolution of X-ray-irradiated gold obtained with atomistic Boltzmann kinetic equations have been reported. The equations followed complex relaxation path of bulk Au after the impact of 245 eV photons, using the 'predominant excitation and relaxation path' approximation introduced in [32]. The equilibrium values of free-electron density, electron collision time, as well as the DC conductivity obtained after electronic thermalization show good agreement with the measurements from [16] obtained for optically excited gold at the similar absorbed dose. This demonstrates the ability of the Boltzmann equations approach to trace formation of warm dense matter created from materials composed of heavy elements, irradiated with X rays. In addition, the self-consistent parameter-free Drude model, including interband transitions and transient collision time calculated on-the-fly from the actual state of the Au sample, has been used to obtain predictions on transient optical reflectivity and transmissivity of warm dense gold. These predictions can be validated by dedicated experiments in future. | 6,736.8 | 2021-08-01T00:00:00.000 | [
"Physics"
] |
Learning and Adaptation in Physical Heterogeneous Teams of Robots
In this paper we present a novel approach to assigning roles to robots in a team of physical heterogeneous robots. Its members compete for these roles and get rewards for them. The rewards are used to determine each agent’s preferences and which agents are better adapted to the environment. These aspects are included in the decision making process. Agent interactions are modelled using the concept of an ecosystem in which each robot is a species, resulting in emergent behaviour of the whole set of agents. One of the most important features of this approach is its high adaptability. Unlike some other learning techniques, this approach does not need to start a whole exploitation process when the environment changes. All this is exemplified by means of experiments run on a simulator. In addition, the algorithm developed was applied as applied to several teams of robots in order to analyse the impact of heterogeneity in these systems.
Learning and Adaptation in Physical Heterogeneous
Teams of Robots
Josep Lluis de la Rosa and Israel Muñoz
Abstract-In this paper we present a novel approach to assigning roles to robots in a team of physical heterogeneous robots.Its members compete for these roles and get rewards for them.The rewards are used to determine each agent's preferences and which agents are better adapted to the environment.These aspects are included in the decision making process.Agent interactions are modelled using the concept of an ecosystem in which each robot is a species, resulting in emergent behaviour of the whole set of agents.One of the most important features of this approach is its high adaptability.Unlike some other learning techniques, this approach does not need to start a whole exploitation process when the environment changes.All this is exemplified by means of experiments run on a simulator.In addition, the algorithm developed was applied as applied to several teams of robots in order to analyse the impact of heterogeneity in these systems.
I. INTRODUCTION
M ULTI-ROBOT Systems have developed extensively in the last years and have drawn the attention of many researchers worldwide.A considerable number of real applications of these systems have been applied to real-life problems ranging from cleaning tasks and foraging to soccer [1].Multi-Robot Systems can be classified as either homogeneous or heterogeneous.Heterogeneity in these systems can be found at two different levels: behavioural and physical.Robots can differ in how they are programmed [2], in other words, how they act depending on the information provided by the sensors.Robots can also be different in their physical features, that is, in their actuators, sensors, shape, size, etc. From the point of view of physical heterogeneity robots may differ in which tasks they are able to accomplish and in how efficiently they perform the same task [3].Most of the work done so far has focused on homogeneous teams of robots and teams of identical robots with heterogeneous behaviours.However, in recent years heterogeneous robot systems have attracted more and more interest.Teams of heterogeneous robots are often assembled by combining several robots created previously for different purposes.Efforts so far have focused on deploying these systems, but little attention has been paid to the benefits of using these systems and how their full potential might be exploited.
Josep Lluis de la Rosa is with EASY -Centre de la xarxa IT del CIDEM University of Girona, E17071 Girona.E-mail<EMAIL_ADDRESS>Muñoz is with Institute for Agro-Food Reserach and Technology (IRTA) and a part-time lecturer at the University of Girona (UdG).E-mail<EMAIL_ADDRESS>paper attempts to address some of the key points of this emerging field.We focus on learning and adaptation in these systems and also analyse the benefits/drawbacks of heterogeneity.The structure of this paper is as follows: section 2 stress the motivation of heterogeneity, section 3 introduces the experimental environment, section 4 contains the ecosystem based model for physical agents as robots, section 5 explains the set of experiments which are analysed in section 6. Finally conclusions are set in section 7.
II. MOTIVATION
Heterogeneous teams of robots raise several questions not posed by homogeneous ones.These issues are:
A. How to exploit the full potential of heterogeneity
In the literature, the potential of these systems is generally exploited through decision making.Most of the systems deployed in real applications assign tasks to robots based on the individual capabilities of each agent.One of the most widespread methods is auctions.This is the case for MUR-DOCH [4] and ETHNOS [5], in which each robot is aware of their individual capabilities and the decisions are taken considering the capabilities of the various robots available to complete a given task.Tasks are assigned based on a utility value created by each robot using information about their capabilities.These co-operation strategies have two main problems.On one hand, robot capabilities can change over time.Therefore, decisions may be taken based on outdated information, except if the robot is able to update their own physical capabilities over time.This feature is implemented by L-ALLIANCE [3], where robots performing a given task are observed by other team-mates, so that robot capabilities can be evaluated and updated.On the other hand, distributing tasks based on the most appropriate robot for the task do not guarantee the best team performance when there are several tasks to be performed at the same time.One possible way to solve this problem is by allowing agents to learn which is the best task distribution based on the overall team performance.However, the literature on heterogeneous systems does not contain any work on learning in Multi-Robot Systems.In order that robot capabilities do not overlap the decision making goal, the different elements of the system need to be coordinated efficiently.For example, in [6] three robots in charge of assembling a structure co-operate through explicit coordination using a layered architecture.In [7] the movement of two robots with different features is coordinated using the information on their controllers.
B. Benefits of heterogeneous teams over homogeneous ones
The question of the benefits of heterogeneity in robot teams does not have a clear answer.On one hand, the benefits of heterogeneity are evident when all capabilities needed for a specific task can not be built into one robot.However, when robot capabilities overlap, these questions cannot be answered.Only [3] was able to determine, in a group of robots with overlapping capabilities performing several tasks, which performance decreased as heterogeneity increased.The answer to these two questions should help designers create and exploit these systems.This paper addresses the first question in depth by explaining and testing a new algorithm for learning in a heterogeneous multi-robot system.The second question is extensively analysed by summarising the results of applying the algorithm detailed in the first part of the paper to a set of homogeneous and heterogeneous robot teams.
III. EXPERIMENTAL DOMAIN
For this research robotic soccer has been selected as the test-bed for several reasons.According to [8] RoboCup is an attempt to foster AI and robotics research using a soccer game as a representative domain in which a wide range of technologies can be integrated and new technologies can be developed.The framework of the RoboCup Physical Agent Challenge provides a good test-bed to see how physical bodies play a significant role in carrying out intelligent behaviours.In addition, we are familiar with this domain, as we have participated over the past years with a team of real robots in several competitions around the world.The experiments presented in this paper have been run on a simulator called JavaSoccer that emulates the Small Size League in RoboCup.
IV. AN ECOSYSTEM BASED MODEL FOR PHYSICAL AGENTS
The literature on multi-robot and multi-agent systems represents a large amount of research aimed at enabling a group of robots to learn and adapt (see [9] for a survey).However, most of the approaches are not conceived for dealing efficiently with heterogeneous members and, in particular, for learning how to complete a joint task efficiently.One of the particular features of these systems is the size of the state space.As the components are different, the number of possible solutions to the problem increases dramatically.One of the most interesting works related to this research was developed by [10].In this work, sets of physical components (pump, heater...) learn to participate in the negotiations to execute a given plan based on their physical capabilities.Although this work presents some interesting issues, it does not deal with the complexity of the problem completely.This section leads to the selection of the algorithm for this task.
Quite often researchers have turned to natural phenomena to solve complex problems.One of the most popular sources of inspiration is insect societies.The research done in this area is also known as swarm intelligence and has been applied to solve a huge number of problems (from telecommunications routing to robot control) [11].This research builds on two different aspects: on one hand, a natural phenomenon that allows task partition in animal and insect societies through competitions among members [12] and, on the other, the work done by [13] on chaos in distributed systems and the model developed as a result.
Heterogeneity is an intrinsic feature of many insect and animal societies.Insects usually live in groups or colonies and work co-operatively towards the same goal.In these societies tasks are partitioned according to the physical abilities of each member of the colony (see [14] for detailed examples).One of the phenomena that allows tasks to be partitioned is competition among the members.Insects and animals (including humans) that live in social groups establish a social structure called dominance hierarchy within their groups.This hierarchy serves to maintain order, reduce conflicts and promote cooperation among group members.Each member of the group has a position within the dominance hierarchy based on the outcomes of interactions with other members.The tasks that each member carries out depend on the position in the hierarchy.In many cases, the resulting hierarchy is affected by the physical features of the group members.This phenomenon offers an example of how nature solves the problem of task allocation in a heterogeneous system.A model has to be defined to emulate this process for the purpose of learning in heterogeneous multi-robot systems.Hogg and Huberman [13] developed a model to study heterogeneous collections of agents competing for the use of bounded resources.Agents select resources depending on the number of agents already using them and compute a perceived pay-off for their use.The number of agents of a given type also increases depending on how well each type of agent is performing with respect to other types of agents.Although this model was developed to study computational ecologies [15], it can be reformulated so that it emulates the competition for responsibilities and tasks in a group of agents depending on how well each member is doing.The next section details how this model has been adapted to deal with the problem of learning in heterogeneous multi-agent systems.
A. Ecosystem description
The main aim of this model is to define how agents change their preferences for performing a given type of task and how they co-operate.This model is also referred to as the ecosystem, as it models some phenomena observed in natural systems.In this ecosystem a set of agents S = s 1 , s 2 , s 3 , s 4 , s 5 competes for a set of roles R = r 1 , r 2 , ..., r 10 , r 11 (representing a subset of responsibilities).In this explanation subindex r stands for roles and subindex s for agents.Agents in this ecosystem have different physical dynamical features and roles are roles in a soccer team.These agents use the information about the roles in the form of rewards to decide which roles to use.The way these preferences and rewards are combined is represented in a modified model of [13].This approach also allows the fitness (how well each member is doing when compared with other members) of each species of the ecosystem to be modelled.This fitness affects how agents interact with each other, as well as each agent's preferences.
B. Role definitions
In this domain tasks are roles on a soccer team.Each task contains a limited number of high level actions (kick b all, move b all, def end g oal, cover g oal, regain b all...), some of which are shared by several roles.Rules propose actions to agents depending on the position of the ball and the player on the field who is taking the decision.Each condition is a fuzzy variable, whose values are defined by a fuzzy set.Each rule has a certainty (ϕ) associated with it (φ ∈ [0, 1]).This value depends on the level of activation of each condition and the operators (AND/OR).Conditions and operators are combined together using fuzzy algebra and implemented by means of a possibilistic approach.The roles are: goalie, fullback, left/right defender, defensive midfielder, left/right midfielder, centre forward, left/right winger and central striker.Two or more agents can use one role at the same time, as long as it consists of more than one action.Each agent gets rewards when these roles are used.
C. Creating heterogeneous physical agents
Every agent in this ecosystem has different physical and dynamical features.From experience, big robots are heavier, as they carry more batteries and more material is needed to assemble them.This affects the speed of the robot and its dynamical behaviour.From our experience controller design, controllers tend to be more accurate when they are designed to control a robot that moves slowly, and this feature is also affected by the accuracy of the vision system.On the other hand, when a controller is designed to control (smaller) robots with faster speeds, it tends to be more inaccurate.Bigger robots can also have enough room to contain a powerful kicking device, while this is not so easy for small robots, whose kicking devices are not usually so powerful.Using these ideas, heterogeneous players are created by building robots ranging from fast, inaccurate, and with small kicking devices to small, accurate and with powerful kicking devices (see Table I).
The criteria applied to building the robots allow us to use robots that can contribute in different ways to the team's overall performance, as each robot has unique features.
D. Rewarding actions
Agents use the rewards they get from the action they select to compute their preferences and the fitness of each agent.Rewards are given by taking into account several aspects: • The player is situated properly on the field according to the role it is fulfilling • Ability to prevent the ball from going into the goal • Goals scored • Goals made • Dribbling ability • Ability to regain control of the ball • Ability to move the ball forward.
As each agent can use any role at any given time, every tw seconds (in this work 100 seconds) agent s adds all rewards obtained over this period of time for each role r (R rs ).Using additional information this agent tries to estimate the possible rewards (G rs ) if the given role had been used at each decision step.Several parameters are applied to compute this expected value.• Each agent keeps track of its decisions and every t w computes the number of times that role r has been used (P rs ), and then uses this value to compute p rs (normalising P rs for agent s).• µ rs is a confidence parameter that depends on the amount of information available to calculate G rs , see 1.This means that µ rs = h(P rs ).µ rs ∈ [0, 1].
G rs is also affected by the team's performance.If the goal margin is increasing with respect to the previous results, G rs increases proportionally to each role usage.Otherwise, the values of G rs decrease.This is modelled by the function learn.
Next, G rs is rescaled and if the value is under a threshold, values are not considered significant of agent preferences.This is achieved by means of the function sclp, which assigns a constant value to G rs if G rs is below a given value; otherwise, the value G rs is proportional to G rs .
This function filters out rewards and discards those under a given value, as they are not considered representative of agents' preferences.
E. Modelling agent
Preferences Once each agent knows its rewards it can start the process of updating the model parameters, which are the model agent preferences and the fitness values.
Preferences−f rs
Given G rs , each agent s can compute its current preferences P rs for each role r.This value is used to update rs , which can be understood as a weighted average of each agent's preferences over time.The average is helpful for several reasons: • Agents should be able to interact several times with the same roles to evaluate the rewards they get.• The preferences for each role are not only determined by the physical features of the players and the opponents, but also by the preferences of the rest of the team.This is a dynamic process in which each agent decision affects the rest of the team.On the other hand: • Agent preferences should be able to be updated quickly when they change as a result of a change in the environment.This value is used for the decision making process.High values of rs mean a high preference of agent s for role r resulting in a higher likelihood of using this role.These preferences can also change as a result of the changes in the value N s .Fitness −N s N s is used to compare each agent's individual performance with respect to the whole team and can be understood as the fitness of each agent in the group.Above average values of N s mean that agent s is better adapted to the environment and getting more rewards from it.This value is updated depending on η s , the current fitness using the latest rewards obtained by each agent over t w .The motivation for using this average is the same as the one for updating preferences (see Equ. 4,5, 6 and 7).
This model deals with the way decision making is done at two different levels.Each agent s tends to use the roles with the highest preferences ( rs ).When two agents think that role r is best for them, then N s plays a decisive role.This parameter increases/diminishes the values of rs , so agents with higher N s tend to win in these conflictive situations, resulting in a hierarchical structure of the system, in which some agents are dominant with respect to others in conflictive situations.This aspect is detailed in depth in the next section: decision making.F. Decision Making Agents co-operate in order to accomplish their goals.This co-operation technique focuses on reducing the number of conflicts between several agents-for instance, preventing two or more agents from taking the same decision.This consensus technique has been used in soccer robotics [16] to reduce the amount of conflicts.The procedure is the following: initially each robot takes the decision with the highest certainty associated with it according to the revision process imposed by the consensus technique for fuzzy rule certainties.If two or more agents intend to take the same decision, the one with the highest certainty wins.This process is repeated until each agent has taken a different action.Agents use their communication capability to exchange information and reach agreement on their decisions.The revision process is based on two parameters, Prestige (P) and Necessity (N), which modify the value of certainties (ϕ).P is related to each agent's preferences.Both parameters have unique values depending on each agent.Here both values are used without a subindex for the sake of the explanation (see later on in this section for more details).
Prestige performs a linear transformation over ϕ , as described in (8).
Then, Necessity performs a non-linear transformation over ϕ' as described in (9).
Prestige can be understood as the confidence that an agent has in one role.Higher values of Prestige mean high confidence values for these roles.Prestige is a conservative parameter.If an agent has a low confidence in a role, the Prestige value will be very low.Here this value is assimilated to the preferences of each agent.
Necessity is a parameter that increases ϕ' according to the necessity of the information source that is being revised.This is a non-linear parameter that prevents agents from settling in one/several roles too quickly.
In order to deal with the idea of preference for one role, the value of rs is used to assign the parameter Prestige (P) for the certainty revision process (see (10)).
where k is a constant value.The values of Necessity are computed depending on the use of roles.We use this parameter to make sure that the robot initially uses all the roles.As the agent interacts with the roles and starts to show more preference for one or more roles, the effects of this parameter tend to disappear (see (11)).
where δ is a constant value and One of the most interesting features of this algorithm is that it not only allows a group of heterogeneous components to learn how to perform a joint task co-operatively, it also allows a group of agents to adapt to changes in the environment while working as a team.In order to illustrate the explanations a simplified view of the robot decisions has been developed.Robots are represented according to their role usage, and each role has a position on the field.An average position on the field is determined based on the role usage of each robot.Fig. 1 displays the typical positions defined for each role.
A. Learning
Learning takes place through a self-organising process, in which agents interact with the environment and other teammates.Team-mates interact through co-operation and competition to fill roles.The result is an emergent process in which agents converge to roles.Fig. 2 shows how the five robots used in the experiments carry out the roles.Two different phases can be perceived in this learning process: exploration and consolidation of agent preferences.During the first phase agents interact with the environment and tend to fill all the roles several times, which can be observed in the middle field (in order to have a clearer view of the results, the information represented is the average of several periods).As initially, preferences are very similar, the exploration of the environment is determined by Necessity, one of the parameters that defines the consensus process.Using this parameter ensures that all agents interact several times with all roles.At this point in the process the team has chaotic behaviour, as agents start using a given role but, a few seconds later, switch to a different one.Agents start to focus on a small number of roles, even though they are still exploring the environment.After a given time, they usually focus on two or three different roles, as their preferences for them are slightly higher than for the rest of the roles.One of the things that usually takes place at this point in the game is that several agents may become interested in the same role.This is due to the same factors detailed previously.This gives rise to competition for roles, which emulates the competition for limited resources that can be observed in animal societies.Agents that have a similar interest in the same role tend to fill the same role in turn, as the Necessity parameter allows agents to explore other roles when the preferences are still very similar.The result of this process depends on the following factors: • Opponent's skills.The disorganised behaviour of the system may keep the team from reaching attacking positions, for instance, if the opponent has very good attacking skills.This may force agents to focus on defensive/midfield roles at the beginning.• Physical features.Each robot's physical features determine the number of rewards that each player can obtain; the skills needed to fill a role usually depend on the opponent.• Other team-mates.Other team-mates often have a similar preference for the same role.This situation can make it more difficult for other agents to use a given role, as the number of conflicts increase (two or more robots decide to choose the same action in the same field area).This may decrease the probability that other players select this role.
During the last phase in the learning process, agents tend to increase their preferences for the role that they were using most of the time at the end of the previous phase.In some cases, some players may finally switch to a different role as a result of the increased order in the system.At the end of this process a set of agents is assigned to different roles.The whole process leads the team to develop a team formation adapted to the opponent with very efficient role distribution among the heterogeneous players.Results show that the result of the learning process is that the team performs, on average, around 90 % of the best hand-coded solution.However, this result can be obtained in about 30 minutes of interaction, while the hand-coded solution may take several days, as the designer needs to know which robots are best suited to a given position and a given opponent.If the opponent changes the designer has to start the whole process again.
Fig. 3 shows how the score for a learning team and a hand-coded team changes over time.The learning team is Fig. 3. Learning process represented by the black line and the opponent by the dotted line.Although initially the opponent performs better than the learning team and the goal margin is increasing, the learning team is able to self-organise and increase the number of goals scored, while the opponent is only able to score a few goals after step 20.
VI. ADAPTATION
One of most interesting features of this algorithm is that it allows a team of agents to adapt/develop when it is working and performing reasonably well.The key for this feature is the concept of fitness (N s ) and the function sclp.As the environment evolves (changes in the opponent's behaviour or in physical features of a robot), the amount of rewards that each agent obtains from the environment also changes, thus modifying the preference and fitness for each agent.In some cases, agents do not get enough rewards to keep their current role preferences (due to the filter imposed by sclp).In this situation, these agents start to explore the environment again, while the other members continue to fill the same roles.They tend to settle into a different role, thus improving the team performance in the new environment.Fitness contributes to adaptation, especially when the physical features of one of the members change as a result of a mechanical failure.
The following example analyses how the team is able to adapt when one of the team components breaks down, as in example A1.A1 is filling the role of striker when its speed falls from 0.45 m/s to 0.15 m/s and its rotational speed from 7.28 rad/s to 5.28 rad/s.Its performance falls from a goal margin of 80 goals to 30 goals.A3 and A2 are filling the roles of left and right midfielder, while A5 fills the role of goalie and A4 that of defensive midfielder (see Fig. 4).
After the robot failure the team starts to evolve thanks to the change in fitness.A2 tends to move towards more offensive positions, while the other agents, except A3 which also plays in attacking positions, keep their current positions.The reason for these changes is that A1 is not able to play its striker position efficiently any more, as a result of which its rewards decrease, and thus its preferences for this role and the fitness.This allows A2, and also A3, to participate more often in attacking positions.This change in the team configuration results in a better team performance, which jumps to a goal margin of 60 (see Fig. 5).Finally A2 replaces A1 in the role of striker and A1 fills the role that A2 was filling.As A2 is damaged, A4 helps it by playing on the right side of the field.As a result of these changes in team configuration, the performance jumps to a goal margin of 76 goals (see Fig. 6).This example shows that the team is able to adapt while playing efficiently.
If the same fitness value is considered for each robot during the entire process, the system is not able to fully adapt, that is, the performance improves only slightly.This is explained by the fact that the decrease in A1's fitness allows other robots to participate in A1's tasks in attacking positions, as the other members tend to win more conflicts with A1, in this example Fig. 6.Team adaptation 3 A2.As this robot tends to fills offensive roles more often, the overall team performance increases, thus increasing the rewards for A2 in the attacking roles.
VII. HETEROGENEITY ANALYSIS
The algorithm developed has been applied to a set of homogenous and heterogeneous teams to analyze the benefits/drawbacks of heterogeneity.Five homogenous and five heterogeneous teams were built.Heterogeneous teams were composed of a combination of the robots detailed in Table I, while homogenous teams were composed of 5 identical robots of each of the types described.These teams played against 3 hand-coded teams that represent 3 different levels of difficulty (easy, intermediate and difficult).Eighty simulations were done for each case.Table II details the results of the simulations as a goal margin.The best team against a given opponent is assigned a 1 and the worst team a 0.
Results show that heterogeneous teams generally perform better than the homogeneous ones: 4 out of 5 heterogeneous teams perform better than the homogeneous teams.Analysing each case individually, in two cases a homogeneous team performed better than any heterogeneous team.Although H5 is the best team against the third team, it is the worst against the first and second teams.Heterogeneous teams tend to perform more uniformly than homogenous ones.Their performances show a considerably smaller variance.An in-depth analysis of the results shows the following: • Task specialization.Physical features demanded for a given role depend on the tasks that must be fulfilled.Heterogeneous teams present a wider range of capabilities.Thus, a correct distribution of these capabilities may result in better efficiency when the team performs the different tasks.• Flexibility.One of the features observed in the results is that role distribution in a team of heterogeneous components changes depending on the opponent.This means that the features demanded for each role also depend on the opponent.Thus, heterogeneous teams, on average, tend to perform better because they can combine their capabilities in different ways.Another feature observed in heterogeneous systems is the surprising results of combining skills when the components of the team cooperate.• Skills combination.There are several examples (HT1 and HT2, HT3 and HT4) where replacing a robot that individually performs worse against the same opponent results in a poorer team performance.
VIII. CONCLUSIONS AND FUTURE WORK
This paper has presented a novel approach to allow a team of heterogeneous members to select the best roles for them based on the other team-mates and the environment.In addition, the algorithm detailed in this paper allows the team of robots to adapt while still performing efficiently.Finally, this algorithm was applied to several teams of homogeneous and heterogeneous robots.The results have shed some light on the question of heterogeneity in multi-robot systems.We will extend this algorithm to other domains and other problems.One of these new domains is Rescue [17], in which mainly heterogeneous teams of robots are deployed to rescue people after a catastrophe.The algorithm can also be improved so that tight co-operation can be learnt.This model will be tested using other decision-making processes.
Team Performance Performance Performance Average 1 Team 2 | 7,305.2 | 2007-09-30T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Thermodynamically Stable Intermediate in the Course of Hydrogen Ordering from Ice V to Ice XIII
Even though many partially ordered ices are known, it remains elusive to understand and categorize them. In this study, we study the ordering from ice V to XIII using calorimetry at ambient pressure and discover that the transition takes place via an intermediate that is thermodynamically stable at 113–120 K. Our isothermal ordering approach allows us to highlight the distinction of this intermediate from ice V and XIII, where there are clear differences both in terms of enthalpy and ordering kinetics. We suggest that the approach developed in the present work can also reveal the nature of partially ordered forms in the hydrogen order–disorder series of other ice phases.
W ater is a simple molecule, but its unique physicochem- ical properties are key to life as we know it, to many geological processes on and within Earth, and to the evolution of planetary systems.Its crystalline forms, ices, are known for their tremendous structural variety, where 20 polymorphs are experimentally accessible today. 1−3 In most cases, each hydrogen-disordered phase has a single ordered counterpart with a symmetrically equivalent oxygen lattice.
In general, ordered structures can be found at lower temperatures for their lower enthalpy, which leads to their lower free energy.On the other hand, disordered structures become dominant at higher temperatures because the contribution of the configurational entropy overcomes the enthalpic disadvantage.Under the prerequisite that the transition is reversible and the system is in equilibrium at isobaric condition, the enthalpy difference (ΔH) between the hydrogen order−disorder pair can be related to their difference in configurational entropy (ΔS conf ) which corresponds to the difference in the degree of hydrogen (dis)order through ΔS conf = ΔH/T c , where T c indicates the equilibrium order−disorder temperature.
The description as a pair of one with full disorder and a counterpart with complete order is an ideal classification.Experimentally observed ice phases are sometimes neither of them, but take partially hydrogen-(dis)ordered structures (e.g., ices III/IX, 4−7 V, 7,8 XIV, 9,10 XV, 11,12 and XIX 13,14 ), in which the molecular orientations feature both some randomness and certain order at the same time.Their understanding is hampered mostly because of kinetic reasons: molecular reorientations become too slow at low temperatures and are often frozen before the ordering process has completed, producing what might be called a glassy state of molecular orientations in which the oxygen arrangement displays longrange order.By contrast, amorphous ices feature a glassy nature of the oxygen atom arrangements.The kinetically frozen, orientational glassy state is often not clearly distinguished from truly thermodynamically stable partially ordered phases (e.g., see refs 12 and 15−18).Previous experimental studies mostly focused on obtaining samples ordered enough to distinguish them from disordered phases experimentally, such as by calorimetry, 19 Raman, 20 and neutron diffraction. 9Various approaches have been attempted to enhance the degree of hydrogen order such as acid/base dopant, 9,21 H/D dopant, 13 slow cooling, 10 and cryo-storage. 22evertheless, resultant "ordered" phases still contain substantial degrees of disorder (ices IX, 4−6 XI, 21,23−26 XIV, 9,10 XV, 11,12 and XIX 13,14 ).It has remained unclear whether such ices represent merely transient states on the way to the ideal ordered ices or whether these ices are thermodynamically stable forms distinct from the ideal ordered ices.Such ambiguity is also shared for some disordered phases such as ices III 5−7 and V. 7,8 In other words, the fundamental details of the order−disorder phenomena are still far from complete, despite decades of research on ordered ices.
In this study, we investigate the calorimetric behavior of the ice V−XIII pair as a model case to elucidate the hydrogen ordering process in ice.Here, ice V−XIII is selected for its three properties: (i) a completely ordered configuration can be defined for ice XIII, 9,27,28 (ii) the hydrogen order−disorder transition takes place reversibly at ambient pressure, 9,19,28 and (iii) the orientational glass-transition temperature is below the hydrogen order−disorder transition boundary, 19,29 which means that the water molecules have enough mobility to rearrange the configurations.
Ice V is a thermodynamically stable crystalline phase at around 0.5 GPa and 250 K. 30 Ice V is recognized as a disordered phase but is also known to contain partial order. 7,8n contrast to some other "ordered" phases, its ordered counterpart, ice XIII, can form experimentally in an almost completely ordered structure except for a small degree of remnant disorder. 9,28The enthalpic preference of the ice XIII configuration is also confirmed by calculation from density functional theory. 27This means that the ideal ordered structure without residual configurational entropy can be defined explicitly for ice XIII.Here, we refer to "ice XIII" as the ideally ordered phase but also transient states that continuously transform into the ideal order.
In practice, pure ice V (without the acid dopant) does not order to produce ice XIII, 31−33 and the hydrogen ordering needs the assistance of acid dopants like HCl. 19,28 The acid dopant facilitates hydrogen ordering by the addition of Bjerrum and ionic defects in the crystal lattice. 9,19,28Dielectric spectroscopy reveals forty thousand times faster hydrogen dynamics in doped ice V. 29 The ice V−XIII phase transition takes place reversibly at 110−125 K and ambient pressure, 9,19 below the temperature of its irreversible decomposition into stacking-disordered ice I sd. 32Moreover, previous studies indicate that the molecular kinetics is unfrozen in HCl-doped ice V/XIII above 103 K (dielectric spectroscopy 29 ) or 105 K (calorimetry 19 ).The reversibility of the transition and the facile molecular reorientations provide us with direct access to the thermodynamic properties under equilibrium conditions.It should be noted that in the energy landscape at ambient pressure, ice V/ XIII is in the metastable basin mostly defined by oxygen sublattice compared to ordinary ice I. Hydrogen order− disorder takes place among the many shallow energy minima within this metastable basin.
Ice V undergoes two-stepped hydrogen-(dis)ordering events upon heating/cooling. 15,19,34Authors of previous detailed structural studies put forward the idea that these features are related to separate processes at two different types of hydrogen bonds. 9,15,19,28Such an interpretation fits with experimental observations such as the crystal structure model refined from neutron diffraction.However, this idea is too simplified to describe the complicated hydrogen ordering phenomena.Rather than that, other approaches such as a statistical description based on mixtures of configurations would be needed (e.g., see refs 12, 27, and 35).Thus, a comprehensive understanding of the two-step ordering events is also far from completion.
A slow cooling technique is a common approach to increase the degree of order (e.g., see refs 19 and 28), but several possible types of ordering processes take place over a temperature range and we can never know whether the product is sufficiently equilibrated, representing a thermodynamically stable phase or just an orientational glassy state frozen transiently.Here, an isothermal annealing approach is introduced that allows us to extract kinetic and thermodynamic properties simultaneously, allowing us to separate and isolate different types of ordering processes.This supersedes the previous state-of-the-art technique of continuous cooling/ heating cycles and, thereby, opens the door to elucidate the (dis)ordering process in ice V/XIII in an unprecedented way.This approach provides us with access to the intermediateordered ice that has been inaccessible previously.
Figure 1 shows calorimetry scans representing the thermal behavior of ice V/XIII upon cooling at 2 K min −1 at ambient pressure.Curve 1 features two exothermic processes upon continuous cooling, corresponding to the two-step hydrogen ordering, as reported. 15,19,34To exclude the possibility that the ordering is a single process that takes place with two types of kinetics, e.g., kinetics of bulk and surface, a separate experiment was done.This experiment involves two cooling scans from 134 to 115 K and from 119 to 93 K.In between, 40 min of isothermal annealing was applied for the sample at 119 K, i.e., below the first endotherm, but above the second one.The first scan (Curve 2-1) before the second exotherm is identical to the continuous cooling (Curve 1).In the second scan starting from 119 K after isothermal annealing, ice V/XIII still exhibits the second exotherm at ≈114 K upon cooling (Curve 2-2).This observation rules out the scenario of an orientational glassy state.If the second exothermic feature comes from a heat capacity undershoot in the glassy scenario, 18 this undershoot should disappear or diminish after providing the system enough time for the molecular reorientations. 29evertheless, in trace 2-2 in Figure 1, it clearly remains even after a 40 min anneal.Considering the reversibility of the ordering transitions, 19 these processes are attributed to hydrogen ordering in ice V/XIII.Furthermore, the separated 2 shows the results of the heating scans for isothermally annealed ice V/XIII at different anneal temperatures T anneal = 100−119 K for various anneal times t anneal = 0.1−362 min.As seen in the inset of Figure 2, the size of the endotherm increases with annealing time at 110 K.This indicates that hydrogen ordering proceeds with time.In the limit of infinite time, this annealing produces a thermodynamically stable state at T anneal .At finite times, transient states are encountered that slowly converge to equilibrium.As a result, ΔH monotonically increases with t anneal and reaches a plateau after long t anneal.The mere observation of the plateau suggests that the ordering converges to a thermodynamically stable, equilibrated state.In the range of 100−110 K, the same kind of plateau is reached, suggesting that the same type of equilibrated and ordered state is reached.Yet, at 115−119 K, lower-lying plateaus are reached (Figure 2), which suggests that different types of order form in equilibrium.
Here, an exponential-based function is introduced to analyze the development of hydrogen order, as represented by ΔH, which converges to a specific value.The actual formula is similar to the modified Johnson−Mehl−Avrami−Kolmogorov (JMAK) equation which is widely used for nucleation and growth 36−39 as well as kinetic study for hydrogen (dis)ordering in ice VI−XV−XIX. 40The ΔH increase upon isothermal ordering as a function of time (t) can be formulated as where k is the rate constant and n corresponds to the Avrami exponent.Here, ΔH max is the maximum enthalpy change that is reached asymptotically after a long t anneal , and t 0 is the time offset for ordering before isothermal annealing.That is, ΔH max represents the limit of infinite time, corresponding to a thermodynamic property.On the other hand, k describes the kinetic property, i.e., how fast ordering proceeds toward the equilibrium.In many cases, the value of n allows for mechanistic interpretations, but practical interpretations of experimentally derived n values for complicated events are not straightforward (e.g., see ref 41).Thus, we focus on only two properties to extract the characteristics of the hydrogenordering behavior of ice V/XIII: (i) thermodynamic (ΔH max ) and (ii) kinetic (k).
The thermodynamic property (ΔH max ) is almost constant at 210 J mol −1 up to 110 K and then has a clear kink at 110−113 K (Figure 3a), identical to the disordering onset temperature T c = 112−113 K of ice XIII. 19,29The similar ΔH max at T anneal = 100−110 K indicates that we deal with one ordered phase in this temperature range, namely ice XIII.At 112−113 K and below, ice XIII appears as the dominant phase, with tiny contaminants of other configurations.
On the other hand, ΔH drastically drops above 113 K.This implies that ice XIII is no longer the dominant phase.Instead,
The Journal of Physical Chemistry Letters
a different type of order starts to dominate.In other words, this threshold of 110−113 K is the crossover temperature between ice XIII and another thermodynamically stable intermediate featuring partial-order, hereafter called the β intermediate of ice V/XIII.The boundary between the β intermediate and ice V is at ≈120 K (SI Figure S2).In other words, the ice V/XIII system features ice V at T > 120 K, ice XIII at T < 113 K, and the β intermediate at 113−120 K.These correspond to distinct potential minima in the Gibbs free energy landscape into which the system equilibrates.
For the time being, detailed structural characterizations are not available for this β intermediate.However, we can see a hint of the structural discrepancy in the temperature dependence of the c-length which drops at 112−120 K assigned for the β intermediate (SI Figure S7; Figure 3 in reference 28).This is in contrast to the continuous changes in the a-and b-lengths (Figure 3 in reference 28).The anisotropy in lattice parameters often reflects the difference in hydrogen ordering manner as seen in the case of ice XV and XIX. 13,14he anomaly in c-length can be a result of the difference in the hydrogen ordering manner of the β intermediate from both ice V and XIII.
The kinetic property (k) increases in general with temperature (Figure 3b) except for T anneal = 119 K, close to the upper limit for the β intermediate.If the hydrogen ordering is governed by a single type of kinetics, the rate constant can be described with the pre-exponential factor (k 0 ) and the activation energy (E a ) as Two distinct trends are found in the Arrhenius plot (Figure 3b) with the temperature threshold of 110−113 K (see fits in Figure 3b), just like for ΔH.The linear fits correspond to activation energies of 20.4 (18) kJ mol −1 below 110 K (for ice XIII) and 53 (7) kJ mol −1 between 113−117 K (for the β intermediate).That is, there is not only the distinction in ΔH max , but there are also two types of potential barriers differing in height.This again demonstrates that the β intermediate is clearly different from ice XIII.
Let us now switch from the ordering upon cooling to the disordering upon heating.The kinetics of disordering can be deduced from the shape of the endotherm, especially the peak width and the peak top temperature.At a fixed heating rate, these reflect how fast disordering takes place: narrow peaks and early peak tops reflect fast disordering kinetics.In some cases, a wider peak can arise from two overlapping events (e.g., see ref 42). Figure 4 summarizes the endotherm shift (ΔT top ) defined as the difference of the peak top compared to the reference case (peak top at T top = 119−120 K), which is ice V/XIII cooled at 30 K min −1 without annealing in the same calorimetry run.
In general, well-ordered structures can survive up to higher temperatures because molecular reorientations are locked by a high energy barrier. 29This high barrier originates from the high cost of introducing a defect in the highly ordered structures, as seen in the monotonic increase of ΔT top up to t anneal ≈ 10 min (SI Figures S3).For T anneal = 115−120 K (assigned to the β intermediate), ΔT top reaches a plateau of 1− 2 K.The plateau ΔT top decreases at higher T anneal , which is simply attributed to a lower degree in hydrogen order as represented by ΔH max (see Figure 3a).
The highest ΔT top and hence most thermally stable types of hydrogen order are reached for T anneal = 113 K (Figure 4).Especially after long anneals of t anneal = 1000 min, ΔT top reaches up to ΔT top ≈ 5 K for T anneal = 108−113 K.Such upshifts represent the highest kinetic thermal stability, corresponding to the well-ordered structures of ice XIII with fewer disordered defects.This trend is consistent with the high ΔH max values (Figure 3a).In more detail, the large ΔT top is not a result of a simple shift of the whole endotherm but an enhancement of a higher-temperature feature seen as a shoulder at ≈125 K (red curve in the inset of Figure 4; see also SI Figure S5).Such two features in the endotherm can be observed in the slow heating of ice XIII 15,19,34 without the annealing protocol, but the lower-temperature feature is more prominent (detailed in SI Section S5).Here, this highertemperature feature is assigned mainly to the disordering of well-ordered XIII.That is, the isothermal annealing protocol can produce a properly ordered ice XIII.
Such a well-ordered XIII is expected for all T anneal below 113 K from the trend of ΔH max (Figure 3).However, ΔT top becomes negative after long anneal for T anneal below 103 K, despite the high ΔH max .This instability can be ascribed to tiny disordered domains remaining in the structure from ice V.These cannot convert to the ideally ordered ice XIII structure for kinetic reasons, which makes the overall ordered state the orientational glass.This temperature window of kinetic freezing is consistent with the previous indications from dielectric spectroscopy 29 and calorimetry. 19These glassy remnants behave as orientational defects of the ordered structure and promote disordering.Considering that the formation of the well-ordered ice XIII needs t anneal longer than 6 h at T anneal below 113 K (SI Figure S3), the reported ice XIII prepared by slow cooling (0.1−0.2 K min −1 ) 9,28 is considered to be frozen in a transient state.That would be a The Journal of Physical Chemistry Letters reason why a small degree of disorder still remains in ice XIII even at 12 K.
In summary, we have developed an isothermal annealing approach involving calorimetric heating scans that provides access to both the kinetics of hydrogen ordering and the thermodynamic properties of the resulting ice, as well as their stability against the disordering.This approach is applied to the case of hydrogen ordering in the ice V/XIII pair and allows us to identify a thermodynamically stable intermediate state called the β intermediate.We focus on the limit of long times, i.e., equilibrated conditions, at ambient pressure avoiding the common uncertainty of ex-situ experiments whether or not the observed ice is transient or thermodynamically stable.Such distinguishments were hampered in many studies on ice polymorphs, especially those that include irreversible changes, due to several factors such as pT-dependency in high-pressure preparation (e.g., see refs 17, 42, and 43).
In more detail, below 113 K, the single completely ordered configuration, known as ice XIII, 9 is dominant.This boundary has been regarded as the disordering temperature from calorimetry 19 and dielectric spectroscopy. 29Above 120 K, the disordered state, known as ice V, forms with some partial order. 7,8Our study points out the existence of the thermodynamically equilibrated β intermediate which is distinct from both ices XIII and V in enthalpy, implying also differences in hydrogen order.The previously unexplained two events upon cooling observed in calorimetry scans 15,19,34 can now be attributed to these two types of thermodynamic boundaries from our study.Moreover, long annealing at 110− 113 K exhibits the highest kinetic thermal stability of ice XIII against disorder upon heating, which is attributed to a wellordered structure.This also suggests that the ordering of ice XIII may proceed further when using appropriate annealing protocols, superseding earlier slow-cooling literature protocols.
These findings highlight the complexity of the hydrogen (dis)ordering phenomena, far from the picture of one-by-one pairs of ordered and disordered ice forms.Different types of order can develop, as clarified through the recent discovery of hydrogen sublattice polymorphism. 13,14In the present work, we go one step beyond the previous ice XIX study and show that the β intermediate represents a thermodynamically stable state of partial order.Further elaboration will need computational approaches (e.g., see refs 27 and 44) and their complementation with experimental observations such as vibrational spectroscopy 20 and neutron diffraction. 9,28Such an intermediate can show up in other ice phases or would be dominant ubiquitously.Specifically, the known "ordered" phases which retain substantial degrees of disorder, such as ices IX, 4−6,33 XI, 21,23−26 XIV, 9,10 XV, 11,12 and XIX 13,14 may not be the most ordered phases but represent intermediates just like the β intermediate revealed here.
■ EXPERIMENTAL METHODS
Ice V was prepared by crystal−crystal transitions starting from ice I h containing 0.01 M HCl upon isobaric heating of at 0.5 GPa up to ≈250 K using a piston−cylinder cell, following the established procedure. 9,19,45Afterward, the samples were quenched at 77 K and retrieved at ambient pressure.The hydrogen (dis)ordering processes of ice V at ambient pressure were investigated by differential scanning calorimetry (DSC).Before all the measurements, the sample was heated once to 134 K to erase any kind of hydrogen order from ice XIII and to produce ice V.This ambient-pressure preprocess eliminates the uncertain factors on the hydrogen order which potentially occur at high pressures during the preparation.
Three sets of DSC runs were performed to focus on (1) the hydrogen ordering process upon cooling, (2) the ordering process as a result of isothermal annealing, and (3) the disordering behaviors of long-annealed samples upon heating.For run set (1), cooling scans were collected at 2 K min −1 in T = 93−134, 115−134, and 93−119 K as shown in Figure 1.The last scan was measured after isothermal annealing at 119 K for 40 min after the second scan.
For each scan in run set (2), the sample was cooled at 30 K min −1 to specific anneal temperatures (T anneal = 100−119 K).After annealing for a certain anneal time (t anneal = 0.1−362 min), the sample is quenched to 93 K.The thermal response to hydrogen disordering, an endotherm, was measured upon heating to 134 at 30 K min −1 (e.g., inset of Figure 2).The thermograms with the same anneal temperature T anneal were measured in a single DSC run changing t anneal .For run set (3), other data for longer t anneal values up to 1000 min were also taken in separate DSC runs.The enthalpy change (ΔH) upon disordering was evaluated by the integration of the endotherm after background subtraction and normalization.Further details are given in SI Section S1.
Figure 2 .
Figure 2. Enthalpy change ΔH upon disordering against anneal time t anneal .The colors and shapes of the symbols correspond to the anneal temperature T anneal described in the legend.Solid lines are fitted curves, using eq 1.The inset shows representative thermograms for T anneal = 110 K. Representative thermograms are aligned by subtracting the linear baseline derived for T = 100−104 K for a clear comparison of the endothermic features.
Figure 3 .
Figure 3. (a) Maximum enthalpy change ΔH max and (b) rate constant k as fitted parameters for ΔH with eq 1 against t anneal .The thick blue and red lines in (a) are shown as guides to the eye.The blue and red lines in (b) correspond to the linear regressions for T anneal = 100−110 K and 113−117 K, respectively.
Figure 4 .
Figure 4. Disordering kinetics inferred from the endotherm shift ΔT top against T anneal upon heating of ice V/XIII annealed during cooling in the calorimeter.Red diamonds and cyan circles correspond to t anneal = 363 and 1000 min, respectively.Inset shows the graphical definition of ΔT top for representative thermograms of ice V/XIII annealed at 110 K measured at 30 K min −1 in a single DSC run.Representative thermograms are aligned by subtracting linear baseline derived for T = 100−104 K for a clear comparison of the endothermic features.
The Journal of Physical Chemistry Letters pubs.acs.org/JPCL Letter
the ordered state of ice V/XIII in this intermediate temperature range has discrepancies in enthalpy from both ices V and XIII.That is, ice V transforms first into an intermediate, and then the intermediate transforms into ice XIII.The next question is whether the intermediate is just a transient state or a thermodynamically stable state as an outcome of equilibration. Figure https://doi.org/10.1021/acs.jpclett.3c03411exothermic peaks imply | 5,409.6 | 2024-01-25T00:00:00.000 | [
"Physics",
"Chemistry"
] |
CONDITIONS AND CHARACTERISTICS OF WATER CRYSTALLIZATION ON THE WORKING SURFACE OF EVAPORATOR HEAT PUMPS IN RESERVOIRS WITH LOW TEMPERATURES
Is carried out the mathematical simulation of heat transfer processes in the small environment of vaporizer camera of heat pump (HP) when the source of lowpotential energy is present. Are obtained the temperature distributions, which characterize thermal condition HP. Is carried out the comparative analysis of the results of mathematical simulation and experimental data from the work of heat-pumping installation in the conditions of relatively low (up to 4 °C) temperatures. In the experiments, the partial freezing around of the tubes of vaporizer is established. It is revealed, that the formation of the layer of ice on the surface of vaporizer leads to the temperature contrast by the volume of liquid in the camera and reduction in the effectiveness in the work of heat-pumping installation.
Introduction
The efficiency of the heat pump (HP) depends on several factors [1].The surface condition of vaporizer is one of them.In the case of its icing the heat-transfer, intensity of low-potential energy to heat-transfer agent is reduced.In the case of its icing the heat-transfer intensity of low-potential energy to heat-transfer agent is reduced.Therefore, the ranges of operating temperatures most common HP with the evaporative elements in the reservoirs with the relatively low temperature are substantially limited.But, from other side, the expansion of these ranges at least on several degrees can lead to the significant positive results in the work HP.Therefore the analysis of conditions and characteristics of the process of crystallizing the water on the working surfaces of the vaporizers of the heat pumps, which work in the reservoirs, is urgent task.Up to now studies of the processes of heat transfer in surrounding water evaporator were not conducted.
Statement of the Problem
In general, there are two basic modes of heat transfer between the cold water and the surface of the evaporator -free (in closed reservoirs) and mixed (with an ordered movement of water) convection.There are models that describe the processes of fluid flow and heat transfer in closed areas to conducted with local sources of heat release [2,3] in a mixed [4] supply in one of the boundaries of modeling).Models of [2][3][4] have been developed for conditions of intense local radiant heating [5.6] in the accumulation of energy in the building envelopes.Approach [2][3][4][5][6] is used for solving the problem of heat transfer in the evaporator HP (Fig. 1).
Figure 1.
The experimental heat pump system: 1 -compressor, 2 -capacitor 3 -expansion valve, 4 -evaporator We solved the problem of axisymmetric convective heat transfer for a rectangular cavity with a heat exchanger-evaporator of the heat pump (Fig. 2).Navier-Stokes equations and the energy of the water and the evaporator in this case is: here X, Y -dimensionless Cartesian coordinates; τ -dimensionless time; The initial conditions: ( , ,0) 0, ( , ,0) 0, ( , ,0) 0 Boundary conditions: -three external borders of the area set the conditions of insulation: ( ) -on the boundary ɏ=L given symmetry conditions: -at the boundaries of the heat exchanger: -number of Kirpichev; λ -the thermal conductivity of a solid wall, W/(m⋅Ʉ).The peculiarity of the problem being solved is to form on the evaporator surface layer of ice (crystallized water) has a significant impact on the conditions of the heat supply to the evaporator.Analysis of methods, algorithms and results of solving similar problems of heat transfer in the conditions of intensive phase transformations [7] showed that the modeling of intense absorption heat (evaporation) leads to a significant complication of procedures and algorithms, as well as to increase the duration of the repeated calculations.Similar problems should arise in the solution of problems of local intense heat (in the conditions of the condensation).For these reasons, when solving the problem stated above the effect of heat during the crystallization of water on the surface of the evaporator TN was ignored.The validity of this assumption was also confirmed by evaluation of convection, and conduction heat fluxes due to the crystallization of the surface of the evaporator using the machine [8].The numerical solution of the problem (1) -( 7) carried out by finite difference method using an algorithm for solving a proven group of tasks [2,3,5,6] conjugate heat transfer in areas with local sources of energy.To substantiate the reliability of the results of numerical modeling audited conservative difference scheme is similar [9,10].
Simulation results
Figure 3 shows the line current and temperature field, characterizing the convective heat transfer in a viscous incompressible fluid surrounding the evaporator of the heat pump.It was established that two of the central vortex formed: in the lower and upper parts of the rectangular area under consideration.At the initial time (1000 s) is well defined temperature inhomogeneity in the depth of the tank and there is a possibility of freezing of the heat exchanger.Cold water mass flow circulating about the heat is not mixed with the upper main fluid flow.As a result, a dead space, which reduces the heat transfer rate of the evaporator with the ambient liquid.When time is reached 15000 s with the temperature field in the cross section X = L becomes almost uniform height.
The methodology of experimental studies
Experimental studies were carried out using the apparatus (Fig. 1), which is a vapor compression heat pump with its dedicated camera (dimensions 0,20 × 0,25 × 0,26 m), the evaporator heat exchanger.As a source of low-grade heat is used with fresh water close to the real conditions of water bodies in regions with cold climates temperature (4 °C < T <14 °C).The experimental setup includes a compressor, an expansion valve, two heat exchangers (evaporator, condenser) and test equipment (Fig. 1).
In the development of the experimental procedure were the main tasks of ensuring the conditions of temperature measurement on the surface of the evaporator in the HP and small neighborhood.At the same time we take into account the conclusions [11], justifying the use of thermocouples to measure the temperature of the medium under the conditions of natural convection in confined areas with local sources of energy.The temperature of the low-potential power in the chamber, the evaporator in the experiments was 14 0 C; ambient temperature -T os = 20 ɨ C. The volume of water in the tank in the series of experiments was 0.016 m 3 .
Temperature measurement was carried out on the surfaces of heat exchanger tubes (see Fig. 5.6) and in specific points in space (in section X = L / 2) inside the pilot block (Fig. 7, 8) filled with water using graded chromel-alumel thermocouple junction diameter of 1 mm.To determine the errors of measurement results each experiment was repeated several times at the fixed initial data and external conditions.Measurements were carried out simultaneously by 4 blocks of 8 thermocouples each.
In the experiment, a thermocouple fixed in predetermined time intervals: first hour every 5 minutes, then for three hours every 20 minutes.
To connect an analog-to-digital converter with a personal computer, the adapter uses a network that provides galvanic isolation between devices.As a means of data processing software package used by LabVIEW.The values of temperatures were recorded in the real time with the retention of the obtained results of measurements.The total relative error of determining the values of temperature did not exceed 4% in the whole range of parameters.
The experimental results and discussion
In Fig. 4,5 are represented typical values of local temperatures, conducting of experimental studies obtained as a result.
Distributions Ɍ(τ) in Fig. 4 is characterized heat exchange between the heat-transfer agent and the heated liquid, whose intensity considerably was reduced after 9000 seconds of the work of heatpumping installation.It should be noted that from 1200 second begins the freezing around of the upper tube of vaporizer (Fig. 5).Reduction in the temperature in the lower part of the vaporizer (Fig. 5) it is noticeable to 10000 seconds, after which the rate of crystallization slows down also to the end of the experiment practically does not change (Fig. 6,7).
The results of mathematical modeling of free-convective flows in a closed region in the presence of heat source are presented in Fig. 2-6.Numerical studies were conducted at the temperature: initial -Θ 0 = 0, the heat source -Θit = 1, the environment -Θ e = 0.In the initial period of time rate of icing is high (Fig. 6, 7).This is due to the fact that there is intense heat transfer from the low-potential power source to the wall of the heat exchanger.But the ice cover reduces heat transfer in the evaporator and after 6000 s to work HP ice thickness reaches its maximum value (Figure 7).Intense convective flow, mixing of water masses in the region considered to be caused by a decrease in temperature at the initial time.At times more than 5000 seconds (Fig. 8) is noticeably sharp drop in temperature at several levels of the heat exchanger (thermocouples 1-4).This is due to the fact that in some areas the tank Ɍ≈4 0 ɋ.At this value of T becomes the maximum density of water and changes the structure of convective flow.After that there is a slight decrease in temperature due to convection and a decrease in the intensity of the beginning of crystallization of the water on the surface of the heat exchanger.
CONCLUSION
The results of experimental studies illustrate the process of crystallization of the water on the surface of the heat exchanger.Crystallization has a significant effect on the thermal regime of the evaporator of the heat pump.Forming a layer of ice on the evaporator leads to uneven temperature distribution over the volume of the liquid in the chamber and significantly reduce the intensity of heat transfer in the heat exchanger.The results allow for the selection of the operating mode of HP in full or partial freezing of the evaporator.
convection (also with local energy DOI: 10.1051/ C Owned by the authors, published by EDP Sciences, 2015
Figure 2 .
Figure 2. Scope of the task: 1 -the liquid in the evaporator; 2 -tube evaporator; L, H -the length and width of the solutions.
Figure 4 .Figure 5 .
Figure 4.The change in temperature of the liquid (1) and the carrier (2) in time
Figure 6 . 6 Figure 7 .
Figure 6.Change the thickness of the ice along the length of the evaporator tubes.
Fig. 8
Fig.8shows the values of the local temperature obtained as a result of experimental (in section X=0,1 ɦ ɢ Y=0,1 ɦ) and numerical studies.It can be seen that the change in temperature in time (characteristic points) meet the conditions of the generation of free-convection flow in this mode.
Figure 8 .
Figure 8. Change in water temperature at the time of low-grade heat source 14 0 C | 2,494.8 | 2015-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Development of a Negative Ion Micro TPC Detector with SF$_{6}$ Gas for the Directional Dark Matter Search
A negative ion micro time projection chamber (NI$\mu$TPC) was developed and its performance studied. An NI$\mu$TPC is a novel technology that enables the measurement of absolute $z$ coordinates for self-triggering TPCs. This technology provides full-fiducialization analysis, which is not possible with conventional gaseous TPCs, and is useful for directional dark matter searches in terms of background rejection and the improvement of the angular resolution. The developed NI$\mu$TPC prototype had a detection volume of 12.8 $\times$ 25.6 $\times$ 144 mm$^{3}$. The absolute $z$ coordinate was determined with a location accuracy of 16 mm using minority carrieres of SF$_{5}^{-}$. Simultaneously, there was a successful reconstruction of the three-dimensional (3D) tracks with a spatial resolution of 130 $\mu\rm{m}$. This is the first demonstration of 3D tracking with the detection of absolute $z$ coordinates, and it is an important step in improving the sensitivity of directional dark matter searches.
Introduction
Dark matter in the universe remains one of the unsolved mysteries in physics. Despite many worldwide experimental efforts, no experiment has reached a widely agreed discovery of the dark matter. Directional dark matter searches are said to provide clear evidence of the direct detection. Gaseous time projection chambers (TPCs) have been studied as a directional detector because they can detect the tracks of recoiled nuclei. Several groups, such as DRIFT [1], NEWAGE [2], and MIMAC [3], have developed gaseous TPCs, and measurements have been performed in underground laboratories.
For rare-event search experiments like dark matter searches, fiducialization is a powerful tool for removing external and internal backgrounds. An example of the successful application of this analytical technique is a two-phased detector using liquid noble gas [4]. In this application, the time difference between the first (before the drift) and the second (after the drift) signals is used to reconstruct the absolute z coordinate. Here, the z coordinate is defined as the drift direction. Unfortunately, the absolute z coordinate cannot be reconstructed with conventional gaseous TPCs because there is no effective way to know the time of the event as the scintillation signal was used in the two-phased liquid noble gas detectors. Therefore it was thus considered impossible to achieve the full fiducialization with gaseous TPCs. However, a discovery of "minority carriers" in a CS 2 + O 2 gas mixture by the DRIFT group enabled to measure absolute z coordinates and broadened the potential for the gaseous TPCs [5]. In electro-negative gases like CS 2 , electrons are captured by the molecules shortly after the interaction, and the negative ions instead of electrons are drifted. If more than two species of negative ions are produced, the difference in velocity can be used to measure the absolute z coordinate. Following the discovery of the minority carriers in the CS 2 + O 2 gas mixture, SF 6 , which is a safer gas compared to CS 2 , was found to perform in a similar way [6].
In addition, fluorine has a large cross-section for spin-dependent interaction. Therefore, SF 6 is considered to have excellent properties as a TPC gas for the directional dark matter search. Several studies on SF 6 gas have been conducted in recent years [7][8][9].
Despite the importance of 3D tracking performance as a directional detector, there have been no studies on 3D tracking for SF 6 -based TPCs. This is simply due to the lack of micropatterned gaseous detectors (MPGDs) coupled with readout electronics suitable for negative ion TPCs. We developed a prototype negative ion micro TPC (NIµTPC) with a micro-patterned gaseous detectors (MPGDs) and originally-developed electronics. This paper describes a study of NIµTPC performance, including the first demonstration of 3D trackings with absolute z coordinate reconstruction.
NIµTPC detector
In this section, the experimental measurement setup is described for the TPC, the readout electronics, and the operating system.
Negative ion micro TPC
A schematic drawing of the NIµTPC is shown in Figure 1. The gas amplification section consists of two layers of gas electron multipliers (GEMs; SciEnergy Co., Ltd.) and a micro pixel chamber (µ-PIC; Dai Nippon Printing Co., Ltd.) [10]. These devices were arranged in a cascade with a 3 mm transfer gap and a 3 mm induction gap. The substrate of the GEM was a 100 µm thick liquid crystal polymer with a hole size and pitch of 70 µm and 140 µm, respectively. The µ-PIC had 256 × 256 pixels with a pitch of 400 µm, which was read with 256 anode and 256 cathode strips. The anode and cathode strips were orthogonally formed, and two-dimensional imaging are realized by taking the coincidence of these strips. The cathode strips had circular openings with a diameter of 250 µm. The anode strips were formed on the backside of the substrate. Cylindrical anode electrodes with a diameter of 60 µm were formed on the anode strips, piercing the substrate at the center of the cathode openings (see Ref [10] for details). The detector was operated at a gas gain of 1,900, with the bias voltages shown in Figure 1. The effective areas of gas multiplication for both GEM and µ-PIC were 10 × 10 cm 2 ; however, the readout area was limited to 1.28 × 2.56 cm 2 due to the number of readout channels (32 + 64 strips).
A TPC detection volume with a drift length of 144 mm was formed using 12 copper rings, each with an inner diameter of 64 mm. These rings were connected using 50 MΩ resistors. The drift plane was made of stainless-steel mesh, and a negative voltage of −7.12 kV was supplied, forming an electric field of 0.40 kV/cm in the detection volume. A 241 Am α-ray source was set inside the vessel; the effective size of the source was 4 mm in diameter. The source position was controlled from outside the vessel using a neodymium magnet.
The performance of negative ion SF − 6 TPC is known to be affected by water vapor due to outgassing [6]. Water vapor contamination was monitored with a dew point meter (DMT152, Vaisala). At the beginning of each measurement, to reduce the initial water contamination, the vessel was evacuated to below 1 Pa, then flushed and filled with SF 6 gas at 20 Torr. A gas circulation system with Zeolum (A-3) was installed to capture the water vapor produced by the out-gassing. The water contamination level was maintained below 300 ppm during the measurement.
Data acquisition system
A schematics of the data acquisition system is shown in Figure 2. The 32 anode-strips and 64 cathode-strips were read by preamplifier chips (LTARS2014 [11]) through 100 pF capacitors. The preamplifier chip was a low-noise ASIC developed by KEK group in Japan for liquid argon TPC detectors. The amplified waveformes were digitized by a digital board at a sampling rate of 2.5 MHz. The dynamic range and resolution of the digitization were 2 V and 12 bit, respectively. The specifications of the analog and digital boards are summarized in Table 1. The data stored in the memory of the digital board were sent to the computer using SiTCP technology [12], as required by the trigger signal. The trigger was made one of two ways: a PIN-photodiode trigger for the measurement of drift velocity or a µ-PIC trigger created by the signal from the cathode or anode strip next to the detection area for 3D tracking measurements.
Performance of the NIµTPC
The performance of the NIµTPC, the detection efficiency of minority carriers, the location accuracy of the absolute z coordinate, and the spatial resolution of the 3D reconstruction are described in this Figure 2. Schematic of the data acquisition system.The red region is the detection area. The µ-PIC anode signals (32 channels) and cathode signals (64 channels) were processed by LTARS2014 chips. The output signals were digitized and sent to a computer. The triggers were made by either the PIN photodiode or the µ-PIC cathode or anode signal.
Minority carrier detection
Measurement was performed with the 241 Am source set at a position of z = 89 mm. This measurement was used to confirm the detection of minority carriers generated at a known z coordinate. The α-rays were emitted from the source, which passed through the detection volume and were detected by the PIN photodiode located on the opposite side. The signal from the PIN photodiode was used to determine the event timing, i.e., the time-zero of the drift time. The waveforms were smoothed with a Gaussian filter to suppress high-frequency noise during the first stage of analysis. Examples of 32-anode signals and an averaged waveform are shown in Figures 3a and 3b, respectively. A major peak due to SF − 6 negative ions and a minority peak due to SF − 5 can be seen in both waveforms and are more clearly visible in the averaged waveform. As the time-zero was set by the PIN photodiode, the signal time indicated that the drift time of these ions corresponded to a drift length of 89 mm. Average times for major peaks were calculated from thousands of events, and a drift velocity of 8.1 ± 0.2 cm/ms was obtained. Minority carrier drift velocity (8.9 ± 0.2 cm/ms) was obtained by the same method. These drift velocities were used to determine the absolute z coordinate, as discussed below.
It should be noted that some waveforms show more than two peaks in Figure 3a. Figure 3b shows a wide range components between the minority peak and the main peak. The waveform structure depends on the chemical reactions associated with electron capture in SF 6 gas. In particular, as explained in Ref [6], water vapor contamination and the strength of the electric field in the drift region have a significant effect on the production of negative ions. In this study, the main source of water vapor was out-gassing from acrylic plates that were placed to prevent discharge from the copper rings. H 2 O contamination in SF 6 gas creates stable SF − 6 (H 2 O) n clusters and can produce negative ions SOF − 4 and F − (HF) 2 as final products. These negative ions create fake peaks at similar time as SF − 5 , which reduces the accuracy of absolute z determination. Previous studies [6] have shown that these effects are suppressed by a high electric field of approximately 1,000 V/cm 2 . This should possible with our future detectors.
Detection efficiency of minority carriers
Minority carrier detection efficiencies were evaluated using the same data used in previous studies. The region of interest (ROI) time for the minority-peak search was set at between 30 µs and 700 µs prior to the main peak timing (−700 µs < Time < −30 µs). Peaks were searched in the ROI of each anode strip waveform using an analytical threshold. Figure 4 shows the detection efficiency of minority carriers as a function of energy deposition in one strip. Detection efficiency increases as energy deposition increases, reaching a plateau of ∼90% at ∼4 keV. Here, the energy deposition of the α-rays on each strip is known by the linear energy transfer calculated in the SRIM simulation [13]. Tracking the fluorine nuclei of O(10 keV) is a major consideration in the dark matter search applications. The energy deposition of a fluorine nuclei of 10 keV in SF 6 gas 20 Torr has been found by the SRIM simulation to be approximately 3.4 keV per strip (400 µm). Therefore, this NIµTPC was found to perform sufficiently well in terms of detection efficiency in minority peaks of fluorine nuclei.
Determination of absolute z coordinates
The difference in the arrival times of SF − 5 and SF − 6 can be used to reconstruct the absolute z coordinate in practical dark matter detector applications. A measurement of the z coordinate reconstruction was performed using this method. The z coordinate was determined by the following equation: where v m and v M are the drift velocities of the minor negative ion (SF − 5 ) and major negative ion (SF − 6 ), respectively. To assess the accuracy of the absolute z determination, the 241 Am was set at z = 89 mm, and the data were acquired using the PIN photodiode trigger. The peak finding algorithm used in previous studies was applied for the anode 32 strips, and the averaged time difference was set as ∆T. The reconstructed absolute z coordinates of thousands of events are shown in Figure 5, together with the 241 Am source position. The difference between the actual source position and the mean value of the distribution was 1.2 mm; therefore, the reconstructed z coordinate was in good agreement with the source position. The location accuracy of 16 mm for one event was obtained as the σ of the Gaussian fit. As mentioned in Section 3.1, the fake peaks, which are created by negative ions of other species, worsened the location accuracy and confused the peak finding algorithm.
The 241 Am position was then scanned from 39 mm to 139 mm at 10 mm intervals, and the absolute z coordinate was determined using the same method. The trigger from the µ-PIC anode strip was used in the same way as the dark matter detector. Tracks with an elevation angle θ of −5 • < θ < 5 • were chosen to select the tracks parallel to the detection plane. This selection was made to evaluated the intrinsic z reconstruction without any deterioration caused by the widened z coordinate. Figure 6 actual position. As can be seen in Figure 6, the z coordination was properly reconstructed within the error, and it can be concluded that the minority carrier can be used for the z reconstruction of 39 mm < z < 139 mm. The peak finding algorithm did not work for events smaller than 39 mm due to the charge spread of the major peaks. This performance is expected to be improved in stronger electric fields.
Reconstruction of three-dimensional tracks
Finally, the reconstruction of 3D tracks was investigated. The data were acquired by triggering the µ-PIC cathode signal, and the time coincidence of the anode and cathode signals was also recorded. The coincidence window was adjusted depending on the elevations. Figure 7 shows a typical examples of five reconstructed events. The measured z coordinate is the absolute z coordinate, which was determined using the minority carriers. It should be emphasized that this is the first demonstration to reconstruct absolute z-coordinates and 3D tracking simultaneously.
To estimate the 3D spatial resolution for one anode-cathode coincidence or hit, we calculated the residual distribution of the hit and the fitted with a straight line. The events with an elevation angle of smaller than 10 • were not used because the anode-cathode coincidence did not work. The residual distribution was also calculated with a Geant4 simulation for several 3D spatial resolutions. Figure 8 shows the experimental data together with the simulation results at a 3D spatial resolution of 130 µm. This spatial resolution reproduced the experimental distribution best, and the value was comparable to the conventional electron-tracking micro TPC. As as result, the NIµTPC was found to possess comparable tracking performance to the conventional TPC and additionally enable z reconstruction or the full 3D fiducialization. Therefore, the NIµTPC is expected to expand the scope of directional dark matter searches.
Conclusions
A prototype NIµTPC with SF 6 gas was developed and its performance was studied. The absolute z coordinate was reconstructed with the minority peaks, obtaining a location accuracy of 16 mm for the absolute z coordinate. The first-ever reconstruction of simultaneous absolute z-determination and 3D tracking was demonstrated. A spatial resolution of 130 µm for one hit was obtained. These results indicate that the NIµTPCs can provide a similar tracking performance to conventional TPCs while allowing full 3D fiducialization. Therefore, the NIµTPC is expected to expand the reach of directional dark matter searches. | 3,714.2 | 2020-04-21T00:00:00.000 | [
"Physics"
] |
The health and economic burden of dust pollution in the textile industry of Faisalabad, Pakistan
Background Exposure to dust in textile mills adversely affects workers’ health. We collected epidemiological data on textile workers suffering from respiratory diseases and assessed work absence associated with illnesses in Faisalabad, Pakistan. Methods We recruited 206 workers using multistage sampling from 11 spinning mills in Faisalabad, Pakistan. The data were collected using 2-week health diaries and face-to-face interviews. The data pertains to socio-demographics, occupational exposures, the state of the workers’ health, and other attributes. A theoretical framework of the health production function was used to estimate the relationship between cotton dust exposure and respiratory illnesses. We also estimated functional limitations (e.g., work absence) associated with dust exposure. STATA 12 was used to calculate descriptive statistics, an ordered probit for byssinosis, a probit model for chronic cough, and three complementary log-log models for blood phlegm, bronchitis, and asthma to measure dose–response functions. A Tobit model was used to measure the sickness absence function. Results We found that cotton dust exposure causes a significant health burden to workers, such as cough (35%), bronchitis (17%), and different grades of byssinosis symptoms (22%). The regression analysis showed that smoking cigarettes and working in dusty sections were the main determinants of respiratory diseases. Dusty work sections also cause illness-related work absences. However, the probability of work absence decreases with the increased use of face masks. Conclusion The study’s findings imply the significance of promoting occupational safety and health culture through training and awareness among workers or implementing the use of safety gadgets. Promulgating appropriate dust standards in textile mills is also a need of the hour. Supplementary Information The online version contains supplementary material available at 10.1186/s42506-024-00150-2.
Introduction
According to the Joint Statement of the World Health Organization (WHO) and the International Labor Organization (ILO), approximately 2 million workers died from occupational hazards in 2016 at the global level [1].Work-related diseases majorly contribute to workers' deaths, followed by occupational injuries.Occupational injuries and illnesses come with a massive economic burden and constitute massive social costs.According to an ILO report (2003), the economic costs of work-associated sick leaves, compensation for work injuries, production interruptions, and medical expenses account for 4% of the annual world gross domestic product (GDP), amounting to USD 2.25 trillion [2].The textile sector is among the most labor-intensive industries, employing approximately 60 million workers globally [3].Generally, the typical textile industry comprises different segments: spinning, weaving, processing, bleaching, dyeing and finishing, and stitching.Workers associated with the textile industry, especially those working in highly exposed sections, e.g., in the spinning segment, are at high risk of inhaling a large amount of cotton dust, likely affecting their lungs' function [4].The burden of occupational diseases is directly associated with such exposure.At the same time, chronic exposures can lead workers to suffer from byssinosis or fatal diseases [5,6].The typical symptoms of byssinosis include but are not limited to chest tightness, breathlessness, cough, tuberculosis, asthma, and phlegm.However, the type and severity of the problems usually depend on the intensity and the exposure period [7][8][9][10].
A dose-response relationship is well documented between respiratory symptoms and working conditions in the textile industry [5,10,11].Advances in dust control measures have assisted developed countries in lessening the prevalence of respiratory symptoms and byssinosis among textile industries.For example, a UK-based study reported that only 3% of workers in the spinning segment had byssinosis.,This number was relatively minor (0.3%) among workers in the weaving segment [12].In contrast, developing countries have an alarming situation where the disease is common in African and Asian countries, e.g., Ethiopia (46%) [11], India (12%) [13], Pakistan (16%) [6], and Benin (21.1%) [14].
The textile industry of Pakistan has a large manufacturing sector that employs a significant proportion of the workforce by contributing approximately 8% to the country's GDP and 58.98% to export earnings [15].This sector also concentrates on the spinning segment, which enables the sector to export a significant proportion of good-quality yarn.Regardless of the pivotal role of the textile sector in the economic growth of the country, it is considered the most polluting domestic industry [16].Unfortunately, the country lacks sector-wise disease-specific updated data, but according to an old estimate, over 0.8 million textile workers are routinely exposed to cotton dust [17].
In Pakistan, evidence suggests a significant relationship between the respiratory symptoms of textile workers and their exposure to cotton dust in the workplace.Many studies have also reported moderate to high (ranging from 8 to 35%) levels of byssinosis among textile workers with a high percentage of chronic respiratory morbidities, including bronchial asthma, chronic bronchitis, tuberculosis, and other obstructive pulmonary symptoms [5,6,8,10,[18][19][20][21].Although a relationship exists between cotton dust and its effects on workers' health, there is a shortage of information on the economic valuation of dust pollution that directly affects workers' health, productivity, and quality of life.Such information is also vital for policymakers, enabling them to implement dust standards for matters related to cotton dust pollution and propose the allocation of resources for the welfare of workers.The textile industry of Pakistan is a model country to conduct this research, whose findings may be generalized beyond the geographical boundaries since it represents nearly 15 million global laborers in the textile sectors [8].The objectives of this study were to record detailed epidemiological data on textile workers suffering from respiratory diseases and to assess the economic burden of health problems arising from dust pollution among textile workers.
Study setting
This study was conducted in Pakistan's third-largest industrial zone, Faisalabad District.Of the 612 largescale industries in the district, nearly 40% are textile and garment industries [22].The heavy concentration of industries depicts high participation in the labor force [23].Unfortunately, air quality in the city is not yet monitored, and industrial centers have no wastewater treatment facilities [22].
Participant recruitment and consent to participate
In total, 210 workers from 11 spinning mills were randomly selected who met the inclusion and exclusion criteria.Workers were eligible if they were 18 or older, were employed in the textile industry over the past 2 years, and could comprehend and communicate in local languages [24].However, textile workers who did not provide written informed consent were barred from participating in the study.Additionally, translated leaflets in the local language (i.e., Urdu) were provided and verbally guided before the data collection to the workers, informing them of the purpose of the research, ethical approval, and confidentiality of the participants' details.
Development of study instruments
We used two instruments to collect data: health diaries and workers' respiratory health surveys.Health recordkeeping diaries are used to collect data due to their higher accuracy, fewer recall problems, better sequencing capabilities, and the ability to capture low-profile events.A validated health diary was adopted from Usha Gupta (2008) and Naveen Adhikari (2012) [25,26].We used a modified version of the health diary to collect detailed data on the state of workers' health along with potentially harmful consequences such as lost workdays and medical expenditures associated with ill health.Each worker was given a health diary to complete for 15 consecutive nights.After the prescribed time, 206 workers (out of 210 workers) returned completed diaries.The workers who returned the health diaries were qualified for face-to-face interviews.
Face-to-face interviews were based on the Workers' Respiratory Health Questionnaire.The validated Respiratory Health Questionnaire was taken (on request) from Professor David C. Christiani from Harvard T. H. CHAN School of Public Health.The same questionnaire was used in the study of Chinese textile mill workers [27].The survey comprised workers' information regarding sociodemographics, occupational exposures, chronic diseases, etc.The questionnaire validation statistics show that the scales are reliable.The Cronbach's alpha test score is 0.7762, which is a generally acceptable score.
The data were collected in August and September 2013.The study instruments (health diary and survey questionnaire) were pretested on twenty workers of a textile spinning mill.The questionnaire required a few modifications before the final data collection based on the pretest experience.
Sample size and sampling technique
Data on spinning mills in the district of Faisalabad were retrieved from the All-Pakistan Textile Mills Association (APTMA).The total number of spinning mills in the district was 46.Faisalabad District is divided into six tehsils (subdistricts), namely, Tehsil Saddar, Tehsil Jaranwala, Tehsil Chak Jhumra, Tehsil Khurrianwala, Tehsil Tandianwali, and Tehsil Samundary.To ensure representation of the population from every Tehsil, we short-listed spinning mills by Tehsil.
Stage 1: Based on the distribution of mills by tehsils, we purposively decided to select two mills from three Tehsils (Jaranwala, Chak Jhumra, Tandianwali), one mill from Tehsil Samundary and Tehsil Saddar (because of the low concentration of textile mills there), and three mills from Tehsil Khurrianwala (because of the high concentration of textile mills there).
Stage 2: We then started the randomization process to select the desired data.Therefore, using the "=RAND ()" command in Excel Spreadsheet, 11 spinning mills were randomly selected across tehsils.
Stage 3: In this stage, 15 to 25 workers representing each section of the spinning mill were purposively selected from sample mills based on the workforce size.Later, an informed worker (called a 'monitor') from each textile mill was chosen to help the research team contact workers to participate in the study and remind workers every day to complete a health diary.The monitors also maintained contact between the data enumerators and the sample workers.
The theoretical model
This study used a health production function (HPF) as a theoretical framework.The household health production function is a work of Grossman, which was expanded by Harrington and Portney.The health production function is complex and dynamic and incorporates an individual willingness to pay ("investment in human capital") to increase an individual's utility in the form of reduced illness over several periods.Individuals maximize their utility by selecting an optimal combination of demand functions for averting and mitigating activities [28].Therefore, estimating a health production function can be complex due to the multifaceted and interrelated nature of health determinants.Additionally, data availability and quality can influence the accuracy of the estimates.As a result, many studies in recent times, such as Khan and Lohano (2018) [28], used a simplified version of the health production function described in Freeman et al. 2014 [29] (see pages 214 and 215).
Under this framework, a person's health status depends on exogenous variables (exposure to pollution), choice variables (e.g., averting actions), and other characteristics related to physical and socioeconomic conditions.We also used the simple model described by Freeman et al. (2014;214) [29].Suppose health is represented by the number of sick days at any time (15 days in the current study) and is denoted by S.
where S is sickness days (an indicator of health status); D represents the dose of pollution; and G represents demographic and personal characteristics.The level of pollution exposure or dose D and demographic characteristics G determine the health S. The dose D is a scaler variable that depends on the concentration of pollution or contaminant, C (if the contaminant is air pollution, C could be interpreted as the number of days during which some measure of air pollution exceeds the stated standard), and averting activities A, to reduce exposure such as the use of face masks.Hence, D depends on pollution concentration C and Averting actions A (e.g., use of mask), as shown in Eq. 2: (1) Substituting Eq. ( 2) for Eq.(1) gives Eq. 3: with ∂S ∂C > 0, ∂S ∂A < 0, ∂S ∂B < 0 In Eq. ( 3), the "dose-response function" shows the relationship between cotton dust exposure and health status.The dose-response function requires collecting physical data on factory conditions (i.e., suspended cotton dust) and medical examination of workers' health status.These data can then associate an illness with a specific agent.
The consideration of collaborating physical evidence is affected by resource constraints and lack of cooperation by most of the mill administrations in the sample mills.The literature on byssinosis can reasonably solve this problem.The literature has shown that a dose-response relationship has repeatedly been established between byssinosis in cotton textile workers and levels of dust in cotton mills.Most of the studies building upon the literature on the dose-response relationship focused only on the prevalence of disease in textile mills rather than using scientific instruments to collect dust data [6,30].
It must be noted that over the last six decades, studies have been using self-reported data from textile workers about symptoms of byssinosis using standard questionnaires of Schilling's grading methodology for diagnosing the disease [31].Self-reported data are often followed by workers' medical examinations (spirometry or pulmonary function tests).Spirometry provides the additional advantage of diagnosing impaired lung function among those who do not have apparent symptoms, and it may not be possible to capture such impairment by using questionnaires alone [32][33][34].
A study by Jamali and Nafees compared the results of spirometry and byssinosis questionnaires in identifying byssinosis and respiratory diseases [34].The results illustrated that self-reported respiratory symptoms identified by the questionnaire could be good predictors of impaired lung function, and the questionnaire could be used as a validated tool to estimate the burden of respiratory symptoms among the working population.Similar findings have been reported in previous studies [32,33].Therefore, both the Pulmonary Function Test (PFT) and byssinosis questionnaires are acceptable diagnostic criteria for byssinosis.This is the reason why a few studies used the PFT for diagnosing byssinosis, whereas the majority of the studies only applied Schilling's grading methodology [6,14,18,19].
Following the literature, the current study defines factory conditions by characterizing dust by work section.The literature shows the association of the disease with the work area.Many studies noted that byssinosis is significantly higher in earlier stages of the textile process, such as bale opening, blow room, and card room, (3) S = S(C, A, G) because of the high concentration of dust in these areas [6,11,13,14,18,19,26].In addition, we collected selfreported data on dust level, e.g., less dusty than normal to more dusty than normal.
Another potential problem in the study is the lack of standard diagnostic measures for byssinosis (e.g., relying on self-reported illness, which may correlate with workers' perception rather than physical exam).Again, literature has somewhat addressed this problem.In recent times, few studies have attempted to verify the results across different diagnostic criteria for byssinosis [18,19].These studies collected self-reported data from textile workers about symptoms of byssinosis using standard questionnaires of Schilling's grading methodology for diagnosing the disease.Medical examinations of the workers then followed the self-reported data.The variation between results across techniques was negligible.Therefore, both PFT and standard byssinosis questionnaires are acceptable diagnostic criteria for byssinosis.In fact, most studies applied Schilling's grading methodology in the literature compared to fewer studies that used PFT for diagnosing byssinosis.
This study collects self-reported data (using a standard questionnaire) from the target respondents concerning the prevalence of byssinosis, chronic cough, phlegm, and blood with phlegm.For bronchitis, asthma, and tuberculosis, we asked workers to report whether the healthcare providers diagnosed these conditions.
Measurement of byssinosis
A standard tool for measuring byssinosis was adopted for this study, i.e., a byssinosis questionnaire using Schilling's classification (grading) criteria, which various studies have previously used to diagnose byssinosis [18,19,30].Thus, byssinosis was graded as grade 0: no symptoms of breathlessness or chest tightness on the opening day of work after the weekly break; grade ½: occasional breathlessness or chest tightness on the opening day of work after the weekly break; grade 1: breathlessness or chest tightness only on the opening day of work after the weekly break; grade 2: breathlessness or chest tightness on the opening day of work after the weekly break as well as on other weekdays; grade 3: evidence of permanent impairment in capacity from reduced ventilator defect along with grade 2 symptoms [31].
The estimation methods
Various dose-response functions were estimated using the appropriate regression models.Equation (3) can be rewritten in the following econometric form: The dependent variable S indicates the following respiratory diseases: byssinosis, asthma, blood phlegm, chronic cough, and bronchitis.For byssinosis, the dependent variable is defined as the categorical variable, which includes values from 0, 1, 2, 3: i.e., 0 if there is grade 0 byssinosis (no byssinosis), 1 if grade ½ byssinosis, 2 if grade 1 byssinosis and 3 if grade 2 and 3 byssinosis.Since values of byssinosis are in ordered form, an ordered probit model is used for the analysis.The dependent variable is binary for the rest of the diseases.Therefore, probit models are appropriate for use in other diseases.However, blood phlegm, asthma, and bronchitis data were not normally distributed.Therefore, the probit model is used for the chronic cough variable only.For other diseases, complementary log-log regression is used.Complementary loglog regression is an alternative to logit or probit when data are not normally distributed.The definitions of all the independent variables can be found in Supplementary Table.
Statistical analysis
The statistical package STATA 12 (Stata Corp LLC TX, USA) was used to estimate the dose-response function.Before model estimation, descriptive statistics of key variables were provided.We estimated an ordered probit model for byssinosis, a probit model for chronic cough, and three complementary log-log regression models for blood with phlegm, bronchitis, and asthma.
Results
Table 1 shows that most workers were males (86%) compared to their female counterparts (14%).The average age of the workers was 28 ± 6.94 years.The average wage of workers amounted to PKR 13,392 ± 5734.Furthermore, the average wage of female workers was 30% lower (PKR 9,857) than that of male workers (PKR 13,949).Only 14% of workers reported using masks at the workplace.In response to a question, 'Is the factory providing them with face masks?' , all the workers reported that they are provided face masks by the textile mills at no cost.
Table 1 further shows that 36% of workers complained of chronic cough, 9% blood with phlegm 4.4% asthma, and 17% at least one episode of bronchitis.Overall, 22.3% of workers reported that they experienced byssinosis.Regarding the classification of byssinosis by grade, it was found that 7.8% reported grade ½, 13.1% reported grade 1, and 1.4% reported grade 2 or 3.
Results of the dose-response function
The dose-response function estimated the relationship between exposure to cotton dust and the development of respiratory diseases among textile workers.The ordered probit model and the marginal effects were calculated, Pseudo R 2 = 0.106 (Table 2).The results showed that work sections and temperature were significant among the environmental (pollution) and factory characteristic variables.The results showed that workers working in dusty or above-average temperature sections were more likely to develop byssinosis than workers working in less dusty and normal temperature sections.The marginal effect of the work section dummy indicated that if the worker leaves from the ring (or base) section (a relatively less dusty section) and joins the opening section, the probability of developing different grades of byssinosis increases by 14% for grade ½, 7% for grade 1 and 9% for grade 2 or 3.
Similarly, if the worker leaves the ring section and joins the blow room section, the probability of developing different grades of byssinosis increases by 16% for grade 1/2, 11% for grade 1, and 20% for grade 2 or 3. Similarly, if an employee moves from the ring section to the card room department, the chances of developing different grades of byssinosis increase by 17% for grade ½, 11% for grade 1, and 20% for grade 2 or 3.The marginal effect of temperature indicates that an increase in temperature by 1 centigrade in the workplace increases the probability of developing byssinosis of grade 1/2 by 0.029%, byssinosis of grade 1 by 0.026%, and byssinosis of grade 2 or 3 by 0.026%.
Among personal characteristics, smoking was a highly significant (at 1%) determinant of byssinosis.The marginal effects of the variable showed that smoking one additional cigarette per day increases the probability of developing byssinosis of grade 1/2 by 0.5%, byssinosis of grade 1 by 0.2%, and byssinosis of grade 2 or 3 by 0.2%.
Table 3 shows the probit regression results for chronic cough and complementary log-log regression results for other respiratory diseases, such as asthma, blood phlegm, and bronchitis.The dust level within the environmental and factory characteristics variables was positively and significantly associated with blood phlegm and bronchitis.Again, the results show that respiratory diseases, chronic cough, and bronchitis are more prevalent in dusty work sections, e.g., blow rooms, card rooms, and opening sections, than in less dusty sections.The temperature coefficient was positive, indicating that the incidence of cough increased as the temperature increased.The results of environmental and factory characteristic variables show that there is a clear association between dust exposure and respiratory illnesses in textile factories.
Within personal factors, the results showed that cigarette smoking was positively associated with asthma, blood phlegm, and bronchitis, and the results were significant at the 1% level for blood phlegm and bronchitis.The results also indicated the role of sex in the development of asthma.For example, the estimates showed that the probability of developing asthma is significantly higher among female workers than male workers.
Results of the sickness absence function
Within the environmental and factory aspects, the coefficients of all the variables were positive with sickness absences (Table 4).Only two variables, temperature, and blow room section, were not significant.The marginal effects of dustiness showed that an increase in the level of dustiness of the mill by one stage leads to the loss of 5 h of work.Specifically, the result of marginal effects indicated that when a worker moves from the ring (base) The marginal effects of the use of the mask (averting action) variable showed a negative association with work hours lost, and the result was significant at the 10% level.Specifically, the marginal effect showed the probability of work hours lost by 2.4 h if workers wore a mask during work compared to workers who did not use the mask.
Discussion
The results of this study provided evidence of respiratory impairments such as byssinosis, chronic cough, bronchitis, asthma, and blood with phlegm among textile workers.The analysis suggests that the prevalence of these illnesses is associated with dust concentration in the work environment.Cigarette smoking was also associated with byssinosis, asthma, blood phlegm, and chronic bronchitis.Additionally, the illness-associated work absence was higher in workers of dusty worksites (sections) than those working in the less dusty sites.Work absences were significantly less common among workers who used face masks than among those who did not.
Local studies reported a high prevalence of byssinosis in Karachi (ranging from 19% to 35%) and Faisalabad (16.4%) [5,6,8,18,19,35].Generally, workers engaged in the dustier work sections, e.g., opening, blow room, and card room of textile mills, reported an elevated risk of byssinosis than those who work in less dusty sections, e.g., simplex, ring, and auto cone [6,18,19].Similar results were reported in other studies.An Indian study found that byssinosis was as high as 30% and 38% in the blow room and card room, respectively [6].
The textile spinning process is divided into many stages (sections).The early sections, e.g., bale opening, blow room, and card room, contain high levels of endotoxins [10,13,14,27,30,36].Endotoxins are typically considered one of the significant reasons for byssinosis [16].Several studies reported that most workers in the early sections of spinning processes showed byssinosis due to the high concentration of respirable cotton dust with endotoxin.Therefore, the persons who work in these sections can have a definite association with byssinosis and respiratory diseases [11,14,37,38].Conversely, subsequent work sections such as simplex, ring frame, and auto cone pose an average risk of byssinosis due to less dustiness [5,6,28,36,38].In addition, many studies around the globe found a relationship between dust pollution and respiratory problems such as chronic cough, bronchitis, and asthma in textile workers [5-7, 9-14, 21, 27, 39].
Smoking is considered an important risk factor for byssinosis among textile mill workers.Similar to our results, previous studies showed that the risk of byssinosis was significantly higher in workers who smoke than in those who do not smoke [11,13,39].Research has shown that cigarette smoking multiplies the effect of cotton dust exposure and elevates the risk of developing Byssinosis [38].These results were supported by a study conducted in India showing higher symptoms of byssinosis among smokers [39].These findings strengthen the results of our study that smoking serves as an additive to cotton dust exposure, aggravates the effects of dust pollution in the work environment, and causes an increased risk of respiratory diseases.
Work-related illness is typically associated with functional limitations.One functional limitation is the loss of workdays [6].Occupational diseases and injuries are preventable by taking control measures at sources of pollution and by educating workers to adopt safety gadgets to avoid health hazards in the workplace [6,40,41].Hence, workers and mill management can play a pivotal role in achieving compliance with occupational safety and health standards and reducing workplace health hazards [41].Regarding workers, the results showed that despite the free provision of face masks, only a small percentage of workers were using face masks, whereas the use of earplugs was almost nonexistent in our sample.It must be noted that similar results have been found in other developing countries [8, 11-13, 36, 38-40, 42].Research has shown that the The results of the current study state that the use of masks may have little effect on preventing respiratory illnesses.Rather, environmental and factory characteristics are real factors that determine the illness.However, the use of a mask may reduce the severity of the illness; in particular, the use of masks reduces sickness absences and visits to the doctor.It is emphasized here that the use of masks was higher among female workers than male workers.Furthermore, female workers are significantly less frequent visitors to doctors than male workers (although Mitigating Activities Function is not part of the analysis, the results stated that visits to a doctor are significantly more frequent for male workers than female workers, and this may be attributed to the use of masks).Whatever the underlying reasons for the use of face masks, the use of masks turns out to be beneficial in terms of reducing the functional limitations of the illness.
Regarding mill management, these results showed that the cost of work lost due to illness could be averted by mandating safety masks in the workplace.Apart from reducing work hours lost among mask users, the use of masks may also improve the safety culture in textile mills [6].For example, a study in Karachi, Pakistan, showed that mill management strictly enforced the use of safety masks, resulting in fewer respiratory problems among workers [19].Undoubtedly, preventive measures are the most cost-effective tool to reduce diseases and accident rates in the workplace [40,41,43].This is particularly important in the case of Pakistan, where the promulgation of cotton dust standards seems unlikely to happen in the period ahead.Therefore, the only available option in the near future is the use of protective measures by workers [6].
Meanwhile, it is crucial to understand the barriers and motivations for adopting face masks by workers, which may be due to the high temperature that dissuades workers from consistently using face masks [40,41,43].This may also be due to gender differences in safety perception.Generally, male workers are assumed to be less cautious about workplace safety than their female counterparts [42].Alternatively, the higher use of face masks among female workers may be due to cultural aspects, as most Pakistani women cover their face and head with a shawl "locally called dupatta".Therefore, the higher use of face cover by female workers may be due to cultural factors rather than safety concerns [6].It could be a good area for researchers to explore the barriers to and motivations for workers to adopt face masks in the future.
Limitations of the study
Although this study provides valuable insights into the association between cotton dust exposure and respiratory illness in the textile industry, it does have some limitations.This study recruited only current industry workers, which did not allow us to explore the perspective of ex-factory workers.In addition, the study relied on self-reported information for assessing cotton dust exposure and respiratory diseases, which is not an accurate measurement method.Furthermore, self-reported symptoms cannot be fully defined as clinical diagnoses.Hence, self-reported information can generate biased estimates about symptoms.Therefore, we propose that the results may be validated by using valid instruments for dust measurement and the clinical diagnosis of respiratory symptoms.However, the study estimates closely resemble clinical studies, and the reported severity of the symptoms of chest tightness on the opening day of the week, which becomes less severe on other days, are clear indications that reported symptoms are likely to be Byssinosis [6,30,34].Furthermore, we used a simplified version of the Health Production Function, which makes no distinction between one episode of illness of two days or two separate illnesses of a single day each.Of course, this simplification ignores the severity of the illness.Furthermore, a richer specification of health production functions incorporating types of illness and severity of the symptoms can better explain the burden of illness.
Conclusion
Cotton dust exposure is a major risk factor for many respiratory disorders, including byssinosis.The illness caused by cotton dust adversely affects workers' health and quality of life.The results showed that textile workers face multiple respiratory symptoms, including chronic cough, phlegm, blood with phlegm, bronchitis, and byssinosis.Dusty work sections and smoking cigarettes were potential factors of respiratory illnesses.The workers use fewer safety gadgets irrespective of their level of education.However, the use of masks is higher among female workers than male workers.The textile workers employed in dusty sections take more days off.However, the number of work hours lost can be decreased if workers use face masks.
The findings of this study have some important policy implications.The disease varies across the work sections, showing an evident link between respiratory diseases and dust concentration at worksites.This information is crucial in establishing dust concentration standards within the textile industry.It is imperative for factory management to enforce the use of personal protective equipment.Alternatively, raising awareness among workers can serve as a motivating factor for mask use.It must be noted that implementing personal protective equipment is the most cost-effective strategy to reduce the occupational health burden in textile mills.
Page 2 of 11
Khan et al.Journal of the Egyptian Public Health Association (2024) 99:3
Table 1
Socio-demographic, diagnoses, and work-related characteristics of workers in Spinning mills, Faisalabad District, Pakistan, 2013 section to the opening section, the chances of work absence increase by 3 work hours.Similarly, if a worker moves from the ring section to the card room section, the chances of work absence increase by 2.8 work hours.As in, if a worker moves from the ring section to the samplex section or Auto cone section, the chances of work absence increase by 2.65 and 3.4 work hours, respectively.
Table 2
Marginal effects of the status of illness (ordered probit) (N = 206)
Table 3
Marginal effects of the status of illness (N = 206)The significance levels at 1%, 5%, and 10% are denoted by a , b , and c , respectively.The values in parentheses are standard errors
Table 4
Marginal effects of sickness absence function (Tobit Model) (N = 206)The significance levels at 1%, 5%, and 10% are denoted by a , b , and c , respectively.The values in parentheses are standard errors | 7,177.8 | 2024-01-29T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
Temperature-induced hysteresis in amplification and attenuation of surface-plasmon-polariton waves
The propagation of surface-plasmon-polariton (SPP) waves at the planar interface of a metal and a dielectric material was investigated for a dielectric material with strongly temperature-dependent constitutive properties. The metal was silver and the dielectric material was vanadium multioxide impregnated with a combination of active dyes. Depending upon the volume fraction of vanadium multioxide, either attenuation or amplification of the SPP waves may be achieved; the degree of attenuation or amplification is strongly dependent on both the temperature and whether the temperature is increasing or decreasing. At intermediate volume fractions of vanadium multioxide, for a fixed temperature, a SPP wave may experience attenuation if the temperature is increasing but experience amplification if the temperature is decreasing.
Introduction
The planar interface of a plasmonic material and dielectric material guides the propagation of surface-plasmonpolariton (SPP) waves [1][2][3]. As the propagation of SPP waves is acutely sensitive to the constitutive properties of the plasmonic and dielectric materials involved, these surface waves are widely exploited in optical sensing applications [4]. The prospect of harnessing dielectric materials whose constitutive properties are strongly temperature dependent opens up possibilities of further applications for SPP waves in reconfigurable and multifunctional devices [5][6][7][8].
At visible wavelengths, vanadium dioxide is a dissipative dielectric material whose constitutive properties are acutely sensitive to temperature over the range 25°C-80°C [9][10][11][12][13]. Indeed, the crystal structure of vanadium dioxide is monoclinic at temperatures below 58°C and tetragonal at temperatures above 72°C [14], with both monoclinic and tetragonal crystals coexisting at intermediate temperatures. Furthermore, the temperatureinduced monoclinic-to-tetragonal transition is hysteretic. The electromagnetic response of vanadium dioxide is characterized by its (complex-valued) relative permittivity ò VO , with { } Re 0 VO > and { } Im 0 VO > at visible wavelengths. The value of ò VO depends upon temperature; also, over the range 25°C-80°C, it depends upon whether the material is being heated or cooled. Parenthetically, the dissipative dielectric material-to-metal phase transition [14] that vanadium dioxide exhibits at free-space wavelength λ 0 > 1100 nm [15] is not relevant to our study.
For optical applications, thin films of vanadium dioxide may often be desired [16,17]. Such thin films are conveniently fabricated by a vapor deposition process. However, depending upon the processing conditions and thickness of the film, the deposition process may result in significant proportions of vanadium oxides other than vanadium dioxide being present in such films. Accordingly, in the absence of definitive stoichiometric evidence, we shall refer these films as being composed of vanadium multioxide. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Losses due to the dissipative nature of vanadium multioxide represent a potential impediment for optical applications. However, these losses may be overcome by mixing vanadium multioxide with an active material. Rhodamine dyes provide a class of suitable active materials that are commonly used to overcome losses at optical wavelengths in otherwise dissipative metamaterials [18,19]. The use of active materials to amplify SPP waves is a well-established practice [20][21][22][23].
Therefore, in the following, we investigate the temperature dependence of SPP waves guided by the interface of (i) a homogenized mixture of vanadium multioxide and rhodamine dyes, and (ii) a plasmonic material which is taken to be silver. In particular, the thermal hysteresis is explored for both amplified and attenuated SPP waves. The canonical boundary-value problem is considered in which SPP waves are guided by the interface z = 0; the plasmonic material occupies the half-space z < 0 and the dielectric material occupies the halfspace z > 0.
Plots of the real and imaginary parts of ò VO are provided in figure 2 for the temperature range [ ] 25 C, 80 C .
These values were derived by extrapolation of experimentally-determined values which were found at λ 0 = 800 nm for both heating and cooling phases, following the method described in [24]; and using values determined by ellipsometry at 25°C and 95°C for λ 0 = 710 nm. The hysteresis phenomenon displayed in Re VO between heating and cooling phases is approximately 0.9 and the maximum difference in { } Im VO between heating and cooling phases is approximately 0.11. A homogenized mixture of vanadium multioxide, characterized by the relative permittivity ò VO and volume fraction f VO , and a combination of rhodamine dyes, characterized by the relative permittivity ò rho and volume fraction f rho = 1 − f VO , occupies the half-space z > 0. The relative permittivity of the homogenized mixture, namely ò mix , is estimated using the Bruggeman homogenization formalism [25,26]. Accordingly, ò mix is extracted from the Bruggeman equation Since the Bruggeman equation (3) is quadratic in ò mix , it is readily solved by means of the quadratic formula. The electromagnetic response properties of vanadium multioxide is assumed to be unchanged by the gain in the rhodamine dyes, but the foregoing equation clearly shows the gain to affect the electromagnetic response properties of the mixture of vanadium multioxide and rhodamine dyes. Plots of the real and imaginary parts of ò mix versus temperature are presented in figure 3 for f VO = 0.2, 0.5, and 0.8, for both heating and cooling phases. The real part of ò mix is positive valued across the entire temperature range for all volume fractions considered. When f VO = 0.2, { } Im 0 mix < across the entire temperature range; therefore, the homogenized mixture is effectively an active dielectric material for f VO across the entire temperature range; therefore, the homogenized mixture is effectively a dissipative dielectric material for f VO = 0.8. When f VO = 0.5, { } Im 0 mix < at low temperatures (less than 63°C for the heating phase and less than 32°C for the cooling phase), and { } Im 0 mix > at high temperatures. Therefore, for f VO = 0.5, the homogenized mixture is effectively an active material at low temperatures and effectively a dissipative material at high temperatures.
Whereas the Bruggeman formalism has been widely used to provide estimates of the relative permittivity of homogenized mixtures that agree well with experimentally determined values [27][28][29][30], it is illuminating to compare the estimates provided in figure 3 with estimates provided by another much-used homogenization formalism, namely the Maxwell Garnett formalism [26]. The basis for the Maxwell Garnett formalism is somewhat different to that of the Bruggeman formalism: For the Maxwell Garnett formalism, inclusions of relative permititivity ò inc are distributed with a volume fraction f inc in a host medium of relative permittivity ò host . The relative permittivity of the homogenized mixture is estimated as (3). The most conspicuous differences are observed in the estimates of the imaginary part of the relative permittivity of the homogenized mixture, especially for f VO = 0.2; in this regime the differences are no more than 2%.
The relative permittivity of the plasmonic material that occupies the half-space z < 0, namely silver, was taken to be ò Ag = − 23.40 + 0.39i. Note that ò Ag at λ 0 = 710 nm is sufficiently insensitive to temperature over the range 25°C < T < 80°C that its temperature dependence need not be considered here [31].
Surface-plasmon-polariton waves
For the canonical boundary-value problem, the wave number of the SPP wave is given by [32] ( ) q k , 6 0 mix Ag mix Ag = +
wherein k 0 = 2π/λ 0 is the free-space wave number. Notice that equation (6) holds regardless of the sign of { } Im mix [33]. The real part of q is inversely proportional to the phase speed of the SPP wave, while the imaginary part of q is a measure of the SPP wave's attenuation rate, with { } q Im 0 < signifying amplification and { } q Im 0 > signifying attenuation. The real and imaginary parts of q are plotted against temperature for the range [ ] 25 C, 80 C in figure 5 for both heating and cooling phases. The volume fractions considered are f VO = 0.2, 0.5, and 0.8. The real part of q is positive valued across the entire temperature range for all volume fractions considered. Since, at each temperature, { } q Re is greater for the heating phase than for the cooling phase, SPP waves propagate at a lower phase speed for the heating phase than for the cooling phase. When f VO = 0.2, { } q Im 0 < across the entire temperature range; therefore, the SPP wave is amplified at all temperatures for f VO = 0.2 and the degree of amplification is greater if the temperature is increasing rather than decreasing. When f VO = 0.8, { } q Im 0 > across the entire temperature range; therefore, the SPP wave is attenuated at all temperatures for f VO = 0.8 and the degree of attenuation is greater if the temperature is decreasing rather than increasing. When f VO = 0.5, { } q Im 0 < at low temperatures (less than 63°C for the heating phase and less than 32°C for the cooling phase), and { } q Im 0 > at high temperatures. Therefore, for f VO = 0.5, at a given temperature, whether the SPP wave is amplified or attenuated depends upon whether the temperature is increasing or decreasing. In particular, at f VO = 0.5, the SPP wave is neither attenuated nor amplified at (i) T = 63°C if the temperature is increasing; and (ii) T = 32°C if the temperature is decreasing. which quantify the differences in the real and imaginary parts of q/k 0 between the heating phase, i.e. | q k 0 heat , and the cooling phase, i.e. | q k 0 cool , over the temperature range 25°C < T < 80°C. Plots of H R and H I versus f VO are presented in figure 6. As expected, both H R and H I vanish in the limit f VO → 0, and both attain their maximum values in the limit f VO → 1. For intermediate values of f VO , both H R and H I increase monotonically as f VO increases. Furthermore, whereas H R is almost a linearly increasing function of f VO , H I increases rapidly with f VO when f VO < 0.6 but saturates for f VO > 0.6.
Closing remarks
The propagation of SPP waves at the planar metal/dielectric interface can be controlled by temperature by choosing a dielectric material whose constitutive properties are strongly temperature dependent and which is impregnated with an active dye. Specifically, if the dielectric material is a homogenized mixture of vandium multioxide and rhodamine dyes and the metal is silver, then either attenuation or amplification of the SPP waves may be achieved, depending upon the volume fraction of vanadium multioxide. The degree of attenuation or amplification is strongly dependent on both the temperature and whether the temperature is increasing or decreasing. At intermediate volume fractions of vanadium multioxide, for a fixed temperature, a SPP wave may experience attenuation if the temperature is increasing but experience amplification if the temperature is decreasing. This thermal hysteresis in amplification and attenuation of SPP waves may be usefully exploited in applications involving reconfigurable and multifunctional devices, as well as those involving temperature sensing.
The canonical boundary-value problem considered here provides insights into the physics underlying SPP wave propagation at the planar metal/dielectric-mixture interface and highlights the role of temperatureinduced hysteresis in the dielectric properties of the non-active (i.e. passive) component of the dielectric mixture. In a more realistic scenario, the dye close to the metal interface may be dielectrically nonuniform in the direction normal to the interface due to position-dependent dipole lifetime and pump irradiance. Theoretical consideration of those phenomenons for a homogeneous dye layer (i.e. f VO = 0) requires careful approximations [34]. These phenomenons are even more difficult to incorporate in theoretical studies when f VO > 0, and require suitable experimental data that is currently unavailable. It would also be desirable to take into account temperature corrections arising from the power density profile of the SPP wave, but this too requires currently unavailable experimental data. Regardless, the results presented here clearly show that thermal hysteresis will affect amplification and attenuation of SPP waves when the dielectric mixture contains active component materials.
Lastly, photobleaching of rhodamine dyes [35] depending upon the power density profile of the SPP waves [36] is a potential issue to be addressed. Photobleaching could be ameliorated by reducing the excitation intensity, and thereby extending the life of the dye molecules, but this would reduce the SPP wave's energy. Accordingly, a careful balance must be found. Rhodamine was chosen as a widely-used representative dye for our study. Alternative dyes that are less susceptible to photobleaching and more photostable, such as Alexa Fluor 633 [37] or Atto 655 [38], could be used. | 3,049.2 | 2023-05-31T00:00:00.000 | [
"Physics"
] |
Morphological condition of the pulp of intact and affected by caries third molars
To date, there is a theory that increased resistance to caries is observed in the teeth, which for any reason underwent destructive changes in the pulp. That is why there is a need to study the impact of pulp vitality on the development of the carious process. The aim of the research was to study the microscopic structure of epoxy sections of intact and carious third molars. We studied 4 intact and 6 carious third molars. For this purpose, specimens were made taking into account the free penetration of the fixative solution into the pulp. To this end, immediately after the tooth was removed, we cut off its roots almost near the crown, preserving the integrity of the latter. The method relied on the impregnation of specimens with epoxy resin, according to the method of epoxy plastination of tooth specimens, developed at the Department of Human Anatomy of Ukrainian Medical Stomatological Academy. The epoxy blocks were cut with a disk into two halves until the hard tissues of the tooth crown were exposed together with the pulp. We found that the hard tissues (dentin and enamel coating) of intact third molars did not have any structural defects. However, their pulp chamber contained mainly an amorphous substance, devoid of any typical pulp tissue structures. That is, the pulp was in a state of complete devolution. Quite the opposite presentation was observed in specimens of carious teeth. We found that their pulp chamber contained quite noticeable tissue structures typical of the dental pulp. It is interesting that in the subodontoblastic layer, in front of the carious alteration of the enamel, there was compaction of the pulp, which may be due to infiltration of perivascular connective tissue by immunocompetent cells. It was found that on the border with carious destruction of enamel, there was a compacted spot of altered dentin, whose matrix was intensely pigmented in brown colour, due to the accumulation of melanin on the dentino-enamel junction. Its excessive formation is associated with the destruction of protein-carbohydrate complexes of organic matter in the deep layers of dentin. We found that the pulp compaction and the focus of carious alteration of the enamel are projectively connected by a radial cord of altered dentin, known in the literature as "dead tracts". Hence, there is reason to believe that the identified changes indicate a latent form of caries, with a pulpogenic mechanism of development. Thus, it can be argued that the teeth, which for any reason underwent degenerative changes in the pulp, are not prone to carious lesions, whereas in carious teeth, the pulp is active and involved in the pathogenesis of the carious process.
Introduction
Today, the existent theory of the exogenous nature of caries has not brought its correctness [6,7,8,9,10,13,18,21,23,25,28]. Thus, it is created a lot of contradictions, the main of them are the next facts that we find in the modern scientific literature [2,5,14,21,33]. They testify about the pulpogenic mechanism of the development of caries: 1. Discovery of subenamel caries and also retrograde (centered) its progress.
2. The hard tissues of the tooth which were left after carious necrosis of the pulp cannot be damaged by caries.
3. Cases of carious damage of the enamel-dentine area while the saving of integrity of the superficial enamel. 4. Caries has damaged the retained third molars. More clear and principal new conception which opposite to the exogenic theory of caries, were proposed and substantiated theoretically Yu.P. Kostylenko and I.V. Boiko [21].
Authors noticed that organic structures of enamel, which are located in the border with dentine, have immunological properties and enamel is considered as "unbarrier" tissue which lets to consider the problem in the view of the theory about mechanisms of the damage of the immunological toleration. While, the source of the primaries sensitization can be both primary, natural and acquired, secondary autoantigens occur in the fissured areas of the teeth, under the influence of some physicochemical or infectious factors (theory of altered antigens) [15,21]. In the works of many authors [5,6,9,17,26,27,29] were proved that damage of enamel occurs as a result of immune reaction in the tooth pulp. This conclusion was based that different affections of the dental system are related to any unfavorable condition of the organisms, are beginning of the pulp. Thus, degenerative changes in it during chronic rheumatic disease, endocrine disorders, infection disease, avitaminosis, toxicosis of pregnancy, etc. can be noticed when changes in the hard tissues had not been found yet.
That is enough for making a general conclusion: increasing of the stability to caries was observed in the teeth from various causes were took place destructive changes in the pulp while the teeth were isolated from the oral cavity and did not protect from carious damage of the hard tissues. Thus, we have reasons to consider the vital activity of the pulp as an indirect link in the development of carious process. But it should be taken into account, that this thesis has not proved yet, because the direct research in the literature is missing today. That is why we have tried to use opportunities which were received during the study of the individual diversity of third molars that may be got in the clinic during the extraction of the teeth in different clinical indication.
The aim of the research: the study of the microscopic structure of the epoxy sections of the intact and damaged by caries third molars.
Material and methods
Connection of the study with planned scientific research works. This work is fragment of scientific research work "Age aspects of structural organization of immune system organs, glands of gastrointestinal tracts and urogenital system of human in norm and pathology", state registration № 0116U004192.
Four intact (crown without visible external damaged by caries) and six third molars with damage by caries, that were received from the Department of Oral and Maxillofacial Surgery with Plastic and Reconstructive Surgery of the Head and Neck of UMSA were studied.
In the view of methodical feature of our research, that includes the study of hard tissues of teeth with their pulps, we had to take measures for prevention of pulp` destroy in process of preparation of necessary prepares. For this, it was necessary to create ability for free penetration of fixative solution into the pulp. In this purpose, immediately after extraction and visual assessment of tooth, we cut its root off on level of crown, care about saving of one's integrity. After this, it was washed for a short time by warm (t -37°С) saline and plunged into bottle with 10% solution of neutral formalin.
After three-day fixation, received prepares washed from formalin away under running water, and they were dehydrated by spirit of incising concentration with the gradual transition to pure acetone according to our method [20,22]. According to it, further procedure consisted in impregnation of epoxy resin. In this purpose, we used epoxy glue "Chimcontact-Epoxy". The final stage was moving prepares into appropriate for size ditches and filling them by freshly made epoxy resin.
After polymerization, received epoxy blocs were cut by disk for separation into two halves in necessary plane of intersection, which caused baring (on the end surfaces of the two halves) of hard tissues of tooth crown with pulp. It was found, that pulp of these teeth was impregnated by hardened epoxy resin.
The next stage in preparation of prepares for microscopic research was creation of polished epoxy sections that were carried out manually with sandpaper by alternatively changing it in decreasing order of the degree of abrasiveness. After this, these prepares can be studied by light microscope. However, excepting of Hunter-Schreger enamel strips, it was impossible to see any microscopic details. Besides, in this aspect tissue structure of the tooth on the section were unapproachable for reaction with according coloring because their organic components were hidden in the vast majority of minerals. Therefore, for get rid of them we subjected epoxy sections to superficial digestion (decalcification) in chelating agent -Trilon B. Necessity of fixation of hard tissues of tooth in epoxy resin becomes clear from it. It plays as a limiter for the directed action of the decalcifying solution in the section. It is possible only during the grinding of surface of hard tissues of tooth. After this procedure even uncolored sections permit to visualize their structure more clearly than those we used in some cases. However, full information we can receive only after coloring of epoxy section by 1% solution of methylene blue on 1% solution of borax.
Study and photo documentation of prepares were carried out by binocular magnifying glass MBS-9 equipped with camera at different magnifications.
Conducted research replies moral-ethical norms and basic principles of Council of Europe Convection on Human Rights and Biomedicine and relevant legislative documents of Ukraine. The Commission on Bioethics of VDNZU "Ukrainian Medical Stomatological Academy" (protocol №160 dated 14.12.2017 did not reveal any violations of moral and ethical norms during the research work. There was no special need for the application of parametric statistical analysis, because the changes in the microscopic structure of the third molars obtained by us did not differ significantly. Figure 1 shows microphoto of crown department of two intact third molars on the same magnification. It attracts attention, that their enamel coating has no destructive defects. Due to surface decalcification and coloring by methylene blue, the Hunter-Schreger bands, which essentially nodal bundle0like chains of crystal fibers, became clearly visible. Dentyne was without any visible alterative changes, in it the radial stripe was clearly visible due to typical orientation of dentinal tubules. Along with this, the content of the pulp chamber was mainly amorphous substance with no signs of the presence of such typical pulp tissues structures as: connective tissues elements, blood microvessels, nerve fibers and odontoblasts. Only ones formations that were clearly manifested were pathological deposits in the form of denticles and various forms of petrifaction. Notable that most often they were in the area of the horns of the pulp chamber (Fig. 1).
Results
Quite the opposite picture is presented by prepares of carious third molars with stored pulp in them. Figure 2 convinces that pulp has peculiar to its tissue components. Presence of odontoblastic tissue component, collagen fibers and blood microvessels although they appear in somewhat coarser form indicates in this. It may be explained by inability to prevent completely of autolytic processes during preparation of prepares (Fig. 2). During more attentive study we paid attention, which on some prepares in odontoblastic layer just opposite the carious alteration of the enamel, there is a noticeable compaction of the pulp (Fig. 3).
According to location of pigmented spot clearly was manifested destruction of deep layers of enamel. It should be noted that focuses of its carious alteration were located not only in region of grooves, but quite often they were manifested on side of the masticatory tubercles and lateral surfaces of the crown.
Discussion
Thus, in investigated by us intact teeth, pulp was manifested in condition of complete degeneration, reason of which unknown for us. But in this case we can refer to the literature, according to which development of pathological process in pulp should be considered from polyetiological positions [10,12,18,21]. Herewith, the main link of its pathogenesis may be both external and internal factors, which cause loss of its caries-resistant properties, causing the pulpogenic mechanisms of caries development. This conception should be considered as important, because it force to pay attention on character of structural changes in tooth pulp in the beginning of the development of carious process, and it does not limit only with presence of external signs of damage of the enamel.
However, should be noticed that according to the thoughts of Yu.P. Kostylenko and I.V. Boiko [21] caries cannot damage enamel of teeth which for one reason or another it is non-available completely or it was prone to sclerosis, that is, no pulp -no caries. It also was confirmed by results of our research.
Pulp of carious third molars looks more full-fledged in comparison of their intact analogs. Thus, we want to say that in teeth which are prone to carious process, the pulp is in an active state. It means, it has all necessary reactogenic properties to change the antigenic composition of the hard tissues of the tooth, due to the influence of pathogenic microorganisms which occur in the grooves. Besides, this side of question about etiopathogenesis of caries is not as monosemantic as it seems at the first glance.
Due to the fact, that in the subodontoblastic layer of pulp, in opposite to the carious alteration of the enamel, locates exchange blood vessels, therefore it can be assumed that this consolidation occurred as result of infiltration of perivascular connective tissue, immunocompetent cells. The certainty of this assumption is proved by the fact that consolidation of the pulp and center of carious alteration projectively connect between each other with radial cord of alterative dentine known in literature as "dead tracts" [1,10,11,12,14,19,21,31,32]. According to thoughts of these authors which are based on the positions of modern immunology, in deep areas of the grooves under the influence of pathogenic microflora on organic components of basal enamel and superficial dentine occur creation of high active intermediate antigens (autoantigens) [9,17,21,29,33]. They are getting through the dentinal tubules into the pulp, cause activation of the local immune system, effector elements of which will cause the alteration of dental channels with the creation of "dead tracts" and further destruction of appropriate areas of enamel. Its damage may produce the new wave of antigen stimulation of immune reactions which cause the creation of antibodies that can react with antigen both damaged and intact enamel in connection with identity of separate specific determinant groups. This process accompanied by increasing of present damage, which cause new antigen galling. Thus, it is caused chain autoimmune (autoaggressive) process, which determines the pathogenesis of caries (carious disease).
In connection with this, it is impossible to leave without attention one very revealing morphological fact, which has not been taken into account by researchers in the study of the pathogenesis of caries before [3,21]. It means, that on the border with carious destruction, the consolidated spot of alterative dentine is located, a matrix of which has intensive brown pigmentation, which is very clearly visualized on unpainted epoxy sections (Fig. 3). It has the greatest intensity in the basal layer, where its compactness is decreasing and disappearing in the matrix of "dead tracts" of the dentine. Clearly pigmented spot on the border between dentine and enamel gives an opportunity to identify the hidden form of caries even on the simple sections of teeth without any coloration.
Results of our research don not give reasons to speak about exogenous nature of this pigmentation. However, today this phenomenon can be explained by that during caries, it occurs as the result of accumulation on the dentineenamel border of melanin which was created by the way of some metabolic transformations of tyrosine which is a product of phenylalanine -one of the amino acids in the protein substance of dentine [4,16,21,24,30]. Thus, there is reason to believe that this pigmentation of superficial dentine, which is located under the defect of enamel, is a result of storing of melanin which was created in the process of dystrophic dissociation of protein-carbohydrate complexes of the organic substance of the deep layers of dentine. Products of the metabolic transformations of phenylalanine achieve the superficial layer of dentine due to the centrifugal movement of the "dental fluid" along the dentinal tubules.
Clearly expressed morphological signs of carious damage of hard tissues, which are characterized by destructive changes look as radial "dead tracts" with their location from the pulp chamber to the damaged enamel, in its deep layers and creation of dark pigmented spots on the border with enamel indicate a latent form of caries.
In our view, the primary reason of the dystrophic changes in dentine with caries needs to find in the pulp. Thus, we want to say that teeth, which are prone to carious process, have the pulp in an active condition.
Thus, the above facts clearly indicate that hard tissues of third molars, which for one reason or another have degenerative changes in the pulp, are not prone to carious damage. Whereas, the pulp of carious teeth is in active condition, thus it has all need reactogenic properties for the changing of the antigen composition of the hard tooth tissues, so it has an influence on the development of the carious process.
There are reasons, to think that in the further development of the carious process, the pulp will be prone to complete necrosis, on which the destructive process of the hard tooth tissues will have stopped [11,12,21]. Practical conclusion: for the stopping of the carious process should be used depulping of the damaged tooth follows from it. But, it does not mean that it will stop a carious disease. It may include other teeth.
Pathomorphological connection between reactive changes in the pulp and carious damage of the enamel is alteration of the dentine which looks as radial "dead tracts". Thus, we pay attention to one important event which is characterized for carious damage, which consist in creation on the border with destructive enamel, dentine pigmented into brown color intensively. We think that destructive disintegration of protein-carbohydrates complexes of dentine with carious alteration which produces melanin lays at the base of this event. In our view, the study of this process can have determinant meaning in the knowledge of the etiopathogenesis of carious disease in which we see the further prospect of our research.
Conclusions
1. In teeth that are prone to carious process, the pulp is in an active state.
2. Hard tissues of the third molars with degeneratively altered pulp are not prone to carious lesions.
3. In order to stop the carious process in the tooth, it must be depulped. | 4,180.4 | 2020-12-28T00:00:00.000 | [
"Materials Science"
] |
Endocrine Disrupting Compounds (Nonylphenol and Bisphenol A)–Sources, Harmfulness and Laccase-Assisted Degradation in the Aquatic Environment
Environmental pollution with organic substances has become one of the world’s major problems. Although pollutants occur in the environment at concentrations ranging from nanograms to micrograms per liter, they can have a detrimental effect on species inhabiting aquatic environments. Endocrine disrupting compounds (EDCs) are a particularly dangerous group because they have estrogenic activity. Among EDCs, the alkylphenols commonly used in households deserve attention, from where they go to sewage treatment plants, and then to water reservoirs. New methods of wastewater treatment and removal of high concentrations of xenoestrogens from the aquatic environment are still being searched for. One promising approach is bioremediation, which uses living organisms such as fungi, bacteria, and plants to produce enzymes capable of breaking down organic pollutants. These enzymes include laccase, produced by white rot fungi. The ability of laccase to directly oxidize phenols and other aromatic compounds has become the focus of attention of researchers from around the world. Recent studies show the enormous potential of laccase application in processes such as detoxification and biodegradation of pollutants in natural and industrial wastes.
Introduction
Pollution of the aquatic environment with substances such as pesticides, heavy metals, polycyclic aromatic hydrocarbons, microplastics, and pharmaceuticals enter waters as a result of anthropogenic activities and endanger the health of plants, animals, and humans due to their acute toxicity and potential accumulation. Endocrine disrupting compounds (EDCs) are substances of considerable risk to human health. The list of EDCs is rapidly growing. According to the TEDX (The Endocrine Disruption Exchange) database, the number of suspected EDCs was 881 in 2011, increasing to 1419 in 2017 [1].
EDCs include a wide range of chemicals that are present in people's daily lives as ingredients in everyday items, cleaning products, and medications. In 2015, the total production of chemicals within the 28 member states of European Union (EU) was 323 million tons, 205 million tons of which were considered hazardous to health [2]. Ismail et al. in 2017 differentiated the EDCs natural and synthetic compounds, such as hormones, alkylphenols (AP), polyhalogenated compounds, bisphenol A (BPA), phthalates, pharmaceuticals, and pesticides. These EDCs are released into the environment from various sources [3]. An especially dangerous group of waste are the alkylphenols (AP), because of their interference with the proper functioning of the endocrine system in humans and animals. A typical representative of this group of compounds are nonylphenols (NPs) and bisphenols (BPs)-xenoestrogens used in the production of products commonly used in households [4].
Determining the key features of the EDCs by Merrill et al. in 2020 will facilitate the evaluation of chemicals in terms of their effects on the endocrine system. At this point, of hormone receptors (ii) antagonization of hormone receptors (iii) alteration of hormone receptor expression (iv) alteration in signal transduction (including changes in protein or RNA expression, post-translational and/or ion flux modifications) in hormone responsive cells (v) induction of epigenetic modifications in hormone producing or hormone responsive cells (vi) alteration of hormone synthesis (vii) alteration transport of hormones across cell membranes (viii) alteration of hormone distribution or circulating hormone levels (ix) alteration of metabolism or clearance of hormones (x) alteration of the fate of hormone producing or hormone responsive cells. The activities of EDCs depicted include enhancement and weakening of effects. Various well-known EDCs show different interference characteristics with endocrine systems [5].
The mechanism of estrogen signaling is complex, and involves intracellular and extracellular signaling networks. The intracellular network includes genomic and nongenomic pathways. In the genomic pathway, transcription of target genes is altered by the binding of chemicals (or ligands) to the nuclear ER, ERα and Erβ. In the non-genomic pathway, signal transduction is started by ligand binding to membrane/cytoplasmic ER receptors and/or other receptors, e.g., GPER, ER-X. Autocrine and/or paracrine signaling pathways include other hormones, growth factors, and cytokines. Intracellular networks cooperate with diverse types of autocrine and/or paracrine signaling, and thus cells in different tissues or locations are also involved in estrogen signaling pathways e.g., estrogens, by influencing the synthesis and secretion of growth hormone (GH) or insulin growth factor, can influence somatic growth [37].
NP and BPA, chemicals belongings to alkylphenols, were selected for further analysis showing a strong relationship between exposure and endocrine effects in humans, wild animals, or aquatic organisms. Both are quite common in the aquatic environment as organic pollutants of anthropogenic origin and are biodegradable by both fungal and bacterial laccase.
Nonylphenol
Nonylphenol (NP) belongs to the group of alkylphenol compounds (APEs) of the family of non-ionic surfactants, and it was produced for the first time in 1940 [38]. It is a xenoestrogen formed during degradation of ethoxylates nonylphenol (NPEO) [39,40].
4-NP is the most common commercial form of NP and is used in experiments and analyzes. NP is persistent, lipophilic, and tends to bioaccumulate more than NPEO [41]. Nonylphenol is a hydrophobic compound with octanol-water partition coefficient (log K OW ) of 4.48 and low water solubility. The result is low NP mobility and a significant reduction in distribution in the aquatic environment [41]. NP is a semi-volatile organic compound capable of binding water. Once nonylphenol enters the atmosphere, it can be reintroduced into the surface water ecosystem with precipitation [42]. At room temperature, NP is a light-yellow liquid with an approximate molecular weight of 215 to 220 g/mol and a specific weight of 0.953 g/mL at 20 • C. It has a dissociation constant (pKa) of 10.7 ± 1.0 and a log K OW between 3.8 and 4.8, and shows both pH and temperature dependent solubilities, showing values of 6350 µg/L at pH 5 and 25 • C [41].
NP is a compound that has isomers consisting of nine-carbon alkyl chains attached to a phenolic ring. A hydroxyl group is present on the phenolic ring of NP. The isomers differ in the carbon atom in the phenolic ring to which the alkyl chain is attached. Isomers include compounds with phenolic substitution at the meta-, ortho-and para positions (called 2-, 3-, and 4-alkylphenols, respectively). 4-nonylphenols (4-NP) and 4-octylphenols (4-OP) account for over 80% of total APE production. The alkyl group is branched or linear. It should be noted that shorter chain NP isomers (i.e., 4-n-NP) are more resistant to degradation than branched isomers and persist in sediments. The half-life of NPs in sediments has been found to be over sixty years [43,44].
Due to its structural similarity, NP mimics the natural hormone 17β-estradiol and competes with it for binding sites in the receptors, although with a lower affinity than the natural hormone [45,46]. The estrogenic effects of NP vary depending on the isomers, with a high estrogenic effect observed with 4-(1 ,1 -dimethyl-2 -ethylpentyl)-phenol (NP7) [47]. Not all nonylphenol isomers are capable of inducing the estrogenic activity.
NP also has an anti-androgenic effect, i.e., it may interfere with the proper functioning of androgens necessary for the proper development of men and their reproductive system due to the multi-stage activation of the androgen receptor [48]. As a result of the above mechanism, NP induces disorders in men, including lowering circulating testosterone levels in the blood, decreased activity of antioxidant enzymes in sperm, and disturbed structure of the testes, as well as increased apoptosis of Sertoli cells [49][50][51]. Moreover, it has been shown that high exposure of women to NP in the second trimester of pregnancy led to reduced birth weight and premature deliveries [52][53][54].
NP may cause feminization of aquatic organisms, reduce male fertility, and survival of young animals at a concentration of 8.2 mg/L [46,55]. It has significant acute toxicity to phytoplankton, zooplankton, amphibians, invertebrates, and fish [56]. Male fish have intersex traits, low testosterone values, produce vitellogenin (a female-specific protein), and show gonadal changes that reduce fertility [57,58].
Due to the harmful effects of the degradation products of ethoxylates nonylphenol, the use and production of such compounds has been banned in EU countries. NP and its ethoxylates have been identified as priority hazardous substances (PHS) in the Water Framework Directive (Directive 2000/60/EC, 2000) and most of their uses are now regulated (Directive 2003/53/EC, 2003) [46]. Maximum allowable concentrations of NP in the EU in freshwater is 0.3 µg/L, in sediments of freshwater reservoirs 0.18 mg/kg dry weight, no limits for sea water [44].
The United States Environmental Protection Agency (EPA) recommends a nonylphenol concentration in freshwater below 6.6 µg/L and in salt water below 1.7 µg/L TDI for NP is 5 µg/kg body weight/day [46].
Due to the lack of NP mineralization in anaerobic conditions, these compounds are subject to bioaccumulation in the environment and may reach higher concentrations in next trophic levels. This is more likely for benthic organisms in close contact with contaminated sediment [59,60]. NP bioaccumulation has been noted in algae, fish, and aquatic birds living in the environment surrounding the contaminated river [61][62][63]. The presence of nonylphenol in fish is usually associated with wastewater discharge from sewage treatment works (STW), leading to concentrations of up to 110 µg/kg in fish [64,65].
Aquatic organisms are susceptible to the release of new chemicals into the environment because the recipients of wastewater from wastewater treatment plants are rivers, estuaries, and oceans. Even though nonylphenol is excreted from the tissues of aquatic species, these organisms may be continuously exposed to chemicals throughout their lives [34,39,40].
BPA
Bisphenol A (BPA) is one of the most common pollutants found in water bodies. In chemical terms, BPA is an organic synthetic compound that belongs, like NP, to the group of alkylphenols. 2,2-(4,4 -dihydroxy diphenyl) propane is the full name of BPA (molecular formula C 15 H 16 O 2 ; M w = 228.29 g/mol). BPA is a structural analogue of bisphenol that is 4,4 -methanediyldiphenol in which the methylene hydrogens are replaced by two methyl groups. This compound is obtained by combining 2 moles of phenol with 1 mole of acetone [66].
BPA has a low vapor pressure, high melting point, and moderate solubility [14]. The water solubility is 300 mg/L at 25 • C. The physicochemical properties of BPA indicate that it has a low rate of evaporation from soil and water.
BPA has low or moderate hydrophobicity, log K OW 3.32; organic carbon partition coefficient (log K OC ) ranging from 2.50 to 4.5 showing a moderate mobility and bioaccumulation potential of this compound [14,67]. However, mobility can be affected by soil chemistry and texture. Some studies confirm that increased BPA sorption occurs in the presence of iron, cadmium, and lead [68,69] and in sandy, acidic soils there is a rapid and complete desorption of BPA [70].
BPA is a solid and is commercially sold in the form of crystals, pills, or flakes. When BPA melts at elevated temperature during production, particles released into the environment are typically dissolved in water [16].
BPA is associated with an approximately 10,000-100,000 times weaker for ERβ than that of estradiol, so it was considered a very weak environmental estrogen [71,72]. Although BPA has been shown to have weak estrogenic activity, it can disrupt the proper functioning of the endocrine system even at extremely low concentrations [73]. Prenatal exposure to BPA may increase the tendency to develop breast cancer in adulthood [74].
Since BPA in the human body is rapidly absorbed through the stomach and intestines, the major glucuronic acid metabolite is formed in the liver and then excreted in the urine with a short half-life. BPA exposure is usually determined by analyzing urine samples. Screening studies have shown that the urine of most people in industrialized countries contains measurable levels of BPA and its metabolites [75,76]. The human body contains two types of BPA, unconjugated BPA and conjugated BPA. Unlike unconjugated BPA, conjugated BPA has no estrogenic activity and is harmless to humans. Absorbed BPA is metabolized in the liver to conjugated BPA. When BPA enters the body through skin, bypassing the digestive system, more unconjugated BPA circulates in the blood [77][78][79].
Studies have showed the effect of BPA on the development of invertebrates. Both the midge larvae (Chironomus riparius) and the sea copepod (Tigriopus japonicus) showed growth inhibition at exceptionally low BPA concentrations (0.08 and 0.1 mg/L, respectively). At concentrations from 1.1 to 12.8 mg/L BPA is systemically toxic to various taxa, including daphnia, mice, freshwater fish (Pimephales promelas), and saltwater fish (Menidia menidia) [14]. The effects of BPA appear to vary significantly among related taxa, and it appears that invertebrates may be hypersensitive to BPA exposure (in particular freshwater molluscs and insect larvae, and marine copepods in particular).
In aquatic vertebrates, exposure to BPA has effects on fish reproduction due to alteration of sex cell proportions, masculinization or feminization, structural changes in gonads, reduction of sperm quality, and delay or inhibition of ovulation, and induction of VTG (egg yolk protein precursor). The VTG protein is a widely used biomarker of vertebrate exposure to estrogenic compounds [25,80,81].
Assessment of the effects of exposure to BPA on wild mammals is currently based on data from laboratory studies in model organisms which show harmful effects on rodents at high BPA levels. Such effects include faster maturation, increased obesity, complications of pregnancy, malformations of male and female reproductive organs, impact on prostate, and increase in cancer incidence [66,72,82,83]. BPA can cross the placenta during pregnancy and accumulate in the amniotic fluid and fetal plasma [84]. BPA is listed as class 1B toxic for reproduction by the European CLP regulation [85,86].
BPA toxicity is expressed through endocrine disruption, but also through neurotoxicity, cytotoxicity, reproductive toxicity, genotoxicity, and carcinogenicity [79]. Due to the harmful effects on human and animal health, checking the levels of this compound and structural analogues in food and beverages is a priority. Therefore, the U.S. Environmental Protection Agency (EPA) has set the largest Tolerable Daily Intake (TDI) for BPA at 50 µg/kg body weight per day, and the oral reference dose (RfD) for BPA as a total allowable concentration (TAC) in drinking water is 100 µg/L. The EU has also introduced similar regulatory measures [44].
Published papers confirm the operation of BPA as EDCs. The BPA molecule is structurally similar to steroid hormones and has an estrogen-mimicking effect (E2). Studies show that BPA exerts an agonistic effect on both types of estrogen receptors (ERα and ERβ) and on G-protein coupled receptors (GPCR) and the gamma estrogen-related receptor (ERRc) [5,71,72,87] BPA meets nine out of ten key features of EDCs proposed by La Merrill et al. in 2020 [5]. BPA activates nuclear and membrane ERs, GPER, which has multiple effects on organs, antagonizes the androgen receptor, increases ER mRNA expression in specific areas of the mouse brain, induces proliferation Sertoli TM4 cells by inducing ERK phosphorylation, affects promoter-specific methylation in brain, prostate and breast cancer cells, lowers the level of cytochrome P450 aromatase and the expression of other steroidogenic regulatory proteins, reduces insulin secretion from pancreatic β-cell vesicles, increases the level of SHBG in men and lowers the level of circulating androstenedione and free testosterone, and developmental exposure to BPA increases the proliferation index in the mammary gland and pancreas endothelial cells of the uterus [44]. Chronic diseases such as prostate and breast cancer, type 2 diabetes, obesity, and brain development disorders occur due to early exposure to BPA [88][89][90].
As a result of the xenoestrogen activity, BPA can activate immune effects mediated by estrogens, and thus can induce pro-inflammatory pathways. By acting on estrogen receptors, BPA also affects the immune system, e.g., changing the function of dendritic cells and T and B lymphocytes and can induce production of reactive oxygen species by neutrophils [91].
Overall, its action is based on different molecular and epigenetic mechanisms that converge in the endocrine and reproductive systems [72].
Endocrine Disrupting Compounds (EDCs)-Occurrence and Removal Methods in Water Environment
Environmental pollution has become a major challenge in recent years due to increasing population, urbanization, and industrialization [27]. The priority is to find the sources of EDCs and the routes by which they enter the aquatic environment.
EDC has been detected in a variety of fresh, brackish, and marine ecosystems. Due to their physical and chemical properties, EDCs can bioaccumulate, biomagnify, are persistent, and are very toxic to aquatic organisms, both for plants and animals [3].
These compounds are released into the environment from a variety of sources, primarily municipal and industrial waste, agricultural practices, animal waste, and sewage treatment plants (STP) [92]. Most of the packaging for food, cosmetic products, solvents, preservatives, and pesticides is also made of EDC-containing materials [3,44,93].
Long-term use of EDC, regardless of the concentration level, may accumulate in animals and be partially released into the environment through animal feces. In fish and the largest consumers of the food web, the rate of bioaccumulation is higher because most EDCs are lipophilic and concentrated in the fat of the consuming organisms [11]. Therefore, these substances will penetrate the food chain and ecosystems, potentially adversely affecting human health.
In agriculture, non-metabolized and non-degradable compounds in animal fertilizers still have active metabolites, reducing the quality of surface and groundwater and significantly affecting aquatic life. In water, EDCs undergo biodegradation and chemical and photochemical degradation, dilution, and sorption to sediments, which partially leads to their elimination from aquatic ecosystems [3,[94][95][96].
Nonylphenol
The presence of nonylphenol in the environment is clearly correlated with anthropogenic activities such as sewage treatment, storage, and recycling of sewage sludge. NPEOs are used as non-ionic surfactants in industry (cellulose and paper, textiles, agriculture, metals, plastics, petroleum refining), in households in the form of detergents, solubilizers, and personal care products in non-EU countries [38,45]. Nonylphenol is a xenobiotic compound used in the production of antioxidants, additives to lubricating oils and in the production of ethoxylated surfactants nonylphenol, which is its main use (65%) [46].
As surfactants, NPs are used in cleaning agents, so their primary source in the environment are discharges of wastewater from industrial and municipal wastewater treatment plants (WWTPs), as well as land enriched with solid sewage or manure, runoff from pesticides and fertilizers on agricultural fields, and fodder livestock. Due to their cheapness, NPEO surfactants are used in various areas, for example in agricultural pesticides where surfactant is added to control the properties of pesticides [34,97].
Inadequately treated domestic sewage causes high concentrations of NP in the aquatic environment. The levels of NP depended on the size of NP discharges into the river, temperature, flow velocity, biodegradation, etc. About 60% of NP and its derivatives produced in the world ends up in water supply [98][99][100]. In addition, the presence of NP is observed in polyvinyl chloride (PVC), which can contaminate water passing through PVC plumbing [101].
Technical NP is the major form of NP that is released into the environment. Technical nonylphenol consists of a mixture of more than one hundred isomers that have an alkyl moiety attached at various positions on the phenolic ring, with para-substituted NP (4-NP). However, in the environment, the proportions of isomers may be different [10,34,102].
Due to its high hydrophobicity, resistance to biodegradation and low solubility, NP tends to accumulate in various environmental matrices [4]. NP can evaporate into the atmosphere from wastewater discharges, wastewater treatment plant (liquids and sludge) or heavily contaminated surface waters. NP binds to the aerosols generated by the wastewater treatment plant, leading to a reduction in air quality in the vicinity of STW. From the atmosphere, NP may re-enter aquatic and terrestrial ecosystems with rain and snowfall [46].
The concentration of nonylphenol in the surface layers of natural waters may decrease due to photolysis induced by sunlight [94,103]. Biodegradation of NPs is difficult due to physicochemical properties such as low solubility and high hydrophobicity. NP accumulates in environmental compartments that are characterized by a high content of organic substances, usually sewage sludge and river sediments. NP occurs in river waters in concentrations up to 4.1 µg/L and 1 mg/kg in sediments [98,99,104]. The concentration of NP in the water and sediments are shown in Table 1. The presence of NP in surface waters is correlated with anthropogenic activity and wastewater discharge from STW, industrial plants, municipal wastewater treatment plants, and rainwater discharges [46,47]. NP concentrations in rivers are subject to seasonal fluctuations with higher concentrations in summer due to increased microbial activity at higher temperatures leading to increased degradation of ethoxylates nonylphenol [121,122]. Other factors such as river flow rate, sedimentation rate, and particle size also influence the rate of degradation. NP occurs in the aquatic environment in quite different concentrations in surface waters, from several dozen ng/L to several dozen mg/L [41].
Groundwater is of particular interest as it accounts for about twenty percent of the world's freshwater supply and is extremely susceptible to contamination by various pollutants as a result of urban activities. NP concentrations in groundwater were incredibly low [40,123,124]. The microbial decomposition of NP in aquifers is limited due to the low temperature conditions prevailing there: low temperature, low C, and low O 2 content). The processes controlling the entry of pollutants into groundwater are sorption and biodegradation [46,98].
Ethoxylates nonylphenol (NPEO) ends up in significant amounts in wastewater treatment plants, where they biodegrade into several by-products, including nonylphenol, which is more resistant than the parent compound [46,125]. NP is the major degradation product, does not undergo further transformation, and is highly adsorbed in the sludge; therefore, it is often found in higher concentrations in wastewater than in tributaries [46,125,126]. In most cases, it is the strong sorption of pollutants, and not the microbiological activity, which limits the rate of biodegradation [127]. On the rate of biodegradation NPEO, the main sources of NP in WWTPs, is affected by many factors e.g., temperature, NP isomer present in the environment, oxygen availability, pH and additives of yeast extracts, surfactants, aluminum sulphate, acetate, pyruvate, lactate, manganese dioxide, iron chloride, sodium chloride, hydrogen peroxide, heavy metals, and phthalic acid esters [128][129][130].
Conventional physicochemical wastewater treatment methods have not been shown to be effective in removing endocrine disruptors such as NPs due to their low molecular weight [27,131]. Novel purification techniques, including advanced oxidation methods, UV [132], adsorption (powdered activated carbon (PAC), and granular activated carbon (GAC)), ion exchange and membrane filtration (ceramics, polymers, and zeolites) are still under analysis [133,134].
One of the other methods of NP removal is to use cells and enzymes. Tanghe et al. in 1999 first described the Sphingomonas strain that degraded NP [135]. Since then, microorganisms involved in the biodegradation of NPs in the aquatic environment have been discovered (Sphingobium, Pseudomonas, Pseudoxanthomonas, Thauera, Novosphingonium, Bacillus, Stenotrophomonas, Clostridium, Arthrobacter, Acidurvorax, Rhizobium, Corynebacterium, Traynebacterium, Rhodococcus, Candida, Phanerochaete, Bjerkandera, Mucor, Fusarium and Metarhizium) [34,130,[136][137][138][139]. NP-degrading microorganisms were isolated from municipal sewage treatment plants from sludge, sewage sludge and activated sludge under aerobic and anaerobic conditions. Appropriate pH value, temperature, and the level of aeration of sewage sludge contribute to the increase of microbiological activity, and thus increase the degradation of NP.
Anaerobic degradation of NP has only recently been shown. Half-lives of anaerobic degradation ranged from 23.9 to 69.3 days. The rate of anaerobic degradation of NP was enhanced by increasing the temperature and adding yeast extract or surfactants [41,98,140]. Bacterial strains are effective in improving NP biodegradation and short-chain fatty acid accumulation. The species Propionibacterium, Paludibacter, Proteiniphilum, Guggenheimella, Lactobacillus, Anaerovorax and Proteiniborus correlated with the short-chain fatty acids synthesized as a result of NP degradation [137,141].
Some bacteria and fungi produce laccase, a multifunctional enzyme. Laccases are well-known enzymes that have found application in bioremediation, both as free and immobilized enzymes. Thanks to their broad range substrate specificity, laccases are able to remove xenobiotics, including EDCs. Therefore, the white rot fungi (WRF) are a promising tool in the above-mentioned elimination of EDCs during wastewater treatment processes [53]. The advantage of fungi over bacteria in lignin mineralization results from the production and secretion of laccase-non-specific enzyme outside the cell, which gives fungi access to non-polar and insoluble substances, operation over a wide range of temperatures and pH values, and the developing fungal hyphae also make it possible to reach contaminants inaccessible to bacteria [53,142,143]. The laccase from WRF like Pleurotus eryngii, Trametes versicolor or Phanerochaete chrysosporium can adhesion to alkylphenols, because these compounds have functional groups such as amino, hydroxyl, or alkyl groups in its chemical structure, acting as electron donors for oxygenases [53,[144][145][146][147][148].
BPA
BPA is a monomer in the production of polycarbonate plastics and epoxy resins, and as an additive in the production of PVC coatings is added to various plastics as a plasticizer. These materials are used in food storage containers, water, baby bottles, food, and drink cans. Under the influence of elevated temperature, BPA may migrate from containers to food and beverages [44,149]. BPA-derived monomers, especially bis-GMA (methacrylate bisphenol-glycidyl) are used in dental materials, from where they can be released [150].
The release of BPA to the aquatic environment takes place in several ways, including from production plants, from wastewater as a result of incomplete treatment, or in physicochemical and biological processes in treatment units, from leachate from landfills, as well as from leaching from discarded BPA-based products [17,19,67,86,105]. The problem is also sludge from recycled paper, the production of which uses BPA as a reactive agent. These sediments are used as fertilizers in agriculture, eventually contaminating the groundwater with BPA [151]. BPA also enters groundwater through its release to landfill leachate [40,[152][153][154][155].
The results showed that the leaching of BPA from polyethylene microplastics (LDPE) and polycarbonate (PC) was 2.68 µg/g and 14.45 µg/g, respectively [13]. Studies have shown that diffusion of BPA in the environment is related to the hydrolysis of PC: processing time for polycarbonate bottles in the presence of water is only a few (3-7) years [79].
BPA concentrations in surface water vary depending on the location, sampling period and depth of sampling. Studies in which both water and sediment were collected indicate significantly higher BPA concentrations in the sediments than in the upper water column, which is related to the slowing down of biodegradation processes in anaerobic environments [15,16,152,156]. Rivers in Europe and North America with higher BPA concentrations detected are commonly associated with production facilities [16,79]. Although BPA dissolved in surface water has a short half-life due to photo-and microbial degradation, its metabolites can persist much longer [14]. The BPA metabolites can be toxic for aquatic organisms [67]. Like BPA, its metabolites are xenoestrogens, e.g., 4.40-dihydroxymethylstilbene and 4-methyl-2,4-bis (4-hydroxyphenyl) pent-1-ene (MBP), whose estrogenic activity levels exceed BPA levels respectively 40-and 300-fold [157,158].
Observed BPA concentrations in oceans and estuaries are low compared to freshwater systems (Table 1). It is related to the fact that sewage from municipal and mixed municipalindustrial wastewater treatment plants is the main source of environmental BPA, and its runoff goes to rivers [15,17,22,[159][160][161]. BPA is leached faster in marine systems than in freshwater systems [162,163], and microbial degradation may be slower [24,164]. Moreover, the bioavailable fraction of dissolved BPA may increase with salinity [163].
The poor biodegradability of BPA in nature leads to the contamination of surface and groundwater. We can remove BPA using the following methods: photodegradation (TiO 2 ) is the most commonly studied and uses photocatalyst, adsorption (natural adsorbents, carbon and graphene, clay, nanomaterials, and composite materials), biodegradation with the use of microorganisms, or phytoremediation [96,[165][166][167][168].
The main source of microorganisms in biodegradation is activated sludge from WWTPs. Bacillus thuringiensis, Pseudomonas putida YC-AE1, Sphingomonas paucimobilis FJ-4, Lactococcus lactis, Bacillus subtilis and many more bacteria are capable to use BPA as substrate in their metabolism [79,169]. Among fungi, BPA degradation was observed in Saccharomyces cerevisiae and WRF [170,171]. WRF converts BPA into a much less reactive substance in enzymatic oxidation. The oxidized form of BPA does not bind to ERα dependent estrogen receptors. This may be due to laccase oxidation of both BPA hydroxyl groups. The product obtained by laccase catalyzed oxidation of BPA is 2,2-bis (4-phenylquinone) propane [172].
The best described and characterized fungal laccases are most often included in the ligninolytic enzyme complex, whose main function is to depolymerize lignin present in plant cell walls. In addition, fungal laccases engage in morphogenesis and sporulation, in phytopathogenic species they take part in the detoxification of toxic substances synthesized by the plant immune system [177,178]. For example, bacterial laccases can help protect the spore from UV radiation and hydrogen peroxide [179]. Among other things, plant laccases participate in the polymerization of lignin [180]. In insects, on the other hand, laccases oxidize pyrocatechins in the cuticle to the corresponding quinones, which catalyze protein cross linking reactions and are involved in the sclerotization of the cuticle [181]. Differences between laccases also exist in the structure of the protein molecule. Fungal and plant laccases are glycoproteins. The glycosylation process affects copper ions, providing the enzyme with thermal stability and protection from proteolytic degradation. Intracellular bacterial enzymes are often not glycosylated [182,183].
Laccase Reaction Mechanism
In the catalyzed reaction, laccases use molecular oxygen as an electron acceptor and produce a water molecule due to oxygen reduction [178,184]. The mechanism of the reactions catalyzed by laccase, is based on the formation of a product as a result of the oxidation of substrate molecules to oxygen radicals with the simultaneous reduction of an oxygen molecule to two water molecules according to the general notation: These are enzymes with broad substrate specificity, their natural substrates being phenolic and non-phenolic derivatives, including di-and methoxyphenols, phenolic acids, aromatic amines, and other lignin-related compounds [177]. The chemical structure of substrate is an extremely crucial factor for the reaction to continue with high efficiency. The preferred functional groups are amine, hydroxyl, carboxyl, methoxyl, and sulfonic groups, which make the oxidation of the substrate by the enzyme faster. Compounds with a more complex structure, which do not have functional groups, need the presence of reaction mediators (mediating compounds) in the exchange of electrons between laccase and the complex substrate [185,186]. Laccase forms multimeric complexes, which can be di-, or tetrameric proteins. Each monomer has four copper atoms with distinctive characteristics, interacting with amino acids to form the active center. Type 1 paramagnetic cooper (T1 Cu), a blue copper, transfers electrons during the substrate oxidation reaction. Type 2 (T2 Cu), non-blue copper is also a paramagnetic atom and engages in electron transfer. The two diamagnetic copper atoms that make up type 3 (T3 Cu) are a spin-coupled copper-copper pair (T3α and T3β), responsible for binding the oxygen molecule. A copper atom of type 2 and two copper atoms of type 3 form a three-atom complex, through which the binding and reduction of molecular oxygen to water takes place [187,188]. The involvement of individual copper atoms in the next reaction steps is explained in selected review papers, for example by Ren at al. (2021) [189].
Three types of reactions catalyzed by laccase have been described. The first involves direct oxidation of the substrate and does not require the presence of mediators. These are oxidation reactions of simple organic compounds such as mono-, di-or polyphenols or derivatives having amine, carboxyl, methoxyl, or sulfonic functional groups. Such reactions often occur in the environment, and are characteristic of cell wall regeneration at the site of plant tissue damage. The second type of reactions are also oxidation reactions but require the presence of supporting mediators, which are low-molecular compounds having specific functional groups (e.g., -NO, -NOH). An example of a reaction belonging to this group is the decomposition of lignin by white wood rot fungi. The third and final type of reactions catalyzed by laccases are coupling reactions, involving the synthesis, formed in the oxidation of phenolic substrates, of unstable and reactive radicals. Such a reaction mechanism allows the formation of new phenolic structures in non-enzymatic processes. An example of such a reaction is the transformation of toxic compounds present in industrial wastewater [173,177,190]. A property of laccases that is extremely important because of its application potential is the redox potential (E0) of the type I copper atom. According to this criterion, laccases were divided into three categories: enzymes with low, medium, and high redox potential. Bacterial and plant laccases usually have a low redox potential, in contrast to fungal laccases, which most often fall into the medium and high potential categories. Enzymes with medium redox potential, are produced by Ascomycota and Basidiomycota fungi, while those with high potential are synthesized by white rot fungi [191]. The value of the redox potential (E • ) of laccases is directly related to the energy required to remove an electron from a reducing substrate, This is why laccases of such fungi are of particular interest in biotechnology, as they are capable of oxidizing substrates with high E • (E • > 400 mV), such as EDCs [191][192][193].
The mentioned catalytic abilities of laccases have great application potential, especially in the degradation of xenobiotics. Considering the aspect of bioremediation, an attention is paid to laccases of bacterial and fungal origin. Particularly dangerous for human health, regarding water contamination, is a group of compounds known as EDCs. It has been proven that in the decomposition of these compounds, laccases, which are the subject of this work, play an essential role [194,195]. The biochemical properties of selected bacterial and fungal laccases and their participation in the degradation of NP and BPA as representatives of EDCs showed in Table 2. As we can see in Table 2, laccases with potential use in removing alkylphenols such as NP and BPA are mainly produced by fungi. Most of them belong to mesophilic species, developing in the range of optimal temperatures of 25-35 • C, which increases their usefulness in removing environmental pollutants on a large scale. The main problem may be obtaining the appropriate pH value for the effective operation of laccase and fungal growth, since industrial and municipal wastewater has much higher pH values. Some of the microorganisms were able to be completely removed BPA or NP from the environment by producing exogenous laccases under appropriate conditions.
Immobilization-Method for Improving the Properties of the Enzyme
According to the literature, free as well as immobilized enzymes of fungal, plant, and bacterial origin are used in enzymatic degradation of EDCs [189,205,206]. The efficiency of the bioprocesses depends on the selection of appropriate catalytic tools and optimization of reaction conditions. Enzyme proteins are not always sufficiently stable against organic solvents, so methods are still being looked to increase the efficiency and stability of biocatalysts. This is especially important for the efficiency of catalysis in aqueous environments. One such technique to improve the efficiency of catalysis of xenobiotic decomposition is laccase immobilization. Immobilization is a widely used method in biotechnology to help control of reaction conditions. Immobilized enzyme is less prone to autoinactivation and can be used several times, which reduces the cost intensity of such a process. Thanks to immobi-lization, it is possible to change some properties of enzymes e.g., shifting the optimum pH to values more convenient for the technological process, often increasing thermostability.
The most basic method of immobilization is adsorption onto the surface of solid ion exchangers. The enzyme binds to the carrier through electrostatic interactions, hydrogen bonds, or van der Waals interactions [189,207,208]. Various natural and synthetic carriers have been used for laccases: porous glass beads [209], green coconut fibers [210], and nanofiber membrane [211]. In this way, laccases used in degradation were immobilized. These are mainly fungal enzymes derived from Trametes versicolor or Trametes hirsuta [189,212,213].
Techniques in which covalent bonds are produced between the enzyme and the carrier, using functional groups such as amine, imidazole, or phenyl groups and others, are also used for enzyme immobilization. An economical yet effective solution for covalent immobilization of laccases seems to be the use of natural, carbon-based carriers such as biochairs. These are carbon products obtained after pyrolysis, with extremely high specific surface area. Such natural, readily available carbon carriers are often further changed with organic acids to increase the availability of functional carboxyl groups. Glutaraldehyde is often used as a binding agent, and pine wood shells from almonds and even pig manure can be used as an organic carrier [214].
An interesting example is the use of laccase immobilized on biochar modified with magnetic nanoparticles to degrade bisphenol A in an aqueous environment. The laccase immobilization process itself was a three-step process involving adsorption, precipitation, and crosslinking with glutaraldehyde. According to the authors' results, such an immobilization technique ensured high efficiency of T. versicolor laccase in the degradation of bisphenol A [215].
Bisphenol A was successfully degraded by laccase trapped in a matrix of polyethylene glycol-based hydrogel microparticles. Hydrogels of diverse types including agar-agar, gelatin, or combination in poly (acrylamide/crotonic acid)/sodium alginate, poly (acrylamide/crotonic acid)/K-carrageenan have been successfully used for years in enzyme immobilization [216].
Emulsion polymerization of a matrix consisting of poly (ethylene glycol) diacrylate (PEGDA), and poly (ethylene glycol) methacrylate (PEGMA) was supported by UW radiation. The literature data show that such immobilization ensured enzyme stabilization and efficient bisphenol A transformation [217]. It is also worth mentioning here the immobilization technique developed in recent years using nanofibers produced by the electrospinning method as carriers. Immobilization of T. versicolor laccase by adsorption and encapsulation using poly (l-lactic acid)-co-poly (ε-caprolactone) (PLCL) electrospun nanofibers as carriers supplied efficient degradation of naproxen and diclofenac [218].
Conclusions
Long-term exposure to even low concentrations of EDCs and the wide range and complexity of their mechanisms of action have an enormous impact on human and animal health. The scope of these impacts depends on many factors, including type of cells and tissues exposed to contact with hazardous chemicals, circadian rhythms, changes in seasons, stage of development, or gender [1].
Many factors influence the bioremediation of EDCs. The organic content of sediments was one of the important determinants of the adsorption process, especially for ethoxylates shorter chain NP [219,220]. NP concentrations in sediments were higher than in surface water [41]. NP remediation is possible by physicochemical and microbiological methods. Industrially produced technical grade NP is less biodegradable due to branched alkyl quaternary carbon in more than 85% of NP isomers [99,221].
BPA is ubiquitous in the environment due to its continuous release. Release may occur during the production, transport, and processing of chemicals. Post-consumer emissions result from the discharge of wastewater from municipal wastewater treatment plants, leaching from landfills, incineration of household waste, and the natural degradation of plastics in the environment [14]. Thus, BPA removal from the natural environment is an increasing worldwide concern. Biodegradation is expected to be the dominant process for removal of BPA from the aquatic environment.
EDCs penetrate aquatic ecosystems through the wastewater treatment system, industrial waste, municipal waste, agriculture, aquaculture, direct release of pharmaceuticals, chemicals, and indirect releases from sources such as rainwater runoff [222][223][224]. As a consequence, EDCs are absorbed, accumulated, and biomagnified in water, sediment, and biota. These harmful chemicals are periodically released into water bodies as they undergo biogeochemical processes [225]. The use of green methods that do not require toxic and hazardous materials should be a special aspect in the removal of NP and BPA.
Author Contributions: Conceptualization, A.G.; writing-A.G. and U.J. All authors have read and agreed to the published version of the manuscript. | 9,074.8 | 2022-11-01T00:00:00.000 | [
"Engineering"
] |
A convolutional neural network with self-attention for fully automated metabolic tumor volume delineation of head and neck cancer in \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$[^{18}$$\end{document}[18F]FDG PET/CT
Purpose PET-derived metabolic tumor volume (MTV) and total lesion glycolysis of the primary tumor are known to be prognostic of clinical outcome in head and neck cancer (HNC). Including evaluation of lymph node metastases can further increase the prognostic value of PET but accurate manual delineation and classification of all lesions is time-consuming and prone to interobserver variability. Our goal, therefore, was development and evaluation of an automated tool for MTV delineation/classification of primary tumor and lymph node metastases in PET/CT investigations of HNC patients. Methods Automated lesion delineation was performed with a residual 3D U-Net convolutional neural network (CNN) incorporating a multi-head self-attention block. 698 \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$[^{18}$$\end{document}[18F]FDG PET/CT scans from 3 different sites and 5 public databases were used for network training and testing. An external dataset of 181 \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$[^{18}$$\end{document}[18F]FDG PET/CT scans from 2 additional sites was employed to assess the generalizability of the network. In these data, primary tumor and lymph node (LN) metastases were interactively delineated and labeled by two experienced physicians. Performance of the trained network models was assessed by 5-fold cross-validation in the main dataset and by pooling results from the 5 developed models in the external dataset. The Dice similarity coefficient (DSC) for individual delineation tasks and the primary tumor/metastasis classification accuracy were used as evaluation metrics. Additionally, a survival analysis using univariate Cox regression was performed comparing achieved group separation for manual and automated delineation, respectively. Results In the cross-validation experiment, delineation of all malignant lesions with the trained U-Net models achieves DSC of 0.885, 0.805, and 0.870 for primary tumor, LN metastases, and the union of both, respectively. In external testing, the DSC reaches 0.850, 0.724, and 0.823 for primary tumor, LN metastases, and the union of both, respectively. The voxel classification accuracy was 98.0% and 97.9% in cross-validation and external data, respectively. Univariate Cox analysis in the cross-validation and the external testing reveals that manually and automatically derived total MTVs are both highly prognostic with respect to overall survival, yielding essentially identical hazard ratios (HR) (\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\text {HR}_{\text {man}}=1.9$$\end{document}HRman=1.9; \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$p<0.001$$\end{document}p<0.001 vs. \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\text {HR}_{\text {cnn}}=1.8$$\end{document}HRcnn=1.8; \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$p<0.001$$\end{document}p<0.001 in cross-validation and \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\text {HR}_{\text {man}}=1.8$$\end{document}HRman=1.8; \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$p=0.011$$\end{document}p=0.011 vs. \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\text {HR}_{\text {cnn}}=1.9$$\end{document}HRcnn=1.9; \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$p=0.004$$\end{document}p=0.004 in external testing). Conclusion To the best of our knowledge, this work presents the first CNN model for successful MTV delineation and lesion classification in HNC. In the vast majority of patients, the network performs satisfactory delineation and classification of primary tumor and lymph node metastases and only rarely requires more than minimal manual correction. It is thus able to massively facilitate study data evaluation in large patient groups and also does have clear potential for supervised clinical application. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-023-06197-1.
Neural network architecture
Automated lesion delineation was performed with a modified residual 3D U-Net CNN, see fig. 1. The network consists of encoder and decoder paths each of which represents an alternating sequence of residual blocks and, respectively, downsampling (2 × 2 × 2 max-pooling) and upsampling (3 × 3 × 3 deconvolution) steps. The residual block combines 3 × 3 × 3 convolution, batch normalization, leaky ReLU and dropout (rate = 0.1) layers and applies them to the input twice in a row while also providing a bypass skip-connection to improve the gradient flow and accelerate training. Since the number of input and output features can differ, 1 × 1 × 1 convolution is used in the skip-connection for equalization.
Features identified by the encoder are transferred to the decoder via skip-connections and concatenated with the corresponding decoder features at all image scales except the lowest one. At the bottom of the U-Net the encoder and decoder are connected through a Multi-Head Self-Attention (MHSA) block to improve awareness of the global context in classification of FDG-avid regions. MHSA was first introduced in [1] for natural language processing and was further adapted for image processing. Our implementation of MHSA mostly follows [2] but includes a few modifications. We refer to [2] for further details on, and motivation behind, MHSA while here we only provide a short overview of the method.
In our implementation, the MHSA block comprises two Self-Attention (SA) heads which share the same input. Each SA head establishes a rule according to which the information is transferred across the whole image. The rule is defined via projection operators (implemented as 1 × 1 × 1 convolution) of the input tensor into queries, keys, and values matrices: Q ∈ R n×k , K ∈ R n×k , and V ∈ R n×k , respectively. Here, n = 16 × 16 × 4 is the total number of voxels in the input tensor and k = 64 is the number of dimensions of the projection space. This way, each voxel i is represented by vectors q i , k i , and v i of length k which are stored as lines of the respective matrices. Values vector v j holds the relevant information about voxel j which is going to be transferred while vectors q i and k j determine the information transfer rate from voxel j to voxel i based on their similarity. In our implementation, we define similarity between q i and k j as a cosine of the angle between them calculated as a dot-product of the normalized vectors. The similarity coefficients of voxel i to every other voxel j are softmaxed and written in the form of an attention matrix A ∈ R n×n . Each matrix element A ij denotes the relative importance of the voxel j for classification of voxel i. The output of the SA head for each voxel i is the sum of values v j weighted with A ij : where the tilde denotes row-wise L 2 -normalization of the matrix and the softmax is applied row-wise. The outputs of all SA heads are concatenated and combined together via another 1 × 1 × 1 convolution (without bias).
Importantly, the SA mechanism has no means to account for the relative position of the voxels in the image. To give the SA heads access to the spatial information the input tensor is concatenated with a positional encoding tensor [1]. Moreover, we added a residual connection around the MHSA block to improve the gradient flow.
The described architecture was implemented using the Apache MXNet (version 1.9.0) package for the R language and environment for statistical computing (version 4.2.0) [3]. Figure 1: Architecture of the utilized CNN. Numbers above and beside each block designate the number of feature channels and matrix size at the given state, respectively. The images on the left are exemplary CT (top) and PET (bottom) images (input) and the image on the right is the corresponding output image (probability maps of the background, primary tumor, and lymph-node metastases shown in black, red, and green respectively)
Loss function and evaluation metrics
The loss function used for the network training is the sum of soft Dice and Cross-Entropy (CE) losses: where y ij and p ij are, respectively, ground truth (binary) and predicted probabilities of voxel i belonging to class j. N is the number of voxels in the training batch and C = 3 is the number of classes. Here, the smoothing constant ε = 1 is added to the denominator in (2) for numerical stability. The evaluation metric used for monitoring the training process in the validation data was the multiclass soft Dice function. The formula for the soft Dice metric function is similar to (2) with a difference in the expression sign and the fact that the loss function is calculated separately for each (relatively small) image batch while the evaluation metric is computed for the whole validation dataset at once. The variance of the evaluation metric was reduced in runtime using exponential smoothing (smoothing factor α = 0.6). All PET data were reconstructed including necessary attenuation, randoms, and scatter corrections. The corresponding CTs served also as input for training and testing. The matrix size of all CTs was 512x512. The in plane voxel size was (0.97 -1.37)mm and the off plane voxel size was (2 -5)mm. No postprocessing filter was applied to the data reconstructed with the BLOB-OS-TF method. All other data for which this information was available were postprocessed with a Gaussian filter with (2 -5)mm FWHM. In 176 cases this information was not available.
Fold 5
Epoch DSC Training subset Validation subset Best score Figure 2: Logs of the training process as represented by evaluation metric (multiclass soft Dice) for training and validation subsets. Each plot corresponds to the respective split of the main dataset into train-ing+validation and test subsets (fold). The dashed line marks the epoch for which the maximum of the evaluation metric was achieved. Note that training DSC stays generally lower than validation DSC due to dropout and data augmentations applied during the training. | 2,220.4 | 2023-04-20T00:00:00.000 | [
"Medicine",
"Engineering",
"Computer Science"
] |
ConAnomaly: Content-Based Anomaly Detection for System Logs
Enterprise systems typically produce a large number of logs to record runtime states and important events. Log anomaly detection is efficient for business management and system maintenance. Most existing log-based anomaly detection methods use log parser to get log event indexes or event templates and then utilize machine learning methods to detect anomalies. However, these methods cannot handle unknown log types and do not take advantage of the log semantic information. In this article, we propose ConAnomaly, a log-based anomaly detection model composed of a log sequence encoder (log2vec) and multi-layer Long Short Term Memory Network (LSTM). We designed log2vec based on the Word2vec model, which first vectorized the words in the log content, then deleted the invalid words through part of speech tagging, and finally obtained the sequence vector by the weighted average method. In this way, ConAnomaly not only captures semantic information in the log but also leverages log sequential relationships. We evaluate our proposed approach on two log datasets. Our experimental results show that ConAnomaly has good stability and can deal with unseen log types to a certain extent, and it provides better performance than most log-based anomaly detection methods.
Introduction
With the increase of many people's needs, the complexity of modern systems is increasing day by day. The more complex the system, the greater the likelihood of vulnerabilities that an invader may exploit to launch attacks. As a result, anomaly detection has become an important task in building trusted computer systems [1]. An accurate and effective anomaly detection model can reduce abnormal damage to the system, which is very important for business management and system maintenance. Logs are widely used to record important events and system status in operating systems or other software systems. Since system logs contain noteworthy events and runtime states, they are one of the most valuable data sources for anomaly detection and system monitoring [2].
Logs are semi-structured text data.One of the important tasks is anomaly detection in logs [3]. It is different from computer vision [4][5][6], digital time series [7][8][9] and graphic data [10]. In fact, the traditional way of handling log anomalies is very inefficient.Operators manually check system logs based on their domain knowledge by matching regular expressions or searching keywords (such as error and Failure). However, this anomaly detection method is not suitable for large-scale systems.
More and more works start to apply schemes to process the logs automatically. Existing log-based system anomaly detection methods can be roughly classified into two categories: one is based on log event indexes, such as PCA [11], Invariant Mining [12], Deeplog [13], and QLLog [14]. The other is based on log templates, such as LogAnomaly [15] and LogRobust [16]. Although both of these two methods first parse the logs, there are two differences: one is that the log event index-based method converts the log to the event index, while the log template-based method removes the numeric information in the log to obtain the log invariant (event template). For instance, the log template of log "Received block blk_7503483334202473044 of size 233,217 from /10.250. 19.102" is "Received block * of size * from *". The other is that the first method is to encode the log event index number (e.g., using a one-hot encoding), and the other is to vectorize the log template. Although the event template-based methods can utilize semantic information in log messages compared to the event index-based methods, they cannot handle the log template that has not been seen. Moreover, both methods are highly dependent on the log parser [17][18][19][20][21], especially for log event index-based approaches. The performance of the log event index-based approach degrades significantly when the log parsers are incorrect [22].
Although log templates are structured, they are still text data. Most of the input of machine learning models needs to be digital data, not text. Therefore, extracting the features of the log template or deriving its digital representation is the core step. Meng et al. [23] form the log event vector by the frequency and weights of words. The log event vector is transformed into the log sequence vector as the input of the anomaly detection model. The transformation from word vector to log event vector or log sequence vector is called coordinate transformation. However, the frequency and weight of words ignore the relevance between words. Recently, More and more works start to apply natural language processing (NLP) methods for the log event vectorization, especially word2vec, which generates word vectors based on the positional relationship of words. However, if word2vec is directly used in the system log template, it will generate a huge word space and cause unnecessary waste of resources. Therefore, this article has made corresponding improvements to the model.
In this article, we propose ConAnomaly, an anomaly detection method that takes advantage of both the semantic relationships of log messages like the template-based method and the sequential relationships between logs. In the ConAnomaly model, we improve the word2vec [24][25][26] model to obtain the log2vec that preprocess logs. It vectorizes the log content and removes invalid information through part of speech tag [27]. Finally, multi-layer LSTM [28] and other models are used for anomaly detection. We evaluated our proposed method on BGL [11] and HDFS [29] datasets. Experimental results show that ConAnomaly is versatile and has excellent detection performance.
The key contributions of this article can be summarized as follows: • We use the part of speech of the vocabulary as the standard for preliminary filtering of the log content, which reduces unnecessary waste of computing resources. To the best of our knowledge, our work is the first to utilize this to weight features. • This study provides new insights to handle unseen log templates and reduce the dependence on the log parser on the market. • We proposed ConAnomaly, which considers the semantic information in the log message into the log sequential anomaly detection, which improves the detection performance to a certain extent.
The rest of this article is organized as follows. We introduce the related work in Section 2 and present the theory of our work in Section 3. Besides, an overview of our scheme has two main components: log2vec and a model for anomaly detection. Finally, we evaluate the performance of the proposed model in Section 4 and conclude this work in Section 5.
Related Work
Log-base anomaly detection mainly consists of three steps: log parsing, feature extraction, and anomaly detection. We review the related works for each step.
Log Parsing
Log parsing extracts the log template or log event from the raw log. Figure 1 19.102:50010" that come from the HDFS dataset.It is parsed into log template "Receiving block <*> src: /<*> dest: /<*>" and event "E5". Here '<*>' is a wildcard to match parameters. There have been many studies on log parsing, e.g., Drain [21] and Spell [30]. Drain is an online log parsing method based on a fixed depth tree. When a new raw log message arrives, Drain preprocesses it using simple regular expressions based on domain knowledge. Then search for a log group, that is, for the leaf node of the tree by following specially designed rules encoded in the number of internal nodes. If the appropriate log group is found, the log messages will match the log events stored in that log group. Otherwise, a new log group is created based on the log message. It achieves high performance compared to many other log parser methods. The spell is an LCS-based [31] online stream processing log parsing method for structured stream parsing of event logs. It can dynamically accept log input, process the input in real-time, and constantly generate new log templates. In addition, He et al. Designed and implemented a parallel log parser (POP) on Spark, a large data processing platform.The original logs were divided into constants and variables, and the same log events were combined into the same cluster group through hierarchical clustering.
Feature Extraction
Extracting the feature of logs is the basis of anomaly detection. Generally, researchers select features from system logs, including log templates, event occurrences, event index, log variables and encode them through one-hot encoding or other weight methods. Lin et al. [2] parsed logs into log events using the log abstraction technique and convert them to vectors. Log sequences were represented as a vector of weight in an N-dimensional space after calculating the weight for each event, where N is the number of unique events. In DeepLog [13], besides the log events, it also considers the variant characteristics in the logs. Hua et al. [32] modeled the sample data as Hermitian positive-definite (HPD) matrices, and the geometric median of a set of HPD matrices is interpreted as an estimate of the clutter covariance matrix (CCM). Then By manifold filter, a set of HPD matrices are mapped to another set of HPD matrices by weighting them, that consequently improves the discriminative power by reducing the intra-class distances while increasing the inter-class distances.
In addition, more and more work start applying natural language processing(NLP) methods to log preprocessing, such as bag-of-words [33], TF-IDF [34], and word2vec.
He et al. [35] form the event count vector for each log sequence by counting the occurrence number of each log event, whose basic idea origins from bag-of-words. Lin et al. [2] propose an approach named LogCluster which turns each log sequence into a vector by Inverse Document Frequency (IDF) and Contrast-based Event Weighting. Meng et al. [15] propose a framework to model a log stream as a natural language sequence. They propose a novel, simple feature extraction method, template2vec, to extract the semantic information hidden in log templates by a distributional lexical-contrast embedding model (dLCE). The word vector is transformed to the log event vector. In this way, the semantic relationship of logs can be learned effectively.
Anomaly Detection
The existing anomaly detection methods based on log data are mainly classified into three categories, which are graph model-based [36][37][38], probability analysis-based [39] and machine learning based detection methods [40]. Anomaly detection based on graph is used to model the sequence relationship, association relationship, and log text content. The anomaly detection based on probability statistics adopts correlation analysis, comparison, etc., to calculate the correlation probability between log and anomaly.
At present, the machine learning-based method mainly utilizes the LSTM model to infer whether the log is abnormal or not by judging the log sequential relationships. Deeplog [13] leverages LSTM to model the sequence of log keys for a particular type of log, automatically learning normal patterns from normal log data to identify system exceptions. References [41,42] analyze the application of various LSTM models in anomaly detection, such as bidirectional LSTM and stacked LSTM.
Limitation of Previous Models
The limitations are as follows: • The existing log-based anomaly detection system is very effective, which mostly depends on the existing log parser tools. If the tool is not available for the current log data set, the model may not perform well. Moreover, they cannot handle unknown log events or templates. In DeepLog, it utilizes Spell, an unsupervised streaming parser that parses incoming log entries in an online fashion based on the idea of the longest common subsequence (LCS), to preprocess log files. Its input for classification is a window w of the h most recent log keys. That is, w = m t−h , . . . , m t−2 , m t−1 , where each m i is the log key from the log entry e i . However, if an undefined log instance is printed in a real-time environment, there is a risk that the model will crash or make incorrect predictions. • Logs as unstructured data have two characteristics: one is that there is a temporal relationship between logs, which is a manifestation of the workflow; The second is that the log itself has semantics. But most of the tools available take advantage of only the first feature of logs in the anomaly detection part. For example, in LogCluster, the clustering method is leveraged to cluster log sequences that are similar in sequences.
In this paper, we propose ConAnomaly, which utilizes both the semantic and sequential relationships of logs to detect anomalies. Our approach also addresses the limitations of the previous approaches to some extent. For example, in most previous detection models, if the incoming log is slightly different from the defined log template, it will be treated as unknown data. However, since our method is to build a database of the vocabulary of the log content, most of the vocabulary used in the log content is not very different, the occurrence of unknown data will be greatly reduced.
Overview
The overview of ConAnomaly is shown in Figure 2. The first step is log parsing, which extracts the log content from the original log files and then removes the numbers and punctuation marks in the log by regular matching. Each log is a set of words in the semi-structured text. Inspired by word2Vec, we propose a digital method, log2vec, which effectively converts the obtained log invariant into a vector sequence. The detail of this model will be described in the next section. The last step is to utilize the multilayer LSTM to learn the log sequential relationship and to conduct anomaly detection through modeling.
Log2vec
Word2vec, also known as word embeddings, turns words in natural language into dense vectors that computers can understand, and maps words that have similar meanings to nearby locations in the vector space [43]. However, it cannot be directly used in the vectorization process of the logs. Firstly, some words in the log are invalid words, such as "to" and "can", but it cannot filter them. Second, it targets at vocabulary and cannot vectorize sentences. Last but not least, if the log is not filtered, the word vector space will be large, which will be an unnecessary waste of computing resources. Therefore, we make some improvements to word2vec. We propose log2vec, a sentence representation method, which can get rid of the invalid word in logs and effectively construct log vectors.
As shown in Figure 3, log2vec includes three steps in the learning stage: (1) use the word2vec model to vectorize the word in the log content. (2) Tag the words in the obtained lexicon with part of speech. The vectors of words with part of speech labels 'CC', 'TO', 'IN' and 'MD' (as shown in Table 1) were set as zero vectors. (3) Calculate log vectors by weighted averaging the vectors of the words in the corresponding log invariants. Moreover, the log vectors in the BGL dataset are serialized by batch processing. However, this approach cannot be leveraged directly to the HDFS dataset, because the number of logs in the HDFS log block is different. Therefore, before batch processing, it is necessary to truncate or padding the log blocks in HDFS.
Log Anomaly Detection
The flow of anomaly detection is shown in the solid line box in Figure 2. We firstly leverage multilayer LSTM to learn sequential relationships between logs, then use the full connection(FC) [44] layer to make the linear transformation of the learning results and map them to the label space, and finally use the softmax layer to do normalization processing.
Lstm
The long short-term memory model (LSTM) is a popular recursive neural network structure, which has been proved to be able to predict data sequence effectively. As shown in Figure 4, LSTM controls cell state by three gates, which are respectively called the forgotten gate, input gate, and output gate. A forgotten gate is a sigmoid unit that leveraged to determine what information needs to be discarded in the cell state. It outputs a vector between 0 and 1 by operating on h t−1 and x t . The value of 0 to 1 in this vector indicates what information in the cell state is retained or discarded. 0 means no reservation, 1 means all reservations. The formula is as follows: The input gate and the candidate cell choose what new information to add to the cell state. First, an input gate is used to determine which information to update, and then h t−1 and x t are used to obtain new candidate cell information through a tanh layer that may be updated into the cell information. The formula is as follows: A part of the old cell information is forgotten by the forgotten gate selection while a part of the candidate cell information is added by the input gate to get the new cell information C t . Equation (4) is the update operation.
After updating the cell state, the final output of the model is obtained through the operation of the output gate. The formula is as follows:
FC
The core operation of a fully connected network is a matrix-vector product.
The essence of this layer is the linear transformation from one feature space to another. This layer is used to make a weighted sum of the features of the previous design, turning the hidden layer space back into the label space.
Softmax
The softmax function [45], also known as the normalized exponential function, aims to show the results of multiple classifications in the form of probability.
Assumed that there has an array Y, and y i represents the ith element in Y, then the softmax value of this element is:
Experiment
In this section, we first describe the experimental dataset and evaluation metrics, and then compare the performance of ConAnomaly on large system log data with existed methods. Lastly, we investigate the performance impact of various parameters in this model.
Datasets
We conduct our experiments on two datasets: the HDFS dataset and the BGL dataset. The summary statistics for the two datasets are listed in Table 2 In the following experiments, for both datasets, we first separate the normal and anomalous logs, and then, 80% of logs are extracted from both types of logs as the training data (according to the timestamps of logs), the rest are the testing data. Moreover, to solve the problem of data imbalance, SMOTE algorithm is used to synthesize the data.
Implementation
All the experiments are conducted on a Windows machine with Intel Core 3.40 GHz CPU and 8 GB memory. ConAnomaly is implemented through Pytorch [46] and we refer to the results from the corresponding literature for the three baseline methods.
Evaluation Metrics
Precision, recall, and F1-score are used to evaluate the accuracy of anomaly detection methods. Precision shows the percentage of true anomalies among all anomalies detected; Recall measures the percentage of anomalies in the dataset being detected; F1-score is the harmonic average of the two indexes.
TP (True Positive) refers to the real case, that is, the real situation is positive, and the predicted situation is also positive.
FP (False Positive) means false positive example, the real situation is negative, and the predicted situation is positive. FN (False Negative) refers to a false negative example, that is the real situation is positive, and the predicted situation is negative. Figure 5 shows the performance of ConAnomaly compared to five baseline methods over the BGL dataset. ConAnomaly achieves the highest recall among the six methods, having an F1 score of 0.98. LogAnomaly and ConAnomaly can detect anomalies with a more than 95% F1-score, which demonstrates that the semantic information of the log is helpful for log anomaly detection. LogAnomaly generates more false alarms than ConAnomaly because it does not address the problem that the new log does not match the old log template and that the log data is not balanced. LogCluster does not achieve good detection accuracy on BGL data. The poor performance of LogCluster is caused by the high dimensional sparsity of the event count matrix. As a result, log clustering makes it difficult to distinguish between anomalies and normal conditions, which often results in a large number of false positives. At the same time, we found that the BGL dataset has certain particularity. Table 3 shows a partial digitized BGL log sequence. The numbers in the table represent the different types of log events, for example, '149' represents logs that can be extracted as "External Input Interrupt (.*) (.*) (.*) tree Receiver (.*) in Resynch mode" (as shown in Table 4). As you can see from Table 3, the logs in the BGL dataset are highly stacked, which means that logs of the same type are always repeated consecutively. Based on this phenomenon, we believe that the BGL dataset is not very representative. Furthermore, we wanted to explore the detection capabilities of ConAnomaly on datasets with identifiers, so we did a similar experiment on the HDFS dataset. Table 3 The Log Template It Represents Figure 6 shows the performance of ConAnomaly over the HDFS dataset. ConAnomaly achieves the best accuracy among those methods and HitAnomaly has an F1-score of 0.98 as the second. Both ConAnomaly and HitAnomaly utilize word vectors. However, Hi-tAnomaly transforms the log template into a fixed dimensional vector, while ConAnomaly vectorizes the contents of the logs.
The Number in
In fact, many existing detection methods perform well on HDFS datasets (over 90%). This is mainly due to the log parser, which extracts the log template on the HDFS dataset very accurately. Because of this, most log processing methods use a log parser, such as the rest of the five methods in Figure 6 except ConAnomaly.
Analysis of ConAnomaly
We first investigate the effect of window size and the number of layers on performance in ConAnomaly. As shown in the following figures, we varied the value of one parameter while using the default values for the others (the control variates method), and reported the results over the HDFS dataset. Figure 7 shows the influence of the number of LSTM network layers (n-layer) in ConAnomaly. We observe that when the value of n-layer is greater than 2, the ConAnomaly model is not very sensitive to a different number of layer settings and the detection performances of the model are almost the same when n-layer = 2 and n-layer = 6. However, a greater number of parameters lead to a longer training time and prediction time. With this factor in mind, we choose a smaller n-layer (n-layer = 2). Figure 8 presents the performance impact of window size in ConAnomaly. Window size refers to the maximum distance between the current and predicted word within a log. As can be seen from the figure, in general, the precision of ConAnomaly is fairly stable with concerning different window sizes while it has an impact on the recall of the model. When the window size value is equal to 3, the model has the highest recall value. The smaller the window size, the worse the semantic relevance of the logs the system learns. The larger the size of the Widow, the easier it will be to overfit the model.
Experiment Based on the Unseen Logs
In this section, we evaluate the robustness of our model based on unseen log types. The log types are compared by the final representation of the different block_id log sequences.
First, we explore the log distribution rules of the HDFS data set. As can be seen from Figure 9 and Table 5, the number of log types increases rapidly between the first 10% and 40% of data ordered by the log timestamp. After random shuffling, the log types are close to linear distribution. That means log data of HDFS have an obvious update before 50% data. The HDFS data is suitable for evaluating the robustness of the model. As shown in Table 6, data of 1%, 10%, 20%, and 50% in the dataset were respectively adopted as the test set, and then the number of log types in the training set, test set, and those not in the training set but appearing in the test set (the unseen log types) were counted. The results showed that the F1-score increased as the unseen log types decreased. At the same time, we observe that when the training data accounts for 1%, the detection performance of ConAnomaly is also higher than 90%, which indicates that our model has good stability and can deal with unseen log types to a certain extent. However, in the experiment, we find that ConAnomaly presents the phenomenon of a single category of prediction when it predicts data, such as the predicted results are all normal. When the training data accounts for 10%, this phenomenon occurs 22 times. For this limitation of the model, we will conduct further research.
Conclusions
This article proposes ConAnomaly, an anomaly detection method that takes advantage of both the semantic relationships of log messages like the template-based method and the sequential relationships between logs. We designed a novel log sequence encoder to obtain log sequence representations and built its classification model based on the lstm mechanism. We evaluated our proposed method on two log datasets. Our experimental results demonstrate that ConAnomaly has outperformed other existing log-based anomaly detection methods and has a strong versatility.
One of our future work directions is to incorporate the structure of the attention mechanism into the task of log-based anomaly prediction, and we may consider the parameters in logs. Acknowledgments: We thank the Innovation Environment Construction Special Project of Xinjiang Uygur Autonomous Region and NSFC for funding this research. We thank the anonymous reviewers for their contribution to this paper.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: | 5,934.6 | 2021-09-01T00:00:00.000 | [
"Computer Science"
] |
Theory of Noise-Scaled Stability Bounds and Entanglement Rate Maximization in the Quantum Internet
Crucial problems of the quantum Internet are the derivation of stability properties of quantum repeaters and theory of entanglement rate maximization in an entangled network structure. The stability property of a quantum repeater entails that all incoming density matrices can be swapped with a target density matrix. The strong stability of a quantum repeater implies stable entanglement swapping with the boundness of stored density matrices in the quantum memory and the boundness of delays. Here, a theoretical framework of noise-scaled stability analysis and entanglement rate maximization is conceived for the quantum Internet. We define the term of entanglement swapping set that models the status of quantum memory of a quantum repeater with the stored density matrices. We determine the optimal entanglement swapping method that maximizes the entanglement rate of the quantum repeaters at the different entanglement swapping sets as function of the noise of the local memory and local operations. We prove the stability properties for non-complete entanglement swapping sets, complete entanglement swapping sets and perfect entanglement swapping sets. We prove the entanglement rates for the different entanglement swapping sets and noise levels. The results can be applied to the experimental quantum Internet.
Here, a theoretical framework of noise-scaled stability analysis and entanglement rate maximization is defined for the quantum Internet.By definition, the stability of a quantum repeater can be weak or strong.The strong stability implies weak stability, by some fundamentals of queueing theory [101][102][103][104][105]. Weak stability of a quantum repeater entails that all incoming density matrices can be swapped with a target density matrix.Strong stability of a quantum repeater further guarantees the boundness of the number of stored density matrices in the local quantum memory.The defined system model of a quantum repeater assumes that the incoming density matrices are stored in the local quantum memory of the quantum repeater.The stored density matrices formulate the set of incoming density matrices (input set).The quantum memory also consists of a separate set for the outgoing density matrices (output set).Without loss of generality, the cardinality of the input set (number of stored density matrices) is higher than the cardinality of the output set.Specifically, the cardinality of the input set is determined by the entanglement throughput of the input connections, while the cardinality of the output set equals the number of output connections.Therefore, if in a given swapping period, the number of incoming density matrices exceeds the cardinality of the output set, then several incoming density matrices must be stored in the input set (Note: The logical model of the storage mechanisms of entanglement swapping in a quantum repeater is therefore analogous to the logical model of an input-queued switch architecture [101][102][103].).The aim of entanglement swapping is to select the density matrices from the input and output sets, such that the outgoing entanglement rate of the quantum repeater is maximized; this also entails the boundness of delays.The maximization procedure characterizes the problem of optimal entanglement swapping in the quantum repeaters.
Finding the optimal entanglement swapping means determining the entanglement swapping between the incoming and outgoing density matrices that maximizes the outgoing entanglement rate of the quantum repeaters.The problem of entanglement rate maximization must be solved for a particular noise level in the quantum repeater and with the presence of various entanglement swapping sets.The noise level in the proposed model is analogous to the lost density matrices in the quantum repeater due to imperfections in the local operations and errors in the quantum memory units.The entanglement swapping sets are logical sets that represent the actual state of the quantum memory in the quantum repeater.The entanglement swapping sets are formulated by the set of received density matrices stored in the local quantum memory and the set of outgoing density matrices, which are also stored in the local quantum memory.Each incoming and outgoing density matrix represent half of an entangled system, such that the other half of an incoming density matrix is stored in the distant source quantum repeater, while the other half of an outgoing density matrix is stored in the distant target quantum repeater.The aim of determining the optimal entanglement swapping method is to apply the local entanglement swapping operation on the set of incoming and outgoing density matrices such that the outgoing entanglement rate of the quantum repeater is maximized at a particular noise level.As we prove, the entanglement rate maximization procedure depends on the type of entanglement swapping sets formulated by the stored density matrices in the quantum memory.We define the logical types of the entanglement swapping sets and characterize the main attributes of the swapping sets.We present the efficiency of the entanglement swapping procedure as a function of the local noise and its impacts on the entanglement rate.We prove that the entanglement swapping sets can be defined as a function of the noise, which allows us to define noise-scaled entanglement swapping and noise-scaled entanglement rate maximization.The proposed theoretical framework utilizes the fundamentals of queueing theory, such as the Lyapunov methodology [101], which is an analytical tool used to assess the performance of queueing systems [101][102][103][104][105][106], and defines a fusion of queueing theory with quantum Shannon theory [17,[63][64][65][66][68][69][70][71] and the theory of quantum Internet.
The novel contributions of our manuscript are as follows: 1. We define a theoretical framework of noise-scaled entanglement rate maximization for the quantum Internet.
2. We determine the optimal entanglement swapping method that maximizes the entanglement rate of a quantum repeater at the different entanglement swapping sets as a function of the noise level of the local memory and local operations.
3. We prove the stability properties for non-complete entanglement swapping sets, complete entanglement swapping sets and perfect entanglement swapping sets.
4. We prove the entanglement rate of a quantum repeater as a function of the entanglement swapping sets and the noise level.
This paper is organized as follows.In Section 2, the preliminary definitions are discussed.Section 3 proposes the noise-scaled stability analysis.In Section 4, the noise-scaled entanglement rate maximization is defined.Section 5 provides a performance evaluation.Finally, Section 6 concludes the results.Supplemental information is included in the Appendix.
System Model
Let V refer to the nodes of an entangled quantum network N , which consists of a transmitter node A ∈ V , a receiver node B ∈ V , and quantum repeater nodes R i ∈ V , i = 1, . . ., q.Let E = {E j }, j = 1, . . ., m refer to a set of edges (an edge refers to an entangled connection in a graph representation) between the nodes of V , where each E j identifies an L l -level entanglement, l = 1, . . ., r, between quantum nodes x j and y j of edge E j , respectively.Let N = (V, S) be an actual quantum network with |V | nodes and a set S of entangled connections.An L l -level, l = 1, . . ., r, entangled connection E L l (x, y), refers to the shared entanglement between a source node x and a target node y, with hop-distance since the entanglement swapping (extension) procedure doubles the span of the entangled pair in each step.This architecture is also referred to as the doubling architecture [10,[14][15][16].
For a particular L l -level entangled connection E L l (x, y) with hop-distance (1), there are d (x, y) L l − 1 intermediate nodes between the quantum nodes x and y.
Fig. 1 depicts a quantum Internet scenario with an intermediate quantum repeater R j .The aim of the quantum repeater is to generate long-distance entangled connections between the distant quantum repeaters.The long-distance entangled connections are generated by the U S entanglement swapping operation applied in R j .The quantum repeater must manage several different connections with heterogeneous entanglement rates.The density matrices are stored in the local quantum memory of the quantum repeater.The aim is to find an entanglement swapping in R j that maximizes the entanglement rate of the quantum repeater.
Quantum memory
An i-th incoming entangled density matrix i An l-level entangled connection
L l
An i-th outcoming entangled density matrix The problem of entanglement swapping in quantum repeater R j with N input and N output connections in a quantum Internet scenario.Quantum repeater R j stores an ρ i incoming entangled density matrix from the i-th input (the other half of ρ i is shared with a source quantum repeater R i ) and the σ k outgoing entangled density matrix (the other half of σ k is shared with a target quantum repeater R k ) in its local quantum memory.The U S entanglement swapping operation in R j generates long-distance entangled connections between the distant quantum nodes.The incoming and outgoing density matrices formulate sets S I (R j ) and S O (R j ) together formulate the entanglement swapping set.The aim of the optimization procedure is to determine the optimal entanglement swapping to maximize the outgoing entanglement rate of R j .)
Entanglement Fidelity
The aim of the entanglement distribution procedure is to establish a d-dimensional entangled system between the distant points A and B, through the intermediate quantum repeater nodes.Let d = 2, and let |β 00 be the target entangled system A and B, |β 00 = 1 √ 2 (|00 + |11 ) , subject to be generated.At a particular density σ generated between A and B, the fidelity of σ is evaluated as Without loss of generality, an aim of a practical entanglement distribution is to reach F ≥ 0.98 in (2) for a given σ [10-12, 14-17, 39].
Entanglement Purification and Entanglement Throughput
Entanglement purification [88][89][90] is a probabilistic procedure that creates a higher fidelity entangled system from two low-fidelity Bell states.The entanglement purification procedure yields a Bell state with an increased entanglement fidelity F , where F in is the fidelity of the imperfect input Bell pairs.The purification requires the use of two-way classical communications [10-12, 14-17, 39].Let B F (E i L l ) refer to the entanglement throughput of a given L l entangled connection E i L l measured in the number of d-dimensional entangled states established over E i L l per sec at a particular fidelity F (dimension of a qubit system is d = 2) [10-12, 14-17, 39].
For any entangled connection E i L l , a condition c should be satisfied, as where B * F (E i L l ) is a critical lower bound on the entanglement throughput at a particular fidelity F of a given E i L l , i.e., B F (E i L l ) of a particular E i L l has to be at least B * F (E i L l ).
Definitions
Some preliminary definitions for the proposed model are as follows.
Definition 1 (Incoming and outgoing density matrix).In a j-th quantum repeater R j , an ρ incoming density matrix is half of an entangled state |β 00 = 1 √ 2 (|00 + |11 ) received from a previous neighbor node R j−1 .The σ outgoing density matrix in R j is half of an entangled state |β 00 shared with a next neighbor node R j+1 .
Definition 2 (Entanglement Swapping Operation).The U S entanglement swapping operation is a local transformation in a j-th quantum repeater R j that swaps an incoming density matrix ρ with an outgoing density matrix σ and measures the density matrices to entangle the distant source and target nodes R j−1 and R j+1 .
Definition 3 (Entanglement Swapping Period).Let C be a cycle with time t C = 1/f C determined by the o C oscillator in node R j , where f C is the frequency of o C .Then, let π S be an entanglement swapping period in which the set S I (R j ) = i ρ i of incoming density matrices is swapped via U S with the set S O (R j ) = i σ i of outgoing density matrices, defined as π S = xt C , where x is the number of C. Definition 4 (Complete and Non-Complete Swapping Sets).Set S I (R j ) formulates a complete set S * I (R j ) if set S I (R j ) contains all the Q = N i=1 |B i | incoming density matrices per π S that is received by R j during a swapping period, where N is the number of input entangled connections of R j and |B i | is the number of incoming densities of the i-th input connection per π S ; thus, ) contains all the N outgoing density matrices that are shared by R j during a swapping period π S ; thus, S O (R j ) = N i=1 σ i and |S O (R j )| = N .Let S (R j ) be an entanglement swapping set of R j , defined as (5) with cardinality Otherwise, S (R j ) formulates a non-complete swapping set S (R j ), with cardinality Definition 5 (Perfect Swapping Sets).A complete swapping set S * (R j ) is a perfect swapping set at a given π S , if holds for the cardinality.
Definition 6 (Coincidence set).In a given π S , the coincidence set S ) is a subset of incoming density matrices in S I (R j ) of R j received from R i that requires the outgoing density matrix σ k from S O (R j ) for the entanglement swapping.The cardinality of the coincidence set is refer to the number of density matrices arriving from R i for swapping with σ k at π S .This means the increment of the Z (π S ) where π S is the next entanglement swapping period.The derivations assume that an incoming density matrix ρ chooses a particular output density matrix σ for the entanglement swapping with probability Pr (ρ, σ) = x ≥ 0 (Bernoulli i.i.d.).
Definition 8 (Incoming and outgoing entanglement rate) Let |B R i (π S )| be the incoming entanglement rate of R j per a given π S , defined as where |B (R i (π S ) , σ k )| refers to the number of density matrices arriving from R i for swapping with σ k per π S .Then, at a given |B R i (π S )|, the B R j (π S ) , the outgoing entanglement rate of R j is defined as where L is the loss, 0 < L ≤ N , and D (π S ) is a delay measured in entanglement swapping periods caused by the optimal entanglement swapping at a particular entanglement swapping set.
Definition 9 (Swapping constraint).In a given π S , each incoming density in S I (R j ) can be swapped with at most one outgoing density, and only one outgoing density is available in S O (R j ) for each outgoing entangled connection.
Definition 10 (Weak stable (stable) and strongly stable entanglement swapping).Weak stability (stability) of a quantum repeater R j entails that all incoming density matrices can be swapped with a target density matrix in R j .A ζ (π S ) entanglement swapping in R j is weak stable (stable) if, for every ε > 0, there exists a B > 0, such that where S (π S ) I (R j ) is the set of incoming densities of R j at π S , while S (π S ) I (R j ) is the cardinality of the set.
For a strongly stable entanglement swapping in R j , the weak stability is satisifed and the cardinality of S
Noise-Scaled Entanglement Swapping Sets
Proposition 1 (Noise Scaled Swapping Sets).Let γ be a noise coefficient that models the noise of the local quantum memory and the local operations, 0 ≤ γ ≤ 1.For γ = 0, the swapping set at a given π S is complete swapping set S * (R j ), while for any γ > 0, the swapping set is non-complete swapping set S (R j ) at a given π S .
In realistic situations, γ corresponds to the noises and imperfections of the physical devices and physical-layer operations (quantum operations, realization of quantum gates, storage errors, losses from local physical devices, optical losses, etc) in the quantum repeater that lead to the loss of density matrices.For further details on the physical-layer aspects of repeater-assisted quantum communications in an experimental quantum Internet setting, we suggest [6].Fig. 2 illustrates the perfect swapping set, complete swapping set and non-complete swapping set.For both sets, the incoming densities are stored in incoming set S I (R j ).Its cardinality depends on the incoming entanglement throughputs of the incoming connections.The outgoing set S O (R j ) is a collection of outgoing density matrices.The outgoing matrix is half of an entangled state and the other half is shared with a distant target node.
The input set S I (R j ) and output se S O (R j ) of R j consist of the incoming and outgoing density matrices.For a non-complete entanglement swapping set, the noise is non-zero; therefore, loss is present in the quantum memory.As a convention of our model (see the swapping constraint in Definition 9), any density matrix loss is modeled as a "double loss" that affects both sets S I (R j ) and S O (R j ).Because of a loss, the U S swapping operation cannot be performed on the incoming and outgoing density matrices.
Problem Statement
The problem formulation for the noise-scaled entanglement rate maximization is given in Problems 1-3.
Problem 1 Determine the entanglement swapping method that maximizes the entanglement rate of a quantum repeater at the different entanglement swapping sets as a function of the noise level of the local memory and local operations.
Problem 2 Prove the stability for non-complete entanglement swapping sets, complete entanglement swapping sets and perfect entanglement swapping sets.
Problem 3 Determine the outgoing entanglement rate of a quantum repeater as a function of the entanglement swapping sets and the noise level.
Problem 4 Define the optimal entanglement swapping period length as a function of the noise level at the different entanglement swapping sets.
The resolutions of Problems 1-4 are proposed in Proposition 1 and Theorems 1-4.
Entanglement Swapping Stability at Swapping Sets
This section presents the stability analysis of the quantum repeaters for the different entanglement swapping sets.
Proof.At a given π S , let ζ ik (ρ A , σ k ) be constant, with respect to the swapping constraint, defined as (a) Logical types of entanglement swapping sets at a given π S .(a).For a perfect entanglement swapping set Ŝ (R j ), the cardinalities of the input and output sets are For a non-complete entanglement swapping set S (R j ), some densities are randomly lost due to noise (depicted by empty dots) in S I (R j ), leading to |S * (R j )| = Q + M , where Q ≤ Q, and M = N − L, where L is the number of lost densities at a given π S .Since for each S (π S ) R j ((R i , σ k )) an output σ k is associated at a given π S , in a density loss in a given coincidence set also causes a decrease in the cardinality of the output set S O (R j ) (due to the swapping constraint); thus, |S O (R j )| = M .The cardinality of the coincidence sets for the swapping with σ k , and 0 otherwise.The aim is therefore to construct a feasible entanglement swapping method for all input and output neighbors of R j , for all π S entanglement swapping periods.Then, from Z Definition 7) and ( 17), a ω (π S ) weight coefficient can be defined for a given entanglement swapping ζ (π S ) at a given π S , as where • is the inner product, Z R j (π S ) is a matrix of all coincidence set cardinalities for all input and output connections at π S , defined as while ζ (π S ) is as given in (18).For a perfect and complete entanglement swapping set, at γ (π S ) = 0, let ζ * (π S ) refer to the entanglement swapping, with ω (γ (π S ) = 0) = ω * (π S ), where ω * (π S ) is the maximized weight coefficient defined as where ζ * (π S ) is an optimal entanglement swapping method at γ (π S ) = 0 (in general not unique, by theory) with the maximized weight.By some fundamental theory [101][102][103]105], it can be verified that for a non-complete set with entanglement swapping χ (π S ) at γ (π S ) > 0, defined as with norm a |χ (π S )| [101,103], defined as with |χ (π S )| ≤ 1, the relation holds for the weights.Then, let L Z R j (π S ) be a Lyapunov function [101][102][103]105]of Z R j (π S ) (see (20)) as Then it can be verified [101][102][103]105] that holds, as Z R j (π S ) is sufficiently large [101][102][103]105], where ε > 0. Since ( 26) is analogous to the condition on strong stability given in (16), it follows that as (21) holds for all π S entanglement swapping periods, the entanglement swapping at γ (π S ) = 0 in R j is a strongly stable entanglement swapping with maximized weight coefficients for all periods.Since for any complete and perfect entanglement swapping set, the noise is zero, the ω (π S ) weight coefficient of at a perfect swapping set Ŝ (R j ) is also a maximized weight with f (γ (π S )) = 0 as ω (π S ) ≤ ω * (π S ) .
Non-Complete Swapping Sets
Theorem 1 (Noise-scaled stability at non-complete swapping sets) An ζ (π S ) entanglement swapping at γ (π S ) > 0 is stable for any non-complete entanglement swapping set S (R j ).
Proof.Let γ (π S ) > 0 be the noise at a given π S , and let ζ (π S ) be the actual entanglement swapping at any non-complete entanglement swapping set S (R j ).Using the formalism of [105], let L (X) refer to a Lyapunov function of a M × M size matrix X, defined as where x ik is the (i, k)-th element of X.
Let C 1 and C 2 constants, C 1 > 0, C 2 > 0.Then, an ζ (π S ) entanglement swapping with γ (π S ) > 0 is stable if only where ∆ L is a difference of the Lyapunov functions L Z R j (π S ) and L Z R j (π S ) , where π S is a next entanglement swapping period, defined as and by theory [101][102][103]105].
To verify (34), first ∆ L is rewritten via (25) as where Z R j (π S ) can be evaluated as where B (R i (π S ) , σ k ) ≤ 1 is the normalized number of arrival density matrices from R i for swapping with σ k at a next entanglement swapping period π S , defined as where B R j (π S ) = i,k |B (R i (π S ) , σ k )| is a total number of incoming density matrices of R j from the N quantum repeaters.Using (38), the result in (37) can be rewritten thus ( 34) can be rewritten as where E B (R i (π S ) , σ k ) is the expected normalized number of density matrices arrive from R i for swapping with σ k at π S .
Complete and Perfect Swapping Sets
Lemma 1 extends the results for entanglement swapping at complete and perfect swapping sets.
Lemma 1 (Noise-scaled stability at perfect and complete swapping sets) An ζ * (π S ) optimal entanglement swapping at γ (π S ) = 0 is strongly stable for any complete S * (R j ) and perfect Ŝ (R j ) entanglement swapping set.
Proof.Let ω * (γ (π S ) = 0) be the weight coefficient of ζ * (π S ) at γ (π S ) = 0 at a given S * (R j ), as given in (21), with ζ * (π S ) as where S (ζ (π S )) is the set of all possible N !entanglement swapping operations at a given π S , and at N outgoing density matrices, and where π S is a next entanglement swapping period; as Then, for any complete swapping set S * (R j ), from (45 where C 1 is set as in (46), and by some fundamentals of queueing theory [101][102][103]105], the condition in ( 16) can be rewritten as where ε as given in (50), while Z * R j (π S ) is the cardinality of the coincidence sets of S * (R j ) at a given π S , as Thus, ( 57) can be rewritten as By similar assumptions, for any Ŝ (R j ) perfect entanglement swapping with cardinality ẐR j (π S ) of the coincidence sets of Ŝ (R j ) at a given π S , the condition in ( 16) can be rewritten as Thus, from ( 59) and ( 60), it follows that ζ * (π S ) ( 52) is strongly stable for any complete and perfect entanglement swapping set, which concludes the proof.
Noise-Scaled Entanglement Rate Maximization
This section proposes the entanglement rate maximization procedure for the different entanglement swapping sets.
Since the entanglement swapping is stable for both complete and non-complete entanglement swapping, this allows us to derive further results for the noise-scaled entanglement rate.The proposed derivations utilize the fundamentals of queueing theory (Note: in queueing theory, Little's law defines a connection between the L average queue length and the W average delay as L = λW , where λ is the arrival rate.The stability property is a required preliminary condition for the relation.).The derivations of the maximized noise-scaled entanglement rate assume that an incoming density matrix ρ chooses a particular output density matrix σ for the entanglement swapping with probability Pr (ρ, σ) = x ≥ 0.
Preliminaries
Let Z R j (π S ) be the cardinality of the coincidence sets at a given π S , as and let |B R i (π S )| be the total number of incoming density matrices in R j per a given π S , as where |B (R i (π S ) , σ k )| refers to the number of density matrices arrive from R i for swapping with σ k per π S .From ( 61) and ( 62), let D (π S ) be the delay measured in entanglement swapping periods, as where is the delay for a given R i at a given π S , as At delay D (π S ) ≥ 0, the B R j (π S ) number of swapped density matrices per π S is where 0 < L ≤ N is the number of lost density matrices in S O (R j ) of R j per π S at a non-zero noise γ (π S ) > 0, L = 0 if γ (π S ) = 0, and πS is the extended period, defined as with π S / (π S ) ≤ 1; thus, (65) identifies the B R j (π S ) outgoing entanglement rate per π S for a particular entanglement swapping set.
Non-Complete Swapping Sets
Theorem 2 (Entanglement rate decrement at non-complete swapping sets).For a non-complete entanglement swapping set is the total incoming entanglement throughput at a given π S , and , where lim Proof.The entanglement rate decrement for a non-complete entanglement swapping set is as follows.
After some calculations, the result in (45) can be rewritten as where where B (R i (π S ) , σ k ) refers to the normalized number of density matrices arrive from R i for swapping with while β is defined as [101,105] From ( 67), E L Z R j (π S ) can be evaluated as thus, after P entanglement swapping periods π S = 0, . . ., P − 1, E L Z R j (P π S ) is yielded as where π S (P ) is the P -th entanglement swapping, while E L Z R j (π S (0)) identifies the initial system state, thus by theory [105].Therefore, after P entanglement swapping periods, the expected value of Z R j (π S ) can be evaluated as where L Z R j (π S (P )) ≥ 0, thus, assuming that the arrival of the density matrices can be modeled as an i.i.d.arrival process, the result in ( 74) can be rewritten as lim π S →∞ Then, since for any noise γ (π S ) at π S , the relation holds, thus for any entanglement swapping period π S , from the sub-linear property of f (•), the relation follows for f (γ (π S )).Therefore, ω (γ (π S )) from ( 29) can be rewritten as such that for f Z R j (π S ) , the relation lim holds, which allows us to rewrite (75) in the following manner: where lim thus from ( 81), ( 80) is as lim where Therefore, the D (π S ) delay for any non-complete swapping set is as As a corollary, the B R j (π S ) outgoing entanglement rate per π S at (84), is as that concludes the proof.
Complete Swapping Sets
Theorem 3 (Entanglement rate decrement at complete swapping sets).For a complete entanglement swapping set Proof.Since for any complete swapping set, it follows that ξ (γ) = 0, therefore ( 82) can be rewritten for a complete swapping set as lim Proof.Let ζ * (π S ) be the optimal entanglement swapping at γ (π S ) = 0with a maximized weight coefficient ω * (π S ), and let χ (π S ) be an arbitrary entanglement swapping with defined as where ζ * (π S − π * S ) an optimal entanglement swapping at an (π S − π * S )-th entanglement swapping period, while π * S is an entanglement swapping period, defined as where h > 0.
Then, the ω (π S − π * S ) weight coefficient of entanglement swapping χ (π S ) (94) at an (π S − π * S )th entanglement swapping period is as while ω (π S ) at π S is as It can be concluded, that the difference of ( 96) and ( 97) is as thus ( 96) is at most π * S N more than (97), since the weight coefficient of χ (π S ) can at most decrease by N every period (i.e., at a given entanglement swapping period at most N density matrix pairs can be swapped by χ (π S )).
Performance Evaluation
In this section, a numerical performance evaluation is proposed to study the delay and the entanglement rate at the different entanglement swapping sets.
Entanglement Swapping Period
In Fig. 3
Delay and Entanglement Rate Ratio
In Fig. 4(a), the values of D (π S ) are depicted as a function of the incoming entanglement rate B R j (π S ) for the non-complete S (R j ), complete S * (R j ) and perfect Ŝ (R j ) entanglement swapping sets.The D (π S ) delay values are evaluated as D (π S ) = Z R j (π S ) B R j (π S ) via the maximized values of ( 84), ( 92) and ( 92), i.e., the delay depends on the cardinality of the coincidence set and the incoming entanglement rate (This relation also can be derived from Little's law; for details, see the fundamentals of queueing theory [101][102][103]105]).In Fig. 4(b), the ratio r of the B R j (π S ) outgoing and B R j (π S ) incoming entanglement rates is depicted as a function of the incoming entanglement rate.For the non-complete entanglement swapping set, the loss is set as L = 0.2N .The ratio r = B R j (π S ) / B R j (π S ) of the B R j (π S ) outgoing and B R j (π S ) incoming entanglement rates for the different entanglement swapping sets as a function of the B R j (π S ) incoming entanglement rate.The entanglement rate decrease for the non-complete swapping set caused by losses is L N B R j (π S ) , while L = 0 for the complete and perfect entanglement swapping sets.
The highest D S delay can be for non-complete entanglement swapping set S (R j ), while the lowest delays can be found for the perfect entanglement swapping set Ŝ (R j ).For a complete entanglement swapping set S * (R j ), the delay values are between the non-complete and perfect sets.This is because for a non-complete set, the losses due to the γ (π S ) > 0 non-zero noise allow only an approximation of the delay of a complete set; thus, D R j (π S ) > D * R j (π S ).For a complete entanglement swapping set, while the noise is zero, γ (π S ) = 0, the Z * R j (π S ) cardinality of the coincidence set is high, Z * R j (π S ) > ẐR j (π S ) ; thus, the D * R j (π S ) delay is higher than the DR j (π S ) delay of a perfect entanglement swapping set, D * R j (π S ) > DR j (π S ).For a perfect set, the noise is zero, γ (π S ) = 0 and the cardinality of the coincidence sets is one for all inputs; thus, Z * R j (π S ) > ẐR j (π S ) = N .Therefore, the DR j (π S ) delay is minimal for a perfect set.
From the relation of D R j (π S ) > D * R j (π S ) > DR j (π S ), the corresponding delays of the entanglement swapping sets, the relation for the decrease in the outgoing entanglement rates is straightforward, as follows.As r → 1 holds for the ratio r of the B R j (π S ) outgoing and B R j (π S ) incoming entanglement rates, then B R j (π S ) → B R j (π S ) , i.e., no significant decrease is caused by the entanglement swapping operation.
The highest outgoing entanglement rates are obtained for a perfect entanglement swapping set, which is followed by the outgoing rates at a complete entanglement swapping set.For a noncomplete set, the outgoing rate is significantly lower due to the losses caused by the non-zero noise in comparison with the perfect and complete sets.
Conclusions
The quantum repeaters determine the structure and performance attributes of the quantum Internet.Here, we defined the theory of noise-scaled stability derivation of the quantum repeaters and methods of entanglement rate maximization for the quantum Internet.The framework characterized the stability conditions of entanglement swapping in quantum repeaters and the terms of non-complete, complete and perfect entanglement swapping sets in the quantum repeaters to model the status of the quantum memory of the quantum repeaters.The defined terms are evaluated as a function of the noise level of the quantum repeaters to describe the physical procedures of the quantum repeaters.We derived the conditions for an optimal entanglement swapping at a particular noise level to maximize the entanglement throughput of the quantum repeaters.The results are applicable to the experimental quantum Internet.
A.1 Notations
The notations of the manuscript are summarized in Table A R j A current j-th quantum repeater, j = 1, . . ., q, where q is the total number of quantum repeaters.q Total number of quantum repeaters in an entangled path ρ In a j-th quantum repeater R j , an ρ incoming density matrix is a half of an entangled state |β 00 received from a previous neighbor node R j−1 . σ The σ outgoing density matrix in R j is a half of an entangled state |β 00 shared with a next neighbor node R j+1 .
U S Entanglement swapping operation is a local transformation that swaps an incoming density matrix ρ with an outgoing density matrix σ in a quantum repeater R.
S I (R j ) Set of incoming density matrices stored in the quantum memory of R j , S I (R j ) = i ρ i , where ρ i is an i-th density matrix.
S O (R j ) Set of outgoing density matrices stored in the quantum memory of R j , S O (R j ) = i σ i , where σ i is an i-th density matrix.L Z R j (π S ) Lyapunov function of Z R j (π S ), as ∆ L Difference of Lyapunov functions L Z R j (π S ) and L Z R j (π S ) , where π S is a next entanglement swapping period, defined as ∆ L = L Z R j (π S ) − L Z R j (π S ) ., where B R j (π S ) = i,k |B (R i (π S ) , σ k )| is a total number of incoming density matrices of R j from the N quantum repeaters.
Expected normalized number of density matrices arrive from R i for swapping with σ k at π S .
α ik Parameter, defined as
Figure 4 :
Figure 4: The D (π S ) delay values for the different entanglement swapping sets as a function of the B R j (π S ) incoming entanglement rate, B R j (π S ) ∈ 10 0 , 10 8 , N = 5, L = 0.2N , β = 0.78 for the complete and perfect sets, β = 0.64 for a non-complete set and C 1 = 0.7, γ (π S ) = 0.2, h = 0.2, π * S = 1.2π S and f (γ (π S )) = 2π * S N = 12.(b) The ratio r = B R j (π S ) / B R j (π S ) of the B R j (π S ) outgoing and B R j (π S ) incoming entanglement rates for the different entanglement swapping sets as a function of the B R j (π S ) incoming entanglement rate.The entanglement rate decrease for
l
Level of entanglement.L l (x, y) An l-level entangled connection between quantum nodes x and y. d (x, y) L l Hop-distance at an L l -level entangled connection between quantum nodes x and y, d (x, y) L l = 2 l−1 .O C An oscillator with frequency f C , f C = 1/t C , serves as a reference clock.C A cycle, with t C = 1/f C .π S An entanglement swapping period, in which the set S I (R j ) of density matrices are swapped via the U S entanglement swapping operator with the S O (R j ) of density matrices, defined as π S = xt C , where x is the number of C. π S A next entanglement swapping period after π S .
BF
Entanglement throughput [Bell states per π S ]. |B F | Number of entangled states [Number of Bell states].L l (k) A k-th entangled connection.B F (L l (k)) Entanglement throughput of the entangled connection L l (k) [Bell states per π S ].
S *I (R j ) A complete set of incoming density matrices.Set S I (R j ) formulates a S * I (R j ) complete set if S I (R j ) contains all the Q = N i=1 |B i | incoming density matrices per π S that is received by R j in a swapping period, where N is the number of input entangled connections of R j , |B i | is the number of incoming densities of the i-th input connection per π S , thusS I (R j ) = Q i=1 ρ i and |S I (R j )| = Q.S * O (R j ) A complete set of outgoing density matrices.An S O (R j ) set formulates a S * O (R j ) complete set, if S O (R j) contains all the N outgoing density matrices that is shared by R j during a swapping period π S , thusS O (R j ) = N i=1 σ i and |S O (R j )| = N .S (R j )An entanglement swapping set of R j , S (R j ) = S I (R j ) S O (R j ) that describes the status of the quantum memory in R j .S * (R j ) A complete entanglement swapping set.A S (R j ) is a S * (R j ) complete swapping set, if S * (R j ) = S * I (R j ) S * O (R j ), with cardinality |S * (R j )| = Q + N .S * (R j ) A perfect entanglement swapping set.A S * (R j ) complete swapping set is a Ŝ (R j ) = ŜI (R j ) ŜO (R j ) perfect swapping set at a given π S , if Ŝ (R j ) = N + N .S (π S ) R j ((R i , σ k ))A coincidence set, a subset of incoming density matrices in S I (R j ) of R j received from R i that requires the same outgoing density matrix σ k from S O (R j ) for the entanglement swapping.Z (π S ) R j ((R i , σ k )) Cardinality of the coincidence set S (π S ) R j ((R i , σ k )) [Number of Bell states].
C 1 S
Constant, C 1 = 1 − z ν z .Z R j (π S )Cardinality of the coincidence sets at a given π S , asZ R j (π S ) = i,k Z (π S ) R j ((R i , σ k )) = |S I (R j )|.|B R i (π S )|Total number of incoming density matrices in R j per a given π S , asB R j (π S ) = i,k |B (R i (π S ) , σ k )|.πSAn extended entanglement swapping period, defined as πS = π S + D (π S ), with π S / (πS ) ≤ 1 [Number of π S periods].B R j (π S )Outgoing entanglement rate per π S for a particular entanglement swapping set [Bell states per π S ].f (•) Sub-linear function.ξ (γ) Parameter, defined as ξ (γ) = (N −L) 2C 1 f (γ (π S )).βParameter, defined asβ = i,k B (R i (π S ) , σ k ) − B (R i (π S ) , σ k ) 2 ,where B (R i (π S ) , σ k ) refers to the normalized number of density matrices arrive from R i for swapping withσ k at π S as B (R i (π S ) , σ k ) = |B(R i (π S ),σ k )| i |B(R i (π S ),σ k )| .P Number of entanglement swapping periods .D (π S ) Delay per π S at a non-complete entanglement swapping set.D * (π S ) Delay per π S at a complete entanglement swapping set.D (π S ) Delay per π S at a perfect entanglement swapping set.Z R j (π S ) Cardinality of the coincidence sets at a given π S , for a non-complete entanglement swapping set.Z * R j (π S ) Cardinality of the coincidence sets at a given π S , for a complete entanglement swapping set.ẐR j (π S ) Cardinality of the coincidence sets at a given π S , for a perfect entanglement swapping set.π * An extended entanglement swapping period, defined as π * S = (1 + h) π S , where h > 0 [Number of π S periods]. .1. | 10,504.2 | 2020-02-17T00:00:00.000 | [
"Physics"
] |
Biomaterial Implants in Abdominal Wall Hernia Repair : A Review on the Importance of the Peritoneal Interface
Biomaterials have long been used to repair defects in the clinical setting, which has led to the development of a wide variety of new materials tailored to specific therapeutic purposes. The efficiency in the repair of the defect and the safety of the different materials employed are determined not only by the nature and structure of their components, but also by the anatomical site where they will be located. Biomaterial implantation into the abdominal cavity in the form of a surgical mesh, such as in the case of abdominal hernia repair, involves the contact between the foreign material and the peritoneum. This review summarizes the different biomaterials currently available in hernia mesh repair and provides insights into a series of peculiarities that must be addressed when designing the optimal mesh to be used in this interface.
Introduction
Biomaterials are being extensively used as scaffolds in the field of tissue engineering and reparative medicine.The term biomaterial defines a biological or synthetic material whose aim is to contribute to the repair or regeneration of a damaged tissue by its partial or total replacement [1].For this reason, biomaterials find their widest range of application in surgical procedures, their design determined by the specific function for which they are intended.
The promising results that they provide in the repair of tissue defects have led to a spectacular increase of their use in current clinical practice, which has in turn contributed to the development and evolution of the surgical techniques performed in different medical specialties.Biomaterials turn out to be vital in solving important functional conditions such as orthopedic, vascular or ophthalmologic-related medical issues, among others.Thereby, an improvement in the patients' quality of life due to biomaterials is not only positive from a clinical perspective, but also through the contribution to their psychological well-being.
The complexity of biomaterials and the great responsibility that their use implies requires that their design and development take up a multidisciplinary approach.Thus, the involvement of professionals from different fields (e.g., chemists, biologists, engineers, histopathologists and surgeons) is essential to achieve the expected outcomes that would benefit patients suffering from different pathologies.
One of the most frequent surgical application of biomaterials in recent years has been hernia repair.Every year around twenty million hernia repair procedures are performed around the world [2].Inguinal hernia repair is the surgical procedure most often conducted by general surgeons [3].The use of biomaterials for this purpose in the form of surgical meshes has drastically contributed to a decrease in the hernia recurrence rate [4], which is one of the most common complications that occur in patients undergoing this type of surgery.
Biomaterials in Abdominal Wall Repair
The repair of the abdominal wall is commonly required in the event of abdominal hernias or open wounds.Abdominal hernias require surgical intervention since they cause pain or discomfort and, more importantly, can produce the protrusion of intraabdominal organs through these defects, which could cause tissue strangulation.The incidence of ventral hernias is high; nearly 350,000 repairs are performed each year in the United States [5].
Abdominal wall reconstruction is a complex procedure that seeks to restore the abdominal wall structure by maintaining its natural strength and elasticity as much as possible while causing the least side effects.The traditional repair methods consisted of primary closure by open suture techniques.However, these techniques are no longer recommended since they are related to high recurrence and wound dehiscence rates [6,7] that could eventually lead to evisceration, especially in the event of large defects [7].The placement of mesh as an alternative technique in abdominal wall repair offers some advantages over the suture closure [8].Meshes confer an extra surface, avoiding the surgical approximation of the defect edges and the subsequent excessive tension in the area.This tension would be responsible for impaired tissue healing, tissue ischemia, and defective closure or reconstruction of the wall, that could result in wound dehiscence and herniation [6].However, although superior to traditional suture closure, the use of meshes is not without complications.This underlies the complexity of the processes carried out during abdominal wall reconstruction and the large amount of factors involved.
The improved outcomes achieved by the use of surgical meshes have triggered the development of different biomaterials to be used in the abdominal location.Research on abdominal meshes has been traditionally based on comparative analyses of materials with different chemical or biological nature and/or the optimization of their physical and mechanical properties.Different reviews on the different biomaterials available from the point of view of their composition, bio-functionality or their structural and mechanical properties have been previously published [9][10][11][12][13].In this review, we have specifically focused on their behavior at the peritoneal interface.The still high incidence of postsurgical peritoneal adhesions after intraperitoneal mesh implantation and the severe clinical complications that result make necessary a comprehensive understanding of the most relevant factors implied.Here, we provide a review on the abdominal cavity contents involved in adhesion formation, the host tissue, and cell response exerted by biomaterials in this cavity and the adhesiogenic process.An updated classification of biomaterials available for abdominal surgery is presented, targeting principally their performance in relation to adhesion formation.
Mesh Positioning in the Abdominal Wall
According to the position relative to the peritoneum, meshes can be implanted: extraperitoneally i.e., in a retromuscular plane and not in direct contact with the bowels; or intraperitoneally, between the peritoneum and the intraabdominal organs and bowels.In both alternatives, complications can arise.However, the intraperitoneal position poses an increased risk of dangerous events such as mesh migration [14][15][16][17][18][19], adhesions [20,21], intestinal obstruction [15,19] or fistulae [16,[20][21][22][23][24], that can occur even several years after the mesh placement.Notwithstanding, the IPOM (intraperitoneal onlay mesh) technique is indicated in several patients who have undergone a previous laparoscopic repair, an infraumbilical surgery with violation of the preperitoneal space, or suffer from a recurrent inguinal hernia [25].
The Abdominal Cavity
The success of a biomaterial implant in the abdominal cavity is conditioned by the resolution of different processes characteristic of this anatomical site.The damage to intraabdominal tissues/organs like the peritoneum or the omentum provokes a specific cell and tissue response.
The Peritoneum
A key factor in the intraabdominal mesh implantation is the contact between the biomaterial and the peritoneum.The peritoneum is a serous membrane that consists of a basal lamina and a submesothelial stroma covered by a mesothelial cell monolayer [26].This membrane covers the inner side of the abdominopelvic cavity-defined as parietal peritoneum-as well as the surface of the intraabdominal structures, known as visceral peritoneum.The contact of a biomaterial with the parietal and visceral peritoneum-when in the intraperitoneal position-or just with the visceral peritoneum-in total defects hernia repair which include the removal of the parietal peritoneum-requires some special considerations to be made when selecting the most appropriate mesh to be used.The peritoneum can be easily harmed during abdominal surgery.The first layer exposed in peritoneum is the mesothelium, which is a delicate structure.At the intercellular junctions in the mesothelium some openings-stomata-that provide direct access to the submesothelial lymphatic system are found [27,28].This makes this layer highly permeable to the peritoneal fluid.Mesothelial cells (MCs) present numerous microvilli at their apical membrane surrounded by a lubricating glycocalyx [28].This glycocalyx has an anti-inflammatory function and plays an important role in intercellular contacts and tissue remodeling [28,29].Thus, the mesothelial layer confers a protective cover for the underlying tissue.MCs are supported by the basal lamina through weak bindings, which indicates that these cells can be easily detached in case of mechanical insult [30].Considering the slight thickness of the basal lamina, less than 100 nm thick [26], when the peritoneum is injured during intraabdominal procedures, both the mesothelial monolayer and the basal lamina are usually removed leaving the submesothelial stroma underneath exposed.Besides collagen type I fibers, laminin, fibronectin, proteoglycans and glycosaminoglycans, also fibroblasts, adipocytes, nerves, blood and lymphatic vessels can be found in this layer [31].The exposure of these cell types and components after trauma is of importance for the reparation of the zone and has an influence on the adhesion formation process [32].In addition to disruption of the mesothelial layer, the mechanical injury and the peritoneal inflammation produce the release of cytokines and growth factors, such as TGF-β (transforming growth factor-β) [33], that provokes the epithelial-to-mesenchymal transition of MCs [34][35][36].This process plays a pivotal role in peritoneal fibrosis through the conversion of MCs into migratory and invasive cells with a myofibroblastic phenotype [37,38].These cells secrete-among other growth factors-VEGF (vascular endothelial growth factor), which is an inductor of angiogenesis [39,40].Reparative macrophages also promote neoangiogenesis and release growth factors and matrix-remodeling enzymes [41].These events, together with the release of other proangiogenic factors like b-FGF (basic fibroblastic growth factor) [42], can contribute to the stabilization of peritoneal adhesions as permanent structures between the biomaterial and the opposing intraabdominal organs.
The Omentum
The omentum is a highly vascularized tissue that lies posterior to the abdominal wall and serves as coverage and protection for the intraabdominal contents [43].It is of greatest importance in adhesion formation, since it is involved in 92% of postsurgical adhesions and in 100% of spontaneous adhesions [44].It exhibits a particular predisposition to attach to foreign materials like surgical meshes in the abdominal cavity [45,46], which is probably due to its particular cell composition that provides this tissue with an immunologic role [47,48] and tissue remodeling properties [43].It is mainly composed of white adipose tissue in a lobular configuration septated by connective tissue and delineated by a mesothelial layer.It contains abundant blood and lymphatic vessels, especially in the submesothelial layer, and lymphoid bodies, so-called milky spots, in the outermost layer of the omentum or embedded in the adipose tissue [49].The existence of this organ in the abdominal cavity largely conditions the host tissue response to a biomaterial implant in this location.The omentum shows a rapid response to abdominal injury, with the mobilization of cells comprising the milky spots that proliferate and spread over the omental tissue [49] and secrete growth factors and cytokines related to tissue repair and remodeling [43,49].MCs (especially those near milky spots) have shown changes in their phenotype in response to injury, returning to normality only after tissue repair [50].Besides, fibrocytes, pericytes and fibroblasts contained in the omentum provide an environment that supports tissue growth via angiogenic factors and cytokines that promote wound closure, vascular development and remodeling as well as collagen deposition [43].A different progression of the omental tissue involved in adhesions to an adipose or fibrotic phenotype has been observed and correlated to the presence of different isoforms of TGF-β (TGF-β1 and TGF-β3) and the concomitant expression of the soluble or the membrane-bound form of betaglycan (type III TGF-β receptor) [49].A similar role for the different isoforms of TGF-β and their receptors in the response of peritoneum to abdominal injury is still to be investigated.
Bearing all this in mind, it seems clear that the abdominal cavity represents an anatomical location with particular features that need to be considered when designing or selecting the mesh to be employed in order to minimize adverse medical outcomes.
Host Tissue and Cell Response
The presence of a foreign material into the abdominal cavity triggers a series of events influenced by the individual response of the patient and the surgical procedure performed.As part of the reparative process, an inflammatory response is exerted in an attempt to contribute to the restoration of the damaged area and to encapsulate the foreign biomaterial to separate it from the surrounding tissue [51].The normal course of the reparative process requires a perfect orchestration of all the phases-hemostasis, inflammation, proliferation and remodeling-and every cell type involved.For this reason, the understanding of the events and signaling processes occurred during wound healing, and specifically in the presence of a foreign material, is crucial in abdominal wall repair.
After peritoneal injury during a surgical procedure or mediated by the subsequent mechanical aggression of the implanted mesh, different substances like histamine or vasoactive quinines are released.Thereby, the permeability of the blood vessels is favored.A protein fibrinous exudate covers the damaged area (Figure 1) and is infiltrated by inflammatory cells.The first cell type attracted by chemokines that appear in the damaged area are polymorphonuclear neutrophils, which contribute to the ingestion of foreign particles or microorganisms.The following important event in the inflammatory phase is the appearance of monocytes that are attracted by the pro-inflammatory cytokines IL (interleukin) -1, IL-6, IL-8, and TNF-α (tumor necrosis factor alpha) released in the peritoneal fluid [52].Monocytes differentiate into macrophages once in the tissue and adhere to the wound.There, they will release numerous cytokines that constitute the real effectors of the phagocytic defense system.Adherent macrophages attempt to phagocyte the biomaterial and fuse to form foreign body giant cells in a biomaterial-dependent process [51].Macrophages can also prevent during the first 48 h and then stimulate from 48 to 54 h after damage the MCs proliferation.Also, MCs release different cytokines and growth factors to the peritoneal fluid to mediate the peritoneal healing.Two macrophage subpopulations are involved in the post-implantation response.M1 macrophages favor inflammatory reaction, while the M2 subpopulation has a role in tissue remodeling.Leukocytes in the early phases also promote the proliferation of the normally quiescent MCs [52].Lymphocytes type T have been found in the macrophage infiltrates, developing the immune response.The secretory products of macrophages modulate the fibroblasts proliferation during the proliferative phase.Under the action of TGF-β, quiescent fibroblasts differentiate into myofibroblasts [51], a cell type that exerts an essential role into the reparative process by synthesizing collagen and restoring the extracellular matrix.Lately, type III collagen fibers are replaced by type I collagen during the remodeling phase.
Fibrillar collagens provide the support and tensile strength that give the extracellular matrix its structural integrity.The third day after the lesion to the peritoneum, MCs cover the peritoneal macrophages present in the damaged area and proliferate during the following days, forming multiple cell islets.The confluence of these islets leads to the restoration of the mesothelium (Figure 1) which, as previously mentioned, represents the protective cover of the peritoneum and eventually the abdominal cavity.The neoperitoneum promotes fibrinolysis through the release of tissue-type (tPA) and urokinase-type (uPA) plasminogen activator (Figure 1), together with the inhibition of cell-cell and cell-tissue interactions through the release of hyaluronic acid from the MCs [53].In this intricate and time-organized process, any imbalance or mismatch in the healing events or in the function of the cells involved due to the presence or degradation of the biomaterial could produce unexpected responses of the host tissue that could result in clinical complications.
Peritoneal Adhesions
Adhesiogenesis is the most common cause of long-term complications observed after abdominopelvic surgery [54], leading to serious consequences such as bowel obstruction, or chronic abdominal pain or infertility in women undergoing a gynaecological procedure [55,56].In fact, 80-90% of patients develop adhesions after intraabdominal surgery [54,57], especially after surgical mesh implantation.Adhesions are responsible for the majority of bowel obstructions in the Western world [58].For these reasons, postoperative adhesions remain one of the most challenging issues in surgical practice [59][60][61].
Adhesions are pathologic bands connecting adjacent structures [59].Under normal conditions, the blood clot and the fibrinous connections formed after trauma to the peritoneal interface are lysed within a few days by fibrinolytic substances, resulting in the repair of the damaged area [32].Inflammation at the site of injury can inhibit or delay this fibrinolytic activity through the release of plasminogen activator inhibitors (PAI-1 and PAI-2), leading to persistent fibrin deposits that become an insoluble network on which cells can migrate and proliferate [32,52] (Figure 1).This situation produces permanent connections of fibrous tissue between two previously unrelated surfaces [59,62], giving rise to adverse complications of varying severity [56].Different types of adhesions have been observed, leading to different classifications [63][64][65][66][67].A correlation between the macroscopic and/or microscopic characteristics-such as the resistance to traction, thickness, tissue composition or the degree of the vascularization of the adhesion-and the severity and clinical significance of adhesions can be established.Thus, loose adhesions, usually corresponding to an adipose or fibrinous content, are poorly vascularized, easily dissected, and do not lead to very serious complications.On the contrary, a fibrotic phenotype corresponding to firm-vascularized and difficult to dissect-or integrated adhesions that are highly vascularized and require sharp dissection, occasionally produce serosal damage of the organ involved, which can produce incarceration of intraabdominal organs and eventual bowel obstruction and enterocutaneous fistulae.Thus, the extent and clinical severity of the adhesions formed after the placement of a surgical mesh into the abdominal wall are highly influenced by the performance of the surgical procedure itself and the degree of peritoneal injury and inflammation that the specific biomaterial triggers.The required features for the most suitable biomaterial in this regard are still to be unequivocally established, while the individual response of the patient seems to play a crucial role.
Available Biomaterials for Abdominal Surgery
The difficulty in finding the proper equilibrium between the intended clinical effect and avoidance of collateral damage has resulted in a significant evolution in the number and types of prosthetic materials available for abdominal wall reconstruction.Currently, nearly 150 options for prosthetic materials with varying composition, weight, cost, and indications for use in the surgical field are available to the general surgeon [68,69], with the ongoing development of new additional meshes [9].An in-depth knowledge of the advantages and disadvantages of the diverse materials currently available is needed when selecting the optimal mesh according to a specific situation.
Permanent Reticular Materials
After the use of high-density polyethylene fiber (Marlex®) as the first synthetic mesh [70], polypropylene (PP) started to be used since it offered a more malleable and heat-resistant option that could be autoclaved [71].Nowadays, PP still constitutes the most employed material in the abdominal location [10] even if other materials such as polyester (PS) were introduced [72].Since these materials usually present a reticular disposition of the filaments (Figure 2), the damage to the peritoneum is a common event that gives rise to high adhesion formation rates.Infection is also a common adverse event in the use of synthetic materials [73].Besides, PP shows shrinkage rates of 30-50% at 4-weeks, which could be responsible for secondary postimplantation folding in cases of poor elasticity and small pores [74].Thus, the use of reticular meshes is discouraged in the intraperitoneal position.While the behavior at the biomaterial/parietal peritoneum interface is satisfactory (proper host tissue integration), several adverse complications can be found at the biomaterial/visceral peritoneum interface.Different modifications such as increasing the pore size (Figure 2) or coating the mesh with a second component have been developed to avoid these complications, with different results.The proper mesothelialization on the visceral side of the biomaterial is crucial since it enables a free of micro-traumas movement of the intraabdominal organs in contact with the mesh.Reticular materials have shown a delay in mesothelial reparation, which favors the appearance and permanence of fibrin deposits that constitute the scaffold for peritoneal adhesions.
Permanent Laminar Materials
Polypropylene and polyester remained the two dominant mesh options until 1985, when expanded polytetrafluorethylene (ePTFE) emerged as an option, with some initial reports of improvement in adhesion formation [75].ePTFE is a laminar microporous material (Figure 2), which induces less damage in the intraabdominal organs and creates less adhesions [76].Mesothelialization of the laminar meshes is much better and faster than in reticular structures [77].
Permanent Laminar Materials
Polypropylene and polyester remained the two dominant mesh options until 1985, when expanded polytetrafluorethylene (ePTFE) emerged as an option, with some initial reports of improvement in adhesion formation [75].ePTFE is a laminar microporous material (Figure 2), which induces less damage in the intraabdominal organs and creates less adhesions [76].Mesothelialization of the laminar meshes is much better and faster than in reticular structures [77].A reduced inflammatory foreign reaction has also been noticed in laminar PTFE compared to PP filaments.Notwithstanding, although smaller pores show an advantage in adhesion prevention, they prevent tissue in-growth and therefore integration into the host tissue [78].Also, higher rates of infection are shown in laminar meshes that can lead to its removal [79].When a reticular prosthesis composed of ePTFE suture thread is implanted, the adhesion incidence significantly increases compared to a laminar ePTFE [80].This indicates that it is the spatial structure of a biomaterial that modulates the behavior at the peritoneal interface, and that the composition of the material has a lower influence.The influence of structural features has also shown to be crucial on mesh mechanical behavior in relation to the abdominal wall biomechanics [10].Different modifications have been included in PTFE meshes to improve tissue ingrowth, giving rise to products like MycroMesh®, DualMesh®or MotifMESH TM [81].It is difficult to make any definitive statements about the clinical effectiveness of these meshes since clinical trials are not performed under identical conditions [82] and have shown very disparate results regarding adhesion formation [83,84].
Composites
Since reticular meshes offer proper host tissue integration that cannot be reached by laminar materials and laminar materials confer prevention against the adhesion formation frequently found with reticular meshes, composites were developed as the logical step in the evolution of materials to be used in abdominal wall hernia repair.Composites consist of the combination of two different components linked together whether by suturing, heat-sealing, vacuum pressing or polymer adhesion.They include a reticular mesh facing the abdominal wall with the aim to integrate into and reinforce the abdominal tissue.The second component is a laminar material facing the inner cavity that provides a smooth surface and avoids damage to the intraabdominal organs, allowing MC colonization to ensure an adequate contact with the visceral peritoneum.Thus, they acquire a bi-or multi-layered configuration that requires a careful handling to obtain the proper implantation of the device.While the reticular component on the parietal side is usually based on a permanent synthetic material, the layer facing the visceral peritoneum can take the form of a physical or chemical barrier [85].Physical barriers consist of a nondegradable material, while chemical barriers are based on resorbable components or chemical solutions.In both cases, the laminar barrier must induce a minimal inflammatory response, allow a proper mesothelialization, and enhance neoperitoneal formation.The presence of a neoperitoneum in the visceral side of the mesh prevents the contact between the foreign material and adjacent organs and hence, avoids adhesiogenesis.Some of these composites include added components as adhesion barriers or antimicrobial layers from a synthetic or biological origin.Among composite meshes with physical barriers, the combination of PP with ePTFE (Composix TM ) or PP with polyurethane (Combimesh Plus) can be found.Some of the composites containing chemical barriers include the following combinations: PP with omega-3 fatty acids (C-Qur); PP with polyglycolic acid and hydrogel (Ventralight TM ); PP with a film made of collagen, polyethylene glycol and glycerol (Parietene TM Composite); PP with an absorbable barrier of polydioxanone and oxidized regenerated cellulose (Proceed®); PP with sodium hyaluronate and carboxymethylcellulose (Seprafilm®); PP and polydioxanone fibres with an absorbable poliglecaprone 25 film (Physiomesh TM ); PS with a type I collagen, polyethylene glycol and glycerol layer (Parietex TM Composite) (Figure 3); or a fully resorbable poly-4-hydroxybutyrate (P4HB) mesh combined with a hydrogel barrier (Phasix TM ST Mesh) (Figure 3) [82,[86][87][88], among others.In clinical studies, composite devices have been associated with lower infection, lower recurrence rates and comparable hospital stays [78].However, the use of PTFE alone has shown better results in relation to visceral peritoneum than these composites [53].Moreover, there is evidence that most of the composites prevent adhesion formation just in the short term and that the effect is diminished after 30 days [86].The separation of the layers integrating the composite or adhesion to the bowels are also undesired events observed with these devices [89].Despite some possible complications after the use of composites, these materials have shown an appropriate behavior at different interfaces.Adhesion formation is minimal and usually restricted to the mesh margins.An important finding is that, in the event that adhesiogenesis occurs after a composite implantation, adhesions tenacity is lower, with a tendency to the loose type [82,90,91].Loose adhesions pose less serious complications than firm or integrated adhesions since the movement of adhered organs is not so restricted.Furthermore, when a chemical barrier is employed, the sequential absorption of this layer could theoretically provoke the release of the tissue adhered to it while reducing the presence of foreign residues into the host.
The combination of a permanent synthetic mesh and a biological graft -defined as hybrid mesh in the sense of bringing together materials from different nature-has also been considered, producing a device called Zenapro TM .It consists of a large pore, lightweight PP mesh sandwiched between layers of extracellular matrix of porcine small intestinal submucosa (SIS).A multicenter study has been recently published [92], in which acceptable short-term outcomes and recurrence rates for Zenapro in low and medium-risk patients with clean wounds out to 12 months are shown.However, further clinical trials to determine long-term outcomes and complications with these devices [9,92] as well as to elucidate the performance at the peritoneal level are needed.In summary, composites represent a valid solution for intraperitoneal implantation, since they can provide proper tissue integration, adequate performance at the peritoneal level and good postimplantation mechanical resistance.However, further clinical trials to determine long-term outcomes and complications with these devices [9,92] as well as to elucidate the performance at the peritoneal level are needed.In summary, composites represent a valid solution for intraperitoneal implantation, since they can provide proper tissue integration, adequate performance at the peritoneal level and good postimplantation mechanical resistance.
Absorbable Materials
Absorbable materials, also known as biosynthetic or bioabsorbable, like polyglactin 910 (Vicryl®), polyglycolic acid (Dexon TM ), polyglycolic acid: Trimethylene carbonate (Bio A®) or a copolymer of glycolide lactide and trimethylene carbonate (TIGR®) (Figure 4) [93] were introduced based on the idea that full reabsorption of the material into the patients´tissue would leave no foreign material behind.When the absorbable material is introduced as a barrier, separation is achieved between the implant and viscera until the mesh becomes covered by a neoperitoneum that prevents adhesion formation [94].These devices are supposed to act as scaffolds providing an environment for tissue in-growth and the repopulation of host cells [95] under a limited inflammatory foreign body reaction.This should diminish adhesion formation.However, some studies [96] have demonstrated that the interposition of a resorbable mesh between a PP mesh and the abdominal viscera did not reduce adhesion formation but elicited a more evident early inflammatory response.One of the major drawbacks of these materials is, in addition, the lack of long-term tensile strength that can end in recurrence [97].For this reason, they have been indicated just for temporary use [10].
Hybrid Meshes
Hybrid meshes also combine different components but follow a different strategy to composites.In these meshes, the term hybrid highlights that filaments of different composition are knitted or woven together to produce a single monolayer mesh structure, or that a second element is introduced as a coating over the reticular mesh.The latter differ from the layered coated meshes in that the coating element surrounds the polymer fibers while maintaining the original reticular structure of the mesh, which does not cover the mesh pores.Hybrid meshes, despite displaying a reticular structure, include highly inert materials in the visceral side-such as polyvinylidene fluoride (PVDF) [10] in the case of DynaMesh®-or around the filaments-such as titanium, in the case of TiMESH®-that induce very low inflammatory response and have poor adhesiogenic potential.They can also include an absorbable material in thread form knitted together with a synthetic reticular permanent mesh [98].However, these meshes have not either showed an acceptable performance regarding adhesion formation [99][100][101][102] since the reticular/protruding profile of the mesh provokes peritoneal damage even when an inert material is employed.The injury to the peritoneum is the event that triggers the coagulation cascade and the genesis of adhesions in a case of persistent inflammation.
Hybrid Meshes
Hybrid meshes also combine different components but follow a different strategy to composites.In these meshes, the term hybrid highlights that filaments of different composition are knitted or woven together to produce a single monolayer mesh structure, or that a second element is introduced as a coating over the reticular mesh.The latter differ from the layered coated meshes in that the coating element surrounds the polymer fibers while maintaining the original reticular structure of the mesh, which does not cover the mesh pores.Hybrid meshes, despite displaying a reticular structure, include highly inert materials in the visceral side-such as polyvinylidene fluoride (PVDF) [10] in the case of DynaMesh®-or around the filaments-such as titanium, in the case of TiMESH®-that induce very low inflammatory response and have poor adhesiogenic potential.They can also include an absorbable material in thread form knitted together with a synthetic reticular permanent mesh [98].However, these meshes have not either showed an acceptable performance regarding adhesion formation [99][100][101][102] since the reticular/protruding profile of the mesh provokes peritoneal damage even when an inert material is employed.The injury to the peritoneum is the event that triggers the coagulation cascade and the genesis of adhesions in a case of persistent inflammation.
Biological Meshes
Biological meshes-usually referred to as grafts or biomeshes-consist of materials derived from animal (xenograft) tissue like Surgisis® [103], Permacol TM [104,105] CollaMend TM [106], Tutomesh®and Strattice® [12,107] or human (allograft) tissue like Alloderm TM [87].The first tissue-based implant composed of porcine intestinal submucosa for use in abdominal wall reconstruction (Surgisis®) was approved in 1998 [103].These decellularized matrices allow soft tissue to infiltrate the mesh, which eventually becomes integrated into the body by a process of remodeling.Unfortunately, this process also appears to lead to a rapid reduction in their mechanical strength, which leads to a high degree of bulging and recurrence, especially with allografts [108].Due to this, concerns regarding this issue have restricted their use to infected environments.The use of some chemically cross-linked meshes like Permacol TM (a porcine-derived acellular dermal sheet) contributed to an increase in graft stability and durability that led to lower hernia recurrence rates while still being incorporated successfully [12,104,105].However, some authors [9] concluded that cross-linking does not significantly impact the tensile strength or stiffness of the graft-tissue composites in the long term.While cross-linking these materials slows down the material absorption [109], thus increasing the mesh stability, this process can also result in a similar foreign body reaction as seen in permanent synthetic meshes [110].Thereby, the desired effect of the so-called biocompatibility would be reduced.
Although the general consensus has traditionally advised the use of permanent synthetic materials in clean non-infected fields and the use of biologic materials in infected environments, some lightweight, macroporous permanent synthetic meshes have shown good outcomes in contaminated fields [111].Thus, further evidence supporting the superiority of biological meshes in contaminated fields is still lacking [13,112,113], with synthetic meshes proven to be superior to biologic reinforcement in some patient populations [9].For this reason, even an antibacterial-coated biological graft has been developed for its use in contaminated fields (XenMatrix™ AB Surgical Graft).These facts, together with the possibility of an immunologic response to the mesh [88], high rate of seroma formation and the higher cost for biological than for synthetic materials [113,114], have led to a reduced use of this kind of meshes.Nevertheless, these biomeshes offer some advantages, such as a convenient behavior regarding the peritoneal interface.Collagen-based meshes have shown low rates of adhesion formation, similar or even lower (depending on crosslinking of the matrices) to those observed for PTFE [115].
Cell-Coated Meshes
The paramount importance of the interaction between the surgical mesh and the peritoneal membrane in the performance of the implant, together with the fact that the time for remesothelialization of the damaged area and the mesh surface is critical to avoid adhesion formation, supports the idea that coating the mesh with autologous cells is a very promising alternative.Both synthetic and biological meshes (e.g., Parietex TM , TIGR®or Strattice TM ) have been coated with different cell populations such as fibroblasts or mesenchymal stem cells [116,117].These studies focused mainly on tissue integration and found that cell-coating had a positive effect on integration with improvements in collagen deposition and ingrowth, particularly in the subcutaneous position [116].Mesenchymal stem cells reduced mesh-induced inflammation and foreign body reaction [117], blunting the immunogenic effect.Regarding adhesions, Dolce et al. [118] showed that coating Vicryl®(polyglactin) with mesenchymal stem cells was successful in reducing the incidence of this postoperative complication, along with reduced inflammation.Also bone marrow-derived mesenchymal stem cells have shown a positive effect in reducing adhesions [119].Recently, Cheng et al. [120] demonstrated that coating a PP mesh with adipose-derived stem cells reduced the tissue adhesion, fibrosis degree and the occurrence rate of mesh-related complications.
Despite the promising results shown by cell-coated meshes in abdominal hernia repair, the technical difficulties and added workload that the attachment of autologous stem cells to a scaffold material implies prior to implantation, and the possibility of cells detaching prematurely must be considered.Additionally, these devices must pass strict regulatory restrictions, which can make their use in clinics is not so widespread [121].This results in a lower use of cell-coated meshes in abdominal hernia repair.
Conclusions
The evolution of the biomaterials for abdominal wall repair has followed a logical process in which the modifications included have tried to sort out the inherent drawbacks of the current materials being used at the time.However, when comparing the performance of different commercially available meshes, the influence of just one parameter (pore size, filament distribution, composition, e.g.,) is difficult to assess since more than just one single modification is usually included in new devices and differences in the mesh structure and the knitting pattern between the meshes compared usually exist.
Furthermore, the experience has shown that the reasoned design of a mesh from a theoretical point of view not always offers the expected outcomes when experimentally tested, showing even worse results in some cases than those found for the devices being previously employed.This fact underscores the intricacy of the reparative/regenerative process in the abdominal cavity, which requires full attention and a deep understanding to obtain satisfactory results.For this reason, experimental animal models have become vital in the evaluation of abdominal meshes for hernia repair.They allow the comparison between different meshes implanted with the same surgical technique and exactly in the same anatomical position, providing essential information about the most important parameters that determine the performance of an abdominal mesh such as the degree of integration into the host tissue, the recurrence rate, proneness to encapsulation, susceptibility to infection, capacity of remesothelialization or the adhesiogenic potential.
The surgical technique itself also represents a key point in the success of an abdominal implant, which makes necessary the use of easy-handling materials and experienced personnel that produce as little damage as possible to the peritoneal interface.Despite the major progress in the field of biomaterials for abdominal wall repair, there is no ideal mesh that can perform well in every situation.Nevertheless, composites have shown positive outcomes at every interface of the implant.The combination of two specifically oriented materials-one of them designed to offer proper host tissue infiltration, and the other one providing optimal behavior at the biomaterial/visceral peritoneum interface-are composites that represent a valuable solution that can be placed at any tissue interface.While providing an appropriate tissue integration and tensile strength in abdominal wall repair, composites also avoid the most important adverse effect in intraperitoneal mesh hernia repair, the adhesion formation.
Figure 1 .
Figure 1.Diagram showing the two possible pathways after peritoneal injury during intraperitoneal onlay mesh repair.The presence of a mesh into the abdominal cavity produces an inflammatory response and the appearance of a fibrinous exudate in the damaged areas.Under normal circumstances (left panel), the fibrinolytic system degrades fibrin and a neoperitoneum is formed, leading to tissue repair and mesh integration.If fibrinolysis is inhibited or delayed (right panel), fibrin deposits persist and permanent tissue connections (adhesions) are established between opposing surfaces.ECM, extracellular matrix.
Figure 1 .
Figure 1.Diagram showing the two possible pathways after peritoneal injury during intraperitoneal onlay mesh repair.The presence of a mesh into the abdominal cavity produces an inflammatory response and the appearance of a fibrinous exudate in the damaged areas.Under normal circumstances (left panel), the fibrinolytic system degrades fibrin and a neoperitoneum is formed, leading to tissue repair and mesh integration.If fibrinolysis is inhibited or delayed (right panel), fibrin deposits persist and permanent tissue connections (adhesions) are established between opposing surfaces.ECM, extracellular matrix.
Figure 1 .
Figure 1.Permanent synthetic meshes.Reticular PP meshes with different pore sizes (Surgipro TM , Prolene® and Optilene® Elastic) and the laminar expanded polytetrafluorethylene (ePTFE) mesh (Preclude®) are shown.Macroscopic appearance is shown in the upper images.Scanning electron micrographs show a magnified view of the meshes structure in the lower images (20x magnification).
Figure 2 .
Figure 2. Permanent synthetic meshes.Reticular PP meshes with different pore sizes (Surgipro TM , Prolene®and Optilene®Elastic) and the laminar expanded polytetrafluorethylene (ePTFE) mesh (Preclude®) are shown.Macroscopic appearance is shown in the upper images.Scanning electron micrographs show a magnified view of the meshes structure in the lower images (20x magnification).
Figure 3 .
Figure 3. Composites.Scanning electron microscopy images of two different composites containing chemical barriers (Parietex TM composite and Phasix TM ST Mesh) are shown.The reticular mesh facing the abdominal wall is shown in A and D (20x magnification).A lateral view (SEM) of composites is shown in B and E (20x magnification) and C and F (50x magnification).Polyester (PS), Collagen, polyethylene glycol and glycerol layer (*), Poly-4-hydroxybutyrate mesh (P4HB), Hydrogel barrier (H).
Figure 3 .
Figure 3. Composites.Scanning electron microscopy images of two different composites containing chemical barriers (Parietex TM composite and Phasix TM ST Mesh) are shown.The reticular mesh facing the abdominal wall is shown in A and D (20x magnification).A lateral view (SEM) of composites is shown in B and E (20x magnification) and C and F (50x magnification).Polyester (PS), Collagen, polyethylene glycol and glycerol layer (*), Poly-4-hydroxybutyrate mesh (P4HB), Hydrogel barrier (H).
Figure 2 .
Figure 2. Top images: Macroscopic appearance of a matrix long-term absorbable mesh (TIGR®), hybrid meshes (DynaMesh® and TiMESH®), and a biological mesh (Surgisis®).Bottom images: Scanning electron microscopy images showing a magnified view of the structure of the meshes (20x magnification).
Figure 4 .
Figure 4. Top images: Macroscopic appearance of a matrix long-term absorbable mesh (TIGR®), hybrid meshes (DynaMesh®and TiMESH®), and a biological mesh (Surgisis®).Bottom images: Scanning electron microscopy images showing a magnified view of the structure of the meshes (20x magnification). | 8,842.4 | 2019-02-16T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Finite Element Implementation of Failure and Damage Simulation in Composite Plates
© 2012 Žmindak and Dudinský, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Finite Element Implementation of Failure and Damage Simulation in Composite Plates
Introduction
Composite materials are now common engineering materials used in a wide range of applications. They play an important role in the aviation, aerospace and automotive industry, and are also used in the construction of ships, submarines, nuclear and chemical facilities, etc.
The meaning of the word damage is quite broad in everyday life. In continuum mechanics the term damage is referred to as the reduction of the internal integrity of the material due to the generation, spreading and merging of small cracks, cavities and similar defects. Damage is called elastic, if the material deforms only elastically (in macroscopic level) before the occurrence of damage, as well as during its evolution. This damage model can be used if the ability of the material to deform plastically is low. Fiber-reinforced polymer matrix composites can be considered as such materials.
The use of composite materials in the design of constructions is increasing in traditional structures such as for example development of airplanes or in the automotive industry. Recently this kind of materials is used in development of special technique and rotating systems such as propellers, compressor turbine blades etc. Other applications are in electronics, electrochemical industry, environmental and biomedical engineering (Chung, 2003).
The costs for designing of composite structures is possible partially eliminate by numerical simulation of solving problem. In this case the simulation is not accepted as universal tool for analyzing of systems behaviour but it is an effective alternative to processes of experimental sciences. Simulations support development of new theories and suggestion of new experiments for testing these theories. Experiments are necessary for obtaining of input data into simulation programs and for verification of numerical programs and models.
The theory of crack growth may be developed by using one of two approaches. First, the Griffith energetic (or global) approach introduces the concept of energy release rate (ERR) G as the energy available for fracture on one hand, and the critical surface energy Gr as the energy necessary for fracture on the other hand. Alternatively, the Irwin (local) approach is based on the stress intensity factor concept, which represents the energy stress field in the neighborhood of the crack tip. These two approaches are equivalent and, therefore, the energy criterion may be rewritten in terms of stress intensity factors.
Microcracking in a material is almost always associated with changes in mechanical behavior of the material. The problem of microcracking in fiber-reinforced composites is complicated due to the multitude of different microcracking modes which may initiate and evolve independently or simultaneously. Continuum Damage Mechanics (CDM) considers damaged materials as a continuum, in spite of heterogenity, micro-cavities, and microdefects and is based on expressing of stiffness reduction caused by damage, by establishing effective damage parameters which represent cumulative degradation of material. There are basically two categories of CDM models used for estimating the constitutive behavior of composite materials containing microcracks -phenomenological models and micromechanics models.
The phenomenological CDM models employ scalar, second order or fourth order tensors using mathematically and thermodynamically consistent formulations of damage mechanics. Damage parameters are identified through macroscopic experiments and in general, they do not explicitly account for damage mechanism in the microstructure. On the other hand the micromechanics-based approaches conduct micromechanical analysis of representative volume element (RVE) with subsequent homogenization to predict evolving material damage behavior (Mishnaevsky, 2007). Most damage models do not account for the evolution of damage or the effect of loading history (Jain & Ghosh, 2009). Significant error can consequently accrue in the solution of problems, especially those that involve nonproportional loading. Some of these homogenization studies have overcome this shortcoming through the introduction of simultaneous RVE-based microscopic and macroscopic analysis in each load step. However, such approaches can be computationally very expensive since detailed micro-mechanical analyses need to be conducted in each load step at every integration point in elements of the macroscopic structure. Jain & Ghosh (2009) have developed a 3D homogenization-based continuum damage mechanics (HCDM) model for fiber-reinforced composites undergoing micro-mechanical damage. Micromechanical damage in the RVE is explicitly incorporated in the form of fibermatrix interfacial debonding. The model uses the evolving principal damage coordinate system as its reference in order to represent the anisotropic coefficients, which is necessary for retaining accuracy with nonproportional loading. The HCDM model parameters are calibrated by using homogenized micromechanical solutions for the RVE for a few strain histories.
There are many works written about damage of composite plates and many models for various types of damage, plates or loading have been developed. Shu (2006) presented generalized model for laminated composite plates with interfacial damage. This model deals with three kinds of interfacial debonding conditions: perfect bonding, weak bonding and delamination. Iannucci & Ankersen (2006) described unconventional energy based composite damage model for woven and unidirectional composite materials. This damage model has been implemented into FE codes for shell elements, with regard to tensile, compressive and shear damage failure modes. Riccio & Pietropaoli (2008) dealt with modeling damage propagation in composite plates with embedded delamination under compressive load. The influence of different failure mechanisms on the compressive behavior of delaminated composite plates was assessed, by comparing numerical results obtained with models characterized by different degrees of complexity. Tiberkak et al. (2008) studied damage prediction in composite plates subjected to low velocity impact. Fiber-reinforced composite plates subjected to low velocity impact were studied by use of finite element analysis where Mindlin's plate theory and 9-node Lagrangian element were considered. Clegg et al. (2006) worked out interesting study of hypervelocity impact damage prediction in composites. This study reports on the development of an extended orthotropic continuum material model and associated material characterization techniques for the simulation and validation of impacts onto fiber-reinforced composite materials. The model allows to predict the extent of damage and residual strength of the fiber-reinforced composite material after impact.
Many studies of an effect of various aspects of damage process on behavior of composite plates can be found in the literature (e.g. Gayathri et al. , 2010). Many authors have been dealing with problematic of damage of composite plated under cyclic loading or problematic of impact fatigue damage (e.g. Azouaoui et al. , 2010). Composite materials are becoming more and more used for important structural elements and structures, so the problematic of fatigue damage of composites is becoming more and more actual. Numerical implementation of damage is not simple. Finite element method (FEM) is the most utilized method for modeling damage. Fast Multipole BEM method or various meshless methods are also establishing at the present time.
Firstly, the goal of this chapter is to present the numerical results of the delamination analysis of two laminae with different thickness with two orthotropic material properties and subjected to a pair of opposed forces. For this goal we used commercial FEM software ANSYS and the mode I, II, and III components of energy release rate (ERR) were calculated. Secondly, the goal is to present the numerical results of elastic damage of thin composite plates. The analysis was performed by user own software, created in MATLAB programming language. This software can perform numerical analysis of elastic damage using FEM layered plate finite elements based on the Kirchhoff plate theory.
This chapter is organized in three sections: Second section is focused on failure modeling in laminates by using standard shear deformable elements, whereas interface elements were used for the interface model. The delamination propagation is controlled by the critical ERR. In third section, in first, a general description of damage is provided, then damage model used is examined. Finally, some numerical results obtained for damage of plate are presented.
Theoretical background of failure modeling in laminates
The mechanisms that lead to failure in composite materials are not yet fully understood, especially for matrix or fiber compression. Strength-based failure criteria are commonly used with the FEM to predict failure events in composite structures. Numerous continuumbased criteria have been derived to relate internal stresses and experimental measures of material strength to the onset of failure (Dávila et al., 2005). In Fig. 1 a laminate contains a single in-plane delamination crack of area D Ω with a smooth front D Ω . The laminate thickness is denoted by h0. The x-y plane is taken to be the mid-plane of the laminate, and the z-axis is taken positive downwards from the mid-plane.
Plate finite elements for sublaminate modeling
Each sublaminate is represented by an assembly of first order shear deformable (FSDT) plate elements bonded by zero-thickness interfaces in the transverse direction as shown in Fig. 2. The delamination plane separates the delaminated structure into two sublaminates of thickness h1, h2 and each sublaminate consist the upper nu plates and the lower nl plates. Each plate element is composed from one or few physical fiber-reinforced plies with their material axes arbitrarily oriented. Lagrangian multipliers through constraint equations (CE) are used for enforcing adhesion between the plates inside each sublaminate. Accordingly, the displacements in the z-th plate element, in terms of a global reference system located at the laminate mid-surface, are expressed by (e.g. Carrera 2002; Reddy 1995) At the reference surfaces, the membrane strain vector i ε , the curvature i κ , and the transverse shear strain γ i , respectively are defined as The constitutive relations between stress resultants and corresponding strains are given in (Reddy & Miravete, 1995;. In these works standard FSDT finite elements available in ANSYS software are used (ANSYS, 2007) to join these elements at the interfaces inside each sublaminate using CE or rigid links characterized by two nodes and three degrees of freedom at each node.
Interface elements for delamination modeling
Delamination is defined as the fracture of the plane separating two plies of a laminated composite structure (Fenske, et al., 2001). This fracture occurs within the thin resin-rich layer that forms between plies during the manufacturing process. Perfect adhesion is assumed in the undelaminated region D Ω Ω , whereas sub-laminates are free to deflect along the delaminated region D Ω but not to penetrate each other. A linear interface model, is introduced along D Ω Ω to enforce adhesion. The constitutive equation of the interface involves two stiffness parameters, kz,kxy, imposing displacement continuity in the thickness and in-plane directions, respectively, by treating them as penalty parameters. The relationship between the components of the traction vector σ acting at the lower surface of the upper sublaminate, σzx, σzy and σzz, in the out-of-plane (z) and in the in-plane (x and y) directions, respectively, and the corresponding components of relative interface displacement vector Δ , Δu, Δv and Δw is expressed as Interface elements are implemented using COMBIN14 element. Relative opening and sliding displacements are evaluated as the difference between displacements at the interface between the lower and the upper sublaminate.
Finite Element Implementation of Failure and Damage Simulation in Composite Plates 137
Contact formulation for damage interface
In order to avoid interpenetration between delaminated sublaminates in the delaminated region D Ω , a unilateral frictionless contact interface can be introduced, characterized by a zero stiffness for opening relative displacements (Δw≥0) and a positive stiffness for closing relative displacements (Δw≤0), then the contact stress zz σ is where kz is the penalty number imposing contact constraint and sign is the signum function.
A very large value for kz restricts sublaminate overlapping and simulates the contact condition. Unilateral contact conditions may be implemented in ANSYS using COMBIN39. This element is a unidirectional element with nonlinear constitutive relationships with appropriate specialization of the nonlinear constitutive law according to (4).
In this work we use the formulation via FEs, related to plate elements, interface elements and Lagrange multipliers. It is worth noting that in commercial FEA packages the Lagrange multipliers are represented by either CE or rigid links, whereas interface elements are implemented by the analyst using a combination of spring elements (COMBIN14) and CE.
Mixed mode analysis
In order to predict crack propagation in laminates for general loading conditions, ERR distributions along the delamination front are needed. Fracture mechanics assumes that delamination propagation is controlled by the critical ERR. Delamination grows on the region of the delamination front where the following condition is satisfied and is of the form where α, β and γ are mixed mode fracture parameters determined by fitting experimental test results.
They are obtained by means of the interface model, using FE code to check whether propagation occurs. Once made a global FEA of the laminate, then the calculation of G(s) along the delamination front reduces to a simple post-computation. The extent of the propagation of the delamination area may be established by releasing the node in which the relation (6) is first satisfied, leading to a modification of the delamination front, which in turn requires another equilibrium solution. It follows the fact that the delamination growth analysis must be accomplished iteratively. For simplicity, only the computation of ERR is described here. The study of the propagation for a 3D planar delamination requires the use of nonlinear incremental numerical computation.
The delaminated laminate is represented using two sublaminates (Fig. 2). In this case, the model is called a two-layer plate model. Multilayer plate model in each sublaminate is necessary to achieve sufficient accuracy when the mode components are needed. Sublaminates are modeled using standard shear deformable elements (SHELL181), whereas interface elements can be used for the interface model. Available interface elements (INTER204) are only compatible with solid elements, therefore interface elements are simulated here by coupling CE with spring elements (COMBIN14). Plate and interface models must be described by the same in-plane mesh.
The FE model of the plates adjacent to the delamination plane in proximity of the delamination front is illustrated in Fig. 3. Interface elements model the undelaminated region D Ω Ω up to the delamination front. The mesh of interface and plate elements must be sufficiently refined in order to capture the high interface stress gradient in the neighborhood of the delamination front, which occurs because high values for interface stiffness must be used to simulate perfect adhesion. The individual ERR at the general node A of the delamination front are calculated using the reactions obtained from spring elements and the relative displacements between the nodes already delaminated and located along the normal direction.
ERRs are computed by using (9), which is a modified version of (8) in order to avoid excessive mesh refining at the delamination front. This leads to the following expressions where z A R is the reaction in the spring element connecting node A in the z-direction,
B B
Δw is the relative z-displacement between the nodes B and B'. These are located immediately ahead of the delamination front along its normal direction passing through A. Similar definitions apply for reactions and relative displacement related to modes II and III. The characteristic mesh sizes in the normal and tangential directions of the delamination front are denoted by n Δ and t Δ . In (9), the same element size is assumed for elements ahead of and behind the delamination front. Value of t Δ /2 must be used in (9) instead of t Δ when the node is placed at a free edge.
In order to simplify the FE modeling procedure, it is possible to introduce spring elements only along the delamination front instead of the entire undelaminated region. The perfect adhesion along the remaining portion of the undelaminated region can be imposed by CE. However, when the delamination propagation must be simulated, it is necessary to introduce interface elements in the whole undelaminated region D Ω Ω .
In the next example, the delamination modeling techniques presented so far are applied to analyze typical 3D delamination problems in laminated plates. The ERR distribution along the delamination front are computed for different laminates and loading conditions.
Finite element modeling and numerical example
One of the most powerful computational methods for structural analysis of composites is the FEM. The starting point would be a "validated" FE model, with a reasonably fine mesh, correct boundary conditions, material properties, etc. (Bathe, 1996). As a minimum requirement, the model is expected to produce stress and strains that have reasonable accuracy to those of the real structure prior to failure initiation. In spite of the great success of the FEM and BEM as effective numerical tools for the solution of boundary-value problems on complex domains, there is still a growing interest in the development of new advanced methods. Many meshless formulations are becoming popular due to their high adaptivity and a low cost to prepare input data for numerical analysis (Guiamatsia et al., 2009).
The results of the delamination analysis of two laminae with different thickness and material are processed in this section. The laminae are fixed on one side and free on the other side. Loads are applied on the free side depending on the analyzed type of delamination.
The upper sublamina has these properties: The upper sublaminate is composed by four plates nu= 4 and the lower by two nl = 2. The plates are meshed by SHELL181 elements. The zone of mesh refinement has these dimensions 5 * 20 mm, it is centered around of the delamination front which is placed in the middle of the laminate. The interface between the sublaminates is modeled without stiffness for opening displacements and with positive stiffness for closing displacements. The interface between sublaminates is modeled by means of CE (Constrain Equation), since it is easier to apply than beam elements and delamination propagation is not solved. The delamination front is created by spring elements COMBIN 14, in each node of the delamination front by three elements. The stiffness of the spring elements binding the laminae is chosen as kz a kxy = 10 8 N/mm 3 . These elements are oriented in different directions, they were created always from a pair of nodes placed on the surface of the lower sublamina. One of the pair nodes is bounded to the upper plate by means of CE and the second one is bounded to the lower plate. ERR is calculated by using deformation along the delamination front.
Model I
At model I the ERR for delamination type I is analyzed. Model I is loaded with opening forces T of magnitude 1 N/mm, which are parallel to z axis, displayed on Fig. 5a. For the calculation of the ERR the equation (9) was used for the type I. As the biggest ERR is in the middle of the model, it is expected that the beginning of the delamination is in the middle of the model. The distribution of ERR through the width of the laminate is displayed on the Fig.5b.
Model II
In this model the delamination type II (sliding type)was simulated. The applied forces are parallel to the x axis, Fig. 6 a). Two types of ERR were analyzed in this model, II G in the x direction and III G in the y direction. ERRs were calculated separately for each direction. The reaction of the spring elements are used for the calculation of II G and the y reactions are used for the calculation of III G . Both distributions are displaying the absolute values of ERR, at both distributions the values of ERR are smaller then the values of GI. The values of GII are in the range of (0.5, 2). 10 -4 and the values of GIII are in the range of (0, 4). 10 -5 (Fig. 7) Model III At this model the delamination type III (tearing type) was analyzed. The geometry of model I was used, with the mesh and its refinement around the delamination front, boundary conditions and the linking between the shell plates, but the direction of the applied forces has changed, Fig.8a). Both types of ERR are analyzed here, II G in the x direction and III G the y direction. The values of ERR are in these ranges: value of GII in the range of ( 0, 14).10 -3 and the value of GIII in the range of (0, 0.02). It is possible that better results could be achieved by increasing of the number of plate elements layers simulating the sublamina. These models can be also modeled by solid elements, but there is greater number of elements needed for accurately simulating of the stress and ERR gradients. Thereby the number of equations and computing time increase.
Continuum damage mechanics
There are many material modeling strategies to predict damage in laminated composites subjected to static or impulsive loads. Broadly, they can be classified as (Jain & Ghosh, 2009): failure criteria approach (Kormaníková, 2011), fracture mechanics approach (based on energy release rates), plasticity or yield surface approach, damage mechanics approach We consider a volume of material free of damage if no cracks or cavities can be observed at the microscopic scale. The opposite state is the fracture of the volume element. Theory of damage describes the phenomena between the virgin state of material and the macroscopic onset of crack (Jain & Ghosh, 2009;Tumino et al., 2007). The volume element must be of sufficiently large size compared to the inhomogenities of the composite material. In Fig. 9 this volume is depicted. One section of this element is related to its normal and to its area S. Due to the presence of defects, an effective area S for resistance can be found. Total area of defects is therefore: The local damage related to the direction n is defined as: For isotropic damage, the dependence on the normal n can be neglected, i.e.
We note that damage D is a scalar assuming values between 0 and 1. For D = 0 the material is undamaged, for 0<D<1 the material is damaged, for D = 1 complete failure occurs. The quantitative evaluation of damage is not a trivial issue, it must be linked to a variable that is able to characterize the phenomenon. We note that several papers can be found in literature where the constitutive equations of the materials are a function of a scalar variable of damage (Barbero, 2008). For the formulation of a general multidimensional damage model it is necessary to generalize the scalar damage variables. It is therefore necessary to define corresponding tensorial damage variables that can be used in general states of deformation and damage (Tumino at al., 2007).
In this part, we focused on presenting the methodology of numerical solving of elastic damage of thin composite plates reinforced by long fibers based on continuum damage mechanics by means of the finite element method.
Damage model used
The model for fiber-reinforced lamina mentioned next was presented by Barbero and de Vivo (Barbero, 2001) and is suitable for fiber -reinforced composite material with polymer matrix. On the lamina level these composites are considered as ideal homogenous and transversely isotropic. All parameters of this model can be easily identified from available experimental data. It is assumed that damage in principal directions is identical with the principal material directions throughout the damage process. Therefore the evolution of damage is solved in the lamina coordinate system. The model predicts the evolution of damage and its effect on stiffness and subsequent redistribution of stress.
Damage surface and damage potential
where the thermodynamic forces Y1, Y2 and Y3 can be calculated by means of relations where stresses and components of matrix S are defined in the lamina coordinate system. Matrix S gives the strain-stress relations in the effective configuration (Barbero, 2007). Equation (13) and (14) can be written for different simple stress states: tension and compression in fiber direction, tension in transverse direction, in-plane shear. Tensors J and H can be derived in terms of material strength values.
Hardening parameters
In the present model for damage isotropic hardening is considered and the hardening function was used in the form of The hardening parameters γ0, c1 and c2 are determined by approximating the experimental stress-strain curves for in-plane shear loading. If this curve is not available, we can reconstruct it using function 6 12 6 6 6 where F6 is the in-plane shear strength, G12 is the in-plane initial elasticity modulus and γ6 is the in-plane shear strain (in the lamina coordinate system). This function represents experimental data very well.
Critical damage level
The reaching of critical damage level is dependent on stresses in points of lamina. If in a point of lamina only normal stresses in the fiber direction or transverse direction (i.e., normal stress in lamina coordinate system) occur, then simply comparing the values of damaged variables with critical values of damage variables for given material at this point is sufficient. The damage has reached critical level if at least one of the values D1, D2 in the point of lamina is greater or equal to its critical value. If in a given point of lamina also shear stress occurs (in lamina coordinate system), it is additionally necessary to compare the value of the product of (1 -D1) (1 -D2) with ks value for given material. If the value of this product is less or equal to ks value, the damage has reached a critical level.
Implementation of numerical method
The Newton-Raphson method was used for solving the system of nonlinear equations. Evolution of damage has been solved using return-mapping algorithm described in (Neto, 2008). The input values are strains and strain increments in lamina coordinate system, state variables D1, D2, and δ in integration point from the start of last performed iteration, C matrix (gives the stress-strain relations in the effective configuration (Barbero, 2007) and damage parameters related to damage model. The output variables are D1, D2, and δ, stresses and strains in lamina coordinate system in this integration point at the end of the last performed iteration. Another output is constitutive damage matrix CED in lamina coordinate system, which reflects the effect of damage on the behavior of structure. Flowchart of this algorithm is described in Fig. 10. Figure 10. Flowchart of return-mapping algorithm used for solving damage evolution in particular integration points
Numerical example
One problem for two different materials was simulated in order to study the damage of laminated fiber-reinforced composite plates. The composites are reinforced by carbon fibers embedded in epoxy matrix. The simply supported composite plate with laminate stacking sequence of [0, 45, -45, 90]S with dimensions of 125×125×2.5 mm was loaded by transverse force F = -4000 N in the middle of the plate. Own program created in MATLAB language was used for this analysis. Four-node layered plate finite elements based on Kirchhoff (classical) plate theory were used.
Material properties, damage parameters and hardening parameters and critical damage values are given in Table 1 -Table 3. The parameters J33 and H3 are equal to zero. The plate model was divided into 8×8 elements and was analyzed in fifty load substeps. The linear static analysis shows that the largest magnitudes of stress are in parallel direction with fibers and transverse to fibers and they occur in the outer layers in the middle of the plate. The largest magnitudes of shear stress occur in the outer layers in the corner nodes. The largest magnitudes of stress in layers 2, 3, 6 and 7 occur in the center of the plate. According to the results of linear static analysis it can be expected that damage will reach the critical level in some of the above points. Fig. 11 shows the analysis results of elastic damage of the plate made from material M30/949. Fig. 11a shows the evolution of individual stress components in dependence on strains (both in lamina coordinate system) in the midsurface of layer 1 (first layer from the bottom) in integration point (IP) 1 (in element 1, nearest to the corner). Fig. 11b shows the evolution of individual stress components in the midsurface of layer 2 in IP 872 (in element 28, nearest to the center of the plate). Fig. 12 plots described damage variables evolution in these IPs. The analysis results show that reaching the critical level is caused not by normal stresses in the lamina coordinate system, but by shear stress (in the lamina coordinate system). The analysis results of the plate made from material M30/949 show that for given load the critical level of damage was reached in layers 2 and 7 in the center of the plate and its vicinity. In IPs that are closest to the center of the plate in these layers, the critical level of damage was reached between 13th and 14th load substep. However, it is not postulated that used damage model predicts failure: it only predicts damage evolution and its effect on stiffness and consequent stress redistribution (Barbero & de Vivo, 2001). In some cases, failure can occur before the critical level of damage is reached. For plate made from material M30/949 load F=-1096 N is already critical. Fig. 13 shows the analysis results of elastic damage of the plate made from material M40/948. Fig. 13a shows the evolution of individual stress components in dependence on strains (both in lamina coordinate system) in the midsurface of layer 1 in IP 868 (in element 28, nearest to the center of the plate). Fig. 13b shows the evolution of individual stress components in the midsurface of layer 2 in IP 872 (in element 28, nearest to the center of the plate). The results show that reaching the critical level of damage will be also caused by shear stress in lamina coordinate system. However, the critical damage level was reached in layers 1 and 7 in the center of plate and its vicinity. The critical level of damage was reached between 12th and 13th load substep in the nearest IPs. The critical level of damage would be also reached in the center of the plate and its vicinity in layers 2 and 7 (in the nearest IPs it would be reached between 16th and 17th load substep) and also in layers 3 and 6 (in the nearest IPs it would be reached between 27th and 28th load substep). Fig. 14 shows damage variables evolution in IP 868 and IP 872. For plate made from material M40/948 load F = -990 N is already critical.
Conclusion
The methodology of delamination calculation in laminated plates was applied in this chapter. The analyses shows that if mixed mode conditions are involved, a double plate model is suitable to accurately capture the mode decomposition in region near the midpoint of delamination front. The solution converges quickly because a small number of plates is needed to obtain a reasonable approximation Damage model presented in this chapter has been utilized in this solution. This damage model is suitable for elastic damage of fiber-reinforced composite materials with polymer matrix. The postulated damage surface reduces to the Tsai-Wu surface in stress space. The problem of elastic damage is considered as material nonlinearity, so we get system of nonlinear equations. The Newton-Raphson method has been used for solving this system of nonlinear equations. Evolution of damage has been solved using the return-mapping algorithm. Flowchart of this algorithm was also presented. Numerical example of one problem for two different materials was presented next. Own program created in MATLAB language was used for this analysis. Four-node layered plate finite elements based on Kirchhoff (classical) plate theory were used. The analysis results show that change of material as well as the presence and values of shear stress have significant influence on the evolution of damage as well as on location of critical damage and load at which the critical level of damage will be reached. Critical damage level has not necessary to be reached in places with maximum magnitude of equivalent stress, but can be reached in other places.
Milan Žmindák and Martin Dudinský
University of Žilina, Slovakia | 7,477.4 | 2012-08-22T00:00:00.000 | [
"Economics"
] |
Valid lower bound for all estimators in quantum parameter estimation
The widely used quantum Cramer-Rao bound (QCRB) sets a lower bound for the mean square error of unbiased estimators in quantum parameter estimation, however, in general QCRB is only tight in the asymptotical limit. With a limited number of measurements biased estimators can have a far better performance for which QCRB cannot calibrate. Here we introduce a valid lower bound for all estimators, either biased or unbiased, which can serve as standard of merit for all quantum parameter estimations.
Introduction
An important task in quantum metrology is to find out the ultimate achievable precision limit and design schemes to attain it. This turns out to be a hard task, and one often has to resort to various lower bounds to gauge the performance of heuristic approaches, such as the quantum Cramér-Rao bound [1][2][3][4], the quantum Ziv-Zakai bound [5], quantum measurement bounds [6] and Weiss-Weinstein family of error bounds [7]. Among these bounds the quantum Cramér-Rao bound (QCRB) is the most widely used lower bound for unbiased estimators . However, with limited number of measurements many practical estimators are usually biased. For example the minimum mean square error (MMSE) estimator, which is given by the posterior meanx(y) = p(x|y)xdx [34], is in general biased in the finite regime, here x denotes the parameter and y denotes measurement results, the posterior probability distribution p(x|y) can be obtained by the Bayes' rule p(x|y) = p(y|x)p(x) p(y|x)p(x)dx , with p(x) as the prior distribution of x and p(y|x) = T r(ρ x M y ) given by the Born's rule. The MMSE estimator provides the minimum mean square error The performance of this estimator, however, cannot be calibrated by quantum Cramér-Rao bound in the finite regime as with limited number of measurements it is usually biased. This is also the case for many other estimators including the commonly used maximum likelihood estimator [27][28][29][30].
In this article we derive an optimal biased bound (OBB) which sets a valid lower bound for all estimators in quantum parameter estimation, either biased or unbiased. This bound works for arbitrary number of measurements, thus can be used to gauge the performances of all estimators in quantum parameter estimation. And the difference between this bound and the quantum Cramér-Rao bound also provides a way to gauge when quantum Cramér-Rao bound can be safely used, i.e., it provides a way to gauge the number of measurements needed for entering the asymptotical regime that the quantum Cramér-Rao bound works. The classical optimal biased bound has been used in classical signal processing [35,36].
Main Result
Based on different assumptions there exists different ways of deriving lower bounds, for example some Bayesian quantum Cramér-Rao bound, which based on quantum type of Van Tree inequality, has been obtained [31][32][33]. These bounds require the differentiability of the prior distribution at the boundary of the support region, thus may not apply, for example, to the uniform prior distribution. The optimal biased bound does not require the differentiability of the prior distribution at the boundary, thus can be applied more broadly. For the completeness, we will first follow the treatment of Helstrom [1] to derive a lower bound for estimators with a fixed bias, from which we then derive a valid lower bound for all estimators by optimizing the bias.
We consider the general case of estimating a function f (x) for the interested parameter x with a given prior distribution. To make any estimation, one needs to first perform some measurements on the state ρ x , which are generally described by a set of Positive Operator Valued Measurements (POVM), denoted as {Π y }. The measurements have probabilistic outcomes y with probability p(y|x) = Tr(Π y ρ x ). An estimatorf (y), based on the measurement results y, has a mean E(f (y)|x) = f (y)Tr(ρ x Π y )dy = f (x) + b(x), where b(x) represents the bias of the estimation. This equation can be written in another form where we use E(x) as a short notation for E(f (y)|x) which equals to f (x) + b(x) and only depends on x. Assuming the prior distribution is given by p(x), the mean square error is then in the form where δf 2 = (f (y) − E(x)) 2 Tr(ρ x Π y )dy is the variance off (y). Differentiating Eq. (2) with respect to x and use the fact that Now multiply p(x) at both sides of Eq. (4) and substitute the following equation into it here L is known as the symmetric logarithmic derivative of ρ x which is the solution to Eq. (5). We then obtain where Re(·) represents the real part. Multiply both sides again with a real function z(x) then integrate with respect to x, Using Schwarz inequality we have the last equality we used the fact that and Here J(ρ x ) = Tr(ρ x L 2 ) is the quantum Fisher information [1,2]. Based on above equations, we can obtain which is valid for any z(x) that satisfies the inequality p( From Eq. (3) we then get the lower bound for the mean square error When b(x) = 0, i.e., for unbiased estimators the bound reduces to a Bayesian Cramér-Rao bound [31] (another Bayesian QCRB using left logarithmic derivative is in Ref. [32]) Furthermore, if f (x) = x, the bound reduces to the well-used Cramér-Rao form [3]. If we only consider f (x) = x and take the prior distribution as a uniform one, above bound can be treated as the quantum version of the biased Cramér-Rao bound [1]. The bound given in Eq. (13) vividly displays the tradeoff between the variance and the bias of the estimate: at one extreme by letting b(x) = 0 the unbiased estimates minimize the term b 2 (x), while the first term is fixed; at the other extreme by letting b(x) = −f (x) we can minimize the the first term, but now with a fixed bias b 2 (x) = f 2 (x). The actual minimum of this bound lies somewhere between these two extremes, which provides a lower bound for all estimators.
To obtain a valid lower bound for all estimators we use the variational principle to find the optimal b(x) that minimizes the bound in Eq. (13), which follows the treatment in Ref. [36]. Suppose the support of the prior distribution )}, and using variation of calculus, the optimal b(x) that minimizes with the Neumann boundary condition ∂G ∂b ′ | x=a 1 = ∂G ∂b ′ | x=a 2 = 0. Substituting the expression of G(b, x) into the equation, one can obtain which gives the following differential equation for the optimal b(x) which can be reorganized and written compactly as . Note that the obtained solution of b(x) may not correspond to an actual bias of an estimator, it is just used as a tool to get the lower bound [35]. The optimal bias b(x) can then be obtained by solving this equation, either numerically or analytically. Next, substituting it back to Eq. (13), one can get a valid lower bound for all estimates. If the prior distribution p(x) and the quantum Fisher information J(ρ x ) are independent of x, then the equation simplifies to which can be analytically solved. For example consider a uniform prior distribution on (0, a), and we would like to estimate the unknown parameter itself, i.e., f (x) = x. In this case we can obtain an analytical solution for the optimal bias Substituting it back to the right side of the inequality (13), we obtain a valid lower bound for all estimates Compare to the quantum Cramér-Rao bound, this bound has an extra term which is then always lower.
Examples
In this section, we give four examples for the valid lower bound. In the first three examples, the QFI is independent of the parameter under estimation. In these examples, taking the prior distribution as uniform, the MSE can be directly obtained via Eq. (20). However, in some cases, the QFI is actually dependent on the estimated parameter. The fourth example is such a case. In this example, the optimal bias has to be solved via Eq. (17). Example 1. As the first example, we consider N spins in the NOON state, (|00 · · · 0 + |11 · · · 1 )/ √ 2, which evolves under the dynamics U(x) = (e −iσ 3 xt/2 ) ⊗N (same unitary evolution e −iσ 3 xt/2 acts on each of the N spins) with σ 1 = |0 1| + |1 0|, σ y = −i|0 1| + i|1 0| and σ 3 = |0 0| − |1 1| as Pauli matrices. After t units of time it evolves to We can take the time as a unit, i.e., t = 1. This NOON state has the quantum Fisher information J = N 2 [14]. For n times repeated measurements, the quantum Fisher information is nN 2 . If the prior distribution p(x) is uniform on (0, a), then from Eq. (20), we have We will compare these bounds with an actual estimation procedure using the MMSE estimator. Consider the measurements in the basis of |ψ 0 = (|00 · · · 0 + |11 · · · 1 )/ √ 2 and |ψ 1 = (|00 · · · 0 − |11 · · · 1 )/ √ 2, which has the measurement results 0 and 1 with probability distribution p 0 = | ψ 0 |ψ x | 2 = cos 2 (Nx/2) and p 1 = 1 − p 0 = sin 2 (Nx/2). Assuming the measurement is repeated n times, the probability that has k outcomes as 1 is given by where n k is the binomial coefficient. From which we can then obtain the MMSE estimator as explained in the introduction.
To compare the QCRB, MMSE and OBB with the mean square error of this procedure, we plot these three quantities as functions of measurement number n in Fig. 1. The solid red, dashed blue lines and black dots in this figure represent the mean square error for the MMSE estimator, the QCRB and the OBB, respectively. From which we can see that while QCRB fails to calibrate the performance of the MMSE estimator, the optimal biased bound provides a valid lower bound. And from the closeness between the MMSE estimator and the optimal biased bound, one can gauge that in this case the MMSE estimator is almost optimal. The bias for the MMSE estimator is also plotted in Fig. 2. It can be seen that when n is small, the MMSE estimator is indeed biased, for this reason the QCRB fails to calibrate the performance, while when n gets larger, the estimator becomes more unbiased, indicating a transition into the asymptotical regime where the QCRB starts to be valid.
Example 2. We consider a qubit undergoing an evolution with dephasing noise. The master equation for the density matrix ρ of the qubit iṡ where γ is the decay rate and x is the parameter under estimation. Take the initial state as |ψ 0 = (|0 + |1 )/ √ 2, then after time t, which we normalize to 1, the evolved state reads where η = exp(−γ). The quantum Fisher information in this case is given by J = η 2 . The quantum Cramér-Rao bound for n repeated measurements then gives For the optimal biased bound we again takes the prior distribution p(x) as uniform on (0, π). Based on Eq. (20), one can get the optimal biased bound as We also use this bound to gauge the performance of a measurement scheme, which measures in the basis of |ψ 0 = (|0 + |1 )/ √ 2 and |ψ 1 = (|0 − |1 )/ √ 2. The distributions of the measurement results are given by The probability that has k outcomes as 1 among n repeated measurements is p(k|x) = n k p k (1|x)p n−k (0|x). Again using the minimum mean square error estimator, which is given by the posterior meanx(k) = p(x|k)xdx, we can get the mean square error via Eq. (1). In Fig. 3, we plotted the mean square error for the MMSE estimator, the optimal biased bound and quantum Cramér-Rao bound at different strength of dephasing noise. It can be seen that while the quantum Carmér-Rao bound fails to provide a valid lower bound, the optimal biased bound provides pretty tight bound at all ranges of dephasing noise, which indicates that the MMSE estimator is close to be optimal even at the presence of dephasing noises. Example 3. In this example, we consider a SU(2) interferometer described via a unitary transformation exp(−ixS 2 ). Here S 2 is a Schwinger operator defined as
the annihilation (creation) operators for ports A and B.
x is the parameter under estimation. Now we take the import state as a coherent state |β for port A and a cat state N α (|α +|−α ) for port B. Here N 2 α = 1/(2+2e −2|α| 2 ) is the normalization number. Taking into account the phase-matching condition, the quantum Fisher information for x in this case is in the form [37] where n A = |β| 2 and n B = |α| 2 tanh |α| 2 are photon numbers in port A and B. Based on above expression, the quantum Fisher information J is independent of x. Thus, for the optimal biased estimation, the mean square error MSE(x) satisfies Eq. (20). The maximum Fisher information with respect to n A and n B for a fixed yet large total photon number in this case can be achieved when photon numbers for both ports are equal, which is J m = N 2 + N [37], with N the total photon number in the interferometer. Using the optimal biased bound and taking the prior distribution as uniform on (0, a), for n times repeated measurements, MSE(x) then satisfies nJ .
(31) Figure 4 shows the quantum Cramér-Rao bound (dashed blue line), the optimal biased bound (dash-dotted black line) and the minimum mean square error for the MMSE estimator (solid red line). The prior distribution taken as uniform in (0, π/5). In this figure, n A = n B = 1. For the MMSE estimator, we measure along the state |11 . We can see that the optimal biased bound provides a valid lower bound at all range of n, however the gap between the mean square error of the MMSE estimator and the bound indicates that the measurement along the state |11 may not be optimal.
Example 4. The quantum Fisher information in above examples is independent of the estimating parameter x. We give another example with the quantum Fisher information depending on x.
Consider a qubit system with the Hamiltonian which describes the dynamics of a qubit under a magnetic field in the XZ plane, the interested parameter denotes the direction of the magnetic field. The quantum Fisher information of this system has been recently studied with various methods [38][39][40]. For a pure initial state (|0 + |1 )/ √ 2, the quantum Fisher information is given by (with the evolution time normalized as t = 1) which depends on x. In this case, we have to solve Eq. (17). Like previous examples, we take the prior distribution p(x) as uniform on (0, π/2). If we take B = π/2, with n repeated measurements, J = n(2 − sin 2 x), then Eq. (17) reduces to This equation can be numerically solved and by substituting the obtained b(x) into Eq. (13), the optimal biased bounds can be obtained which is plotted in Fig. 5. Again we use this bound to gauge the performance of a measurement scheme which takes measurements along |ψ 0 = (|0 + |1 )/ √ 2 and |ψ 1 = (|0 − |1 )/ √ 2. The probability distribution of the measurement results are given by and p(0|x) = 1 − p(1|x). When B equals to π/2, above probability reduces to p(1|x) = (sin 2 x)/2. The probability having k outcomes as 1 among n repeated measurements is p(k|x) = n k p k (1|x)p n−k (0|x). Using the posterior mean as the estimator, we can obtain the mean square error for the MMSE estimator which is also plotted in Fig. 5. From this figure, one can again see that while the quantum Cramér-Rao bound (dashed blue line) fails to gauge the performance of the MMSE estimator (solid red line), the optimal biased bound (dash-dotted black line) provides a valid lower bound and from the closeness between the mean square error of the MMSE estimator and the optimal biased bound, one can tell that the MMSE estimator is a good estimator here.
Summary
The optimal biased bound provides a valid lower bound for all estimators, either biased or unbiased. It can thus be used to calibrate the performance of all estimators in quantum parameter estimation. Asymptotically the widely used quantum Cramér-Rao bound provides a lower bound for quantum parameter estimation, however in practice the number of measurements are often constrained by resources, and it is hard to tell when quantum Cramér-Rao bound applies. From the difference between the optimal biased bound and quantum Cramér-Rao bound it also provides a way to estimate the number of measurements needed to enter the asymptotical regime. | 4,068.4 | 2016-09-06T00:00:00.000 | [
"Mathematics"
] |
Localization of directed polymers with general reference walk
Directed polymers in random environment have usually been constructed with a simple random walk on the integer lattice. It has been observed before that several standard results for this model continue to hold for a more general reference walk. Some finer results are known for the so-called long-range directed polymer in which the reference walk lies in the domain of attraction of an $\alpha$-stable process. In this note, low-temperature localization properties recently proved for the classical case are shown to be true with any reference walk. First, it is proved that the polymer's endpoint distribution is asymptotically purely atomic, thus strengthening the best known result for long-range directed polymers. A second result proving geometric localization along a positive density subsequence is new to the general case. The proofs use a generalization of the approach introduced by the author with S. Chatterjee in a recent manuscript on the quenched endpoint distribution; this generalization allows one to weaken assumptions on the both the walk and the environment. The methods of this paper also give rise to a variational formula for free energy which is analogous to the one obtained in the simple random walk case.
Introduction
The probabilistic model of directed polymers in random environment was introduced by Imbrie and Spencer [34] as a reformulation of Huse and Henley's approach [33] to studying the phase boundary of the Ising model in the presence of random impurities. In its classical form, the model considers a simple random walk (SRW) on the integer lattice Z d , whose paths-considered the "polymer"-are reweighted according to a random environment that refreshes at each time step. Large values in the environment tend to attract the random walker and possibly force localization phenomena; this attraction grows more effective in lower dimensions and at lower temperatures. On the other hand, the random walk's natural dynamics favor diffusivity. Which of these competing features dominates asymptotically is a central question in the study of directed polymers. Much progress has been made over the last thirty years in understanding polymer behavior; for a comprehensive and up-to-date survey, the reader is referred to the recent book by Comets [23].
In [22], Comets initiated the study of long-range directed polymers. In this model, the simple random walk is replaced by a general random walk capable of superdiffusive motion. More specifically, it is assumed that the walk belongs to the domain of attraction of an α-stable law for some α ∈ (0, 2]. For example, any walk having increments with a finite second moment belongs to the α = 2 case. Under this assumption the longrange polymer can model the behavior of heavy-tailed walks, such as Lévy flights, when placed in an inhomogeneous random environment. Indeed, Lévy flights in random potentials have been used to study chemical reactions [19] and particle dispersions [57,14]. Moreover, their continuous-time analogs, Lévy processes, appear in a variety of disciplines including fluid mechanics, solid state physics, polymer chemistry, and mathematical finance [8]. This is relevant because α-stable polymers are known to obey a scaling CLT at sufficiently high temperatures ( [22,Theorem 4.2] and [66,Theorem 1.9]), which generalizes the Brownian CLT proved in [29,Theorem 1.2] when the reference walk is SRW.
Interestingly, universal behaviors have also appeared at low temperatures, where the system exists in a "disordered" phase. Part of the work in [22,66] was to extend localization results known for polymers constructed from a SRW to those constructed with α-stable reference walks. This paper continues the advance in this direction by proving that in the localization regime, certain qualitative behaviors of the polymer's endpoint distribution are the same for any reference walk. Namely, the strongest forms of localization known for arbitrary environment and arbitrary dimension, which were only recently proved for the SRW case in [10], are established here for the general case.
The organization of the remaining introduction is as follows. After the polymer model is formally introduced, we will recall the relevant facts from the literature in Section 1.2 and state our main results in Section 1.3. The proof strategy is outlined in Section 1.4, which describes how the approach used in [10] must be expanded to work for general polymers. Finally, Section 1.5 offers references on other fronts of progress in both the short-and the long-range settings.
The model
Let d be a positive integer, to be called the spatial or transverse dimension. Let P denote the law of the reference walk, which is a homogeneous random walk (ω i ) i≥0 on Z d . To be precise, we state the assumptions on P : P (ω 0 = 0) = 1, P (ω i+1 = x | ω i = y) = P (ω 1 = x − y) =: P (y, x) < 1. (1.1) Next we introduce a collection of i.i.d. random variables η = (η(i, x) : i ≥ 1, x ∈ Z d ), called the random environment, supported on some probability space (Ω, F, P). We will write E and E for expectation with respect to P and P, respectively. Finally, let β > 0 be a parameter representing inverse temperature. Then for n ≥ 0, the quenched polymer measure of length n is the Gibbs measure ρ n defined by Polymers with general reference walk The normalizing constant Z n := E(e −βHn(ω) ) = x1,..., xn∈Z d P (x i−1 , x i ), x 0 := 0, is called the quenched partition function. A fundamental quantity of the system is calculated from this constant, namely the quenched free energy, F n := log Z n n .
We specify "quenched" to indicate that the randomness from the environment has not been averaged out. That is, ρ n , Z n , and F n are each random processes with respect to the filtration F n := σ(η(i, x) : 1 ≤ i ≤ n, x ∈ Z d ), n ≥ 0.
When Z n is replaced by its expectation E(Z n ), one obtains the annealed free energy, log E(Z n ) n = log(E e βη ) n n = log E(e βη ) =: λ(β), where η denotes (here and henceforth) a generic copy of η(1, 0). Notice that λ(·) depends only on the law of the environment, which we denote L η . We will assume finite (1 + ε)exponential moments, 0 < β < β max := sup{t ≥ 0 : λ(±t) < ∞}, (1.2) so that Z n has finite ±(1 + ε)-moments at the given inverse temperature. Otherwise L η is completely general, although to avoid trivialities, we will always assume that η is not an almost sure constant, and that β is strictly positive.
Overview of known results
Two fundamental facts are the convergence of the free energy and the existence of a corresponding phase transition. The first result below was initially shown by Carmona Theorem A ([26, Proposition 2.5]). Let P be SRW and assume λ(t) < ∞ for all t ∈ R. Then there exists a deterministic constant p(β) such that lim n→∞ F n = p(β) a.s. Theorem B ([29, Theorem 3.2(b)]). Let P be SRW and assume λ(t) < ∞ for all t ∈ R. Then there exists a critical value β c ∈ [0, ∞] such that 0 ≤ β ≤ β c ⇒ p(β) = λ(β) β > β c ⇒ p(β) < λ(β).
Polymers with general reference walk
Theorem C ( [26,Corollary 2.2]). Let P be SRW and assume λ(t) < ∞ for all t ∈ R.
Define the (random) set Then p(β) < λ(β) if and only if there exists ε > 0 such that In [62], A ε i is called the set of ε-atoms. In other words, a polymer is localized when it contains macroscopic atoms that persist with positive density. This result was generalized to the case of 0 < β < B := sup{t ≥ 0 : λ(t) < ∞} by Vargas [62,Theorem 3.6], whose argument was extended to general P by Wei [ ρ i (ω i ∈ A ε i ) ≥ δ a.s. (1.4) We conclude this section by recalling a simple sufficient condition for localization. Notice that the condition depends only on the entropy of the random walk increment, not its actual distribution. One can therefore interpret the result as providing a temperature threshold below which concentration of the polymer measure is strong enough to overcome any superdiffusivity of the random walk.
Results of this paper
The temperature range at which localization occurs depends on P , but to the extent described in our two main results below, the type of localization does not. We will also prove Theorems A and B in the general setting so that the statements of localization results make sense. First, it was recently shown in [10] that β 0 in (1.4) can be taken equal to β c . Alternatively, the right-hand side of (1.3) can be taken arbitrarily close to 1. These realizations continue to hold true when P is any walk satisfying (1.1), as demonstrated in the following improvement of Theorems C and D, thus extending [10, Theorem 1.1] beyond the SRW case. (a) If p(β) < λ(β), then for every sequence ε i → 0, Polymers with general reference walk This result is later stated and proved as Theorem 5.3. In case (a) above, we say that the sequence (ρ i (ω i ∈ ·)) i≥0 is asymptotically purely atomic 1 , which is meant to indicate that all of the limiting mass is atomic, not just an ε-fraction.
(a) If p(β) < λ(β), then for any δ > 0, there exists K < ∞ and θ > 0 such that , then for any δ ∈ (0, 1) and any K > 0, This result appears as Theorem 5.4 in the sequel. In case (a), we say that the sequence of measures (ρ i (ω i ∈ ·)) i≥0 exhibits geometric localization with positive density. The constant θ gives a lower bound on the density of endpoint distributions displaying the desired level of concentration. If θ can be taken equal to 1, then essentially all large atoms remain close, so we would say the sequence is geometrically localized with full density. That this is the case is sometimes called the "favorite region conjecture", although one possibility is that it is true only in low dimensions. The only model for which it is known to be true is the one-dimensional log-gamma polymer [56], whose exact solvability was leveraged in [25] to obtain a limiting law for the endpoint distribution. The general method of [10] also was to establish a limiting law for the endpoint distribution, but in a more abstract compactification space. A sufficient condition for the existence of a favorite region was given [10,Theorem 7.3(b)], although a way to check the condition is not currently known.
Comment on moment assumptions
While Theorems 1.1 and 1.2 were shown in [10] in the case P is SRW, that paper also critically assumed λ(±2β) < ∞. Here we are establishing the results not only for an arbitrary reference walk, but also under the weaker hypothesis (1.2). In this way, the present study both expands previous results to the setting of a general walk, and optimizes assumptions even in the classical case. While this latter point may seem minor, it actually permits parameter ranges for which the regions of interest had been 1 To the author's knowledge, this terminology was first used by Vargas The assumption λ(±2β) < ∞ only covers β < 1 2 . By Theorem E, we know that there is localization (in the SRW case) at any β such that βλ (β) − λ(β) > log(2d). But for any positive integer d (in particular d ≥ 3, where localization does not necessarily occur), Therefore, the temperature regime for which localization is actually known has no intersection with the regime in which hypothesis λ(±2β) < ∞ is true. This paper eliminates this discrepancy by assuming only (1.2). The technical challenges incurred are non-trivial, but the fact that they can be overcome reflects the generality with which our methodology may be useful, possibly in contexts other than polymer models. Indeed, this feature is itself a motivation for the present study.
Outline of methods
We write f n to denote the probability mass function on Z d of the (random) endpoint distribution ρ n (ω = ·) for the length-n polymer. It is not difficult to see that (f n ) n≥0 is an 1 (Z d )-valued Markov process with respect to the canonical filtration (F n ) n≥0 (c.f. Section 3.3). Unfortunately, the space 1 (Z d ) is too large to establish any type of convergence. More to the point, we cannot expect any tightness for (ρ n ) n≥0 .
In [10], this issue is resolved by constructing a compact space in which to embed the Markov chain. Specifically, the mass functions on Z d are identified with mass functions on N × Z d , which themselves are part of a larger space: A pseudometric d * can be defined on S 0 such that the resulting quotient space S := S 0 /(d * = 0) is compact. After not too much work, one can show that the equivalence classes under d * are the orbits under translations. In other words, the only information that is lost by passing from 1 (Z d ) to S is the location of the origin. We call the elements of S partitioned subprobability measures.
In hindsight, then, the only obstructions to compactness were translations, which form a non-compact group under which Z d is invariant. Observed also for more general unbounded domains, this phenomenon is often called "concentration compactness" in the study of PDEs and calculus of variations. In the 1980s, Lions [39][40][41][42] made highly effective use of the concept by transferring problems to compact spaces using concentration functions introduced by Lévy [38], although the idea for this type of compactification scheme goes back to work of Parthasarathy, Ranga Rao and Varadhan [53]. The particular construction in [10] was inspired in part by a continuum version used by Mukherjee and Varadhan [46] to prove large deviation principles for Brownian occupation measures.
Once the Markov chain of endpoint distributions is embedded in a compact space, a few key ideas can be wielded to great effect: (i) Consider the "update map" T which receives a starting position for the chain and outputs the law after one step. Provided T is continuous, the chain's empirical measure will converge to stationarity.
(ii) A convexity argument shows that the starting configuration of P (ω 0 = 0) = 1 is optimal in yielding the minimal expected free energy. (iii) The two previous points imply that the chain's empirical measure must converge to energy-minimizing stationarity. This fact provides a variational formula (4.8) for the limiting free energy. Furthermore, by examining what properties an energyminimizing stationary distribution must have, we can deduce certain asymptotic properties of the chain. In particular, Theorems 1.1 and 1.2 will follow, as described in Sections 4 and 5.
Two of the most challenging steps are (a) constructing the compact space; and (b) proving continuity of T . Although the remaining work to prove Theorems 1.1 and 1.2 can largely be taken from [10], these two parts must undergo non-trivial generalizations. This is done in Section 3.
First, the construction in [10] of the compact metric space (S, d * ) depended on the assumption λ(±2β) < ∞. Now working under the more natural hypothesis (1.2), we must define a one-parameter family of metrics (d α ) α>1 , whereby α can be chosen so that αβ < β max . For each α > 1, it must be checked that d α is a metric, and that it induces a compact topology.
Second, the definition of T depends very explicitly on P , the law of the reference walk. When this walk is SRW, showing continuity of T amounts to proving that local interactions between endpoints are modified continuously with the addition of another monomer. When P is general, however, there can be interactions of endpoints arbitrarily far apart. Showing that these interactions do not spoil continuity complicates the proof of Proposition 3.4. For the same reason, technical details become more difficult in the proof of Proposition 2.4, which is the generalization of Theorem B.
While this paper is focused on polymers in Z d , the above program could be carried out on other countable, locally finite Cayley graphs. In this setting, the translation action on Z d is generalized by the group's natural action on itself. One case of interest is the infinite binary tree, considered as a subset of the canonical Cayley graph for the free group on two generators. An intriguing observation is that while analogs of Theorems 1.1 and 1.2 should go through, the "favorite region conjecture" mentioned in Section 1.3 will not. Indeed, when the binary tree having 2 n leaves is identified with [0, 1] having 2 n dyadic subintervals, the limiting endpoint distribution of a low-temperature tree polymer converges in law to a purely atomic measure [9], in analogy with Theorem 1.1. While this measure sometimes confines almost all of its mass to a very narrow interval-in analogy with Theorem 1.2-the low probability of this event prevents a stronger localization result. For more on the comparison of tree polymers versus lattice polymers, see [23,Chapter 4].
Variational formulas for free energy
As in other statistical mechanical models, there is great interest in computing the limiting free energy p(β) from Theorem A. For P having finite support (i.e. P (y, x) > 0 for only finitely many x), a series of papers due to Georgiou, Rassoul-Agha, Seppäläinen, and Yilmaz [54,55,31] provides two types of variational formulas for p(β), one using cocycles (additive functions on N × Z d ) and another of a Gibbs variational form (optimizing the balance of energy versus entropy). A third type of variational formula appeared in [10] for the SRW case, and is extended to the general case in this paper as (4.8). This formula arises naturally as an optimization over a functional order parameter-in this case, the law of the endpoint distribution-following a variant of the cavity method used to study spin glass systems, in particular the Sherrington-Kirkpatrick model [59,60,51]. As such, the variational formula (4.8) can be considered in analogy with the Parisi formula [58], which has recently been the object of intense study (e.g. see [48-50, 20, 52, 4-6, 35, 7, 21]).
Martingale phase transition and free energy asymptotics
It is not difficult to check that the normalized partition function W n := Z n /E(Z n ) forms a positive martingale with respect to F n . Therefore, the martingale convergence theorem guarantees the existence of a random variable W ∞ ≥ 0 such that W n → W ∞ almost surely, while Kolmogorov's 0-1 law implies P(W ∞ > 0) ∈ {0, 1}. It is known in the SRW case [29, Theorem 3.2(a)] and in the long-range case (stated but not proved in [66,Theorem 1.4]) that there is a phase transition. That is, there existsβ c ∈ [0, ∞] such that β <β c ⇒ P(W ∞ > 0) = 1 ("weak disorder") β >β c ⇒ P(W ∞ = 0) = 1 ("strong disorder").
Since W n → 0 exponentially quickly when β > β c , it is clear thatβ c ≤ β c , although it is conjectured thatβ c = β c in general. 2 This is known only in exceptional (but highly non-trivial) cases whenβ c = 0. For the SRW case, it was proved that β c = 0 by Comets and Vargas for d = 1 [ 3 Recently, Wei showed [65, Theorem 1.3] that in the critical case α = d = 1 and under some regularity assumptions on P ,β c = 0 again implies β c = 0.
The results just mentioned from [36,66,65] are in fact corollaries of asymptotics obtained for p(β) as β 0. In the SRW case, the bounds in [36] have been subsequently sharpened [64,47,3,11]. For d = 1, the precise exponent seen in these results is related to the scaling of β n 0 that generates the intermediate disorder regime in which a rescaled lattice polymer converges to the continuum random polymer [2]. This method of identifying a KPZ regime was initiated in [1] and extended to certain long-range cases in [16,17]. In fact, the authors of [17] are able to identify a universal limit for the point-to-point log partition functions, in critical cases, for both d = 1 and d = 2. Related work on the stochastic heat equation has been done for d ≥ 3 [45]. Finally, in [24] the asymptotics of p(β) as β → ±∞ are derived when P has a stretched exponential tail, and the environment consists of Bernoulli random variables.
Continuous versions of long-range polymers
In [44], a continuous version of α-stable long-range polymers is considered. Specifically, a phase transition was shown for the normalized partition function associated to a Lévy process subjected to a Poissonian random environment. Sufficient conditions were given for either side of the transition. In the same way that the α-stable polymer introduced in [22] generalized classical lattice polymers, this Lévy process model generalized a Brownian motion in Poissonian environment, which was introduced in [28] and also considered in [37,30].
Free energy and phase transition
In order to give context for the main results, which concern the behavior of polymer measures above and below a phase transition, we must first check that such a phase transition exists. To do so, we need to prove Theorems A and B in the general setting. First, in Section 2.1 we show that the quenched free energy has a deterministic limit. The arguments used here are standard, and the expert reader may skip them; nevertheless, the details are included to verify that no essential facts are lost when working with an arbitrary reference walk. Next, a proof of the phase transition is given in Section 2.2. In particular, the methods initiated in [29] must be refined to account for general P and weaker assumptions on the logarithmic moment generating function λ(β).
Convergence of free energy
We begin by showing the existence of a limiting free energy. Throughout this section one may assume a condition just slightly weaker than (1.2), namely λ(±β) < ∞.
The proof follows the usual program of showing first that E(F n ) converges, and second that F n concentrates around its mean.
The nonnegativity of all summands allows us, by Tonelli's theorem, to pass the expectation through the sum. That is, In particular, the inequality in (2.2) follows from the above display. On the other hand, a lower bound is found by again using Jensen's inequality, but applied to t → e t and with respect to P : where the final equality is a consequence of Fubini's theorem. Indeed, we may apply Fubini's theorem because Therefore, we obtain the desired lower bound: Now we may prove superadditivity. For a given integer k ≥ 0 and y ∈ Z d , let θ k,y be the associated time-space translation of the environment: , the random variables Z n and Z n • θ k,y have the same law. Furthermore, for any 0 ≤ k ≤ n, we have the identity where Z k (y) := E(e −βH k (ω) ; ω k = y) is the contribution to Z k coming from the endpoint y. By Jensen's inequality, Notice that Z n−k • θ k,y depends only on the environment after time k, meaning it is independent of F k . We thus have Using this observation in the previous display, we arrive at the desired superadditive inequality: EJP 23 (2018), paper 30.
Existence of critical temperature
Now we prove a phase transition between the high temperature and low temperature regimes. • Two generalizations were claimed to follow easily from the same methods. Vargas [62,Lemma 3.4] suggests the hypothesis β max = ∞ can be dropped, while Comets [22, Theorem 6.1] allows P to be α-stable, α ∈ [0, 2). We do both by assuming only β max > 0 and allowing P to be a general random walk. It seems the resulting difficulties are only technical, but how to resolve them is not obvious, and so we provide a full proof.
Proof of Proposition 2.4. Note that log Z n = E(e −βHn(ω) ) is the (random) logarithmic moment generating function for −H n (ω) with respect to P , and is finite for all β ∈ [0, β max ) almost surely by the proof of Lemma 2.2. Therefore, E(log Z n ) is convex on (0, β max ). Now seen to be the limit of convex functions, p must be convex on (0, β max ). It follows from general convex function theory that p is differentiable almost everywhere on (0, β max ), and Furthermore, convexity implies p is absolutely continuous on any closed subinterval of [0, β max ), and thus Then we could conclude that In particular, the existence of β c will be proved. Therefore, we need only to show (2.6). To do so, we would but each of (a), (b), and (c) require justification. Postponing these technical verifications for the moment, we complete the proof of (2.6) assuming (2.7).
For a fixed ω ∈ Ω p , we can write where E denotes expectation with respect to the probability measure P given by dP.
Since the Radon-Nikodym derivative is a product of independent quantities (with respect to P), the probability measure P remains a product measure. Therefore, we can apply the Harris-FKG inequality (e.g. see Therefore, where the penultimate equality is a consequence of Tonelli's theorem, since Z −1 n e −βHn(ω) > 0. The inequality (2.6) now follows by dividing by n. (2.7) Fix β ∈ (0, β max ). Choose q > 1 such that qβ < β max , and let q be its Hölder conjugate:
Justification of (c) in
Step (c) in (2.7) will follow from Fubini's theorem once we verify that Let g and h be as in Lemma 2.6, so that we can write Temporarily fix a path ω ∈ Ω p . Since Z n and −H n , and therefore g(−H n ), are nondecreasing functions of all η(i, x), the Harris-FKG inequality shows (2.11) The first factor satisfies The second factor satisfies where we have used Tonelli's theorem to exchange the order of integration. We have thus shown as desired.
Justification of (a) in
We will ultimately invoke dominated convergence to pull the limit through the expectation.
Consider any fixed h satisfying |h| ≤ ε. Now, log Z n is almost surely continuously differentiable, and so by the mean value theorem, a.s.
for some ε η depending on the random environment η and satisfying |ε η | ≤ |h| ≤ ε. But then by convexity of log Z n , we can bound the difference quotient by consider the endpoints of the interval [β 0 − ε, β 0 + ε]: where the maximum is now a dominating function independent of h. We showed (2.17) holds almost surely, and so < ∞.
In particular, the dominating function is integrable:
Adaptation of abstract machinery
In this section we recall and adapt some necessary definitions and results from [10]. The key change we will make is to the definition of the "update map" that sends a fixed endpoint distribution ρ n (ω n = ·) to the conditional law of ρ n+1 (ω n+1 = ·) given F n , viewed as a random variable in a suitable metric space (S, d). The construction of S is identical to what was done in [10]; we briefly describe it in Section 3.1 and justify some adaptations in Section 3.2. Then, in Section 3.3 we newly define the update map to allow for general reference walk P , and then lift it to a map of probability measures on S, a space denoted P(S). Finally, in Section 3.4 we prove continuity of the update map with respect to Wasserstein distance, which implies continuity of its lift.
Partitioned subprobability measures
The quenched endpoint distribution at time n, given by is a Borel measurable function of η when considered in the space 1 (Z d ). For each α > 1, we construct a pseudometric d α on S 0 as follows. Define "addition" and "subtraction" on N × Z d by extending the group structure of Z d , but only if the first coordinates agree: Similarly, define the 1 "norm" by The maximum integer m for which (3.3) holds (possibly infinite) is called the maximum degree of φ, and is denoted deg(φ). The following lemmas demonstrate two useful properties of isometries: composition and extension. . Suppose that φ : A → N × Z d is an isometry of degree m ≥ 3. Then φ can be extended to an isometry Φ : By induction, if φ has deg(φ) ≥ 2k + m, then φ can be extended to an isometry Φ : Given an isometry φ (which implicitly stands for the pair (A, φ)) and α > 1, we define the α-distance function according to φ: Finally, the pseudometric is obtained by taking the optimal α-distance: where deg(φ) ≥ 1 means φ is injective. The case α = 2 was considered in [10], and we can easily adapt the proof given there to show d α satisfies the triangle inequality. Since d α is clearly symmetric in f and g (by changing φ to φ −1 ), this result verifies that d α is a pseudometric.
With this pseudometric, a new space is realized by taking the quotient of S 0 with respect to d α : That is, S is the set of equivalence classes of S 0 under the equivalence relation We call S the space of partitioned subprobability measures, and it naturally inherits the metric d α . Lemma 3.4 below shows that for distinct α, α > 1, we have d α (f, g) = 0 if and only if d α (f, g) = 0. Therefore, we are justified in not decorating the space S with an α parameter, since S 0 /(d α = 0) is always the same set.
The quotient map ι : S 0 → S that sends an element to its equivalence class is Borel measurable with respect to the metric topology, cf. [ The support number of f is the cardinality of H f , which is possibly infinite. For g ∈ S 0 with N-support H g , d α (f, g) = 0 if and only if there is a bijection σ : (3.10) The key fact, and indeed the goal of constructing S, is the following result. It was proved for α = 2 in [10, Theorem 2.9], and once more the proof readily extends to any α > 1 by a modification as simple as changing the 2's to α's. We will generally write f for an element of S, and explicitly indicate f ∈ S 0 when referring to a representative in S 0 . When f is being evaluated at some u ∈ N × Z d , a representative has been chosen. For certain global functionals such as · defined in Here Π(µ, ν) denotes the set of probability measures on S ×S having µ and ν as marginals. Lemma 3.5 implies (P(S), W α ) is also a compact metric space. It is a standard fact (for instance, see [63, Theorem 6.9]) that Wasserstein distance metrizes the topology of weak convergence. In the compact setting, weak convergence is equivalent to convergence of continuous test functions. If L is (lower/upper semi-)continuous, then L is (lower/upper semi-)continuous.
Equivalence of generalized metrics
We have introduced a family of metrics (d α ) α>1 on S, where the flexibility of choosing α sufficiently close to 1 will allow us to make more effective use of the abstract methods in [10]. Namely, the only assumption we need is (1.2). It is important, however, that each metric induces the same topology. The next proposition verifies this fact. In particular, any functional on S that was proved in [10] to be continuous with respect to d 2 remains continuous under d α , α > 1. Proof. Since α and α are interchangeable in the claim, it suffices prove the "only if" direction. That is, we assume d α (f, f n ) → 0 as n → ∞. Fix representatives f, f n ∈ S 0 . Given ε > 0, set Then choose N sufficiently large that d α (f, f n ) < δ for all n ≥ N . In particular, for any such n, there is an isometry φ n : A n → N × Z d satisfying d α,φn (f, f n ) < δ. In particular, These four inequalities together show As ε > 0 is arbitrary, it follows that d α (f, f n ) → 0.
Generalized update map
Throughout the remainder of the manuscript, we fix β ∈ (0, β max ) according to (1.2), and we also fix some α > 1 such that αβ < β max . We then restrict our attention to S equipped with the metric d α , and P(S) with W α . Proposition 3.8 tells us that the topology on S does not depend on α, although the same is not true for the topology on P(S) induced by W α . Indeed, there can exist functions ϕ : S → R which are Lipschitz-1 with respect to some d α but not Lipschitz at all with respect to some other d α .
We write f n to denote the (random) endpoint distribution under the polymer measure ρ n , belonging to either 1 (Z d ) or S depending on context. Notice that we have the .
This identity shows how f 0 → f 1 → · · · forms a Markov chain when embedded into S. Namely, we identify f n−1 with its equivalence class in S so that a representative takes values on N × Z d instead of Z d . (In this case, the support number is just 1.) Then the law of f n ∈ S given f = f n−1 is the law of the random variable F ∈ S defined by , (3.12) where (η w ) w∈N×Z d is an i.i.d. collection of random variables having the same law as η, and P (v, w) = P (y, z) if v = (n, y), w = (n, z) 0 otherwise.
To simplify notation, we write v ∼ w in the first case (i.e. v and w have the same first coordinate) and v w otherwise.
Remark 3.9.
Although the indexing of η w by w ∈ N × Z d might appear to reflect a notion of time, we are not using N to consider time. Rather, in order to compactify the space of measures on Z d , we needed to pass to subprobability measures on N × Z d . To avoid confusion, we will never write N to index time. Following this rule, we will write η w whenever we wish to think of a random environment on N × Z d , always at a fixed time.
When considering the original random environment defining the polymer measures, we will follow the standard η(i, x) notation. In either case, we will continue to use boldface η when referring to the entire collection of environment random variables.
Generalizing (3.12) to f ∈ S that may have f < 1, we define T f ∈ P(S) to be the law of F ∈ S defined by Notice that the expectation (with respect to η) of the numerator is e λ(β) v∼u f (v)P (v, u), while the expectation of the denominator is e λ(β) . Therefore, these quantities are almost surely finite, and so F is well-defined. In order for T f to be well-defined, we must check the following: (i) Given any f ∈ S 0 , the map R N×Z d → S given by η → F is Borel measurable, where R N×Z d is equipped with the product topology and product measure (L η ) ⊗N×Z d , and L η is the law of η. Claim (i) is immediate, since η → F is clearly a measurable map from R N×Z d to S 0 . After all, it is simply the quotient of sums of measurable functions. And then F → ι(F ) from S 0 to S is measurable by [10,Lemma 2.12]. Claim (ii) is given by the following lemma. (3.14) where the ζ w are i.i.d., each having law L η . Then when these functions are mapped into S by ι, the law of F is equal to the law of G.
Proof. To show that F and G have the same law, it suffices to exhibit a coupling of the environments η and ζ such that F = G in S. So we let H f and H g denote the N-supports of f and g, respectively, and take σ : Next we couple the environments. Let ζ u be equal to η ψ −1 (u) whenever u ∈ H g × Z d . Otherwise, we may take ζ u to be an independent copy of η u . Now, for any u = (n, x) and v = (n, y) with n ∈ H f , Together, (3.17), (3.18), and the fact that f = g (cf. discussion following Lemma 3.5) Letting ε tend to 0 gives the desired result.
We have now verified that the map S → P(S) given by f → T f is well-defined. It remains to be seen that the map is measurable, although this fact will be implied by EJP 23 (2018), paper 30. the continuity proved in the next section. Given measurability, we can naturally lift the update map to a map on measures. For µ ∈ P(S), define the mixture T µ(dg) := T f (dg) µ(df ), (3.19) which means More generally, for all measurable functions ϕ : S → R that either are nonnegative or satisfy In this notation, we have a map P(S) → P(S) given by µ → T µ. We can recover the map T by restricting to Dirac measures; that is, T f = T δ f , where δ f ∈ P(S) is the unit point mass at f . For our purposes here, it suffices to know that µ → T µ is continuous, by an argument which requires only the continuity of f → T f (see [10, Appendix B.1]).
Continuity of update map
The goal of this section is to prove the following result. d α (f, g) < δ ⇒ W α (T f, T g) < ε.
The proof will proceed in a similar manner as in the nearest-neighbor random walk case, although modification is necessary to account for the fact that now the set of "neighbors" may be arbitrarily large, even all of Z d . In preparation, we record the following results.
Lemma 3.13. Let X 1 , X 2 , . . . be i.i.d. copies of an integrable, centered random variable X. For any t > 0, there exists b > 0 such that whenever c 1 , c 2 , . . . are constants satisfying (3.23) Given t and L, let b > 0 be sufficiently small that . (3.24) As in the hypothesis, assume |c i | ≤ b for all i, and (3.25) In order to apply martingale inequalities, we recenter the remaining sum: For each i, the random variable c i (X i − E(X)) has mean 0 and takes values between −|c i |(L + 1) and |c i |(L + 1). Therefore, by the Azuma-Hoeffding inequality [13, Theorem Proof of Proposition 3.11. Since (S, d α ) is a compact metric space, uniform continuity of f → T f will be implied by continuity. So it suffices to prove continuity at a fixed f ∈ S. Let ε > 0 be given. To prove continuity at f ∈ S, we need to exhibit δ > 0 such that if g ∈ S satisfies d α (f, g) < δ, then there exist representatives f, g ∈ S 0 and a coupling of environments (η, ζ) such that under this coupling the following inequality holds: Here F, G ∈ S are given by (3.13) and (3.14). We shall see that it suffices to choose δ satisfying conditions (3.32a)-(3.32d) below.
Let q := max x∈Z d P (0, x), which is strictly less than 1 by (1.1). Next choose t > 0 sufficiently small to satisfy one of the following conditions: κ ≤ 1 7 Then fix a representative f ∈ S 0 , and choose A ⊂ N × Z d finite but sufficiently large that By possibly omitting some elements, we may assume f is strictly positive on A. Next, let k be a positive integer sufficiently large that If A is nonempty, we will also demand that (2d) k |A|δ 1/α < κ. (3.32b) Otherwise, we relax this assumption to δ 1/α < κ. (3.32c) Finally, we assume We claim this δ > 0 is sufficient for ε > 0 in the sense described above. Assume g ∈ S satisfies d α (f, g) < δ. Then given any representative g ∈ S 0 , there exists an isometry ψ : By (3.32a), it follows that deg(ψ) > 3k. In particular, upon defining φ := ψ| A , we have deg(φ) ≥ deg(ψ) > 3k. Therefore, Lemma 3.2 guarantees that φ can be extended to an isometry Φ : Alternatively, we can express this set as The inequality deg(Φ) > k implies that for any B ⊂ A, Furthermore, (3.33) and (3.32d) together force A ⊂ C, since Given a random environment η, we couple to it ζ in the following way. If u ∈ Φ(A (k) ), then set ζ u = η Φ −1 (u) . Otherwise, let ζ u be an independent copy of η u . Note that F and G are now distributed as T f and T g, respectively, when mapped to S. On the other hand, Φ is deterministic, and so no measurability issues arise in the bound It thus suffices to show E(d α,Φ (F, G)) < ε. To simplify notation, we will write so that F (u) = f (u)/ F and G(u) = g(u)/ G.
Consider the first of the four terms on the right-hand side of (3.35). Observe that for Summing over A (k) and taking expectation, we obtain the following from the first term in the last line of (3.36): Meanwhile, the second term in (3.36) gives These preliminary steps suggest two quantities we should control from above, namely the right-hand sides of (3.37) and (3.38). Consideration of the second and third terms EJP 23 (2018), paper 30. on the right-hand side of (3.35) suggests four more quantities, since the Harris-FKG inequality yields and similarly Therefore, we should seek an upper bound for E( F −α ) and E( G −α ), as well as E u / For clarity of presentation, we divide our task into the next four subsections.
Upper bound for
On the other hand, and so if f > 0, then Lemma 3.12 gives
Upper bound for
, meaning | f (u) − g(Φ(u))| is a non-decreasing function of η u and independent of η w for w = u. Since F −1 is a non-increasing function of all η w , the Harris-FKG inequality yields where Of the two absolute values above, the second is easier to control. Indeed, Next consider the first absolute value in the final line of (3.46). The difference between the two sums can be bounded as Each of the four terms above can be controlled separately. For the first term, notice that That is, if ψ(v) ∈ N (Φ(u), k) ∩ ψ(A), then v ∈ N (u, k) ∩ A and satisfies P (v, u) = P (ω 1 = u − v) = P (ω 1 = Φ(u) − ψ(v)) = P (ψ(v), Φ(u)). We thus have a bijection between N (u, k) ∩ A and N (Φ(u), k) ∩ ψ(A), meaning where the final inequality is trivial since A ⊂ C. Summing over u ∈ A (k) gives the total Considering the second term on the right-hand side of (3.48), we have (3.30) < κ. (3.50) Similarly, for the third term, Finally, for the fourth term, 33) < (2d) k |A|δ 1/α (3.32b) < κ. Combining (3.48)-(3.52), we arrive at (3.53) Using (3.47) and (3.53) with (3.46) reveals and thus we have the desired bound: (3.54)
Therefore, by the choice of b in relation to Lemma 3.13, we deduce from (3.56) that E| F − G| ≤ 3t. (3.60) There are now two cases to consider. First, if f < 1, then On the other hand, if f = 1, then we can consider the three sums in (3.56) separately. For any u ∈ A (k) , the quantity We must also have and so by applying Lemma 3.12 once more, we see Again appealing to independence and then Lemma 3.13, we obtain the bound Finally, we use independence and Lemma 3.13 once more to obtain Combining (3.62)-(3.64), we conclude that if f = 1, then where the first sum is bounded by 5κ according to (3.53), and the second sum satisfies Hence (3.57) holds: Next consider the c u . By definition of A (k) , Last, consider the c u . We have the implication and thus (3.30) < d α,ψ (f, g) + κ (3.33) < δ + κ
Variational formula for free energy
Given the results of Sections 2 and 3, there is little difficulty left in obtaining the main results. Indeed, the majority of remaining proofs go through as in [10] with no change. The main reason for this is the high level of abstraction in our approach, the essential components of which are S and T . Since T can be thought of as a continuous transition kernel for a Markov chain in a compact space S, general ergodic theory provides swift access to results on Cesàro limits. Furthermore, the high and low temperature phases can be characterized using the variational formula (4.8) of Theorem 4.1.
Outline of abstract methods
Let us now recall a progression of results, with appropriate references to [10]. Whenever modification is necessary, an updated proof is provided in Section 4.2. Recall from (3.1) that f n is the quenched endpoint distribution of the polymer at length n, identified as an element of S. which is a random element of P(S), measurable with respect to F n . Since T f n is the law of f n+1 given F n , a martingale argument shows that µ n almost surely converges to the set of fixed points of T , which we denote K := {ν ∈ P(S) : T ν = ν}. When f = f n is the n-th endpoint distribution, (3.11) shows Therefore, Upon lifting R to the map R : P(S) → R given by we obtain a continuous functional on measures by Lemma 3.7. Furthermore, the above calculation can be rewritten as E(R(µ n−1 )) = E(F n ). It is proved in [10,Proposition 4.6] that in fact R(µ n−1 ) − F n → 0 almost surely as n → ∞, and so Proposition 2.1 implies The only part of the proof that requires modification is stated as Lemma 4.2 in the next section.
Proofs in general setting
Once the lemmas of this section are checked, all of the results stated in Section 4.1 will be proved.
Adaptation of a fourth moment bound
The proof of [10, Proposition 4.6] uses the Burkholder-Davis-Gundy (BDG) inequality [15, Theorem 1.1] applied to a martingale whose differences are of form W − E(W ), where W is a random variable defined using a fixed f ∈ S 0 with f = 1. Specifically, thus making the BDG inequality useful, but the proof assumed λ(±2β) < ∞. Here we make a simple adaptation of the proof that assumes only λ(±β) < ∞ and obtains a different value for C.
Adaptation of a free energy inequality
The proof of (4.8) can be written exactly as the proof of Theorem 4.9 in [10], but it requires the result of the next lemma. We introduce the boldface notation 1 to denote the element of S having representatives in S 0 of the form Similarly, 0 will denote the element of S whose unique representative is the constant zero function.
Lemma 4.3.
For any f 0 ∈ S and n ≥ 1, where δ f0 ∈ P(S) is the unit mass at f 0 . Equality holds if and only if f 0 = 1.
Proof. Fix a representative f 0 ∈ S 0 . Let (η (i) u ) u∈N×Z d , 1 ≤ i ≤ n, be independent collections of i.i.d. random variables with law L η . For 1 ≤ i ≤ n, inductively define f i ∈ S to have representative Observe that when i = n, the first summand in (4.12) is equal to un∈N×Z d un−1∼un f n−1 (u n−1 ) exp(βη (n) un )P (u n−1 , u n ) = 1 D n−1 un∈N×Z d un−2∼un−1∼un f n−2 (u n−2 ) exp(βη (n−1) un−1 + βη (n) un )P (u n−2 , u n−1 )P (u n−1 , u n ) . . . By summing the final expressions in (4.14) and (4.15) to obtain the right-hand side of (4.12), and then clearing the fraction, we see Using the concavity of the log function, we further deduce where equality holds throughout if and only if f 0 (u 0 ) = 1 for some u 0 ∈ Z d . Since the random variable un∼un−1∼···∼u0 exp β n i=1 η (i) ui n i=1 P (u i−1 , u i ) is equal in law to Z n EJP 23 (2018), paper 30.
It follows that with equality if and only if f 0 = 1.
Since (S, d α ) has an equivalent topology to (S, d 2 ) by Proposition 3.8, continuous functionals on the latter space are also continuous on the former. Consequently, the proof in [10, Section 6] given for the SRW case requires no modification.
Geometric localization with positive density
As usual, let f i (·) = ρ i (ω i = ·) denote the probability mass function for the i-th endpoint distribution. We say that the sequence (f i ) i≥0 exhibits geometric localization with positive density if for every δ > 0, there is K < ∞ and θ > 0 such that lim inf As for Theorem 5.3, the proof is equivalent to the one in the SRW case, which the reader can find in [10,Section 7]. | 12,027.4 | 2017-08-11T00:00:00.000 | [
"Mathematics"
] |
Trade, Global Value Chains and Upgrading: What, When and How?
This paper explains how successful innovation systems interact with trade and global value chains (GVC) participation to foster learning and technological upgrading. It conducts an empirical investigation of 74 developing countries for 3 years, 2000, 2005 and 2010, to show that, while some countries manage to trade and export across a large number of technological export categories, many remain embedded in the export of low technology goods with little movement technologically. The analysis looks at why this is the case and what factors account for how firms are able to leverage trade to learn and upgrade in some instances, but not all. The results show that the ability to technologically diversify across export categories is linked to stronger innovation systems, as measured by national capability indicators, such as public R&D investments, scientific publications, intellectual property payments and patents by residents. The results also confirm the rise of several outperforming countries, the emerging economies. We conclude that, in successful, outperforming countries, firms rely on several attributes of the innovation system to leverage knowledge flows within and outside of GVCs to build export capacity and diversify horizontally into new GVCs.
Introduction
Global value chains (GVCs) have become the central mechanism for trade and investment in the world economy today. According to recent estimates, production today is unprecedentedly fragmented and conducted within GVCs, which accounted for 85% of total global trade in 2016 (UNCTADStat 2017). 1 This re-organization of production through GVCs transforms international trade dynamics from operating predominantly at the level of countries to operating between firms, where each firm adds value in a sequential fashion or trades in intermediate products that serve as inputs into final products elsewhere (Flento and Ponte, 2017;Ponte and Sturgeon, 2014). A new actor -the lead firm -upends the production process as we know it in the traditional sense, creating new forms of interfirm relationships along the chain, thus also determining access to international markets, access to technologies and capabilities building (Gereffi, 1999;Pietrobelli, 2008;Pietrobelli and Rabelloti, 2011).
These changes carry profound implications for all countries in general, and developing countries in particular, given the central role of technological change in structural transformation, catch-up and economic development (Johnson and Noguera, 2012;Suder et al, 2015). Although the literature on GVCs has focused on many aspects of this dynamic, including looking at how the fragmentation of production impacts upon industrial organization and employment, the research has predominantly emphasized these results for developed countries (Foster-McGregor et al, 2015). Studies have tended to assume that lead firms generally have positive impacts on other firms that participate in GVCs in terms of enabling them to upgrade and supply products and services to global markets (Gereffi, 1999;Sturgeon et al, 2008). 2 Extending the analogy further, more recent GVC analyses have argued that value chains could present a rare option for local firms and suppliers not only to access new markets but also to access new technologies (Pietrobelli, 2008), identifying different kinds of possibilities for learning (Pietrobelli and Rabelloti, 2011). It has been proposed that GVCs may provide an ideal opportunity for smaller firms in developing countries to specialize in niche product categories, instead of struggling to build capabilities to master entire production systems (Baldwin, 2012).
Although many of these outcomes can be substantiated by evidence, the full range of effects that GVCs can have on countries at different levels of development are yet to be understood. Until now, most work on GVCs in developing countries has taken the form of case studies of firms in different sectors, and the inferences differ, based on the value chain and country in question. 3 However, many of the studies converge on one point, namely, the important role played by national innovation systems in enabling learning not only at the firm level but also at the sector or industry level. For example, studies of GVCs in East Asian countries have found that local firms are able to leverage learning from GVC participation to extract sector-and economy-wide effects (Estevadeordal et al, 2013;Feenstra and Hamilton, 2006;Lee, 2013), but the studies found that the learning effects were made possible mainly because of supportive incountry institutions and cohesive policy frameworks to promote innovation and capabilities building. Other case studies of GVCs that have looked at difficulties for upgrading often conclude the inverse: that reasons underlying why local firms benefit, or fail to benefit, from GVCs are more systemic (see, for example, Baffes, 2006;Gereffi, 1999;Gibbon and Ponte, 2005;Ponte, 2002).
The underlying systemic factors that allow/hinder firms from building capabilities have been studied extensively in evolutionary economics, and, more recently, using the innovation systems approach. Innovation studies have sought to analyze why firms fail to learn, even when 482 Ó 2018 The Author(s) 0957-8811 The European Journal of Development Research Vol. 30, 3, 481-504 exposed to knowledge-based opportunities within or outside the economy, highlighting a range of systemic factors that dictate how firms perform and make use of knowledge from internal and external sources for adaptation, use and innovation. These systemic features of an innovation system derive from strong and supportive institutions (or the lack thereof), which foster inquisitiveness and exploration, learning linkages and capabilities formation at the collective level. By extension, they also dictate how able firms are to absorb technologies and tacit know-how in their day-to-day transactions (Cohen and Levinthal, 1990). Thus, institutions build social capabilities (Gerschenkron, 1962) through the provision of a system of education and the availability of trained labour, as well as technological capabilities (Ernst and Kim, 2002) by providing for public research and development (R&D) institutes, universities and university centres of excellence, among other things (Amsden and Chu, 2003;Ernst and Kim, 2002). Many scholars have also emphasized the linkage building aspects of such institutions, which depend on policy support that promotes institutional cohesion and collaboration, further supporting the emergence of capabilities (Fagerberg and Srholec, 2009a, b).
Viewing these insights from GVC studies and innovation studies as part of a broad, and more traditional, discourse on economic development, the question is: How can trade -and opportunities generated through trade, such as GVCs -promote structural change and sustained economic growth in developing countries? The answer to this question is not easy and calls for an assessment of learning and capabilities building at a more aggregate (macro) level in countries. Recent empirical and theoretical advances from that perspective suggest that what countries export matters (Haussmann et al, 2007;UNCTAD, 2016). However, the export basket of a country is dictated by the presence of support structures that foster the capabilities of local firms to innovate and create complex technological products of the kind that generate manufacturing value added (Balland and Rigby, 2007;Hidalgo et al, 2007).
This article combines the central tenets of all these three approaches to understand how GVCs and national innovation systems interact to shape the ability of firms to create manufacturing value added and export capacity across complex technological categories. 4 We build our argument thus: local technological capabilities are instrumental to the way in which manufacturing value added is generated across sectors in developing countries. These are determined by systemic factors which shape technological capabilities building within innovation systems, such as the capacity of the public research institutes to support industry (as evidenced by public expenditure on R&D), the scientific capacity of institutions to engage in industrial and academic research (as evidenced by scientific and journal publications) and the ability of local firms to engage in R&D and innovation (as evidenced by patents granted to residents or by licensing activities of local firms), among other things. These capability indicators show the strength of the innovation system in which the firms operate (Fagerberg and Srholec, 2008;Oyelaran-Oyeyinka and Gehl Sampath, 2007). When national innovation institutions are strong, the firms are ready and able to absorb technological know-how, assimilate learning processes through interactions, technologically upgrade and add value domestically in all sectors (particularly in manufacturing), which supports structural change. Case studies of successfully industrializing countries exemplify this. In South Korea, for example, in sectors where local firms were able to leverage and benefit from GVCs and facilitate export-led growth, local institutions played a fundamental role. They not only dictated how firms integrate into and benefited from GVCs but they also helped firms channel these learning benefits to create broader sectoral and industrial spillovers or to move into other kinds of production frontiers when GVCs were not entirely conducive (Hobday, 1995;Lee et al, 2017).
In our analysis, we use trade data (trade in manufactured goods classified into technological export categories using the Lall, 2000 classification) of developing countries (as measured by capability indicators) dictate their ability to add value across manufacturing sectors. This approach is somewhat different from the traditional GVC approaches, which use input-output (IO) databases such as the Eora Multi-Region Input-Output (MRIO) for 189 countries; the OECD-WTO Trade in Value Added (TIVA) with information on 63 economies from 1995 onwards, or the World Input-Output Database (WIOD), which covers 43 countries starting from 2000. We acknowledge that trade in intermediate products is not the same as GVC participation; however, we use this approach to understand (1) the interaction between the development of technological capabilities and export capacity in sectors of different technological complexity (see among others, Lall, 1992Lall, , 2004 and (2) the ability of countries to benefit from integration into trade more broadly, including GVCs, than what is being explored in GVC studies. A thorough consideration of issues from a trade and technology perspective, we argue, can provide an alternate assessment of the circumstances under which the beneficial effects of GVCs for learning and technological upgrading can materialize.
The analytical focus is on the manufacturing sector and the dependent variable is manufacturing value added (MVA). The empirical analysis looks at how the national capabilities indicators that determine the strength of the national innovation system explain the existence (or absence) of MVA in different technological sectors in developing countries. ''Understanding GVCs and Upgrading in the Broader Context of Economic Development'' section of this article describes the relationship between GVCs, technological capabilities and economic development, homing in on the key variables relevant to this investigation. In our empirical analysis in ''Empirical Analysis'' section, we construct a dataset of 78 developing countries for all these variables. However, data inconsistencies for these countries prevented the creation of a balanced panel over time. In order to prevent any adverse impacts on the results, we ran the regressions for 3 years as snapshots, namely, 2000, 2005 and 2010, in order to draw conclusions about how and which national capabilities indicators condition the technological export categories that countries sustain over time. Our findings suggest a synergistic relationship between GVCs and the presence of local technological capabilities. ''Results and Discussion'' section discusses the results and ''Concluding Remarks'' section presents the broader implications of our findings.
Understanding GVCs and Upgrading in the Broader Context of Economic Development
Economic development results from structural change in an economy that shifts labor from low productivity activities (such as traditional agriculture) to higher productivity activities (Ros, 2000). This indispensable process, however, is not as simple as it sounds. 'Successful' structural change involves not only diversifying activities but also adopting and adapting existing technologies and climbing the technology ladder by continuously upgrading production structures in key sectors of manufacturing (Amsden, 2001;Gerschenkron, 1962).
In classical economic literature, manufacturing is considered crucial to building capabilities because it promotes cumulative causation that reinforces and increases the pace of economic growth (Hirschman, 1958;Myrdal, 1957). 6 However, some manufacturing sub-sectors are better suited than others to build and sustain the technological capabilities of the kind required to promote diversified production structures than others (Kaldor, 1981;Lall, 1992;Pavitt, 1986;Prebish, 1950). Particularly, when learning takes place in manufacturing sub-sectors that call for design and engineering activities -which is mostly in those sectors that are classified as 484 Ó 2018 The Author(s) 0957-8811 The European Journal of Development Research Vol. 30, 3, 481-504 medium-technology sectors -it forms the basis of a more virtuous cycle of technological change, prompting synergies and spillovers in a broader spectrum of manufacturing activities in the local economy (Hobday, 1998;Nelson, 1993). The learning accumulated in these sectors can be used to improve and technologically upgrade existing production capacity in lowtechnology domains, while serving as building blocks to move into more high-technology product categories. Clearly, for developing countries seeking to promote knowledge accumulation, generating learning in such sub-sectors that create the base for sectoral diversification is highly relevant. Over time, steeper learning curves in such sectors, along with rapidly falling costs and growing market shares, lead to economic catch-up (Cimoli et al, 2006). 7 The task of achieving such synergies in manufacturing sub-sectors in developing countries in light of expanding trade and GVCs is not easy, and at least two important issues arise. First, as Felipe (2010) proposes, there is a 'proximity' in trading relationships, where countries with similar capabilities, technologies and infrastructure are likely to manufacture similar products, thus increasing the possibility that they crowd each other out. Second, exports facilitate technological diversification depending on current specialization patterns of countries: When a country is specialized in sectors that have synergies for learning and technological upgrading, it finds it easier to enter new sectors and industries by trading up (Haussmann and Klinger, 2006).
GVCs and Upgrading
The GVCs approach offers many insights into how countries can target opportunities in specific sub-sectors to learn and upgrade. There is a wealth of evidence showing that, when firms in developing countries integrate into existing trading patterns, they have ample leeway to move horizontally into other sectors (that demand a similar level of technological intensity), vertically into technological intensive sectors, or stay put in the same sector (Cirera and Maloney, 2017;Taglioni and Winkler, 2016). The approach also considers the notion of upgrading at length, but mainly in the context of the 'governance' of chains, which refers to the kinds of relationships that develop in the value chain and the power relationships they entail. As studies highlight, governance of value chains is the critical aspect that affects market access, determines the fast track acquisition of production capabilities, dictates the distribution of gains, and often also suggests various policy entry points to change GVC-related outcomes (Humphrey and Schmitz, 2002). In general, five key forms of GVC governance have been identified -market, modular, captive, relational and hierarchical (see Gereffi et al, 2005) and a wide number of other studies expand on these modes (see, for example, Ponte and Sturgeon, 2014). Humphrey and Schmitz (2000, pp. 3-4) provide the most basic template for classifying upgrading within GVCs: process upgrading, product upgrading and functional upgrading. While process upgrading involves minor changes, product upgrading (changing the production of new products) and functional upgrading (adding new functions within the GVC) require greater capabilities on the part of local firms (Bazan and Navas-Aleman, 2004). A fourth form of upgrading -interchain upgrading (introduced more recently in the approach) -offers the possibility of a firm upgrading its products to move into an associated value chain (Pietrobelli and Rabelloti, 2011).
Strictly speaking, these forms of upgrading cannot be mapped on a one-to-one basis to the processes underlying technological change and do not necessarily conform to the notion of technological upgrading. However, GVC studies provide evidence of successful cases that show how GVCs open up several avenues for technology transfer. In these cases, GVCs enable local firms to enter into certain production networks that open them up to new business practices, management methods and organizational skills, in addition to promoting day-to-day technological change within firms (Gereffi and Fernandez-Stark, 2010a, b;Hernandez et al, 2014).
More recently, there have been efforts to link the discussion on governance modes to that on upgrading in the GVC literature. Pietrobelli and Rabelloti (2011), for example, link the different forms of governance with differential upgrading prospects for developing countries, arguing that modular and relational GVC governance forms may open up wider opportunities for technological upgrading when compared to captive or hierarchical GVCs that are widely found in the commodities or low-technology sectors.
However, while these insights might help explain some aspects of what happens when firms are inserted into particular value chains depending on the sector in question, not all insertions into GVCs carry positive outcomes for learning and technological upgrading, for a variety of reasons (Morrison et al, 2008). Furthermore, intangible knowledge protected through intellectual property rights is increasingly becoming an invaluable asset in value chain governance, helping lead firms to maintain advantages and gain larger shares of the revenue on a consistent basis (WIPO, 2017). Therefore, it seems plausible that, while some local firms manage to upgrade, others will lag behind and even face marginalization and exclusion within existing GVCs (see Gibbon and Ponte, 2005, p. 138).
All these reasons suggest that a narrow view of GVCs is not enough to tell the entire story. In fact, the difficulty in explaining many of these outcomes in a clear way has led many scholars to question the traditional, rather 'linear', paradigm of GVCs, arguing that many such processes are actually non-linear in nature (Horner and Nadvi, 2018). This is particularly true when viewed from the perspective of developing countries, where there is a need for a more structured discussion on how learning through GVCs can be promoted on a routine, systematic basis, rather than leaving it to the mercy of market outcomes.
The Relevance of a Technology Capabilities Perspective
How firms expand, learn, technologically upgrade and prosper within GVCs is not just a matter of the GVC itself. Rather, learning occurs as a result of the dynamic interactions between the firm and the value chain on the one hand and the firm and its innovation system on the other. The innovation system is instrumental in creating technological capabilities that shape the ability of actors to master and use existing technologies to carry out routine tasks, and to create new products and processes. These capabilities are what dictate learning and allow actors to innovate. Therefore, although the firm is the locus of innovation, it relies on social capabilities which are created by the system of education (especially at the tertiary level) and supportive policy regimes, and on technological capabilities, which are determined by sustained public R&D, the scientific capacity of institutions and the innovation potential of the economy.
As a result, although a firm's performance is ultimately linked to its own technological efforts, it is shaped by the technological capabilities available within the innovation system in general. Technology and innovation studies have created several useful taxonomies for technological capabilities that look at firm-level capabilities Pavitt, 1993, 1995;Lall, 1992Lall, , 2001Pavitt, 1984). There has been a parallel effort to create capability indicators that can measure institutional strengths of national innovation systems Srholec, 2008, 2009a, b;Kim, 1997).
In our analysis, we use Kim's (1997) notion of technological capabilities which account for a successful and supportive national innovation system, namely, the quality of a country's science base (as measured by publications), R&D investments (as measured by public expenditure on R&D) and patents and trademarks (as measured by intellectual property payments or by patents granted to residents). These capabilities foster collaborative learning in the innovation system, enable firm-level technological change and upgrading and support the diversification of production structures by facilitating continuous product or process improvements that generate MVA.
Hence, the evolution of a country's exports -whether through trade or GVC participationwill equally depend on the local support given to firms to develop their technological capabilities, as it does on international technological progress, competition or collaboration with foreign firms (Lall, 2000). If over time there is a 'deepening' of national technological capabilities, then we should be able to see two kinds of outcomes: Firms will upgrade technologically within existing activities (producing better quality products), and firms will move to new sectors or technologies with more complex activities (Lall, 2000, p. 5). In this process, if national capability indicators support the assimilation of ''increasingly complex technologies that are mastered to international levels of efficiency'', this helps create intra-and inter-sectoral externalities (Cassen and Lall, 1996, p. 331) within economies.
Empirical Analysis
The Data The dataset used in this article relies on three different databases: the United Nations Statistical Division (UNSD) National Accounts Main Aggregates Database, the United Nations Conference on Trade and Development (UNCTADStat) and the World Development Indicators (WDI) Database.
The UNSD database was used to compute the dependent variable, total MVA. Trade in manufacturing exports and imports according to technological intensity were derived from UNCTADStat based on the Lall classification (Lall, 2000). 8 The capability variables used in the analysis come from the WDI database of the World Bank (see Table 1).
The analysis considers developing countries (including least developed countries, hereafter LDCs) 9 and contains information for the years 2000, 2005 and 2010 in constant USD.
Model Specification
As it is not possible to construct a balanced panel that contains the same variables for all developing countries for the entire time period, we constructed a dataset for 78 countries for 3 years, 2000, 2005 and 2010. We ran four regressions per year t, namely, (1) a regression including all developing countries after controlling for outliers (n = 74); (2) a robust regression (n = 73); (3) a regression excluding outperforming developing countries from the sample (n = 65); and (4) a robust regression excluding outperforming developing countries. The same procedure was used for year t +5 and year t +10 .
The model follows the form: where Yit represents the observed value of MVA for country i in year t in constant USD (base year 2005), X 1it denotes manufacturing export variables with different levels of technological intensity for country i in year t in USD, X 2it is the value of manufacturing import variables with different levels of technological intensity for country i in year t in USD, X 3it is the value of capabilities indicators with different levels of technology for country i in year t, U it is a random error term. Lastly, b 1it -b 3it are the coefficients for year t capturing the influence of the different explanatory variables on the endogenous one, while b 0t is the intercept of the model in year t.
MVA, the dependent variable, was measured as the net output of country i after adding up all outputs and subtracting the intermediate inputs invested into production in constant USD (base year 2005). This variable was divided by GDP to control for country-size effects (see Table 1).
The explanatory variables consist of trade in manufactured goods classified into technological export/import categories using the Lall classification (Lall, 2000) and four capabilities indicators: patents of residents, scientific and technological publications, R&D expenditure and intellectual property payments. Each trade variable related to manufacturing exports and imports with different levels of technological intensity was divided by real gross domestic product (GDP) to control for country-size effects. This method was chosen instead of including real GDP as a variable in the regression because it allowed us to factor the level of development of each country into the sample more effectively. Capabilities variables, represented by the number of journal publications and number of patents by residents are presented in their logarithmic form to reduce skewness and improve normality, and R&D expenditure is presented as a percentage of GDP. As suggested in Table 1, we expect a positive relationship between exports of manufactures with different levels of technological complexity and MVA. The same positive relationship is expected between our dependent variable and our capabilities indicators.
Identification of Outliers
Several data points were located far outside the mean of the group. To identify these data points, which are observations with large residuals that affect the dependent variable value in an unusual form, we first calculated the leverage by standardizing the predictor variable to a mean equal to zero and a standard deviation equal to one. Given that the influence of an observation is dependent on how much the predicted scores for other observations would differ if the observation in question was not included, we used a Cook's D to calculate this influence, as those points with the largest influence produce the largest change in the equation of the regression line (Altman and Krzywinski, 2016;Cook, 1979). In particular, to identify potential outperformers, we applied the following expression for country i in each of the considered regressions: where n is the number of countries and k the number of regressors. After repeating this exercise for all 3 years and analyzing the outliers, we found four atypical observations that we did not include in the simple regressions for the years 2000, 2005 and 2010, thus limiting our sample to 74 observations. 10 In the robust regressions, we found an additional atypical observation, making our sample size 73 for all 3 years. 11 When comparing regressions with and without these outliers, we found no major changes in the results. Due to the reduced sample size, no causality test was performed in the analysis; therefore, any interpretation of the results should carefully consider causality running on both sides. Moreover, the reduced sample size could also account for the low statistical power of the regression, as well as the low statistical significance of some variables, with this being a potential limitation of our analysis. Table 2 presents descriptive statistics (i.e., the number of observations, mean and standard deviation) for the dependent and explanatory variables used in the study for the years 2000, 2005 and 2010. The observations used in the regression in their transformed state correspond with the number of observations presented in this table. Columns (2) and (3) of Table 2 present the statistics for all developing countries for the year 2000. Columns (4) and (5) refer to the year 2005 and columns (6) and (7) to the year 2010. The standard deviation for all variables does not show a large spread of the data with respect to the mean (i.e., less than 3 times the mean). Columns (2), (4) and (6) We found a large correlation 12 between the dependent variable and exports of L2 and M3, and the number of publications in years 2000 and 2010. Additionally, a large correlation was found between exports of M2 and MVA in 2010. A moderate correlation 13 was also observed Mean (2) SD (3) Mean (4) SD (5) Mean (6) SD (7 between the dependent variable and exports of M2 and patents by residents in all years. The rest of the variables showed a smaller level of correlation 14 with MVA in all years.
Descriptive Statistics, Correlations and Multicollinearity
Expecting relationships between the variables used in the regression, we ran multicollinearity tests with all the variables in our sample before proceeding with the analysis. Our results indicate high levels of multicollinearity among certain variables that would affect the results of the regression if included. This is the case in particular with imports and exports of hightechnology manufactures: electronic and electrical (H1). This variable is highly correlated to imports and exports of medium-technology manufactures, particularly those related to engineering (M3). Therefore, we excluded H1 for both imports and exports from the analysis.
Results and Discussion
This section shows the results of estimating Eq. (1) for the three years considered 2000, 2005 and 2010 as explained in the previous section. Despite having eliminated outliers from our analysis, we found some countries consistently outperforming in the sample. Considering it important to check how the conclusions of our analysis are affected by the outperforming developing countries, we identified the outperformers in our sample (Table 3), a list that fits neatly with the discussions on emerging economies in the current academic and policy thinking. Graphic analysis that plots the performance of these countries across different export categories (not reported here but available from the authors on request) shows that these countries have managed to sustain export levels and maintain MVA in several technology export categories in a sustained way. For this reason, the analysis was performed for all the developing countries in our sample, including and also excluding the outperformers. Table 4 presents the results of the regressions performed for both samples. 15 Columns (1), (3) and (5) correspond to a simple regression for the years 2000, 2005 and 2010, respectively. Columns (2), (4) and (6) present the robust regression for these years. The regressions excluding the group of outperforming developing countries (as identified in Table 3) are presented in columns (7) The results show that exports of low-technology manufactures (L1), that is, exports of textiles, garments and footwear, is the only category of exports positively and significantly associated with MVA in 2000, 2005 and 2010 for all developing countries. This holds true both with and without the outperforming developing countries. A graphic analysis of this (5) (8) relationship (not reported here) shows that China is the only country that increased its volume of exports of low-technology manufactures while maintaining almost the same level (with a slight decrease) of MVA in both years. In 2000 and 2005, exports of manufactured goods in the low technology category of other products including office equipment and stationery (L2) were positively and significantly associated with MVA, both with and without the outperforming developing countries. However, the relationship between these two variables although positive, was not significant for all countries in 2010 both with and without the outperformers.
The results also show that, in the year 2000, there was a negative relationship between MVA and exports of manufactured goods in the medium-technology automotive category (M1), both with and without the outperforming developing countries. This relationship, however, became positive and significant (at 10%) in 2010 for all developing countries in the sample, both with and without the outperforming countries, indicating that countries exporting manufactured goods in the automobile sector exhibited a greater level of MVA in 2010 than in 2000. This result suggests that learning and technological upgrading took effect not only for the group of identified outperformers but also for the mean of all developing countries analyzed in our sample. The evolution in the assembling operations characterizing the automobile industry in many developing countries is critical in explaining this change, as analyzed by a number of innovation and industrial organization studies of the automobile sector in recent years. These studies note that inbound firms have undergone generational changes to assembly operations of a kind involving more local research and development (Doner et al, 2014;Vallejo, 2010).
There is a negative and significant relationship between MVA and exports of manufactured goods in the medium-technology category of process technologies (M2) in 2000, both with and without outperformers. In 2005 and 2010, this relationship remained negative but not significant, once again both with and without the outperformers. This indicates that a critical technological sector -M2 -which could serve as the backbone of diversified production structures may not be developed/supported sufficiently enough to facilitate MVA. The continuation of the trend in 2010 lends support to the conclusion that national capability indicators are not strong enough to support this kind of learning in many countries in our sample.
Exports of manufactured goods in the medium-technology category of M3 (i.e., engineering technologies) of a broad spectrum that are critical for diversification was associated positively in 2000 for all developing countries, excluding the outperformers, and in 2010 for all developing countries in the sample, but only when outperformers were included. This suggests that, compared to 2000, developing countries in general seem to have lost ground in MVA exports of engineering technologies (M3) to the outperformers, who emerged by 2010 as leading the sector in generating MVA. The growth of this sector, which is critical for diversified production structures (in addition to M2), seems to explain the rising competitiveness of the outperforming countries between 2000 and 2010.
With respect to imports, none were significantly associated with MVA for all countries in 2000, 2005 and 2010 except for the imports of medium-technology engineering products (M3), which show a negative relationship with MVA in 2010 for all countries in our sample. This indicates that developing countries importing this type of products demonstrated lower MVA at a significant level over time, suggesting that it might be both the result of, and leading to, lower learning and capabilities formation in their economies. These results are supported by other studies that suggest that, as countries acquire more and more ready products, particularly those products demanding substantial engineering skills, they do not present significant learning and technological upgrading possibilities and also eliminate several local firms actively engaged in producing such products, thereby deskilling (UNCTAD, 2013(UNCTAD, , 2016UNECA, 2014).
We assessed the role of the national capability indicators in the performance of countries in our sample. Our analysis shows that there was no significant relationship between R&D expenditure (which signifies the capacity of public sector R&D) and the capacity to generate MVA in all technology categories in all countries in our sample. The number of scientific publications (which serves as an indicator of scientific capacity) was positively associated with MVA in 2000 for all developing countries with and without outperformers, and in 2005 for all developing countries without outperformers, but by 2010 it was not significant for MVA, also indicating a gradual weakening of innovation system support structures of this kind.
The two variables that were more closely related to firm-level efforts were significantly associated with MVA, indicative of a situation that is common in many developing countries where firm-level performance is not always supported and bolstered strongly by local institutions. This reinforces the argument that more support from the national innovation system to the firms could help strengthen their performance further. The IP payments variable (which denotes the ability of local firms to license existing IPRs from foreign firms) was positively and significantly associated with MVA in all developing countries excluding outperformers in all years. This, however, also shows that there is extensive reliance on foreign proprietary technologies. The variable patents by residents had a significant and positive relationship with value added in manufacturing in 2010 for all countries with and without outperformers, signalling the importance of local firm-level R&D in generating MVA in all countries in the sample.
Concluding Remarks
This article has combined the central tenets of the GVC approach, the innovation systems approach and the traditional discourse on economic development to analyze how technological capabilities as shaped by national innovation systems impact the ability of countries to trade -and by extension participate in GVCs -in ways that facilitate learning, technological upgrading and the generation of MVA. We have analyzed the situation from the perspective of trade with a view to complement existing GVC approaches. Our empirical results indicate that developing countries in our sample are not generating MVA across technological export categories other than category L1 (low technology) and M1 (automobiles). The analysis also identified several outperforming developing countries (coinciding neatly with the emerging economies), which account for most of the MVA in L1 and M1 categories and also exhibit technological diversification into the M3 category (design engineering products) by 2010.
We note two caveats. First, we were not able to use panel data for the entire period, which will be an avenue for future exploration. Second, the findings should be interpreted cautiously given that the sample consisted of 74 (and 65 when excluding outperformers) developing countries. Nevertheless, the analysis leads us to the following general conclusions that deserve further research. First, while countries are integrated into trade and GVCs based on their static comparative advantages, in the countries under consideration there has been a change in capacity to generate manufacturing value added, moving away from those sectors that are seen as critical for capabilities building (M2, which signifies exports in process technologies and M3, which signifies exports in design and engineering products) in the literature on learning and industrial catch-up. While M3 exports shifted from developing countries as a whole to the outperformer countries alone by 2010, there has been a general shift away from MVA in the M2 category for all countries in the sample, including the outperformers.
Second, we find a weakening relationship between capabilities indicators of the innovation systems (both public-sector R&D, as captured by public R&D expenditure, and scientific skills and capacity, as captured by scientific publications) and the capacity to generate MVA in exports in all countries over time. This suggests that national innovation systems need to be further strengthened and aligned more closely with firm-level needs in developing countries in order to better support them to trade and participate in GVCs in a beneficial manner. Patents by residents are positively and significantly associated with MVA in all countries with and without outperformers, showing the relevance of firm-level R&D efforts in the diversification process. In all developing countries in the sample, there is a positive and statistically significant relationship between IPR payments (which denotes the ability of local firms to license existing IPRs from foreign firms) and MVA, once again showing the technological dependence of the countries on proprietary technologies.
Third, our analysis corresponds accurately with the rise of the emerging economies globally: there is a significant overlap between the countries that are outperformers in our sample and those denoted as emerging economies in the wider literature (UNCTAD, 2012). Finally, our analysis points to the critical role of national capabilities in accounting for how countries benefit from trade and by extension, participation in GVCs. The gradual delinking of export value added from learning is predominantly linked to weak national innovation systems linkages in developing countries and a very worrisome trend. Coupled with the fact that institutional responses have been slow to enable firms to deal with export pressures in certain sectors, and that weak public support for innovation persists (exacerbated by the financial crises of 2007-2008), we conclude that technological upgrading in and through trade and GVCs can be understood and promoted only when considered in conjunction with national capabilities indicators. More work in this direction is required to study how countries can effectively address these issues. | 8,820 | 2018-06-27T00:00:00.000 | [
"Economics"
] |
Filter Pruning Without Damaging Networks Capacity
,
I. INTRODUCTION
The deeper and wider structure makes deep convolutional neural networks perform well in a variety of computer vision tasks, such as semantic segmentation [1], object detection [2] and image classification [3]- [5]. However, its over-parameterized design has led to a huge amount of parameters and expensive computational consumption. For example, VGGNet-16 [4] has 15M parameters and requires 313M float point operations (FLOPs) [6] to process a color image of size 32 × 32. It is difficult to deploy CNNs on resource-constrained devices, such as mobile devices. Therefore, the recent optimization trend of deep convolutional neural networks is to reduce its parameters and computational consumption, and at the same time ensure its performance, so that CNNs can be deployed on resource-constrained devices.
In recent years, many methods have been proposed to compress and accelerate CNNs. These methods can be roughly divided into networks pruning [6]- [15], low-bit quantification [16]- [20], knowledge distilling [21]- [23] and matrix The associate editor coordinating the review of this manuscript and approving it for publication was Zijian Zhang . decomposition [24]- [26]. Networks pruning is one of the most popular fields and has been widely studied. This paper also focuses on it. Reference [7] finds that there are some parameters that have little influence on the final accuracy and can be pruned. Reference [8] proposes a method of weight pruning based on a threshold. However, these methods of weight pruning may cause unstructured sparsities and require additional sparse matrix operation libraries or even specific hardware devices. Therefore, filter pruning is more widely studied. Reference [9] proposes a global greedy filter pruning method, which uses 1 -norm to evaluate the importance of filter. After analyzing the sensitivity of each layer to pruning filters, the filters with smaller 1 -norm will be pruned globally at a time. Reference [12] rethinks the norm-based criterion for filter pruning and proposes a filter pruning method via geometric median to prune redundant filters which are close to geometric median, rather than those less important filters.
As shown in Fig.1, there are some redundant feature maps in the convolutional layer, and these feature maps can be approximately generated from other feature maps. Among the above-mentioned pruning methods, only the filter selection method of [12] considers the similarity in feature maps. This method cannot actually maintain the model capacity, although it still remains pruned filters which are set to zero in the networks for training. In this paper, we propose a filter pruning method without damaging model capacity. Our method can be described in the following steps. 1) The filters that are more replaceable are selected in each convolutional layer. The feature maps generated by these replaceable filters can often be approximately transformed by the remaining feature maps. 2) The selected filters are pruned, and to maintain the original networks capacity, the feature maps corresponding to the remaining filters are treated as the original feature maps to generate new feature maps with lighter group convolution [5].
3) The pruned networks are retrained to restore its accuracy.
The contributions of this paper are as follows. 1) We pay more attention to the damage by filter pruning to the model capacity and propose a method of maintaining the integral capacity of the model when pruning filters. 2) We combine filter pruning with lightweight networks structure design to compress and accelerate deep convolutional neural networks for the first time. 3) Experiments on two benchmark datasets demonstrate the effectiveness of our method. Compared with previous methods, our method achieves the state-of-the-art results.
II. RELATED WORKS
In order to apply deep convolutional neural networks in actual production, many studies have focused on balancing the computational consumption and accuracy of the models. Reference [16]- [19] compress the original network by reducing the number of bits needed to represent each weight. Reference [17] proposes an incremental networks quantization method that can convert a full-precision float point neural networks model of any structure to a lossless low-bit binary model through three independent operations consisted of weight division, group quantization and retraining. To reduce the parameters and computational consumption, [24]- [26] utilize low-rank matrices to approximate the weight matrix in a neural networks. Reference [21]- [23] utilize large teacher networks to supervise small student networks for training to achieve networks compression. According to the similarity in feature maps in the convolutional layer, [27] builds the ghost module which generates virtual feature maps by cheap linear operations and build compact GhostNet model with ghost modules.
Reference [11]- [13], [28] utilize networks pruning to prune redundant weights and filters for compressing and accelerating CNN. Reference [28] treats pruning as an optimization problem to find weights that minimize the loss and satisfy a pruning cost condition. Reference [11] utilizes spectral clustering to classify the filters and prune the filters according to the importance of the categories. To identify insignificant channels, [13] applies L 1 regularization to the scaling factor of the batch normalization (BN) [29] layer. Reference [12] analyzes traditional norm-based criterion for evaluating the importance of filters and proposes a method to prunes the filters that can be replaced via geometric median, but not less important filters.
III. METHODOLOGY A. PRELIMINARIES
We will introduce symbols and notations in this subsection.
where W i represents the weight matrix connecting the i th and i + 1 th convolutional layers and N i , C i , K , L represent the number of output channels, the number of input channels, the kernel size of filters and the number of network layers respectively. F i,j (1 ≤ j ≤ N i ) represents the j th filter of the i th layer, and the dimension of filter F i,j is R C i ×K ×K . We assume that input feature maps of i th layer is X i ∈ R C i ×H i ×W i , where C i is the number of input channels and H i and W i are the height and width of the input feature maps. Y i ∈ R N i ×H i ×W i is output feature maps of i th layer.
B. FILTER SELECTION
We can find from Fig.1 that some feature maps generated by the convolution layer are similar. If we prune some of these similar feature maps, the pruned feature maps can roughly recover from the remaining feature maps. Therefore, we select the most replaceable filters to prune, which is similar to [12].
We utilize the Euclidean distance to evaluate the similarity between the two filters. The smaller the distance is, the more similar the feature maps corresponding to the two filters are. We calculate the sum of Euclidean distances from each filter to all other filters in each layer as the evaluation criterion.
(1) (1) represents the sum of Euclidean distance from j th filter to all other filters in i th layers. The filter F i,j with small VOLUME 8, 2020
Algorithm 1: Algorithm of Filter Selection
Calculate the sum of Euclidean distance from 3 F i,j to other filters according to (1) end 4 Find N i × P filters that satisfy (2) 5 end 6 Output: The mask matrix of filters Sum i,j is selected to prune. (2) represents the selected filters in i th layer. The filter selection methods are summarized in Algorithm 1.
C. FILTER PRUNING AND RECONSTRUCTION
According to the filter selection method of III-B, we perform filter pruning globally on all convolutional layers at one time, and introduce group convolution in the pruned convolutional layer to generate new feature maps, as shown in Fig.2. We make the number of feature maps generated by pruned model in each layer same as that of the original model. In Fig.2, ''Conv'' represents common convolution operation, ''BN'' represents batch normalization, ''Relu'' represents nonlinear activation function, ''Identity'' represents identity mapping [30], ''Group Conv'' represents group convolution, ''Concatenate'' represents dimensional concatenation. (a) is the original convolutional layer. Fig.2(b) and Fig.2(c) are the structure of reconstructing pruned feature maps. The difference between Fig.2(b) and Fig.2(c) is the order of Relu. Experiments in IV show that Fig.2(b) performs better than Fig.2(c) and the comparison of the performance of them at different pruning rates is shown in Table 1. In the case of maintaining the original model capacity, fine-tuning the pruned model can restore its accuracy easily, and even exceed the original accuracy. Our method is summarized in Fig.3.
In Fig.3, the blue matrix is pruned feature maps and the yellow matrix is residual feature maps. On one hand, the new feature maps are generated from residual feature maps and are shown as green matrix. On the other hand, the residual feature maps concatenate with the new feature maps to be the final output which has same channels with original output.
D. ANALYSIS ON THEORETICAL ACCELERATION AND COMPRESSION
We apply Algorithm 1 to select filters, and the method of Fig.3 is applied to prune filters and reconstruct feature maps. The group convolution shown in Fig.2 is applied to reconstruct pruned feature maps. According to the description of group convolution in [5], the parameters and FLOPs of the group convolution are 1/g of the common convolution, where g is the number of groups. In fact, g can influence acceleration rate and the bigger g is, the bigger acceleration rate becomes. Moreover, g must be divided by input and output channels of the group convolutional layer. We assume that pruning rate is P. N i × P is the number of pruned filters and N i × (1 − P) is the number of remaining filters. In order to achieve the maximum acceleration rate, We set P to make one of N i × P and Ni × (1 − P) can be divided by the other. Therefore, Maximum of g can be calculated as Kernel size, strike and padding of the convolution operation are same with those of original convolution layer to ensure that the size of the feature map generated by the group convolution is the same as that of the original output. Inspired by [27], we set kernel size of group convolution as K , which is same as the kernel size in VGGNet and ResNet. In the i th layer, the FLOPs of original model can be calculated as The FLOPs of pruned model can be divided into two sections including primary convolution and group convolution. we can calculate the FLOPs of primary convolution as The input and output channels of group convolution are N i × (1 − P) and N i × P respectively. The FLOPs of group convolution can be calculated as where g is demonstrated in (3), K = K and P N i . The theoretical speed-up ratio of pruned model can be calculated as Similarly, the compression ratio can be calculated as where W o , W p and W g represent original convolutional parameters, primary convolutional parameters and group convolutional parameters respectively. They can be calculated as Finally, the compression ratio is
E. HANDLING CROSS LAYER CONNECTIONS
The methods in III-B and III-C can be directly applied to plain CNN architectures such as VGGNet. However, some adaptations are required when it is applied to the networks with cross layer connections such as ResNet. For these networks, the structure of reconstruction is designed as Fig.4. In Fig.4 (b), group convolution generates new feature maps based on the remaining feature maps after pruning to ensure that number and size of the output feature maps are the same as those of the original output.
A. DATASETS AND SETTING 1) CIFAR [33] It is widely used in the field of image classification as a standard dataset. CIFAR dataset contains 60,000 32 × 32 colored images, with 50,000 images for training and 10,000 for testing. They are labeled for 10 and 100 classes in CIFAR-10 and CIFAR-100 respectively.
2) TRAINING SETTING
The strategies of VGGNet and ResNet baseline training are same as [12] and [34] respectively. All pruned models are VOLUME 8, 2020 retrained for 400 epochs with multi-step learning rate policy (0.1 for the first 200 epochs, 0.01 for the following 100 epochs, 0.001 for the next 75 epochs and 0.0001 for the rest epochs). Stochastic gradient descent (SGD) with momentum [35] 0.9 and weight decay 1e −4 is applied to retrain networks.
3) PRUNING SETTING
The first layer of VGGNet and ResNet networks has a small amount of parameters and low computational cost and it is sensitive to filter pruning, which is analyzed in [9]. Therefore, we do not prune the first layer. The hyperparameter P is the pruning rate which is same in all convolutional layer. The groups of group convolution in Fig.2 We evaluate the different structure in Fig.2 with VGGNet-16 on CIFAR10 and the results are shown in Table 1. As we can see, the structure in Fig.2(b) performs better than that in Fig.2(c). We will apply the structure in Fig.2(b) to our following experiments.
1) VGGNet-16 ON CIFAR10
We test our method on VGGNet-16 with five different pruning rates: 0.125, 0.25, 0.5, 0.75 and 0.875 and compare with other methods. Table 2 shows the results. When pruning rate is 0.5, our method can achieve comparable performances comparing with previous methods with 49.5% FLOPs pruned. More importantly, our method can achieves 11.8% speed-up ratio with even 0.36% accuracy improvement, which achieves the state-of-the-art results.
2) ResNet ON CIFAR10
We test our method on ResNet-56 and ResNet-110 with three different pruning rates: 0.25, 0.5 and 0.75. As shown in Table 3 In addition, our method can accelerate ResNet with relative accuracy improvement.
D. ANALYSIS ON RESULTS
From the Table 2-Table 4 we come to the conclusion that the model accuracy decreases as the pruning rate increases, which is in line with experimental expectations. Compared with other methods on different models, our method can achieve higher accuracy. In addition, we find that our method performs better on CIFAR10 than CIFAR100. Each category in CIFAR10 has more images for training than CIFAR100.
In other words, networks on CIFAR10 can learn more information, to which our method can be better applied.
V. CONCLUSION
In this paper, we find that the previous filter pruning methods are facing the problem of damaging networks capacity.
To solve the problem, we propose a method to prune the redundant filters that are similar with the others and generate new feature maps on the basis of the remaining feature maps with lighter structure to restore the original model capacity.
The experiments show the effectiveness of proposed method. In addition, our method of restoring model capacity doesn't conflict with previous filter pruning methods which can also be optimized by our method. | 3,418.8 | 2020-05-11T00:00:00.000 | [
"Computer Science"
] |
Circuit Analysis and Optimization of GAA Nanowire FET Towards Low Power and High Switching
The main aim of this work is to study the effect of symmetric and asymmetric spacer length variations towards source and drain on n-channel SOI JL vertically stacked (VS) nanowire (NW) FET at 10 nm gate length (LG). Spacer length is proved to be one of the stringent metrics in deciding device performance along with width, height and aspect ratio (AR). The physical variants in this work are symmetric spacer length (LSD), source side spacer length (LS) and drain side spacer length (LD). The simulation results give the highest ION/IOFF ratio with LD variation compared to LS and LSD, whereas latter two variations have similar effect on ION/IOFF ratio. At 25 nm (2.5 × LG) of LD, the device gives appreciable ON current with the highest ION/IOFF ratio (2.19 × 108) with optimum subthreshold slope (SS) and ensures low power and high switching drivability. Moreover, it is noticed that among optimal values of LS and LD, the device ION/IOFF ratio has an improvement of 22.69% as compared to other variations. Moreover, the effect of various spacer dielectrics on optimized device is also investigated. Finally, the CMOS inverter circuit analysis is performed on the optimized symmetric and asymmetric spacer lengths.
Introduction
Increased growth in semiconductor market improved the transistor performance but with the various adverse SCEs. Various attempts to overcome the SCE problem have resulted in radical changes in transistor structural design. Many researchers predict that the vertically stacked nanowire (VS-NW) structure, is future option to drive electronic industry, which has good gate controllability and great packing density, will be the eventual destination of the shrinking transistor. The VS superior NW performance is due to its gate-all-around (GAA) based architecture. Recent research [1] has shown that VS-NWs are capable of balancing outstanding low OFF state and high ON-state properties. However, a very few of these studies have taken into account the VS-NW structure's spacer optimization, which is an unavoidable for sub-10 nm nodes for better gate controllability [2][3][4]. Moreover, to increase device efficiency at sub-10 nm regime JL devices are formed. Simple manufacturing method, junction free nature, low thermal budgets, doping concentration gradient, improved scalability, and immunity to short channel effects (SCEs) are all advantages of JL based devices. [5][6][7][8].
Moreover, the JL devices exhibits better I OFF characteristics due to volume depletion nature. For volume depletion in JL FETs the need of channel thickness less than 10 nm is fundamental and advanced architectures like trigate, vertical super thin body (VSTB), double gate, Gate all Around (GAA) structure is essential [9,10]. Moreover, higher gate work function is also required for full depletion [11,12]. In a VSTB FET, carrier mobility improves for decreased body thickness since carrier transport is predominantly controlled by a single gate [13]. As a result, I ON improves. Furthermore, this novel device is significantly easier to make than SOI FETs [14]. Since JL device use volume conduction phenomena hence improves carrier transit speed and minimize surface roughness through scattering [15].
The NW FETs were proposed as a successor to FinFETs, with a GAA design and improved gate electrostatics. However, because to their limited cross-section area and width constraint, the NWFETs have weak driving. While a large number of nanowires may be employed to increase I D , this also leads to higher parasitic capacitances.
To minimize SCEs for sub-20 nm device's, the introduction of spacers is fundamental [4]. The addition of spacers, on the other hand, enhances series resistance and so minimizes drive current (I ON ) current performance. The flow of gatesource/drain carriers is restricted as the spacer length increases, even at high V DS . High-k spacers increase switching ratios (I ON /I OFF ) by inducing field coupling via the fringing effect [16]. However, in order to achieve superior performance metrics, spacer length should be carefully chosen. Aside from thickness and width, spacer length is also carefully adjusted to improve transistor performance. According to research [14], the effect of drain asymmetry variation reduces leakages by 57 %. The inclusion of spacer reduces leakages mostly due to edge tunneling of carriers. This paper explores symmetric and asymmetric spacer length variation on GAA NW FET at nano regime. Various performance metrics like I ON , I OFF , I ON /I OFF , and SS are analyzed.
The paper is organized as follows. The section 2 describes device physics and device geometrical parameters. In section 3.1 symmetrical dielectric variation of spacer length optimization is performed on SS and I ON /I OFF . In section 3.2 source spacer length is varied (L S ) by keeping drain spacer length (L D ) constant. In section 3.3 drain spacer length is varied (L D ) by keeping L S as constant. Section 4 illustrates the CMOS inverter performance of symmetric and asymmetric spacers.
2 Device Structure and Simulation Methodology Figure 1 depicts the 3-D JL SOI nanowire FET and 2-D view of symmetric spacer. In this paper we have considered 3-D JL SOI VS NW FET to understand the effect of spacer length on device DC performance. The high-k dielectric HfO 2 is used as a spacer material to increase switching performance. Although the use of spacer length improves subthreshold performance, but reduces I ON . As a result, a spacer length with a high-k dielectric is provided to compensate for this impact, increasing I ON by increasing electron flow from source to drain. Furthermore, introducing high-k gate dielectric along with interfacial oxide (SiO 2 ) achieves a lower EOT and better gate electrostatics, the suppression of leakages, and the suppression of random threshold voltage changes [17,18]. The I OFF is maintained <100 pA for all variations with a fixed work function of 4.8 eV. With Titanium (Ti) as the gate metal, continuous and uniform doping is maintained. The device parameters used for the simulation are listed in Table 1.
Due to higher channel doping concentrations Fermi Dirac statistics are activated. Since carrier degradation phenomena are produced by surface roughness, acoustic phonon scattering, and doping dependency mobility reduction, the Lombardi mobility model is taken into account. A band-to-band tunneling model is included to handle the band gap narrowing effect that can occur as a result of increased channel doping. To account for carrier production and recombination events, the Shockley-Read-Hall (SRH) model is used. To account for quantum correction effects, quantum models are used. The threshold voltage is extracted at (W/L) × 10 -7 A at V DS = 0.9 V and V GS = 1.2 V. The simulation models have been thoroughly calibrated using experimental data [19]. The simulations are carried out through 3D Cogenda Visual TCAD simulator [20]. Figure 2 depicts the ON and OFF parameters of the VS NW FET from the TCAD simulator. The VS NW FET demonstrates behavioral change with modification in spacer length, as shown in Fig. 2a and b, with both I ON and I OFF decreasing as spacer length increases. Longer spacers produce good subthreshold behavior but result in a decrease in I ON due to the increased series resistance. Due to downfall in edge tunneling from source to drain and gate overlap results in lowering of I OFF with more spacer distance i.e., higher L SD /L G ratio.
Symmetric Variation of Spacer Length
From Fig. 3 it is observed that the symmetric spacer exhibits highest I ON /I OFF of 2.65 × 10 8 and lower SS of 63 mV/ dec at L SD = 1.5 × L G . Moreover, the device exhibits diminished I ON /I OFF at L SD = 2 × L G and thus removed from design of symmetric spacer perspective.
L S Variation with Fixed L D
The length of the spacer dielectric is asymmetrically altered in this section. The source side spacer length is adjusted while the drain spacer length is kept constant at 1.5 × L G , because the device achieves the maximum I ON /I OFF ratio and mild SS at this symmetric spacer length. From the Fig. 4a and b increase in L S length the I ON decreases at fixed L D length of 15 nm. From the Fig. 5 the I ON /I OFF ratio increases with raise in L S /L G value and reaches to highest value at 1.5 × L G . Figure 5 depicts the asymmetric spacer variation of L S in ON state. The device exhibits highest I ON /I OFF ratio at L S = 1.5 × L G . The I ON /I OFF of 2.6 × 10 8 is obtained at source
L D Variation with Fixed L S
In this section, the same analysis as in section 3.2 is carried out, but the drain side spacer length is altered while the source side spacer length is fixed at 1.5 × L G . The greater the L D /L G ratio, lower the I OFF and I ON , as shown in Fig. 6a and b. The I ON increases with lower L D length. Since higher L D of device leads to higher resistance to electron flow. Moreover, the I OFF which is significant for low stand by power applications is reduced with higher L D /L G value. Since in the OFF state the spacer dielectric fringing fields increase the potential barrier height and restrict tunneling of electrons. From Fig. 7, increase in spacer length the I ON /I OFF ratio increases up to 2.5 × L G and then degrades. The device achieves the highest I ON /I OFF ratio at L S = 15 nm and L D = 25 nm at L G = 10 nm with acceptable SS. Moreover, the device I ON /I OFF ratio falls after 2.5 × L G due to reduced fringing effect with larger L D . Thus, spacer optimization is vital for enhanced performance at nano regime.
Comparison of Source/Drain Side Spacer Length Variation
The performance metrics of L S and L D variation on I ON , I OFF , I ON /I OFF , and SS are compared in this section and displayed in Figs. 8 and 9. Both L S and L D variation on VS NW FET shows contrasting effect on performance metrics. From Fig. 8a and b, both ON current and OFF current decreases with increase in L S and L D . The ON current of L S is higher compared to L D up to 1 × L G , whereas opposite behavior results from 1.5 × L G to 2.5 × L G .
The I OFF of L S is more compared to L D with spacer distance variation. The lowest I OFF for L S and L D takes place at the highest spacer length i.e., at 2.5 × L G whereas, the highest I OFF occurs at lowest spacer distance i.e., 0.2 × L G . Since the highest I OFF occurs at 0.2 × L G , 0.5 × L G and hence they are discarded for device design prospective. The permissible L S and L D values for the device design are 1 × L G , 1.5 × L G , 2 × L G , and 2.5 × L G respectively. However, the best optimized device set is at L S = 1.5 × L G and L D = 2.5 × L G with acceptable SS. Figure 9a and b shows SS and the I ON /I OFF performance of the VS NW FET with both L S and L D versions. The I ON /I OFF ratio diminishes after 2 × L G for L S , whereas it increases up to 1.5 × L G for both L S and L D spacer length variations. So, the maximum allowable range of L S variation is limited to L S = 1.5 × L G to drive device for better switching and low power applications. Figure 9a shows the SS performance of the VS NW FET for both L S and L D variations, with the highest SS value at 0.2 × L G and the lowest value at 2.5 × L G . Except at 0.2 × L G , the device achieves the lowest SS among L S , and L D variations.
Spacer Dielctric Optimization
The Fig. 10a The simulated transfer characteristics (I D -V GS ) with different spacer dielctrics of JL NW FET with symmetric spacer are shown in Fig. 10a. With all spacer dielectrics, the device has an I OFF of less than nA. With spacer dielectrics, however, the I ON varies from 60 A to 75 A. The I D -V GS of asymmetric spacer variation follow the same pattern as symmetric variation, as shown in Fig. 10b. For all spacer combinations, the I OFF of the device with asymmetric spacer is less than nA. With the HfO 2 spacer, the I ON reaches a maximum of 68 A, while with no spacer, it reaches 54 A. According to the results, a rise in the 'k' value causes a decrease in the I OFF . Stronger the fringing fields result in lower I OFF when the 'k' value is higher. Due to the spacer fringing electric fields, the depletion region improves. The p-n junctions form the depletion zone in inversion mode FETs, whereas energy barrier generation owing to depletion in the OFF state occurs in JL devices. The subthreshold current decreases as the spacer dielectric value increases because of high vertical electric field at V DS = 0.9 V and V GS = 0 V i.e., in the OFF state. Furthermore, the I D is marginally affected in the ON state due to the zero electric field induced by the flat band situation. In comparison to Air and SiO 2 spacers, the HfO 2 followed by Si 3 N 4 spacer has good switching behavior and a lower I OFF at nano-regime. As a consequence of the analysis, high-k spacer dielectrics such as Si 3 N 4 and HfO 2 excel with better subthreshold and switching behavior at nano-regime, ensuring potential candidate for low-power applications [21] The I ON for a device is calculated at V DS = 0.9 V and V GS = 1.2 V whereas, I OFF is calculated at V DS = 0.9 V and V GS = 0 V. As seen in Fig. 11a, the I ON is much lower with the asymmetric spacer than with the symmetric spacer. HfO 2 has the smallest I ON decrease of all the spacer combinations, at 11.24%. The Si 3 N 4 spacer and no spacer materials had a 13.26 percent and 15.8 percent drop, respectively. Because higher fringing fields with a high-k spacer diminish the I ON decrement with asymmetric spacer compared to a low-k spacer, the I ON decrement with asymmetric spacer is minimized. The I OFF for various spacer dielectrics is shown in Fig. 11b. Although symmetric spacers improve I ON , asymmetric spacers diminish direct tunneling of electrons in the OFF state due to the greater distance between the channel and drain. The I ON /I OFF ratio of a device with varied spacer dielectrics is shown in Fig. 11c. With only SiO 2 , Si 3 N 4 , and HfO 2 spacers, the asymmetric spacer has a greater I ON /I OFF ratio than the symmetric spacer. In comparison to Air and no spacer, the symmetric spacer exhibits a modest increase in the I ON /I OFF ratio due to increased I ON and marginal I OFF fluctuation. The negligible difference in I OFF between symmetric and asymmetric spacers for Air and no spacer is attributed to ineffective leakage control due to decreased dielectric fringing fields. Furthermore, the asymmetric spacer aims to improve the I ON /I OFF ratio while lowering coupling and parasitic capacitances [22,23]. With HfO 2 spacer, the asymmetric spacer improves the I ON /I OFF ratio by 19.6% and reduces I OFF by 34.13% when compared to the symmetric spacer. Furthermore, as seen in Fig. 11d, the performance of SS is poorer with an asymmetric spacer. Although the I ON is The electric field on the channel region of the symmetric spacer is higher than that of the asymmetric spacer, as shown in Fig. 12a and b. Due to larger distance between the channel and drain in the asymmetric spacer the electric field lines are minimized into the silicon and thus enhanced tunneling width. Figure 12c and d demonstrate the potential distribution of JL nanowire FETs with symmetric and asymmetric spacers. Because of the long distance between channel and drain, an asymmetric spacer ensures lower SCEs. Figure 13 depicts the I D -V GS characteristics of both NMOS and PMOS with optimized symmetric and asymmetric spacers. The gate length L G = 10 nm, EOT (high-k + SiO 2 ) = 0.75 nm, Si channel thickness = 10 nm, and HfO 2 as spacer material have all been maintained same as in NMOS. The design for PMOS symmetric spacer is L S = L D = 15 nm and L S = 15 nm and L D = 25 nm for asymmetric spacer, which is maintained same as NMOS. The V th is matched for both NMOS and PMOS by work function engineering. The SS and DIBL of symmetric and asymmetric spacers NMOS are depicted inside Fig. 13. Furthermore, the delay performance is calculated by CMOS inverter as shown in Fig. 14.
CMOS Inverter Performance Analysis
The CMOS inverter delay (T D ) is calculated using the effective drive current model, such as in equation 1 [24], where I EFF is the effective drive current, C L is the load capacitance, and V DD is the supply voltage of the first stage inverter at the output node.
The evaluation of C L is carried through parasitic first stage output and input capacitance of second stage as (2) and a value of 1.5 is considered for miller coefficient (M) [24]. The C IN2 is calculated by using the weighted distribution of NMOS and PMOS during input transitions of the OFF and ON-state capacitances. During the output-fall transition to 0.5V DD , the transistor P2 remains ON while N2 switches from OFF to ON. As a result, the OFF to ON ratio of 0.25: 0.75 [24,25] is utilized to calculate C IN2 (3).
Where [25], and are taken from the individual I D -V GS characteristics. Figure 15 depicts the CMOS inverter delay of symmetric and asymmetric spacer dielectrics. The terms t PHL and t PLH defines the speed of the logic and detrimental in calculating propagation delay (t P ). The symmetric spacer exhibits lower delay compared to asymmetric spacers. Since in asymmetric spacer the L D is 25 nm which is higher compared to symmetric spacer which is 15 nm. Thus, symmetric spacer is better for circuit applications at nano regime [ 27] and [ 28]. However, asymmetric spacer outperforms symmetric spacer in terms of OFF current, subthreshold performance, and good switching behavior. The proposed VS nanowire FET is compared with FinFET, nanowire, and nanosheet FET and their electrical characteristics are presented in Table 2.
Conclusion
In this work detailed study of spacer length has been presented on n-channel SOI JL VS NW FET. According to the performance optimization metrics, spacer length modification has a serious effect on SCE reduction. By result analysis the I ON / I OFF ratio with L S =1.5 nm and L D = 2.5 nm exhibits best performance. Among optimized L S and L D values an improvement of 22.69 % in I ON /I OFF ratio is noticed with L D = 2.5 × L G , whereas the SS is reduced i.e., 63 mV/dec to 62 mV/dec. The highest OFF current with 0.2 × L G and 0.5 × L G for both L S and L D variations are not considered for device design considerations. For L S variations 2 × L G and 2.5 × L G are neglected, whereas for L D the spacer length 3 × L G is not considered since the device I ON /I OFF ratio tends to down fall. From the result analysis with L S = 15 nm L D = 25 nm and high-k spacer for 10 nm n-channel JL VS NW FET shows best optimized results. Hence optimized asymmetric VS NW FET exhibit low OFF current, higher I ON /I OFF ratio and hence assures low standby power requirements and low power applications. Moreover, symmetric spacer exhibits higher I ON and lower delay and assures high performance applications.
Declarations
Consent to Participate Not applicable.
Consent for Publication Not applicable.
Financial Interests The authors declare they have no financial interests. | 4,860.8 | 2021-11-11T00:00:00.000 | [
"Engineering",
"Physics"
] |
General Index and Its Application in MD Simulations
In the long-term practice, it is recognized that the properties of materials are not uniquely determined by their average chemical composition but also, to a large extent, influenced by their structures. The impurities and defects in metal will hinder the movement of free electrons and reduce their conduction, therefore, the thermal conductivity of alloy is significantly smaller than that of pure metal. The yield strength, fracture strength, fatigue toughness and other mechanical properties of metal are influenced by defects, such as dislocations, grain boundaries, micro voids and cracks. In weak external magnetic field, due to the existence of spontaneous magnetization within a small area, e.g. magnetic domains, ferromagnet shows strong magnetism. The bonding strength and density of crystalline phases dramatically influence the strength of ceramic. Due to the existence of independent molecules, linear structure (including the branched-chain structure) polymers are flexible, malleable, less hard and brittle, and can be dissolved in a solvent or be heated to melt. However, in three-dimensional polymers, as there are no independent molecules, they are hard and brittle, can swell but cannot be dissolved or melt, and are less flexible. In nematic liquid crystal, the rod-like molecules are arranged parallelly to each other, but their centres of gravity are in disorder. Under external force, molecules can flow easily along the longitudinal direction, and consequently have a considerable mobility. In smectic liquid crystal, the molecules align in a layered structure via lateral interaction of molecule and interaction of functional groups contained by molecules. Two-dimensional layers can slide between each other, but the flow perpendicular to layers is difficult.
Introduction
In the long-term practice, it is recognized that the properties of materials are not uniquely determined by their average chemical composition but also, to a large extent, influenced by their structures.The impurities and defects in metal will hinder the movement of free electrons and reduce their conduction, therefore, the thermal conductivity of alloy is significantly smaller than that of pure metal.The yield strength, fracture strength, fatigue toughness and other mechanical properties of metal are influenced by defects, such as dislocations, grain boundaries, micro voids and cracks.In weak external magnetic field, due to the existence of spontaneous magnetization within a small area, e.g.magnetic domains, ferromagnet shows strong magnetism.The bonding strength and density of crystalline phases dramatically influence the strength of ceramic.Due to the existence of independent molecules, linear structure (including the branched-chain structure) polymers are flexible, malleable, less hard and brittle, and can be dissolved in a solvent or be heated to melt.However, in three-dimensional polymers, as there are no independent molecules, they are hard and brittle, can swell but cannot be dissolved or melt, and are less flexible.In nematic liquid crystal, the rod-like molecules are arranged parallelly to each other, but their centres of gravity are in disorder.Under external force, molecules can flow easily along the longitudinal direction, and consequently have a considerable mobility.In smectic liquid crystal, the molecules align in a layered structure via lateral interaction of molecule and interaction of functional groups contained by molecules.Two-dimensional layers can slide between each other, but the flow perpendicular to layers is difficult.
With the change of external conditions, the microscopic structures of material may change and consequently alter the material macroscopic properties.For example, the hardness of eutectoid steel with 0.77% carbon content is about HRC15 after annealing, but up to HRC62 after quenching, because the structure of carbon steel differs after different heat treatments.Electric field can change the order of liquid crystal molecules; LCD is made by using this feature, and now has been widely used in everyday life.It is one major goal of material sciences and urgent need of engineering applications to quantitatively clarify the relationship between macroscopic behaviour and microscopic structure.This goal imposes inorganic non-metallic materials and polymer materials.In the fields of nano-materials and bio-pharmaceuticals, molecular dynamics simulation is becoming an indispensable basic tool.In the studies on the large deformation of metal, molecular dynamic simulation plays a crucial role in explaining inverse Hall-Petch behaviour, nucleation and annihilation of dislocation involved in grain boundary and the relationship between shear band and dimple crack surface (Kumar et al., 2003).In the studies on protein folding, molecular dynamics is one of the most effective ways to simulate protein folding and unfolding.Via molecular dynamics simulation the transition state and change of energy in folding (or unfolding) process can be clearly analysed.
Structure analysis is the core issue of material simulation.Over the years, many defect identification methods have been proposed.For metal defects distinction, there are the excess energy method, coordination number method, centro-symmetry parameter method (Kelchner et al., 1998), Ackland's bond-angle method (Ackland & Jones, 2006), and bondpair method (Faken & Jonsson, 1994), etc.
As we know, the complex computations between particles cannot be avoided in any elaborate defect identification methods.Since the computation complexity of traditional methods are of high order, the computation time needed will dramatically increase with growing the system size.Therefore, it is necessary to design new data structure and indexing algorithm to reduce computation complexity.The computation complexity of defects identification methods will be greatly reduced by using background grid and linked list.The background grid index, together with the linked list data structure, is suitable for management of uniform distributed points, and it has been widely applied in computation and analysis of many simulations.The analysis of complex structure in non-uniform system refers not only to points, but also to lines, surfaces and bodies, and their distributions are usually non-uniform.The background grid index cannot meet the needs for managing these objects, but the multi-level division of space is effective way to manage complex data.The SHT (space hierarchy tree), a newly proposed data structure, is a powerful dynamical management framework of any complex object in any dimensional space.Index of objects with complex structure can be created based on SHT, and corresponding fast searching methods could be designed to meet various search needs.
As the elements managed by SHT are abstract objects, such as geometry objects (points, lines, surfaces and bodies) in three-dimensional space and characteristic regions in phase space, the general index based on SHT data structure can be used in various application areas.In this chapter, index methods in MD simulation will be introduced, including the background grid index and general index based on SHT, and then with the emphasis on several applications of SHT general index in computational geometry (shortest path problem and Delaunay division) and characteristic structure analysis (cluster construction, defects identification, and interface construction).
General index
In many programs, such as post processing of MD, discrete elements simulation, computational geometry, smart recognition, spatial objects (i.e., points, lines, surfaces, bodies, structures, etc.) are used to construct complex structures in the system.Without index of spatial objects, the quantity of computation for searching object is very high.If a system contains N objects, the computation complexity related to two objects is N 2 and N 3 for three objects.If the total number of objects is more than 10 4 , the computation complexity is not acceptable.Therefore, effective storage and fast search of objects are necessary, and consequently the index of spatial objects should be established.Currently, there are three kinds of spatial indexes: background grid, tree and linked list.These indexing methods mainly focus on points and neighbour search.In this chapter, an SHT general index is presented, which is suitable for management of any objects in any dimensional space.Fast searching algorithms meeting any given requirements are proposed based on SHT.In traditional programming, intuitive idea does not lead to intuitive algorithm.However, if following the idea of SHT to write codes, intuitive idea can directly lead to intuitive algorithm, so that it's beneficial to programming.Taking into account the integrity of this description, background grid index is firstly introduced, and then we introduce the SHT data structure and fast searching methods.
Background grid index
For points and small objects with uniform sizes, if their spatial distribution is not extremely scatted, the background grid index is suitable for managing.This indexing method is widely used in molecular dynamics, Monte Carlo method, smooth particle hydrodynamics, material point method and discrete element method to search neighbour objects.For this case, small object can be viewed as a point located at the centre of the object or its feature point.
The idea of background grid index (Michael, 2007) is as follows: set a box to contain all objects according to their locations; divide the box into smaller grids with a certain size, then put objects into appropriate grids according to their locations.Hence, an index for spatial objects is created.When searching for objects, several grids are firstly selected according to searching conditions, and then search in selected grids to get the needed objects.In figure 1, small black rectangles stand for objects and thin lines are for background grid.When search for objects in a given circle, what we need to search through is not all the grids but only those covered by the circle.For creating background grid index, one way to store the objects within each grid is to use array.Thus an array with possible maximum size must be firstly defined.In this way, a large quantity of memory may be wasted.So, the array method is not suitable for the case with objects nonuniformly distributed.A good alternative is to use the linked list to store such kinds of objects.Figure 2 is the scheme for linked list of objects in background grid (two-dimensional space).The indexing steps are as follows: Fig. 2. Scheme for the linked list of objects in background grid.
1. Set the grid size ∆ according to the search requirements; 2. Check all objects to get system size (x 0 , x 1 )×(y 0 , y 1 ); 3. Calculate the grid array size (I, J), where , allocate the grid array dynamically; 4. For each object A, according to its location (x A , y A ) to get the grid (i, j) containing it, where . Insert object A into the top of linked list of grid (i, j).In figure 3, the red point stands for grid g(i, j), blue point stands for object A, g(i,j)→obj stands for the object pointed to by grid g(i, j).The two steps are as follows: (, ) (, ) The algorithm for searching objects in a given region is as follows: (i) search for grids crossing with the given region; (ii) search for objects within those grids.
In background grid index, correlated grid refers to the grid crossing with the given region, number of object included is the number of objects within each grid.The computational complexity of background grid index is related to the number of correlated grid (N gird ) and the average number of object included (N obj ), it is of order O(N gird N obj ).The larger the grid size, the less the number of correlated grid and the more the average number of objects included, and vice versa.The memory needed increases with decreasing grid size.A proper grid size has to be chosen to reduce computational complexity.Usually, the selected average number α of objects included ranges from 3 to 5. The grid size can be gained by calculating the average density of object.Such as, in two-dimensional space, the grid size is , and it is The data structure of background grid index is very simple.If the size and spatial distributions of objects are uniform, the calculation based on this index is fast.A few disadvantages of this index scheme should also be pointed out: (i) the background grid cannot be dynamically created, it must be recreated with local dynamical adjustment; (ii) for the size of object is not contained by the index, if the size of objects is not uniform, it is hard to support index related to sizes.(iii) When the dimension is high or the spatial distribution is not uniform, the needed memory is too large.
SHT (Space Hierarchy Tree) and general fast search
Currently, there is no general scheme for managing different types of objects, there is also no universal way to quickly search for objects under flexible requirements.Different problems require different designs.A large number of complex operations are unavoidable in the corresponding coding procedure.This leads to lots of drawbacks, such as design difficulties, without university, long codes and hard to maintain.
When indexing discrete spatial points, background grid linked list structure can be used.
For spatial data, tree structure can provide convenient index.Examples are referred to BSP (Binary space partitioning) tree, K-D tree (short for k-dimensional tree) and octree (de Berg, 2008;Donald, 1998).The BSP and K-D trees have their own specific index methods and are not suitable for general index.For example, the BSP tree is suitable for identification of inner or outer region of polyhedra.
The data structure of object determines its indexing, for instance, the index of discrete points by using background grid is limited to neighbor point index.A better data structure design can implement general index.A data structure meeting any indexing requirements must be appropriate to describe the locations and sizes of objects, adapted to the dynamic changes in the data, and its structure has to be standardized.The octree can manage three-dimensional points.It can also manage other kinds of spatial objects.
In this section, a new data structure, SHT, extended from octree is proposed to unify the management of any objects in any-dimensional space, and two general search methods (conditional search and minimum search) are presented to implement search for object meeting any given requirement.The computation complexities of two search schemes are both logN.By using SHT and corresponding fast searching methods, programming becomes easy.
SHT management structure
In three-dimensional space, SHT data structure is similar to octree.Go a further step, for a system in n-dimensional space, a n-dimensional cube (a line segment in one-dimensional space, a rectangle in two-dimensional space, and a cube in three-dimensional space) is designed to contain the system; then divide this cube in each dimension into two parts to form 2 n sub-cubes; only retain the cubes with objects inside; continue to decompose each cube until the required resolution is reached; put the objects (points, lines, surfaces, bodies) into the appropriate cube according to their locations and sizes; retained cubes are connected together, according to their belonging relationships, to form a 'spatial hierarchy tree'.Up to now, an index for a set of spatial objects has been established, needed objects can be quickly found via the index.This SHT data structure contains the properties of directed tree.Each cube is named a 'branch'.Its child-cubes are named 'sub-branches' and its parent-cube is called the 'trunk'.
The largest or the top-level cube is named the 'root' and the smallest or bottom-level cubes are called 'leaves'.The process for linking an object with a branch is named 'putting into' and the opposite process is called 'cutting down'.The SHT is different from the octree in two aspects.Firstly, it can be used in any-dimensional space and the number of subbranches is arbitrary.Secondly, objects can be put into not only leaves but also any other branches, and the number of objects is arbitrary.Due to the uncertainty of the number of 'sub-branches' in a 'branch', 'branches' sharing the same 'trunk' are linked as a list; Similarly, 'objects' belonging to the same 'branch' are also linked as a list.Figure 5 is the scheme for object management by SHT.For the convenience of description, from now on, unless specifically pointed out, we do not distinct between 'branch' and its corresponding cube region, such as, the 'center of branch' is also referred to the geometrical center of the cube, the 'size of branch' is also the cube size, and the 'branch corner' is also the corner of the cube, etc.Meanwhile, tree are named according to the managed objects, such as, tree created to manage points is called 'point tree', and to triangle is 'triangle tree', etc.In practical applications, the number of objects may be variable.Therefore, the SHT is constructed dynamically.The dynamic management procedure of SHT includes three basic operations: (I) establishment of a tree, (II) adding a new object to a tree, (III) removing an object from a tree.
(I) The establishment of a 'tree' from an object is to establish a minimum 'branch' to contain the object.The idea of algorithm is thus: Enlarge the size of the given minimum cube until the object can be put into.The typical procedure consists of two steps: (i) Get the known minimum resolution 'σ', i.e. the smallest edge length of cubes.Use the center of an object as the geometrical center of the cube to create a 'branch'; (ii) Check whether or not the branch can contain the object.If yes, the branch size is proper, and put the object into it; if not, double the branch scale and go back to step (ii) to continue.Up to this step, a 'tree' with only one 'branch' which contains one 'object' has just been established.Figure 6 shows the construction process of the original tree, where a triangular object in two-dimensional space is used as an example.(II) The idea of adding an object to a tree is thus: Enlarge the tree to contain the object and then add the object to an appropriate branch.The algorithm for adding a new object B to the tree is as below: (i) Adjust the root.(a) Check whether or not the object B can be contained by the current root r.If yes, end this step and go to step (ii).If not, take the center of root r as a reference point to calculate the quadrant where the center of object B is located, and then create trunk s of root r, link root r to trunk s and set trunk s as root.(b) Go back to step (a).
(ii) Place object B. (c) Set the root as current branch; (d) Check whether or not the sub-branch of current branch, with respect to quadrant B, contains object B. If not, place the object into current branch and break; if yes, enter the sub-branch (if the sub-branch does not exist, create it) and take it as the current branch.(e) Go back to step (d).
Figure 7 is the scheme for adding a triangle to an existing tree, where A and B stand for triangle objects.O1 is the geometrical center of root of the original tree and O2 is the corresponding center of the enlarged root.The root is '/', the second-level sub-branches are 'I', 'II', 'III' and 'IV', 'a' is the third-level sub-branch.B0 is the geometrical center of object B.
In figure 7(c), the rectangle in each row stands for the corresponding branch (symbol in the left part is the branch, and in right part is the object).(III) The algorithm for removing an object from the tree is as follows: (i) According to the link, pick up the branch containing the object.Remove the object from the corresponding object-list, and then set this branch as current branch; (ii) Check whether or not current branch contains other objects and sub-branches.If not, remove this branch, enter its trunk and set the trunk as new current branch, and then go back to step (ii) to continue this process until all the useless branches are eliminated.Figure 8 is the scheme for removing an object from a tree.
In the dynamical algorithm of SHT, except for adding a sub-branch or trunk of a branch, other operations have nothing to do with space dimension.We can use four functions to describe the operations related to space dimension and to the location and size of object in this process: (i) Create an appropriate branch to contain the current object, (ii) Check whether or not a branch contains the current object, (iii) Take the current branch as reference point, and then create a trunk of current branch in the quadrant where the object is located, (iv) Take the current branch as reference point, and then create sub-branch of current branch in the quadrant where the object is located.The computer memory required by the SHT is approximately equal to kNlogN, where N is the number of objects.It is independent of the spatial dimension.When the spatial dimension is higher or spatial objects are have a highly scattered distribution, the SHT can save a large quantity of memory compared with the background grid method.In addition, because SHT is dynamically constructed, the size of the system can dynamically increase or decrease with the addition or deletion of objects.This is a second obvious advantage over the traditional background grid method.
Fast searching algorithms based on SHT
In practical application, we generally need a fast search of objects satisfying certain requirements.The computational complexity of an ergodic search is N.It is clearly not practical when dealing with a huge number of objects.For such cases, we need to develop fast searching algorithms.As the objects managed by SHT can be located, fast searchers with computational complexity logN can be easily created.The basic idea is as below: It is unnecessary to search the objects directly, but rather check branches.Skip those branches without objects under consideration.Thus, searching is limited to a substantially smaller range.Depending on requirements of applications, two fast searching algorithms are presented: conditional search and minimum search.The goal of conditional search is to search for objects meeting certain conditions.For example, find objects in a given area.The goal of minimum search is to search for an object whose function value is minimum.For example, find the nearest object to a fixed point.
Conditional search
Conditional search is to search for objects meeting given conditions among the objects hanged up to the SHT.The idea of algorithm is as follows: Check whether or not a branch contains objects meeting given conditions.If not, do not search the branch (including all sub-branches of it and corresponding objects).For example, if searching for all objects in a given circle, check whether or not a branch crosses with the given circle.If not, skip this branch.In this way, the searching is limited to branches crossed with the given circle.
For the convenience of description, we define candidate branch as the branch which may contain the objects meeting the given condition during search procedure.Conditional search is implemented using the stack structure.The steps are as follows: (i) Push the root into a stack A; (2) Check whether or not the stack A is empty.If yes, end the search; if not, pop out a branch b from the stack A; (ii) Check whether or not each sub-branch of b is a candidate branch.If yes, push it into stack A; (iv) Check whether or not the object in b satisfies the given conditions; pick out the required objects; (v) Go back to step (ii).
Figure 9 shows the given circle and spatial division for managing planar triangles.Figure 10 is the scheme for the SHT corresponding to Figure 9, where '/' stands for the root.The process to find out objects in the given circle is shown in Figure 11.In the example, apply the procedure given above, we will finally find object A and B is in the given circle.In the searching, other operations have nothing to do with space dimension and type of object except for two operations, (i)checking whether or not an object is the needed one, or (ii) checking whether or not an branch is a candidate branch.So, the algorithm can be built on the abstract level.The conditional search is implemented by providing a conditional function and an identification function.The conditional function is used to check whether or not an object is needed.Assuming condition(o) is the conditional function, the argument o is object and the function value is bool number.The identification function is used to assess whether or not a branch is a candidate branch.Assuming maycontain(b) is identification function, the argument b is branch and the function value is bool number.With defining the above two functions, conditional searching meeting any given conditions can be easily implemented.
The conditional search can search for a collection of objects.With the search marks, the actual search process is done step by step.In each step, only one object can be gained.The computation complexity of conditional search is kL, where L~ln N is total level of SHT.The efficiency of the conditional searching algorithm depends on the identification function.The more efficient the identification function, the smaller the k and the faster the searching procedure.If identification function value is always true, k tends to infinity and this searching algorithm goes back to an ergodic browser.However, the computation complexity of identification function influences the k value.Since the computation for sphere regions is more efficient than for cube ones, in complex conditional searching algorithms, circumspheres of a cube are extensively used to reduce the quantity of computations.
Minimum search
In programming related to spatial objects, we often need to find objects meeting some given extreme condition.For example, to find a point with the largest z component from a set of three-dimensional points, or to search a point with the nearest distance to a given point, or to search a sphere closest to a plane, etc.Such searches can be attributed to the minimum searching problem.For spatial objects, each one can be assigned a function value related to its location and size such that the minimum search is to find the object with the minimum function value.
Fast searching can be created based on the SHT.The idea is as follows: Design a function to assess the range of the function value of all objects.Some branches can be excluded by comparing the ranges of the function value of different branches.For example, in a region, ∑ is a set of discrete points, we need to search the nearest points to a given point A. The fast searching is not to compute the distance between each point in ∑ and point A, but assess the range of distance between 'branches' and point A to excluding unnecessary searching in branches (including the point in it and its sub-branches) with longer distance.
For convenience of description, we define a few concepts.(i)Range of a branch: It means the range covering all the values of the given function for objects in this branch.(ii) B-R-branch: It is a new branch data structure composed of the branch itself and the range of it.(iii) Candidate B-R-branch: It is the B-R-branch which may be checked in the following procedure.It may contain objects whose functional values are minimum.In the minimum searching procedure, we must keep enough candidate B-R-branches.Some of them may be dynamically added or removed according to the need.In order to accelerate the searching speed, the candidate B-R-branches should be linked as a list.According to the above definition, each B-R-branch has a range.So, each B-R-branch has a lower limit to its range.The B-R-branches in the list are arranged in such a way that their lower limits subsequently increase.Obviously, the B-R-branch with the smallest lower limit is placed at the head of the list.(iv) Candidate object: the object with the minimum values during the searching procedure.It is initialized as null, and at the end of searching, it is the object with the minimum value.(v) Candidate value: function value of candidate object.
The minimum searching algorithm is as follows: (i) The root and its range are merged as a B-R-branch; Add the B-R-branch to a candidate list named L; The candidate value V is set as positive infinity; The candidate object is set as null.(ii) Pick out a B-R-branch, for example, B, from the head of candidate list L; Check the values of its objects.If the value of an object O is smaller than V, then replace V with this value; in the meantime, set object O as a candidate object.Remove the B-R-branches whose lower limit values are greater than V from the list L. (iii) Construct a B-R-branch Z for each sub-branch of B. If the minimum value of Z is larger than V, cancel Z; If the maximum value of Z is smaller than the minimum value of the B-R-branch C in L, then all the B-R-branches behind C are removed from L; If the minimum value of Z is larger than the maximum value of a B-R-branch in L, cancel Z; Otherwise, insert Z into list L according to its lower limit.(iv) Repeat steps (ii) and (iii) until the candidate list L is empty.The final candidate object is the required one.
As an application example of the proposed minimum search algorithm, we consider a case to find the nearest point to a fixed point A. Figure 12 shows the distribution of planar points and spatial division.Figure 13 is the corresponding SHT.Suppose point A in Figure 12 is the given fixed point.To seek the nearest point to it, the range of the branch is calculated by a sphere evaluation method.Figure 14 shows the flowchat.In minimum search, except for computing the function value of an object and the range of a branch, other operations have nothing to do with space dimension and type of object, so that the algorithm can be built on abstract level.Similar to conditional search, the minimum search is implemented by providing a value-finding function and range-evaluation function.
The value-finding function is used to calculate the function value of an object.Assuming value(o) is value-finding function, the argument o is an object and the function value is a real number.The range-evaluation function is used to compute the range of a branch.Assuming M(b) is the upper limit and m(b) is the lower limit of the range, the argument of function is branch b and the function value is a real number.With defining the above two functions, various minimum searches can be easily implemented.
The computation complexity of minimum searching is kL, where L~ln N is total level of SHT.The efficiency of the minimum searching algorithm depends on the range-valuation function.The smaller the range given by the range-evaluation function, the smaller the k and the faster the searching procedure.The worst range-evaluation function gives a range from -∞ to +∞.In such a case, the searching algorithm goes back to the ergodic browser.In the case with a large quantity of objects, one should use a good range-evaluation function to reduce the number of objects to be searched.However, a precise range-evaluation generally needs a large quantity of computations, which also decreases the global efficiency.We should find a balance between the two sides.Since the computation for sphere regions is more efficient than for cube ones, in complex minimum searching algorithms, circumspheres of a cube are extensively used to evaluate the range of a branch.
During conditional searching and minimum searching, we often need to screen some objects.For example, in the searching for the nearest point to a point A, if A is not screened, the searching result is just A and it's not what we need.In the algorithm implementation, a screen function is designed by breaking the corresponding connections in the tree, each call of screen function can screen an object, and a recover function is been design to recover all screened objects.
Applications
SHT framework provides a general index of set of spatial objects and effective management of data.Most problems in specific applications can be solved with SHT.In this chapter, object with physical properties is named as 'matter element'.Examples are referred to molecule or molecular clusters with certain energy, mass, momentum and angular momentum, particles with specific density and phase, grains with orientation and finite element with density and energy.For the majority of applications, problem can be summarized as constructing new sets of matter element meeting requirements from the existing one.In this section, the applications of general index in shortest path problem, space division, cluster construction, defect atom identification and interface construction are presented.
Shortest path problem
The shortest path problem (Moore, 1959) is a classical problem in graph theory.It is to find a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized.The shortest path not only refers to the shortest geographical distance, but also can be extended to other metrics, such as time, cost, and line capacity and so on.The selection and implementation of the shortest path algorithm is the basis of channel route design.It is the research focus of computer science and address information science.Many network-related problems can be incorporated into the category of the shortest path problem.Due to the effective integration of classic graph theory and the continuous development and improvement of computer data structure and algorithm, new shortest path algorithms are emerging with time.
The postman shortest path problem is as follows: Given N cities, one needs to find a shortest loop that the postman can go through all these cities.This is a typical NP problem, because it needs to compare all of the loops to find the optimal path.As a simple application example, the approximate solution is given, it is to find a shorter loop without intersection between its segments to connect all discrete points, and this approximate solution is called a labyrinth.
The algorithm of the approximate solution is as follows: add point to the loop one by one in such a way that a previous segment is replaced by two where the newly added point becomes the vertex in between and connecting two previous vertexes, at the same time keep the loop length as shortest as possible.Figure 15 is the schematic for adding a point to loop.The preparatory work for algorithm based on SHT is to design fast searchers.In this section, a design, including three minimum searchers and a conditional searcher, is proposed, where one value-finding function and two range-evaluation functions are defined for each minimum searcher and one conditional function and one identification function are defined for the conditional searcher.
(I) Minimum searcher MS1: search for the nearest point to a fixed point r 0 in a twodimensional point tree.The value-finding function is The two range-evaluation functions are as follows: where C is the circumcircle of branch b.
(III) Minimum searcher MS3: Given a point P, in two-dimensional segment tree, search for a segment AB with the minimum value of the length of fold-line APB minus the length of segment AB.The value-finding function is () Ditto, the circumcircle C is used for assessment.The range-evaluation functions are as follows: Up to now, the preparatory work has been finished.The labyrinth construction algorithm is as follows:(I) Initialization: Construct point tree tp from given planar discrete points, cut down a point P 0 , search for the nearest point P 1 to point P 0 (P 0 is screened) with MS1 searcher in tree tp, search for point P 2 (P 0 and P 1 are screened) to make the length of fold-line P 0 P 2 P 1 shortest with MS2 searcher in tree tp, generate segment P 0 P 1 and construct a segment tree tl from it, generate segment P 2 P 0 and P 1 P 2 , put them on the tree tl. Figure 16 shows the labyrinth constructed by connecting 40000 random points.
Fast constructing algorithm for Delaunay division
Since the good mathematical characteristics of Delaunay division (de Berg, 2008), it is widely applied in many areas, such as the pre-processing of three-dimensional finite element method, medical visualization, geographical information systems and surface reconstruction.For designing a subdivision algorithm, an important part is to create a mesh with low complexity and good performance.Today, there are lots of algorithms to the construction of Delaunay tetrahedrons in three-dimensional space or Delaunay triangles in two-dimensional space.The complexities of most algorithms are involved with the searching procedures.Here, we propose an algorithm based on the SHT.The algorithm is simple and intuitive.It is convenient to extend to higher-dimensional space.
It's necessary to use two new data or objects, i.e. 'extended-tetrahedron' and 'extendedtriangle' in the two-dimensional and three-dimensional algorithms.The 'extendedtetrahedron' is a combination of tetrahedron and its circumsphere, so its data structure contains that of tetrahedron and its circumsphere, and its center and size are those of the circumsphere.The 'extended-triangle' is a combination of triangle and its circumcircle, so its data structure contains that of triangle and its circumcircle, and its center and size are those of the circumcircle.
The algorithm for constructing Delaunay division from discrete points is as follow: when a new point is added to the formed Delaunay division structure, the division is adjusted to meet the condition for Delaunay division.Only tetrahedra whose circumspheres include the added point need to adjust, and they are named to-be-adjusted-polyhedron.The adjustment is to construct new tetrahedron by connecting each outer surface of the to-be-adjustedpolyhedron with the added point.Figure 17 shows the steps for adding a two-dimensional point and re-dividing the space, the red points stands for the newly added point P, the green triangles in figure 7 The preparatory work for algorithm is to design fast searchers.In this section, a design, including two conditional searchers, is proposed.
(I) Conditional searcher CS1: Given a fixed point r 0 , search for an 'extended-tetrahedron' CT containing point r 0 .The conditional function is as follows: (II) Conditional searcher CS2: Given three points P 1 , P 2 and P 3 , search for an extendedtetrahedron CT where P 1 , P 2 and P 3 are three of its vertexes.The conditional function is as follow: 12 3 three vertexes are , and () true P P P condition CT false else The identification function is as follows: 123 ,, () true P P P b maycontain CT false else In three-dimensional space, the algorithm for the constructing Delaunay tetrahedron from given discrete points is as follows: (I)Initialization: Generate a sufficiently large tetrahedron to contain all discrete points, record the center and radius of its circumsphere, form an 'extended-tetrahedron' and construct an extended-tetrahedron tree t; (II) Construction of Delaunay tetrahedron: (i) Pick out a discrete point P, search for the extended-tetrahedra whose circumspheres contain P in tree t with searcher CS1, remove these extendedtetrahedra from tree t; put the removed tetrahedra together to form a set named Q; (ii) Add every surface of each tetrahedron in Q to SHT S which is a tree for the external triangular interfaces, search for the surfaces appearing twice with searcher CS2 and remove them because they are interfaces.Triangles in S constitute the external surface of Q; (iii) Pick out each face of S, together with point P, to construct a new tetrahedron; add the newly formed extended-tetrahedra to tree t; (iv) Go back to step (i) until all points are used out.
Up to this step, the group of tetrahedra contained by tree t is just the Delaunay tetrahedra division.Figure 18 shows the Delaunay triangle division of 20000 randomly distributed twodimensional points.Figure 19 shows the Delaunay tetrahedron division of 20000 randomly distributed points in a three-dimensional sphere.The algorithm can be easily extended to n-dimensional space.We need only to replace the tetrahedron with an n-simplex and replace the triangle with an (n-1)-simplex.The circumsphere of the n-simplex constructed from n+1 point, {r 1 , r 2 ,⋯,r n+1 ,} in n-dimensional space is used in the algorithm.The formula to calculate the center of the circumsphere of the and e i is the unit vector of the i-th direction.The circumsphere radius is 2 () i cr .
Cluster construction and analysis method
Complex configuration and dynamic physical fields are ubiquitous in weapon-physics, astrophysics, plasma-physics, and material-physics.Those structures and their evolutions are characteristic properties of the corresponding physical systems.For example, the interface instability makes significant constraints on the design of an inertial confinement fusion (ICF) device (Ament, 2008), shock waves and jet-flows in high energy physics are common phenomena (de Vriesl et al, 2008), distributions of clouds and nebulae are very concerned issues of astrophysics (Bowle et al., 2009;Hernquist, 1988;Makino, 1990), clusters and filaments occur in the interaction of high-power lasers and plasmas (Hidaka et al., 2008), and structures of dislocation bands determine the material softening in plastic deformation of metals (Nogaret et al., 2008).These structures are also keys to understand the multi-scale physical processes.Laws on small-scales determine the growth, the change and the interactions of stable structures on larger-scales.Description of the evolution of stable structures provides a constitutive relation for larger-scale modeling.Because of the lack of periodicity, symmetry, spatial uniformity or pronounced correlation, the identification and characterization of these structures have been challenging for years.
Existing methods for analyzing complex configurations and dynamic fields include the linear analysis of small perturbations of the background uniform field, characteristic analysis of simple spatial distributions of physical fields, etc.These methods are lacking in a quantitative description of characteristics of the physical domain.For example, the size, the shape, the topology, the circulation and the integral of related physical quantities.Therefore, it is difficult to trace the evolution of the characteristic region or the background.For example, the laws of growth and decline, or the exchange between them.
The difficulties in characteristic analysis are twofold.The first is how to define the characteristic region.The second is how to describe it.The former involves the control equations of the physical system.The latter is related to recovering the geometric structure from discrete points.In recent years, cluster analysis techniques (Kotsiantis & Pintelas, 2004;Fan et al., 2008) in data mining have found extensive applications in identification and the testing of laws of targets.They mainly concern the schemes for data classification.Physicists are concerned more about the nature underlying these structures.Recovering characteristic domains can be attributed to the construction of spatial geometry.The key point is how to connect the related discrete points.The Delaunay grid (Chazelle et al., 2002;Clarkson & Varadarajan 2007) has an excellent spatial neighboring relationship.In this chapter, the Delaunay triangle or tetrahedron is used as the fundamental geometrical element.
For the discrete points in space, there is no strict cluster structure.If the discrete points are considered as objects, such as a molecular ball, lattice or grid, then, the objects can be connected to form clusters.The average size of these assumed objects is the revolution of clusters to be constructed with discrete points.The construction of clusters is very simple.A cluster is formed by connecting all points whose distance in between is less than the revolution length.
After the construction of the Delaunay division for given set of discrete points, remove the lines whose lengths are greater than the revolution length.The remaining spatial structure may have various dimensions.According to connectivity, the structures that are not connected to each other can be decomposed into different clusters.Each cluster may also have structures with various dimensions.For example, a structure consisted of two triangles with a common side, or a structure formed by a triangle and a tetrahedron, etc.In physical problems, the structures with high-dimensional measures play a major role in describing the system.Generally, we need only to analyze clusters with the maximum dimensions.
The cluster construction algorithm consists of three parts.Preparation part: (i) Construct Delaunay tetrahedra from given discrete points.The corresponding SHT is notated as t.The algorithm for adding a face S to the tree i is as follows.Search and check if a face with the opposite direction of S exists in the tree i.If exists, remove it from the tree i.If does not exist, add S to the tree i.
Up to now, all the constructed clusters are put to the cluster tree c.For each cluster C in the tree c, all tetrahedron elements are placed on the body tree C->t, all the surface triangles are placed on the tree for faces C->s.Figures 20 and 21 show respectively the clusters constructed with random points in two-dimensional and three-dimensional space.The algorithm is also applicable to n-dimensional discrete points.We need only to replace the tetrahedron with an n-simplex and replace the triangular surface with an (n-1)-simplex.
For space with a dimension higher than three, the number of neighboring points and the connectivity, as well as the number of n-simplex, grow rapidly with the dimension.So, the required memory increases quickly.The Delaunay division can be constructed partition by partition.The main skill in this algorithm is that the space is partitioned according to the main branches of SHT, and points in each partition are added sequentially.After the completion of adding all points in a partition, we need to delete the n-simplex that satisfies two conditions: (i) its external circumsphere is in the completed partition, (ii) at least one side is longer than the given resolution.
Identification methods of defect atoms
The knowledge about the world is gradually deepened.Before the 19 th century, the most prominent theoretical models primarily describe the movement of object.In the 19 th century, two important theoretical models are proposed: one is field, another is uniform system.In the 20 th century, the outstanding recognition model is about regular structure, represented by energy bond theory and phonon theory on studying crystal structure.Due to the enhanced computing power and increased awareness in the latter half of the 20 th century, cognitive models on non-periodic structure and non-equilibrium emerge.
The primary recognition is as follows: During the system evolution, spatial structures of various scales appear, and the evolution of these structures determines the evolution laws of the system, when the system parameter arrives at the critical one, structures of different scales show an overall correlation with each other, and become indistinguishable.The typical structure characteristic is self-similar and renormalization theory can easily handle this singularity.
In the studies on non-periodic structure and non-equilibrium, the phase transition theory and dislocation theory are representatives, and nearly all phenomena can be explained by nonlinear theory.It indicates that the macroscopic properties of non-equilibrium system do not depend on its fast processes, but on the evolution of more stable structure.With different scales of concern, the system structure is not same.
During the technology development with gradually reduced time and space scales, scientific understanding on some processes, which cannot be studies or are unclear before, are gradually obtained, i.e. microscopic mechanism of the fracture process.Meanwhile we've got the multi-scale cognitive model: The overall properties of system are determined by the evolution of large-scale structures, small-scale structures and fast processes determine the large-scale structure and slow processes.In researches, different methods and theories are developed according to structures of different scales.The properties of small-scale structure provide model parameters and constitutive relationships for system with large-scale structures.For the studies on strongly coupled system which is also non-uniform and nonequilibrium, the key issue is to understand the properties and evolution laws of structures.
In particle simulation and 2D or 3D simulations of complex physical systems, how to effectively analyze the spatial and dynamical characteristics the system is the key to understand physical laws.There two problems: how to identify the stable structures existing in the system and how to compute them.For instance, in molecular dynamical simulation studies on the mechanical properties of metal, plastic deformation, phase transitions, and damage processed closely refer to defect structures, such as dislocations, stacking faults, grain boundaries and interfaces.The emergence of those defect structures indicates the change stages of materials, and the evolution of them determines material properties.As defect structures are collection of arranged atoms, they can be appropriately identified with proper analysis methods.When face a huge number of atom coordinates, the key issue of physical analysis is how to recognize various defect structures.
Excess energy method and centro-symmetry parameter method
According to the physical quantities, structure symmetry or local topological connections, defect atoms can be distinguished by a few corresponding methods, for example, the excess energy method, parameter method (CSP) (Kelchner et al., 1998) and bondpair analysis (BPA) method (Faken & Jonsson, 1994).
In excess energy method, the atoms with excess energy are selected as defects atoms.
Because the lattice equilibrium positions are stable positions for atoms, the potential energies of atoms deviating from their stable positions are higher.This method depends on the physical quantities of particles, so the output data from MD simulation must be completed.However, in many cases, such as, in phase transition represented by symmetric double-well energy function, energy cannot be used to distinguish different structures.
In CSP method, the geometrical symmetry of the collection of nearest atoms of an atom is used to identify defect atoms.All atoms of perfect crystal are in the geometrical center of its nearest atoms, but the defect atoms are not.Therefore, an order parameter is defined as follows: Atoms whose order parameter s is greater than a critical value s c are defect atoms.In the case of strong temperature perturbation, the result of CSP method is not correct, because random thermal motion reduces the lattice symmetry and the order parameter of perfect lattice becomes greater.
The nearest atoms of a specific atom are the atoms within a given sphere region whose center is the specific atom.The radius of the given sphere must be greater than the distance of nearest atoms in perfect crystal and less than the distance of second nearest atoms.In fcc and bcc crystals, as the second nearest distance is 2 times of the nearest distance, the sphere center can be given as (1 2) / 2 1.2 times as the nearest distance to resist a certain degree of randomness.In the larger deformation of crystal, as the lattice constant alters, the given sphere radius needs to be adjusted.In more complicate cases, as the lattice constant is not known in advance, a better way is first to compute radial distribution function (RDF) and then set the distance corresponding to the first peak of RDF as lattice constant of perfect crystal.
During the computation procedure of radial distribution function and order parameter algorithm, as atoms in given region need to be searched, an index of atoms must be constructed.Since the distribution of atoms is uniform, the background grid index can be used.
Bond-pair analysis method
The CSP and excess energy methods can distinguish defect atoms, but can not easily identify types of defects atoms.The bond-pair analysis (BPA) based on local topological connections can more accurately identify atom type.The idea of BPA is as follows: a bond type is marked in terms of the connections among atoms bonding with the two atoms composing the bond, and an atom type is marked in terms of all bonds of itself.
The 'bond' is defined as the connection between two atoms whose distance is less than a given value R (bonding distance).For convenience, the name of 'bond' proposed here is same as the one used in chemistry, but their meaning is different.The concept of the proposed bond doesn't contain any meaning of quantum chemistry where it indicates overlap between the electric wave functions.The bonding distance is often set as 1.2 times as the nearest distance in perfect lattice.In the case of unknown lattice constant, the lattice constant is usually set as the distance of the first peak of RDF, so to some extent, the topological analysis can tolerate random disturbance.
Bond-type identification
For a given bond L, the bond-type of L is determined by the indirect bonding feature of both atoms of L. Pick out all atoms which bond with both the two atoms of L. The bonding feature of picked atoms is used to identify bond-type of L. Assume the two atoms of bond L are A and B. The collection of atoms bonding with both A and B is c.Specifically, a bondtype is marked by a three-digit number.The first digit number is the atom number in c, the second is the number of bond among atoms in c, and the third is the largest coordination number of atoms in c.In the perfect crystals, the bond-type ID is easy to calculate.For example, all bond types are 421 in fcc crystal.There are two bond-types, 422 and 421, in hcp crystal.There are two bondtypes, 441 and 661, in bcc crystal.All bond-types can be gathered as a list which will be referred to bond-type list later.Due to the variety of defect atoms in grain boundaries, there may be a large number of bond-types.For example, {421, 422, 441, 661, 200, 100, 311, 211, 411, 432, 542, 300, 400 …}.The bond-type list is dynamically increasing except for several special bond-types.
Atom-type identification
After identifying the bond-types, each atom gets its own bond-types.A distribution vector of bond-type of an atom can be generated by comparing its bond-types with the system bond-type list, and each component of the vector is number of the corresponding bond-type.For a fcc atom, it contains twelve 421 bonds, the corresponding distribution vector is {12}; for a hcp atom, it contains six 421 bonds and six 422 bonds, the corresponding distribution vector is {6, 6}; for a bcc atom, it contains six 441 bonds and eight 661 bonds, the corresponding distribution vector is {0, 0, 6, 8}; and for a kind of boundary atom, a distribution vector {5, 4, 0, 0, 0, 0, 2, 1} indicates that it contains five 421 bonds, four 422 bonds, two 200 bonds and a 100 bond.The distribution vector accurately describes an atomtype.In calculation, all distribution vectors can be gathered as a system atom-type list, the list is dynamically adjusted.
Bond-pair analysis algorithm
The preparatory work for algorithm is to design fast searchers.In this section, a design, including two conditional searchers, is proposed, where design a conditional function and an identification function for each conditional searcher.
(I) Conditional searcher CS1: Given a point p, search for a point whose distance to p is less than r c .The conditional function is as follows: () The circumcircle of branch b is used to identify.The identification function is as follows: () (II) Conditional searcher CS2: Given two points P 1 and P 2 , search for a points whose distances to P 1 and P 2 are less than r c .The conditional function is as follows: The circumcircle of branch b is used to identify.The identification function is as follows: The algorithm for bond-pair analysis is as follows: (i) Initialization: set a bond-type array B and an atom-type array A, empty A and B, set a bond-type distribution vector V. Generate a point tree tp from the given discrete points (atoms), calculate the RDF of the system, set bonding distance as r c according to the distance corresponding to the first maximum of RDF.
(ii) For each atom a, empty its bond-type distribution vector V. Search in the tree tp for an atom b bonding to the atom a (the distance between a and b is less than r c ) with searcher CS1.For each a-b bond, search in the tree tp for all atoms bonding to a and b (whose distance to a and b are less than r c ) with searcher CS2, compute the number of those atoms (denoted as l), check the connections between those atoms to get the number of bonds (denoted as m) and the largest coordination number (denoted as n), compute the bond-type of a-b as 100l+10m+n and then check whether or not it is a new bond-type by comparing with the bond-type list in B. If yes, add it to array B and then plus one on the corresponding component of V. Complete the loop of all bonds of a, check whether or not the atom-type of a is new by comparing with A. If yes, add it to array A. (iii) Go back to step (ii) until all atoms are used out.
Results show
Figure 23 shows the stacking faults and dislocations formed during the low-temperature evolution of gathered point defects (Frank loop) in fcc copper crystal.The defect atoms belonging to dislocations and stacking faults can be accurately identified, where blue atoms are stacking faults, and dislocation atoms are red.Figure 24 shows the growing of the two sphere voids in fcc copper under tension, at the very beginning, dislocations grows from the void surfaces, and different dislocations cross with each other in the late evolution.The atoms belonging to void surface, dislocations and stacking faults (no shown) can be strictly distinguished, they are marked with different colors.www.intechopen.com General Index and Its Application in MD Simulations 389
Surface construction algorithm
A key issue in the analysis of complex dynamic system is to form interface of concerned region.For example, in first order phase transition process, particle transportation and structure transition will cause growth and deformation of phase change zone.Interface determination of structural zone is the foundation of analyzing interface movements, crossinterface physical flow and understanding characterization of structure development.
Interface construction is similar to surface reconstruction in computational geometry.Surface in computational geometry is formed from sampled points, while physical interface is the results of system evolution.
Packing-sculpting method for constructing object surface from disorder spatial points
It is an important issue in computational geometry to construct object surface from disorder points.The current algorithms can be categorized into four groups (Mencl & Muller, 1997), i.e. space partitioning method (Boissonant, 1984), distance function method (Hoppe et al., 1992), deformation method (Zhao, 2002), and growth method (Bernardini, 1999).Space partitioning is generally based on Delaunay division.The outer surface is generated by removing some Delaunay mesh in the sculpting method.The packing-sculpting method presented below is an intuitive method, the out surface is constructed by directly sculpting the packing convex hull.The basic idea is as follow: First of all, generate the packing convex hull from discrete points, and then sculpt the convex hull to construct the object surface.
Packing algorithm
The packing algorithm is to generate a convex hull by packing all given points.In this section, half-plane rotation method is introduced.
For the convenience of description, a new data structure, 'extended-segment', is defined as the combination of one side of triangle and triangle itself, it is an extended side of triangle with the data of the other vertex.The center and length of 'extended-segment' is that of the corresponding side of triangle.In the algorithm, there are various complicate searching for points, lines and surfaces.With the general index based on SHT, fast searching can be designed according to given searching conditions.
The preparatory work for algorithm is to design fast searchers.In this section, a design, including a minimum searcher and a conditional searcher, is proposed.(I) Minimum searcher MS1: Given a triangle ABC, pick out a segment AB, the extended half-plane of triangle ABC rotates around axis AB, searcher for the first point P meeting the extended half-plane to make smallest the dihedral angle shown in figure 25.
The value-finding function is as follows: () where c b is the center of branch b.
(II) Conditional searcher CS1: Given a directed segment P 1 P 2 , search in extended-segment for an extended segment BD being equal to segment P 1 P 2 .The conditional function is as follows: 12 () true B P and D P condition BD false else The identification function is as follows: 12 , () true P P b maycontain b false else The packing algorithm is as follows: (I) Initialization: Construct a point tree tp from the given discrete points.According to the region size of root tp, pick out two vertexes: P 1 =(-a,a,a) and P 2 =(-a,a,a), where a is half of the edge length of root region.Given a point P 3 =(-2a,0,a).Search in the tree tp for the first point Q 1 met by rotating triangle P 1 P 2 P 3 around axis P 1 P 2 with searcher MS1, generate a new triangle P 1 Q 1 P 2 and cut Q 1 down from tp.Search in the tree tp for the first point Q 2 met by rotating triangle P 1 Q 1 P 2 around axis P 1 Q 1 with searcher MS1, generate a new triangle Q 1 Q 2 P 1 and cut Q 2 down from tp.Search in the tree tp for the first point Q 3 met by rotating triangle Q
Sculpting algorithm
The sculpting algorithm refers to the sculpting method and the sculpting standard.We have to guarantee that no point is carved out during the sculpting procedure and make as smooth as possible the surface after sculpting.The curvature of smooth surface is small, and it indicates that the circumsphere radius of the tetrahedron which is carved is larger.For a triangle ABC under sculpting, the sculpting process means to find a point P in currently packed region to make smallest the height of circumsphere cap ABCP. Figure 28 is the scheme for sculpting algorithm.For a triangular surface, the stopping criterion for sculpting should be that the radius of circumsphere is less than a given value.In the case of uniform sampling of object surface, the criterion meets the requirements.However, in the case of nonuniform sampling (large curvature and dense sample points), the curvature must be related to the distance between local sample points.The ratio of the height of sphere cap and the circumcircle radius of a triangle, i.e. sculpture degree c=h/r<c cri , is more suitable for stopping criterion.The criterion can be used in both cases above.Figure 28 shows the sculpting algorithm.
The preparatory work for algorithm is to design fast searchers.In this section, a design, including a minimum searcher, is proposed.
The minimum searcher MS is as follows: Given a triangle ABC, search in point tree for a point P to make smallest the height of circumsphere cap ABCP.The construction of valuefinding function is as follows: By solving equations,
Results show
Figure 30 shows the convex hull constructed from discrete points containing two nano-voids and uniform boundary points of a point defect by packing algorithm and object surfaces generated by sculpting.Figure 31 is for the reformed surface from random points sampled from tori. Figure 32 is for the reformed surface from 9000 random points sampled from spheres and tori.The value of critical sculpture degree is 2.0 in sculpting algorithm.Application examples show that the algorithm is suitable for the cases of multi-connected or many-body surfaces.
In the algorithm, the correctness of topology and geometry are guaranteed and the right surface can be constructed even in regions of greater curvature and at the transition zone of surfaces with different curvature.
Rolling-ball method for finding interfaces of physical regions from disorder spatial point
The difference between searching for interface of physical region from disordered spatial points and constructing surface of object is that the spatial points contain not only the points of physical interface but also points of other structures.The most common approach is to construct physical field on a regular grid, and the contour of physical field is used as the appropriate physical interface.This method is suitable for the case that the distribution of discrete points closed to interface is uniform.In the case of complex distribution of discrete points, it is hard to preserve the smoothness of the constructed interface, the calculated interface is very different from the actual interface.A better way is to use the rolling-ball method without constructing physical fields.The basic idea of rolling-ball method is as follows: roll a ball with fixed size on discrete point group, each rolling goes through three points, and these points constitute a surface element of interface.After the rolling-ball goes through the overall region, the physical interface is constructed.In rolling-ball method, the key parameter is the sphere radius.In the case of sparse sampling, different radiuses define different interfaces.Therefore, the size of the rolling-ball radius is obtained from experience.
Rolling-ball algorithm
The preparatory work for algorithm is to design fast searchers.In this section, a design, including three minimum searchers and a conditional searcher, is proposed.(III) Minimum searcher MS3: Given a point and a rotation axis, search in point tree for the first point by the rolling-ball with fixed size.The algorithm is basically the same as MS1.We do not repeat here.
(IV) Conditional searcher CS1: Given two point P 1 and P 2 , search in extended-segment tree for segment BD whose vertexes are P 1 and P 2 .The conditional function is as follows: The circumsphere S of branch b is used for identification.The identification function is as follows: 12 , () true P P S maycontain b false else The rolling-ball algorithm is as follows: (I) Initialization: Set the radius of rolling-ball and the center as P 0 , generate point tree tp from given discrete points, search for the nearest point P 1 to P 0 with searcher MS2.Use searcher MS3 to search in tree tp for a point P 2 which is the first point met by the rolling-ball rotating along x axis.Use searcher MS3 to search in tree tp for a point P 3 which is the first point met by the rolling-ball rotating along the direction of segment P 1 P 2 .Generate a triangle from P 1 , P 2 and P 3 , construct a triangle tree tt from triangle P 1 P 2 P 3 and put it into extended-segment tree tb.(II) Interface construction: Check whether or not the tree tb is null.If yes, exit.If not, cut down a segment AB of triangle ABC.Search in tree tb for a point P with searcher MS1 to make smallest the rotation angle of circumsphere of triangle ABC, where AB is the rotation axis.Construct triangle BAP and put it into the triangle tree tt.Use CS1 searcher to search in tree tb for an extended-segment L whose vertexes are point B and P. Check whether or not L exist.If yes, cut L down from tb and then delete it.If not, generate an extended-segment PB and put it into tb.Do the same operations to point P and A. (III) Go back to step (II).The collection of triangles contained by tree tt is just the needed physical interface.
Results show
In molecular dynamical simulations on voids coalescence in fcc copper, defects atoms include atoms in void walls and dislocations.Figure 35 shows interface of voids constructed from discrete atom positions.Figure 36 shows the corresponding construction process.The radius of rolling-ball is set as 2/3 times of the lattice constant so as to distinguish residual structures on the wall of voids after dislocation slipping.In figure 35(b), the stairs composed by array of atoms can be clearly seen from the constructed surface.When the voids are growing, the trace of dislocation migration and the irregular shape of the void coalescence zone are truly shown up.
Fig. 1 .
Fig. 1.Scheme for the background grid index of objects.
Fig. 3 .
Fig. 3. Scheme for inserting an object into a linked list: (a) before inserting; (b) after inserting.
Figure 4 -
a (Figure4-b) is a scheme for the SHT management structure of two-dimensional (three-dimensional) discrete points.
Fig. 4 .
Fig. 4. Scheme for management region of SHT of discrete points.(a) Two-dimensional points; (b) three-dimensional points.
Fig. 5 .
Fig. 5. Scheme for object management by SHT.The rectangle in each row stands for branch, horizontal grey arrow stands for object, and vertical black arrow stands for the list which connects the sub-branches belonging to a same branch.
Fig. 6 .
Fig. 6.Scheme for the establishment of the original tree.(a) triangle object; (b) create an original branch with minimum resolution 'σ'; (c) double the scale of original branch until the triangle can be contained; (d) established triangle tree, where '/' is root ID, grey arrow stand for the triangle.
Fig. 7 .
Fig. 7. Scheme for adding objects to the tree.(a) Generation of new root; (b) placing an object; (c) SHT corresponding to (b).
Fig. 8 .
Fig. 8. Scheme for removing an object from a tree.(a) pick up the branch containing the object, in this figure, it is branch 'a'; (b) remove the object from branch 'a'; (c) remove branch 'a'; (d) remove branch 'I', which contains no more objects and sub-branches.
Fig. 9 .
Fig. 9. Distribution of planar triangles and the corresponding spatial division.
Fig. 11 .
Fig. 11.Flowchart for the fast search of objects in a circular area.
Fig. 14 .
Fig. 14.Scheme for the fast search of the nearest point to a given fixed one.
Fig. 15 .
Fig. 15.The change of a loop after adding a point P. The segment AB becomes two segments AP and PB.
(
is the coordinate of point p, c b the coordinate of the center of branch b, 2 b d the circumradius of branch b, M(b) the upper limit and m(b) the lower limit.(II) Minimum searcher MS2: In a two-dimensional point tree, search for a point with the shortest fold-line connecting the two vertexes of a given segment AB.The valuebranch b is used to assess the range of b.The range-evaluation functions are 7) (IV) Conditional searcher CS1: Given a segment AB, in two-dimensional segment tree, search for a segment ab crossed with segment AB.The conditional function is as follows: y are obtain from solving the equation, xr A +(1-x)r B =yr a +(1-y)r b .The identification function is as follows: (II)Connection construction: (i) Cut down a point P from tree tp and empty stack s. (ii) If P is null, exit.(iii) Search for a segment L (all segments in stack s are screened) in tree tl with MS3 searcher to make the increased length shortest by adding point P. (iv) Search for a segment L 1 crossed with the segment L with CS1 searcher.(v) If L 1 is null, cut down L from tree tl, generate two segments by connecting the two vertexes of L with point P and put them on the tree tl, go back to step (i), and if the stack s is not empty, then push L into s and go back to step (iii).
Fig. 16 .
Fig. 16.Labyrinth constructed by connecting 40000 random points.Fig.(b) shows the enlarged portion chosen by the small black rectangle in Fig. (a).
Fig. 17.Three steps to add a new point to a two-dimensional Delaunay division.(a) Finding the triangles whose circumcircle contains the newly added point P; (b) Removing the internal lines of these triangles, retaining the external ones; (c) Connecting each left line with point P to form new triangles.
branch b is used to identify, the center of b is c b and the radius is 3 b d .The identification function is as follows:
Fig. 18 .
Fig. 18.Delaunay division constructed from randomly distributed discrete points in a twodimensional square area [0,4]×[0,4].(b) is the enlarged picture of the portion in the small black rectangle in (a).
Fig. 19 .
Fig. 19.Delaunay division constructed from randomly distributed discrete points in a threedimensional spherical region.
(ii) Remove the tetrahedrons whose lengths are greater than the given resolution from t. Single cluster construction part: (iii) Remove tetrahedron T from t if such a T still exists.Create a new cluster named C. Initialize the body tree C->t and face tree C->s as null.Add T to the body tree C->t.Add each of the triangle faces to a triangle tree named i.(iv) Pick out a triangle face S from i. Search tetrahedron Y containing face S from t.If found, add Y to tree C->t and add all faces of Y to tree i.Two faces with opposite directions will annihilate if they meet each other during the adding procedure.If none are found, add S to the tree C->s.(v) Repeat step (iv) until the tree i becomes null.Construct all the clusters: (vi) Add the constructed cluster C to a tree for clusters named c.Repeat the process of constructing single clusters, and add the new cluster to c until t becomes null.
Figure 22
Figure 22 shows an example for bond-type identification.The bond-type of bond is 443, where, from left to right, the first number, 4, denotes the number of common neighbor of atoms A and B (i.e.H, I, C, and D), the second number, 4, means the bond number of the common neighbor atoms (i.e.HI, DI, HD, and CD), the third number, 3, indicates the largest coordination number (the coordination number of atom D, i.e.CD, HD, and DI).
Fig. 25 .
Fig. 25.Scheme for finding the first point during half-plane rotation.
construct an extendedsegment tree tb from Q3Q1 and put Q1Q2 and Q2Q3 into tb.Figure26 shows the initialization procedure of packing-algorithm.
Fig. 26 .
Fig. 26.Scheme for initialization of packing algorithm.(a) the first point Q 1 met by rotating triangle P 1 P 2 P 3 around axis P 1 P 2 ; (b) the first point Q 2 met by rotating triangle P 1 Q 1 P 2 around axis P 1 Q 1 ; (c) the first point Q 3 met by rotating triangle Q 1 Q 2 P 1 around axis Q 1 Q 2 ; (d) initial triangle interface.
Fig. 27 .
Fig. 27.Two cases.(a) L a P is in the extended-segment tree; (b) L a P is not in the extendedsegment tree.
r of triangle ABC, the center position o, and the normal direction n are obtained.By solving equations, circumsphere center r c and the height λ of r c to circumcircle of triangle ABC are obtained.The height of circumsphere cap is as follows (i.e. the value-finding function): evaluation function is calculated according to the tangent cases between circumspheres of branch b and sphere cap ABCP.It is as follows: the two cases shown in figure29.Set P as the tangent point of sphere cap and the circumsphere of b.When P becomes point Q 1 , the height λm of sphere cap ABCP is the smallest.When P becomes point Q 2 , the height λM of ABCP is the largest..
Fig. 29 .
Fig. 29.Two tangent cases between circumspheres of branch b and sphere cap ABCP (crosssection picture), where black circle is for the circumsphere of branch b, blue segment is for the circumcircle of triangle ABC, green and red circle are both for sphere cap ABCP, Q 1 and P 2 are for tangent points.The sculpting algorithm is as follows:(I) Initialization: Take all triangles obtained by packing algorithm as triangle under sculpting, and the corresponding tree as triangle tree tt under sculpting.Set the critical value as 2.0 for stopping sculpting, set surface tree tt0 as empty.(II)
(Fig. 33 .
Fig. 33.Scheme for the rotation of triangle ABC.The procedure for constructing range-evaluation function is as follows: calculate the position of the tangent point T of rolling-ball and the circumsphere of a branch.The calculation formula read
Figure 34
Figure34shows the two tangent cases between the circumspheres of branch b and the rolling-ball (cross section picture), where back circle is for the circumsphere of branch b, blue, green and red circle are for rolling-balls, ML and mL are for corresponding tangent points.If the tangent point is ML, the rotation angle of rolling-ball is the smallest.If it is mL, the rotation angle is the largest.
Fig. 34 .
Fig. 34.Scheme for the two tangent cases between the rolling-ball and the circumsphere of branch b.(II) Minimum searcher MS2: Given a point r 0 , search for the nearest point P of the given point in point tree.The value-finding function is | 17,257.8 | 2012-02-29T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Enhanced Neuronal Glucose Transporter Expression Reveals Metabolic Choice in a HD Drosophila Model
Huntington’s disease is a neurodegenerative disorder caused by toxic insertions of polyglutamine residues in the Huntingtin protein and characterized by progressive deterioration of cognitive and motor functions. Altered brain glucose metabolism has long been suggested and a possible link has been proposed in HD. However, the precise function of glucose transporters was not yet determined. Here, we report the effects of the specifically-neuronal human glucose transporter expression in neurons of a Drosophila model carrying the exon 1 of the human huntingtin gene with 93 glutamine repeats (HQ93). We demonstrated that overexpression of the human glucose transporter in neurons ameliorated significantly the status of HD flies by increasing their lifespan, reducing their locomotor deficits and rescuing eye neurodegeneration. Then, we investigated whether increasing the major pathways of glucose catabolism, glycolysis and pentose-phosphate pathway (PPP) impacts HD. To mimic increased glycolytic flux, we overexpressed phosphofructokinase (PFK) which catalyzes an irreversible step in glycolysis. Overexpression of PFK did not affect HQ93 fly survival, but protected from photoreceptor loss. Overexpression of glucose-6-phosphate dehydrogenase (G6PD), the key enzyme of the PPP, extended significantly the lifespan of HD flies and rescued eye neurodegeneration. Since G6PD is able to synthesize NADPH involved in cell survival by maintenance of the redox state, we showed that tolerance to experimental oxidative stress was enhanced in flies co-expressing HQ93 and G6PD. Additionally overexpressions of hGluT3, G6PD or PFK were able to circumvent mitochondrial deficits induced by specific silencing of genes necessary for mitochondrial homeostasis. Our study confirms the involvement of bioenergetic deficits in HD course; they can be rescued by specific expression of a glucose transporter in neurons. Finally, the PPP and, to a lesser extent, the glycolysis seem to mediate the hGluT3 protective effects, whereas, in addition, the PPP provides increased protection to oxidative stress.
Introduction
Huntington's disease (HD) is a heritable neurodegenerative disorder characterized by lesions in the striatum, progressive deterioration of motor and cognitive functions and psychiatric disturbances. HD is caused by expansion of a poly-glutamine (poly-Q) tract in the N-terminus of the Huntingtin protein (Htt) and consequently, accumulation of short N-terminus fragments of the protein. Poly-Q stretches of more than 36 residues are associated with pathology which affects the brain and numerous peripheral organs and tissues. New toxic functions were described for the mutated Htt; moreover, the mutation induces a loss of function of the wild-type Htt which plays an important role on cell survival and embryonic development [1]. Extensive therapeutic strategies were developed, but none of them proved to be effective in halting the disease progression and, up to date, treatments focus on alleviating HD-associated symptoms. Indeed, although the cascade of molecular and cellular events leading to mHtt pathology is still unclear, mHtt compromises several vital cellular functions such as intracellular trafficking [2], transcriptional regulation [3,4], cytoskeleton [5], energy metabolism and mitochondrial functions [6], that were extensively described in reviews [7,8].
Neurons depend on glucose for providing energy and redox protection. Because they are unable to synthesize or store glucose, neurons are fully dependent on glucose import. In human tissues, glucose homeostasis is mainly maintained by the members of the glucose transporter family (referred as SLC2A) comprising 14 isoforms mediating facilitative sugar transport [9]. The various isoforms show different affinity for glucose suggesting adaptation to the various metabolic requirements of each cell. In mammalian brain, although several GluT members are present, GluT1 and GluT3 are the predominant GluTs responsible for glucose transport [10]. GluT1 is detected in glial cells such as astrocytes [11,12] and permits glucose storage into astrocytes by glycogen synthesis. GluT3 is the major neuronal GluT and transports glucose from the extracellular space into neurons [13,14]. Glucose is metabolized through glycolysis in the cytosol, generating two molecules of pyruvate which fuel mitochondria where most ATP is produced. Neurons present relatively weak expression of glycolytic enzymes; in contrast, they favor another important metabolic pathway in glucose oxidation, the pentose phosphate pathway (PPP) in order to compensate for their limited antioxidant reserve [15,16]. Indeed, the PPP provides intermediates for nucleotide synthesis and is the major source of cytosolic NADPH which is a critical factor for enzymes implied in cellular defense system against oxidative stress. The NADPH levels are mainly maintained by glucose 6-phosphate dehydrogenase (G6PD) activity, the first and rate-limiting enzyme of the PPP. As a by-product of energy production, the mitochondria also generate most of the endogenous reactive oxygen species (ROS), damaging both mitochondria and the rest of the cell. Thus, maintenance of mitochondrial integrity and function is the highest priority to brain cells. Any defect in brain mitochondria functioning may lead to severe energy deficiency as well as increased generation of ROS in neurons and ultimately to neuronal degeneration [17,18,19].
Numerous studies in HD patients and in animal models have indicated energy metabolism defects in the pathogenesis of HD before the occurrence of any overt pathology. They suggest that changes in energy metabolism are not a consequence of neuronal loss, but rather a contributory factor to the progression and development of the disease [20,21,22]. Studies on postmortem brain tissue or positron emission tomography imaging analyses revealed reduced cerebral metabolic activity in cortex and striatum of symptomatic HD patients and also in presymptomatic subjects, so even before the onset of pathological symptoms [23,24,25,26]. Moreover, Gamberino and Brennan [27] described decreases in glucose transporter levels by post-mortem analysis in caudate and not in cerebral cortex of HD brains; these decreases were not closely related to brain atrophy but can be associated to changes in transporter expressions.
Interestingly, in a recent study on HD patients, a correlation was found between the onset of the disease and the copy number of the GluT3 gene [28]. Moreover, biochemical studies on different HD models tended to demonstrate dysfunctions linked to mitochondrial homeostasis [29]. The activity in oxidative phosphorylation was reduced [30,31,32]; they are associated with increased ROS levels and triggers oxidative impairment as observed in several neuropathological diseases [20,33]. Plasma levels of oxidative damage products were found to be increased in HD patients and asymptomatic HD gene carriers [34]. Enzymes of the TCA cycle were impaired in post-mortem brain from HD patients or in HD models: mitochondrial aconitase presented a loss of activity [35] and pyruvate dehydrogenase (PDH) levels were altered in HD transgenic mice [36,37].
Drosophila models for human neurodegenerative diseases (Alzheimer's disease, Parkinson's disease, or poly-Q diseases such as SCA1, HD. . .) have been created by expression of the relevant human pathogenic protein and provided many valuable insights into pathogenic mechanisms resulting in the course of hallmarks of these diseases [38,39]. Specifically as in the HD, expression of the pathogenic Huntingtin in the Drosophila nervous system leads to neuropathology and premature cell death. The neuronal expression of the exon 1 of the human mutant huntingtin gene (containing 93 repeats of the CAG codon) in our Drosophila model (named hereafter HQ93 flies, or mHtt for mutated Huntingtin) affects functions of both neurons and glial cells [40]. Glucose homeostasis is maintained in a conserved manner in Drosophila, and metabolic regulation shows strong similarities to mammals [41,42].
Therefore, we postulated that impaired brain glucose metabolism in Drosophila contributes to HD pathogenesis and supposed altered expression of genes concerned with energy metabolism. In order to test these hypotheses, we generated transgenic Drosophila bearing the human glucose transporter hGluT3 and showed that its overexpression in HQ93 neurons is an effective approach to rescue lifespan, restore locomotor activity and slow down neurodegeneration. Results show that overexpressed PFK glycolytic enzyme in neurons has no impact on the early death of HQ93 flies; nevertheless it was able to decrease eye neurodegeneration, suggesting a moderate efficiency of increasing glycolysis. However, both organismal lifespan and neurodegeneration can be significantly rescued by overexpression of G6PD. As G6PD is involved in mechanisms of antioxidative defenses, we showed that resistance to experimental oxidative stress induced by H 2 O 2 was enhanced in flies carrying both HQ93 and G6PD. In order to investigate the role of glucose metabolism on mitochondria defects which are one of the hallmarks of HD, we mimicked mitochondria dysfunction by silencing genes required for mitochondrial activities. Fly neurons were knocked down for genes expressed at the first step of the TCA cycle and in the first complex of the respiratory chain. We observed that hGluT3, PFK and G6PD transgenes increased the survival of flies presenting mitochondrial defects. This suggests that the mitochondrial activity deficit can be rescued by improving glucose metabolism. Finally, we conclude that increased neuronal glucose uptake by increasing glucose transporter expression alleviates HD pathogenesis. This effect was poorly mediated by up-regulation of glycolysis, but can be triggered by activation of G6PD in the PPP which permits to neurons to replenish NADPH pool and so, ameliorating the capacity of neurons to resist to oxidative stress induced by HQ93.
Results
The transgenic Drosophila HD model used in this study has been widely exploited; it carried the exon 1 of the human mutated Huntingtin with 93 CAG repeats [40,43]. In our experiments, expression of mHtt was regulated by the yeast UAS/Gal4 system in which the mHtt transgene lies downstream of the UAS sequence. HD flies were generated by crossing UAS-HQ93 flies to flies expressing the Gal4 protein in a tissue-specific manner. To achieve co-expression, crosses were conducted so that in the female F1 progeny, both UAS-mHtt and other UAS-transgenes were expressed in neurons under the regulation of the pan-neuronal specific Elav-Gal4 driver.
Expression of a glucose transporter
Previously we showed that the overexpression of an isoform of a predicted Drosophila sugar transporter family, referred as DmGluT1 [44], improved survival of flies expressing HQ93 in glial cells [45]. Since DmGluT1, like the other putative Drosophila orthologs of human sugar transporters, are not functionally characterized, we have verified whether DmGluT1 was able to transport glucose in comparison with the previously described activity of hGluT3 [46]. We measured the intracellular concentration of glucose with a FRET glucose sensor which allows estimation of transporter activity at cellular resolution in HEK-293 cells [47]. Using this method, we measured glucose in HEK-293 cells expressing the glucose sensor FLII 12 Pglu-700μΔ6 alone or in the presence of the DmGluT1 or hGluT3 cDNAs (Fig. 1A). GluT3 overexpression induced a strong significant increase in intracellular glucose concentration. In the presence of 5 mM extracellular glucose concentration, hGluT3-expressing cells maintained a steady-state intracellular glucose concentration averaging 1.12 + 0.17 mM instead of 0.35 + 0.03 mM for control cells. The S1 Fig. showed that hGluT3-expressing cells displayed cytoplasmic fluorescence for the glucose sensor compared to non-transfected cells. Glucose clearance was enhanced by hGluT3 expression after lowering extracellular glucose concentration to 0 mM (Fig. 1A). Finally, hGluT3-expressing cells also uptake galactose in accordance with previously characterized parameters in Xenopus ovocytes [46]. In the same conditions we found that DmGluT1 expression did not change glucose entry and clearance in HEK-293 cells, although close to significance levels (Fig. 1A); in contrast to hGluT3, no galactose uptake was observed with DmGluT1. In order to demonstrate that the low activity of DmGluT1 was not due to a failure of expression in HEK-293 cells, we also performed analysis using a DsRed-tagged construct. The localization of the tagged DmGluT1 was comparable to that of hGluT3 (Figs. 1B and 1C), namely at the plasma membrane. Nevertheless, DsRed-DmGluT1 expression was able to significantly increase intracellular glucose levels when HEK-293 cells were bathed with 25 mM extracellular glucose (Fig. 1D). However, in this case, the steady-state intracellular glucose concentration still remained lower than the one measured with hGluT3 at 5 mM extracellular glucose. Thus, DmGluT1, which can transport glucose with a low affinity but does not transport galactose, is probably not a close "functional" ortholog of hGluT3.
From these results, to investigate whether expression of a glucose transporter could affect the course of a human disease, it seems more relevant to establish Drosophila lines expressing a high glucose affinity transporter. For this purpose, we generated flies carrying an insertion of hGluT3 and by crossing them to flies bearing the pan-neuronal Elav-Gal4 driver, we could induce hGluT3 expression in control or HQ93 neurons. To validate that our construct was expressed in Drosophila, we detected specifically the hGluT3 transcript by RT-PCR in neurons (Fig. 1E).
Glucose transporter overexpression improves survival of mHtt flies in neurons
As previously reported, expression of mHtt in Drosophila neuronal cells reduces fly longevity [48]. Adults expressing HQ93 displayed a lifespan mean occurring at 17 + 1 days ( Fig. 2A) and a maximum lifespan (time to 90% mortality) of 22 + 1 days, whereas lifespan mean of wildtype flies ranges around 66 days (S2 Fig.). When hGluT3 transgene was expressed together with HQ93, the life expectancy mean reached up to 29 + 2 days ( Fig. 2A), indicating a 71% increase of lifespan. In the presence of the two transgenes, the maximum survival time was also significantly enhanced since it was 34 + 1 days. We conclude that hGluT3 overexpression is sufficient to significantly improve the survival of the HD flies, by enhancing the mean and maximum lifespans. Expression of hGluT3 did not alter significantly the lifespan mean of control flies (63 days) (S2 Fig.), suggesting that its neuronal overexpression alone has no effect on fly survival. Moreover, HQ20 flies, which express an unexpanded tract of 20 glutamine residues showed no premature death (S3 Fig.) compared to HQ93 flies, in the presence or not of hGluT3. As expected, the survival curve of HQ20 flies was not different from flies expressing no transgene.
mHtt-induced locomotor impairments are rescued by glucose transporter expression
We utilized a behavioral test, the climbing assay, also named geotaxis negative test which has been extensively used to measure locomotor activity of Drosophila [48,49]. Placed in a column and tapped down, control flies containing the neuronal driver Elav-Gal4 alone promptly started to climb along the column and 87% reached the top (S4 Fig.), indicating normal locomotor activity. In contrast, only a reduced fraction of flies expressing HQ93 in neurons (48%) were able to reach the top (Fig. 2B). However, flies expressing HQ93 and the human glucose transporter improved significantly their locomotor performances since 66% flies had reached the top of the column.
The glucose transporter slows neurodegeneration in HD fly eyes
The fly compound eye, although dispensable for normal development and viability, is a powerful model for analyzing genes contributing to human neurodegenerative diseases [39] and allows to observe neurodegeneration [50]. The Drosophila eye is composed of ommatidia with a regular arrangement of seven visible photoreceptor neurons as observed in control by pseudopupil analysis (Fig. 2C) [51]. Expression of HQ93 in neurons led to an obvious loss of one or more photoreceptors leading to a disorganization of ommatidia arrangement. In these flies, the number of ommatidia with seven visible photoreceptors declined from 24% at day 1 to 11% at day 4 after adult emergence (Fig. 2D). This increased loss of photoreceptors was significantly evidenced and showed that neuropathology in the fly is progressive as in the human condition. However, disruption of ommatidia was markedly rescued when the human glucose transporter was co-expressed with HQ93: 78% of ommatidia had 7 photoreceptors one day after adult eclosion and 52% of intact ommatidia were detected at the fourth day of adult life. Overexpression of hGluT3 alone had no effect on the photoreceptors (data not shown).
Implication of the glycolysis pathway in mHtt Drosophila neurons
To investigate the role of glycolysis, we analyzed whether the up-regulation of the glycolytic flux was beneficial in the Drosophila HQ93 neurons by overexpressing PFK which catalyses a rate-limiting step between fructose-6-phosphate and fructose-1, 6-biphosphate. For this, we used transgenic flies bearing Drosophila PFK [52]. After crossing them with HQ93 flies and Elav-Gal4 as driver, we analyzed lifespan and eye neurodegeneration. The Fig. 3A shows that overexpression of this enzyme in neurons did not change the survival of HQ93 flies. However, statistical analysis of the pseudopupil data ( Fig. 3B) showed that PFK overexpression prevented neurodegeneration at day 1 and day 4. Since this result could be interpreted as an inefficient import of glucose in brain sensu stricto in comparison with the photoreceptors, we analyzed the impact of hGluT3 and PFK co-expression on HD toxicity in the brain and eyes. We showed that PFK and hGluT3 co-expression had no additional effect on survival compared to HD flies expressing hGluT3 alone (Fig. 3C). Similarly, the rescue of photoreceptor degeneration was not enhanced when hGluT3 and HQ93 were co-expressed (Fig. 3D). All these results allow to conclude that overexpression of PFK has no sufficient impact in the brain to rescue longevity of HQ93 flies but was able to delay neurodegeneration in photoreceptors.
Implication of the pentose-phosphate pathway (PPP) in mHtt Drosophila neurons G6PD activity, by counteracting oxidative stress, can protect neuronal cells [15,53]. We hypothesized that overexpression of G6PD would extend the lifespan of HQ93 flies and enhance their resistance to oxidative stress by its ability to produce NADPH. To perform these experiments, we used a Drosophila transgenic line previously characterized, exhibiting a high enzyme activity in the brain and an increase of NADPH content; this line also presented an extension of lifespan and an enhanced resistance to oxidative stress generators as hyperoxia and paraquat treatments [54,55]. As shown in Fig. 4A, increased expression of G6PD in HD flies was significantly associated with an extension of lifespan: this increase was up to 33% in comparison with flies expressing only the HQ93 transgene. The mHtt-induced neurodegeneration in eyes was significantly rescued by the overexpression of G6PD as seen in Fig. 4B; flies expressing both HQ93 and G6PD have more intact photoreceptor cells (17%) at day 4 after adult eclosion than flies expressing HQ93 alone (5%). To investigate the effects of G6PD on HD fly survival in the presence of hGluT3, we overexpressed HQ93 and G6PD together with hGluT3. As shown in the Fig. 4C, lethality of these flies was not statistically different from that of HD flies expressing hGluT3 alone. This suggests that the co-expression of hGluT3 and G6PD has no cumulative effect on survival rate.
Next, we tested resistance of HQ93 flies in the presence or not of G6PD to an oxidizing agent, hydrogen peroxide (H 2 O 2 ). At first, we verified that after 48hr exposure, flies with G6PD-expressing neurons were more tolerant to H 2 O 2 than wild-type flies (Fig. 5A). Secondly, in the presence of HQ93 alone, exposure of 12 day-old flies to H 2 O 2 drastically reduced fly survival from 70 to 22% (Fig. 5B). Flies with neuronal HQ93 and G6PD transgenes presented a significantly enhanced resistance to oxidant compared to flies carrying only HQ93: 63% of flies with the two transgenes survived in the presence of H 2 O 2 whereas 22% of HQ93 flies remained alive (Fig. 5B). The amplitude of the decreased lifespan was only 30% with G6PD instead of 48% without G6PD. This result suggests that G6PD was able to rescue fly survival by exerting its anti-oxidative activity even in the presence of mHtt. By contrast, we observed that 6 day-old HQ93 flies were insensitive to oxidative stress generated by oxidative treatment (S5 Fig.), but it is noteworthy that these flies did not yet present pathological symptoms. Further, to determine whether the antioxidant capacity of G6PD in HQ93 neurons was responsible for its positive action, we analyzed the effects of NADPH-dependent peroxidases on fly survival. Peroxiredoxins and thioredoxins have been identified by their ability to reduce oxidant activities in conjunction with thiol-reducing systems using NADPH pool as electron donor [56,57,58]. The neuronal expression of Jafrac1, a Drosophila homologue of the human peroxiredoxin 2 (Fig. 5C) or the Drosophila thioredoxin deadhead (S6 Fig.) significantly extended the fly lifespan in the presence of HQ93. Thus, G6PD expression and, consequently, activation of the PPP, in addition to their anabolic function, were protective by providing antioxidative supply for mHtt-expressing neurons which are particularly vulnerable to oxidative stress.
The glucose transporter expression rescues mitochondrial dysfunction
An increasing number of studies have shown that mutant Htt action results in mitochondrial dysfunction [59,60]. We tested whether or not increasing glucose metabolism could prevent effects of mitochondrial dysfunction. Therefore, we examined the impact of hGluT3 neuronal overexpression on fly lifespan after genetic inactivation of two key genes for mitochondrial activity: the pyruvate dehydrogenase complex and the mitochondrial respiratory system. In mitochondrial matrix, the pyruvate dehydrogenase complex (PDH) catalyses the conversion of pyruvate to acetyl-coA and constitutes the first step of the TCA cycle. The mitochondrial respiratory complex I contains evolutionary conserved NADH ubiquinone oxidoreductase complex components; it produces significant amounts of ROS and its dysfunction triggers oxidative impairment as observed in several neuropathological diseases [17,61,62]. Firstly, we verified by RT-qPCR analysis that RNA interference (RNAi) expressions in neurons have efficiently reduced the expression of their respective targets: the alpha-subunit of the acetyl-transferring component of the PDH complex (E1-PDH) and the 23kD subunit (ND23) in the complex I (Fig. 6A). The Fig. 6B shows that knockdown of these both genes in neurons led to a dramatic reduction of the lifespan with a expectancy mean of 5 and 4 days respectively; this shows the key role of mitochondria in neuronal functions. However, when the hGluT3 transgene was expressed together with each RNAi, life expectancies were very significantly rescued since the The Mann-Whitney test indicates a significant difference between the two genotypes for H 2 O 2 -exposed flies (*, p = 0.016). (B): Representative survival rate of 12 day-old flies expressing HQ93 (black bar), or coexpressing HQ93 and G6PD (grey bar) after 48 hr exposure to 2% sucrose or to 1.5% H 2 O 2 in 2% sucrose. Numbers of flies included in this assay were 36; 56; 93; 85 respectively. Results represented the means + SEM of the percentages obtained from a representative experiment. The Mann-Whitney test indicates a significant difference between the two genotypes for non-treated flies (*, p = 0.041) and for H 2 O 2 -exposed flies (**, p = 0.006). (C): The survival curve of flies expressing HQ93 as control (squares) or HQ93 and Jafrac I (diamonds) under the neuronal driver Elav-Gal4, with n = 100 and 159 flies respectively. The log-rank test indicates that the two survival curves were very different (***, p<0.0001).
doi:10.1371/journal.pone.0118765.g005 (Fig. 6B). These results indicate that hGluT3 overexpression was sufficient to rescue the survival of the flies presenting mitochondrial defects. Then, to test the respective roles of the glycolysis or PPP to counteract mitochondrial dysfunction, we overexpressed the key enzymes G6PD (Fig. 6C) or PFK (Fig. 6D) in neurons in the presence of each RNAi. Both enzymes rescued the survival of the E1-PDH or ND23 RNAi expressed in neurons. This suggests that an increase in glycolysis and/or in PPP was required to maintain cell survival in stress conditions following impairment of mitochondrial functions.
Discussion
In the present work, using a genetic approach, we provided the first evidence that increasing metabolism of glucose sustained by overexpression of the glucose transporter hGluT3 ensures neuronal maintenance and survival in HD pathology. Previously, we have showed that DmGluT1, a predicted Drosophila sugar transporter ameliorates the survival of HQ93 flies when it was expressed in glial cells [45]. We also found that DmGluT1 expressed in neurons confers protection against mHtt (data not shown), and this result has been recently confirmed [28]. However, although having striking amino acid homology (44-49%) with the human classical glucose transporters (GluT1-4), we showed here that DmGluT1 has low affinity for glucose and galactose contrary to hGluT3. Moreover, it was transcribed at a very low level in wildtype fly heads (data not shown). On the basis of these data, we concluded that this isoform is probably not implicated in Drosophila neuronal glucose import and that DmGluT1 is likely not the neuronal "functional" ortholog of the hGluT3, although this role has been suggested in the report of Vittori et al. [28]. However, we cannot exclude that DmGluT1 may transport an undetermined or modified sugar. Thus, consequently, we focused our study on hGluT3 overexpression in Drosophila neurons. We showed that expression of this glucose transporter in Drosophila neurons is sufficient to suppress most of the neurological mHtt-induced phenotypes by strikingly improving fly survival, restoring locomotor activity and rescuing neurodegeneration. Our data confirm that HD progression could be affected by hGluT3 gene expression level and its glucose uptake activity in neurons.
In physiological conditions, after its import, intracellular glucose is phosphorylated and is thus ready to enter metabolic pathways, mainly glycolysis or PPP. But in HD, functions of genes related to carbohydrate metabolism and acting downstream glucose transporters are altered [63,64,65]. Diverse mechanisms have been proposed which could underlie the hypometabolism observed: decrease of glycolysis activity [66,67], deregulation of mitochondrial metabolism [68] including high ROS production [69] or ATP depletion [26].
To test the effect of the increasing glycolytic flux in mHtt transgenic flies, we used the PFK enzyme which catalyses an irreversible step of the glycolysis and is known as a regulator of this pathway. We found that overexpression of PFK in neurons had an overall modest impact on HD progression. This is not due to a low import of glucose in fly neurons since increasing glucose entry by hGluT3 did not reveal a beneficial effect of PFK. However, the absence of an efficient glycolytic effect on fly survival could be explained by the continuous degradation of the PFK or by a functional blockage at the PFK step as described in mammals [70]. Moreover, glycolysis up-regulation might have adverse effects since stabilization of the neuron-specific isoform of PFK by an excitotoxic stimulus increased neuronal glycolysis but thereafter led to oxidative damage and cell death [71]. It has been also demonstrated that GAPDH, a key enzyme in the glycolytic pathway downstream of the PFK step, can mediate neurotoxicity by binding to the polyglutamine stretch of the mutated huntingtin [72,73]. Inhibitors of glycolysis were found to suppress cell death in a cell culture model expressing poly-Q [74], suggesting that enhancement of the glycolysis may exacerbate the toxic action of mHtt in neurons. In conclusion, activation of glycolysis does not appear as an efficient pathway to compensate HD-induced energy deficits.
Neurons are the most sensitive cells of the brain to oxidative damage [75]. The PPP largely contributes to neuronal protection against oxidative injury by the G6PD activity which reduces NADP to NADPH, a necessary cofactor for regeneration of glutathione or thioredoxin in the reduced form. Several studies have shown that the PPP is up-regulated when the brain is subjected to traumatic injury or abnormal oxidative stress [76] and appears to play a critical role during neurological diseases [77]. It is well documented that HD pathological conditions augment production of ROS in patients and in HD models [78]. Here, we have showed that G6PD overexpression in neurons was beneficial for HQ93 fly survival and produced eye neuroprotection. G6PD and hGluT3 overexpressions seem to have no cumulative effects on the lifespan of HD flies, further indicating that these two genes are involved in the same metabolic pathway. We also showed that flies expressing G6PD in neurons in the presence of HQ93 presented a greater resistance to H 2 O 2 -induced oxidative stress than flies expressing only HQ93, indicating that the rescuing effect of G6PD was probably mediated by increased NADPH production. It has been reported that polyglutamine toxicity in Drosophila eyes was reduced by increasing NADPH level through elevated G6PD activity [55]. Furthermore, we showed that ROS detoxifying enzymes belonging to the Drosophila redox buffer system [57] such as thioredoxins and peroxiredoxins, conferred protection in neurons expressing HQ93. This result was in agreement with the neuroprotective effect of the mouse glutathione peroxidase in HD flies [79]. We thus propose that neurons overwhelmed by the pleiotropic action of mHtt, became more and more vulnerable to mHtt-induced ROS overload, and that the up-regulation of the PPP is a necessary protective antioxidant strategy against HD by reducing the amounts of ROS.
Mitochondrial abnormalities has long been proposed to underlie neuronal loss in HD pathology and during recent years, numerous studies focused on the role of mitochondrial dysfunction in the disease [68,80]. To investigate the effects of glucose metabolism on impaired mitochondrial function, RNAis were used to reduce the respective activities of the TCA cycle and complex I by specifically silencing the E1 component of the PDH complex and the 23kD subunit of the NADH ubiquinone oxidoreductase complex. Our data showed that the knockdown of these two targets reduced drastically fly survival and that overexpressions of either hGluT3, G6PD or PFK ameliorated lifespan. It was proposed that mitochondria can adjust cellular bioenergetic performance and alleviate stress by inducing metabolic readjustments through a feedback mechanism, the mitochondrial retrograde signal. It has been firstly characterized in yeast [81] and represents coordinated cellular responses to changes in the functional state of mitochondria to promote cell survival. Recent studies in Drosophila or in a Drosophila cell line also point to a retrograde response to support cell survival and activity during mitochondrial stress [82,83]. Our results show that overexpression of genes involved in glucose metabolism may compensate for mitochondria-induced alterations. Nevertheless, further studies are required to determine whether or not such compensatory mechanisms occur in the HD context and particularly, prior to the onset of symptoms.
In conclusion, we show that the unique feature of increased glucose neuronal import by increasing neuronal transporter expression has a striking beneficial impact on HD pathology to maintain neuronal activities. The induction of PPP is able to delay HD progression likely by providing efficient neuroprotection against ROS. In contrast, the enhancement of glycolysis is probably not a preferential way since glycolysis might promote neurotoxicity by compromising the efficacy of the antioxidant system. These observations could contribute to find future therapeutic strategies implying the neuronal glucose metabolism.
Drosophila Stocks
Flies were raised at 25°C on standard cornmeal agar diet. The UAS-Htt exon 1-Q93 flies, hereafter named HQ93 flies, and UAS-Htt exon1-Q20 flies (named HQ20 flies) were provided by L. Marsh and had been described in [43]. The transgenic overexpression strains: UAS-PFK (line #4) were kindly provided by C.S. Thummel; the UAS-G6PD (line #5f) flies were supplied by W.C. Orr, the UAS-Jafrac I flies by W-J. Lee and the UAS-deadhead flies by T. Aigaki. The neuronal driver Elav-Gal4 (line c155) was obtained from the Bloomington Drosophila Stock Center (Bloomington, Indiana). The RNAi lines UAS-ND23-IR (#110797) and UAS-E1-PDH-IR (#40410) were purchased from VDRC (Vienna, Austria). Accordingly to the genetic background of the different lines, we used as control either the yw or the w 1118 (BL5905) lines (S1 Table). To proceed to neuron-specific expression, flies carrying one or two UAS constructs were crossed to flies transgenic for the pan-neuronal Elav-Gal4 driver. Female F1 progeny carried both UAS and Gal4 were used for subsequent analyses.
Lifespan experiments
Newly eclosed adult female flies of the appropriate genotypes were collected within 24 hrs of emergence in vials at a density of about 20 flies per vial and maintained at 25°C. Flies were transferred every 2-3 days to fresh food, and the number of dead flies counted each day. Two to eight experiments were conducted in the same conditions and representative survival curves were shown. Survival curves were generated and statistical significance was tested by using logrank statistics software (GraphPad Prism).
Locomotion assay
Locomotor performance was tested by the negative geotaxis test [48]. Briefly, female flies were anesthetized with CO 2 and placed in a plastic column. After 30 min recovery, flies were gently tapped to the bottom of the column, then allowed to climb for 30 sec. The test was repeated 3 times for each batch of flies at 1 min intervals. For each experiment, the percentages of flies that reached the top of the column and that remained at the bottom were separately calculated. Statistical significance was assessed using the Student's t-test (GraphPad Prism).
Pseudopupil analysis
One eye of adult female head was dipped in vaselin grease covering a microscope slide. Eyes were observed with a Leica TCS SP2 microscope using a x60 objective, and photographed with a CoolSnaps HQ Photometrics camera. The visualization of the trapezoidal arrangement of photoreceptor cells in the ommatidia was performed with Image J. At least 10 flies were examined per genotype and the numbers of visible rhabdomeres were scored for 20 ommatidia per fly. Comparisons between median value of photoreceptor number per ommatidium in each line were performed using one-tailed Mann-Whitney's test (GraphPad Prism).
Drosophila hGluT3 generation
The cDNA of hGluT3 in pCMV-Sport6 plasmid (IMAGE clone: 4396508, ImaGenes, Berlin, Germany) was PCR amplified with the following primers: forward: 5' TTTGGATCCTTCCT-GAGGACGTG and reverse: 5' TATCCTCGAGGGATACTCTAGAG, then digested with BamH1 and NotI restriction enzymes, respectively. The purified fragment (3.3 kb) was inserted into the BglII and NotI restriction sites of the pUAST plasmid. The selected clone was verified by DNA sequencing. Germ-line transformation was performed by BestGene (BestGene Inc., USA) in a yw background.
DmGluT1 and DsRed-tagged DmGluT1 constructions
For non tagged-DmGluT1 contruction, the pUAS-DmGluT1 plasmid [45] was used to excise a 2.5 kb DmGluT1 sequence using EcoRI and XhoI restriction enzymes. The purified DmGluT1 fragment was inserted in the EcoRI and XhoI sites of the pcDNA3 plasmid (Life Technologies, USA), then ligated. The ligation boundaries were verified by sequencing.
The pDsRed-Monomer-N1 (Clontech Laboratories) plasmid was used to construct fusion of the DmGluT1 coding sequence to the N-terminus of the DsRed sequence. The pUAS-DmGluT1 plasmid was amplified with primers containing XhoI and BamHI restriction enzymes respectively, to insertion in the multiple cloning site of the DsRed vector: forward: 5' TTTTCTCGAGGCAACTGGCAACGAAATGGCT and reverse: 5' TTTTGGATCCACA-TACCTGCCATTGTTGTGC. After digestions and purification, the fragment (1.5 kb) was inserted into the DsRed vector, then the sequence of the construct was verified by sequencing.
hGluT3 immunodetection and Dsred-tagged DmGluT1 detection
For hGluT3 immunocytochemistry, cells were 4% formaldehyde fixed (20 min) 24 hrs after transfection and then permeabilized with 0.2% Triton X-100 for 10 min and treated with 50mM NH 4 Cl (10 min). Cells were incubated in 1% BSA-PBS for 5 min (x3) to block non-specific interactions. Cells were then incubated with the primary antibody used at a 1:100 dilution (rabbit anti C-term GluT3; Abcam) for 2h at RT. The secondary antibody was Dy549 goat anti-rabbit (Jackson) used at a 1:500 dilution for 1h at RT. Cultures were imaged with an Olympus FV 1000 laser confocal microscope. For DsRed fluorescent detection, cells were 4% formaldehyde fixed (10 min) and confocal image acquisition was performed on a Zeiss LSM780 laser scanning microscope.
Quantitative PCR
Reactions were monitored by using ABI Prism 7500 Fast thermal cycler (Life Technologies, USA) and KAPA SYBR Fast qPCR Master mix (Clinisciences, France) on reverse-transcribed cDNAs as obtained above. Cycling parameters were used: 40× (95°C for 3 sec, 60°C for 30 sec). Primers were designed with AmplifX version 1.7.0 software (http://crn2m.univ-mrs.fr/pub/ amplifx-dist; CNRS, Aix-Marseille University) and are listed in the S2 Table. Data were analyzed using the relative quantification method (-2 ΔCt method). The transcript levels were normalized with values obtained after amplification of ribosomal RNA (rp49) as endogenous control. Serial dilutions from 1:1 to 1:625 cDNAs were used for each gene and for the internal control to generate standard curves to check a nearly 100% efficiency. Data represent two averaged replicates of at least three independent experiments, each of which was carried out on separate set of tissue samples. Melting curves were established for each reaction to check that only one specific amplicon was synthesized during the amplification. Excel software (Microsoft) was used to analyze data and to generate representative graphs expressing mean expression levels + SEM. Expression levels were expressed in percent relative to controls, then compared with Mann-Whitney test (GraphPad Prism).
Hydrogen peroxide resistance
Female adult Drosophila were collected within 24 hrs of emergence and transferred into vials with food in groups of 20 flies per vial. Six or twelve days after emergence, Drosophila were tested at 25°C for resistance to H 2 O 2 (Sigma-Aldrich, USA) treatment. Flies were starved for 2 hrs, then placed into vials containing Whatman paper pieces saturated with 1.5% H 2 O 2 (v:v) in 2% sucrose or with 2% sucrose alone as control. Numbers of dead flies were recorded at 48 hrs after the beginning of the treatment. Two independent trials were performed, and a representative experiment was shown. One-tailed Mann-Whitney tests (GraphPad Prism) were used to determine the significance of the data expressed in percentages of surviving flies. | 8,451.8 | 2015-03-11T00:00:00.000 | [
"Biology"
] |
Lasing on a narrow transition in a cold thermal strontium ensemble
Highly stable laser sources based on narrow atomic transitions provide a promising platform for direct generation of stable and accurate optical frequencies. Here we investigate a simple system operating in the high-temperature regime of cold atoms. The interaction between a thermal ensemble of 88 Sr at mK temperatures and a medium-finesse cavity produces strong collective coupling and facilitates high atomic coherence, which causes lasing on the dipole forbidden 1 S 0 ↔ 3 P 1 transition. We experimentally and theoretically characterize the lasing threshold and evolution of such a system and investigate decoherence effects in an unconfined ensemble. We model the system using a Tavis-Cummings model and characterize the velocity-dependent dynamics of the atoms as well as the dependency on the cavity detuning.
I. INTRODUCTION
Active optical clocks have been suggested as an excellent way to improve the short-time performance of optical clocks by removing the requirement for an ultrastable interrogation laser [1][2][3]. Current state-of-the-art neutral-atom optical lattice clocks with exceedingly high accuracy are typically limited in precision by the interrogation lasers and not by the atomic quantum projection noise limit. The high Q value of the atomic transitions (Q > 10 17 ) exceeds that of the corresponding reference laser stability for the duration of the interrogation cycle [4][5][6][7][8]. State-of-the-art reference lasers now perform below the level of 10 −16 fractional frequency stability at 1 s [9][10][11]. This recent advance in performance is enabled by using crystalline mirror coatings and spacers as well as employing cryogenic techniques. Nevertheless, these lasers continue to be limited by the thermal noise induced in the reference cavity mirrors [12,13]. By using an active optical clock, the narrow spectral features of the atoms themselves produce the reference light needed, via direct generation of lasing in an optical cavity [14]. Operation in the bad-cavity limit, where the cavity field decays much faster than the bare atomic transition, allows high suppression of the cavity noise in these lasers [15]. It has been predicted that such systems could reach Q factors in excess of the transition Q factor due to narrowing caused by collective effects [3,16]. It was recently shown experimentally that an active optical atomic clock can perform at the 10 −16 fractional frequency stability level between 1 and 100 s, and with a fractional frequency accuracy of 4 × 10 −15 by using a cyclically operated optical lattice with 87 Sr atoms [15]. This is a performance level in stability also promised [17] or demonstrated [18] by other types of thermal atom systems. Continuous lasing with atoms at room temperature have shown performance at the 10 −13 level of fractional frequency stability [19].
These results demonstrate the potential of active atomic clocks and motivates the present attempt to improve the understanding of atom dynamics in such a system. We concentrate on a cold atomic gas consisting of bosonic 88 Sr cooled to a temperature of T ≈ 5 mK and permitted to expand freely as a thermal gas while lasing. Superradiant lasing in such a system would substantially reduce the technological challenges of maintaining truly continuous operation compared with the case of atoms confined in an optical lattice trap [20,21].
We demonstrate experimentally pulsed lasing on a narrow transition in 88 Sr and describe the characteristic dynamics of our system. Using a Tavis-Cummings model we simulate the full system consisting of up to 8 × 10 7 individual atoms and give a qualitative explanation of the dynamics. Rich velocity-and position-dependent dynamics of the atoms are included in the description and become an important factor for understanding the lasing behavior. The model allows us to quantify these behaviors and set requirements on ensemble size and temperature to realize strongly driven or weakly driven superradiance, respectively. Such requirements are useful for the future development of truly continuous lasing on ultranarrow clock transitions in strontium or in other atomic species.
In Sec. II we describe the physical characteristics of our system and the experimental routine. Section III describes our theoretical model for the cold-atom-based laser, and the proposed lasing dynamics. In Sec. IV the lasing characteristics are presented and comparisons between experiment and simulations are made.
II. EXPERIMENTAL SYSTEM
The system consists of an ensemble of cold 88 Sr atoms cooled and trapped in a three-dimensional (3D) magnetooptical trap (MOT) from a Zeeman-slowed atomic beam, using the 1 S0 ↔ 1 P1 transition, and was described in Ref. [22]. The atomic cloud partially overlaps with the mode of an optical cavity and can be state manipulated by an off-axis pumping laser (see Fig. 1). The cavity can be tuned on resonance with the 1 S0 ↔ 3 P1 intercombination line and has a linewidth of κ = 2π × 620 kHz. Our mirror configuration ensures a large cavity waist radius of w 0 = 450 μm, which ensures a reasonably high intracavity atom number N cav of order 1 × 10 7 . N cav is estimated from fluorescence measurements and lasing pulse intensities and is given by N cav ≡ η cav N, where η cav 0.15 is a geometric factor corresponding to the fraction of atoms within a cylinder of radius w 0 along the cavity mode. Since all atoms have different velocities and positions, the cavity coupling g j c for the jth atom is not constant. While this is included in the theoretical simulations, for discussion of regimes and general scaling behavior we use an effective coupling to the cavity mode averaged over the time-dependent position of the atoms g eff ≡ |g Here ζ z (z i ) describes the variation in coupling given by the standing wave of the cavity mode, and ζ xy (x i , y i ) describes the spatial dependence due to the Gaussian mode cross section.
The large cavity waist ensures a negligible decoherence induced from transit-time broadening of tt = 2π × 2.2 kHz. While the 1 S0 ↔ 3 P1 transition used for operation has a linewidth of γ = 2π × 7.5 kHz, the ensemble temperature T causes a Gaussian velocity distribution of width σ v = 0.69 m/s and a Doppler-broadened full width at half maximum (FWHM) of D = 2π × 2.3 MHz. The bare atoms are thus deep in the bad-cavity regime, γ κ, whereas the total inhomogeneously broadened ensemble is just below the badcavity threshold. Here, the cavity-field decay rate and total atomic decoherence rate is of similar size, κ ∼ dec , and ensemble preparation can heavily affect the lasing process and the attainable suppression of cavity noise. We characterize the system by its collective cooperativity C N = C 0 N, and the single-atom cooperativity is given by C 0 = (2g eff ) 2 κγ . This leads us to the definition of a collective coupling rate N = 2 √ Ng eff . While the spectral linewidth of the emitted light is controlled by C 0 [3,23], the coherence buildup, and thus the lasing power, is determined by C N . Increasing the coupling rate g eff , and thus the single-atom cooperativity, will then improve interaction at the expense of the ultimately attainable lasing linewidth.
After preparing the MOT, the cooling light is switched off, and an off-axis pumping beam is used to excite the atoms to the 3 P1 state. By pumping the atoms on resonance for 170 ns a peak excitation of approximately 85% is obtained for the atoms within the cavity mode. Inhomogeneous Doppler detuning caused by the thermal distribution of the atoms and the large spatial distribution of the full atomic ensemble with respect to the pumping field result in varying Rabi frequencies for different atoms. The collective atomic Bloch vector will thus contain some level of atomic coherence from the imperfect pump pulse. Due to the excitation angle of 45 • , the pump phase periodicity along the cavity axis suppresses any forced coherence between the atoms and cavity field. Additionally, the spatial coherence of the fast atoms will wash out as they propagate inhomogeneously along the cavity axis. To remove any remaining phase coherence we apply light at 461 nm detuned about ν = −41 MHz from the 1 S0 ↔ 1 P1 for 500 ns after pumping. This results in an average of 1.3 scattering events per atom and ensures that atoms are either in the excited or ground state. We see no quantifiable change in the lasing behavior by doing this.
Once excited, the atoms will build up coherence mediated through the cavity field and emit a coherent pulse of light into the cavity mode. Light leaking through one of the cavity mirrors is then detected on a photodiode. We couple a reference field s into a cavity mode far off resonance, FSR = 781 MHz, with the 1 S0 ↔ 3 P1 transition, allowing us to lock the cavity length on resonance with the atoms. Figure 2 shows an example of a lasing pulse and the associated beat signal when the cavity is on resonance (blue) and detuned ce = 900 kHz (orange) from the atomic resonance. The pumping pulse ends at t = 0 s and, after some delay τ (on the order of few μs), a lasing pulse is emitted into the cavity mode. The lasing pulses are followed by a number of revivals of the superradiant output. This is a fingerprint of the coherent evolution of the cavity-atom system, where light emitted into the cavity mode is reabsorbed by the atoms and is then re-emitted into the cavity mode. The lasing pulse shown in Fig. 2(a) is detected superimposed on a 75 nW background. This background is a reference field used for locking the length of the cavity and provides the opportunity for a heterodyne beat with the lasing pulse. At zero cavity-atom detuning ( ce = 0) the oscillating revivals are barely visible due to reference field intensity noise. When detecting the beat signal, however, the signal scales linearly rather than quadratically with the lasing electric field E las , and here multiple revivals are visible in both the resonant and off-resonant cases.
The beat signal is given by S beat ∝ |E las | sin φ. The envelope thus gives us an improved signal-to-noise ratio at low field values as compared with the emitted intensity. We down-sample the signal by beating it with the frequency FSR , and sin φ then allows us to retrieve the phase information. In Fig. 2(b) we see that the phase during the initial lasing pulse differs between the two datasets. Because the lasing frequency does not follow the cavity detuning ce in a bad cavity, our demodulation frequency is slightly off, and the phase becomes where ξ pull is the cavity frequency pulling factor. This results in the additional zero crossings in the beat signal of the ce = 900 kHz signal as compared with the emission intensity, e.g., during the primary pulse. Additionally, we observe that the phase φ is random between experimental cycles at a given detuning as a result of the randomized phase of the atomic coherence for each realization of the pulse; see inset in Fig. 2
III. SEMICLASSICAL MODEL
To gain an improved understanding of the dynamics involved in our system, we simulate the behavior with a Monte Carlo approach. We model the system by using a Tavis-Cummings Hamiltonian for an N-atom system. The description includes a classical pumping field at frequency ω p . In the Schrödinger picture the full expression becomes Here ω c is the angular frequency of the cavity mode, and a is the corresponding lowering operator. The involved electronic energy states |g and |e correspond to the ground and excited atomic states, respectively, with a transition frequency of ω e , as seen in Fig. 1(b). The time-dependent position of the atom is given by r j = (x j , y j , z j ), and k p is the pump beam wave vector. The pump beam has a semiclassical interaction term with Rabi frequency χ j p and the interaction between the cavity field and the jth atom is governed by the coupling factor g j c : where L is the cavity length and the wave number k e = ω e /c. The Doppler effect is included in the model through the time dependencies of the atom positions and, in particular for interaction with the cavity mode through the resulting variation in g j c (t ) caused by ζ z (z j (t )) = sin(k e z j (t )). By entering the interaction picture and using the rotatingwave approximation, the time evolution of the system operators can be obtained. Here we make the semiclassical approximation of factorizing the expectation values for operator products, which results in linear scaling of the number of differential equations with the number of atoms. This approximation results in the neglect of all quantum noise in the system, and by consequence any emerging entanglement [24][25][26]. We motivate this assumption by the very large number of atoms in the system, whose individual behavior is taken into account with separate coupling factors g j c . The quantum noise is then expected to be negligible compared with the single-operator mean values. The operator mean values are described by three distinct forms of evolution: Here nm = ω n − ω m is the detuning of the nth field with respect to the mth field. Because of the semiclassical approximation the Hermitian conjugate of these operators are simply described by the complex conjugate of their mean values, while σ j ee + σ j gg = 1. This leaves us with a total of 1 + 2N coupled differential equations, where N is on the order of 10 7 .
The Monte Carlo simulation assumes that the atoms are initially in the ground state and subsequently pumped into the excited state and left to evolve with time. All parameters are estimated experimentally and there are no explicit free fitting parameters (see Appendix). Atomic positions and velocities are sampled randomly from a 3D Gaussian and a thermal Maxwell-Boltzmann distribution, respectively. The velocities are distributed according to a temperature of T = 5 mK and the positions with a standard deviation of 0.8 mm. Atomic motion is treated classically and without collisions. The atoms interact only via the cavity field and any spontaneous emission into the cavity mode is neglected. The lasing process is initiated by an initial nonzero total coherence N j=1 σ j ge , resulting from the pumping pulse. This replaces the role of quantum FIG. 3. Three distinct lasing regimes. In our system we operate on the intersection of all three and seem to realize both N and N 2 atom number scaling behavior. noise in the system, and without it the system would couple only to the reservoir.
Our simulations indicate that the lasing occurs in three characteristic regimes determined by the ratios of the atomic decay rate γ , the cavity decay rate κ, and the collective atomic coupling factor N ; see Fig. 3. We consider only N > γ since for N < γ the spontaneous atomic decay to the reservoir will dominate. In the bad-cavity regime where γ < κ, a low coupling strength will allow the output power to scale quadratically with atom number, whereas high coupling strength will result in a linear power scaling and multiple Rabi oscillations in the atomic population during a single lasing pulse. Broadening effects can lead to higher effective decoherence rates dec which must be considered.
IV. SYSTEM CHARACTERIZATION
We characterize the lasing properties and pulse dynamics of the system by varying cavity-atom detuning and atom number. We compare the measurements with simulated experiments to verify the understanding in the numerical model. This will then allow us to draw out some behavior from the model that is inaccessible experimentally.
A. Lasing threshold
By varying the total number of atoms in the trap, we find the lasing threshold of the system. This gives an atom number dependency of the pulse power and the associated delay; see Fig. 4(a). We plot the peak power of the emitted lasing pulse as a function of the atom number N in the full atomic cloud for 795 experimental runs. Each point is about 40 binned experimental runs, with the standard deviation indicated. For low atom numbers (white region), N 3 × 10 7 , excited atoms decay with their natural lifetime, τ = 22 μs. In an intermediate regime of 3 × 10 7 < N < 5 × 10 7 (red region) the peak power appears to scale quadratically with the atom number, as the onset of lasing occurs. This is what we would expect according to the parameter regimes illustrated in Fig. 3. Finally, for higher atom numbers (blue region), N > 5 × 10 7 , the peak power becomes linearly dependent on the atom number, as the collective coupling N becomes much larger than κ. We show red and blue curve fits to their corresponding regions with N 2 and N scaling, respectively. The green points show the results from simulation and appear to agree well with experiment. The curves are fit to the raw data, within the respective regions before any binning, and the lasing threshold is determined from the quadratic fit. While the lasing process does not initiate for low atom numbers due to the requirement that C N γ dec , for high N the cavity field builds up sufficiently that the slowest atoms become strongly driven. At high atom numbers (N > 7 × 10 7 ) the cavity output seems to saturate as the assumption of linear scaling of the cavity atom number (N cav ) with respect to the MOT atom number (N) breaks down, and these points are not included in the linear fit. The nonzero value of the data below threshold is caused by random noise.
In Fig. 4(b) we plot the delay between the end of the pumping pulse and the associated peak in laser pulse emission. The delay has high uncertainties in the red region where we expect a 1/N scaling [27]. For high N we expect a 1/ √ N trend [28] (blue), and we choose to fit this to the entire dataset. When going to low atom numbers, background noise becomes increasingly important until no emission peak is visible, and the effective delay goes to infinity. Once again we show the simulation with green dots. Notice that there is a clear tendency towards longer delays in the simulation. We believe this to be caused by the fact that the model does not include spontaneous emission into the cavity mode.
As the number of atoms decreases, so does the collective cooperativity, and thus the ensemble coherence buildup. The time it takes for the ensemble to phase-synchronize increases, leading to the increased delay time and associated decrease in peak intensity. The total number of photons emitted during a pulse is not constant but scales with the collective cooperativity C N , just as the peak output power in Fig. 4.
B. Pulse evolution
We map out the time evolution as a function of atom number and cavity-atom detuning, respectively. To ease interpretation we define t = 0 s to be at the time of maximal power of each dataset in Figs. 5 and 6. The emission delay is then given by the distance between this maximal value at time zero and the green dots indicating the end of the pumping pulse.
Atom number variation
In Fig. 5(a) we simulate the behavior of the lasing pulse when the atom number is changed. We set the cavity-atom detuning to zero ( ce = 0) and vary the atom number in the MOT while keeping the density profile constant. The green points indicate the end of the pumping pulse and are binned in order to ease interpretation. The oscillation frequency of the output revivals decreases as the atom number is reduced, and at low atom numbers only the primary lasing pulse is visible.
In Fig. 5(b), the experimental results are shown. We vary again the atom number, allowing comparison to the simula-FIG. 6. Cavity-atom detuning dependency of the pulse dynamics. The oscillatory behavior is strongly suppressed at resonance but appears clearly once the cavity is detuned. Here at least four subsequent revivals are visible. The oscillating frequency scales with the detuning frequency and appears symmetric around zero detuning. The white line is a 2 ce fit to the pulse delays τ . The atom number is N = 7.5 × 10 7 . (a) Simulation results. The temperature was set to T = 5 mK. (b) Experimental results. The dashed blue and orange lines indicate the cuts shown in Fig. 2(a). tions. A clear primary peak can be seen for atom numbers N > 3 × 10 7 , and subsequent revivals of the light emission is observed. The reference light used for cavity locking shows up as a noisy background in the experimental data, and a constant offset corresponding to the mean value of the background signal has been subtracted in both Figs. 5(b) and 6(b). The data were recorded in sets of 50 with a randomly chosen set point for the atom number N. This means that slowly varying experimental conditions show up as slices of skewed data, but does not result in unintended biasing of the results.
The oscillatory behavior is well explained by following the evolution of the atomic inversion. For high atom numbers the collective coupling in the system becomes strong enough that the photons emitted into the cavity mode are not lost before significant reabsorption takes place. This leads to an ensemble that is more than 50% excited by the end of the primary lasing pulse, and subsequent laser revival (oscillations) will thus follow. In this regime where the collective coupling rate is much larger than the cavity decay rate, N > κ, the system output is expected to behave as the central plot of Fig. 3. At low atom numbers we get N < κ and light emitted by the atoms is lost from the cavity mode too fast to be reabsorbed by the atoms. As a result the primary lasing pulse creates a significant reduction in the ensemble excitation so that no further collective decay occurs, and the atoms subsequently decay only through spontaneous emission. In this regime the output power is expected to scale as N 2 , illustrated by the rightmost graph of Fig. 3. This is the behavior expected from an ideal superradiant system [27][28][29].
Cavity detuning
With a constant atom number of N = 7.5 × 10 7 , we now vary the cavity-atom detuning ce in Fig. 6. Here a broad range of cavity-atom detunings, up to about ce = ±2 MHz, is seen to facilitate lasing. At zero detuning the primary lasing feature is maximal and subsequent revivals are suppressed. The lasing pulse delay is seen to scale as 2 ce and is thus linearly insensitive to fluctuations close to ce = 0.
For nonzero detuning the oscillatory behavior of the lasing intensity is seen to increase in frequency, scaling with the generalized Rabi frequency of the coupled atom-cavity system N = (4Ng 2 eff + 2 ce ) 1/2 [30,31]. A noticeable effect is the apparent suppression of emission revivals for any detunings ce 200 kHz. The cavity-atom detuning range for which a significant lasing pulse is produced can be broad compared with the natural transition linewidth γ . The pulse can be initiated by only a few photons in the cavity field. The inhomogeneous broadening of the atomic linewidth caused by buildup of optical power in the cavity subsequently increases the lasing range significantly. As the intensity in the cavity mode builds up, power broadening acts to increase the effective mode overlap between the field and individual atoms. This increases the effective gain in the system, both at finite and zero detuning, allowing more atoms to participate and more energy to be extracted than would have otherwise been the case. Here, the lasing range agrees well with the Doppler broadening, but for the case of a much colder atomic ensemble (∼μK) the Doppler broadening of the atomic transition is no longer of the same order of magnitude. Further simulations, however, indicate that the width of the cavity-atom detuning region which supports lasing remains wide in the μK case due to power broadening.
Velocity-dependent dynamics
During the lasing process, the Rabi frequency of each atom will vary in time due to the changing cavity-field intensity, while the atomic motion along the cavity mode leads to velocity-dependent dynamics. A typical thermal atom may move a distance of a few wavelengths during the lasing process in this temperature regime. Our simulation shows how atoms affect the lasing process differently, depending on their velocities along the cavity axis; see Fig. 7. With an angle of 45 • between the cavity axis and the pump pulse beam, the slow atoms along the cavity axis are preferentially excited during pumping. These atoms initiate the lasing process, while faster atoms may suppress the pulse amplitude by absorbing light. Different velocity classes dominate emission or absorption of the cavity photons at different times during the initial pulse and subsequent revivals. The theoretical description of the velocity-dependent behavior shown in Fig. 7 provides an improved qualitative understanding of the effect of having thermal atoms in the system. As atoms are cooled further, their behavior becomes increasingly homogeneous, and the asynchronous behavior of hot atoms no longer destroys the ensemble coherence. Figure 7(a) illustrates the velocity dynamics for the case of a resonant cavity, ce = 0. For a range of different velocity groups, this shows the rate of change of the atomic groundstate population due to interactions with the cavity field. Significantly more ground-state oscillations after the primary pulse are visible here than revivals in the emitted power in Fig. 5. Most of these oscillations see an approximately equal amount of emission and absorption, causing the energy to remain in the atomic excitations rather than being lost from decay of the cavity mode. They can, however, be seen in the phase response of the system, as illustrated in Fig. 2(b). Eventually loss from spontaneous emission into the reservoir becomes an important decay channel. Concentrating on the slowest atoms, we see emission during the full length of the primary lasing pulse. For the subsequent pulses these atoms alternate between absorbing or emitting light. If we could isolate the light from the slowest atoms, we would thus only see every second lasing revival in the output power. For atoms with larger velocities, there will sometimes be both emission and absorption during any single pulse, and we even see the tendency of some velocity groups to consistently emit (v = 0.5 m/s) or absorb (v = 0.65 m/s) throughout the full process (red dashed lines in Fig. 7). This indicates that, even for the case of a resonant cavity, the velocity groups contributing most to the emitted light during the laser revivals are not the resonant ones. In the case of a detuned cavity mode, Fig. 7(b), the initial behavior is very similar. Atoms whose Doppler detuning brings them on resonance with the detuned cavity emit throughout the primary lasing pulse, whereas others will begin to absorb. Once again some atoms (v = 0.25 m/s) appear to emit light throughout the pulse revival oscillations. The periods of zero emission or absorption between oscillations (gray) are no longer visible, because some light is always emitted and absorbed by the atoms. The minima in the emitted power are thus no longer caused by zero emission but rather by the cancellation between different velocity classes. This behavior corresponds well to the results of Fig. 6 where revivals are much more pronounced in the case of large cavity-atom detuning. Future studies of the spectral properties of superradiant light in cold-atom systems could elucidate the dependency of emitted light frequency on the finite temperature of the atoms.
V. CONCLUSION
In this paper we investigate the behavior of an ensemble of cold atoms excited on a narrow transition and coupled to the mode of an optical resonator. The enhanced interaction provided by the cavity facilitates synchronization of the atomic dipoles and results in the emission of a lasing pulse into the cavity mode. This realizes the fundamental operating principle for an active optical clock, where superradiant emission of laser light can be used as a narrow-linewidth and highly stable oscillator.
We mapped out the emitted laser power as a function of atom number in order to identify the threshold of about N threshold cav = 4.5 × 10 6 atoms inside the cavity mode. Two different scalings of laser output power in the bad-cavity regime are identified, and although our system is at the limit of the bad-cavity regime, both regimes are realized by varying the atom number.
In an attempt to quantify the decoherence effects resulting from finite atomic temperature, a Tavis-Cummings model was developed. By using detailed parameters of the pumping sequence, atomic spatial distribution, and orientation, the model is seen to reproduce the experimental results to a high degree. The emitted energy from the atoms is seen to exhibit temporal Rabi oscillations as it undulates between atomic and cavity excitation. This behavior is elucidated via the numerical simulation by investigating the change in atomic excitation as a function of atomic speed throughout the lasing pulse sequence. We see that different velocity groups behave antisymmetrically with respect to each other. Surprisingly, the velocity group mainly contributing to emission rapidly changes from resonant atoms to atoms that are more detuned with respect to the cavity. This is caused by a faster initial loss of excitation for resonant atoms. A similar effect is seen in the case of a detuned cavity. Here the atomic behavior is much more uniform across different atomic speeds, as the relation between atom number and cavity coupling becomes more homogeneous.
This system relies on pulses of lasing from independent ensembles of atoms, which limits the pulse-to-pulse phase coherence. The phase coherence would be intact between pulses if we could ensure atoms are always present in the cavity as a memory, e.g., in a continuous system. The prospect of a continuously lasing atom-cavity system based on unconfined cold atoms is intriguing because of the severe reduction in engineering requirements compared with a system based on, e.g., sequential loading of atoms into an optical lattice [21]. The velocity-dependent dynamics are important in order to understand what type of equilibrium one can expect in such a system. An investigation of the spectral characteristics of stationary atomic systems have been shown in Refs. [15,16] and are promising for the transition we have used here. Investigation of the spectral properties in an unconfined ensemble will be presented in future work.
Simulations of the system are based on numerical integration of Eqs. (3) [32]. The system is initiated with all atoms in the ground state and no coherence. The atoms are randomly distributed, assuming a Gaussian density profile in each dimension, and randomly generated thermal velocities for a temperature of T = 5 mK. These velocities are assumed constant due to negligible collision rates. The pumping is simulated by turning on the Rabi frequency χ j p , which is calculated for each atom based on their coupling to the runningwave pump pulse and its intensity. The spatial intensity distribution is estimated based on measurements of the pump beam with a CCD camera and an optical power meter. After spatial smoothing to even out noise, the data from the CCD camera are used directly in the simulations and correspond approximately to a slightly-non-Gaussian elliptic profile with waists of w g 0 = 2.7 mm and w m 0 = 1.5 mm. The minor axis of the ellipse is rotated by 35 • with respect to the magnetic symmetry axis, and the power used is P pump = 98.4 mW. The simulated time evolution of the pumping pulse ignores the ramp-up and ramp-down of the acousto-optic modulator, and assumes a square pulse for 160 ns.
The MOT coils impose a quantization axis for the m = 0 transition. The pump pulse is polarized along the MOT coil axis and, as a result, atoms near the center of the MOT field are driven less strongly by the pump pulse. In the model, this is accounted for by introducing an effective intensity driving the transition, given by I × 4y 2 /(x 2 + 4y 2 + z 2 ), where the y axis is along the MOT coil axis. In the simulations, the MOT cloud center is offset by 2 mm with respect to y = 0 based on measurements of the energy splittings of the magnetic states. The pumping leaves the ensemble inhomogeneously excited, with the excitation being highest for atoms slightly away from the beam axis and for the slowest atoms along the beam axis. On average, 85% of atoms are excited within the cavity waist at the end of the pump pulse. Atomic spontaneous decay at a rate γ and leak of cavity photons through the mirrors at a rate κ are accounted for by Liouvillian terms. Throughout the simulation each atom interacts with the cavity mode with different coupling rates g j c depending on their positions relative to the Gaussian cavity mode waist (w 0 = 0.45 mm) and the standing wave structure. We calculate the cavity output power from one mirror (comparable to our experimental observations) by P =hω c nκ/2.
The ensemble temperature is estimated from Dopplerspectroscopy, following the approach in Ref. [22], and timeof-flight measurements to be T = 5.0(5) mK. The atom number is estimated from fluorescence measurements and calibrated by comparing the emitted photon number during a lasing pulse to the increase in fluorescence. The ensemble spatial distribution is estimated by shadow imaging and calibrated fluorescence distribution. The size of the atomic ensemble is measured to be σ = 0.9(2) mm, assuming a Gaussian distribution, but is pulled to σ = 0.8 mm in the simulations to ensure agreement between the output power levels. This is the only adjusted value and is within the measurement uncertainties. These parameters are somewhat degenerate, but we choose to adjust σ . We attribute some discrepancy between experiment and simulation to the model assumption of a 3D Gaussian atomic distribution. The experimentally realized atomic ensemble is irregular in shape with some inhomogeneity. | 8,075.2 | 2019-03-29T00:00:00.000 | [
"Physics"
] |
Classifying Global Symmetries of 6D SCFTs
We characterize the global symmetries for the conjecturally complete collection of six dimensional superconformal field theories (6D SCFTs) which are realizable in F-theory and have no frozen singularities. We provide comprehensive checks of earlier 6D SCFT classification results via an alternative geometric approach yielding new restrictions which eliminate certain theories. We achieve this by directly constraining elliptically fibered Calabi-Yau (CY) threefold Weierstrass models. This allows bypassing all anomaly cancellation machinery and reduces the problem of classifying the 6D SCFT gauge and global symmetries realizable in F-theory models before RG-flow to characterizing features of associated elliptic fibrations involving analysis of polynomials determining their local models. We supply an algorithm with implementation producing from a given SCFT base an explicit listing of all compatible gauge enhancements and their associated global symmetry maxima consistent with these geometric constraints making explicit the possible Kodaira type realizations of each algebra summand. In mathematical terms, this amounts to determining all potentially viable non-compact CY threefold elliptic fibrations at finite distance in the moduli space with Weil-Petersson metric meeting certain requirements including transverse pairwise intersection of singular locus components. We provide local analysis exhausting nearly all CY consistent transverse singular fiber collisions, global analysis treating all viable gluings of these local models into larger configurations, and many novel constraints on singular locus component pair intersections and global fiber arrangements. We also investigate which transitions between 6D SCFTs can result from gauging of global symmetries and find that continuous degrees of freedom can be lost during such transitions.
Contents
1 Introduction Six-dimensional superconformal field theories (6D SCFTs) are uniquely well-suited to shed light on the structure of the string landscape. Nearly two decades after the surprising appearance of the first arguments demonstrating their existence [1][2][3] resolved earlier proposals suggesting they might exist in principle but must be non-Lagrangian [4], renewed interest over the last several years [5][6][7][8][9][10][11][12][13][14][15] has enabled classification results for these theories [16,17] relying heavily on tools from F-theory. Among the features of SCFTs these classifications leave implicit are their global symmetries. Our primary focus here is to provide a characterization of these symmetries for conjecturally all 6D SCFTs realizable in F-theory without frozen singularities (i.e., those without O7 + planes in the language of type IIB string theory) [18][19][20] that have recently been conjecturally classified [16]. Global symmetries play a central role in recent lines of inquiry including investigations concerning renormalization group (RG) flows of 6D SCFTs [21,22], but a systematic treatment has remained lacking. We outline the general structure of 6D SCFT global symmetries, provide summary rules to determine global symmetry maxima for each known 6D SCFT. We also enable explicit listing of the maxima via implementation of an exhaustive search algorithm making manifest the potentially viable Kodaira type realizations of each gauge and global symmetry summand which may occur for a given SCFT base B ∼ = C 2 /Γ determining a family of F-theory models where Γ is a discrete U (2) subgroup meeting stringent requirements [17,23]. In the process, we show that some of the theories appearing in that classification can be eliminated. We also carry out a check that our methods suffice to otherwise match the previously reported "Atomic Classification" [16] via manifestly geometric constraints without appeal to Coulomb branch anomaly cancellation machinery. With these tools at our disposal, we then briefly examine the SCFT transitions obtained by promoting global symmetry subalgebras to gauge summands and find that continuous degrees of freedom can be lost during these "gaugings." The rest of this note is organized as follows. We give a general overview of relevant background material and our approach in Section 2. In Section 3, we review several features of Weierstrass models in F-theory and previous 6D SCFTs global symmetry classification results for the small class of theories for which these have been treated systematically. We then detail an algorithm determining these symmetries for general 6D SCFTs based upon restrictions we derive in Appendix A and other previously established constraints [24,25]. We turn in Section 4 to a discussion of novel restrictions on 6D SCFT bases and their gauge enhancements providing slight refinements of the classification from [16] determined using our algorithm. In Section 5 we summarize the geometrically realizable global symmetries of 6D SCFTs in terms of the permitted length two subquiver Kodaira type assignments for each valid base. We then discuss the general structure of 6D SCFT global symmetries via summands arising from the "atomic" base decomposition constituents in Section 6. The transitions between 6D SCFTs that can be obtained by occupying the degrees of freedom permitting global symmetry summands to instead allow further gauge summands (i.e., by "gauging global symmetries") are discussed in Section 7. Concluding remarks and an outline of applications and open problems appear in Section 8. Instructions for using the accompanying computer algebra workbook appear in Appendix B. Finally, tables summarizing global symmetries for several key cases helping to complete our analysis appear in Appendix C.
Overview
Global symmetries of theories with 1D Coulomb branch and those without non-abelian gauge algebra (for cases with B containing a single compact singular locus component) have previously been treated [24,25]. Here we extend that approach which involves determining geometrically realizable SCFT global symmetries via the properties of non-compact Calabi-Yau threefold elliptic fibrations of the form π : X → B underlying F-theory 6D SCFT models. Flowing to a conformal fixed point after taking a limit in which all compact components of the singular locus are contracted yields a CFT whose geometrically realizable global symmetries are constrained by the permissible non-abelian algebras which can be carried on non-compact components of the singular locus of X via a correspondence of these algebras to global symmetries of the SCFT dating to early F-theory descriptions of the small E 8 × E 8 instanton [26][27][28] also used recently in a number of works [15][16][17].
The geometrically realizable global symmetry maxima of F-theory 6D SCFT models for the cases treated previously [24,25] have only a single compact singular locus component. For those theories, these maxima are subalgebras of the Coulomb branch global symmetry algebras permitted via field-theoretic gauge and mixed anomaly cancellation requirements which govern the gauge enhancement prescriptions of the "Atomic Classification" [16]. We find this is true more generally and appears to hold for all 6D SCFTs admitting an F-theory description and having no frozen singularities.
While it has been argued that all continuous SCFT global symmetries must be gauged upon coupling to gravity [29], precise rules for determining these degrees of freedom for an arbitrary 6D SCFT and provision of gauging consistency conditions before and after such coupling have not been systematically treated in cases with Coulomb branch of dimension larger than one. While we will not discuss this global symmetry gauging mandated by coupling to gravity in any detail, we will briefly discuss gauging of global symmetries taking one SCFT to another. However, our primary focus in this note is simply to constrain the manifest geometrically realizable flavor symmetries of each F-theory 6D SCFT model. Note that the construction we study identifies these degrees of freedom in the UV though such models only give rise to a conformal theory after RG flow. This means that the actual global symmetries of an SCFT associated to each model may differ from those degrees of freedom we shall identify. As shown in earlier work [24,25], these geometrically realizable global symmetries are (in some cases strictly) more constrained than those permitted on the Coulomb branch of the theory. Typically, the latter constrain the actual global symmetries of a CFT since these also act on the Coulomb branch of the theory. However, additional field theoretic constraints can in some cases provide reductions beyond Coulomb branch gauge and mixed anomaly cancellation prescriptions, for example when we have su(2) gauge algebra [30].
The approach we take reduces our central task to a mathematically well-defined problem. This consists of providing a series of constraints on non-compact elliptically fibered Calabi-Yau threefolds we study by means of a singular elliptic fibration π determined by Weierstrass equation of the form with auxiliary data detailed in Section 3.1 and f, g locally defined polynomials on a complex surface. More precisely, f, g are sections of O(−4K B ), O(−6K B ), respectively, with K B the canonical bundle over the base B, as above. The constraints we obtain involve a careful analysis of local models for these elliptic fibrations. We treat sufficiently many cases that it is convenient to constrain the global Weierstrass models through implementation of exhaustive computer search routines.
Note that we do not claim the existence of globally consistent F-theory models achieving the flavor symmetry maxima we report. While proofs to that end are often possible and even trivial in sufficiently many cases that one might expect only limited tightenings of these maxima may be obtained, doing so in full generality is delicate and beyond the scope of this work. Among the key difficulties in demonstrating global consistency of the models underlying the maxima we report is the construction f, g in a neighborhood of a compact curve having intersection with multiple transverse type I n or I * n fibers. When only a pair of transverse curves is considered, we can often bypass explicit checks using a suitable coordinate system or a sequence of blow-ups. However, when three or more transverse curves are present explicit construction is often highly involved. Further work is needed to show that global symmetry inducing non-compact curves meeting distinct compact singular locus components remain uncoupled in cases where sequences of blow-ups do not immediately suffice to this end. (For example, determining via explicit construction whether Σ in the base 2 Σ ′ 1 Σ having Kodaira types I 1 and I 0 on Σ ′ and Σ, respectively, can support simultaneous transverse intersection with the triple of curve stubs I 6 ,I 3 ,I 2 permitted as a point singularity collection along Σ by "Persson's List" [31] is rather tedious. Advancing to analogous cases where we modify the type on Σ ′ and replace the type on Σ curve by I * n often presents similar but magnified challenges.) Hence, except where the maxima we report dictate only trivial global symmetry is permitted, these algebras should strictly speaking be viewed as upper bounds on the actual global symmetries of each theory in the UV.
Note that restrictions on the geometry of Weierstrass models are seemingly necessary to reach the precise conclusions of the "Atomic Classification" [16]. It is hence natural to ask whether a parsimonious approach giving a "purely geometric" characterization of all known constraints on 6D SCFT F-theory models is possible. In this work, we provide strong evidence towards answering this question in the affirmative.
Our methods extend earlier work [24] to the general case, thus resulting in a geometric classification of gauge and flavor symmetries realizable in F-theory models for all 6D SCFTs of the aforementioned classification [16]. We shall proceed without appeal to field-theoretic tools based on Coulomb branch anomaly cancellation requirements involving hypermultiplet count pairing restrictions. This enables us to provide consistency checks on results derived via the latter approach where known. We rely instead on algebro-geometric analysis of elliptically fibered Calabi-Yau threefolds with our efforts focusing on local polynomial expansions of the sections f, g occurring in (2.1).
To enable explicit listing of global symmetry maxima for each gauge enhancement of any fixed 6D SCFT base, we derive a series of constraints enabling our calculations to proceed via computer algebra system. These fall into three main categories. First, we determine which pairs of curves in B with generic fibers having specified Kodaira types are permitted to intersect without introducing singularities so severe that Calabi-Yau resolution of the fibration would be prevented. Second, we analyze local models for transverse singular curve collisions to determine the minimal "intersection contributions" (giving counts towards certain degrees of freedom along each curve) from every relevant permitted intersection. These both entail generalizing previous analysis limited to certain single curve theories [24,25]. The final category concerns elimination of certain arrangements multiple singular locus components.
Together these tools enable us to constrain 6D SCFT gauge and global symmetry algebras independently of and more strongly than Coulomb branch gauge and mixed anomaly cancellation techniques [16,24,25]. Our approach is parsimonious in that we obtain constraints via a manifestly geometric approach based on inspecting elliptic fibrations in correspondence with 6D SCFTs rather than via the hybrid approach underlying [16] which invokes representation theoretic anomaly cancellation tools supplemented by geometric restrictions.
We discuss the previously reported gauge enhancements and bases [16] which our methods eliminate in Section 4. Gauge enhancement prescriptions for certain constituents of SCFT bases, namely "links," obtained using the accompanying computer algebra routines are compared with outputs of routines provided in conjunction with [16] after minor edits aimed to match gauge prescriptions therein. Extending comparisons for links to those for general 6D SCFT bases yields novel restrictions in certain cases.
While established local analysis including intersection contribution data [24,25] plays a key role, our route is complicated by the following issues meriting significant extension of earlier treatments. The theories addressed therein involve models having a singular locus with only one compact curve Σ associated to a simple gauge algebra g. Determining which global symmetry algebras are realizable in such models can be treated via consideration of non-compact curve collections {Σ ′ i } with each Σ ′ i transverse to Σ and carrying nonabelian simple Lie algebra g ′ i . Relatively maximal algebras arising as ⊕ i g ′ i from a permissible configuration are identified as global symmetry maxima via a limit with Σ contracted. For such cases, analysis concerning non-compact curves collections with ⊕ i g ′ i potentially maximal suffices while configurations resulting in "small" ⊕ i g ′ i are irrelevant. To treat more general theories with multiple compact components {Σ i } in the singular locus giving rise to gauge algebra ⊕ i g i , we consider collections of non-compact curves having transverse intersection with some Σ i . While previously determined constraints on maximal algebra yielding configurations transverse to Σ i [24,25] are helpful, the curves meeting Σ i now include any compact neighbors Σ j =i which may carry "small" algebras corresponding to gauge summands comprising part of the data specifying a theory. Such local configuration hence may not be among the previously studied maximal configurations. This presents the significantly more involved combinatorial problem of finding the maximal configurations meeting each fixed Σ i given not only the Kodaira type along Σ i but also those of any Σ j meeting Σ i with global considerations introducing various subtleties. Further complicating our task is that intersection with a compact transverse curve often requires distinct local analysis an analogous intersection with a non-compact curve of the same Kodaira type and can yield different intersection data.
The minimal orders of vanishing determining Kodaira types become insufficient to realize a permitted type assignment {T i } on each {Σ i } in the general case, e.g. (A.2). This leads us to develop local models for many transverse intersections of curve pairs with designated orders of vanishing nearly exhausting all permissible transverse singularity collisions for CY threefold fibrations. The algorithm we supply incorporates this analysis to yield a significant step towards explicit classification of such fibrations including a treatment of codimension-two singularities.
In addition to enabling explicit listing of gauge enhancements and their flavor symmetry maxima, we shall discuss the general structure of these symmetries. We provide two complementary prescriptions involving rules dictating the flavor symmetry maxima that may occur for each 6D SCFT in terms of Kodaira types on curves determining a fixed gauge enhancement. The first consists of rules summarizing these maxima in terms of length two curve chains and constraints imposed additional neighboring curves. This summarizes results obtained with computer routines and organizes them into constraint equations and structured listings. Second, we detail these maxima for longer bases in terms of contributions arising from each of the building blocks in the "atomic" decomposition of 6D SCFT bases into "link" and "node" constituents [16].
The strategy
Our approach generalizes earlier work [24] to cases with discriminant locus consisting of more than one compact curve. The first ingredient involves imposing the restrictions derived therein for single curve cases with associated non-abelian gauge algebra along with the remaining single curve cases without non-abelian gauge algebra treated in [25].
We next check a given global symmetry summand inducing configuration for consistency with the number of vanishings of f, g, and ∆ required along each curve in a quiver. There are a handful of additional restrictions we shall impose including the elimination of a few configurations which are barred via earlier analysis (appearing in Appendix E.3 of [16]) and a similar discussion we derive here in Appendix A.
Before moving to treat the local analysis and other tools we shall require, we begin in this section with a review of our general approach for determining global symmetry maxima within F-theory on purely geometric grounds (i.e., sans anomaly cancellation tools). We also pause to detail an algorithm and our accompanying implementation which allows us to reach conclusions through an exhaustive search of configurations meeting known restrictions outlined here and those derived in Appendix A.
We shall make use of the maximal configurations for single curve theories studied previously [24,25] updated with tightenings in certain cases having non-abelian gauge algebra illustrated in Table 3 and in one case with trivial gauge algebra discussed in Section 3.3.2. In the remainder of Appendix A, we turn to a detailed local analysis on the number of vanishings required along each compact curve for all possible pairwise intersections of discriminant locus components which we shall encounter. In the process we also uncover various forbidden curve intersections. These restrictions are central to the approach we describe in this section, but we postpone their discussion until we have outlined the task at hand since their details are somewhat involved.
Weierstrass models and gauge algebras in F-theory
The essential geometric ingredient for an F-theory formulation of an SCFT is a Weierstrass model determining a singular elliptic fibration given by π : X → B with fibers determined by a Weierstrass equation of the form (2.1) with B ∼ = C 2 /Γ, as above, in the case of 6D F-theory. The discriminant of this equation, is a section of O(−12K B ) with its "discriminant locus," determining where the fibration is singular. The types of singularities that are permitted without being so severe as to prevent a Calabi-Yau resolution X are given by the Kodaira classification [32][33][34] with summary appearing in Table 1. "Non-minimal" fiber types indicated in the final row have resolution of singularities containing a curve which can be blown down. Blow-down of such a curve down leads to a new Weierstrass model with orders of vanishing of (f, g, ∆) along Σ reduced by (4,6,12). We hence discard such cases without loss of generality. Similarly, a two-dimensional Weierstrass model is minimal at P defined by {σ = 0} provided ord σ=0 (f ) < 4, or ord σ=0 (g) < 6. To reiterate, we confine our study in this work to those models lacking non-minimal points. (From the Calabi-Yau condition, we could have blown up such points. Without loss of generality, we take such a model as our starting point.) non-minimal -- Table 1. Singularity types with associated non-abelian algebras.
The precise gauge algebra which occurs in the cases of ambiguity is determined by inspection of the auxiliary polynomials appearing in Table 2 where Σ is a curve along the singular locus {z = 0}; larger gauge algebras result with more complete factorizations. A significant portion of our analysis concerns which splittings can take place in various intersection arrangements. Several existing results towards this end [16,24,25] are crucial to our work. type equation of monodromy cover I s/ns n , n ≥ 3 ψ 2 + (9g/2f )| z=0 IV s/ns ψ 2 − (g/z 2 )| z=0 I * s/ss/ns 0 Table 2. Monodromy cover polynomials determining non-abelian gauge algebras.
The association of non-abelian algebras to Kodaira types indicated in Table 1 including non-simply laced cases dates to [35]. The essential idea is that the resolution of singularities X for a given Kodaira type gives rise to a graph of curves we may naturally associate to a Dynkin diagram determining a Lie algebra g. This same g arises as the gauge algebra of a corresponding physical theory associated to an F-theory model compactified on a Calabi-Yau threefold X with metric furnished via work of Yau [36]. Cases in which non-simply laced algebras may occur further involve a cover of X with properties determined by the auxiliary "monodromy cover" polynomials appearing in Table 2. These together with the local polynomial expansions for certain Kodaira types as detailed previously (in Appendices A,B of [24]) are the main objects involved in our discussion.
Global symmetries from F-theory geometry
The field-theoretic and geometrically realizable global symmetries of 6D SCFTs with an F-theory model having discriminant locus with a single compact component were studied in previous work [24,25] to treat 1D Coulomb branch cases and certain trivially gauged theories. This leaves us to focus on theories with models having more than one compact component after providing a few tightenings of earlier results detailed in Sections 3.3.
Two ingredients are essential in our derivations. We make heavy use of earlier results giving expansions of f, g, and ∆ that determine general forms for expansions giving local models of Kodaira type I * n and I n curves [9,24]. We also rely on Tables 41,42 taken (up to minor corrections) from [24] which give forbidden curve intersections and intersection contributions obtained from certain local models. Though we require significant generalizations derived in Appendix A, these local intersection models and contribution data play a key role. We also will use the maximal configurations derived in those works and several geometric restrictions on curve pair intersections derived in [16].
Setup and notation
Let us begin by reviewing aspects of our setup, notation and terminology largely based on [16,24].
Global symmetries arise in F-theory via non-compact components of the discriminant locus. Coupling of the associated gauge group on each of these curves becomes zero after a rescaling, hence leading to a global symmetry [24]. Our main focus here involves extending earlier discussions [24,25] concerning symmetries constructed in this way for cases with discriminant locus containing a single compact curve (determining an SCFT with gauge algebra consisting of at most a single simple Lie algebra summand) to the much broader collection of theories appearing in [16]. The latter typically have bases with multiple compact components of the discriminant locus. We shall see that this extension is simple in principle, but presents significant combinatorial challenges.
Let us pause to discuss two limitations of our approach also noted elsewhere [24,25]. First, the Tate's algorithm prescription [9,37] may not capture the most general possible forms for Kodaira types I 7≤n≤9 . Since we shall rely on the Tate forms in these cases, it is possible that a limited class of configurations may be missed by our approach. Second, where local analysis permits non-compact curves to allow any monodromy designation, we report global symmetry maxima with those possibilities giving the largest algebras consistent with our analysis though additional restrictions could in principle be obtained in some cases. However, the maxima report always appear to be subalgebras of the Coulomb branch global symmetries which we generally expect to find as subalgebras of the actual global symmetry algebra for a given theory. This suggests that only limited further reductions from these maxima may be obtained. Note that multiple global symmetry maxima persist in many cases even when monodromy designations are unambiguous both in cases for fixed geometric realization of a given gauge algebra (e.g. su(2) on a −1 curve realized by Kodaira type IV ns with maxima appearing in Table 3) and in cases where we compare all geometries for fixed base realizing a given gauge assignment (e.g. su(3) on a −1 curve with su(10) and su(3) ⊕3 maxima, as read from Table 6.1 of [24]).
We now pause to review a few details of the "Atomic Classification" [16]. Theories are classified therein by detailing all possible connected trees of compact curves Σ i ⊂ {∆ = 0} ⊂ C 2 having Σ i ·Σ i = m i over which the fibration is singular. This "atomic" decomposition into permitted subgraphs given by the values m i , rules for their gluing and the gauge summands from each Σ i dependent on nearby attachments leaves certain ambiguities in the Kodaira types which can realize a given gauge summand, e.g., types I 0 , I 1 , and II each yield trivial summand. One of our secondary objectives here is to resolve these ambiguities and provide a geometric check of this classification.
Contraction of all Σ i yields the orbifold base B ∼ = C 2 /Γ with Γ a discrete U (2) subgroup determined by the values m i or alternatively the values m i obtained after iterated blowdown of all −1 curves to yield an "endpoint." These were classified in [17] and restructured in [23]. Distinct curves Σ j must have transverse intersections in at most a single point. The curves {Σ i } must be contractible at finite distance in the Calabi-Yau moduli space with Weil-Petersson metric dating in this context to [38,39] which leads to a pair of conclusions via [40,41]: i) Σ j ∼ = P 1 with negative self-intersection (that is, Σ i · Σ i < 0), and ii) the graph consisting of the Σ j must have positive definite adjacency matrix given by (3.3) To simplify notation, we will often omit the minus signs giving curve self-intersections with the understanding that all self-intersections are negative (e.g., in place of the chain −3, −2, writing instead 3, 2). Any two digit self-intersections will be given with parentheses where ambiguous. For example, when writing 1, 12, 1 without commas, we shall write 1(12)1.
A key ingredient for our work is the general result of [16]: with few exceptions, every 6D SCFT base in F-theory is of the form where g i ∈ {4, 6, 7, 8, 9, (10), (11), (12)} are "DE-type" nodes (referring to the gauge algebras supported on these curves), I ⊕l are subgraphs of the form 122....2 consisting of l curves called "instanton links", and S i , L i are "side links" and linear "interior links", respectively; attachment to DE-type nodes occurs via the exterior −1 curves when possible. Truncations of this general form are also permitted. Briefly, allowed bases are linear chains of curves with branching possible only near the ends. We refer to [16] for the details of two exceptions to the above structure. The first allows a limited class of bases with a single 4-valent curve that are linear away from this curve. The second allows up to four instanton branches for certain bases with precisely five nodes.
We will now review and extend the setup, notation, and terminology introduced in [24]. We let the compact irreducible effective divisors of (3.3), namely Σ i ⊆ {∆ = 0} lie at {z i = 0} and designate their self-intersection numbers via m i = −Σ i · Σ i . Let P i,k denote the intersections of Σ i , Σ k for i = k when non-empty.
We shall consider non-compact collections of curves Should we embed a neighborhood of the curves Σ i in some larger space, it is conceivable that one could derive more stringent requirements on global symmetries arising from the Σ ′ i,j . Nonetheless, configurations in such contexts must still obey the constraints we shall derive which are strictly local in i up to propagation of these local constraints along the compact curves of the quiver (in the sense detailed at the start of Appendix A) due to purely local analysis of intersections. Consequently, no extra freedom is introduced for example by allowing the Σ ′ i,j to have intersection with multiple Σ i .
abbreviating these quantities as f , g,∆ where unambiguous and noting that these are sections of the line bundles We will refer to∆ as the residual discriminant. When Σ is any of the compact divisors Σ k , setting m = m k we have since Σ ∼ = P 1 with genus g = 0; we define residual vanishings on Σ as the quantities which count the number of zeros with multiplicity of the restrictions to Σ of f , g and∆, respectively. To improve the naïve constraints on Σ i , Σ i,j reading we begin by defining for an intersection of two curves Σ, Σ ′ at P the intersection contributions from Σ ′ to Σ towards the residual vanishings given by the quantities abbreviating these as (ã P ,b P ,d P ) Σ . Note that strict inequalities iñ often follow from local analysis and that non-minimality at the intersection requires one of ord P f < 4 , ord P g < 6 . (3.12) Constraints tightening (3.9) using intersection contributions following from local analysis then readã where P i,Σ are relabellings of any intersection points between Σ and other components of the discriminant locus, namely {Σ j } j =i and {Σ ′ k,j } j∈J . Further restrictions come from consistency checks for gluing these local models into globally well-defined configurations.
We shall employ the terminology of [24] referring to curves such that Σ at {z = 0} has discriminant of the form for some p > 0 and z ∤ f as having odd type and indicate the orders of vanishing as (a, b + B, d) Σ , where B = 0, 1, · · · . For such curves, the second term in the right hand side vanishes identically upon restriction to Σ and henced P = 3ã P . When instead (a+ A, b, d) Σ , A = 0, 1, · · · , the residual discriminant has the form for some p > 0, and z ∤ g makingd P = 2b P , we refer to such curves as even type. The remaining cases with Kodaira types I n and I * n are termed hybrid types, these having both f z=0 and g z=0 involved in contributions to vanishings of the residual discriminant.
Note that the analysis of [24] for single curve theories addressed the general cases having any permitted A ≥ 0 and B ≥ 0 via the observation that maximal global symmetry algebras arise for a given Kodaira type when A = B = 0. Theories having multiple compact components of ∆ however often require some Σ i have A > 0 or B > 0 to realize a maximal configuration making our analysis somewhat more demanding. For example, consider the gauge enhancement with type assignment given by (3.16) Observe that the maximal global symmetry algebra [g GS ] which can arise is e 6 realized by a type IV * s fiber meeting the −1 curve. This requires A ≥ 1 in (3.16) by (3.9). Hence, to search for consistent geometric assignments yielding maximal algebras, it becomes relevant to consider non-zero A, B. In fact, non-zero A, B values are often required by a gauge enhancement before any global symmetry considerations as noted above, e.g. the configuration of (A.2). Together, it is hence natural to study nearly all A, B local intersection models in development of a brute force algorithmic approach to finding maximal configurations.
Theories with one compact singular locus component
We now review details of global symmetry algebra maxima realizable in F-theory for single curve theories. Our approach to treat arbitrary 6D SCFT bases found in F-theory constructions via our algorithm requires also the data consisting of the maximal transverse configurations for single curve theories as detailed in [24,25]. Many of these configurations do not lead maximal algebras for single curve theories. They do, however, constrain the transverse configurations we encounter in trying to do determine these maxima for more general bases.
Non-trivially gauged theories
Global symmetries realizable in F-theory for those cases where a single compact curve in the base carries non-abelian gauge algebra first appeared in [24]. We use these restrictions with a few new tightenings for type III and IV curves that we indicate with a ' †' symbol in Tables 3. In cases with non-abelian gauge algebra, the Coulomb branch global symmetry predictions from field theory are remarkably close to the constraints we find from F-theory geometry, the latter being more restrictive in some cases. 1 type along Σ algebra on Σ −Σ 2 max. global symmetry algebra(s)
Gaugeless theories
The global symmetries and maximal transverse configurations which may arise in F-theory models lacking non-abelian gauge algebra (i.e., where the discriminant locus contains only a single trivially gauged compact curve) first appeared in [25] as Tables 3,5,7, thus completing a characterization of the geometrically realizable global symmetries for single curve theories. We shall use the fact that a further tightening is possible for an I 1 curve; the flavor symmetry maximum coming from an I * n transverse fiber with n = 4 can be constrained slightly by observing that such a fiber must have monodromy, yielding a reduction for one of the maxima from so (16) to so (15). We shall also use the maximal configurations and tabulations of intersection contributions (appearing in [25] as Tables 4,6) in these gaugeless cases. Generalizations of this data to cases with A, B > 0, are treated in the Appendix A. Arguments yielding these results appearing in [25] use the same approach we take here and in [24]. We extend this analysis in part for treatment of arbitrary bases since gaugeless compact components of the discriminant locus often can appear in longer bases only if A, B > 0. We detail the relevant intersection contributions and forbidden intersections for such curves. This extension plays a key role in our algorithm determining global symmetry maxima consistent with the other restrictions derived in Appendix A.
A few comments in the case of a single type I 0 curve Σ may be helpful. The maximal configurations from [31] are those permitted as collections of singular points along Σ, but we impose stronger requirement that these arise from a transverse curve configuration. As with many other cases we study here, these maximal configurations may place distinct restrictions reduction resolves a mismatch with geometrically realizable global symmetries to bring agreement with predictions from F-theory. Whether yet unknown further constraints on field theory might allow precise matching in all cases remains an intriguing question. on the singularity type of the compact curves they intersect. As an example, among the maximal gauged curve configurations above for a type I 0 are [III * ,III] and [IV * s ,IV s ], from which the algebras e 7 ⊕ su(2) and e 6 ⊕ su(3) arise. These require B > 0 and A > 0, respectively where (a, b, d) Σ = (A, B, 0). Since A, B cannot simultaneously be non-zero in type I 0 , a given fixed compact component of the discriminant locus with designated orders of vanishing permits only one of these transverse curve collections. In this sense, they can only arise in distinct geometries. This phenomenon is one motivation for tracking the geometric data captured by the orders of f, g, ∆ in our work. In our example, we have a larger algebra in which these are both subalgebras, namely e 8 . We might hastily conclude the geometrically realizable global symmetry algebra for all models with a single I 0 curve is then always e 8 . However, when B > 0, intersection with any e 8 bearing curve is nonminimal. Our approach intends to enable a broader determination of whether the distinct global symmetries we find may arise from distinct SCFTs with their data specified by the geometry of the fibration at a level of precision beyond specification of the gauge algebra. For this reason and to confirm preliminary existence checks for geometric realizations of each configuration, we further track the orders of f, g, ∆ along each of the compact and non-compact components of the discriminant locus.
Algorithm summary
In this section we describe an algorithm we have implemented via series of computer algebra system routines. These routines are intended to allow adaptation for other purposes, though the primary focus in their development is the computation of 6D SCFT global symmetries. There are three main groupings of methods. The first computes 6D SCFT gauge enhancements from geometric considerations. The second handles semisimple Lie algebra inclusion rules. The final grouping determines curve configurations leading to geometrically realizable global symmetry maxima. Several subroutines dual a purpose role in the first and third groupings. The reason for this is that many of the restrictions on which Kodaira types may be paired in curve collisions often hold even when one of the curves is non-compact.
A summary of the algorithm determining gauge enhancements and global symmetries for each enhancement on a quiver given by the values m i follows. 2 Certain subroutines are more elaborate than indicated to allow efficiency boosts and result formatting including 'sewing' results by combining shorter quivers together to treat longer quivers, storage of partial results during computation, writing data to file for later use and presentation in text, and enabling parallel computation. Since these are non-essential to the underlying algorithm, we will not further discuss these aspects here, instead providing the precise work-flow via inclusion of our implementation with arXiv submission of this note.
Given a quiver Q specified via the values m i , the first leg of the algorithm finds all compatible gauge algebra assignments on Q while tracking the Kodaira types T i yielding gauge algebras ⊕ i g i . For each of these type assignments T ∼ {T i }, the second leg determines all geometrically realizable global symmetry maxima for T .
This process begins by assigning all possible orders of vanishing (a, b, d) Σ i on each Σ i in Q compatible with naive non-minimality constraints (3.9) up to user specified maximum values A max,T , B max,T to be allowed for each Kodaira type T. Each order assignment {(a, b, d) Σ i } is then paired with every naïvely permitted monodromy assignment appearing in Table 1. Intersection contributions are computed for each T i in each assignment T to check (3.13) with failing assignments discarded. Any remaining T are checked against restrictions on pairwise intersections of compact curves. Each length three subquiver is then checked against restrictions on maximal configurations involving such triplets. If Q is a branching quiver, a final check against restrictions on transverse trio configurations mostly derived in [24,25] is applied. The resulting list of permitted T determines the gauge enhancements we allow for the given quiver.
The second leg of the algorithm constrains any geometrically realizable global symmetries by finding transverse configurations {Σ ′ i,j } permitted for each assignment T . This beings by assigning for each fixed T every transverse collection {Σ ′ i,j } of non-trivially gauged non-compact curves, again in two phases with the second involving decoration by a monodromy assignment with constraints on configurations provided by (3.9),(3.13), respectively. Each collection is then checked against restrictions on pair intersections, transverse duets and triplet degenerations, and certain larger maximal configurations slightly generalizing those determined in [24]. The remaining transverse configurations {T ′ ∼ {Σ ′ i,j }} T ′ ∈J T determine all possible geometrically realizable global symmetry summands for fixed T .
Each configuration T ′ for a fixed T is then explicitly associated to a global symmetry algebra summand g GS,T ′ ∼ = ⊕ i,j g i,j . Any relatively maximal algebras among {g GS,T ′ } T ′ ∈J T are determined via an implementation of semisimple Lie algebra inclusion rules and analysis of maximal semisimple Lie subalgebras from [42]. These are the global symmetry algebras we permit for the enhancement T . Any T ′ with strictly smaller associated algebras are discarded and the resulting list returned to give the explicit geometric realizations of any global symmetry maxima for T . In making the maxima comparisons, we put all T having differing A i , B i assignments on the same footing provided the Kodaira types are matching.
A repackaged version of the program provided in conjunction with [16] is included along with methods allowing direct comparisons of enhancements described in the literature and those we compute via other methods enabling tracking of geometric data not previously available. We generally find agreement for the enhancements permitted on links with those of [16] and consequentially also for most 6D SCFTs with a handful of exceptions eliminated via geometric constraints on F-theory bases as detailed in Section 4. The input Q for our implementation is not confined to links; only the finitely many 4-valent bases are barred.
Flavor summand locality
We now pause to provide an example illustrating one result of our analysis. Briefly, global symmetry contributions are local in quiver position for fixed Kodaira type but not for fixed gauge algebra. Each assignment T α for α ∼ 2215 with g gauge ∼ = su(2)⊕e 6 appears in Table 4, where n 0 denotes the trivial algebra, flavor summands appear below the curves on which they arise, and the right-most column contains the flavor symmetry maxima for a given type assignment. While we can realize a g 2 global symmetry summand from the left-most −2 curve and an su(3) summand from the −1 curve, these options are mutually exclusive, i.e., all global symmetry algebras for an su(2) ⊕ e 6 gauge theory realizable in F-theory via this quiver are strictly smaller than su(3) ⊕ g 2 . Table 4. All global symmetry maxima for 2215 with gauge algebra su(2) ⊕ e 6 along with each possible Kodaira type assignment to the quiver realizing this gauge algebra.
For a quiver α, we thus fix a Kodaira type assignment T α before addressing which global symmetries are geometrically realizable. We can then compare results among all T α having shared gauge algebra, as above. Somewhat surprisingly, while various constraints are "non-local" in curve position including the permitted orders (a, b, d) Σ i (as illustrated in Table 19), the g global maxima for fixed T α instead appear to always include every relatively maximal product of any permitted curve contributions. In contrast, this ceases to hold when varying T α for g gauge fixed as illustrated by the above example.
Distinguished Calabi-Yau threefolds from global symmetry maxima
In this section we introduce a distinguished class of elliptically fibered CY threefolds determined by global symmetry maxima of 6D SCFTs. We examine the role of these symmetries in the field-theory to geometry "dictionary" and show that a nearly bijective correspondence results when including these symmetries among the data specifying an SCFT.
Briefly, the idea is to consider which singular elliptically fibered CY threefolds π : X → B give F-theory models for a 6D SCFT having data (g gauge , g global , Γ), where g gauge and g global are gauge and global symmetry Lie algebras, respectively, and Γ discrete U (2) subgroup gauge fields. As discussed in [16], Γ determines a unique quiver {m i } that is the minimal blowup of the endpoint associated to Γ permitting gauge enhancement given by g gauge . Such quivers do not exhaust those for F-theory models with matching gauge content, i.e., dropping g global from the SCFT data leaves the geometry severely underspecified.
Models compatible with (g gauge , Γ) often allow many choices for {m i }, T with geometrically realizable global symmetry algebras {g global } so different that their common merger at a conformal fixed point upon renormalization would be highly surprising. When instead tentatively regarding g global as an essential ingredient in specifying a CFT, matching models become so constrained that we find a nearly bijective map from 6D SCFTs to corresponding CY threefolds. The distinguished collection of all threefolds determined via this correspondence from the 6D SCFT landscape appears to be a natural candidate for further study.
We now turn to an example before further discussing this correspondence more generally. Consider the collection of theories having g gauge ∼ = e 6 . The compatible bases include: 1,2,21,3,31,131,4,41,141,5,51,151,512,1512. The number of curves in the base is not fixed by g gauge , nor by also specifying Γ. We can say more upon fixing g global . We consider the case that g global ∼ = su(3) and note that any compatible base has at least two curves since each single curve theory with g gauge ∼ = e 6 gauge algebra has g global trivial via Table 6.2 of [24]. We can eliminate the bases 131, 141, 151, 1512 since each has all g gauge ∼ = e 6 compatible enhancements with g global being too large.
We should now specify Γ to distinguish between the bases 51, 512, and others. Consider the g gauge ∼ = e 6 compatible T on 51, 512. These appear in Tables 5 and 6. Upon fixing either base, T is determined by a choice of g global . Now we observe that the field theory data Γ distinguishes between the remaining g global ∼ = su(3) compatible bases and hence in conjunction with g global specifies the geometry uniquely up to quiver and Kodaira type assignment. 5 1 GS Total: This example suggests that it is natural to consider a field-theory/geometry "dictionary" relating elliptic fibrations compatible with a choice of field theory data (Γ, g gauge , g global ).
Here g global constrains which blowups of the aforementioned minimal (g gauge , Γ) compatible base should be considered, for example by requiring blowup of a base with a single −4 curve to 51 when g gauge ∼ = f 4 and g global is non-trivial, or with another Γ, from −3 to 512. Such a dictionary formulation thus gives a natural route from field-theory to the bulk of geometries in correspondence with 6D SCFTs, with the role of g global essential in accessing Table 6. All global symmetry maxima for 512 with gauge algebra e 6 and each Kodaira type assignment realizing this algebra.
the better part of this geometric landscape. The fibration with minimally blown-up base can be viewed as a degenerate case in which we have omitted any g global specification.
Let us return to our g gauge ∼ = e 6 example, instead now fixing Γ −3 corresponding to the endpoint −3. Since g gauge has a single summand and all curves Σ having m Σ ≥ 3 carry non-trivial g Σ , there can only be one m ≥ 3 curve in any compatible base. All curves must have 1 ≤ m ≤ 6 since m Σ ≥ 7 requires Σ minimally support g Σ ∼ = e ≥7 . For the base −3, we have g global = 0, and this is the only such base. The remaining bases with shared endpoint which can match g gauge have α ∈ {41, 151, 512, 1612, 1 1 61} .
For 41, we have a unique g global ∼ = su(3). From the data above for 51, we can deduce that when α given by 151, the unique g global ∼ = su(3) ⊕2 . For the only trivalent option, this becomes su(3) ⊕3 (noting Table 7). For 1612, options for g global match those from 512 after appending an su(3) summand coming from the outer −1 curve. To simplify the correspondence, let us consider only the geometries leading to the maximal g global on each quiver. This gives for 512, g global ∼ = su(2) ⊕ su(3) and for 1612, g global ∼ = su(2) ⊕ su(3) ⊕2 . To summarize, α and T are determined uniquely in each case by g global .
Observe that "special" CY threefold fibrations are singled out by this "dictionary," namely those which have singular curve collections carrying one of the g global maxima for a 6D SCFT. Denoting this collection of varieties M g global , our example summary amounts to concluding that inverting the map from bases with enhancements specified up to Kodaira type to their relatively maximal global symmetries gives an injective map from g global to M g global . Generalizing this statement to all 6D SCFTs is non-trivial due to subtleties including the presence of multiple relative maxima for g global . Note that there is an analog of this distinguished class of threefolds in the moduli space of compact Calabi-Yau threefolds upon consideration of a compact base giving an F-theory model coupled to gravity wherein the global symmetries we describe are promoted to gauge symmetries via assignment of values m i to non-compact g global carrying curves when possible while our transversality requirements are relaxed.
The distinguished threefolds M g global in contact with F-theory models are remarkably sparse among CY threefold elliptic fibrations. Consider a fibration with base containing a single compact curve Σ and T Σ ∼ I 0 . The unique degenerate fiber collection along Σ corresponding to the unique flavor symmetry maximum for such models, namely g global ∼ = e 8 arising from a transverse curve Σ ′ with T Σ ′ ∼ II * , corresponds to the a distinguished geometry among the many others appearing as entries of "Persson's list" from [31]. Enhancing Σ to reach T Σ ∼ I ns n for n odd makes the number of transverse configurations grow exponentially in n while only n+3 2 geometries are in correspondence with the g global maxima of Tables (6.1-2) of [24]. This sparsity is not limited to single curve bases. For example, there are infinitely many minimally enhanced bases with outer links permitting an e 8 ⊕ e 8 global symmetry. For each, we have a variety with that flavor symmetry arising in the singular limit which is distinguished from the remaining varieties inducing any of the 6757 proper g global subalgebra isomorphism classes. 6 1 GS Total: Table 7. All gauge and global symmetry options for 61 with each possible Kodaira type specification realizing a given gauge theory.
Gauge algebras
In this section we discuss gauge algebra assignments for each base permitted via [16] though forbidden via the geometric constraint algorithm outlined in Section 3. We also outline our method for comprehensive comparison of gauge enhancements permitted via [16] versus our algorithm. The latter are strictly more constrained with a minority of cases excluded.
Link enhancements
We now inspect the link gauge assignments compatible with some explicit Kodaira type specification meeting our geometric constraint algorithm. We compare our prescriptions for links with those determined in [16] and then extend to a comparison for general 6D SCFT bases via the node attachment restrictions of [16].
Consequences of so(13) global symmetry constraints
An so(13) gauge summand can only be carried on a curve of negative self-intersection m = 2 or m = 4. The F-theory global symmetry from [24] for such a curve, sp (5), is independent of m, while the Coulomb branch global symmetry is given by sp(9 + Σ 2 ). These agree for m = 4, but the discrepancy for m = 2 leads to gauge enhancements for a family of bases which are more constrained than those characterized in [16]. In particular, gauge enhancements for quivers which are truncations of 21414 · · · containing the link 21 beginning so(13), sp(6 ≤ l ≤ 7), · · · are eliminated.
Further link enhancement restrictions
We have carried out a comprehensive comparison between the link gauge enhancements permitted by our approach versus those prescribed in [16]. The few discrepancies between these prescriptions are discussed in this section. Our comparisons are drawn after adjustments to the gauge enhancement prescription algorithm implementation accompanying [16] aimed to make these fully consistent with the gauge algebra constraints of [16] and identification of su(2) and sp(1). 3 We find agreement for all link enhancements except those appearing in Table 8 or contained in the family detailed in Section 4.1.1. We now briefly discuss geometric elimination of the former.
Enhancement
Permitted via [16] Geometrically realizable on a sub-quiver. This can be excluded via the following geometric considerations. 3 These edits resolve mismatches with prescriptions of [16] for the bases 13, 213, and 2 1 31 (affecting results for longer quivers) resolved by adjustments bringing listings for these into agreement with underlying gauging rules. Details of edits appear in the workbook accompanying the arXiv submission of this note in the subroutines for those quivers.
For a gaugeless curve to support an f 4 ,g 2 neighbor pair, it must be a type II curve. This leaves only as an assignment of Kodaira types potentially realizing this enhancement. However, this too is ruled out sinced Σ = deg(∆ Σ ) = 4 along a −5 curve Σ with f 4 algebra while each type II contributes two vanishings of∆ Σ .
This in turn eliminates each enhancement from [16] containing the above gauge assignment, these being the five truncations of obtained by removing any choice of −2 curves.
appears to be permitted by [16] gauging requirements and is among the companion workbook enhancement listings. However, geometric considerations eliminate this enhancement. We proceed by inspecting which Kodaira types might be permitted to realize the relevant gauge algebra assignments. The −2 intersections with an so(7) algebra carrying −3 curve requires these have Kodaira type III. This implies that of the 6 residuals in∆ Σ ′ available along the −3 curve, we have used three for each −2 curve intersection. This leaves no residual vanishings for the −1 curve to carry a Kodaira type other than I 0 . However, a type I 0 assignment is not permissible since 6 + 7 vanishings of∆ Σ are required along this curve to support intersections with the neighboring curves which must have Kodaira types I * ss 0 and I * ns 1 (as we can read from their algebra content) making theird Σ contributions to the −1 curve at least 6 and 7, respectively.
Summary of link comparisons
After compensating for the aforementioned issues, we find agreement with the gauge enhancement structure on all links with that of [16]. In other words, there is some Kodaira type assignment and chosen orders of f, g, ∆ along each curve of the link that meets all geometric constraints known to us and realizes each link enhancement dictated via the aux-iliary computer algebra workbook listings of [16] after the minor aforementioned edits with the above eliminated enhancement exceptions.
Comprehensive enhancement comparison
We now compare enhancements for bases with nodes we obtain with previous results. After accounting for a few technical exceptions, the enhancements we permit match those of [16]. We begin with a summary of comparisons for single node attachments to a link and then turn to discuss attaching a pair of nodes to an interior link.
4.2.1 e 6 , e 7 and e 8 attachments Enhancements of bases formed by left attachment of an e 7 or e 8 node to a link of the form L ∼ 1223 · · · yield matches as do instanton links (taking the form 122 · · · ) with e 8 node attachment. Explicit comparisons for the latter are made awkward by differences in the handling of infinite link enhancement families but can be treated by confining listings of [16] to those with empty gauge summand on the two leftmost curves and the rightmost matching a corresponding term from our listings.
We also find agreement for left e 6 and e 7 attachments to links of the form 123 · · · , 1223 · · · , and 122 · · · . Comparisons for the latter can be made with a procedure analogous to e 8 instanton link case, here instead via restriction to link enhancements obeying the gauging condition of [16] making the leftmost −2 curve empty, sp(1), or su(3) (with the latter only for e 6 attachment).
Excluding the links 1 2 23 and 122315131 discussed in Section 4.2.4, explicit comparisons yield agreement for the remaining branching links meeting e ≥6 curves, thus confirming all link attachment prescriptions of [16] for a single compact curve carrying an e ≥6 algebra with exceptions noted above.
Attachments to an interior link
Gauging rules of [16] for interior links with node attachments match those following from our approach in all cases except for exclusion of a forbidden node pair to 122315131 discussed shortly. Matching for cases with attachment of a −6 or −4 curve to the link 1315131 and all allowed attachments to 131513221 or 13151321 can be confirmed by comparison of these gauging rules with our listings for each quiver of this form. Agreement for enhancements of node attachments to the links 12231, 12321, and 1231 follows from link enhancement agreement and inspection that our prescriptions respect those attachment gauging rules as does that for 131, though checks for the latter are involved as specification of enhancements permitted by [16] require supplementing attachment rules with convexity conditions. These checks can be extended to confirm that all bases with interior links and no side-links having up to two nodes except bases of the form (1)4141 · · · yield matches. 4 In fact, after matches confirmed in the following subsections, we can conclude that with the aforementioned exceptions for noble branching link discrepancies, so(13) caveats, certain pairings for 122315131, and bases with node attachment to 1 2 23, we find agreement for all gauge enhancements. This follows from checks on multi-node bases revealing no further eliminations. Whether stronger geometric restrictions on short quivers can be derived to more significantly reduce the 6D SCFT landscape via our algorithm remains an intriguing question for future work. (2)(1)414 . . .
Enhancements on quivers of the form
Excluding the restriction discussed in Section 4.1.1 above, our method yields matching enhancements for quivers of the form (2)(1)414 · · · with those prescribed via [16]. Carrying out this check is delicate as each such quiver permits infinitely many enhancements. However, these obey a simple set of rules determined in [16] which match the restrictions on length three sub-quiver gauge algebra assignments dictated by geometric global symmetry restrictions excluding the aforementioned so(13) caveat.
To explicitly confirm matching away from those special cases detailed in Section 4.1.1 using our algorithm, we begin by fixing a quiver in this family and choosing an upper bound on gauge summand rank. Listing all compatible enhancements and discarding those having summands matching the rank bound allows confirmation via inspection that remaining enhancements obey the corresponding gauging conditions of [16]. Checks through large rank and quiver length reveals the claimed matching.
Novel links and link attachment restrictions
The links 1 1 513215 and 31 1 51315 appear to be allowed, though absent from the listings of [16]. These blow-down consistently and permit multiple valid gauge assignments including options with f 4 on the rightmost −5 curve. It thus appears that each is a properly a noble link rather than its truncation (of the outer −5 curve) being an alkali link permitting only right e 6 or e 7 attachment.
Among the links listings of [16] is 3 2 21 Σ with indicated attachments for (only) e 6 , e 7 , though neither appears to be permitted. The e 6 algebra is not possible, since this requires attachment to a curve with m ≤ 6 which then cannot satisfy the adjacency matrix condition. For e 7 algebra, we have m ≤ 7 similarly barred, leaving m = 8 as the only potentially consistent option. However, this is also barred as after one blow-down we reach which is inconsistent with the normal crossings condition. We conclude this link is not permitted any attachments (making it a noble link), though an e 6 global symmetry can arise from Σ.
Link and attachments summary
Novel links and link attachment prescriptions (comparing with [16]) appear in Table 9. Table 9. Summary of novel links and link attachment prescriptions versus those [16]. Here ' ' indicates a link that appears to be allowed though not listed in [16].
No trios of branching side-links, implementation scope
Our implementation of the algorithm outlined in Section 3.4 does not treat 4-valent bases. The only bases not treated directly are the single instanton link decorations of single node bases which are treated directly, these being of the form Our implementation also does not treat branching from any vertical branches, but this does not impose any limitations. While a priori the classification of [16] allows for a pair of branching side-links S L , S U meeting a node which then joins the backbone giving this situation is never encountered in practice. To check this, we confirm that −12 is the only possible node allowing a pairs of branching side links and an interior link L R . In this case, we can only have L R be interior link −1 and no attachments to this link are possible. In fact, there is only one such base and it can be rewritten in the form where S L is given by 1 1 513221 and S R is the reverse of this link.
Global symmetry classification summary via local contributions
Since flavor symmetries for bases with a single compact singular locus component were characterized in [24,25], we are left to contend with bases containing at least two compact curves. In this section, we detail analogous results capturing the general case obtained via the algorithm of Section 3.4 by dictating the flavor summand maxima arising from each segment of a base with specified enhancement. Though curves Σ i in a base may have any of the values 1 ≤ m Σ i ≤ 8 or m Σ i = 12, length two chains are highly constrained with the only options being α ∈ {1k for k > 1, 22, 23}. We shall use this fact to characterize flavor summands arising from each curve of every viable short subquiver decorated with a Kodaira type assignment. This yields a classification of 6D SCFT flavor symmetries via decomposition of an arbitrary base into short chains for which we prescribe summands with a combination of configuration listings and short constraint equations. In many cases, these tighten the permissible symmetries beyond those permitted by the free multiplet counts for the cases detailed in [16].
A fact we shall use frequently is that any curve carrying gauge algebra f 4 , e 6 , e 7 , or e 8 does not support any non-abelian global symmetry summand. As elsewhere in this note, bracketed terms indicate global symmetry summands. Note that for a curve Σ with 7 ≤ m Σ ≥ 8, only the assignment III * ↔ e 7 is permitted; for 6 ≤ m Σ ≥ 8, type IV ns ↔ f 4 is barred while e 6≤k≤7 types are permitted; for m Σ ≤ 4, each of f 4 and e 6≤k≤7 assignments are valid; finally, e 8 assignment requires m Σ = 12. We shall use these facts to treat multiple quivers simultaneously and refer to them as "the permitted gauging rules." It will be convenient to introduce notation [g α ] for flavor symmetry summands arising from a subquiver α ⊂ β (e.g. 23 ⊂ 232) upon fixing a type assignment on β given by T β , taking β = α when no containing quiver is apparent from context. For example, since the type assignment T ∼ T 23 ∼ III,I * ss 0 to 23 allows an su(2) flavor summand from the −3 curve via (5.1), we shall write [g 23,T ] ∼ = su(2)] or simply [g 23 ] ∼ = su(2) when context makes T unambiguous. We will similarly refer to flavor summands arising from a given curve Σ by [g Σ ] when context makes the containing type assignment clear, e.g. in
23
The only permissible Kodaira type assignments for the bare quiver 23 are noting that the indicated [g Σ ] may be further constrained by the presence of an additional
(12)1, 81, 71, 61, 51
Since the unique gauge assignment for (12)1 has no flavor summand arising from either curve and a unique Kodaira type assignment given by 12 1 neighboring curves are irrelevant to prescribe contributions from this curve pair.
The permitted configurations for the bare bases m1 with 5 ≤ m ≤ 8 are (2)] IV * ns IV * ns (5.4) Note that the permitted gauging rules constrain the allowed values of m in various cases. This condition will be implicit in the further listings with indeterminate m in this section.
Since flavor summands which can arise from the curve pairs m1 in longer quivers depend on Kodaira types of right neighboring curves, we next detail the effect of these attachments while observing the irrelevance of any left attachment.
Via [16], the only forms for links which can attach to an e 6 node are 12 .
3 · · · , 1223 · · · , or 12(2)(2) · · · . (5.5) We proceed through the type assignments for these links compatible with the presence of node with 7 ≤ m ≤ 8 and detail flavor summands arising from the subquiver m1 for each. All remaining links allow only m ≤ 6 attachment, these being of the form 13 · · · . Briefly, a −m curve carrying f 4 or e 6 gauge have an su(N ≤ 3) maximum for [g Σ ] in m1 Σ · · · with N dependent on other curves attached Σ; for m1 · · · with m ≥ 7, this is reduced to su (2). Note that T Σ ∼ I 0 is required for m ≥ 7 as intersection with III * is otherwise non-minimal.
3 · · · : Attaching a −7 or −8 curve Σ to a link of this form requires the −2 curve have su(2) gauge summand from Kodaira type III. When 7 ≤ m ≤ 8, [g m1 ] is trivial. The only difference when attaching a curve with 5 ≤ m ≤ 6 is that we can realize this su(2) gauge summand via IV ns . All configurations are among • 1223 · · · : Link attachments of this form to a curve Σ with 5 ≤ m Σ ≤ 8 require [g m1 ] for Σ enhanced to e 7 as is required when m Σ ≥ 7. Otherwise an su(2) flavor summand can occur. All potentially viable configurations are • 12 Σ (2)(2) · · · : When Σ has non-trivial gauge summand, [g m1 ] is trivial as required by [25,31]. For gaugeless Σ, instead [g m1 ] are constrained by T Σ according to II IV * ns II IV * ns All configurations for the bases m12 with m ≥ 5 appear in Table 46. Note that attachment of additional −2 curves does not ensure further constraints except when their types require T Σ be raised beyond I 1 .
• 1 Σ 3 · · · : Attaching links of this form to Σ ′ requires m Σ ′ ≤ 6 and [g m1 ] being trivial. This follows since the unique assignment to 613 is IV * s I 0 IV s , which leaves no further residual vanishings of∆ Σ along Σ. The only possible assignments for m13 with 5 ≤ m ≤ 6 are IV * ns Note that the above treatment also captures each possible linking to a curve with m ≤ 4 enhanced to one f 4 , e 6≤k≤7 provided we substitute m = 5 in the above discussion with the appropriate value of m and consider only links consistent with the number of blowdowns permitted by m as detailed in [16] Appendix D. (For m = 4, links of each form discussed are permitted.)
41
Our discussion here more involved since −4 node permits gauged −1 neighbors. We first discuss the highly constrained configurations for certain bases with subquiver 4 Σ 1 Σ ′ . As there are infinitely many enhancements of the base 41, determining [g 41 ] presents certain difficulties. However, there is little freedom for the Kodaira types of curves permitted intersect a −4 curve while maintaining non-minimality. The only infinite family of enhancements have the form so(N ), sp(N ′ ) and flavor summands for these enhancements can be characterized by a few simple conditions with the rest easily handled by inspection.
Non-trivial [g Σ ] may arise depending on the neighboring Kodaira type T Σ ′ provided T Σ ↔ so(N ). The only compatible gauged types are I ns n curves carrying sp(N ′ ) algebras. A constraint on [g Σ ] accounting for any neighboring curves can be derived via simple tallying conditions for contributions towards the residual vanishingsd Σ . Simple conditions capturing this constraint treat all but finitely many cases which we resolve first by working through the permitted attaching links and discussion of the bare base 41. We confine ourselves to T Σ ↔ so(N ) since f 4 and e ≥6 assignments are captured in Section 5.2.
with trivial flavor symmetry from all curves, thus making left attachments irrelevant.
• 412 Σ 2 3 · · · : Here [g 41 ] is trivial except for gaugeless Σ ′ , as we can read from (5.2) and Table 3. This is also trivial when g Σ is f 4 or e ≥6 , and again when g Σ ∼ = so(8) and T Σ ′ ∼ II; we can confirm the latter using the maximal configurations for type II and I * s 0 curves from [25] and [24], respectively.
The remaining gaugeless types, I 0 and I 1 arise in few configurations. For g Σ ∼ = so (8), is trivial for T Σ 2 ∼ IV ns otherwise has maximum for T Σ 2 ∼ III given by su (2). Finally, the −4 curve summand [g Σ ] ∼ = sp(N − 8) except for the unique assignment to the quiver 4123 featuring a type I 1 curve along Σ ′ when is reduced to [g Σ ] ∼ = sp(1).
• 413 · · · : Since the quiver 413 permits a (large but) finite number of enhancements, all configurations with flavor symmetry maxima indicated can be listed directly with the accompanying workbook. 5 The key features of every such configuration can be captured as follows.
When g Σ is any of f 4 , e 6 , or e 7 algebra, [g 41 ] is trivial. Remaining cases involve an , respectively. Tracking the Kodaira types along the −1 curves gives a stronger constraint depending on the types realizing neighboring gauge summands (particularly when N is even) via (3.13) used with the raised intersection contributions appearing in Table 34. To simplify the statement here, we consider the configuration This extends to restrictions on sub-configurations obtained by omission of terms corresponding to any removed outer curves (e.g. to also constrain 141).
can also be non-trivial. When g Σ ∼ = so(N L ) with N L ≤ 9, this summand is trivial with a single exception for T Σ ′ ∼ I ns is trivial except again when T Σ ′ ∼ I ns 3 and N L = 9 allows [g Σ ′ ] ∼ = su (2). The remaining enhancements appear as with 10 ≤ N L ≤ 24 and 7 ≤ N R ≤ 12; note that ⌊n/2⌋ = M and that 7 + n L + δ N L ,even = N L , where the Kronecker symbol is nonzero for even values of N L . Values of N R and n R are related similarly. These cases have [g Σ ′ ] trivial or of the form su(N T ) with N T constrained via a generalization of conditions from [16] obtained by type tracking which reads (5.14) • 414 · · · : Quivers of this form support infinitely many enhancements, but enhancement of either −4 node to f 4 or e ≥6 is forbidden. Constraints here mirror those for 413.
is essentially captured by gauging condition of [16] (which can be viewed as counting roots∆ Σ needed for I ns n i junctions requiring n i vanishings). In the presence of a left neighboring −1 curve Σ L (giving · · · 1 Σ L 41 · · · ) noting assignments to 141 are of the form sp( where sp(M 0 ) gives the gauge algebra along the upper −1 curve.
The gauging restriction on M of [16] for so(N L ), sp(M ), so(N R ) enhancements reading . Note that Kodaira type specification beyond the precision needed to determine the gauge content is essential.
• 412 Σ 2 · · · : The discussion for 413 captures all details here except in cases with Σ 2 gaugeless or carrying su(n). The [g 41 ] restriction for so, sp, so enhancements is again given by (5.14). We shall treat the remaining cases in three groups. The first features gaugeless Σ ′ , Σ 2 . The second feature type III or IV. Both are quickly characterized explicitly using the accompanying workbook by selecting small bounds for I n and I * n fibers. The final group contains infinitely many enhancements with so, sp, su gauge summands arising from I * n L ,I ns n M ,I s n R assignments. The latter are again governed by extension of [16] conditions on 412 gauging. We shall account for a triple of curves meeting Σ ′ with fibers of the types (5.17) Here where N L obeys the same restriction leading to condition (5.16) with the appropriate terms dropped in the absence neighboring curves, namely and M ′ required to satisfy constraints ond along the −1 curve reading 20) or by a second type of potentially relatively maximal flavor summand with one of the forms su(M ′ ) or ⊕ M ′ k su(M ′ k ) obeyingd Σ ′ constraints (generalizing the maximal configurations for an I ns n fiber appearing in [24] to include multiple su curves and a single so curve) which reads One can check this implies that in all cases where m ′ is permitted to be non-negative and n M > 2, the so(M ′ ) summand is always the unique maximal global contribution from the −1 curve and when (5.20) requires m ′ < 0, instead the second type of summand gives the maximal contribution and does so when k = 1. When n M ≤ 2, a IV s intersection is also permitted, which makes the analysis more involved and leads to multiple relative maxima for [g Σ ′ ] under certain conditions, in particular when n M = 2, N L = 9 and n R = 1 where these are so (8) and su(2) ⊕ su(3) (the latter arising from I 2 ,IV s ).
21
Our treatment here is simplified by the fact that left attachments to 2 Σ 1 Σ ′ can only begin with a −2 or −3 curve. Attachments to Σ ′ can result in many enhancements, in particular when the curve attached is a −4, −3 or −2 curve. We shall proceed by determining the [g 21 ] in cases without left attachment before returning to treat their presence. Observe that we have already treated 21m via Table 46 when 5 ≤ m ≤ 8 or m = 12. We begin by reviewing these cases.
• 218 · · · or 217 · · · : An su(2) flavor summand can arise from each of Σ, Σ ′ simultaneously provided T Σ ′ ∼ I 1 and T Σ ′ ∼ I 0 . As we can read from Table 46 (providing identical data via replacement in the e 7 cases of the −6 curve with a −8 or −7 curve), • 21m Σ R · · · with 5 ≤ m ≤ 6 : All configurations for these quivers appear in Table 46. In all cases the rank of [g 21 ] is bounded by 5. Enhancement of Σ R to e 7 is covered by the above treatment for 217. When Σ R ↔ e 6 , non-trivial [g Σ ] may arise depending on T 21 beyond the precision for gauge specification and [g Σ ] may be trivial, su (2), or su (3). When m = 5 and g Σ R ↔ f 4 specification of [g 21 ] is sufficiently involved that we defer to Table 46 noting the bound [g Σ ] ⊂ sp(4).
• 214 Σ R · · · : Left attachments here can only yield (3)2214 or 3214. The highly constrained configurations on these quivers are treated in our Section 6.2 discussion for side-links. Right attaching links and further nodes do not affect the structure of The presence of Σ R places strong restrictions on permitted enhancements since complements that for 412 and only involves conditions introduced there together a restriction on [g Σ ] given by (5.22).
• The bare quiver 21: Infinitely many enhancements of the form su, su and su, sp realized by types I s n L ,I s n R and I s n L ,I ns n R , respectively, are permitted here. The main ingredient determining [g 21 ] is the discussion for I n fibers appearing [24]. There are finitely many remaining enhancements, though numerous. These are readily listed explicitly via the accompanying workbook (via rank bounds on I n and I * n similar to those we detailed to 413.). We hence abbreviate their discussion here accordingly.
Let us first detail [g Σ ] for the aforementioned infinite families. In su, su enhancements with I s n L ,I s This constraint also governs the contributions from the −2 curve in su, sp enhancements realized by I s n L ,I ns n R . Prescribing [g Σ ′ ] is slightly more involved. Most relevant details appear in our treatment for 412. Consider the cases with such su, su enhancements. When n R ≥ 3, intersection with an so(M ) fiber is barred via non-minimality requirements. Resulting maxima are given by su(n M ′ ) terms with 8 + n R + δ 6,n R ≥ n L + n M ′ . When n R ≤ 2 the su(M ′ ) contributions are governed by the same conditions as the cases with su, sp enhancements. The latter allow two possible forms for [g Σ ] consisting of so(M ′ ) and su(M ′ ) summands. The former is maximal when permitted with M ′ > 8. These appear from a type I * m ′ fiber having M ′ = 6 + 2m ′ or M ′ = 5 + 2m ′ (depending on monodromy) and must obey The su(M ′ ) summands which may appear from a type I s M ′ fiber are slightly more constrained than in the su, su enhancement cases. These obey This leaves us to contend with finitely many remaining T 21 involving at least one of the types II, III, IV, or I * n≤4 . When the latter occurs as T Σ to give an enhancement of the form so, sp with gauge summands realized by types I * n L ,I ns n R , [g Σ ′ ] is governed by the maximal configurations discussed in [24] concerning type I ns n curves; these consist of so(M ′ > 8) flavor summands when permitted and otherwise involve only su(N ≤ 4) terms. We omit further discussion for the finitely many remaining T 21 since complete listings are quickly obtained via the accompanying workbook.
Let us now also briefly discuss consequences of left attachment to 21. All such attachments are via a −2 or −3 curve. The former leads to a short instanton link with further right attachments discussed in Section 6.3. Instead attaching a −3 curve is highly constraining as it gives a 32 subquiver; further right node attachments is discussed in 6.2. This leaves discussions for the quiver 321 and right attachments giving 3215, and 3213. The latter are detailed in Table 44 while 321 is discussed in Section 6.2.2.4 with all configurations captured via Table 14.
31
There are many enhancements of quiver involving such attachments and these often feature a rich global symmetry algebra structure making this perhaps the most non-trivial case. We begin by discussing [g Σ ] in each of these contexts.
• The bare base 31 : This base permits a wide variety of enhancements and flavor symmetries detailed fully in Table 45. The structure of [g Σ ] permits a condensed description. These summands are of the form sp(N ) for N ≤ 5 except when g Σ is given by so (8) in which case [g Σ ] ∼ = sp(1) ⊕n for n ≤ 3, where n is determined by T Σ ′ . The value n = 3 only occurs when T Σ ′ ∼ I 0 ; n = 2 can arise when T Σ ′ ∼ I 1≤k≤2 ; (1) with non-triviality precisely when Σ ′ is gaugeless. The remaining cases have • · · · 314 Σ R · · · : The Σ R attachment still allows enhancement Σ ′ which makes a full treatment somewhat involved. However, quivers containing 314 permit finitely many enhancements. Hence, all configurations are readily detailed via the accompanying workbook. Sub-enhancements along 31 are the subset of those for 31 allowing Σ R and consequently [g Σ ] can be read from Table 45 by confining to compatible enhancements conveniently detailed in [16]. These [g Σ ] are bounded by sp(1) for g Σ ∼ = so (9), by sp (2) for those with g Σ ∼ = so(10 ≤ N ≤ 11), and by sp (3) for g Σ ∼ = so (12), as follows by confining to the subset obeying convexity requirements on Σ ′ with minimally enhanced Σ R neighbor having g Σ R ∼ = so(8).
• · · · 313 Σ R · · · : Details here are identical to the previous base except for freedoms introduced when we reduce g Σ R below so (8 • 312 : The noble link 3 Σ 1 Σ ′ 2 permits a large but finite number of enhancements. We can treat the bare quiver by listing all possible configurations via the accompanying workbook. Since this is a "noble link" and enhancements of containing links are strongly limiting, we shall content ourselves with explicit configuration listings for these links without loss of depth in our discussion for longer bases. Attachments on the right must begin with a −3 curve while those on the left attachments must begin with a −2 curve Σ L . Contributions from [g Σ ] are easily determined from T Σ L using subquiver 23 constraints of (5.1) while noting that reduction to trivial [g Σ ] results for T Σ ′ ∼ II or T Σ ′ gauged.
• · · · 131 · · · : Many of the large but finite number of enhancements on 1 Σ L 3 Σ 1 Σ ′ feature multiple flavor symmetry maxima. Outer attachments Σ a and Σ b require m Σ i ≥ 3 and m Σa = m Σ b = 3 is forbidden (to blow-down consistently). As a result, features of The largest family of enhancements for this base is of the form with types T Σ i ∼ I ns n i and T Σ ∼ I * n ↔ so(N ). The analog of (5.12) for these cases is relies on (6.73),(6.77) of [24] to obtain sharpening beyond bounds imposed bỹ d Σ = 6 + 3n, this being replaced instead bỹ Analogous restrictions hold on sub-configurations obtained by omission of terms corresponding to removed outer curves (e.g. to also constrain 31).
The −1 curve contributions
We now discuss flavor summands from 31 Σ ′ arising via Σ ′ . We proceed through these cases for right attachments Σ R which constrain these, starting with the largest values for m Σ R .
• 312, 3123, 131, 31 : Each of these cases permits finitely many enhancements and we can proceed by listing all possible configurations via the accompanying workbook.
Since there are very few arrangements for 23, the configurations for 3123 are rather limited. Strong restrictions from context in longer bases limits the relevance of a discussion for the bare quivers 312 and 131, thus we leave details to explicit listings without loss of general structural insight.
22
We first discuss the infinite family of classical enhancements and conclude with the few remaining cases featuring non-su(N ) algebras. The only essential restriction for the classical enhancements of bases of the form · · · 222 · · · is a condition which extends naturally to constrain flavor symmetries for such configurations. The extended condition reads for a subquiver of the form where the flavor symmetry contribution from the middle curve is indicated with brackets. Note that in the absence of a neighboring curve, the condition is that obtained by dropping the corresponding term N j . In the case of a T-junction, an additional term appears and gives flavor contribution from the −2 curve at the junction of The remaining enhancements of the above quivers feature fiber types other than I n .
These appear below along with global symmetry summands indicated.
Flavor symmetry structure via "atomic" decomposition
Here we discuss the structure of 6D SCFT global symmetries with help from the "atomic" base decomposition (3.4) of [16]. Briefly, flavor symmetries only arise from curves near the tail ends of a base with the largest rank summands coming from side-links and short subquivers containing the few permissible −4 curves when either is present. The only exceptions are also the only bases for which decomposition summand and total rank bounds are absent, these having the form (1)2 · · · 2 or (1)414 · · · 4(1). The latter are readily treated separately via the methods Section 5 using simple constraint equations.
This decomposition allows us to characterize flavor summands for each theory by way of contributions from each term in (3.4) upon fixing an enhancement, i.e., via summands giving total flavor symmetry g GS ∼ = ⊕ i,j h i,j , with h i,j depending on the enhancement. To achieve this, we simply detail contributions from each link with node attachment(s) which can appear in a base. Let us begin with an abbreviated summary of this approach and its results.
For linear bases, global symmetry summands arise only near the tail ends of multi-node bases with the exception of those for which the only nodes are −4 curves. This follows from our discussions in Sections 6.1.1,6.1 together with the permitted chains of nodes classified in Appendices B,C of [16] obeying the algebra inclusions of (5.48) from [16] requiring that gauge summands g i from the nodes g i obey g 1 ⊂ g 2 ⊂ · · · ⊂ g m ⊃ · · · ⊃ g k . Flavor summands for theories with a linear base have all rank r i contributions from curves Σ i such that r i > 1 arising in subquivers where the only nodes (if any) are −4 curves on the base periphery (via (6.2)) when any e ≥6 gauge summands are present. The total contributions arising from an interior link L i joining nodes having gauge summands g L and g R are severely constrained. In fact, we have [16] together with our interior chain, side-link and interior-link node attachment contribution treatments readily yield bounds on the flavor symmetries arising for all enhancements of each base not permitting infinitely many enhancements (i.e., excluding (2)(1)414 · · · and (1)222 · · · and their branching variants). In most cases, these constraints suffice to determine flavor symmetries precisely with little effort up to treating any side-link summands.
Note that explicit listing via the accompanying workbook is also computationally feasible in practice except with very large numbers of nodes for which supplementing outer subquiver results using interior link contributions summands detailed here suffices as a practical route. Enhancement listings even for long quivers are typically computationally inexpensive. Side-link flavor summand maxima detailed here (or readily computed in isolation for a designated node attachment) allow extension to the general case. (In fact, all enhancements and these maxima for every side-link with node attachment can be quickly procured with the accompanying workbook.) Side-link enhancement matching on overlaps with nodes allows easy lookups enabling a practical method for arbitrary base treatment (even when exceedingly large). Assisted by the long and short base classifications of [16] Appendix C, global symmetries for all 6D SCFTs classified therein are thus readily determined. The number of side-links which can attach to certain bases is significant. Hence, for expository purposes we shall stop short of providing explicit listings for each short and long base, instead contenting ourselves with having provided a route that makes explicit listings computationally feasible.
Interior links with node attachments
As shown in [16] and reviewed here in Section 3.2.0.1, the structure of any 6D SCFT base consists of a linear quiver with decoration possible only near the ends. Hence, treatment for linear bases consisting of an interior link with a pair of outer node attachments is a key ingredient in a general characterization of SCFT flavor symmetries. In the following subsections we determine global symmetry contributions from these subquivers.
Interior of linear base flavor contribution summary
We shall proceed by treating each interior link with a pair of outer node attachments. The left-right symmetrization of rules below applies, i.e., rules for node pairs follow upon reversing the link orientation. We tacitly exclude the single −1 curve link as the only permitted node attachments are −4 curves yield the 414 · · · quivers already treated in Section 5. Our statements here hold for cases with a node bounding each side of an interior link, which shall also remain implicit. We shall refer to the flavor symmetry contribution from the left and right portion of each quiver by [g L ] and [g R ], respectively.
• Left attachment of a −4 curve Σ with so(N ≥ 9) gauge assignment to an interior link of the form
For T Σ L ∼ I ns n ′ with n ′ odd and N even, (5.12)) gives the stronger constraint We shall employ the notation of [16] quiver Here we have substituted node gauge summands while suppressing their self-intersections while D indicates any permitted so(N ≥ 8) node, D + an so(N ≥ 8) or f 4 node, and E + n an E n or f 4 node. We now extend the above shorthand to treat link enhancements by labeling forced T i (or g i when unambiguous) while unlabeled positions remain a priori unconstrained. We denote minimally (non-minimally) enhanced links via an '∼' (' ∼') symbol. For example, We have the following abbreviated summary.
3,3
↔ 1315131 E-to-E pairings: so(N ≤ 10) The remaining D-to-D node pairings via 3,3 are more easily characterized using decomposi-tion about the −5 curve as detailed above.
Side-links with a node attachment
We shall break our discussion of these cases into two parts. The first concerns determining flavor symmetry contributions arising on the inner side-link curves and the second those appearing from outermost curves. We shall tacitly exclude instanton-links from being considered as side-links, instead addressing these in Section 6.3. We also separate our discussion of two particular cases: 413 (discussed in Section 5) and 4131. Each permits finitely many enhancements and is readily treated via direct listings supplied by the accompanying workbook. We hence treat the remaining cases with statements requiring modification when applied to instanton-links and side-links 13 and 131 attached to −4 curves.
Contributions from non-outermost curves of side-links
For each curve Σ in a non-instanton side-link with node attachment, [g Σ ] is highly limited except when Σ is "outermost curve," i.e., one having a single compact neighbor. Even these typically have [g Σ ] trivial unless m Σ = 1. Non-outermost curve contributions typically obey [g Σ ] ⊂ su(2) and are always among su(2) ⊕k≤3 , su(N ≤ 4), or sp(2) when non-trivial (upon the aforementioned tacit exclusion of 13 and 131 from our discussion).
We now detail all non-trivial non-outermost curve flavor symmetry contributions. Note that having excluded instanton-links from our discussion rules out non-trivial [g Σ ′ ] for example from m Σ 1 Σ ′ when g Σ ∼ = e ≥7 .
• Interior g 2 curve contributions: The subquiver 13 Σ ′ 1 occurs in many links and permits T 131 ∼ II,I * ns 0 ,II in many contexts. These curves may support [g Σ ′ ] ∼ = su(2). This accounts for many of the non-trivial contributions in longer bases.
Outermost curve contributions
To treat the outer curves of a side-link, it will be helpful to separate our discussion of such curves with m = 1. The remaining cases contribute small summands and are easily characterized.
Outer curves with m = 1
We begin by treating the outermost −1 curves bordering a −5 curve. With two exceptions we shall discuss shortly for the links these are the only outer −1 curves which can arise off the linear portion of a link, instead bordering a −5 curve T-junction. Such curves can give rise to su(3) or g 2 flavor summands, the latter requiring Kodaira type II along the −1 curve. At most two type II curves are permitted to meet a −5 curve as discussed in Section 4.1.2.1. Links with a −5 curve bearing a T-junction often require type II curves on each side of the junction (e.g. those with linear portion having truncation of the form 2315132). Hence, only an su(3) contribution is typically possible from the outermost −1 curves branching from the linear portion of a link away from its ends. When the T-junction is in the outermost (penultimate) position, however, a g 2 summand may appear from (typically at most one of) these outermost −1 curves. All possible configurations such inner and outer T-junctions appear in Table 10.
The remaining outermost −1 curve contributions from side-links with a node attachment arise in · · · 21 and · · · 31.
Interior T-junctions Table 10. −5 curve T-junction flavor symmetry contributions from links.
Outermost −1 curves in · · · 31
We begin with treating the two special cases noted above in which such a −1 curve can appear over a −3 curve T-junction.
The only node Σ which may attach to this link α has m Σ = 4, as required for blow-down consistency. Further attachment to Σ is similarly barred making our discussion irrelevant for longer bases. We condense it accordingly while providing enough detail to illustrate a few subtleties. Gauge enhancements of α are highly constrained by the subquiver 23 allowing only su(2), so(7) and su(2), g 2 enhancements of (5.1) from which we read that T Σ L ∈ {III,IV ns }, the latter possible only in select g 2 cases.
Type on Σ Compatible types on Σ ′ GS summand maxima from Σ III I 0 su(2) ⊕ so (7) sp(3) I ns A similar case merits mention with those above, that for 12 2 31, a left-attachment permitting link. The structure of enhancements and flavor symmetries here follows at once from our discussion since this is simply a rearrangement of the curves in the first case above obtained by swapping the outer −2 and −1 curves. Details are thus also captured by (6.13).
Having concluded discussion of the special cases involving T-junctions at a −3 curve, we now turn to the cases involving linear · · · 31 terminations. We begin with a few cases meriting separate discussion as they have highly constrained enhancement structure.
• · · · 5131 : There are many links of this form permitting left node-attachment. Since the −5 curve requires a gaugeless neighbor and the e 8 gauging condition applies to the neighboring curve, the resulting configurations are highly limited. In fact, there are only two possible configurations for the inner curves and we can easily detail the resulting outer −1 curve flavor summand possibilities in each case as in Table 12.
Note that an su(2) flavor summand may arise from the −3 curve in precisely the cases when it has a pair of gaugeless neighbors and Kodaira type I * ns 0 .
• g131 : One can confirm via the comprehensive link listing appearing in Appendix D of [16] that the only remaining link allowing a left node to attach and · · · 31 right termination is α ∼ 131 (as all remaining links feature terminations · · · 5131 or · · · 231). We now work through the permitted node attachments to α while noting that only nodes carrying so(N ) or e 6 are permitted by the e 8 gauging condition. The unique configuration when carrying e 6 is 14) and the unique flavor symmetry contribution from nα is also e 6 .
We abbreviate our discussion for so(N ) attachments since explicit listing for each of the finitely many allowed configurations of this type captures all relevant details and is available from the listing provided for 4131 via the accompanying workbook.
Outermost −1 curves in · · · 21
The only curves permitting e 7 and e 8 global symmetry summands are of this form. We proceed through the forms for endings of such links.
• · · · 3221: The unique flavor symmetry maximum arising from the outer −1 curve of such links is e 8 . The type assignments on these curves must appear as • · · · 2321: The unique flavor symmetry maximum arising from the outer −1 curve of any such link with a node attachment is e 7 . Configurations for these links simply appear as • 1 2 321 Σ R . This link permits left attachment to a −4 curve Σ provided g Σ ∼ = so (8). All curves give trivial flavor summand except Σ R which has [g Σ R ] ∼ = e 7 .
• · · · 51321: Links of this form permit various maximal global symmetry summands to arise from the outermost curve depending on the Kodaira types realizing the unique gauge assignment. All assignments are of the form · · · 5 IV ns 1 where [g] is determined from T 1 , T 2 via Table 14.
• · · · 3 Σa 2 Σ b 1 Σc : Now that we have treated the special cases above with constrained gauge summands, we move to the general case which essentially merges the others. We again read the three assignments for 32 from Table (5.1). Permitted [g Σc ] appearing when T Σ b ∼ I * ns 0 follow from Table 14. When instead T Σ b ∼ I * ss 0 , we follow the above discussion for · · · 2321. Note that we have also now discussed all enhancements of the bare quiver 321. The remaining links not in the above special case groupings are (3)1321. These are quickly handled by short explicit workbook enabled listings.
Instanton-links with a node attachment
In this section we detail the flavor symmetry contributions which can arise from instantonlinks joined to nodes. The attachments of instanton-links to a −4 curve are limited in longer bases other than (2)1414 · · · to have at most a single curve. We hence confine our analysis to e ≥6 nodes as Section 5.3 treats 41 and 412 while 4122 is treated by Sections 5.3,5.6 (in particular, by (5.12), (5.20), (5.21), (5.27) and (5.29)). Such attachments cannot occur in bases with more than one node. Consequently, their treatment does not affect our discussion of more elaborate bases. Note that instanton-link attachments to a −5 curve Σ must have length l ≤ 4 with this becoming l ≤ 2 for Σ an interior curve. The latter are covered by Table 46 and we relegate l ≥ 3 cases to workbook listings since they are structurally similar to the e 6 attachments treated explicitly here.
Gaugeless instanton links
We begin by discussing the flavor symmetry contributions from a gaugeless instanton-link α ∼ Σ 1 Σ 2 · · · attached to an e-type node Σ 0 ↔ g 0 with self-intersection −m 0 . The permitted T α appear in Table 15.
· · · [g k ] Table 15. All possible gaugeless instanton-link type assignments permitting attachment to an e-type node for links with at least three curves.
The flavor summands [g i ] which can arise depend on T α and g 0 . We will proceed by moving through each g 0 option. We first treat instanton-links having at least three curves before returning to discuss a few exceptions to the following rules for short instanton-links.
• g 0 ∼ = e 8 : In this case, all flavor summands [g i ] are trivial.
To treat the short instanton-links having length l ≤ 2, we can simply extract these configurations from (5.4) and Table 46, the former directly treating l = 1. For l = 2 these simply obey the following.
Note that for l = 1, allowed values of [g 1 ] are simply the maximal complementary Lie subalgebras of g 0 in e 8 .
Gauged instanton links
In this section we consider an e q node attached to an instanton-link Σ 1 · · · Σ k carrying any non-trivial gauge summand. We will break our discussion into two parts. The first concerns "classical enhancements" of the form The second addresses enhancements involving any other Kodaira type, these being confined among types II, III, IV, and I * 0 .
Classical enhancements of an instanton link with node attachment
While we have already characterized flavor summands which can arise in subquivers of this form via characterizations for 12 and 22 · · · 2 in Section 5, we pause here to make a few comments. First note that e-node attachment requires T Σ 0 ∼ I 0 and this imposes g Σ 1 is a subalgebra of su (3), su (2), and n 0 in the cases for attachment to e 6 , e 7 , and e 8 , respectively. Strong restrictions T α follow via propagation of constraints along the link. The possible [g α ] for various classical enhancements are subalgebras of those for a particular enhancement with n 1 ≤ n 2 ≤ · · · ≤ n k . Convexity conditions for these enhancements as detailed in [16] follow from the rules of Section 5. These require that for enhancements of such bases having increasing arguments of su(n i ) algebras with k > 2 and i > 1, we have n i+1 − n i = m q with n 1 = m q ≤ 9 − q.
The aforementioned maximal [g α ] yielding configuration among all enhancements of e q α with q fixed has n 1 chosen to allow m q and consequently n k as large as possible. The resulting [g α ] appears as . (6.16) The remaining classical enhancements of the instanton-link are required via the convexity conditions for classical enhancements −2 curve chains to have n i increasing to a maximal n imax at some i max and are non-increasing for i > i max . Flavor summands appear only from the curves immediately to the left of any decreases and from the final −2 curve with ranks are governed by (5.26). Explicit workbook enabled listings of the permitted flavor summands for these remaining classical enhancements can be quickly procured.
Non-classical instanton-link enhancements with node attachment
In cases with any curve of the instanton-link having a Kodaira type other than I n , every curve other than the −1 curve is prevented from having type I n . These T α and g i obey convexity conditions on d i and r i ≡ rank(g i ) reading All possible forms for enhancements of instanton-links not of the form (6.16) with etype node attachment appear in Table 16 with any flavor summands indicated. Intersection contribution tallying eliminates several possibilities which obey the required gauging and convexity conditions, e.g. T α beginning as II,III,IV. Trivial [g 1 ] results for q ≥ 7 while q = 6 allows [g 1 ] ∼ = su(2) provided T Σ 1 is gaugeless.
Gauging global symmetries
We now briefly discuss which flavor symmetries can be consistently gauged to yield a new SCFT, i.e., which 6D SCFT transitions result from promotion of a global symmetry subalgebra to a gauge summand. While room for additional gauging is permitted should we consider the theories coupled to gravity where B is compact (when more strongly, all continuous symmetries must then be gauged as argued for example in [29]), the permitted SCFT transitions resulting from global symmetry promotion are highly constrained and have not been fully studied. In this section, we comment briefly on which of these transitions are visible within F-theory using the tools we now have at our disposal.
As a first step, we shall inspect the flavor symmetry maxima for single curve theories to examine when promoting global symmetry subalgebras to gauge summands carried on added neighboring curves results in at most reduced-rank gaugings. Consider an SCFT base with a single compact component of the discriminant locus Σ with g Σ ≡ g M and a choice g GS from among the geometrically realizable flavor symmetry maxima for such a theory. We now ask which other 6D SCFT bases with specified gauge assignment can arise via promotion of a g GS subalgebra to neighboring curve gauge summands then yielding a [su (2)] Table 16. Non-classical gauged instanton-link type assignments with attachment to an e-type node for links with at least three curves.
This of course requires that g L ⊕ g R ⊕ g U ⊂ g GS (with terms omitted for absent neighboring curves in the first two cases). Note that the new base may have a different discrete U (2) subgroup Γ associated to it. We can instead study a variant of this approach by requiring Γ stays fixed or focus on the nature of permitted Γ transitions, but we shall focus on whether g GS can be made trivial while g L ⊕ g R ⊕ g U has smaller rank than g GS , i.e., whether "submaximal gauging" can occur. We will soon refine this characterization since we often cannot gauge sufficiently many degrees of freedom via an SCFT transition to yield trivial g GS , as we now show.
Normal crossings constraints
We now pause to observe that the normal crossings condition barring three gauged neighbors from meeting a −1 curve provides a variety of cases where SCFT transitions fully gauging g GS (or any of its maximal subalgebras) away to neighboring curves is not possible. All single curve enhancements of a −1 curve having global symmetry maxima with at least three summands are examples since each neighboring curve can contribute only a single gauge summand. In particular, this restriction applies to the cases shown in Table 17 which we can read off from Table (6.1) of [24] with the further tightenings for type III and IV curves appearing above in Table 3.
Gauge algebra on Σ Kodaira type Flavor symmetry maxima Table 17. Selected single curve theories on a −1 curve permitting only sub-maximal gauging of flavor symmetry maxima to yield neighboring curve gauge summands for a valid SCFT base.
Note that this restriction is irrelevant in determining which theories can be eliminated for lack of a consistent gauging of global symmetries upon considering compact bases and coupling to gravity. In that context, we may relax the positive definite adjacency matrix condition to allow multiple −1 curves intersecting Σ. Hence, full gauging of the flavor symmetries in the cases of Table 17 again becomes plausible in that context.
Having trimmed the types of configurations where we might find more meaningful submaximal gaugings, we move on and discard this form of restriction as a minor curiosity.
Global to gauge symmetry promotion and 6D SCFT transitions
To motivate a more useful notion of "sub-maximal gauging," we now detail an example illustrating that complete gauging may be possible for a particular choice of resulting base while others can yield a somewhat surprising loss of continuous degrees of freedom. Consider a −2 curve with gauge algebra so (7). The unique flavor symmetry maximum for this single curve theory is g GS ∼ = sp(1) ⊕ sp(4). While we are able to gauge the entire flavor symmetry and can even do so to yield an SCFT base, e.g., 221 with the configuration 2 (III,su(2)) 2 Σ (I * ss 0 ,so(7)) 1 (I ns 8 ,sp (4)) , we cannot do so for the base α ∼ 2 2 2 Σ 2. Furthermore, can we cannot even gauge the neighboring curves in this base to obtain a maximal subalgebra of g GS . Hence, even though the number of neighboring curves we can add allows for the possibility of gauging a maximal Lie subalgebra of g GS , careful inspection of the available type assignments to these curves reveals only sub-maximal gauging is permitted.
More surprisingly, the neighboring curve gauge summands together with the remaining global symmetries on Σ in the presence of these curves gives a sub-maximal sub-algebra g GS ⊕ su(2) ⊕3 of g GS . The required gauging of g GS to form this base in fact gives a rankreducing breaking of g GS , as we now confirm. Consider that the neighboring −2 curves must have Kodaira type III for intersection with Σ. The resulting configuration appears as 2 (III,su(2)) 2 (III,su (2)) 2 Σ (I * ss 0 ,so (7)) (III,su (2)) and requires contributions to (ã,b,d) Σ ∼ (≥ 4, ≥ 6, 12) for the T-junction given by at least (3, ≥ 6, 9). As a result, the only symmetry bearing curve Kodaira types which can simultaneously intersect Σ are I ns ≤3 and III. Further gauging of any remaining global symmetry is prevented by adjacency matrix requirements for SCFT bases. The remaining global symmetry for the resulting base (which arises purely from Σ and hence matches g GS ) together with the neighboring curve gauge summands yields a sub-maximal algebra g max of g GS since This phenomenon motivates our working definition of sub-maximal gauging to be a case in which the sum of neighboring gauge algebras with the residual global symmetry algebras g GS along Σ are not maximal splittings of g GS in the sense of being among the (relatively) maximal subalgebras of g GS having the required number of summands to match the count of non-trivially gauged neighboring curves. Observe that the discrete U (2) subgroup, Γ 2 associated to the original base with the single compact curve Σ is trivial, while that for the base in the above gauging, namely Γ α , is not. This suggests that the emergence of non-trivial discrete U (2) gauge fields may be an ingredient in determining permitted global symmetry gauging rules and hence a helpful tool in classifying 6D SCFT RG flows.
Considering this phenomenon more generally defines an interesting structure associating to each single curve flavor symmetry maximum the collection of relatively maximal algebras which can be gauged given fixed neighboring curves defining a base with discrete U (2) subgroup Γ. Said differently, we have a distinguished class of Lie subalgebras for each of the flavor symmetry maxima with each member in this class of subalgebras associated to a discrete U (2) subgroup. Additional structure emerges since not all Γ associated permissi-ble SCFT gauging transitions are shared for the distinct flavor maxima with fixed Kodaira type. Further refinements to the data appear since multiple Kodaira types in some cases can realize a given gauge algebra.
A broad survey of single curve gaugings appears in Table 18.
Conclusions and outlook
We have carried out a systematic investigation global symmetries for each 6D SCFT with a known F-theory realization having no frozen singularities, namely those appearing in the classification of [16]. We have produced a tentative classification of the geometrically realizable global symmetries of these theories. The tools we have provided include an implementation of an algorithm enabling explicit listing of the Kodaira type realizations for each 6D SCFT gauge enhancement, thus helping to complete the geometry to field theory "dictionary" for these theories. We have detailed the structure of global symmetries permitted via this algorithm in terms of the "atomic decomposition" of 6D SCFT bases from [16] and in terms of certain shorter chains which may occur. This has enabled us to recast our findings via short listings and simple constraint equations for these symmetries in terms of the geometric realizations of each gauge theory, i.e., the Kodaira type assignments compatible with each gauge assignment. We have made the latter manifest, resolving certain ambiguities in the classification appearing in [16].
In the process, we have eliminated some of the CFTs detailed in the classification of [16] and shown that the refined classification can be recast in purely geometric terms without appeal to anomaly cancellation. We have also investigated the gauging of 6D SCFT global symmetries which yield transitions between SCFTs and found that these gaugings can result in rank reductions. We have derived novel restrictions on Calabi-Yau threefold elliptic fibrations, carried out local analysis of nearly all permitted singular locus collisions, and found many local and global constraints on permitted collections of singular fiber degenerations for such fibrations. This provides steps towards a general analog of [31] by constraining singular fiber degenerations along chains of curves with arbitrary Kodaira types. Our approach strongly constrains the space of non-compact CY threefolds and makes explicit all potentially viable varieties of this type at finite distance in the moduli space meeting a transverse singular fiber collision hypothesis up to specification of noncompact singular fibers not associated to a non-abelian algebras (with this latter caveat easily removed by trivial adjustments in our algorithm).
We hope these tools prove useful in the classification of 6D RG flows and a complete classification of all 6D SCFTs. In particular, whether the multiple global geometrically realizable global symmetry maxima which arise in many cases correspond to distinct theories (i.e., if there terms under which the global symmetries of F-theory models provide SCFT invariants) is a question we leave to future work. While the relations between gauge and global symmetries of 6D SCFTs with those of discrete U (2) gauge fields determining endpoints we have discussed are suggestive, we hope that additional investigation may help clarify the precise interplay between these ingredients determining the field content of these theories. (−) I * ss Table 18. Selected gaugings of single curve SCFT GS maxima. Here ' †' indicates that a GS factor is not gaugable onto any additional neighboring compact curve allowed in any SCFT base, ' ‡' the same due to the normal crossings condition, 'X ! ' that the GS (relative) maximum is not fully gaugable for any SCFT base, and 'X !! ' the same due to the normal crossings condition. Here ' * ' indicates a relatively maximal gauging for the given base and ' * * ' the same among all bases; unique gauging for a given base is indicated with an ' * ! ' symbol.
A Intersection contributions and forbidden pair intersections
In this appendix we extend the intersection contributions data collected in [24,25] to include that for pairs of curves with A, B > 0 in which either curve may be compact. The lone exception to the latter concerns cases of non-compact transverse curves which carry no gauge-summand, namely those with Kodaira types I 0 , I 1 , or II. Such curves cannot contribute global symmetry summands and hence only require consideration in cases where the transverse curve can be compact component of the discriminant locus. Before proceeding, we note that studying the contributions to curves with A > 0 or B > 0 from transverse gauged fibers was safely ignored in [24,25]. For the theories treated in those works, the maximal flavor symmetry inducing configurations arise for a curve with fixed Kodaira type when A = B = 0. However, in treating flavor symmetries for more general SCFTs with a base consisting of more than a single compact curve, the minimal values of A, B along a given curve may be non-zero. For example the −1 curve in the base (12)1 Σ 2231513221(12) requires Σ here has type I 0 with A Σ ≥ 4. This minimum value of A Σ along Σ depends significantly on the presence of non-neighboring curves, even those which do not affect the minimum gauging of the neighboring curves as illustrated in Table 19. Note that the extent of "non-locality" is rather significant. For example, the quiver (12)1 Σ 22315 requires the same minimum as (12)1 Σ 22315132 given by A Σ = 3, while addition of the gaugeless type II curve of self-intersection −2 in the final position to yield (12)1 Σ 223151322 raises this minimum to A Σ = 4. By the same token, the only assignments of orders of vanishing along each curve of the quiver 1223151322 compatible with a left attachment of a non-compact II * fiber inducing the (unique) e 8 global symmetry maximum for this quiver have assignments to the −1 curve of the form (A ≥ 4, 0, 0).
Our approach stems in part from this subtlety that the minimal values of A, B are hence "global" in quiver position as illustrated by the above example. An exhaustive computer search approach to determining global symmetries beginning with assignment of orders of vanishing along each compact component of the discriminant locus is hence merited. Similarly, maximality checks of global symmetry subalgebras which may arise from transverse non-compact singular locus components also merits consideration of compact curves with A, B > 0 in addition to A = B = 0 cases. Hence, development of local models for all such intersections a natural first step as these do not appear to be available in the literature.
We shall go somewhat further than required by developing the intersection contribution data from such local models for a somewhat broader class of intersections than actually arise in the minimal order assignments to SCFT bases. This approach has certain benefits. First, it allows us to be cavalier in assigning larger orders of vanishing and allowing automation to ensure we have not missed any global symmetry maxima inducing configurations which may require non-minimal A, B assignments. Furthermore, this allows us to more tightly constrain non-minimal A, B configurations which we hope may find application addressing codimension-two singularities and perhaps development of a framework which may come to bear on issues concerning the gauging of global symmetries beyond the scope addressed in this work.
Minimum value of A along Σ Compatible subquiver(s) 0 Table 19. Minimum orders of vanishing of f | Σ for various subquivers of (12)1 Σ 2231513221 (12) with compatible Kodaira type (and hence gauge) assignments given by truncations of the only viable assignment along this quiver, namely II * ,I 0 ,II,IV ns ,I * ns 0 ,II,IV * ns ,II,I * ns 0 ,IV ns ,II,I 0 ,II * .
Permitted A, B values in longer quivers are often highly constrained. Certain bases allow only a single globally consistent A, B assignment though locally infinitely many choices are permitted. For example, the unique permitted values on the quiver 71231513221(12) are e 7 III * 7 (3,5,9) while the subquiver 13221(12) permits infinitely choices of these values consistent with the Kodaira type assignments above.
In some circumstances, the permitted A, B values force particular gauge assignments. Consider for example the base 232. We can bypass anomaly cancellation machinery and more involved global analysis of the monodromy cover for I * 0 fibers by noting that the only orders of vanishing consistent with the naïve intersection contribution constraints of (3.11) leave the only configuration , where B ≥ 1 requires that the I * 0 fiber is semi-split (as we can read from Table 25 along with the observation of A.6.2.1 that so(8) would have required purely evenã contributions), thus yielding an so(7) gauge summand.
As an example yielding novel constraints, we consider the minimal left f 4 attachment compatible enhancement of the interior link 1321 with right e 7 -node attachement to a curve Σ with self-intersection −m having minimal orders of vanishing along each curve as indicated below.
Note that Σ R must type I 0 since gaugeless curves with type other than I 0 lead to nonminimal intersection with a curve having III * fiber; B Σ ′ ≥ 1 follows immediately from the naïve residuals tallying requirements of (3.11). Provided that 5 ≤ m Σ ≤ 7, there is no inconsitency with B Σ R = 1. However, when m = 8, naïve intersection contributions of (3.11) from Σ R to Σ rule out the configuration. Since additional orders of vanishing of g Σ R and g Σ are coupled, i.e., B Σ R ≥ 1 + B Σ , any attempt to permit the required vanishings along Σ by raising B Σ in turn raises B Σ R . However, this is not compatible with the indicated gauge assignment since it forces B Σ M > 0 and in turn B Σ L > 0, thus requiring an so (7) gauge summand along Σ M as we can read from Table 25. Further note that in these cases with m Σ = 8, this further enhancement of Σ L to so (7) is unacceptable in the presence of a left f 4 attachment (which we can easily confirm via the e 8 gauging condition since so(7) ⊕ f 4 ⊂ e 8 ).
We next consider the quiver 2231322 which minimally supports an assignment with Kodaira types leading to trivial contribution from the −1 curve (via "Persson's list" restrictions). Now consider raising the Kodaira type on the −1 curve to type II which allows for a potentially non-trivial flavor symmetry summand to appear there. Note that such a summand is a priori permitted while maintaining the E 8 gauging condition since g 2 ⊕ su(2) ⊂ f 4 . We have the following assignment with the minimal orders along each curve required by (3.11) indicated below. The only possible global symmetry can only occur from a type I 2 fiber. It hence is relevant in our analysis to determine the local models for intersections of curves with fibers of various Kodaira types having A, B > 0.
Before turning to a systematic analysis of all intersection contributions of potential relevance to constraining F-theory SCFT models, we begin with a simple example to demonstrate the method we follow to derive the intersection contribution data appearing in this section. This data plays a key role in our algorithm to eliminate various gauge enhancements of quivers and possible global symmetry summand inducing non-compact transverse curve configurations. Additional examples for certain A = B = 0 cases following the same approach can be found in [24,25].
The A, B values which may appear in arbitrary configurations consisting of an SCFT base decorated with non-compact transverse curves are somewhat non-trivial to constrain a priori. Furthermore, the results of a comprehensive analysis of intersection contributions for all permitted intersections of Kodaira fibers may find alternative uses in constraining elliptically fibered Calabi-Yau threefolds more generally. We shall hence tabulate intersection contributions for nearly all curve pairs (with the exceptions of −1, −1 intersections and gaugeless non-compact fibers which clearly do not arise in SCFT bases with non-compact global symmetry inducing fiber configurations). Though many of the indicated intersections do not arise in minimal assignments for SCFT bases, these cases often become relevant when we consider any alternative settings which allow us to relax the positive definite adjacency matrix condition or motivate careful study of all options for codimension two singularities. Though extending the rest of our analysis to these settings is beyond the scope of this work, we hope that the tools developed here facilitate such investigations.
A.1 Computing intersection contributions
We now detail an example illustrating the method underlying our approach to determining intersection contributions. Certain subtleties make some cases significantly more involved than illustrated by our particularly simple choice of example. However, most relevant issues for the types of computations we carry out have been discussed at length in [24,25]. A few novel subtleties arise in treating compact pair intersections as we shall discuss in various cases treated in this appendix.
As in the configuration above, we let Σ be a type II curve along {z = 0} with orders of vanishing given by (1 + A, 1, 2) = (2, 1, 2) and Σ ′ a type I 2 curve along {σ = 0} (with orders of vanishing (0, 0, 2)). Let P denote the point of their intersection at σ = z = 0. The general form for f, g, ∆ of a type I 2 curve can be read from (A.8) of [24] as Imposing these divisibility conditions then yields where we have replaced φ/z, f i /z 2 , g 2 /z with φ, f i , g 2 for simplicity of notation. We read the minimal orders of vanishing giving "intersection contributions" to Σ via the minimum σ-degrees of the lowest order terms in f Σ , g Σ ,∆ Σ (namely those of z-degrees 2, 1, 2, respectively, as required to match the z-orders along Σ) to obtain Thus, the potential sp(1) flavor symmetry summand appearing in (A.4) is impossible. Note this would have yielded a configuration with transverse curve algebras contained in the global symmetry maxima for a type II curve appearing in [25]. The contributions analysis therein does eliminate the configuration, but requires the above augmentation to produce the loweredã P minimum indicated above.
A.2 Preliminaries
In the following sections, we let Σ be a curve at {z = 0} with −Σ ·Σ = m, having transverse intersections with curves Σ ′ j located at {σ j = 0}. Let the orders of vanishing along Σ be (a, b, d) and those along Σ ′ j be given by (a ′ j , b ′ j , c ′ j ).
A.3 Type II curve intersections
We introduce further constraints on collisions involving a Kodaira type II curve in A.7 for cases involving an I * n curve. Here we focus on the remaining non-trivial cases, those involving an I n curve. Such collisions were studied at length in [25] when the I n curve is non-compact and A = 0 along the type II curve. The tight restrictions we find in sections A.4,A.5 for type III and IV curves have only weaker analogues here. Underlying this distinction is that we are farther from non-minimality in II,I n collisions. As a consequence, we are required to consider intersections where the most general form for the relevant local models of type I n curves is unknown since the cases 7 ≤ n ≤ 9 are included.
We now proceed to make mild generalizations of the intersection contribution data first appearing in [25] that are required in the present work. Part of our work is dispatched by reading from Table 42 taken from [24]. We find that A ≥ 3 is not possible for intersections with I n curves having n ≥ 4, a bound we revise here since this only holds for non-compact I n curves. The A = 0 results can be read directly from the contribution tables of [25]. In one such case, we find a small correction. This leaves us to determine the relevant contributions from compact I n curves for all n, and non-compact curves only for n ≤ 4. Since type II curves do not carry a non-abelian gauge algebra, we can safely ignore collisions of compact I n curves with non-compact type II curves.
We collect intersection contributions for the remaining cases in Table 20. The '!' symbol there indicates disagreement with [25]. Entries marked with ' †' are not permitted via the inductive form for I n curves appearing as (A.25)-(A.28) in [24]. The ' * ' symbol indicates the contributions are only valid for a non-compact I n curve, 'X.' that the intersection is valid only for non-compact I n curve, and 'X..' that the intersection exceeds numbers of vanishings available for a type II curve even with m = 1. Entries to the right of those indicated with an 'X', 'X.', 'X..' are similarly forbidden.
A.3.1 Intersection with an I * s
0 curve The analysis for intersection of a type II curve with a transverse curve holding type I * s 0 in [25] bars such intersections but implicitly uses that B = 0 along the I * s 0 curve, say Σ ′ , to obtain non-minimality of the intersection. Provided B ≥ 2, such intersections are also non-minimal. However, in the special case that B = 1 factorization of the monodromy is possible. We can read from (A.9) that intersection contributions to Σ are then given by (2 + (a mod 2), 4,8) and contributions to I * s 0 curve are (2⌈a/2⌉, 1, 4⌈a/2⌉).
A.4 Type III curve intersections
Let Σ be a curve with Kodaira type III and orders of vanishing (a, b, 3) = (1, 2 + B, 3). Suppose that Σ ′ has orders of vanishing (a ′ , b ′ , d ′ ). Note Σ has odd type, makingd contributions thrice those ofã contributions. The only technical cases for such intersections concern transverse curves with type I n or I * n . The latter are restricted for n ≥ 1 due to non-minimality. We treat intersections with I * 0 curves in section A.6, focusing here on treating Σ ′ with type I n in each case relevant to global symmetry computation, namely those involving at least one compact curve.
A.4.1 III, I n intersections
A.4.1.1 Type III with B ≥ 0 intersection with a non-compact type I n curve We now extend the contribution data for cases with B = 0 analyzed in [24] to those with B > 0. Reading from Tables A.1,A.2 of [24], we find values for the local intersection contributions to a III with B = 0 and that the restriction that B ≥ 2 requires n ≤ 3 for non-minimality. We make a correction to this bound; the revisions appear in the appendices as Tables 42,41. First, we collect the results of contributions for intersections analyzed [24] and the remaining B > 0 cases of contributions to a type III with m = 1, 2 from a noncompact transverse I n curve in Table 21. Note that in the cases with n ≥ 7, we compute contributions working from the inductive Tate forms for I n curves from (A.25)-(A.28) of [24] that are potentially not the most general when 7 ≤ n ≤ 9. Here Σ has residuals given by (5, 8 + B, 15) for m = 1 and (2, 4 + 2B, 6) when m = 2.
A.5 Type IV curve intersections
Here we collect information about monodromy rules for type IV curves and intersection contributions involving transverse curves. The only subtle cases involve type I n and I * 0 curves since I * n intersections with n > 0 results in non-minimality.
Note that Σ has even type. Reading from 2, we see the monodromy along Σ is determined by whether g z 2 | Σ is a square; the larger gauge algebra, su (3), occurs if so.
A.5.2 Intersections of IV with I n curves A.5.2.1 IV with A ≥ 0 meeting I n curves for global symmetry or quiver intersections This case is detailed in [24] when the transverse type I n curves are non-compact and A = 0 along the type IV curve. These contributions appear with a minor correction for intersection with an I ns 3 fiber (which also holds for the compact pair case) in Table 41. We shall use these contributions for the compact pair case when A = 0 as then they remain unchanged for these type pairings. Note that the actual minimal contributions must be modified from these table values (as they depend on monodromy along the IV curve) to give eveñ b contributions to the type IV in the IV s case. From Table (6.1) of [24], we have that transverse I n curves carry at most sp(4) symmetry. Note that as indicated in Table 42, n ≤ 3 is required when A > 0 for non-minimality. We collect the results of contributions to (and from when applicable) a type IV curve intersecting a type I n curve for these remaining cases in Tables 23 and 24, separating the A = 0 case in which n > 3 is permitted. Table 24. Intersection contributions to IV ns/s with A = 0 from a transverse non-compact I n curve. An X indicates the intersection is forbidden by minimality considerations. Here ( * ) indicates a permitted intersection only when IV has m = 1, and (-) an intersectionb on IV forbids in all cases.
A.6 Type I * 0 curve intersections Here we compile information about monodromy rules for I * 0 curves and intersection contributions to residuals counts involving transverse curves from an I * 0 curve.
A.6.1 Preliminaries
Let Σ = {σ = 0} be a curve with type I * 0 and orders of vanishing (a, b, 6) = (2 + A, 3 + B, 6). Suppose that Σ ′ = {z = 0} is a transverse curve with orders of vanishing (a ′ , b ′ , d ′ ). Recall that we refer to the cases with 2b ′ = 3a ′ as 'hybrid type', those with 2b ′ > 3a ′ as 'odd type' and those with 2b ′ < 3a ′ as 'even type'. The contributions to residual vanishings of ∆ along Σ ′ from intersection with the transverse curve Σ are given by 3ã and 2b in the odd and even type cases, respectively, whereã andb are the residuals contributions to Σ ′ from a curve Σ for vanishings of f, g, respectively.
B > 0 : Here Q = ψ(ψ 2 + p), and since this is not an irreducible cubic, this case is not possible.
A.6.1.3 Gauge algebra so(8):
The monodromy cover is fully split here and appears as (ψ − α)(ψ − β)(ψ − γ). To have the ψ 2 term vanish we have Now we note that for m = 4, we have that for either A, B > 0 this is the only possible gauge algebra. For A > 0, m = 4, we haveb = 0 and hence the monodromy cover after appropriate rescaling appears as ψ 3 + 1 and hence factors completely. For B > 0, m = 4 the cover appears as ψ(ψ 2 + 1) after rescaling since hereã = 0, and hence the cover can be fully split.
A > 0 : In this case, −β 2 − α 2 − αβ = 0. Substituting for β 2 using this identity gives (g/σ 3 ) σ=0 = −αβ(α + β) = α 3 . Thus g = σ 3 (α 3 + g 4 σ + O(σ 5 )). This implies that all contributions to the residuals in g come in multiples of 3. For example, we have larger than expected contributions to g residuals from intersections with curves of type II. Since A > 0, inspecting the case of an intersection with type III shows that we have a (4, 6, 12) point as another consequence. This follows since the remaining terms in g are of total order at least 6.
A.6.1.4 Summary of Restrictions
We collect the restricted monodromy assignments in Table 25.
X so(8) Here we study the contributions to the residual vanishings along Σ ′ from a transverse intersection with Σ in each of the cases above.
A.6.2.1 Gauge algebra so(8): Using the above preliminaries, we have the following table of intersection contributions from Σ with so(8) gauge algebra to Σ ′ . This depends on the orders of vanishing along Σ ′ , whether the order in g there is a multiple of three, and if Σ ′ is of even or odd type. Recall that Σ ′ with orders (a ′ , b ′ , d ′ ) is of even type if 3a > 2b, odd type if 3a < 2b, and hybrid type otherwise. We do not explore the latter case in Table 26. Note that I 0 can appear in any of the three types as orders of vanishing for I 0 are given by (a ′ , b ′ , 0) with one of a ′ or b ′ necessarily zero. A.6.2.2 Gauge algebra so (7): Here we only need to study the case B > 0. When Σ ′ has even type, we have minimal contributions to residuals along Σ ′ given by (2, b, 2b). In the odd type on Σ ′ case these are instead (2, b, 6).
A.6.2.3 Gauge algebra g 2 : Only A > 0 is relevant here. We have contributions given by (a, 3,6) in the even type case and by (a, 3, 3a) in the odd type case.
A.6.3 Restricted tuples
We now discuss some consequences of the above I * 0 restrictions. Residual vanishing counts along Σ before any intersections are (8−2m+2A, 12−3m+3B, 24−6m), where m = Σ·Σ. We list forbidden collections of transverse curves simultaneously meeting Σ for various values of m and a given monodromy assignment in Table 27. An X indicates the monodromy assignment for specified values of A, B, m is forbidden. Separate forbidden collections are semicolon-separated. Note that we do not use all available vanishings with some collections. Rather, any collection containing a forbidden collection is also ruled out since the required vanishing conditions cannot be met with any of the indicated transverse subcollections. For example, in the case with data given by so (8), A > 0, m = 3, the presence of two transverse curves with types (., 1, .) prevents g = (g/σ 2 )| σ=0 from being a cube (since these each require 2 additional vanishings of g at their intersections with Σ). However, this contradictsb = 3. Table 27. Forbidden transverse curve collections meeting I * 0 with orders (a = 2 + A, b = 3 + B, 6).
The form of the relevant restricted polynomials for so (8) with A, B > 0 rule out a significant number of intersections that satisfy naive residuals tallying. Notable cases include so (8), B > 0 intersection with a type III curve (with orders (1, ≥ 2, 3)) since this hence induces non-minimality and so(8), B > 1 intersection with type II curves. Other non-trivial prohibitions include so(8), A > 0 intersections with any type IV curves or type III curves.
A.6.4 Intersection contributions to I * 0 A.6.4.1 I * 0 with A > 0 meets multiple I n curves: Let Σ be a curve with type I * 0 at z = 0 having orders of vanishing (2 + A, 3, 6). Consider first A > 0. Here Σ has even type and the contributions to residual vanishings of ∆| z=0 are induced precisely by those of g| z=0 .
In the case that m = −Σ · Σ = 3, the residual vanishings along Σ are given by (−, 3, 6). Working from the general local form of an I 2 curve found in (A.2) of [24], we can place further restrictions to give the forms for I n with n > 2. For an I 2 curve at σ = 0 to meet Σ, we have z|φ. Since A > 0, Σ has even type so each vanishing of∆ along Σ corresponds to a vanishing of g there. Hence, for two I 2 curves at σ, σ ′ , (using the general form separately for each) restricting to σ = 0 and σ ′ = 0 and using that there is ab contribution at each intersection, we have z 2 |φ in each expansion (rather than only the a priori requirement that z|φ needed for intersection with an arbitrary I * 0 curve). Since g goes as φ 3 + O(σ), we thus have a residuals contribution minimum from the I * 0 given by (−, ≥ 1, ≥ 2) at each I 2 intersection and contributions (≥ 4, ≥ 6, ≥ 8) to each I 2 curve. When compact, the I 2 curves must have self intersections given by −1 and they cannot meet other type I n with n ≤ 4 curves. 6 Since z 2 |f 1 and z 2 |φ we can read from the form of g in (A.2) of [24] that there are larger residuals contributions to Σ given by (−, ≥ 2, ≥ 4) from each intersection with an I n when n ≥ 2. Hence these triples are disallowed as they exceed the permitted number of vanishings along Σ. Note that results of this kind follow from intersection contribution tallying as read from the general forms of I n unless A = B = 0 when special treatment is required.
A.6.5 I * 0 restricted tuples with A = B = 0 We now briefly discuss certain forbidden configurations for a curve Σ of type I * 0 curves with orders of vanishing given precisely by (2,3,6). We will proceed by working through the cases for m Σ . The restrictions we find are helpful in allowing us to eliminate the need for anomaly cancellation machinery while characterizing 6D SCFT gauge enhancement structure.
Result A.1. The configuration I 2 I * 0 I 2 requires at least a semi-split I * 0 curve Σ for m Σ = 3.
Proof: Let the two I 2 fibers be denoted Σ 1 , Σ 2 . Now Σ i intersection with Σ requires that the forms of f , g are those giving the general form for I 2 of (A.5) modified such that the relevant coefficient functions are instead constant. Proceeding in this fashion while imposing both I 2 intersections, we shall simply obtain a partial splitting of the monodromy cover polynomial ψ 3 + f ψ + g via explicit factorization.
The most general f , g, ∆ are given in the patch with coordinates z, σ by where φ, Φ, f 1 are unspecified constants. One can then partially factor the monodromy cover as in this coordinate patch. Note that we can now also read off the factorization of the monodromy cover polynomial on the other patch.
This implies the gauge assignment on the base 131 given by su (2), g 2 , su (2) is not viable. Similar methods forbid the triple I 2 I * 0 ns III and show that when the III here has orders (1, 3, 3), we can forbid that I 2 I * 0 III regardless of monodromy. Likewise, the triples I 2 I * 0 IV and I 4 I * 0 I 2 are also forbidden for all of monodromy choices.
Since ϕ is the discriminant of the quadratic term in (A.10), this cannot be a square, as Σ would then be a fully split I * s 0 curve carrying so (8) (1)], as we now show. An I ns ≥4 intersection requires A = B = 0 along Σ and deg(ϕ) = 4 for m = 2. Let the type III intersections with Σ be at {σ i = 0}. We have σ i |ρ, σ i |ϕ. Any I n fiber at {σ = 0} has n ≤ 4 since we must have σ n |ϕ 2 . The latter holds since ρ and ϕ cannot share a root σ unless σ|λ and σ|µ, but this is impossible since deg(λ) = 2 and σ 1 σ 2 |λ. An additional sp(1) summand from an I 2 fiber at {σ ′ = 0} giving σ ′2 |ρ exhausts all roots of∆ Σ . This applies to restrict global symmetries of 122 and 222 with the above assignment. (1)]. Here we have σ 1 |ϕ, σ 1 |ρ. This leaves the maximal configuration for the remaining roots of ϕ and ρ occupied by I ns 6 and I ns 2 fibers, respectively, as two I ns 4 fibers can be eliminated. The latter requires three distinct shared roots of ϕ and ρ though deg(λ) = 2. In particular, this applies to the bases 12 and 22 with type III required for 22.
Similarly,
A similar extension of the argument yields following holding for Σ with any m. A, 3, 4) for A > 0. Those to Σ ′ are at least (2,4,8).
A.6.5.3 Additional tools to match anomaly cancellation conditions from geometry One of the few tools for which we require further geometric insight (beyond those available via tracking contribution counts and single curve global symmetry maxima) in order to match known constraints derived via anomaly cancellation machinery is the following result.
This condition appears in [16] and is argued on partially on anomaly cancellation grounds. We now show this follows from geometry without field theory considerations. Along with the other geometric constraints derived here, in [24], and those in the appendices of [16], non-minimality and intersection contribution tallying more than suffice to match all 6D SCFT enhancements constraints on all links and hence on all bases to those one can reach while also employing anomaly cancellation considerations. Some enhancements are eliminated via the present considerations, as discussed in Section 4. Proof: With residuals contribution tracking, we find that the only enhancements of 322 remaining are the options so (7), su(2) and g 2 , su(2) on the 32 subquiver. To see that so (7) is forbidden on the −3 curve Σ, we first note that in all the available type assignments on the link 3 Σ 2 Σ ′ , Σ, Σ ′ have orders precisely (2, 3, 6), (2, 2, 4), respectively. It thus suffices to show that under the following conditions, the assignment so(7) to Σ is not possible. In fact, this argument now follows directly from Appendix E.3 of [16]. Nonetheless, we shall give a pair of alternative arguments demonstrating the utility of various tools.
Our setup leaves only one possibility for the residuals on Σ: they are given by (0, 1, 2) and hence the form of f Σ , g Σ are given by f ∼ c 1 w 2 and g ∼ c 2 zw 2 where c 1 , c 2 are nonzero constants and the IV lies at w = 0. Observe that z = w (or we would have a codimension two (4, 6, 12) point along the I * 0 curve). The resulting monodromy cover is then irreducible. We have the cover given by P = ψ 3 + c 1 w 2 ψ + c 2 zw 2 that cannot be semi-split, since as we saw above the ψ 2 term vanishing requires that With self intersection −3, the residuals on are (2, 3, 6) for type I * 0 with orders (2, 3, 6). Hence deg(µ − λ 2 ) = 2 and deg(sλ) = 3 with the degrees of µ, λ being 2, 1 respectively. We can then identify µ ∼ w 2 and λ ∼ z where z gives the other vanishing of g along the I * 0 . We have g = c 2 λµ = (cw 2 )(c ′ z) and f = (µ − λ 2 ) = (cw 2 − c ′2 z 2 ), where c, c ′ are nonzero constants. This means f has two distinct roots along the I * 0 , contradicting that we use both available vanishings at once in meeting the type IV curve.
Proof: We proceed as above, here imposing the requirements that Proceeding to match the terms of f and µ − λ 2 order by order and then those of g, we find before completing the matching that λ 0 = 0, µ 0 = 0, From the latter we see that one of λ 1 , f 1 must be zero since g is zero at order σ 2 . We rule out the first case as it induces infinite intersection contribution, leaving f 1 = 0. Intersection contributions to the I * 0 ss are then given by (2,3,6). (Note this forbids any additional intersections along Σ, even with an I 0 curve having A ≥ 1.) The above seemingly mild restriction plays an important role in determining which enhancement configurations are permitted and the degrees of freedom which remain to become global symmetry summands.
A.6.6 Contributions to I * 0 with A, B > 0 from I n We now investigate the details of intersection contributions in the few permitted values for n in for I * 0 intersections. When A, B > 0, the situation is even more restrictive. The maximal allowed n for I n meeting I * 0 in each case of A, B > 0 is given in Table 41. So that we may refer to the general forms for I n type curves, let's suppose our I * 0 here lies at z = 0.
A.6.6.1 A > 0, so (8): We will use the observation that since we have algebra so (8), intersection contributions tõ b are multiples of 3. Those tod are then the doubles of thoseb contributions as A > 0 is an even type I * 0 . From Table 42, we see that the maximal allowed intersection with I n in this case with A > 0 is for n = 3. I n compact: We will treat the three possible values of n separately when Σ ′ here is a Kodaira type I n curve that is compact and has self-intersection −1. Note that we must have A ≤ 2, since otherwise we exceed the 4 allowed f residual contributions to Σ ′ . Since A > 0, we must be without monodromy along Σ ′ in the one relevant case where n = 3. When n = 1, 2 we note that z 2 |φ with φ as in (A.4,A.8) of [24], respectively. Considering this fact together with the restriction that we have contributions in threes yields Table 28.
Note that in the case when A = 1 and n = 1, we must have at least z 4 |g 1 and z 4 |g 2 in (A.4) of [24] via the above consideration that g intersection contributions must come in threes here. This has total order of f, g given by 4, 5 in both cases, falling barely short of non-minimality. The other cases n > 1 are barred by similar considerations.
Non-compact I n curves for global symmetry: We carry out a similar study here with the change that φ is allowed higher degree here, introducing the possibility of intersections with I 2 or I 3 . Since we are concerned with global symmetry, we safely ignore n = 1 (as I 1 curves carry the trivial algebra). However, intersections for n > 1 are forbidden as a result of non-minimality following from the strong requirement that contributions tob are divisible by 3. This result is indicated in Table 29.
A.6.6.2 B > 0 so(8): This case has similar requirements to those above with the main difference being that the contributions to residuals in f are required to be even here as we saw in the preceding Table 29. Intersection Contributions to I * 0 s with A > 0 from an I n non-compact curve. An 'X' indicates the intersection is forbidden by non-minimality.
analysis. We collect results for this case in Table 30.
A.6.6.3 B > 0, so (7): Again we consider intersections with a transverse type I n curve, Σ ′ using restrictions from Table 42 which dictate that the maximum value of n here is 2. We will use that we have odd type on the I * 0 . We will not need to use that intersection contributions to the I * 0 in this case are dictated by the lowest order z term in f , say µ 1 γ 2 with µ 1 square-free and of nonzero degree (since in this case the relevant term cannot be a square, or equivalently, contributions toã along an I * 0 ss cannot have purely even f residual contributions).
I n compact:
With these constraints, we produce Table 31 by reading from the general forms for I n as they appear in [24] Appendix A. Note, we have B ≤ 3, as the degree of φ (A.4) of [24] is 2 in the only relevant compact I n intersections, those for a curve with self-intersection −1. (1, 1, 3)/(4, 6, 10) X 3 (1, 0, 3)/(4, 6, 10) X Table 31. Intersection contributions with I * 0 ss with B > 0 from/to an I n curve. An 'X' indicates the intersection is forbidden by minimality considerations.
when meeting I 2 . Such intersections are barred for Σ ′ compact. We have not used the condition thatã contributions cannot be purely even, instead limitations being induced by minimality considerations. We collect our results in Table 31.
Non-compact I n curves for global symmetry: Our results here only concern n ≥ 2 since n = 1 does not yield global symmetry. From Table 42, we cannot exceed n = 2. We have the identical result when B = 1. When B ≥ 2, we can avoid non-minimality by raising the degree of φ. In this case, we only are interested in the intersection contributions to the I * 0 . We collect the relevant result in Table 32. Table 32. Intersection Contributions to I * 0 ss with B > 0 from a transverse non-compactI n curve. An 'X' indicates the intersection is forbidden by non-minimality considerations.
A.6.7 A > 0, g 2 The a priori restrictions in this case are similar to those for the so(8) case with the exception that g contributions are not forced to be multiples of three. On the contrary, we are prevented from having contributions which consist entirely of multiples of three, though this is irrelevant in our analysis here. When n = 3, we use that φ 0 rather than µ in (A.11) of [24] must carry all divisibility (we have z|φ 0 and have thus used all available roots of φ 0 ) since we are considering the case of intersection with a compact I n curve while f, g residuals are limited by 4, 6, respectively. We find there is non-minimal intersection for n ≥ 3 as this would require z 2 |ψ 1 with ψ 1 as in (A.11) of [24].
A.7 Type I * n curve intersections
In this section, we collect contributions to residual vanishings from an intersection with an I * n curve. The main focus is the case with transverse curve of type I n .
A.7.3 Multiple I * n curves meeting a type I m curve In this case, we must have the self-intersection along the I m curve, say Σ, with Σ · Σ = −1 since for −Σ · Σ > 1 we have no vanishings of f, g possible. The total residuals are (4, 6, 12 + m) along Σ, so at most two I * n curves can meet Σ, and such a pair leaves the no remaining residuals in f, g. Note that intersection with a pair of I * n curves requires that the I m curve has monodromy. A.7.4 Compact curve intersections for pairs with types II, I * n We first observe that the only relevant case here is n = 1, as a type II curve cannot meet an I * n curve with n > 1; such intersections are non-minimal even when the I * n is noncompact. We consider here the case with both curves compact, thus introducing additional constraints not applicable to the situations considered in [24]. Along the I * n curve, (ã,b) = (2(4 − m), 3(4 − m)) and II gives a non-trivial f, g contribution. Hence, m = 4 does not need to be considered. We summarize the contributions for all remaining cases of such an intersection in Table 35 with Σ · Σ = −m intersecting a II. An X indicates the intersection is non-minimal and an (X..)X. that the intersection is banned by (naive) residuals considerations. Entries to the right of an allowed entry have the same values.
Note that Kodaira types beyond II other than I n family types are banned from I * n intersection by non-minimality. Thus we have given here the set of possible intersection contributions for types other than I 0 , which we have recorded separately.
A.8 I 0 ,I * n curve intersections We now find the intersection contributions to a curve Σ with type I 0 from an I * n curve, Σ ′ , which may be non-compact. Our computations rely on the general forms of I * n given in Appendix B of [24]. We impose divisibility conditions on these local models as required for intersection with a compact I 0 curve having specified properties. We then read off the intersection contributions to Σ in each case while recording which of these allow Σ ′ compact and tabulating intersection contributions to Σ ′ in such cases.
In particular, the constraints onã,b and the general forms of I * n together imply that u 1 in the general form has the number of roots indicated in Table 37 where f = − 1 3 u 2 1 σ 2 + O(σ 3 ) and g = 2 27 u 3 1 σ 3 +O(σ 4 ). This data allows us to find the following rules for intersections with m = 4 m = 3 m = 2 m = 1 deg(u 1 ) 0 1 2 3 Table 37. Degree of u 1 for I * n with Σ · Σ = −m.
type I 0 ∼ (A, B, 0) singularities which are not necessarily compact nor transverse curves but simply singularities along the I * n locus. Hence, any remaining residuals in purely f or g yield one of the indicated intersection contributions. Note that in some cases indicated as nonminimal, there may be allowed point singularities in f, g to the corresponding order. Said differently, our tables are intended to track contributions from transverse curves but also give the contributions for point singularities when a transverse curve of the corresponding type is allowed. Only in cases where the intersections are indicated as allowed rather than non-minimal do we track the data since our priority is to treat transverse curves rather than simply singular points (but some information for point singularities does result). The contributions to the I 0 are also noted for use in the case that the singularity is an intersection with a compact transverse I 0 curve. Note that for m = 4, we cannot have intersection with a curve of type I 0 ∼ (A, B, 0) with either of A > 0 or B > 0. This fact and other restrictions of this form are already accounted for via simple residuals tracking without modification from what follows below. For A = B = 0, it is easy to see the intersection contributions in both monodromy cases are simply (2, 3, 6 + n), the naïve estimate. We collect the relevant results in Table 38.
A.9.1 m = 3 Residuals along the I * n here are given by (2, 3, 6 + 3n). We find intersection contributions and non-minimal intersections appearing in Table 38. Table 38. Intersection Contributions to I 0 ∼ (A, B, 0) and I * n with m = 3, respectively. 'X' indicates non-minimal intersection and '-' indicates exceeding allowed residuals. 'X.' indicates that the intersection is non-minimal here while in the non-compact case it is was not yet forbidden via the previous analysis in the non-compact case.
A.9.2 m = 2
Here residual vanishings along the I * n are given by (4, 6, 12 + 2n). Hence, A, B are capped for I 0 at 4, 6. The relevant contribution data is recorded in Table 39.
A.9.3 m = 1
In this case, the residual vanishings along the I * n are instead given by (6, 9, 18 + n). Hence A, B for I 0 are at most 6, 9, respectively. Contribution data for these intersections appears in Table 40.
A.10 Type I n intersections with A = B = 0 curves In Table 41 we show intersection contributions from type I n curves to those of every other possible gauged Kodaira type having minimal orders of vanishing on the transverse curve, To clarify, in this case αβ(α + β) must vanish. Hence one of α, β, or α + β must vanish. Suppose α = −β. Then α 2 + β 2 + αβ = α 2 .
In the case that we have A 0 > 0, the coefficient on ψ vanishes and hence (g/σ 3 )| σ=0 = (α 2 + β 2 )(α + β). The utility of this condition in considering z k |g to study B > 0 is not immediately obvious and we shall proceed without making use of it.
Given so(7) algebra on I * 0 , the monodromy cover splits partially. We can write it as When A 0 > 0, this becomes ψ 3 − λ 3 . Since λ 3 = (g/σ 3 )| σ=0 , we see that z divisibility of g along the I 0 becomes z 3 divisibility. In the case that A 0 ≥ 2, the order in f is already 4 before intersection with the I 0 . This results in a (4, 6, 12) point as the (4,4,8) point at the intersection is boosted by the monodromy condition. In terms of expanding g, this reads g = σ 3 (zg 3 (z) + zg 4 (z)σ + zg 5 (z)σ 2 + . . . ) with zg 3 (z) a cube. The latter requires that z 3 divides g| σ=0 . Hence, A 0 ≥ 2 and B ≥ 1 is forbidden for I * ss 0 . When we have g 2 algebra, the monodromy cover is irreducible, and g 2 so(7) so(8) Table 43. Forbidden intersections and type restrictions appears as ψ 3 + qψ + p. If this had a root, it would appear as ψ 3 + ψ(µ − λ 2 ) − λµ. In the case with A 0 > 0, irreducibility implies that p = λµ where µ = λ 2 , i.e., (g/σ 3 )| σ=0 is not a cube. In particular, this prevents (g/σ 3 )| σ=0 from being constant, hence barring the case with no residual vanishings along the I * 0 after accounting for any other intersections. Here, g = σ 3 (g 3 + σg 4 + σ 2 g 5 + . . . ), and the above is simply to say g 3 is not a cube. This result applies in all configurations involving an I * 0 with g 2 algebra; there is nothing used about an intersection with I 0 . To phrase this differently, if we have an I * 0 with g 2 algebra and A 0 > 0, the other vanishings of g along this curve cannot all appear as cubes. Among other restrictions this imposes, we see that when Σ ′ · Σ ′ = −m, with m given by 1, 2, 3, the residual vanishings of g before considering other intersections read 9, 6, 3, respectively in these cases. This implies for example that a g 2 gauged I * 0 with A 0 > 0 cannot meet 3, 2, 1 other type III curves each having types (1,3,3) in the respective cases.
Similarly, for B 0 > 0 we have monodromy cover given by ψ 3 +qψ. This is not irreducible, thus ruling out the case with g 2 algebra.
A few of these results are summarized in Table 43.
B Notes for using the computer algebra workbook
The arXiv submission of this work is accompanied by 'gsFunctions.nb,' computer algebra workbook for Mathematica enabling explicit listing of gauge enhancements and global symmetry maxima for nearly all 6D SCFTs. This notebook can be downloaded via the "Download:" link in the arXiv webpage featuring the submission history and abstract. Click on the "Other format" link and then on "Download source." After unzipping this file, all contents should be moved to a single folder. The 'gsFunctions.nb' workbook contains function definitions and the entire notebook should be evaluated to initialize them. At the bottom of this workbook are a series of examples with comments detailing instructions to initiate computation of gauge and global symmetries for an arbitrary at-most-trivalent base given as user input. Results can be written to a LaTex file in the workbook directory which can be compiled to view results condensed to the format of tables appearing in Appendix C. Instructions for in-workbook use of several additional functions appear in comments detailing further examples.
C Tables of flavor symmetries for miscellaneous quivers
In this appendix we provide configuration data for a few miscellaneous bases. For each T , summands of g global appear under each curve with g global totals shown in the rightmost column. Note that in listings for αm, permitted T compatible with m are constrained via the permitted gauging rules discussed at the start of Section 5. | 39,004 | 2017-11-14T00:00:00.000 | [
"Mathematics"
] |
FPGA based modified multi-repeat distribution matcher for probabilistic amplitude shaping
The technique of probabilistic amplitude modulation, based on distribution matching, has garnered considerable attention in recent years as a means to enhance spectral efficiency and diminish the constellation energy of coded modulation. This paper introduces the implementation of Probabilistic Amplitude Modulation (PAS) using a Modified Multi-Repeat Distribution Matcher (MMRDM) on a Field Programmable Gate Array (FPGA). The Modified Multiple Repetition Distribution Matcher (MMRDM) is integrated into a 2×2 Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) system, realized through the Xilinx System Generator (XSG). Simple Zero-Forced and Minimum Mean Squared Error (MMSE) equalizers are applied to the receiver for signal detection across the MIMO channel. The system incorporates enhanced security through chaos-based scrambling with 16 and 64 Quadrature Amplitude Modulation (QAM). VHDL code files for this system are generated for the Xilinx Kintex-7 (xc7k325t-3fbg676) for hardware implementation. Performance evaluation includes an assessment of required storage capacity, complexity, and bit error rate (BER). Using Vivado 2017.4, the system is successfully routed with resource utilization, for example, 0.67% Block RAM (BRAM), 68.6% Look-Up Tables (LUT), 83% DSP 48s, and 1.5% registers for 64-QAM uniform modulation. Similarly, for 64-QAM 10 level (shaper output 60 bit) shaped modulation, the resource utilization is 0.67% BRAM, 68.8% LUT, 83% DSP 48s, and 1.6% registers on the specified device. Simulation results demonstrate an improvement in the net shaping gain of approximately (2-4 dB) at 1×10^(-4) for different equalizer cases compared to uniform QAM, along with a notable reduction in required storage capacity and computational complexity.
INTRODUCTION
Probabilistic shaping (PS) has emerged as a potent tool in digital signal processing (DSP) for various communication systems, particularly in achieving capacity-approaching performance and adapting rates in accordance with the Shannon limit (Böcherer et al., 2015;Buchali et al., 2015).In typical PS systems, distribution matching (DM) is positioned outside forward error correction (FEC) coding for simplified implementation.The utilization of digital subcarriers enables water filling performance (Che & Shieh, 2018) and mitigates the impact of equalizer-enhanced phase noise (Sun et al., 2020).
Several researchers (Bobrov & Dordzhiev, 2023;Civelli & Secondini, 2020;Guiomar et al., 2020;Gültekin et al., 2019) have proposed diverse transmission systems based on probabilistic shaping, resulting in enhanced system performance and confirming the efficacy of probabilistic shaping technology across different communication systems.Probabilistic amplitude shaping (PAS) coding modulation, proposed as an integration strategy between modulation and channel coding (Böcherer et al., 2015), has garnered considerable interest in both wireless and optical communications.The architecture of probabilistic amplitude shaping involves CCDM (Constant Composition Distribution Matching), utilizing arithmetic coding as a shaping algorithm, with performance improvements observed for longer data packets (Schulte & Böcherer, 2015).However, this integration, while showcasing excellent performance at quasi-infinite DM word lengths, poses high complexity for hardware implementation due to the use of arithmetic coding, despite simplification from outer concatenation.
Various DM algorithms, such as enumerative sphere shaping (ESS) (Gültekin et al., 2018), multisetpartition DM (Fehenberger et al., 2018), and DM with shell mapping (Schulte & Steiner, 2019), have been explored.Look-up table (LUT) based DMs have been applied in fixed-length hierarchical DM (Yoshida, Karlsson, et al., 2019a), fixed-length optimum bit-level DM with binomial coefficients (Koganei et al., 2019), and variable-length prefix-free code DM (J.Cho & Winzer, 2019).In cases of limited source entropy live traffic, joint source-channel coding has been shown to simultaneously compress data and shape to reduce average symbol energy based on sorting of LUT contents in the DM with minimal added complexity (Yoshida, Karlsson, et al., 2019b).(İşcan et al., 2019) utilized probabilistic shaping to enhance system performance in the 5G new radio system by introducing a polar encoder after the shaping encoder, resulting in over 1dB improvements on AWGN channels without increased computing requirements.Probabilistic amplitude shaping can be effectively implemented on a Field Programmable Gate Array (FPGA), offering versatility in integration into an Orthogonal Frequency Division Multiplexing-Multiple Input Multiple Output (OFDM-MIMO) system.FPGAs are widely adopted in communication systems for their enhanced processing speed, parallelism in implementation, and cost reduction (Ganesh et al., 2015).Spatial multiplexing MIMO-OFDM implementation using FPGA has become a key area of interest for researchers (Chen et al., 2007;Kerttula et al., 2007;Mahendra Babu et al., 2014;Park & Ogunfunmi, 2012;Yoshizawa & Miyanaga, 2012), with various designs showcasing advancements in different aspects of communication systems.This paper proposes a modified method based on the multi-repeat distribution matcher principle (MRDM) (Jing et al., 2021;Yoshida et al., 2021;Yoshida, Binkai, et al., 2019) as a probabilistic shaping coder.The proposed method employs smaller lookup tables to create a coding scheme, significantly reducing required storage capacity without complex calculations, making it easily implementable on FPGA.The non-uniform distribution, akin to the MB distribution, is achieved by selecting the number of levels or subsets and symbols for repetition in each level, creating small lookup tables with the same number of levels according to the shaping rate required for shaping encoder and decoder formation.Altering the distribution enhances energy and spectrum efficiency, as the average symbol energy required to transmit data with a non-uniform distribution is lower than that of the same data transmitted with a uniform distribution at the same noise level.Mathematical analysis of bit error rate via AWGN and Massive-MIMO channels was performed, and simulations of MMRDM were conducted through a 2x2-MIMO OFDM channel with two different equalizers using the Xilinx System Generator.Results indicate significant improvements in storage capacity, computational complexities, and bit error rate (BER) when utilizing MMRDM compared to uniform QAM under the same channel and signal-to-noise ratio conditions.
SYSTEM MODEL
As illustrated in Figure 1, this paper employs a 2x2-MIMO OFDM communication system utilizing coded modulation based on rectangular QAM and MMRDM probabilistic shaping.On the transmitting end, the PAS encoder transforms the uniformly distributed bit stream into the desired distribution, which will be discussed in detail in the subsequent section.Following this, the conv-FEC encoder processes the shaped bits, and QAM mappers then associate these bits with QAM symbols.
Upon reaching the receiver and traversing the 2x2-MIMO channel, two types of equalizers (Zero-Forced and MMSE) are employed to reconstruct the original signal.The QAM demapper untangles the received signals by calculating Record Likelihood Ratios (LLRs) based on the observed signals.Subsequently, the FEC decoder receives the LLRs from the demapper to correct channel errors and maps them back to data bits through the shaping (invDM) decoder.
MODIFIED MULTI-REPEAT DISTRIBUTION MATCHER
As commonly understood, shaping coding operates on the principle of altering the distribution of data with a rate Rb from uniform to non-uniform.The objective is to minimize the average energy of symbols in the constellation by repeating points closest to the center, with lower energy, more frequently than distant points with higher energy.This approach is employed to enhance the overall performance of the system (K.Cho & Yoon, 2002).To achieve optimal results, the probability of the constellation signal point r ∈ R must adhere to Eq. 1.
Where is a parameter governing the relationship and trade-off between average energy and bit rate, and as mentioned earlier, this distribution is the Maxwell-Boltzmann (MB) distribution.
In this proposed approach, the probabilistic shaping process involves the addition of extra bits to achieve a one-to-one mapping method, resulting in reduced computational complexity for the system.The method utilizes multiple repetition mapping of combinations instead of sequence permutation to derive the mapping sequence in the Probabilistic Amplitude Shaping (PAS).This technique is referred to as multi-repeat shaping (MRS) (Jing et al., 2021).The modification presented in this paper involves populating multiple lookup tables with constellation points based on the target rate as subsets, aiming to achieve a distribution similar to the MB distribution.Figure 2 provides an illustrative depiction of this method.
To begin, we define the number of levels or groups as L, followed by sub-groups of assignment groups denoted as 1 , ..., .Symbols, representing the constellation's points, consist of a group of bits according to the order of the constellation.For each level or subset, lookup tables are established.The output from these tables signifies the probabilistic encoding of the original input information for a given packet length (Jing et al., 2021).It is crucial to carefully define the rules for selecting constellation symbols to ensure the flexibility of the shaping method.In the case of an M-QAM constellation, where there are M symbols, each symbol is composed of = 2 () bits.Within each level, a certain number of these symbols will be repeated.For instance, in level 1 , the number of symbols is denoted as 1 .Subsequently, the 2 subset is formed from 1 , incorporating its symbols ( 2 ), and this process continues until reaching the final group with its designated number of symbols, .The repetition of symbols follows the energy rule, meaning that symbols with lower energy undergo more repetition.
This approach achieves a one-to-one mapping of symbols across all subsets for a given block length of the original data.This mapping is designed to avoid the computational complexity associated with a many-toone assignment resulting from a complex iterative method.
In this method, the main parameters that must be specified for the attainment of shaping encoding (Jing et al., 2021) are as follows: L: the number of levels N: the number of bits for each symbol within the M constellation 1 , 2 , ..., : the number of symbols in all the set groups m: the input information's block length Parameters are related to each other based on these mathematical relationships: Let us suppose that the number set of vectors satisfied MB distribution PA' is (3) In that case, the input block length m will be as follows: = ⌊log 2 1 )⌋+. . .+⌊log 2 )⌋ (4) The shaping rate R is.
The codebook size of MRDM is The parameter L plays a crucial role in determining the trade-off between the divergence of the distribution from the MB distribution and the complexity of the shaping encoder.The maximum number of combined codewords is governed by L, and each symbol comprises N bits.Consequently, the storage capacity needed for the generated lookup table is calculated as 2 × ( × ) bits.This implies that an increase in the number of levels and the modulation order results in heightened computational complexity and a significant increase in storage capacity.For instance, in the scenario where L = 2 and the modulation is 16-QAM (meaning N=4) with levels 1 = 4 and 2 = 8, based on Eq. ( 4), the block length m is 5. Therefore, the size of the lookup table will be (2 5 × 8) bits.
This implies that the storage capacity required would be minimal, but there would be a greater distribution divergence.However, by setting L = 3 for the same modulation and selecting levels with 1 = 4, 2 = 8, and 3 = 16, based on Eq. ( 4), the block length m = 9.Consequently, the lookup table would be (2 9 × 12) bits, indicating an increase in storage capacity and a decrease in distribution divergence.Nevertheless, as the modulation order rises, the necessary capacity will surge significantly.To illustrate, consider 64-QAM with ten different energy levels and 64 symbols, where N = 6.Assuming L = 8 and levels 1 = 4, 2 = 4, 3 = 8, 4 = 8, 5 =16, 6 = 32, 7 =32, 8 =64 using Eq. ( 4), it results in m=30 and the size of the lookup table being (2 30 × 48), which is impractical due to its complexity.
To address this, we propose a modification to make the lookup table practical and easy to calculate.The lookup table is divided into smaller tables, each containing symbols equal to the number of symbols in each group.Returning to the previous examples, when L = 2, the size of the lookup table is 256 bits (Jing et al., 2021) Table I.With the modification, two lookup tables are created: the first containing 4 symbols, each represented by 4 bits, and the second containing 8 symbols represented by 4 bits.This reduces the storage requirement to only 3 bits for encoding tables, without the need for any table during the decoding process, as depicted in Figure 3.At L=3, the lookup table requires 6144 bits to store, but only 3 bits are needed when using multiple lookup tables for coding, as illustrated in Figure 4.For L=8 and 64-QAM modulation, the single lookup table needed would be (2 30 × 48), which is impractical.However, with our modification, only 18 bits are required for the shaping and de-shaping process, as shown in Figure 5.Eventually, from the MMRDM the probabilities of the generated distribution can be deduced as follows: QAM symbols probabilities are denoted by ( ) = [ 1 , 2 , … , ], with L groups having varying probabilities.group g1 comprises 1 symbols with equal probabilities:
MMRDM PERFORMANCE
As it is known that the QAM symbol error rate Ps through the AWGN channel can be expressed as (K.Cho & Yoon, 2002): ( ) .Thus, < and the value of dmin = 2A, which is why = √ 3 2( − 1) ⁄ , the expression of dmin is And > Apply these rules in 16-QAM as a case in point.
In Eq. ( 6) QAM symbol error rate For 16-QAM with MMRDM, L=2 = 4 2 , then the amplitude = √ 4 ⁄ = √ sub.In Eq. ( 6) Bit error rate = log 2 Upon comparing equations 8 and 9, it becomes evident that the symbol error rate in MMRDM QAM surpasses that in standard QAM.This improvement is attributed to the shaping process, wherein the symbol energy rate for the constellation decreases, resulting in an increased minimum distance (dmin) between the symbols compared to uniform QAM (Yu et al., 2020).The performance of the proposed method can be assessed by calculating the achievable shaping gain ratio GMMRDM.This ratio is defined as the ratio between the shaping gain provided by PS-MMRDM using a QAM lattice of MMMRDM size and the gain provided by the uniform QAM constellation of MQAM size, expressed as follows: The gain ratio is determined to be 3.9dB by substituting the given values for QAM = 10 2 and = 4 2 .To compute the storage capacity required for the proposed method, two scenarios are considered.First, if the selected subsets are powers of 2, the storage complexity is ( × − ).However, if any subset ci is not a power of 2, then the storage capacity of the lookup table becomes (( × − ) + ( + ), where = = ⌊ ⌋ − and = − (⌊ ⌋ + 2).
For ESS, the storage required depends on the number of energy levels L that satisfy the shaping rate Rs, which can be relatively large.The storage capacity required is expressed as (Gültekin et al., 2018) ( + )⌈ ⌉ bits.
Next is the calculation of computational complexity, representing the arithmetic operations needed for encoding and decoding in the shaping methods.For the proposed method, two cases are considered.In the first case, if the levels or subgroups are powers of 2, no calculation process is needed for the sender or receiver, as depicted in Figures 3 and 4.However, if the level is not a power of 2, simple operations are required to find the address of the symbol in the lookup table, whether for the transmitter or the receiver.
The time complexity for this subset is O(1).As for ESS, the number of bits operations required will be (| − |)⌈ ⌉ ⌉ per symbol, whether for the transmitter or the receiver, where | | represents the cardinality.
In conclusion, if subgroups are selected in the proposed method as powers of 2, the storage and computational complexity will be significantly lower compared to ESS.For instance, if the number of output symbols is 30, the modulation scheme used is 64-QAM, and m = 120, then the memory required for MMRSM is 60 bits only.
XILINX SYSTEM GENERATOR IMPLEMENTATION OF A MIMO-OFDM
Figure 6 illustrates the block diagram of the implemented key functions on the FPGA (Xilinx® kintex®-7 xc7k325t-3fbg676).Each block is constructed using Xilinx system generator (XSG) tools with a master clock period of 5 ns.The parameters employed for the MIMO-OFDM system are summarized in Table 1.
On the transmitter side, the random bit generator, operating at a rate of 200 Mbps, undergoes scrambling using a chaos-based stream ciphering method.The Multiple Repeat Distribution Matcher (MMRDM) alters the data distribution by introducing additional bits to the original bit, enabling the use of multiple lookup tables.MMRDM rates depend on the number of levels and the cardinality of each level according to equation 5.For instance, the rate of a 2-level MMRDM with 16-QAM is 5/8, and for a 3-level MMRDM, it is 3/4, as per equation 6. Figures 7 and 8 depict the XSG block diagram of 2 and 3 levels MMRDM encoders and decoders.The Convolution encoder encodes the binary scrambled stream bits with a code rate of ½, resulting in an output rate double that of the MMRDM rate.A matrix interleaver is then applied to produce interleaved stream bits.QAM modulation is utilized, generating complex modulated signals with a symbol rate equal to the Convolution encoder divide by N (symbol bits).The signal is then converted from serial to 2 parallel samples to achieve a rate twice that of the QAM symbol rate.Subsequently, the signal passes through the Inverse Fast Fourier Transform (IFFT) block to produce the OFDM signal, with the cyclic prefix guard omitted.One signal is transmitted through one antenna, and the other through the second antenna, effectively doubling the system's capacity.Following this, the signals are corrupted by a MIMO channel with flat fading and AWGN channels.On the receiving side, linear MMSE MIMO detection is employed to recover the clean signals.It is assumed that the channel estimation is perfect, and the channel parameters are directly inputted into the MMSE MIMO detection.All operations performed on the transmitted side are reversed on the receiving side to reconstruct the binary stream bits.Returning to the probabilistic shape encoder, the method's efficiency has been theoretically demonstrated in terms of the required storage capacity and computational complexity.This efficiency will be further validated when implemented in the OFDM-MIMO system on an FPGA.For instance, when generating a VHDL Code or Bitstream file for various systems, the resources needed for uniform 16-QAM modulation are 0.67% BRAM, 67.12% LUT, 80.95% DSP 48s numbers, and 1.48% registers.In comparison, the required resources for L2-MMRDM 16-QAM are 0.67% BRAM, 67.15% LUT, 80.95% DSP 48s numbers, and 1.5% registers.The selected device shows only a marginal increase of 0.037% in LUT and 0.02% in registers over the uniform 16-QAM resources, with no increase in DSP 48s numbers, as demonstrated in Table 2.This table reveals the resource utilization on (Xilinx® kintex®-7 xc7k325t-3fbg676).It is evident that the MMRDM technique consumes minimal resources, and both the MMRDM encoder and decoder require no BRAM or DSP slices.
RESULTS AND DISCUSSION
In the proposed system, the base station employs OFDM-MIMO and is implemented using XSG in R2022b MATLAB software.System performance is assessed by focusing on parameters such as bit error rate (BER) and modulation gain ratio, crucial for evaluating communication system reliability.In the simulation of the MMRDM-OFDM-MIMO system, a data block size ranging between 2, 246, 4000 and 2, 995, 2000 bits is selected.Figures 9 depict the BER as a function of signal-to-noise ratio (SNR) for 16QAM and 64QAM, considering a 2x2 MIMO system with both transmitting and receiving antennas.
In Figure 9(a), where the number of MMRDM groups is L=2 and L=3, the performance is compared to uniform 16-QAM.In Figure 9(b), the number of shaping groups is L=8 and L=10, and they are compared to uniform 64-QAM, specifically for the Zero-Forced type equalizer.Noticeable improvements in BER are observed with MMRDM compared to uniform QAM, showing an improvement of about (2-4) dB at 10 −4 BER. Figure 10 introduces the MMSE equalizer for the same probabilistic shaping combinations and FEC rate, demonstrating that MMRDM probabilistic modulation consistently provides better performance than uniform QAM in both systems, with an improvement of about 2-2.5 dB at 10 −4 BER.As evident in Figures 9 and 10, the number of levels (L) plays a crucial role in balancing the deviation of the input data distribution from the MB distribution, the encryption formation rate, and the associated complexities in terms of computational demands and required storage capacity.With an increase in L, the rate rises, and rate losses decrease, while the required storage capacity increases but remains low in MMRDM, as highlighted in Table 2.
CONCLUSIONS
This study explores the application of the MMRDM-based probabilistic shaping technique in a 2×2 MMSE MIMO-OFDM communication system implemented on an FPGA using the Xilinx system generator.Our findings demonstrate enhancements in system performance, simplification of the shaping process without imposing computational complexity or extensive storage requirements.Notably, the memory demand is minimal compared to the data block length.Furthermore, the application of Eq. ( 10), representing the shaping gain ratio, leads to substantial improvements in energy efficiency, coupled with a noteworthy reduction in Bit Error Rate (BER).Both results and mathematical analyses underscore the favorable shaping advantages conferred by the MMRDM method.
In the FPGA implementation, it is noteworthy that, as detailed in
Eb) denotes the minimum distance between constellation symbols.t is predicated on the number of symbols M, on the bit energy Eb as well as up to N=log2M the number of bits for each symbol.The symbols locations are (±A, ±3A,…,±(√ − 1)A) + i(±A, ±3A,…,±(√ − 1)A) for the square QAM constellation, for QAM average symbols energy = 2 3 ( − 1) 2 , and also for MMRDM QAM average symbols energy = ∑ ( ) =1
Table 2
Table 2, the MMRDM encoder and decoder exhibit efficiency, requiring neither (BRAM) nor DSP slices, and demonstrating low utilization of Look-Up Tables (LUT) and registers. | 5,035.8 | 2023-11-30T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Adaptive PID Controller Using RLS for SISO Stable and Unstable Systems
1 Automatic Control Engineering, Egyptian Nuclear and Radiological Regulatory Authority (ENRRA), 3 Ahmed El-Zomor Street, El-Zohor District, Naser City, Cairo 11762, Egypt 2 Automatic Control Engineering, Department of Electronics and Communication, Faculty of Engineering, Cairo University, Giza 12316, Egypt 3 Nuclear Engineering, Egyptian Nuclear and Radiological Regulatory Authority (ENRRA), 3 Ahmed El-Zomor Street, El-Zohor District, Naser City, Cairo 11762, Egypt
Introduction
A challenging problem in designing a PID controller is to find its appropriate gain values (i.e., proportional gain , integral gain , and derivative gain ) [1]. Moreover, in case where some of the system parameters or operating conditions are uncertain, unknown, or varying during operation, a conventional PID controller would not change its gains to cope with the system changes. Therefore a tuning method is needed. Various PID controller tuning techniques have been reported in the literature. It is classified into two groups, offline tuning methods as Zeigler-Nichols method and online tuning methods or adaptive PID. APID can tune the PID gains to force the system to follow a desired performance even with the existence of some changes in system characteristics [2].
Adaptive control has been commonly used during the past decades specially the model reference adaptive control (MRAC). Its objective is to adapt the parameters of the control system to force the actual process to behave like some given ideal model which is demonstrated in [3,4]. There are two main categories of adaptive control. (1) Indirect. It starts with controlled system identification and then uses those estimated parameters to design the controller as presented in [5][6][7]. (2) Direct. This is more practical than indirect method. It uses a parameter estimation method to get the controller parameters directly the same as introduced in [8,9].
An adaptive PID controller is presented in [10] using least square method which is an offline parameter estimation method. On the other hand, an optimal self-tuning PID controller is introduced in [5] using RLS to estimate the model from its dynamic data. RLS is a recursive algorithm for online parameter estimation that is frequently used because it has a fast rate of convergence. In [11] an online type of controller parameter tuning method is presented by utilizing RLS algorithm. It develops the standard offline fictitious reference iterative tuning FRIT method to be used as a modified estimation error for RLS algorithm. Also the controllers in [11][12][13] present online tuning based on input and output data of the system. In the case of unstable systems, few researchers study the behaviour of the adaptive PID techniques on unstable systems and examine its ability to stabilize them as verified in [14][15][16][17].
In this paper, the direct method of adaptive control is considered. RLS algorithm is used as adaptation mechanism to tune the PID gains automatically online to force the actual process to behave like the reference model. The proposed approach has also the ability to stabilize the unstable system. Adding some parameters variations in actual process during its operation time confirms the proposed controller adaptation capability and robustness against process variation in both stable and unstable cases.
The structure of this paper is as follows. In Section 2, the problem statement is presented. In Section 3, an APID controller and its adaptation mechanism using RLS are introduced. The proposed technique is applied to numerical examples in Section 4 and its results show its ability in tracking the reference input signal using APID controller for both stable and unstable systems even when the considered system suffers from changes of its parameters. Finally, in Section 5, the conclusion and some suggestions for further work are presented.
Problem Formulation
Consider a system shown in Figure 1, where is a process that is modeled as a single-input and single-output linear system, ( ) is a PID controller, and denotes a parameter vector to be tuned in the controller. Also, , , , and = − denote the control input, output, reference signal, and error signal, respectively.
In conventional control, the PID controller can be expressed as Note that, approximately, the transfer function of integrator / can be expressed as /( + 1).
Simply, the controller transfer function [11] can be expressed as ] .
( ] . (3) So (2) can be rewritten in the form In most of practical applications, the actual structure of the controlled system is unknown or varying. Therefore, the adaptive mechanism is used for self-adjustment of the PID gains to achieve the best tracking performance. The proposed technique controls the motion of both stable and unstable systems to follow the ideal trajectory provided by a designer defined reference model ( ).
APID Controller Using RLS
The proposed controller objective is to find the coressponding controller parameters (PID gains) using RLS algorithum as adaptation mechanism such that the closed-loop transfer function is more or less equal to the reference model transfer function. In other words, the reference output ( ) tends to be equal to the plant output ( ) as follows: So it can be written as where ( ) is a given model reference transfer function which represents the ideal closed-loop dynamics. Hence, (6) can be written as Applying the controller transfer function ( ) to both sides of the above equation results in and becomes Now the modified estimation error of RLS can be defined as This means that Based on the RLS algorithms, we tune the parameters which are the PID gain values so that the following performance index is minimized: On the other hand, in order to apply the classical equations of the RLS estimation algorithm used to find the parameters , a modified estimation error can be expressed as where = 1 (1 − ( )) ( ).
To build the RLS algorithm using the 2nd-Level S-Function in Matlab, the first term in right hand side of (13) has to be rewritten as where RLS = [ ] and is a vector which contains all parameters of PID gains.
RLS is an algorithm which recursively finds the optimal estimatê( ) of the controller parameter by usinĝ( −1) [3]. Thus, considering (14), the proposed RLS update laws will be as follows: where is the adaptation gain and is the covariance matrix. According to the above RLS algorithm equations, the controller parameterŝ( ) are updated at each time. Thus, variation of the controller parameterŝ( ) may be large at the start of algorithm, at the time when plant characteristics change rapidly, and at the time when the set-point reference is changed. Due to this, the system may stop working steadily.
In order to avoid such a problem and reduce the variation of the controller parameterŝ( ) is filtered by a low-pass filter which can be defined as where is a sufficiently small positive constant. So is changed reasonably.
Numerical Examples
In order to illustrate the main features of the proposed APID using RLS, simulation examples are now presented.
The following examples cover stable and unstable systems cases and consider the changes in the system parameters during simulation time.
Let the sampling time be = 0.001 and let the reference model be The reference input signal is chosen to be a delayed square wave.
In order to evaluate the proposed control method to plant uncertainties, we consider the case where there exists a change in one of the system poles at 155 s (i.e., the pole = 1 changes to = 3); moreover the gain of the plant is doubled suddenly at 225 s.
The output by the proposed APID using RLS controller is shown in Figure 2 and it is compared with the controller presented in [11] and conventional PID. In the proposed controller the PID gains are tuned adaptively despite the variation of gain and poles of plant, and good tracking performance is maintained. It is clear from the figure that the proposed APID using RLS controller has superior performance as it has smaller overshoot at the beginning of the simulation than the controller in [11] and the conventional PID could not handle either the gain change or the pole change.
Unstable SISO System
. Now, consider that the unstable system stated in [18] is a simple SISO model of inverted pendulum And reference model can be expressed as where natural frequency = 2 and damping ration = 1. The reference input signal is chosen to be a delayed square wave.
It is shown in Figure 3 that the proposed APID using RLS controller can stabilize the system and achieve good tracking performance despite the fact that the gain of the plant is doubled suddenly at 155 s and the system's unstable pole is changed at 255 s. On the contrary the conventional PID and the controller presented in [11] with the same initial parameters failed to stabilize the system.
Conclusions
In this paper, adaptive PID (APID) controller is proposed using RLS algorithm which updates the PID gains automatically online to force the actual system to behave like a desired reference model. Numerical examples have been shown to confirm the tracking capability of the proposed controller when it is applied to both stable and unstable systems. It also proves the efficiency of the controller during the changes of system parameters during operation of system. Moreover comparisons are made between the proposed APID and the adaptive controller presented in [11] and the conventional PID. This work can be further extended to drive the stability analysis for the proposed APID controller for SISO systems. Also, as a further research, the APID controller technique demonstrated in this paper can be modified to be applicable for MIMO systems and then its behavior can be investigated in the presence of some variations in system parameters. | 2,266.6 | 2014-10-07T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
D-IMPACT: A Data Preprocessing Algorithm to Improve the Performance of Clustering
In this study, we propose a data preprocessing algorithm called D-IMPACT inspired by the IMPACT clustering algorithm. D-IMPACT iteratively moves data points based on attraction and density to detect and remove noise and outliers, and separate clusters. Our experimental results on two-dimensional datasets and practical datasets show that this algorithm can produce new datasets such that the performance of the clustering algorithm is improved.
Introduction 1.Clustering Problem and Data Preprocessing
Clustering is the process of dividing a dataset into partitions such that intracluster similarity is maximized.Although it has a long history of development, there remain open problems, such as how to determine the number of clusters, the difficulty in identifying arbitrary shapes of clusters, and the curse of dimensionality [1].The majority of current algorithms perform well for only certain types of data [2].Therefore, it is not easy to specify the algorithm and input parameters required to achieve the best result.In addition, it is difficult to evaluate the clustering performance, since most of the clustering validation indexes are specified for certain clustering objectives [3].Finding an appropriate algorithm and parameters is very difficult and requires a sufficient number of experimental results.The datasets measured from real systems usually contain outliers and noise, and are, therefore, often unreliable [4] [5].Such datasets can impact the quality of cluster analysis.However, if the data have been preprocessed appropriately-for example, clusters are well-separated, dense and have no noise-the performance of the clustering algorithms may improve.
Data preprocessing is often used to improve the quality of data.In relation to clustering, popular applications of data preprocessing are normalization, removing noisy data points, and feature reduction.Many studies have used Principal Component Analysis (PCA) [6] to reveal representative factors.Although PCA accounts for as much variance of the data as possible, clustering algorithms combined with PCA do not necessarily improve, and, in fact, often degrade, the cluster quality [7].PCA essentially performs a linear transformation of the data based on the Euclidean distance between samples; thus, it cannot characterize an underlying nonlinear subspace.
Recent studies have focused on new categories of clustering algorithms which prioritize the application of data preprocessing.Shrinking, a data shrinking process, moves data points along the gradient of the density, generating condensed and widely separated clusters [8].Following data shrinking, clusters are detected by finding the connected components of dense cells.The data shrinking and cluster detection steps are conducted on a sequence of grids with different cell sizes.The clusters detected at these cells are compared using a cluster-wise evaluation measurement, and the best clusters are then selected as the final result.In CLUES [9], each data point is transformed such that it moves a specific distance toward the center of a cluster.The direction and the associated size of each movement are determined by the median of the data point's k nearest neighbors.This process is repeated until a pre-defined convergence criterion is satisfied.The optimal number of neighbors is determined through optimization of commonly used index functions to evaluate the clustering result generated by the algorithm.The number of clusters and the final partition are determined automatically without any input parameters, apart from the convergence termination criteria.
These two shrinking algorithms share the following limitations: The process of shifting toward the median of neighbors can easily fracture the cluster (Figure 1). The direction of the movement vector is not appropriate in specific cases.For example, if the clusters are adjacent and differ highly in density, the median of the neighbors is likely to be located on another cluster.
In addition to the distance, density [10] is a quantity typically considered in clustering.The density represents the distribution of data within a certain distance.Density-based clustering algorithms attempt to find dense regions separated from other regions that satisfy certain criteria.Well-known density-based clustering algorithms include DBSCAN [11], OPTICS [12], and DENCLUE [13].Density clustering algorithms can find arbitrary clusters with high accuracy, but they are highly sensitive to the value of parameters and their accuracy decreases rapidly as the number of attributes increases, especially when dealing with high-dimensional datasets.
IMPACT Algorithm and the Movement of Data Points
IMPACT [14] is a two phases clustering algorithm which is based on the idea of gradually moving all data points closer to similar data points according to the attraction between them until the dataset becomes self-partitioned.In the first phase of the IMPACT algorithm, the data are normalized and denoised.In the next phase, the IMPACT algorithm iteratively moves data points and identifies clusters until the stop condition is satisfied.The attraction can be adjusted by various parameters to handle specific types of data.IMPACT is robust to input parameters and flexibly detects various types of clusters as shown in experimental results.However, there are steps that can be improved in IMPACT, such as noise removal, attraction computation, and cluster identification.Also, IMPACT has difficulties in clustering high dimensional data.
In this study, we propose a data preprocessing algorithm named D-IMPACT (Density-IMPACT) to improve the quality of the cluster analysis.It preprocesses the data based on the IMPACT algorithm and the concept of density.An advantage of our algorithm is its flexibility in relation to various types of data; it is possible to select an affinity function suitable for the characteristics of the dataset.This flexibility improves the quality of cluster analysis even if the dataset is high-dimensional and non-linearly distributed, or includes noisy samples.
D-IMPACT Algorithm
In this section, we describe the data preprocessing algorithm D-IMPACT based on the concepts underlying in the IMPACT algorithm.We aim to improve the accuracy and flexibility of the movement of data points in the IMPACT algorithm by applying the concept of density to various affinity functions.These improvements will be described in the subsequent subsections.
Movement of Data Points
The main difference between D-IMPACT and other algorithms is that the movement of data points can be varied by the density functions, the attraction functions, and an inertia value.This helps D-IMPACT detect different types of clusters and avoid many common clustering problems.In this subsection, we describe the scheme to move data points in D-IMPACT.We assume that the dataset has m samples and each sample is characterized by n features.We also denote the feature vector of the i th sample by x i .
Density
We use two formulae to compute the density of a data point based on its neighbors, which are defined as data points located within a radius Φ.This density is calculated with and without considering the distance from the data point to its neighbors.We define the density δ i for the data point x i as ( ) where ( ) x is one of following density functions: ( ) ( ) , is the number of neighbors.Unlike the density function den 1 , the density function den 2 considers not only the number of neighbors, but also the distance between them to avoid issues relating to the choice of threshold value, Φ.In a practical application, we scale the density to avoid scale differences arising from the use of specific datasets as follows:
Attraction
In our D-IMPACT algorithm, the data points attract each other and one other closer.We define the attraction of data point x i caused by x j as ( ) ( ) ( ) ( ) x ,x is a function used to compute the affinity between two data points x i and x j .This quantity ignores the affinity between neighbors.The affinity can be computed using the following formulae: These four formulae have been adopted to improve the quality of the movement process in specific cases.The function aff 1 , used in IMPACT, considers the distance between two data points only.The function aff 2 considers the effect of density on the attraction; highly aggregated data points cause stronger attraction between them than sparsely scattered ones.This technique can improve the accuracy of the movement process.The function aff 3 considers the difference between the densities of two data points; two data points attract each other more strongly if their densities are similar.This can be used in the case where clusters are adjacent but have differing densities.The function aff 4 is a combination of aff 2 and aff 3 .The parameter p is used to adjust the effect of the distance to the affinity.Attraction is the key value affecting the computation of the movement vectors.For each specific problem in clustering, an appropriate attraction computation can help D-IMPACT to correctly separate clusters.
Under the effect of attraction, two data points will move toward each other.This movement is represented by an n-dimensional vector called the affinity vector.We denote a ij as the affinity vector of data point x i caused by data point x j .The k th element of a ij is defined as The affinity vector is a component used to calculate the movement vector.
Inertia Value
To shrink clusters, D-IMPACT moves the data points at the border region of original clusters toward the centroid of the cluster.Highly aggregated data points, usually located around the centroid of the cluster, should not move too far.In contrast, sparsely scattered data points at the border region should move toward the centroid quickly.Hence, we introduce an inertia value to adjust the magnitude of each movement vector.We define the inertia value I i of data point x i based on its density1 by 1 .
Data Point Movement
D-IMPACT moves a data point based on its corresponding movement vector.The movement vector v i of data point x i is the summation of all affinity vectors that affect the data point where a ij is the affinity vector.The movement vectors are then adjusted by the inertia value and scaled by s, which is a scaling value used to ensure the magnitude does not exceed a value Φ, as in the IMPACT algorithm.This scaling value is given by where ( ) x − is the coordinate of data point x i in the previous iteration, and ( ) is the coordinate of data point x i in this iteration.We propose the algorithm D-IMPACT based on this scheme of moving data points.
D-IMPACT Algorithm
D-IMPACT has two phases.The first phase detects noisy and outlier data points, and removes them.The second separates clusters by iteratively moving data points based on attraction and density functions.Figure 2 shows the flow chart of the D-IMPACT algorithm.Since two parameters p and q play similar roles in both IMPACT and D-IMPACT algorithms, they can be chosen according to the instructions in the literature of IMPACT algorithm (in this study, we set p = 2 and q = 0.01).To remove noisy points and outliers, we set the input parameter Th noise as 0.1, which achieved the best result in our experiments.
Noisy Points and Outlier Detection
First, the distance matrix is calculated.The density of each data point is then calculated by one of the formulae defined in the previous subsection.The threshold used to identify neighbors is computed based on the maximum distance and the input parameter q, and is given by max Distance, q Φ = × where max Distance is the largest distance between two data points in the dataset.The next step is noise and outlier detection.An outlier is a data point significantly distant from the clusters.We refer to data points which are close to clusters but do not belong to them to as noisy points, or noise, in this manuscript.Both of these data point types are usually located in sparsely scattered areas, that is, low-density regions.Hence, we can detect them based on density and the distance to clusters.We consider a data point as noisy if its density is less than a threshold Th noise , and it has at least one neighbor which is noisy or a cluster-point (with the latter defined as a data point whose density is larger than Th noise ).An outlier is a point with a density less than Th noise that has no neighbor which is noisy or a cluster-point.Figure 3 gives an example of noise and outlier detection.
Both outliers and noisy points are output and then removed from the dataset.The effectiveness of this removal is shown in Figure 4.The value Φ is then recalculated as the dataset has been changed by the removal of noise and outliers.When this phase is completed, the movement phase commences.
Moving Data Points
In this phase, the data points are iteratively moved until the termination criterion is met.The distances and the densities are calculated first, after which, we compute the components used to determine the movement vectors: attraction, affinity vector, and the inertia value.We then employ the movement method described in the previous section to move the data points.The movement shrinks the clusters to increase their separation from one another.This process is repeated until the termination condition is satisfied.In D-IMPACT, we adopt various termination criteria as follows: Termination after a fixed number of iterations controlled by a parameter n iter . Termination based on the average of the densities of all data points. Termination when the magnitudes of movement vectors have significantly decreased from the previous iteration.When this phase is completed, the preprocessed dataset is output.The new dataset contains separated and shrunk clusters, with noise and outliers removed.O m n .We see, based on our experiments, that the number of iterations is usually small and does not have significant impact on the overall complexity.Therefore, the overall complexity of D-IMPACT is ( ) 2 O m n .We measured the real processing time of D-IMPACT on 10 synthetic datasets.For each dataset, the data points were randomly located (uniformly distributed).The sizes of the datasets varied from 1000 to 5000 samples.These datasets are included in the supplement to this paper.We compared D-IMPACT with CLUES using these datasets.D-IMPACT was employed with the parameter n iter set to 5. For CLUES, the number of neighbors was set to 5% of the number of samples and the parameter itmax was set to 5. The experiments were executed using a workstation with a T6400 Core 2 Duo central processing unit running at 2.00 GHz with 4 GB of random access memory.Figure 5 shows the advantage in speed of D-IMPACT in relation to CLUES.
Experiment
In this section, we compare the effectiveness of D-IMPACT and the shrinking function of CLUES (in short, CLUES) on different types of datasets.
Two-Dimensional Datasets
To validate the effectiveness of D-IMPACT, we used different types of datasets: two dimensional (2D) datasets taken from the Machine Learning Repository (UCI) [15], and a microarray dataset.Figure 6 shows the 2D datasets used.
The 2D datasets are DM130, t4.4k, t8.8k, MultiCL, and Planet.They contain clusters with different shapes, densities and distributions, as well as noisy samples.The DM130 dataset has 130 data points: 100 points are generated randomly (uniformly distributed), and then three clusters, where each cluster comprises ten data points, are added to the top-left, top-right and bottom-middle area of the dataset (marked by red rectangles in Figure 6(a)).The MultiCL dataset has a large number of clusters (143 clusters) scattered equally.Two datasets, t4.8k and t8.8k [16], used in the analysis of the clustering algorithm Chameleon [17], are well-known datasets for clustering.Both contain clusters of various shapes and are covered by noisy samples.Clusters are chained by the single-link effect in the t4.8k dataset.The clusters of the Planet dataset are adjacent, but differ in density.These datasets encompass common problems in clustering.
Practical Datasets
The practical datasets are more complex than the 2D datasets, i.e., the high dimensionality can greatly impact the usefulness of the distance function.We used the Wine, Iris, Water-treatment plant (WTP), and Lung cancer (LC) datasets from UCI, as well as the dataset GSE9712 from the Gene Expression Omnibus [18] to test D-IMPACT and CLUES on high-dimensional datasets.The datasets are summarized in Table 1.The Iris dataset contains three classes (Iris Setosa, Iris Versicolor, Iris Virginica), each with 50 samples.One class is linearly separable from the other two; the latter are not linearly separable from each other.The Wine dataset (178 samples, 13 attributes), which are the results of chemical analysis of wines grown in the same region in Italy, but derived from three different cultivars, include three overlapping clusters.The WTP dataset (527 samples, 38 attributes) includes the record of the daily measures from sensors in an urban waste water-treatment plant.It is an imbalanced dataset-several clusters have only 1 -4 members, corresponding to the days that have abnormal situations.The lung cancer (LC) dataset (32 samples, 56 attributes) describes 3 types of pathological lung cancers.Since the Wine, WTP, and LC datasets have attributes within different ranges, we perform scaling to avoid the domination of wide-range attributes.The last dataset we use is a gene expression dataset, GSE9712, which contains expression values of 22,283 genes from 12 radio-resistant and radio-sensitive tumors.
Validating Methods
For a fair comparison, we employed CLUES implemented in R [19] and varied the number of neighbors k (from 5% to 20% of the number of samples) for different datasets.For D-IMPACT, according to the instructions and the experimental results in the literature of IMPACT algorithm, we used the default parameter set (q = 0.01, p = 2, aff 1 , den 1 , Th noise = 0, n iter = 2) with some modifications.The complete parameter set is described in Table 2.
We compared the differences between the preprocessed datasets and the original datasets using 2D plots.However, it is difficult to visualize the high-dimensional datasets using only 2D plots.For this reason, we compared the two algorithms by using a plot showing several combinations of features.Further, to evaluate the quality of the preprocessing, we compared the clustering results for the datasets preprocessed by D-IMPACT and CLUES.We used two evaluation measures, the Rand Index and adjusted Rand Index (aRI) [20].Hierarchical agglomerative clustering (HAC) was used as the clustering method [10].We used the Wine, Iris, and GSE9712 datasets to validate the clustering results, and the WTP and LC datasets to validate the ability of D-IMPACT to separate outliers from clusters.
Experimental Results of 2D Datasets
The results of D-IMPACT and CLUES on 2D datasets DM130, MultiCL, t4.8k, t8.8k, and Planet are displayed and analyzed in this section.Clusters in the dataset DM130 are difficult to recognize since they are not dense or well separated.Therefore, we set the p to 4 and run D-IMPACT for longer (n iter = 3).The D-IMPACT algorithm shrinks the clusters correctly and retains structures of the original dataset (Figure 6(a) and Figure 7(a)).CLUES, with the number of neighbors k varied from 10 to 30, degenerated the clusters into a number of overlapped points and caused a loss of the global structure (Figure 7(b)).The shrinking process may merge clusters incorrectly since clusters in the dataset MultiCL are dense and closely located.Hence, we used the density function den 2 and the affinity function aff 2 , which emphasizes the density, to preserve the clusters.The result is shown in Figure 8. D-IMPACT correctly shrunk the clusters (Figure 8(a)), yet CLUES merged some clusters incorrectly due to issues relating to the choice of k (Figure 8(b)).
In relation to the two datasets t4.8k and t8.8k, D-IMPACT and CLUES are expected to remove noise and shrink clusters.We set q = 0.03 and Th noise = 0.1 to detect carefully noise and outliers.The results of D-IMPACT are shown in Figure 9; the majority of noise was removed, and clusters were shrunk and separated.We then tested CLUES on the t4.8k dataset.Since the clusters in t4.8k are heavily covered by noise, we tested CLUES on the dataset whose noise was removed by D-IMPACT for a fair comparison.The value k is varied to test the parameter sensitivity of CLUES. Figure 10 shows different results due to this parameter sensitivity.
To separate adjacent clusters in the dataset Planet, we used the function aff 3 , which considers the density difference.The parameter q is set to 0.05, since the data points are located near each other.We used den 2 and p = 4 to emphasize the distance and density.The results are shown in Figure 11.As shown, D-IMPACT clearly outperformed CLUES.
Iris, Wine, and GSE9712 Datasets
To avoid the domination of wide-range features, we scaled several datasets (Scale = true).In the case of Wine, we had to modify the inertiavalue and use p = 4 to emphasize the importance of nearest neighbors.We used HAC to cluster the original and preprocessed Iris and Wine datasets, and then validated the clustering results with aRI.A higher Rand Index score indicates a better clustering result.The Iris dataset was also preprocessed using a PCA-based denoising technique.However, the distance matrices before and after applying PCA are nearly the same (using 2, 3, or 4 principal components (PCs)).Therefore, the clustering results of HAC for the dataset preprocessed by PCA are at most the same result as that of the original dataset, which depends on the number of PCs used (aRI score ranged from 0.566 to 0.759).Table 3 shows the aRI scores of clustering results of HAC on original datasets and datasets preprocessed by D-IMPACT and CLUES.The effectiveness was dependent on the datasets.In the case of Iris, D-IMPACT greatly improved the dataset, particularly as compared with CLUES.However, for the Wine dataset, CLUES achieved the better result.This is due to the overlapped clusters in the Wine dataset are undistinguishable using affinity function.In addition, we calculated aRI scores to compare clustering results obtained by the clustering algorithms IMPACT and D-IMPACT.For the Iris dataset, the best aRI score achieved by IMPACT was 0.716, which was greatly lower than the best aRI score by D-IMPACT (0.835).For the Wine dataset, the best aRI score by IMPACT was 0.897, which was slightly lower than the best aRI score by D-IMPACT (0.899).These results show that the movement of the data points was improved in D-IMPACT compared to the IMPACT algorithm.The GSE9712 dataset is high-dimensional and has a small number of samples.Due to the curse of dimensionality and the noise included in microarray data, it is very difficult to distinguish clusters based on the distance matrix.We performed D-IMPACT and CLUES on this dataset to improve the distance matrix, and then applied the clustering algorithm HAC.D-IMPACT clearly outperformed CLUES since CLUES greatly decreased the quality of the cluster analysis.We also performed k-means clustering [10] on these datasets.We performed 100 different initializations for each dataset.The clustering results also favored D-IMPACT.Table 4 shows the best and average scores (in brackets) of the experiments.In addition, using Welch's two sample t-test, the stability of the clustering result on D-IMPACT increased; the p-values between two experiments (100 runs of k-means for each experiment) of the original dataset, CLUES, and D-IMPACT were 0.490, 0.365 and 0.746, respectively.Since the p-value of the t-test is the confidence of the alternative "the two vectors have different means", a higher p-value indicates more stable clustering results.To clearly show the effectiveness of the two algorithms, we visualized the Iris and Wine datasets preprocessed by D-IMPACT and CLUES as shown in Figure 12.Since Wine has 13 features (i.e.78 subplots are required to visualize all the combinations of the 13 features), we only visualize the combinations for the first four features, using 2D plots (Figure 13).D-IMPACT successfully separated two adjacent clusters (blue and red) in the Iris dataset.D-IMPACT also distinguished overlapping clusters in the Wine dataset.We marked the separation created by D-IMPACT with red-dashed ovals in Figure 13.This shows that D-IMPACT worked well with overlapped clusters.CLUES degenerated the dataset into a number of overlapped points.This caused the loss of cluster structures and reduced the stability of clusters in the dataset (Figure 14).Therefore, the use of k-means created different clustering results during the experiment.
Water-Treatment Plant and Lung Cancer Datasets
To validate the outlier separability, we tested CLUES and D-IMPACT on the WTP and LC datasets.The WTP dataset has small clusters (1 -4 samples for each cluster).Using aff 2 , we can reduce the effect of the affinity to these minor clusters.We show the dendrogram of HAC clustering results (using single-linkage) on the original and preprocessed dataset of WTP in Figure 15.In the dataset preprocessed by D-IMPACT, several minor clusters are more distinct than the major clusters (Figure 15(b)).In addition, the quality of the dataset was improved after preprocessing by D-IMPACT; the clustering result using k-means (100 runs) on the dataset preprocessed by D-IMPACT achieved average aRI = 0.217, while the clustering result on the original dataset had average aRI = 0.120.CLUES merged minor clusters during shrinking and, therefore, the clustering result was bad (average aRI = 0.114).To compare the outlier detection capability of D-IMPACT and CLUES, we calculated the Rand Index scores for only minor clusters.The resulting dataset preprocessed by D-IMPACT achieved Rand Index = 0.912, while CLUES had Rand Index = 0.824.In addition, in the clustering result on the dataset preprocessed by D-IMPACT, 8 out of 9 minor clusters were correctly detected.In contrast, no minor cluster was correctly detected when using CLUES.
The lung cancer (LC) dataset was used by R. Visakh and B. Lakshmipathi to validate the outlier detection ability of an algorithm focusing on a constraint based cluster ensemble using spectral clustering, called CCE [21].The dataset has no obvious noise or outliers.We detected some noise and outlier points by considering the distance to the nearest neighbor and the average distance to the k-nearest neighbors (k = 6) of 32 samples in the LC dataset.We generated a list of candidates for noise and outliers: sample numbers 18, 19, 23, 26, and 29.We then performed HAC with different linkages on the original and preprocessed LC datasets to detect noise and outliers based on the dendrogram.These results were then compared with the reported result of CCE.This was done by calculating the accuracy and precision values.The results in Table 5 clearly show that D-IMPACT outperformed CCE.It also shows the effectiveness of D-IMPACT in relation to outlier detection.
Conclusion and Discussion
In this study, we proposed a data preprocessing algorithm named D-IMPACT inspired by the IMPACT clustering algorithm.D-IMPACT moves data points based on attraction and density to create a new dataset where noisy points and outliers are removed, and clusters are separated.The experimental results with different types of datasets clearly demonstrated the effectiveness of D-IMPACT.The clustering algorithm employed on the datasets preprocessed by D-IMPACT detected clusters and outliers more accurately.
Although D-IMPACT is effective in the detection of noise and outliers, there are some difficulties remaining.In the case of sparse datasets (e.g., microarray data and text data), the approach to noise detection based on the density often fails since most of the data, including noise and outlier points, will have a density which equals 1 under our definition.In addition, the distances between data points are not so different due to the curse of dimensionality.In order to overcome this problem, we consider an attraction measure between two data points.The attraction of a noise or outlier point is usually small since it is far from other data points.These problems may be overcome by using the density and attraction information to detect these data point types.
Figure 2 .
Figure 2. The outline of the D-IMPACT algorithm.
2. 2 2 O
.3.Complexity D-IMPACT is a computationally efficient algorithm.The cost of computing m 2 affinity vectors is ( ) m n .The
Figure 3 .
Figure 3. Illustration of noisy points and outliers.
Figure 4 .
Figure 4. Illustration of the effect of noise removal in D-IMPACT.
Figure 5 .
Figure 5. Processing times of D-IMPACT and CLUES on test datasets.
Figure 11 .
Figure 11.Visualization of the dataset Planet preprocessed by D-IMPACT and CLUES.a) Preprocessed by D-IMPACT.Two clusters are separated; b) Preprocessed by CLUES; c) Clustering result using HAC on the dataset in b), indicating that CLUES shrinks clusters incorrectly.
Figure 12 .
Figure 12.Visualization of the Iris dataset before and after preprocessing by D-IMPACT.Visualization of the original dataset is shown in the bottom-left triangle.Visualization of the dataset optimized by D-IMPACT is shown in the top-right triangle.
Figure 13 .
Figure 13.Visualization of the first four features of the Wine dataset before and after preprocessing by D-IMPACT.Visualization of the original dataset is shown in the bottom-left triangle.Visualization of the dataset preprocessed by D-IMPACT is shown in the top-right triangle.
Figure 14 .
Figure 14.Visualization of the Iris and Wine datasets preprocessed by CLUES.a) Iris; b) Wine.
Figure 15 .
Figure 15.Dendrograms of the clustering results on the WTP dataset.a) Dendrogram of the original water-treatment dataset; b) Dendrogram of the water-treatment dataset after being preprocessed by D-IMPACT; c) Dendrogram of the water-treatment dataset after being preprocessed by CLUES.
Table 1 .
Datasets used for experiments.
Table 3 .
The Index scores of clustering results using HAC 2 on the original and preprocessed datasets of Iris and Wine.The best scores are in bold.
Table 4 .
Index scores of clustering results using k-means on original and preprocessed datasets of IRIS and Wine.The best scores are in bold.
Table 5 .
Accuracy and precision values of noise and outlier detection on the lung cancer dataset. | 6,668.2 | 2014-07-07T00:00:00.000 | [
"Computer Science"
] |
A Method of Reordering Lossless Compression of Hyperspectral Images
: An improved lossless compression method with adaptive band reordering and minimum mean square error prediction was proposed to address the problems of huge data volume of remote sensing images, great pressure on transmission and storage and a low compression ratio. This method may determine the optimal band ordering adaptively, and make full use of the ordering correlation to eliminate the image redundancy according to the minimum mean square error criterion. First, it adaptively grouped hyperspectral image bands, and used the minimum spanning tree algorithm for band ordering within each group to enhance the inter-spectral correlation of adjacent bands. Later, it selected the contexts for inter-and intra-spectral prediction adaptively for the bands within the group to remove the redundancy of hyperspectral images. Finally, it conducted binary arithmetic coding of the predicted residuals to remove the statistical redundancy, and complete the lossless compression of hyperspectral images. The test results based on the hyperspectral images of ZY1-02D show that the method in this paper effectively utilizes the intra-and inter-spectral correlations, improves the prediction performance, and outperforms the commonly used compression methods.
INTRODUCTION
As remote sensing technology has made considerable strides in recent years, the hyperspectral resolution and high spatial resolution, as well as the data quantization bits have been increasing.Meanwhile, the temporal resolution of satellite remote sensing observation has also increased constantly, and all these factors have caused a surge in the volume of remote sensing images.The massive data of hyperspectral images bring a huge burden on the transmission and storage, which limits its application in the field of computer vision and remote sensing.Therefore, it is essential to study the compression of hyperspectral images.The reason for the huge amount of hyperspectral image data lies in the intral-spectral and inter-spectral redundancy of data.To address this problem, researchers have put forward several compression algorithms, which mainly consist of three types, including lossless compression, near-lossless compression, and lossy compression (Signoroni, 2019;Wang, 2019).It costs high to obtain hyperspectral remote sensing images.To guarantee the precision of image data application, lossless compression becomes particularly important.Prediction-based methods are adopted for the lossless compression of hyperspectral images, and the precision of prediction can be enhanced by building a predictive linear model based on the spectral version (Wu, 2000) of context-based adaptive lossless image codec (CALIC) and its variant (Magli, 2004).The Lookup Tables (LUT) based coding method makes use of the inter-spectral structural similarity and unique correction features of hyperspectral images, with low time complexity (MIELIKAINEN, 2006), but it is not ideal for images without correction features.In addition, the prediction method based on the idea of low complexity filter also has a good compression performance, such as fast lossless (FL) (Klimesh, 2006)and Spectral-oriented Least SQuares(SLSQ) (Rizzo, 2007), and it can remove image redundancy with updated weight and linear model.Consultative committee for space data systems (CCSDS) has already adopted the optimized FL method as the compression standard for multi-spectral and hyperspectral images.Song Jinwei applied the recursive least square (RLS)to calculate the 8-order spectral linear prediction coefficient (Song, 2013), which effectively reduced the computation complexity and improved the prediction precision.Currently, most compression methods directly encode the prediction, and re-ordering may enhance the hyperspectral correlation and increase the prediction precision (Gaucel, 2011).In (Tate, 1997;Afjal, 2019), the prediction precision was improved by reordering the bands.In (Toivanen, 2005), the compression performance was enhanced by 5% with the reordering method.According to the above literature analysis, the smaller the prediction residual is, the better the compression performance will be.As a result, an improved adaptative band ordering and minimum mean square error prediction algorithm was put forward in this paper.It can conduct adaptive ordering of bands to find the optimal reference band and coding order for each band, eliminate the inter-spectral redundancy of images according to the minimum mean square error criterion as well as intra-spectral redundancy with the improved median predictor, and finally conduct entropy encoding for the prediction residuals with a binary arithmetic encoder.ZY1-02D is an important satellite in China's spatial infrastructure planning, and the Ministry of Natural Resources of the People's Republic of China is responsible for the construction of the project.This method has been tested effective in the experiment on ZY1-02D hyperspectral images.
CORRELATION ANALYSIS
The primary task of compression is to eliminate the correlation of images.Therefore, the effective use of image correlation is significant for the improvement of compression performance.Correlation coefficient can directly reflect the linear correlation, and can be used as an effective indicator of correlation.The expression of inter-spectral correlation coefficient , k k t r between the kth band and k+tth band is defined as: The expression of the intra-spectral coefficient Where, ( , , ) I k i j and ( , , ) I k t i j are the pixels of the image of the kth and k+tth band at the location ( ) , i j respectively; k I and k t I are the mean pixel of the corresponding band; N and M are the row and column of image; l is the number of rows separated; In Fig. 1, inter-spectral and intral-spectral correlation analysis were conducted for ZY1-02D short wave infrared and visible near infrared hyperspectral images.It is clear that the intralspectral correlation of most bands in ZY1-02D hyperspectral images is weaker than the inter-spectral correlation.Therefore, the primary task of compression shall be the elimination of inter-spectral correlation.
METHODS
In this paper, the lossless compression scheme is divided into three parts: adaptive band ordering, prediction, and entropy coding, as shown in Fig. 2. While coding, in addition to predicting the residual code stream, the band reordering numbers shall also be added, but the numbers account for small bytes and can be ignored.While decoding, the code stream is first decoded into residual image and ordering number, and then the pixels of each band are recovered one by one with a predictor.Due to the symmetry of compression and decompression of algorithm, the bands of hyperspectral images can be reordered according to the ordering number to restore the original image.
Band reordering
Not all the correlation between two bands of the hyperspectral images is extremely high, and some bands of small interspectral correlation with other bands would not be included in band ordering.Meanwhile, for hyperspectral data with multiple bands, reasonable partitioning may reduce the computation of correlation coefficients.Good band reordering can improve the correlation between bands and indirectly improve the prediction performance, a segmented sorting method was proposed in this paper, which consists of two parts: ① adaptive band grouping (Fig. 3); ② band reordering within each group using the minimum spanning tree algorithm.
Figure 3. Adaptive grouping of hyperspectral image bands
Fig. 4 is an optimally ordered tree corresponding to Fig. 3, where the vertex bands are compressed with intra-spectral prediction and the remaining child nodes are predicted using the ordered reference bands.In this way, the best predicted coding order is transformed into a minimum spanning tree problem in graph theory.Table I shows the comparison of the inter-spectral correlation coefficients between the best-ordered and unordered bands in Fig. 4, from which it is clear that the best ordered band enhances the inter-spectral correlation coefficient, so the adaptive solution to the best ordering of different images may enhance the compression effects.
Intra-spectral prediction
Intra-spectral prediction applies the classical median predictor to eliminate the spatial redundancy of images.For the diagonal edges in bands, the prediction of diagonal edge of images was added on the basis of the median predictor in this paper, and an improved intra-spectral prediction method was proposed.The formula of median predictor is as follows: (3) Where, A , C and B are the three adjacent pixels of the pixel point to be predicted, and X is the predicted value of the current pixel.
The edge judgment conditions include Where, 1 T and 2 T are pre-defined positive thresholds, for testing the contrast of pixel gray values of diagonal edges, and theoretically, 2 1 T T .
Inter-spectral prediction
SLSQ increases the indexes based on the current pixel of the reference band and local pixel distance from the current pixel.
Aiming at the possible local edge in inter-spectral prediction, ordering model was added on the basis of SLSQ, and improved inter-spectral prediction was put forward to choose the best number of contexts, as shown in Fig. 5.
{ | 0,1, ,11} means the set of pixel point x within the local candidate region, and the distance weighted Where, i is the distance weighted value of i x , and the further i x is away from x , the smaller the weighted value will be.
N minimum distance weighted pixel values are selected from i d after calculation, denoted as 0,1, , and the corresponding i y is taken as the best prediction context of y , namely . With the candidate values of the reference band in the template and the corresponding current band, the predicted values can be calculated according to the minimum mean square criterion: Where, a is the prediction coefficient of the reference band,
Entropy coding
After prediction, all residuals were mapped to the non-negative values, and then binary arithmetic coding was used to remove statistical redundancy from the images.The mapping equation is as follows: 2 , 0 ( ) 2 1, 0 Where, n is the predicted residual value.
RESULTS AND ANALYSIS
To verify the effectiveness of the algorithm in this paper, compression test was conducted for ZY1-02D satellite hyperspectral images with VS2013.The ZY1-02D satellite is equipped with visible near infrared camera and hyperspectral camera.The wave length of data achieved by hyperspectral camera ranges between 0.4 μm and 2.5 μm, with 76 bands of visible near infrared (VN) and 90 bands of shortwave infrared (SW), and each pixel is 16 bits.Eight AHSI images were taken at different time and locations, as shown in Table II.The compression effect is evaluated using bit per pixel (bpp) and compression ratio.
Parameter setting
Since there is no change in the result of the value of 2 T , Fig. 6 shows the variation curve of the bpp of image ZY1E-AHSI-E106.90-N42.36-VN with 1 T when the fixed threshold 2 10 T .It is clear that when there are diagonal edges in the image, bpp decreases after the addition of edge detection, while the compression performance is a little higher than that of the previous method.When 1 T ranges between 400 and 600, it tends to be stable, and at this moment, the average bit rate of the intra-spectral prediction is the smallest.
Compression results and comparison
Fig. 8 shows the absolute average of the prediction residuals for each band obtained after the prediction of image ZY1E-AHSIby the algorithm in this paper.The interspectral and intral-spectral prediction after band ordering makes full use of the correlation and the optimal number of contexts to eliminate the image redundancy effectively, and achieve a small prediction residual.
Figure 8. Predicted band
Compression tests were conducted with several lossless compression algorithms, and the compression ratio was used as the evaluation index of compression performance in lossless compression.Compression algorithms include WinRAR, JPEG-LS, FL, JPEG2000 and the algorithm in this paper, as shown in Table III, among which, WinRAR is now the most commonly used software for lossless compression; JPEG-LS and JPEG2000 are internationally acknowledged lossless and nearlossless compression standards; FL is a hyperspectral image compression algorithm recommended by CCSDS.According to table III, WinRAR fails to take the correlation of hyperspectral images into account, and just removes the statistical redundancy of data, featuring a poor compression performance; although JPEG-LS and JPEG2000 remove the spatial redundancy of hyperspectral images, they fail to consider the inter-spectral correlation, thus with a poor compression performance; JPEG2000 is more suitable for near-lossless or lossy compression, and it is a little bit weaker than JPEG-LS in lossless compression.FL predicts the current band using several reference bands, and removes the inter-spectral redundancy of hyperspectral images, with better compression performance than that of the other three methods.But its precision decreases in case of bands of low prediction correlation and a large number of reference bands.The algorithm in this paper combined the adaptive band ordering algorithm and the prediction algorithm to effectively remove the intral-spectral and inter-spectral redundancy of hyperspectral images, showing the best performance in compression when compared with the previous four methods.
CONCLUSION
To utilize the correlation of hyperspectral images effectively and enhance the performance of lossless compression, an improved adaptive band reordering and minimum mean square error prediction algorithm was put forward in this paper.It firstly improved the correlation between the band to be predicted and the reference band through the adaptive band ordering, and then removed the intra-and inter-spectral correlation of hyperspectral images with the improved intraspectral median predictor and inter-spectral optimal context predictor, and finally eliminated the statistical redundancy and completed the compression process by binary arithmetic coding of the prediction residuals and the band ordering number.The images were consistent with the original image after the decompression, achieving the lossless compression of hyperspectral images.The test results of ZY1-02D hyperspectral images show that this algorithm achieves an average compression ratio of 3.58 and 3.36 times for the 16-bit uncorrected images, which is better than WinRAR, JPEG-LS, FL and JPEG2000.In terms of time, the time consumption of the method in this paper is greatly increased compared with the previous methods, because the increase in algorithm will inevitably lead to an increase in computing time, but for production applications, multi-threaded or CUDA programming methods, which can significantly reduce the calculation time.
The algorithm proposed in this paper is simple and easy to implement, with enhanced performance, and can provide references for lossless compression of hyperspectral images.
Figure 4 .
Figure 4. Minimum spanning tree value of the pixel y of the current band.
Figure 6 .
Figure 6.Intral-spectral prediction of bpp trends Fig.7shows the variation trend of the average bit with N when different numbers of context are selected in the prediction template.According to the image analysis in table 2, with the constant increase of N, bpp increases.When N=2, bpp is the smallest.So N=2 in this paper.
Table 1 .
.Table of inter-spectral prediction order
Table 3 .
Compression results of hyperspectral images of the ZY1-02D satellite (compression ratio) | 3,308.2 | 2023-12-05T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Microstructural Characterization of Calcite-Based Powder Materials Prepared by Planetary Ball Milling
In this work, a planetary ball milling was used to modify the surface properties of calcite-based material from waste oyster shell under the rotational speed of 200–600 rpm, grinding time of 5–180 min and sample mass of 1–10 g. The milling significantly changed the microstructural properties of the calcite-based minerals (i.e., surface area, pore volume, true density, and porosity). The surface characterization of the resulting powder should be macroporous and/or nonporous based on the nitrogen adsorption/desorption isotherms. Under the optimal conditions at the rotational speed of 400 rpm, grinding time of 30 min and sample mass of 5 g, the resulting calcite-based powder had larger specific surface area (i.e., 10.64 m2·g−1) than the starting material (i.e., 4.05 m2·g−1). This finding was also consistent with the measurement of laser-diffraction (i.e., 9.7 vs. 15.0 μm of mean diameter). In addition, the results from the scanning electron microscope (SEM) observation indicated that surface roughness can be enhanced as particle size decreases as a result of particle-particle attrition. Thus, grinding the aquacultural bioresource by a high-energy ball milling can create the fine materials, which may be applied in the fields of inorganic minerals like aggregate and construction material.
Introduction
It is well known that the oyster shell is a natural biomaterial with excellent fracture strength and hard toughness, which are attributed to its laminated microstructure [1]. This mineral material is primarily composed of calcium carbonate (CaCO 3 ) crystals (i.e., calcite) laid down in a protein matrix [2,3]. To mitigate the environmental loads from waste oyster shell, many studies on its OPEN ACCESS recycling or utilization have been reported in recent years. One of the methods was to reuse it as an available material without further thermal and chemical treatment such as desulfurization sorbent [4], sludge conditioner [5], adsorbent [6,7], construction material [8,9], nutrition supplement [10], eutrophication control medium [11,12], food additive [13], liming material [14], and dechlorinating agent [15].
Concerning the efficient utilization of waste oyster shell, the size reduction method could be used to fabricate finer powder because the surface properties (e.g., specific surface area) of powder material mainly depend on its particle size, particle shape, and pore structure [16]. The fine milling of granular material can not only reduce particle size but also change surface structure because of the mechanochemical effect [17,18]. Ball milling is a commonly used method of producing fine powder in many industries. The tumbling mill with centrifugal and planetary action has recently been used to prepare fine powder from a variety of materials such as minerals, ores, alloys, chemicals, glass, ceramics, and plant materials. More noticeably, the planetary ball milling can reduce particles to fine powders based on a mechanical energy transfer, or impact and friction forces through high hardness ball media [19,20]. However, its energy efficiency is low, and the power cost is high.
In the open literature, numerous prior studies have reported similar work on ball milling of other types of powders, but there is no report about the pore and surface properties of calcite-based mineral powder using planetary ball milling. In the present study, the mechanical grinding was used to process the local waste oyster shell in order to serve as a biomaterial powder. As demonstrated by the previous study [21], the ground eggshell powders, obtained by the planetary ball milling with varying rotational speed, grinding time and sample mass loading, have been analyzed to characterize their pore properties, particle sizes, surface morphologies, and crystallinities by N 2 adsorption/desorption isotherm, laser diffraction, scanning electron microscope (SEM) observation, and X-ray diffraction, respectively. It was found that eggshell shells can be successfully ground to the resulting fine powder with the specific surface area of over 10 m 2 ·g −1 while the specific surface area of its starting sample is less than 1.0 m 2 ·g −1 . Thus, the main objective of this work was to explore the preparation of fine minerals from waste oyster shells using a high-energy ball milling as a function of rotational speed, grinding time, and mass dosage, and to characterize the surface properties of the resulting powders.
Materials
The waste oyster shell (denoted as WOS) was obtained from giant Pacific oyster (Crassostrea gigas) in a commercial market (Kaohsiung City, Taiwan). The shell sample was first cleaned with tap water to remove fresh remnants attached to the shells, and then it was dried by the solar radiation for at least 2 h. The shells were further crushed and ground by a rotary knife cutter to prepare smaller particles. The particle materials were thus sieved to the mesh number ranges from 100 to 200 (average particle size = 0.112 mm) according to the standard sieve designation. The resulting materials were finally dried at 105 °C for 24 h and stored in the desiccator prior to the physical and chemical characterizations. The crude shell particle (denoted as WOS-RM) was used as raw material further ground to fine powder by planetary ball milling described below. As listed in Table 1, an inductively coupled plasma-atomic emission spectrometer (Jarrel-Ash Co., USA; Model No.: ICAP 9000) and an elemental analyzer (Model: vario EL III; Elementar Co., Germany) have been used to obtain the chemical compositions of the WOS-RM sample, showing that the sample is mainly composed of calcium carbonate (i.e., calcite). On the other hand, the X-ray diffraction (XRD) pattern of the sample using a Rigaku MiniFlex instrument (Cu-Kα radiation) exhibited a significant peak of around 30 degree (2θ), which is characteristic of crystalline calcite that indicates as hkl (104) [22]. As seen in Figure 1, the diffraction peaks (not shown here) were broadened as the ball milling (described below) increased, indicating a decreasing trend in crystallite (grain) sizes due to the internal stress [19]. Based on the Scherrer equation, the crystallite size is reversely proportional to the width at half maximum of the broadened diffraction line on the 2θ scale. Accordingly, the crystallite size of WOS-M3 should be less than that of WOS-M4. The decrease of the calcite crystal size may be attributed to the thermal expansion while high-impact grinding results in the deformation.
Experimental Apparatus and Procedures
In the present study, a dry process by using high hardness material as grinding medium was used to make fine calcite-based particle. The fine grinding of crude shell sample was performed by a tumbling (planetary) ball mill (Retsch Co., Germany; Model No.: PM 100). The apparatus was equipped with a pair of agate jars. The inner diameter and depth in the 50 cm 3 -cylindrical pot were 60 mm and ca. 30 mm, respectively. The polished zirconium oxide ball with 20 mm diameter laid inside the jar was used to grind the sample. For each grinding experiment, the sample was put into the milling pot and ground under the following operation modes: 1. Rotational speed: 200, 300, 400, 500, and 600 rpm (denoted as S1, S2, S3, S4, and S5, respectively); 2. Grinding time: 5, 10, 30, 60, 120, and 180 min (denoted as T1, T2, T3, T4, T5, and T6, respectively); 3. Sample mass: 1.0, 2.5, 5.0, 7.5, and 10.0 g (denoted as M1, M2, M3, M4, and M5, respectively).
After being ground for the prescribed operation mode, the ground powder product was thoroughly removed from the mill jar and then stored in a desiccator for the subsequent characterization measurements. The preparation conditions and sample identification codes of the resulting oyster shell powders were denoted as WOS-S series (the preparation under the prescribed conditions: sample mass of 5 g, rotational time of 30 min, and clockwise rotation), WOS-T series (the preparation under the prescribed conditions: sample mass of 5 g, rotational speed of 400 rpm, and clockwise rotation), and WOS-M series (the preparation under the prescribed conditions: rotational time of 30 min, rotational speed of 400 rpm, and clockwise rotation). For example, the shell powder WOS-S1 was prepared under the grinding conditions of rotational speed of 200 rpm, sample mass of 5 g, and rotational time of 30 min.
Pore Property
Regarded as a comparative factor in determining the degree of pore properties of the resulting powders in the present study, the Brunauer-Emmett-Teller (BET) surface area has generally been used for comparing the specific surface areas of a variety of porous materials. The pore properties of fine powder, including the BET surface area (S BET , m 2 ·g −1 ) and pore size distribution, were obtained while its nitrogen adsorption-desorption isotherms were measured at −196 °C. The Surface Area & Porosity Analyzer (Micromeritics Co., Norcross, GA, USA, Model No.: ASAP 2020) was used to examine this work. The pore size distribution was calculated based on differential pore volume of Barrett-Joyner-Halenda (BJH) desorption isotherm using the Kelvin model of pore filling theory [16].
True Density
In the measurement of true density, the contribution to the volume of pores or internal voids must be subtracted based on its definition. Helium gas can penetrate even the very small pores in the powder sample because of its inertness and small molecule size (about 0.2 nm). A helium displacement method with a pycnometer (Micromeritics Co., Norcross, GA, USA, Model No.: AccuPyc 1340) was used to measure the true density (ρ s ) of the particulate sample in this work. From the data on total pore volume and true density of the fine particle, its porosity can be thus estimated [16].
Particle Size Distribution
In addition to the pore property, the particle size also contributes to the surface area of the resulting fine powder. An LS 230 (Micro-volume module) laser-diffraction particle size analyzer (Beckman Coulter Inc.; USA) was used to examine the particle size distributions of crude shell particle (i.e., WOS-RM) and some resulting fine powders. Prior to the measurement, about 0.1 g was added to 30 cm 3 de-ionized water to prepare a suspension solution. Then, magnetic agitation was used to shake the suspension solution to avoid the sedimentation of the sample particle.
Scanning Electron Microscope (SEM) Observation
In order to elucidate the particle properties (e.g., surface roughness, pore property and particle size) of the resulting shell powders from nitrogen adsorption/desorption isotherms and laser diffraction, the scanning electron microscope (SEM) was used to further examine and observe the external texture. The surface morphologies of the resulting fine powders were examined using the scanning electron microscopy (SEM) by a S-3000N (Hitachi Co., Tokyo, Japan) apparatus operated at a 15.0 kV accelerating potential. Prior to the observation, the surface of the sample was coated with a thin, electric conductive gold film.
Pore Property
The data in Table 2 indicated the Brunauer-Emmett-Teller (BET) surface area, total pore volume, true density of the starting sample (i.e., WOS-RM), and the resulting WOS-series. The data revealed that most of these fine minerals possessed poorer pore properties toward the probe molecule (i.e., nitrogen) than common porous materials such as activated clay [23]. Figure 2 showed the nitrogen adsorption/desorption isotherms of the optimal calcite-based powder (i.e., WOS-M3), indicating that monolayer-multilayer adsorption is unrestricted. On the basis of the Brunauer, Deming, Deming, and Teller (BDDT) classification [24], the isotherms belong to typical Type II, which has characteristic that nonporous or macroporous solids where monolayer coverage is succeeded by multilayer adsorption at higher relative pressure values. However, it should be noted that a small hysteresis loop can be seen in Figure 2. The probe molecule gas (i.e., nitrogen) was progressively added to the system to measure the lower (adsorption) branch of the loop while the probe molecule gas was progressively withdrawn from the system to measure the upper (desorption) branch. According to the classification of isotherm shapes, the Type VI isotherms possess a hysteresis loop, which is associated with mesoporous solids, where capillary condensation occurs [24]. Furthermore, the hysteresis loop, which is corresponding to type H3 recommended by the International Union of Pure and Applied Chemistry (IUPAC), has been associated with solids having slit-shaped pores with wide mouths [24]. From the pore size distribution (<50 nm) of WOS-M3 powder (Figure 3) based on the pore volumes of the Barrett-Joyner-Halenda (BJH) desorption branch in the measurement of nitrogen isotherms, it was observed that the pores of the optimal mineral sample have a heterogeneous distribution of slit widths mainly ranging from 2.5 nm to 4.5 nm. The foregoing observation was also consistent with the N 2 adsorption/desorption isotherms as seen in Figure 2. In the rotation-ground shell particle series (WOS-S series), the crack or slit seemed to be created by rotation speed of increasing from 200 to 600 rpm. As a result, the pore properties of BET surface area increased from 5.52 m 2 ·g −1 at rotational speed of 200 rpm to 10.64 m 2 ·g −1 at rotational speed of 400 rpm. Moreover, the pore property of the resulting powder increased to the maximum at about 400 rpm, but steadily declined in rotational speed at above 400 rpm. The optimum value could result from a compromise among the stress imposed to each particle, the mobility and attrition resistance of the sample particles, and the collision frequency and intensity. This result was in high accordance with the previous observation on the eggshell ground by a planetary ball mill [23]. Table 2, when the grinding time increased from 5 min to 30 min, the pore properties of the WOS-T samples were also increased as a result of the development of mesopores (crack or slit) and/or fine powder. However, it was observed that the values at grinding time of 30-60 min decreased more slightly than those at grinding time less than 30 min possibly due to the breakdown of the few well developed pores when the grinding time was longer than the crack transition time. Table 2 showed the pore properties of the WOS-M samples in terms of BET surface area produced at a rotational speed of 400 rpm and a milling time of 30 min under various sample mass dosages (1.0-10.0 g). Clearly, the BET surface areas of the resulting powder solids tended to gradually increase with the increase in the mass dosage from 1.0 to 5.0 g. However, there was an optimum value of BET surface area at the minimum mass dosage (i.e., 5.0 g) for the sample WOS-M3 because of the extent of the crack or slit evolution and development of fine particle. As the grinding was operated at mass dosage of above 5.0 g, a more inefficient particle-particle contacts and a lower energy transmitted to each particle contributed to the decrease of the pore property. As a result, the optimum operation parameters may result from a compromise between the stresses imposed on each particle, the mobility and attrition resistance of the sample particles, and the collision frequency and intensity. Figure 4 depicted the evolution in the size distributions of the crude shell particle (i.e., WOS-RW) and two resulting particles (i.e., WOS-M3 and WOS-M4), which are representative of fine bioceramics with a definite division of pore properties as listed in Table 2. The particle size distribution of the WOS-RW sample broadly ranges from 0.4 to 120 μm, but it reaches a peak at about 20 μm. Approximately 90% of the crude sample particles possess particle size below 40 μm. When the size reduction evolved during grinding, the pattern mode of the particle size distribution and its corresponding peak are shifted from the right hand to the left hand. This led to a somewhat reduction of the mean diameter from 15.0 μm of the crude sample (i.e., WOS-RW) to 9.7 μm of the WOS-M3 sample. However, the median size of the WOS-M4 sample was increased to 10.8 μm. This should result from the agglomeration, which may occur according to a coalescence mechanism when the mass loading of the oyster shell sample is too high. Generally, the fragmentation rate decreases with an increase of the particle sample mass loading because inefficient particle-particle contacts are more important and the energy transmitted to each particle is lower [25]. If the oyster shell sample is assumed to be uniform with the spherical type in the particle size, its external surface area is reversely proportional to its particle diameter. Consequently, the order of the specific surface areas would be: WOS-M3 > WOS-M4 > WOS-RW. Obviously, the data on the particle size and its distribution were in accordance with the results of pore properties (seen in Table 2).
Scanning Electron Microscope (SEM) Observation
The textural structure examination of the optimal calcite-based sample (i.e., WOS-M3) was further observed from the SEM photograph to elucidate the grinding effect on the particle size. From Figure 5, it can be clearly seen that the ground shell sample and its agglomerate had a much smaller particle size. It seems that the sample does not possess well-defined pore structures. The finding was in agreement with the results of N 2 adsorption-desorption isotherms ( Figure 2). However, the powder particle displayed a much rougher and irregular surface structure by the mechanical mechanism during grinding, leading to the formation of some pores/cracks on the surface. This surface roughness and macropores can also exhibit the apparently larger BET surface areas of the resulting shell samples as described above (Table 2).
Conclusions
In conclusion, planetary ball milling method has been used to grind the oyster shell waste under various conditions. In order to characterize their properties, N 2 adsorption/desorption isotherms, laser diffraction, and SEM observation have been examined to analyze the resulting biomaterial powders. The following conclusions can be drawn: • Under the rotational speed of 200-600 rpm, grinding time of 5-180 min and sample mass loading of 1-10 g, the grinding treatment significantly changes the surface properties of the calcite-based minerals. The pore properties of the optimal resulting powder are 10.64 m 2 ·g −1 , 0.066 cm 3 ·g −1 , and 0.15 based on BET surface area, total pore volume and porosity, respectively, as compared to those (i.e., 4.05 m 2 ·g −1 , 0.024 cm 3 ·g −1 , and 0.06) of the starting material. This finding was also consistent with the particle size measurement (i.e., 9.7 vs. 15.0 μm of mean diameter).
• From the nitrogen adsorption/desorption isotherms, a typical Type II was found in the resulting powders, indicating that the isotherms are characterized by the nonporous materials or materials with macropores or open voids. However, a small hysteresis loop was also seen in the isotherms, suggesting that a small amount of mesopore with wide slit-shaped mouth exists in the fine powder.
• According to the observations in the SEM, the surface roughness can be enhanced as particle size decreases as a result of particle-particle attrition, suggesting that the specific surface areas of resulting powders hence increase with increasing fractural impact during high-energy ball milling. | 4,344.2 | 2013-08-01T00:00:00.000 | [
"Materials Science"
] |
The InN epitaxy via controlling In bilayer
The method of In bilayer pre-deposition and penetrated nitridation had been proposed, which had been proven to have many advantages theoretically. To study the growth behavior of this method experimentally, various pulse times of trimethylindium supply were used to get the optimal indium bilayer controlling by metalorganic vapour phase epitaxy. The results revealed that the InN film quality became better as the thickness of the top indium atomic layers was close to bilayer. A following tuning of nitridation process enhanced the quality of InN film further, which means that a moderate, stable, and slow nitridation process by NH3 flow also plays the key role in growing better-quality InN film. Meanwhile, the biaxial strain of InN film was gradually relaxing when the flatness was increasingly improved.
Background
The unique properties of InN are currently attracting much interest in the research community [1,2]. Because of its lowest effective mass and the highest electron drift velocity among all III-nitride semiconductors [3], InN is promising for high-speed and high-frequency electronic devices. And recently, the band gap of InN, which is considered as 1.9 eV, is renewed to approximately 0.7 eV [4][5][6], covering a broad range of wavelength from near infrared at approximately 1.5 μm to ultraviolet at approximately 200 nm based on its direct band gap alloying with GaN and AlN [7][8][9]. However, the achievement of high crystalline quality InN film is still a great challenge, which attributed to the lack of lattice-matched substrate, the low dissociation temperature (approximately 600°C), the low pyrolysis efficiency of NH 3 at 500 to 600°C, the pre-reaction of the precursors before arriving at the substrate surface, and also the InN clustering effect [10][11][12][13][14].
In order to achieve high-quality InN film, effort has been made by researchers with different methods such as optimizing growth temperature, controlling V/III ratio, introducing buffer layer, or employing pulsed atomic layer epitaxy technique [15,16]. However, the crystalline quality of InN film is still far below a satisfactory level due to the existence of huge quantity of defects [16]. To elucidate the original difficulty in In film deposition, the formation kinetics of InN with N and In atoms on the In polar GaN surface has been systematically studied by first-principles calculations [17], it was found that the pre-deposition of In bilayer on the surface could improve the In migration on the surface and the smoothness of In film.
In this work, the epitaxy method of In bilayer controlling and penetrated nitridation was employed for the InN film growth on GaN template. In order to determine critical trimethylindium (TMI) flow required for forming In bilayer, the pulse time of TMI supply was optimized. The results revealed that the film quality became better as the thickness of the top indium atomic layers was close to bilayer. Based on the In bilayer deposition, a moderate, stable, and slow nitridation process by NH3 flow also played the key role in growing betterquality InN film. X-ray diffraction (XRD) measurements confirmed the gradual relaxation of biaxial strain in InN epilayers during increment of the smoothness.
Growth of samples
InN films were grown on a 3-μm-thick GaN template with(0001) sapphire substrate by using metalorganic chemical vapor deposition (MOCVD) system with a Thomas Swan closely coupled showerhead (CCS) reactor. The trimethylgallium (TMG), trimethylindium (TMI), and ammonia (NH 3 ) were used as the precursors for Ga, In, and N, respectively, and H 2 and N 2 were used as the carrier gasses. Prior to the GaN/AlGaN superlattice growth, thermal cleaning of the (0001)-oriented sapphire substrate was carried out under hydrogen ambient at 1,050°C for 10 min to remove native oxide from the surface. Then, an approximately 30-nm low-temperature GaN buffer layer (approximately 570°C) was grown followed by a approximately 3-μm high-quality GaN underlaying layer (approximately 1,090°C). During the stage of InN growth, the pressure was set to 450 torr at 550°C [18]. In order to accurately control the deposition of indium atomic multilayers and the following nitridation process, the pulse growth method was employed through switching and adjusting the pulsed supply time of TMI and ammonia flows, as shown in Figure 1. For samples A, B, C, and D, a constant TMI flow of 2.0 × 10 −5 mol/min was used whereas a series of duration time of the pulsed TMI flow, 16,8,4, and 3 s, was applied, respectively. Then, they were followed by a 33-s pulse of NH 3 flow for the nitridation process. The mole flow of ammonia was set to be 0.5 mol/min. For the purpose of comparison, the total moles of indium supply in different samples was preserved by varying the total pulse periods of 180, 360, 720, and 960 for samples A to D, respectively. In order to avoid the possibility of over-nitridation, a lower ammonia flow of 0.25 and 0.125 mol/min was set for samples E and F, respectively, and other growth parameters were as the same as those set for sample C.
Characterization
The thickness of film was measured by in situ growth monitoring curves (Panalytical X'pert PRO X, Panalytical, Almelo, The Netherlands). The surface morphology and smoothness of the as-grown samples were characterized by atomic force microscopy (AFM, PicoSPM and PSI XE-100, Molecular Imaging, Ann Arbor, MI, USA) and scanning electron microscopy (SEM, LEO 1530, LEO Elektronenmikroskopie GmbH, Oberkochen, Germany) equipped with an energy-dispersive X-ray spectrometer (EDX). The structural quality and the In composition of InN films were evaluated by X-ray diffraction (XRD) in a X' Pert PRO system.
Results and discussion
In the ideal indium bilayer construction process, we need to deposit one indium monolayer in each pulse, thus this new indium monolayer would construct an indium bilayer with the top indium monolayer which we had deposited in last pulse [17]. Figure 2 shows the in situ growth monitoring interferometer curves of samples A to D. One can observe that in the InN growth stage, the vibration of all four sample's curves have experienced nearly equal phase shift. According to this phase shift, we can easily calculate the average InN film's thickness of them, which is about 170 nm. Also this result has been confirmed by practical measurement in the SEM cross-sectional observation (see SEM cross-sectional photos in Figure 3). Thus, the InN deposition thicknesses per period of samples A, B, C, and D are about 9.5, 4.7, 2.4, and 1.8 Å, respectively. According to the (0001) lattice constant c of wurtzite InN, the thickness of one In-N monolayer (c/2) is about 2.8 Å. Comparing to this value, sample C's growth thickness of each pulse is the closest one. Figure 3 shows the SEM images of surface morphology and cross sections of samples A to D. From the top view of sample A (A1), one can see some obvious dark holes on the surface, indicating the formation of vacancies due to In accumulation in droplets. The formation of holes and droplets easily leads to a pretty rough surface (rms = 33, from AFM scanning result), as shown in Figure 3A2. As we know, the melting temperature of metal In is only about 157°C. Thus, under the growth temperature of InN (550°C), the pulsed deposition of In for a long duration time may form a thick liquid In layer on the surface. By the effect of surface tension, In droplets in large size would come into being quickly. This is the main reason governing the surface roughness. In order to reduce the roughness, the pulse time of TMI is reduced to 8 s for sample B. The obtained InN film shows better flatness (rms = 20) and dark holes have been well removed ( Figure 3B2). According to the theoretical simulation of the kinetics of InN formation [17], if the thickness of indium film is larger than two atomic layers, the nitridation of this In film could not well form a InN epilayers in correct stoichiometric ratio (1:1) and the excessive In will lead to roughness. Thus, the TMI pulse time was further decreased down to 4 s. As shown in Figure 3C1, the islands of sample C begin to show regular shape relatively and the surface becomes more flat (rms = 14). Meanwhile, it can be observed that there are some islands in larger size, as indicated by arrow. The number of these types of large islands further increases in sample D ( Figure 3D1), in which the TMI pulse time was set to 3 s. This trend of quality deterioration implies that the indium film deposited during the TMI period turns to be less than one atomic layer and fail to construct indium bilayer. This insufficient coverage of indium layer could not provide the advantage of nitridation of indium bi-layer structure. On the contrary, over-nitridation under N-rich condition leads to the deterioration of the InN film quality of sample D. Therefore, it could be determined that 4-s pulsed supply of TMI in sample C is the optimal setting.
To investigate the optical property of these samples, absorption spectra were recorded to determine the band gap of InN film and the results are shown in Figure 4. Although all four samples' absorption curves show limited differences due to the small thickness or relatively low crystalline quality of the InN film, their differences of slope's changes still can be identified. The absorption spectra of sample C and D have a clear slope threshold near the absorption edge. While, for samples A and B such slope threshold is absent and, beyond 1,100 nm, absorptions related with defect or impurity bands appear. This indicates that sample C has the best film quality due to the optimized pulsed growth with TMI supply. In principle, InN is a direct band semiconductor so that the relationship between its energy band gap and its absorption coefficient could follow the formula below: where the α is the absorption coefficient and the E g is the band gap. Thus, the E g of our samples could be estimated through the intersectional point of absorption edge's tangent and horizontal axis. It is found that the E g of sample C and D is about 1.22 and 1.19 eV, respectively. Due to the unclear slope thresholds in samples A and B, the E g is difficult to determine precisely. The range of reasonable E g for samples A and B would be between 0.7 to 0.9 eV, which is lower than those of sample C and D. This may stem from the absorption associated with the deep level defects and impurities rather than the band edge absorption of InN, the E g of which should be about 1.25 eV [19].
Considering the negative influence by the excessive NH 3 supply, we tried to improve the nitridation process by optimizing the ammonia flow. In principle, the indium bilayer will experience a nitridation process with the penetration of N atoms into between the bilayer [17]. This process would finally form a uniform wurtzite InN structure on the surface. For the case of excessive NH3 flow, the top layer in high N concentration on the surface easily forms a steep concentration gradient between surface and sub-surface layers where the N atoms will diffuse to. According to Fick' first law, where the J is the total diffusion flux and the D is the diffusion factor. The steeper the concentration gradient − ∂C ∂X À Á would lead to the higher the total diffusion flux J [20]. Thus, N atoms could not uniquely arrive at the preferable top site via the one-atom-on-one-site mode. Instead, they would diffuse to various positions and some would even crowd in some energy minima. Meanwhile, ultra-high N concentration on surface could even make some N atoms hang over the top indium atomic layer, and, in this case, the indium pre-deposition of next pulse would fail to construct indium bilayer in some regions. As a result, the uniformity and smoothness of the InN film is deteriorated. Based on this analysis, the NH3 flow should be optimized by reducing the mass flow, which is set to 0.25 mol/min for sample E and 0.125 mol/min for sample F. Figure 5 shows the SEM images of these two samples. One can see that the smoothness of sample E has been slightly improved and is better than that of sample C. This indicates that the lower ammonia flow could improve the uniform diffusion of N atoms. Further reduction of NH3 flow in sample F finally leads to a large improvement of InN quality and surface smoothness, as shown in the cross-sectional image of Figure 5F2. The corresponding AFM scanning also confirms this enhancement of surface smoothness (rms = 7). After the deposition of indium bilayer, a moderate, stable, and slow nitridation process with appropriate ammonia flow is crucial for the formation of better-quality InN film.
In order to study the residual strain of as-grown InN films, XRD characterizations with ω-2θ scans were taken and the results are shown in Figure 6. Typical symmetrical (002) diffraction peaks of wurtzite InN and wurtzite GaN could be clearly identified, at about 15.8°and 17.4° [21]. Besides, another weak peak was observed at about 16.65°; this peak has been identified as (101) diffraction peak of wurtzite InN by consulting related database and reference. In order to separate the mixing of these two peaks, a multi-peak fitting in this region was made and peak positions of each could be determined. Base on angle of the (002) peaks (green dash lines) and angle of the (101) peaks (red dash lines), the corresponding lattice constant along c axis and a axis could be obtained simply by using the formula below: where the d hkl is the interplanar distance between (hkl) planes. The complete crystalline data is summarized in Table 1. One can see that the lattice constant a is increasing from samples A to F, and the a value of sample F (3.63 Å) is very close to the equilibrium value of wurtzite InN (3.627 Å) obtained by first principle calculations, indicating the gradual reduction of residual biaxial strains through growth optimization. Whereas, the (002) peak (correspond to lattice constant c) is right shifting correspondingly due to the expansion distortion by the elastic strain on the a axis. Meanwhile, it can be seen that the (002) peak is getting dominant, which means a preferential (002) crystal orientation in sample F. All these evidences imply that the biaxial strain has been well relaxed, and the crystal orientation has become better in sample F.
Conclusions
Through using various pulse times of TMI supply, we achieved optimal indium bilayer control by metalorganic vapour phase epitaxy. When the top indium multilayer was getting close to bilayer, InN film quality had been gradually improved due to high surface migration and good structure consistency of indium bilayer forming. The absorption spectra also confirmed that the InN film which was grown via optimal indium pre-deposited controlling had the fewest defects and impurities. Furthermore, an optimization of ammonia flow during the nitridation stage made an extraordinary improvement of the InN film's flatness; it means that based on the In bilayer controlling deposition, a moderate, stable, and slow nitridation process also plays the key role in growing better-quality InN film. Meanwhile, the biaxial strain of InN film was gradually relaxing when the parameters of growth was optimizing, implying that the mismatch stress of InN heteroepitaxy can be well relaxed via this growth method. Figure 6 The XRD diffraction spectra of samples A, B, C, E, and F. | 3,593.8 | 2014-01-06T00:00:00.000 | [
"Materials Science"
] |
Editorial: Protein Misfolding and Spreading Pathology in Neurodegenerative Diseases
Department of Experimental Neurodegeneration, Center for Biostructural Imaging of Neurodegeneration, University Medical Center Gottingen, Göttingen, Germany, Division of Pharmacology, Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy, Center for Neurodegenerative Science, Van Andel Institute, Grand Rapids, MI, United States, Max Planck Institute for Experimental Medicine, Göttingen, Germany, 5 The Medical School, Institute of
A common pathological hallmark among different neurodegenerative diseases is the accumulation of aggregated proteins that might cause cellular dysfunction and, eventually, lead to cell death. Amyloid-beta, Tau, alpha-synuclein, TDP-43, or the prion protein, are just a few examples of proteins that can aggregate and contribute to the pathogenesis of neurodegenerative diseases with diverse clinical manifestations (Alzheimer's disease, frontotemporal lobar degeneration, Pick's disease, Parkinson's disease, Lewy body dementia, multiple system atrophy, amyotrophic lateral sclerosis among the most common). Emerging evidence suggests that the progression of symptoms in patients affected by such disorders correlates with the spreading of pathology through the brain, but the molecular mechanisms underlying aggregation and propagation of protein aggregates are still obscure. This Research Topic focuses on the structural and molecular characteristics of aggregation-prone proteins and resumes new aspects of pathology spreading. A series of 10 articles provides an exciting up-to-date overview of core biological features of prion and prion-like neurodegenerative disorders.
Protein aggregation is a common feature among numerous neurodegenerative diseases, and is thought to culminate with detrimental effects for the cells where the proteins accumulate. Despite the commonalities of protein aggregation and cell dysfunction, the pathobiological bases of these diseases may differ. For example, the two characteristic protein deposits in Alzheimer's disease (AD) are the extracellular senile plaques, whose main constituent are amyloid-β (Aβ) fibrils, and intraneuronal neurofibrillary tangles (NFT), composed of hyperphosphorylated Tau protein (Brion et al., 1985(Brion et al., , 1986Kirschner et al., 1986;Sisodia et al., 1990). Similarly, fibrillary alpha-synuclein (aSyn) is the main protein component of Lewy bodies (LB) which are the key neuropathological hallmarks of Parkinson's disease (PD) and Lewy body dementia (DLB) (Spillantini et al., 1997), and can be found within glial cytoplasmic inclusions in the brain of patients affected by multiple system atrophy (MSA) (Wakabayashi et al., 1998). Finally, TAR-DNA binding protein 43 (TDP-43)-or tau-positive inclusions can be found in the brain or spinal cord in, e.g., frontotemporal lobar degeneration (FTLD) or amyotrophic lateral sclerosis (ALS) (Arai et al., 2006;Hasegawa et al., 2008;Kabashi et al., 2008;Sreedharan et al., 2008;Da Cruz and Cleveland, 2011).
Recently, Aβ, Tau, aSyn, and TDP-43 were proposed to display several properties similar to those exhibited by the prion protein (PrP). In particular, their physiological conformation can shift to a self-replicating pathological strain which can spread from a neuron to another across different brain regions.
Intriguingly, there seems to be a cross-talk between PD and AD, as aSyn pathology is frequent in AD brains, and Tau accumulation is common in PD with dementia (Lippa et al., 1998;Hamilton, 2000;Irwin et al., 2013). In a similar vein, Aβ plaques are a common feature of DLB, where aSyn aggregates are the predominant feature (McCann et al., 2014).
Remarkably, the prion-like properties of Aβ and aSyn demonstrated in experimental models, have been corroborated by studies that have suggested that human transmission of cerebral amyloid, and Aβ pathology contaminated material, occurs upon certain clinical procedures (Purro et al., 2018;Banerjee et al., 2019;Spinazzi et al., 2019). Furthermore, host to graft propagation of LB pathology has been suggested to have occurred in patients who received fetal neuron transplants over one decade prior to death and neuropathological examination (Kordower et al., 2008a,b;Chu and Kordower, 2010;Li et al., 2010;Kurowska et al., 2011). Whether these clinical observations with Aβ and aSyn pathology represent authentic prion spreading of pathology remains controversial, and this issue calls for caution due to the tremendous implications it might have.
Therefore, using prion-like terminology for all neurodegenerative diseases requires a deeper understanding of the molecular mechanisms involved. There are currently no disease-modifying treatments available for these diseases, and therefore it is desirable to design strategies that could directly target the aggregation of these proteins or modulate their ability to propagate from one brain region to another. Ongoing clinical trials with immunotherapy are focused on clearing the aggregates when they are in the extracellular space, and can be viewed as a strategy to limit prion-like spreading of the pathology (Gallardo and Holtzman, 2017;Sigurdsson, 2018;Panza et al., 2019).
In this Research Topic of Frontiers in Molecular Neuroscience, Terry and Wadsworth review the importance of prion structure for pathogenicity. They highlight and discuss new findings differentiating the architecture of authentic infectious lethal prions from that of PrP amyloidosis as the pillar for a critical re-evaluation of the structure of other prion-like proteins associated with other neurodegenerative diseases (Terry and Wadsworth). In the second paper, Lim reviews the analogy between prion protein and other aggregation-prone proteins, such as aSyn, Aβ, and Tau, focusing on the likelihood of these proteins to cross-seed and to adopt different conformations, and on the importance of understanding the molecular basis that drive the different conformations. Next, Vasili et al. discuss overlapping aspects between PD and AD, and elaborate on the mechanisms involved in the transfer/spreading of aSyn and Tau. They provide a thorough overview of the current cell and animal models to assess the mechanisms of spreading of pathology (Vasili et al.). On the same line, Friesen and Meyer-Luehmann summarize the literature on Aβ seeding in mouse models of AD, and their application for the study of cerebral amyloidosis and associated pathologies.
Prasad et al. present a comprehensive review about the role of TDP-43 in ALS. They discuss the imbalances on TDP-43 homeostasis, implicated in miss-regulation of RNA and cytotoxicity mechanisms. McAlary et al. provide a detailed review of the current evidence on idiopathic ALS as a prionlike disorder. They focus on key proteins and genes involved in the disease SOD1,FUS,and C9orf72), and discuss the current evidence from biophysical to in vivo studies (McAlary et al.).
Chen et al. review the consequences of gut inflammation on PD, highlighting how it may initiate and promote enteric aSyn pathology deposition and spreading into the brain.
The putative contribution of calcium channels to PD etiopathogenesis and progression, putting the accent on how they might contribute to aSyn aggregation and secretion in synucleinopathies, is addressed and discussed in the article by Leandrou et al..
New translational insights on neurodegenerative disorders characterized by prion-like protein spreading are provided by Singh et al., which describe that increased levels of SIRT2 correlate with circulating aSyn in early PD. They propose SIRT2 as potential biomarker for early detection of the disease (Singh et al.). Finally, Nam et al. discuss the efficacy of ALWPs, a combination of oriental herbal medicines with proven efficacy in diabetes mellitus, immune modulation, and owning both neurotrophic and anti-inflammatory action in decreasing Aβ plaque load, as well as Tau hyperphosphorylation in the cortex and hippocampus of the 5XFAD mouse model of AD.
In short, the articles in this Research Topic provide new up-to-date insights into our understanding of protein aggregation and spreading of pathology in prion and prion-like neurodegenerative disorders. Addressing a wide variety of topics, they introduce thought-provoking concepts and clues about the relevance of laboratory findings to the clinical arena. Hopefully, a greater understanding of the prion-like propagation of protein aggregate pathology will lead to novel therapeutic strategies that slow disease progression.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
FUNDING
TO is supported by DFG SFB 1286 (projects B6 and B8). TO and DL are supported by a grant from International Parkinson Fonds Deutschland. | 1,776.6 | 2020-01-17T00:00:00.000 | [
"Medicine",
"Biology"
] |
Topological aspects of $4$D Abelian lattice gauge theories with the $\theta$ parameter
We study a four-dimensional $U(1)$ gauge theory with the $\theta$ angle, which was originally proposed by Cardy and Rabinovici. It is known that the model has the rich phase diagram thanks to the presence of both electrically and magnetically charged particles. We discuss the topological nature of the oblique confinement phase of the model at $\theta=\pi$, and show how its appearance can be consistent with the anomaly constraint. We also construct the $SL(2,\mathbb{Z})$ self-dual theory out of the Cardy-Rabinovici model by gauging a part of its one-form symmetry. This self-duality has a mixed 't Hooft anomaly with gravity, and its implications on the phase diagram is uncovered. As the model shares the same global symmetry and 't Hooft anomaly with those of $SU(N)$ Yang-Mills theory, studying its topological aspects would provide us more hints to explore possible dynamics of non-Abelian gauge theories with nonzero $\theta$ angles.
Introduction
Quark confinement is a basic feature of the strong interaction, while it is an important and unsolved problem to derive it in any analytical methods based on non-Abelian Yang-Mills (YM) theories. One of the famous scenarios of quark confinement is the dual superconductivity, which assumes that the YM vacuum is caused by condensation of monopoles [1][2][3][4]. When both electric and magnetic particles exist, we can expect rich structure of possible phases, such as the Coulomb phase, Higgs phase, confinement phase, etc. The monopole is quite useful to understand the qualitative effect of the θ angle and more exotic phases can appear for θ = 0 [5], as the magnetic monopole acquires the fractional electric charge proportional to θ and it is known as the Witten effect [6].
Cardy and Rabinovici proposed a simple model in order to study these nontrivial dynamics quantitatively in the presence of θ [7,8], and we call it the Cardy-Rabinovici model in this paper. It is the lattice U (1) gauge theory on the four-dimensional cubic lattice coupled to the charge-N Higgs particles, and the monopoles can be described by the violation of Bianchi identity at the lattice scale. In the continuum formulation, we introduce the θ angle as a coupling to the instanton density, but such topologies are suffered from lattice discretization and lose some of important features valid in the continuum (See Refs. [9,10] for recent developments on topologies of lattice U (1) gauge theories). In the Cardy-Rabinovici model, the θ angle is introduced to reproduce the Witten effect. Cardy and Rabinovici have conjectured that the phase diagram has a rich structure in the space of the coupling g 2 and the θ angle by applying the heuristic free-energy argument to identify the condensate in the vacua [7].
An interesting property of this model is the appearance of oblique confinement phase, originally proposed by 't Hooft [5]. When θ π, the monopole costs not only magnetic energies but also electric energies due to the Witten effect, which suggests that its condensation becomes less probable in the strong coupling regime around θ π. Instead, a bound state of two monopoles and one Higgs particle does not carry net electric charge because the electric charge induced by the Witten effect is canceled by that of the Higgs particle, and the bound state is more likely to condense near θ π.
There is a recent progress on the YM dynamics at θ = π by the new anomaly matching condition [11]. The four dimensional pure SU (N ) YM theory enjoys the Z N one-form symmetry, Z [1] N . The one-form symmetry is not an ordinary symmetry in the sense that it does not transform local operators [12]. Instead, the one-form symmetry acts on line operators, which describe the world-line of test particles, or very heavy quark in the YM theory. In the SU (N ) YM theory, we can measure N -ality of their electric charges so we have Z N symmetry acting on the Wilson loops. Like ordinary symmetries, we can consider the gauging operation of Z [1] N , but we should introduce two-form gauge fields. In Ref. [11], it is found that the CP symmetry at θ = π is violated under the presence of nontrivial two-form background gauge fields for even N , and this anomaly is renormalization-group invariant. When N is odd, more subtle condition, called global inconsistency, gives the constraint on the phase diagram, instead [11,[13][14][15][16][17][18][19]. The CP symmetry at θ = π has been suspected to be spontaneously broken [20][21][22][23][24][25][26][27], and this argument unveils that it is partly required by kinematical reasoning.
In this paper, we show that the Cardy-Rabinovici model has the same structure of the symmetry and anomaly as the SU (N ) YM theory. As all the dynamical electric particles have the U (1) gauge charge N , the model enjoys the Z [1] N symmetry, and we introduce the θ angle in a way that it becomes 2π periodic, θ ∼ θ + 2π, in the low-energy limit. As a result, the theory at θ = π acquires the CP symmetry, and it has the mixed anomaly with Z [1] N as in the case of SU (N ) YM theory. The presence of the 't Hooft anomaly tells us more details on the phase diagram, and we confirm the consistency between the anomaly constraint and the proposed phase diagram by clarifying the topological aspects of each phase. In particular, we study topological aspects of oblique confinement, which crucially depend on whether N is even or odd. When N is even, the oblique confinement phase is a Z 2 topological order at low energies with the spontaneous symmetry breaking, Z [1] N → Z [1] N/2 . (1.1) When N is odd, the oblique confinement phase is trivial as intrinsic topological orders, while it should be separated from the usual confinement phase as they are different symmetryprotected topological (SPT) states with Z [1] N to match the global inconsistency. There is another interesting aspect of the Cardy-Rabinovici model. In Ref. [8], the effective Lagrangian of charges and monopoles is obtained by integrating out the fluctuation of photons, and the effective Lagrangian has the SL(2, Z) self-duality. Indeed, we find that the local dynamics is identical under the SL(2, Z) transformation, so the local quantities, such as the free-energy density, must be the same under the duality transformation. However, we show that the SL(2, Z) duality does not extend to the global aspects of the theory. In some cases, the duality operation exchanges the topologically trivial and nontrivial phases, and we clarify that this is because the duality transformation does not preserve the one-form symmetry.
We therefore construct the model, whose local dynamics is the same with that of the Cardy-Rabinovici model, while the SL(2.Z) self-duality extends to the global aspect of the theory. It is obtained by considering the Cardy-Rabinovici model with the charge N = M 2 , and then we gauge the subgroup Z [1] M of the one-form symmetry Z M symmetry, and the first factor denotes an electric one-form symmetry and the another does a magnetic one. It turns out that the gapped phases of the gauged Cardy-Rabinovici model always show the Z M topological order, and this is because of the mixed anomaly between electric and magnetic Z [1] M symmetries. As the electric and magnetic one-form symmetries are isomorphic, we can show that the gauged model enjoys the SL(2, Z) duality. In this model, however, the θ angle is no longer 2π periodic, θ ∼ θ + 2π, and the map θ → θ + 2π must be identified as one of the generators of SL(2, Z). This SL(2, Z) duality is the same with that of the pure Maxwell theory. The partition function of the Maxwell theory is not SL(2, Z) invariant on general spin four-manifolds, but instead transforms as the modular form [28,29]. We can regard this as a mixed anomaly between SL(2, Z) duality and the Lorentz invariance [30], so we find another anomaly constraint on the phase diagram.
Organization of this paper is as follows. In Sec. 2, we give a review on the Cardy-Rabinovici model. We will see that the model is expected to have a rich phase structure based on the heuristic discussion on the free energy of world lines of dyonic excitations. We will also give a review on the SL(2, Z) self-duality about the local dynamics, emphasizing that it does not necessarily extend to the global nature of the model. In Sec. 3, we study the topological aspects of the Cardy-Rabinovici model, which partly justified the conjectured structure of the phase diagram. The anomaly matching plays an important role for this purpose, and we give the formal continuum definition of the Cardy-Rabinovici model in order to compute its anomaly in a clear manner. Using the continuum reformulation, we obtain the mixed anomaly, or global inconsistency, for Z [1] N and CP at θ = π. In Sec. 4, we give the gauged Cardy-Rabinovici model using the continuum formulation, and study the SL(2, Z) self-duality as a genuine property of the theory. We discuss the mixed anomaly between SL(2, Z) and gravity to constrain the possible phase diagram. In Sec. 5, we summarize the results and discuss possible implications to the non-Abelian YM dynamics with nonzero θ angles.
Review on Cardy-Rabinovici lattice gauge model with the θ angle
In this section, we give a brief review of the work by Cardy and Ravinobici on the lattice U (1) gauge theory with the θ angle [7,8]. This model is expected to show the rich phase structure due to the various types of charge, monopole, and dyon condensations [7]. Moreover, the local dynamics of this model enjoys the SL(2, Z) self-duality, which constrains possible structures of the phase diagram [8].
Description of the Cardy-Rabinovici model
The four-dimensional lattice gauge theory in Refs. [7,8] is defined as follows, and we call it the Cardy-Rabinovici model. The spacetime is assumed to be the four-torus T 4 , and it is discretized as the square lattice. Instead of the usual formulation of lattice gauge theory, here we use a formulation known as the Villain form of the lattice U (1) gauge theory [31] to conveniently describe the θ angle and magnetic degrees of freedom. In the Villatin form, noting the structure U (1) = R/Z, the U (1) gauge field a on the discretized torus is introduced as a pair ( a µ , s µν ), where a µ is the R-valued link variable and s µν is the Z-valued plaquette variable.
First the kinetic term of the U (1) gauge field is given by where f = da is the field strength, Note that the field strength is invariant under the R-valued 0-form gauge transformation and the Z-valued 1-form gauge transformation This lattice discretization of U (1) gauge theory allows us to define the monopole current, Here,x is the site on the dual lattice,x = x + 1 2 (1 +2 +3 +4), and thus m µ (x) is the Z-valued link variable on the dual lattice. By definition, it satisfies the conservation law, and thus the configuration of m µ can be regarded as the closed world-line of magnetically charged particles.
Next we describe the matter part. The electric matter field in Refs. [7,8] is introduced as the closed world-line representation. It is defined as the Z-valued link variable n µ , satisfying the constraint, ∂ µ n µ = 0. (2.7) When the electric matter has the charge N so that the theory enjoys Z N one-form symmetry, its minimal coupling term in the Lagrangian is given by The gauge invariance of this coupling is ensured by the conservation law (2.7). Indeed, summing up n µ restrictsã µ into 2π N Z, so the theory becomes the lattice Z N gauge theory. Now, let us introduce the θ parameter to this lattice model. This can be done by noticing that the magnetic monopole acquires the electric charge θ 2π by the Witten effect [6]. In order to reproduce this nature, we replace n µ by Here, F (x −x) is a short-ranged function in order to relate the dual lattice and the original lattice. Although the choice of F is arbitrary, the details of F is expected not to affect the low-energy dynamics of this model. Now, the Lagrangian of the matter fields becomes At sufficiently low energies, the distinction between the original and dual lattices is expected to be no longer important 1 , and then we may simply write the effective electric current as Since both n µ and m µ are Z-valued link variables, there is an emergent 2π periodicity of the θ parameter at low energies, becauseñ µ is invariant under θ → θ + 2π and n µ → n µ − m µ .
As an example, we draw the list of charged particle excitations in Fig. 1 for N = 2 at θ = 0 and π. Since the gauge group is U (1), all the points in the charge lattice allow test particles, which can be introduced as the genuine line operators. The charges of the dynamical excitations, denoted with the blue blobs, are more restricted to preserve the Z N one-form symmetry. Combining these data, the action of the model is (2.12) 1 In this paper, we assume that the Cardy-Rabinovici model has a nice continuum limit or UV completion. Then the partition function of the model is defined by where the symbol "Tr" stands for integrations over the R-valued link variableã µ , and summations over the Z-valued plaquette variable s µν and Z-valued link variable n µ satisfying (2.7). Since the Lagrangian (2.12) is quadratic in a µ , we can integrate it out exactly. The effective Lagrangian can be written only in terms of n µ and m µ [7]: (2.14) Here, G(x − x ) is the lattice massless Green function, and Θ µν is an angle-valued function defined in Ref. [7], which expresses the angle between the Dirac string emitted from the monopole m µ (x) and the four-vector (x − x). It is Θ µν that ensures the Dirac-Schwinger-Zwanziger quantization condition [32][33][34], as it jumps by 2π when an electric charge goes through the world-sheet of the Dirac string. Therefore, the Dirac string is not physically observable only if the charge quantization is satisfied.
Phase diagram via the free-energy argument
In order to get physical intuitions on the Cardy-Rabinovici model (2.12), let us study the phase structure based on a simple free-energy argument in Ref. [7] (see also Refs. [35,36]). As we will see soon, the argument is heuristic and details on quantitative structures should not be taken seriously. The steps are as follows: 1. Given the parameters (g, θ), we estimate the internal energy of a particle excitation with fixed electric and magnetic charges, which forms a closed world line in the four dimensional spacetime.
2. We judge that the particle can condense if the energy is smaller than the entropy of the loop.
3. When there are several candidates for charged particles to condense, we pick up the minimal energy one. On the other hand, when none of the charges can condense, we interpret that the realized phase is the Coulomb phase.
The electric and magnetic charges of dynamical excitations are labeled by integers (n, m) ∈ Z × Z as Then we estimate the energy of the loop excitation by extracting the short-range part of the self Coulomb interaction from (2.14), where the long-range part is neglected because it can be screened by the presence of other loops. Noting that the entropy of loops with the length L in the d-dimensional cubic lattice is roughly given by L ln(2d − 1), the particle (n, m) can condense if [7] where C = ln 7/πG(0), while the value of the constant C should not be taken too seriously. This expression suggests that the condensation becomes harder as N takes larger values, so there is more chance for the Coulomb phase for larger N . We note that the lattice Monte Carlo simulation of this model is possible when θ = 0 [37,38], and the result is consistent with the free-energy argument with roughly C ∼ 6. When C/N > 2/ √ 3 1.15, it turns out that there always exists a particle to condensate and therefore the Coulomb phase does not appear in the phase diagram. Thus the system is always in gapped phases for C/N > 2/ √ 3 at any (g, θ). The phase diagram is given in Fig. 2, and the left and right figures show the cases for C/N > 2/ √ 3 and C/N < 2/ √ 3, respectively. When C/N < 2/ √ 3, the Coulomb phase appears in the gray regions, which is denoted by γ. Other phases are the gapped phases due to the condensation of charged particles.
Higgs phase
The Higgs phase is defined by the condensation of the electric charge, given by (n, m) = (1, 0), so its energy is This does not have any θ dependence. Another important point is that it does not have the 1/g 2 term as the Higgs particle does not carry the magnetic charge. Therefore, in the weak-coupling regime, the Higgs mode should be the most favored gapped phase.
Confinement Confinement
Oblique confinement
Phase diagrams for 0 < θ < 2π and N g 2 /2π < 5 when C/N > 2/ √ 3 (left) and C/N < 2/ √ 3 (right) based on the simple free energy argument. In the right figure, the Coulomb phase appears in the gray regions denoted by γ. The weak coupling region is governed by the Higgs phase, while the strong coupling region by the confinement phase. For 0 < θ < π, the confinement is caused by the condensation of the monopole, (n, m) = (0, 1), while for π < θ < 2π, it is caused by the condensation of the dyon, (n, m) = (−1, 1). For sufficiently strong coupling, N g 2 /2π > 2 √ 3, the oblique confinement mode appears near θ = π, which is the condensation of (n, m) = (−1, 2). Blue solid curves separate these gapped phases by the phase transitions.
Confinement phase
Confinement is caused by the condensation of magnetically charged particles. The most naive one is the condensation of the magnetic monopole, (n, m) = (0, 1), and its energy is The internal energy increases quadratically as a function of θ.
Since the θ angle has the periodicity 2π in the continuum limit, the above expression means that the monopole condensation should not be a valid description for large values of θ. Therefore, we should take into account the possibility of the dyon condensation, too [5,21]. When the dyon with the charge (n, 1) condenses, its free energy becomes Among these states, we select the minimal energy state as the confinement phase at θ, so the energy density is If we take θ = (2n θ − 1)π + δθ with n θ ∈ Z and 0 ≤ δθ < 2π, then it is solved as where the solution is unique for δθ = 0 while there is another solution n = −n θ +1 for δθ = 0 with the same minimum. For example, when −π < θ < π, the monopole (n, m) = (0, 1) would condense, but when π < θ < 3π, the dyon (n, m) = (−1, 1) would condense, and there is the phase transition at θ = π.
Oblique confinement phase
Oblique confinement, which was originally proposed by 't Hooft [5], is characterized by the condensation of the higher monopole charge, such as (n, m) = (−1, 2). It is the new phase with the condensation of magnetically charged particles, and it turns out to be energetically favored only if the charge lattice is oblique due to the Witten effect. The charge, (−1, 2), can be regarded as the composite particle of monopole (0, 1) and dyon (−1, 1), and its condensation energy is given by Near θ = π, this condensation does not cost the electric energy at all, while the both Higgs and confinement phases cost some of them (see the right panel of Fig. 1). Therefore, if the electric coupling g 2 is sufficiently large, the oblique confinement can overcome the usual confinement, and it indeed appears when N g 2 2π > 2 is not 2π periodic in θ, we again consider the list of charges (2n − 1, 2) and pick up the minimal energy one: This is the free-energy density for the oblique confinement phase with manifest 2π periodicity. When θ π, the state n = 0 is chosen, but, for example, when θ −π, the different state n = 1 is selected. According to this free-energy argument, it becomes evident that more exotic oblique confinement phase appears [7]. When θ/2π is a rational number, i.e. θ/2π = −p/q, the condensate of the charge (p, q) does not cost any electric energies and thus it is preferred at sufficiently strong couplings. In order to discuss those oblique confinement phases in a systematic manner, it is convenient to resort to self-dual nature of the Cardy-Rabinovici model [8], as we shall review in the next subsection.
SL(2, Z) duality of the free-energy density and the phase diagram
So far, we draw the phase diagram, Fig. 2, just by putting an ansatz (2.17) for the condensation of the particle type (n, m). The structure of the phase diagram is claimed to be justified by using the self-dual nature of the model [8] (see also Refs. [39,40] for the model without θ). The following discussion is true for the local dynamics such as the free-energy density, but, as we shall see in the later sections, it does not always generalize to other observables.
Let us introduce the complex coupling, which is in the upper half plane. Assuming that the distinction between the original and dual lattices disappears at the low energies, the Lagrangian (2.14) is invariant under the SL(2, Z) duality transformation which is generated by the S and T transformations [8]: and We can express the self-duality group as We note that S 2 = C trivially acts on the space of coupling but it flips both electric and magnetic fields, and therefore it is identified as the charge conjugation symmetry There is also the CP transformation, The phase diagram must be symmetric under these transformations. This provides the motivation to draw the phase diagram in the τ plane [8], and we show it in Fig. 3 with the horizontal axis Re(τ ) = θ 2π and vertical axis Im(τ ) = 2π N g 2 . This shows the same phase diagram with the left figure in Fig. 2, and the blue solid curves show the phase transitions between gapped phases. These curves are related by SL(2, Z) transformations, and the phase diagram shown in Fig. 3 is invariant under SL(2, Z). This is obvious by noticing that We here draw the case, where the system is gapped at any τ . The blue curves denote the phase transitions, and they are related by the self-dual transformations, SL(2, Z).
this Higgs-confinement phase transition, then all the other phase transitions are obtained by the duality.
For instance, we can obtain the CP-broken line at θ = π from the Higgs-confinement phase transition curve as follows. ST −1 maps τ and charges as Therefore, the Higgs-confinement phase transition curve is mapped as while the charges (1, 0) and (0, 1) are mapped as This implies that the phase boundary between condensations of the charges (1, 0) and (0, 1) is mapped to the one between (0, 1) and (−1, 1) at θ = π, which is nothing but the CPbroken line. Figure 3 also clarifies the existence of various oblique confinement phases in the strong coupling regime of this model. For example, the region with the condensation of charge (−1, ) can be obtained by applying ST − to the one of the confinement phase. When Im(τ ) approaches to 0, there are infinitely many numbers of different oblique confinement phases as a function of θ [7,8].
Let us emphasize, however, that the SL(2, Z) duality applies only for the local dynamics when N > 1, and it does not exist as the self duality of the theory. In the weak coupling region, Im(τ ) 1, the system is expected to be in the Higgs phase, which is a gapped phase with deconfined Wilson loops, as it is caused by the condensation of electrically charged particles. Under the S transformation, it is mapped to the strong coupling region, Im(−1/τ ) 1. The vacuum state is now described by the condensation of magnetic monopoles, so the system is again gapped but the Wilson loops are confined. This means that the two systems at τ and −1/τ are different as topological orders, and the global nature of the low-energy dynamics is not preserved under the S transformation.
When N = 1, the Higgs and confinement phases are the same phase as we do not have an order parameter to distinguish these phases due to the lack of the one-form symmetry, and the phase diagram can be almost trivial as the phase transition curves disappear [41,42]. We shall concentrate on the case N > 1 in the following of this paper.
Anomaly and topological aspects of the phase diagram
In this section, we uncover various topological aspects of the Cardy-Rabinovici model when N > 1. For this purpose, 't Hooft anomaly matching condition plays an important role. In order to compute 't Hooft anomalies of this model, the continuum formulation is more useful than the lattice formulation, mainly because the θ term can be treated more easily. We first give the continuum reformulation of the model, and then study the anomaly matching condition at θ = π.
Formal description of the Cardy-Ravinobici model in the continuum
In order to analyze the topological aspect of the model, it is desirable to give the continuum formulation. However, it is the U (1) gauge theory coupled to both electrically and magnetically charged particles, and we currently do not have the Lagrangian formulation of such a model.
In this paper, therefore, we give up writing down the continuum theory with manifest locality and unitarity, and assume that those properties are ensured by the lattice formulation given in (2.12) or (2.14). Motivated by (2.14), we express the configuration of the matter fields using their world lines specified by {n µ } and {m µ }. We express the contribution of the electric charge 1 by the Wilson loop, where a is the U (1) gauge field 3 , and [n µ ] is the world line corresponding to {n µ }. Similarly, the contribution of the magnetic charge 1 is expressed by the 't Hooft loop, which is defined as the defect operator [43]: for sufficiently small two-sphere S 2 linking to the loop [{m µ }], the gauge field must satisfy the quantization condition, The Wilson loop appears only in the N -th power, as the electric charge of dynamical particles are quantized in N . We assume that the weight factor, S matter [{n µ }, {m µ }], can be chosen appropriately, so that it is consistent with the axiom of quantum field theories such as locality and unitarity. The full partition function is described by summing up the U (1) gauge field, a: where g is the coupling constant and θ is the vacuum angle. We note that the θ term in (3.5) seems to have an extra factor N in front, and this is necessary for the 2π periodicity of θ. The index theorem, however, tells that on 4-dimensional spin manifolds, so it may be wondering why we need the extra factor N for the 2π periodicity as we would naively expect the 2π/N periodicity in the convention of (3.5). Therefore, we need to explain why the naive 2π/N periodicity is wrong, and how θ can be still 2π periodic. The key ingredient is the Witten effect [6]. If we shift θ → θ + ∆θ with ∆θ = 2π N , then the magnetic particle acquires the extra electric charge N ∆θ 2π = 1. In other words, the pure 't Hooft line is not mapped to itself: (3.7) This implies that if there were no magnetic excitations represented by 't Hooft lines, the theory has the naive 2π/N periodicity, but it can be violated by the presence of such excitations. Moreover, since only W N appears in (3.4), there is no way to recover the 2π/N periodicity. Under the transformation θ → θ + 2π, however, the Witten effect is realized as H → HW −N , and the matter partition function is transformed as We assume this property for the weight factor, and then we identify We note that this is a stronger statement than saying that the theory is self-dual under the T transformation, θ → θ + 2π. Any local observables, including the line operators, should have the same expectation values at θ and θ + 2π, and this is true for the Cardy-Rabinovici model. Next let us see response to the charge conjugation and CP transformations. The charge conjugation C acts as The CP transformation, or the time-reversal transformation, acts only on the electric charge, so the theory also has the CP symmetry at θ = 0 or θ = π if In the lattice model (2.12) or (2.14), the weight factor of the matter fields comes out only of the Coulomb interaction, which corresponds to setting S matter = 0. It obviously satisfies all of the above requirements. So far we have discussed the reformulation of the Cardy-Rabinovici model in the continuum limit. The remaining topic is the electromagnetic duality. Naively thinking, we can realize the electromagnetic duality by requiring as it exchanges the electric and magnetic fields. For studying the local dynamics, such as the free energy density, this condition is indeed sufficient to ensure the duality under S as we noted in Sec. 2.3. However, the electromagnetic duality, S, as a theory is more subtle, and we will come back to this issue later. This theory has the electric Z N one-form symmetry, Z
't Hooft anomaly at θ = π and anomaly matching constraint
The CP symmetry of the model exists only at θ = 0 or θ = π. This is because the CP operation effectively flips the sign of θ, θ → −θ, and recalling that θ ∼ θ + 2π, the only invariant points are at θ = 0, π.
There is an important mixed anomaly between Z [1] N and the CP symmetry at θ = π. This turns out to be the same anomaly or global inconsistency with that of SU (N ) YM theory at θ = π [11,13] (see Refs. [14,16,17,[45][46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61] on related studies). To see this, let us gauge Z [1] N by introducing the Z N two-form background gauge field, B. It is constructed as the U (1) two-form gauge field satisfying N B = dC, (3.17) where C is an auxiliary U (1) gauge field. By this condition, B is restricted to have the quantized flux in the unit of 2π N : where S is a closed two-dimensional surface. Postulating the one-form gauge invariance, the action of the Maxwell theory becomes The first term is manifestly CP-invariant, but the last term is not. Under the CP operation, θ = π is mapped to θ = −π, i.e. the θ angle is shifted by −2π from θ = π. Therefore, the change of the action is given by This means that the partition function changes its phase under the CP transformation at θ = π, depending on the background gauge field B for Z [1] N : Now we should ask whether or not we can cancel this phase factor by adding appropriate local counterterms. If there does not exist such counter terms, then we regard that this is the genuine anomaly. Since the Z
[1]
N gauge symmetry is unbroken in the above computation, it is sufficient to consider the Z [1] N -invariant local counterterms, which is given by Here, the level k should take values in Z for the gauge invariance, and we identify k+N ∼ k. We define the partition function including this local counterterm as and then the CP transformation acts as The anomaly can be eliminated if The anomaly is the genuine one only if such k does not exist. When N is even, there is no k ∈ Z satisfying 2k + 1 = 0 mod N as 2k + 1 is an odd number. Therefore, we find the genuine 't Hooft anomaly for even N . When N is odd, we can choose to eliminate the anomalous phase, so there is no anomaly at θ = π for CP and Z [1] N . We note, however, that we can do the same consideration for θ = 0, and then k should be 0 mod N for the CP-invariant regularization. Therefore, when we extend our consideration from a single theory at a given coupling constant to a family of theories parametrized by the couplings, we find the global inconsistency between θ = 0 and θ = π [11,13]. In the following, let us see how the anomaly or global inconsistency is matched by the phase diagram shown in Fig. 2, or in Fig. 3.
Coulomb phase
According to the free-energy argument in Sec. 2.2, the system can be in Coulomb phase if N is sufficiently large, and it is shown as the gray region in the right panel of Fig. 2. In the Coulomb phase, the only low-energy excitations are basically given by massless free photons, and the low-energy effective theory is just the Maxwell theory.
The anomaly (3.23) can be matched by the existence of those massless excitations. Indeed, the computation of the anomaly (3.23) is performed with the Maxwell Lagrangian, so the anomaly matching condition is obviously satisfied.
Higgs phase
In the Higgs phase, the system is gapped due to the condensation of electrically charged particles. We have seen in the left panel of Fig. 2 (or Fig. 3) that it is realized in the weak-coupling regime, 2π N g 2 > √ 3 2 . We can write the low-energy effective action of the Higgs phase as where v is the characteristic size of the vacuum expectation value, and ϕ is the phase field of the scalar field. In this phase, the one-form symmetry is spontaneously broken, as the Wilson loop obeys the perimeter law. This fact can be seen very easily in the above effective description. In the low-energy limit, we can take v → ∞ as it has the mass dimension 1, and then the effective action takes finite values only if N a = dϕ. (3.31) Especially, the field strength becomes zero in that limit, da = 0, and then the expectation value of the Wilson loop, W ( ) , does not change as we change the loop . This is true for any higher charge Wilson loops, which can be formally expressed as for any n = 0, 1, 2, . . . , N − 1. Therefore, the Higgs phase is the Z N topological order.
Let us see how the anomaly is matched in the low-energy limit, v → ∞. Introducing the background gauge field B, then the low-energy effective action becomes 33) where N B = dC. In order to detect the anomaly, we must make B nontrivial, so that, e.g., For such B, the U (1) gauge field N a + C cannot be an exact form globally, that is dϕ = N a + C (3.34) for any ϕ. In the limit v → ∞, the effective action diverges, and then for such nontrivial B. This obviously satisfies the anomaly equation (3.23) as the both sides are zero. This is how the anomaly is explicitly reproduced in the low-energy effective theory of the Higgs phase.
Confinement phase
As the coupling constant becomes larger, the charge condensation is taken over by the monopole condensation, and the system is in the confinement phase. We have found in the left panel of Fig. 2 that the confinement phase appears even when θ π if 1 2 √ 2 . For the Cardy-Rabinovici model, the confinement phase is the topologically trivial phase. This is because all the nontrivial Wilson loops show the area law and none of the symmetries is broken. In the low-energy limit, it can be formally expressed as lim →∞ W n ( ) = 0, (3.36) for n = 1, 2, . . . , N − 1 mod N . Here, lim →∞ indicates the limit of considering the larger and larger loops, and the perimeter part of the expectation value is assumed to be eliminated by the renormalization. Although the confinement phase is a topologically trivial gapped phase, its groundstate energy shows nontrivial θ dependence, as we have seen in Sec. 2.2. As a consequence, when θ = π, there are two vacua, one of which has the monopole condensation with charge (n, m) = (0, 1) while the another has the dyon condensation with charge (n, m) = (−1, 1). These two vacua are related by CP at θ = π, so the system shows the spontaneous CP breaking, Let us explicitly see how the anomaly (3.23) is reproduced by those two vacuum states. For simplicity, we renormalize the vacuum energy so that the monopole-condensed state at θ = π has the partition function, In this normalization, let us assume that the dyon-condensed state at θ = π has the nontrivial partition function, We note that the absolute value of these partition functions must be the same at θ = π, because they are related by the spontaneously-broken CP transformation. However, their phases do not need to be the same under the existence of background B fields, and we assign a specific phase to the dyon-condensed phase to reproduce the anomaly: the dyoncondensed phase is a nontrivial symmetry-protected topological (SPT) state with the Z
symmetry. The full partition function is given by
Under the CP transformation, This is nothing but the anomaly relation (3.23).
Oblique confinement phase near θ = π
When the coupling constant is sufficiently large, the system is in the confinement phase near θ = 0. For non-zero θ, however, more exotic condensations can occur due to the Witten effect, and they are the oblique confinement phase. When θ = π, the composite of monopole and dyon with the charge (n, m) = (−1, 2) starts to condense when the coupling is strong enough, 2π Here, let us recall that the electric and magnetic charges (e, m) is labeled by (n, m) as in (2.15): (e, m) = (N (−1 + θ/π), 2).
In the oblique confinement phase, it should be obvious by the Debye-screening argument that the line operators with the charge (N n, m) = (−N, 2) obeys the perimeter law for n = 0, N/2 mod N . This can be characterized by the spontaneous breaking pattern 47) and the resulting low-energy theory is the Z 2 topological order. Because of this unusual nature, the oblique confinement phase may provide a new way to find the topological orders in condensed matter systems [62]. We can show that this is good enough to match the anomaly (3.23) using the similar discussion for the anomaly matching in the Higgs phase. Here, let us take another approach instead. Anomaly matching can be satisfied if the symmetry is broken to the anomaly-free subgroup, so it is sufficient to show that Z [1] N/2 and CP does not have an 't Hooft anomaly. LetB is a Z N/2 two-form gauge field, and we replace B byB in (3.23), It may seem that the symmetry is still anomalous in this expression, but we should consider if this anomalous phase can be canceled by adding the local counter term: with k = 0, 1, . . . , N/2 − 1 mod N/2. Under CP, we obtain (3.51) Setting k = −1 mod N/2, the anomalous phase indeed disappears. Therefore, the symmetry is spontaneously broken to the anomaly-free subgroup, and the anomaly matching condition is satisfied. When N is odd, the oblique confinement phase is a trivial phase. It has the mass gap, and the nontrivial Wilson lines are all confined: for n = 0 mod N . Furthermore, unlike the confined phase, the oblique confinement phase respects the CP symmetry at θ = π, so the ground state is unique. Indeed, we should note that the system does not have the genuine 't Hooft anomaly when N is odd, so the trivially gapped state at θ = π is allowed. Even in this situation, the global inconsistency still puts a nontrivial constraint to the phase diagram [13][14][15][16][17][18][19]: the trivial gapped states at θ = 0 and π should be distinguished as the Z N -symmetric SPT states, and there must be a quantum phase transition separating them. We can explicitly see this as follows. The partition function of the oblique confinement phase should have the following B dependence, in order to reproduce (3.23). We note that the monopole-and dyon-condensed phases, (3.38) and (3.39), have the different level of this topological action. As the level is quantized due to the gauge invariance, there is no continuous way to interpolate these levels without phase transitions as long as the Z N symmetry is respected. Therefore, existence of the phase transition curves between the confinement and oblique confinement phases in Fig. 2 is ensured for odd N : even though they are both trivial as intrinsic topological orders, they are different SPT states.
Electromagnetic duality for N = M 2 and anomaly constraint
As we have seen in Sec. 2, the free-energy density of the Cardy-Rabinovici model enjoys the SL(2, Z) self-duality, and it plays the important role to constraint the phase diagram. We note, however, that it is not the self-duality of the theory. One way to see this failure is that the S transformation does not preserve the 1-form symmetry as the electric and magnetic line operators are interchanged. As a result, the global nature of each phase is not preserved under SL(2, Z), and thus, for example, a topologically trivial phase is mapped to an intrinsic topological order, or vice versa.
In this section, we obtain the SL(2, Z) self-dual theory out of the Cardy-Rabinovici model with N = M 2 by gauging a part of the one-form symmetry: (4.1) After this operations, the theory has the SL(2, Z)-gravity mixed anomaly, and we can find the further constraint on the phase diagram, and it turns out to explain more details on Fig. 3.
The Z [1]
M -gauged Cardy-Rabinovici model for N = M 2 Let us construct the self-dual theory out of the Cardy-Rabinovici model. It turns out that we can obtain such a theory when N is a square number, say N = M 2 , by gauging M 2 . In order to gauge the Z [1] M symmetry [63,64], we introduce the dynamical where c is a U (1) 1-form gauge field and b is a U (1) 2-form gauge field. By this condition, the period of b is quantized in the unit of 2π M . We require the 1-form gauge invariance, In order to establish the 1-form gauge invariance, we should replace the field strength da by In the gauge a = 0, this is nothing but the fundamental Wilson line for the new U (1) gauge field c. In the path integral (3.4), only W N appears, which is equal to W M after gauging, as we have set N = M 2 . Next let us discuss the magnetic line operator. The 't Hooft line H( ) is defined as the defect operator specified the magnetic flux around it. Since we should replace the field strength, its definition is changed as for small two-spheres S 2 linking to . In the gauge a = 0, this reads By combining these data, the partition function of the Z [1] M -gauged theory is given as This is the Z
Symmetry, self-duality and anomaly
Let us discuss the topological properties of the Z M symmetry does not affect the local dynamics. For example, non-topological degeneracy of the vacua should be in common for these two theories, so they have the same phase diagram, Fig. 3, while the topological characterization of each phases is affected by gauging Z [1] M .
1-form symmetry and its anomaly
The 1-form symmetry of the original Cardy-Rabinovici model is Z In the gapped phase of the gauged model, the 1-form symmetry is always spontaneously broken. In the Higgs phase, the electric lines are deconfined, for n = 1, . . . , M − 1, and other nontrivial lines are all confined. In all of these situations, we find the symmetry breaking pattern, M , (4.17) and the unbroken Z [1] M symmetry carries the information of the condensation. The U (1) pure Maxwell theory enjoys the U (1) [1] ele. × U (1) [1] mag. symmetry, and the (Z M ) mag. symmetry is its subgroup. The U (1) [1] ele. × U (1) [1] mag. symmetry has the mixed anomaly: introducing the background 2-form gauge fields, B ele. and B mag. , the theory is no longer gauge invariant in the genuine four dimensions, and the gauge invariance requires the anomaly inflow from the 5-dimensional bulk topological action, Even when restricting the symmetry to (Z M ) mag. , this topological action is still nontrivial, and the anomaly matching condition is imposed. This anomaly is also necessary in order to reproduce the original Z [1] M 2 symmetry when gauging (Z [1] M ) mag. [65]. The spontaneous breaking, (4.17), is indeed one of the scenarios matching this 't Hooft anomaly.
SL(2, Z) self duality, Z 6 subgroup, and mixed gravitational anomaly
The original Cardy-Rabinovici model has the Z M 2 electric 1-form symmetry but does not have the magnetic 1-form symmetry. When we perform the electromagnetic duality transformation, i.e. the S transformation, these two symmetries should be exchanged, and the theory is mapped to a different theory with the Z M 2 magnetic 1-form symmetry and without the electric 1-form symmetry. In this sense, the original Cardy-Rabinovici model cannot be a self-dual theory, even though its local dynamics of charges and monopoles shows an interesting self-duality.
In the Z [1] M -gauged Cardy-Rabinovici model (4.10), this problem does not exist since we have same amounts of the electric and magnetic 1-form symmetries. Therefore the theory enjoys the SL(2, Z) self-duality, which is the same with that of pure Maxwell theory. They are generated by S and T defined in (2.27) and (2.28), with the complex coupling τ = θ 2π +i 2π N g 2 . We note that, in this theory, θ and θ+2π should not be identified as in (3.10). Instead, we can only say that those two points are related by the duality transformation, T ∈ SL(2, Z), which maps τ → τ + 1. This is because the expectation value of nontrivial 't Hooft loops do not show the 2π periodicity, As an identification of θ, the periodicity is extended to 20) due to the gauging of Z [1] M , which corresponds to τ ∼ τ + M . This should be compared with the similar extension of the θ periodicity between SU (N ) and SU (N )/Z N Yang-Mills theories [64].
There are some points in the space of τ , which are fixed points under certain subgroups of SL(2, Z). There are two important fixed points. The first one is τ = i, which is the fixed point of Other points with a nontrivial stabilizer subgroup of P SL(2, Z) in the upper half plane can be mapped to either of the above two points by combinations of S and T . For those fixed points, their stabilizer subgroups of SL(2, Z) are promoted to symmetry from the self-duality of the theory. Such a symmetry group may have an 't Hooft anomaly. We here focus on its mixed anomaly with the gravity. The key point is that the partition function of the quantum Maxwell theory on generic manifold is not invariant under SL(2, Z), but instead it behaves as the modular form 6 [28,29], (4.23) Here, χ is the Euler number of the spin four-manifolds, and σ is the signature of the spin four-manifolds, which is equal to 1 3 of the Pontryagin class p 1 , and R is the curvature 2-form. On spin four-manifolds, the signature σ is known to be quantized in 16Z by the Rokhlin's theorem, and the generator is the K3 surface, σ(K3) = −16.
Using this information, let us compute the mixed anomaly between the subgroup of SL(2, Z) and gravity [30]. Let us first consider the case τ = i, where (Z 4 ) S is a symmetry. The extra factor under the S transformation in (4.24) is (4.26) In the last equality, we use the fact that σ ∈ 16Z on spin manifolds, and thus the S transformation does not have a mixed 't Hooft anomaly with gravity. We can, however, find an interesting anomaly at τ * = exp(πi/3), (4.27) In the case of Z Maxwell (τ ), we have u = 1 4 (χ − σ) and v = 1 4 (χ + σ).
where ST −1 generates the Z 6 symmetry.
For the K3 surface, this anomalous phase is exp(−2πi/3), and thus (Z 6 ) ST −1 has an order-3 mixed anomaly with the signature density. This computation is explicitly done for the pure Maxwell theory, but the same anomaly should exist for the gauged Cardy-Rabinovici model as the anomaly does not change under the continuous SL(2, Z) preserving deformations. We note that the following assumption is made to justify this argument: the gauged Cardy-Rabinovici model enjoys the SL(2, Z) self duality and the Lorentz invariance at low-energies.
Recently, it has been proven that the mixed gravitational anomaly cannot be matched by the topologically ordered phase if the anomaly is detectable on the K3 surface [66,67]. As a result, in order to match the anomaly, the system requires certain massless excitations, such as free photons in the Coulomb phase, or the vacuum break the symmetry, (4.29) In the left panel of Fig. 2, there are three degenerate ground states at τ = τ * , where condensations of the three charges (1, 0), (0, 1) and (−1, 0) occur. connected by the ST −1 transformation. This implies the spontaneous symmetry breaking (4.29) of the Z 3 subgroup in (Z 6 ) ST −1 that matches the anomaly. Once we have the Z 3 breaking at τ = τ * , we can find similar breaking of Z 3 at other points by applying SL(2, Z) transformations. For instance, the phase boundary of condensations of the charges (0, 1), (−1, 1) and (−1, 2) at θ = π can be obtained by applying ST −2 to τ = τ * . In the right panel of Fig. 2, the system at τ * is in the Coulomb phase, and the anomaly matching is again satisfied.
Summary and discussion
In this paper, we revisit the phase structure of the four-dimensional lattice U (1) gauge theory with the θ angle, which was originally proposed by Cardy and Rabinovici [7,8]. This Cardy-Rabinovici model has been expected to show various phase transitions depending on the coupling, g 2 , and θ based on heuristic free-energy arguments of possible condensations and also on its consistency with the SL(2, Z) self-duality of local dynamics [7,8]. We show that this model has the mixed anomaly, or global inconsistency, between Z [1] N and CP at θ = π exactly in the same way with SU (N ) YM theory. We confirm that the proposed phase diagram is consistent with the constraint by the anomaly matching condition.
In particular, we discuss properties of the oblique confinement phase around θ = π in details. This phase is caused by the condensation of composite particles with the charge (n, m) = (−1, 2). When N is even, the one-form symmetry is spontaneously broken as Z [1] N → Z [1] N/2 and the low-energy physics is described by the Z 2 topological order. We show that this is indeed sufficient in order to match the mixed 't Hooft anomaly for Z [1] N and CP at θ = π, so the oblique confinement phase realizes one of the minimal scenarios to match the anomaly.
For odd N , the genuine anomaly is not present at θ = π and thus the trivially gapped phase is allowed to appear. The oblique confinement phase for odd N is indeed such a phase: it is gapped because of the condensation and there are no deconfined lines as none of the test particles can have a charge parallel to the charge (n, m) = (−1, 2). However, the theory has the global inconsistency between θ = 0 and θ = π, and thus the oblique confinement phase at θ = π for odd N is a different SPT state from the usual confinement phase caused by the monopole condensation. These arguments justify the presence of phase transitions, which were conjectured by Cardy and Rabinovici.
What are the possible implications to the phase diagram of SU (N ) YM theory? In the 't Hooft large-N limit, there is a convincing argument showing that the anomaly at θ = π is matched by spontaneous breakdown of CP symmetry [21], and it is supported by the holographic model [26] and also by semiclassical computations of deformed or softly-broken supersymmetric YM theories [68][69][70][71][72][73]. When N is not so large, however, the dynamics at θ = π may be different. The presence of anomaly, or global inconsistency, ensures that there must be at least one quantum phase transition as we change θ from 0 to 2π [11,13]. In Ref. [11], it has been discussed that the Coulomb phase may appear in a finite window including θ = π for N = 2 as one of possible exceptions from the large-N viewpoint. Indeed, this possibility can be realized in the Cardy-Rabinovici model as we can see in the right panel of Fig. 2. We should note, however, that the local dynamics of the YM theory is very different from that of the Cardy-Rabinovici model. In the Cardy-Rabinovici model, the Coulomb phase can appear if C N < 2 √ 3 according to the free-energy discussion, so the Coulomb phase is preferred for larger N instead of smaller ones.
As another possibility motivated by the Cardy-Rabinovici model, there may be a finite window of the oblique confinement phase around θ = π for pure SU (N ) YM theories with small N . As the anomaly involves the one-form symmetry, the anomaly constraint exists even at finite temperatures as the four-dimensional anomaly induces a mixed anomaly between Z 2 is spontaneously broken while Z [1] 2 is unbroken. In this scenario, the deconfinement temperature must remain nonzero at any values of θ, because the oblique confinement phase cannot be continuously connected to the high-temperature deconfinement phase. This statement is true also for N = 3 if the oblique confinement is realized around θ = π at low temperatures. For odd N , the oblique confinement phase does not break any symmetry, while the high-temperature deconfinement phase breaks Z 3 . In this paper, we have also discussed the Z [1] M -gauged Cardy-Rabinovici model with the charge N = M 2 . In the original Cardy-Rabinovici model, the SL(2, Z) self-duality found in Ref. [8] is limited to the local aspect of the theory, mainly because the electromagnetic charge lattice is not invariant under SL(2, Z). In the gauged model, the SL(2, Z) selfduality is true also for the global aspect of the theory, and we can discuss its anomaly to constrain the phase diagram. The gapped phases are always Z M topological orders, and the theory also enjoys the SL(2, Z)-gravity mixed anomaly. Especially at τ = exp(2πi/3), the (Z 6 ) ST −1 subgroup of SL(2, Z) becomes a symmetry of the theory, and it has the mixed anomaly with the signature density. As a consequence of the anomaly matching, the spontaneous breaking (Z 6 ) ST −1 → (Z 2 ) C is required. It would be interesting if one can find a pure anomaly of the duality in our model which has been studied well for the Maxwell theory in Refs. [74,75]. It has been known that various four-dimensional theories with SL(2, Z) duality can be constructed out of the two-torus compactification of 6-dimensional (2, 0) theories. It is quite amusing if certain deformation of such a theory can show the rich structure of the phase diagram as studied in this paper. | 13,393.8 | 2020-09-21T00:00:00.000 | [
"Physics"
] |
Polymeric Amines and Ampholytes Derived from Poly(acryloyl chloride): Synthesis, Influence on Silicic Acid Condensation and Interaction with Nucleic Acid
Polymeric amines are intensively studied due to various valuable properties. This study describes the synthesis of new polymeric amines and ampholytes by the reaction of poly(acryloyl chloride) with trimethylene-based polyamines containing one secondary and several (1–3) tertiary amine groups. The polymers contain polyamine side chains and carboxylic groups when the polyamine was in deficiency. These polymers differ in structure of side groups, but they are identical in polymerization degree and polydispersity, which facilitates the study of composition-properties relationships. The structure of the obtained polymers was confirmed with 13C nuclear magnetic resonance infrared spectroscopy, and acid-base properties were studied with potentiometry titration. Placement of the amine groups in the side chains influences their acid-base properties: protonation of the amine group exerts a larger impact on the amine in the same side chain than on the amines in the neighboring side chains. The obtained polymers are prone to aggregation in aqueous solutions tending to insolubility at definite pH values in the case of polyampholytes. Silicic acid condensation in the presence of new polymers results in soluble composite nanoparticles and composite materials which consist of ordered submicrometer particles according to dynamic light scattering and electron microscopy. Polymeric amines, ampholytes, and composite nanoparticles are capable of interacting with oligonucleotides, giving rise to complexes that hold promise for gene delivery applications.
Introduction
Polymeric amines have wide-ranging applications as emulsifiers, components of pharmaceutical preparations, in cation exchange resins, as matrices for composites and catalysts [1][2][3], and in drug and gene delivery systems [4][5][6][7][8][9]. Carbochain polymeric amines, e.g., poly(vinyl amine) are often synthesized by means of polymer-analogous reactions (modification of the existing polymer) [10]. This is due to the high activity of the amine groups that results in difficulties in monomer purification and polymerization. Moreover, some monomers (vinyl amine) do not exist in a stable form. When polymer-analogous reactions are carried out under moderate conditions without the destruction of the main polymeric chain, it is possible to obtain a set of (co)polymers with the same polymerization degree and various ratios of the functional groups. Such sets of (co)polymers are interesting in the context of investigating the composition-property relations as freezing one parameter (polymerization degree) allows one to concentrate solely on the composition. context of investigating the composition-property relations as freezing one parameter (polymerization degree) allows one to concentrate solely on the composition.
This work aimed at the synthesis of polymeric amines and ampholytes starting from poly(acryloyl chloride) (PAC). This polymer can be readily obtained by radical polymerization of the monomer [11]. The chloro anhydride group readily reacts with primary and secondary amines, which allows the introduction of various substituents through the amide group. Unfortunately, this polymer is rarely applied in polymer-analogous reactions. We used polyamines containing one secondary and several tertiary amine groups in the reaction with PAC ( Figure 1). These amines were obtained by a stepwise procedure [12] which we designed with the objective to synthesize polyamines simulating substances found in diatom algae [13]. Variation of the amine:-COCl ratio allowed us to obtain completely substituted homopolymers and polyampholytes after the hydrolysis of the residual chloro anhydride groups. An alternative way to obtain these polymers is the post-polymerization modification of macromolecules from the activated esters of acrylic acid [14]. This method has some disadvantages when compared with the PAC approach, activated esters of acrylic acid are relatively expensive substances and the modification reaction requires an elevated temperature and activators [15], which could result in the destruction of the main polymeric chain.
The new polymers were studied as activators of silicic acid condensation, giving rise to soluble composite nanoparticles and solid materials. The interaction of the polymeric amines and composite nanoparticles with DNA oligonucleotide was also studied to demonstrate the potential of these compounds for gene delivery applications.
The new polymers were studied as activators of silicic acid condensation, giving rise to soluble composite nanoparticles and solid materials. The interaction of the polymeric amines and composite nanoparticles with DNA oligonucleotide was also studied to demonstrate the potential of these compounds for gene delivery applications.
Characterization of the Copolymers and Composites
Fourier transform infrared (FTIR) spectra were recorded with an Infralum FT-801 spectrometer (Simex JST, Novosibirsk, Russia) using an attenuated total reflection (ATR) attachment (for polymers) or KBr pellets (for composites). 1 H and 13 C Nuclear magnetic resonance (NMR) spectra were Polymers 2017, 9, 624 3 of 18 obtained on a Bruker DPX 400 spectrometer (400.13 and 100.61 MHz, respectively, Billerica, MA, USA) in D 2 O. Relaxation delays (D1) between pulses in 13 C NMR were 10 s, which according to preliminary experiments, allowed us to prevent the influence of relaxation effects on the integral intensity of the 13 C NMR peaks. This is more than five times greater than the relaxation times in acrylic polymers [16]. Potentiometry measurements were performed on a "Multitest" ionometer using a combined pH-electrode in a temperature-controlled cell at 20 ± 0.02 • C. Solutions of 1.5 g·L −1 were prepared for potentiometry titration. A solution of 0.1 M HCl was used for adjusting the pH of the solutions up to 2.8, and 0.1 M NaOH was employed as a titrant.
The molecular mass of the new polymers was estimated via size-exclusion chromatography (SEC) using a Milichrom A02 chromatograph (JSC Econova, Novosibirsk, Russia) with 2 mm × 75 mm column filled with SRT SEC-100 5 µm phase (Sepax Technologies, Inc., Newark, NJ, USA), operated at 35 • C using 10:90 methanol:water solution of TFA, pH 2.5. The flow rate of the mobile phase was set at 0.03 mL·min −1 (pressure 100 psi), whereas the injection volume for 1 g·L −1 of the sample solution was 1 µL. Fractionated samples of poly(vinyl formamide) (PVFA) [17] were applied as standards (M w /M n < 1.3).
Dynamic light scattering (DLS) experiments were performed using a LAD-079 instrument built at the Institute of Thermophysics (Novosibirsk, Russia). The solutions were purified from dust using syringe filter units (0.45 µm pore size, Sartorius 16555-Q Minisart, Sartorius AG, Chöttingen, Germany). The experiments were performed at 20 ± 0.02 • C. Measurements were carried out with a 650-nm solid-state laser at a 90 • scattering angle. Correlation functions were analyzed with a polymodal model using a random-centroid optimization method [18]. Zeta-potential (ζ) of the polymer-oligonucleotide complexes was measured with a Zetasizer ZS90 (Malvern Instruments Ltd., Malvern, UK).
Transmission electron microscopy (TEM) of the soluble particles was performed using a LEO 906E instrument (Zeiss, Oberkochen, Germany) on freeze-dried solutions diluted ten-fold just before freezing. The solid products were dispersed in hexane and drops of the solution were placed on formvar film-coated copper grids. Scanning electron microscopy (SEM) of the precipitates was performed using FEI Quanta 200 instruments (Thermo Fisher Scientific, Waltham, MA, USA). The samples were placed on double-sided sticky carbon tape mounted on aluminum sample holders and then sputter coated with gold using a Balzers SDC 004 Sputter coater (Oerlikon Corporate Pfaffikon, Altendorf, Switzerland). The coating settings (working distance 50 mm, current 15 mA, time 75 s) corresponded to a 12-nm gold coating as per the device manual. The surface area of the composites was measured by nitrogen adsorption at the boiling point of nitrogen (−196 • C) with a Sorbtometr-M device (JSC Katakon, Novosibirsk, Russia), and the data were treated with the Brunauer-Emmett-Teller method [19].
Synthesis of Poly(acryloyl chloride) (PAC)
PAC was synthesized similar to the protocol described earlier [11] by polymerization of acryloyl chloride (5 g) in 20 mL of dioxane with the addition of 0.1 g AIBN in argon atmosphere at 60 • C for 48 h. PAC was used in the reaction with polyamines without comprehensive purification (see below). With the objective to estimate yield and polymerization degree of the PAC, the reaction mixture was poured into water (50 mL) and dialyzed against water. After freeze drying, poly(acrylic acid) was obtained with 90% yield. According to viscometry data, [20] the polymerization degree of the poly(acrylic acid) and, correspondingly of PAC, was found to be 220.
Reaction of PAC with Polyamines
PAC solution in dioxane was poured into 40 mL of hexane in a centrifuge tube where the obtained precipitate was separated by centrifugation (3000× g, 5 min) and dissolved in 10 mL of DMFA. The obtained PAC solution was cooled to 0 • C and polyamine in 10 mL of DMFA (cooled to 0 • C) was added. The polyamine:PAC ratios used are presented in Table 1. The reaction mixture was stirred for 30 min at 0 • C and 30 min at room temperature, following which the reaction mixture was poured into 50 mL of water and stirred until the polymer completely dissolved. In experiments with polyamine deficiency, NaOH pellets were added until dissolution. The obtained solution was dialyzed against water and freeze dried. Yields of the polymers were found to be above 95%. 0 °C) was added. The polyamine:PAC ratios used are presented in Table 1. The reaction mixture was stirred for 30 min at 0 °C and 30 min at room temperature, following which the reaction mixture was poured into 50 mL of water and stirred until the polymer completely dissolved. In experiments with polyamine deficiency, NaOH pellets were added until dissolution. The obtained solution was dialyzed against water and freeze dried. Yields of the polymers were found to be above 95%.
Study of Silicic Acid Condensation in the Presence of New Polymers and Synthesis of Soluble and Solid Composites
In the first stage, the condensation of silicic acid in the presence of new polymeric amines and ampholytes was studied by potentiometry titration of the sodium silicate and polymer mixtures. The obtained results (Table 1) allowed us to select pH ranges where the formation of soluble or insoluble products were expected. Experiments at the desired pH values were performed by mixing stock Na2SiO3·5H2O and polymer solutions (100 mM and 2 g·L −1 correspondingly), the desired amount of water was added, and the pH was adjusted with 1 and 0.1 M HCl solutions. The addition of HCl was done in <1 min using data from preliminary experiments. This allowed us to prevent Si(OH)4 condensation at a pH higher than the desired value. Fifty mM HEPES buffer (pH = 7.4) was presented in the solutions prepared for study interactions with oligonucleotide. The obtained solutions were studied with dynamic light scattering (DLS), molybdenum blue assay [21,22], and TEM after freezedrying. Composite precipitates obtained in some systems were collected with centrifuge (3000× g, 10 min), washed twice with cold water (2-4 °C), freeze-dried, and studied with FTIR and SEM.
Study of Silicic Acid Condensation in the Presence of New Polymers and Synthesis of Soluble and Solid Composites
In the first stage, the condensation of silicic acid in the presence of new polymeric amines and ampholytes was studied by potentiometry titration of the sodium silicate and polymer mixtures. The obtained results (Table 1) allowed us to select pH ranges where the formation of soluble or insoluble products were expected. Experiments at the desired pH values were performed by mixing stock Na 2 SiO 3 ·5H 2 O and polymer solutions (100 mM and 2 g·L −1 correspondingly), the desired amount of water was added, and the pH was adjusted with 1 and 0.1 M HCl solutions. The addition of HCl was done in <1 min using data from preliminary experiments. This allowed us to prevent Si(OH) 4 condensation at a pH higher than the desired value. Fifty mM HEPES buffer (pH = 7.4) was presented in the solutions prepared for study interactions with oligonucleotide. The obtained solutions were studied with dynamic light scattering (DLS), molybdenum blue assay [21,22], and TEM after freeze-drying. Composite precipitates obtained in some systems were collected with centrifuge (3000× g, 10 min), washed twice with cold water (2-4 • C), freeze-dried, and studied with FTIR and SEM.
Synthesis and Electrophoresis of Oligonucleotide Complexes with Polymers and Composite Nanoparticles
The interaction between 21-mer DNA oligonucleotide GATCTCATCAGGGTACTCCTT-6-FAM and the synthesized polymers or composite nanoparticles was investigated by electrophoresis on agarose gel. Complexes were prepared by mixing solutions of polymer (or composite) and oligonucleotide. The samples were incubated at room temperature for 30 min and placed in the wells of 1% agarose gel. Controls for free oligonucleotide and free polymer (or composite) were also loaded to the gel. The gel running buffer was composed of 40 mM Tris acetate (pH adjusted to pH 7.4) and 1 mM ethylenediaminetetraacetic acid (EDTA). A glycerol gel loading buffer was applied (0.5% sodium dodecyl sulfate, 0.1 M EDTA (pH = 7.4), 50% glycerol for 10× reagent). A solution of Polymers 2017, 9, 624 5 of 18 0.05% bromophenol blue was applied to visualize the "dye front" and to calculate the relative mobility (R f ). The gel was run at 90 V and the fluorescein-tagged oligonucleotide was visualized on a UV transilluminator. Mobility of the free DNA oligonucleotide was equal to the mobility of bromophenol blue (R f = 1).
Study of Oligonucleotide Complex Penetration into Model Yeast Cells
The cells of S. cerevisiae were maintained at 30 • C on YEPD medium (0.5% yeast extract, 1% peptone, 2% glucose). The cells were grown at 30 • C in 10 mL plastic vials with 2 mL of liquid YEPD. Cells in the logarithmic or stationary growth phase were used in the experiments. Oligonucleotide complexes were added to the cell culture and observations were done after 24 h of cultivation. A Motic AE-31T microscope (Motic, Xiamen, China) equipped with a fluorescence attachment (emission at 470 nm) and Moticam Pro 205A camera (Motic, Xiamen, China)) was used for observation of the yeast cells.
Results
The reaction of PAC with polyamines gave rise to white powder-like products. New polymers were obtained with high yields (>95%), as opposed to similar reactions with long-chain polyamines [23], which were complicated by cross-linking due to the presence of two -NH groups in the polyamine molecules. All polymers (except for P02 and P03) were soluble in water and insoluble in organic solvents. P02 and P03 were soluble in water with HCl acidification to pH 3. The FTIR spectra of the polymers ( Figure S1 in Supplementary Materials) revealed bands characteristic of amides (1630 cm −1 , ν C=O), amines (1150, 1050 cm −1 , ν C-N; broad absorbance band at 2200-3000 cm −1 , ν N-H in protonated groups), carboxylic groups (1320 cm −1 , ν C-O; 1558 cm −1 , -COO − , ν a ; 1710-1720 cm −1 , -COOH, ν C=O), and methyl and methylene groups (1460-1470 cm −1 , δ a ; 2760-2930 cm −1 , ν) [24,25]. According to the FTIR spectra, carboxylic units were present in the form of -COOH (P03, P13, and P14 samples), or mainly as carboxylate anions (P04 and P24). The latter samples were prepared with the addition of NaOH during the hydrolysis of -COCl moieties and further purification. These polymers contained non-neutralized amine groups, which was confirmed with the absence of a broad absorbance band at 2200-3000 cm −1 . The FTIR spectrum of P23 contained a weak band of the -COOH group at 1710 cm −1 , shoulder at 1570 cm −1 (-COO − ), and band of protonated amines (2200-3000 cm −1 ), which corresponded to the zwitterionic structure of the polymer. 1 H NMR spectra of the polymers were not informative and the composition of the copolymers was calculated on the basis of 13 C NMR data ( Figure 3) using integrals of the amide peak near 176 ppm (b) and the peak for carboxylic group near 178.5 ppm (a). In the case of low content of the amide units ( Figure 3, P04 sample) the amide peak fused with the carboxylic signal and the peak of the methylene groups near 21 ppm was applied for the calculations. The composition of the synthesized polymers is presented in Table 1 and these data demonstrated that polymeric amines and carboxyl containing polyampholytes could be obtained by the reaction between PAC and polyamines. Molecular weight of the new polymers (P n ) is 26-58 kDa and increases with increase of the length of the polyamine substituent ( Table 1). The polydispersity (PDI) of the polymers is typical for macromolecules obtained by radical polymerization. Polymers 2017, 9, 624 6 of 18 Copolymers containing a high amount of carboxylic groups are not water soluble at some pH values (Table 1). This is a typical characteristic of polyampholytes as interaction between oppositelycharged functional groups decrease their hydrophilicity. Interestingly, the P23 sample was soluble in the entire pH range studied (2.8-11) as opposed to the P03 and P13 samples, which contained a similar amount of polyamine chains. The high solubility of P23 was correlated to a higher length of the polyamine chains. The -COOH: amine group ratio was found to be 40:60, 36:124, and 46:162 for the P03, P13, and P23 samples, respectively. Potentiometric titration of homopolymers (P01 and P11 samples) was performed to obtain the dependence of pK of conjugated acid on ionization degree α, which was calculated according to Reference [26] by Equation (1): where [NaOH] is the concentration of alkali added; and Ctotal is the total concentration of the conjugated acid units. This equation accounted for the self-ionization and hydrolysis of basic and protonated groups. The polymers under investigation contained some amine units associated with HCl. Hence, an excess of HCl was added to the solutions before titration. During NaOH addition, we observed an inflection on the pH vs. volume curve which corresponded to the commencement of titration of the conjugated acid (protonated amine groups).
[NaOH] used for α calculations was calculated from this inflection point. pK was calculated according to the Henderson-Hasselbach equation [27]: Copolymers containing a high amount of carboxylic groups are not water soluble at some pH values (Table 1). This is a typical characteristic of polyampholytes as interaction between oppositely-charged functional groups decrease their hydrophilicity. Interestingly, the P23 sample was soluble in the entire pH range studied (2.8-11) as opposed to the P03 and P13 samples, which contained a similar amount of polyamine chains. The high solubility of P23 was correlated to a higher length of the polyamine chains. The -COOH: amine group ratio was found to be 40:60, 36:124, and 46:162 for the P03, P13, and P23 samples, respectively. Potentiometric titration of homopolymers (P01 and P11 samples) was performed to obtain the dependence of pK of conjugated acid on ionization degree α, which was calculated according to Reference [26] by Equation (1): where [NaOH] is the concentration of alkali added; and C total is the total concentration of the conjugated acid units. This equation accounted for the self-ionization and hydrolysis of basic and protonated groups. The polymers under investigation contained some amine units associated with HCl. Hence, an excess of HCl was added to the solutions before titration. During NaOH addition, we observed an inflection on the pH vs. volume curve which corresponded to the commencement of titration of the conjugated acid (protonated amine groups).
[NaOH] used for α calculations was calculated from this inflection point. pK was calculated according to the Henderson-Hasselbach equation [27]: The obtained pK vs. α data ( Figure 4) showed a progressive increase of pK with α for P01 and P11 polymers. This effect is typical for polyelectrolytes, e.g., poly(vinyl amine) [28] and arises due to the electrostatic effect. An increase in α corresponded to a decrease in the positive charge on the polymer chain, which decreases the dissociation activity of -NH + groups, thereby increasing the pK value. Both curves exhibited an extrapolated pK = 10.5 at α = 1, which corresponded to the non-protonated polymeric chain. This value was in agreement with the pK of low-molecular amines [29]. In the case of the P11 polymer, two kinds of electrostatic interactions between protonated amine groups were possible ( Figure 5): the interactions between neighboring polymeric units (1); and interactions between groups in the same side chain (2). The obtained pK vs. α data ( Figure 4) showed a progressive increase of pK with α for P01 and P11 polymers. This effect is typical for polyelectrolytes, e.g., poly(vinyl amine) [28] and arises due to the electrostatic effect. An increase in α corresponded to a decrease in the positive charge on the polymer chain, which decreases the dissociation activity of -NH + groups, thereby increasing the pK value. Both curves exhibited an extrapolated pK = 10.5 at α = 1, which corresponded to the nonprotonated polymeric chain. This value was in agreement with the pK of low-molecular amines [29]. In the case of the P11 polymer, two kinds of electrostatic interactions between protonated amine groups were possible ( Figure 5): the interactions between neighboring polymeric units (1); and interactions between groups in the same side chain (2). The initial part of the P11 curve lies significantly below the P01 curve, which points to the higher importance of the interactions in the side chains. Both curves had an initial plateau which corresponded to the elimination of protons from the polymeric chain without a decrease in the strength of the conjugate acid. This effect was possibly connected with a relatively high distance between nitrogen atoms in the neighboring units, and it was necessary to remove some protons for initiating the electrostatic effects. The plateau region observed in the case of P11 was approximately two times shorter when compared to P01 as P11 contained two amine groups in the side chain and an equal α corresponded to twice the number of ionized side chains in P11.
The particle size of the solutions of the new polymers was studied using DLS ( Figure 6) at pH 7.4. Most of the polymers exhibited a bimodal distribution of the particle size: one centered around 10-40 nm, possibly due to single macromolecules or small aggregates; and another at 200-1000 nm due to the formation of large aggregates. The size of the aggregates was often larger than the diameter of the filter pores (450 nm), which pointed to a reversible destruction of the aggregates during filtration, similar to imidazole-containing polyampholytes [30]. The DLS data are presented as time correlation function intensity vs. particle size, which provided an overestimation of the large particles fraction as the scattering intensity is proportional to particle size in degree 3, or more [31]. The obtained pK vs. α data ( Figure 4) showed a progressive increase of pK with α for P01 and P11 polymers. This effect is typical for polyelectrolytes, e.g., poly(vinyl amine) [28] and arises due to the electrostatic effect. An increase in α corresponded to a decrease in the positive charge on the polymer chain, which decreases the dissociation activity of -NH + groups, thereby increasing the pK value. Both curves exhibited an extrapolated pK = 10.5 at α = 1, which corresponded to the nonprotonated polymeric chain. This value was in agreement with the pK of low-molecular amines [29]. In the case of the P11 polymer, two kinds of electrostatic interactions between protonated amine groups were possible ( Figure 5): the interactions between neighboring polymeric units (1); and interactions between groups in the same side chain (2). The initial part of the P11 curve lies significantly below the P01 curve, which points to the higher importance of the interactions in the side chains. Both curves had an initial plateau which corresponded to the elimination of protons from the polymeric chain without a decrease in the strength of the conjugate acid. This effect was possibly connected with a relatively high distance between nitrogen atoms in the neighboring units, and it was necessary to remove some protons for initiating the electrostatic effects. The plateau region observed in the case of P11 was approximately two times shorter when compared to P01 as P11 contained two amine groups in the side chain and an equal α corresponded to twice the number of ionized side chains in P11.
The particle size of the solutions of the new polymers was studied using DLS ( Figure 6) at pH 7.4. Most of the polymers exhibited a bimodal distribution of the particle size: one centered around 10-40 nm, possibly due to single macromolecules or small aggregates; and another at 200-1000 nm due to the formation of large aggregates. The size of the aggregates was often larger than the diameter of the filter pores (450 nm), which pointed to a reversible destruction of the aggregates during filtration, similar to imidazole-containing polyampholytes [30]. The DLS data are presented as time correlation function intensity vs. particle size, which provided an overestimation of the large particles fraction as the scattering intensity is proportional to particle size in degree 3, or more [31]. The initial part of the P11 curve lies significantly below the P01 curve, which points to the higher importance of the interactions in the side chains. Both curves had an initial plateau which corresponded to the elimination of protons from the polymeric chain without a decrease in the strength of the conjugate acid. This effect was possibly connected with a relatively high distance between nitrogen atoms in the neighboring units, and it was necessary to remove some protons for initiating the electrostatic effects. The plateau region observed in the case of P11 was approximately two times shorter when compared to P01 as P11 contained two amine groups in the side chain and an equal α corresponded to twice the number of ionized side chains in P11.
The particle size of the solutions of the new polymers was studied using DLS ( Figure 6) at pH 7.4. Most of the polymers exhibited a bimodal distribution of the particle size: one centered around 10-40 nm, possibly due to single macromolecules or small aggregates; and another at 200-1000 nm due to the formation of large aggregates. The size of the aggregates was often larger than the diameter of the filter pores (450 nm), which pointed to a reversible destruction of the aggregates during filtration, similar to imidazole-containing polyampholytes [30]. The DLS data are presented as time correlation function intensity vs. particle size, which provided an overestimation of the large particles fraction as the scattering intensity is proportional to particle size in degree 3, or more [31]. Polymeric amines have been extensively investigated for their ability to influence silicic acid condensation and in the formation of organo-silica composites. The major interest in this class of molecules stems from the fact that polymeric amines are considered as synthetic models of biogenic molecules, such as silaffins, and the resulting composites formed by them with silicic acid often show interesting and useful properties [32]. We found that titration of the polymer-sodium silicate mixture with HCl resulted in the formation of either turbid or transparent systems (Table 1). Silicic acid is capable of condensing to give rise to poly(silicic acid) (PSA) when pH decreased to 10-11 [20,33] and poly(silicic acid) (PSA) could interact with the polymeric amines. Polymers with high contents of amine groups (P01, P11, and P22) did not show precipitates in the presence of sodium silicate. The introduction of small and moderate amounts of carboxylic units resulted in turbid systems even if the polymer was soluble in the entire pH range being studied. These results may be attributed to the formation of an inter-polymeric complex in the polymer-PSA system. Precipitation of these systems often depends on the acid-base ratio and an excess of amine units can result in soluble nonstoichiometric complexes as discussed in our review [32]. Introduction of acidic carboxylic units into the polymer chain equalized the acid-base ratio and the polymer-PSA complex became insoluble. The polymer-PSA complexes based on polymers with a large amount of acidic units (P04, P14, and P24) were more soluble than the free polymers. We hypothesized that these complexes contained an excess of acidic groups (carboxylic and silanol) which made them soluble.
The influence of various substances on the condensation of silicic acid can be monitored using the molybdate method [21,22], which allowed the measurement of the monomer and dimer concentrations of silicic acid. Study of Si(OH)4 condensation in the presence of new polymers ( Figure 7) revealed an acceleration of the condensation process with polymers containing a considerable amount of amine units similar to our results with poly(vinyl amine) [34]. The effect was visible at the very early stages of condensation (kthird values) and the polymers decreased the equilibrium Polymeric amines have been extensively investigated for their ability to influence silicic acid condensation and in the formation of organo-silica composites. The major interest in this class of molecules stems from the fact that polymeric amines are considered as synthetic models of biogenic molecules, such as silaffins, and the resulting composites formed by them with silicic acid often show interesting and useful properties [32]. We found that titration of the polymer-sodium silicate mixture with HCl resulted in the formation of either turbid or transparent systems (Table 1). Silicic acid is capable of condensing to give rise to poly(silicic acid) (PSA) when pH decreased to 10-11 [20,33] and poly(silicic acid) (PSA) could interact with the polymeric amines. Polymers with high contents of amine groups (P01, P11, and P22) did not show precipitates in the presence of sodium silicate. The introduction of small and moderate amounts of carboxylic units resulted in turbid systems even if the polymer was soluble in the entire pH range being studied. These results may be attributed to the formation of an inter-polymeric complex in the polymer-PSA system. Precipitation of these systems often depends on the acid-base ratio and an excess of amine units can result in soluble non-stoichiometric complexes as discussed in our review [32]. Introduction of acidic carboxylic units into the polymer chain equalized the acid-base ratio and the polymer-PSA complex became insoluble. The polymer-PSA complexes based on polymers with a large amount of acidic units (P04, P14, and P24) were more soluble than the free polymers. We hypothesized that these complexes contained an excess of acidic groups (carboxylic and silanol) which made them soluble.
The influence of various substances on the condensation of silicic acid can be monitored using the molybdate method [21,22], which allowed the measurement of the monomer and dimer concentrations of silicic acid. Study of Si(OH) 4 condensation in the presence of new polymers (Figure 7) revealed an acceleration of the condensation process with polymers containing a considerable amount of amine units similar to our results with poly(vinyl amine) [34]. The effect was visible at the very early stages of condensation (k third values) and the polymers decreased the equilibrium concentration of free silicic acid in the later stages. Polymers containing an excess of carboxylic units (P14 and P24) did not influence condensation during the early stages, and inhibited condensation at the hour and day time intervals. Condensation of silicic acid at pH 7 proceeded mainly as an S N 2 reaction between the -Si-OH and -Si-O − moieties [33]. Silanol anions arose from the primary PSA particles as Si(OH) 4 is a very weak acid (pK = 9-10) [21,32] in comparison to PSA (pK 0 = 6-7). Interaction between PSA (including primary oligomers) and polyamine chain increased the amount of silanol anions (-SiO − + HN-) and an acceleration in condensation was observed. In the case of the P14 and P24 polymers, the macromolecular chains were negatively charged at pH 7 due to an excess of carboxylic units that prevented their interaction with primary siliceous oligomers, and these polymers did not influence condensation in early stages. Inhibition of condensation at the later time points resulted in a decrease of silanol units in the system. A similar effect was observed in the presence of poly(ethylene glycol) [35], poly(1-vinylpyrrolidone) [36], and poly(1-vinylimidazole) [37]. These polymers form hydrogen bonds with silanol units which stabilize non-ionized -Si-OH groups, thus decreasing the condensation rate. P14 and P24 polymers contain carboxylic anions at pH 7 and these groups can be treated as weak bases (pK of the conjugated acid is 6-6.5 at pH 7) [38]. We hypothesized that the carboxylic anions could stabilize non-ionized -Si-OH groups by means of hydrogen bonding (-C(O)O − ···H-O-Si-) and these interactions resulted in the inhibition of condensation. Possible spectroscopic evidence of these bonds was reported by Danilovtseva et al. [39].
Composite precipitates were obtained when silicic acid was condensed in the presence of polymers P02, P03, and P13 at pH 7. The precipitates contained silica and organic polymers according to the FTIR spectra Si-O-Si (1070 cm −1 ) and Si-OH (965 cm −1 ) bands, ( Figure S2 in Supplementary Materials). The composite particles appeared as 200-500 nm (in diameter) spheres decorated with small particles about 50 nm (Figure 8). BET surface area for composites based on P02, P03, and P13 polymers was 5.8, 1.8, and 23.1 m 2 ·g −1 , respectively. These values were considerably lower than those reported for silica precipitated in aqueous medium (several hundred m 2 ·g −1 ). This anomaly can be attributed to the coating of the siliceous nanoparticles with an organic polymer that led to a decrease in the formation of micro-and mesopores. concentration of free silicic acid in the later stages. Polymers containing an excess of carboxylic units (P14 and P24) did not influence condensation during the early stages, and inhibited condensation at the hour and day time intervals. Condensation of silicic acid at pH 7 proceeded mainly as an SN2 reaction between the -Si-OH and -Si-O − moieties [33]. Silanol anions arose from the primary PSA particles as Si(OH)4 is a very weak acid (pK = 9-10) [21,32] in comparison to PSA (pK0 = 6-7). Interaction between PSA (including primary oligomers) and polyamine chain increased the amount of silanol anions (-SiO − + HN-) and an acceleration in condensation was observed. In the case of the P14 and P24 polymers, the macromolecular chains were negatively charged at pH 7 due to an excess of carboxylic units that prevented their interaction with primary siliceous oligomers, and these polymers did not influence condensation in early stages. Inhibition of condensation at the later time points resulted in a decrease of silanol units in the system. A similar effect was observed in the presence of poly(ethylene glycol) [35], poly(1-vinylpyrrolidone) [36], and poly(1-vinylimidazole) [37]. These polymers form hydrogen bonds with silanol units which stabilize non-ionized -Si-OH groups, thus decreasing the condensation rate. P14 and P24 polymers contain carboxylic anions at pH 7 and these groups can be treated as weak bases (pK of the conjugated acid is 6-6.5 at pH 7) [38]. We hypothesized that the carboxylic anions could stabilize non-ionized -Si-OH groups by means of hydrogen bonding (-C(O)O − ···H-O-Si-) and these interactions resulted in the inhibition of condensation. Possible spectroscopic evidence of these bonds was reported by Danilovtseva et al. [39]. Composite precipitates were obtained when silicic acid was condensed in the presence of polymers P02, P03, and P13 at pH 7. The precipitates contained silica and organic polymers according to the FTIR spectra Si-O-Si (1070 cm −1 ) and Si-OH (965 cm −1 ) bands, ( Figure S2 in Supplementary Materials). The composite particles appeared as 200-500 nm (in diameter) spheres decorated with small particles about 50 nm (Figure 8). BET surface area for composites based on P02, P03, and P13 polymers was 5.8, 1.8, and 23.1 m 2 ·g −1 , respectively. These values were considerably lower than those reported for silica precipitated in aqueous medium (several hundred m 2 ·g −1 ). This anomaly can be attributed to the coating of the siliceous nanoparticles with an organic polymer that led to a decrease in the formation of micro-and mesopores. Condensation of silicic acid in the presence of polymeric amines often results in composite nanoparticles that are stable in aqueous solutions [32,40]. We studied the formation of the composite nanoparticles at a decreased polymer concentration (0.4 g·L −1 ) with the objective to obtain soluble products with a high silica content. We found that the stable polymer-PSA solutions were obtained at silicic acid concentrations of 7.5-12.5 mM at pH 7.4. DLS data (Figure 9) showed that free silicic acid formed approximately 20 nm (in diameter) particles after three days of condensation. Reactions in the presence of polymers proceeded through bimodal distribution of the particle size and after 1-2 days, uniformly distributed particles were observed. The size of the resulting particles increased with an increase of Si(OH)4 concentration and with the introduction of carboxylic groups in the polymeric chain. Freeze-dried solutions of the composite nanoparticles were studied with transmission electron microscopy (TEM, Figure 10). The obtained material contained electron dense Condensation of silicic acid in the presence of polymeric amines often results in composite nanoparticles that are stable in aqueous solutions [32,40]. We studied the formation of the composite nanoparticles at a decreased polymer concentration (0.4 g·L −1 ) with the objective to obtain soluble products with a high silica content. We found that the stable polymer-PSA solutions were obtained at silicic acid concentrations of 7.5-12.5 mM at pH 7.4. DLS data (Figure 9) showed that free silicic acid formed approximately 20 nm (in diameter) particles after three days of condensation. Reactions in the presence of polymers proceeded through bimodal distribution of the particle size and after 1-2 days, uniformly distributed particles were observed. The size of the resulting particles increased with an increase of Si(OH) 4 concentration and with the introduction of carboxylic groups in the polymeric chain. Freeze-dried solutions of the composite nanoparticles were studied with transmission electron microscopy (TEM, Figure 10). The obtained material contained electron dense particles with sizes comparable to the size of the smallest silica particles (Figure 9). The TEM images also revealed that the silica particles were surrounded by a transparent layer, probably due to the organic polymer chains. The size of the particles from the TEM data was several times lower than that obtained with DLS, possibly due to the association of the particles in an aqueous medium. Moreover, DLS recorded the hydrodynamic radius of the particles and hence was always greater than the size observed in electron microscopy.
Polymers 2017, 9,624 11 of 18 particles with sizes comparable to the size of the smallest silica particles (Figure 9). The TEM images also revealed that the silica particles were surrounded by a transparent layer, probably due to the organic polymer chains. The size of the particles from the TEM data was several times lower than that obtained with DLS, possibly due to the association of the particles in an aqueous medium. Moreover, DLS recorded the hydrodynamic radius of the particles and hence was always greater than the size observed in electron microscopy. Polymeric amines have been widely investigated as gene transfer agents [41,42]. The positive charge of the polyamine chain provides effective binding with nucleic acid. The obtained complex particles internalize in the cells via endocytosis and end up in the endosomes (small membranebound compartments). The endosomes are transported to lysosomes (organelles containing hydrolytic enzymes) which degrade a wide range of biomacromolecules. The polyamine coating must aid the complexed nucleic acid to escape lysosomal degradation. Earlier reports have demonstrated that polyethylenimine (PEI) is an effective transfection agent as it enables the destruction of endosomes before approaching lysosome [43]. This effect was explained based on the ability of PEI to work as a "proton sponge", therefore, showing a buffer capacity from the physiological pH to acid range [44]. Proton pumping into the endosome and a corresponding influx of chloride ions results in an increase of ionic strength inside the endosome. This is accompanied by osmotic swelling, which gives rise to the endosome destruction and release of the nucleic acid cargo into cytosol. Thus, synthesis of polymers for gene delivery is aimed on creation of structures showing a high buffer capacity in the neutral and slightly acidic range. Complex polymer structures containing high amounts of tertiary amine groups were recently obtained [45][46][47][48] and showed a high buffer capacity combined with enhanced transfection activity. Polymeric amines have been widely investigated as gene transfer agents [41,42]. The positive charge of the polyamine chain provides effective binding with nucleic acid. The obtained complex particles internalize in the cells via endocytosis and end up in the endosomes (small membrane-bound compartments). The endosomes are transported to lysosomes (organelles containing hydrolytic enzymes) which degrade a wide range of biomacromolecules. The polyamine coating must aid the complexed nucleic acid to escape lysosomal degradation. Earlier reports have demonstrated that polyethylenimine (PEI) is an effective transfection agent as it enables the destruction of endosomes before approaching lysosome [43]. This effect was explained based on the ability of PEI to work as a "proton sponge", therefore, showing a buffer capacity from the physiological pH to acid range [44]. Proton pumping into the endosome and a corresponding influx of chloride ions results in an increase of ionic strength inside the endosome. This is accompanied by osmotic swelling, which gives rise to the endosome destruction and release of the nucleic acid cargo into cytosol. Thus, synthesis of polymers for gene delivery is aimed on creation of structures showing a high buffer capacity in the neutral and slightly acidic range. Complex polymer structures containing high amounts of tertiary amine groups were recently obtained [45][46][47][48] and showed a high buffer capacity combined with enhanced transfection activity.
Buffer capacity of the new polymers (Table 2) was calculated from the potentiometry data according to the International Union of Pure and Applied Chemistry (IUPAC) recommendations as a derivative of the added NaOH concentration vs. pH (the number of moles of strong base required to change the pH by one unit when added to one liter of the solution) [49]. Polymers containing one amine group in the side chain (P01 and P02) showed a drastic decrease of the buffer capacity in slightly acidic pH when compared with polymers whose side chains contained two or three amine moieties. This effect was connected with a decrease of basicity of neighboring amine groups after the protonation of one nitrogen atom in the side chain. The buffer capacity of the ampholyte polymer (P23) did not depend on pH in the studied interval as the carboxylate anion was also a basic unit in addition to the amine groups. Thus, polymers with two or three amine moieties in the side chains are more promising agents for gene delivery according to the proton sponge theory. Polyampholyte P23 is also suitable to study in gene delivery as its average unit charge is +0.43 at pH 7.4 when calculated as per Annenkov et al. [30] and this polymer is capable of interacting with negatively charged nucleic acids. The ability of new polymers to interact with nucleic acids was verified with 21-mer DNA. Gel electrophoresis data (Table 3, Figures S3-S5 in the Supplementary Materials) showed that polymers with a high content of amine groups (P01, P02, P11, P12, P22, and P23, #1, 2, 5-7, 10, 16, and 18 in Table 3) gave positively-charged complexes with the oligonucleotide. Decreases in the amine content in the copolymer resulted in an almost neutral complex (P13), or the complex was absent (P03 and P04). The polymer:oligonucleotide ratio could change the charge of the complex from positive to neutral (Table 3, #11 and 12, Figures S4 and S5 in the Supplementary Materials). Furthermore, it was observed that a lower concentration of the polymer (0.2-0.3 g/L, Table 3, #8 , 9, 13, and 14) resulted in incomplete complexation, as evident from the spreading electrophoretic blots from R f = 0 to near 1. ζ-Potential of the polymer-oligonucleotide complexes is positive in the case of systems which show full complexing: 20.3, 14.9, 5.8, 15.7, and 9.1 mV for lines 7, 11, 12, 15, and 19 from Table 3. The size of the complex particles is below 50 nm according to TEM data ( Figure 11). Buffer capacity of the new polymers (Table 2) was calculated from the potentiometry data according to the International Union of Pure and Applied Chemistry (IUPAC) recommendations as a derivative of the added NaOH concentration vs. pH (the number of moles of strong base required to change the pH by one unit when added to one liter of the solution) [49]. Polymers containing one amine group in the side chain (P01 and P02) showed a drastic decrease of the buffer capacity in slightly acidic pH when compared with polymers whose side chains contained two or three amine moieties. This effect was connected with a decrease of basicity of neighboring amine groups after the protonation of one nitrogen atom in the side chain. The buffer capacity of the ampholyte polymer (P23) did not depend on pH in the studied interval as the carboxylate anion was also a basic unit in addition to the amine groups. Thus, polymers with two or three amine moieties in the side chains are more promising agents for gene delivery according to the proton sponge theory. Polyampholyte P23 is also suitable to study in gene delivery as its average unit charge is +0.43 at pH 7.4 when calculated as per Annenkov et al. [30] and this polymer is capable of interacting with negatively charged nucleic acids. 0.9 0.9 0.9 0.9 0.9 The ability of new polymers to interact with nucleic acids was verified with 21-mer DNA. Gel electrophoresis data (Table 3, Figures S3-S5 in the Supplementary Materials) showed that polymers with a high content of amine groups (P01, P02, P11, P12, P22, and P23, #1, 2, 5-7, 10, 16, and 18 in Table 3) gave positively-charged complexes with the oligonucleotide. Decreases in the amine content in the copolymer resulted in an almost neutral complex (P13), or the complex was absent (P03 and P04). The polymer:oligonucleotide ratio could change the charge of the complex from positive to neutral (Table 3, #11 and 12, Figures S4 and S5 in the Supplementary Materials). Furthermore, it was observed that a lower concentration of the polymer (0.2-0.3 g/L, Table 3, #8 ,9, 13, and 14) resulted in incomplete complexation, as evident from the spreading electrophoretic blots from Rf = 0 to near 1. ζ-Potential of the polymer-oligonucleotide complexes is positive in the case of systems which show full complexing: 20.3, 14.9, 5.8, 15.7, and 9.1 mV for lines 7, 11, 12, 15, and 19 from Table 3. The size of the complex particles is below 50 nm according to TEM data ( Figure 11). The structure and charge of the oligonucleotide-containing complexes are important factors that influence the transfection efficiency of the oligonucleotide. Various molecules have been investigated in this area and one approach is to use siliceous composites [4,5]. We studied the ability of composite siliceous nanoparticles based on new polymeric amines to interact with the oligonucleotide (Table 4, Figures S6 and S7 in the Supplementary Materials) and found the following: • the presence of PSA in the system effectively shifted the charge of the particles from positive in the case of free polymers to neutral and negative (Table 4, #2-5, 7, 13-17 compared with #8-11 and 18-19); • high content of PSA in the composite decreased or prevented interaction with the oligonucleotide ( Table 4, #1-3, 6, and 12) which became apparent in the DNA fluorescence near R f = 1.
Thus, the introduction of carboxylic units into a polyamine chain and/or association with PSA allowed us to control the charge of the DNA-containing aggregates. TEM data ( Figure 10) showed that composite DNA particles were below 200 nm in diameter. The addition of DNA to siliceous composite nanoparticles resulted in the aggregation of small particles based on P01 and P02 polymers, while denser particles from the P22 polymer persisted with their shape in the presence of DNA. Slightly positive systems are the most promising as gene delivery agents, and were obtained with P01, P11, P12, P13, P22, and P23 polymers ( Table 3, #11, 15, 19, Table 4, #5 and [16][17][18][19][20]. Short DNA and RNA chains are not so active in complexing with polymers traditionally applied with plasmid DNA ( [50] and references in this review). This was explained with the absence of spatial flexibility of the short chains which results in high constraints on the formation of cooperative interactions with the polymer. Amine groups in the polymers obtained in this work are connected to the main chain through amide and trimethylene spacers, which decrease the entropy loss during complexing and allows the formation of stable polymer-oligonucleotide complexes. The ability of the new polymers and composite nanoparticles to facilitate penetration of oligonucleotides into living cells was demonstrated with S. cerevisiae yeast cells ( Figure 12). The appearance of green fluorescence inside cells is evidence of the penetration of the oligonucleotide complex. Table 3 (a,a'), line 20 in Table 4 (insertions in (a)), and line 4 in Table 4 (b,b'). Complex: cell culture ratio was 1:3. Scale bars represents 10 μm. Table 3 (a,a'), line 20 in Table 4 (insertions in (a)), and line 4 in Table 4 (b,b'). Complex: cell culture ratio was 1:3. Scale bars represents 10 µm.
Conclusions
We synthesized a set of homopolymers and copolymers starting from poly(acryloyl chloride) and short-chain polyamines. The polymers were obtained by polymer-analogous reactions under moderate conditions, which prevented the destruction of the main chain and provided a set of polymers with highly variable functionality and similar polymerization degrees. New polymeric amines and ampholytes were active in interaction with poly(silicic acid) and oligonucleotides, which allowed the design of composite nanoparticles, ordered composite materials, and oligonucleotide complexes for gene delivery. | 11,481.4 | 2017-11-01T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Flow structure diagnostics in a four-vortex furnace model using PIV-method
In this paper the flow structure in a four-vortex furnace of a pulverized coal boiler has been studied. The results of an experimental study of inner aerodynamics carried out on a laboratory isothermal model of a combustion device are presented. Using the particle image velocimetry method, flow velocity distributions have been obtained in a number of sections. A spatial flow structure consisting of four stable conjugate vortices with vertical flow rotation axes has been visualized. The perspectives of coal energy development necessitate the creation of new ways of efficient and environmentally safe use of solid fuels. The limited reserves of high-quality solid fuels lead to the need to involve low-grade ("non-project") coals in the fuel and energy balance. The solution of this problem requires a significant increase in the characteristics of steam boilers of TPPs and the development of new types of combustion devices that satisfy modern standards of energy efficiency and environmental safety. One of the promising technologies in this field is fuel combustion in the vortex flow. The flow swirling provides for a more complete fuel burnup due to better mixing and longer residence time of fuel particles in the combustion chamber. In this paper, the flow structure in a perspective combustion device [1] with a fourvortex solid fuel combustion scheme is experimentally studied. Four vortical structures with vertical axes and opposite directions of rotation are formed there. The furnace is designed for the use at thermal power plants for burning brown slagging coal. The furnace is equipped with a shielded rectangular combustion chamber. Two diagonally directed blocks of multilevel burners are mounted on the sidewalls; they are close to the axis of symmetry of the sidewalls so that the ratio of distance between the burners to the burner height is 0.8–1.2. In the center of the front and rear walls, there are nozzles of secondary and tertiary inflow, made in the form of vertical near-wall slots, whose height is equal to the height of the burner unit and oriented relative to each other in opposite directions along the walls, where they are located. Primary fuel combustion occurs in the direct-flow part of the side burner flame. Due to optimal positioning of the side burners, efficient interaction of burner jets mixing and ignition is achieved via intensive flue gas suction in the inter-burner space. The colliding straight-flow parts of the flame, directed from the sidewalls towards each other, turbulize the flow, and due to the nozzles of secondary and tertiary inflow, form four vortex flows with vertical rotation axes (Fig. 1-a). Corresponding author<EMAIL_ADDRESS>DOI: 10.1051/ , 03006 (2017) 71150300 115 MATEC Web of Conferences matecconf/201
Abstract. In this paper the flow structure in a four-vortex furnace of a pulverized coal boiler has been studied. The results of an experimental study of inner aerodynamics carried out on a laboratory isothermal model of a combustion device are presented. Using the particle image velocimetry method, flow velocity distributions have been obtained in a number of sections. A spatial flow structure consisting of four stable conjugate vortices with vertical flow rotation axes has been visualized.
The perspectives of coal energy development necessitate the creation of new ways of efficient and environmentally safe use of solid fuels. The limited reserves of high-quality solid fuels lead to the need to involve low-grade ("non-project") coals in the fuel and energy balance. The solution of this problem requires a significant increase in the characteristics of steam boilers of TPPs and the development of new types of combustion devices that satisfy modern standards of energy efficiency and environmental safety. One of the promising technologies in this field is fuel combustion in the vortex flow. The flow swirling provides for a more complete fuel burnup due to better mixing and longer residence time of fuel particles in the combustion chamber.
In this paper, the flow structure in a perspective combustion device [1] with a fourvortex solid fuel combustion scheme is experimentally studied. Four vortical structures with vertical axes and opposite directions of rotation are formed there. The furnace is designed for the use at thermal power plants for burning brown slagging coal. The furnace is equipped with a shielded rectangular combustion chamber. Two diagonally directed blocks of multilevel burners are mounted on the sidewalls; they are close to the axis of symmetry of the sidewalls so that the ratio of distance between the burners to the burner height is 0.8-1.2. In the center of the front and rear walls, there are nozzles of secondary and tertiary inflow, made in the form of vertical near-wall slots, whose height is equal to the height of the burner unit and oriented relative to each other in opposite directions along the walls, where they are located. Primary fuel combustion occurs in the direct-flow part of the side burner flame. Due to optimal positioning of the side burners, efficient interaction of burner jets mixing and ignition is achieved via intensive flue gas suction in the inter-burner space. The colliding straight-flow parts of the flame, directed from the sidewalls towards each other, turbulize the flow, and due to the nozzles of secondary and tertiary inflow, form four vortex flows with vertical rotation axes ( Fig. 1-a).
The four-vortex scheme of pulverized coal combustion [1] was developed for boiler BKZ-320-140, station No. 18 at Krasnoyarsk CHP-1 with its transfer from liquid to dry slag removal. It was also implemented at reconstructing furnaces of boilers BKZ-640-140-TP at the Gusinoozerskaya TPP (Russia). According to the practice, burning the high-ash coals in the furnaces of BKZ boilers leads to intense slagging of the furnace and steam superheaters, whereby the boiler load is reduced. Evaluating the effectiveness of boiler reconstruction has identified a number of shortcomings in boiler operation, requiring its further modernization. To improve the technical, economic and environmental performance, it is necessary to optimize operating and design parameters of the four-vortex furnace of the pulverized-coal boiler on the basis of a detailed study of its aerodynamics on laboratory models.
To study the flow structure in the four-vortex furnace model, an experimental stand was developed and debugged. The scheme of this stand is shown in Fig. 1-b. The main elements of this stand are [2]: compressed air supply system with control and regulating devices; four-vortex furnace model; system for seeding a stream by tracer-particles; and measuring system. The model is made of 10 mm thick plexiglass at a scale of 1:25 (internal dimensions 290 x 880 x 730 mm). On the side walls at three levels, there are two diagonally directed nozzles at an angle of 6° (dimensions 28 x50 mm), and the axes of the side nozzles are directed to the center of the furnace. On the front and rear walls at three levels (locations of side nozzles are marked), there are also two central nozzles (dimensions of front nozzles 23 x 66 mm, and rear ones -11 x 64 mm) directed towards the side walls at an angle of 20°.
Study of the flow structure in the four-vortex furnace model was carried out using the particle image velocimetry (PIV) method, which is the more productive in comparison with the method of laser Doppler anemometry, also used for diagnostics of flow pattern [3][4][5][6][7].
PIV is a non-contact optical panoramic method, allowing measurements with high performance. Measurements of the instantaneous flow velocity field in a given section are based on the measurement of the motion of impurity particles (tracers) located in the plane of the section under study, for a fixed time interval. This displacement is determined based on the application of correlation methods to tracer pictures, using regular partitioning into elementary cells.
In this work the PIV-system "POLIS" (development of IT SB RAS) was used; it consists of the CCD camera Videoscan 4021 (with a resolution 2048 x 2048 pixels, the frequency of shooting up to 1.25 Hz, and an exposure time of 28 ms) and a pulsed Nd: YAG laser QuantelEVG (wavelength of 532 nm, energy in the pulse up to 145 mJ, and a pulse duration of 10 ns). To obtain the average velocity field, a series of 100 pairs of images was conducted in each measuring section. The regime under which the velocity was 5 and 3 m/s in the outlet from the side and central nozzles, respectively, was investigated. In horizontal sections, the flow is symmetrical with respect to the vertical plane passing through the center of the central walls of the furnace. Therefore, in order to increase the spatial resolution and save the time of the experiment, measurements were made only in the half area of the investigated horizontal cross-sections (290 x 365 mm). The obtained results show the complex structure of the flow in the investigated furnace model, consisting of four conjugate stable vortices. The jets, which flow from the nozzles located on the side walls at a distance of three nozzle calibers, merge and spread as a single stream to the center of the furnace. Then the stream turns back and, interacting with the jets which flow from the central nozzles, is directed parallel to the front and rear walls. In the corners, the stream turns and flows to the bottom of the side burners, thereby forming four closed vortices with vertical axes and opposite direction of rotation.
With this flow structure, there is an intensive "washing" of the furnace screens (maximum tangential velocity is near the walls), which in practice contributes to the improvement of convective heat transfer. With such an aerodynamic scheme, a long torch length is provided; there is no direct surge of the unburned fuel particles on the screens, which reduces the danger of intense surface slag typical for the frontal arrangement of the burners. The mounting of tertiary blast nozzles is a necessary condition for the formation of a four-vortex flow structure and, at the same time, allows organizing a gradual combustion of fuel in order to reduce emissions of nitrogen oxides.
The results of the research are useful for verification of developing mathematical models applied in full-scale numerical simulations of furnace processes [8,9].
The study was carried out with the financial support of the Russian Science Foundation (project No. 14-29-00093). | 2,373 | 2017-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Impact of China’s Provincial Government Debt on Economic Growth and Sustainable Development
Macroeconomic stability is the core concept of sustainable development. However, the coronavirus disease (COVID-19) pandemic has caused government debt problems worldwide. In this context, it is of practical significance to study the impact of government debt on economic growth and fluctuations. Based on panel data of 30 provinces in China from 2012 to 2019, we used the Mann–Kendall method and Kernel Density estimation to analyze the temporal and spatial evolution of China’s provincial government debt ratio and adopted a panel model and HP filtering method to study the impact of provincial government debt on economic growth and fluctuation. Our findings indicate that, during the sample period, China’s provincial government debt promoted economic growth and the regression coefficient (0.024) was significant. From different regional perspectives, the promotion effect of the central region (0.027) is higher than that of the eastern (0.020) and western regions (0.023). There is a nonlinear relationship between China’s provincial government debt and economic growth, showing an inverted “U-shaped” curve. Fluctuations in government debt aggravate economic volatility, with a coefficient of 0.009; tax burden fluctuation and population growth rate aggravate economic changes. In contrast, the optimization of the province’s industrial structure and the improvement of the opening level of provinces slow down economic fluctuations.
Introduction
The coronavirus disease pandemic has extensively impacted the economy of countries worldwide, leading to prominent government debt problems [1]. According to estimates by the Congressional Budget Office (CBO) of the United States (US), because of the pandemic, the US federal government debt ratio rose to 126% in 2020 and continues to rapidly rise (Date sources: https://www.cbo.gov/publication/57635, Washington, DC, U.S. accessed on: 31 August 2021) [2]. Furthermore, the data released by China's National Bureau of Statistics suggest that China's government debt balance in 2020 was 46.55 trillion yuan, and that the government debt ratio was 45.82%. As of the end of December 2020, the national local government debt balance was 25.66 trillion yuan-a year-on-year increase of 20.44%-but the issue of sustainability of local government debt is very urgent (Date sources: http://www.gov.cn/xinwen/2021-01/26/content_5582612.htm Beijing, China. accessed on: 31 August 2021). In recent years, the scale of local government debt in China has shown a trend of continuous expansion, which has had a profound impact on macroeconomic stability and fiscal sustainability [3][4][5]. The local government debt has a positive impact on promoting investment and enhancing the vitality of the local economy [6][7][8]. Furthermore, China's economy has been seriously affected by COVID- 19. In order to quickly restore the social and economic order, China implemented economic stimulus policies by issuing government bonds and other forms of financing. The phenomenon of rapid increase in government debt risks has begun to frequently occur throughout the country, which has aroused panic among the people and caused widespread concern in society [9].
Since the post-Keynesian era, government debt and GDP, as well as the relationship and fluctuations between them, are important components of macroeconomic theories [10,11]. There have been endless debates among various schools of thought about whether government debt expansion can effectively promote economic growth in the long term [12,13]. Some scholars believe that government borrowing will weaken its ability to formulate relevant countercyclical policies in response to economic crises, which will cause the government to do nothing in the face of economic fluctuations, thereby affecting the stability of the entire economy and society [14,15]. Furthermore, scholars have proposed that, when government issues additional public bonds and implements fiscal deficit policies, it can effectively expand domestic demand and promote regional economic development [16][17][18]. In addition, others have demonstrated that the effects of raising debts and levying taxes on finance are the same, and that the behavior of local governments raising debt will not affect social resources, investment, labor supply, and other factors, which proves the neutrality of debt [19][20][21][22].
Evaluating relevant research conducted from an empirical perspective, we found that empirical results were very different owing to the different theories and data referred to in discussions concerning these matters. On the one hand, some scholars used the panel regression of a time series to draw the conclusion that government debt promotes economic growth. They mainly studied the debt and economic development of Southeast Asian countries and found that, in most Southeast Asian countries, government debt has an obvious positive effect on economic growth [23]. Others showed that government debt can promote economic development to a certain extent in both the short and long term [24]. On the other hand, some scholars have confirmed that the influence of government debt on economic growth is unfavorable. Cohen [25] proposed using the ratio of government debt to regional GDP to represent the degree of dependence of the local economy on government debt. His research showed a negative correlation between government debt and economic growth. Elmeskov and Sutherland [26] conducted research from a long-term perspective and suggested that excessive government debt would seriously affect public savings. Their research data showed that, for every 1% increase in the total government debt, the total gross domestic product (GDP) of the region under stable output will be reduced by 10%. At the same time, Woo and Kumar [27] pointed out that government debt led to a decrease in investment and labor. Slowdown in productivity growth is the root cause of this phenomenon.
With the enrichment of empirical tools and data sources, many scholars have found that there is a nonlinear relationship between government debt and economic development. Some scholars used empirical research to find that the relationship between government debt and economic growth is "U-shaped" [28]. However, other scholars have different views, such as Checherita-Westphal and Rother (2011) [29], who selected 12 Eurozone countries as their research sample. They found that there is a clear threshold effect between government debt and economic growth, and that the relationship between them is a typical inverted "U-shaped". Many scholars have conducted similar studies correspondingly [30][31][32][33], others believe that there is not only a threshold effect between debt and economic growth, but also a more complex relationship [34][35][36].
For China, China's government debt-to-GDP ratio is lower than that of most large, developed economies [37], and government debt scales have not reached their respective thresholds [38]. However, there is a lack of research on China's provincial government debt. In recent years, China's provincial government debt has risen every year, and the debt ratio of some relatively backward provinces has reached the risk warning point. Many scholars [39][40][41] used a panel data model to study the relationship between government debt and corporate leverage and found that there is a negative relationship between the two. Some of them [42] used a fixed effects model and panel data from 2006 to 2015 to study the impact of land hoarding and prices on the scale and risk of local government debt. Subsequently, they found that both the scale and the price of land had a positive impact on the scale and risk of urban investment bonds (UIB). In terms of different regions, only the eastern region showed a significant correlation between land assets and the UIB. Other scholars [43] used economic fluctuations, local debt risks, and bank risk-taking variables to construct an econometric model and found that both economic changes and local government bond risks have a significant positive impact on bank risks and a negative correlation with regional economic growth. The authors of [44] researched China's local government financing platform (LGFV) and found that there is an inverted "U-shaped" relationship between the diversification of LGFV and local economic growth.
Based on sustainable development theory, we used the Mann-Kendall method and Kernel Density estimation to analyze the evolution of China's provincial government debt ratio and adopted a panel model to study the impact of provincial government debt on economic growth and fluctuations. Because of the opaqueness of local hidden debts before the "New Budget Law" was promulgated and implemented, related debts were difficult to obtain. The data of previous studies lacked timeliness and guidance for the implementation of current policies was limited. Therefore, compared with previous studies, our study mainly contributes to the existing literature in several ways. First, we considered provincial government debt as the research object, which could make the research on economically sustainable development more in-depth and specific. Our study explores the sustainability of China's provincial government debt and depicts the temporal and spatial evolution of the provincial government debt ratio from 2009 to 2020. Second, our research overcomes the limitations of the availability of local debt data, updates the research data to the latest, extends the perspective to the impact of local government debt on economic growth and volatility, builds an empirical model to verify them, and analyzes nonlinear relationships and regional differences. Third, we tested other influencing factors and proposed specific suggestions to improve the sustainable economic growth of different regions and provinces.
The remainder of this paper begins with Section 2, which introduces the research concept of this article and includes study ideas, methods, and data. Section 3 presents the trend analysis, provides empirical results and robustness tests, and provides an analysis. Section 4 discusses the empirical results and propose methods for sustainable development under COVID-19. Section 5 includes the conclusions, policy implication and limitations.
Study Idea
First, we determined the methods for studying the economically sustainable development of China's provinces, constructed econometric models, and analyzed the variables and data needed for the study.
Second, we used geographic information system (GIS) tools and kernel density estimation to show the dynamic distribution of China's provincial government debt ratio from 2009 to 2020. By using a Mann-Kendall test, we analyzed the trend of China's provincial government debt ratio from 2009 to 2020. In terms of empirical testing, we used econometric methods to determine the impact of provincial government debt on economic growth and fluctuations and analyzed nonlinear relationships and regional differences.
Finally, based on the results of the empirical analysis, we proposed specific policy recommendations for the eastern, central, and western provinces in China and explored research deficiencies and improvement methods.
Kernel Density Estimation
We used kernel density estimation to describe the evolution trend of China's provincial government debt ratio and analyzed the status quo of sustainable development of government debt in various regions of China.
As a non-parametric method, kernel density estimation has weak model dependence and strong robustness (Mariani and Vaden, 2010) [45]. This has become a common method for analyzing spatial imbalances. This method usually assumes that the density function of random variable X is: The kernel density function, as a smooth transition function or weighting function, usually satisfies: where N represents the number of observations, X i represents the independent and identically distributed observations, x represents the average value, k represents the kernel density, and h represents the bandwidth. The larger the bandwidth, the smoother the estimated density function curve and the lower the accuracy of the estimation; in contrast, the smaller the bandwidth, the less smooth is the density function, but the estimation accuracy is higher.
Econometric Methodology
Many scholars have adopted the most cutting-edge models and empirical methods to study the problem of government debt, such as the dynamic debt stabilization game model [46] and Python toolkit [47]. Based on the applicability of our study, following the classic research on government debt [9,11,48,49], we applied a panel data approach to examine the impact of local government debt on economic growth, namely, whether there was a threshold effect and the impact of local government debt on economic volatility, starting at the provincial government level. First, the impact of government debt on economic growth was examined by constructing a panel model, as follows: Second, we used the quadratic curve analysis method to bring the quadratic term of government debt variables into the econometric Model (4), as follows: Finally, the impact of local government debt on economic volatility was studied using the Hodrick-P rescott (HP) filter method to measure economic volatility; accordingly, a panel model was constructed as follows: where lnGDP is the dependent variable that represents the natural logarithm of provincial real GDP and GDPFlu is the dependent variable that represents the fluctuating term of the natural logarithm of provincial real GDP. As an independent variable, lnDebt is the natural logarithm of provincial government debt size, lnDebt 2 is the quadratic term of the natural logarithm of government debt size, and DebtFlu is the fluctuating term of the natural logarithm of provincial government size. Controls and ControlsFlu represent the set of control variables of Models (3) to (5), respectively. Province i and Year t are province and year fixed effects, respectively, which help mitigate issues from omitted variable bias. The symbol ε represents the estimated error item; the terms i and t denote the province and time, respectively. In the empirical process, the first-order lag term of the independent variable was used for the regression. The reasons are as follows: (1) The data selected to measure the level of government debt are the balance of government debt at the end of each year, and the balance at the end of the year will generally affect government spending in the second year and have an impact on provinces' GDP. Therefore, the impact of government debt on economic growth generally has a time lag [50,51]. (2) Using the first-order lag term of the independent variables can alleviate the endogeneity problem to a certain extent.
Research Data
The specific calculation methods and data sources of the variables are listed in Table 1. (1) Dependent Variables: Economic growth: lnGDP To eliminate the effect of inflation, 2009 was used as the base period to measure economic growth by calculating the real GDP of each province by taking the natural logarithm of each province's nominal GDP collected for 2009 and the provincial GDP index for the period of 2010 to 2020 [52].
Economic fluctuations: GDPFlu We chose the HP filtering method to deal with economic fluctuations. The HP filtering method approach can separate the trend items in the time series variables in a smooth sequence, so the time series data are divided into two parts: a smooth trend item and a periodic fluctuation item [53].
In data processing, HP filtering is performed on the natural logarithm of actual GDP. The trend item obtained after the HP filtering of the time series of the total output can be used to represent the potential output, the fluctuation item represents the output gap, and the time series of the output gaps can reflect economic changes. The decomposition process is the minimization process of solving Equation (6).
In Formula (6), lnGDP represents the total output level, and lnGDP t * represents the actual potential output. The decomposed trend item is obtained by calculating the HP filter, which is the actual potential output lnGDP t *. Next, the cyclical fluctuation part of economic growth is obtained by removing the trend item-that is, the output gap (lnGDP t -lnGDP t *). This result can be used to represent the cyclical fluctuation of the GDPFlu economy. The smoothing parameter λ in Formula (6) is set to 100, according to the value of regarding the annual data [54]. This study calculated the natural logarithm of the scale of local government debt, performed HP filtering, set the smoothing parameter λ to 100, and considered the volatility term as an indicator to measure the volatility of government debt.
(3) Control Variables Level of urbanization: Urb measures the level of urbanization using the urbanization rate of each province [55]. Industrial structure: Indus. This study used the share of the tertiary sector in the province's GDP to measure the degree the industrial sector's sophistication [56]. Population growth: Pop; represents a measure of the province's natural population growth rate. Opening level of provinces: Open, following [57], is the ratio of total provincial exports and imports to a province's GDP per year, used to measure the degree of a province's openness. Level of financial expenditure: Gov, this study used the ratio of province's general public budget expenditure to its GDP to measure the level of local general public budget expenditure [58]. Local general public budget expenditure includes general public services, public security expenditures, local overall social undertakings expenditures, and so on. Level of province's tax liability: Tax; in this study, the ratio of provincial government's annual tax revenue to nominal GDP was used to measure the tax burden level in each province. Province's human capital levels: Edu. We chose the years of formal education per capita in each province to measure this indicator [59].
The descriptive statistics of the variables are presented in Appendix A. We used Eviews 10.0 ® and Stata 16.0 ® for data calculation, statistical analysis, and regression analysis.
Empirical Results
We divided China's 30 provinces into three regions: East, Central, and West. Among them, there are 11 provinces in the eastern region: Beijing, Tianjin, Hebei, Liaoning, Shanghai, Jiangsu, Zhejiang, Fujian, Shandong, Guangdong, and Hainan. The eight provinces in the central region are Shanxi, Jilin, Heilongjiang, Anhui, Jiangxi, Henan, Hubei, and Hunan. Furthermore, the 11 provinces in the western region are Inner Mongolia, Guangxi, Chongqing, Sichuan, Guizhou, Yunnan, Shaanxi, Gansu, Qinghai, Ningxia, and Xinjiang. Because the sample data volume in Tibet was too small and it was difficult to obtain accurate data, it was not within the scope of the sample selection in this study. In the text, we abbreviated the provincial government debt ratio as the PDR (Provincial government debt ratio: The ratio of the provincial government debt balance at the end of the year to province's GDP of the year. It is an indicator that measures the carrying capacity of the province's economic scale on government debt or the dependence of province's economic growth on government debt. Internationally, the debt ratio of 60% stipulated in the Maastricht Treaty is usually used as the reference value of the government debt risk control standard), and the indicator would be used for robustness testing. We calculated the PDR of 30 provinces from 2009 to 2020, as shown as in Appendix B.
Temporal Evolution
We used ArcGIS 10.C S ® to draw PDR distribution map. Figure 1A-D show the changing trend of China's PDR in 2009, 2012, 2016, and 2020. It divides the PDR in China's 30 provinces into four categories, from low to high. Notably, the darker the red color, the higher the PDR in that year. The data of the PDR originates from Wind database and manual calculation by authors. Figure 1E shows the mean value of the PDR from 2009 to 2020; the darker the red, the higher the mean value, where Guizhou Qinghai and Yunnan have the highest average value and a heavy debt burden; Guangdong Henan and Shandong have the lowest average value, and the debt pressure is relatively light. As shown in Figure 1F, the Mann-Kendall method was used to measure the trend value of the PDR in each province. The first interval is green, indicating that the PDR has a downward trend; the second range is light red, indicating that the PDR has an upward trend; and the third range is dark red, indicating that the PDR shows a significant upward trend. We found that the most developed regions
Spatial Evolution
We used MATLAB R2021 ® to make the nuclear density distribution map of PDR of whole country, eastern region, central region, and western region from 2009 to 2020.
As shown in Figure 2, the main peak of China's overall PDR tended to shift to the right and the peak increased after the main peak became shorter. After 2013, the peak increased, the bandwidth increased to a certain extent, and there was a right tailing trend with greater ductility. Overall, China's overall PDR shows a continuous upward trend with obvious inter-provincial differences-but a downward trend, nonetheless, especially after 2013. The provinces with a higher index widen the domestic differences, but the provinces with a lower PDR have a catch-up effect, and the differences between regions begin to narrow.
The main peak of the PDR in the eastern region shifted to the right and the peak height increased after a certain decline. The decline was more obvious from 2009 to 2013. Specifically, the bandwidth showed a continuous shrinking trend; there was a right tailing phenomenon, and the ductility increased. It can be seen that the PDR level in the eastern region shows little difference and change. In recent years, the gap between provinces with a high PDR and provinces with a low PDR has gradually narrowed and the PDR level in the region is relatively stable. The main peak of the PDR in Central China tends to shift to the right after a small range, and the peak obviously fluctuates in stages. There is an obvious downward trend from 2009 to 2013 and an obvious upward trend from 2014 to 2020. The bandwidth shows a certain expansion trend as there is a right tailing phenomenon and the ductility decreases. The level of PDR in the central region shows an overall difference, the change is small, and there is an obvious polarization effect. The provinces with high levels of PDR continue to grow, while the provinces with low levels of PDR grow slowly.
The main peak of the PDR in the western region has a small and rapid increasing trend. Accordingly, the decline is stable and there is rapid growth, and the peak height has obvious fluctuations. This is mainly manifested in a slight decline from 2009 to 2013, a small increase from 2014 to 2018 as the bandwidth continued to shrink from a right tailing trend, and the ductility decreased. Therefore, it is still necessary to focus on the problem of high PDR in areas with backward economic development in order to further improve the sustainability of government debt in the region.
The Influence of Government Debt on Economic Growth (1) Benchmark regression
To test whether provincial government debt has an impact on economic growth, Model (3) needed to be regressed. Before the regression, there was an F test of the panel data. This showed that the p value of the F statistic is less than 0.01, which proves that the fixed effect of the sample data of the model is extremely obvious; Hausmann's test has a p value of 0.0000, strongly rejecting the null hypothesis, and confirms the use of fixed effects, and that provinces and years are fixed in the regression. As shown in Figure 3 As shown in Table 2, according to Model (3), Column (1) describes fixed effect regression of 30 provinces, the dependent variable is L1-lnDebt, and L1-lnDebt has a positive impact on lnGDP and passes the 1% significance test. That is, provincial government debt can significantly promote economic growth. The coefficient is 0.024, which means that, when the provincial government debt increases by 1% point, the real GDP will increase by 0.024% points. Among the control variables, Urb, Indus, and Edu significantly increased lnGDP and passed at least 1% significance level test. Open and Tax have a negative impact on lnGDP and passed the 1% significance test. The impact of Pop on lnGDP is negative and passed at least a 10% significance level test. Notes: L1-lnDebt and L1-lnDebt2 are first-order lag term of the independent variables, in order to alleviate the endogenous problem of the model, the independent variables are processed by lag first-order, t-statistics are in parentheses. *** p < 0.01 and * p < 0.1.
In the three major regions, L1-lnDebt is positively significant for lnGDP and passes at least a 1% significance level test. The eastern region has the lowest impact coefficient of 0.020, and the central region has the highest impact coefficient, reaching 0.027. The influence coefficient of the western region is 0.023, which is slightly lower than the overall level of 0.024. From the perspective of control variables, in the eastern region, Urb, Indus, and Edu all significantly promote lnGDP and pass the 1% significance level test. On the contrary, Open and Tax have an inhibitory effect on lnGDP; specifically, Open passes the 1% significance test, and Tax passes the 10% significance test. In the central region, Urb, Gov, and Edu have a significant positive impact on lnGDP as they all pass the 1% significance test. Notably, Indus, Open, and Tax hinder the growth of lnGDP; among them, Indus fails the significance test, Open passes the 1% significance test, and Tax passes the 5% significance test. In the western region, Urb and Edu have a significant promoting effect on lnGDP and they all pass the 1% significance test. Finally, Pop, Gov, and Tax inhibit the growth of lnGDP, but they all fail the significance test.
(3) Further study
To examine the nonlinear relationship between the debt scale and economic growth, we conducted an empirical regression on Model (4). As shown in Table 2, according to Model (4), Column (5) describes the fixed effect regression of 30 provinces. The corresponding dependent variables are L1-lnDebt and L1-lnDebt 2 , and they are both significant at the 1% level. This shows that there is a nonlinear relationship between China's provincial government debt and economic growth, showing an inverted "U-shaped" curve. This is similar to the findings of others, such as Bailey et al. (2021) [39] and Wei et al. (2021) [60]. Based on the quadratic axis of symmetry, the axis of symmetry of the government debt L1-lnDebt can be calculated as 6.684. From descriptive statistics, because 6.684 is within the value range of L1-lnDebt, and the provincial data year corresponding to the value is 2010, we say that, when the value of L1-lnDebt is 6.684, the threshold is reached; at this time, local government debt has the greatest positive impact on local economic growth. It can be seen from Figure 4 that the growth rate of China's GDP in 2010 reached 10.640%, the highest point from 2009 to 2020, which simultaneously confirmed the threshold effect.
The Influence of Government Debt on Economic Growth Fluctuations (1) Benchmark regression
To test whether provincial government debt has an impact on economic fluctuations, Model (5) was regressed. Before regression, the F test of the panel data was performed, and the result showed that the F statistic p value was 1, which confirmed that the model was not suitable for fixed effects. The LM test was performed, and the result showed that the p value was 1, which confirmed that the model was not suitable for random effects. After the F test and LM test, we used mixed regression to conduct an empirical analysis on Model (5). Figure 5 As shown in Table 3, according to Model (5), Column (1) describes fixed effect regression of 30 provinces. The dependent variable corresponding to Column (1) is L1-DebtFlu. Accordingly, it can be seen that L1-DebtFlu has a positive impact on GDPFlu and passes the 10% significance test. That is, provincial government debt volatility can significantly promote economic growth volatility. The coefficient is 0.009, which means that, when the provincial government debt volatility increases by 1% point, the real GDP volatility will increase by 0.009% points. Among the control variables, UrbFlu, PopFlu, GovFlu, TaxFlu, and EduFlu have positive impacts on GDPFlu; among them, PopFlu passes the 5% significance test, and TaxFlu passes the 1% significance test, but the remaining variables fail to pass the significance test. In contrast, IndusFlu and OpenFlu have a significant inhibitory effect on GDPFlu, and they all pass the 1% significance test.
(2) Regional Fluctuations analysis As shown in Table 3, Columns (2), (3), and (4) of Table 2 describe mixed effect regression of the eastern provinces, central provinces, and western provinces, respectively. The independent variable corresponding to Columns (2) through (4) is L1-DebtFlu. Based on Columns (2) and (3), in the eastern and central regions, L1-DebtFlu in provincial government debt has no significant impact on GDPFlu. Among them, the coefficient in the eastern region is positive, the coefficient in the central region is negative, and the absolute values are both small. This shows that the eastern and central regions have done a relatively good job in controlling government debt risks and the influence of government debt fluctuations on economic changes can be eliminated. Column (4) shows that, in the western region, the influence coefficient of L1-DebtFlu is significantly positive and passes the 5% significance test; moreover, the coefficient is 0.016, and the absolute value of the western region is higher than the overall national level. This shows that the volatility of provincial government debt in the western region has greatly aggravated economic volatility and caused unstable economic operations. Notes: L1-DebtFlu is first-order lag term of the independent variable; in order to alleviate the endogenous problem of the model, the independent variable is processed by lag first-order, t-statistics are in parentheses. *** p < 0.01, ** p < 0.05, and * p < 0.1.
(3) Robustness test
To ensure the reliability of the regression results, this study adopted the independent variable substitution method and used the government debt ratio to replace the natural logarithm of the government debt scale to measure the government debt level. The province's government debt ratio in the current year was attained by calculating the ratio of government debt this year to the province's GDP. Amplify 100 times and perform HP filter processing and retain its fluctuation term as a new explanatory variable. The results of the robustness tests are presented in Table 4.
The independent variables corresponding to Columns (1) through (4) in Table 4 are L1-DRFlu. The dependent variables in Table 4 are the same as those listed in Table 3. The regression results in Column (1) of Table 4 show that the independent variables are significant and that the regression coefficient is 0.008 and is significant at the 10% level. This supports the regression results in Table 4 that provincial government debt volatility can significantly promote economic growth volatility. However, compared with the regression results in Column (1) of Table 3, the regression coefficient is lower, indicating that government debt ratio volatility is less sensitive to economic growth volatility. Columns (2) to (4) of Table 4 describe the eastern, central, and western regions, respectively. After adjusting for the independent variables, the sign and significance of the regression coefficients of the independent variables did not change. This shows that the influence of government debt on economic fluctuations is not affected by the form of the independent variable and the model is robust. Notes: L1-DRFlu is first-order lag term of the independent variable; in order to alleviate the endogenous problem of the model, the independent variable is processed by lag first-order, t-statistics are in parentheses. *** p < 0.01, ** p < 0.05, and * p < 0.1.
Discussion
Based on sustainable development theory, we adopted the fixed effect model to analyze the impact of China's provincial government debt on economic growth and conducted regional heterogeneity analysis. Then, we introduced the square term of government debt into the model to verify the "nonlinear relationship" of the impact of China's government debt on economic growth and judged whether there is a threshold effect showing "U" or inverted "U" relationship between them. Finally, we conducted HP Filtering on all variables to further test the impact of China's provincial government debt on economic fluctuations and completed the robustness test.
We can see from the above empirical results that, on the one hand, China's provincial government debt promoted economic growth, Dey et al. [61] had the same view, and the regression coefficient (0.024) was significant. From different regional perspectives, the promotion effect of the central region (0.027) is higher than that of the eastern (0.020) and western (0.023) regions. This is consistent with the conclusion of [62]. Theoretically, there is a nonlinear relationship between China's provincial government debt and economic growth, showing an inverted "U-shaped" curve; however, ref. [63] presented nonlinear characteristics, rather than an inverted "U-shaped" relationship.
On the other hand, the variation in government debt aggravates economic fluctuations, and the regression coefficient (0.009) is significant. The regression coefficients of the eastern and central provinces are not significant; however, the regression coefficient of the western provinces (0.016) is larger and more significant than that of other regions. Tax burden fluctuations and population growth rates aggravate economic changes. In contrast, the optimization of provincial industrial structure and improving provincial opening level can slow economic fluctuations. This is similar to the viewpoint of [64].
We discovered that China's provincial government debt has a significant positive impact on economic growth. Moreover, debt volatility contributes to regional economic volatility. For example, owing to the effect of COVID-19, Hubei's GDP in 2019 was 4582.831 billion yuan, with an annual growth rate of 7.5%. However, by 2020, the provincial GDP was 4344.346 billion yuan, representing a year-on-year decrease of 5.0%-its economy has significantly declined. To speed up recovery and stabilize employment, the Hubei Provin-cial Government has increased its borrowing efforts. In 2020, the debt balance of Hubei Province was 1494.933 billion yuan, an increase of 85.934% year-on-year, and the provincial government debt ratio increased by 96.141% year-on-year. If the government does not borrow to increase investment and stabilize economic growth, Hubei's economy will experience more severe fluctuations and decline.
In short, China's provincial government debt has increased significantly under COVID-19. Owing to the slow economic recovery, the debt ratios of certain provinces such as Qinghai and Guizhou remain high, and there is even the possibility of debt crises. Therefore, we need to further explore how the economy can sustainably develop if the new coronavirus epidemic becomes a normal facet of the economy. First, the government must maintain macroeconomic stability and a stable level of government debt, and ensure that no debt crisis occurs. Second, the developed provinces in the east should assist the backward provinces in the west by providing horizontal fiscal expenditures to ensure that all provinces can overcome these difficulties. Third, the coronavirus highlights the ecological environment's importance. To maintain sustainable economic development, the government must increase investment in environmental protection; guide government debt to invest in green environmental protection industries and the green economy; and achieve sustainable economic development through green innovation. Finally, China in 2020 GDP growth rate has dropped by half because of the impact of COVID-19. The influence of COVID-19 has greatly restricted international trade and personnel movement; in order to revive the China's provincial economies, the government should increase the stimulation of domestic demand and simultaneously develop a combination of online and offline methods to promote product sales.
Conclusions
This study examines the impact of local government debt on economic growth and fluctuation, which has important research value. In the context of COVID-19's impact, local governments have increased borrowing, which has stimulated the economy; but local government debt also impacts local economic fluctuations. For example, when the government debt ratio is too high to repay debt, it will cause a debt crisis and have a disastrous impact on sustainable economic development. The data used here are more complete than those of previous studies and have been updated to 2020, which tests the impact of China's provincial government debt on economic growth and sustainable development with COVID-19. We build an empirical model to test the different impacts and regional differences in the scale of provincial government debt on economic growth and fluctuations. In addition, we verify the non-linear relationship between provincial government debt and economic growth.
From a national perspective, analyzing local government debt's impact on economic growth shows that such debt promotes economic growth, with a coefficient of 0.024. From the perspective of regional heterogeneity, the coefficients for the eastern and western regions are 0.020 and 0.023, respectively, and the role of government debt in promoting economic growth is significantly lower than the national level. The coefficient in the central region is 0.027, and the contribution of provincial government debt to economic growth is higher than the national level.
We empirically conclude that there is a nonlinear relationship between China's provincial government debt and economic growth, which shows an inverted "U-shaped" curve. There may be a theoretical threshold effect between local government debt and economic growth. When the threshold is reached, local government debt has the greatest positive impact on local economic growth. During the sample period, the maximum value of China's economic growth rate corresponds to the threshold point, confirming the above conclusion.
Regarding the impact of local government debt on economic fluctuations, from a national perspective, government debt volatility aggravates economic volatility with a coefficient of 0.008; however, in the eastern and central regions, its impact on economic shifts is not significant. In China's western region, the fluctuation of provincial government debt significantly aggravates changes in the local economy and causes unstable economic operations.
Thus, this study results are important. We propose a new perspective for China's local debt research; that is, China's regions should use government debt to manage the impact of the coronavirus pandemic, prevent risks in debt expansion, alleviate economic fluctuations, and ensure economic and social stability throughout China. This operation has a certain reference significance.
Policy Implication
This study reveals relevant policy implication. The economic development level in the eastern region is leading the country in this category. With strong debt management capabilities and relatively complete market systems, under normal circumstances, the market's self-adjustment mechanism should be relied upon, and it is not appropriate to extensively intervene [65]. The central region's economic endowment is insufficient, its economic foundation is weak, and the industrial structure remains imperfect. This requires actively promoting the reform of the government debt management system as well as rendering scientific and reasonable debt investment decisions. We recommend promoting the upgrading of the industrial structure in general and the entire market through the development of the characteristic economy in order to drive the regional economy's sustainable and coordinated development.
The degree of marketization, industrial structure, and economic development efficiency in the western region are far from those of the country's other two regions. The backward development concept for GDP should be abandoned, and a sound mechanism for evaluating government debt should be established. Government debt's role in promoting the economy and encouraging social capital within public investment should be emphasized. There should also be an appropriate increase in social capital's participation in areas of people's livelihoods, such as science, education, culture, and health.
Limitations
Several important limitations of this study warrant discussion. On the one hand, the sample data volume of this study is not rich enough, because only 30 provincial governments were studied, resulting in an insufficient sample size. In the future, we expect public disclosure of government debt data at the municipal and county levels; alternatively, we can use quarterly data from provincial units to expand the sample size. On the other hand, this study examined the impact of the scale of government debt on economic growth and fluctuation. However, in practice, the influence of different flows of government debt funds on economic growth is obviously different. In future research, we can consider subdividing government debt variables, studying the different effects of government debt flowing into different fields or industries on economic growth, analyzing the corresponding mechanism, and exploring specific ways to improve government debt's sustainable development.
Author Contributions: W.Y. and Y.W. designed this manuscript. Z.Z. wrote this manuscript. P.D. and L.G. collected the data and made scientific comments on this manuscript. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare that they have no known competing financial interests or personal relationships that could have influenced the results reported in this paper. Notes: shows the statistical information of the variables adopted in this paper. When calculating the control variables involving the ratio, we take the value before the percentage sign, that is, enlarge the ratio by 100 times before regression. | 9,249.8 | 2022-01-27T00:00:00.000 | [
"Economics"
] |
Heterologous Expression of Thermolabile Proteins Enhances Thermotolerance in Escherichia coli
Heat shock proteins (HSPs) play important roles in the mechanism of cellular protection against various environmental stresses. It is well known that accumulation of misfolded proteins in a cell triggers the HSPs expression in prokaryotes as well as eukaryotes. In this study, we heterologously expressed two proteins in E. coli, namely, citrate synthase (CpCSY) and malate dehydrogenase (CpMDH) from a psychrophilic bacterium Colwellia psychrerythraea 34H (optimal growth temperature 8 ̊C). Our analyses using circular dichromism along with temperature-dependant enzyme activities measured in purified or direct cell extracts confirmed that the CpCSY and CpMDH are thermolabile and present in misfolded form even at physiological growth temperature. We observed that the cellular levels of HSPs, both GroEL and DnaK cheperonins were increased. Similarly, higher levels were observed for sigma factor σ32 which is specific to heat-shock protein expression. These results suggest that the misfolded-thermolabile proteins expressed in E. coli induced the heat shock response. Furthermore, heat treatment (53 ̊C) to wild type E. coli noticeably delayed their growth recovery but cells expressing CpCSY and CpMDH recovered their growth much faster than that of wild type E. coli. This reveals that the HSPs expressed in response to misfolded-thermolabile proteins protected E. coli against heat-induced damage. This novel approach may be a useful tool for investigating stress-tolerance mechanisms of E. coli. Corresponding author.
Introduction
The accumulation of unfolded or misfolded proteins is one of the major factors leading to increased expression of highly conserved proteins called heat shock proteins (HSPs).The HSPs include the molecular chaperones, such as GroEL/GroES and DnaK, which help cellular proteins to maintain the proper folding required for function [1] [2].It also includes some proteases, such as ClpAP, ClpXP, and FtsH, which degrade the unfolded proteins [3].In E. coli, HSP expression is positively controlled by σ 32 , the alternative subunit of RNA polymerase specific to the heat-shock promoter [4].Under physiological conditions, DnaK chaperone system traps σ 32 to mediate its degradation by proteases, mainly FtsH, an AAA protease associated with the inner membrane [5]- [7].On the other hand, under stress conditions, the DnaK system interacts with the unfolded proteins, and releases σ 32 .As a result, σ 32 activate the transcription of several HSP genes.A similar model is also proposed in eukaryotes, for example, heat shock response mediated by a transcriptional factor, Hsf1 [8]- [11], and unfolded protein response in the endoplasmic reticulum [12].
Psychrophiles can inhabit at lower temperatures, generally in the range of 0˚C -15˚C [13].The transcription of HSP genes in psychrophilic bacterium such as Colwellia maris ABE-1 is induced at much lower temperatures, such as 20˚C, than those of mesophilic ones [14]- [16].Therefore, certain proteins of the psychrophiles may be in misfolded state at physiological growth temperature for most mesophiles.Adaptation of enzymes to cold environments should be essential for the survival and growth of psychrophilic bacteria under cold environmental conditions.Although cold-adapted enzymes exhibit high specific activities at low temperature, they also display pronounced thermolability compared with their mesophilic and thermophilic counterparts.For example, isocitrate lyase of a psychrophilic bacterium, Colwellia psychrerythraea NRC 1004, showed the maximum activity at 25˚C and was completely inactivated by incubation even at 30˚C for 2 min [17].Similar results are reported for citrate synthase from an Antarctic bacterial strain, DS2-3R [18], and for malate dehydrogenase from the psychrophilic bacterium, Flavobacterium frigidimaris KUC-1 [19].
The acquisition of thermotolerance by an organism is correlated with the elevated expression of HSPs.Owing to their protective functions under high-temperature conditions; overexpression of HSPs has been used as a promising technique to improve the thermotolerance of transgenic organisms [20]- [23].To date, most of these trials have been performed using one or two HSP genes.However, if we consider that several types of HSPs function synergistically in living cells, their multiple expressions by gene manipulation should lead to further improvement compared with the expression of specific HSPs individually.
Given that psychrophilic proteins are misfolded at physiological growth temperatures of E. coli, heterologous expression of psychrophilic proteins would increase the expression level of HSPs in transformed cells.Taking these observations together, the thermolabile nature of psychrophilic proteins could be utilized as a signal to induce the synthesis of HSPs at physiological growth temperatures of E. coli that could enhance thermotolerance.Such cells having higher HSP levels can tolerate stress and recovery should be quicker than wild type.In the present study, we transformed E. coli cells with two genes encoding citrate synthase (CpCSY) and malate dehydrogenase (CpMDH) from a psychrophilic bacterium Colwellia psychrerythraea 34H and also their analogues native to E. coli.We investigated the change in cellular levels of HSPs, such as GroEL and DnaK; as well as an alternative sigma factor of RNA polymerase σ 32 .These three factors were analyzed because ordered network between GroE and DnaK is essential for tightly regulating σ 32 activities which is central to the expression of HSP genes [24].We found that the expression of these psychrophilic proteins in misfolded/unfolded state enhanced thermotolerance of E. coli cells due to altered cellular levels of HSPs.
Bacteria, Culture Conditions, and DNA Preparation
The psychrophilic bacterium Colwellia psychrerythraea strain 34H was obtained from American Type Culture Collection (Manassas, USA) grown to stationary phase in Marine Broth (Difco, Lawrence, KS, USA) at 8˚C.E. coli strains JM109 and BL21 purchased from Takara (Japan) were used for the propagation of plasmids and the heterologous expression of recombinant proteins, respectively.Unless otherwise stated, these E. coli strains were grown at 37˚C with vigorous shaking in LB medium supplemented with 50 µg•ml −1 ampicillin when required.Genomic DNAs of C. psychrerythraea and E. coli were prepared by methods described previously [14].
Plasmid Construction
Restriction endonucleases were obtained from New England Biolabs (Beverly, MA, USA) and DNA-modifying enzymes for plasmid construction were from Takara Shuzo (Kyoto, Japan).The coding regions of genes for MDH and CSY of C. psychrerythraea and E. coli were amplified by PCR with KOD plus DNA polymerase (Toyobo, Osaka, Japan) and corresponding PCR primers (Table 1).The PCR products were digested with NdeI and XhoI.The DNA fragments were cloned into the corresponding sites of pET-21b vector (Novagen, Darmstadt, Germany).The resultant plasmids for C. psychrerythraea enzymes were designated as p-CpMDH and p-CpCSY, and those for E. coli enzymes were designated as p-EcMDH and p-EcCSY, respectively.Sequences of all constructs were confirmed by DNA sequencing using an ABI PRISM 310 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA).
Heterologous Expression and Purification of His-Tagged Proteins in E. coli
The plasmids described above were used to transform E. coli BL21, and the resultant transformants were grown at 37˚C until OD 600 of the culture reached 0.5.The heterologous expression of His-tagged recombinant proteins was induced by the addition of 0.1 mM IPTG at 30˚C for MDHs and at 20˚C for CSYs.Cells were harvested and suspended in 5 ml of a solution containing 20 mM phosphate buffer saline (PBS, pH 7.4) and 500 mM NaCl (buffer A).After sonication, soluble cell extracts were obtained by centrifugation (15,000 rpm) at 4˚C for 30 min, and applied to a 1-ml HiTrap chelating column (GE Healthcare, Little Chalfont, UK) that had been equilibrated with buffer A containing 10 mM imidazole.The column was washed with buffer A containing 40 mM imidazole, and His-tagged proteins were then eluted with buffer A containing 500 mM imidazole.Eluted fractions were applied to a 5-mL HiTrap desalting column (GE Healthcare) that had been equilibrated with a solution containing 20 mM Hepes/KOH (pH 7.5), 50 mM NaCl, and 10% glycerol, and purified His-tagged proteins were stored at −80˚C until use.Purity of proteins was checked by SDS-PAGE and protein concentration was determined using a Protein Assay Kit (Bio-Rad, Hercules, CA, USA) with BSA as the standard.
Enzyme Assay
The CSY activity was assayed by measuring the increase of absorbance at 412 nm with an Ultrospec 3000 spectrophotometer (GE Healthcare) using method of [25].The assay buffer (100 µl) contained 110 mM Tris/HCl (pH 8.0), 2.5 mM EDTA, 0.4 mM DTNB, 0.6 mM oxaloacetic acid, 0.2 mM acetyl-CoA, and an appropriate amount of enzyme.The reaction was started by addition of acetyl-CoA into the solution.The MDH activity was assayed by measuring the decrease of absorbance at 340 nm due to the conversion from NADH to NAD + with a UV1800 spectrophotometer (Hitachi, Tokyo, Japan).The reaction solution contained 100 mM Tris-HCl (pH 7.8), 1 mM DTT, 0.2 mM NADH, 1 mM oxaloacetic acid, and an appropriate amount of enzyme in a final volume of 800 µl.The reaction was started by the addition of NADH into the solution.
For the assay of enzyme activity in cell extracts, cells heterologously expressing recombinant proteins were disrupted by sonication and the resultant homogenate was centrifuged at 15,000 rpm for 20 min.The enzymatic activity was determined using soluble fraction as described above.
Circular Dichroism
Protein concentrations were determined by measuring the optical absorption at 280 nm.A circular dichroism (CD) spectra of recombinant proteins were obtained with aJ-800 spectrometer (JASCO), equipped with a Peltier thermo controller and using a path-length of 0.1 cm.From 8˚C to 50˚C, protein sample of 8 μMin Tris-HCl buffer (pH 7.4) was placed in the quartz cell (1 mm thickness) and 32 scans were averaged.The molar extinction coefficients were determined according to the method of [26].
Western Blot Analysis
Cell extracts prepared from each recombinant E. coli cells, as described above, were solubilized and equal amount of processed sample was resolved on 12.5% SDS-PAGE.The proteins were transferred onto a PVDF membrane, Hybo resolved on nd-P (GE Healthcare).HSPs were detected with antibodies against GroEL (Assay Pro, St. Charles, MO, USA), DnaK (Enzo Life Sciences, Farmingdale, NY, USA), and σ 32 (Neo Clone, Madison, WI, USA), using the ECL prime Western Blotting Detection system (GE Healthcare).Using ImageJ program, we measured relative levels of GroEL, DnaK and σ 32 by densitometry analysis of the same area from each lane representing wild type and respective recombinant proteins.
Effects of High Temperature Treatment on Growth of E. coli
E. coli cells were cultivated at 30˚C until OD 600 of the culture reached 0.5.The heterologous expression of recombinant proteins was induced by incubation with 0.1 mM IPTG for 2 h at 30˚C for MDHs and at 20˚C for CSYs.The concentration of these cultures was adjusted as OD 600 = 0.5, and then incubated at high temperature of 53˚C for 15 min.For the dot assay, each culture was diluted serially (1:10, 1:100, 1:1000, and 1:10000), and 2.5 µl of each sample was spotted onto LB agar plates and incubated at 30˚C for cells heterologously expressing MDHs and at 20˚C for cells heterologously expressing CSYs.
For the growth curve assay, cells were grown and expression of recombinant proteins was induced as described above.After treatment at high temperature of 53˚C for 15 min, cells were grown in LB medium at 20˚C or 30˚C and absorbance at 600 nm was measured at regular time interval.
Thermolability of CpCSY and CpMDH
In order to confirm whether CpCSY and CpMDH are thermolabile proteins, we first compared the temperature-dependent enzymatic activity of recombinant CpCSY and CpMDH with that of CSY and MDH from E. coli (EcCSY and EcMDH).Overexpressed proteins were purified by Ni-affinity chromatography and purity was analyzed by CBB staining after SDS-PAGE separation (Figure 1).
These purified proteins were incubated at various temperatures (8˚C, 15˚C, 25˚C, 30˚C, 40˚C and 50˚C) for 1 h and then their residual enzymatic activity was determined (Figure 2(a)).The residual enzymatic activity of CpCSY, CpMDH, EcCSY and EcMDH was reduced to a half level at ~16˚C, ~22˚C, ~43˚C, and ~50˚C respectively.These results indicate that CpCSY and CpMDH were more thermolabile than EcCSY and EcMDH respectively; i.e. the temperatures causing inactivation of C. psychrerythraea enzymes under in vitro conditions were much lower than those of E. coli enzymes.Also, among the two psychrophilic proteins, CpCSY was found to be more thermolabile than EcMDH.
The CD spectrum for each recombinant protein (Figure 2(b)) is characterized by two negative peaks at 207 nm We also examined the thermolabile nature of CpCSY and CpMDH by measuring relative enzymatic activity directly in crude cell extracts from cultures grown at 20˚C and 30˚C respectively (Figure 3(a)).The enzymatic activities showed 30% and 50% reduced activity for CpCSY and CpMDH compared to those corresponding EcCSY-and EcMDH-expressing cells respectively.Since the cellular expression levels of recombinant proteins (CpCSY compared to EcCSY or CpMDH compared EcMDH) were almost the same in E. coli (Figure 3(b)), lower enzymatic activities in CpCSY-and CpMDH-expressing cells likely to be due to the misfolded-psychrophilic proteins as observed in CD measurements or in vitro enzymatic assay using purified proteins.
Heterologous Expression of CpCSY and CpMDH in E. coli Cells Enhanced the Expression of Heat Shock Proteins
The cellular level of HSPs was determined by Western blotting analysis in heterologously expressing CpCSY, CpMDH, EcCSY and EcMDH and compared to that of cells transformed with empty vector as negative control in strain BL21.E. coli cells heterologously expressing CpCSY were grown at 20˚C and accumulation of GroEL and DnaK was found to be significantly higher (Figure 4(a)).The densitometric evaluation revealed that the levels of GroEL and DnaK were 3.9-fold and 3.5-fold higher than those of vector control (Figure 4(b)), respectively.In contrast, the heterologous expression of EcCSY resulted in a slight increase in levels of HSPs (Figure 4(a) and Figure 4(b)).The levels of GroEL and DnaK in E. coli cells heterologously expressing CpMDH were also higher than those of vector control (BL21) and cells heterologously expressing EcMDH (Figure 4(b)).
We further determined the cellular level of σ 32 in each transformed E. coli.As shown in Figure 4, the cellular level of σ 32 increased in CpCSY-and CpMDH-expressing cells (15.9-fold and 22.4-fold respectively).However, the heterologous expression of EcCSY or EcMDH had almost no effect on the relative level of σ 32 .Overall; these results demonstrated that the heterologous expression of thermolabile proteins, CpCSY and CpMDH, increased the levels of GroEL, DnaK and σ 32 in E. coli cells.
The Heterologous Expression of CpCSY and CpMDH Enhanced Thermotolerance of E. coli
Since heterologous expression of CpCSY and CpMDH increased the cellular level of GroEL, DnaK and σ 32 in E. coli, we examined its effect on their thermotolerance levels.The transformed E. coli cells were incubated at 53˚C for 15 min, and then spotted onto LB agar plates.These plates were grown under permissive conditions and cell viabilities were compared.When the cells were grown on the plate without IPTG in LB medium (no induction of each recombinant protein), and exposed to 53˚C for 15 min, there was almost no growth observed in either of the cells (Figure 5(a)).However, in the presence of IPTG in LB medium, E. coli cells hetero-logously expressing CpCSY and CpMDH exhibited better survival and quicker growth recovery than cells containing empty vector or cells expressing EcCSY and EcMDH (Figure 5(a)).
After 53˚C treatment for 15 min, we further analyzed the growth of each recombinant E. coli (CSY-expressing cells grown at 20˚C, and MDH-expressing cells at 30˚C) at regular time interval by measuring optical density values at 600 nm.Growth curve was plotted using optical density values against time interval (Figure 5(b)).This data imply that E. coli cells expressing CpCSY and CpMDH showed remarkable growth even after hightemperature treatment.On the other hand, cells expressing EcCSY and EcMDH or those containing empty vector showed comparatively lower growth rates.Cell survival and growth rate (without IPTG induction) after high temperature treatment (53˚C) was comparatively lower and almost negligible in diluted fractions (Figure 5(a)).These results demonstrated that heterologous expression of thermolabile proteins (CpCSY and CpMDH) helped E. coli cells to acquire the enhanced thermotolerance.
Discussion and Conclusions
There are numbers of reports about cold-adapted enzymes native to psychrophilic bacteria [27]- [31].It has also been reported that several enzymes from psychrophilic bacteria exhibit thermolability compared with their mesophilic and thermophilic counterparts [18] [19].Our in vitro analysis using CD measurements and loss of enzymatic activities at comparatively lowers-temperatures demonstrated that the recombinant CpCSY and CpMDH were misfolded at physiological growth temperature of E. coli (Figure 2).Thus, CpCSY and CpMDH were selected to investigate the enhanced thermotolerance of E. coli.
It is well established that accumulation of unfolded or misfolded proteins in a cell is one of the most important factors that triggers heat shock response [4].Under physiological conditions, the DnaK chaperone system inactivates sigma factor σ 32 which is asubunit of RNA polymerase specific to the heat shock promoter in E. coli.The DnaK interact directly with σ 32 that mediate its degradation by proteases, such as FtsH [5]- [7] [24].However, misfolded/unfolded proteins produced under stress conditions compete with σ 32 for DnaK binding.The σ 32 released from DnaK becomes available for the expression of HSP genes.This hypothesis has been supported by the observation that the production of structurally unstable firefly luciferase resulted in elevated levels of HSPs in E. coli [32].In this study, we found that the levels of not only GroEL and DnaK but also σ 32 increased inE. coli cells that heterologously expressed CpCSY and CpMDH (Figure 4).Given that CpCSY and CpMDH are thermolabile proteins, heterologously expressed CpCSY and CpMDH should be in misfolded/unfolded state even at physiological temperatures, such as 20˚C to 30˚C, in E. coli.Thus, it is most likely that the misfolded/ unfolded CpCSY and CpMDH sequester the DnaK chaperone system resulting in release of σ 32 .This situation caused the induced expression of GroEL and DnaK simultaneously.Taken together, our results clearly demonstrate that the heterologous expression of thermolabile proteins allows the heat shock response to be induced at physiological-growth temperature.
HSP families play important roles for cellular protection against various environmental stresses, such as hightemperature, salt, desiccation, organic solvent and oxidative stresses [20] [22] [23] [33] [34].Present study showed that the E. coli cells heterologously expressing CpCSY and CpMDH acquired enhanced thermotolerance (Figure 5).This suggests that induction of HSPs by thermolabile proteins contributes to the cellular protection not only from not only to the high-temperature; it may also acquire enhanced tolerance for other kinds of stress such as oxidative stress.Therefore, heterologous expression of thermolabile proteins may be a useful approach for increasing tolerance to various environmental stresses.
Induction of heat shock response by misfolded proteins also reported in eukaryotes [35] [36].Accordingly, our approach may provide a tool for the improvement of stress tolerance not only in E. coli but also in yeasts and plants.To date, many reports have described the effects of constitutive overexpression of only one kind of HSP and its effects on stress tolerance [22] [23].In contrast, approach used in this work allows induction of various HSPs expression simultaneously.Further studies will clarify the effect of multi-induction of HSP production on various environmental stresses.Furthermore, detailed analysis of the expression patterns of HSPs in these transformants will provide important information on the mechanism by which misfolded/unfolded proteins upregulate the stress response.
Figure 2 .
Figure 2. (a) Thermolability of recombinant CpCSY, CpMDH, EcCSY and EcMDH.Proteins overexpressed and purified from Escherichia coli.Each protein was incubated for 1 h at designated temperatures and then rapidly cooled on ice.The residual activity was determined as described in Materials and Methods.Symbols: CpCSY (filled circle), EcCSY (gray circle), CpMDH (filled square), and EcMDH (gray square).Residual activity was calculated in terms of percentage compare to that before the incubation.(b) Representative spectra of circular dichromism measurements for all four proteins.
Figure 3 .
Figure 3. (a) Analysis of enzymatic activities measured directly in cell extracts.Soluble fraction of cell extracts from E. coli cells harboring only the pET21-b vector (white bars), cells over-expressing EcCSY or EcMDH (gray bars), and cells overexpressing CpCSY or CpMDH (black bars) were used for enzyme assay.Relative activities of CSYs and MDHs against respective vector control (pET21-b, abbreviated as WT, wild type) were plotted in graph.(b) The cell extracts were also processed for analysis by SDS-PAGE (12.5% of polyacrylamide-gel percentage) and visualized by CBB staining.Arrowheads indicate the bands of corresponding over-expressed proteins.The MW of each band in protein marker is indicated above the respective bands.All samples were electrophoresed in the same gel and the SDS-PAGE data are rearranged to facilitate better comparison between amounts of expressed proteins.
Figure 4 .
Figure 4. (a) Levels of GroEL, DnaK and σ 32 determined by western blot analysis.(b) Numbers indicate levels of GroEL, DnaK and σ 32 in recombinant E. coli relative to those in vector control (wild type, abbreviated as WT) cells.Using ImageJ program, relative levels were calculated by densitometry analysis of the same area from each lane corresponding to migratory position of respective protein.
Figure 5 .
Figure 5. Thermotolerance of recombinant E. coli.(a)Dot assay of each recombinant E. coli was done for analyzing effect of high temperature on cell survival.Cells were grown, and IPTG was added to induce the expression of CSYs at 20˚C or MDHs at 30˚C for 2 hr.After incubation at 53˚C for 15 min, cells were serially diluted 10-fold with LB, and 2.5 µl of each suspension was spotted onto LB agar plates.These plates were incubated overnight at respective permissive temperatures.(b) Growth curves for each recombinant E. coli.Cells expressing each recombinant protein were grown in LB medium till OD 600 reached to 0.5 and then IPTG was added to induce the protein expression.The 2 hr grown cultures were incubated at 53˚C for 15 min.After that, these cultures were grown at respective permissive temperatures.Cell growth was determined by measuring the absorbance at 600 nm.Symbols: wild type (triangale) CpCSY (filled circle), EcCSY (gray circle), CpMDH (filled square), and EcMDH (gray square).
Table 1 .
Oligonucleotide primers used for PCR amplification. | 4,957.4 | 2016-08-02T00:00:00.000 | [
"Biology"
] |
Visualizing regulatory interactions in metabolic networks
Background Direct visualization of data sets in the context of biochemical network drawings is one of the most appealing approaches in the field of data evaluation within systems biology. One important type of information that is very helpful in interpreting and understanding metabolic networks has been overlooked so far. Here we focus on the representation of this type of information given by the strength of regulatory interactions between metabolite pools and reaction steps. Results The visualization of such interactions in a given metabolic network is based on a novel concept defining the regulatory strength (RS) of effectors regulating certain reaction steps. It is applicable to any mechanistic reaction kinetic formula. The RS values are measures for the strength of an up- or down-regulation of a reaction step compared with the completely non-inhibited or non-activated state, respectively. One numerical RS value is associated to any effector edge contained in the network. The RS is approximately interpretable on a percentage scale where 100% means the maximal possible inhibition or activation, respectively, and 0% means the absence of a regulatory interaction. If many effectors influence a certain reaction step, the respective percentages indicate the proportion in which the different effectors contribute to the total regulation of the reaction step. The benefits of the proposed method are demonstrated with a complex example system of a dynamic E. coli network. Conclusion The presented visualization approach is suitable for an intuitive interpretation of simulation data of metabolic networks under dynamic as well as steady-state conditions. Huge amounts of simulation data can be analyzed in a quick and comprehensive way. An extended time-resolved graphical network presentation provides a series of information about regulatory interaction within the biological system under investigation.
Background
Research projects in systems biology produce large amounts of data that usually spread over various 'omics' domains, are time dependent or belong to different organisms and physiological conditions. Irrespectively of whether these data are produced in a wet lab or on a computer, the evaluation requires visualization techniques representing as much information as possible in an intuitive way. Clearly, the direct visualization of data sets in the context of a biochemical network drawing is one of the most appealing approaches in this field.
This contribution is concerned with data visualization in the context of metabolic networks. It focuses on the representation of an important type of information given by the strength of regulatory interactions between metabolite pools and reaction steps. The following brief survey of visualization methods for metabolomic and fluxomic data shows that up to now metabolite pool size and flux data have been represented mainly in a network context whereas appropriate concepts to visualize regulatory information are missing.
Visualization methods
In general, a metabolic network is drawn as a directed graph where the nodes represent metabolite pools and the edges represent chemical reaction steps. As biochemical reaction steps can have multiple substrates and products, a hypergraph with multi-source multi-target edges is commonly used [1][2][3][4]. Alternatively, by introducing a second set of nodes representing the reactions, the hypergraph can be transformed into a bipartite graph with directed one-to-one edges (cf. Figure 1). This clearly has some consequences for the possible types of information visualization.
Some other conceptional differences found in the literature are whether metabolites or fluxes are allowed to be duplicated in order to avoid edge intersections or whether cometabolites are distinguished optically from reaction substrates and products. In any case, the data to be visualized can be linked directly to the nodes or edges of the network. This can basically be achieved in the following ways.
• The most primitive way to represent data in the network context is to annotate the nodes or edges with textual tags (cf. Figure 2a). Although this is not really a graphical representation, it has the big advantage of being precise and offers the possibility of representing non-quantitative information along with the network [5,6].
• A direct representation of metabolite or flux data is given by mapping numerical values to the size, color or shape of the drawn network nodes [7,8]. For example, pool sizes are frequently visualized by bar plots, level meters or the size of the respective pool symbol (cf. Figure 2b,c). If fluxes are modeled as separate nodes in a bipartite graph, the same visualization options are available.
• The situation becomes more difficult when dynamic (i.e. time-dependent) data have to be displayed. One option then is to show time course plots along with the network nodes or edges (cf. Figure 2d). Another idea is to use dynamic visualization features by producing videos with changing pool size and flux data over time. Taking snapshots from this video can be interactively facilitated by using a slider [8].
• Another frequent task is the visual comparison of different data sets that are related to different physiological conditions, different organisms, experiments versus Three possible graphs representing a small metabolic network simulations or dynamic system states at different times. In this situation the direct representation by changing the appearance of nodes or edges is still applicable. Typically, this results in a bar chart replacing or annotating the network nodes [9].
• Another option is the representation of multiple copies of the whole network. As a special case, 2.5D representations for data comparison have been developed by stacking network plots in three dimensions [10,11].
• Finally, the comparison of different time plots along with the symbols is even capable of comparing several complete time courses, although this approach becomes difficult to percept with a growing number of curves.
Regulatory information
All of these methods are well established and implemented in various software tools for network-based visualization [5][6][7][8][9][10][11]. However, there is still one important type of information missing that is very helpful for interpreting and understanding the function of metabolic networks. It is related to the strength of regulatory interactions between metabolite pools and the reaction steps influenced by these pools. Biologists are used to including graphical representations of these interactions by drawing interaction edges connecting pools and fluxes. These edges are usually labeled with a plus or minus sign for activating or inhibiting interactions, respectively. Clearly, this representation is only possible when fluxes are explicitly displayed as nodes in a bipartite graph.
Interestingly, this qualitative regulatory information has never been represented in a quantitative way in the available visualization tools. This would be a valuable complement to the already displayed pool size and flux data. If, for example, a flux is down-regulated although its substrate pools are at high levels and product pools are at low levels, the cause must be an inhibitory effect of some other metabolite pool. Thus, the incorporation of additional edges for inhibitors and activators would help to explain why metabolic fluxes are at their present levels.
The major problem here is obtaining a precise definition of what is meant by the 'regulatory strength' (RS) of an interaction. The goal of this contribution is to develop such a definition which is suitable for the intuitive interpretation of data under dynamic and steady-state conditions. Clearly, such a definition can only be reasonably given for the case of simulated data because some information on the reaction kinetics of the involved steps is needed to establish a meaningful RS definition.
This contribution is organized as follows. A novel concept for the determination of the RS of effectors in enzyme-cat-alyzed reactions is presented in the first two sections. Next, a general definition for the RS is given followed by a description of the visualization approach. Finally, we provide an example to demonstrate the practical significance of the proposed method, where the whole concept is applied to a relevant dynamic model system of E. coli.
The concept of RS
Properties of RS Before explaining in detail how the RS for metabolite pools influencing reaction steps is defined, a list of properties is given that should be reasonably fulfilled by the new concept. The driving force behind these properties is to ensure a maximum of intuitive interpretability and to avoid an overload of information for the user.
(i) A RS is defined for all effectors (i.e. inhibitors or activators) of a reaction step which are not contained in the set of substrates or products. These effectors can be identified immediately from the corresponding reaction kinetic expression.
(ii) One numerical RS value should be associated to any effector edge contained in the network. Thus, it is possible to visualize RSs directly in the network context. Any of the already-mentioned visualization techniques for pool sizes and fluxes might be used for this purpose.
(iii) The RS of an effector with respect to a reaction step has to be calculated from the momentary values of pool sizes and fluxes in the network with the additional knowledge of the respective reaction kinetic formula and parameters. Consequently, RS is a time-dependent quantity which does not depend on the history of a current system state.
(iv) The RS should express how strong an influence a reaction step has on a given reaction rate. Moreover, it should distinguish between activation relations (positive sign) and inhibition relations (negative sign). In the visualization the cases can be distinguished between easily by using different colors.
(v) The RS should be approximately interpretable on a percentage scale where 100% means the maximal possible inhibition or activation and 0% means the absence of a regulatory interaction.
(vi) If many effectors have an influence on a certain reaction step, the respective percentages should indicate the proportion in which the different effectors contribute to the total regulation of the reaction step.
Here some comments concerning the reasoning and rationale behind these properties are appropriate. Refer-ring to item (i), the definition of RSs for reaction substrates and products (reversible reaction only) is, in principle, possible. However, the obtained values would not indicate any metabolic regulation, but rather how strong the reaction is driven by the availability of substrates and products, respectively. This information can be directly represented by the visualization of metabolic pool sizes and fluxes. In most cases the effectors of an enzymatic reaction are not consumed by the reaction step itself. The only exceptions are substrate and product inhibition mechanisms which are explicitly denoted in the reaction kinetic formula and would also, therefore, be covered by the RS definition.
It is reasonable to quantitate the effector influence by exactly one RS value, otherwise the multitude of visualized information is likely to become confusing (item (ii)). Moreover, for the RS calculation, the general assumption is made that a certain effector molecule modulating an enzymatic reaction step is instantaneously available and distributed equally over the whole cell (item (iii)).
The properties in items (iv)-(vi) are important for a meaningful and intuitive interpretation of RSs. In particular, the distinction between activators and inhibitors is of fundamental importance with respect to the underlying effect of a metabolic enzyme regulation. With regards to the practical implementation of RS values, the definitions of lower and upper bounds are indispensable. In addition, applying a percentage scale facilitates the reception of information.
Conceptual problems in the definition of RS
When trying to construct a RS measure that fulfills these conditions it became clear that different approaches are possible and some decisions have to be made. Moreover, it turns out in the following that the above-mentioned requirements are not completely free of contradictions so that some compromises are necessary. However, it is important to note that the precise value of a displayed quantity plays no role in a graphical visualization, but, rather, it is the rough order of magnitude that is important. Thus, contradictions are not important if they can be resolved by sacrificing some numerical precision.
One conceptional difficulty with the introduction of a RS is that the activation or inhibition state of a reaction step in relation to the state where all activators or inhibitors are absent must be quantitated by exactly k values, where k is the number of effectors. This immediately indicates the implicit assumption that activators and inhibitors act independently in a reaction. In contrast, it is well known from enzyme kinetics that this is not always the case. However, if correlations between the influences of different effectors have to be taken into account, further coeffi-cients of higher order are needed to characterize this correlation. Clearly, this would prevent us from implementing an intuitive network-based visualization.
A well-known family of methods that assigns exactly one coefficient to each effector are the sensitivity-based methods from which the elasticities defined in metabolic control theory are the best-known example. Although elasticities play an important role in metabolic control theory, they are certainly not the right quantities to be used for visualization in the way specified above. This can be explained easily with an inhibitory relationship expressed by a multiplicative hyperbolic term in a reaction kinetic expression (here S is substrate concentration and I is inhibitor concentration): In this example the sensitivity ∂r/∂I tends to zero with increasing inhibitor concentration which would erroneously indicate that the inhibitor has no effect on reaction flux. Obviously, the opposite is true and RS should tend to -100% in this case. Consequently, when used for network visualization, elasticities rather produce non-intuitive results.
Likewise, the use of flux control coefficients is not appropriate because these scaled sensitivities reflect the global network regulation (i.e. the joint action of all reaction steps) and thus cannot be interpreted locally for one isolated reaction step in the network.
In this contribution the aim is now to find a quantity expressing how strong a reaction step is up-or down-regulated compared with the completely non-inhibited or non-activated state, respectively. As a first example, in Equation 1 the RS can be reasonable defined by because this is the percentage by which the non-inhibited flux (I = 0) is down-regulated.
In some cases enzymatic reactions are described by kinetic expressions that show no saturation behavior, i.e. the flux continuously increases with increasing activator concentration. As an example for a mechanistic enzyme description with these properties the following kinetic expression is given, describing an allosterically activated enzyme covering n binding sites for an activator: The determination of the RS for the activator A using the formula will not succeed, because the limit calculation for arbitrary high activator concentrations (A → ∞) leads to infinitely high reaction rates r(S, A). Consequently, this results in a value of zero for ν A . For this reason the definition of an upper bound A max for the activator concentration is suitable. This boundary should be chosen according to the expected physiological concentration range of the respective effector metabolite.
Moreover, the corresponding simulated values must also be restricted to this range. In order to derive a general definition for the RS, upper bounds e max for all effectors are defined. However, in the case of kinetics with saturation behavior, the maximum effector concentrations need not be limited to a finite value for RS calculability.
It turns out that for arbitrary reaction kinetic formulae the definition of a RS is not as simple and straightforward as in the example from Equation 1. For this reason, the concept of RS is defined in the following in a step-by-step approach that starts with simple standard reaction kinetic formulae and successively generalizes the introduced concepts to the most general case. At the end it will be possible to apply the concept to any mechanistic reaction kinetics.
Derivation of a general RS definition Example system
As an instructive example, consider an enzymatic reaction where the conversion of one substrate is regulated by two effectors (cf. Figure 3). Some quasi-stationarity assumptions are used for simplicity.
In this system, the inhibitor and the activator are competitive with respect to each other, i.e. the binding of one excludes the binding of the other. The velocity equation for this system in Michaelis-Menten form is [12] This example system is used in the following to derive a general RS definition including different kinetic types.
Enzyme kinetics with one effector First, consider an enzyme that only possesses binding sites for the substrate and one inhibitor. Regarding the reaction scheme given above, such a system can be described by neglecting the activator influence (A = 0). The velocity equation is then given by The most common inhibition mechanisms can be derived from this equation, i.e. competitive (α → ∞), non-competitive (a = 0, α = 1) and partial competitive (a < 1, α > 1) inhibition [12].
In general the influence of inhibitor I on flux r can be quantified as with The term r max, I (S) denotes the reaction rate for fixed substrate concentration S and a negligible influence of the inhibitor (I → 0). In contrast, r min, I (S) is calculated by assuming a high concentration for the inhibitor (I → I max ). For a small inhibition the present flux r(S, I) is close to the maximum occurring flux r max, I (S), i.e. ν I tends to zero. Conversely, for a strong inhibition r(S, I) is near the minimum flux r min, I (S) and ν I tends to -100%. In the case of a partially competitive inhibition, r min, I (S) > 0 holds. Hence, the present flux is scaled between the maximum and minimum inhibitor influence.
The same approach can also be applied to a kinetic expression describing the action of an enzyme possessing only binding sites for the substrate and one activator. The equilibria follow from Figure 3 by neglecting the inhibitor (I = 0). In analogy to Equation 6, the velocity equation is given by For a purely activating influence of A, b ≥ 1, β < 1 must hold. The RS of the activator A is then defined as with For a strong activation, the flux r(S, A) is close to r max, A (S) and, therefore, ν A tends to 100%. Vice versa, if there is only a small activation, r(S, A) is in the range of r min, A (S) Enzymatic reaction system with one substrate and two effectors Figure 3 Enzymatic reaction system with one substrate and two effectors. The substrate S, the inhibitor I and the activator A bind to the enzyme at different sites to yield ES, EI, ESI, EA and ESA complexes. Binding of the inhibitor reduces the affinity of the substrate for the enzyme and/or the rate k p at which product is formed. The activator has the same, but opposite, effect (i.e. the affinity and/or k p increase). and ν A is near zero. Clearly, depending on the sign of the RS value a distinction between inhibiting or activating influences of the effector can be made.
Multiple effectors of equal directed influence
Many enzymatic reactions exist within the different metabolic pathways. The activities of these reactions are regulated by simultaneously operating effectors. As a simple example, consider an enzyme with binding sites for substrate S and two activators A 1 and A 2 . The system is identical to that of Equation 5 if we substitute A and I by A 1 and A 2 , respectively. In addition, the system is subjected to the restrictions that α, β < 1 (partial competitive activation), a, b > 1 (partial non-competitive activation) or both (mixed-type activation) [12].
To quantify the combined effect of both activators, the same approach as described above is chosen, i.e. the present reaction rate r(S, A 1 , A 2 ) is put into a relation to the completely activated and non-activated state, respectively: with The measure defined in Equation 14 gives an indication of the combined effect of both regulators and, hence, is denoted by 'resulting RS' ν res in the following.
Considerably more interesting than the combined effect ν res are the single influences of each activator, i.e. to what extent is the reaction activated by A 1 and A 2 . Having the already-introduced approach in mind, such a quantification is possible by carrying out a limit calculation for only one effector while all other effectors are excluded (i.e. concentration is set to zero). The corresponding RS for the activator A 1 is then defined as with By using Equation 17, the implicit assumption is made that each activator acts independently in the reaction. This is not true for the kinetic expression given in this example. Consequently, the single influences do not sum up to the resulting RS and, hence, an activating effect not equal to 100% can be obtained. A meaningful interpretation of such RS values is impossible and, therefore, a linear scaling of the single influences taking the ν res value into account is applied: Using the determined scaling factor ω the single RS values are now defined as This simple form of a scaling can be applied in the case of reaction kinetics influenced by multiple inhibitors or activators.
Multiple effectors of oppositely directed influence
For the general case of an enzyme regulated by many inhibitors and activators, the already-introduced approach has to be extended once more. We show that a separate quantification of the influences of all participating effectors is possible by some simplifications, i.e. neglecting cross-interactions between effectors. Consider again the system shown in Figure 3 and the corresponding expression for the velocity rate in Equation 5.
The determination of ν res is analogous to Equation 14
based on the following definitions: Owing to the opposite effect of the activator and inhibitor, the reaction rate runs between two scaling boundaries that indicate a maximal activation r max, e (S) or inhibition r min, e (S). Hence, for the calculation of ν res a flux value for the completely non-regulated state must be defined (cf. To decide whether the resulting RS of each effector is activating or inhibiting, the present reaction rate r(S, A, I) is compared with the rate without any regulatory influence: A value of r(S, A, I) greater than r 0, e (S) indicates a resulting activation of the reaction and, hence, the upper and lower scaling boundaries are set to r max, e (S) and r 0, e (S), respectively. The RS of each single effector is determined by choosing a corresponding effector as variable while setting all other effector concentrations to zero. According to this, the RS value of the inhibitor I in the example system is defined as with As already mentioned these definitions do not take the cross-interactions between the influences of different effectors into account. To approximately quantify these correlations a further scaling is applied, also considering the different effect of the activator and inhibitor: Finally, using the determined scaling factor ω the single influences are defined as The resulting percentages and now indicate the proportion in which the different effectors contribute to the total regulation of the reaction step (cf. Figure 4).
General definition of RS
In the following a general definition for the RS is exemplified by using a frequently used kinetic model often applied in dynamic metabolic network models [13,14]. ν ATP ν FBP ν AMP resulting RS (ν res ) of all effectors. Clearly, without any influence of the inhibitor ATP, the Pk reaction is activated. With increasing ATP concentration the RS of this effector also increases leading to a decrease of the Pk flux. At the same time the proportions (ν FBP , ν AMP ) in which the two activators contribute to the total regulation are reduced. Above a value of ATP = 5 mM the regulation is solely determined by the inhibitor (i.e. ν res = ν ATP = 100%).
In order to allow for an automatic calculation of all RSs of a given metabolic network, the whole method is implemented in a Matlab GUI (version 7.2, supplied by The MathWorks Inc., Natick, MA) providing a direct interface to the MMT2 software package that is used for the simulation of dynamic network models [15]. Before starting the RS calculation, the effectors are classified and the upper bounds for all effectors are defined (items 1 and 2 of the general definition) based on the information from a preliminary simulation run. Afterwards the different range boundaries for arbitrary rate formulae, necessary for the RS determination, are sampled according to the timedependent values of respective effector metabolites.
Visualization tool
Along with this contribution, the network-based visualization tool MetVis has been extended for the visualization of RS data. MetVis was introduced in [8] as a tool for visualizing pool size and flux data under highly dynamic conditions. It represents pool sizes by level meters and fluxes by edge width. It also offers features for dynamic visualization and side-by-side network comparison.
A new feature of MetVis is the visualization of RS by edges connecting metabolite pools and reactions which are represented by nodes in a bipartite manner. Once the precise meaning of RS has been defined, the respective data can be generated by a simulation tool and used for visualization.
The results of simulations are usually delivered as a CSV structured file, containing information about the concentrations of metabolites, flows of reactions and the RS val-ues of effectors. In the case of time-varying simulation data, the dynamic metabolic behavior contained in these data is expressed visually with an animation showing changing metabolite pool sizes and changing fluxes represented by differently filled boxes and varying arrow widths, respectively. Motivated by the fact that the metabolite concentrations and flux values can vary greatly, an adequate scaling of the input data is performed. This can be achieved in different ways depending on the scope of the study [16].
To visualize effectors using MetVis, additional edges representing the inhibition or activation effect connecting metabolites with reactions (enzymes) need to be inserted into the designed network. These connecting edges are visualized with a red circle for inhibition and a green circle for activation and are placed next to the affected reaction. The dynamic behavior of effector influences (i.e. the RS data) is displayed by changing the size of the corresponding circles indicating the level of the respective activation and inhibition.
Visualization example Dynamic network model
In the following the benefits and practical significance of the proposed visualization approach are illustrated with the help of a complex dynamic network model of the central carbon metabolism of E. coli, originally published in [14]. This network includes reactions for the glycolysis and the pentose phosphate pathway, which are linked via the sugar transport system (Pts).
The dynamic model is a system of ordinary differential equations consisting of 18 mass balance equations, 30 reaction rates and 7 analytical functions approximating the measured concentration values of cometabolites (AMP, NAD, etc.). All reactions steps are described by mechanistic enzyme kinetics resulting in a total number of 116 parameters. The special feature of this model is the high number of kinetic expressions describing regulatory interactions between metabolites and reactions (cf. Table 1), which makes it suitable as an example system. Simula- tion data of the network model were generated using the software package MMT2 [15].
Simulation results
By applying the presented approach to the example we simulated the influences of all effectors on the respective reaction steps. Figure 6 shows some of the reaction rates determined as in step 3 of the general algorithm. The resultant scaled single RS values are shown in Figure 7. By comparing the time-dependent RS data and the flux values, information on regulatory interactions within the metabolic network is immediately available.
• The RS values for the Pts reaction indicate a very strong inhibition by G6P in the stationary state as well as in the dynamic state after the glucose pulse. Comparing the two fluxes r and r 0, e the enormous potential of an increase in the glucose transport in the case of an absent product inhibition can be clearly seen. • The allosterically regulated enzyme Pfk is characterized by a nearly equal inhibition through all three effectors AMP, ADP and PEP. Interestingly, this is contradictory to the original definition of AMP and ADP having an activating influence on Pfk [17]. An explanation for this effect can be given by a closer look on the kinetic expression and the corresponding parameter values used in the model:
RS determination of the E. coli model
Using the 'in vivo' estimated parameter values K ADP, a = 128 mM, K AMP, a = 19.1 mM, K ADP, b = 3.89 mM, K AMP, b = 3.2 mM, it follows that A > B and, hence, an inhibitory influence of ADP and AMP is indeed present.
• The low activity of the Pk enzyme is solely determined by the generally small value of the maximal reaction rate (r max = 0.06 mM s -1 ) as the resulting effector influence (ν res ) shows a strong activation. The same holds true for the G1pat reaction (cf. Figure 7), where FBP is a strong activator, but the enzyme activity is very low (r max = 0.008 mM s -1 ).
• The PEPCx reaction is described by a kinetic expression where the flux continuously increases with increasing concentration of the activator FBP. In this case the highest possible concentration for the effector is set slightly above the simulated maximum. Accordingly, the curves for r and r max,e nearly conjoined at this time point.
Biological explanation
Referring to the visualized network the high glucose uptake flux via the Pts, which dominates the system directly after the glucose pulse, can be recognized. Obviously, at this early time point after the pulse the G6P converting enzymes, Pgi and G6pdh, are the most limiting steps for the further conversion of the glucose after uptake.
As already mentioned, the Pts reaction is strongly inhibited by its product G6P and thus limits the glucose uptake in the following time course (cf. Figure 9). The activation effect that FBP exercises on G1pat, Pk and PEPCx increases substantially, thus favoring the production of OAA and increasing the consumption of PEP. The decrease in the concentration of the latter also decreases the inhibition effect on Pfk, leading to a general increase in OAA production.
The first two enzymes, G6pdh and Pgdh, catalyzing the entrance reaction into the pentose phosphate pathway are inhibited by NADPH, leading to a 50% decrease in their reaction rates. Despite the low concentration of NADPH in comparison with the second inhibitor ATP of the Pgdh enzyme, the inhibitory effect is dominated by NADPH. The reason for this effect is the very low affinity of Pgdh towards ATP (K ATP = 208 mM).
Conclusion
A visualization approach has been presented that is suitable for an intuitive interpretation of simulation data under dynamic as well as steady-state conditions. Huge amounts of simulation data can be analyzed in a quick and comprehensive way. The visualization of regulatory interactions in a given metabolic network is based on a novel concept defining the RS of effectors regulating certain reaction steps. These RS values are measures for the strength of an up-or down-regulation of a reaction step compared with the completely non-inhibited or non-activated state, respectively. The concept of RS presented here is applicable to any mechanistic reaction kinetic formula.
So far this regulatory information has never been represented quantitatively in the available visualization tools. Hence, for the first time, by applying the proposed concept using the MetVis tool, a visualization of dynamic changes in metabolite pools, fluxes and RS data within the whole network structure becomes tractable. The incorporation of additional edges for effectors closes the information gap between obtained pool sizes and fluxes leading Visualization of the dynamic metabolic model of E. coli Figure 8 Visualization of the dynamic metabolic model of E. coli. The respective dynamic data were generated using MMT2 [15]. The representation shows a metabolic snapshot of the current metabolite concentrations (box levels), metabolic fluxes (arrow widths) and effector influences (circle sizes) at a time point t = 0.1 s after the glucose pulse.
Long-term pulse response of the E. coli network Figure 9 Long-term pulse response of the E. coli network. The visualization represents a metabolic snapshot at a time point t = 23.9 s after the glucose pulse.
to the right interpretation of metabolic fluxes underlying a certain metabolic regulation.
One limitation of mechanistic enzyme descriptions is the large number of model parameters that have to be estimated according to given experimental data. This has led to the development of approximative kinetic rate equations based on the linlog or power law approach. Despite their broad applicability, one major drawback of these kinetic formats is their indeterminacy for concentration values of zero caused by the logarithmization of pool sizes. Clearly, to utilize the RS concept to the linlog and power law approaches, lower boundaries for the effector pools (e min > 0) also have to be defined. By using the complex example system of the dynamic metabolic E. coli network, it has been shown that the extended time-resolved graphical network presentation provides a series of information about regulatory interaction within the biological system under investigation. Quantitative modeling has also become possible in the fields of transcriptomics and proteomics owing to the enormous increase in information available. Regulation at the genome level mainly takes place during the transcription of genes into mRNA, e.g. inhibition of RNA-polymerase through the binding of certain repressor proteins [18]. In contrast, the protein function is regulated by post-translational modifications such as phosphorylation [19].
The formulation of 'vertical' network models combining all levels of regulation (genome, transcriptome, proteome, metabolome) will help us to gain an insight into the complex cellular network in its entirety. In the case of mechanistic descriptions for the reactions taking place in the different 'omics' levels, the concept of RS can be applied to quantify and visualize all regulatory interactions.
Symbols used
a, b, rate constant factors; e max , vector of maximal effector concentrations; K i , affinity constant for a certain metabolite i; r j , reaction rate of enzyme j; r max , maximum reaction rate; r min, e , r max, e , minimal and maximal reachable flux for variable effector pools; r 0, e , flux in the completely nonregulated state; S, A, I, substrate, activator and inhibitor pool; s, e, a, i, vector of substrate, effector, activator and inhibitor pools; α, β, affinity constant factors; , scaled single effector influence; ν res , resulting RS; ω, scaling factor.
Glossary
Effector: metabolite that modulates an enzyme-catalyzed reaction leading to an acceleration (activator) or deceleration (inhibitor) of the reaction rate.
Regulatory strength: measure for the strength of an up-or down-regulation of a reaction step compared with the completely non-inhibited or non-activated state.
Competitive inhibition: metabolites which are not substrates of the enzyme can bind to the active site and compete with the substrate.
Non-competitive inhibition: an inhibitor can bind to the enzyme substrate complex (ES) forming a complex (ESI) that reduces the amount of active enzyme.
Allosteric inhibition or activation: the enzyme has additional binding sites for specific inhibitors or activators which can change the conformation resulting in a change of the enzyme activity. | 8,136.8 | 2007-10-16T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Lifetime prediction of seLf-Lubricating sphericaL pLain bearings based on physics-of-faiLure modeL and acceLerated degradation test prognozowanie czasu pracy samosmarujących łożysk śLizgowych w oparciu o modeL fizyki uszkodzeń oraz przyspieszone badania degradacji
A spherical plain bearing (SPB) consists of an inner ring with an outer convex spherical surface and an outer ring bore with inner spherical surface (see Figure 1) [3]. Due to small friction coefficient and no need for lubrication during operation, self-lubricating spherical plain bearings (SSPBs) with self-lubricating liner are widely used in operation and transmission systems in aerospace, nuclear power plants, and ship equipment and they are key components of these systems. SSPBs failure will directly affect the operational reliability and safety of the Yashun WAng Xin FAng Chunhua ZhAng Xun Chen Jinzhong Lu
Introduction
A spherical plain bearing (SPB) consists of an inner ring with an outer convex spherical surface and an outer ring bore with inner spherical surface (see Figure 1) [3].Due to small friction coefficient and no need for lubrication during operation, self-lubricating spherical plain bearings (SSPBs) with self-lubricating liner are widely used in operation and transmission systems in aerospace, nuclear power plants, and ship equipment and they are key components of these systems.SSPBs failure will directly affect the operational reliability and safety of the Yashun WAng Xin FAng Chunhua ZhAng Xun Chen Jinzhong Lu
Lifetime prediction of seLf-Lubricating sphericaL pLain bearings based on physics-of-faiLure modeL and acceLerated degradation test prognozowanie czasu pracy samosmarujących łożysk śLizgowych w oparciu o modeL fizyki uszkodzeń oraz przyspieszone badania degradacji
Due to small friction coefficient and no need for lubrication during operation, self-lubricating spherical plain bearings (SSPBs) have been widely used in operation and transmission systems in aerospace, nuclear power plants, and ship equipment and they are key components of these systems.SSPBs failure will directly affect the operational reliability and safety of the equipment; therefore, it is necessary to accurately predict the service life of SSPBs to define reasonable maintenance plans and replacement cycles and to ensure reliability and safety of vital equipment.So far, lifetime prediction of SSPB has been primarily based on empirical formulae established by most important bearing manufacturers.However, these formulae are lack of strong theoretical basis; the correction coefficients are difficult to determine, resulting in low accuracy of lifetime prediction.In an accelerated degradation test (ADT), the load is increased to accelerate the SSPB wear process.ADT provides a feasible way for accurate lifetime prediction of SSPB in a short period.In this paper, wear patterns are studied and methods of wear analysis are presented.Then, physics-offailure model which considers SSPB wear characteristics, structure parameters and operation parameters is established.Moreover, ADT method for SSPB is studied.Finally, lifetime prediction method of SSPBs based on physics-of-failure model and ADT is established to provide a theoretical method for quick and accurate lifetime prediction of SSPBs.
Keywords: accelerated degradation test, self-lubricating spherical plain bearing, lifetime prediction, physics-
of-failure model.
sciENcE aNd tEchNology
equipment; therefore, it is necessary to accurately predict the service life of SSPBs to define reasonable maintenance plans and replacement cycles and to ensure reliability and safety of the equipment.The lifetime of an SSPB strictly relates to friction and wear characteristics of the liner material, while the variation in load has a great influence on wear life.The current lifetime model of SPB is based on empirical formulae established by the most important bearing manufacturers (such as SKF, NTN, INA).However, these formulae come from experimental data and lacking in theoretical basis; it is difficult to determine the formula correction factor, resulting in low accuracy of lifetime prediction, wide interval of predicted lifetime.
Wear is the main failure mode of the SSPB, and wear process is very slow in service.If lifetime of SSPB is predicted by traditional life tests or degradation or wear tests, it is a great challenge to complete the tests in a short or feasible period of time.To overcome this issue, accelerated degradation test (ADT) can be applied in which degradation or wear data are collected under higher levels of stress and allowing extrapolation the reliability information at the use condition [12].During an ADT of SSPB, the load is increased to accelerate the wear process, thus the test provides a feasible way for accurate SSPB lifetime prediction in a short period of time.
At present, researches about ADT methods are mainly based on mixed-effects models or stochastic process models.Approaches for data analysis or optimal design of ADT are based on mixed-effects models which includes only one fixed-effects parameter and one random-effects parameter [1, 8, 10, 12-14, 19, 23-26] as well as general mixed-effects model [21].The stochastic process model describes degradation process, and has many advantages.The model is very suitable to describe a time-dependent degradation process in which error terms cannot be assumed to be independent identically normally distributed.Several methods have been developed for ADT based on stochastic process models, such as inverse Gaussian process [16,22], Wiener process [5,6,17], drift Brownian motion process [4,28], and Gamma process [7,18,27].These methods are mainly from statistical perspective and lacking in support of physical rules; thus, the prediction accuracy depends on sample size and model selection.In order to reflect the physical meaning of degradation process, a model based on physical mechanism of degradation is more suitable.Based on the physical mechanism of the degradation of product performance, several researches have been carried out and physical degradation models have been established, resulting in better results of lifetime and reliability prediction [9,11].However, researches on ADT applied to SSPB lifetime prediction based on physics-of-failure model have not been yet considered.
This paper studies wear patterns and briefly discusses the most common methods used for wear analysis.Then, a physics-of-failure model of SSPB in which wear characteristics, structure parameters and operation parameters are integrated is established.Moreover, the paper studies the ADT method for SSPB, and finally lifetime prediction method of SSPBs based on physics-of-failure model and ADT is established to provide a theoretical method for quick and accurate lifetime prediction of SSPBs.
SSPB physics-of-failure model 2.1. SSPB basic wear model
In an SSPB, the inner ring is made of bearing steel and the anti-wear self-lubricating liner is made of macromolecular composite which is usually Polytetrafluoroethylene (PTFE) composites or fabrics.Wear mainly occurs on composite liners affecting the SSPB service life.Abrasive and adhesive wears are the main mechanisms of sliding wear on self-lubricating liner, and they are always concurrent actually.Abrasive and adhesive wears are described by Archard formulae through Equation ( 1) and ( 2), respectively [15,20]: where V is wear volume; k s is wear constant; F N is normal load; H is the rigidity of softer material; σ s is yield strength of softer material, and x is sliding distance.The wear constant k s relates to contact conditions of the rough surface; thus, two wear constants namely, abrasive wear constant (k s in Equation ( 1)) and adhesive wear constant (k s in Equation ( 2)), respectively, exist.Archard's wear calculation equations assume that the wear volumes are directly proportional to normal load and sliding distance, but inversely proportional to the rigidity or yield strength of the softer materials (i.e. the PTFE self-lubricating liner in SSPB).Equation ( 1) and ( 2) have similar structure.Because abrasive and adhesive wears are concurrent when SSPBs work, they cannot be separated in wear calculation.Given the yield strength as the parameter to estimate the resisting wear ability of the liner, the SSPB wear equation is: where σ s is the strength of self-lubricating liner in SSPB.
In practical use, the maximum allowable clearance between the inner and outer surfaces of an SSPB is considered as the wear failure threshold.The total structure clearance s is derived from the initial clearance u 0 and wear deep u, and s=u 0 +u.According to Equation (3), the wear volume or the deep depend on the maximum contact pressure.It is reasonable to assume that maximum contact pressure p 0 is located at center of contact region.Analysis of contact pressure distribution in SSPB showed that p 0 increases with wear clearance s.Considering a minute region near the center point of contact region, whose area is A 0 , the contact pressure p 0 in the area is uniform.Therefore: By substituting (4) into Equation (4), SSPB wear is:
SSPB physics-of-failure model
In SSPB wear process, the radius values of inner and outer ring contact surface R 1 and R 2 are relative quantities.Thus, it can be con- When the wear amount reaches the prescribed threshold u m , i.e., u=u m , a SSPB failure occurs at the corresponding SSPB lifetime T.
As Figure 2 shows, the typical wear process of SSPB can be split into three stages: running-in wear period (RWP) (I), steady wear period (SWP) (II), and intense wear period (IWP) (III).t is the SSPB working time, and u is the SSPB wear amount.During RWP, the wear rate decreases with t for working conditions of contact rough surfaces gets better.Then the wear rate keeps steady; SWP plays a key role in determining SSPB lifetime.Finally, at IWP stage wear rate increases rapidly and working conditions of rough surfaces worsen.In the same way, the wear processes of PTFE self-lubricating liners of SSPBs also can be described by the same three stages, and the inflection points between the three stages may indicate wear conditions of the rough surfaces.
Since material properties and contact characteristics of the friction pair at each wear stage do not vary, we can deem that wear constants at each wear stage keep constant and can be defined as: running-in wear constant k s,I , steady wear constant k s,II , and intense wear constant k s,III , respectively.The dynamic wear process can be described by a physics-of-failure equation: where t I is the time of RWP turning into SWP, that is time of the first inflection point; t II is the time of the second inflection point; T denotes the SSPB lifetime.To some SSPBs, the intense wear periods of dynamic wear process may not occur, and then the third part in the right side of Equation ( 13) will not appear.
Based on the physics-of-failure model as Equation ( 13), the SSPB lifetime can be expressed as: where F denotes the load of SSPB; 0 u denotes the initial average clearance of SSPB; sidered that R 2 =R=d k /2 is constant during the whole wearing process; d k is the diameter of conformal contact surface, and R 1 changes with wear deep u.Therefore: Although the maximum contact pressure p 0 varies with the radius of the inner ring R 1 , p 0 can be considered as constant in a small relative sliding distance dx.So Equation ( 5) can be written as: where du is the wear increase in sliding distance dx.
While working SSPB mainly swings under a swing angle ±α rad and at a swing frequency f s Hz. α and f s may vary with mission profiles, so they are functions of time.Sliding distance in time dt can be written as: Thus: where k s (u) denotes that the wear constant is a function of the amount of wear and may change due to the change of the contact states in conformal surface during operation.Defining the wear rate as the wear deep in a unit time: The wear rate is in proportion to the swing angle and frequency.If wear constant, swing frequency, and angle are constant and the state of friction pair surface does not change, the wear rate increases with the increase of wear depth.Moreover, according to Equation (10) wear constant affects the wear rate.
The wear rate w obtained by Equation ( 10) represents the wear rate of the SSPB related to the structural parameters R of the bearing.The wear constant k s depends on the material and on characteristics of the contact surface of the friction pair, so the wear constant k s of the bearings is a more basic characteristic quantity than the wear rate w.The SSPB wear constant for liners with the same material and characteristics of the contact surface follows the same physical law.
Take integral of both sides of the Equation (9) with the integral limit of the left side [0, u] and of the right side [t 0 , t T ], then the cumulate wear from t 0 to t T is: At constant swing angle α and frequency f s :
SSPB maximum contact pressure p 0
Since the SSPB contact surfaces fit each other well, with respect to SSPB size, the size of contact area cannot be neglected.This situation results in conformal contact issue that cannot be solved using classical theories based on half-space.Fang proposed a universal approximate model for conformal contact and non-conformal contact of spherical surfaces [2].In conflict with completely spherical surfaces, the contact regions of SPBs are incompletely spherical surfaces from which two plane-symmetrical structures have been removed.Based on the model in [2], Fang also proposed a new method to precisely calculate SSPB contact pressure [3].Below, a quick explanation of the calculation of SSPB maximum contact pressure p 0 according to the new proposed method is given.
Let a be the boundary radius of contact area, and h be the half width usually determined by the outer ring of SSPB, and R 1 and R 2 be the radius of the inner and outer ring respectively.
When 0< (1) a<h, the contact region is still a completely spherical surface, and the contact pressure distribution can be deduced by the original Fang's model [2]: h<a≤R 2 , the contact region is an incompletely spherical surface, and the contact pressure distribution can be deduced by the new model [3]: where: and F denotes the normal concentration force; r is the horizontal distance between the point on the surface and the symmetry axis, i.e., the projective distance; n is the pressure distribution exponent; Γ(⋅) denotes the gamma function; E * denotes equivalent modulus: E 1 , E 2 are elastic modulus and μ 1 , μ 2 are Poisson's ratios of material of inner and outer ring respectively.Using Equations ( 15)-( 18), the maximum contact pressure p 0 of a specific type of SSPB under the load F can be calculated.
Identification of inflection points in SSPB wear process
In actual tests, due to the existence of systematic errors and random errors, the dynamic wear curves of the friction pair may have a large fluctuation, and the accurate identification of the three inflection points in wear process represents a great challenge in SSPB wear analysis.In this paper, we propose the identification of inflection points of the dynamic wear curve by using the n-th order polynomial: to fit the dynamic wear curves.In Equation (19), y is the amount of wear; t is the operation time, p i (i=1, ⋯, n+1) are coefficients to be estimated; n is the highest order of the polynomial.If the fluctuation of dynamic wear curve is too large, the curve can be fitted piecewise.
If the curvature K ρ of f(t) gets the maximum value in the local parts of changing region of the wear periods and: then an inflection point is identified.In Equation ( 20), f'(t) and f ''(t) denote the first order and second order derivatives of fit curves f(t), respectively.
Computation of SSPB wear constant
Wear constant k s (k s =k s,I , k s,II and k s,III corresponding to runningin, steady, and intense wear stage) of the SSPB liner and wear curve can be determined by wear tests, and then running-in wear constant, steady wear constant, and intense wear constant can be computed.Due to measurement random errors in the test wear process, wear curve has big fluctuations.Based on the results of inflection point identification, the wear constant k s can be obtained by fitting piecewise which induce to the minimum sum of squared error between experimental and theoretical dynamic wear curve.That is, the estimation of wear constant k s makes the sum of squared error: take minimum value.In Equation (21), N is the number of sampling points of dynamic wear curve u Test [i] from tests; u PoF [i] is wear amount calculated with Equation (12).Equations ( 15) and ( 16) highlight that the maximum contact pressure is not an explicit functions of the interior clearance s, so the integral and u PoF [i] have to be solved using numerical methods [3].When the wear constant is constant in the interval ( , ] x y u u u ∈ , the iterative equation is: sciENcE aNd tEchNology where u 0 is the SSPB initial clearance which can be obtained by measurements.
SSPB acceleration model assumption
As mentioned in Section 2.2, wear constant k s is a more basic wear characteristics than wear rate w.Therefore, SSPB acceleration models describe the relationship between distribution parameters of the wear constant k s and contact pressure.According to engineering experience and prior information, following assumptions are made: Wear constant (1) k s follows lognormal distribution, i.e., LN µ σ 2 , and the probability density function is: where μ k and σ k are respectively the mean and standard deviation of logarithmic k s .
(2) σ k does not change with load, but μ k is affected by load.Relationship between μ kI and load F in running-in wear stage can be written as: where λ 1 , λ 2 , λ 3 are model parameters which can be estimated by nonlinear fitting method.Relationship between μ kII and load F in steady wear stage can be written as: where A, γ are model parameters which can be estimated by nonlinear fitting method.Relationship between μ kIII and load F in intense wear stage can be written as: where A, γ, B are model parameters which can be estimated by nonlinear fitting method.Equation ( 24)-( 26) are the SSPB acceleration models.
Analysis method for SSPB ADT based on physics-offailure model
Based on above physics-of-failure model and acceleration model assumption, ADT data can be analyzed to realize SSPB working lifetime prediction according to following steps: Nonlinear fitting of wear degradation data (1) After obtaining the dynamic wear data (t, u i ) the data fitting can be carried out using the physics-of-failure Equation (13) of the dynamic wear process.The inflection points of the three stages of the dynamic wear process are determined by the method described in Section 2.4, and then the wear constants of three stages are calculated.Following Table 1 lists the data format.
Wear constants statistical analysis (2) Due to the difference between test units and experimental error, wear constants of the three wear stages under each load level are still random.It is commonly assumed that wear constants of SSPB follow normal or lognormal distribution.Without losing generality, in this paper lognormal distribution is assumed, i.e. k s L k Lk ~( , ) LN µ σ 2 (see Equation ( 23)).
Under stress level S i , the maximum likelihood estimation (MLE) of the distribution parameters of the wear constants is: where i=1, ⋯, K, j=I, II, III.
From Assumption (2), the variance in the lognormal distribution of wear constants does not change with load.The standard deviation of wear constants at the three wear stages can be calculated using the weighted average method: where j=I, II, III; n i is sample size under S i .
Fitting the acceleration model parameter (3) ( , ) F i Lkij µ can be obtained by above-described steps where i=1,⋯,K, j=I,II,III.When j=I, estimation of model parameters in RWP can be obtained by fitting acceleration model (24) to data ( , ) When j=II, estimation of model parameters
sciENcE aNd tEchNology
Figure 4 shows the test apparatus whose working mode is swinging.The load is applied to a test bearing by weights and a lever, and the displacement sensor is fixed at the top of the experimental platform.
Four different levels of stress were applied, i.e., 8 kN, 24 kN, 14 kN, and 42 kN.These stresses guarantee invariant failure mechanism because the product of the contact pressure and SSPB sliding speed (pv) does not exceed the maximum value specified by test standards.Four to six specimens were tested under each stress level; Table 3 summarizes the complete ADT plan.According to pre-estimation lifetime of SSPB testing, the tests are censored by specified testing time under low load and are censored by failure (PTFE fabric liner completely worn through) under high load.Before and after tests, SSPB radial clearance was measured using a clearance measuring station.The displacement sensor measured and recorded the variation of the clearance of tested SSPB during tests.
Wear degradation data nonlinear fitting
Test data are analyzed with the presented physics-of-failure method and Figure 5 By using the statistical parameters of the wear constants under use load, the average wear constants at each wear stage can be calculated as: For running-in wear stage, f(⋅) is the inverse function of ϕ(⋅).For other stages, f(⋅) is functions about F of the right side of acceleration models ( 25) and ( 26).Based on average wear constants, dynamic wear process under use load is determined according to Equation ( 13).Moreover, SSPB dynamic wear curve is obtained at given initial clearance.Finally, operation lifetime corresponding to a given threshold can be calculated by Equation ( 14).
Experimental test example 4.1. Experiment specimen and process
The tested specimen shown in Figure 3 is a GE20ET-2RS radial SSPB with two seals at both sides and fractured outer ring.Table 2 summarizes the SSPB main technical features; the friction pair is made of steel and PTFE fabric.Wear constants of specimens in different wear stages can be obtained accurately after nonlinear fitting of physics-of-failure model to wear degradation data.From the piecewise analysis of the dynamic wear process and the inflection point identification following conclusions can be drawn: Running-in and steady wear stages of all the samples can (1) be clearly identified, and the intense wear stage of some specimens is not significant or absent.Two reasons explain this behavior: first, the specimen do not completely fail within the test time, that is, the censored time of the test is less than the time needed for the specimen to go into the intense wear stage.Moreover, the specimen completely fails, but the dynamic wear curve does not include the intense wear stage induced by difference between specimens and experimental errors.To facilitate analysis of accelerated wear data, the addition of intense wear constants for specimen that does not reach the intense ADT wear stage is required.Test results under 14 kN and 24 kN loads show that the three wear stages are all significant and the intense wear constants are approximately equal to runningin wear constants.The wear constants are on the same order of magnitude under 42 kN load.Therefore, an alternative method to add intense wear constants can be approach: if the specimen does not completely fail, the intense wear constant can be static clearance of all complete failed specimens is 241.47 μm.Therefore, in this paper, we take 250 μm as SSPB failure threshold.SSPB inflection points in wear (3) process can be identified by the presented method.Statistical results show that the time corresponding to inflection points relates to the load: the larger the load, the earlier inflection points appear.However, the total amount of wear (the wear depth of the initial clearance after removal of the initial clearance) corresponding to inflection points is not related to the load.In addition, it is a random value for different specimens.
When the average wear depth μ (4) 1 =57.645μm, SSPB specimens turn into steady wear stage from running-in wear stage; moreover, they turn into intense wear stage from steady wear stage when the average wear depth μ 2 =125.747μm.The time during running-in and intense wear stage is short compared to SSPB life cycle, but the wear quantity is large during these two stages and the stages cannot be neglected.Therefore, SSPB lifetime can be extended by raising the ratio of steady wear to total thickness of self-lubricating inner and reducing intense wear rate by improving SSPB structure and forming process.
Wear constants statistical analysis
Due to difference of specimens and test errors, in PTFE SSPBs ADT, wear constants at each stage under each load level are still random.Between the 12 sets of available test data (three wear stages under four different load levels), the Lilliefors test (normality test) highlights that only running-in wear constants under 42 kN do not follow normal or lognormal distribution.Compared to other running-in wear constants under the same load, the running-in wear constant of specimen M19 is 6.1415×10 -7 .Therefore, the M19 may be considered an outlier.Wear constants are assumed following lognormal distribution, i.e. k s k k
Parameters of wear constants lognormal distribution can be obtained by maximum likelihood estimation method and are shown in Table 4.
Standard deviation of running-in, steady, and intense wear constants can be calculated as: (30) Therefore, we can obtain standard deviation of running-in and steady wear constants σ Lk,I =0.1636, σ Lk,II =0.3301 respectively.However, standard deviations of the intense wear constants are larger under high load level and smaller under low load level.In fact, the intense wear is not stable and the high load level makes the unstable state worsen, inducing to bad consistency of intense wear constants.SSPBs lifetime under use load level is intended to predict, so the standard deviation of intense wear constants for lifetime prediction of SSPB can be calculated by weighted average of standard deviation of intense wear constants under low load level as: According to variation trend of the mean parameters of distribution function of running-in wear constants with the load level, it is very difficult to fit commonly used acceleration equations or their transformation forms to the trend, that is, it is very difficult to determine an accurate function of μ k,I =f(F).To this end, this paper uses the inverse function method by fitting function F=ϕ(μ k,I ) to the variation trend of the mean parameters with the load level, where ϕ (⋅) is an inverse function of f(⋅).Equation ( 24) describes the relationship between μ k,I and F according to data of the variation trend.Parameters in Equation ( 24) can be obtained by nonlinear fitting method as: The first order derivative of function ϕ(⋅) is: The inverse power rate model with shift coefficient of Equation (26)describes the relationship between the mean parameters of distribution function of intense wear constants with the load level, and it can be considered as the acceleration model for intense wear constants.The parameters can be estimated as A=−3.843,B=−14.24,γ=−0.3729.Since intense wear constants follow lognormal distribution too, according to the nature of the lognormal distribution the relationship between the average intense wear constant and the contact pressure can also be expressed as: 5.
Based on the above wear constants, Equation ( 13) gives the dynamic wear curve of the SSPB shown in Figure 9 when initial clearance is 15.55 μm.
According to Figure 9, the presented physics-of-failure model can directly describe the dynamic wear process in SSPB life cycle and is very suitable for SSPB lifetime prediction and design analysis.When the load level is 5 kN, the calculated SSPB GE20ET-2RS average wear lifetime is 3178 hours; the swing angle and frequency are ±20° and 0.5 Hz respectively.The acceleration ratio is 23.93 according to the specimens' average lifetime of 132.8 hours under 42 kN, which shows that the acceleration effect is obvious.In addition, specimens under an 8 kN payload have been running for 1200 hours.According to physics-of-failure model and acceleration model, their calculated average lifetime is 2098 hours.Therefore, the SSPB predicted average lifetime of 3178 hours under the 5 kN payload is a reasonable value.
Discussion
In engineering practice, temperature has effect on the tribology properties of self-lubricating liner material.However, in the experimental process in this paper, the ambient temperature is constant.We monitored surface temperature of the testing self-lubricating spherical plain bearings.And fluctuation of the temperature during running-in wear and steady wear stage of the bearings is not more than 2°C.Furthermore, in order to avoid new failure mechanism to be introduced induced by excessive temperature rise, the product of contact pressure and speed on the contact surface (pv) is limited by that pv is less than or equal to 3000N/mm 2 •mm/s according to the results of the analysis of a large amount pilot test.In addition, if the surface temperature of testing self-lubricating spherical plain bearings is greater than 150°C, they can be directly determined as the occurrence of a failure.So the result of accelerated degradation test of self-lubricating spherical plain bearings is credible and the wear law presented in this paper is applicable under occasion that the temperature fluctuation is small.And the main factor of the wear process is load in this case.
For the above considerations, the presented model in this paper does not account for thermodynamic processes and ignores the effect of temperature on the PTFE.Based on the contact mechanics model and failure physical model presented in this paper, future research will focus on the construction of wear failure physics equation under the coupling of temperature and load, and the mechanism and law of ef-fect of temperature and load on wear rate and wear constant, and corresponding method of lifetime prediction of self-lubricating spherical plain bearings based on accelerated degradation test.
Conclusions
In this paper we present a method based on physics-of-failure model and ADT to give a more reliable SSPB lifetime prediction.First, a physics-of-failure model of SSPB in which wear characteristics, structure and operation parameters are integrated is established.Second, acceleration models for running-in, steady and intense wear constants of SSPB are presented.Finally, a GE20ET-2RS radial SSPB with two seals at both sides and fractured outer ring is tested to assess the validity of the presented method.
The proposed physics-of-failure model shows a clear physical relationship between parameters, thus it can be used in SSPB structural optimization design, wear analysis and lifetime prediction.Moreover, it can accurately describe continuous dynamic wear degradation process started from small SSPB clearance.The presented method of identification of inflection points in SSPB wear process considers the characteristics of the wear process, thus it is more objective and in accordance with the engineering practice.SSPB lifetime prediction method is given based on piecewise analysis method.The presented accelerated models can accurately describe the quantitative relationship between wear constants and load of ADT for SSPB.e-mails: wangyashun@nudt.edu.cn,yueguangxin@126.com,chzhang@nudt.edu.cn,chenxun<EMAIL_ADDRESS>
Fig. 1 .
Fig. 1.SPB typical structures represents the service time during which the average accumulate wear deep increases from 0 to I t u (average wear deep before the first inflection time) in RWPs stages.u represents the service time from first inflection point to the second inflection point (average accumulate wear deep increases from I t u to II t u ) in SWPs stages;
Fig. 2 .
Fig. 2. Sketch diagram of wear process -8 shows the results.The fluctuating curve denotes test dynamic clearance, while the straight continuous solid line represents the theoretical dynamic clearance calculated by physics-of-failure model and nonlinear fitting method.The physics-of-failure equation is a continuous function, i.e., the clearance at the end of previous wear stage is equal to the one at the beginning of the next stage.Test results show that based on identification of inflection points in SSPB wear process the physics-of-failure model can accurately describe the wear degradation process under complex conditions.
Fig. 6 .
Fig. 6.Wear process physics-of-failure description.Applied load: 14 kN sciENcE aNd tEchNology added by taking the same value of running-in wear constant.If the specimen completely failed, the the same value of steady wear constant can be considered.Test results show that the average (2)
Fig. 8 .
Fig. 8. Wear process physics-of-failure description.Applied load: 42 kN ϕ(⋅) a monotonically increasing function, its inverse function μ k,I =f(F) is also a monotonically increasing function.Given an arbitrary value of F, μ k,I exists and is unique, and it can be calculated by a numerical method.µ σ 2 , according to the nature of the lognormal distribution the relationship between the average running-in wear constant and the contact pressure is: f(⋅) is an inverse function of ϕ (⋅).Acceleration model for steady wear constants(2) The inverse power rate model describes the relationship between the mean parameters of distribution function of steady wear constants with the load level, which can be taken as the acceleration model for steady wear constants.Parameters for the acceleration model can be estimated as A=−19.87,γ=−0.06214 by least squares fitting.Since steady wear constants follow lognormal distribution too, according to the nature of the lognormal distribution the relationship between the average steady wear constant and the contact pressure can also be expressed as:
4 . 5 .
Lifetime prediction based on Physics-of-failure model Distribution parameters of wear constants and the average wear constants under use stress level can be calculated by substituting F=5 kN into acceleration models (24)-(26) and Equation (34)-(36) summarized in Table
Fig. 9 .
Fig. 9. Average dynamic wear process based on physics-of-failure model under 5kN Laboratory of Science and Technology on Integrated Logistics Support College of Mechatronics and Automation national university of Defense Technology Yanwachi str., 47 Changsha, 410073, China chunhua zhang School of electronic Information hunan Institute Of Information Technology Mao Tang Industrial Park,Changsha (Xingsha) economic and Technological Development Zone, Changsha, 410151, China Xun chen Laboratory of Science and Technology on Integrated Logistics Support College of Mechatronics and Automation national university of Defense Technology Yanwachi str., 47 Changsha, 410073, China jinzhong Lu Fujian Longxi Bearing (group) Corporation Limited no.388,Tengfei Road, Zhangzhou 363000, Fujian, China
Table 1 .
Wear constants of three stages under different stress levels
Table 4 .
Parameters of wear constants lognormal distribution
Table 5 .
Distribution parameters of wear constants and the average wear constants; F=5 kN | 7,782.6 | 2016-09-17T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Humanitarian Logistics Prioritization Models: A Systematic Literature Review
: Background: Disasters have caused suffering across the world throughout history. Different types of disaster events can manifest themselves in different ways, originating from natural phenomena, human actions and their interconnected interactions. In recent years, organizations in charge of disaster management have faced a series of challenges in humanitarian logistics, leading to an increasing consideration of the use of models of prioritization, in most multi-criteria models, to define the best alternatives for more assertive and strategic decision-making. Methods: This article aims to conduct a systematic review of the literature on the application of prioritization models in humanitarian logistics. To this end, an analysis was carried out of 40 articles, indexed in the Scopus or Web of Science databases. Results: The descriptive analysis revealed that the majority of applications are aimed at dealing with sudden-onset natural-induced disasters. However, there are still gaps in relevant areas, such as addressing inventory management problems at a tactical decision level. Conclusions : The development of prioritization models necessitates the integration of various methodologies, combining optimization models with multi-criteria decision analysis to yield superior outcomes. It is advised to incorporate four distinct criteria—efficiency, effectiveness, equity, and sustainability—to ensure a comprehensive assessment of the decision-making process.
Introduction
Throughout history, numerous disasters have affected society.Disasters are events that disrupt the normal activities of a society or community, resulting in human, material, economic, or environmental losses that exceed the affected community's capacity for recovery using only its own resources [1].
Disasters can be classified according to their origin and onset speed [2].Regarding origin, disasters can be classified as natural-induced disasters (e.g., floods or earthquakes) or human-made disasters (e.g., chemical spills or mass migrations).Regarding onset speed, they can be classified as sudden-onset disasters (e.g., tornadoes or terrorist attacks) or slow-onset disasters (such as droughts or political crises).
The earthquake and tsunami in Southeast Asia in 2004 demonstrated the need for greater knowledge and tools to address large-scale disasters [2].Since then, humanitarian logistics has attracted growing interest, involving universities, governments, and other organizations engaged in this field [3].
Currently, this topic is crucial as it contributes to achieving Sustainable Development Goal 11.5, which aims to reduce deaths from disasters and disaster-related vulnerability, especially in light of the continuous increase in the average annual number of naturalinduced disasters [4] Humanitarian logistics is the process of efficiently planning, implementing, and controlling the flow and storage of goods, materials, and related information from the point of origin to the point of consumption, with the aim of alleviating the suffering of people in vulnerable situations [2].
The activities involved in humanitarian logistics have characteristics that make logistics operations more complex, such as the dynamic nature of the problem, resource scarcity, the disruption of transportation and communication networks, and the inability to predict demand, resulting in significant levels of uncertainty [5,6].In catastrophic events, where local supplies are mostly destroyed, private sector supply chains are severed, and distribution complexity escalates, outside help becomes the primary source of relief [7].This situation underscores the critical need for effective prioritization models.Developing these models is inherently challenging, as they must balance trade-offs between efficiency, effectiveness, and equity [8].Given this challenging scenario, the use of prioritization models plays a fundamental role in humanitarian logistics [9].
Prioritization involves organizing and ranking alternatives based on a specific perspective [9].Prioritization models fall into the following categories: multi-criteria, multi-criteria heuristics, and empirical prioritization [9].
Multi-criteria models assist individuals in making choices that reflect their preferences, enabling the identification of the most advantageous alternatives among those available [10].Multi-criteria heuristics is an approach used for complex problems and offers effective solutions within a reasonable computational time [11] Finally, empirical prioritization denotes an approach distinct from formal models, as highlighted by [9].
In humanitarian logistics, there has been a notable increase in the popularity of research employing prioritization models to optimize operations in humanitarian logistics, especially multi-criteria models [12].These approaches enable the enhancement of the overall performance of these operations [13].
Implementing prioritization models in humanitarian logistics presents practical challenges.One notable issue is the discord between the criteria favored by the academic community and the preferences and priorities of field practitioners [14].Additionally, addressing the complexity of post-disaster situations, where support systems are impacted and dynamically changing, is essential [7].
The primary focus of prioritization models is to conduct decision-making in accordance with the interests of the involved parties, even in circumstances of doubt, uncertainty, conflicts, and competition among various viewpoints.In this type of analysis, several relevant aspects can be taken into consideration [13].
Based on everything that has been presented, the purpose of this article is to answer the following research question: What is the current state of knowledge regarding prioritization models developed in the context of humanitarian logistics?The intention is to conduct a descriptive analysis based on the selected articles and discuss potential research directions.
The contributions of this study lie in a comprehensive analysis of the existing literature, surpassing mere description to pinpoint challenges and potential research directions.Particularly noteworthy is its introduction of an extended classification from [8], which now includes sustainability alongside efficiency, effectiveness, and equity.
After this introduction, the rest of the article is organized as follows.In Section 2, we detail the adopted methodology, addressing the criteria for data collection and the selection of categories for the literature review.The results obtained, organized according to the selected categories, are presented in Section 3. A discussion of the results is addressed in Section 4. Finally, some concluding remarks and possibilities for future research related to the topic are outlined in Section 5.
Materials and Methodology
A systematic literature review entails the comprehensive identification, evaluation, and interpretation of relevant research in a specific area [15].According to [16], this process involves critically analyzing articles related to a theme or research question.In this article, we conducted a systematic literature review focused on prioritization models employed in the context of humanitarian logistics.
The methodology adopted for the systematic review is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology [17], which includes the following phases: identification, screening, data extraction and reporting of relevant literature.The PRISMA checklist for Systematic Review [17] is in Table S1, and the OSF (Open Science Framework) register of this review are listed in the Supplementary Materials section of this paper.
During the identification phase, articles were sourced from two databases, Scopus and Web of Science, utilizing keywords listed in Table 1.The two databases were selected due to the configuration of the largest catalogues of indexed journals [18].Only articles categorized as such, in their final stage, published until 2023, and written in English were considered for inclusion.Figure 1 shows a flowchart of our research based on the PRISMA method.Initially, 250 articles were identified (156 from Scopus and 94 from Web of Science) using specified keywords and filters, with additional consideration for articles in their final stage of publication and written in English.Subsequently, 78 duplicate articles were removed from the analyzed databases.
Materials and Methodology
A systematic literature review entails the comprehensive identification, evaluation and interpretation of relevant research in a specific area [15].According to [16], this pro cess involves critically analyzing articles related to a theme or research question.In thi article, we conducted a systematic literature review focused on prioritization models em ployed in the context of humanitarian logistics.
The methodology adopted for the systematic review is based on the Preferred Re porting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology [17 which includes the following phases: identification, screening, data extraction and report ing of relevant literature.The PRISMA checklist for Systematic Review [17] is in Table S1 and the OSF (Open Science Framework) register of this review are listed in the supple mentary materials section of this paper.
During the identification phase, articles were sourced from two databases, Scopu and Web of Science, utilizing keywords listed in Table 1.The two databases were selected due to the configuration of the largest catalogues of indexed journals [18].Only article categorized as such, in their final stage, published until 2023, and written in English wer considered for inclusion.
Keyword Group 1
Keyword Group 2 prioritization model humanitarian logistic multi-criteria humanitarian operation disaster management Figure 1 shows a flowchart of our research based on the PRISMA method.Initially 250 articles were identified (156 from Scopus and 94 from Web of Science) using specified keywords and filters, with additional consideration for articles in their final stage of pub lication and written in English.Subsequently, 78 duplicate articles were removed from th analyzed databases.During the screening phase, following abstract analysis, 116 articles were excluded from the sample due to lack of relevance to the research topic, with the majority focusing on disaster-risk mapping.Subsequently, efforts were made to retrieve the articles, resulting in only one article not being retrieved.A full-text reading was then conducted of the remaining 55 articles, of which 20 were further excluded for not addressing the topic of interest.Ultimately, 35 articles were selected via databases for further analysis.
Following this, a snowballing technique was employed, which entailed conducting citation searches on all the accepted full-text papers.Judgment was utilized to decide whether to pursue these further, leveraging the effectiveness of this technique in identifying high-quality articles, as noted in [19].Through this process, 5 additional articles were identified via citation searching.Finally, the remaining 40 articles were analyzed and classified by two reviewers.
During the data extraction phase, our study utilized a categorization inspired by the research conducted by [20] in the context of humanitarian logistics.This categorization includes general information, type of disaster, phase of the disaster lifecycle, decisionmaking level, and type of problem.Additionally, a specific category pertaining to the prioritization approach was incorporated.
The categories used in this work are presented as follows: • General information: Considers the name of the journal, country of the case study, and year of publication of the article.
•
Disaster type: Considers the classification of disasters proposed by [5], which distinguishes between natural and human-made disasters, as well as the onset speed, categorized as sudden or slow.
•
Disaster lifecycle stage: Divided into four phases-mitigation, preparedness, response, and recovery [8].The mitigation phase aims to reduce society's vulnerability to a hazardous event.The preparedness phase aims to establish strategies and develop the necessary skills to ensure the success of response and reconstruction operations.The response phase begins immediately after the disaster occurs and aims to alleviate the suffering of affected people.Finally, the recovery phase aims to recover and/or improve the community's functioning [21].
•
Decision-making level: Divided into three levels-strategic, tactical, and operational.Each involves long-term, medium-term, and short-term decisions [22].• Type of problem: Divided into three types-location, inventory, and transportation [22].
The first is related to spatial aspects.The second involves demand estimation and inventory policies.Finally, the third is related to distribution and subsequent activities.
•
Prioritization Model: Involves the object of prioritization, the method used, the number of criteria used in the modeling, and the type of criterion used.The classification proposed by [23] was adopted, which categorizes criteria into three groups: efficiency (such as cost), effectiveness (such as time, coverage, distance traveled, reliability, and safety), and equity.
Results
The results of the systematic literature review are organized into the following sections based on the selection of categories presented in Section 2, which include general information, the type of disaster, the phase of the disaster lifecycle, decision-making level, type of problem, and prioritization models.
General Information
The general information presented in this section considers the countries used as case studies, articles by year, and articles by journals.
Table 2 presents the number of articles with case studies applied to different countries and regions, providing crucial insights into research on prioritization models applied to humanitarian logistics.Iran stands out as the leader with eight articles, representing 24% of the total, indicating significant engagement in this field.Most of these articles are related to activities following an earthquake, which is one of the main disasters occurring in this country (e.g., [24]).Haiti and Turkey follow closely, each with four articles.Additionally, it is interesting to note the geographical diversity of the research, covering countries such as China, Indonesia, France, Brazil, and others.The year 2016 saw a notable surge with five articles.However, in 2017, there was a decline, with only one article published.Starting from 2018, there has been a discernible increase in articles focusing on prioritization models, maintaining a steady output of four to six articles annually.
humanitarian logistics.Iran stands out as the leader with eight articles, representin of the total, indicating significant engagement in this field.Most of these articles a lated to activities following an earthquake, which is one of the main disasters occurr this country (e.g., [24]).Haiti and Turkey follow closely, each with four articles.Add ally, it is interesting to note the geographical diversity of the research, covering cou such as China, Indonesia, France, Brazil, and others.The year 2016 saw a notable surge with five articles.However, in 2017, there was a de with only one article published.Starting from 2018, there has been a discernible in in articles focusing on prioritization models, maintaining a steady output of four articles annually.Regarding articles by journal, Table 3 reveals that the journal with the highest nu of articles on prioritization models in the context of humanitarian logistics is Sustaina with four articles.Most of the listed journals have only one article related to the res topic.Regarding articles by journal, Table 3 reveals that the journal with the highest number of articles on prioritization models in the context of humanitarian logistics is Sustainability, with four articles.Most of the listed journals have only one article related to the research topic.
Disaster Type
In this section, results are presented regarding the disaster studied in the articles, as well as whether it is classified as a sudden-onset or slow-onset disaster, as well as a natural or human-made disaster, following the classification provided by [5].
Out of the 40 selected articles, only 28 explicitly mention a disaster, suggesting that the remaining 12 may encompass comprehensive approaches applicable to various types of disasters.
Figure 3 depicts the distribution of applications based on the type of disaster.The results indicate a stronger emphasis on sudden-onset disasters compared to slow-onset disasters, as well as a greater focus on nature-induced disasters over anthropogenic ones.In Figure 4, disasters are presented according to the classification provided by [5].Regarding the origin of the disaster, it is observed that nature-induced disasters significantly represent the majority compared to human-made disasters, with only one article found studying a human-made disaster.Regarding the onset speed, it is observed that slow-onset disasters, which develop gradually over years, are less studied compared to sudden-onset disasters, which occur suddenly and unpredictably, without the possibility of adequate and rapid preparation or response.In summary, human-made disasters and slow-onset disasters are less studied concerning the application of prioritization models in humanitarian logistics.It was observed that the majority of the analyzed studies are related to earthquakes, with 19 articles, which belong to the sudden-onset category and are considered of natural origin.One of the most studied cases in this category is the 2010 Haiti earthquake (e.g., [11,25]).Another important observation is that cases come from countries corresponding to the authors' affiliation country, for example, Iran (e.g., [26]) and Turkey (e.g., [27]), which are countries with a high seismic risk.
The second most studied type of disaster was floods, with six articles.Following that, pandemics were studied in two articles.The other researched disasters include bomb explosions, drought, storms, famine and cyclones, with only one article each.
In Figure 4, disasters are presented according to the classification provided by [5].Regarding the origin of the disaster, it is observed that nature-induced disasters significantly represent the majority compared to human-made disasters, with only one article found studying a human-made disaster.Regarding the onset speed, it is observed that slow-onset disasters, which develop gradually over years, are less studied compared to sudden-onset disasters, which occur suddenly and unpredictably, without the possibility of adequate and rapid preparation or response.In summary, human-made disasters and slow-onset disasters are less studied concerning the application of prioritization models in humanitarian logistics.In Figure 4, disasters are presented according to the classification provided Regarding the origin of the disaster, it is observed that nature-induced disasters s cantly represent the majority compared to human-made disasters, with only one found studying a human-made disaster.Regarding the onset speed, it is observe slow-onset disasters, which develop gradually over years, are less studied compa sudden-onset disasters, which occur suddenly and unpredictably, without the poss of adequate and rapid preparation or response.In summary, human-made disaste slow-onset disasters are less studied concerning the application of prioritization m in humanitarian logistics.
Disaster Lifecycle Stage
In this section, the results related to the phases of the disaster lifecycle invest in the articles are presented.Four phases are considered: mitigation, preparedne sponse, and recovery, as defined by [8].
In Figure 5, the percentage of articles related to each phase can be observed.Th studied phase is preparedness, representing 49% of the total articles.The second
Disaster Lifecycle Stage
In this section, the results related to the phases of the disaster lifecycle investigated in the articles are presented.Four phases are considered: mitigation, preparedness, response, and recovery, as defined by [8].
In Figure 5, the percentage of articles related to each phase can be observed.The most studied phase is preparedness, representing 49% of the total articles.The second most addressed phase is response, accounting for 33% of the articles.Finally, the mitigation and recovery phases are the least studied, each contributing only 9% of the total articles.The mitigation phase aims to reduce the risks and threats associated with disasters through proactive strategies [21].An example is the study by [28], which evaluates investments in projects to increase disaster resilience in communities.Another study is related The mitigation phase aims to reduce the risks and threats associated with disasters through proactive strategies [21].An example is the study by [28], which evaluates investments in projects to increase disaster resilience in communities.Another study is related to the selection of sites for mitigating sources of sand and dust storms [29].Additionally, there is an article discussing the selection of emergency assembly areas after earthquakes [27].
The preparedness phase involves planning response teams and structuring action plans to deal with various types of disasters [21].Most articles are related to location problems, such as shelter locations (e.g., [30]) and the location of humanitarian aid supply depots (e.g., [31]).Additionally, evacuation planning (e.g., [32]) and innovative topics, including the identification of strategic streets for humanitarian operations (e.g., [33]) and partner selection (e.g., [12]), are studied.
The response phase occurs when teams take action to deal with different situations, providing relief and medical supplies to victims.Its goal is to alleviate the suffering of those affected [21].Mainly, activities such as the distribution of humanitarian aid supplies (e.g., [25,34,35]) are studied.Other activities analyzed in this phase include prioritizing areas damaged post-disaster ( [36]) and searching for missing persons (e.g., [26]).
The recovery phase encompasses the rehabilitation of damaged infrastructure, the resumption of normal activities, and support to communities [21].In this phase, the recovery of bridges and roads (e.g., [25]) and the selection of reconstruction projects (e.g., [37]) have been studied.
Decision Level
In this section, the results related to the level of decision-making focused on by the articles are presented.The three levels of classification of [22] are considered, encompassing strategic, tactical, and operational levels.These three decision-making levels are interdependent and complementary, ensuring that humanitarian logistics is efficient, effective, and capable of meeting the needs of populations affected by humanitarian crises.
In Figure 6, it is noticeable that the majority of articles are focused on the strategic level, involving long-term decisions, representing 60% of the total.Next, we have the operational level, which addresses short-term decisions, with 32.5% of the articles.Lastly, we have the tactical level, related to medium-term decisions, accounting for only three articles, corresponding to 7.5% of the total.we have the tactical level, related to medium-term decisions, accounting for only three articles, corresponding to 7.5% of the total.The strategic level involves long-term decisions that have a direct impact on logistics operations [22], including the development of logistics policies, the identification of new intervention areas, and the establishment of long-term partnerships.For this level, concerning humanitarian logistics, mainly articles related to the location of humanitarian aid supply depots were found (e.g., [38]).Articles related to investments in projects (e.g., [28]) and the design of humanitarian supply chains (e.g., [39]) were also identified.
The tactical level refers to medium-term decisions aimed at optimizing operations and available resources [22], such as inventory planning, resource allocation to affected areas, the coordination of partnerships with other organizations, and the adaptation of logistics strategies as needed.At this level, the selection of supply partners (e.g., [12]), shelter location (e.g., [30]), and the identification of strategic highways in the road network The strategic level involves long-term decisions that have a direct impact on logistics operations [22], including the development of logistics policies, the identification of new intervention areas, and the establishment of long-term partnerships.For this level, concerning humanitarian logistics, mainly articles related to the location of humanitarian aid supply depots were found (e.g., [38]).Articles related to investments in projects (e.g., [28]) and the design of humanitarian supply chains (e.g., [39]) were also identified.
The tactical level refers to medium-term decisions aimed at optimizing operations and available resources [22], such as inventory planning, resource allocation to affected areas, the coordination of partnerships with other organizations, and the adaptation of logistics strategies as needed.At this level, the selection of supply partners (e.g., [12]), shelter location (e.g., [30]), and the identification of strategic highways in the road network (e.g., [33]) are studied.
The operational level involves short-term activities and decisions made in day-to-day operations [22], including the management of immediate resources such as transportation, storage, and the distribution of supplies, field team coordination, and the execution of logistical plans.At this level, most articles are related to transportation issues (e.g., [40]).Additionally, other issues such as the location of post-disaster field hospitals (e.g., [41]) are studied.
Problem Type
In this section, results regarding the type of problem are presented.For this purpose, the classification of [22] was used, which divides the problems into three categories: location, transportation, and inventory.The most studied activities in the research will be presented below.
In Figure 7, the number of articles by type of problem and decision-making level is shown.It was observed that the majority of prioritization models in humanitarian logistics are related to location problems, with 20 articles representing 50% of the total and with most of these articles focused on long-term decisions.Next, 37.5% of the articles are related to transportation, where most of the articles are oriented towards the strategic and operational levels.Finally, it was found that inventory-related problems are significantly less studied, with only five articles, with three of them oriented towards long-term decisions, one towards medium-term decisions, and one towards short-term decisions.Facility location problems involve deciding where to locate one or more facilities to serve a set of demand points (e.g., [42]).Most articles address shelter locations (e.g., [43]).Additionally, problems related to locating depots for positioning humanitarian aid supplies (e.g., [31]) are studied.Articles on the location of emergency operations centers (e.g., [44]), hotspots for disaster mitigation (e.g., [29]), and Emergency Meeting Areas (e.g., [27]) were also found.In this type of problem, the most common objective is to minimize the total cost of operations, which includes establishing facilities and meeting demand (e.g., [38]).
Humanitarian organizations transport large quantities of aid for distribution after disasters.These activities include various, often conflicting, performance criteria such as time deprivation, cost, coverage, and asset ownership.Articles related to the distribution of humanitarian aid supplies, especially last-mile transportation within the first 72 h after Facility location problems involve deciding where to locate one or more facilities to serve a set of demand points (e.g., [42]).Most articles address shelter locations (e.g., [43]).Additionally, problems related to locating depots for positioning humanitarian aid supplies (e.g., [31]) are studied.Articles on the location of emergency operations centers (e.g., [44]), hotspots for disaster mitigation (e.g., [29]), and Emergency Meeting Areas (e.g., [27]) were also found.In this type of problem, the most common objective is to minimize the total cost of operations, which includes establishing facilities and meeting demand (e.g., [38]).
Humanitarian organizations transport large quantities of aid for distribution after disasters.These activities include various, often conflicting, performance criteria such as time deprivation, cost, coverage, and asset ownership.Articles related to the distribution of humanitarian aid supplies, especially last-mile transportation within the first 72 h after the disaster, are mainly analyzed (e.g., [35]).This also includes the use of unmanned aerial vehicles (e.g., [40]).Evacuation problems (e.g., [45]) and search for the injured problems (e.g., [26]) are also studied.Additionally, an article on road networks and the identification of strategic highways in humanitarian operations (e.g., [33]) was found.
Inventory management in humanitarian logistics involves deciding which supplies, and in what quantities, to keep in warehouses and distribution centers [22].It is crucial to balance the need to maintain sufficient stocks to meet immediate demand with cost efficiency and waste minimization.In this regard, the authors of ref. [46] propose lateral transshipment as a solution for dealing with surpluses at demand points.Furthermore, proper inventory control plays a key role in ensuring the continuous availability of essential supplies.Therefore, the authors of ref. [12] propose a model for partner selection that considers criteria such as humanitarian chain efficiency, legal issues, sustainability, and transparency, among others.
In everyday life, we face problems analyzing indicators that may conflict with each other, generating trade-offs.In this context, prioritization models as multi-criteria models have been developed to achieve effective and satisfactory decisions for decision-makers [47].In the context of disaster management, these models gain greater importance due to the presence of multiple stakeholders with diverse objectives in an extremely complex, dynamic situation with a lack of information.
Prioritization Models
In this section, the results related to prioritization models are presented, including an appraisal of the prioritization object, method, and criteria, which extend the classification of [23].Table 4 displays a classification based on these considerations."x" indicates that prioritization models present the mentioned class.OM: optimization model, NC: number of criteria, Ef: efficiency, Ev: effectiveness, Eq: equity, S: sustainability.
Prioritization Object
Regarding the prioritization object, it is evident that they find utility in various areas.This demonstrates the complexity and diversity of the challenges encountered in humanitarian operations.For instance, they are used in humanitarian aid distribution (e.g., [11]), distribution centers location (e.g., [53]), shelter location (e.g., [43]), and hospital location (e.g., [41]).
After a disaster, planning routes for vehicles delivering humanitarian aid, such as food, water, medicines, and clothing, to affected populations is essential.Refs.[11,34,35] tackled this challenge by prioritizing the selection of optimal routes for humanitarian aid distribution.They considered conflicting criteria such as security, time, cost, equity, coverage, and reliability.
To enhance the speed of response to a population's needs, warehouses are strategically placed to store humanitarian aid.This aid becomes accessible to assist people in need following a disaster.In this context, ref. [31] prioritizes the establishment of distribution centers for pre-positioning disaster relief supplies.The authors utilize an optimization model followed by a multi-criteria approach, considering factors such as operational cost, proximity to the Civil Defense Regional Director, the availability of human resources, safety, hygiene, and accessibility.
To evacuate people to safe locations, gyms or stadiums are frequently designated as temporary shelters for those whose homes are no longer inhabitable.For instance, ref. [32] developed an optimization model that prioritizes evacuation planning.This model integrates various decisions at this stage, considering factors such as the selection of shelter locations and routing for both public and individual transport, and taking into account both evacuation time and evacuation risk.Some innovative research addresses transport infrastructure recovery (e.g., [25]) and waste management (e.g., [50]), in addition to partner selection (e.g., [12]) and the selection of investment in projects (e.g., [37]).A comprehensive analysis encompasses humanitarian chain design (e.g., [38]).
Method
Regarding the methods employed, the most commonly used ones involve some form of multi-criteria approach.Articles were also found that utilize an optimization approach, and finally, to a lesser extent, the use of heuristics.
MCDA enables the assessment and comparison of various alternatives by considering multiple criteria, aiding in the identification of the optimal choice.MCDA methodologies employed in humanitarian logistics encompass a range of techniques, as outlined in the subsequent paragraphs.
The Analytic Hierarchy Process (AHP) is a decision analysis technique that breaks down problems into hierarchical structures of criteria and alternatives, facilitating their evaluation and prioritization based on relative importance [64].AHP has been utilized in humanitarian logistics for various purposes, including the determination of emergency assembly areas (e.g., [27]), shelter locations (e.g., [51,65]), distribution centers (e.g., [53]), and decision-making regarding road transport networks (e.g., [33]), among others.
The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is a method used to select the best alternative from a set of options, based on the proximity of each alternative to the ideal solution and its remoteness from the negative ideal solution [66].TOPSIS has been utilized in humanitarian logistics for various purposes, including the selection of suppliers, warehouses, and vehicles (e.g., [48]), the prioritization of damaged areas (e.g., [36]), and partner selection (e.g., [12]), among other applications.
Elimination and Choice Expressing Reality (ELECTRE) is an approach used for decision-making based on comparing alternatives according to multiple criteria, using dominance and preference relationships to classify the alternatives [68].ELECTRE has been utilized in humanitarian logistics for an emergency operations center's location (e.g., [44]) and the selection of reconstruction projects (e.g., [37]).
Preference-Ranking Organization Method for Enrichment Evaluations (PROMETHEE) is a method used for decision-making based on the comparison and ranking of alternatives according to multiple criteria, using predefined preference functions [69].This method was used to select reconstruction projects (e.g., [37]) and relief centers locations (e.g., [57]).
Combined Objective Reconnaissance by Sequential Actions (COBRAS) is a method that combines the construction of an objective model with the selection of sequential actions to achieve those objectives, used in strategic decision-making [26].This method was used to prioritize search operations (e.g., [26]).
Complex Proportional Assessment (COPRAS) assesses and ranks alternatives based on various criteria; then, by comparing alternatives against each criterion, it provides an aggregated ranking [47].This method was used to prioritize emergency assembly area's location (e.g., [27]).
The Analytic Network Process (ANP) is a technique that enables decision-making in complex situations involving interactions between different elements, through the construction and analysis of networks of criteria and alternatives [64].This method was used to prioritize temporary disaster debris management locations (e.g., [50]).
Outranking Process Analysis (OPA) is an approach used for decision-making that is based on identifying dominance relationships between alternatives, allowing the establishment of a preference order without requiring a precise numerical evaluation [46].This method was used by [46] to prioritize demand points after a disaster.
The Borda Count (BORDA) is a voting method used to calculate the ranking of alternatives based on voters' preferences, assigning points to each alternative according to its position in each preference list [27].This method was used by [27] to prioritize location of emergency assembly areas.
The Best Worst Method (BWM) is an approach used for decision-making that focuses on identifying the best and worst alternatives in relation to each criterion, allowing the evaluation of the relative importance of the criteria.This method was used to prioritize disaster logistic hub locations [58].
Swing Weighting is a weighting method used in decision analysis, where criterion weights are adjusted based on the decision's sensitivity to those criteria.This method was used to prioritize investment in humanitarian supply chains [28] and to select distribution center locations [31].
Other methods used include the development of optimization models.As depicted in Figure 8, 69% of articles include optimization models, whereas 31% do not.Among the optimization models employed are goal programming, stochastic models, dynamic models, and vectorial optimization.
Stochastic models in operations research are used to address problems where at least part of the problem is subject to uncertainty or random variability.In humanitarian logistics, this is crucial due to uncertainties surrounding factors such as demand, supply availability, and transportation conditions.Stochastic optimization has been employed to prioritize aerial delivery operations (e.g., [49]), search operations (e.g., [26]), and distribution center locations (e.g., [31]).Goal programming is an optimization technique used to solve decision-making problems with multiple objectives, aiming to minimize or maximize a set of objective functions subject to a series of constraints [70].Goal programming has been utilized to prioritize humanitarian aid distribution (e.g., [34]), recovery operations and distribution (e.g., [25]) aerial delivery operations (e.g., [49]), humanitarian aid distribution (e.g., [35]), the distribution of supplies (e.g., [52]), evacuations (e.g., [45]), and shelter locations (e.g., [43]).
Stochastic models in operations research are used to address problems where at least part of the problem is subject to uncertainty or random variability.In humanitarian logistics, this is crucial due to uncertainties surrounding factors such as demand, supply availability, and transportation conditions.Stochastic optimization has been employed to prioritize aerial delivery operations (e.g., [49]), search operations (e.g., [26]), and distribution center locations (e.g., [31]).
Dynamic models in operations research are utilized to model and solve problems involving temporal changes.In humanitarian logistics, it is important, because they allow for modeling processes that involve time-varying factors such as demand fluctuations and resource availability.For instance, stochastic dynamic programming was employed to prioritize decisions in search operations (e.g., [26]), while dynamic simulation was utilized for humanitarian network design (e.g., [38]).
Vectorial optimization is a branch of optimization dealing with the optimization of vectors of objective functions subject to constraints.It is used to find optimal solutions in problems with multiple objectives.This method is used by ref. [42] to prioritize decisions about shelter locations.
Finally, there are authors who devise algorithms to tackle complex issues, mostly related to transportation problems.These utilized algorithms include the GRASP metaheuristic, Evolution algorithm, and Preference Elicitation Algorithm.
Greedy Randomized Adaptive Search Procedure (GRASP) is a metaheuristic for combinatorial optimization, blending a greedy solution construction with random neighborhood exploration.It is used by ref. [11] to prioritize decisions about routing in humanitarian aid distribution.
Evolution algorithms, inspired by evolutionary theory, optimize by generating and manipulating a population of potential solutions.Through selection, crossover, and mutation operators, they evolve improved solutions over generations.This algorithm was used by [40] for unmanned aerial vehicle path planning in disaster management.
The Elicitation Algorithm figures out what users or agents prefer in interactive systems by analyzing their choices and feedback.It was used by [39] to prioritize decisions about food distribution by food banks.Dynamic models in operations research are utilized to model and solve problems involving temporal changes.In humanitarian logistics, it is important, because they allow for modeling processes that involve time-varying factors such as demand fluctuations and resource availability.For instance, stochastic dynamic programming was employed to prioritize decisions in search operations (e.g., [26]), while dynamic simulation was utilized for humanitarian network design (e.g., [38]).
Vectorial optimization is a branch of optimization dealing with the optimization of vectors of objective functions subject to constraints.It is used to find optimal solutions in problems with multiple objectives.This method is used by ref. [42] to prioritize decisions about shelter locations.
Finally, there are authors who devise algorithms to tackle complex issues, mostly related to transportation problems.These utilized algorithms include the GRASP metaheuristic, Evolution algorithm, and Preference Elicitation Algorithm.
Greedy Randomized Adaptive Search Procedure (GRASP) is a metaheuristic for combinatorial optimization, blending a greedy solution construction with random neighborhood exploration.It is used by ref. [11] to prioritize decisions about routing in humanitarian aid distribution.
Evolution algorithms, inspired by evolutionary theory, optimize by generating and manipulating a population of potential solutions.Through selection, crossover, and mutation operators, they evolve improved solutions over generations.This algorithm was used by [40] for unmanned aerial vehicle path planning in disaster management.
The Elicitation Algorithm figures out what users or agents prefer in interactive systems by analyzing their choices and feedback.It was used by [39] to prioritize decisions about food distribution by food banks.
Criteria
Regarding the criteria, it is observed that 85% of the articles consider criteria related to effectiveness, followed by 65% of the articles that consider criteria related to efficiency.In a smaller quantity, only 50% of the articles consider criteria related to equity.Finally, only four articles considered criteria related to sustainability.
The efficiency criteria, as highlighted by [71], generally aim to minimize costs.For example, the authors of ref. [11] study the problem of distributing humanitarian aid supplies and consider operational cost as one of the criteria to be optimized.On the other hand, ref. [41] analyzes the problem of locating field hospitals, considering criteria such as land cost, investment cost, and installation cost.
In contrast, effectiveness criteria aim to maximize a service measure, often the amount of demand met and the speed at which the demand is fulfilled [71].For example, ref. [27] analyzes the problem of locating emergency assembly areas, considering the coverage area as one of the criteria, which includes accessibility, population density, and expansion potential.Additionally, ref. [35] addresses the issue of distributing humanitarian aid supplies, considering security as a criterion, such as the probability of truck theft, which can obstruct the success of operations.These studies exemplify how effectiveness criteria are related to maximizing the service provided, taking into account not only the quantity served but also the quality and safety of operations.Finally, the authors of ref. [53] include in their model for the location of aid distribution centers the criterion of accessibility, considered not only as the quality of routes to the distribution center but also as alternative routes that allow access in case of road disruptions, which is vital to increase the speed of assistance to affected areas.This approach highlights the importance of considering not only the efficiency of main routes but also the resilience of the system in the face of unforeseen events, ensuring that aid reaches needy areas quickly, even in adverse situations.
Equity criteria refer to fairness in the distribution of services among beneficiaries.However, it is important to note that the definitions of "justice" and "service" can vary significantly among different authors [71].Ref. [34] considers equity optimization by contemplating a maximum deviation proportional to the demands met.In turn, ref. [39], in analyzing food distribution by food banks, considers equity as the supply of food to each municipality in proportion to the demand they meet.Ref. [29] evaluates how sand and dust storms can impact different areas, prioritizing assistance to locations with the greatest impact.
Sustainability criteria are those that consider some objective regarding the reduction of environmental impact.It is worth noting that this category is not part of the categorization proposed by [23], but it is proposed due to its importance today, since an operation that does not consider the environment and its remediation can generate negative impacts on the community in the medium and long term.The authors of ref. [37] evaluate reconstruction projects and consider environmental criteria, such as the use of renewable energy, the assessment of carbon emission rates caused by construction activities, and the use of reusable and recyclable materials.Ref. [53] analyzes the location of relief centers, considering waste control capacity among the criteria for possible alternatives.Finally, ref. [57] considers the reduction of carbon emissions from humanitarian operations as a criterion.
Finally, Figure 9 shows that 2 articles consider two criteria and 10 articles adopted only three criteria, while another 6 articles used four criteria.Additionally, four articles considered models with five criteria, and another seven articles opted to incorporate six criteria.In summary, it was found that 31 out of the 40 studies that were analyzed for prioritization models in humanitarian logistics involved seven categories or fewer, representing 77.5% of the total.On the other hand, the use of seven or more criteria was less frequent among the analyzed articles.
Discussion
In humanitarian logistics, decision-makers encounter a complex scenario fraught with numerous challenges.They must weigh various criteria, often in conflict, crucial for the efficacy of relief efforts [23].Consequently, the necessity for prioritization models becomes apparent, aiding in the selection of optimal strategies for the myriad activities
Discussion
In humanitarian logistics, decision-makers encounter a complex scenario fraught with numerous challenges.They must weigh various criteria, often in conflict, crucial for the efficacy of relief efforts [23].Consequently, the necessity for prioritization models becomes apparent, aiding in the selection of optimal strategies for the myriad activities within humanitarian logistics.Following a systematic literature review on prioritization models in humanitarian logistics, discernible trends and research gaps emerge, which will be delineated in the subsequent paragraphs.
However, it is worth noting that the volume of studies addressing the application of prioritization models to humanitarian logistics is still limited, with less than seven articles published per year.Given the number of stakeholders involved in humanitarian operations and their importance, it is crucial to ensure that the objectives of each party are met.In this context, prioritization models play a significant role.
Research on prioritization models in humanitarian logistics predominantly focuses on applications to nature-induced disasters, particularly earthquakes and floods, given that two out of every three disasters are of natural origin [72].However, there remains a gap in the study of other nature-induced disasters currently affecting the world, such as volcanic eruptions (e.g., the Mount Merapi eruption 2024), landslides (e.g., Brazil mudslides), hurricanes (e.g., the Mexico hurricane 2024), climatological disasters (e.g., Chile wildfires 2024), and biological disasters (e.g., COVID-19 pandemic).
Adapting prioritization models from extensively studied disasters like earthquakes and inundations can serve as an initial step to enhance the performance of models for other types of disasters.For instance, earthquake models, typically focused on shelter location [60], can be expanded to integrate evacuation strategies suitable for volcanic eruptions.Inundation models, which emphasize flood responses, can be tailored for landslides by incorporating effective route-planning measures.Similarly, biological disaster models can adapt location-allocation prioritization models to account for disease spread dynamics [56].
Additionally, there is evidence of a smaller number of articles related to humaninduced disasters.The infrequent exploration of human-induced disasters in the literature could be attributed to their high complexity [20], because these disasters involve studies related to human actions, delving into political, social, and economic debates, thereby adding complexity to the research.Another contributing factor is the challenge of accessing areas affected by man-made disasters, which may pose complications for research in the field [73].Currently, refugee crises, such as the exodus of Venezuelans, and armed conflicts like the Israeli-Palestinian conflict and the Russo-Ukrainian War, require further investigation regarding humanitarian logistics activities that necessitate prioritization.
Regarding the speed of start, the majority of the prioritization models study a suddenonset disaster.However, slow-onset disasters, while allowing more time to react, can have more severe consequences due to their large scale [73].This discrepancy may be attributed to the higher media coverage of sudden-onset disasters, resulting in less attention to slow-onset events.Furthermore, it was observed that the majority of research on humanitarian logistics models focused on two stages of the disaster management cycle: preparedness and response.The lack of articles on multi-criteria models in the disaster recovery phase is evident, despite its significance as the final stage.Additionally, it is important to highlight that the shortage of articles focused on the reconstruction phase was also emphasized by [20].The challenge of rebuilding and restoring both economic and emotional aspects after a disaster warrants a more thorough analysis [21].Regarding the scarcity of articles on mitigation, it is attributed to the consideration of articles solely related to logistical activities.However, it should be emphasized that criteria aimed at disaster risk reduction need to be incorporated into the models of other phases.
In relation to the decision level classification of [22], there is a tendency to focus research on strategic and operational decisions.However, there is a limited amount of research related to the tactical decision-making level, which involves medium-term decisions.Hence, there is an urgent need to develop prioritization models for tactical activities, particularly those related to inventory management problems.
In relation to the problem type classification of [22], location and transportation problems are the most addressed.There is a notable absence of articles addressing issues related to inventory management, such as demand forecasting, supply allocation, and supplier evaluation.An additional area of research of great interest is the integration of correlated activities, aiming to improve the overall efficiency of both operations [25].
In terms of criteria, the criteria most commonly used in prioritization models are related to effectiveness, followed by efficiency.There is a highlighted need to consider prioritization models that also include equity criteria, which ensure a fair and equitable distribution of resources among the affected, ensuring that everyone's needs are addressed impartially [71].Likewise, the use of sustainability criteria would allow for minimizing the environmental impact of humanitarian operations, promoting responsible practices.A recommendation for selecting criteria is to consider at least one criterion from each presented category.
Regarding the selection of method, it should align with the prioritization object under study.It was observed that a variety of approaches are used, including a multi-criteria approach, optimization approach, and heuristic approach.
Optimization models allow for obtaining optimal results, while also incorporating the stochastic and dynamic approaches inherent to post-disaster situations (e.g., [26]).However, due to limitations in computational memory, modelers may be hindered from adding a greater number of criteria.
The use of multi-criteria heuristics is an excellent option for problems with a high computational complexity in humanitarian operations [32], such as humanitarian aid distribution problems.However, similar to optimization models, the number of criteria that can be used may be limited by computational memory.
On the other hand, multi-criteria models allow for considering the expertise of decisionmakers and can also take into account a greater number of criteria (e.g., [31]).However, the selection of alternatives to prioritize may be subjective.
In response to this, it is proposed as future research to integrate different approaches, where optimization models or heuristics can select high-quality alternatives that can serve as input for multi-criteria models.These multi-criteria models can then incorporate criteria that could not be added in the previous stage and take into account the decision-makers' expertise.
Regarding the number of criteria, approximately 70% of articles address only four or fewer criteria.Therefore, the question arises whether there is a necessity to include more criteria in prioritization models of humanitarian operations.The response to this question is dependent on various factors.The selection of the number of criteria is related to the prioritization object being analyzed.However, as mentioned earlier, it is encouraged to use at least four criteria: one efficiency criterion, one effectiveness criterion, one equity criterion, and one sustainability criterion.
Conclusions
Prioritization models are powerful tools for managing the trade-offs inherent in humanitarian operations, where resources are limited and needs are urgent.These models enable decision-makers to systematically evaluate and balance different criteria, enhancing the immediate response to crises and improving the overall performance of humanitarian operations.
This systematic literature review conducted a descriptive analysis of predefined categories and proposed areas of future research in the context of applying prioritization models in humanitarian logistics.By selecting 40 articles from keywords searched in relevant databases, it was found that the quantity of articles has been slightly increasing in recent years.
Figure 1 .
Figure 1.Flowchart of research based on PRISMA method.Figure 1. Flowchart of research based on PRISMA method.
Figure 1 .
Figure 1.Flowchart of research based on PRISMA method.Figure 1. Flowchart of research based on PRISMA method.
Figure 2
Figure 2 displays the distribution of articles per year, revealing fluctuations in publication trends.In 2011, only one article was published, followed by two articles in 2014.The year 2016 saw a notable surge with five articles.However, in 2017, there was a decline, with only one article published.Starting from 2018, there has been a discernible increase in articles focusing on prioritization models, maintaining a steady output of four to six articles annually.
Figure 2
Figure 2 displays the distribution of articles per year, revealing fluctuations in cation trends.In 2011, only one article was published, followed by two articles inThe year 2016 saw a notable surge with five articles.However, in 2017, there was a de with only one article published.Starting from 2018, there has been a discernible in in articles focusing on prioritization models, maintaining a steady output of four articles annually.
Figure 2 .
Figure 2. Number of articles per year.
Figure 2 .
Figure 2. Number of articles per year.
Figure 3 .
Figure 3. Number of articles by disaster.
Figure 3 .
Figure 3. Number of articles by disaster.
Figure 3 .
Figure 3. Number of articles by disaster.
Figure 4 .
Figure 4. Number of articles by type of disaster.(a) Disasters by origin; (b) disasters by onset
Figure 4 .
Figure 4. Number of articles by type of disaster.(a) Disasters by origin; (b) disasters by onset speed.
Logistics 2024, 8, x FOR PEER REVIEW 8 of 21 addressed phase is response, accounting for 33% of the articles.Finally, the mitigation and recovery phases are the least studied, each contributing only 9% of the total articles.
Figure 5 .
Figure 5. Number of articles by disaster lifecycle stage.
Figure 5 .
Figure 5. Number of articles by disaster lifecycle stage.
Figure 6 .
Figure 6.Number of articles by decision-making level.
Figure 6 .
Figure 6.Number of articles by decision-making level.
Figure 7 .
Figure 7. Number of articles by type of problem and decision-making level.
Figure 7 .
Figure 7. Number of articles by type of problem and decision-making level.
Figure 8 .
Figure 8. Articles with and without optimization model.
Figure 8 .
Figure 8. Articles with and without optimization model.
21 Figure 9 .
Figure 9. Number of criteria per article.
criteria in prioritization model
Figure 9 .
Figure 9. Number of criteria per article.
Table 2 .
Number of articles per country in case studies.
Table 2 .
Number of articles per country in case studies.
Table 3 .
Number of articles by journal.
Table 3 .
Cont.Applied Soft Computing, Asian Journal of Shipping and Logistics, Buildings, Civil Engineering Journal, European Journal of Operational Research, International Journal of Critical Infrastructure Protection, International Journal of Emergency Services, International Journal of Geographical Information Science International Journal of Information Technology and Decision Making, International Journal of System Assurance Engineering and Management, International Journal of Systems Science: Operations and Logistics, Journal of Enterprise Information Management, Journal of Environmental Planning and Management, Journal of Global Optimization, Journal of Heuristics, Journal of Systems Science and Systems Engineering, Mathematics, Natural Hazards Review, Omega (United Kingdom), Optimization Letters, Production, Production and Operations Management, Quality and Quantity, Scientia Iranica, Transportation Research Part E: Logistics and Transportation Review, Urban, Planning and Transport Research
Table 4 .
Prioritization classification of articles. | 11,652.2 | 2024-06-07T00:00:00.000 | [
"Environmental Science",
"Business",
"Engineering"
] |
Comparison of quantized transfers of a given volume of gas through a pipe under its temporal and velocity probability distributions
. The paper considers the quantization of gas volume transfers along a hypothetical pipe. The volume of gas is represented by two probability distributions in temporal and velocity form. The known model of optimal in the sense of filling information quantization is applied to quantization of material objects. There is a constant size gap between quanta. Technological aspects of formation of quanta are not considered. On the basis of the carried out minimum cost study the advantage of velocity quantization over time quantization for a number of selected quanta is determined. A normal probability distribution is taken as the initial distribution of the gas volume. The results of the article can be applied to any material and information processes. The main purpose in future research is recommended for any complex network processes, analysis and synthesis of their quality
Introduction
In the articles [1,2] the question about the existence along with the probability distributions of velocity probability distributions was considered earlier. The idea of the problem goes back to R. Fisher [3], where the velocity distributions were called fiducial. The author of this paper calls them dual, opposite to each other, and the very principle of their constructionmathematical dualism.
The question is raised that these types of distributions can be useful and applied to solve practical problems.
The purpose of this paper is to show their usefulness on the simplest example of transmission of a given volume of gas through a pipe using the information quantization, optimal in the sense of filling, proposed by the authors of the paper [4]. Brief meaning of this article in the mathematical form is presented in the form of following expression: where ( ) − probability density of quantized random amount of information, − quantum of information, − space between quanta, − maximal integer number of information quanta with deficiency. An additional unit in the integral is introduced for the quantum, which contains information not taken into account in the filled quanta. ( ) − minimum memory size for placing thus quantized information. The equation in order to find the optimal value of the quantum 0 is solved mathematically strictly [4] or numerically with a certain accuracy. We use expression (1) to solve the set problem of comparison of the process and quality of gas volume transfer through the pipe at two dual methods of its transfer, namely at time and velocity probability distributions.
The principle of duality of distributions and its main relations
Let the time of occurrence of an event be a random variable subject ̂ to the distribution ( ). Consider the quantity ̂= 1 . It has the meaning of the rate of occurrence of a single event ( ). Let us find its probability distribution Assuming that both random variables are positive and their probability densities exist ( ), ( ). Then according to [5] we have: where − is the Dirac delta function. The corresponding (2) distribution function is and the additional function (reliability) has the form where ̄( ) = 1 − ( ). The intensity of the event occurrence is determined by the formula Let us introduce a corresponding expression for the velocity intensity of the event occurrence: .
From formula (6) we obtain the expression known in the theory of random processes: Similarly, we can write the formula substituting into which (6) after a simple transformation we obtain (4), which should be expected. The Laplace transform of the distribution density (2) is equal to It is also true that the relation of the form where * , − image symbol and the Laplace variable. From expressions (7) and (8) differentiating by , taking into account that = 0, it is easy to see that there are no moments of a random variable în the domain [0, ∞).
Comparison of quantum transfers of gas through the pipe under temporal and velocity distributions of its volume probabilities
The same pipe is supposed to send gas from one point to another by quanta separated from each other by the same gaps. Two strategies for forwarding the same volume of gas are considered, related to the temporal and velocity representation of the volume. The gaps between the quanta are the same in both cases. It is required to compare quantitatively between each other the effects of the forwarding processes.
The mathematical expectation of the value of the quantized time volume of the gas is determined by the expression and the mathematical expectation of the value of the quantized velocity volume of the gas by the expression The value of the space between quanta is assumed to be equal to = 5.
Partial properties of quanta. Final estimation of the quantization effect
Quantum properties include: size of quantum, cost value of quantum, number of quanta at quantization, total cost of given quantization, total cost of gaps at given quantum. Total costs refer to the sum of costs for a particular quantum plus the sum of costs for gaps in that quantum. Partial estimation refers to the above four quanta. Each quantum was considered separately in time and velocity representations. The results of the estimation are summarized in Table 1. Table 1. Partial estimation of the quantum. The total volume of time slices and their intervals: 1. Type of quantum -time (M) or speed (N). 2. Quantum volume -time (nv) or velocity (n). 3. The total volume of quanta is temporal (∑ ) or high-speed (∑ ). 4. Total volume of gaps in time or speed quantization (∑ ). 5. Total volume of time quanta and gaps or total volume of velocity quanta and their gaps(∑ и ∑ and∑ и ∑ ).
Explanation of definitions using the example of a temporal distribution. Costs for the amount of gaps ⋅ ∫ ( ( ) + 1) ⋅ ( ) .
Costs for speed distribution
Similarly, the costs for speed distribution are determined ( ). Total final evaluation. Judging by the total costs of quantization (Table 1) quantization of the velocity distribution for all quanta except for the quantum of the last line with the result ∑ : = 101.658, has advantages in comparison with time quantization. The paper did not consider technological aspects of preparation of time and velocity quanta and other physical factors related to gas transfer through the tube.
Conclusion
The process of quantization of gas transfer in a hypothetical pipe is considered at two representations of gas volume in the form of probability distributions -temporal and velocity. The basis for the dual representation was the author's proposal to introduce two versions of probability distributions, obtained by him earlier.
The idea and mathematical form of optimal quantization in the sense of minimum cost is borrowed from the article [4], in which the authors considered quantization of information with gaps between quanta in order to determine the time costs for its storage. In this article, the author applied this idea not to information, but to material means, this is its first difference. The second difference is the consideration of quantization for dual distributions. For the first time the idea of dual distribution was proposed by R. Fisher [3], who called it fiducial, parametric.
The material of the article can be applied to any physical and informational processes. It can serve as a starting point for research of various networks, more precisely, network processes of application nature, informational and material. During the study of these networks direct tasks -analysis and inverse tasks -synthesis of ensuring of their required quality can be set and solved. For an additional acquaintance with velocity distribution, dualism in the theory of random processes one may refer to R. Fisher's monograph [3] and references to articles published by him. The article [6,7] is useful for investigating issues of network analysis and synthesis. In our opinion, velocity research should be performed on | 1,850.8 | 2023-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Vulnerability of CMOS image sensors in megajoule class laser harsh environment
CMOS image sensors (CIS) are promising candidates as part of optical imagers for the plasma diagnostics devoted to the study of fusion by inertial confinement. However, the harsh radiative environment of Megajoule Class Lasers threatens the performances of these optical sensors. In this paper, the vulnerability of CIS to the transient and mixed pulsed radiation environment associated with such facilities is investigated during an experiment at the OMEGA facility at the Laboratory for Laser Energetics (LLE), Rochester, NY, USA. The transient and permanent effects of the 14 MeV neutron pulse on CIS are presented. The behavior of the tested CIS shows that active pixel sensors (APS) exhibit a better hardness to this harsh environment than a CCD. A first order extrapolation of the reported results to the higher level of radiation expected for Megajoule Class Laser facilities (Laser Megajoule in France or National Ignition Facility in the USA) shows that temporarily saturated pixels due to transient neutron-induced single event effects will be the major issue for the development of radiationtolerant plasma diagnostic instruments whereas the permanent degradation of the CIS related to displacement damage or total ionizing dose effects could be reduced by applying well known mitigation techniques. OCIS codes: (280.5395) Plasma diagnostics; (040.6070) Solid state detectors; (040.1240) Detectors: Arrays; (250.3140) Integrated optoelectronic circuits; (280.4788) Optical sensing and sensors; (110.2970) Image detection systems; (350.5610) Radiation. References and links 1. G. R. Hopkinson, “Radiation effects in a CMOS active pixel sensor,” IEEE Trans. Nucl. Sci. 47(6), 24802484 (2000). 2. J. Bogaerts, B. Dierickx, G. Meynants, and D. Uwaerts, “Total dose and displacement damage effects in a radiation-hardened CMOS APS,” IEEE Trans. Electron Dev. 50(1), 84-90 (2003). 3. V. Goiffon and P. Magnan, “Radiation Damages in CMOS Active Pixel Sensors, ” in Imaging Systems Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), paper IMA3. http://www.opticsinfobase.org/abstract.cfm?URI=IS-2011-IMA3 4. A. M. Armani, P. Barrochin, F. Joffre, R. Gaillard, F. Saigné, and J.L. Mainguy, “Enhancement of the Total Dose Tolerance of a Commercial CMOS Active Pixel Sensor by Use of Thermal Annealing,” in Proceedings of the Conference on Radiation Effects On Components and System, paper PD2 (2011). 5. T. P. Ma and P. V. Dressendorfer, Ionizing Radiation Effects in MOS Devices and Circuits (WileyInterscience, 1989). 6. J. R. Srour, C. J. Marshall, and P. W. Marshall, “Review of displacement damage effects in silicon devices,” IEEE Trans. Nucl. Sci. 50(3), 653–670 (2003). 7. J. L. Bourgade, V. Allouche, J. Baggio, C. Bayer, F. Bonneau, C. Chollet, S. Darbon, L. Disdier, D. Gontier, M. Houry, H. P. Jacquet, J. P. Jadaud, J. L. Leray, I. Masclet-Gobin, J. P. Negre, J. Raimbourg, B. Villette, I. Bertron, J. M. Chevalier, J. M. Favier, J. Gazave, J. C. Gomme, F. Malaise, J. P. Seaux, V. Yu Glebov, P. Jaanimagi, C. Stoeckl, T. C. Sangster, G. Pien, R. A. Lerche, and E. R. Hodgson, “New constraints for plasma diagnostics development due to the harsh environment of MJ class lasers,” Rev. Sci. Instrum. 75, 4204-4212 (2004). 8. J. L. Bourgade, R. Marmoret, S. Darbon, R. Rosch, P. Troussel, B. Villette, V. Glebov, W. Shmayda, J.C. Gommé, Y. Le Tonqueze, F. Aubard, J. Baggio, S. Bazzoli, F. Bonneau, J. Y. Boutin, T. Caillaud, C. Chollet, P. Combis, L. Disdier, J. Gazave, S. Girard, D. Gontier, P. Jaanimagi, H.P. Jacquet, J. P. Jadaud, O. Landoas, J. Legendre, J. L Leray, R. Maroni, D. D. Meyerhofer, J.L. Miquel, F. J. Marshall, I. MascletGobin, G. Pien, J. Raimbourg, C. Reverdin, A. Richard, D. Rubins de Cervens, C. T. Sangster, J. P. Seaux, G. Soullie, C. Stoeckl, I. Thfoin, L. Videau, and C. Zuber, “Present LMJ Diagnostics Developments Integrating its Harsh environment,” Review of Scientific Instruments 795(10) (2008). 9. S. Girard, Y. Ouerdane, M. Bouazaoui, C. Marcandella, A. Boukenter, L. Bigot, and A. Kudlinski, “Transient radiation-induced effects on solid core microstructured optical fibers,” Opt. Express 19, 2176021767 (2011). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-19-22-21760 10. E. R. Fossum, “CMOS image sensors: Electronic camera-on-a-chip,” IEEE Trans. Electron Devices 44(10), 1689–1698 (1997). 11. P. E. Dodd and L. W. Massengill, “Basic Mechanisms and Modeling of Single-Event Upset in Digital Microelectronics,” IEEE Trans. Nucl. Sci. 50(3), 583–602 (2003). 12. E. Pailharey, J. Baggio, C. D'hose, and O. Musseau, “Reliability of 1300 nm laser diode for space applications. In Photonics for Space and Radiation Environments,” Berghmans, Francis, Ed., Proc. SPIE 3872; 139-147 (1999). 13. Y. Tanimura, and T. lida, “Effects of DD and DT neutron irradiation on some Si devices for fusion diagnostics, ” Journal of nuclear materials 258-263, 1812–1816, (1998). 14. J. Baggio, M. Martinez, C. D'hose, and O. Musseau, “Analysis of Transient Effects Induced by Neutrons on a CCD image Sensor,” Proc. SPIE 4547, (2002). 15. A. Fish and O. Yadid-Pecht, “Active Pixel Sensor Design: From Pixels to Systems,” in CMOS Imagers, Springer, 99–139 (2004). 16. J. Killiany, “Radiation Effects on Silicon Charge-Coupled Devices,” IEEE Trans. Compon., Hybrids, Manuf. Technol. 1, 353 – 365 (1978). 17. A. M. Chugg, R. Jones, M. J. Moutrie, J. R. Armstrong, D. B. S. King, and N. Moreau, “Single particle dark current spikes induced in CCDs by high energy neutrons,” IEEE Trans. Nucl. Sci. 50(6), 2011–2017 (2003). 18. J. R. Srour, and D. H. Lo, “Universal Damage Factor for Radiation-Induced Dark Current in Silicon Devices,” IEEE Trans. Nucl. Sci. 47(6), 2451–2459 (2000). 19. C. Virmontois, V. Goiffon, P. Magnan, S. Girard, O. Saint-Pé, S. Petit, G. Rolland, and A. Bardoux, “Similarities Between Proton and Neutron Induced Dark Current Distribution in CMOS Image Sensors,” IEEE Trans. Nucl. Sci. 59(4), (2010). 20. C. Virmontois, “Analyse des effets des déplacements atomiques induits par l’environnement radiatif spatial sur la conception des imageurs CMOS,” Ph.D. Thesis (2012). 21. I. Hopkins, and G. Hopkinson, “Random telegraph signals from proton-irradiated CCDs,” IEEE Trans. Nucl. Sci. 40(6), 1567–1574 (1993). 22. C. Virmontois, V. Goiffon, P. Magnan, S. Girard, C. Inguimbert, S. Petit, G. Rolland, and O. Saint-Pe, “Displacement Damage Effects Due to Neutron and Proton Irradiations on CMOS Image Sensors Manufactured in Deep Submicron Technology,” IEEE Trans. Nucl. Sci. 57(6), 3101–3108 (2010). 23. G. Yates, and B. Turko, “Circumvention of radiation-induced noise in CCD and CID imagers”, IEEE Trans. Nucl. Sci. 33(1), 2214–2222 (1986).
Introduction
The radiation response of CMOS image sensors (CIS), also called Active Pixel Sensors (APS), has been widely studied as these devices are considered for use in various harsh environments like the ones associated with space [1][2][3], military applications or vision systems in nuclear industries [4].
This class of optical sensors possesses numerous advantages for their use in a radiation environment over Charge Coupled Devices (CCD), such as the absence of radiation-induced degradation of charge-transfer efficiency and an overall higher tolerance to ionizing radiations.Previous studies have revealed most of the degradation mechanisms of these CMOS devices when exposed to radiations [1][2][3].Among the different degradation mechanisms, the following have been identified in CIS: measured uniform dark current increase, creation of hot pixels, decrease of sensitivity and of the sensor dynamic range.These studies also showed that these effects differ depending on the nature of the radiation source: neutrons, protons or photons (X and γ-rays).In the case of photons, only ionization occurs within the material constituting the device.For neutrons, displacement damage (DD) governs the device degradation whereas in the case of protons both total ionizing dose (TID) [5] and displacement damage dose (DDD) [6] effects have to be considered.It should be emphasized that TID induces uniform dark current increases, whereas DD generate hot pixels (pixel with extreme dark current values), and thus large dark current non-uniformities.
Megajoule class lasers like the Laser Megajoule (LMJ) in France or the National Ignition Facility (NIF) are designed to study fusion by inertial confinement (ICF).In the case of LMJ, the facility will operate in the indirect drive inertial confinement fusion scheme [7].In this configuration, the incoming laser energy is focused inside a cylindrical gold Hohlraum of ∼ 1 cm through two axial laser entrance holes and then converted into soft X-rays with a high efficiency that reaches almost 80% [8].This time-shaped pulse of X-rays implodes a 2 mm diameter capsule filled with a Deuterium-Tritium mixture, placed in the middle of the Hohlraum.If ignition of the hot spot is propagated through the surrounding compressed fuel an energy gain will occur and up to 5×10 18 neutrons will be produced in a very short period of time (30 ps) in a localized region of approximately 100 µm.Different configurations of diagnostics are studied for use on the LMJ to provide optical, x-ray, and nuclear product measurements for these experiments.Several of these diagnostics are designed to provide images of the different phases of the ignition process and will require image sensors capable of operating at such facilities.
All the imaging systems located inside the experimental hall of the LMJ (see fig. 1(a)) will have to survive the mixed gamma, X-ray and 14 MeV neutron pulses associated with the fusion experiments [9].A detailed description discussing this specific harsh environment can be found in reference [7].Its main particularity remains the short duration of the mixed irradiation, leading to very high neutron fluxes and the associated high dose rates.The exact radiation constraints will depend on the device location and of the different mitigation techniques (such as shielding) put in place for each system.For each location, the dose rates, neutron or gamma doses were calculated using a Monte Carlo based numerical code called Tripoli [8].Typical values for these doses and dose rates are illustrated in fig.1(b) for different zones, such as inside the target chamber or outside partially protected by a 50 cm boron-doped concrete wall to protect the equipment from neutrons.
To investigate the vulnerability of CMOS APS in Megajoule class laser environments, we performed experiments at the OMEGA facility in Rochester.This facility provides a unique platform to evaluate the effects of a pulsed mixed radiation environment, at a lower yield than is expected for LMJ (10 13 instead of 10 18 neutrons).To fully understand the mechanisms occurring in these devices, we performed complementary testing on photodiodes representatives of the pixels, and on sensors either in "OFF" state or in acquisition mode during the shot.
Tested devices
The selected sensor is a 13µm-pitch-1024×1024-pixel array with a classical 3T active pixel design [10] : one photodiode and three N channel MOS field effect transistors (as shown in Fig. 2).It was designed by the Image Sensor Research Team of the Institut Supérieur de l'Aéronautique et de l'Espace (ISAE), and manufactured using a 0.35 µm CMOS process optimized for imaging application (e.g. with optimized doping profiles, metal stack and optical interfaces, etc).The substrate is a slightly P-doped epitaxial layer roughly 5 µm thick grown on top of a heavily P-doped substrate.The on-chip electronics was voluntarily limited to the following necessary functions to reduce the occurrence of Single Event Effects (SEE) [11]: • the pixels (described above), • the row and column decoders: CMOS combinatorial logic circuits using N and P MOSFETs but without any latch or memory, • and the analog readout chain consisting of N and P channel MOSFETs.Since no latch or digital memory is used, the integrated circuit is naturally immune to Single Event Upset (SEU) [11] and other digital SEE (e.g.multiple bit upsets).Single Event Latchup (SEL) cannot occur in the pixel array because only N-MOSFETs are used (whereas SEL can be triggered only if N and P-MOSFETs are close enough to each other [11]).In the analog readout chain, the density of N and P-MOSFETs is not high enough to induce SEL during the radiation pulse.Therefore, the only expected effects are Single Event Transients (analog or digital) occurring during the neutron radiation pulse.
Three other devices have been tested for additional investigations: a 128×128-3T-CIS with a 10-µm-pitch and two 800×800 µm² CMOS photodiodes.The two photodiodes have been used to measure the radiation-induced voltage at very high neutron flux, their small sizes enabling to locate them at only 50 cm of the target, whereas the 128×128-3T-CIS was used to investigate the cumulated fluence effect on the radiation induced dark current distribution.These three devices have been manufactured using a 0.18 µm CMOS process also dedicated to CIS.The CIS has the same architecture as the 1024×1024-pixel sensor.
Test setup
An overview of the test bench is presented in Fig. 3.A radiation tolerant antifuse ACTEL/Microsemi AX500 FPGA device ensures that no configuration loss can happen following a radiation pulse.The selected 14-bit-Analog-to-Digital-Converter (ADC) is not a radiation tolerant product, but it exhibits a good tolerance to SEL and has not been affected by the radiation levels of our experiment.The parallel output of the ADC and the synchronization signals from the FPGA are connected to the data acquisition computer by 30 m long optical fibers (left side of Fig. 4(a)).Optical fibers were chosen because of their intrinsic hardness against the neutron pulse and the intense electromagnetic radiation pulse following the laser shot.Because of the possible SEE constraint, no serializer is used and the 30 m long optical fiber cable is made of 20 independent optical fibers.The laser diodes (TX interface in Fig. 3) used to emit the digital signals over the fibers are known to be radiation hard [12], and the photodiodes (RX interface in Fig. 2) used to receive the data and synchronization signals are located in the instrumentation zone of the OMEGA Facility, called "La Cave", where the radiation level is not significant.In addition to the use of optical fibers, the whole electromagnetic compatibility constraint was taking care of by placing all the electronics in a dedicated shielded box (as shown at the top of Fig. 4(a) and on the right side of Fig. 4(b)).
ICF environment
The vulnerability of the image sensors has been evaluated during a high-yield deuteriumtritium fusion campaign at OMEGA in 2011.For a first estimation, a dose monitoring system has been exposed to typical 10 13 neutron yield shots on the OMEGA Laser facility (Laboratory for Laser Energetics, University of Rochester) at a location comparable to that of our tested device: 90% of the dose is generated in the first 370 ns of the experiments, the corresponding fluences at the device location are between 1.8×10 6 and 7×10 6 n/cm².After a few tens of nanoseconds, the dose rate drops to a negligible level [7].The strongest constraint on the device operation remains the neutron effects, since the TID at the device level remains below 0.05mGy(SiO 2 ) per shot.
Transient perturbation during the laser shot
During the eleven laser shots of the experiment (shot number seven was cancelled), no functionality loss and no single event induced permanent damage have been observed.As discussed in the experimental section, this is not surprising because the sensor and test bench architectures were selected to minimize the risk of SEE.This result is therefore important to show that using an APS or a CIS in such a harsh environment is not any riskier than using a CCD device, as long as the use of digital electronics is reduced inside the sensor.If highly integrated digital functions are needed, then SEE mitigation techniques must be implemented.However, as observed in CCDs [13,14], the neutron radiation pulse generates signal charges randomly in the pixel array through indirect localized ionization.These Single Event Transients (SET) can lead to randomly distributed transient saturated single pixels or to clusters of saturated pixels (often called stars [13]) in a single frame.This section focuses on these transient effects.a at the detector level.
Temporal coupling between the neutron radiation pulse and the CIS operation
As shown in Fig. 5, the sensor is operated in the rolling shutter mode [15], meaning that the rows are reset and sampled sequentially (as illustrated in Fig. 5(b)), not simultaneously.The tested imager is not dedicated to high speed imaging and has a total readout time of ~0.5s.In order to maximize the probability that the neutron pulse impinges the sensor during the integration phase, an additional integration time was used.Figure 5(a) shows that a total integration time of 1 s (~0.5s during the readout phase plus 0.5s of additional integration time) is selected for the experiment.It means there is a 50% chance that the neutron pulse passes through the sensor during the readout operation, and a 50% chance that it happens during the additional integration phase.In the first case, the radiation pulse can lead to readout perturbation, as discussed in the following.In the second case, the radiation induced charges will simply be collected by the integrating pixels together with the charge of the useful image.
The details of the "readout" operation in Fig. 5(a) are illustrated in Fig. 5(b).During this phase, each row is activated (selected) sequentially.Once a row is selected, the signal and reference values of the entire pixel row are sampled in the sampling capacitance located at the bottom of each column (analog readout chain in Fig. 2).Then, the sampled analog values are measured through the analog readout chain by selecting each column sequentially.Once all the pixels of a given row have been readout, the next row is activated and the operation is repeated.
During the experiment the sensor acquires frames continuously, without any synchronization with the radiation pulse.Therefore, one can notice that two equally probable cases of temporal coupling with the radiation pulse (arrows A and B in Fig. 5(a)) can be considered as a first approximation: the neutron pulse can impinge on the sensor either during the additional integration phase or during the readout phase.During the integration phase (case B), the neutron-induced SET will be distributed all over the current frame (i.e. the next frame to be readout).In the case where neutrons pass through the sensor during the readout of row N (case A1 and A2 in Fig. 5(b)), the rows between N+1 to 1024 of the frame being readout will be affected by the radiation pulse whereas the rows between 1 to N-1 will not.The effect of the radiation pulse on rows 1 to N-1 will only be visible when the next frame will be readout.Finally regarding row N, the effect depends on the temporal coupling between the radiation pulse and the readout phase of row N and all the possible cases will not be discussed here.It should however be emphasized that the worst effect that can happen in this case (arrow A1) is the corruption of the data sampled during the readout of row N (not of the rest of the array), but the probability of occurrence for such an effect is low (less than 1%).
Transient response of the CIS during a laser shot
Figure 6 and Fig. 7 present a typical response of the tested CIS when the neutron pulse hits the sensor during the readout phase (case A in Fig. 5).It can be seen in Fig. 6(a) that white or gray pixels already exist in the raw dark frame before the occurrence of the radiation pulse (the contrast has been tuned to emphasize the effect, in a full scale representation these white pixels would not have been visible).However, most of these so called "hot-pixels" can be removed simply by subtracting to the current dark frame an average dark frame (here, a temporal average of 10 dark frames), as shown in Fig. 7(a) This operation is often performed for low light level applications, or more generally for high performance applications where such a dark signal non uniformity is not acceptable.It should be emphasized that this operation does not remove the temporal shot noise associated to the pixel dark signal level.Moreover, even if hot pixels can be removed with this technique, the loss of dynamic range induced by the high value of dark signal before the subtraction cannot be recovered.Since the radiation pulse hits the sensor during the readout phase (case A), only a part of the current frame is disturbed by the neutron flux.This is the reason why the number of white pixels suddenly rises in the bottom third of Fig. 6(b) and why the upper part of this frame is similar to the upper part of Fig. 6(a) The neutron induced transient perturbation in the upper part of the array is visible in the next frame (Fig. 6(c)), as discussed in the previous section.
In order to analyze in more details the SET induced by the neutron pulse, the image of the perturbation is reconstructed by the following operation.First the bottom part of Fig. 6(b) is added to the upper part of Fig. 6(c), leading to a complete image of the perturbation superimposed on the dark signal non uniformity (i.e. the hot pixels).Then, an average dark frame is subtracted to remove the permanent non uniformities, including hot pixels.The result is shown on the whole array in Fig. 7(b), and on a reduced area in Fig. 7(c).It can clearly be seen that contrary to permanent hot pixels, neutron-induced transient white pixels cannot be removed by subtracting an average dark frame.In the targeted application (plasma diagnostic instruments), the useful image is acquired during the laser shot and its quality is greatly degraded by these white pixels.The neutron-induced transient perturbation is thus one of the main limitations of plasma diagnostic instruments for MegaJoule class inertial fusion facilities.
Figure 7(c) shows that similarly to what is generally observed in CCD [13,14], a neutron interaction in a pixel sensitive volume can lead to different types of interactions [14] leading to isolated white pixels, cluster of white pixels or tracks of white pixels.
The distribution of the charge deposited by the radiation pulse in the pixel array and collected by the pixel photodiodes is presented in Fig. 8.This distribution is achieved by computing the statistical histogram of the reconstructed radiation pulse image (Fig. 7(b)).Since the dark signal background is removed, most of the pixel values (all of them in the case where no radiation pulse is generated) are within the first bins of this distribution (i.e.around 0 ke -).This main population represents the pixels which are not disturbed by the radiation pulse, either because no interaction occurred in (or in the vicinity of) their sensitive volume or because the generated charge is too small to be measured.The pixels disturbed by the neutron radiation pulse (i.e. the ones that collect excess parasitic charges) can be recognized by comparing the distributions to the one achieved when no radiation pulse is generated (blue triangles in Fig. 8).The diversity of possible interactions, in addition to the pixel-to-pixel diffusion crosstalk mechanisms, lead to an almost exponential distribution shape (i.e. a linear evolution on this semi-logarithmic plot) of the number of collected parasitic electrons per pixel.At the end of the "exponential" part (above 140 ke -), a Gaussian distribution appears.This last population represents the saturated pixels.The reason why the saturated pixels are not located in a single distribution bin is due to two main causes: 1) the saturation level of each pixel is not the same (due to mismatches between the in-pixel devices) and 2) the dark signal non uniformity reappears in the saturation regime because of the dark frame subtraction.Therefore, both non-uniformities add up to lead to this Gaussian distribution of saturated pixels.
It is also important to notice in Fig. 8 that the number of disturbed and saturated pixels rises with the neutron fluence at the detector.These two populations are estimated by the following criterion.All the pixels exhibiting signal values above 15 ke -are considered as disturbed pixels, whereas all the pixels with values above 125 ke -are considered as saturated.The evolution of these two indicators with neutron fluence is represented in Fig. 9.Both populations appear to increase linearly with neutron fluence.At the maximum fluence reached during the experiment, almost 12% of the pixels are disturbed and 2% of them are in the saturation regime.For most of the designed plasma diagnostics instruments, hardening by system techniques are applied when technically possible to avoid the degradation induced by parasitic neutrons at the acquisition time of the image.However, in some cases where one will have to integrate the image of the plasma when the neutron radiation pulse impinges on the integrated imager, these disturbed and saturated pixels will directly degrade the image quality and reduce the useful part of the acquired image in the real application.
Permanent damages
Fig. 10 shows a 200×200 pixel window of a raw dark frame acquired before the first laser shot, and another one acquired after the last shot.Before the experiment (Fig. 10(a)), some hot pixels can be observed.After the last shot, one can see in Fig. 10(b) that several additional white and gray pixels have been generated by the successive neutron radiation pulses.This well-known permanent effect has been observed in early work on neutron irradiated CCD [16] and is due to the creation of electro-active bulk defects acting as Generation-Recombination centers in the pixel depleted volume.These displacement damage induced defects enhance the dark current of the damaged pixel [6], leading to gray or white spots in the raw dark frame.
In order to increase the number of neutron-induced hot pixels and improve the statistics, the 128×128 pixel CIS was placed, grounded, in the target chamber (at 50 cm from the target) of the LLE-OMEGA facility during a 2010 experiment.By doing so, the total neutron fluence reached at the end of the day was as high as 10 10 n/cm² (which is close to what can be expected for an imager during one LMJ/NIF shot).The total displacement damage energy was deposited by eleven neutron radiation pulses.The flux of each radiation pulse was higher than 10 24 n/cm²-s and the corresponding average neutron fluence per laser shot was around 10 9 n/cm².Figure 11 presents the dark current distribution of the tested sensor before and after irradiation.The dark current distribution of a similar sensor exposed at CEA DAM Valduc facility to the same 14 MeV neutron fluence but with a flux of 10 7 n/cm²-s is also shown in this figure.It can be seen that neutron irradiation induces an exponential tail in the distribution, as previously observed in neutron irradiated CCD and APS [17].Moreover, the hot pixel tails generated by the two neutron irradiations are similar, despite the very different neutron flux used during the two irradiations.It shows that the neutron flux does not have any significant effect on the displacement damage induced dark current.This important result has a direct consequence for the use of solid state imagers in ICF environments: all the published results on displacement damage effects obtained at low and moderate neutron fluxes appear applicable to extrapolate them to the very high flux reached during an inertial confinement fusion experiment.It enables, for instance, estimating the average dark current level after a given number of laser shots using Srour's universal damage factor [18].Recent work by Virmontois et al. also suggests that the dark current distribution could also be estimated using a similar approach [19].Several techniques can be used to extend the lifetime of the sensors used in these harsh environments by reducing the impact of these pixels exhibiting high dark current values.First, the use of very small integration time will significantly reduce the number of integrated dark electron (the total number of dark charges in a frame is simply the product of the dark current and the integration time).Clearly, the 1 s integration time used in this experiment is not a best case from the dark current point of view (a few milliseconds would be enough to integrate the useful signal).Another option that is often used is to cool down the sensor, since the dark current is known to drop exponentially with temperature.Finally, the dark frame subtraction operation mentioned previously, also contributes to reduce the impact of hot pixels on the image quality.
Contrary to what is observed in CCDs, where charge transfer efficiency is degraded by bulk defects, no other permanent damage was observed on the tested CIS.Regarding TID induced effects, the cumulated TID at the end of the day (roughly 0.5 mGy(SiO 2 )) was not sufficient to induce any degradation in these inherently radiation tolerant sensors (as compared to CCD).
Extrapolation to Megajoule Class ICF environment
In this part, we try to evaluate from the presented tests at OMEGA the future vulnerability of the CIS to the LMJ environment.We distinguish between the transient perturbations of the CIS during a single shot and the permanent degradation linked to the successive shots accumulated during the facility lifetime.
Permanent degradations
The permanent degradation is of primary interest for the diagnotics designed to ensure the acquisition of the useful signal before the neutron pulse.For these particular diagnostics, the permanent effects will limit the lifetime of the CIS.The fluence reached during a high-yield LMJ shot at the detector level of a plasma diagnostic instrument is expected to be between 10 10 n/cm² and 10 12 n/cm² per shot.This is more than three orders of magnitude above the fluence of Table 1 but clearly more comparable to the fluence deposited on the 128×128 pixel CIS located at 50cm of the OMEGA target during the whole day of experiments (10 10 n/cm²).Regarding permanent defects, a fluence of 10 12 n/cm² is relatively high, but except for the larger increase in dark current, and the associated noise and fluctuations (such as random telegraph signal [21]) that must be mitigated (by operating the sensor at low temperature for example), the CIS will remain functional as demonstrated up to 10 13 n/cm² on the same device in [22].Regarding the sensor sensitivity, the quantum efficiency measurements performed on the sensor exposed to a fluence of 0.5×10 12 n/cm² illustrates that this parameter should no be degraded in the real application.Thus, the permanent degradation should be acceptable or easily overcome with appropriate operating conditions of the CIS.
Transient perturbations
The transient degradation is of primary interest for the diagnostics that cannot avoid acquiring the useful signal at the same time than the neutron induced transient perturbation.
During a 2010 experiment at the LLE, 800×800 µm² CMOS photodiodes manufactured using the same CIS process as the 128×128 pixel sensor were also placed in the target chamber (50 cm from target chamber center), and connected to a fast digital oscilloscope.The typical response of one of these CIS photodiodes during a laser shot is presented in Fig. 13.The peak amplitude of the photodiode transient response was used as an indicator of the quantity of charge deposited in the CMOS photodiode during each shot.The evolution of the photodiode voltage peak amplitude as a function of neutron yield is shown in Fig. 14 appears that the voltage peak, hence the number of generated charges, is directly proportional to the neutron yield of the corresponding ICF laser shot.It thus appears reasonable, as a first approximation, to linearly extrapolate the result achieved here to fluences representative of the LMJ environment.Indeed, if the neutron yield achieved at LMJ facility will largely exceed the ones of the OMEGA facility, the CIS in the LMJ facility will be located at a larger distance from the target (> 5 m) than our photodiodes at OMEGA (50 cm), thus reducing the constraint, in terms of neutron flux, to levels more comparable to our OMEGA test conditions.Therefore, assuming a direct proportionality between the neutron fluence and the number of disturbed/saturated pixels, one can see in Fig. 9 that nearly 100% of the pixels would be disturbed at a fluence of 6×10 7 n/cm² and saturated after a fluence of 4×10 8 n/cm², which is almost two orders of magnitude below the minimum fluence estimated for the application.
These extrapolations for both transient and permanent effects on CIS of the LMJ fusion by inertial confinement experiments clearly show that the main issue will be the mitigation of the transient white pixels induced by the radiation pulse in the sensor.Several techniques exist to reduce the number of transient saturated pixels in neutron irradiated CCD, such as the dump-and-read technique [23], but they cannot be directly used with CIS because of the very different operating mode.Future work will focus on the ways to transpose such methods to CIS.
Conclusion
The vulnerability of CIS to Megajoule Class Laser environment has been evaluated in the LLE-OMEGA laser facility.No functional interruption of the tested CIS was observed during the laser shots.Using an APS device in such a harsh environment does not seem any riskier than using a CCD, as long as the use of digital electronics is reduced within the sensor.These solid state image sensors have intrinsic advantages compared to CCDs for the development of plasma diagnostic instruments, such as no degradation of the charge transfer efficiency, a better tolerance to TID or no blooming issue.However, the presented results also show that similarly to CCDs, CIS are disturbed by the charge deposited during the neutron radiation pulse.A simple extrapolation has shown that these CIS would be completely saturated, as would a CCD, during a typical LMJ shot.Most of the forthcoming efforts should then be focused on finding mitigation techniques to reduce the number and the intensity of transient saturated pixels in the integrated imagers.
Fig. 1 .
Fig. 1.(a) CAD drawing of the Experimental Hall (EH) of the Laser Megajoule facility.The EH diameter is 30 m whereas the diameter of the target chamber is 10 m.(b) Neutron and gamma fluxes at the locations of diagnostic components during full performance shots in LMJ, from [8].
Fig. 3 .
Fig. 3. Synoptic diagram of the test setup in the LLE OMEGA facility.
Fig. 4 .
Fig. 4. Picture of the CMOS imager test setup.(a) In the lab, before deployment.(b) In the target bay of the LLE OMEGA facility.
Fig. 5 .
Fig. 5. CIS operation timing diagram with possible temporal coupling cases with the radiation pulse (plain vertical red arrow).(a) Overview of the CIS operation showing the sequence between the readout phase and the additional integration phase.(b) Details of the readout phase.
Fig. 6 .
Fig. 6.Raw dark frames : (a) before a shot.(b) during shot 8 and (c) right after shot 8. Neutron fluence at the detector level: 7×10 6 n/cm 2 .The image contrast has been tuned to emphasize the studied effects.
Fig. 7 .
Fig. 7. Reconstituted dark frame during laser shot 8 with subtracted average dark level.(a) before a shot.(b) during the shot.(c) magnification of an area of the reconstituted image taken during a shot (indicated by a white dashed square in (b).The image contrast has been tuned to emphasize the studied effects.
Fig. 8 .
Fig.8.Distribution of the number of generated electrons after three selected shots.
Fig. 9 .
Fig. 9. Evolution of the number of disturbed and saturated pixels with neutron fluence.
Fig. 10 .
Fig. 10.Raw dark frame (200×200 pixel window at the top left corner of the pixel array) taken at room temperature (a) before the first shot and (b) after the last shot showing the creation of a few permanent hot pixels.The image contrast has been tuned to emphasize the studied effects.
Fig. 11 .
Fig. 11.Dark current distribution of 128×128 pixel CIS before irradiation and after exposition to neutrons at a 10 10 n/cm² fluence (with two different flux).These data come from [20].
Fig. 14 .
Fig. 14.Evolution of the transient voltage pulse as a function of neutron fluence. | 8,136.8 | 2012-08-27T00:00:00.000 | [
"Physics"
] |
On the Successive Overrelaxation Method
Problem statement: A new variant of the Successive Overrelaxation (SO R) method for solving linear algebraic systems, the KSOR method w as introduced. The treatment depends on the assumption that the current component can be used s imultaneously in the evaluation in addition to the use the most recent calculated components as in the SOR method. Approach: Using the hidden explicit characterization of linear functions to in troduce a new version of the SOR, the KSOR method. Prove the convergence and the consistency analysis of the proposed method. Test the method through application to well-known examples. Results: The proposed method had the advantage of updating the first component in the first equation from the firs t step which affected all the subsequent calculatio ns. It was proved that the KSOR can converge for all po ssible values of the relaxation parameter, ω*∈R-[2, 0] not only for ( ω∈(0, 2) as in the SOR method. A new eigenvalue funct io al relation similar to that of the SOR method between the eigenvalues of the it eration matrices of the Jacobi and the KSOR methods was proved. Numerical examples illustrating his treatment, comparison with the SOR with optimal values of the relaxation parameter were con sidered. Conclusion: The relaxation parameter ω* in the proposed method, can take values, ω*∈R-[-2, 0] not only for ( ω∈(0, 2) as in the SOR. The enlargement of the domain has the affect of relaxin g the sensivity near the optimum value of the relaxation parameter. Moreover, all the advantages of the SOR method are conserved and the proposed method can be applied to any system. This approach is promising and will help in the numerical treatment of boundary value problems. Other extensi ons and applications for further work are mentioned .
INTRODUCTION
The problem of solving linear systems of algebraic equations appears as a final stage in solving many problems in different areas of science and engineering, it is the result of the discretization techniques of the mathematical models representing realistic problems (Saad and Vorst, 2000) and the references cited therein. We consider linear systems of the form Eq. 1: We assume that the system has a unique solution and the equations are ordered so that, a ii ≠0, (Darvishi and Hessari, 2011;Papadomanolaki et al., 2010;Wang, 2010;Louka et al., 2009;Salkuyeh and Toutounian, 2006). Jacobi method is the simplest known iterative method; it is a direct application of the fixed point theorem. The point Jacobi method for system (1) From the computational point of view Gauss-Seidel method is a surprising natural extension for Jacobi method. Historically, Gauss introduced his method when he was working in least squares problem, in 1823, while Jacobi work appeared in 1853, (Saad and Vorst, 2000;Hackbusch, 1994). Gauss-Seidel idea depends on the use of the most recent calculated values. The point Gauss-Seidel method for system (1) Moreover, the novel successive over relaxation approach, the SOR method, generalizes the Gauss Seidel method. The point SOR method for system (1) x is the solution obtained by the Gauss Seidel method (Hackbusch, 1994;Burden and Faires, 2005;Varga, 1965).
Using matrix notations, the system of Eq. 1 can be written as Eq. 6: where, D is a diagonal matrix with the same diagonal elements as A and-L, -U are the strictly lower and upper triangular parts of A respectively, (Hackbusch, 1994;Burden and Faires, 2005;Varga, 1965;Young, 1971). Accordingly, we have.
Jacobi method:
( ) T j is the Jacobi iteration matrix Eq. 7.
Gauss-seidel method: T G is the Gauss-Seidel iteration matrix Eq. 8.
Definition:
The spectral radius of a matrix H, denoted ρ(H), is given by: It is well known that a necessary and sufficient condition for the convergence of a given iterative method is that the spectral radius of the corresponding iteration matrix is less than one. The smaller the spectral radius of the iteration matrix is, the faster the rate of convergence of the corresponding iterative method is (Saad and Vorst, 2000;Hackbusch, 1994;Burden and Faires, 2005;Varga, 1965;Young, 1971;1954).
The Gauss-Seidel and Successive Over-Relaxation (SOR) methods are important solvers for a class of large scale sparse linear systems due to their efficiency and simplicity in implementation. Many other surprising methods appeared in the last few decades used the same philosophy to introduce formulas that contain more parameters and include the other methods as special cases for some values of the parameters. The Accelerated Over Relaxation (AOR) method is a novel two parameter generalization of the above mentioned methods (Hadjidimos, 1978;Avdelas and Hadjidimos, 1981). Albrecht and Klein (1984), have considered extrapolated iterative methods, they have illustrated that the classical iterative methods can be interpreted as integration methods for certain systems of linear differential equations.
The basic idea in the KSOR method depends on the process of updating the residue in the right hand side of the SOR method (4). It is assumed that the current value can be used in addition to the use of the most recent calculated ones (i.e., updating the residue simultaneously with the current new component). Apparently, this process leads to an implicit formula but actually it is explicit due to the linearity of the equations. Accordingly, the first component is updated in the first step which affects all the subsequent steps. Unlike Gauss-Seidel (SOR), AOR and the extrapolated versions of iterative methods in which the solution is updated after the determination of the new component in the KSOR it is assumed that the update prosses can take place simultaneosoully with the evaluation of the new components. The iteration matrix of the proposed method is obtained, theoretical considerations are being discussed. It is proved that the method is completely consistent and can converge for values of the relaxation parameter (ω*∈R-[-2, 0]) not only for the relaxation parameter (ω∈(0, 2)) in the SOR. Moreover, the proposed method will be convergent when the classical SOR (ω∈(0, 2)) is convergent. Comparison of the results of the proposed method with other well-known iterative methods especially with SOR with optimal values of the relaxation parameter ω has proved the efficiency and reliability of the method. Numerical examples with the graphical behavior of the spectral radius of the corresponding iteration matrices as a function in ω* are discussed. Moreover, the proposed KSOR method has the same simple explicit appearance as the SOR method.
MATERIALS AND METHODS
Assuming that we can use the current component simultaneously on the evaluation of the residue appears in the SOR method in addition to the use of the most recent calculated one. It appears that the method will be implicit; however after the rearrangement of the terms, we get an explicit formula. Accordingly, the KSOR method can be written in the form Eq. 10-12: The relaxation parameter ω*∈ R-[-2, 0] plays the same role as ω in the SOR method but with extended domain. It is used to control the spectral radius of the iteration matrix, accordingly the rate of convergence.
The matrix formulation of the KSOR method is Eq. 13 and 14: where, T KSOR is the iteration matrix of the KSOR method. We first prove a basic result which gives the maximum range of values of ω* for which the KSOR iteration can converge.
Proof:
The proof is straightforward application of the definition of consistency (Young, 1971).
Theorem 3:
The characteristic equation of the KSOR iteration matrix can be written in the form Eq. 15: Proof: the characteristic equation is: We must have: This result holds for any system of the form (1) (have a unique solution with a ii ≠ 0. Moreover, for any ω*∈R-[-2, 0],β ≠ 0, because a ii ≠ 0.
Theorem 4: For any matrix that satisfies Eq. 16: In general two cyclic consistently ordered matrix in the sense of Young (1971); Theorem 3.3pag 147) and Varga (1965). The eigenvalues β of the KSOR point iteration matrix are related to the eigenvalues µ of the Jacobi point iteration matrix by the relation Eq. 17: Which proves that, * * 1/ 2 β βω 1 µ ω β + − = is an eigenvalue of the Jacobi iteration matrix. This result gives a direct correspondence between the eigenvalues β of the KSOR iteration matrix, T KSOR and those of the Jacobi iteration matrix, T j . In particular if T j has a p-fold zero eigenvalue, then T KSOR has p corresponding eigenvalues equal to 1/(1+ω*). Moreover, associated with the 2q nonzero eigenvalues ±µ i of T j there are 2q eigenvalues of T KSOR which satisfy: The correspondence between the eigenvalues β of the KSOR iteration matrix, T KSOR and those λ of the SOR iteration matrix, T SOR , will be considered in a next work. From the point of view of integration methods for certain systems of linear differential equations, Albrecht and Klein (1984) and the references therein the KSOR method can be considered as the method which uses the prediction correction philosophy in one step. From the point of view of extrapolated methods the KSOR method, like the SOR method can be considered as an extrapolated Gauss Seidel method. The KSOR method and other iterative methods can be combined from the point of view of prediction correction techniques and this will be our considerations in a subsequent work.
The KSOR algorithm: we introduce the algorithmic formulation of the KSOR method. This algorithm is similar except for some constant multipliers of the already well-established SOR algorithm (Burden and Faires, 2005).
Algorithm (KSOR):
Input: The number of equations m: • The entries a ij ,1≤ i, j≤m , of the matrix A • The entries b i , 1≤ i ≤m of b • The entries XO i ,1≤ i ≤m of XO=X (0) ; the parameter ω* • Tolerance TOL; maximum number of iterations N Output: The approximate solution x 1 ,…x m or a message that the number of iterations was exceeded: Step 1 : Set k=1 Step 2 : While (k≤N) do steps 3-6 Step 3 : For i =1,…, m ( ) Step 4 : If X-XO < TOL Then output (x 1 ,…x n ) (Procedure completed successfully.) STOP.
Step 6 : For i = 1,…, m set XO i = xi (Varga, 1965;Young, 1971). In the first example we present the solution values and the graphical representation of the absolute values of the eigenvalues of the SOR and KSOR iteration matrices In the second example we present the eigenvalues of the Jacobi µ i ,I = 1,2,3 and 4 and Gauss Seidel v i ,i = 1,2,3 and 4 iteration matrices and obtained the eigenvalues λ i ,i = 1,2,3 and 4 of the SOR iteration matrix as functions in ω and the eigenvalues β i ,i = 1,2,3 and 4 of the KSOR iteration as functions in ω*.
The main difficulty in the efficient use of iterative methods which depends on some parameters like the SOR method, the AOR method lies in making a good estimate of the optimum relaxation parameters which maximizes the rate of convergence of the method. In the following we consider two well known examples with known optimum relaxation parameter ω opt . Determining the optimum value of the relaxation parameters is a very important task and it will be considered later in a separt work.
Example 1: Consider a system with data Eq. 18: Whose exact solution is x 1 =1, x 2 =1, (Young page 96) and (Varga, 1965). It is well known that, for this system we have. The eigenvalues of the Jacobi iteration matrix T j in this example are: The eigenvalues of the Gauss Seidel iteration matrix T G are: Figure 2 illustrates the behavior of the absolute value of the eigenvalues of the KSOR iteration matrix T KSOR agains the relaxation parameter.
It is well known that, for this system we have. The eigenvalues of the Jacobi iteration matrix T j are the roots of the equation: It is clear from, Table 6, that corresponding to µ i = 0,i = 1,2, i * 1 β ,i 1, 2 1 ω = = + and the relation, between the eigenvalues µ i ,β i and the relaxation parameter ω*, theorem(4), µ i ω*β i 1/2 =β i +β i ω*-1 is completely satisfied all calculations and graphs are performed with the help of the computer algebra system Mathematica 7.0.
RESULTS
• The KSOR updates the residue simultaneously with the solution in addition to the use of the most recent calculated solution which reflects the the rapied convergence at the begainning appeared in the numerical examples • The domain of the relaxation parameter in the KSOR is ω*∈R-[-2,0] instead of ω∈ (2,0) in the SOR • The iteration matrix of the proposed method, the consistency and convergence analysis of the method are well established • Afunctional eigenvalue relation between the eigenvalues of the iteration matrices and the relaxation parameters (theorem (4)) is well established • Numerical examples illustrating and confirming the theoritical eigenvalue functional relation are considered • From Table 5-7, we see that the spectral radius ρ(T SOR ) changes from 0.072-0.092 while ρ(T KSOR ) changes from 0.072-0.073 in an interval of length 0.005 around the minimum value i.e., the change in ρ(T SOR ) is 20 times the change in ρ(T KSOR ) along an interval of the same length which illustrates relaxation of the sensitivity around the minimum value • Further extensions are mentioned
DISCUSSION
Although the problem of solving large sparse linear systems of algebraic equations is one of the old problems (Saad and Vorst, 2000;Hackbusch, 1994) it is still has an important role in many modern areas of science. The SOR is one of most used iterative methods espcially when a good estimation of the optimum value of the relaxation parameter ω opt is avaliable. Even if ω opt is avaliable, it is sensitive as illustrated in the results of the numerical examples.
In comparison with the SOR method, with known optimal value of the relaxation parameter, the KSOR method has the same advantages of the SOR. Even from the point of view of the splitting of the coefficient matrix, one can see that the SOR uses the splitting 1 ω D ω L D ω U ω = + − − + in addition to the possibility of the use of the philosophy of the prediction correction techniques, which we will consider in a subsequent work. It remains to introduce an effective procedure for the estimation of the optimum value of the relaxation parameter ω * opt.
which maximizes the rate of convergence of the proposed KSOR method and this will be the objective of a subsequent work. Also the KSOR can be used with more relaxation parameters as well as combinations of the SOR and the KSOR can be considered.
CONCLUSION
The KSOR has the same simple structure as the SOR method so its implementation is an easy task. The theoretical properties, the convergence as well as the consistency of the proposed method are proved. Comparison with other iterative methods especially with the SOR, with known optimal value of the relaxation parameter is discussed.
From the the computational point of view the method has the advantage of updating the first component from the first step unlike the other iterative methods which reflects the rapid convergence at the beginning.
The study of the spectral radius of the iteration matrices, which is a measure of the convergence rate of the linear iterative methods, have proved that there is a value of the relaxation parameter ω* for which ρ(T KSOR ) is comparable with that of the SOR corresponding to ω opt . The numerical examples have confirmed the theoretical eigenvalue functional relation (theorem 4) and illustrated that the extension of the domain of the relaxation parameter has the the effect of relaxing the sensitivity ρ(T SOR ) around its minimum value. | 3,696 | 2012-02-22T00:00:00.000 | [
"Mathematics"
] |
Sequential multiscale modelling of SiC / Al nanocomposites reinforced with WS 2 nanoparticles under static loading
E. I. Volkova,1,2 I. A. Jones,1,* R. Brooks,1 Y. Zhu,3 and E. Bichoutskaia2,* 1Division of Materials, Mechanics and Structures, Faculty of Engineering, University of Nottingham, Nottingham, NG7 2RD, United Kingdom 2School of Chemistry, University of Nottingham, Nottingham, NG7 2RD, United Kingdom 3College of Engineering, University of Exeter, Exeter, EX4 4QF, United Kingdom (Received 9 February 2012; revised manuscript received 21 July 2012; published 24 September 2012)
I. INTRODUCTION
Lightweight high performance ceramic/metal composites have recently attracted intense academic and industrial interest due to their high strength, ductility, and hardness, as well as the ability to withstand severe shock loadings. 1apid development of these composites, focused mainly on inclusion of ceramic nanoparticles such as B 4 C, SiC, TiB 2 , and Al 2 O 3 , offers a great potential for utilization in many critical protective applications. 2,3It has been confirmed both experimentally 4 and theoretically 5 that these composite materials can exhibit much higher strength and hardness than their parental bulk counterparts, not only under general ambient conditions but also under high shock loadings.Additional incorporation of inorganic fullerene-like (IF) nanoparticles into ceramic/metal matrices leads to a new form of multiphase nanocomposites 6 with potentially improved shock absorbing properties.Nanocomposites containing IF-WS 2 nanoparticles have already demonstrated outstanding tribological and wear properties. 7][10] Similar to IF-WS 2 nanotubes, IF-WS 2 fullerenes are hollow multilayered structures, but with approximately spherical shape.They can act as molecular absorbers in the SiC/Al nanocomposite, damping shock energy through the large interlayer separation (van der Waals gap) similar to that of the bulk interlayer distance of 0.62 nm.Thus a combination of a tough and strong SiC/Al matrix, which effectively stops fragment penetration and perforation, with energy-absorbing IF-WS 2 nanoparticles, could ultimately offer the next generation of antishock materials.
The high demand on imminent utilization of the advanced multiphased ceramic nanocomposites under severe loading conditions requires an improved understanding of the relationship between the atomic and macrostructure of these materials and its effect on the shockwave response.A detailed analysis of the influence of multiple phases on the mechanical and elastic properties could elucidate the advantages of these nanocomposites compared to the corresponding single-phase materials.The finite element modelling of hollow, faceted IF-WS 2 nanoparticles under compression was recently reported by Kalfon-Cohen et al. 11 In their work the mechanism of failure under compression was investigated using a modification of a model previously published for WS 2 nanotubes. 12he present work is motivated by a need to model the response of IF-WS 2 -containing nanocomposites to a variety of load cases, beginning with simple static loads and progressing via a range of dynamic situations, with a view to constrtucting a comprehensive explicit FE model of the material's response to shock and impact loads.It is also motivated by the need for a set of reliable and complete input data for such an analysis.In this paper, a first stage of this analysis is reported, focusing on the generation of reliable material property data for WS 2 and the prediction of elastic properties for the nanocomposite.The methodology employs a sequential multiscale modelling approach (as classified by, for example, Lu and Kaxiras 13 ) also referred to as hierarchical multiscale modeling 14 that combines atomic level investigation of the elastic properties of the WS 2 bulk material with the continuum level study of the aggregate behavior of an example SiC/Al nanocomposite impregnated with IF-WS 2 nanoparticles.This is in contrast to a concurrent multiscale approach where the models at different scales are coupled and run simultaneously, for example a coupled FE/MD simulation. 13The paper is organized in three further sections.In Sec.II the adopted methodology is described, and the results for the elastic properties of bulk WS 2 and MoS 2 (included as a test study) materials computed with density functional theory (DFT) including dispersion corrections are presented.In Sec.III, the results of the static loading finite element simulation of the multiphase nanocomposite are discussed and compared with the existing theoretical approach of Budiansky. 15In Sec.IV we analyze the obtained results and draw conclusions.
II. CALCULATION OF THE ELASTIC PROPERTIES AT THE ATOMIC LEVEL
As the diameter of the IF nanoparticles is approximately 100 nm 16 and therefore much larger than the interlayer spacing, the curvature of the particles has been ignored, and in the DFT calculations of the structure and the elastic properties, the WS 2 and MoS 2 materials have been treated as bulk structures.Three approximations for the exchange-correlation energy have been explored, namely the local density approximation (LDA/CA-PZ 17,18 ), and the Perdew-Burke-Ernzerhof (PBE 19 ) and the Perdew-Wang (PW91 20 ) parametrizations of the generalized gradient approximation (GGA).Van der Waals interactions have been included using the semiempirical dispersion corrections to the total DFT energies.The OBS dispersion scheme 21 has been used for the LDA and PW91 functionals, and the G06 scheme 22 has been used for the PBE functional as implemented in CASTEP 5.5 quantum chemistry code. 23A unit cell consisting of six atoms has been used as shown in Fig. 1(b).An on-the-fly pseudopotential generator has been used to eliminate the core states and describe the valence electrons by nodeless pseudo-wave functions.A plane wave basis set with the cutoff energy of 440 eV has been used, and the Monkhorst-Pack grid of 12 k points has been employed to sample the Brillouin zone.This approach has been initially used on the MoS 2 bulk material for which there is ample theoretical and experimental structural data [24][25][26] (see Table I).8][29] The authors judged it essential to perform a robust test of the ab initio predictions prior to calculating the WS 2 elastic properties for which no complete set of data appears to be available in the literature.
Inclusion of dispersion interactions in DFT calculations improves the prediction of the interlayer equilibrium geometry of the MoS 2 and WS 2 bulk, yielding a better agreement with experiment.This is particularly evident for the GGA functionals (see Tables I and II).For example, the optimized GGA/PW91 + OBS values of the lattice parameters for WS 2 are calculated to be a = b = 0.318 nm, c = 1.250 nm in good agreement with experiments. 27,28The W-W distance is found to be 0.61 nm in the c direction and 0.31 nm in the ab plane, which is also consistent with the values reported in the literature. 27The GGA + OBS values of the layer thickness 2cz and the lattice constant c are closer to experimental values 27,28 than previously reported GGA evaluations 29 without dispersion correction.Similarly good agreement is seen in Table I for MoS 2 , where experimental values of the vdW gap and the Mo-S bond length are additionally available for comparison with predictions.
Having obtained the optimized structures of MoS 2 and WS 2 the full elastic constant tensors have been calculated for both structures.According to Hooke's law, 30 the relationship between stress and strain can be linearly expressed as where C ij is the stiffness matrix.
The elastic constants, in the form of the stiffness matrix, are obtained within the CASTEP package using the finite strain method described by Milman and Warren 31 and in TABLE I. Equilibrium geometry (in nm) of the MoS 2 bulk material: a and c are the lattice parameters of the unit cell, 2cz is the thickness of the layer as shown in Fig. 1(b), and d the CASTEP theory documentation. 32Prescribed strains are applied to the unit cell, the structure is optimized for each deformed state, and the stresses for each strain state are calculated.Only small values of strains were applied in order to remain within the elastic region of the compounds.The elastic stiffness coefficients C ij within a general definition of Hooke's law are then obtained by fitting the stresses as a linear function of strains.The fitting procedure is necessary because it is not possible to calculate elastic properties of the compounds from applying just one value of strain. 33In the present work, a strain of 0.007 was applied in nine incremental steps, and it was found to give a good compromise between nonlinearity and numerical errors.Since MoS 2 and WS 2 bulk materials form hexagonal crystals with a layered structure, it is assumed that these materials are transversely isotropic (in common with other materials with hexagonal structure), 34 so that they can be characterized by a set of five independent elastic constants. 35,36istinct, nonzero terms from the stiffness matrices for MoS 2 and WS 2 are presented in Tables III and IV It is straightforward to show that the single-crystal bulk modulus, K trans , of each of these transversely isotropic compounds can be calculated from the elastic constants, namely Young's moduli, E i , and Poisson's ratios, ν ij , as follows: where 37 The computed GGA values for K trans of transversely isotropic structures are very close to the experimental values 38,39 for the MoS 2 38 and WS 2 39 bulk structures obtained by fitting the pressure-volume data (derived from lattice parameter measurements under pressure loading) to the thirdorder Birch-Murnaghan equation of state (see Tables III and IV).The LDA/CA-PZ + OBS approach, however, overestimates K trans by about 20%.The values of C 11 , C 33 , and C 44 for the MoS 2 bulk, calculated using GGA in conjunction with PBE and PW91 functionals with empirical dispersion correction, agree well (within 10% or less) with Feldman's data 40 though agreement is poorer for C 13 and irreconcilable with Feldman's surprising negative inferred value for C 12 .The LDA C 33 values are in poorer agreement with all other values including a large mismatch (up to 47%) with the experimental data of Sourisseau. 41In general, the LDA results with OBS dispersion scheme both for lattice parameters and for the elastic properties of MoS 2 are in the poorest agreement with the experimental data.The components of the elastic matrix calculated for the WS 2 bulk, using both LDA and PW91 functionals including empirical dispersion correction, are in disagreement with Sourisseau's experimental data. 41The values of C 11 are at least 60% different.PW91 with OBS dispersion scheme, however, gives the lowest energy for both MoS 2 and WS 2 bulk materials and therefore the GGA/PW91 + OBS elastic properties are taken forward for use within the continuum finite element model in the form of the stiffness matrix components C ij .
III. ELASTIC PROPERTY CALCULATION FOR NANOCOMPOSITE AT THE NANOPARTICLE SCALE A. Finite element model of the nanocomposite
As a first stage towards understanding the behavior of the nanocomposite under a range of load cases, the elastic properties obtained from DFT calculations have been subsequently used in simulations of the nanocomposite using the finite element method (FEM). 42The FEM involves discretizing a structure into a large number of regions or elements, each associated with an appropriate material model, and the deformation of the structure is defined in terms of displacements of the nodes to which each element is linked.The relationship between nodal displacements and forces takes the form of a stiffness matrix calculated from the mechanical properties and geometry of each element.In the present (quasistatic) problem, an implicit solution method is used in which the unknown quantities (unrestrained displacements and reaction forces at restrained nodes) are found using linear algebra techniques.Here, the prediction of the nanocomposite's behavior has been performed using a unit cell model established within the ABAQUS finite element system, 43 which is a commercially available suite of software for solving engineering mechanics problems using the techniques described above.The true structure will have randomly distributed particles of WS 2 and SiC incorporated in an Al matrix with no particular directionality.As a first approximation each WS 2 particle is assumed to have a perfect hollow spherical shape having transversely isotropic properties such that the local x and y directions are tangential to the spherical surface and the local z direction is radial.In order to represent such a material via a unit cell model of a manageable size, a regular structure has been adopted as a first approximation which for convenience initially assumes an interpenetrating simple cubic configuration [Fig.1(c)] identical to caesium chloride crystal structure.The mechanical properties, particle dimensions, and volume fractions used to generate the ABAQUS models are presented in Table V.The WS 2 properties were defined in the model using a local spherical coordinate system.
The bounding planes of the unit cell are defined as follows: where x, y and z are the Cartesian coordinates of the unit cell, and h 1 , h 2 and h 3 are respectively its dimensions in the three Cartesian directions.In the present case h 1 = h 2 = h 3 = 100 nm.The planar and rotational symmetries of the unit cell allow a simplified form of periodic boundary conditions, which, for the application of direct strains, are implemented as follows [Fig.2(a)]: where u, υ, and ω are uniform displacements applied to the opposing faces perpendicular to the three Cartesian axes, and u 0 is a displacement value chosen to achieve an appropriate level of strain, in the present case 0.001 or 0.1%.In a similar manner, the following boundary conditions were used for application of shear strains [Fig.2(b)]: Figure 3 depicts the distorted shape of the unit cell, with the displacements exaggerated for clarity and with the stresses shown as color contours.Noting that the configuration of the unit cell is symmetric over the three Cartesian directions, the components of the stiffness matrix for the nanocomposite were recovered from the reactions at the unit cell boundaries using the following relations: where ε x = 2u 0 h 1 is the extensional strain in the x direction and γ xy = 2u 0 h 2 is engineering shear strain in the xy plane.RF x , RF y , and RF z are reaction forces occurring in the x, y, and z directions, respectively.A x , A y , and A z are the projected areas (taken to be equal in the present case) of the unit cell normal to the x, y, and z directions respectively so that A x = h y h z etc. Mesh convergence was demonstrated by confirming that C 11 changed by − 0.1% as the number of elements was approximately doubled from 98804 to 216215.
B. Results of the finite element simulations
The stiffness and compliance components of the nanocomposite recovered by this method are It should be noted that the assumed particle configuration is an idealization, which acts as a first approximation to what is in reality a random structure with no preferential directions.Therefore, the model would not be expected to produce the isotropic properties of the true random material, and the predicted properties are those of a cubic material which needs three independent elastic constants to be defined.Specifically, its elastic behavior is uniquely defined by any three of the follows constants, which can be calculated from the cubic stiffness matrix: 37 The model yields the following values: K = 100.34GPa, G = 48.49GPa, E = 149.90GPa, and ν = 0.2510.The realistic material with a random structure can then be considered to consist of an isotropic aggregate with a locally cubic structure.Voigt and Reuss 44,45 approximations give the theoretical maximum and the minimum values of the average isotropic elastic moduli, respectively.The Voigt approximation assumes that the uniform strain in the compounds is equal to external strain and the Reuss approximation assumes that the uniform stress in the compounds is equal to external stress.The Reuss bulk modulus K R and the Voigt bulk modulus K V are equal for cubic materials 46 and are given by and the Reuss shear modulus G R and the Voigt shear modulus G V are given by Hill 46 demonstrated that the approximations of Voigt and Reuss give upper and lower bounds of the isotropic moduli, respectively, and recommended that the realistic approximation of the isotropic moduli of isotropic aggregates is the arithmetic mean of these limits.Hence the elastic moduli, K iso and G iso of the isotropic material can be approximated as Since these two constants now fully define the isotropic material, the values of Poisson's ratio ν iso and Young's modulus E iso are obtained by substituting the calculated values of K iso and G iso into The obtained values are K iso = 100.34GPa, G iso = 52.79GPa, E iso = 134.74GPa, and ν iso = 0.2762.The authors are not aware of an analytical solution which fully models a particulate with hollow, spherical, transversely isotropic inclusions.However, Budiansky's method, 15 in turn based upon Eshelby's inclusion technique, 47 finds the aggregate elastic constants of an inhomogeneous material with multiple isotropic phases taking the form of ellipsoidal or (as a special case) spherical inclusions embedded in a matrix phase; despite the assumption of inclusion shape, the solution reduces to a form symmetric across all phases including the matrix.Budiansky's solution is used here to obtain two estimates of the overall modulus of the nanocomposite.Both estimates treat the hollow interior of the nanoparticles as a fourth phase of zero modulus and volume fraction 2.4% and use the moduli for the SiC and the Al phases directly from Table V.They also both treat the nanoparticles as isotropic and with a volume fraction equal to the true volume fraction of their solid component (17.1%).It is also assumed that their Poisson's ratio is 0.22, equal to ν 12 for the WS 2 from Table V.However, one estimate assumes that the Young's modulus for the nanoparticles takes a value of 224 GPa numerically equal to E 1 = E 2 , while the other assumes a value of Young's modulus of 60 GPa which gives the same bulk modulus as for the transversely isotropic WS 2 in Table IV.The results of this comparison are presented in Table VI.While no attempt has been made to explore whether these estimates are indeed bounds on the true value of the nanocomposite modulus, it is reassuring that the estimate of average isotropic E iso calculated from the FE model lies close to the one of the estimates of Budiansky with E(WS 2 ) = 60 GPa, while the FEM estimate of Poisson's ratio is within 7% of the Budiansky estimates.It is seen in Fig. 3(a), and from closer examination of the FEM output, that the highest stresses in the direct loading situation occur as circumferential direct stresses in the loading direction, at the surface of the nanoparticles.It is noteworthy that the transverse and shear flexibility of the WS 2 prevents transmission of significant load to the inner layers.Similarly, the shear loads are carried as shear stresses at the surfaces of the nanoparticles [Fig.3(b)].The stresses in the ceramic reinforcement, and in the metallic matrix, are considerably lower than in the WS 2 nanoparticles, and it would appear that the load carrying mechanism of the nanoparticles acts to avoid stress concentrations in either of these two phases at the expense of causing a relatively large stress concentration in the nanoparticle (Fig. 4).While the response of WS 2 particles to Hertzian crushing loads has been explored elsewhere, 49 we are not aware of work characterizing their response to the present load cases and with present application.
The case study presented here acts as a feasibility study in which the modelling of IF particles with local spherically orthotropic properties has been achieved, and the results of the analysis demonstrated to be consistent with alternative approaches.This lays the foundations for undertaking more sophisticated analyses, involving larger numbers of particles and more specialized (explicit) solution methods, in order to model the response of the nanocomposite to more challenging load cases such as shock and impact.In order to model the dynamic and failure response of the composite under high rate loading conditions, it will also be necessary to consider the strength of the IF particles and of the interfaces between particles and matrix material.
IV. SUMMARY
In conclusion, the present work has provided a rigorous theoretical prediction of MoS 2 and WS 2 elastic properties.For MoS 2 where more complete existing data are available, excellent agreement has been observed, giving confidence in the predictions for WS 2 .The structural geometric data for both compounds agree well with the existing published data.The present work also incorporates the predicted elastic properties for WS 2 into the first mechanical analysis of WS 2 /ceramic nanocomposites.This work both explores the load carrying mechanisms of the IF particles, and serves as a feasibility study for testing the mesh generation, boundary condition, and property definition procedures required in more complex analyses.Future work will concentrate upon extending the mechanical analysis to consider more practical particle architectures and load cases, and to incorporate failure predictions via interface properties derived from MD calculations.
FIG. 1 .
FIG. 1. (Color online) Sequential multiscale approach: (a) TEM lattice image of IF particle showing layered structure as adopted from; 16 (b) a unit cell of the bulk material used in DFT: tungsten (molybdenum) atoms are denoted in green and sulfur atoms are denoted in yellow; (c) interpenetrating simple cubic unit cell of the nanocomposite used in the mechanical FE model: a SiC nanoparticle is in the center of the unit cell; IF-WS 2 nanoparticles are in the corners of the Al matrix.
, respectively.Our assertion that the materials are transversely isotropic is supported by noting that C 11 = C 22, C 13 = C 23, C 44 = C 55 , and C 66 = C 11 −C 12 2 to an accuracy of at least 1%.Both materials show a high degree of elastic anisotropy, with the highest stiffness constants being C 11 = C 22 along the a axis and b axis, respectively, where deformation involves bond bending and bond stretching.The stiffness constant C 33 is significantly lower (approximately by a factor of 5) because it involves weak interlayer van der Waals forces.
FIG. 2 .
FIG. 2. Periodic boundary conditions for the FE model.The uniform displacements are (a) applied to the left and right surfaces for the FE simulations under direct stresses and (b) to the top and the bottom surfaces for obtaining shear components.By contrast, the remaining surfaces are fixed against any movements.
5 (C 11 −
FIG. 3. (Color online) Three-dimensional distorted view of the FE unit cell under (a) direct stresses and (b) shear stresses.Deformation scale factor is 100.A uniform strain of 0.1% has been applied in both cases.
TABLE II .
(Mo-S) is the length of molybdenum-sulfur bond.Equilibrium geometry (in nm) of the WS 2 bulk material: a, c, 2cz and d are defined in TableI.
TABLE III .
Stiffness matrix, C ij (in GPa), and bulk modulus, K trans (in GPa), of the MoS 2 bulk obtained using Eq.(2).Duplicate and zero elements of C ij are omitted.
TABLE IV .
Stiffness matrix, C ij (in GPa), and bulk modulus, K trans (in GPa), of the WS 2 bulk obtained using Eq.(2).Duplicate and zero elements of C ij are omitted.
TABLE V .
Mechanical properties of materials used for the FEM analysis.The elastic properties of WS 2 are given here for comparison, and are an alternative representation of the stiffness matrix components presented in Table IV for PW91 + OBS, which were used directly as the FE input data.
TABLE VI .
15mparison of elastic properties of the isotropic composite from the DFT/FEM approach and Budiansky analytical model.15 | 5,439 | 2012-09-24T00:00:00.000 | [
"Materials Science",
"Engineering",
"Physics"
] |
Secured Optimized Resource Allocation in Mobile Edge Computing
Department of Computer Science, Kohsar University Murree, Punjab, Pakistan Department of Information Technology, e University of Haripur, Haripur 22620, Khyber Pakhtunkhwa, Pakistan Department of Computer Science, Faculty of Science and Arts, Belqarn, Sabt Al-Alaya 61985, University of Bisha, Saudi Arabia Faculty of Computing, e Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan Department of Computer System Engineering, University of Engineering and Technology Peshawar, Peshawar 25000, Khyber Pakhtunkhwa, Pakistan
Introduction
e recent increase in demand for mobile devices and the use of the cloud as virtual storage demanded the eld to evolve, named mobile edge computing (MEC). Research in MEC is performed by load balancing and o oading. To decrease resource demand in MEC architectures, a pattern of traditional clouds is used. e majority of the MECs do not consider optimization and cost factors. MEC also provides the network to use the resources that have less memory, time, and energy consumption.
MEC provides the opportunity to reduce latency while o oading tasks in the network. MEC allows the resources to o oad tasks easily and safely in the network. MEC tries to reduce the consumption of power and energy. It removes all the delays from the network. MEC designs the nodes to o oad tasks in the network to remove all the delays from the network. MEC allows the nodes to get information about the other nodes that are o oading data in the network so there will be no collision. It allows nodes to use all the resources from the network. Linear programming is the most common approach to optimize an objective function, for example, to reduce resource consumption, reduce the total execution time, reduce latency, or increase the quality of experience.
One of the recent approaches to solving the o oading problem is by using deep learning [13]. Table 1 shows an overview of di erent surveys conducted in the eld. Limitations of the surveys are also mentioned in the table.
A gap in the [14] study is that it does not consider offloading in the large-scale network, while the quality of offloading tasks in the network is not considered in [15]. Safety is not considered in [16], while task execution is difficult for the user in [17,18]. Modern facilities to use are not well-thought-out [19]. e disadvantage in [20] is that it does not consider if any virus destroys this data, then what will be sent next. e negative of [21] is that it does not consider the multiple attacks solution. e undesirable thing about [22] is that it does not consider the solution of inputting tasks easily for the user. e ploy of [23] is that it does not consider the offloading task in the network easily and more securely. e depraved thing of [24] is that it does not consider more instructions to make the system more secure. e downside of the study is that it does not consider cryptography techniques to make the network more secure [25]. Table 2 presents the critical analysis. e rest of the paper is arranged as follows. Section 2 contains a literature review. Section 3 states the problem statement while Section 4 elaborates on the proposed solution. Section 6 concludes the paper with a proper future direction.
Literature Review
e authors of [26] found that cloudlets offload the task easily by using the techniques of DOTA, CBL, and FATO. Mobile devices request for the transmission of information by making sure the message is in the network so there will be no collision.
MCC provides the facility to store large data over the network, and it also ensures the sending of large amounts of data on the network. Mobile edge computing improves the speed of the node by removing the slow node PCs from the network and adding the neighbor PCs. It is found in the paper that a single node makes the network so busy that it increases. DOTA, CBL, and FATO are used for dividing tasks in cloudlets. e limitation is that less energy consumption having less cost technique is not developed. It also provides good efficiency to the nodes in the network. So the nodes will offload tasks in the network with good quality. SDN provides the facility for MEC to divide the task into nodes. e nodes will offload tasks in the network sequentially. So there will be no loss of data in the network. MEC also provides the service of offloading tasks in the wireless network [51].
Wang et al. proposed a new architecture for computation and storage offloading based on fog computing and found that COCA offloads the task from the smartphone to the fog server [27]. e result was deduced that with the enactment of the cloud upgrade, uploading data became fast. In this research, no technique was used for uploading large amounts of data. More resources must be added due to the hinging of the network.
In [28], local computing (mobile device) combined with the computing system and found system loss function (SYLF) minimization problem. Markov Table 1: Survey-based analysis.
Year [paper] Topic discussion and overview Limitation February 2019 [1] Machine and deep learning techniques are mostly used in papers of this survey. Machine and deep learning are used to detect the encryption traffic in offloading tasks in the network.
ere are grand challenges in edge computing security.
March 2017 [2] Network virtualization and Lyapunov optimization-based dynamic computation offloading (LODCO) algorithm is used. Network virtualization manages the flexibility of the virtual representation provided by the MEC.
Dynamic resources are added while offloading tasks in the network.
Resources that are added to offload tasks in the network increase delay. Markov-based techniques reduce the time and redundancy while offloading tasks in the network. January 2017 [4] Machine learning advanced communication techniques are used to offload tasks in the network. Slow processing increases time delay.
June 2017 [5] Advanced communication techniques are used. Refraction and reflection increase delay.
May 2018 [6] Migrating running service technique and compression algorithm are used. Migrating running service technique is used for migrating the task in the network. e size of the task is not reduced, so there is a delay in the network.
2007 [7] IDS is used to interpret the traffic while offloading tasks in the network. e cost factor is not considered 2007 [8] e pushback technique is used to aggregate the traffic while offloading tasks in the network.
A comparison of complexity analysis is not performed.
2014 [9] RTT communication and task scheduling algorithms are used. RTT communication is used to handle the traffic while offloading data in the network and divide the task in the node to remove the delay.
Processing delay is not entertained.
2020 [10] VM migration and genetic algorithm are used. VM migration is used to migrate the task in the network and makes the performance better. ere is delay in resource consumption.
2017 [11] e virtual machine is used to migrate the task virtually in the network and make the performance better. ere is time delay.
2015 [12] Lyapunov optimization and online control algorithm are mostly used in the research paper of this survey. Lyapunov optimization is used to run the online program to offload tasks in the network.
Maximization of resource usage is not performed. Deployed cloudlets used for dividing the resources in k nodes A single node makes the network so busy that increase. DOTA, CBL, and FATO are used for dividing tasks in cloudlets.
Mobile Information Systems
Cloudlets offload the task easily by using the techniques of DOTA, CBL, and FATO.
Less energy consumption having less cost technique is not developed.
October 2018 [27] New architecture for computation and storage offloading based on fog computing New architecture for computation and storage offloading based on fog computing COCA offloads the task from the smartphone to the fog server.
e enactment of the cloud upgraded, uploading data become fast.
August 2020 [28] Local computing (mobile device) that combines with the computing system Local computing (mobile device) combines with the computing system.
System problem loss function (SYLF) minimization problem QLCOF scheme effectively reduces the SYLF.
Pairing-free multiserver authentication protocol Secure mutual authentication, anonymity, and scalability are achieved.
None
May 2020 [31] Comparison technique (proposed MUMACO with benchmark) Offloading of all applications is done to the cloudlets, but a fraction of cloudlets is idle.
Time consumption, energy consumption, and load balancing are optimized.
Multiobjective is performed. Optimization cannot be performed January 2020 [32] Offloading algorithm (hybrid intelligent optimization algorithm) Optimization of task delay and resource consumption e proposed algorithm effectively improves the offloading utility as compared to baseline.
Offloading in the uncertain network is not available.
March 2020 [14] Comparison and optimizing technique (offloading) Performance and energy of mobile device can be improved by edging.
Proposed HIQCO provides accurate results and then compared the algorithm.
Storage cannot be considered in the comparison to HIQCO and baseline algorithms.
August 2019 [33] MCOWA technique used for uploading tasks on the network easily By using MCOWA technique, algorithm problem is solved as it solved the complexity of the network.
Time and energy are consummated so the network becomes fast.
None 2018 [34] Analysis technique used for the scalability and performance of an edge cloud system Interedge is unchanged. Bandwidth should remain.
If capacity is added to the existing edge network deprived of increasing the interedge bandwidth, then it will pay for networkwide congestion.
Increasing distance and low bandwidth will increase the load.
March 2020 [35] Cloud modeling operator introduced that deals with the execution of packets in the network By using this strategy, the performance of computing resources improves.
Time and energy are consummated. Also, improve the utilization of computing resources and ensure the QoS, and this is critical to edge-cloud computing business models.
e problems such as management resources of MEC and the cloud are not improved and considered.
April 2020 [36] FL technique introduced for round communication between the nodes in the network e nodes will only send a message from one node to other when they receive a message that the network is free.
By using the FL technique, the network becomes safe from the collaboration of messages. us, the packets will not be lost.
Privacy is not considered and improved to make the network secure.
April 2020 [37] GPS technique used for measuring frequency and sending the signals even from the satellite GPS technique is used for sending the signals from satellite to the user. So the user can easily send a message from Earth to satellite, and vice versa. e offloading task increases if the user sends a message to the satellite. ere will be no interruption of other networks as the nodes only send one packet at one time.
No technique is used to make the signal powerful as if the signal is weak, the packet will be lost.
January 2021 [38] DECCO technique used for maintaining signals from a long distance Long distance creates the network slow. us, the packet delay. Energy and power consumption.
DECCO used that maintain the long distances plackets. us energy, time, power and quality consumption.
Many computing capabilities are not considered. Sending a large amount of data is not considered.
September 2019 [40] Edge-centric IoT used that is responsible for offloading data safely in the network Security is very important for offloading tasks in the network.
If there is no security in the network, then the delay will occur, and the data can also be hijacked.
If data is hijacked and caught by a virus, then no technical solution is considered here.
January 2019 [41] GMaxEOQU and GMinEOIP used in the network GMaxEOQU and GMinEOIP are used to minimize the quality errors in the network.
If there are less nodes present in the network, then there will be a delay in the network.
Offloading times by multiple nodes are not considered.
February 2020 [42] QMPOS technique used in the network QMPOS technique is used to derive the result in the network and evaluate the performance of VN.
By balancing the load of VN, the task will offload within time. e network becomes burdenless.
e cost of VNs is not considered.
June 2020 [43] Fog computing used in the network Fog computing provides management, security, and availability of resources that helps offload tasks.
Offloading tasks becomes too easy and secure. Nodes will get many resources for offloading tasks in the network.
e cost of resources is not considered.
June 2020 [44] SMSC and RAMWS used in the web servers SMSC works to control the requests that arrive on web servers, and RAMWS works to overcome the request time out in the web server.
e web server provides the resources to the users to use the resources and offload their tasks. It also provides the user the facility to get information from the websites.
Protection of web servers is not considered.
August 2020 [41] Skippy technique used Serverless is used in the network that provides all resources to offload the task in the network, and Skippy helps serverless do this.
A large amount of data is able to send in the network by using serverless.
If any unauthorized network hacks the data, then a large amount of data will have lost in the network.
March 2019 [45] Pervasive technique used Pervasive helps the computer provide all resources to the nodes to the nodes offload their task.
Pervasive helps the user find anything from the computer by using it. It also provides the user to interact with the computer easily.
Security is not considered.
August 2018 [46] Routers used in the wireless network Routers develop the communication between the two networks and make communication possible.
Wireless network also provides the facility to the nodes and make communication easy like space.
Cost is not considered. decision process (MDP) designed a state loss function (STLF) to measure the quality of experience. In it, multioperator multiverge cloud state was not considered. Slow nodes must remove because multiusers increase the cost. Mobility management is the reason for the disconnected link between the devices and the edge network. It manages horizontal and vertical mobility. Heterogeneity deals with the wireless network interface, for example, Wi-Fi. Low delay and high bandwidth are the main challenges. Decrease in price by adding neighbors to offload tasks early is the main challenge. MEC provides the service of offloading tasks in the network by using the Internet. MEC allows users to download anything from the Internet keeping the security in the mobile devices. MEC provides the service of using passwords on mobile devices. It provides the facility of storing data on the Internet so that when the user accesses the Internet, he will easily access the information without any delay. MEC provides biometric security so that when the user enters his data in biometrics, his data will remain safe and will not leak to anyone. MEC makes it possible that when the person will enter his fingerprint, all his data will come out. is data will not be accessible to anyone because MEC provides security. MEC provides security to the nodes when the nodes will offload data in the network. e data will be safe and will not be disclosed to anyone. MEC provides security to the nodes by using passwords and keys that will not be shared with anyone.
In [29], the authors used mixed integer programming to find NP-hard problems and EcoMD. EcoMD provides improved performance in terms of resources. But resources must be stable because increasing nodes will increase the cost. However, there are some other solutions as well that do not fit our study [52][53][54][55]. e authors of [30] used the elliptic curve cryptosystem and MSA protocol for the MCC environment to find a pairing-free multiserver authentication protocol and achieved secure mutual authentication, anonymity, and scalability, but there was no mechanism of security proposed in it.
In [31], the authors used the comparison technique (proposed MUMACO with benchmark) to find offloading of all applications to the cloudlets, but a fraction of cloudlets was idle. Time consumption, energy consumption, and load balancing were optimized but multiobjective optimization cannot be performed, and less cost and energy consumption resources must be used.
In [32], the authors proposed offloading algorithm (hybrid intelligent optimization algorithm) and found optimization of task delay and resource consumption. e proposed algorithm effectively improves the offloading utility as compared to the baseline algorithm, but offloading in an uncertain network is not available [56].
In [14], the authors introduced a cloud modeling operator that deals with the execution of packets in the network by using this strategy; the performance of computing resources improves. Also, they improve the utilization of computing resources and ensure the QoS and thus are critical to edge-cloud computing business models [57].
In [33], the FL technique was introduced for round communication between the nodes in the network. e nodes will only send messages from one node to another when they receive a message that the network is free. By Do not consider a secure and easy offloading task.
2019 [24] AES-based cryptography approach FPGA No collision will happen to destroy the data.
Do not consider a more secure system. 2020 [25] AES-based cryptography approach LLCA Encrypt and decrypt data exactly in the network.
Do not consider more cryptographies to make the network more secure.
2020 [48] AES-based cryptography approach LSM Protect the data of user from not being able to hack for the other person.
Do not consider an easy and secure offloading task network.
2020 [49] AES-based cryptography approach RSA Criminal record update from time to time Do not consider more functions to offload tasks in the network.
[50]
Blockchain ACO algorithm Make the system more secure to offload data in the network.
Do not consider more systems to make offloading tasks in the network more secure and easy.
using the FL technique, the network becomes safe from the collaboration of messages. us, the packets will not be lost. Privacy is not considered and improved to make the network secure.
In [34], the GPS technique is used for measuring frequency and sending the signals even from the satellite. GPS technique is used for sending the signals from satellite to the user. So the user can easily send messages from Earth to satellite, and vice versa. e offloading task increases if the user sends a message to the satellite. ere will be no interruption of other networks as the nodes only send one packet at one time. No technique is used to make a signal powerful as if the signal is weak, the packet will be lost. MEC does the encryption and decryption task in the network. e nodes will encrypt data in the network. MEC makes this task possible that regardless of what the user sends for encryption, the network will decrypt the same data in the network without any delay. MEC also makes the money transaction possible and safe by using the keys such as ATM keys and PINs. e PIN is only known by the user who uses the ATM.
us, the data and the money will be safe. MEC provides security to the criminal record. If the person does any crime, then the record will be written in the file. is file will not leak to anyone and will be updated from time to time. It is possible due to MEC. MEC provides the security and privacy for storing data in the network that when the user wants to access the data, MEC makes the task present in the network.
is removes the delay from the network to access the network.
In [35], the authors used the DECCO technique for maintaining signals from a long distance. Due to long distance, the network becomes slow resulting in high packet delays, energy, and power consumption. DECCO maintains the long-distance packets. us energy, time, power, and quality consumption are achieved. Cloud servers are far away from mobile devices that create signal issues, so the resources become weak. e authors in [36] used edge-centric IoT that is responsible for offloading data safely in the network. Its security is very important for offloading tasks in the network. If there is no security in the network, then the delay will occur, and the data can also be hijacked. If data is hijacked and caught by a virus, then no technical solution is considered here. If there is no security and privacy in the network, then the data will create delay and be hijacked. e network must be protected by passwords, and the password must be secure. e password is not shared by anyone [58,59].
In [37], the authors used GMaxEOQU and GMinEOIP in the network, and they are used to minimize the quality errors in the network. If there are less nodes present in the network, then there will be a delay in the network. Offloading time by multiple nodes is not considered. If there are less nodes used in the network, then the delay will occur [60,61]. e authors in [38] used the Skippy technique. e server is less used in the network that provides all resources to offload the task in the network, and Skippy helps serverless do this. A large amount of data is able to be sent to the network by using serverless. If any unauthorized network hacks the data, then all large amounts of data will have been lost in the network. e data must be protected by using a password, and some keys so a large amount of data will be safe.
In [39], the authors used the pervasive technique. Pervasive helps the compute provide all resources to the nodes to offload their task. If any unauthorized user hacks the data, then it will give the wrong information and data to the users.
In [40], the authors used load balancing (virtual machines) Apache JMeter (Tool). e majority of MCC do not consider cost factors. For multiple users, one virtual machine architecture is most suitable. e time to execute a task increases by 23 times while the resource utilization decreases by two-third. Execution time is more for projected architecture [51].
In [41], the authors discussed that today, the Internet is too common, while using Internet security is also needed to offload tasks from the Internet, download anything from the Internet, and so on. Also, information on the Internet must be secure so that the user can access it at any time and access it without delay.
AES is a part of the block symmetric cipher. AES provides the facility for MEC to offload the message in the network by using nodes. AES also provides the facility to encrypt and decrypt the data in the network without creating any delay in the network. AES has the ability to use different keys such as 128, 192, and 256 bits. Each bit has different features [42]. AES is required in every field where security is needed. AES provides the encryption and decryption of data from one field to the other field easily and without delay in the network [62].
In paper [44], cloud computing provides the security to the user to offload tasks in the network. AES provides the facility of encryption and decryption in the cloud computing network. AES provides the facility of security to the network to exchange information without any delay [43]. As today the Internet is too common, while using the Internet, security is also needed to offload tasks from the Internet, download anything from the Internet, and so on. Also, information on the Internet must be secure so that the user can access it at any time and access it without delay [41]. AES divides the task into two portions, that is, one is the offloadable and the other is the un-offloadable program so the offloadable task can be offloaded easily in the network without any delay. AES provides some security and divides tasks into the un-offloadable task so the task can easily offload in the network without delay [45]. AES provides the security to the users to offload tasks in the network without any interference from a virus or delay in the network. AES helps the user provide antivirus to the user so the virus will not attack and destroy the user data and offload in the network without delay [46]. Edge computing became top popular as it removes the delay from the network while offloading data using the Internet. Also, it makes all Internet applications secure for offloading data. While offloading the data using the Internet, the drastic event that can happen is an attack by an attacker. Malicious attackers do the collision in the information present in the network. e collision in the information makes the information lost; thus, the data are lost and destroyed because of malicious attacks. ese attacks happen due to the recovery of the secret key of AES [63]. All the internal collisions are detected by the AES, but the linear collision is not detected by the AES. S-box gives the output that talks about the collision happening internally [23]. e issue with the proposals and techniques discussed above is that during the offloading process of the data from the Internet, there can be malicious attackers who can intervene in the communication and perform different kinds of attacks. To cope with these issues, we devise a mechanism through MEC that can help mitigate such attacks. Also, the response time, resource utilization, and fair usage of mobile devices are increased.
Our Contributions to the Field
Design and implementation of a novel task placement framework that does the following.
(1) Reduces response time of processing tasks, (2) Un-usable resources become useful, (3) Demand of mobile device will increase, and (4) Usage of mobile resources as a replacement of cloud servers.
Problem Statement
When MEC requests to buy computer resources while executing a task, it faces a delay in request and response to and from MEC, so this delay increases time. Similarly, many mobile resources were being wasted by users despite having 4 to 8 GB RAM and 128 GB plus storage.
In this section, we briefly state the three main problems to highlight the problem scenario and drawbacks that can occur due to these problems.
Problem I: Utilization of Mobile Idle
Resources. MEC provides the facility of low delay rate, low cost, and high efficiency of offloading tasks in the network. So mobile resources can be used to make a local edge cloud for working in an efficient manner.
Problem II: Task Execution Delay.
When MEC requests to buy computer resources while executing a task, it faces a delay in request and response to and from MEC and computer resources. is paper is basically solving a problem as per a scenario in which a user wants to process a huge amount of data at that time and users have mobile devices with either them or with their friends, so users can make the local cloud without having remote servers. e following research questions are formed from the above problems: (1) Question 1: How to reduce the response time of processing by making an edge instead of waiting for a single device?
(2) Question 2: How to save wastage of resources on mobile, and how will they be utilized in a timely and effective manner?
Proposed Solution
e particle swarm optimization technique is used and modified according to MEC requirements to gain efficiency by finding the optimal nodes, which will be used in the MEC. To find the best node, we will check its previous record of connecting time delay and its distance from its master device. We will provide a list of nodes to the swarm algorithm, and it will compare the first node with other nodes and will place the node having less connecting time and a short distance from the master device. At the first index, the comparison will continue till we sequence the list in the best node in ascending order in the FCFS list position. So we will have the best device and the best mobile edge for task execution. To overcome the problem of time delay, it is better that the MEC server should remain connected with MEC clients so that the delay of connection would not appear when a task appears as MEC is already connected so it will start executing the task without having the connection delay.
To overcome the problem of resource wastage, the solution is to use mobile resources if it is available and ready to use. MEC uses multiple mobiles to compute multiple complex tasks, which are nearly impossible to compute on a single device.
is study aims to propose and implement a novel framework to cover challenges raised by application execution on resource-constrained devices. Two main tasks that the proposed solution is performing are task allocation and task execution. Breakdowns of these tasks are given below.
Task Allocation
(1) Secured resource discovery of mobile devices for connecting to edge servers (2) Secured resource allocation algorithms used for checking device capability (whether the device is capable of executing the task); optimization methods will be used (3) Secured resource allocation algorithms used for making an effective offloading communication that will make sure that offloading resource communication is secure (4) Transfer of data from a mobile device to the edge nodes
Task Execution
(1) Scheduling of tasks at the edge nodes by the offloaded device (2) Offloading of task by the offloaded device (sending task) (3) Transfer of results back to the source mobile device Mobile Information Systems (4) Edge server that will gather results and perform integration of it Figure 1 shows the ow diagram of the scheme.
5.4.
Algorithms. Algorithms 1 and 2 represent problems I and II, while for optimization, we present Algorithm 3.
Implementation
Steps. e following steps are performed while implementing the proposed solution: (1) We will make a connection between o oader and o oadies devices using a nearby API and distribute the task in form of bytes. ere are strategies such as P2P, P2Cluster, and others to create a connection among the devices. We will be using connection P2Star because it suits our scenario. P2Star will o oad tasks in the local cloud more quickly than other connections strategies. Algorithms are used in this connection for making the handling of the o oading tasks better. (2) After choosing a strategy, the o oader device will start discovering the o oadies, and o oadies will start advertising so that they are discoverable, and a connection can be established among the devices. (3) Now as o oadies are discoverable, o oader will start connecting with the o oadies one by one and accept their connection of o oadies to work as a slave for the master device and to compute the task provided by the o oader devices and send back the results. (4) After establishing the connection between the oloadies, these devices will send their speci cation information and available resource information so that the o oader can decide which devices are capable of serving the master device and which oloadies are not capable. (5) After discovering the ability of devices on the basis of RAM, CPU, battery, and available RAM, the oloader will lter the devices and ignore the rest of the devices [56]. (6) Now, the o oader will split the task into a number of available devices and send the task to available devices for the sake of processing. In our case, the task is image processing; however, it also depends upon the application requirements of the user [64][65][66]. (7) e o oadies will process the image processing task on their end using the OpenCV library using their own power and processing power.
(8) After processing, each o oadies will send back the result to the o oader device, and the o oader will use that result for its own use. (7) for each Device d do (8) Calculate the Compatibility check using FCompatibleDevice (), add the device to the Cd list (9) end for (10) for each Cd do (11) Rs � Output {task} (12) end for ALGORITHM 2: Algorithm for problem II. Connection Request (Cr), Task (T),
Simulation and Results
is section thoroughly describes the equations, simulation, and results of the study.
Equations.
e following generalized equations are formed: where Rs,t is reserved resources of server at t-th time location, Rc,t is reserved resources of the client at t-th time location, and Uc,t is unused resources of the clients at t-th time location.
where R s,t is reserved resources of server at t-th time location, Rc,t is reserved resources of the client at t-th time location, and UWc,t. is unused-wholesaled resources of clients at t-th time location.
where R s,t is reserved resources of server at k-th time location, R c,t is reserved resources of the client at k-th time location, and U B is unused-buyback resources of clients at the k time location Equations (1)- (3) show the generalized working of the proposed solution by seeing results that total resource utilization occurred in this pattern, while the following equation is showing the total time that is consumed in executing a task: where total consumption time for all tasks � T T, communication delay � CD, tasks list � Ts, and number of devices � DL.
Results without Optimization.
It can be seen in Table 3 that without optimization the time consumption is 4 seconds for a task. It is due to the lack of a particle swarm algorithm. Also, the CPU percentage is high with relatively high memory in use. Table 3 represents the results without optimization.
Results of Proposed Solution (after Optimization).
Previously, time consumption was 4 seconds for a task, and now after optimization, 1 second decreases because we have chosen the device, which is nearest by applying the particle swarm algorithm. Similarly, for three devices, the time was 2.5 previously, and now, it is 1.5 seconds. e memory usage also decreased. While CPU consumption previously in 1 device was 7%, after optimization, it is 2%, and for 3 devices, it is 3.5%, and after optimization, it is 2.5%. Table 4 presents the results of the proposed solution after optimization.
Simulation Results.
In Figures 2-4, simulation results are clearly showing that the results without optimization are improved by applying optimization techniques of particle swarm. Moreover, adding a greater number of devices improved the results significantly.
Conclusion and Future Work
Hence, it is concluded that in this paper, selection of optimized resources and then their allocation in mobile edge computing decreased time, energy, and memory while executing tasks. ese tasks if executed on a single device can increase these resources in a linear order. Complex and tedious tasks can easily be executed by making a mobile edge, and resources can be utilized in a better way. Edge can reduce consumption delay (CD) by adding N number of devices, which will improve the utilization of resources and will ensure the quality of service. Mobile edge shares resources to other edges by wholesale (sending resources) and buyback (receiving resources) scheme. In the future, wholesale and buyback resources from edge-to-edge servers will be used for profit maximization. Experimentation research techniques will be used to optimize resource allocation between two MECs, and algorithms will be designed for optimal memory, CPU, time, and power consumption between two MECs. Besides this, mobile edge computing has still faced a lot of challenges, and these are mobility management, heterogeneity, price, scalability, and security. We will also work on these mentioned sides in the future.
Data Availability
e data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
e authors declare that there are no potential conflicts of interest. | 8,325.8 | 2022-08-21T00:00:00.000 | [
"Computer Science"
] |
Construction of a Dynamic Diagnostic Approach for a Fuzzy-Interval Petri Network
: Fault diagnosis plays a crucial role in enhancing system dependability and minimizing potential catastrophic consequences for both equipment and human safety. This article presents a research study focused on developing a diagnosis and control approach for discrete event systems using the Petri net Fuzzy Interval (IFPN). The Petri net is utilized as a modeling tool for the target system. The paper describes a case study conducted on an ingredient mixing system, where the objective is to maintain the concentration of ingredients within a valid range. A diagnostic framework is constructed and successfully applied to identify faults in the system. The proposed approach is further validated through simulation tests conducted on a mixing system.
Introduction
In the literature, most applications of fuzzy theory to automatic control systems are basically directed toward the development of mathematical models or fuzzy logic controllers for linear and nonlinear systems with given or unknown system models. However, few applications on real systems are presented [1]. This paper presents an approach for the modeling and design of an automatic control real system in a fuzzy environment, where we consider that the input of the system is variable and the finished product depends on the variability in the value of the raw material at the input of the system.
Our goal is the construction of a robust control to ensure product quality parameters at the output. In addition, the constraint to be guaranteed does not depend on time. In terms of modeling, the introduction of a new tool is useful. In fact, the control of a process under interval constraints is for any quality [2]. So, we flatten out a model that integrates the uncertain aspect in the form of fuzzy intervals and use the built model for defect diagnosis. The simulation results of our approach prove its validity in the process under consideration [3].
This work is devoted to the construction of a diagnostic system capable of detecting defects in an uncertain system. For this, we use our developed model, which describes the mixing process.
First, we present the interval theory, specifically the fuzzy interval and Fuzzy Petri Net Model [4]. Second, we propose a new modeling approach for the detection of defects in uncertain systems. This modeling approach combines two tools, namely the MOP and the interval approach. A statistical approach is presented to construct the ranges of validity of the fuzzy intervals assigned to the places of the constructed MOP model [1].
Our approach is validated by a simulation program to prove its validity and robustness.
Fuzzy Subset Concept (SEF)
The fuzzy subset concept was introduced by L.A. It is based on a degree of belonging, which generalizes characteristic functions and thus allows for modeling the human representation of knowledge and improving the performance of the decision systems using modeling [5]. This is because, in classical theory, this notion of belonging is rigid; for an X set, an element belongs or not to a subset of X. On the other hand, in the fuzzy approach, if one considers a reference set X [6], a fuzzy subset A of this reference is characterized by a function of belonging A; fuzzy subset A of a set X is a function A: X → L, where L is the interval [0, 1]. This function is also called a membership function. A membership function is a generalization of an indicator function (also called a characteristic function) of a subset defined for L = {0, 1}. More generally, one can use any complete lattice L in a definition of a fuzzy subset A ( Figure 1). Our approach is validated by a simulation program to prove its validity and robustness.
Fuzzy Approach
The introduction of fuzzy set and interval theory is increasingly being applied in the field of modeling and analysis of uncertain systems. Unlike classical set theory, where intervals describe only all possible values and from a membership point of view, it is all or nothing: an element belongs or does not belong to a set. Fuzzy Subset Theory (FSS) is more suited to represent all qualitative knowledge, especially when it comes to manipulating data and exploiting vague or imprecise data. This theory gave rise to the notion of fuzzy intervals that take into account, in their definition, the degree of uncertainty for possible values [5]. For an element of information, the theory of fuzzy subsets gives the possibility of belonging to a set, at a level of membership ranging from 0 to 1.
Fuzzy Subset Concept (SEF)
The fuzzy subset concept was introduced by L.A. It is based on a degree of belonging, which generalizes characteristic functions and thus allows for modeling the human representation of knowledge and improving the performance of the decision systems using modeling [5]. This is because, in classical theory, this notion of belonging is rigid; for an X set, an element belongs or not to a subset of X. On the other hand, in the fuzzy approach, if one considers a reference set X [6], a fuzzy subset A of this reference is characterized by a function of belonging A; fuzzy subset A of a set X is a function A: X → L, where L is the interval [0, 1]. This function is also called a membership function. A membership function is a generalization of an indicator function (also called a characteristic function) of a subset defined for L = {0, 1}. More generally, one can use any complete lattice L in a definition of a fuzzy subset A ( Figure 1). We can define the fuzzy subset of numbers approximately equal to 5 by = 5 = 0,0 , 1,0 , 2,0 , 3,0 , 4,0.5 , 5,1 , 6,0.5 , 7,0 , 8,0 , 9,0 , 10,0 An instant subset may be represented as a triangular, trapezoidal, or parabolic function.
The purpose of the concept of SEF is to allow gradations in the membership of an element X to class A, that is, to allow an element to belong strongly to that class. This notion of belonging is generally represented in the form of a function. Figure 2 shows three concepts used in the theory of fuzzy subsets [7],
•
The support of the fuzzy subset is the set of elements x of X such as µ A(x) ≥ 0; • The core of the fuzzy subset is the set of elements x of X such as µ A(x) = 1; • The support of the fuzzy subset is the set of elements x of X such as µ A(x) ≥ 0; The purpose of the concept of SEF is to allow gradations in the membership of an element X to class A, that is, to allow an element to belong strongly to that class. This notion of belonging is generally represented in the form of a function. Figure 2 shows three concepts used in the theory of fuzzy subsets [7], • The support of the fuzzy subset is the set of elements x of X such as ( ) The core of the fuzzy subset is the set of elements x of X such as ( ) 1 The support of the fuzzy subset is the set of elements x of X such as ( ) The height h indicates the degree of belonging of element X to the set.
An element X is either in A or excluded; it can never partially belong to the set.
Fuzzy Intervals
A fuzzy interval represents an interval whose edges are poorly defined of the type 'between a and b' [7]. A fuzzy number is used to represent a fuzzy assessment of the "about" type. A singleton allows entry into a system that manipulates only symbolic quantities ( Figure 3). The interest of our choice of fuzzy interval lies in its capacity for the correct representation of imprecise quantities. This is the case with our study on modeling for the diagnosis of uncertain systems. In this context, fuzzy numbers and singletons can be used for a unified representation of symptoms [6].
Fuzzy Intervals
A fuzzy interval represents an interval whose edges are poorly defined of the type 'between a and b' [7]. A fuzzy number is used to represent a fuzzy assessment of the "about" type. A singleton allows entry into a system that manipulates only symbolic quantities ( Figure 3). The purpose of the concept of SEF is to allow gradations in the membership of an element X to class A, that is, to allow an element to belong strongly to that class. This notion of belonging is generally represented in the form of a function. Figure 2 shows three concepts used in the theory of fuzzy subsets [7], • The support of the fuzzy subset is the set of elements x of X such as ( ) The core of the fuzzy subset is the set of elements x of X such as ( ) 1 The support of the fuzzy subset is the set of elements x of X such as ( ) The height h indicates the degree of belonging of element X to the set.
An element X is either in A or excluded; it can never partially belong to the set.
Fuzzy Intervals
A fuzzy interval represents an interval whose edges are poorly defined of the type 'between a and b' [7]. A fuzzy number is used to represent a fuzzy assessment of the "about" type. A singleton allows entry into a system that manipulates only symbolic quantities ( Figure 3). The interest of our choice of fuzzy interval lies in its capacity for the correct representation of imprecise quantities. This is the case with our study on modeling for the diagnosis of uncertain systems. In this context, fuzzy numbers and singletons can be used for a unified representation of symptoms [6]. The interest of our choice of fuzzy interval lies in its capacity for the correct representation of imprecise quantities. This is the case with our study on modeling for the diagnosis of uncertain systems. In this context, fuzzy numbers and singletons can be used for a unified representation of symptoms [6].
Fuzzy Interval Operations
In the following, we are interested in the fuzzy operations applied to the types of fuzzy quantities of particular shapes leading to the simplification of the calculations: the membership functions of the trapezoidal and the triangular shape [8]. Let us consider Figure 2 and the fuzzy interval (a − α, b + β) ; the linguistic label characterizes the support of this function where (a − α) and (b + β) are, respectively, the lower and upper limits of the fuzzy interval. These can be represented from a quadruplet (a, α, b, β) , with a < b, [a, b] as the modal value of the interval [9]. The expression of the corresponding trapezoidal membership function in Figure 2 is written as follows: α β , with a < b, [a, b] as the modal value of the interval [9].
The expression of the corresponding trapezoidal membership function in Figure 2 is written as follows: when a = b, there will be a triangular membership function ( Figure 4). The arithmetic operations on the intervals can be generalized, and for two given Example: Let us consider the intervals of the functions of Figure 5. If we apply the rule of addition to functions h1 and h2, we obtain function h3 [10].
The arithmetic operations on the intervals can be generalized, and for two given fuzzy intervals A and B defined
Example: Let us consider the intervals of the functions of Figure 5. If we apply the rule of addition to functions h1 and h2, we obtain function h3 [10]. In this study, we combine two approaches: In the first, we establish the interval bounds using mathematical equations, and in the second, we detect faults using rules. To accomplish this, we must first create a kind of arithmetic system that permits operations on qualitative numbers. Addition and subtraction are the only necessary fundamental In this study, we combine two approaches: In the first, we establish the interval bounds using mathematical equations, and in the second, we detect faults using rules. To accomplish this, we must first create a kind of arithmetic system that permits operations on qualitative numbers. Addition and subtraction are the only necessary fundamental operations [11].
Qualitative Rules and Fuzzy Relationships
The representation of qualitative knowledge relevant to a given domain can be achieved using the IF-THEN fuzzy logic concept, which is frequently used to describe the following form of logical reliance between variables: If V1 is K1 and Vn is Kn, then U is M, with K1 . . . , Kn, and M acting as predicates to describe V1 . . . , Vn, and U [8].
In our work, we are interested in diagnosing defects. So, within this framework, we are going to have two fuzzy propositions: one about symptoms and the other about defects.
In the field of diagnosis, the rules are written as follows: where n: the number of rules; P(x) and P(y): Fuzzy proposals on observable symptoms and defects, respectively.
The integration of qualitative rules and fuzzy relationships is performed in MOPs [8].
In our work, we introduce a tool for modeling uncertain systems. This tool integrates fuzzy logic into a Petri model. This modeling method provides a framework for exploiting fuzzy knowledge that qualitatively yields more refined results than conventional logic. The tool thus constructed will be adapted in the modeling of detection/decision functions by a place approach at fuzzy intervals [12].
Consider the following linguistic description: It is clear how fuzzy PN and the linguistic phrase "IF THEN rules" are related. The relationship between the Petri network and the "IF-THEN" rules is depicted in Figure 6. It is clear how fuzzy PN and the linguistic phrase "IF THEN rules" are related. The relationship between the Petri network and the "IF-THEN" rules is depicted in Figure 6.
Fuzzy Rule, RdP, and Fault Tree Mapping
In order to model the detection function, this RdP modeling tool is proposed, which integrates the temporal aspect-the instant appearance of several defects in the monitored system. This model is targeted for fuzzy logic rule modeling that results from the fault tree logic expression, which is identified at the monitored system level.
We are interested in failure trees constructed only of "AND" and "OR" logic gates and corresponding logic operations [13], The fault tree uses the most unexpected fault event as its scan target and goes through each potential contributing element one by one. After choosing the appropriate symbols to represent the events, we connect them to the top-level events, intermediate events, and fundamental events to create the graph (Figure 7).
Fuzzy Rule, RdP, and Fault Tree Mapping
In order to model the detection function, this RdP modeling tool is proposed, which integrates the temporal aspect-the instant appearance of several defects in the monitored system. This model is targeted for fuzzy logic rule modeling that results from the fault tree logic expression, which is identified at the monitored system level.
We are interested in failure trees constructed only of "AND" and "OR" logic gates and corresponding logic operations [13], The fault tree uses the most unexpected fault event as its scan target and goes through each potential contributing element one by one. After choosing the appropriate symbols to represent the events, we connect them to the top-level events, intermediate events, and fundamental events to create the graph (Figure 7). In order to model the detection function, this RdP modeling tool is proposed, which integrates the temporal aspect-the instant appearance of several defects in the monitored system. This model is targeted for fuzzy logic rule modeling that results from the fault tree logic expression, which is identified at the monitored system level.
We are interested in failure trees constructed only of "AND" and "OR" logic gates and corresponding logic operations [13], The fault tree uses the most unexpected fault event as its scan target and goes through each potential contributing element one by one. After choosing the appropriate symbols to represent the events, we connect them to the top-level events, intermediate events, and fundamental events to create the graph (Figure 7).
Description of the Process to Be Modeled
The mixing process in Figure 8 will serve as an illustration of our strategy for modeling uncertain systems for fault identification [13]. In order to create Ajax water for glass panes, two separate ingredients are combined with water in this method. Its concentration affects the final product's quality. When deciding whether or not to approve the product, this index is taken into account [14].
Description of the Process to Be Modeled
The mixing process in Figure 8 will serve as an illustration of our strategy for modeling uncertain systems for fault identification [13]. In order to create Ajax water for glass panes, two separate ingredients are combined with water in this method. Its concentration affects the final product's quality. When deciding whether or not to approve the product, this index is taken into account [14]. Through the two valves V1 and V2, two ingredients Q1 and Q2, respectively, in tanks R1 and R2 are discharged into the mixing tank. To create Ajax water for windows, a specified amount of water will be added to this mixture and swirled for a predetermined amount of time using a stirrer. The appropriate valves are used to change the amount of each ingredient as well as the amount of water. The entire amount must not be more than 95% of the tank's overall capacity. The batch will automatically be weighed following the shaking process. The product is then packaged in bottles for marketing. The value of the concentration, which we wish to keep constant, affects the product's quality.
Model Construction
The concentration of the resultant mixture is largely related to the stresses that are examined. The model that will be created must be able to explain how this concentration's value changes over time. We believe that this variation is a result of changes in the relative concentrations of the two products to be combined as well as changes in the amount of Through the two valves V1 and V2, two ingredients Q1 and Q2, respectively, in tanks R1 and R2 are discharged into the mixing tank. To create Ajax water for windows, a specified amount of water will be added to this mixture and swirled for a predetermined amount of time using a stirrer. The appropriate valves are used to change the amount of each ingredient as well as the amount of water. The entire amount must not be more than 95% of the tank's overall capacity. The batch will automatically be weighed following the shaking process. The product is then packaged in bottles for marketing. The value of the concentration, which we wish to keep constant, affects the product's quality.
Model Construction
The concentration of the resultant mixture is largely related to the stresses that are examined. The model that will be created must be able to explain how this concentration's Appl. Sci. 2023, 13, 7603 7 of 17 value changes over time. We believe that this variation is a result of changes in the relative concentrations of the two products to be combined as well as changes in the amount of water used in the mixture [15].
The concentration value must be kept within a predetermined range by our management. The latter should be viewed as the fuzzy interval's center. The intermittent use of the Petri network theorems and properties [16] is anticipated to be the same as that of our Blurred RoP model. A first-order differential around a reference point is utilized to quantify the fluctuation in concentration at the different stations of the process, allowing us to linearize the model. Because one is still very close to an equilibrium point, this linearization property is employed to move within the space of valid solutions corresponding to each interval's limit.
Fast systems make it impossible to control every manufacturing unit. After sampling, the worth of an average is assessed.
It is a batch and recurring operation in our situation. Managing the variations in product quality upon entry, which can vary depending on each ingredient, is carried out along with controlling the variations in concentration on each lot.
The first ingredient's concentration C1, the second ingredient's concentration C2, and the amount of water added, together represented as C, determine the mixture's concentration. Adjustment operations carried out at the supply valves to control the flow rates can be used to ensure the correction of variations in the mixture's concentration.
An adjustment can be made to control the new value(s) of C1 and C2 while simultaneously affecting the volume of water to be added.
Fuzzy Interval Petri Grating Modeling
In order to solve the issue of concentration variation and thus the range constraints on this concentration [17], we must therefore take into account the other three parameters. The following equation is considered to give the relationship between the C concentration and the ingredient masses (for each lot produced): where C is the concentration of the final product in kg/mm 3 ; m 1 is the mass of ingredient 1 in kg; m 2 is the mass of ingredient 2 in kg; Ve volume of water in mm 3 . m 1 = C1 * V1 and M2 = C2 * V2.
where Ci and Vi are the product Qi (i = 1, 2) concentrations (in kg/mm 3 ) and volumes (in mm 3 ), respectively. This connection can be expressed as follows: The following are the parameters' interval restrictions: Appl. Sci. 2023, 13, 7603 8 of 17 The tolerances for each parameter must be determined before building the model. To do this, we suggest a statistical approach based on precise values obtained from statistical production data. The procedure for using this method of computation and the presumptions taken into account are described in the next subsection.
Linear Approximation
We suppose that there is very little variance in the parameters around the mean. The relationship between the order near the reference establishment and the linearization approach can therefore be approximated as follows: Let C0, C10, h10, C20, h20, and Ve0 be the desired values for parameters C, C1, h1, C2, and Ve, respectively. After linearizing the equation around the operating point [17], we can write which can be in the form of with where b 1 = δC b is the parameter where standard deviation is taken into account. The first approximation can be stated as follows if we assume that the parameter variation conforms to the normal distribution: The application of the relationship gives us After simplifying this relationship (15), the following is obtained: Appl. Sci. 2023, 13, 7603 9 of 17
Application: Experimental Results
The parameters affecting the mixture's concentration value have been defined in the section before: -Ingredient Q1's concentration in factor C1; -Ingredient Q2's concentration in factor C2; -And factor Ve is the amount of water.
In our study, we consider that the parameter values are uncertain: -The C1 and C2 concentrations rely on a number of factors and are determined to be inaccurate and unreliable [18]; - The quantity or weight of products Q1 and Q2 can be changed to alter levels h1 and h2; -Depending on the values of C1 and C2, the amount of water to be added to Ve will vary [19];.
We shall use the production's actual statistical data to address this issue. The tolerances for these characteristics are then determined.
The standard deviation of the concentrates C, C1, and C2 and the amount of water Ve are provided to us by the process's statistical data. These are, in order [13], We assume that the target values for good concentration are In these circumstances, the above equation may be used to calculate the standard deviation of the concentration: ITE = 0.26 is the tolerance for water volume.
The following Table 1 lists the parameter tolerances [20]. Target final product C: concentrate of product. It is the target value the system aims to achieve. The stocks at the system's inputs are represented in this model by the location's P01 and P02. The quantities of product C1 in reservoir 1, the level in reservoir 1, the quantity of product C2 in reservoir 2, and the level in tank 2 are, respectively, places PC1, Ph1, PC2, and Ph2. The locations Pe and Pc stand for, respectively, the volume of water and the final mixing operation's concentration [21]. Figure 9 represents the network ICPN model of the mixing system ( Figure 8). In this model, places P01 and P02 represent stocks to the entries of system. Places Pc1, Ph1, Pc2, and Ph2 are, respectively, the quantity of product C1 in tank 1, the level in tank 1, the quantity of product C2 in tank 2, and the level in tank 2. The places Pe and hc represent, respectively, the quantity of water and the concentration of the finished mixing operation. Place Pc is a place of control that prevents the simultaneous crossing of transitions T1 and T4. Transitions T1 and T4 cannot occur at the same time, due to the control function of location Cp.
Finally, the process model is constructed, its structural properties may be examined, and it is demonstrated that the majority of P-time PN structural features can be applied to our model.
The method is used to determine interval limits using production data information [22].
Modeling the Process with Simulink
To simplify the problem, the behavior of the process is assimilated to a simple tank Transitions T1 and T4 cannot occur at the same time, due to the control function of location Cp.
Finally, the process model is constructed, its structural properties may be examined, and it is demonstrated that the majority of P-time PN structural features can be applied to our model.
The method is used to determine interval limits using production data information [22].
Modeling the Process with Simulink
To simplify the problem, the behavior of the process is assimilated to a simple tank ( Figure 10). We consider three levels: N1: Low level (min limit); N0: Target level (optimal height); N2: High level (max limit).
The inputs and outputs considered in our process are summarized in Table 2.
N1
Low-level sensor V3 Water filling valve in the mixing tank N0 Medium-level sensor VS Mixed product flow valve N2 High-level sensor Matlab's Simulink module allows for simulating continuous, discrete and nonlinear systems in relation to the working memory of Matlab (workspace).
For the modeling of the mixing system, three blocks are considered: • A control/command block for simulating an operating sequence.
•
A process block to simulate the change in water level. • A sensor block that provides all information on the level of mixture in the tank.
Level Modeling
The water level in the reservoir varies with inlet and outlet flows. The height in the tank varies according to the concentration of the product [23]. The value of the latter changes from batch to batch. We assume that these are small variations around an equilibrium point.
For the simulation, we take the following values: We consider three levels: N1: Low level (min limit); N0: Target level (optimal height); N2: High level (max limit).
The inputs and outputs considered in our process are summarized in Table 2. Matlab's Simulink module allows for simulating continuous, discrete and nonlinear systems in relation to the working memory of Matlab (workspace).
For the modeling of the mixing system, three blocks are considered: • A control/command block for simulating an operating sequence. • A process block to simulate the change in water level. • A sensor block that provides all information on the level of mixture in the tank.
Level Modeling
The water level in the reservoir varies with inlet and outlet flows. The height in the tank varies according to the concentration of the product [23]. The value of the latter changes from batch to batch. We assume that these are small variations around an equilibrium point.
For the simulation, we take the following values:
Modeling Level Sensors
The model of the levels is described in Figure 12. The level sensors N1, N0, and N2 are placed with the values given in the preceding paragraph [24]. Ni = 1, if the level is reached; Ni = 0 otherwise.
Building the Global Model
For the construction of the overall model of the process, we consider the operating cycle described above. Figure 13 represents the overall model of the process under consideration.
Modeling Level Sensors
The model of the levels is described in Figure 12. The level sensors N1, N0, and N2 are placed with the values given in the preceding paragraph [24].
Appl. Sci. 2023, 13, x FOR PEER REVIEW 13 of 19 Figure 11. Block diagram of the tank level.
Modeling Level Sensors
The model of the levels is described in Figure 12. The level sensors N1, N0, and N2 are placed with the values given in the preceding paragraph [24]. Ni = 1, if the level is reached; Ni = 0 otherwise.
Building the Global Model
For the construction of the overall model of the process, we consider the operating cycle described above. Figure 13 represents the overall model of the process under consideration.
Building the Global Model
For the construction of the overall model of the process, we consider the operating cycle described above. Figure 13 represents the overall model of the process under consideration.
Procedures for Constructing the Dynamic Model for Diagnosis
After constructing the overall model of the process (of Figure 8), which represents a part of the constructed fuzzy interval RdP model, in this section [25], we turn to the
Procedures for Constructing the Dynamic Model for Diagnosis
After constructing the overall model of the process (of Figure 8), which represents a part of the constructed fuzzy interval RdP model, in this section [25], we turn to the construction of the dynamic model for diagnosis [25]. This diagnosis must ensure the real-time detection and isolation of faults. To build our diagnosis, we follow the steps described in Figure 10.
•
Step 1: Inventory probable failures. For this purpose, we use the FMEA method, and the results are summarized in Table 1.
•
Step 2: Build the dynamic model of the system without faults. This allows us to perform the temporal identification of the process.
•
Step 3: Build the dynamic model of the system in the presence of faults. This is a detection step.
This step is based on the time data summarized in Table 3.
•
Step 4: Build the insulation block. In this step, the identification is based on the fuzzy rules and the defects considered. Figure 14 shows the modeling of the injected defects of valve V3. In this model, we consider the defects following the request to open valve V3 (V3s = 0) and the request to close valve V3 (V3s = 1). Figures 15 and 16 illustrate, respectively, the injection of the defects of the opening request and of the closing of valve V3. We find that the valve remains closed despite the request for opening [26] and remains open despite the request for closing ( Figure 16). Figures 15 and 16 illustrate, respectively, the injection of the defects of the opening request and of the closing of valve V3. We find that the valve remains closed despite the request for opening [26] and remains open despite the request for closing ( Figure 16). Figures 15 and 16 illustrate, respectively, the injection of the defects of the opening request and of the closing of valve V3. We find that the valve remains closed despite the request for opening [26] and remains open despite the request for closing ( Figure 16).
Diagnostic Performance Analysis
In order to analyze the performance of the constructed diagnoser, we inject a defect into the system. Figures 17 and 18 show, respectively, the normal operation of the method and the faulty operation. Figure 17 shows that, despite the request to open valve V3 (red signal), it remains closed (blue signal). This time represents the occurrence of a failure. Then, sensor N1 (purple signal) remains in state 0 at 3 s after the request to open the valve V3. This instant represents the instant of detection of the defect. Finally, sensor N0 (green signal) remains in the 0 state. After activation of the detection state, this instant corresponds to the location of the defect [27].
Diagnostic Performance Analysis
In order to analyze the performance of the constructed diagnoser, we inject a defect into the system. Figures 17 and 18 show, respectively, the normal operation of the method and the faulty operation. Figure 17 shows that, despite the request to open valve V3 (red signal), it remains closed (blue signal). This time represents the occurrence of a failure. Then, sensor N1 (purple signal) remains in state 0 at 3 s after the request to open the valve V3. This instant represents the instant of detection of the defect. Finally, sensor N0 (green signal) remains in the 0 state. After activation of the detection state, this instant corresponds to the location of the defect [27]. Figures 17 and 18 show, respectively, the normal operation of the method and the faulty operation. Figure 17 shows that, despite the request to open valve V3 (red signal), it remains closed (blue signal). This time represents the occurrence of a failure. Then, sensor N1 (purple signal) remains in state 0 at 3 s after the request to open the valve V3. This instant represents the instant of detection of the defect. Finally, sensor N0 (green signal) remains in the 0 state. After activation of the detection state, this instant corresponds to the location of the defect [27]. This analysis made by the diagnosis enables us to find the defect, and valve V3 remains closed at its opening request (V3_SC) (Figure 18).
Conclusions
In conclusion, the use of Petri's networks in modeling manufacturing workshops has proven to be highly effective in representing the characteristics and interactions between different components. These networks are particularly useful in describing and solving complex problems, especially those involving time-constrained processes where adherence to specified constraints is crucial for product quality and conformity.
In this study, we presented an approach for automatic control specification of an industrial process, specifically focusing on maintaining a constant product concentration in the presence of non-deterministic concentrates C1 and C2. Our proposed control loop, based on an Interval Fuzzy Constraint Petri Net model, successfully achieved this objective.
The methodology employed in this research involved designing experiments to determine the validity intervals of critical parameters. Although a wide range of manual control settings were generated, not all of them were considered, due to their variability. However, the Interval Fuzzy Constraint Petri Net model, incorporating completely defined intervals, provided a practical perspective by integrating experiential data into This analysis made by the diagnosis enables us to find the defect, and valve V3 remains closed at its opening request (V3_SC) (Figure 18).
Conclusions
In conclusion, the use of Petri's networks in modeling manufacturing workshops has proven to be highly effective in representing the characteristics and interactions between different components. These networks are particularly useful in describing and solving complex problems, especially those involving time-constrained processes where adherence to specified constraints is crucial for product quality and conformity.
In this study, we presented an approach for automatic control specification of an industrial process, specifically focusing on maintaining a constant product concentration in the presence of non-deterministic concentrates C1 and C2. Our proposed control loop, based on an Interval Fuzzy Constraint Petri Net model, successfully achieved this objective.
The methodology employed in this research involved designing experiments to determine the validity intervals of critical parameters. Although a wide range of manual control settings were generated, not all of them were considered, due to their variability. However, the Interval Fuzzy Constraint Petri Net model, incorporating completely defined intervals, provided a practical perspective by integrating experiential data into automation specifications. This approach combined Fuzzy Sets Theory, Statistics, and Petri Nets, allowing for a comprehensive analysis of the system.
The validity of our approach was demonstrated through its application to an industrial case study. While we chose the simplest case for validation purposes, it is important to note that our approach can be extended to more complex scenarios. For instance, it can be utilized in fault diagnosis involving common causes, utilizing the model's properties for supervision and diagnosis specifications.
Numerical characteristics of the obtained results, such as the accuracy of maintaining the desired product concentration under non-deterministic conditions, should be provided to further quantify the effectiveness and efficiency of the proposed approach. | 8,582 | 2023-06-27T00:00:00.000 | [
"Computer Science"
] |
Airglow Measurements of Gravity Wave Propagation and Damping over Kolhapur ( 16 . 5 ∘ N , 74 . 2 ∘ E )
Simultaneous mesospheric OH and O (S) night airglow intensity measurements from Kolhapur (16.8N, 74.2E) reveal unambiguous gravity wave signatures with periods varying from 01 hr to 9 hr with upward propagation. The amplitudes growth of these waves is found to vary from 0.4 to 2.2 while propagating from theOH layer (∼87 km) to theO (S) layer (∼97 km).We find that vertical wavelength of the observed waves increases with the wave period. The damping factors calculated for the observed waves show large variations and that most of these waves were damped while traveling from the OH emission layer to the O (S) emission layer. The damping factors for the waves show a positive correlation at vertical wavelengths shorter than 40 km, while a negative correlation at higher vertical wavelengths. We note that the damping factors have stronger positive correlation with meridional wind shears compared to the zonal wind shears.
Introduction
Upward propagation of gravity waves and tides is an important aspect in studying dynamical coupling between different regions in earth's atmosphere (e.g., [1]).Though the negative density gradient and conservation of energy suggest that the amplitudes of these waves grow exponentially with altitudes, dissipation processes (such as saturation and interaction of these waves with background wind and other waves) limit the amplitude growth of these waves (e.g., [2]).Information on these gravity waves and tides in upper mesosphere is considered important because of their potential association with ionospheric phenomena [3][4][5][6][7][8][9].Passive airglow monitoring is a simple and cost effective method which provides required temporal resolution to study the short period gravity waves with periodicity.In particular, OH (peak emission altitude ∼87 km), O 2 (peak emission altitude ∼94 km), and O ( 1 S) (peak emission altitude ∼97 km) emissions are often utilized to measure and characterize the upper mesospheric gravity waves (e.g., [10][11][12]).Upward propagating gravity waves with vertical wavelengths larger than the airglow layer thickness (typical full width at half maxima, 10 km) can be observed at multiple airglow emissions almost simultaneously.Such data can be used to estimate the amplitude growth and the propagation characteristics of gravity waves [13][14][15].Taori et al. [16] utilized more than two years of OH and O 2 temperature data from Maui (20.8 ∘ N, 156.2 ∘ W) to study the amplitude growth for long as well as short period waves and found strong dissipation during summer time.Recently, Liu and Swenson [17] and Vargas et al. [18] provided a numerical model to study gravity wave induced oscillations in the airglow emission intensity and temperatures where they suggested the wave amplitudes have the following relation: (1) International Journal of Geophysics In the above equation, 0 is wave amplitude at OH emission, is the amplitude at O ( 1 S) emission, is the height difference between OH and O ( 1 S) emission layers, is the damping factor, and is scale height.The quantity "" indicates whether the observed waves were freely propagated, saturated, or damped.In a case when = 0, (1) yields = 0 /2 , which suggests exponential growth of wave amplitudes, that is, free propagation of waves in an ideal atmosphere without any dissipation.Similarly, < 1 suggests waves grow lesser than the case when = 0, while > 1 suggests strong damping.In references [18,19] investigated the airglow data obtained over Rikubetsu (43.5 ∘ N, 143.8 ∘ E) and reported the damping factor for the waves observed during March 2004 to August 2005.They found that most of the waves observed at OH and O 2 emissions simultaneously were dominated by the damping.
As far as the Indian sector is concerned, reports of multiple airglow emission monitoring at mesospheric altitudes to study vertical propagation are limited [8,20,21].In the present investigation, we use the mesospheric OH (peak emission altitude ∼87 km) and O ( 1 S) (peak emission altitude ∼97 km) airglow emission intensity data obtained during February-March 2010 from Kolhapur (16.8 ∘ N, 74.2 ∘ E) to study propagation characteristics and amplitudes of gravity waves.We report new data on damping factors of various dominant waves and their possible association with mesospheric winds.
Instrumentation and Data Description
We use collocated airglow and wind measurements from Kolhapur, the description of which is as follows.
Mesospheric Airglow Data. The mesospheric OH and O
( 1 S) emission monitoring is done with the help of a photomultiplier tube (EMI-9658B) based photometer having a full field of view of 10 ∘ .The temperature stabilized interference filters are mounted on a computer controlled filter-wheel with integration time at each filter ∼10 s.The interference filters mounted on filter-wheel have full width and half maxima of ∼ 0.8 nm and are maintained at 25 ∘ C. Details of the instrument and method of temperature retrieval are discussed elsewhere [22].The errors arising due to the photomultiplier electronics (dark current and readout noise) and filter movement are about 0.2% at 25 ∘ C. The present data are obtained for zenith viewing during February and March 2010 when clear, moonless night conditions allowed more than 6 hours of observations consecutively for 14 nights.Though photometer is capable of measuring the temperatures, in the present study, we utilize only intensity data collected at OH and O ( 1 S) emissions because the wave induced perturbations were larger in intensity data (e.g., [23]).Note that the quantities measured with any airglow photometer are the integrated emission rates, which are termed as "intensity." In the present case we measure intensity in relative units as the photometer has not been calibrated.
Mesospheric Wind Data.
The mesospheric winds were obtained from the medium frequency (MF) radar operating at 2 MHz.The radar makes use of spaced antenna technique and samples the horizontal winds in the 78 to 98 km altitude region using the full correlation analysis [24].For a suitable comparison with the night time airglow data, in the present study, we utilize the averaged wind profiles obtained during 1800-2800 (i.e., 0400) h IST to understand the mean nighttime mesospheric wind variability and their suitable association with observed nocturnal mesospheric wave characteristics.
Observed Wave Characteristics.
It is important to state here that airglow emission altitudes show long term variability (e.g., [25]).However, as the emission altitude variation is <2 km over low and equatorial latitudes, in this paper, we have assumed that peak emission altitude does not vary significantly within a night.The intensity variability in a given night results from the superposition of various wave components prevailing on that night, which encompasses long-period planetary waves, tidal waves, and highly varying short period gravity wave.As the nightly data utilized in this study are confined to <12 hour duration, waves with periodicity longer than 12 hr may create only a slow moving trend in the data.In this regard, to identify the dominant short period waves, we remove the nightly average values (arithmetic mean of nocturnal data on a given night) from the data and obtain the deviations from the nightly average.Further, for a suitable comparison of gravity waves and their amplitudes on all the nights, we normalize the mean deviations to their nightly average values to get the percentage intensity variation.We use these percentage intensity variations to assess the wave characteristics.Note that there may be a contribution from tidal oscillations in the data which may cause error in the estimation of wave characteristics.However, we believe that simple best-fit cosine model is suitable to obtain the most probable solution (e.g., [15]).In doing so, we restrict the investigations to only two most dominant wave measurements on a given night.
Figure 1 exhibits nocturnal data obtained on the night of February 9, 2010 to illustrate (a) the complicated nocturnal variability in the presence of multiple waves in the data and (b) our best-fit method of approximation for dominant wave identification in the nocturnal data.Figures 1(a) and 1(c) show the normalized mean deviations (in percentage variability) in OH (Figure 1(a)) and O ( 1 S) emission intensity data.The solid red lines in each plot show results of the bestfit cosine model.We note that the mean intensity deviation data in OH emission are dominated by the 8.4 ± 0.5 hr wave with amplitude ∼4% (Figure 1(a)).It is noteworthy that the time length of nighttime airglow monitoring is limited to 9 hours and as per the Nyquist criteria, it is difficult to estimate the periods of the same or larger oscillation.To avoid the problems associated with this, we perform the bestfit analysis for a wide range of waves with periods varying from 6 hr to 12 hr and select the wave parameters for which the 2 values are close to 1, suggesting the best possible explanation of the variability.As an example, the wave-fitting corresponding to periods 7 hr and 9 hr is also shown in Figure 1(a) together with the chosen one for 8.4 hr.It is evident that other wave-fits do not represent the variability and, therefore, the presence of 8.4 hr wave was finalized.We fitted same wave period obtained from OH data to the O ( 1 S) mean intensity deviations to obtain the amplitude and phase of this wave.Analysis reveals the wave amplitude to be ∼6.8%.This indicates that wave amplitudes grew while propagating from OH to the O ( 1 S) layers.Thus the airglow data show observed wave amplitude growth to be ∼1.7.Also, we note that the minima of phase of this wave occurred at 26.5 h (i.e., 2.5 h LT) in OH data and at ∼25.4 h (i.e., 1.4 h LT) in O ( 1 S), suggesting a phase difference of ∼0.9 hr.This means that wave was propagating upward.Assuming a layer separation of 10 km, the observed phase difference results in a vertical phase velocity of ∼3 m/s.This, in turn, indicates that the vertical wavelength of 8.4 hr wave is ∼90 km.It is important to state here that the observed wave amplitude growth is apparent as these signatures represent integrated effects occurring at airglow layer with thickness of ∼5-8 km.Nonetheless, we believe the variability is true.
The bottom panels in the Figure 1 show the residual variability obtained by subtracting the best-fit cosine model data from the mean intensity deviations.The best-fit model results on the residuals (Figure 1(b)) show the period of the residual wave to be 3.1 ± 0.3 hr in OH data with ∼1.7% oscillation amplitude and minima of phase at ∼25.3 h (i.e., 1.3 h LT).The O ( 1 S) data (Figures 1(c) and 1(d)), on the other hand, show the amplitude of 3.1 hr wave oscillation to be ∼4% and minima of phase occurring at ∼24.6 h.These values result in a wave amplitude growth of ∼2.4 and vertical wavelength of ∼45 km.The plot also suggests that in the presence of several waves the biases may influence the best-fit approximation of wave parameters and one should take due care in the inspection of wave amplitude and phases for the best possible results.Note that the calculation of phase differences is carried out by cross correlating two time series and because most cases show that the identification of minima was better recognized, we have used it for the characterization of vertical wavelengths for the upward propagating waves.Noteworthy in the plot is that residual wave amplitudes are ∼1% which may be debated.However, the variability depicting the wave feature is conspicuous with good signal-to-noise ratio.In the wave analysis, we have included only data when wave signatures were evident and their amplitudes were above 0.5%.We carry out similar best-fit analysis on the nocturnal data of 14 nights of observations to identify the principal as well as residual waves observed during the February and March 2010 campaign.We note that on some nights the primary wave exhibited a very long-period trend whose periodicities could not be identified with the best-fit analysis; therefore, we have not included those long waves.As stated above, the residual waves with oscillation amplitudes below 0.5% were ignored.With the above criteria in place, the results of the best-fit analysis and observed vertical wavelengths are shown in Table 1.Of relevance is that on some days we note the presence of ∼11 hr wave in the data which was estimated using best-fitting.Though, we believe that this may be a true representation of variability, the results corresponding to such waves must be further validated using other roundthe-clock measurements which at present are not available.These results are summarized in Figure 2. The observed wave characteristics show large variability in terms of wave growth factor and vertical wavelengths.Filled black circles in each plot show the observations corresponding to principal waves while the filled red circles represent the residual waves.We observe that wave growth factor varies from 0.4 to 3.8 for the duration of observation (Figure 2 a wave which is propagating from 87 to 97 km without any dissipation should have a wave amplitude growth ∼2 in order to conserve its energy.Therefore, our data clearly indicate that most of the waves observed in our data were dissipated and only few of them were nearly freely propagating.This is somewhat similar to the findings of Taori et al. [16] where wave amplitude growth values were reported to vary from ∼0.4 to 4 with most of the waves exhibiting severe dissipation over Maui (20.8 ∘ N, 156.2 ∘ W).
The vertical wavelengths deduced from the observed phase differences at two emission layers for the observed waves are plotted in Figure 2(b).We note that most of the observed waves have vertical wavelengths varying from 25 km to 75 km.We investigate the possible relation between the wave period and vertical wavelengths in Figure 3.We note that a near linear relation exists between them, with most of the short period waves having smaller vertical wavelengths compared to the longer period ones.The linear best-fit It is important to state here that highly varying wave saturation and dissipation processes occurring at mesospheric altitudes may lead this relationship to vary.Over equatorial latitudes, Taylor et al. [26] also investigated the relationship between different wave parameters.They used image data to characterize very short period waves and with the help of coincident lidar data they showed that vertical wavelength and wave periodicity (5 to 20 min periods) have a relation.
They reported large vertical wavelengths for shorter period waves which differ from our observations (though wave periods >20 min based on the lidar data showed a different dependency which agrees well with our result).In a recent investigation Taylor et al. [9] investigated relations between horizontal wavelength and wave periods in a 5 min to 90 min range and found a positive correlation which was explained by a power law.The gravity wave dispersion relation suggests that / = /, where is natural oscillation period, is wave period, is vertical wavelength, and is horizontal wavelength.It is therefore implied that if and are fixed then, as increases, shall increase.Based on the above argument, our results are in agreement with that of Taylor et al. [9] which suggests a positive correlation between wave period and horizontal wavelengths.amplitudes depends on the dissipation/filtering processes.
Wind Variation and
The observed wave periods in our data suggest that these waves were not completely dissipated while propagating from OH layer to the O ( 1 S) layer.The interaction of waves with the mean wind is the most important dissipation mechanism (e.g., [2]).It is known that upward propagation of waves depends on the horizontal propagation direction of waves and zonal wind characteristics.The direction of wave propagation may vary from one season to another (eg., [27]).It would be ideal to have the gravity wave propagation directions through image data for suitable wind and wave interaction study.However, we investigate this with the help of coincident zonal and meridional winds (e.g., [12,17,28]).
To scrutinize the effects of winds on wave propagation, in Figure 4, we plot the observed nightly mean zonal (Figure 4(a)) and meridional (Figure 4(b)) wind variation (time averaged from 1800 h LT to 2800 h LT, i.e., averaging from evening to early morning) corresponding to the night airglow observations.We observe large variation from one night to another.The nightly mean zonal winds, in particular at ∼86 km altitude in February 2010, show oscillating nature; that is, on February 8 zonal winds are eastward, while on February 9 they turn westward which continues till the end of February 2010.In March 2010, however, at ∼86 km, winds are mostly westward.The meridional winds also reveal the oscillatory nature from one night to the other; however, at about 94 km most of the time they are southward.The temporal variation of the winds shows a strong semidiurnal tide to be present in the data which show a gradual variation.To elaborate this, we plot the zonal wind variability in Figure 5 for February 9-10, 2010.It is evident with best-fit (red curve) that a semidiurnal tidal feature was dominant in wind data.For a comparison with OH data, we carry out a Fourier analysis of residual (from best-fit) wind variability and OH intensity data.Figure 5 shows the results of the Fourier analysis.It is clear that the spectrum at both data indicates that the wave periods are somewhat similar.This emphasis that the cause of wind variability as well as the OH intensity variability are wave processes of similar nature.This is an interesting aspect which needs to be further investigated.However, the aim of the present investigation is to find out a link between wave parameters and wind shears; at present, we limit our discussion on this aspect.Because of the nocturnal variability noted in the wind data, we believe that it should be the vertical shears that would affect the vertical propagation characteristics.Therefore, we computed the wind shears at 87-97 km altitudes.We observe that on February 8, 9, and 16 the wind shear magnitudes are smaller than that observed on other nights.The relation between the wind shears and wave dissipation is discussed in the following section.
Wave Damping.
The amplitude growth of the waves observed at OH and O ( 1 S) emission altitudes can be translated into a damping factor.Numerical investigations by Liu and Swenson [17] and Vargas et al. [18] estimated the damping rates of upward propagating waves at O 2 , OH, and O ( 1 S) emission layers.Using the observations discussed in Section 3.1, we calculate the damping factors as explained in (1).The estimated damping factors are plotted in Figure 6 against the observed vertical wavelengths.We find that the damping factors for the observed waves change from 0.2 to 1.9.This suggests that no wave freely propagated during the observation period under consideration which is in agreement with earlier reports (e.g., [19,29]).Interestingly, on occasion, few waves with vertical wavelengths 20-50 km were propagating upwards without having significant damping (in fact, they show large variation, values from 0.2 to 1.7), while more than 50% waves were either saturated or damped.
We note that at shorter than 40 km vertical wavelengths, the damping factors increase with increasing vertical wavelengths.On the other hand, the vertical wavelengths show a negative correlation with the damping factors.In this regard, Takahashi et al. [19] investigated a relation between vertical wavelength and damping factors.They found wave amplitude growth to have a positive relation with vertical wavelengths and that the damping factors decrease with increasing vertical wavelengths which is similar to our results.Important to note is that numerical study of Vargas et al. [18] shows that for vertical wavelengths varying from ∼15 km to 50 km, wave amplitude growth decreases from 1.8 to 1.4, which possibly explains the reason for large scatter for shorter than 30 km vertical wavelengths in Figure 6.Also, they suggest that the wave amplitude growth varies from 0.6 to 2.0 for vertical wavelengths varying from 15 to 50 km, which broadly agrees with our results.Further, as explained earlier, wind shears may be a responsible factor for observed wave amplitude growth and hence the damping factor; we plot the damping factors against the observed wind shears between 87 and 97 km altitudes in Figure 7.In the absence of direction of wave propagation, we investigate the effects of zonal as well as meridional wind shears on the estimated wave damping factors.We note that with increasing zonal wind shears (Figure 7(a)), damping factors tend to increase.The linear best-fit analysis shows the following relation ( 2 = 0.18) between damping factor and zonal wind shears.In our analysis we have taken the difference in the wind velocity between 87 km and 97 km as a measure of wind shears: damping factor = 0.75 + 0.007 × zonal wind shear.(3) The poor 2 obviously suggests that most probably gravity wave propagation vector is not inclined to the zonal plane and may have a strong meridional propagation.To investigate this we carry out same analysis on the meridional winds.We note that the damping factors show somewhat better relation with the meridional wind shears.The linear fit shows the 2 value to be 0.49 with the following relation: Further, the damping factors of principal waves (filled black circles) show better dependency on the wind shears, while the shorter period (residual wave) ones show a large scatter which may be affecting the deduced 2 values.It is interesting to note that most of the time meridional winds were southward and as the wind shears tend to become northward damping factor increases.Similarly, in zonal direction, as the wind shears tend to be more eastward, damping increases.This in turn suggests that possibly observed waves had a stronger meridional component compared to the zonal component and have preferential north-eastward movement.Though, simultaneous image measurements are not available at present, earlier results from Indian sector have shown that most of the time the gravity waves show strong northward propagation (e.g., [20,30,31]) which supports our assertion.Though it is understood that horizontal and vertical propagation characteristics depend on the wind filtering, we show that not only the propagation but also the observed amplitude growth/damping factor of the waves depends on the wind shears which have an important bearing on the heat and momentum transfer.However, our results are based on a limited data; these conclusions are tentative and must be confirmed by further study.
Conclusions
Our night airglow measurements from the low latitude Indian station, Kolhapur, (16.8 ∘ N, 74.2 ∘ E) during February and March 2010 lead to the following conclusions.
(1) Mesospheric airglow data show large variability in the gravity wave amplitudes.
(2) Most of the upward propagating waves observed in both the OH and O ( 1 S) emission altitudes show amplitude growth varying from 0.4 to 3.8.
(3) The data reveal a positive correlation between wave periodicity and vertical wavelength.In conclusion, because the present investigation is based on limited data, a wider study is required to confirm the conclusions drawn here.Further, the effects of these wave processes on the thermosphere-ionosphere system need to be established with coordinated measurements in the near future.
Figure 1 :
Figure 1: The observed nocturnal variability noted on February 9-10, 2010 in OH (a, b) and O ( 1 S) emission (c, d) intensities.The solid lines in each plot exhibit the best-fit model results.(a, c) show the intensity deviations from the nightly average, normalized to their nightly averages.(a) shows the sample wave-fit results for two other wave modes together with the selected 8.4 hr wave.One may note the presence of principal nocturnal wave (a, c) and residual wave (b, d) in the data.
Figure 2 :
Figure 2: Night-to-night variation in the wave amplitude growth (a) and vertical wavelength (b) of the observed waves during February-March 2010 (plotted in month/day format).Filled black and red circles show the results for principal and residual waves, respectively.
Figure 3 :
Figure 3: Relation between the wave periodicity and their estimated vertical wavelengths.Filled black and red circles show the results for principal and residual waves, respectively.
Figure 4 :
Figure 4: Observed mean zonal (a) and meridional (b) wind variability corresponding to the night airglow observations during February and March.
Figure 5 :Figure 6 :
Figure 5: The averaged zonal wind variability during 1800 h to 0600 h on February 9-10, 2010 for 86-92 km altitudes (a).The Fourier analysis of zonal residual winds and OH intensity data shown in the bottom (b) indicate commonality in the period of oscillation in both data.
Figure 7 :
Figure 7: The distribution of damping factors with respect to the observed zonal and meridional (b) wind shears between 87 and 94 km altitudes.Filled black and red circles show the results for principal and residual waves, respectively.
( 4 )
The waves having vertical wavelengths less than 40 km show a positive correlation with damping factors, while the larger ones show negative correlation.(5)The damping factors of waves show a positive correlation with the zonal and meridional wind shears.
Table 1 :
Observed wave characteristics over Kolhapur during the February-March 2010 campaign are shown for each night (first and second rows on each night show principal and residual waves).The best-fitted wave periodicity and wave amplitudes are shown together with the goodness of fit measured as 2 values. | 5,780.6 | 2014-07-07T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Influence of Crystal Habit on the Dissolution of Simvastatin Single Crystals
In order to achieve better in-vivo performance of the final dosage form comprising a poorly soluble drug the physicochemical properties of the active pharmaceutical ingredient can be altered not only by changing the solid state form but also through the conversion of their crystal habits. To elucidate this approach in the case of simvastatin, the dissolution behaviour of large crystals with the same internal structure but expressing different crystal habits was studied using atomic force microscope. The obtained differences in the dissolution were explained through the determination of crystal morphology, its orientation and assignation of the molecular functional groups that were emerging on the surface of the dissolving crystal face. The dissolution rates of the particular crystal faces measured in situ by atomic force microscopy were found to be distinctly higher than others. The dissolution rate of single crystals differed as a consequence of higher incidence of more polar crystal faces in case of rod shaped crystals isolated from more hydrophilic solvent mixture which we have established through a thorough research of the single crystal morphology, orientation and the assignation of specific functional groups for each of evolved crystal faces.
Introduction
When developing an oral solid dosage form with poorly water soluble active pharmaceutical ingredients (APIs) it is of utmost importance to thoroughly evaluate and understand the physicochemical nature of the chosen API, which has to be carefully selected and specified in order to ensure repeatable income material with desired functional properties for the final dosage form.
Most marketed pharmaceuticals contain APIs in the form of molecular crystals. 1The arrangement of the molecules in such a crystal defines its physicochemical properties which in consequence determine the performance of the API in the final dosage form in terms of dissolution rate, physical and chemical stability and processibility by affecting its shape and mechanical properties. 2 In order to enable reproducible characteristics of the final dosage form, the internal structure and particle size, sometimes also specific surface area of the API, are usual-ly defined in a relative narrow range.This, however, is not always sufficient.Additional characteristics of the API have to be defined to achieve the specified in-vitro characteristics of the final dosage form in order to produce a highquality, safe and efficient drug product, such as crystal habit, hydrophillicity of the bulk sample or other properties which reflect a certain position of the exposed functional groups on the surface and the incidence of those.
Crystal habit is defined during the crystallization process. 3When considering structures with the same internal structure, possessing different crystal habit, the mechanical and chemical properties of such crystals can vary significantly.
For the present study, simvastatin was selected as a model poorly soluble API with aqueous solubility of 30 μg/mL. 4 Simvastatin is a derivative of lovastatin and is a potent competitive inhibitor of 3-hydroxy-3-methylglutaryl coenzyme A reductase, a rate-limiting enzyme in cholesterol biosynthesis.It may also interfere with steroid hormone production. 5Simvastatin is a BCS Class II drug. 6ts solubility is therefore a crucial rate limiting factor to achieve the desired level in systemic circulation for pharmacological response. 5lthough polymorphism is a very common phenomenon among active pharmaceutical substances, 7 detailed searches in the literature and the Cambridge Structural Database 8 revealed only one known stable crystal structure of simvastatin at room temperature, 9 first reported in 2002 by ^ejka et.al. 10 In some cases, a lack of polymorphism in APIs may be an advantage, since no phase transformations, which could affect the performance of the drug, can occur during manufacturing, storage or administration (Brittain, 2002, 11 Zhang et al., 2004 12 ).However, there is less space for the improvement of the aqueous solubility of simvastatin by physical modification of the API itself, e.g.selecting a more/less soluble polymorphic form in connection to the particle size and specific surface area.The presence of different simvastatin crystal habits may provide a possibility to alter the properties of simvastatin in order to enhance its dissolution rate.
Knowledge about the dissolution rate and also the dissolution mechanism of the different crystal faces is becoming progressively interesting and valuable method for the characterisation of poorly soluble active pharmaceutical ingredients. 7,13In our study the use of atomic force microscopy for this purpose is presented.
The influence of different crystal habit of active pharmaceutical ingredient on the physico-chemical and biopharmaceutical properties of the final dosage form has been studied by various research groups.Crystal habit can significantly influence for instance the disintegration capability 14 of tablets, their crushing strength, 15 compactibility 16 and the dissolution behaviour. 14,16t is known 13,17,18 that different crystal planes possess different surface energy properties and can therefore exhibit different dissolution kinetics which are demonstrated among others in the work of Tenho et.al. 17ho used this technique to study the dissolution behaviour of acetylsalicylic acid and tolbutamide and Prasad et al. 13 on paracetamol single crystals.These published studies suggest that it is possible to modify the dissolution rate of the active pharmaceutical ingredients by altering only their crystal habits.The purpose of our research was to crystalize large crystals expressing different crystal habits, determine the dissolution rate of individual crystal faces and explain the observed differences in the dissolution behaviour of simvastatin crystals on the molecular level.
The difference in the dissolution rate of crystals possessing different crystal habits was usually studied on bulk samples and explained in connection to polymorphic transitions or different particle size and specific surface area.
The focus of the present study was to explain different dissolution kinetics of simvastatin crystals possessing different crystal habits at the molecular level.This was ac-hieved through the assignation of specific functional groups for each evolved crystal face and the incidence of those.Present work additionally elucidated in more detail the same effect we have observed during our previous research performed on small simvastatin crystals expressing the same difference in their crystal habit.
1. Materials
We have performed several sets of experiments where we have used different crystallization approaches and different solvents and their mixtures in order to isolate simvastatin crystals expressing different crystal habit.During first sets of experiments small crystals were isolated.By altering the crystallization conditions we were then able to crystallize large simvastatin crystals expressing the same difference in their crystal habit as the small crystals had.
The solvents used in our experiments differed in their polarity.More polar solvent mixture was composed of acetone (Merck, Germany) and water and for the less polar solvent mixture ethanol (Riedel de Haën, Germany) and n-heptane (Merck, Germany) were selected.All the solvents were analytical grade.
The water used for crystallization and surface characterization was organic-free, distilled water.
1. Crystalizaton of Large Single Crystals Expressing Different Habits
Large single crystals of simvastatin having the same crystal structure but different crystal habits were prepared by recrystallization of untreated simvastatin from solvent mixtures exhibiting different polarity.A mixture of acetone and water and a mixture of ethanol and n-heptane were selected.By using these solvent mixtures we were able to prepare large and also small simvastatin crystals with the same crystal habit by only altering some of the crystallization process conditions, mainly the isolation time.
1. 1. Crystallization from Hydrophilic Solvent
Mixture (acetone/water) Untreated simvastatin (10 g) was dissolved in 30 mL of acetone at room temperature.To the clear solution 30 mL of water (antisolvent) was added slowly during 30 min mixing at room temperature.The reaction mixture was stirred with an overhead stirrer at 250 RPM.After 17 ml of the antisolvent was added to the reaction mixture, a two-phase system was formed.The solute separated from the solution in the form of droplets resulting in emulsion of the oily solute in the solvent mixture.After the entire amount of antisolvent was added, the formation of crystals from the oily phase was observed resulted in thick suspension.The suspension was allowed to evaporate slowly at room temperature for 1 week in order to obtain large crystals which were additionally dried in a desiccator for 24 h and stored in appropriate airtight container for further analytical evaluation.
1. Crystallization from Hydrophobic Solvent
Mixture (ethanol/n-heptane) Simvastatin (5 g) was dissolved in 13 mL of absolute ethanol at 38 °C.A clear solution was added to cooled (15 °C) heptane (65 mL) in the reactor equipped with an overhead stirrer in 10 min.When a droplet contacted the antisolvent, the nucleation of particles was observed followed by their dissolution.When the entire amount of the simvastatin solution was added, some particles remained undissolved.Hazy solution was heated to 37 °C and covered with a membrane (parafilm).Solvent was allowed to slowly diffuse trough a membrane for 1 month in order to obtain large crystals which were afterwards isolated and dried in a desiccator for 24 h at room temperature and stored in appropriate airtight container for further analytical evaluation.
Characterization of Starting Material and Isolated Crystals
Powder X-ray diffraction patterns of simvastatin crystals were obtained using the X-ray diffractometer Panalytical (X'Pert PRO MPD, Netherlands), at 45 kV 40 mA, over the range of 3-50° 2θ, using CuKα radiation wavelength 1,541 Å.
Single-crystal X-ray diffraction was used to determine the indices of the crystal faces.This task was performed on a Nonius Kappa CCD diffractometer using MoKα radiation and the NoniusCollect software.Large single crystals were glued to the holder and positioned onto the goniometer.Indices of the crystal faces were determined, using the "Orient Measure Crystal" within the software package.
After the crystals were characterized as such to know the indices of crystal faces, we were then able to monitor each individual crystal face in order to obtain information about its properties with regard to their dimensions, topography, mechanical properties and the dissolution kinetics.
Evaluation of crystal habit was performed using a scanning electron microscope (ULTRA plus, Carl Zeiss, GER).Crystals were characterised before and after the exposure to the dissolution medium phosphate buffer with p-H of 6.8 with 0.5% sodium lauryl sulphate (SDS).
The atomic force microscope Innova microscope, Veeco, USA with CP-II MicroCell, was used for the determination of mechanical properties and the in situ dissolution study.A 3D AFM image analysis was performed by NanoScope Analysis software.
For the dissolution study we carefully placed the crystal in the liquid chamber of the AFM microscope, secured the crystal with a double-sided adhesive tape and carefully added the dissolution media without the formation of air bubbles using a syringe.The dissolution was measured in a static environment since simvastatin has a very low solubility in aqueous media; in this way the differences were more pronounced and the measurements were stable.The AFM probe was then adjusted in a way that one edge of the crystal face to be examined was perpendicular.
As the crystal dissolved, the regression of the crystal face was measured as a function of time.Dissolution measurements were stopped when the formed voids were too big to measure using the AFM probe.Measurements were conducted at room temperature on all major accessible crystal faces on three individual crystals for each habit.The scanning range was 95 × 95 μm.
Mechanical properties were determined through the measurement of Young's Modulus according to the Hertz model with SPIP 6.0.13 software.AFM probe was used as a picoindenter (force spectroscopy mode).This approach models the interaction between the tip and sample as two springs in a series.The small indentation depths and low loads create an elastic interaction between the tip and sample.The elasticity of the material is then calculated from the obtained force curves (Hoffman).
1. Crystal Structure
In order to perfom indexing of the developed crystal faces by conventional single-crystal X-ray diffractometry (SCXRD), which requires crystals of a suitable size and quality, single crystals of sufficient quality were grown.In general, the minimum dimension along each axis of the crystal should exceed 50 microns, unless a heavy atom (atomic number >17) is present in constituent molecules 2 .Simvastatin crystals selected for the X-ray study were about 5 mm long and about 200-400 μm wide.
To obtain information on physicochemical characteristics of the prepared crystals, X-ray powder diffraction measurements were conducted.As it is known from the work of Hu{ak et.al. 9, simvastatin can be found in three crystal modifications, with only Form I being stable at temperatures higher than 272 K. Form II is present within the temperature range between 272 and 232 K, and Form III at temperatures below 232 K.The lattice parameters between those three modifications almost do not change and even the transformation to the lower symmetry (monoclinic space group P21) for form III does not result in significantly different crystal lattice.According to ^ejka et.al 10 who were the first to determine the crystal structure of simvastatin, simvastatin crystallizes in an orthorhombic structure with the space group P212121, Z = 4 and unite cell parameters a = 6.128Å, b = 16.296Å, c = 22.466 Å.
Our powder diffraction data collected on gently milled large crystals prove the consistency with the data published for simvastatin form I. As it can be seen from the recorded X-ray powder diffraction patterns (Fig. 1), both recrystallized samples have the same crystal structure and the degree of crystallinity which is also comparable to the starting material, i.e. "untreated simvastatin".
2. Crystal Habit
The crystals, isolated from more hydrophilic solvent mixture are rod-shaped.Their longest dimension extends along the 1 0 0 crystallographic direction and the crystal faces at each end are normally broken, close to being per-pendicular to the longest dimension and were thus described as (100) and (-100).The projection of the rod along the longest dimension is more or less distorted hexagon with the same types of crystal faces being always developed: (011), (0-1-1) (0-11), (01-1), ( 001) and (00-1) (Fig. 2a).Crystal faces (001) and (00-1) are significantly smaller than the other four.Crystals isolated from the hydrophobic solvent mixture are thinner, more plate-like, with crystal faces (001) and (00-1) being the most expressed, whereas other four are less evolved (Fig. 2b); in extreme cases this pair of crystal faces becomes very dominant.
Typical microscopic images of the obtained crystals from each solvent mixture are shown in Fig. 3a and Fig. 3d.
SEM analysis revealed numerous circular irregularities (Fig. 3d) on the surface of the hydrophobic crystals, which were formed when the residual heptane evaporated from the crystals during their growth.Crystals obtained from a hydrophilic solvent mixture exhibit a smooth surface (Fig. 3b).
The Connection Between the Crystal Structure and Crystal Shape
The key drivers which define the shape of the growing crystal are related to the crystal lattice energy of the molecular solids.However the lattice energy is not the only influencing factor.The crystallization kinetics is also important since it influences not only the size and morphology of the crystals, but also their structure. 19s proposed in the work of Berkowitch et.al. 20the difference in the crystal habit is attributed to the effect of solvent, namely the preferential adsorption of the solvent molecules on specific crystal faces.A delay in growth of some crystal faces is attributed to the prerequisite need of solvation layer removal prior to the deposition of the next growing layer.Since polar solvents preferentially interact with polar crystal faces, the most pronounced effect is anticipated for crystals exhibiting faces with significant differences in their polarity.The polarity of a crystal face is determined by the atoms or functional groups which are exposed normal to the face and are easily accessed by the solvent molecules.
In our case, the biggest difference between the samples isolated from the hydrophilic solvent mixture, in comparison to those from a more hydrophobic solvent mixture, was the relative growth along the c-axis in com- parison to the growth in the directions of the diagonals between c and b axes.Growth along the c axis was the fastest in the case of crystals grown from a hydrophilic solvent mixture, thus the formed crystals have less prominent (001) and (00-1) pair of crystal faces and more expressed (0-1-1), ( 011) and (0-11), (01-1) pairs of crystal faces.
In the crystal lattice simvastatin molecules are interconnected with hydrogen bonds which facilitate the formation of infinite chains along the b axis (the hydroxyl group of the oxooxan cycle is connected to the carbonyl group of the 2,2-dimethylbutanoate).Molecular clusters form a zig-zag pattern, which is exposed on the (001) crystal face.This face thus consists of voids which can form H-bonds and convex hydrophobic areas where mainly hexahydronaphtalene rings and 2,2-dimethylbutanoil groups are present (Fig. 5).
The crystal face (011) is of a more hydrophilic nature since simvastatin molecules are bound by H-bonds in this direction.On this face mostly methyl-groups of the hexahydronaphtalen ring and oxooxane rings are present.The crystal face (0-11) is composed of hexahydronaphatlen rings and 2,2 dimethylbutanoil groups with the possibility of forming H-bonds.
The zig-zag pattern is also evident on the crystal face (100), with convex hexahydronaphtalene rings and voids where oxooxan rings are present which are able to form H-bonds.
4. Mechanical Properties of Large Simvastatin Crystals
Young's modulus also known as elastic or tensile modulus was determined for all crystal faces and compared between both types of samples.It measures the force needed to stretch the analysed material.A material whose Young's modulus is very high can be approximated as rigid. 21he results in Table 1 show mechanical properties of the grown crystals.The values obtained on the same crystal face did not differ between crystals expressing different morphology.In all cases the crystal face 001 is the most rigid one, whereas faces (011) and (01-1) are softer and of similar elasticity.
5. The Dissolution Kinetic of Single Crystals
Furthermore we wanted to correlate the properties of different exposed crystal faces to the different dissolution behaviour of crystals through explanation of their ability to exhibit hydrophilic interactions with the aqueous dissolution media as a consequence of exhibition of different molecular functional groups exposed at each crystal face. 22he use of AFM permits direct nano-to micro-meter scale in-situ observation of solid crystal-water reaction processes occurring on single crystal surfaces and dissolution processes are monitored through topographic changes of the crystal surface over time. 23he crystals selected for the dissolution experiment were approximately 0.5-1 cm long with well defined faces.Only one crystal face per crystal was studied in each expe-Bukovec et al.: Influence of Crystal Habit on the Dissolution ... riment.The results thus represent the behaviour of a total of 6 separate crystals for each experiment.Since there were little differences between each crystal (less than 5%), we believe the results are representative for the whole sample.
As demonstrated (Fig. 7-9) the dissolution kinetics of the crystal face (011) is much faster in comparison to (001).The crystal face with lower dissolution rate is more expressed in case of plate-like crystals.Satisfactory measurements of the crystal face (011) could be made only on rod-shaped crystals (Fig. 7) since this face was too small for accurate examination of the plate-like crystals.
We have observed that the voids which form when the molecules transit to the dissolution media are rectangular-shaped.It is evident from the estimated volume of the dissolving phase (volume of the formed void) that Large molecular clusters (approximately 10 8 molecules per hour per individual void) rather than individual molecules dissolve through the observed time-points.
The dissolution behaviour of the (001) crystal face was analysed also on rod-shaped crystals.Both cases show lower dissolution rate of this crystal face in comparison to the crystal face (011); its character is therefore more hydrophobic.
The comparative dissolution profile of the ( 001) and (011) crystal faces is shown on Figure 9.During the dissolution study not only voids were forming but also the sur-face of the crystals was slowly dissolving so they were becoming thinner (shown on Fig. 7c-d).We have therefore plotted the results of the measured surface area of the formed void for each time point rather than the volume of the voids itself.The results represent an average value of the measured projected area (2D) of 10 individual voids for each time point.The dissolution rate of the crystal face (011) is faster in comparison to (001) and is therefore more hydrophilic.The collected data confirm our previous predictions (section 3.3.)based on the internal structure and exposition of the functional groups on the individual crystal face.
These results additionally prove and explain our previous findings performed on smaller crystals isolated from the same solvent mixtures, expressing the same differences in crystal habit and the dissolution rate of the bulk (intrinsic dissolution rate), which we have at that time, explained through different hydrophillicity of the surface as determined by the drop-shape analysis on compacted disks.
Conclusion
Simvastatin crystals crystallized from different solvent mixtures showed significant differences in their shape.The crystals isolated from a more hydrophilic solvent mixture e.g.acetone/water mixture were rod-shaped whereas the crystals isolated from solvents with lower polarity, such as ethanol/heptane solvent system were plate-like.
The main difference between both samples was the size of (011) crystal face, which was larger in case of rodshaped crystals.According to the results of the dissolution kinetics performed on individual crystal faces, we can assume that this crystal face is the most polar one.Moreover, the size of the crystal face (001) in comparison to the size of the other evolved crystal faces differed between crystals expressing different crystal habit.Relatively speaking in plate-like crystals isolated from a less polar solvent mixture the crystal face (001), was significantly larger compared to the rod-shaped crystals.A slower dissolution rate from this crystal face is attributed to its lower hydrophillicity.These findings additionally prove and explain in more detail the results of our previous research performed on smaller crystals isolated from the same solvent mixtures, expressing the same differences in crystal habit and the dissolution rate.
The crystals obtained from polar solvents exhibit relatively large polar crystal faces, while nonpolar crystal faces are the most prominent in both cases.These crystal faces also vary in their mechanical properties as a consequence of more molecular voids present on the polar crystal faces.Since polar crystal faces are less expressed in plate-like crystals, the material isolated from less polar solvent mixture is more rigid.Different mechanical properties can significantly alter the biopharmaceutical properties of the final dosage form.In our case faster dissolving rod-shaped simvastatin crystals are prone to be plastically more deformed when formulated into a tablet in comparison to the plate-like crystals.They could therefore require more disintegration power in order to achieve proper disintegration of the final dosage form.
As it was already described in the literature, the polarity of the solvent and the interactions that lead to its preferential adsorption at selected crystal faces of the solute are critical factors in determining the habit of a crystallizing solid.We have shown in the case of simvastatin that grown crystal faces can vary considerably with regards to their polarity (and therefore relative hydrophobicity) depending on the atoms/functional groups that emerge at the surface of the crystal due to specific interactions with the crystallization solvent.
Hydrophilic crystals were isolated from more polar solvent mixture, whereas a less polar solvent mixture contributed to more hydrophobic crystals with slower dissolution kinetics.The dissolution kinetics is dependent on the incidence and size of specific crystal faces.Present work proves that it is also possible to alter the dissolution rate by only changing the crystal habit of the substance without changing any other physico-chemical parameters such as particle size, specific surface area and/or internal structure of the active pharmaceutical ingredient.
Habit modification is therefore one of the possible approaches by which we can regulate the dissolution rate and consequently improve the bioavailability of poorly water soluble active pharmaceutical ingredients which do not express the polymorphism phenomena.
Bukovec et al.: Influence of Crystal Habit on the Dissolution ...
Figure 1 :
Figure 1: X-ray powder diffraction patterns of milled crystals: a) untreated material, b) crystals isolated from a hydrophilic solvent mixture c) crystals isolated from a hydrophobic solvent mixture, d) calculated mono-crystal structure data 10
Figure 2 :
Figure 2: Crystal faces of crystals isolated from hydrophilic (a) and hydrophobic (b) solvent mixture.All crystals are projected along the a axis
Figure 4 :
Figure 4: Packing arrangement of simvastatin in connection to the crystal habit (a) isolated from a hydrophilic solvent mixture and (b) isolated from a hydrophobic solvent mixture
Figure 3 :
Figure 3: Microphotographs and SEM images obtained on rod-shaped crystals (a-c) and plate-like crystals (d-f) showing their shape (a and d) and close-up images which show the surface topography before (b,e) and after (c,f) the exposure to the dissolution medium at final time point 5 h.
Figure 5 :Figure 6 :
Figure 5: A schematic view of simvastatin molecules positioned on the individual faces
Figure 7 :Figure 8 :Figure 9 :
Figure 7: Surface topography of the face 011 of the large crystals exposed to the dissolution media recorded at time points 0 (a), 1 h (b), 3 h (c), 6 h (d) respectively at 500× magnification
Table 1 :
The results of Young's modulus (E) obtained from the most prominent faces of simvastatin crystals given as the average value of all measured crystals | 5,776.4 | 2015-11-16T00:00:00.000 | [
"Materials Science"
] |
Development and Performance Evaluation of Thin-Layer Color Antiwearing Paving Materials
School of Traffic and Transportation Engineering, Changsha University of Science & Technology, Changsha 410114, Hunan, China Guangxi Communications Investment Group Corporation Ltd, Nanning 530007, Guangxi, China Hunan Provincial Communications Planning, Survey and Design Institute Co., Ltd, Changsha 410200, Hunan, China Hunan International Scientific and Technological Innovation Cooperation Base of Advanced Construction and Maintenance Technology of Highway, Changsha University of Science and Technology, Changsha 410114, Hunan, China
Introduction
Road traffic safety has always been the focus of scientific research workers [1,2]. e State Council's " irteenth Five-Year Modern Comprehensive Transportation System Development Plan" pointed out that future highway construction should be "innovation-driven, green, and safe" as the basic principles, firmly establish the concept of safety first, and comprehensively improve the safety and reliability of transportation, and to ensure the safety of users to reach the destination has become the basic property of the highway [3]. However, the road traffic accident is still a difficult problem that cannot be avoided all over the world. According to the data of "road traffic and transportation safety development report (2017)," there are 8.643 million traffic accidents in China in 2016, resulting in direct property losses of 1.21 billion yuan [4]. e analysis of the causes of a large number of traffic accidents shows that, in addition to the vehicle and the climate, the driver is also one of the important accident risk factors. For example, long-term driving distraction can easily cause visual fatigue. Besides, some expressways contain a lot of curved road sections, appearing in the form of a combination of curved road sections and long downhills. e "nonorthogonality" of these linear designs bury hidden dangers for driving stability and safety [5,6]. It is urgent to improve road traffic safety. e color antiwearing thin layer provides a new idea for improving road driving safety. It coats the road surface with adhesives and colored antiwearing aggregates to form a 4-10 mm thick structural layer, and through the change of color and the improvement of vibration, the driver is reminded to enter the dangerous road section from the visual and tactile aspects, respectively. Also, the structure layer has good antiwearing performance, which improves driving safety in a diversified way [7]. e research and application of the color antiwearing thin layer originated from Europe, the United States, and other developed countries in the 1950s, and it is applied to traffic engineering fields involving safety management [8][9][10][11], such as parking lot, tunnel entrance and exit, ramp, and deceleration belt. However, the research in this field started late in China; until the beginning of the 21 st century, color pavement technology is applied to public places such as carriageway and residential area. With the maturity of the preparation, production, and construction technology of colored paving materials, the colored pavement is developing towards lightness, high performance, and safety [12]. e development of color thin layer binder has experienced three stages; they are color coating, color asphalt, and polymer resin [13]. At present, epoxy resin, polyurethane, and methyl methacrylate are widely studied. Chen et al. [14] developed a kind of antiwearing layer adhesive material for pavement, which was composed of epoxy resin, curing agent, toner, and quartz sand. e bonding material had excellent mechanical properties, deformation properties, and durability. Quan et al. [15] developed a one-component polyurethane adhesive, which was composed of polyaspartic resin, DOP, and MD polyether prepolymer, and had the characteristics of transparency, wears resistance, and good sealing performance. Wu [16] prepared a thermoplastic elastomer-modified polymethyl methacrylate color nonslip coating, which mainly included modified MMA resin liquid, rubber particles, dibutyl phthalate, and formyl peroxide, and had nonpolishing, nonslip, quiet, and other characteristics. With the development of the types of adhesives, the pavement properties of adhesives have been paid more and more attention. Zhong et al. [17] adopted pure shear strength test methods to compare the shear strength of epoxy resin and modified emulsified asphalt under different temperature conditions to evaluate the interlayer adhesion. ey found that epoxy resin had good adhesion to asphalt concrete, and its shear strength was higher than emulsified asphalt. Xue et al. [18] measured the adhesion between color antiwearing thin layers under different resin coating amounts by paint and varnish pullapart adhesion test method. It was found that, with the increase of resin adhesive coating amount, the adhesion between layers gradually tends to a fixed value. Li et al. [19] compared the interfacial bond strength of the original pavement with or without groove through the pull-out test and shear test, which found that the shear strength and pullout strength can be increased by one-third under the groove condition. Wu et al. [20] determined that the structure depth of the colored antiwearing wearing course was 1.81 mm and the friction coefficient BPN was 59.8, which was higher than that of the general asphalt pavement. Zhang et al. [21] compared the friction coefficient of the color antiwearing thin layer with SMA-13 and AC-13 pavement under dry and wet conditions and found that the epoxy color antiwearing wear layer had the best performance and could shorten the braking distance by 40%. Liu et al. [22] compared the skid resistance performance of colored antiwearing thin layers from different aggregate sources and found that the antiwearing performance of bauxite particles, ceramic particles, quartz sand, and colored sand decreased in turn.
To sum up, there is still a need for further research in formulation development, thin-layer material adhesion, and skid resistance durability. erefore, this paper through the single factor studied epoxy resin curing agent mass ratio, mass ratio of two types of curing agents, toughening agent dosage, and diluent dosage on the properties of adhesive, and then, the response surface method was used to obtain the optimal ratio. Finally, the adhesion between epoxy resin adhesive and ceramic particles was explored, and the antiwearing durability was evaluated.
Materials.
Since the adhesive of the epoxy resin was generally composed of epoxy resin and curing agent, to improve its comprehensive performance, it needed to be realized by adding tougheners, diluents, etc. e information on epoxy resin, curing agent, toughening agent, diluents, and other materials selected is shown in Table 1.
e color ceramic was the aggregate in the test, and its main technical indexes are shown in Table 2.
Drawing Strength Test.
e microdrawing instrument of Shijiazhuang Zhuopu Technology Co., Ltd. was used. e epoxy resin adhesive was evenly stirred and applied on the surface of the clean steel plate, and then, the drawing head with a diameter of 100 mm was placed. After standing at room temperature and curing for 24 hours, the drawing strength was tested with a tensile rate of 10 mm/min. e principle of the equipment is shown in Figure 1. e calculation formula is shown in the following formula: (1) In formula (1), τ is drawing strength (MPa), F is the maximum drawing value (N), and D is the diameter of the puller (mm).
Shear Strength Test.
e epoxy resin adhesive was evenly applied on the 100 mm × 100 mm surface of three 100 mm × 100 mm × 50 mm cement concrete specimens. After 24 hours of static curing at room temperature, it was 2 Advances in Civil Engineering placed in a self-made shear fixture, and its shear strength was tested at 1 mm/min: In formula (2), τ is shear strength (MPa), F is shear force (kN), and S is the stress area (m 2 ). e test principle is shown in Figure 2.
Bending Strength Test.
Based on "test method of resin casting performance (GB/T 2567-2008)," cuboid epoxy resin castables were prepared, and then, a three-point bending test was carried out.
Adhesion Test.
According to the low-temperature adhesion test of asphalt and aggregate (T0660-2000), a 2 mm thick epoxy resin adhesive was applied on the central surface of the clean steel plate. Afterward, the color ceramic particles were evenly spread at a spreading amount of 1.8 kg/m 2 to form a circle with a radius of 70 mm. After curing at room temperature for 24 hours, the steel balls were dropped from different heights to observe the number of ceramic particles dropped after impact.
Antiwearing Durability
Test. e self-made indoor accelerated wear equipment ( Figure 3) was used, which was mainly composed of a drive system, temperature control system, wheelset system, and control system. e asphalt mixture rut board specimen (300 mm × 300 mm × 50 mm) was prepared, and the surface was laid with a color antiwearing thin layer. e frequency conversion and wear times were set according to the test speed requirements, and the texture depth in the circular wear zone and the mass loss per unit area of the specimen were measured after different wear times.
Effect of Single Material Composition on the Properties of Adhesives.
To get a good application in the pavement, the thin layer material should have excellent bonding performance and effectively deform with the pavement structure without damage. erefore, a single factor test was designed to analyze the effects of the mass ratio of the epoxy resin to the curing agent (phenolic amine and polyamide) and mass ratio of phenolic amine to polyamide, toughening agent, and diluent on the drawing strength, shear strength, and bending strength of the adhesive. In the test, the content of silane coupling agent (KH550), nano-SiO 2 , and nano-CaCO 3 were 0.5% (extra-mixing). e test factor level design and test results are shown in Table 3, and the following conclusions are drawn .
(1) When the mass ratio of the epoxy resin curing agent increased, drawing strength, shear strength, and bending strength of adhesive firstly increase and then decrease ( Figure 4). e reason is that, at the beginning of the mass ratio increase, the amine group in the curing agent causes the epoxy group in the epoxy resin to form a hydroxyl group and then react with the epoxy group to form a network polymer, which improves the overall performance of the adhesive [23,24]. However, when the mass ratio is further increased, the excess epoxy resin makes the curing agent not have enough amine groups to cure and crosslink with it, and the epoxy resin itself has a low viscosity, which reduces the bonding performance and toughness. In this respect, the mass ratio of the epoxy resin to the curing agent should be about 3. (2) As the mass ratio of phenolic amine to polyamide increases, the drawing strength and shear strength of Advances in Civil Engineering the adhesive gradually increase at the initial stage, but slightly decrease when the mass ratio exceeds 2 ( Figure 5). On the contrary, the bending strength gradually increases, but the increase rate slows down when the mass ratio exceeds 2. is is due to the synergistic effect of polyamide and phenolic amine, which can improve the curing degree of the epoxy system and make the network structure formed by the reaction of active groups and epoxy firmer. However, it is possible that phenolic amine can only enhance the performance of the epoxy system within a certain dosage range, and the brittleness of the cured product beyond this range is gradually reflected [25,26]. erefore, from this perspective, the mass ratio of phenolic amine to polyamide should not exceed 2. (3) With the increase in the amount of the toughening agent, the drawing strength, shear strength, and bending strength of the adhesive first increase and then decrease ( Figure 6). e reason is that, at the initial stage of the increase in the amount of the toughening agent, the molecular weight of dioctyl phthalate (DBP) is relatively small, which can intersperse between the molecular chains of the epoxy cured product, increasing the mobility of the molecular chains and the spatial freedom of the epoxy resin molecular chain, and improve the toughness and strength. When the dosage is further increased, excessive DBP is incorporated into the epoxy system, and it cannot participate in the curing reaction of the epoxy resin, and there are a large number of small molecules in the cured product. Due to the low viscosity, the mechanical properties of adhesives are reduced to some extent [27]. erefore, the optimal content of diluent is 3%. (4) e increase of the content of diluent harms the properties of epoxy resin adhesive. With the increase of the diluent content, the drawing strength, shear strength, and bending strength of the adhesive all show a decreasing trend (Figure 7). is may be because the butyl glycidyl ether diluent belongs to a linear small molecule compound, and the linear chain segment in the molecule after curing reaction weakens the rigid structure of the whole crosslinking network and reduces the mechanical properties. However, from the perspective of construction workability, it is still recommended to add an appropriate amount of diluent [28].
Optimization of Adhesive Formulation Based on Response
Surface Methodology. Response surface method is a statistical method of experimental design, which uses a reasonable experimental design to obtain the mathematical expression between design variables and response values through limited experiments, and obtains the optimal number of parameter groups through regression equation analysis, which is widely used in biology, chemistry, industry, engineering, and other fields [29][30][31]. To optimize the composition ratio of the epoxy resin adhesive, the drawing strength, shear strength, and bending strength are used as response values, and the three-level response surface with four factors including the mass ratio of the epoxy resin to the curing agent, the mass ratio of phenolic amine to polyamide, the content of toughening agent, and the diluent is designed.
e factor levels are shown in Table 4, and the test results are shown in Table 5.
Since the single factor method cannot explain the interaction between factors and cannot give a clear model between factors and response values, the Design Expert software is used to perform multiple fitting regression on the test results in Table 5 to obtain the quadratic multinomial regression model which is the model of drawing strength (X), shear strength (Y), and bending strength (Z) on the mass ratio of the epoxy curing agent, the mass ratio of phenolic amine to polyamide, the content of toughening agent, and the diluent: where X is the drawing strength, MPa, Y is the shear strength, MPa, Z is the bending strength, MPa, A is the mass ratio of the epoxy resin to the curing agent, B is the mass ratio of phenolic amine to polyamide, C is the amount of the toughening agent, %, and D is the dilutant dosage, %.
To test the validity of the regression equation and further determine the influence degree of various factors on the drawing strength, shear strength, and bending strength, the regression equation is analyzed by variance with the drawing strength as an example. e results are shown in Table 6, and from this, we can see the following conclusions. Advances in Civil Engineering and the regression equation can describe the relationship between the various factors and the response value. e test method is reliable.
(2) According to the P value in the regression model, the influence of a single factor on the response value from large to small is D, B, A, and C. e P value of D is less than 0.0001, which indicates that it has the most significant impact on the drawing strength, the interaction term BD has a more significant impact, and the others are not significant. e quadratic term A 2 has the most significant impact, and the others are highly significant, which indicates that various factors have an interactive impact on the drawing strength, rather than a simple linear relationship. e response surface graph can vividly describe the interaction between the factors. e steeper slope of the response surface indicates that the response value is more sensitive to changes in the ratio. On the contrary, the slope of the curve is gentler, and the change of the ratio has little influence on the response value. e influence of factor interaction on the response value is shown in Figures 8-13 , and from this, we can see the following conclusions.
(1) e interaction surface of the epoxy curing agent mass ratio and phenolic polyamide mass ratio, epoxy curing agent mass ratio and toughening agent content, epoxy curing agent mass ratio, and diluent content is steep, which indicates that the interaction effect on drawing strength is significant. e reason is that the mass ratio of the epoxy resin to the curing agent plays a leading role in the performance of the adhesive, and the effects of different levels of toughener and diluent depend on the level of the mass ratio of the epoxy resin to the curing agent. (2) e interaction surface of phenolic polyamide mass ratio and toughening agent content, phenolic polyamide mass ratio and diluent content, toughening agent content, and diluent content is gentle, indicating that the interaction on drawing strength is not significant.
As shown in Figure 14, the drawing strength is compared by the test results and the prediction results of the response surface curve model. e results show a good linear relationship, so the model can predict the drawing strength under unknown conditions and seek the optimal solution. erefore, A is 2.969%, B is 1.903%, C is 2.502%, and D is 1.000%.
In the same way, the shear strength regression equation is analyzed by variance, and based on the model prediction equation, A is 2.977%, B is 1.823%, C is 2.497%, and D is 1.000%. e bending strength regression equation is analyzed by variance, and based on the model prediction equation, the optimal ratio was A is 2.971%, B is 1.940%, C is 2.352%, and D is 1.000%.
On comprehensive consideration of drawing strength, shear strength, and bending strength, using Design Expert software, the three quadratic regression equations are combined. e maximum response value is solved within the variable range of factors, and the optimal ratio of epoxy resin binder is finally obtained. A is 2.971%, B is 1.887%, C is 2.455%, and D is 1.000%.
Study on Road Performance of Color
Antiwearing Thin Layer 4.1. Adhesion. e adhesion between adhesive and aggregate is an important index to evaluate the antiwater damage and antiloose ability of the color antiwearing thin layer. erefore, the steel plates are placed in −15°C, 20°C, 60°C, and 70°C environmental boxes for heat preservation, and the Advances in Civil Engineering steel balls are allowed to fall freely from 50 cm to 100 cm heights, respectively. e number of ceramic particles falling is observed. e test results are shown in Figure 15, and from this, we can see the following conclusions.
(1) Ambient temperature has an important influence on the adhesion of epoxy resin adhesive and ceramic particles, and the adhesion of the thin layer decreases at low temperature. At −15°C, the number of ceramic Advances in Civil Engineering 7 particles falling is 2-10 times that of 20°C, 60°C, and 70°C. is is mainly because the macromolecular chain in the adhesive is frozen under a low-temperature environment, and the crack propagation ability is reduced, and it shows brittle failure under external force. erefore, winter is the main period for the spalling of the antiwearing thin layer, and the temperature range in an unfavorable season should be considered in the performance evaluation of the antiwearing thin layer.
(2) When the impact load is increased, the adhesion between the epoxy resin adhesive and the ceramic particles decreases, showing that the number of ceramic particles dropped increases. e impact load has a certain temperature sensitivity on the adhesion of epoxy resin adhesive and ceramic particles. When the drop height of the steel ball increases from 50 cm to 100 cm, the number of falling ceramic particles increases by 11 at −15°C. When the temperature exceeds 20°C, the number of ceramic particles dropped increases by 1-2 with the increase of impact load, and the effect of impact load on adhesion decreases.
Antiwearing
Durability. e color antiwearing thin layer is affected by wheel impact and other effects in the use process, and the aggregate will drop and polish, which will affect its use effect, so its antiwearing durability is one of the important performances. erefore, based on the self-made indoor accelerated wear simulation equipment, different degrees of accelerated wear tests are carried out to test the texture depth and mass loss per unit area after wear. e weight ratio of binder to aggregate is 60 : 40. In the test, the ambient temperature is 30°C, the grounding pressure is 0.7 MPa, and the rotation speed is 60 r/min. e test results are shown in Table 7, and from this, we can see the following conclusions.
(1) With the increase of wear times, the wearing resistance of the color antiwearing thin layer decreases gradually. e texture depth decreases from 1.52 mm to 1.14 mm with only 500 times of wear at the initial Advances in Civil Engineering stage. However, in the later stage, the wear rate is significantly slower, increasing from 500 times to 20,000 times of wear, and the texture depth decreased from 1.14 mm to 0.99 mm ( Figure 16). erefore, the design and evaluation of the antiwearing thin layer should pay attention to the changes in the wear process, rather than focusing on the antiwearing performance at the initial stage. (2) Under the action of traffic load, the wearing resistance performance of the color antiwearing thin layer decreases, which is mainly caused by the drop of aggregate particles in the initial stage, and the mass loss per unit area increases significantly. e mass loss per unit area increases from 0 to 0.015 g/cm 2 after 500 wear times. With the further increase of wear times, the mass loss per unit area slows down, which is mainly caused by the surface structure polishing of ceramic particles.
Conclusion
(1) With the increase of the mass ratio of the epoxy resin to the curing agent and the dosage of the toughening agent, the bonding properties and toughness of epoxy resin adhesives were firstly improved and then decreased. e toughening effect of phenolic amine in the curing agent was only reflected in a certain range. When it exceeded this range, the increase of bending strength was limited, while the drawing strength and shear strength decreased slightly. Diluent harmed the mechanical properties of epoxy resin adhesive, but it was recommended to add an appropriate amount of diluent to improve the construction workability. (2) Based on the response surface optimization results, the recommended mass ratio of the epoxy resin to a curing agent, mass ratio of phenolic amine to polyamide, toughening agent, and diluent were 2.971%, 1.887%, 2.455%, and 1.000%, respectively. (3) Adhesion tests showed that temperature had a significant effect on the adhesion between the adhesive and the painted ceramic particles. e adhesion of the thin layer was decreased at low temperature, and the sensitivity to impact load was enhanced. However, both aspects were improved when the temperature exceeded 20°C. e accelerated wear test showed that, with the increase of wear times, the structure depth of the color antiwearing thin layer specimen first decreased rapidly and then decreased slowly, and the mass loss per unit area first increased rapidly, then increased slowly, and finally tended to a certain value. is showed that the color antiwearing thin layer could still maintain good skid resistance performance after repeated wear, which provided a guarantee for driving safety.
Data Availability
All the data in this paper are checked, which are obtained from tests in this study, and no other data were used to support this study.
Conflicts of Interest
e authors declare that there are no conflicts of interest regarding the publication of this paper. | 5,511.8 | 2021-06-10T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Suppression of Hepatic Bile Acid Synthesis by a non-tumorigenic FGF19 analogue Protects Mice from Fibrosis and Hepatocarcinogenesis
Critical regulation of bile acid (BA) pool size and composition occurs via an intensive molecular crosstalk between the liver and gut, orchestrated by the combined actions of the nuclear Farnesoid X receptor (FXR) and the enterokine fibroblast growth factor 19 (FGF19) with the final aim of reducing hepatic BA synthesis in a negative feedback fashion. Disruption of BA homeostasis with increased hepatic BA toxic levels leads to higher incidence of hepatocellular carcinoma (HCC). While native FGF19 has anti-cholestatic and anti-fibrotic activity in the liver, it retains peculiar pro-tumorigenic actions. Thus, novel analogues have been generated to avoid tumorigenic capacity and maintain BA metabolic action. Here, using BA related Abcb4−/− and Fxr−/− mouse models of spontaneous hepatic fibrosis and HCC, we explored the role of a novel engineered variant of FGF19 protein, called FGF19-M52, which fully retains BA regulatory activity but is devoid of the pro-tumoral activity. Expression of the BA synthesis rate-limiting enzyme Cyp7a1 is reduced in FGF19-M52-treated mice compared to the GFP-treated control group with consequent reduction of BA pool and hepatic concentration. Treatment with the non-tumorigenic FGF19-M52 strongly protects Abcb4−/− and Fxr−/− mice from spontaneous hepatic fibrosis, cellular proliferation and HCC formation in terms of tumor number and size, with significant reduction of biochemical parameters of liver damage and reduced expression of several genes driving the proliferative and inflammatory hepatic scenario. Our data bona fide suggest the therapeutic potential of targeting the FXR-FGF19 axis to reduce hepatic BA synthesis in the control of BA-associated risk of fibrosis and hepatocarcinoma development.
Hepatocellular carcinoma (HCC) is the sixth most common malignancy and the third most frequent cause of cancer-related death 1 . The lack of effective therapeutic options makes the quest for novel putative treatment strategies of paramount importance. The gut-liver axis homeostasis relies on a tight control of bile acid (BA) levels in order to avoid BA overload that is critical in the pathogenesis of hepatic diseases.
BAs are the end products of cholesterol catabolism, synthesized in the liver and released into the small intestine after meal ingestion. BA production and circulation are tightly regulated via the nuclear receptor, farnesoid X receptor (FXR). In the liver, FXR reduces conversion of cholesterol to BAs by downregulating the rate limiting enzyme of BA synthesis cytochrome P450 A1 (CYP7A1), via the small heterodimer partner (SHP). Moreover, FXR promotes hepatic bile secretion by increasing the expression of crucial BA transporters. In the enterocytes, BA-bound FXR induces the transcription of the fibroblast growth factor FGF15/19 (mouse and human, respectively), an enterokine secreted into the portal circulation, able to reach the liver and bind to the fibroblast growth factor receptor 4 (FGFR4)/β-Klotho complex. This initiates a phosphorylation cascade in the c-jun N-terminal kinase-dependent pathway, ultimately inhibiting CYP7A1 expression hence BA synthesis 2 , a mechanism working in synergy with the hepatic FXR-SHP-dependent one 3 Altered BA signaling in the liver and intestine is associated with severe diseases including the development of cholestasis and HCC [4][5][6][7][8] . Hepatic diseases causing intrahepatic cholestasis, such as the progressive familial intrahepatic cholestasis type 2 and 3 (PFIC2-3) caused by the multidrug resistance protein 3 (MDR3) deficiency, represent a specific risk of HCC development, especially in children 9 . ATP-binding cassette transporter (Abcb4) −/− and Fxr −/− mice are commonly used as elective models of HCC development 8,[10][11][12][13][14][15] . Abcb4 −/− mice lack the liver-specific permeability-glycoprotein responsible for phosphatidylcholine flippase on the outer leaflet of the hepatocyte canalicular membrane and therefore for secretion of phosphatidylcholine in bile. The absence of phospholipids from bile causes bile regurgitation into the portal tracts 16 , inducing accumulation of toxic BA levels, and consequent fibrosis that leads to hepatocyte dysplasia first and HCC at 12-15 months of age, mimicking human progressive familial intrahepatic cholestasis 17 . Fxr −/− mice exhibit increased BA pool size and display cell hyperproliferation leading to the development of spontaneous HCC at 12 months of age 18 .
Strategies aimed at limiting BA overload are anticipated to provide hepatoprotection, as earlier reported in Fxr −/− mice treated with BA-sequestering agents 8 . We have recently shown that specific intestinal Fxr activation is sufficient to restore BA homeostasis in Fxr −/− mice, thus protecting them from age-related hepatic inflammation, fibrosis, and cancer 18 . Also, we have recently shown that long-term administration of a Fxr agonist enriched diet prevents spontaneous hepatocarcinogenesis in Abcb4 −/− mice via Fxr-Fgf15-dependent suppression of hepatic Cyp7a1 19 .
The discovery of the role of FXR target gene, FGF15/19, in the feedback regulation of hepatic BA synthesis shed light on the physiological relevance of the crosstalk between the liver and intestine in the context of BA homeostasis 2,20,21 . FGF19 has also been implicated in HCC development. In fact, it is amplified in HCC and its expression is induced in liver of patients with extrahepatic cholestasis [22][23][24] . Interestingly, induction of Fgf15 expression in mice by intestinal Fxr overexpression protects against cholestasis and fibrosis, along with a reduction of the BA pool size 25 suggesting that modulation of FGF19 levels could offer benefits in a plethora of BA-related metabolic disorders. However, despite its protective action, FGF19 has been shown to be protumorigenic and accelerate hepatic tumour formation in Abcb4 −/− mice in an FGFR4-dependent fashion 26,27 raising doubts on the safety of a chronic administration of this hormone 28 . Recently, the generation of a FGF19 variant (M70) that is equally effective as endogenous Fgf15/19 in terms of bile acid metabolic regulatory actions but does not show any pro-tumorigenic activity in 8 months old Abcb4 −/− mice 27 was described. Moreover, administration of FGF19-M70 in healthy human volunteers potently reduces BA synthesis 29 . This data provided us with the impetus to investigate the potential protective role of a novel non-tumorigenic FGF19 variant in spontaneous HCC development during BA dysregulation.
In the present work, we show for the first time that the non-tumorigenic variant of FGF19, namely FGF19-M52, retaining its intrinsic metabolic effects on Cyp7a1 repression and consequent reduction of hepatic BA synthesis, protects Abcb4 −/− and Fxr −/− mice against spontaneous hepatocarcinogenesis thus electing hepatic BA suppression as a metabolic strategy to prevent fibrosis and HCC in susceptible models.
M52 is a non-tumorigenic variant of FGF19 that retains activity in regulating BA synthesis.
The novel engineered variant of the FGF19 protein M52 has been recently generated. M52 differs from wildtype FGF19 by five amino acid substitutions (A30S, G31S, H33L, V34L, H35Q) and five-amino acid deletion at the N terminus (Fig. 1a). In order to characterize the M52 variant and compare it to the full length FGF19, we tested CYP7A1 repression in primary human hepatocytes. qRT-PCR analysis shows that relative mRNA levels of CYP7A1 do not change in both FGF19 and M52 cell treatment, indicating that the M52 variant retains his biological activity of repression of de novo BA synthesis (Fig. 1b). Further in vivo analysis in db/db mice revealed that M52 is present in plasma as well as FGF19 (Fig. 1c). Moreover, while FGF19 administration causes an increased number of tumors per liver as well as a raised liver weight and liver/body weight ratio compared to controls, the M52 variant does not show any tumorigenic activity ( Fig. 1d-f, respectively).
FGF19-M52 metabolically protects Abcb4 −/− mice from age-related liver damage. The non-tumorigenicity of the novel FGF19-M52 variant and its concomitant ability to maintain CYP7A1 repression, prompt us to explore its metabolic and antitumoral ability in aged Abcb4 −/− mice, a murine model of impaired BA homeostasis-induced HCC. Aged Abcb4 −/− mice have been elected as a unique animal model for studying HCC pathogenesis because they resemble many features of human HCC progression. 100% of aging Abcb4 −/− mice display metabolic derangement due to liver inflammation and toxicity induced by a progressive increase and accumulation of BAs. ELISA measurement shows a strikingly higher amount of FGF19 in adenovirus (AAV)-FGF19-M52-treated mice compared to AAV-green fluorescent protein (GFP) controls (AAV-GFP 8.38 ± 6.28 vs AAV-FGF19-M52 434.5 ± 83.65 pg/ml). Compared to the control group, AAV-FGF19-M52-injected mice displayed a significant lower hepatic mRNA expression of the key limiting enzyme of BA synthesis Cyp7a1 (Fig. 2a). Also, the cytochrome P450 family 8 subfamily B member 1 (Cyp8b1)another critical enzyme controlling the ration between Cholic Acid (CA) and Chenodeoxyxholic acid (CDCA) by regulating the synthesis of CA -results inhibited in AAV-FGF19-M52-treated mice compared to control (Fig. 2b). These changes were translated in an extremely powerful reduction of the plasmatic total BA pool size (AAV-GFP 74.65 ± 12.54 vs AAV-FGF19-M52 0.63 ± 0.11 μM) and a shift in both plasma and liver BA composition to a more hydrophilic BA pool profile due to the enrichment in muricholic acid (MCA) (Fig. 2c, Tables 1 and 2).
In order to investigate whether the metabolic changes induced by the FGF19-M52 analogue would translate in protection from HCC occurrence, macroscopic analysis of 16 months old Abcb4 −/− mice liver was performed at the day of sacrifice. Earlier studies suggested that attenuation of systemic BA overload reduce number and size of liver tumors 8,18 . As expected, 100% of Abcb4 −/− mice injected with the control adenovirus AAV-GFP showed macroscopically identifiable tumours. Remarkably, almost no tumors could be identified in injected with AAV-FGF19-M52 compared to controls (Fig. 2d), and when a tumor was borne in these mice it was significantly smaller compared to control mice. This protection was accompanied by a great reduction of plasma biochemical parameters of liver damage, as indicated by a decrease of the hepatic enzymes aspartate transaminase (AST), alanine aminotransferase (ALT) and alkaline phosphatase (ALP) in FGF19-M52 mice compared to GFP-treated mice (Fig. 2e). We also analysed liver morphology and HE staining shows liver injury and disrupted hepatic parenchymal structure in GFP-injected Abcb4 −/− mice, while FGF19-M52-injected mice display a more preserved hepatic parenchyma and less inflammatory infiltrates (Fig. 3a). This is accompanied by a reduction of the immune response regulators C-type lectin domain family 7 member A (Clec7a) and monocyte chemo-attractant protein 1 (MCP-1) (Fig. 3b).
FGF19-M52 protects Abcb4 −/− mice from hepatic collagen deposition and fibrosis. Liver fibrosis results from chronic damage to the liver in conjunction with the accumulation of extracellular matrix (ECM) proteins, which is a characteristic of most types of chronic liver diseases 30 . Lack of the Abcb4 gene elicits a plethora of detrimental cell responses, including hepatic fibrosis, thus laying the foundation to hepatocarcinogenesis. In order to explore the effect of our FGF19 analogue on fibrogenesis we examine the extent of hepatic collagen deposition in liver isolated from aged Abcb4 −/− mice injected with or without M52. Liver immunohistochemical analysis with Sirius Red staining and quantification of the signal revealed that livers from M52-treated Abcb4 −/− mice were less fibrotic as compared to GFP mice ( Fig. 3f,g). This finding was paralleled by mRNA inhibition of Collagen type 1 alpha 1 (Col1a1) in M52 Abcb4 −/− mice compared to the control group (Fig. 3h).
FGF19-M52 protects Abcb4 −/− mice from overexpression of HCC oncogenes.
Alterations of cell-cycle-related genes have been documented in hepatocarcinogenesis 31,32 as well as a compensatory proliferative response to BA-induced hepatocellular damage, thus providing evidence for a prognostic role of G1-S modulators in HCC. Abcb4 −/− mice also present with cell hyperproliferation. CyclinD1 (Ccnd1) is a key regulator of cell cycle progression, and its overexpression has been reported to be sufficient to initiate hepatocellular carcinogenesis 33 . Accordingly, mouse models of disrupted BA homeostasis, such as Fxr −/− and Shp −/− mice, display enhanced Ccnd1 expression 34,35 . AAV-FGF19-M52 lowered Ccnd1 protein, as shown by immunohistochemical analysis, and transcript in aged Abcb4 −/− mice (Fig. 3c,d). Furthermore, dysregulated cyclinE1 (Ccne1) expression, as well as c-myc gene, have been shown to act as potent oncogenes, and amplification of both genes promotes HCC formation 36,37 . mRNA analysis also revealed a marked inhibition of Ccne1 and c-myc expression in AAV-FGF19-M52-injected Abcb4 −/− mice compared to controls (Fig. 3e).
FGF19-M52 protects Fxr −/− mice from HCC. We have previously shown that intestinal-specific Fxr reactivation, and in particular the entero-hepatic Fxr-Fgf15 axis activation, is able to prevent liver damage and its spontaneous hepatocarcinogenesis progression even in the absence of hepatic Fxr. In order to bypass the Fxr involvement and corroborate the importance of the intestinal hormone FGF19, we performed the same (Fig. 4a,b). These changes were translated in a reduction of the plasmatic total BA pool size (AAV-GFP 11.48 ± 13.88 vs AAV-FGF19-M52 8.20 ± 3.59 μM) and a shift in both plasma and liver BA composition to a more hydrophilic BA pool profile due to the enrichment in MCA (Fig. 4c, Tables 1 and 2). 14 months old AAV-FGF19-M52 Fxr −/− mice presented with a striking lower number of macroscopically visible liver tumors and smaller in size compared to controls (Fig. 4d). This was accompanied by a significantly decreased level of plasma ALP level and a lowered trend of ALT and AST compared to control mice (Fig. 4e). Liver morphology and HE staining shows liver injury and disrupted hepatic parenchymal structure in GFP-injected Fxr −/− mice, in contrast with a more preserved hepatic parenchyma and less inflammatory infiltrates observed in FGF19-M52-treated mice (Fig. 5a). In parallel, a decrease in Clec7a and MCP-1 was observed in FGF19-M52-injected mice compared to the control group (Fig. 5b). Previous studies have shown that, during aging, Fxr −/− mice are characterized by enhanced fibrogenesis and cell proliferation 8,11,38,39 , therefore we performed Sirius Red and Ccnd1 stainings along with q-RTPCR analysis. M52-treatment revealed a protection in terms of fibrosis and collagen deposition at (Fig. 5f-h) as well as significantly lower hyperproliferative molecular status compared to GFP-treated controls as indicated by a decrease in protein and mRNA levels of Ccne1, p21 and c-myc (Fig. 5c-e).
Discussion
FGF19 is a post-prandial enterokine and a cornerstone of BA synthesis control, also regulating carbohydrate, lipid and energy homeostasis 40,41 . The landmark discovery of the FXR-FGF19 axis in the regulation of the BA homeostasis core 2,20,21,42 opened new avenues for intestinal-specific therapeutic management of chronic diseases of the gut-liver axis. The therapeutic exploitation of the intestinal FXR/FGF19 axis activation in cholestasis 25 and its translation into protection from HCC development in Abcb4 −/− mice 19 prompt us to further examine the feasibility of a FGF19-based therapy in chronic liver and intestinal disease. However, literature data are debatable whether the enterokine FGF19 is per se implicated [22][23][24]26,28 or not 43,44 in HCC development. The in-depth characterization of FGF19 molecular structure and function allowed us to design a novel FGF19-based pre-clinical therapeutic agent uncoupling its metabolic activities from the proliferative ones. In fact, another novel engineered variant of FGF19 protein (M70) which fully retains BA regulatory activity but is devoid of murine pro-tumoral activity has been recently identified 45 . M70 differs from wild-type FGF19 by three amino acid substitutions (A30S, G31S, H33L) and a five amino acid deletion at the N terminus (P24-S28). Zouh et al. demonstrated that M70 interacts with the FGFR4 receptor and exhibits the pharmacologic characteristics of a biased ligand that selectively activates certain signalling pathways (e.g., cytochrome P450 7A1, phosphorylated extracellular signal-regulated kinase) and exclusion of others (e.g., tumorigenesis, phosphorylated signal transducer and activator of transcription 3) 45 . The identification of these types of FGF19 variants, including our M52, has greatly helped in overcoming the severe side effect of therapies targeting FGF19 46,47 , due to derangement in the gut-liver axis regulation of BA homeostasis and consequent development of liver toxicity and diarrhoea. The therapeutic potential of FGF19-M70 analogue has been well characterized and, unlike the natural form of FGF19, it does not show any pro-tumorigenic activity in 8 months old Abcb4 −/− mice, age at which these mice have not developed hepatic tumours yet 27 .
Here, we study the novel non-tumorigenic analogue FGF19-M52 as putative drug to inhibit Cyp7a1, reduce BA synthesis and eventually protect against BA-induced cancer in the liver. To this end, we show for the first time that the FGF19-M52 analogue protects against spontaneous hepatic tumorigenesis in 16 months old mice Abcb4 −/− , through the reduction of BA concentrations and modification of the BA pool. Moreover, our results demonstrate that M52 retains control of BA negative feedback of the gut-liver loop and prevents BA-induced spontaneous hepatocarcinogenesis in 14 months old Fxr −/− mice. Moreover, FGF19-M52 is also able to reduce Cyp8b1 expression, while no changes in the expression of BA transporters (Bsep, Ntcp, Oatp1 and Oatp2) and FGF19 receptors (Fgfr4 and β-klotho) were found (data not shown), indicating that FGF19-M52 globally reduces BA levels without a direct transcriptional impact on their secretory or transport genes.
It is well known that both Fxr and Abcb4 genes ablation in mice leads to liver damage, fibrosis, cholestasis and spontaneous HCC induced by high level of hydrophobic cytotoxic BAs. Indeed, the absence of Abcb4 leads to accumulation of intraductal and biliary BAs that in absence of phosphatidylcholine are cytotoxic and exert their detergent activity that represent the primum movens for the sequaela of events that lead to HCC. On a different angle, the absence of Fxr leads to de-repression of hepatic Cyp7a1 with consequent potent increased BA synthesis and concentration systemically and within the liver. These events represent the leading step for liver damage, inflammation and fibrosis that bring Fxr −/− mice to spontaneous HCC formation. FGF19-M52-dependent inhibition of Cyp7a1 and the consequent decrease of intrinsically harmful BA pool size and composition protect from hepatic tumour formation, even in the absence of Fxr. Mechanistically, the decrease of BA chronically high level and the shift of their composition into a more favourable hydrophilic one result in an outstanding inhibition of hepatic fibrosis that promptly translates into a blockage of overexpression of the typical HCC oncogenes Ccnd1, c-myc and others. Thus, in both models, FGF19-M52-dependent modulation of BA concentration with reduction of their levels and toxicity protect from liver damage and HCC.
Our findings support the concept that the control of BAs synthesis is definitely of great importance and could effectively reverse Abcb4-and Fxr-deficiency-associated hepatocarcinogenesis, suggesting that multiple metabolic players are involved in the hepatocarcinogenesis-preventing scenario. Our finding leverages the fine effort of testing novel FGF19 engineered variants that could display anti-fibrotic and anti-inflammatory effects but also antitumoral actions, thus opening bona fide a novel pharmacological strategy for example in PFIC patients who are susceptible to HCC formation even in young age. Primary human hepatocytes. Primary hepatocytes from human livers (Life Technologies) were plated on collagen I-coated 96-well plates (Becton Dickinson) and incubated overnight in Williams' E media supplemented with 100 nM dexamethasone and 0.25 mg/mL matrigel. Cells were treated with recombinant M52 or FGF19 proteins for six hours before lysis. Cells were treated with recombinant M52 or FGF19 proteins for six hours before lysis. Chemicals. CA and other endogenous BAs were purchased from Sigma-Aldrich (St. Louis, MO). All solvents were of high purity and used without further purification. Acetonitrile for HPLC was from Merck (Darmstadt, Germany); methyl alcohol RPE, ammonia solution 30% RPE, glacial acetic acid RPE were from Carlo Erba Reagent (Milan, Italy); activated charcoal was from Sigma-Aldrich; and ISOLUTE C18 cartridges (500 mg, 6 ml) for the plasma sample pretreatment were purchased from StepBio (Bologna, Italy). Plasma BA free rat plasma was treated with activated 50 mg/ml charcoal and stirred at 4°C overnight. After centrifugation at 3000 g for 5 minutes the plasma was filtered through Millipore A10 Milli-Q Synthesis (0.45 µm) and stored at −20° C.
Measurement of Plasma
Bile Acid Measurements. Plasma and hepatic BAs were identified and quantified by high-pressure liquid chromatography-electrospray-mass spectrometry/mass spectrometry (HPLC-ES-MS/MS) by optimized methods 48 suitable for use in pure standard solution, plasma and liver samples after appropriate clean-up preanalytical procedures. Liquid chromatography analysis was performed using an Alliance HPLC system model 2695 from Waters combined with a triple quadruple mass spectrometer QUATTRO-LC (Micromass; Waters) using an electrospray interface. The analytical column was a Waters XSelect CSH Phenyl-hexyl column, 5 µm, 150 × 2.1 mm, protected by a self-guard column Waters XSelect CSH Phenyl-hexyl 5 µm, 10 × 2.1 mm. BAs were separated by elution gradient mode with a mobile phase composed of a mixture ammonium acetate buffer 15 mM, pH 8.0 (Solvent A) and acetonitrile:methanol = 75:25 v/v (Solvent B). Chromatograms were acquired using the mass spectrometer in multiple reaction monitoring mode. Plasma and hepatic bile acids were extracted using a standard, previously validated protocol 19 .
mRNA extraction and quantitative real time qRT-PCR analysis. Total RNA was isolated from tumor-free livers using RNeasy Micro kit (Qiagen, Milano, Italy). cDNA was generated from 4 µg total RNA using High Capacity DNA Archive Kit (Applied Biosystem, Foster City, CA) and following the manufacturer's instructions. mRNA expression levels were quantified by qRTPCR using Power Syber Green chemistry and normalized to cyclophillin mRNA levels. Relative quantification was performed using the ΔΔCT method. Validated primers for real time PCR are available upon request.
Histology and Immunohistochemistry. Macroscopically tumor-free liver samples were fixed in 10% buffered formalin for 24 hours, dehydrated, and embedded in paraffin. Five-micrometer-thick sections were stained with hematoxylin-eosin (H&E) following standard protocols. Liver fibrosis was analyzed with Sirius Red by using Direct Red 80 and Fast Green FCF (Sigma Aldrich, Milan, Italy). Hepatocyte proliferation was assessed by immuno-histochemical detection of cyclin D1 (CCND1). All stained sections were analyzed through a light microscope. Histological features of hepatic disease have been assessed according to Histological Scoring System by Kleiner DE et al. 49 and Brunt EM et al. 50 .
Statistical Analysis. All measurements were performed in technical triplicates. All results are expressed as means ± standard error of the mean (SEM). Significant differences between three groups were determined by one-way ANOVA followed by Dunnett's post hoc test, while between two groups were determined by Mann-Whitney's U test. All statistical analyses were performed with GraphPad Prism software (v5.0; GraphPad Software Inc., San Diego, CA) and conducted as a two-sided alpha level of 0.05.
Ethics Statement. The Ethical Committee of the University of Bari approved this experimental set-up, which also was certified by the Italian Ministry of Health in accordance with internationally accepted guidelines for animal care. | 5,149.2 | 2018-11-07T00:00:00.000 | [
"Medicine",
"Biology"
] |
Arc-transitive dihedral regular covers of cubic graphs
A regular covering projection is called dihedral or abelian if the covering transformation group is dihedral or abelian. A lot of work has been done with regard to the classification of arc-transitive abelian (or elementary abelian, or cyclic) covers of symmetric graphs. In this paper, we investigate arc-transitive dihedral regular covers of symmetric (arc-transitive) cubic graphs. In particular, we classify all arc-transitive dihedral regular covers of K4, K3,3, the 3-cube Q3 and the Petersen graph.
Introduction
Covering techniques are known to be a useful tool in algebraic and topological graph theory.Application of these techniques has resulted in many important examples and classification of certain families of graphs with particular symmetry properties.For example, Djoković used graph covers to prove that there exist infinitely many 5-arc-transitive cubic graphs, as elementary abelian covers of Tutte's 8-cage.
Recently, quite a lot of attention has been paid to the classification of arc-transitive covers of symmetric graphs.Approaches have involved voltage graph techniques (see [9]) and universal group methods (see [3]).In most cases, the group of covering transformation is either cyclic or elementary abelian, or more generally abelian.These methods have been successfully applied in the classification of arc-transitive elementary abelian or abelian covers of symmetric cubic graphs, such as the complete graph K 4 , the complete bipartite graph K 3,3 , the 3-cube graph Q 3 , the Petersen graph and the Heawood graph and so on.
In this paper, we are aiming to extend our research on arc-transitive abelian covers to non-abelian covers which is harder and has not been previously considered.We begin with some further background in Section 2, and determine the arc-transitive cyclic regular covers of the Möbius-Kantor graph and the Desargues graph in Sections 3 and 4, respectively.In Section 5, we deal with dihedral covers, and give a complete classification of arc-transitive dihedral covers of K 4 , K 3,3 , Q 3 and the Petersen graph.
Preliminaries
Throughout this paper, all the graphs are finite and simple.A covering projection is defined as a graph homomorphism p : X → X which is surjective and locally bijective, which means that the restriction p : N (ṽ) → N (v) is a bijection whenever ṽ is a vertex of X such that p(ṽ) = v ∈ V (X).We call X the base graph, X a covering graph.A covering projection p : X → X is called regular if there exists a semi-regular subgroup N of the automorphism group Aut( X) of X such that the quotient graph X/N (with vertices taken as the orbits of N , and two vertices adjacent whenever there exists an edge between these two N -orbits) is isomorphic to X.In that case we call X a regular cover of X.The regular covering projection is called dihedral (or cyclic) if N is a dihedral (or cyclic) group.Similarly, we say a regular covering projection is abelian (or elementary abelian) when the group N is abelian (or elementary abelian).
Let p : X → X be a covering projection, and suppose α and β are automorphisms of X and X such that α • p = p • β, that is, such that the following diagram commutes: Then we say that α lifts along p to β, and β projects to α, and also we call β a lift of α, and α a projection of β.Note that α is uniquely determined by β, but β is not generally determined by α.The set of all lifts of a given α ∈ Aut(X) is denoted by α.If every automorphism of a subgroup G of Aut(X) lifts, then α∈G α is a subgroup of Aut( X), called the lift of G.
In particular, the lift of the identity subgroup of Aut(X) (or equivalently, the subgroup of all automorphisms of X that project to the identity automorphism of X) is called the group of covering transformations, or voltage group, and is sometimes denoted by CT(p).The normalizer of CT(p) in Aut( X) projects to the largest subgroup of Aut(X) that lifts.Hence in particular, if the latter subgroup is A, say, then the lift of A has a normal subgroup CT(p) with quotient isomorphic to A.
Two regular covering projections p : Y → X and p : Y → X are called isomorphic if there exist graph isomorphism θ : Y → Y and graph automorphism θ : X → X such that θp = p θ.In particular, isomorphic covering projections p and p are called equivalent, if θ is the trivial automorphism.Similarly, two regular covers Y and Y are called equivalent if the two regular covering projections p and p are equivalent.Usually, regular covers are studied up to equivalence.
For every symmetric cubic graph, we now know that the automorphism group is a quotient of one of seven finitely-presented groups, which can be listed as 4 , G 2 4 and G 5 , and presented as follows (see [5,4]) If a finite group G acts as an s-arc-regular group of automorphisms of a cubic graph X, then G is a smooth quotient of G s or G i s , where i = 1 or 2 depending on whether or not the group contains an involution a that reverses an arc (in the cases where s is even).(Note, 'smooth' here means that the orders of the generators are preserved.)Let U be either G s or G i s , then G is a smooth quotient U/K of U by some torsion-free normal subgroup K.If X is a regular cover of X admitting a group action of the same type, then there exists a normal subgroup L of U contained in K, with U/L being the corresponding group automorphisms of X.Then the group U/L is an extension of the covering group K/L by the given group G = U/K.In order to find all cyclic covers, we need to find all possibilities for L such that K/L is cyclic.The presentation of K can be found using Reidemeister-Schreier Theory, or by use of the Rewrite command in Magma [1].In the cases we will consider, K is a free abelian group of finite rank d, namely the Betti number of the base graph X, with some basis {w 1 , w 2 , • • • , w d }.Algebraic or computational techniques can be applied to find the actions by conjugation of the generators of U on the generators of K.And these actions induce linear transformations of the free abelian group K. Equivalently, a d-dimensional matrix representation of the group G = U/K can be given.Therefore, in order to find all the cyclic covers, we need to find all the Ginvariant subgroups L of rank d−1, equivalently we need to find all the (d−1)-dimensional representation of G.
More details of Conder and the author's universal group method can be seen in [3] and [8].Here we introduce some computational techniques that involves using Magma.To find all finite cyclic regular covers with cyclic covering groups of exponent m, we may consider the action of G by conjugation on the generators of K/K (m) , where K (m) is the characteristic subgroup of K generated by the mth powers of all elements of K. Since K/K (m) is G-invariant, we use Magma to construct a finite group K/K (m) G which is the extension group of G by K/K (m) .Note that this can be done, since both K/K (m) and G are finitely-presented subgroups of U.With a finite group stored in Magma we can use the commands NormalSubgroups and meet to find all the subgroups L of K/K (m) which are normal in K/K (m) G.Note that the 'type' of group K/K (m) G may not work for using the above commands in Magma, then what one needs to do is using the double coset graph construction method (more details can be seen in [3,Section 2]) to transform it into a permutation group (namely, an arc-transitive group of automorphisms).This method works successfully for 'small' integer m.Generally, if m = p e 1 1 p e 2 2 • • • p et t is the prime-power factorisation of m with distinct primes p i , then the factor K/L is a direct product of its Sylow subgroups.It follows that we need only consider the G-invariant subgroups of prime-power index in K/K (m) .
Once all the possibilities for L have been found, we can determine additional information, such as uniqueness up to isomorphism and arc-transitivity of the covering graphs.
Arc-transitive cyclic regular covers of the Möbius-Kantor graph
In this section, we classify all the arc-transitive cyclic covering graphs of the Möbius-Kantor graph GP (8,3).The automorphism group of GP (8, 3) is isomorphic to GL(2, 3) C 2 and acts 2-arc-regularly on the arcs.There are two other 1-arc-regular subgroups GL(2, 3) and SL(2, 3) 2 has two normal subgroups of index 96, both with quotient GL(2, 3) C 2 , but these are interchanged by the outer automorphism that takes the three generators h, a and p to h, ap and p respectively, so without loss of generality we can take either one of them.
We will take the one that is contained in the subgroup Using the Rewrite command in Magma, we find that the subgroup N is free of rank 9, on generators Easy calculations show that the generators h, a and p act by conjugation as below: (Note that the actions of generator ap is just the composition of a and p.) Now take the quotient G 1 /N where N is the derived subgroup of N , which is an extension of the free abelian group N/N ∼ = Z 9 by the group G 1 /N ∼ = GL(2, 3), and replace the generators h, a and all w i by their images in this group.Also let K denote the subgroup N/N , and let G be G 1 /N .Then, in particular, G is an extension of GL(2, 3) by Z 9 .
By the above observations, we see that the generators h, a and p induce linear transformations of the free abelian group K ∼ = Z 9 as follows: , the electronic journal of combinatorics 21(3) (2014), #P3.5 and These matrices generate a group isomorphic to Aut(GP (8, 3)), with the first two generating a subgroup isomorphic to GL(2, 3); and the first and the product of the first and the third generating a subgroup isomorphic to SL(2, 3) C 2 .Note that the matrices of orders 3, 2 and 8 representing h, a and ha have traces −3, −1 and 1, respectively.
Next, the character table of the group GL(2, 3) is given in Table 1, with γ being the zeroes of the polynomial t 2 + 2t + 3.
By inspecting traces, we see that the character of the 9-dimensional representation of GL(2, 3) over Q associated with the above action of G = h, a on K is the character χ 3 + χ 4 + χ 5 + χ 7 , which is reducible to the sum of χ 3 , χ 4 + χ 5 and χ 7 , which are characters of three irreducible 2-, 4-and 3-dimensional representations over the rational field Q. Especially, the 4-dimensional representation is reducible to two 2-dimensional irreducible representations over fields containing zeroes of the polynomial t 2 + 2t + 3. Therefore, for any prime k other than 2 and 3, there is no G-invariant subgroup of rank 8.
For prime integers 2 and 3, with the help of Magma by using the commands GModule and Submodules for matrix groups over prime fields, there is a unique G-invariant subgroup U of rank 8, which is generated by w 1 w 9 , w 2 w −1 9 , w 3 w −1 9 , w 4 w 9 , w 5 w 9 , w 6 w −1 9 , w 7 w −1 9 and w 8 when k = 3.In particular, for prime integer 3 and exponent 3 2 , using the NormalSubgroups command in Magma we can show that there is no normal subgroup of rank 8 and exponent 9.
Next, by calculation, we can see that the subgroup U is also p-invariant for the additional generator p. Hence the full automorphism group GL(2, 3) C 2 can be lifted, and the covering graph is at least 2-arc-transitive.Now we consider the lifting of SL(2, 3) C 2 , which is an 1-arc-regular subgroup generated by the cosets N h and N ap of the quotient G 1 /N .
With the help of Magma, a reduced character table of group SL(2, 3) C 2 is given in Table 2 where δ is a primitive 3rd root and φ is a primitive 4th root.
Therefore, we can see that for prime k / ∈ {2, 3}, if a primitive 3rd root δ exists, K is a direct sum of four G-invariant subgroups of ranks 1, 1, 3 and 4; and if a primitive 4th root φ exists, K is a direct sum of four G-invariant subgroups of ranks 2, 2, 2 and 3. (Note that, here we are only interested in the existence of G-invariant subgroups of rank 8.) In fact, with the help of Magma, if δ exists then these four G-invariant subgroups are generated by {w 1 w δ ) δ .Hence these two cyclic covers are isomorphic, and we now take them as one cover.However, no cyclic covers exist for k / ∈ {2, 3} when lifting the subgroup GL (2,3).Therefore, the cyclic covering graph is 1-arc-transitive but not 2-arc-transitive.
By [4, Proposition 2.3], this covering graph can not be 3-arc-transitive.Suppose this graph is 4-arc-transitive, then it is a cover of the Heawood graph, by [4,Proposition 3.2].Thus the cyclic covering group must be of order 7 e for some e.The full automorphism group of the cyclic cover is of order 16 • 8 • 7 e .Since the 4-arc-transitive symmetric cubic graphs have vertex-stabilizer S 4 , hence the order of cyclic covering graph is equal to 16 • 8 • 7 e /24 which is not an integer, contradiction.Therefore the cyclic covering graph cannot be 4-arc-transitive.Finally, again by [4, Proposition 3.4], if the covering graph is 5-arc-transitive, then it is a cover of the Biggs-Conway graph which is of order 2352.Similar to the above argument, the covering graph cannot be 5-arc-transitive.Therefore, these cyclic covering graphs are 1-arc-transitive.
For either k equal to 2 or 3, similar to the lifting of GL(2, 3) with the help of Magma, there is only one G-invariant subgroup of rank 8 of exponent 3. Hence not only the subgroup SL(2, 3) C 2 can be lifted but also the full automorphism group Aut(GP (8, 3)) can be lifted.In particular, by Conder's list [2] we know that there is only one symmetric cubic graph of order 48; in which case the covering graph is 2-arc-regular.
Theorem 1.Let n = k e be any power of a prime k, with e > 0. Then the arc-transitive cyclic regular covers of the Möbius-Kantor graph with cyclic covering group of exponent n are as follows: (1) For k ≡ 1 mod 3, only the subgroup SL(2, 3) C 2 can be lifted, and there is one 1-arc-regular cover.
Arc-transitive cyclic regular covers of the Desargues graph
In this section, we classify all the arc-transitive cyclic regular covering graphs of the Desargues graph GP (10, 3).The automorphism group of GP (10, 3) is isomorphic to S 5 × C 2 and acts 3-arc-regularly on the arcs.There are two other 2-arc-regular subgroups S 5 and Using the Rewrite command in Magma, we find that the subgroup N is free of rank 11, on generators Easy calculations show that the generators h, a and p act by conjugation as below: (Note that the action of q can be given by the composition apa.)Now take the quotient G 3 /N , which is an extension of the free abelian group N/N ∼ = Z 11 by the group G 3 /N ∼ = S 5 × C 2 , and replace the generators h, a, p and all w i by their images in this group.Also let K denote the subgroup N/N , and let G be G 3 /N .Then, in particular, G is an extension of S 5 × C 2 by Z 11 .
By the above observations, we see that the generators h, a and p induce linear transformations of the free abelian group K ∼ = Z 11 as follows: These matrices generate a group isomorphic to S 5 × C 2 , with the first two generating a subgroup isomorphic to A 5 × C 2 ; and the first and the product of the other two generating a subgroup isomorphic to S 5 .Note that the matrices of orders 3, 2, 2, 6 and 6 representing h, a, ap, hap and (ha) 2 h −1 a have traces −1, −3, −1, 1 and 1, respectively.
By inspecting traces and the character tables (which can be easily given by the CharacterTable command in Magma) of groups A 5 × C 2 and S 5 , we see that the 11dimensional representation of S 5 over Q associated with the above action of h, ap on K is a sum of U and V , which are two irreducible 6-dimensional and 5-dimensional representations over the rational field Q.Also the 11-dimensional representation of A 5 × C 2 over Q associated with the action of h, a on K is a sum of ϕ 1 and ϕ 2 , which are characters of two irreducible 6-dimensional and 5-dimensional representations over the rational field Q.However, in particular, if there exist zeros of the polynomial t 2 − t − 1, ϕ 1 is reducible to a sum of ϕ 1,1 and ϕ 1,2 each of which is a character of an irreducible 3-dimensional representation.
Therefore, for any prime k other than 2, 3 and 5, there is no h, ap -and h, a -invariant subgroup of rank 10; equivalently, no cyclic regular cover exists.
For prime k = 3 and 5, with the help of Magma, there is also no h, ap -and h, ainvariant subgroup of rank 10.Thus there are no cyclic regular covers.
For prime k = 2, with the help of Magma, there are only two h, a -invariant subgroups of rank 10 of exponent 2 and 4, respectively.Thus, correspondingly, there are two the electronic journal of combinatorics 21(3) (2014), #P3.5 cyclic covering graphs of order 40 and 80. Also there are only two h, ap -invariant subgroups of rank 10 of exponent 2 and 4. By Conder's list [2], we know that there are unique symmetric cubic graphs of orders 40 and 80, respectively, each of which is 3-arc-regular.Hence the above two cyclic covering graphs are exactly these two graphs.
Theorem 2. There are only two arc-transitive cyclic regular covers of the Desargues graph, both are 3-arc-transitive, with cyclic covering groups C 2 and C 4 , respectively.
Dihedral regular covers of cubic graphs
First of all, arc-transitive cubic graphs of small order like the complete graph K 4 , the complete bipartite graph K 3,3 , the 3-cube Q 3 and the Petersen graph are well known.The arc-transitive properties of each of the above graphs are as follows.
The complete graph K 4 is 2-arc-regular with automorphism group S 4 , and the only arc-transitive subgroup of automorphisms of K 4 is the subgroup A 4 , which acts regularly on the arcs.The complete bipartite graph K 3,3 is 3-arc-regular.Its automorphism group is the wreath product S 3 C 2 , and this contains three arc-transitive subgroups which act 1-, 2-and 2-arc-regularly on the arcs of K 3,3 , respectively.In particular, two of these three subgroups are minimal, one is the group A 3 C 2 which acts 1-arc-regularly, while the other is (A 3 × A 3 ) C 4 which acts 2-arc-regularly.The 3-cube Q 3 is 2-arcregular, and its automorphism group is the direct product S 4 × C 2 .And the only arctransitive proper subgroups of automorphisms are S 4 and A 4 × C 2 , each of which acts 1-arc-regularly on the arcs of Q 3 .And finally, the Petersen graph is a 3-arc-regular graph.Its automorphism group is the symmetric group S 5 , and the only other arc-transitive subgroup of automorphisms is the subgroup A 5 , which acts 2-arc-regularly.
Before investigating the dihedral covers, we remind readers of the following useful result given by Gardiner and Praeger in [6].Theorem 3. [6] Let Γ be a connected G-symmetric graph of valency p a prime.For each normal subgroup N of G one of the following holds: (a) Γ is N -symmetric; (b) N acts regularly on vertices, so Γ is a Cayley graph for N ; (c) N has just two orbits on vertices and Γ is bipartite; or (d) N has r p + 1 orbits on vertices, the natural quotient graph Γ N on N -orbits is G/N -symmetric of valency p, and Γ is a topological cover of Γ N .Now, suppose graph X is an arc-transitive dihedral regular D n -cover of cubic graph X where dihedral group D n is of degree n (here, we always assume n > 2), then we have the following lemma: Lemma 4. X is a cyclic regular cover of a 2-cover of X.
Proof.Since X is an arc-transitive dihedral cover of X, then there exists an arc-transitive subgroup D n A of Aut( X) which is the lifting subgroup of an arc-transitive subgroup A of Aut(X).Let C n be the cyclic subgroup of D n , then C n is normal in D n A. Especially, the electronic journal of combinatorics 21(3) (2014), #P3.5 Theorem 9.The complete bipartite graph K 3,3 has no arc-transitive dihedral regular cover.
Proof.Suppose K 3,3 has an arc-transitive dihedral regular cover D, then by Lemma 4, D is a cyclic regular cover of a 2-cover of K 3,3 .However, we know that K 3,3 has no arc-transitive 2-cover, and in fact, there is no arc-transitive cubic graph of order 12, contradiction.Hence K 3,3 has no arc-transitive dihedral covering graph.
For the 3-cube graph Q 3 , the classification of arc-transitive dihedral covers is as follows.
Theorem 10.Let X be an arc-transitive dihedral regular D n -cover of the Q 3 , then n is equal to 3.
Proof.We know that each dihedral regular cover of Q 3 is a cyclic regular cover of the Möbius-Kantor graph.In Theorem 1, there are two types of cyclic regular covers of the Möbius-Kantor graph.Firstly, if there exists a primitive 3rd root δ, then the cyclic covering groups of the cyclic covers are generated by u = {w 1 w δ 2 2 w δ 3 w 4 w δ 2 5 w δ 6 w 7 w δ 2 8 w −δ 9 } and v = {w 1 w δ 2 w δ 2 3 w 4 w δ 5 w δ 2 6 w 7 w δ 8 w −δ 2 9 }, respectively.Since these two covering graphs are isomorphic, here we only consider the covering group generated by u.The images of u under the conjugation actions of generators h and ap are u 4 and u −1 .Since SL(2, 3) C 2 = h, ap , there is a unique normal subgroup of order 2 which is generated by (haph) 6 .And the image of u by the conjugation action of (haph) 6 is equal to u 16 6 .Since k ≡ 1 mod 3 but 16 6 + 1 ≡ 2 mod 3. Hence u 16 6 = u −1 , which suggests there is no dihedral normal subgroup of SL(2, 3) C 2 .
Secondly, for k = 3 and the cyclic covering group of order 3, the covering graph is 2-arc-regular and of order 48.With the help of Magma, we can easily verify that its a dihedral regular covering graph of the Q 3 , with automorphism group isomorphic to D 3 (S 4 × C 2 ).
In [7], the author classified all the arc-transitive cyclic covers of the dodecahedron graph, and gave the following result.
Theorem 11. [7] Let n = k be any power of a prime k, with > 0. Then the arctransitive cyclic regular covers of the dodecahedron graph with covering group of exponent n are as follows : Now, we can give the following results for arc-transitive dihedral regular D n -covers of the Petersen graph.
Theorem 13.Let X be an arc-transitive dihedral regular D n -cover of the Petersen graph, then n is equal to either 3 or 6.
Proof.First of all, every dihedral regular cover of the Petersen graph is a cyclic regular cover of a 2-cover of the Petersen graph.And we know that there are two 2-covers of the Petersen graph which are the dodecahedron graph and the Desargues graph.However by Corollary 12, we only need to consider the cyclic covers of the dodecahedron graph.
By Theorem 11, we know that there are only finitely many cyclic covers.For n = 2, by [3], we know that the Petersen graph has a (C 2 ) 2 -cover.For n = 4, the covering graph is of order 80 with automorphism group isomorphic to Q 8 S 5 where Q 8 is the quaternion group of order 8. Hence its a 'quaternion' Q 8 -cover of the Petersen graph instead of a dihedral cover.
Similarly, for n = 3, we have a 2-arc-transitive C 3 -covering graph of automorphism group C 3 (A 5 × C 2 ).With the help of Magma, we have C 3 (A 5 × C 2 ) ∼ = D 3 A 5 which suggests that its a dihedral regular D 3 -cover of the Petersen graph.
Therefore, the Petersen graph only has two dihedral covers with covering groups D 3 and D 6 .
( a ) 3 .
If k = 2, there are exactly two such covers, namely • one 3-arc-transitive cover with covering group Z 2 where = 1, • one 3-arc-transitive cover with covering group Z 4 where = 2. (b) If k = 3, there is exactly one such cover, namely • one 2-arc-transitive cover with covering group Z 3 where = 1.(c) There is no arc-transitive cyclic cover for other prime integer k = 2, Corollary 12.All the arc-transitive cyclic regular covering graphs of the Desargues graph are also arc-transitive cyclic regular covers of the dodecahedron graph.the electronic journal of combinatorics 21(3) (2014), #P3.5
Table 2 :
The character table of the group SL(2, 3) C 2 | 6,253.8 | 2014-07-10T00:00:00.000 | [
"Mathematics"
] |
From Small to Big Data: paper manuscripts to RDF triples of Australian Indigenous Vocabularies
This paper discusses a project to encode archival vocabularies of Australian indigenous languages recorded in the early twentieth century and representing at least 40 different languages. We explore the text with novel techniques, based on encoding them in XML with a standard TEI schema. This project allows geographic navigation of the diverse vocabularies. Ontologies for people and place-names will provide further points of entry to the data, and will allow linking to ex-ternal authority. The structured data has also been converted to RDF to build a linked data set. It will be used to calculate a Levenshtein distance between wordlists.
Introduction
Of the several hundred languages spoken in Australia over millennia before European settlement, less than fifty are currently learned by new generations of Aboriginal children. Records of the languages that are no longer spoken everyday are thus extremely valuable, both for those wanting to relearn their own heritage language, and for the broader society who want to know about indigenous knowledge systems. In this paper we discuss current work to encode vocabularies collected by Daisy Bates for a number of indigenous Australian languages, mainly from Western Australia in the early 1900s. These papers have been in the public domain since 1936 and as a collection of manuscripts held in Australian state libraries. We outline the process of creation and naming of the digital images of this paper collection, then show how we have encoded parts of this material, and created novel views based on the encoded formats, including page images with facsimile text. As the project develops we expect to build a model that can be applied to further sections of the collection that are not as well structured as the vocabularies. This work is offered as one way of encoding manuscript collections to provide access to what were otherwise paper artefacts 1 .
The task
The complex problem of using historical records of Australian languages has benefited from the cooperation of a linguist (NT) with a technology expert (CT). The dataset has been constructed according to the TEI Guidelines 2 , to embody both a (partial) facsimile of the original set of typescripts and a structured dataset to be used as a research collection. This material will be open to reuse, in particular providing access for indigenous people in remote areas to vocabularies of their ancestral languages. The model will also be an exemplar of how a text and document-based project, typical of humanities research, can benefit from new methods of encoding for subsequent reuse. For more on the content of the collection see [3].
By processing the wordlists and making them accessible online, we have prepared material that will be of use to indigenous Australians today, as well as creating an open research dataset which may be linked to and from other data. We believe that new digital methods can enrich the metadata and the interpretation of primary records in this collection. There are some 23,000 images on microfilm, and the first task has been to rename all files. Analysis of the texts identified three types of document, the 167 original questionnaires (around 100 pages each), 142 typescript versions of those questionnaires (each made up of varying numbers of pages), and 84 handwritten manu-scripts that could be either questionnaires or additional material.
Any given word in a typescript comes from a predictable location in the associated questionnaire, and so can be assigned an identifier to allow targeted searching. Thus links can be automatically established to display a typescript image page and a questionnaire image page for any target word.
The JPEG files of typescripts were sent to an agency for keyboarding. The XML was subsequently enriched as seen in the snippet in Fig.1.
At the end of the first stage of work we are able to visualise the wordlists in various ways, including a geographic map (Fig. 3), a list of all words and their frequencies, and a list of wordlists and the number of items they contain, in addition to being able to search the whole work for first time. Sorting and arranging the words helps in the correction of errors that inevitably The scale of the Bates dataset requires outsourced transcription, but it is difficult to outsource the full (lexicographic) semantics, that is, capturing the meaning of added entries and examples. This is even more the case as the documents include a great deal of variation, both in their spellings and in their contents, so it is not necessarily easy to interpret them semantically. We focused the outsourced transcription task on a superficial (typographic) encoding of the documents. The encoding captured the tabular layout (i.e. the text is divided into rows and cells), local revisions (i.e. rows added to the table), and pagination. The right-hand column of these tables, generally containing a comma-separated list of indigenous words, was then marked up by an automated process (an XSLT transformation). To explicitly encode the lexicographic data in the forms, we needed to tag each of the words, classify it as either English or indigenous, and hyperlink each indigenous word or phrase to the English words or phrases to which it corresponds.
Given the size of the encoding task, it was essential to minimise the amount of manual work, and reduce the scope for human error, by automating the markup process as much as possible.
The typing did not include identification of the relationships between the words in the lexicon, recognising that it is preferable to use transcribers to capture source texts with a high level of accuracy, but conceptually at a superficial level, and then to add those semantics later, automatically, or using domain experts. We provided our keyboarders with document layout (i.e. pages and tables), rather than linguistic categories (terms and translations).
As an example of the automatic addition of semantic information, we decided to recover the lexicographic semantics implicit in the text by programmatic means, inserting explicit metadata (markup) in the text to record these inferred semantics. This had the additional advantage that the automated interpretation could be revised and re-run multiple times, and the output checked each time. We see the visualisation of the results that is permitted by this work as contributing to the repeated process of correction of the data. The tabular layout itself implies a relationship between a prompt term (in the left hand column of the questionnaire), and one or more terms in the right hand column. The right hand column contains one or more terms in an indigenous language, but in addition it may contain other English words, typically in brackets, or separated from the indigenous words by an "=" sign. (e.g. Sister joo'da, nar'anba = elder (57-033T 4 ) ).
<row> <cell>Snake</cell> <cell> Burling, jundi (carpet), binma, yalun </cell> </row> The left-hand column of the questionnaire form was pre-printed by Bates, for example, in Fig. 2 the printed word was "Snake". The right hand column was to be filled in with the local language term. In this case the recorder wrote Burling, jundi (carpet), binma, yalun. Our aim is to identify which of the words are intended to represent indigenous words, and which (like "carpet") are actually additional English words which specify a refinement of the original term. In this case, the word jundi is specifically for "carpet snake", whereas the other words may refer to snakes more generically.
The intermediate TEI documents (containing automatically-inferred term/gloss markup) will contain errors in many places, due to inconsistency and ambiguity in the source documents.
Those errors became most apparent in the word lists and maps generated in the first phase outputs of the project, as shown in Fig. 3.
Markup: automation and "markup by exception"
The transformation of the base XML files is via an XSLT script that parses the lists into distinct words, and inserts the appropriate hyperlinks to relate each indigenous word to the English word(s) to which it corresponds. Some indigenous words have multiple corresponding English words, separated by commas or sometimes semicolons: Ankle Kan-ka, jinna werree, balgu Occasionally, the word "or" is used before the last item in a list:
Blood Ngooba or yalgoo
Sometimes the right hand column contains additional English language glosses, generally to indicate a narrower or otherwise related term. Most commonly, these additional English glosses were written in parentheses, following the corresponding indigenous word:
Kangaroo
Maloo (plains), margaji (hill) Sometimes an additional English gloss is written before the corresponding indigenous term, and separated with an equals sign (or occasionally a hyphen):
Woman, old
Wīdhu; old man = winja An XSLT script is easily able to handle all these cases, and rapidly produce markup which is semantically correct. However, as the forms were filled out by many different people, inevitably there are some inconsistencies in the text which can lead the XSLT script into error. Sometimes, for instance, the indigenous words are in brackets, rather than the English words. Sometimes the text is written in a style which is just not amenable to parsing with a simple script: Bardari -like a bandicoot, only with long ears and nose.
Bira -also like a bandicoot, but short and thick body, little yellow on back.
In these exceptional cases the easiest thing to do is apply some human intelligence and mark up the text by hand. Figure 3 shows the range of equivalents for the word 'sister' mapped geographically This naturally leads to an iterative data cleaning workflow in which a time-consuming batch process crunches through the documents, gradually enhancing them, performing automated validation checking, and finally generating visualisations for humans to review. We found the data visualisations to be a potent force for quality assurance. It is often very easy to spot interpretive errors made by the automated parser, and that correction can feed back, either as a refinement of the automated process, or as a manual correction of the document markup, leading to a gradual improvement in the data quality.
Conversion to Linked Data
The TEI XML contains a great deal of detail about the questionnaires as texts, such as how the word lists were formatted, punctuated, and paginated, and although this is essential in order to be able to read the questionnaires as texts, for proofing, it is also helpful to be able to abstract away from that contingent information and deal only with the vocabularies as linguistic data. For this purpose, once the TEI XML has been automatically enhanced to include the explicit lexicographic semantics, a final XSLT extracts information from each of the TEI documents and reexpresses it as an RDF graph encoded in RDF/XML, using the SKOS 5 vocabulary for the lexicographical information, and the Basic Geo (WGS84 lat/long) 6 vocabulary for the geospatial location of each vocabulary. The distinct RDF graphs are then merged to form a union graph, by saving them into a SPARQL Graph Store.
Each vocabulary is represented as a SKOS:ConceptScheme, which in turn contains a SKOS:Concept for each distinct concept; either a concept identified by Bates in her original questionnaire, or a concept added during the original interviews.
In addition, a special SKOS:ConceptScheme (called "bates") represents the original blank questionnaire, and functions as a hub in the network of concepts. Each concept in the "bates" vocabulary is explicitly linked (as a SKOS:exactMatch) to the corresponding concept in every one of the indigenous vocabularies.
The concepts in the "bates" vocabulary have labels in English, whereas the corresponding concepts in the other vocabularies are labelled with indigenous words. Many of the concepts in the indigenous vocabularies have multiple labels attached, representing the synonyms recorded in the questionnaires.
Once the RDF graphs are loaded into the SPARQL Store, the union graph can be easily queried using SPARQL. We use SPARQL queries to produce a map of each word, histograms of the frequency of vocabularies containing a given concept, and of the varying conceptual coverage of the different vocabularies. We can also extract the indigenous words in a form convenient for further processing, including computing Levenshtein Distance between vocabularies, to support automated clustering of the vocabularies.
Next steps
Once all the typescripts have been keyboarded we will be in a position to edit the whole collection for consistency. As noted, each wordlist was compiled by different people, and was then typed under Bates's supervision, so having access to both the manuscript and typescript will enable an edition that captures the content more accurately than is currently the case. In phase two, we will implement a framework that allows these images and text to be presented together, and then ex- tend the model into other parts of the Bates collection that also includes words and sentences in Aboriginal languages with the potential that we can attract volunteers (crowdsourcing) to work on transcribing and correcting the content. We will also be in a position to generate similarity measures between vocabularies and from them a multidimensional scaling view of the distance between the vocabularies as in [2], and more recently [1].
Conclusion
With the work undertaken so far it is clear that the process of encoding has led to a deeper understanding of the target material. It has provided novel visualisations and helped us to appreciate the context of the original material. While the more usual approach to archival lexical material has been to extract lexical items into a relational database or spreadsheet, the data could not be coerced into such a form now without a significant amount of interpretation and loss of contextual information.
It would be a mistake to focus immediately on the lexicographic data embedded in the forms, and neglect other aspects of the forms. We have no access to the original language speakers; for us the questionnaires are themselves the data, and we should therefore record the questionnaires, not just the data "contained in" the questionnaires. Further, by maintaining the links back to the primary records we allow users to situate the encoded material in its source. Premature data reduction to cells in a database risks ignoring useful information. The data modelling task aims to capture the data along with all possible context. The use of TEI, rather than, say, a relational database enables that conceptually openended, exploratory and iterative data modelling. | 3,315.2 | 2017-01-01T00:00:00.000 | [
"Computer Science"
] |
A Decomposition Algorithm for Noncrossing Trees
Based on the classic bijective algorithm for trees due to Chen, we present a decomposition algorithm for noncrossing trees. This leads to a combinatorial interpretation of a formula on noncrossing trees of size n with k descents. We also derive the formula for noncrossing trees of size n with k descents and i leaves, which is a refinement of the formula given by Flajolet and Noy. As an application of our algorithm , we answer a question proposed by Hough, which asks for a bijection between two classes of noncrossing trees with a given number of descents.
Introduction
A noncrossing tree (NC-tree for short) is a tree drawn on n points numbered in counterclockwise order on a circle in such a way that its edges are rectilinear and do not cross.We always consider the points labeled counterclockwise from 1 to n and the root labeled 1.The size of a NC-tree is defined as the number of edges.It is well known that the number of NC-trees of size n equals 1 2n+1 3n n , the generalized Catalan number.Noncrossing configurations have been extensively studied, see, for example, [1,3,[5][6][7][8][9][10][11]13].In a NC-tree T , a vertex v is planted at u if (u, v) is an edge and u lies on the unique path from the root to v.Moreover, (u, v) is a descent if u > v. Denote N n,k the number of NC-trees of size n with k descents.Using generating function, Hough [8] pointed out the following formula Theorem 1.1 (Theorem 2.2, [8]).
In this paper, we give a combinatorial interpretation of formula (1.1) by introducing a decomposition algorithm for NC-trees.Moreover, we derive a refined formula on NC-trees of size n with respect to the number of descents and the number of leaves.Notice that this formula is also a refinement of a result given by Flajolet and Noy [6].The second result of this paper is to provide a bijective proof for Question 1.2.Our combinatorial interpretation of formula (1.1) relies on the (l; r)-representation of NC-trees introduced by Panholzer and Prodinger [11].Represent a NC-tree T by a plane tree with each edge labeled by l or r using the following rules: Label an edge by l if it is a descent; Otherwise, label it by r.See Figure 1 for the (l; r)-representation of a NC-tree.Briefly, we call an edge labeled by l or r an l-edge or an r-edge respectively.It is obvious that a plane tree obtained in this way has two properties: (i) the edges adjacent to the root are always r-edges; (ii) for any internal vertex v, the vertices planted at v with l-edges are always to the left of the vertices planted at v with r-edges.In addition, these two properties provide a necessary and sufficient condition for a plane tree to become a NCtree.Note that each l-edge in such a plane tree corresponds to a descent.To construct the decomposition algorithm, we still need the notion of labeled NC-trees.A labeled NC-tree of size n is referred to the (l; r)-representation of a NC-tree of size n with the label set [n + 1] := {1, 2, . . ., n + 1}.See Figure 2 for a labeled NC-tree.Clearly, the number of labeled NC-trees of size n equals (n + 1)! times the number of NC-trees of size n.
A decomposition algorithm for labeled NC-trees
The purpose of this section is to give a decomposition algorithm for labeled NC-trees that leads to a combinatorial interpretation of formula (1.1).Our algorithm is a generalization of the bijective algorithm for plane trees in [2].It should be pointed out that Chen [2] established a more general bijection for trees.
A planted tree is a rooted tree with root degree one, and a planted edge is the edge adjacent to the root.Define a planted tree r-unique if the planted edge is an r-edge and all the other edges are l-edges.
We begin to deal with a labeled NC-tree T of size n with k l-edges.Remind that the label set of T is [n + 1].The first procedure of our algorithm is to decompose T into a forest F on [2n − k], where F is composed of n − k r-unique planted trees.
Decomposition algorithm:
Step 1. Find an r-unique planted subtree of T with planted edge (u, v) such that v is the rightmost child of u and the original label i of u is the smallest.Then we may obtain an r-unique planted subtree with planted edge (u, v).
Step 2. Remove the r-unique planted subtree and relabel the vertex u by n + 2 in T .However, the original label i will be reused for later comparisons of vertices.
Step 3. Repeat the above steps for the remaining tree and subsequently supply n + 3, n + 4, . . ., 2n − k to relabel the encountered vertex u.where a match is a rooted tree with two vertices.
Step 4. Find an l-edge (u, v) such that v is the rightmost child of u, v is a leaf and the label i of u is the smallest.Then we obtain a match with root u.Step 5. Remove the match and relabel the vertex u by 2n − k + 1 in F .However, the label i is still used for later comparisons of vertices.
See Figure 3 (b) for an example of the second procedure.It is clear that for k = 0, the decomposition algorithm for a labeled NC-tree with k descents reduces to the algorithm for a plane tree introduced by Chen [2].We call a match labeled by r or l an r-match or l-match respectively.For k 1, the resulting n matches on [2n] have the following properties: A vertex labeled by a mark * or a mark # is called a * -marked or a #-marked vertex respectively.
Merging algorithm: Step 1. Find the tree T without r-edge and #-marked vertex, such that the root i of T is the smallest.
Step 2. Find the tree T # with the smallest #-marked vertex.Let j # be this marked vertex.
Step 3. If j # is the root of T # , then merge T with T # by identifying i and j # , put the subtrees of T to the right of T # , and keep i as the label of the combined vertex.This operation is called a horizontal merge.If j # appears as a leaf of T # , then replace j # by T in T # .This operation is called a vertical merge.See Figure 4.
Step 4. Repeat the above steps until no #-marked vertex left.
Step 5. Find the tree T without * -marked vertex and the root i of T is the smallest.
Step 6. Find the tree T * with the smallest * -marked vertex j * .If j * is the root of T * , then merge T with T * by applying the horizontal merge.If j * appears as a leaf of T * , then merge T with T * by applying the vertical merge.
Step 7. Repeat Step 5 and Step 6 until we get a labeled NC-tree.Relying on the above algorithms, we arrive at the following result.
Theorem 2.1.The decomposition algorithm and the merging algorithm are inverse to each other.Consequently, for k 1, there is a bijection between the set of labeled NCtrees of size n with k descents and the set M n,k .
The proof of Theorem 2.1 is elementary but tedious, and we present it in Appendix A.
Combinatorial proof of Theorem 1.1.A NC-tree of size n with 0 descent reduces to a plane tree with n edges.It is well known that the set of plane trees of n edges is counted by the n-th Catalan number, see, for example, [12].Hence, the formula (1.1) is clear.We only need to confirm (1.1) for k 1. Keep in mind that the number of labeled NC-trees of size n with k descents equals (n + 1)! times the number of NC-trees of size n with k descents.By Theorem 2.1, it is equivalent to prove the following expression Note that the set of matches in M n,k satisfies conditions (A1)-(A3).Formula (2.1) holds since (i) there are 2n−k n−k ways to choose n − k labels from [2n − k] for the roots of rmatches, (ii) there are n−k 1 ways to choose a label to match 2n for an r-match, (iii) there are n+k−1 k ways to choose k labels from the remaining label set for the roots of l-matches, and (iv) there are (n − 1)! ways to arrange the remaining labels for the leaves of the matches.This gives the number which equals the right hand side of (2.1).
Observe that in the merging algorithm, a leaf with an unmarked label in M is still a leaf in the corresponding labeled NC-tree.We may derive a refined formula on NC-trees with respect to the number of descents and the number of leaves.Theorem 2.2.For k 1, the number of NC-trees of size n with k descents and i leaves is equal to Proof.The proof is similar as Theorem 1.1, and we give a sketch.Note that a labeled NC-tree of size n with k descents and i leaves corresponds to a set of n matches M ∈ M n,k with i unmarked leaves.Consider the set of M in M n,k with exactly i unmarked leaves and t r-matches with unmarked roots, which is counted by By summing over t, we derive that the number of labeled NC-trees of size n with k descents and i leaves is
This implies expression (2.2).
the electronic journal of combinatorics 21(1) (2014), #P1.5 We remark that for k = 0, the number of NC-trees of size n with k descents and i leaves is given by the Narayana number, see [4].The formula (2.2) is also a refinement of a formula obtained by Flajolet and Noy [6, Theorem 1].
By arranging the matches in M ∈ M n,k in terms of the increasing order of their leaves, we obtain a sequence on the roots of these n matches.Assume that the label for the root of an r-match is colored white, while the label for the root of an l-match is colored black and written in boldface.Therefore, M can be represented by a bicolored sequence consisting of n distinct integers such that (B1) there are n − k white elements that belong to the set [2n − k], (B2) there are k black elements that belong to the set [2n − 1], (B3) the last element is white.
Denote P n,k the set of bicolored sequences satisfying conditions (B1)-(B3).It is straightforward to obtain the following correspondence between labeled NC-trees and the bicolored sequences.
Theorem 2.3.For k 1, there is a bijection between labeled NC-tree of size n with k descents and the set P n,k .
A bijective proof of Question 1.2
In this section we present a bijective proof of Question 1.2.To describe our bijection, we first notice that it is equivalent to construct a bijection between the set of labeled NC-trees of size 2k + 1 with k l-edges and the set of labeled NC-trees of size 2k + 1 with k − 1 l-edges.Using Theorem 2.3, it remains to give a one-to-one correspondence between P 2k+1,k and P 2k+1,k−1 .
Let A 2k+1,k denote the set of bicolored sequences π in P 2k+1,k such that a white element i is underlined and a black element j is double underlined.For example, (16, 9, 8, 13, 21, 12, 7, 5, 14, 18, 17) belongs to A 11,5 .By definition, we have Let B 2k+1,k−1 be the set of bicolored sequences σ in P 2k+1,k−1 such that (i) a white element i is underlined, (ii) a white element j different from i is double underlined, (iii) neither i nor j is the last white element in σ.We proceed to construct a bijection Φ : A 2k+1,k → B 2k+1,k−1 , which relies on the partition of A 2k+1,k and B 2k+1,k−1 .Without loss of generality, we may assume that the elements in an integer set are arranged in the increasing order.As mentioned above, the underlined and double underlined elements are always denoted by i and j respectively.Now we need to consider five cases.Case 1. φ 1 : S 1 → T 1 , where • S 1 is the set of bicolored sequences π in A 2k+1,k such that i is not the last white element and j < 3k + 3, • T 1 is the set of bicolored sequences σ in B 2k+1,k−1 such that 3k + 3 is not a white element in σ.
For π ∈ S 3 , assume that j is the m-th black element in π.This implies that 1 m k.Then φ 3 (π) can be generated from π by the following procedures: Step 1. Remove the double underline from j to the m-th white element; Step 2. Change the black element 3k + 3 to a white element 3k + 3; Step 3. Exchange the positions of 3k + 3 and i, where i is still underlined.
For π ∈ S 4 , φ 4 (π) can be generated from π by the following procedures: Step 1. Remove the double underline from j to the m-th white element; Step 2. Replace the black element j by a white element 3k + 3; Step 3. Remove the underline from i to 3k + 3. • S 5 is the set of bicolored sequences π in A 2k+1,k such that 3k + 3 is an element in π, i is not the last white element and j > 3k + 3, • S 6 is the set of bicolored sequences π in A 2k+1,k such that 3k + 3 is not an element in π, i is not the last white element and j > 3k + 3, • S 7 is the set of bicolored sequences π in A 2k+1,k such that • T 5 is the set of bicolored sequences σ in B 2k+1,k−1 such that 3k + 3 is a white element but not the last in σ, i < 3k + 3 and j < 3k + 3.
For π ∈ S 5 ∪ S 6 ∪ S 7 , we have the following three cases.
the electronic journal of combinatorics 21(1) (2014), #P1.5 1.If π ∈ S 5 , assume that j is m-th element in π greater than 3k + 3. Then φ 5 (π) can be generated from π by the following procedures: Step 1. Change the black element 3k + 3 to a white element 3k + 3; Step 2. Remove the double underline from j to the m-th white element, except for i and 3k + 3.
2. If π ∈ S 6 , let B denote the set of elements in π, except for j, greater than 3k + 3. Assume that j is the m-th element in the set {3k + 4, 3k + 5, . . ., 4k + 1} − B. Then φ 5 (π) can be generated from π by the following procedures: Step 1. Replace the black element j by a white element 3k + 3; Step 2. Remove the double underline to the (m + |B|)-th white element, except for i and 3k + 3.
It is easy to see that (i) S i ∩ S j = ∅ holds for 1 i = j 7, (ii) the electronic journal of combinatorics 21(1) (2014), #P1.5 This finishes the proof of Question 1.2.
We only prove Claim 1 for brevity and similarity.Set F ′ the corresponding forest by applying Step 1 -Step 4 of the merging algorithm to M .Note that any #-marked label in an r-match of M appears as a leaf.It follows that each r-match with #-marked label will be merged by adopting a vertical merge in the merging algorithm, and the resulting tree is an r-unique planted tree without #-marked label.This implies that F ′ is composed of r-unique planted trees.To verify Claim 1, it suffices to show that F ′ = F .Remind that the number of l-edges in F is k and the number of trees in F is n − k.We proceed by induction on the number k.
If k = 1, denote F by {T 1 , T 2 , . . ., T n−1 }, where T 1 contains the unique l-edge.It implies that T 1 is composed of two edges.More precisely, an l-edge is attached under an r-edge in T 1 .In addition, any of T 2 , . . ., T n−1 is an r-match.By applying Step 4 -Step 6 of the decomposition algorithm to F , we get M = {T 11 , T 12 , T 2 , . . ., T n−1 }, where T 11 and T 12 are the l-match and the r-match respectively decomposed from T 1 .Conversely, it is easy to see that (2n) # is the unique #-marked label in M and (2n) # appears as the leaf of T 12 .Thus, by applying Step 1 -Step 4 of the merging algorithm, T 11 will be merged with T 12 by a vertical merge.Clearly, this operation recovers F , that is, F ′ = F .
Assume that the result holds with k replaced by k − 1.Then if there are k l-edges in F , let (i, j) be the first edge decomposed from F at the first running of Step 4 -Step 6 in the decomposition algorithm.Notice that (i, j) is an l-edge, j is the rightmost child of i and j is a leaf.Let F 1 be the forest obtained from F by deleting the edge (i, j) and the vertex j.Besides, denote M 1 the corresponding set of matches by applying Step 4 -Step 6 of the decomposition algorithm to F 1 .One sees that there are k − 1 l-edges in F 1 .By the induction hypothesis, we can recover F 1 from M 1 by applying Step 1 -Step 4 of the merging algorithm.
In light of the decomposition algorithm, it is straightforward to derive the following relations between M and M 1 .
(1) (i, j) is an l-match in M but not in M 1 ; (2) The set of other matches in M is the same as M 1 if we replace the label (2n−k +1) # in M by i, and replace the #-marked labels in M 1 in increasing order by (2n−k+2) # , (2n − k + 3) # , . .., (2n) # respectively.
Observe that (i, j) is still the first edge encountered whenever applying the merging algorithm to M (This is implied by the fact that all the l-matches without #-marked vertex are obtained from those leaves that are the rightmost child.Among such matches, (i, j) has the smallest root label).Furthermore, the first merge step for M is the merging of the match (i, j) and the match with the label (2n − k + 1) # .Then clearly, the following merge steps are the same as those for M 1 .Moreover, at any step, j is always the rightmost child of i.The proofs of Claim 3 and Claim 4 are similar to that of Claim 1, and we only present a sketch for the proof of Claim 3. Set T ′ the corresponding tree by applying Step 5 - Step 7 of the merging algorithm to F .Since each tree in F is an r-unique planted tree, we eventually get a NC-tree T ′ by adopting merging algorithm.Now, it suffices to show T ′ = T .We proceed by induction on the number of r-edges in T .
If there is only one r-edge in T , T must be an r-unique planted tree.In terms of the decomposition algorithm, F contains a unique tree T .Hence, we can recover T from F trivially.
For a general labeled NC-tree T , let T be the first r-unique planted subtree of T encountered at the first running of Step 1 -Step 3 in the decomposition algorithm.Moreover, let (i, j) be the planted edge of T .Notice that (i, j) is an r-edge and j is the rightmost child of i.Let T 1 be the tree obtained from T by deleting T but keeping the vertex i.It is clear that the number of r-edges in T 1 decreases by 1. Denote F 1 the corresponding forest by applying Step 1 -Step 3 of the decomposition algorithm to T 1 .By the induction hypothesis, we can recover T 1 from F 1 by applying Step 5 -Step 7 of the merging algorithm.
One can check that the relations between F and F 1 are (1) T is an r-unique planted tree in F but not in F 1 ; (2) the set of other trees in F is the same as F 1 if we replace the labels (n + 2) * in F by i, and replace the * -marked labels in F 1 in increasing order by (n + 3) * , (n + 4) * , . .., (2n − k) * respectively.
In addition, T is still the first tree encountered whenever applying the merging algorithm to F .Furthermore, the first merge step for F is the merging of T and the tree with the label (n + 2) * .After that, the merge steps are the same as those for F 1 .Finally, T ′ can be obtained from T 1 by combining the vertex i of T 1 and the vertex i of T such that j is the rightmost child of i, which implies T ′ = T .
Figure 3 (
Figure 3 (a) is an illustration of the first procedure of the decomposition algorithm.The second procedure of our algorithm is to decompose F into a set M of n matches on [2n],where a match is a rooted tree with two vertices.
6 Figure 3 :
Figure 3: The decomposition algorithm of a labeled NC-tree.
Figure 4 :
Figure 4: A horizontal merge and a vertical merge.
For
example, π = (16, 9, 8, 13, 12, 19, 3, 2, 14, 21, 17) is in A 11,5 .By definition, k = 5 and ([4k + 1] − E(π) − {3k + 3}) ∪ {j} = {1, 3, 4, 5, 6, 7, 10, 11, 15, 20}.It follows that m = 2 and 1 m k.One can verify that π belongs to S 4 .Then φ 4 (π) = (16, 9, 8, 13, 12, 19, 18, 2, 14, 21, 17).Case 5. φ 5 : S 5 ∪ S 6 ∪ S 7 → T 5 , where It follows that the forest F ′ , corresponding to M by Step 1 -Step 4 of the merging algorithm, can be obtained from F 1 by attaching a rightmost child j to i.This gives F ′ = F .Before the first running of Step 5 -Step 7 in the merging algorithm, there are n − k trees and n − k − 1 * -marked vertices in these trees.Thus there must be a tree without * -marked vertex.After each merge, both the number of trees and the number of * -marked vertices decrease by 1.It means that we can always find a tree without * -marked vertex at any step.Therefore, Step 5 -Step 7 in the merging algorithm are feasible.To complete the proof of Lemma .2, it remains to show that Claim 3. if F is a forest obtained from a tree T by applying Step 1 -Step 3 of the decomposition algorithm, then we can recover T from F by applying Step 5 -Step 7 of the merging algorithm; Claim 4. if T is a tree obtained from a forest F by applying Step 5 -Step 7 of the merging algorithm, then we can recover F from T by applying Step 1 -Step 3 of the decomposition algorithm.
The proof of Lemma .2.The feasibility of Step 1 -Step 3 in the decomposition algorithm is obvious.It is routine to check the feasibility of Step 5 -Step 7 in the merging algorithm.the electronic journal of combinatorics 21(1) (2014), #P1.5 | 5,671.4 | 2014-01-12T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |