Highlights from SUSY searches with ATLAS
Supersymmetry (SUSY) is one of the most relevant scenarios of new physics searched for by the ATLAS experiment at the CERN Large Hadron Collider. In this writeup the principal search strategies employed by ATLAS are outlined and the most recent results from analyses targeting SUSY discovery are discussed. A wide range of signatures is covered, motivated by various theoretical scenarios and topologies: strong production, third-generation squarks, long-lived particles and R-parity violation, among others. The results are based on up to ~5 fb−1 of data recorded during 2010-2011 at a centre-of-mass energy of sqrt(s) = 7 TeV by the ATLAS experiment at the LHC.
Introduction
Supersymmetry (SUSY) [1] is an extension of the Standard Model (SM) which assigns to each SM field a superpartner field with a spin differing by a half unit. SUSY provides elegant solutions to several open issues in the SM, such as the hierarchy problem, the identity of dark matter, and grand unification.
SUSY searches in collider experiments typically focus on events with high missing transverse energy (ETmiss), which can arise from (weakly interacting) Lightest Supersymmetric Particles (LSPs), in the case of R-parity conserving SUSY, or from neutrinos produced in LSP decays, when R-parity is broken. Hence, the event selection criteria of inclusive channels are based on large ETmiss, no or few leptons (e, µ), many jets and/or b-jets, τ-leptons and photons. The exact sets of cuts ("signal regions", SRs) are a compromise between the need to suppress events from known SM processes and the need to retain a sufficient number of SUSY events. Typical SM backgrounds are top-quark production (including single top), W/Z production in association with jets, dibosons and QCD multi-jet events. These are estimated using semi- or fully data-driven techniques. Although the various analyses are motivated and optimised for a specific SUSY scenario, the interpretation of the results is extended to various SUSY models or topologies.
A brief summary of recent results (as of June 2012) on searches for SUSY with and without R-parity conservation and for long-lived massive superpartners is presented. The reported results are based on up to 4.7 fb−1 of data from pp collisions at a centre-of-mass energy of √s = 7 TeV recorded in 2010-2011 by ATLAS [2] at the Large Hadron Collider (LHC) [3].
Inclusive channels
Analyses exploring R-parity conserving (RPC) SUSY models are currently divided into inclusive searches for: (a) squarks and gluinos, (b) third-generation squarks, and (c) electroweak production (χ̃0, χ̃±, ℓ̃). Recent results from each category of ATLAS searches are presented in this Section. It is stressed that, although these searches are designed to look for RPC SUSY, interpretation in terms of R-parity violating models is also possible (cf. Sec. 3).
Squarks and gluinos
Strong SUSY production is searched for in events with large jet multiplicities and large missing transverse momentum, with and without leptons. Various channels fall into this class of searches; here two cases are highlighted: the 0-lepton plus jets plus ETmiss channel and the lepton(s) plus jets plus ETmiss channel. In the 0-lepton search [4], events are selected with a jet+ETmiss trigger, applying a lepton veto, requiring a minimum number of jets (two to six), ETmiss > 160 GeV, and a large azimuthal separation between the ETmiss and the reconstructed jets, in order to reject the multi-jet background. Events are analysed in five SRs based on jet multiplicity, which are further divided into a total of eleven channels by using different m_eff(incl.) thresholds. The latter variable is defined as the scalar sum of the transverse momenta of jets with pT > 40 GeV plus the ETmiss. The most important sources of background are estimated with data-driven methods, using measurements in control regions (CRs) and Monte Carlo (MC) predictions for SRs and CRs, with techniques similar to those of the one- and two-lepton search described below. The m_eff(incl.) distributions for data, for various background processes before and after fitting to the CR observations, and for two MSUGRA/CMSSM benchmark model points (m0 = 500 GeV, m1/2 = 570 GeV, A0 = 0, tan β = 10, µ > 0 and m0 = 2500 GeV, m1/2 = 270 GeV, A0 = 0, tan β = 10, µ > 0) are shown in Fig. 1 (left).
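As a concrete illustration of the m_eff(incl.) variable defined above, the short sketch below computes it from a list of jet transverse momenta and the missing transverse energy. The 40 GeV threshold comes from the definition in the text; the function name and the example numbers are illustrative only.

```python
def m_eff_incl(jet_pts_gev, et_miss_gev, pt_threshold_gev=40.0):
    """m_eff(incl.): scalar sum of the pT of jets above threshold plus ETmiss (all in GeV)."""
    return sum(pt for pt in jet_pts_gev if pt > pt_threshold_gev) + et_miss_gev

# Illustrative event: four jets, one of which (35 GeV) falls below the 40 GeV threshold.
print(m_eff_incl([310.0, 180.0, 75.0, 35.0], et_miss_gev=250.0))  # -> 815.0 GeV
```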
Limits on squark and gluino production are set in the absence of deviations from the SM predictions. Figure 1 (right) illustrates the 95% confidence level (CL) limits set in the mSUGRA/CMSSM framework. Exclusion limits are obtained by using the signal region with the best expected sensitivity at each point. In the MSUGRA/CMSSM case, the limit on m1/2 reaches 300 GeV at high m0 and 640 GeV for low values of m0. Squarks and gluinos with equal masses below 1360 GeV are excluded in this scenario. The one- and two-lepton search [5] is motivated by models with SUSY decay chains with intermediate χ̃0, χ̃± or ℓ̃, for which isolated leptons are a clean signature. The SRs are divided into those which require exactly one lepton and three or four jets and those with at least two leptons and two or four jets. Within this search, ATLAS also performs a soft-lepton analysis which enhances the sensitivity in the difficult kinematic region where the neutralino and gluino masses are close to each other, forming the so-called "compressed spectrum". Further cuts are applied on ETmiss, mT, m_eff(incl.) and ETmiss/m_eff, where m_eff is defined as the scalar sum of the ETmiss and the transverse momenta of the selected leptons and jets.
The major backgrounds (tt, W+jets, Z+jets) are estimated by isolating each of them in a dedicated control region, normalising the simulation to data in that control region, and then using the simulation to extrapolate the background expectations into the signal region. The multijet background is determined from the data by a matrix method. All other (smaller) backgrounds are estimated entirely from the simulation, using the most accurate theoretical cross sections available. To account for the cross-contamination of physics processes across control regions, the final estimate of the background is obtained with a simultaneous, combined fit to all control regions.
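The control-region normalisation described above amounts to scaling a simulated background by the data/MC ratio measured in its control region before extrapolating to the signal region. A minimal single-background sketch is given below; it ignores the cross-contamination between processes that the combined fit takes care of, and the numbers are purely illustrative.

```python
def cr_normalised_prediction(n_data_cr, n_mc_cr, n_mc_sr):
    """Signal-region background estimate: MC prediction in the SR rescaled by the
    data/MC normalisation factor measured in the control region."""
    normalisation = n_data_cr / n_mc_cr  # data-driven correction of the MC yield
    return normalisation * n_mc_sr       # extrapolation to the SR via the MC shape

# Illustrative numbers only: 1200 data events in a W+jets control region,
# 1100 expected from MC in the CR and 14.2 expected from MC in the SR.
print(cr_normalised_prediction(1200, 1100.0, 14.2))  # -> about 15.5 events in the SR
```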
Apart from other theoretical models (mSUGRA, GMSB), results are also interpreted under simplified-model assumptions. Figure 2 (left) illustrates the diagrams of two of the topologies used for the interpretation: a one-step q̃L-pair production and a two-step g̃-pair production. In Fig. 2 (right), the excluded cross sections at 95% confidence level for the one-step simplified model of gluino pair production with g̃ → qq̄ χ̃±1 → qq̄ W± χ̃01 are shown. The plots are from the combination of the hard and soft single-lepton channels.
Fig. 2. Left: one-step simplified model with pp → q̃L q̃L* and subsequent decay via charginos (top); two-step simplified model with pp → g̃g̃ and subsequent decays via charginos and sleptons or sneutrinos (bottom). Right: excluded cross sections at 95% confidence level for the one-step simplified model of gluino pair production with g̃ → qq̄ χ̃±1 → qq̄ W± χ̃01 [5]. The chargino mass is set to be halfway between the gluino and LSP masses. The band around the median expected limit shows the ±1σ variations on the median expected limit, including all uncertainties except theoretical uncertainties on the signal. The dotted lines around the observed limit indicate the sensitivity to ±1σ variations of these theoretical uncertainties. The numbers indicate the excluded cross section in femtobarns.
Third-generation squarks
The mixing of left- and right-handed gauge eigenstates which provides the mass eigenstates of the scalar quarks and leptons can lead to relatively light third-generation sparticles. A stop (t̃1) and sbottom (b̃1) with sub-TeV masses are favoured by the naturalness argument, while the stau (τ̃1) is the lightest slepton in many models. Therefore these could be abundantly produced, either directly or through gluino production and decay. Such events are characterised by several energetic jets (some of them b-jets), possibly accompanied by light leptons, as well as high ETmiss. The first analysis presented here [6] uses the full 2011 dataset of 4.7 fb−1 and adopts an improved selection that requires large ETmiss, no electron or muon, and at least three jets identified as originating from b-quarks (b-jets) in the final state. Results are interpreted in simplified models where sbottoms or stops are the only squarks produced in the gluino decays, leading to final states with four b-quarks.
The gluino-sbottom model is an MSSM scenario where the b̃1 is the lightest squark, all other squarks are heavier than the gluino, and m(g̃) > m(b̃1) > m(χ̃01), so that the branching ratio for g̃ → b̃1 b decays is 100%. Sbottoms are produced via g̃g̃ production or by direct b̃1b̃1 pair production and are assumed to decay exclusively via b̃1 → b χ̃01, where m(χ̃01) is set to 60 GeV. Exclusion limits are presented in the (m(g̃), m(b̃1)) plane in Fig. 3 (left).
Fig. 3. Results from searches for gluino-mediated third-generation squarks. The dashed black and solid bold red lines show the 95% CL expected and observed limits respectively, including all uncertainties except the theoretical signal cross-section uncertainty. The shaded (yellow) band around the expected limit shows the impact of the experimental uncertainties, while the dotted red lines show the impact on the observed limit of varying the nominal signal cross section by the 1σ theoretical uncertainty. Left: exclusion limits in the (m(g̃), m(b̃1)) plane for the gluino-sbottom model [6]; previous CDF, D0 and ATLAS results are also shown for reference. Right: exclusion limits in the (m(g̃), m(χ̃01)) plane for the Gtt model [6]; previous ATLAS results are also shown for reference.
A simplified scenario ("Gtt model"), where the t̃1 is the lightest squark but m(g̃) < m(t̃1), is also considered. Pair production of gluinos is the only process taken into account, since the masses of all other sparticles apart from the χ̃01 are above the TeV scale. A three-body decay via an off-shell stop is assumed for the gluino, yielding a 100% branching ratio for the decay g̃ → tt̄ χ̃01. The stop mass has no impact on the kinematics of the decay and the exclusion limits are presented in the (m(g̃), m(χ̃01)) plane in Fig. 3 (right). Furthermore, a search for light top squarks has been performed in the dilepton final state (ee, µµ and eµ) with 4.7 fb−1 [7]. The leading lepton pT is required to be less than 30 GeV, a Z-veto is imposed (|m(ℓℓ) − m(Z)| > 10 GeV) and ETmiss > 20 GeV is required. Good agreement is observed between data and the SM prediction in all three flavour channels. The results are interpreted in the (m(t̃1), m(χ̃01)) plane, shown in Fig. 4, with the chargino mass set to 106 GeV and with the assumption that the decay t̃1 → b χ̃±1 occurs 100% of the time, followed by decay via a virtual W (χ̃±1 → W* χ̃01) with an 11% branching ratio (per flavour channel) to decay leptonically. A lower limit at 95% confidence level is set on the stop mass in this plane using the combination of flavour channels. This excludes stop masses up to 130 GeV (for neutralino masses between 1 GeV and 70 GeV).
Fig. 4. The dashed and solid lines show the 95% CL expected and observed limits, respectively, including all uncertainties except for the theoretical signal cross-section uncertainty (PDF and scale). The band around the expected limit shows the ±1σ result. The dotted ±1σ lines around the observed limit represent the results obtained when moving the nominal signal cross section up or down by the theoretical uncertainty. Also illustrated is the region excluded at the 95% CL by the CDF experiment, where the lowest neutralino mass considered was 44 GeV, indicated by the horizontal dotted line.
Direct weak-gaugino production
Signatures with multiple charged leptons can arise at the LHC through cascade decays of charginos and neutralinos. These weak gauginos can either be produced directly or can result from decays of squarks and gluinos. The analysis presented here [8] consists of a search for direct production of weak gauginos in final states with three leptons and ETmiss at √s = 7 TeV with 2.06 fb−1. In one of the theoretical scenarios considered, the phenomenological minimal supersymmetric Standard Model (pMSSM), a series of simplifying assumptions reduces the 105 parameters of the R-parity conserving MSSM to 19. These assumptions include no new sources of CP violation and degenerate first- and second-generation sfermion masses. This analysis made further assumptions, e.g. tan β = 6 to ensure the same leptonic branching fraction for each flavour, to reduce the number of parameters to three: the U(1) gaugino mass M1, the SU(2) gaugino mass M2, and the higgsino mass |µ|.
The baseline event selection requires three leptons with pT > 10 GeV, ETmiss > 50 GeV, and at least one same-flavour, opposite-charge (SFOC) lepton pair. Two signal regions have been considered, both vetoing jets identified as originating from b-quarks. SR1 is defined by requiring that the invariant mass of the SFOC pair be further than 10 GeV from the Z mass. Conversely, SR2 is defined by requiring the SFOC mass to be within 10 GeV of the Z mass. The SR1 and SR2 selections target SUSY events with intermediate slepton or on-mass-shell Z-boson decays, respectively. In SR1 (SR2), 32 (95) events are observed in data. The total SM prediction is 26 ± 5 (72 ± 12) events. The background-only p-value is found to be 19% (6%). 95% CL limits are set on the parameter space of the pMSSM, as shown in Fig. 5. An upper bound of 9.9 fb (23.8 fb) at 95% CL has been placed on the visible cross section in SR1 (SR2).
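The SR1/SR2 split described above is simply a Z-window test on same-flavour, opposite-charge pairs. The sketch below shows one way such a classification could be coded; it assumes leptons are given as (flavour, charge) tuples together with a helper returning the invariant mass of a pair, and the convention that any pair inside the Z window sends the event to SR2 is an illustrative simplification rather than the published selection.

```python
from itertools import combinations

Z_MASS_GEV = 91.2  # approximate Z-boson mass

def classify_trilepton_event(leptons, pair_mass_gev):
    """leptons: list of (flavour, charge); pair_mass_gev(i, j): invariant mass of leptons i, j.
    Returns 'SR2' if any same-flavour opposite-charge pair is within 10 GeV of mZ,
    'SR1' if all such pairs are further away, or None if no SFOC pair exists."""
    sfoc_pairs = [(i, j)
                  for (i, li), (j, lj) in combinations(enumerate(leptons), 2)
                  if li[0] == lj[0] and li[1] != lj[1]]
    if not sfoc_pairs:
        return None  # fails the baseline selection
    in_window = any(abs(pair_mass_gev(i, j) - Z_MASS_GEV) < 10.0 for i, j in sfoc_pairs)
    return "SR2" if in_window else "SR1"
```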
R-parity violating SUSY
Searches are also performed in ATLAS for several signatures associated with the violation of R-parity (RPV). In one of them, a term W_RPV = λ'_ijk ũ_j d̄_k ℓ_i is introduced into the SUSY Lagrangian, which in turn permits the process dd̄ → e−µ+ via t-channel top squark exchange. In 2.1 fb−1 of data, ATLAS has performed the first search for continuum production of a muon and an electron of opposite sign [9], finding no excess above the SM expectation. As demonstrated in Fig. 6 (left), for a coupling product of |λ'_131 λ'_231| = |λ'_132 λ'_232| = 0.05, such processes are ruled out at 95% CL for m(t̃) < 200 GeV. A search requiring four or more leptons (electrons or muons) in the final state [11] is sensitive to various supersymmetric models, including pair production of strongly interacting SUSY particles with R-parity-breaking decays of a τ̃1 LSP [10]. Moderate missing transverse momentum is expected in the final state due to the presence of neutrinos originating in the decay of the LSP.
Isolated electrons (muons) with pT > 10 GeV and pseudorapidity |η| < 2.47 (|η| < 2.4) are considered. A signal region is defined by selecting events with at least four leptons, ETmiss > 50 GeV and a veto on events containing a Z-boson candidate. At least one of the selected leptons has to be in the trigger-efficiency plateau (pT(e) > 25 GeV and pT(µ) > 20 GeV) and match a lepton firing the trigger. With 2.06 fb−1 of pp collision data, zero events are observed, while 0.7 ± 0.8 events are expected from SM processes. The observed (expected) upper limits of 1.5 (1.5) fb set on the visible cross sections for new phenomena are subsequently used to constrain an mSUGRA/CMSSM scenario with m0 = A0 = 0, µ > 0, and one R-parity violating, lepton-flavour-violating parameter λ121 = 0.032 at m_GUT. In this scenario, the RPV coupling is small enough that SUSY particle pair production dominates, and large enough that the τ̃1 LSP decays promptly. Values of m1/2 < 800 GeV are excluded at 95% CL if tan β < 40 and m(τ̃1) > 80 GeV, as demonstrated in Fig. 6 (right). These are the first limits from the LHC experiments on a model with a τ̃1 as the lightest supersymmetric particle.
Meta-stable particles
We discuss here the results of a search for the decay of a heavy long-lived particle producing a multi-track displaced vertex (DV) that contains a high-pT muon, at a distance between millimetres and tens of centimetres from the pp interaction point [12]. The results are interpreted in the context of an RPV SUSY scenario, where such a final state occurs in the decay χ̃01 → µqq, allowed by a non-zero RPV coupling λ'_2ij.
Events that pass a single-muon trigger with pT(µ) > 40 GeV are selected. The reconstruction of a DV begins with the selection of high-impact-parameter tracks with pT > 1 GeV. At least four tracks in the DV are required, to suppress background from random combinations of tracks and from material interactions. Background due to particle interactions with material is further suppressed by requiring m_DV > 10 GeV, where m_DV is the invariant mass of the tracks originating from the DV. High-m_DV background arising from the random spatial coincidence of a low-m_DV vertex with a high-pT track is suppressed by vetoing vertices that are reconstructed within regions of high-density material. Figure 7 (left) shows the distribution of m_DV versus the DV track multiplicity for vertices in the selected data events, including vertices that fail the requirements on m_DV and on the number of tracks, overlaid with the distribution for a signal sample with m(q̃) = 700 GeV and m(χ̃01) = 494 GeV. Fewer than 0.03 background events are expected in the data sample of 33 pb−1, and no events are observed. Based on this null observation, upper limits are set on the supersymmetry production cross section times branching ratio, σ × BR, of the simulated signal decay chain for different combinations of squark and neutralino masses and for different values of cτ, where τ is the neutralino lifetime (cf. Fig. 7, right). In another search for long-lived particles, SUSY is sought through disappearing tracks, motivated by anomaly-mediated supersymmetry breaking (AMSB) models in which the chargino can live long enough to be detected within the inner-detector volume. Since the chargino and the neutralino are almost degenerate in mass in these models, the charged particle (π±) from the decay of the chargino is too soft to be reconstructed, and a disappearing track is therefore expected. Events are selected based on large ETmiss, high jet multiplicity and a lepton veto. Chargino candidates are selected among good-quality tracks reconstructed in the detector layers before the TRT (the outer part of the inner detector, covering radii between 56 and 108 cm) and with fewer than five hits in the TRT outer module. A comparison of the number of hits in the TRT outer volume between the signal, the SM background and the data is presented in Fig. 8 (left). Constraints on the AMSB chargino mass and lifetime were set with 1.02 fb−1; a chargino with m(χ̃±1) < 92 GeV and 0.5 ns < τ(χ̃±1) < 2 ns is excluded at 95% CL, as illustrated in Fig. 8 (right).
Summary
Supersymmetry signals have been sought by the ATLAS experiment, motivated by various models and topologies: strong production, third-generation squarks, mass degeneracies and R-parity violation, among others. These lead to a wide spectrum of signatures (ETmiss plus jets, combined with leptons, photons, b-jets or τ-leptons, as well as displaced vertices), and it is not possible to cover all of them here; analyses based on photons and τ-leptons are detailed in Refs. [14,15] and [16], respectively. No deviation from known SM processes has been observed so far with ∼5 fb−1 at √s = 7 TeV. As both techniques and strategies keep evolving, ATLAS will keep looking for supersymmetry in the new data that become available at the LHC.
"Physics"
] |
From Heteroaromatic Acids and Imines to Azaspirocycles: Stereoselective Synthesis and 3D Shape Analysis
Abstract Heteroaromatic carboxylic acids have been directly coupled with imines using propylphosphonic anhydride (T3P) and NEt(iPr)2 to form azaspirocycles via intermediate N‐acyliminium ions. Spirocyclic indolenines (3H‐indoles), azaindolenines, 2H‐pyrroles and 3H‐pyrroles were all accessed using this metal‐free approach. The reactions typically proceed with high diastereoselectivity and 3D shape analysis confirms that the products formed occupy areas of chemical space that are under‐represented in existing drugs and high throughput screening libraries.
In recent years, the biological evaluation of under-explored regions of chemical space has attracted significant attention in the search for new pharmaceutical lead compounds. In particular, rigid, three-dimensional scaffolds have been targeted, as they are generally poorly represented in current drugs and screening libraries. 1 With this in mind, functionalised spirocycles are of much current interest and efficient methods to generate such compounds are of high value. 1,2 In this paper, the formation and 3D shape analysis of spirocyclic indolenines and related azaspirocycles are described. Spirocyclic indolenines (also known as 3H-indoles) 3 are important scaffolds in their own right, being present in a number of bioactive natural products, and also because they serve as precursors to other privileged heterocycles including carbolines, 4 oxindoles 5 and indolines. 6 The most common synthetic strategies currently used to generate spirocyclic indolenines are shown in Figure 1A. Interrupted Fischer indole reactions (1 → 4) 7 and intramolecular imine condensation routes (2 → 4) 8 have each been well used over the years, while dearomatising spirocyclisation reactions (3 → 4) 9 are of particular current interest 10 and underpin the approach described herein.
Our new, connective method is based on the coupling of aromatic carboxylic acids 5 with imines 6 to form reactive N-acyliminium ions 7 11,12 in situ, which can then be intercepted by intramolecular nucleophilic attack, exemplified in Figure 1B by the formation of spirocyclic indolenines 8. 13 The high electrophilicity of the N-acyliminium ion intermediate is a key design feature, as it means sufficiently mild conditions can be used to allow the products to be isolated without competing 1,2-migration and dimerisation/trimerisation reactions taking place. 3 Herein we report the successful implementation of this strategy, which allows indoles and other simple, electron-rich aromatics to be converted into complex azaspirocycles in a one-pot, metal-free, stereoselective process. Furthermore, 3D shape analysis, 14 using the principal moments of inertia (PMI) method, 15 shows that most of the products formed occupy interesting and under-exploited regions of '3D chemical space'. To explore the viability of this new approach, the reaction between 2-methyl-3-indole acetic acid 5a and imine 6a was first examined (Scheme 1), by stirring these compounds in the presence of NEt(i-Pr)2 and propylphosphonic anhydride (T3P) in THF at RT. Pleasingly, this led to the formation of the expected spirocycle as a mixture of diastereoisomers (8a:9a, 11:1), via a process that is conceptually similar to an interrupted Pictet-Spengler reaction. 16 The diastereomeric products were partially separable by column chromatography, and isolated in 92% overall yield (Scheme 1). The stereochemistry of the major diastereoisomer 8a was confirmed by X-ray crystallography (Figure 2, see later). 17 Following a temperature and solvent screen (see Supporting Information), a range of other 2-methyl indole acetic acid derivatives (5b-5f) 18 were also coupled with imine 6a under the optimised conditions; substitution on all positions of the indole ring was examined and the desired spirocyclic indolenines were formed in good to excellent overall yield (8/9b-f, 78-96%). The diastereoselectivity was universally high (d.r. 6:1-13:1), with the same major diastereoisomer being formed in all cases. 19 Indole acetic acid itself (5g) was also compatible with the standard procedure, furnishing spirocycles 8g/9g in good yield (Scheme 2), demonstrating that substitution at the indole 2-position is not a requirement, which is pleasing given the propensity for related compounds to undergo 1,2-migration reactions. 20 Phenyl substitution at the 2-position (acid 5h) was also well tolerated, with spirocycles 8h/9h being formed in good yield; interestingly, the major product in this case was 9h (confirmed by X-ray crystallography, Figure 2), which shows opposite diastereoselectivity to the previous examples. 17 Finally, 6-membered-ring spirocyclic lactams 8i/9i were formed in good overall yield, using the higher homologue 5i.
Scheme 2. Additional acid substrates in the spirocyclisation with imine 6a; for full experimental details see Supporting Information. A plausible explanation for the observed diastereoselectivities is depicted in Figure 3, using the reaction of indole 5a and imine 6a as an example. The reaction is thought to proceed via activation of the carboxylic acid with T3P, followed by N-acylation to generate a reactive N-acyliminium ion 7a. Assuming that this is correct, the stereoselectivity is then determined by the facial selectivity of the nucleophilic attack onto the N-acyliminium ion (7a → 8a/9a). In A, the benzenoid rings of the imine and indole components appear to be relatively close together in space and look well suited to experience a stabilising π-stacking interaction, whereas in B, this interaction is absent, and replaced by a potentially destabilising steric clash between the imine and the indole 2-methyl group. These transition state models also offer a plausible explanation for the switch in stereoselectivity in products 8h/9h; in this case, as the indole 2-position is substituted with a phenyl group rather than a methyl, a stabilising π-stacking interaction now appears to be viable in model B. The reactions are believed to be under kinetic control, based on the fact that re-subjecting a purified sample of spirocycle 8a to the optimised reaction conditions led to no change in the d.r., indicating that the spirocyclisation is not reversible in this case.
The scope of the reaction with respect to the imine coupling partner was examined next, with the imines used (6b-6g) 21 shown in Figure 4 and the spirocyclisation results in Scheme 3. Dimethoxy-substituted imine 6b successfully gave the expected products 8j and 9j in moderate diastereoselectivity. Tetrasubstitution around the aromatic ring of the imine did not hinder the reaction, as 2,5-dibromo-3,4-dimethoxy-substituted substrate 6c gave products 8k and 9k in good yield and diastereoselectivity. Thiophene- and pyrrole-fused imines 6d and 6e were also suitable substrates, as was benzylated imine 6f, all forming the expected spirocycles 8l/9l-8n/9n with generally good diastereoselectivity and in good yield. Acyclic imines, which are often avoided in related methods based on N-acyliminium ion chemistry due to their tendency to hydrolyse, 22 are also well tolerated, with spirocyclic products 8o/9o and 8p/9p each isolated in good overall yields. The major diastereoisomer formed in each case was assigned based on 1H NMR spectroscopy, 19 and in the case of spirocycles 8n and 8o, confirmed by X-ray crystallography (Scheme 3). 17 Preliminary work also confirms that this method can be extended to other heterocyclic systems. Aza-indoles are important structures in medicinal chemistry 23 and pleasingly we found that aza-indoleacetic acid 10 24 reacted with imine 6a under the usual conditions to give the spirocyclic product 11 in excellent yield as a single diastereoisomer (Scheme 4), with the stereoselectivity seemingly being consistent with the analogous indole examples. Dearomatisation via the 2-position of pyrroles 12 and 13 25 is also possible; 26 on these systems, only a small amount of the desired product was formed when the standard conditions were used, but by switching the reaction solvent to CHCl3 and increasing the temperature to 70 °C, spirocycles 14 and 15 were each formed in good yield, with good to excellent diastereoselectivity. In the phenyl-substituted case, the major diastereoisomer 15 was separable by chromatography and X-ray crystallography was used to assign the configuration depicted. 17,27 Finally, this same modified set of conditions was used to form spirocycle 17 via the reaction of pyrrole 16 with imine 6a; this example is noteworthy, given that 3H-pyrroles are known to be unstable and their synthesis is a considerable challenge using existing methods. 28 Scheme 4. Other heterocycles in the spirocyclisation; for full experimental details see Supporting Information.
The dearomatisation reactions described allow access to a diverse array of spirocyclic scaffolds, and the products formed are well primed to undergo further transformations, allowing additional structural diversity to be introduced or the properties of the compounds to be tuned. This is exemplified using spirocycles 8a and 8g (Scheme 5), and it is likely that similar processes (and many more) will be broadly applicable across the other spirocyclic products described in this paper. For example, indolenines 8a and 8g were both reduced to indolines 18 and 19 respectively by sodium borohydride in refluxing methanol. In the case of product 18, the reduction was completely diastereoselective, with the hydride source approaching the indolenine from the less sterically hindered side (i.e. away from the two benzenoid rings, verified by X-ray crystallography). 17 Indolines 18 and 19 could also be reduced further, forming products 20 and 21, upon reaction with lithium aluminium hydride in refluxing THF. Products with complementary relative stereochemistry to indoline 18 could also be obtained through the addition of carbon-based nucleophiles; products 22 and 23 were formed, again with complete diastereoselectivity, via the addition of pyrrole and methyl magnesium bromide respectively to indolenine 8g. 29 Scheme 5. Modification of spirocycles 8a and 8g.
Finally, the principal moments of inertia (PMI) method 14 was used to characterise the 3D shape of the azaspirocycles produced. 30 The PMI method uses the molecular mechanics-generated lowest-energy conformation, and the normalized principal moments of inertia ratios, NPR1 and NPR2, are displayed on a triangular plot, with the three vertices corresponding to rod-, disc- and sphere-shaped molecules. A PMI plot containing the major diastereoisomeric forms of all of the azaspirocyclic products synthesised during the course of our study is shown in Figure 5. As this plot highlights, 88% (22 out of 25, compounds 8a-g, 8i-p, 9h, 14, 15, 17-21, 23) of the new azaspirocycles occupy the highlighted '3D region' (blue triangle) and have values of (NPR1 + NPR2) > 1.2. These 3D shape properties are in stark contrast to the majority of current drugs, most of which lie close to the rod-disc axis. For example, PMI analysis of 1439 FDA-approved small molecule drugs 31 shows that just 23% are found within the (NPR1 + NPR2) > 1.2 area (see Supporting Information), and most drug screening libraries have a similar shape distribution. 32 Hence, these results are significant, in view of the current interest in the synthesis of compounds that populate under-explored regions of chemical space, especially spherical areas (e.g. azaspirocycles 8b, 8c, 8e, 8n, 8p, 9h, 11), in pharmaceutical lead-identification programs. In conclusion, a new, metal-free, connective method for the synthesis of a range of 3D spirocyclic scaffolds has been reported, starting from far simpler 2D building blocks. The reactions proceeded in moderate to excellent yields and are diastereoselective, with the major diastereoisomers isolable in good overall yield in the majority of cases. This study focused predominantly on the synthesis of spirocyclic indolenines, but the successful results obtained using azaindoleacetic acid, as well as 2- and 3-substituted pyrrole acetic acids, indicate that the process is much more general. 3D shape analysis indicates that a high percentage of the compounds generated in this study occupy under-explored regions of chemical space, and the ability to modify the scaffolds further has also been demonstrated, meaning that their desirable spatial and physicochemical properties can be further tuned. Future applications in the generation of medicinally relevant scaffolds/lead compounds and natural products are anticipated, and the development of asymmetric variants of these reactions will also be explored. 33
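The normalised PMI ratios used for Figure 5 (NPR1 = I1/I3, NPR2 = I2/I3) can be reproduced with standard cheminformatics tooling. A sketch using RDKit is shown below; it assumes RDKit's ETKDG embedding and MMFF relaxation as a stand-in for the molecular-mechanics conformer generation used in the paper, so exact values may differ, and the SMILES string is a placeholder rather than one of the compounds reported here.

```python
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors3D

def npr_values(smiles):
    """Return (NPR1, NPR2, NPR1 + NPR2) for a single embedded, relaxed conformer (sketch)."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMolecule(mol, randomSeed=0xf00d)  # ETKDG 3D embedding
    AllChem.MMFFOptimizeMolecule(mol)              # quick force-field relaxation
    npr1 = Descriptors3D.NPR1(mol)                 # I1 / I3
    npr2 = Descriptors3D.NPR2(mol)                 # I2 / I3
    return npr1, npr2, npr1 + npr2

# Placeholder spirocyclic lactam SMILES, not a compound from this paper:
print(npr_values("O=C1CCC2(N1)c1ccccc1N=C2C"))  # flag '3D' shapes via NPR1 + NPR2 > 1.2
```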
"Chemistry"
] |
Reliability, accuracy, and minimal detectable difference of a mixed concept marker set for finger kinematic evaluation
The study of finger biomechanics requires special tools for accurately recording finger joint data. A marker set to evaluate finger postures during activities of daily living is needed to understand finger biomechanics in order to improve prosthesis design and clinical interventions. The purpose of this study was to evaluate the reliability of a proposed hand marker set (the Warwick marker set) for capturing finger kinematics using motion capture. The marker set consisted of two- and three-marker clusters applied to the fingers of twelve participants, who were tested across two sessions. Calibration markers were applied using a custom palpation technique. Each participant performed a series of range of motion movements and held a set of objects. Intra- and inter-session reliability was calculated, as well as the Standard Error of Measurement (SEM) and Minimal Detectable Difference (MDD). The findings showed varying levels of intra- and inter-session reliability, ranging from poor to excellent. The SEM and MDD values were lower for the intra-session range of motion and grasp evaluations. The reduced reliability can potentially be attributed to skin artifacts, differences in marker placement, and the inherent kinematic variability of finger motion. The proposed marker set shows potential to assess finger postures and analyse activities of daily living, primarily within the context of single-session tests.
Introduction
Quantifying finger kinematics has the potential for enhancing the understanding of finger function, facilitating the design of efficient prosthetics, identifying movement disorders and assessing the impact of rehabilitation interventions [1]. Numerous studies have evaluated finger kinematics using a variety of methods and for a variety of purposes, including the use of musical instruments [2], during Activities of Daily Living (ADL) [3], and interaction with technological devices [4].
Generally, upper extremity kinematics are more complex than lower leg kinematics [5]; therefore, for studies focusing on understanding finger function, reliable characterization of finger biomechanics is required.
Motion capture is regarded as the "gold standard" for biomechanical evaluation of human movement. Consequently, this study focuses on the development of a marker set specifically designed for tracking finger motion using motion capture cameras. A review conducted by Reissner et al. [15] examined and compared available marker sets used during ADL, using one, two or three markers (single or in clusters) per segment. The review found no marker set using three markers per segment being applied to study finger motion during ADL, so the potential of such marker sets to gather functional kinematic data remains unknown, despite their potential value in tracking finger movements that occur outside of the standard movement planes. Lee & Jung [16] compared the differences in angles resulting from different marker attachment methods. Their findings suggest that, for dynamic evaluation, the use of three markers per segment is recommended due to reduced skin movement artifacts.
Additionally, Metcalf & Notley [17] recommended the use of marker set concepts with two or more markers, as they exhibit diminished skin movement artifacts, thereby improving measurement accuracy. Two-marker concepts assume that proximal inter-phalangeal (PIP) joint and distal inter-phalangeal (DIP) joint motion occurs in a single plane of flexion-extension [18], and that metacarpophalangeal (MCP) joints have two degrees of freedom (DOFs) [19]. The thumb's interphalangeal (IP) and metacarpophalangeal (MCP) angles have been considered to have one DOF [20,21]. Notably, the interpretation of the carpometacarpal (CMC) joint angle of the thumb can be further elucidated through the use of anatomic landmark calibration, which facilitates the analysis of its three DOFs [22].
Considering the advantages offered by the options of using two or three markers per segment, and the lack of experiments undertaken for the study of finger function during ADL using motion capture, we developed a novel hybrid marker set specifically designed for implementation with motion capture cameras.
Currently, there is insufficient information available regarding the reliability of finger kinematic data obtained through motion capture [15,23]. Although there are more studies assessing goniometric reliability [24,25], ensuring reproducibility and meaningful data interpretation requires assessing the reliability of reported outcomes [1]. Therefore, the aim of this study was to evaluate the intra- and inter-session reliability of a newly proposed marker set (the Warwick marker set) designed to track finger kinematics using motion capture systems.
The primary goal of this research is to evaluate the reliability of finger kinematic data collected during range of motion movements and selected grasps (for intra-session reliability), repeated over two sessions (for inter-session reliability).
For the segments requiring a three-marker cluster, calibration markers were used for landmark definition. Due to the absence of standardized procedures for placing markers on finger anatomical landmarks, palpation guidelines were developed and incorporated in this work (see the Supplementary material).
We hypothesize that intra-session reliability will range from moderate to good, while inter-session reliability will be comparatively lower. This expectation stems from the potential drawbacks associated with finger testing and with re-applying the marker set between sessions. The results of the study allow determination of the capabilities of the proposed marker set in evaluating finger kinematics.
Methods
The study received ethical approval from the University of Warwick Biomedical & Scientific Research Ethics Committee (BSREC, ID: 77/21-22). Twelve participants (eight males and four females, mean age 21.3 ± 4.2 years) were recruited for the study and gave informed consent. Each participant attended two data collection sessions at least one day apart to investigate intra- and inter-session reliability. Participants suffering from any impairment affecting hand and finger motion were excluded from the experiment.
Data collection took place at the University of Warwick's Gait Laboratory. A motion capture system consisting of 12 MX-T20 cameras (Vicon Motion Systems, LA, USA) collecting at 500 Hz was used. The camera setup and markers employed followed the recommendations of Yang et al. [26] for capturing high-resolution movement in small volumes (Fig. 1). Data were captured via reflections from 4 mm spherical markers and two 14 mm markers for the wrist. The marker trajectories were then labelled using Vicon Nexus. Subsequently, the data were saved in Vicon ProCalc, where segment definition and joint angle calculation were performed.
Marker set
A total of 78 markers were positioned on each participant's right hand. Of these, 33 calibration markers were positioned on finger joint anatomical landmarks to define rigid segments. Consultant Hand Surgeons from the University Hospitals Coventry and Warwickshire (UHCW) contributed to the development of palpation guidelines (Supplementary material) in order to standardize marker positioning and aid joint identification. Fig. 2a shows the dorsal view of the markers, where blue circles indicate calibration markers. Fig. 2b shows palmar calibration markers.
The calibration markers were used to define the rigid segments of the fingers for kinematic analysis. The segments were defined using non-collinear markers. The anatomical reference planes were defined using right-handed Cartesian coordinate systems with a Cardan XYZ rotation sequence [27]. The rotation sequence involved an initial rotation about the laterally directed axis (X), followed by rotation around the anteriorly directed axis (Y) and, finally, rotation around the vertical axis (Z) [28].
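For readers wishing to reproduce this joint-angle decomposition, the sketch below extracts Cardan XYZ angles from a relative rotation matrix built from two segment coordinate systems. The decomposition R = Rx·Ry·Rz follows the axis ordering described above; the function name, the no-gimbal-lock assumption, and the mapping of the X angle to flexion are illustrative rather than taken from the authors' Vicon ProCalc setup.

```python
import numpy as np

def cardan_xyz_angles(r):
    """Decompose a 3x3 rotation matrix as R = Rx(a) @ Ry(b) @ Rz(c) (Cardan XYZ sequence).
    Returns (a, b, c) in degrees; assumes cos(b) != 0, i.e. away from gimbal lock."""
    b = np.arcsin(np.clip(r[0, 2], -1.0, 1.0))   # rotation about the anterior (Y) axis
    a = np.arctan2(-r[1, 2], r[2, 2])            # rotation about the lateral (X) axis
    c = np.arctan2(-r[0, 1], r[0, 0])            # rotation about the vertical (Z) axis
    return np.degrees([a, b, c])

# Relative orientation of a distal segment with respect to its proximal segment:
# the columns of r_prox and r_dist are the segment axes expressed in laboratory coordinates.
r_prox = np.eye(3)
r_dist = np.eye(3)
# Naming below assumes the lateral X axis is the flexion axis (an illustrative convention).
flexion, abduction, rotation = cardan_xyz_angles(r_prox.T @ r_dist)
```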
The segments defined were:
• the hand;
• for the Index, Middle, Ring, and Little fingers: the proximal segments;
• for the Index and Middle fingers: the middle and distal segments.
A total of 45 tracking markers were used. A four-marker cluster was used for the hand segment, taking advantage of the available space for easy marker tracking. Three-marker clusters were used for the kinematic segments, except for the middle and distal segments of the Ring and Little fingers, where two markers per segment were used. The selection of this approach took into consideration the participants' comfort while wearing the marker set, the limited space available on smaller segments, and the treatment of these joint angles as having one degree of freedom (flexion-extension). Segment vectors were defined using two collinear markers in the distal, medial, and proximal clusters (Fig. 3a). PIP and DIP joint angles were calculated as the angles between the two vectors.
Abduction/adduction angles were defined as the angles between each finger's reference vector and the Middle finger reference vector position recorded during static calibration (Fig. 3b). For the thumb carpometacarpal (CMC) joint, rotations around each axis were recorded and are reported as CMC_x, CMC_y and CMC_z. For the remaining joints, only flexion/extension angles were extracted and reported.
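Both the flexion/extension angles of the two-marker segments and the abduction/adduction angles described above reduce to the angle between two 3D vectors. A small helper of the kind below could be used for either case; the example vectors are made up for illustration.

```python
import numpy as np

def angle_between_deg(v1, v2):
    """Planar joint angle in degrees between two segment (or reference) vectors."""
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Illustrative PIP flexion: ~60 degrees between the proximal and middle phalanx vectors.
proximal = np.array([1.0, 0.0, 0.0])
middle = np.array([0.5, -np.sqrt(3) / 2, 0.0])
print(angle_between_deg(proximal, middle))  # ~60.0
```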
Experimental protocol
The preparation of the participants for the data collection took place following the steps outlined in the preparation phase of Fig. 4. First, calibration markers were attached using the palpation guidelines previously mentioned. Second, tracking markers were attached. In step 3, a static trial was recorded with participants wearing all calibration and tracking markers to capture the necessary kinematic segments. After removing the calibration markers in step 4, step 5 consisted of establishing a zero-degree baseline for the finger joints by recording a calibration static trial using only tracking markers. Following procedures described by Cook et al. [29] and Nataraj & Li [30], participants utilized a flat, square block of wood acting as a digit alignment device. In this position, as depicted in Fig. 4a, step 5, participants laid their hand flat with fully adducted fingers, while the thumb remained fully extended and adducted. Finger joint angles were recorded and averaged over 1 s for the normalization of all motion tasks and static grasps.
The data collection phase (Fig. 4b, steps 6 and 7) consisted of recording range of motion (ROM) tasks and static grasps.
• Range of motion (ROM) tasks
All participants were asked to perform a series of movements to evaluate joint maximum, minimum and ROM angles. The participants were instructed to flex and extend the finger joints as much as possible during these movements. To isolate joint movements, participants performed the movements in different sets, as seen in Fig. 5. Each participant completed three trials for each task.
• Static grasps
To evaluate full hand kinematic reliability, participants were instructed to hold a series of objects, which remained consistent for all participants and across sessions. The grasps were obtained from the GRASP taxonomy [32] and were chosen based on the following criteria: 1) grasps that allow the recording of all finger segments without overlapping or occlusion; 2) grasps where all fingers are in contact with the object; and 3) grasps commonly observed during household and machining tasks [33]. Participants held the indicated object and maintained the desired grasp for 1 s without moving or changing position (Fig. 6). Three trials per grasp were recorded for each participant.
Fig. 3. a) Vectors defined for the calculation of flexion/extension of the PIP and DIP joints of the ring and little fingers; b) representation of the finger vectors used to calculate abduction/adduction. A reference vector was defined using the midpoint of the Rsp and Usp markers (see Supplementary material) towards the centre of the middle finger PIP joint. For the index, ring, and little fingers, vectors were defined from the MCP joint origin to the PIP joint origin. Angles were calculated using the middle finger static calibration line as a reference.
Data processing and data analysis
As outlined in Fig. 4c, step 8, raw marker trajectories were filtered in Vicon Nexus using a fourth-order zero-lag low-pass Butterworth filter with a 15 Hz cut-off frequency. Raw marker trajectory data were filtered to remove any displacement distortion that could produce spurious peaks in the calculated angle signals. Finger joint angles were calculated using Vicon ProCalc and then exported to MATLAB R2020b. Angle data for all trials and tasks were filtered using a zero-lag fourth-order low-pass Butterworth filter with a 5 Hz cut-off frequency, based on the recommendations of Skogstad et al. [34] for hand motion tracking (Fig. 4c, step 9). This filtering approach was chosen for its noise attenuation during hand motion capture and its suitability for recording freehand motion. For the range of motion tasks, the maximum, minimum and range of motion angles were extracted. For static grasps, joint angles were averaged over 1 s of recording.
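The zero-lag Butterworth filtering described above is typically implemented with a forward-backward pass. A sketch with SciPy is shown below; note that filtfilt doubles the effective filter order, so whether the quoted "fourth order" refers to the designed filter or the effective response is an assumption here, and the signal is synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def zero_lag_lowpass(signal, cutoff_hz, fs_hz, order=4):
    """Zero-lag low-pass Butterworth filter: forward-backward filtering with filtfilt."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2.0))  # cutoff normalised to the Nyquist frequency
    return filtfilt(b, a, signal)

# Synthetic joint-angle trace sampled at the 500 Hz capture rate, smoothed at 5 Hz
# (marker trajectories would use the 15 Hz cut-off instead).
t = np.arange(0.0, 2.0, 1.0 / 500.0)
noisy_angle = 30.0 * np.sin(2.0 * np.pi * 1.5 * t) + np.random.normal(0.0, 1.0, t.size)
smooth_angle = zero_lag_lowpass(noisy_angle, cutoff_hz=5.0, fs_hz=500.0)
```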
To assess inter-session reliability (Fig. 4c, step 10), a two-way mixed, absolute-agreement, average-of-K-measurements ICC model was used, where K = 3 trials per session. For intra-session reliability, three trials from the second session were analysed using a two-way mixed, absolute-agreement, single-measures model. The ICC model and type were selected following the recommendations of Koo & Li [34] and Shrout & Fleiss [35] for assessing test-retest reliability. ICCs were interpreted according to Koo & Li, 2016, where <0.50 represents poor reliability; 0.50-0.74 moderate reliability; 0.75-0.89 good reliability; and ≥0.90 excellent reliability.
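The authors computed these ICCs in MATLAB with custom code. Purely as an illustration, the same two-way model family can be obtained with the open-source pingouin package in Python, which reports the Shrout & Fleiss forms (ICC2 and ICC2k approximate the two-way, absolute-agreement, single- and average-measures models described above). The column names and data layout below are assumptions for the example.

```python
import pandas as pd
import pingouin as pg

# Long-format table: one row per (participant, trial) value of a given joint metric.
df = pd.DataFrame({
    "participant": ["P01", "P01", "P01", "P02", "P02", "P02", "P03", "P03", "P03"],
    "trial":       ["t1", "t2", "t3"] * 3,
    "angle_deg":   [62.1, 60.4, 61.7, 55.3, 56.9, 54.8, 70.2, 68.5, 69.9],
})

icc_table = pg.intraclass_corr(data=df, targets="participant",
                               raters="trial", ratings="angle_deg")
# The ICC2 (single measures) and ICC2k (average of k measures) rows approximate the
# two-way, absolute-agreement models described in the text.
print(icc_table[["Type", "ICC", "CI95%"]])
```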
The Standard Error of Measurement (SEM) was calculated using Equation (1) as the square root of the error variance [36,37]. The Minimal Detectable Difference (MDD) was calculated using Equation (2) [38]. All calculations were performed in MATLAB using custom code written by Kevin Brownhill (Imaging Sciences, KCL), based on Shrout and Fleiss' original paper [35].
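Equations (1) and (2) did not survive the extraction of this text. The forms commonly used with this ICC framework, and consistent with the description of the SEM as the square root of the error variance, are given below as an assumed reconstruction (a 95% confidence level is assumed for the MDD):

```latex
\mathrm{SEM} = \sqrt{\sigma^{2}_{\text{error}}} \approx SD_{\text{pooled}}\,\sqrt{1-\mathrm{ICC}} \qquad (1)
\qquad\qquad
\mathrm{MDD} = 1.96\,\sqrt{2}\,\mathrm{SEM} \qquad (2)
```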
Results
The intra-session reliability results for the joint angles were consistently higher than the inter-session reliability for both the static and ROM tasks. For the Index finger, ROM task reliability was lower in the inter-session case (ICC = −0.87 to 0.9) compared to the intra-session case (ICC = 0.72-0.99), with most ICC values in the good-excellent categories (Table 1). This was also the case for the Middle finger. A similar trend was observed during static grasps, except for the DIP joint, where inter-session reliability was better during power sphere (0.75 > 0.5) and precision sphere grasps (0.76 > 0.64) (Table 1).
Table 2 shows a reduced ICC for the parallel extension grasp in the inter-session case of the Middle finger for the DIP (−0.17), PIP (0.65) and MCP (0.25) joints when compared to other grasps. Intra-session reliability was generally higher than inter-session reliability, except for the power sphere, precision sphere, prismatic 4 fingers and parallel extension grasps. This occurred for the Index (DIP in power and precision sphere, PIP in prismatic 4 fingers, and MCP in prismatic 4 fingers, Table 1), Middle (DIP in precision sphere, PIP in power sphere, precision sphere and prismatic 4 fingers, and MCP in power sphere and precision 4 fingers, Table 2), Ring (all joints, Table 3), and Little (PIP, MCP, and abduction/adduction, Table 4) fingers.
ROM tasks
• Inter-session: Finger abduction-adduction intraclass correlation (ICC) values ranged from poor to excellent for all fingers, with notably low values for the Index, Middle and Ring fingers (Tables 1-3).
The reliability of MCP joint angles during flexion-extension was generally lower for the maximum angle compared to the minimum angle (Tables 1-4), except for the Little finger MCP joint (Table 4).
It was noted that when the reliability of the maximum or minimum angle was low, it also affected the corresponding reliability of the ROM values.
Conversely, the reliability of flexion-extension in DIP joints was primarily poor for the maximum and minimum angles, although the ROM reliability reached moderate to good values, except for the Little finger DIP joint (Table 4), where all reliability values ranged between moderate and good. As for the PIP joints, ROM and minimum angle reliability were higher (ICC = 0.64-0.92) for all fingers except for the Index PIP joint (Table 1).
Regarding thumb abduction-adduction, the reliability of the maximum CMC angles was moderate to excellent (ICC = 0.67-0.97) (Table 5), but poor for the minimum angle and ROM. The reliability of CMC angles during flexion-extension varied from poor to excellent. The MCP flexion-extension reliability was excellent for the maximum angle and poor for the minimum angle and ROM. For IP flexion-extension, the reliability was good for ROM but poor for both the maximum and minimum angles.
• Intra-session:
The reliability of kinematics for all fingers, including all joints and angles, ranged from moderate to excellent. Consistent with the inter-session findings, the SEM and MDD values for the Index, Middle, Ring, and Little fingers were lower (SEM = 1.25-7.67, MDD = 3.46-21.25) (Tables 1-4) compared to the thumb kinematics (SEM = 1.72-23.91, MDD = 4.77-66.28) (Table 5). Although the thumb's kinematic reliability improved in the intra-session data compared to the inter-session data, the SEM and MDD values remained high despite moderate to excellent ICC values.
Static grasps
• Inter-session:
The Index finger kinematic reliability during static grasps was poor to good (ICC = 0.05-0.9) (Table 1). The DIP joint displayed the lowest reliability and larger SEM and MDD values, mainly for the prismatic 4 fingers and parallel extension grasps. The Middle finger had poor to excellent reliability (ICC = −0.17 to 0.91) (Table 2). The MDD values were higher for the MCP joint and abduction/adduction, and the largest MDD value for the Middle finger was for the DIP joint during the parallel extension grasp. Regarding the Ring finger, reliability results were poor to good (ICC = 0.19-0.86) (Table 3). The MDD values were higher for the PIP and MCP joints during the prismatic 4 fingers grasp and for the MCP joint and abduction/adduction during the parallel extension grasp. The reliability results for the Little finger varied from poor to excellent (ICC = −0.07 to 0.91) (Table 4). Reliability was lower for the MCP joint, with higher MDD values for the precision sphere, prismatic 4 fingers and parallel extension grasps, particularly for abduction/adduction during parallel extension.
• Intra-session:
Index finger reliability results ranged from poor to excellent (ICC = 0.4-0.94) (Table 1). In terms of MDD values, the prismatic four-finger grasp exhibited higher values across all joints. Similarly, the Middle finger yielded poor to excellent reliability results (ICC = 0.15-0.96), with higher MDD values observed for the prismatic four fingers grasp in the PIP, MCP and abduction/adduction, as well as for the precision sphere in the DIP joint (Table 2). Ring finger reliability results ranged from poor to excellent (ICC = −0.07 to 0.98) (Table 3), displaying larger SEM and MDD values for the MCP joint during the precision sphere, prismatic four fingers and parallel extension grasps. Likewise, the Little finger exhibited poor to excellent reliability (ICC = 0.05-0.96) (Table 4), with higher SEM and MDD values for the MCP joint during the power sphere, precision sphere, prismatic 4 fingers and parallel extension grasps.
As for the thumb, its reliability results ranged from poor to excellent (ICC = 0.17-0.99) (Table 5). The largest SEM and MDD values were observed for the medium wrap CMC_x and CMC_y values and for CMC_x during the parallel extension and precision disc grasps. The largest SEM (16.1) and MDD (44.5) values were for the MCP joint during the adducted thumb grasp.
Discussion
A marker set was developed for evaluating finger biomechanical function using motion capture systems. This study aimed to establish a comprehensive marker set for the hand and assess its measurement accuracy and reliability. The results obtained partially confirm the experimental hypothesis, indicating that inter-session reliability was lower compared to intra-session reliability. However, both intra- and inter-session reliability results across the study varied from poor to excellent.
Joint ROM measurement is important for clinicians as it serves as an assessment metric which provides insights into the effects of an intervention [39]. The Index, Middle, Ring, and Little fingers' ROM SEM results ranged between 1.62° and 17.77° for the inter-session case and 2.29°-7.67° for the intra-session case. SEM values larger than 5° were observed for the Index PIP, Middle MCP, Ring MCP, Little MCP and Little PIP joints (inter-session) and for the Index DIP and PIP, Middle DIP and PIP, Ring MCP and PIP, and Little MCP and PIP joints (intra-session). Previous marker set evaluations of reliability, akin to manual goniometry, have considered a 5° accuracy threshold [40]. Intra-session values exceeded the 5° threshold by no more than 2.7°, indicating a higher accuracy in ROM measurements. However, it is noteworthy that for the thumb, the ROM SEM reached up to 70.86° for inter-session and 23.91° for intra-session data, suggesting that ROM measurements for this finger are less accurate using the present method.
The reliability during static grasps appears to depend on the finger posture required by each grasp. Distal joints exhibited larger SEM and MDD values, particularly for the prismatic 4 fingers and parallel extension grasps in the inter-session case, and for the MCP and PIP joints during the precision sphere, power sphere, prismatic 4 fingers and parallel extension grasps in the intra-session case. The lower repeatability of the prismatic four-finger and precision sphere grasps may be attributed to variations in fingertip positioning while holding the object, resulting in trial-to-trial variability. Furthermore, lower reliability was observed for the thumb's MCP joint during the adducted thumb grasp and for all fingers during the parallel extension grasp, as these grasping configurations are susceptible to the gimbal lock effect, where finger joints approach an extended position close to 0°.
When averaged across all joints, the SEM and MDD values for the intra-session data were 4.27° and 11.83°, respectively, compared to 11.23° and 31.13° for the inter-session data. These intra-session results align with a previous study [15], where the marker set demonstrated SEM ranging from 2.1° to 5° and MDD ranging from 5° to 16° for test-retest results. It is important to note that alternative marker sets are more robust for between-day examinations, allowing for better recognition of smaller changes in mobility within a day. However, only the aforementioned marker set has evaluated reliability and accuracy during kinematic tasks using motion capture systems. Therefore, caution should be exercised when interpreting changes over different sessions as true change or as measurement error when employing this method in a clinical setting.
Several factors can limit the reliability of the studied measures. Finger joint active angle measurement, as indicated by previous research [41], is a highly complex process that presents lower reproducibility compared to simpler joints [42]. This is attributed to the involvement of multiple muscles crossing the joints and tendon gliding [43]. Unlike other joints or structures that provide physical limitations, finger joint movement is relatively unrestricted, making it less reliable compared to the range of motion (ROM) measurements of simple hinge joints [24,44].
The primary objective of the present study was to establish a standardized procedure for capturing finger kinematic data and evaluating maximum, minimum, and ROM angles. However, it should be noted that humans rarely perform movements at their maximum or minimum amplitude, and such tasks are often poorly controlled [45]. Instead, we recommend further evaluation of marker sets in representative movements derived from ADL with defined stages; for instance, systematically recording the reach, grasp, and release phases during an activity such as pouring water into a cup would provide valuable insights for future investigations.
The lower inter-session reliability observed in our study mirrors the lower between-rater reliability commonly observed during goniometric measurements [25]. Therefore, we recommend primarily implementing the methods described in this paper on a single-session basis to enhance reliability and minimize potential sources of error.
Further investigation is warranted to explore the rotation sequence of thumb angles, specifically to identify positions that may trigger the gimbal lock effect and to develop thumb-specific rotation sequences to mitigate its occurrence [22,46-48]. It is not recommended to employ reduced marker sets for the thumb, as they overlook its anatomical characteristics and the three-dimensional nature of its movements.
The limitations of this study include its modest number of participants and the reduced dynamic evaluation of finger function. Further studies could involve the examination of a broader range of hand motions relevant to ADL. Another limitation lies in the use of calibration markers to define segments, which can introduce errors due to variations in marker placement across sessions [49]. Conversely, the application of surface markers based on palpation introduces some level of inaccuracy in determining the precise location of underlying bone structures [50]. Mixed methods combining imaging and palpation techniques can be useful in mitigating marker placement error, and are particularly suitable for clinical settings equipped with readily accessible 3D imaging equipment [51]. By employing such mixed methods, researchers and clinicians can improve accuracy and reduce uncertainties associated with marker placement.
On the other hand, this study's strengths lie in its innovative evaluation of both static and dynamic finger kinematics using a novel marker set concept.
In conclusion, intra-session ICC results indicate that a mixed marker set concept is sufficiently reliable when assessing finger joint angles in single-session experiments. The hypothesis was confirmed, with inter-session reliability being lower than intra-session reliability. We suggest that the application of the Warwick marker set aligns with the research question at hand, preferably in the context of single-session evaluations.
Fig. 1. Camera configuration. The cameras were arranged and positioned approximately 90 cm from the intended capture volume.
Fig. 2. a) Dorsal and b) palmar view of the full marker set with calibration and tracking markers. Blue markers are calibration markers.
Fig. 4. Preparation and static calibration process for data collection.
Fig. 5. Finger joint motion tasks. Adapted from Hirt et al. [31]. Joint rotation signs correspond to the sequence defined by Robertson, 2014, except for thumb flexion/extension and abduction/adduction, which are defined by the CMC joint. a) Index to little finger abduction/adduction was calculated from the radial/ulnar deviation relative to the line of reference obtained from static calibration. b) MCP, PIP and DIP flexion/extension angles were calculated for the index to little fingers. c) Thumb CMC joint kinematics for angle interpretation. d) Thumb IP and MCP flexion/extension angles.
Fig. 6. Selected grasps for the static tests. a) Medium wrap involved all fingers wrapping around the cylindrical object, while b) adducted thumb required a different thumb position. c) Power sphere grasp involved the fingers wrapping the ball, whereas d) precision sphere required only the fingertips to hold the ball. For the e) prismatic 4 finger, f) precision disc and g) parallel extension grasps, a marker, a detergent lid and a card were used, respectively.
Table 1. Reliability results for the Index finger.
Table 2. Reliability results for the Middle finger.
Table 3. Reliability results for the Ring finger.
Table 4. Reliability results for the Little finger.
Table 5. Reliability results for the Thumb.
"Engineering",
"Medicine"
] |
The Reeb Graph Edit Distance is Universal
We consider the setting of Reeb graphs of piecewise linear functions and study distances between them that are stable, meaning that functions which are similar in the supremum norm ought to have similar Reeb graphs. We define an edit distance for Reeb graphs and prove that it is stable and universal, meaning that it provides an upper bound to any other stable distance. In contrast, via a specific construction, we show that the interleaving distance and the functional distortion distance on Reeb graphs are not universal.
Introduction
The concept of Reeb graphs of a Morse function first appeared in [12] and was subsequently applied to problems in shape analysis in [13,9]. The literature on Reeb graphs in computational geometry and computational topology is ever growing (see, e.g., [3,4] for a discussion and references). The Reeb graph plays a central role in topological data analysis, not least because of the success of Mapper [14], a method providing a discretization of the Reeb graph for a function defined on a point cloud.
A recent line of work has concentrated on identifying suitable notions of distance between Reeb graphs: these include the so-called functional distortion distance [3], the interleaving distance [6], and various graph edit distances [8,7,2]. There is of course interest in understanding the connections between the existing distances. In this regard, it has been shown in [4] that the functional distortion and interleaving distances are bi-Lipschitz equivalent. The edit distances defined in [8,7] for Reeb graphs of curves and surfaces, respectively, are shown to be universal in their respective settings, so the functional distortion and interleaving distances restricted to the same settings are a lower bound for those distances. Moreover, an example in [7] shows that the functional distortion distance can be strictly smaller than the edit distance considered in that paper.
In this paper we concentrate on the setting of PL functions on compact triangulable spaces, and in this realm we study the properties of stability and universality of distances between Reeb graphs. Inspired by a construction of a distance between filtered spaces [11], we first construct a novel distance δ_PL based on considering joint pullbacks of two given Reeb graphs and prove that it satisfies both stability and universality. By analyzing a specific construction, we then prove that neither the functional distortion nor the interleaving distance is universal. Finally, we define two additional edit-like distances between Reeb graphs that reinterpret those appearing in [8,7,2] and prove that both are stable and universal. As a consequence, both distances agree with δ_PL.
Topological and categorical aspects of Reeb graphs
We start by exploring some topological ideas behind the definition of Reeb graphs. All maps and functions considered in this paper will be assumed to be continuous. Otherwise, we call them set maps and set functions.
Reeb graphs as quotient spaces
The classical construction of a Reeb graph [12] is given via an equivalence relation as follows.

Definition 2.1. For f : X → R a Morse function on a compact smooth manifold, the Reeb graph of f is the quotient space X/∼_f, with x ∼_f y if and only if x and y belong to the same connected component of some level set f⁻¹(t).

While this definition was originally considered in the setting of Morse theory, it does not make explicit use of the smooth structure, and so it can be applied to a quite broad setting. However, some additional assumptions on X and f are justified in order to maintain some of the characteristic properties of Reeb graphs in a generalized setting. With this motivation in mind, we revisit the definition in terms of quotient maps and functions with discrete fibers.
A quotient map p : X → Y is a surjective map with the property that a subset U ⊆ Y is open if and only if p⁻¹(U) is open in X. In particular, a surjection between compact Hausdorff spaces is a quotient map by the closed map lemma. A quotient map p : X → Y is characterized by the universal property that a set map Φ : Y → Z into any topological space Z is continuous if and only if Φ ∘ p is continuous.
The motivation for considering quotient maps and functions with discrete fibers is explained by the following fact.

Proposition 2.2. Let f : X → R be a function with locally connected fibers, and let q : X → X/∼_f be the canonical quotient map. Then the induced function f̄ : X/∼_f → R, with f = f̄ ∘ q, has discrete fibers.

Proof. To see that the fibers of f̄ are discrete, we show that any subset S of f̄⁻¹(t) is closed. Let T = f̄⁻¹(t) \ S. Then q⁻¹(T) is a disjoint union of connected components of f⁻¹(t). Since f⁻¹(t) is locally connected, each of its connected components is open in the fiber, and so q⁻¹(T) is open in f⁻¹(t), implying that q⁻¹(S) is closed in f⁻¹(t) and hence in X. Since q is a quotient map, q⁻¹(S) is closed if and only if S is closed, yielding the claim.
Reeb quotient maps and Reeb graphs of piecewise linear functions
We now define a class of quotient maps that leave Reeb graphs invariant up to isomorphism. The main goal is to provide a natural construction for lifting functions f : X → R to spaces Y through a quotient map Y → X in a way that yields isomorphic Reeb graphs. To this end, we will define two categories, the category of Reeb domains and the category of Reeb graphs.

Definition 2.3. We define the category PLReebDom of (compact triangulable) Reeb domains as follows:

• The objects of PLReebDom (Reeb domains) are connected compact triangulable spaces.
• The morphisms of PLReebDom (Reeb quotient maps) are surjective piecewise linear maps with connected fibers.
The fact that this is indeed a category will be established in Theorem 2.13.
Definition 2.4. The category of Reeb graphs, denoted by PLReebGrph, is the category whose objects are Reeb domains R_f endowed with PL functions f : R_f → R with discrete fibers, called Reeb functions, and whose morphisms between Reeb domains R_f and R_g, respectively endowed with Reeb functions f and g, are PL maps Φ : R_f → R_g such that g ∘ Φ = f.
In particular, the isomorphisms between Reeb graphs are PL homeomorphisms that preserve the function values of the associated Reeb functions. A Reeb graph is actually a finite topological graph (a compact triangulable space of dimension at most 1).
Theorem 2.5. Any Reeb graph R_f in PLReebGrph is a finite topological graph.
Proof. By definition, f is (simplexwise) linear for some triangulation of R_f. If there were a simplex σ of dimension at least 2 in the triangulation of R_f, then for any x in the interior of σ, the intersection σ ∩ f⁻¹(f(x)) would have to be of dimension at least 1. But this would contradict the assumption that f has discrete fibers.

Definition 2.6. Generalizing the classical definition (Definition 2.1), we say that a Reeb graph R_f is the Reeb graph of f : X → R if there is a Reeb quotient map p : X → R_f whose composition with the Reeb function on R_f equals f.

The following lemma shows how a transformation g = ξ ∘ f of a function f lifts to a Reeb quotient map ζ between the corresponding Reeb graphs.
Lemma 2.7. Assume that f : R_f → R and g : R_g → R are Reeb functions, and that p_f is a Reeb quotient map. By commutativity, ζ is well defined as a set map. Moreover, since p_g is continuous and p_f is closed, the map ζ is continuous; since p_g and p_f are PL, the map ζ is PL as well. Now let y ∈ R_g and let s = g(y). Similarly to the above, one shows that ζ⁻¹(y) is nonempty and connected, so that ζ is a Reeb quotient map.

Remark 2.8. By Proposition 2.2 and Lemma 2.7, the Reeb graph R_f of f : X → R is isomorphic to X/∼_f. As a consequence, the Reeb graph R_f, together with the Reeb quotient map p, is unique up to a unique isomorphism, turning the Reeb graph into a universal property.
We now proceed to prove that Reeb quotient maps are closed under composition. We start by showing that not only the fibers, but more generally all preimages of closed connected sets, are connected.
Proposition 2.9. Let p : X → Y be a Reeb quotient map and let K ⊆ Y be closed and connected. Then the preimage p⁻¹(K) is connected.

Proof. Suppose that p⁻¹(K) is the union of two nonempty closed subsets U and V; since p⁻¹(K) is closed in X, the sets U and V are also closed in X. The images p(U) and p(V) are closed by the closed map lemma, and their union is K. By connectedness of K, their intersection is nonempty. Let y ∈ p(U) ∩ p(V). We have p⁻¹(y) = (p⁻¹(y) ∩ U) ∪ (p⁻¹(y) ∩ V). The subspaces (p⁻¹(y) ∩ U) and (p⁻¹(y) ∩ V) are closed in p⁻¹(y), and by connectedness of the fiber p⁻¹(y), their intersection must be nonempty. In particular, U ∩ V is nonempty.
Corollary 2.10. If p : X → Y and q : Y → Z are Reeb quotient maps, then the composition q ∘ p : X → Z is a Reeb quotient map too.
As mentioned before, the main purpose of Reeb quotient maps is to lift Reeb functions to larger domains while maintaining the same Reeb graph. The following property is a consequence of the above statement.

Corollary 2.11. Let f : X → R be a function with Reeb graph R_f, and let q : Y → X be a Reeb quotient map. Then R_f is also the Reeb graph of f ∘ q : Y → R.
We now show that Reeb quotient maps are stable under pullbacks.
Proposition 2.12. Let p₁ : X₁ → Y and p₂ : X₂ → Y be maps of Reeb domains, and let q₁ : X₁ ×_Y X₂ → X₁ and q₂ : X₁ ×_Y X₂ → X₂ be the pullback projections. If the map p₁ (resp. p₂) is a Reeb quotient map, then so is the map q₂ (resp. q₁).
Proof. First note that the category of compact triangulable spaces has all pullbacks [15]. For x₂ ∈ X₂, by surjectivity of p₁ there is some x₁ ∈ X₁ with p₁(x₁) = p₂(x₂). Moreover, the fiber q₂⁻¹(x₂) = p₁⁻¹(p₂(x₂)) × {x₂} is connected, being a copy of a fiber of p₁, implying that p₁⁻¹(p₂(x₂)) × {x₂} is connected. Finally, applying Proposition 2.9 to q₂, we obtain that the pullback space X₁ ×_Y X₂ is connected. The proof for q₁ is analogous.
Theorem 2.13. The Reeb domains and Reeb quotient maps form a finitely complete category, i.e., every finite diagram has a limit.

Proof. By Corollary 2.10, the Reeb quotient maps are closed under composition and contain the identity maps of Reeb domains, so they form a category. This category has all pullbacks by Proposition 2.12, and the one-point space is a terminal object, so equivalently it has all finite limits [1, Prop. 5.14 and 5.21].
Stable and universal distances
Throughout this paper, we will use the term distance to describe an extended pseudo-metric d : X × X → [0, ∞] on some collection X. Our main goal is the introduction of a distance between Reeb graphs that is stable and universal in the following sense.

Definition 3.1. We say that a distance d_S on the objects of PLReebGrph is stable if and only if, given any two Reeb graphs R_f and R_g respectively endowed with Reeb functions f and g, for any Reeb domain X with Reeb quotient maps p_f : X → R_f and p_g : X → R_g we have d_S(R_f, R_g) ≤ ||f ∘ p_f − g ∘ p_g||_∞.

Note that stability implies that isomorphic Reeb graphs have distance 0. Indeed, an isomorphism of Reeb graphs γ : R_f → R_g satisfies g ∘ γ = f, so taking X = R_f, p_f = id and p_g = γ yields d_S(R_f, R_g) ≤ ||f − g ∘ γ||_∞ = 0. Moreover, we say that a stable distance d_U on the objects of PLReebGrph is universal if and only if for any other stable distance d_S on PLReebGrph we have d_S ≤ d_U.

Remark 3.2. By connectedness of R_f and R_g, there is at least one space X with maps p_f, p_g as needed to define the stability property: X = R_f × R_g, with p_f, p_g the canonical projections. The resulting functions f ∘ p_f and g ∘ p_g are bounded on the compact space X; in particular, for compact Reeb graphs a stable distance is always finite.
The definition of stability yields the following canonical universal distance.
Definition 3.3. For any two Reeb graphs R_f and R_g endowed with Reeb functions f and g, let δ_PL(R_f, R_g) = inf ||f ∘ p_f − g ∘ p_g||_∞, where the infimum is taken over all Reeb domains X and all Reeb quotient maps p_f : X → R_f, p_g : X → R_g.
Proposition 3.4. The distance δ_PL is the largest stable distance on PLReebGrph. Hence, δ_PL is universal.
Proof. To see that δ_PL is a distance, the only non-trivial part is showing the triangle inequality. To this end, given diagrams p_f : R_f ← X → R_g : p_g and p_g : R_g ← Y → R_h : p_h, we can pull back the diagram p_g : X → R_g ← Y : p_g to obtain the diagram q_X : X ← X ×_{R_g} Y → Y : q_Y, where X ×_{R_g} Y is a Reeb domain and q_X, q_Y are Reeb quotient maps by Proposition 2.12.
where the last inequality holds because im q_X ⊆ X and im q_Y ⊆ Y. Hence, the triangle inequality follows. By definition of stability, d_S ≤ δ_PL for any stable distance d_S defined on the objects of PLReebGrph, implying that δ_PL is universal.

Example 3.5. Consider the one-point Reeb graph *_c endowed with the function identically equal to c ∈ R. Then, for any Reeb graph R_f endowed with the function f, δ_PL(*_c, R_f) = ||f − c||_∞.

We now consider an example where we can explicitly determine the value of the distance δ_PL(R_f, R_g) between two specific simple Reeb graphs: R_f = S¹ = {(x, y) ∈ R² : x² + y² = 1} with f(x, y) = x, and R_g = [−1, 1] with g(t) = t. The example demonstrates the non-universality of certain distances proposed in the literature. We prove that δ_PL(R_f, R_g) = 1; the proof will be obtained from the two claims below.
Proof. Consider the cylinder C = {(x, y, z) ∈ R³ : x² + y² = 1, |2z − x| ≤ 1} together with the functions f̂(x, y, z) = x and ĝ(x, y, z) = z defined on C. Then R_f is a Reeb graph of f̂ via the Reeb quotient map (x, y, z) ↦ (x, y), and R_g is a Reeb graph of ĝ via the Reeb quotient map (x, y, z) ↦ z. Since ||f̂ − ĝ||_∞ = 1, we have δ_PL(R_f, R_g) ≤ 1.

Proof. Assume for a contradiction that there is a diagram p_f : R_f ← Z → R_g : p_g of Reeb quotient maps such that, letting f̂ = f ∘ p_f and ĝ = g ∘ p_g, we have ||f̂ − ĝ||_∞ = δ < 1. We then observe the following: f⁻¹([−δ, +δ]) consists of two circular arcs, each homeomorphic via f to [−δ, +δ], and thus, by Proposition 2.9, f̂⁻¹([−δ, +δ]) consists of two connected components C₊ and C₋ as well.
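The first claim can also be checked numerically. The following sketch (ours, not part of the original argument) samples the cylinder C and confirms that the pulled-back functions satisfy sup|f̂ − ĝ| = 1, giving δ_PL(R_f, R_g) ≤ 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Sample C = {(x, y, z) : x^2 + y^2 = 1, |2z - x| <= 1}.
theta = rng.uniform(0.0, 2.0 * np.pi, n)
x, y = np.cos(theta), np.sin(theta)
w = rng.uniform(-1.0, 1.0, n)      # w = 2z - x ranges over [-1, 1]
z = (w + x) / 2.0

f_hat = x                          # pullback of f on R_f = S^1
g_hat = z                          # pullback of g on R_g = [-1, 1]
print(np.max(np.abs(f_hat - g_hat)))   # -> approaches 1
```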
The current example illustrates that the functional distortion distance introduced in [3] and the interleaving distance introduced in [6] both fail to be universal. We first recall the definition of the former. For any Reeb graph R_f with Reeb function f, consider the metric on R_f given by d_f(p, p′) = inf_π (max_t f(π(t)) − min_t f(π(t))), the infimum being taken over all paths π from p to p′. Given maps φ : R_f → R_g and ψ : R_g → R_f, we write G(φ, ψ) = {(p, φ(p)) : p ∈ R_f} ∪ {(ψ(q), q) : q ∈ R_g} for the correspondence induced by the two maps. The functional distortion distance is d_FD(R_f, R_g) = inf_{φ,ψ} max{ D(φ, ψ), ||f − g ∘ φ||_∞, ||g − f ∘ ψ||_∞ }, where D(φ, ψ) = ½ sup |d_f(p, p′) − d_g(q, q′)|, the supremum being taken over corresponding pairs (p, q), (p′, q′) ∈ G(φ, ψ).

To see that neither the functional distortion distance nor the interleaving distance is universal, we establish that d_FD(R_f, R_g) ≤ ½.

Proof. By [4, Lemma 8], the functional distortion distance is an upper bound on the interleaving distance on Reeb graphs [6], and so it is enough to prove that d_FD(R_f, R_g) ≤ ½. To this end consider the maps φ : R_f → R_g, (x, y) ↦ x, and ψ : R_g → R_f, t ↦ (t, √(1 − t²)). For every pair p, p′ ∈ R_f we have d_f(p, p′) ≤ 2, while for every pair q, q′ ∈ R_g we have d_g(q, q′) = |g(q) − g(q′)|. This implies that for any two corresponding pairs (p, q), (p′, q′) ∈ G(φ, ψ), we have |d_f(p, p′) − d_g(q, q′)| ≤ 1, and thus D(φ, ψ) ≤ ½. Moreover, both maps preserve function values, so the two remaining terms vanish and d_FD(R_f, R_g) ≤ ½.

The topological and graph edit distances

A Reeb zigzag diagram between R_f and R_g is a diagram of Reeb quotient maps of the form (1), where, for n ∈ N, f̂₁, ..., f̂_n are Reeb functions with f̂₁ = f and f̂_n = g, and the maps of the diagram are Reeb quotient maps. This way, we may think of a Reeb zigzag diagram as a sequence of operations transforming R_f into R_g. The elementary diagram on the left corresponds to an edit operation: the space X_{i−1}, together with a function X_{i−1} → R with Reeb graph R_i, is transformed to another space X_i, with a function X_i → R having the same Reeb graph R_i. The elementary diagram on the right corresponds to a relabel operation: the function on X_i with Reeb graph R_i is transformed to another function with Reeb graph R_{i+1}. The idea of edit and relabel operations is inspired by previous work on edit distances for Reeb graphs [7,2].
In order to define an edit distance using Reeb zigzag diagrams, we need to assign a cost to a given Reeb zigzag diagram between R_f and R_g. To that end, we can consider a cone from a space V by Reeb quotient maps onto the spaces of the diagram, as in (2). We call this diagram a Reeb cone. Any Reeb zigzag diagram admits such a cone. Indeed, the category PLReebDom has all finite limits by Theorem 2.13, and the limit over the lower part of diagram (1), consisting of Reeb quotient maps, yields a limit over the whole diagram. In a Reeb cone, by commutativity, each of the Reeb functions f̂ᵢ induces a unique function fᵢ : V → R. By Corollary 2.11, the Reeb graph of fᵢ is isomorphic to Rᵢ. This way, we pull back the individual functions f̂ᵢ to functions fᵢ on a common space with the same Reeb graphs, where they can be compared using the supremum norm.
Using these ideas, we can now introduce distances on the objects of PLReebGrph, and proceed to prove that they are stable and universal.

Definition 4.1. Given a Reeb cone from a space V as in (2), we define the spread of the functions (fᵢ)_{i=1,...,n} : V → R as the function s_V : V → R, x ↦ max_{i=1,...,n} fᵢ(x) − min_{j=1,...,n} fⱼ(x). Moreover, for a Reeb zigzag diagram Z between R_f and R_g as in (1), consider the limit of Z, denoted by L. The cost of the Reeb zigzag diagram Z is the supremum norm of the spread s_L: cost(Z) = ||s_L||_∞.

Definition 4.2. We define the (PL) edit distance δ_ePL between Reeb graphs R_f and R_g in PLReebGrph as the infimum cost over all Reeb zigzag diagrams Z in PLReebDom between R_f and R_g: δ_ePL(R_f, R_g) = inf_Z cost(Z). Moreover, we define the graph edit distance δ_eGraph between Reeb graphs R_f and R_g in PLReebGrph analogously, by restricting the infimum to Reeb zigzag diagrams Z where all the spaces Xᵢ and Rᵢ are finite topological graphs, and all the maps are PL.
Thus, on PLReebGrph we have two edit distances, satisfying δ_ePL ≤ δ_eGraph. (3)

The Reeb graph edit distance δ_eGraph is a categorical reformulation of the definition given in [2]. The main goal is to prove that these distances have the stability and universality properties (Propositions 4.4 and 4.5, Theorem 5.6, and Corollary 5.7). As a consequence, whenever applicable, they actually coincide with the canonical universal distance δ_PL defined in Definition 3.3: δ_PL = δ_ePL = δ_eGraph. The proofs of stability and universality for δ_ePL are straightforward and are given next. The verification of stability and universality for δ_eGraph follows in Section 5.

Proposition 4.4. δ_ePL is a stable distance.
Proof. Let R_f, R_g be Reeb graphs with Reeb functions f and g. For any space X such that there exist two Reeb quotient maps p_f : X → R_f and p_g : X → R_g, the diagram p_f : R_f ← X → R_g : p_g is itself a Reeb zigzag diagram, whose cost equals ||f ∘ p_f − g ∘ p_g||_∞; hence δ_ePL(R_f, R_g) ≤ ||f ∘ p_f − g ∘ p_g||_∞.

Our proof of universality of the edit distance is similar to previous universality proofs for the bottleneck distance [5] and for the interleaving distance [10].

Proposition 4.5. δ_ePL is a universal distance.
Proof. Let R_f, R_g be Reeb graphs with Reeb functions f and g, and let δ_ePL(R_f, R_g) = d. Then, for any ε > 0, there is a Reeb zigzag diagram Z between R_f = R₁ and R_g = R_n, with limit L and functions fᵢ as in Definition 4.1, having cost at most d + ε. Let p_f : L → R_f and p_g : L → R_g be the induced Reeb quotient maps. If d_S is any other stable distance (cf. Definition 3.1) between R_f and R_g, we have d_S(R_f, R_g) ≤ ||f ∘ p_f − g ∘ p_g||_∞ ≤ cost(Z) ≤ d + ε. Since the above holds for all ε > 0, we have d_S(R_f, R_g) ≤ d = δ_ePL(R_f, R_g).
Stability and universality of the Reeb graph edit distance
We now turn to the proof of stability and universality for the Reeb graph edit distance. Recall that, in the case of δ_eGraph, the admissible Reeb zigzag diagrams are PL zigzags of finite topological graphs. As mentioned above, the distance δ_eGraph is applicable to Reeb graphs of compact triangulable spaces.
Lemma 5.1. Let X = |K| and let V be the vertex set of K. Let f, g : X → R be PL functions, simplexwise linear on K. Let χ : im f → im g be a weakly order-preserving PL surjection such that χ ∘ f(v) = g(v) for every vertex v ∈ V. Then there is a Reeb quotient map X/∼_f → X/∼_g.
Proof. For simplicity, we write R_f = X/∼_f, R_g = X/∼_g, and R_h = X/∼_h, where h = χ ∘ f. Applying Proposition 2.2, f can be factorized as f = f̄ ∘ q_f, where q_f : X → R_f is the canonical projection and f̄ : R_f → R is a Reeb function. Analogously, we obtain g = ḡ ∘ q_g and h = h̄ ∘ q_h. We show that there is a Reeb quotient map k : X → R_h making the corresponding diagram commute. The claim then follows by applying Lemma 2.7 to obtain Reeb quotient maps R_f → R_h and R_h → R_g, which compose to the desired map R_f → R_g. In order to prove the existence of such a Reeb quotient map k, we define the relation k ⊆ X × R_h by k(x) = q_h(st_K(x) ∩ h⁻¹(g(x))). Here st_K denotes the open star on X = |K|, defined for A ⊆ X as the union of the open simplices σ° over all simplices σ ∈ K intersecting A. Note that the converse relation to the open star is the (closed) carrier, st_K⁻¹ = carr_K, where carr_K(A) is the underlying space of the smallest subcomplex of K containing A ⊆ X. We will also use the open carrier relation carr°_K, where carr°_K(A) is the smallest union of open simplices of K covering A; the open carrier relation is symmetric, i.e., (carr°_K)⁻¹ = carr°_K. The remainder of the proof is split into several lemmas. Lemma 5.2 describes the behaviour of the functions h and g on the simplices of K. Lemma 5.3 shows that k is a continuous surjection, and Lemma 5.4 shows that k has connected fibers. Since h̄ ∘ k = g, we conclude that k is PL. Thus, k is a Reeb quotient map, and the claim follows from Lemma 2.7.
Proof. We have h(σ) = g(σ), because h is equal to g on the vertices of K, and h = χ ∘ f with f linear on σ and χ a weakly order-preserving surjection.
To show that g(σ°) ⊆ h(σ°), note that since g is linear on σ, either g is constant on σ, and so g(σ°) = g(σ) = h(σ), or g(σ°) = (g(v), g(w)) for some vertices v, w of σ. In the latter case, since h and g coincide on the vertices, we have (g(v), g(w)) ⊆ h(σ°), and the claim follows.
Proof. We first show that k is right-unique, i.e., for any x ∈ X and y, y′ ∈ k(x), we have y = y′. To see this, let t = g(x) and note that h̄(y) = h̄(y′) = t. Let ξ ∈ q_h⁻¹(y) and ξ′ ∈ q_h⁻¹(y′) be points of st_K(x) ∩ h⁻¹(t). Since h⁻¹(t) ∩ τ is necessarily connected for every simplex τ, any point ζ of h⁻¹(t) ∩ st_K(x) lies in the same connected component of h⁻¹(t) ∩ st_K(x) as both ξ and ξ′, and so we have y = q_h(ξ) = q_h(ξ′) = y′ as claimed.
To show that k is left-total, we need to show that k(x) ≠ ∅ for every x ∈ X. It suffices to show that for every x ∈ X, st_K(x) contains a point x′ with h(x′) = g(x). This follows by considering the simplex σ ∈ K with x ∈ σ°: by Lemma 5.2, there is a point x′ ∈ σ° ⊆ st_K(x) with h(x′) = g(x), as claimed.
To show that k is right-total, we show that for every y ∈ R_h there is some x ∈ X with y ∈ k(x), or equivalently, there is some x ∈ carr_K ∘ q_h⁻¹(y) such that g(x) = h̄(y). If q_h⁻¹(y) contains some vertex v of K, choose x = v. Otherwise, let ξ ∈ q_h⁻¹(y), and let σ ∈ K be such that ξ ∈ σ°. By Lemma 5.2 there is a point x ∈ σ ⊆ carr_K ∘ q_h⁻¹(y) with g(x) = h(ξ) = h̄(y). Finally, to show that k is continuous, we show that for every closed subset L of R_h, the preimage k⁻¹(L) is closed. Since k⁻¹ = (carr_K ∘ q_h⁻¹) ∩ (g⁻¹ ∘ h̄), it is sufficient to show that both carr_K ∘ q_h⁻¹(L) and g⁻¹ ∘ h̄(L) are closed in X. First note that carr_K ∘ q_h⁻¹(L) is closed as a subcomplex of K. Furthermore, the image h̄(L) is closed by the closed map lemma. By continuity of g it follows that g⁻¹ ∘ h̄(L) is closed in X.
Lemma 5.4. The fibers of k are connected.
Proof. Let y ∈ R_h be a point in the Reeb graph with value t = h̄(y), and C = q_h⁻¹(y) ⊆ h⁻¹(t) the corresponding component of the level set of h. Let U = carr_K(C), and let L be the corresponding subcomplex of K. Writing D = k⁻¹(y), we have C = U ∩ h⁻¹(t) and D = U ∩ g⁻¹(t). To prove that D is connected, it is sufficient to show that C and D have finite closed covers with isomorphic nerves; since C is connected, both nerves, and hence also D, are then connected too.
We have thus shown the existence of the Reeb quotient map k. This completes the proof of Lemma 5.1. We will now apply Lemma 5.1 to construct Reeb graph edit zigzags from straight-line homotopies.
By the surjectivity of q_{ρᵢ}, for every i there is x_{ℓ,i} ∈ X such that q_{ρᵢ}(x_{ℓ,i}) = rᵢ(ℓ). In conclusion, for every ℓ ∈ L, the spread s_L(ℓ) is bounded as required.

Corollary 5.7. δ_eGraph is a universal distance.
Proof. The claim is a direct consequence of inequality (3) together with Theorem 5.6 and Propositions 4.4 and 4.5.
"Computer Science",
"Mathematics"
] |
Autocompensating quantum cryptography
Quantum cryptographic key distribution (QKD) uses extremely faint light pulses to carry quantum information between two parties (Alice and Bob), allowing them to generate a shared, secret cryptographic key. Autocompensating QKD systems automatically and passively compensate for uncontrolled time-dependent variations of the optical fibre properties by coding the information as a differential phase between orthogonally polarized components of a light pulse sent on a round trip through the fibre, reflected at mid-course using a Faraday mirror. We have built a prototype system based on standard telecom technology that achieves a privacy-amplified bit generation rate of ~1000 bits s⁻¹ over a 10 km optical fibre link. Quantum cryptography is an example of an application that, by using quantum states of individual particles to represent information, accomplishes a practical task that is impossible using classical means.
Many groups have built a variety of systems for carrying out QKD, both over optical fibre and through free space; [3] provides an excellent recent review of the field.
In its simplest (and, to date, most practical) form, quantum information is encoded in the states of single photons (or extremely faint coherent light pulses) that travel from Alice to Bob. An equivalent [4], but more difficult, method for conveying quantum information is to let Alice and Bob share an entangled pair of photons [3]-[8]. In that case, a definite value of a physical quantity (such as angular momentum or energy) is associated with the pair state, but not with either of the photons individually. For these entangled-pair states, very strong correlations will exist in the results of measurements Alice and Bob make on their separate photons, allowing them to generate a cryptographic key from these results and public discussion of them. Eavesdropping necessarily reduces the degree of correlation between Alice and Bob's measurements, so the degree of correlation in the data can be used to calculate an upper bound on the amount of information leaked to possible adversaries.
Autocompensating interferometry as a basis for QKD
A general challenge for quantum information is to choose a coding scheme that avoids disturbance by the dominant noise sources. For QKD based on either fibre or free space optical systems, fluctuations of the medium through which the photons travel are a source of noise. In free space, atmospheric density fluctuations are locally highly isotropic. Because polarization coding is essentially a differential phase encoding between two orthogonal polarization states, density fluctuations give rise to 'common mode' phase fluctuations, which do not degrade the contrast between the polarization basis states. Perturbations of the photon's propagation direction by density fluctuations only reduce the collection efficiency and key generation rate.
The situation is quite different when QKD is carried out over optical fibres. Optical fibre systems are a natural choice for QKD over distances up to several tens of kilometres, particularly since fibre-optic networks already connect many computers. Unfortunately standard optical fibre (for example SMF-28) has small but significant birefringence due to variations in its structure and composition and due to mechanical stresses caused, for example, by bends and twists. Other optical components used in a fibre-optic network can also introduce birefringence. These effects lead to random time-varying changes of polarization state, so that constant measurement of the properties of the optical fibre and active feedback control of some sort of compensating optical devices is required to use polarization as a basis for coding quantum information.
While this active approach has been successfully used in fibre-optic QKD systems [9], it is also possible to build a system that inherently compensates for changing fibre characteristics by using a differential coding scheme. This idea can be understood by considering Martinelli's observation that light sent on a round trip through an optical fibre terminated with a Faraday mirror returns in the polarization state orthogonal to that in which it started, independent of the optical properties of the fibre [10]. This property arises from the fact that a Faraday mirror exchanges orthogonal polarization states, with the consequence that the total phase accumulated by light over the course of a round trip is independent of the input polarization. Because of this, information encoded as a differential phase between two orthogonal polarization states will be unaffected by birefringence in the system, provided that this birefringence does not change in the time it takes light to make a round trip through the fibre (2n/c ≈ 10 µs km⁻¹). The idea that a QKD system could be made insensitive to the state of the fibre by using round-trip propagation with a Faraday reflection mid-course was developed independently by groups at the University of Geneva [11]-[14] and IBM [15,16], who built systems that used 1.3 µm light. Operation at 1.55 µm, preferred for long-haul telecom systems, has also been demonstrated [17,18].
Mathematical treatment of autocompensating system
Martinelli originally used the Jones matrix formalism to show that a Faraday mirror can 'orthoconjugate' light, i.e., generally transform any polarization state into an orthogonally polarized state. This formalism can be extended to describe autocompensating QKD systems. We represent the transformation of the polarization state of light propagating forward and backward through an optical component or system by matrices T and T^R; for forward propagation, T = V [ e^{iX} 0 ; 0 e^{iY} ] U, where U and V are arbitrary unitary matrices that transform between the eigenmode basis of the optical system and any desired bases at the system ends, and e^{iX} and e^{iY} describe the phase and amplitude changes for each of the eigenmodes. The matrices for backward propagation through such components are derived from their forward counterparts by transposing and negating the off-diagonal elements (following the convention that backward-propagating fields be described in a right-handed coordinate system with reversed z- and x-axes) [19,20,21]. For reciprocal components, X and Y are independent of propagation direction. For non-reciprocal components such as Faraday rotators, reversing the propagation direction also changes the sign of the Faraday rotation angle, given by γ = ∆n_c k₀ L = V B k₀ L, where ∆n_c = ½(n₋ − n₊) (with n± the refractive indices for positive and negative helicity light), k₀ is the light wavevector in vacuo, V is the Verdet constant, B is the component of magnetic field parallel to k₀ and L is the path length in the medium. Thus the forward and backward polarization transformations through a Faraday rotator are related by F(γ) = F^R(−γ), and can be written as 2 × 2 rotation matrices for angles ±γ. Purely optically active media are described similarly, but with no sign change for γ in the reverse direction.
A Faraday mirror (FM), which is a combination of a 45° Faraday rotator and a normal mirror, transforms polarization states according to the Jones matrix FM = [ 0 −1 ; −1 0 ], which can be multiplied by a factor r to account for attenuation and phase shift due to the optics. It is readily verified that for unitary matrices such as U, U^R FM U = FM. Thus, for a round trip through a system described by T and terminated by an FM, the overall Jones matrix M is given by M = T^R (r FM) T = r e^{i(X+Y)} FM; that is, M is simply given by the FM matrix multiplied by overall phase and amplitude attenuation factors. If several components described by T_i of the form in equation (1) are concatenated before Faraday reflection, a round trip is described by the matrix M with X and Y replaced by ΣX_i and ΣY_i, respectively. The final polarization state of light after a round trip is still orthogonal to the initial state, independent of the properties of the optical system described by the T_i, and the total phase accumulated is Σ(X_i + Y_i), independent of the initial polarization state.
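The round-trip identity can be verified numerically. The sketch below (ours) uses the conventions reconstructed above: the backward matrix is the transpose with negated off-diagonal elements, and FM is the antidiagonal matrix given earlier. For any unitary T the round trip comes out proportional to FM, so the returned state is orthogonal to the input regardless of the (randomly chosen) fibre birefringence:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_unitary():
    """Random 2x2 unitary Jones matrix (arbitrary lossless birefringence)."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q @ np.diag(np.diag(r) / np.abs(np.diag(r)))

def reverse(t):
    """Backward propagation: transpose and negate off-diagonal elements."""
    tr = t.T.copy()
    tr[0, 1] *= -1
    tr[1, 0] *= -1
    return tr

FM = np.array([[0, -1], [-1, 0]], dtype=complex)   # Faraday mirror

T = random_unitary() @ random_unitary()            # fibre + optics, forward
M = reverse(T) @ FM @ T                            # full round trip

c = M[0, 1] / FM[0, 1]                 # overall phase/attenuation factor
assert np.allclose(M, c * FM)          # M equals FM up to a scalar

psi_in = np.array([1.0, 0.0], dtype=complex)       # horizontal input
psi_out = M @ psi_in
print(abs(np.vdot(psi_in, psi_out)))               # ~0: orthogonal return
```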
When a light pulse with horizontal polarization state ψᵢ = (a_H, 0)ᵀ is presented at the input of such a system, the state that is returned is −e^{i(ΣXᵢ+ΣYᵢ)} r (0, a_H)ᵀ, which has vertical linear polarization. Similarly, when the vertically polarized state (0, a_V)ᵀ is input to the system, the returned state is −e^{i(ΣXᵢ+ΣYᵢ)} r (a_V, 0)ᵀ, and is horizontally polarized. The prefactors are the same in both cases; thus no differential phase shift or attenuation is introduced between two orthogonally polarized light pulses that travel on a round trip through the same sequence of optical elements terminated by a FM. For QKD, this result allows Alice and Bob to robustly encode quantum information in the differential phase between orthogonally polarized pulses. In the preceding discussion, we assumed that the modal phase shifts X and Y were the same for outgoing and returning light. However, this will only be true if the system optical properties vary slowly compared with the round-trip time for light. By deliberately violating this condition using fast phase modulators, Alice and Bob can introduce useful differential phase shifts between the orthogonal polarization states. These shifts can be read out at Bob's station either polarimetrically (by combining the orthogonally polarized waves) or interferometrically (by combining the waves with parallel polarization) using appropriate optics. Systems built at IBM based on these two read-out methods are described below in sections 2 and 3, respectively.

Figure 1 shows the initial version of our quantum cryptography setup. Alice and Bob's setups, in the same lab but with separate computers and independent equipment, are only connected via a 10 km (or 20 km) SMF-28 optical fibre link that carries both quantum information at 1.3 µm and wavelength-multiplexed bi-directional timing and coordination pulses using 1.55 µm transceivers. They also communicate via the building LAN to carry on public discussions.
Experimental setup
In this optical design, a horizontally polarized pulse from the 1.31 µm DFB laser (Lucent D2304G) passes through a variable attenuator and bat-wing polarization adjuster (not shown) and is then fibre coupled to a bulk optical polarization and analysis module. The pulse passes through two polarizing beamsplitters (PBSs), PBS1 and PBS2 (with cancelling ±45° rotations between them due to the Faraday rotator and WP1), and then, after a 45° rotation by WP2, is split by PBS3 into H- and V-polarized components with amplitudes we can describe by the vector (a_H, a_V)ᵀ. The V component is delayed by a time τ by a loop of polarization-maintaining fibre before a second encounter with PBS3 directs it into the fibre toward Alice. Since pulses H and V pass through the system at different times, they can be independently traced through the system in order to determine their states upon returning to PBS3.

As shown in figure 2, an annealed-proton-exchange (APE) LiNbO₃ waveguide modulator can be used to apply an overall phase shift to the arbitrarily polarized pulse, even though the waveguide transmits only one linear polarization (y). The y-polarized component of ψ_{1H} is sent by PBS4 directly to PM_A. The x-component travels to the FM, is rotated to y-polarization by the FM, returns to PBS4 and is coupled into the loop by PBS4. PM_A is offset from the loop midpoint by an optical path L equal to the path from PBS4 to the FM, so that the counter-propagating x and y pulse components meet in PM_A having each travelled the same distance (C/2 + L) from PBS4, where C is the loop circumference. This arrangement places the modulator at the exact midpoint of the overall optical path. As the x and y components pass through PM_A at the same time in opposite directions, an equal phase shift φ_AH is imparted to both of them. The component that was originally x-polarized continues anticlockwise around the loop to PBS4. The component that was originally y-polarized travels clockwise around the loop to PBS4 and then on to the FM, where it is rotated to x-polarization and sent back to PBS4, where the two components are reunited.
The overall combined effect of PBS4, the fibre loop and PM_A, the FM, and the attenuation due to the WM and Alice's other components (which is included in r) can be described by a matrix FM′ of the same form as FM, multiplied by the phase factor e^{i(φ_s + φ_AH)}. Here φ_s = k_x L + k_y (C + L) is a static phase shift that takes into account the birefringence of the polarization-maintaining fibre path. This arrangement is thus equivalent to a standard FM plus a controllable phase shift, φ_AH, that can be intentionally applied using Alice's PM. After reflection, phase shifting and attenuation, as represented by the matrix FM′, pulse H returns through the fibre to PBS3. Thus H, which originally was horizontally polarized, returns to PBS3 vertically polarized and is directed through Bob's delay loop. The loop adds both a fixed phase, φ_τ, and a controllable phase φ_BH that Bob can apply using PM_B. H then arrives at PBS3 for the second time in a state whose attenuation and fixed phase factors have been combined into the new constant r′.
The V-polarized pulse.
The vertically polarized pulse V, initially in state ψ_{0V}, travels through the system in an analogous way, but passes through the delay loop immediately on departure. As it did for H, the loop adds the fixed phase φ_τ and a phase shift φ_BV, controlled by PM_B. V returns to PBS3 horizontally polarized, having acquired an intentional phase shift φ_AV + φ_BV.
Recall that the first pulse, H, is directed through Bob's delay loop on its return to PBS3 and arrives at PBS3 for the second time after a delay τ. Since τ is also the delay between H and V, H's second arrival at PBS3 coincides with V's arrival. The two orthogonally polarized pulses thus emerge from PBS3 at the same time, overlapping in space, and travel together toward WP2. The polarization state of the recombined pulse as it leaves PBS3 can be expressed by the vector ψ₃ = −r′ e^{iΣφ} ( e^{i∆φ} a_V , e^{−i∆φ} a_H )ᵀ, where Σφ collects the overall fixed and applied phases and ∆φ is half the difference of the applied phases. In typical operation, Alice sets φ_AH = 0 while the first pulse passes through the modulator and then rapidly switches to φ_AV ≠ 0 in a time short compared with the delay τ between the H and V pulses. Similarly, Bob turns his modulator off for all pulses leaving his station (φ_BV = 0), but sometimes applies a phase φ_BH ≠ 0 to returning pulses, so that ∆φ = (φ_AV − φ_BH)/2.
Polarization analysis for decoding the quantum bits.
After leaving PBS3, the light represented by state ψ₃ is rotated by WP2 to the final state ψ₄. Recalling that the state ψ₀ = (a_H, a_V)ᵀ was originally produced by the passage of a purely horizontally polarized state through WP2 in the forward direction, we can write (a_H, a_V) = (cos 2θ, sin 2θ), so that

ψ₄ = −r′ e^{iΣφ} [ −cos 2θ  sin 2θ ; sin 2θ  cos 2θ ] ( e^{i∆φ} sin 2θ , e^{−i∆φ} cos 2θ )ᵀ = −r′ e^{iΣφ} ( −i sin ∆φ sin 4θ , cos ∆φ − i sin ∆φ cos 4θ )ᵀ.
In the ideal case WP2 is rotated to the angle θ = π/8, and thus ψ₄ = −r′ e^{iΣφ} ( −i sin ∆φ , cos ∆φ )ᵀ. This shows that PBS2 directs all of the light to D₀ for ∆φ an even multiple of π/2, and to D₁ (via PBS1) for ∆φ an odd multiple of π/2. If WP2 is rotated by δ away from its ideal angle, the fraction of the intensity leaking to D₁ has a minimum value (4δ)² (when ∆φ = π), while the minimum intensity to D₀ (for ∆φ = 0) will be 0, independent of δ. A 2° misalignment of WP2, for example, will misroute ∼2% of the intensity for even ∆φ/(π/2), while for odd ∆φ/(π/2) all photons will be routed correctly. Thus, on average, a 2° misalignment of θ will contribute ∼1% to the bit error rate (BER). With their shared control over the differential phase shift ∆φ in equation (13), Alice and Bob can implement the four-state BB84 protocol. A possible assignment of basis and bit values to Alice and Bob's phase shifts is given in table 1. When Alice and Bob match bases, with both choosing either an odd or an even multiple of π/2, the photons are deterministically routed to a specific detector, depending on Alice's choice of bit.
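The deterministic routing for matched bases can be checked with a few lines of code. The sketch below is ours; since table 1 is not reproduced here, the specific assignment of phases to bases and bits is an assumption, but any assignment in which matched bases give ∆φ an even or odd multiple of π/2 behaves the same way:

```python
import numpy as np

# Hypothetical BB84 phase assignment: (basis, bit) -> Alice's phase.
alice_phase = {(0, 0): 0.0, (0, 1): np.pi,
               (1, 0): np.pi / 2, (1, 1): 3 * np.pi / 2}
bob_phase = {0: 0.0, 1: np.pi / 2}      # Bob's basis choice

def detector_probs(phi_a, phi_b):
    """Routing probabilities from psi_4 ~ (-i sin(dphi), cos(dphi))."""
    dphi = (phi_a - phi_b) / 2.0
    return np.cos(dphi) ** 2, np.sin(dphi) ** 2    # (P_D0, P_D1)

for (basis_a, bit), pa in alice_phase.items():
    for basis_b, pb in bob_phase.items():
        p0, p1 = detector_probs(pa, pb)
        tag = "match" if basis_a == basis_b else "mismatch"
        print(f"A:({basis_a},{bit}) B:{basis_b} {tag}: "
              f"P(D0)={p0:.2f} P(D1)={p1:.2f}")
```

For matched bases the probabilities come out 0 or 1 (deterministic routing); for mismatched bases they are 1/2 each, which is why those events are discarded during sifting.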
Single-photon detection using pulse-biased InGaAs avalanche photodiode (APD) detectors
Quantum cryptography requires detection of single photons. The circuit used for this task is shown in figure 3. The signal pulse is amplified and sent to an electronic gate (described in detail in [16]), giving the final output pulse shown by the bottom trace of figure 3(b), overlaid on a trace obtained with no photon signal present. The output pulses are sent to a discriminator followed by appropriate logic and counting modules. The transient cancellation is independent of pulse amplitude, pulse shape and photodiode characteristics. The detectors are pulse biased at 1 MHz, and with peak voltages ∼2 V above breakdown a quantum efficiency η ∼ 20% with dark count probability ∼2–4 × 10⁻⁵ is achieved. The 1.55 µm timing pulses sent from Bob to Alice are used to synchronize the modulator PM_A to the arriving light pulses, and are echoed from Alice to Bob to trigger Bob's detector bias pulser and drive a counter to keep track of the pulse number. The Fujitsu detectors have worked well, but operate best at quite low temperature and unfortunately are no longer available. However, recent work has identified devices (JDS Uniphase EPM 239 AA SS) that have acceptably good performance even above 200 K, making thermoelectric cooling an attractive option [22]-[24].

The all-fibre version of the system follows the elegant design of Ribordy et al [14], with the slight modification that polarization-maintaining fibre is used throughout, so no polarization adjusters are needed in the phase-sensitive portion of the optical path. Bob, rather than initially splitting a 45° polarized beam using a bulk-optical polarizing beam-splitter cube, splits his linearly polarized input pulse into two replicas using a 2 × 2 fibre coupler constructed with polarization-maintaining fibre. The upper leg of the splitter is connected to a fibre-optic PBS with a connector modified to couple two polarization-maintaining fibres with their fast axes perpendicular. The delayed pulse from the lower branch of the coupler reaches the PBS without polarization rotation, and leaves Bob's station polarized orthogonally to the undelayed pulse, as in the earlier setup. Alice also incorporates a fibre PBS with polarization-maintaining fibre to implement her PM loop. Keeping the light in single-mode fibre at both stations reduces the optical loss significantly and, most importantly, gives on/off interferometric switching contrast at the variable coupler of ∼650:1, an order of magnitude higher than the 62:1 contrast obtained with the bulk-optic polarizers. It is also simpler to use a fibre-optic circulator module to direct light to D₁ than to use the waveplate/Faraday-rotator/bulk PBS combination shown in figure 1.
Experimental setup
If we assume the initial coupler is symmetric, with amplitude transmission and reflection coefficients t and ir respectively (r and t real), we obtain expressions for the final amplitudes in the D₀ and D₁ channels analogous to those in equation (12), with the substitutions sin(4θ) → 2irt and cos(4θ) → (t² − r²). The minimum probability for photons to be routed to D₁ is (t² − r²)². A non-ideal coupler with a 55:45 power splitting ratio, for example, will direct ∼1% of the photons to D₁ at nominal null (∆φ = π). With careful adjustment of the polarization-maintaining variable coupler, we achieve a leakage rate three to four times lower than this.
System operation
Computers at Alice and Bob's stations, running LabVIEW™ programs, control the QKD system. They sequentially carry out the quantum information transfer, error correction and privacy amplification. The weak 1.3 µm pulses and the 1.55 µm timing pulses are carried over a single SMF-28 optical fibre, while coordinating signals and data for error correction and privacy amplification are sent using TCP/IP over the building LAN. Data obtained for 10 and 20 km fibre links with the all-fibre system are shown in figure 5. The raw key rates (squares) correspond to photons detected with matching bases for Alice and Bob (∼1/2 of the total detection rate). The error-corrected rates (circles) are based on the bits remaining after Alice's and Bob's computers carry out error correction over the LAN. Starting with the raw key, they shuffle and block the key data three times, each time comparing row parities and bisectively searching out the errors in rows with mismatched parities. The block sizes are chosen to leave ≲10 errors after three stages. The sparse remaining errors are found by comparing parities of randomly selected matching subsets of one-half of the remaining bits. If for a given draw the parities disagree (as will occur with probability 1/2 if there are any remaining errors), a bisective search is used to find and eliminate the error. Success is assumed after 20 consecutive parity matches occur. In both the block and random-subset operations, a bit is discarded for each parity bit revealed.
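The bisective search used above is the classic binary parity chase. A simplified single-error sketch (ours; the full procedure shuffles and blocks three times and discards one bit per revealed parity):

```python
import numpy as np

def parity(bits):
    return int(np.sum(bits) % 2)

def bisect_error(alice, bob, lo, hi):
    """Locate one error in bob[lo:hi] by halving the block and comparing
    parities; reveals about log2(hi - lo) parity bits."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice[lo:mid]) != parity(bob[lo:mid]):
            hi = mid
        else:
            lo = mid
    return lo

rng = np.random.default_rng(1)
alice = rng.integers(0, 2, 64)
bob = alice.copy()
bob[37] ^= 1                                 # one transmission error
if parity(alice) != parity(bob):             # block parities mismatch
    bob[bisect_error(alice, bob, 0, len(bob))] ^= 1
assert np.array_equal(alice, bob)
```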
As a by-product of the error correction procedure, an estimate of the initial BER is obtained. This estimate is needed for privacy amplification. The actual BERs as a function of µ are also plotted (diamonds) in figure 5. Based on the estimated BER and the measured value of µ, Alice and Bob compute an estimated upper bound, E, for the number of bits of information Eve possesses about their N_ec error-corrected bits. To generate privacy-amplified key bits, Alice's and Bob's computers continue to compute parities for matched subsets of one-half of the error-corrected bits (selected using a publicly transmitted random matrix), but now, rather than publicly comparing them, Alice and Bob keep these parities as secret key bits. (N_ec − E − s) bits are generated in this way. The privacy amplification theorem asserts that, for a chosen s, Eve's information about the final key will be at most ∼2⁻ˢ/ln 2 bits [1,2].
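The bookkeeping of this step is simple to sketch (ours; it adopts the simple BB84-style leak estimate discussed in the next paragraph, in which Eve is credited with fractions 2·BER and µ of the bits):

```python
import numpy as np

def privacy_amplified_bits(n_ec, ber, mu, s=30):
    """Final key length and Eve's residual-information bound.

    n_ec: error-corrected bits; ber: estimated bit error rate;
    mu: mean photon number per pulse; s: security parameter.
    """
    eve_bits = (2.0 * ber + mu) * n_ec      # estimated upper bound E
    n_final = max(int(n_ec - eve_bits - s), 0)
    leak_bound = 2.0 ** (-s) / np.log(2.0)  # ~2^-s / ln 2 bits of final key
    return n_final, leak_bound

n, bound = privacy_amplified_bits(n_ec=10_000, ber=0.02, mu=0.3)
print(n, bound)    # 6570 final bits; Eve's info <= ~1.3e-9 bits
```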
Privacy amplification
The maxima in the privacy-amplified key rates versus µ seen in figure 5 reflect the tradeoff between low photon arrival rate and high BER at low µ, and Eve's increasing ability to gain information about the pulses at high µ. The degree of vulnerability of weak laser pulses to eavesdropping is a central and interesting question. The rates for privacy-amplified key generation in figure 5 were calculated using the simple BB84 estimate that Eve obtains fractions 2·BER and µ of the sifted bits using read-and-replace and beamsplitting attacks, respectively [1,2]. For a low BER this approximation gives a key rate that goes as ∼µ(1 − µ), with a maximum near µ = 0.5 and falling to zero as µ → 1. For the 10 km link, using the BB84 leak estimate gives a maximum key rate ∼1.5 kbit s⁻¹ for µ = 0.3, while for the 20 km link, the ∼4 dB greater attenuation, higher backscattering, and consequently increased BER reduce the privacy-amplified key rate at µ = 0.3 by about an order of magnitude, to 200 bit s⁻¹.
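A rough numerical model (ours, not the exact calculation behind figure 5) reproduces this tradeoff: the sifted rate grows with µ while the secure fraction shrinks as 1 − 2·BER − µ, so the product peaks near µ = 0.5 for low BER:

```python
import numpy as np

def key_rate(mu, link_T=0.25, eta=0.2, ber=0.02, f_pulse=1e6):
    """Crude BB84 secure-rate model (all parameter values illustrative)."""
    sifted = 0.5 * f_pulse * mu * link_T * eta   # matched-basis detections
    secure_fraction = max(1.0 - 2.0 * ber - mu, 0.0)
    return sifted * secure_fraction

mus = np.linspace(0.05, 0.95, 19)
rates = [key_rate(m) for m in mus]
print(f"optimum near mu = {mus[int(np.argmax(rates))]:.2f}")
```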
Analyses that consider more sophisticated eavesdropping attacks have been made recently [25]-[33], in some cases granting Eve powers within the realm of known physics, but far beyond those provided by today's technology. Lütkenhaus [29]-[31], for example, considers a scenario where Eve can measure the photon number for each pulse, block the single-photon pulses, enhance the transmission of multi-photon pulses and learn the exact state for pulses containing two or more photons, allowing perfect re-sending. Gilbert and Hamrick (GH) [32] make a similar analysis, but allow Eve less information for two-photon pulses, arguing that their state cannot be analysed perfectly and that only those pulses where Eve and Bob each detect one photon contribute to Eve's knowledge of the error-corrected key. This is an important distinction, since double-photon pulses are relatively numerous. Figure 6 compares the key generation rates versus µ calculated according to the prescriptions of these authors for various levels of system efficiency with those given by the simple BB84 estimate (shown in blue) [1,2].

Figure 6. Secure key bits generated per pulse for several detector quantum efficiencies, η, and Alice-to-detector transmission values, T, according to the information leakage bounds given in [1,2,30] and [32].

The calculated curves assume dark count rates and backscattering levels typical of our system with the 10 km fibre link. The predicted net key rates are similar for ideal channels (quantum efficiency η and link transmission T both equal to unity), but diverge for lower η and T. Using the prescription of GH gives a bit rate of 200 bit s⁻¹ for our 10 km system rather than the 1.5 kbit s⁻¹ obtained with the simple BB84 prescription, and no net bits for the 20 km link. Our 20 km BER is increased significantly (by 2-3%) owing to the greater intensity of backscattered light at 1.3 µm arriving at the detectors. One solution to this problem is to run Bob's laser intermittently, storing a train of pulses behind Alice's attenuator in a delay line [14], and detecting the returning photons during periods when no backscattered photons are present. While the system duty-factor is somewhat reduced using this approach,
the very high cost of increased BER to key generation makes the tradeoff highly worthwhile for longer distances.
In considering what level of data sacrifice is required for adequate privacy amplification, it is helpful to consider the point of view of the Geneva group [3,33]: whereas perfect security would be infinitely costly and impractical, practical security to a very high level is achievable with current QKD systems if we accept quite reasonable limits on Eve's abilities. For example, large high-speed quantum memories with very long retention times are unavailable, and could be defeated simply by sufficiently delaying the exchange of basis information. Alternatively, Alice and Bob could generate all matching basis choices on the fly by using some initially shared secure key information, and never need to reveal the bases [34]! In either of these cases Eve would be forced to analyse the photons split from two-photon pulses without a priori knowledge of the bases, which halves the information she can obtain about them. The optimum value of µ and the net rate of key generation for our 10 km link approximately double if the GH leak estimate is modified in this way, as shown by one example in figure 6.
Conclusion
Autocompensating quantum cryptography systems make it possible to generate shared, secret cryptographic key data at kilobit-per-second rates over distances up to a few tens of kilometres. The advantage of the autocompensating approach is that it allows accurate optical readout of the quantum information with no monitoring or active control of the optical properties of the system, despite the presence of random, time-varying disturbances of the fibre link. This is accomplished by using a differential phase coding that is immune to these disturbances.
"Physics",
"Computer Science"
] |
BERT-Beta: A Proactive Probabilistic Approach to Text Moderation
Text moderation for user generated content, which helps to promote healthy interaction among users, has been widely studied and many machine learning models have been proposed. In this work, we explore an alternative perspective by augmenting reactive reviews with proactive forecasting. Specifically, we propose a new concept text toxicity propensity to characterize the extent to which a text tends to attract toxic comments. Beta regression is then introduced to do the probabilistic modeling, which is demonstrated to function well in comprehensive experiments. We also propose an explanation method to communicate the model decision clearly. Both propensity scoring and interpretation benefit text moderation in a novel manner. Finally, the proposed scaling mechanism for the linear model offers useful insights beyond this work.
Introduction
Text moderation is essential for maintaining a nontoxic online community for media platforms (Nobata et al., 2016). Many efforts from both academia and industry have been made to address this critical problem. Recently, the most prototypical thread is to do sophisticated feature engineering or develop powerful learning algorithms (Nobata et al., 2016; Badjatiya et al., 2017; Bodapati et al., 2019; Tan et al., 2020; Tran et al., 2020). Automatic comment moderation schemes plus human review are certainly the cornerstone of the fight against toxicity.
These existing works, however, are reactive approaches to handling user generated text in response to the publication of new articles. In this paper, we revisit this challenge from a proactive perspective. Specifically, we introduce a novel concept text toxicity propensity to quantify how likely an article is prone to incur toxic comments. This is a proactive outlook index for news articles prior to the publication, which differs radically from the existing reactive approaches to comments.
In this context, reactive describes comment-level moderation algorithms after the publication of news articles (e.g., Perspective (Perspectiveapi)), which quantifies whether comments are toxic and should be taken down or sent for human review. Proactive emphasizes article-level moderation effort before the publication (without access to comments), which forecasts how likely articles are to attract toxic comments in the future and gives suggestions (e.g., rephrase news articles properly) in advance. Our work can be viewed as the first machine learning effort for a proactive stance against toxicity.
Formally, we propose a probabilistic approach based on Beta distribution (Beta) to regress article toxicity propensity on article text. For previously published news articles with comments, we take the average of comments' toxicity scores as the ground-truth label for model learning. The effectiveness of this approach is shown in both test set and human labeling. We also develop a scheme that can provide convincing explanation to the decision of the deep learning model.
Recently, context in the form of parent posts has been studied, but it is only treated as regular text snippets for lifting the performance of toxicity classifiers (Pavlopoulos et al., 2020) while screening posts. Our work instead focuses on predicting the proactive toxicity propensity of articles before they receive user comments.
The Beta distribution is usually used as a prior in Bayesian statistics. The most popular example in natural language processing is the Topic Model (Blei et al., 2003), where the multivariate generalization of the Beta distribution (the Dirichlet distribution) generates the parameters of mixture models. Beta regression was originally proposed for modeling rate and proportion data (Ferrari and Cribari-Neto, 2004) by parameterizing the mean and dispersion and regressing the parameters of interest. It has been applied to evaluate grid search parameters in optimization (McKinney-Bock and Bedrick, 2019), to model emotional dimensions (Aggarwal et al., 2020), and to model statistical processes of child-adult linguistic coordination and alignment (Misiek et al., 2020).
Beta Regression
In this work, both the comment toxicity score and the derived article toxicity propensity score (detailed in the subsequent Section 4.1) range from 0 to 1. Empirically, their distributions exhibit an asymmetry and may not be modelled well by the Gaussian distribution (Figs. 2 and 3 of Appendix A). Furthermore, comment toxicity score distributions of individual articles vary with article content, as shown in Fig. 3 of Appendix A. Modelling the entire distribution of an article's comment toxicity scores is thus a reasonable approach. The Beta distribution is very flexible and can model quite a wide range of well-known distribution families, from the symmetric uniform (α = β = 1) and bell-shaped distributions (α = β = 2) to asymmetric shapes (α ≠ β).
In this context, the toxicity propensity score y is assumed to follow the Beta distribution with probability density function (pdf)

p(y; α, β) = y^(α−1) (1 − y)^(β−1) / B(α, β),    (1)

where α and β are two positive shape parameters that control the distribution, B(α, β) is the normalization constant, and the support satisfies y ∈ [0, 1]. Eq. 1 captures the probabilistic randomness given α and β; we thus impose a regression structure on them as a function of the text content. Formally, given a training set D = {(x_n, y_n)}_{n=1}^{N} with raw text feature vector x_n and label y_n for sample n, we apply feature engineering or a text embedding g(·) and then regress α_n (> 0) and β_n (> 0) on g(x_n) respectively as

α_n = f_α(g(x_n)),    β_n = f_β(g(x_n)),    (2)

where f_α(·) and f_β(·) are learned jointly. g(·) can be either pre-fixed or learned together with f_α(·) and f_β(·), as detailed in the subsequent section. Specifically, the learning procedure of f_α(·), f_β(·) and g(·) (if applicable) minimizes the negative log-likelihood loss L = −Σ_n log p(y_n; α_n, β_n). Substituting Eqs. 1 and 2 into it gives the final objective function.
In the inference phase, with the learned f_α(·), f_β(·) and g(·), the parameters α_m and β_m for a new sample x_m can be readily derived from Eq. 2. We take the mean of Eq. 1 as a point estimator, ŷ_m = α_m / (α_m + β_m), because we are predicting the average toxicity.
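To make the regression structure concrete, the following is a minimal sketch of Eqs. 1–2 and the likelihood-based loss in PyTorch. It is an illustration rather than the authors' released implementation; in particular, the softplus link used to keep α and β positive and the use of torch.distributions are assumptions.

```python
import torch
import torch.nn as nn

class BetaRegressionHead(nn.Module):
    """Single-layer heads f_alpha and f_beta on top of a text embedding g(x)."""
    def __init__(self, embed_dim):
        super().__init__()
        self.f_alpha = nn.Linear(embed_dim, 1)
        self.f_beta = nn.Linear(embed_dim, 1)

    def forward(self, g_x):
        # softplus keeps the two shape parameters strictly positive (assumed link)
        alpha = nn.functional.softplus(self.f_alpha(g_x)) + 1e-6
        beta = nn.functional.softplus(self.f_beta(g_x)) + 1e-6
        return alpha, beta

def beta_nll(alpha, beta, y, eps=1e-6):
    """Negative log-likelihood of the Beta distribution (the training loss)."""
    y = y.view(-1, 1).clamp(eps, 1 - eps)  # propensity labels live in (0, 1)
    return -torch.distributions.Beta(alpha, beta).log_prob(y).mean()

def predict_propensity(alpha, beta):
    """Point estimate at inference time: the Beta mean alpha / (alpha + beta)."""
    return alpha / (alpha + beta)
```

With g(·) being either the TF-IDF featurizer or the BERT [CLS] embedding, training amounts to back-propagating beta_nll through the two heads (and, for BERT-β, through the encoder as well).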
Dataset
We collect a dataset of articles published on Yahoo media outlets, all written in English. We also exclude articles with low comment volume to make the distribution learning reliable. The number of comments for 99% of the analyzed articles lies in [10, 8K], with a 25% quantile of 20, a median of 50, and a mean of 448. The dataset is then split into training, validation and test parts based on publishing date with a ratio of 8:1:1, as described in Table 1. It is worth noting that the input text x_n is the concatenation of the article title and text body. The toxicity propensity score y_n of article n is defined as the average toxicity score of all associated comments. Comments are scored by Google's Perspective (Perspectiveapi), and scores lie in [0, 1]. Perspective takes user-generated text as input and outputs a toxicity probability. It is a convolutional neural network (Noever, 2018) trained on a Wikipedia comments dataset 1 labeled by multiple people under majority rule.
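As a small illustration of how the article-level label is derived from per-comment scores, the snippet below averages Perspective toxicity scores per article; the column names and the pandas workflow are illustrative assumptions, not the actual data pipeline.

```python
import pandas as pd

# one row per comment, with its Perspective toxicity score in [0, 1]
comments = pd.DataFrame({
    "article_id": ["a1", "a1", "a2", "a2", "a2"],
    "toxicity":   [0.10, 0.80, 0.05, 0.15, 0.10],
})

# article toxicity propensity y_n = mean toxicity of the article's comments
propensity = comments.groupby("article_id")["toxicity"].mean()
print(propensity)  # a1 -> 0.45, a2 -> 0.10
```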
Experiment Setup
In Eq. 2, we set both f_α(·) and f_β(·) to single-layer neural networks. For g(·), we experiment with either Bag of Words (BOW) or BERT embeddings (Devlin et al., 2019). Specifically, for BOW we take uni-gram and bi-gram word sequences and compute the corresponding Term Frequency-Inverse Document Frequency (TF-IDF) vectors, which leads to around 5.8 million tokens. For BERT, we take the base version and fine-tune f_α(·) and f_β(·) on top of the [CLS] embedding, which ends up with 110 million parameters. If the input text exceeds the maximum length (510, as [CLS] and [SEP] are reserved), we adopt a simple yet effective truncation scheme (Sun et al., 2019): we empirically keep the first 128 and the last 382 tokens of long text. The rationale is that the informative snippets are more likely to reside at the beginning and the end. The batch size is 16 and the learning rate is 1e−5 with the Adam optimizer (Kingma and Ba, 2015). The two variants are called BOW-β and BERT-β for short.
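The head-plus-tail truncation can be written in a few lines; the sketch below uses the standard HuggingFace tokenizer interface as an assumed tooling choice, while the split sizes follow the text above.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def truncate_head_tail(text, head=128, tail=382, max_len=510):
    """Keep the first `head` and last `tail` tokens when the text is too long.
    510 content tokens, since [CLS] and [SEP] occupy the remaining two slots."""
    ids = tokenizer.encode(text, add_special_tokens=False)
    if len(ids) > max_len:
        ids = ids[:head] + ids[-tail:]
    return tokenizer.build_inputs_with_special_tokens(ids)
```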
Baseline Methods and Metrics
We compare with the linear regression method using BOW features, as well as with the BERT base model. Both are combined with one of two loss functions, Mean Absolute Error (MAE) or Mean Squared Error (MSE). We call them BOW-MAE, BOW-MSE, BERT-MAE and BERT-MSE, respectively. The experiment settings are the same as for Beta regression.
Since we are interested in identifying articles of high toxicity propensity, we want to make sure that an article with high average toxicity is ranked higher than one with low propensity. Thus, in addition to mean absolute error, root mean squared error (RMSE) and the area under the Precision-Recall curve (AUC@PR), we measure performance using two ranking metrics, Kendall's coefficient (Kendall) and Spearman's coefficient (Spearman).
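All of these metrics are available off the shelf; the helper below is a minimal sketch using SciPy and scikit-learn, which are assumed libraries rather than ones named by the paper, and the binarisation threshold for AUC@PR is likewise an illustrative assumption.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr
from sklearn.metrics import average_precision_score, mean_absolute_error, mean_squared_error

def evaluate(y_true, y_pred, high_propensity_threshold=0.5):
    """Regression, ranking and PR-curve metrics for propensity predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return {
        "MAE": mean_absolute_error(y_true, y_pred),
        "RMSE": float(np.sqrt(mean_squared_error(y_true, y_pred))),
        "Kendall": kendalltau(y_true, y_pred)[0],
        "Spearman": spearmanr(y_true, y_pred)[0],
        # AUC under the precision-recall curve after binarising the labels
        "AUC@PR": average_precision_score(y_true >= high_propensity_threshold, y_pred),
    }
```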
Results
We perform evaluation on the whole test set and on human labels.
Test Set
Table 2 details the performance comparisons. Overall, Beta regression stands out across the different metrics regardless of the feature engineering, owing to its modeling flexibility. BERT-based methods also outperform BOW ones in terms of feature engineering and representation. This is reasonable, as the former has roughly 20 times as many parameters as the latter and offers contextual embeddings. Interestingly, the MAE and MSE schemes do not achieve the minimum MAE and RMSE even though they optimize these objectives directly, which might result from the limitation of the point estimator.
Human Labels
As the labels are machine-derived, we want a sanity check to ensure that the model decisions conform to human intuition. Namely, when the model classifies an article as having high toxicity propensity, we want to make sure that this correlates well with human judgement. To this end, we divide the test set into 10 equal buckets with an interval of 0.1 and merge the last 4 buckets into [0.6, 1] because much fewer articles score above 0.6 (as shown in Fig. 2). We then randomly take 100 samples per bucket, set aside 10% for human training, and have the remaining samples labelled by human judges as the benchmark set. We recruit two groups of people for independent annotation; each annotator is required to pick one of five levels (a reasonable balance between smoothness and accuracy for manually labelling toxicity propensity, per the judges' suggestion) to describe the extent to which an article is likely to attract toxic comments: Very Unlikely (VU), Unlikely (U), Neutral (N), Likely (L) and Very Likely (VL). Table 3 is the confusion matrix showing how much the two groups of human judges agree with each other. Moreover, Cohen's Kappa is about 0.23 when taking expected chance agreement into account 2. In light of this, we jointly score the set by assigning −2, −1, 0, 1 and 2 to VU, U, N, L and VL, respectively. Since each article has two labels, their sum gives an integer score in [−4, 4]. Table 4 reports the performance with human labels as the ground truth, which confirms the previous finding that BERT-β performs best. Additionally, we pick scores 2, 3 and 4 as thresholds to monitor precision and recall curves (Fig. 1). Likewise, the proposed schemes achieve compelling performance across the board.
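The joint scoring and agreement computation described above are straightforward to reproduce. The snippet below maps the five levels to −2…2 and computes Cohen's Kappa; the label encoding follows the text, while the scikit-learn call is an assumed implementation choice.

```python
from sklearn.metrics import cohen_kappa_score

LEVELS = {"VU": -2, "U": -1, "N": 0, "L": 1, "VL": 2}

def joint_scores(labels_group1, labels_group2):
    """Sum the two annotations per article, giving an integer score in [-4, 4]."""
    return [LEVELS[a] + LEVELS[b] for a, b in zip(labels_group1, labels_group2)]

def chance_corrected_agreement(labels_group1, labels_group2):
    """Cohen's Kappa between the two annotator groups (about 0.23 in the paper)."""
    return cohen_kappa_score(labels_group1, labels_group2)
```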
Taken together, our probabilistic methods agree more with both machine and human judgements.
Explanation
As we focus on pre-publication text moderation, a reasonable explanation is an essential step to convince stakeholders of subsequent operations. For BERT-β explanation, we adopt gradient-based saliency map variants from computer vision (Simonyan et al., 2013; Shrikumar et al., 2017). We compute the gradient ∇f(x) with respect to the input token embeddings e(x), where f(x) = α(x)/(α(x) + β(x)) is the mean prediction for sample x (Section 3), and x = (t_1, t_2, ..., t_L) with t_l (l = 1, 2, ..., L) a single token. Each element of ∇f(x) is the partial derivative ∂f/∂e(t_l)(x), which measures the token-level contribution to the scoring. The explanation is conducted by assuming the article is controversial, and we want to figure out which words cause some comments to be toxic. It therefore also makes sense to maximize the maximum toxicity of the comments. We thus experiment with f(x) = (α(x) − 1)/(α(x) + β(x) − 2), which is the mode (corresponding to the peak of the Beta pdf) under the reasonable assumption α, β > 1. We denote the resulting scheme by the subscript "mode".
For the saliency map (SM) (Simonyan et al., 2013), the metric is ‖∂f/∂e(t_l)(x)‖_2, which has no direction. A variant is the dot product (DP) between the token embedding and the gradient element, e(t_l)^T · ∂f/∂e(t_l)(x), which has direction (Shrikumar et al., 2017). We also propose a hybrid (HB) scheme that takes the magnitude of SM and the direction of DP to form a new metric. In addition, we perform an ablation study (AS) that deletes each single token t_l in turn and computes the score discrepancy between the original x and x_¬l. As a reference, we examine the regression coefficients (RC) of the linear BOW-MSE model, which are easy to inspect for explaining the contribution of the corresponding words.
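A minimal sketch of the SM, DP and HB token scores is given below, assuming the token embeddings are exposed as a tensor with gradients enabled and that f returns the scalar propensity prediction; it illustrates the metrics defined above rather than reproducing the authors' code.

```python
import torch

def token_attributions(f, token_embeddings):
    """token_embeddings: (L, d) tensor with requires_grad=True.
    f: callable mapping the embeddings to the scalar propensity prediction."""
    score = f(token_embeddings)
    grads = torch.autograd.grad(score, token_embeddings)[0]   # shape (L, d)

    sm = grads.norm(dim=-1)                        # saliency map: gradient magnitude
    dp = (token_embeddings * grads).sum(dim=-1)    # dot product: e(t_l)^T grad, signed
    hb = sm * dp.sign()                            # hybrid: SM magnitude with DP sign
    return sm, dp, hb
```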
A few well-trained human judges are recruited to tag the k most important words (k is example-specific and determined by the annotators). We then rank tokens with the different metrics and pick the top k as candidates. The hit rate (the proportion of human-annotated tokens covered by a scheme) is used to compare the different tools. We take 1,000 examples for human review and compute the average hit rate, as compared in Table 5. All schemes for BERT-β are much better than the linear scheme RC, which is consistent with the discrepancy in predictive performance. SM and HB are close and outperform the black-box ablation study, which implies the valuable role of model-aware gradients in the explanation. DP is inferior to AS and is less consistent with human annotations than the other gradient-based methods. In practice, we take SM for the explanation (Appendix B) due to its performance and simplicity. As expected, the mode variant (SM_mode) covers more annotated words than the mean variant (SM) on average (more discussion in Appendix C).
The scaling weight w is pre-computed from the training corpus X, where y ∈ [0, 1]^{N×1} and τ(X) ∈ Z^{N×M} (M = 5.8 million) are the training labels and the TF-IDF matrix, and ȳ and τ̄(X) are their column-wise means. The pre-computed w can be viewed as a surrogate of the regression coefficient of the linear regression problem, and it is used to scale the TF-IDF features of BOW-MSE in both the training and inference phases. We call this Naive Bayes Linear Regression (NBLR) for short.
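Since the exact defining equation for w is not reproduced here, the sketch below shows one plausible reading that is consistent with the description above: a per-feature least-squares slope computed from the column-wise-centred TF-IDF matrix and the centred labels. This is an assumption for illustration, not the authors' formula, and a sparse implementation would be needed at the stated scale of M = 5.8 million features.

```python
import numpy as np

def nblr_weights(tfidf, y, eps=1e-12):
    """Plausible surrogate regression coefficient per feature (assumption):
    the univariate least-squares slope of y on each centred TF-IDF column."""
    tfidf_c = tfidf - tfidf.mean(axis=0, keepdims=True)
    y_c = (y - y.mean())[:, None]
    return (tfidf_c * y_c).sum(axis=0) / ((tfidf_c ** 2).sum(axis=0) + eps)

def scale_features(tfidf, w):
    """Scale TF-IDF features by the pre-computed w before the linear regression."""
    return tfidf * w[None, :]
```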
The scaling benefits the performance, as shown in Table 6. As can be seen, NBLR improves upon BOW-MSE significantly, although it is not as good as BERT-β. Our work can benefit text moderation. The proactive propensity offers a toxicity outlook for comments, which could be utilized in multiple ways. For example, stricter moderation rules can be enforced for articles that are predicted to have a high toxicity propensity. Furthermore, the propensity could be used as an additional feature for downstream reactive toxicity recognition models, as well as for the allocation of appropriate human resources.
The explanation tool can also be used to remind editors to rephrase controversial words and so mitigate the odds of attracting toxic comments. Text moderation is an important yet challenging task, and our proactive work attempts to open up a new perspective that augments the traditional reactive procedure. Our current model, however, is not perfect, as shown by article b in Fig. 3 of Appendix A, where the learned distribution does not fit the observed histogram well. Technically, NBLR is an encouraging lightweight extension to linear regression. Likewise, we will continue to work towards improving the non-linear Beta regression.
Conclusion
We approach text moderation by developing a well-motivated probabilistic model to learn a proactive toxicity propensity. An explanation scheme is also proposed to visually explain the connection between this new prospective score and the text content. Our experiments show the superior performance of the proposed BERT-β algorithm, compared with a number of baselines, in predicting both the average toxicity score and the human judgement.
A Toxicity Score and Beta Distribution
The distribution of news articles' toxicity propensity scores is reported in Fig. 2. Comment score distributions of two articles, together with the predictive distributions, are given in Fig. 3.
B SM Explanation Examples
We pick two samples from the test set and then leverage SM from Section 4.5 to highlight key words for illustration purposes, as shown in Fig. 4. The color intensity is proportional to the normalized saliency map value: the darker the color of a token, the more important it is to the scoring. There is also a positional bias towards the first sentence, as it is the article title.
C BERT-β mode
We also explore the mode of BERT-β as a point estimator and compare it with the mean. Table 7 details the performance discrepancy on the test set and on human labels. For toxicity propensity prediction on the test set, it makes sense for the mean to slightly outperform the mode, as the ground-truth labels are the mean of the comment scores. When it comes to human labels and explanation, people annotate news articles based on the perceived controversial words most likely to incur toxic comments. The mode is thus able to capture the worst case better and agrees more with human annotations. This finding is in line with the better explanation performance, as compared in Table 5.
"Computer Science"
] |
Tunable GHz pulse repetition rate operation in high-power TEM00-mode Nd:YLF lasers at 1047 nm and 1053 nm with self mode locking
We report on a high-power diode-pumped self-mode-locked Nd:YLF laser with the pulse repetition rate up to several GHz. A novel tactic is developed to efficiently select the output polarization state for achieving the stable TEM00-mode self-mode-locked operations at 1053 nm and 1047 nm, respectively. At an incident pump power of 6.93 W and a pulse repetition rate of 2.717 GHz, output powers as high as 2.15 W and 1.35 W are generated for the σand π-polarization, respectively. We experimentally find that decreasing the separation between the gain medium and the input mirror not only brings in the pulse shortening thanks to the enhanced effect of the spatial hole burning, but also effectively introduces the effect of the spectral filtering to lead the Nd:YLF laser to be in a second harmonic mode-locked status. Consequently, pulse durations as short as 8 ps and 8.5 ps are obtained at 1053 nm and 1047 nm with a pulse repetition rate of 5.434 GHz. ©2012 Optical Society of America OCIS codes: (140.3480) Lasers, diode-pumped; (140.3530) Lasers, neodymium; (140.3580) Lasers, solid-state; (140.4050) Mode-locked lasers. References and links 1. U. Keller, “Recent developments in compact ultrafast lasers,” Nature 424(6950), 831–838 (2003). 2. K. J. Weingarten, D. C. Shannon, R. W. Wallace, and U. Keller, “Two-gigahertz repetition-rate, diode-pumped, mode-locked Nd:YLF laser,” Opt. Lett. 15(17), 962–964 (1990). 3. G. P. A. Malcolm, P. F. Curley, and A. I. Ferguson, “Additive-pulse mode locking of a diode-pumped Nd:YLF laser,” Opt. Lett. 15(22), 1303–1305 (1990). 4. T. Juhasz, S. T. Lai, and M. A. Pessot, “Efficient short-pulse generation from a diode-pumped Nd:YLF laser with a piezoelectrically induced diffraction modulator,” Opt. Lett. 15(24), 1458–1460 (1990). 5. K. J. Weingarten, U. Keller, T. H. Chiu, and J. F. Ferguson, “Passively mode-locked diode-pumped solid-statelasers that use an antiresonant Fabry Perot saturable absorber,” Opt. Lett. 18(8), 640–642 (1993). 6. M. B. Danailov, G. Cerullo, V. Magni, D. Segala, and S. De Silvestri, “Nonlinear mirror mode locking of a cw Nd:YLF laser,” Opt. Lett. 19(11), 792–794 (1994). 7. S. D. Pan, J. L. He, Y. E. Hou, Y. X. Fan, H. T. Wang, Y. G. Wang, and X. Y. Ma, “Diode-end-pumped passively CW mode-locked Nd:YLF laser by the LT-In0.25Ga0.75As absorber,” IEEE J. Quantum Electron. 42(10), 1097–1100 (2006). 8. S. D. Pan and Y. G. Wang, “Diode end-pumped passively mode-locked Nd:YLF laser at 1047 nm using singlewall carbon nanotubes based saturable absorber,” Laser Phys. 21(8), 1353–1357 (2011). 9. U. Keller, K. J. Weingarten, F. X. Kärtner, D. Kopf, B. Braun, I. D. Jung, R. Fluck, C. Hönninger, N. Matuschek, and J. A. D. Au, “Semiconductor saturable absorber mirrors (SESAM’s) for femtosecond to nanosecond pulse generation in solid-state lasers,” IEEE J. Sel. Top. Quantum Electron. 2(3), 435–453 (1996). 10. S. Tsuda, W. H. Knox, S. T. Cundiff, W. Y. Jan, and J. E. Cunningham, “Mode-locking ultrafast solid-state lasers with saturable Bragg reflectors,” IEEE J. Sel. Top. Quantum Electron. 2(3), 454–464 (1996). 11. C. Hönninger, R. Paschotta, F. Morier-Genoud, M. Moser, and U. Keller, “Q-switching stability limits of continuous-wave passive mode locking,” J. Opt. Soc. Am. B 16(1), 46–56 (1999). 12. H. C. Liang, R. C. C. Chen, Y. J. Huang, K. W. Su, and Y. F. Chen, “Compact efficient multi-GHz Kerr-lens mode-locked diode-pumped Nd:YVO4 laser,” Opt. Express 16(25), 21149–21154 (2008). 13. H. C. Liang, Y. J. Huang, W. C. Huang, K. W. Su, and Y. F. 
Chen, “High-power, diode-end-pumped, multigigahertz self-mode-locked Nd:YVO4 laser at 1342 nm,” Opt. Lett. 35(1), 4–6 (2010). 14. H. C. Liang, Y. J. Huang, P. Y. Chiang, and Y. F. Chen, “Highly efficient Nd:Gd0.6Y0.4VO4 laser by direct in-band pumping at 914 nm and observation of self-mode-locked operation,” Appl. Phys. B 103(3), 637–641 (2011). 15. A. V. Okishev and W. Seka, “Diode-pumped Nd:YLF master oscillator for the 30-kJ (UV), 60-beam OMEGA laser facility,” IEEE J. Sel. Top. Quantum Electron. 3(1), 59–63 (1997). 16. M. S. Ribeiro, D. F. Silva, E. P. Maldonado, W. de Rossi, and D. M. Zezell, “Effects of 1047-nm neodymium laser radiation on skin wound healing,” J. Clin. Laser Med. Surg. 20(1), 37–40 (2002). 17. Y. J. Huang, C. Y. Tang, W. L. Lee, Y. P. Huang, S. C. Huang, and Y. F. Chen, “Efficient passively Q-switched Nd:YLF TEM00-mode laser at 1053 nm: selection of polarization with birefringence,” Appl. Phys. B, doi:10.1007/s00340-012-4933-9. 18. G. Q. Xie, D. Y. Tang, L. M. Zhao, L. J. Qian, and K. Ueda, “High-power self-mode-locked Yb:Y2O3 ceramic laser,” Opt. Lett. 32(18), 2741–2743 (2007). 19. B. Braun, K. J. Weingarten, F. X. Kärtner, and U. Keller, “Continuous-wave mode-locked solid-state lasers with enhanced spatial hole burning: Part I,” Appl. Phys. B 61(5), 429–437 (1995); F. X. Kärtner, B. Braun, and U. Keller, “Continuous-wave mode-locked solid-state lasers with enhanced spatial hole burning: Part II,” Appl. Phys. B 61(6), 569–579 (1995). 20. Y. J. Huang, Y. P. Huang, H. C. Liang, K. W. Su, Y. F. Chen, and K. F. Huang, “Comparative study between conventional and diffusion-bonded Nd-doped vanadate crystals in the passively mode-locked operation,” Opt. Express 18(9), 9518–9524 (2010). 21. Y. F. Chen, Y. J. Huang, P. Y. Chiang, Y. C. Lin, and H. C. Liang, “Controlling number of lasing modes for designing short-cavity self-mode-locked Nd-doped vanadate lasers,” Appl. Phys. B 103(4), 841–846 (2011).
Introduction
High-power mode-locked solid-state lasers with multi-GHz pulse repetition rates are attractive for a great number of applications, including high-speed optical sampling, optical clocking, ultrafast spectroscopy, high-capacity telecommunication, and so on [1]. The Nd:YLF crystal is specifically characterized by a negative dependence of the refractive index on temperature, which can partly compensate for the positive contribution from the end-face bulging of the gain medium, so that it exhibits a relatively weak thermal-lensing effect. As a consequence, the Nd:YLF crystal is recognized as one of the most competitive candidates for constructing high-power lasers with excellent output beam quality. In addition, the natural birefringence enables the Nd:YLF crystal to easily emit a linearly polarized beam and completely eliminates the possibility of thermal depolarization under high-power operation. More importantly, the Nd:YLF crystal has a gain bandwidth wider than 1 nm at the 4F3/2 → 4I11/2 transition line, which is more favorable for generating short-duration mode-locked pulses compared with other Nd-doped laser crystals. Over the past years, continuous-wave (CW) mode-locked operation of the Nd:YLF crystal has been successfully realized with various methods [2–8]. However, high-power mode-locked Nd:YLF lasers with pulse repetition rates up to several GHz have scarcely been demonstrated so far.
Passive mode locking with a semiconductor saturable absorber mirror (SESAM) is by far the most powerful technique for obtaining ultrashort pulses with pulse repetition rates in the several-hundred-MHz range [9,10]. To further increase the pulse repetition rate into the GHz region for a passively mode-locked laser, the main challenge is to overcome the tendency towards Q-switched mode locking, where the pulse train is modulated by the Q-switched envelope [11]. As a result, the SESAM usually needs to be intricately designed to have a very small modulation depth (typically below 1%) and low saturation fluence to achieve a multi-GHz passively mode-locked laser without Q-switching instability, which undoubtedly increases the difficulty and cost of fabricating the SESAM. The fact that only an extremely small modulation depth and low saturation fluence are required makes it possible to utilize the nonlinear effect of the laser crystal itself as an alternative means of realizing a high-repetition-rate mode-locked laser. In fact, it was recently found that the third-order nonlinearity of the gain medium could be used to achieve fairly stable multi-GHz operation in Nd-doped vanadate crystals via the mechanism of self mode locking [12–14]. Such a self-mode-locked mechanism relies on the fact that the number of oscillating longitudinal modes in a multi-GHz mode-locked laser based on Nd-doped crystals, which are characterized by a narrow gain bandwidth, is not large, generally less than about 20 modes.
Consequently, the third-order nonlinearity of the gain medium can offer enough locking strength to lock several tens of oscillating longitudinal modes in a short cavity, accomplishing a high-repetition-rate mode-locked laser without any additional modulation elements other than the gain medium inside the laser cavity, as experimentally verified in our previous works [12–14]. It is worthwhile to point out that the previously reported high-repetition-rate self-mode-locked lasers were mainly focused on performance at 1064 nm and 1342 nm. One of the attractive features of the Nd:YLF crystal is its emission lines at 1053 nm and 1047 nm. The 1053-nm line is inherently useful in developing a master oscillator for the Nd:glass power amplifier [15], while the 1047-nm line is found to play an important role in skin wound healing [16]. Therefore, high-power high-repetition-rate mode-locked Nd:YLF lasers at 1053 nm and 1047 nm are highly desirable.
In this work, we report our experimental observations of a diode-pumped self-mode-locked Nd:YLF laser with a multi-GHz pulse repetition rate for the first time. A novel approach based on the natural birefringence of a wedged Nd:YLF crystal and the alignment sensitivity of an optical resonator is utilized for efficient selection of the output polarization state. With the developed method, stable self-mode-locked operations with a TEM00 transverse mode are accomplished for the σ- and π-polarization, respectively. At an incident pump power of 6.93 W, this compact pulsed laser produces output powers up to 2.15 W and 1.15 W at 1053 nm and 1047 nm under a pulse repetition rate of 2.717 GHz. We find that decreasing the separation between the gain medium and the input mirror brings in pulse shortening thanks to the enhanced effect of spatial hole burning (SHB), with the shortest pulse widths of 8 ps and 8.5 ps obtained for the σ- and π-polarization, respectively. Furthermore, the occurrence of second harmonic mode locking rather than fundamental mode locking is experimentally observed when the gain medium is closely adjacent to the input mirror.
Experimental setup
The experimental setup for the diode-pumped self-mode-locked Nd:YLF laser is schematically sketched in Fig. 1. The input mirror was a concave mirror with a radius of curvature of 500 mm. It was antireflection (AR) coated at 806 nm on the entrance face, and was coated for high transmission at 806 nm as well as for high reflection at 1053 nm on the second surface. The gain medium was a 0.8 at.% a-cut Nd:YLF crystal with dimensions of 3 × 3 × 20 mm3. Both facets of the laser crystal were AR coated at 806 nm and 1053 nm. Besides, the second surface of the gain medium was wedged at an angle θw = 3° with respect to the first surface for efficient selection of the output polarization state as well as absolute elimination of the etalon effect. The laser crystal was wrapped with indium foil and mounted in a water-cooled copper heat sink at a temperature of 16 °C. The pump source was an 806-nm fiber-coupled laser diode with a core diameter of 400 µm and a numerical aperture of 0.14. The pump beam, with a spot radius of approximately 200 µm, was reimaged inside the laser crystal with a lens set with a focal length of 25 mm and a coupling efficiency of 90%. A flat wedged mirror with a reflectivity of 90% in the range of 1040-1060 nm was used as the output coupler during the experiment; it was experimentally found to allow the maximum output power under CW mode-locked operation and to provide sufficient intracavity energy for stable self mode locking simultaneously. The optical cavity length was set to be about 55 mm, which corresponds to a free spectral range of 2.717 GHz. By using the ABCD-matrix theory, the cavity mode radius inside the laser crystal was calculated to be 230 µm. It should be mentioned that stable self-mode-locked operation could be realized with optical cavity lengths ranging from 45 mm to 100 mm, corresponding to free spectral ranges of 1.5-3 GHz.
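As a quick consistency check of the quoted numbers, the repetition rate of a linear cavity follows from its round-trip time. The short sketch below uses the simple relation f_rep ≈ c/(2L) for the empty-cavity free spectral range, neglecting the additional optical path introduced by the crystal; it is an order-of-magnitude illustration rather than part of the reported analysis.

```python
c = 2.998e8  # speed of light in vacuum, m/s

def repetition_rate(cavity_length_m):
    """Fundamental longitudinal-mode spacing (free spectral range) of a linear cavity."""
    return c / (2.0 * cavity_length_m)

print(repetition_rate(0.055) / 1e9)  # ~2.7 GHz for the ~55 mm cavity
print(repetition_rate(0.100) / 1e9)  # ~1.5 GHz, lower end of the tunable range
```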
The real-time temporal behaviors of the mode-locked pulses were recorded by a high-speed InGaAs photodetector with a rise time of 35 ps, and the recorded signal was connected to a digital oscilloscope (Agilent, DSO 80000) with an electrical bandwidth of 12 GHz and a maximum sampling interval of 25 ps. The output signal of the photodetector was also delivered to a radio frequency (RF) spectrum analyzer (Advantest, R3256A) with a bandwidth of 8 GHz. The fine structure of the mode-locked pulses was measured with the help of a commercial autocorrelator (APE pulse check, Angewandte Physik and Elektronik GmbH). A Fourier optical spectrum analyzer (Advantest, Q8347), which is constructed around a Michelson interferometer, was employed to monitor the spectral information with a resolution of 0.003 nm.
Performance of the self-mode-locked Nd:YLF laser
On the basis of the combined effect of the different deflection angles for the σ- and π-polarization, due to the natural birefringence of a wedged laser crystal, and the alignment sensitivity of an optical resonator, we experimentally verify that the output polarization state of the Nd:YLF laser can be switched simply by tilting the orientation of the output coupler [17]. Note that the σ- and π-polarization in the Nd:YLF crystal correspond to the emission lines at 1053 nm and 1047 nm, respectively. The Nd:YLF laser was then finely adjusted into a steady CW mode-locked state by monitoring the real-time oscilloscope traces. The separation d between the laser crystal and the input mirror was initially set to be around 8 mm. At a pulse repetition rate of 2.717 GHz, the output powers for CW mode locking at 1053 nm and 1047 nm as a function of the incident pump power at 806 nm are illustrated in Fig. 2. Note that stable CW self-mode-locked operation could always be achieved as long as the incident pump power reached the threshold. Meanwhile, the laser beam radius of 230 µm is larger than the pump beam radius of 200 µm in the present setup, which leads to a relatively large diffraction loss induced by the thermal lens aberration. Based on the assumption of a Gaussian-like pump profile and following the analysis in Ref. [18], we find that the combined effect of the mode size change in the laser crystal due to Kerr self-focusing and the thermally induced diffraction loss results in a nonlinear diffraction loss modulation on the order of 10−4. This nonlinear loss modulation of the so-called thermo-Kerr mode locking is experimentally confirmed to be sufficient for the self-starting of the present self-mode-locked laser, where the required nonlinear loss modulation is numerically estimated to be around 10−5 [18]. Referring to Fig. 2, the threshold pump powers at 1053 nm and 1047 nm are found to be almost the same. A maximum output power as high as 2.15 W is achieved for the σ-polarization, while the maximum output power for the π-polarization is 1.15 W. The two-dimensional spatial distributions at 1053 nm and 1047 nm under an incident pump power of 6.93 W were recorded with a digital camera, and both are found to display a near-diffraction-limited TEM00 transverse mode, as revealed in the inset of Fig. 2 for the case of the σ-polarization.
Typical oscilloscope traces of the mode-locked pulses at 1053 nm are illustrated in Figs. 3(a)-3(b) with time spans of 1 µs and 5 ns, respectively. The amplitude fluctuation is experimentally found to be better than 2%. Moreover, the full modulation of the pulse trains without any CW background indicates that complete mode locking is achieved in the current configuration. It is also worth mentioning that Q-switched mode locking is not experimentally observed in the self-mode-locked Nd:YLF laser. For Q-switched mode locking, the period between the Q-switched envelopes ranges from several hundreds of microseconds to several milliseconds. However, the pulse train on the millisecond time scale is not presented here because the temporal behavior could not be properly resolved over such a wide time span owing to the reduction of the sampling rate. Therefore, we measure the RF spectrum to confirm the stability of the present self-mode-locked Nd:YLF laser, which is displayed in Figs. 3(c)-3(d). It can be seen clearly that the peak of the fundamental harmonic is 35 dBc above the background level, and the relaxation oscillation sidebands are barely observable. The stability of the laser is examined via the relative frequency deviation of the fundamental harmonic ∆ν/ν, where ν is the central frequency and ∆ν is the full width at half maximum (FWHM) of the fundamental harmonic. The relative frequency deviation of the fundamental harmonic is experimentally found to be around 10−5 over day-long operation, which implies good long-term stability.
On the other hand, Fig. 3(e) shows the autocorrelation trace at 1053 nm. Assuming the temporal intensity to follow a Gaussian-shaped profile, the pulse width is evaluated as 28.5 ps. Figure 3(f) depicts the corresponding optical spectrum, with a central wavelength of 1053.34 nm and a FWHM of approximately 0.05 nm. As a result, the time-bandwidth product is estimated to be 0.424, which indicates that the present pulses are frequency chirped. Note that the mode spacing of 0.01 nm between adjacent longitudinal modes is consistent with the fundamental harmonic of 2.717 GHz, as indicated in Fig. 3(f). Previous studies have theoretically analyzed and experimentally shown that decreasing the separation between the gain medium and the input mirror allows more longitudinal modes to oscillate in a standing-wave cavity [19–21]. Consequently, the SHB effect is enhanced, so that the duration of the mode-locked pulse is effectively shortened. In the present configuration, the pulse width of the mode-locked Nd:YLF laser is found to reduce continuously as the separation d between the gain medium and the input mirror is decreased. When the laser crystal is immediately adjacent to the input mirror, i.e., d = 0.5 mm, the shortest pulse width of 8 ps is obtained at 1053 nm, as shown in Fig. 3(g). Figure 3(h) shows the corresponding optical spectrum for d = 0.5 mm. It is obvious that the pulse shortening is accompanied by spectral broadening due to the enhancement of the SHB effect. More intriguingly, the mode spacing between adjacent longitudinal modes for d = 0.5 mm is found to be two times wider than that for d = 8 mm. The mode spacing of 0.02 nm corresponds to a pulse repetition rate of 5.434 GHz, which implies that the Nd:YLF laser changes its mode-locked status from fundamental mode locking to second harmonic mode locking. This observation might come from the fact that the locking strength due to the third-order nonlinearity is not strong enough to lock the phases of all lasing longitudinal modes with a mode spacing of 0.01 nm for d = 0.5 mm, in which case the number of oscillating longitudinal modes is estimated to be approximately 30. In order to keep a stable self-mode-locked state within the large gain bandwidth supported by the enhanced SHB effect, it is experimentally found that the number of longitudinal modes reduces by half, with the longitudinal mode spacing doubled. This effectively introduces a spectral-filtering effect that leads the Nd:YLF laser to operate at the second harmonic mode locking rather than the fundamental mode locking. This behaviour was never observed in our previous studies on self-mode-locked lasers, owing to the relatively narrow gain bandwidth of the Nd-doped vanadate crystals compared with the Nd:YLF crystal [12–14].
Finally, the overall characteristics for the self-mode-locked Nd:YLF laser at 1047 nm are graphically summarized in Fig. 4. Generally speaking, the mode-locked performance for the π-polarization is found to be similar to the results obtained with the σ-polarization.
Conclusion
In summary, we have developed a novel technique that relies on the natural birefringence of a wedged laser crystal and the alignment sensitivity of an optical resonator to realize reliable TEM00-mode self-mode-locked operation of the Nd:YLF crystal at 1053 nm and 1047 nm, respectively. It is experimentally found that the fundamental mode locking of the Nd:YLF laser can be reliably obtained with a tunable pulse repetition rate in the range of 1.5-3 GHz. At an incident pump power of 6.93 W and a pulse repetition rate of 2.717 GHz, this compact mode-locked laser produces output powers up to 2.15 W and 1.35 W for the σ- and π-polarization, respectively. Furthermore, we have found that decreasing the separation between the laser crystal and the input mirror not only brings in pulse shortening due to the enhancement of the SHB effect, where the shortest pulse durations at 1053 nm and 1047 nm are 8 ps and 8.5 ps, but also effectively introduces a spectral-filtering effect that causes the Nd:YLF laser to operate at the second harmonic mode locking instead of the fundamental mode locking.
Fig. 1. Configuration of the cavity setup for the diode-pumped self-mode-locked Nd:YLF laser.
Fig. 2. Output powers for CW mode locking at 1053 nm and 1047 nm as a function of the incident pump power at 806 nm, where the pulse repetition rate is 2.717 GHz. Inset: two-dimensional spatial distribution of the TEM00 transverse mode for the case of the σ-polarization.
"Engineering",
"Physics"
] |
Temporal Segmentation of MPEG Video Streams
Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
INTRODUCTION
The development of highly efficient video compression technology, combined with the rapid increase in desktop computer performance and a decrease in storage cost, has led to a proliferation of digital video media. As a consequence, many terabytes of video data stored in large video databases are often not catalogued and are accessible only by sequential scanning of the sequences. To make the use of large video databases more efficient, we need to be able to automatically index, search, and retrieve relevant material.
It is important to stress that even with leading-edge hardware accelerators, factors such as algorithm complexity and storage capacity are concerns that still must be addressed. For example, although compression provides tremendous space savings, it can often introduce processing inefficiencies when decompression is required to perform spatial processing for indexing and retrieval. With this in mind, one of the initial considerations in the development of a system for video retrieval is an attempt to enhance access capabilities within the existing compression representations.
Since the identification of the temporal structures of video is an essential task of video indexing and retrieval [1], shot detection has been generally accepted as the first step in indexing algorithm implementations. We define a shot as a sequence of frames that were (or appear to be) "continuously captured from the same camera" [2]. A scene is defined as a "collection of one or more adjoining shots that focus on an object or objects of interest" [3].
Shot change detection algorithms can be classified, according to the features used for processing, into uncompressed- and compressed-domain algorithms. Algorithms in the uncompressed domain utilize features extracted from the spatial video domain: pixel-wise differences [4], histograms [5], edge tracking [6], and so forth. These techniques are computationally demanding and time-consuming, and thus inferior to approaches based on compressed-domain analysis.
Development in this area is particularly focused on the use of the prevalent MPEG compression standard. Pioneering work by Arman et al. [7] introduced the initial approach to compressed-domain shot detection by analysing Discrete Cosine Transform (DCT) coefficient subsets and their correlation. Yeo and Liu [8] analysed the sequence of reduced images extracted from DC coefficients in the transformation domain, called the DC sequence. Sethi and Patel [9] used DC sequence histograms to apply a χ2 statistical test. Continuing in a similar manner, Lee et al. [10] exploited information from the first few AC coefficients in the transformation domain and tracked binary edge maps to parse the video sequence. Although utilizing DCT coefficients proved to be a much faster approach than spatial-domain analysis, the processing time needed to apply motion compensation remained an obstacle. On the other hand, algorithms that omitted motion compensation and analysed only I frames required a second pass to accurately detect shot changes at B or P frames. Meng et al. [11] presented an original approach that utilizes only features directly embedded in the MPEG stream: statistics on the numbers and types of prediction vectors used to encode P and B frames. Likewise, Kobla et al. [12] detect shot changes using discontinuous difference metrics and validate the changes by analysing the DCT data. A step forward was made by Pei and Chou [13], who matched patterns of macroblock (MB) types within abrupt or gradual changes against expected shapes, combining this with partial spatial information. However, these methods have not shown real-time processing capabilities, and none of them generates a continuous output, which is essential for further scalable analysis.
In this paper, the main goal is to develop a new approach to the fundamental problems of a system for real-time video retrieval, searching, and browsing. The initial research objectives are directed towards the performance of the core video processing algorithms in the compressed domain, using the established international video standards: MPEG-1-2, H.263, and in future MPEG-4. This approach should bring improvements in video retrieval with low access latency, as well as advances in processing speed and algorithm complexity. A method for extraction of a continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence.
This paper is organized as follows. In Section 2, the algorithm for detection of abrupt shot changes is presented. Section 3 describes the gradual transition detection algorithms, which are built on a similar approach, and adds some interesting conclusions. Overall results are presented in Section 4, while Section 5 brings final conclusions and a summary of the paper.
SCENE CHANGE DETECTION
MPEG-2 encoders compress video by dividing each frame into blocks of size 16 × 16 called macroblocks (MB) [14]. An MB contains information about the type of temporal prediction and the corresponding vectors used for motion compensation. The character of the MB prediction is defined in an MPEG variable called MBType. It can be intra coded, forward referenced, backward referenced, or interpolated. Within a video sequence, a continuously strong interframe reference will be present as long as no significant changes occur in the scene. The "amount" of interframe reference in each frame and its temporal changes can be used to define a metric which measures the probability of a scene change in a given frame. We propose to extract the MBType information from the MPEG stream and to use it to measure the "amount" of interframe reference. Scene changes are then detected by thresholding the resulting function.
Without loss of generality, we assume that a group of pictures (GOP) in the analysed MPEG stream has the standard frame structure [IBBPBBPBBPBBPBB]. Observe that this frame structure can be split into groups of three having the form of a triplet: IBB or PBB. In the sequel, both types of reference frames (I or P) are denoted by R_i, the first bidirectional frame of the triplet by B_i, while the second bidirectional frame is denoted by b_i. Thus, the MPEG sequence can be analysed as a sequence of such frame-triplets. This convention can be easily generalized to any other GOP structure. The possible locations of a cut in a frame-triplet are depicted in Figure 1. If the first referenced frame B_i is the first frame of the next shot, the next reference frame R_{i+2} predicts a significant percentage of the interframe MBs in both B_i and b_{i+1}. If the scene change occurs at R_i, then the previous bidirectional frames B_{i−2} and b_{i−1} will be mainly referenced to R_{i−3}. Finally, if the scene change occurs at b_i, then B_{i−1} will be referenced to R_{i−2} while b_i will be referenced to R_{i+1}.
If two frames are strongly referenced, then most of the MBs in each frame will have the corresponding type, forward, backward, or interpolated, depending on the type of reference. Thus, we can define a metric for the visual frame difference by analyzing the percentage (or simply the number) of MBs in a frame that are forward referenced and/or backward referenced.
Let Φ_T(i) be the set containing all forward-referenced MBs and B_T(i) the set containing all backward-referenced MBs in a given frame with index i and type T. We denote the cardinality of Φ_T(i) by ϕ_T(i) and the cardinality of B_T(i) by β_T(i). The frame difference metric ∆(i) is then defined in terms of these cardinalities ϕ_T(i) and β_T(i). Since ∆(i) is a frame-to-frame difference metric, peaks in ∆(i) indicate strong and abrupt changes in the video content. The cut positions are determined by thresholding, using either a predefined constant threshold or an adaptive threshold.
GRADUAL CHANGES DETECTION
The next step in the implementation of a shot change detection algorithm is the detection of gradual changes. Unlike cuts, gradual transitions do not show such a significant change in any of the features, and are thus more difficult to detect. Furthermore, there are various types of gradual changes: dissolves, where the frames of the first shot become dimmer while the frames of the second become brighter and are superimposed; wipes, where the image of the second shot replaces the first in a regular pattern, such as a vertical line; and so forth. Since gradual change extraction inevitably requires additional feature-analysis processing, real-time implementation is even harder to achieve than for basic cut detection.
To reduce the additional processing for gradual changes, a new approach is applied. Since the change of features during a gradual transition lasts longer than the analysed frame-triplet unit, it is essential to include a component in the difference metric that is proportional to the overall change in a GOP. The difference metric for a frame with index i and type T(i) then becomes a linear combination of the cardinalities of the macroblock-type sets within one GOP (its coefficients are listed in Table 1). In addition to the previously defined sets Φ_T(i) and B_T(i), the sets of intracoded MBs are denoted by I_T(i), while interpolated MBs are denoted by Π_T(i). The cardinalities of the corresponding sets are denoted by ϕ_T(i), β_T(i), ı_T(i), and π_T(i). The metric ∆(i) is proportional to the visual changes within the frame triplet as well as to the longer alterations during gradual transitions. During a gradual change, the number of intracoded macroblocks increases because of the lack of visual similarity with both reference frames. On the contrary, the number of interpolated macroblocks falls, so we can use this behaviour to enhance the metric sensitivity.
Depending on the frame type, there are three different linear combinations of the variables ϕ_T(i), β_T(i), ı_T(i), and π_T(i) for the two bidirectional frames in a frame triplet. Each linear combination has two main coefficients that are directly proportional to the visual content change within the predicted and reference frames of a frame triplet (k = +1), and two that are inversely proportional to it (k = −1). Additional factors k_π and k_ı describe the overall change in a triplet, one in direct (k_ı) and one in inverse (k_π) proportion. The coefficient values are determined by a rule of thumb and are presented in Table 1.
The raw difference metric contains strong noise that makes further processing of the data almost impossible. However, we know that the source of this noise is the discontinuous nature of the difference metric. Since the metric value is determined separately for each frame and the content change is based on frame triplets, low-pass filtering with a kernel length proportional to the triplet length eliminates the noise. A filter with a Gaussian impulse response g(i) = exp(−i²/(2σ²)) / (σ√(2π)) is applied, where i ∈ [−4σ, 4σ] and σ = 1.5. The value of σ is chosen to maximize the smoothing within one frame triplet. The metric with suppressed noise is calculated as the convolution of the Gaussian impulse response with the raw noisy metric, ∆_s(i) = (g ∗ ∆)(i). After noise suppression, the same filtering procedure is applied to eliminate small spurious peaks and to smooth the difference metric function. As in the noise suppression, the filtering kernel is Gaussian, but with parameter σ = 3. The positions of the central points of the shot changes are determined by locating local maxima of the smoothed metric curve. Continuing the process of Gaussian filtering with increasing kernel width, a scale-space of metric curves is generated. This enables a multiresolution analysis of the temporal structure within the analysed video sequence.
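The two-stage Gaussian filtering and peak picking described above can be sketched in a few lines; SciPy is an assumed tooling choice here, not one named by the paper, and the threshold is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def smooth_metric(delta, sigma_noise=1.5, sigma_smooth=3.0):
    """Two-stage Gaussian filtering of the raw frame-difference metric delta(i)."""
    denoised = gaussian_filter1d(np.asarray(delta, dtype=float), sigma=sigma_noise)
    return gaussian_filter1d(denoised, sigma=sigma_smooth)

def shot_change_candidates(delta, threshold):
    """Candidate shot changes: local maxima of the smoothed metric above a threshold."""
    peaks, _ = find_peaks(smooth_metric(delta), height=threshold)
    return peaks
```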
RESULTS
The collection of C++ classes called MPEG development classes, implemented by Dongge and Sethi [15], is used as the main tool for manipulating the MPEG streams, while the Berkeley mpeg2codec was used as the reference MPEG codec. Some test sequences were produced by the Multimedia and Vision Research Lab, Queen Mary, University of London, while others were provided by the School of Electronic Engineering, Dublin City University, Dublin, Ireland.
To show the typical behaviour of the first algorithm, a sample MPEG-2 video sequence was generated with three abrupt shot changes at the 6th, 16th, and 23rd frames.
As depicted in Figure 3, the first cut is positioned at a rear b frame, and, as proposed, the level of forward reference is high at the previous B frame, ϕ(5), while at the current frame there is strong backward referencing, β(6). In the same way, for the 16th frame (of type I) there are significant levels of ϕ(13) and ϕ(14), and the 23rd frame (of type B) has strong β(23) and β(24). Stages of the noise suppression and smoothing process are depicted in Figure 4, which shows the noisy raw metric, the metric after noise suppression, and the smoothed metric.
To evaluate the algorithm's behaviour, a statistical comparison "based on the number of missed detections (MD) and false alarms (FA), expressed as recall and precision" [2] is applied:

Recall = Detects / (Detects + MDs),    Precision = Detects / (Detects + FAs).

The performance comparison and the dataset are based on the work of Gargi et al. [2]. The dataset is generated as an MPEG-1 sequence with resolution 320×240, having the same sequence length (1200 seconds) and the same number and type of transitions for each programme type (news, sports, and sitcom) as in that performance evaluation. Manually detected positions of the shot boundaries are taken as the ground truth, defining in that way the numbers of missed detections and false alarms. Among the compared compressed-domain algorithms, one analyses DCT coefficients [7], while MB uses variance and prediction statistics of macroblock prediction types [11]. MD analyses DC sequence differences [8], whilst ME applies the χ2 statistical test on DC coefficients [9]. Two algorithms from the spatial-domain family showed the best results: 1D bin-to-bin colour histogram comparison in the LAB colour space, and 3D histogram intersection in the Munsell colour system.
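For completeness, the evaluation protocol above reduces to simple arithmetic; the values in the example call are made up for illustration.

```python
def recall_precision(detects, missed_detections, false_alarms):
    """Recall and precision as defined by Gargi et al. [2]."""
    recall = detects / (detects + missed_detections)
    precision = detects / (detects + false_alarms)
    return recall, precision

# example with hypothetical counts: 95 correct detections, 5 misses, 10 false alarms
print(recall_precision(95, 5, 10))  # (0.95, 0.904...)
```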
CONCLUSIONS
A novel scene change detection technique based on the motion variables extracted from the MPEG video stream is proposed. First, a method for abrupt change detection that uses an interframe reference measure derived only from the statistics of the macroblock types was introduced. Second, a similar interframe reference metric was applied in the algorithm for gradual shot detection. The improved frame difference metric, which utilizes additional MBType information, enables the detection of longer transitions. Finally, the experimental results were presented in Section 4.
The performance comparison with other compressed-domain algorithms shows much better results in terms of both recall and precision. Furthermore, an implementation of the algorithm on a 750 MHz PC workstation runs four times faster than the real-time requirement for a CIF MPEG-1 stream. Unlike most MPEG-based video partitioning methods, this algorithm generates a continuous 1D frame difference metric, suitable for further steps of video indexing. A scale-space of curves can be generated to index the sequence in a hierarchical and scalable way.
Possibilities of improving real-time gradual shot change detection by using multidimensional clustering of MPEG compressed features are being investigated. The multiresolution analysis of the temporal structure for hierarchical and scalable video indexing is also under development.
Figure 1: Possible positions of the cut in a frame triplet.
Table 1: Coefficients in the linear combination ∆(i).
"Computer Science"
] |
Predicting apparent personality from body language: benchmarking deep learning architectures for adaptive social human–robot interaction
First impressions of personality traits can be inferred from non-verbal behaviours such as head pose, body postures, and hand gestures. Enabling social robots to infer the apparent personalities of their users based on such non-verbal cues will allow robots to gain the ability to adapt to their users, constituting a further step towards the personalisation of human–robot interactions. Deep learning architectures such as residual networks, 3D convolutional networks, and long short-term memory networks have been applied to classify human activities and actions in computer vision tasks. These same architectures are beginning to be applied to study human emotions and personality, focusing mainly on facial features in video recordings. In this work, we exploit body language cues to predict apparent personality traits for human–robot interactions. We customised four state-of-the-art neural network architectures to the task and benchmarked them on a dataset of short side-view videos of dyadic interactions. Our results show the potential for deep learning architectures to predict apparent personality traits from body language cues. While the performance varied between models and personality traits, our results show that these models could still be able to predict single personality traits, as exemplified by the results on the conscientiousness trait.
Introduction
Personality computing is considered fundamental for a variety of life aspects: from one's own psychological wellbeing to occupational and relational choices [1]. It aims to solve three main problems: automatic personality recognition (the recognition of the true personality of an individual), automatic personality perception (the prediction of the personality others attribute to a given individual, the apparent personality), and automatic personality synthesis (the generation of artificial personalities through embodied agents) [1]. In this paper, we focus on personality perception, otherwise known as automatic Apparent Personality Prediction (APP).
Studies such as [2][3][4] have shown that human-robot interactions depend on personality computing as much as human-human interactions. Hence, if social robots were able to detect the personality of their users, they would be equipped with an important tool that they could use to adapt to the people they are interacting with, consequently improving the quality of the interaction and their users' general well-being [5].
Body language cues (e.g. head pose, gaze, facial expressions and body language) provide fundamental information for forming a first impression of the personality traits of a person [6]. Observing facial features and eye contact duration [7], as well as the frequency and amplitude of head movements, hand gestures and shifts in body posture, helps humans infer certain personality traits of their interlocutors [8]. Likewise, a robot equipped with the ability to detect and analyse such non-verbal behaviours should also be able to infer the apparent personality of its interlocutors, forming a first impression even before the actual interaction starts. By observing people interacting with each other, robots may learn to predict the apparent personality of their potential users and use this information at the initiation of contact, to make their users more at ease when directly addressing them [4].
In this work, we aim to answer the following research question: what algorithm is best to equip robots with this ability, so that they can approach their users and start an interaction in the most suitable way?
We hypothesise that Deep Learning (DL) architectures are effective for the task, as they have been successfully used to identify human activities from images and video recordings [9] and have also started to attract attention from the human-robot interaction community in the fields of emotion recognition [10] and personality detection [11].
In fact, [11,12] make it clear that personality computing, and especially APP, benefits from the advancements in DL. However, not many of the proposed solutions for APP take bodily signals into consideration, even though such signals could greatly benefit apparent personality trait analysis [13]. Most of the works taking a closer look at bodily social signals are based on hand-crafted features and standard machine learning approaches in multimodal settings, like [14,15]. In both works, high-level features including body motion, body activity, and gaze are extracted from the input data and used to classify the apparent personality traits through regression and Support Vector Machines (SVMs).
The state-of-the-art performance of DL for APP has been attained on datasets that do not take bodily signals into consideration, but rather focus on facial features [16,17]. For instance, the ChaLearn Looking at People Apparent Personality Analysis competition (Note 1) poses a first-impressions challenge aiming to recognise apparent personality traits from short videos of people looking directly at the camera. The outcomes of this challenge indicate that DL models can be effective for predicting apparent personality from video input data. The first- [18], second- [19], and third-place [20] winners of the competition use both the audio and the visual information available in the ChaLearn dataset [16]. In addition, 3DCNNs have also been successfully applied to human action and activity recognition tasks [21].
Motivated by the above successes, we take a step further and investigate whether these state-of-the-art DL architectures are also effective at predicting first impressions of the BIG5 personality traits [22] given body language data of individuals from side-view videos. To this end, we adapt a dataset that provides side-view videos of two interlocutors interacting with each other, and we benchmark and analyse four state-of-the-art DL architectures for APP on the adapted dataset:
• 3D Convolutional Neural Network (3DCNN) based on [21];
• 3D Residual Network (3DResNet) based on [20];
• VGG with Descriptor Aggregation Network (VGG DAN+) based on [18];
• CNN + LSTM network based on [19].
In the rest of the paper, we first introduce the dataset and the data pre-processing pipeline, then give details on the four DL architectures and describe our experimental evaluation. Finally, we discuss the performance of the architectures.
Dataset and pre-processing pipeline
Very few datasets exist for vision-based APP and almost all of them comprise videos, or still images, recorded from frontal-view cameras [12]. The ChaLearn First Impression dataset [16] provides both audio and visual information from 10,000 clips (average duration 15 s) extracted from more than 3000 different YouTube videos of people facing and speaking into the camera. The VLOG dataset [17] consists of videos between 1 and 6 min long from 469 different vloggers. The ELEA corpus [23], gathered with the aim of analysing emergent leadership in newly formed groups, collects data from 40 meetings composed of 3 or 4 members and annotated with self-reported and perceived personality.
The Multimodal Human-Human-Robot Interaction (MHHRI) dataset [24], in contrast, provides visual data of full-body movements rather than just head and facial features. Although the dataset of [23] also contains side-view captures, these show four people sitting around a table, which results in partially occluded data. This is why we adapt the MHHRI dataset for our task.
The multimodal human-human-robot interaction dataset
The MHHRI dataset comprises both human-human and human-human-robot interactions. In addition to first-person vision data, it also provides side-view RGB video recordings of 12 interaction sessions, with each session involving two participants (for a total of 18 participants, 9 women and 9 men). In the interactions, participants are seated face to face and take turns asking each other a set of questions. Each interaction lasts 10-15 min, resulting in 290 short clips of Human-Human Interactions (HHI).
We decided to use the HHI recordings from the MHHRI dataset as input for our DL architectures for the following reasons: (1) the HHI recordings provide unobstructed third-person view data, captured through one static RGB Kinect sensor, which are well suited to predicting apparent personality from body language; (2) the process from which the personality annotations are derived is documented and considered reliable for dictating the ground-truth labels.
The MHHRI dataset provides meta-data, derived from BFI-10 questionnaires [25] filled in by participants for both self-assessment and acquaintance assessment (the assessment of the other participants partaking in the study) of the BIG5 personality traits. The BIG5 Model [22] identifies five major traits that correspond to individual differences in human behaviour, way of thinking, and feelings: (i) extroversion (being sociable, playful, assertive, etc.), (ii) agreeableness (being appreciative, kind, etc.), (iii) conscientiousness (being organised, efficient, etc.), (iv) neuroticism (being insecure, anxious, etc.), and (v) openness to experience (being intellectual, curious, etc.).
Unlike other trait theories that sort traits of individuals into binary categories, the BIG5 asserts that each personality trait is a spectrum, and each trait concurs equally to the definition of the personality of a person. Therefore, individuals are ranked on a scale between two extreme ends for each of the BIG5 traits.
To prepare the video clips derived from the HHI recordings to serve as input for the DL architectures, we pre-processed the dataset with five steps, outlined hereafter.
Cleaning the dataset
To create a consistent pool of inputs for the DL models, some of the original frames had to be discarded because they were either corrupted or portrayed the participants with their backs to the camera. After this operation, over 140,000 PNG frames were left from the 290 original HHI clips, divided among the 12 interaction sessions of the original dataset.
Creating input clips
To train the models, input clips of 16 consecutive frames were created (following the clip length used in [21]). As the original MHHRI recordings have an inconsistent number of frames, duration, and frame rate, the time length of the clips was first normalised to maintain temporal consistency. The original frames were grouped into 16-frame clips while trying to preserve a mean frame rate of 8 Hz and a duration as close to 2 s as possible. This duration was considered sufficient to form a first impression of the personalities of the participants depicted in the frames: according to [8,26], an exposure time as brief as 100 ms is enough for individuals to form a first impression. After this step, a total of 9831 clips with durations between 0.96 and 4.3 s were obtained (mean = 1.88 ± 0.31 s).
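As a rough illustration of this grouping step, the sketch below greedily subsamples a session's frames so that consecutive kept frames are roughly 1/8 s apart and emits 16-frame clips; the function name and the (timestamp, path) input format are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch of the clip-creation step described above.
# `frames` is assumed to be a time-ordered list of (timestamp_in_seconds, frame_path)
# pairs from one cleaned session; the 16-frame / ~8 Hz targets come from the text.

def group_into_clips(frames, clip_len=16, target_fps=8.0):
    """Greedily keep frames spaced ~1/target_fps apart and cut them into clips."""
    step = 1.0 / target_fps
    clips, current, last_t = [], [], None
    for t, path in frames:
        if last_t is None or (t - last_t) >= step:
            current.append(path)
            last_t = t
        if len(current) == clip_len:
            clips.append(current)          # one clip = 16 frame paths
            current, last_t = [], None
    return clips
```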
Extracting individual participants
We extracted the pixels of individual participants from the frames in the dataset before feeding them into the models for training. In this pre-processing step, each frame was passed to a Mask R-CNN [27] used to detect and isolate the two participants in it. The Mask R-CNN implementation used in this work is the open-source network of [28], pre-trained on the COCO dataset [29].
The outputs of the Mask R-CNN step are the bounding boxes for the two participants in the frame: one for the participant on the right and one for the participant on the left. The bounding boxes found by the Mask R-CNN were then used to crop the original frames of the MHHRI dataset and produce two frames, one for the right and one for the left participant. Finally, the resulting cropped images were resized to 128 × 128 pixels while keeping the aspect ratio. An example of these steps is depicted in Figure 1.
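A minimal sketch of the cropping and resizing step, assuming the bounding boxes have already been produced by the Mask R-CNN as (left, top, right, bottom) pixel coordinates; padding the aspect-preserving resize onto a square 128 × 128 canvas is one plausible reading of "resized while keeping the aspect ratio", not a detail confirmed by the paper.

```python
# Illustrative per-participant cropping step; the bounding box is assumed to come
# from the Mask R-CNN of [28] as (left, top, right, bottom) pixel coordinates.
from PIL import Image

def crop_participant(frame_path, box, out_size=128):
    frame = Image.open(frame_path).convert("RGB")
    person = frame.crop(box)                        # isolate one participant
    person.thumbnail((out_size, out_size))          # resize, preserving aspect ratio
    canvas = Image.new("RGB", (out_size, out_size)) # black padding to a square
    offset = ((out_size - person.width) // 2, (out_size - person.height) // 2)
    canvas.paste(person, offset)
    return canvas
```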
Defining the ground truth labels
The MHHRI dataset provides self-assessment and acquaintance assessment annotations from the participants' answers to the BFI-10 questionnaire [25]. Each of the 10 items in the BFI-10 questionnaire contributes to the score of one particular trait, with 2 BFI items for each of the BIG5 dimensions. The answers to the questionnaires are measured on a 10-point Likert scale.
We used the acquaintance assessments from the MHHRI dataset, which provide 9-12 acquaintance ratings per participant, to define the ground-truth labels for training the models. A score ranging from 1 to 10 for each participant on each trait was computed by averaging over all raters. Finally, a binarisation step was carried out with respect to the mean score of each of the five traits, computed independently over all participants, to group participants into two classes (i.e. low and high) per personality trait, following the procedure used in [24].
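A compact sketch of this labelling procedure, assuming the acquaintance ratings are arranged as a (participants × raters × traits) array with NaN for missing raters; this illustrates the averaging-and-thresholding logic only and is not the authors' code.

```python
# Illustrative label definition: average the acquaintance ratings per participant
# and trait, then binarise each trait against its mean over all participants.
import numpy as np

def binarise_labels(ratings):
    """ratings: array of shape (participants, raters, 5 traits), scores 1-10, NaN if missing."""
    per_participant = np.nanmean(ratings, axis=1)        # (participants, 5)
    trait_means = per_participant.mean(axis=0)           # (5,) thresholds
    return (per_participant >= trait_means).astype(int)  # 1 = 'high', 0 = 'low'
```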
Train, test, and validation splits
In order to train and test the models, a 6-fold cross-validation was set up to evaluate their ability to learn to generalise and predict the different classes (i.e. low and high) for each of the five personality traits. The HHI part of the MHHRI dataset was collected as a set of 12 interaction sessions, each involving two participants. Even though some participants took part more than once, they always had a different partner to interact with; for this reason, each appearance was considered a separate instance. Therefore, the resulting 24 instances of person interactions (2 × 12) were grouped into six groups (G1, G2, G3, G4, G5, G6), with each group containing four instances out of the original 24. Since the sessions in the MHHRI dataset have different numbers of clips, the grouping of participants was done in a way that gave each of the six folds roughly the same test-set size (approximately 20% of the total clips).
Each model was trained six times, and each time a different group was kept out of the training set to serve as a test. This way, it was assured that the clips belonging to the test set were never seen by the models during the training phase.
After setting aside one of the six groups as the test set, a further 10% of the remaining input clips was assigned to the validation set, leaving all remaining clips to form the training set. At this point, an additional augmentation step of mirroring each frame on its vertical axis was carried out for the clips in the training and validation sets.
After dividing the input data into training, test, and validation sets, the clips underwent two final normalisation steps: mean subtraction and rescaling.
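These two steps could look roughly as follows; the exact rescaling used by the authors is not specified, so division by 255 and a per-channel training mean are shown here as assumptions.

```python
# Illustrative normalisation: rescale pixel values, then subtract the mean
# computed on the training clips only (an assumption about the exact recipe).
import numpy as np

def normalise(clips, mean=None):
    """clips: float array of shape (N, 16, 128, 128, 3) with pixel values 0-255."""
    clips = clips / 255.0                             # rescaling to [0, 1]
    if mean is None:
        mean = clips.mean(axis=(0, 1, 2, 3))          # per-channel mean of the training set
    return clips - mean, mean
```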
The final inputs to the DL architectures were the clips produced by the pre-processing pipeline described in the five steps above. Each clip was a tensor of 16 frames of 128 × 128 × 3 pixels, cropped with the bounding boxes obtained from the Mask R-CNN step and resized. Table 1 shows the number of clips in the dataset for each personality trait and class (high or low). The final number of clips in the train, test, and validation splits of each of the 6-fold cross-validation groups is shown in Table 2.
Deep learning architectures
The architectures that took part in the ChaLearn Looking at People Apparent Personality Analysis competition are still considered the ones dictating the performance baseline. This can be seen from [11], where it is shown that DL approaches for APP build on the winners of the ChaLearn competition. For this reason, we chose [18][19][20] as the starting points for three of the architectures benchmarked in this work for the task of predicting apparent personality from bodily signals using side-view videos. Moreover, [19] also tested an additional architecture based on a 3DCNN. Even though its performance was not considered satisfactory compared to the CNN + LSTM architecture they proposed for the challenge, 3DCNNs have been successfully applied to human action and activity recognition tasks in works such as [21]. For this reason, we decided to include a 3DCNN architecture based on [21] in the pool of compared DL architectures.
Figure 2. The 3DCNN architecture studied in this work. The eight convolutional layers apply stride 1. The numbers in the convolutional layers represent the number of filters and the kernel size, respectively. The five max-pooling layers apply a stride of 2 and have pool size 2 × 2 × 2, except for pool1, which has size 1 × 2 × 2 and applies stride 1 × 2 × 2. In addition, although not shown in the picture, a batch normalisation layer is added after the conv1, conv2, conv3b, conv4b, and conv5b layers.
The details of the four architectures benchmarked through this work are given in the following sections.
3D deep convolutional network (3DCNN)
3DCNNs perform 3D convolutions over the spatiotemporal video volume. Unlike classical spatial 2D convolutions, 3D convolutions preserve the temporal information of the input signals. For this reason, they are better suited for learning spatio-temporal features and could be appropriate for APP from video clips.
The 3DCNN implemented in this work follows [21], and it is further described in Figure 2. An additional zero-padding operation (adding a border of pixels with value zero around the edges of the input) had to be carried out between conv5b and pool5, to ensure the continuity of size between the output of the convolution and the pooling layer. All convolutional layers are initialised following the He normal initialisation [30]. Additionally, a batch normalisation layer is added before each pooling layer. All convolutional and fully connected layers, with the exception of the output layer, are activated by the ReLU function.
Differently from the architecture in [21], the number of filters in each convolutional layer and in the first two fully connected layers has been halved. This choice was made to reduce the number of trainable parameters, speeding up training while avoiding overfitting a model that, in its original version, was too large for the amount of data available.
To solve the multi-label classification problem of predicting the five personality traits, the last layer in the model uses a sigmoid function for label prediction. The output of the last fully connected layer gives the probability scores for each of the five personality traits. Both the first and the second fully connected layers are followed by a dropout layer where outputs are dropped at a 50% rate.
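A minimal Keras sketch of a 3DCNN of this kind is shown below; the halved filter counts, the pooling scheme, and the padding placement follow the description above, while layer names and any remaining sizes are illustrative rather than the authors' exact configuration.

```python
# Minimal, illustrative Keras sketch of the halved-C3D-style 3DCNN described above.
from tensorflow.keras import layers, models

def build_3dcnn(n_traits=5):
    inp = layers.Input((16, 128, 128, 3))
    x = inp
    # (filters, number of conv layers) for the five blocks, halved from C3D [21]
    for i, (f, reps) in enumerate([(32, 1), (64, 1), (128, 2), (256, 2), (256, 2)]):
        for _ in range(reps):
            x = layers.Conv3D(f, 3, padding="same", activation="relu",
                              kernel_initializer="he_normal")(x)
        x = layers.BatchNormalization()(x)           # BN before each pooling layer
        if i == 4:
            x = layers.ZeroPadding3D((1, 0, 0))(x)   # zero-padding before pool5 (placement is an assumption)
        pool = (1, 2, 2) if i == 0 else (2, 2, 2)    # pool1 keeps the temporal axis
        x = layers.MaxPooling3D(pool, strides=pool)(x)
    x = layers.Flatten()(x)
    for _ in range(2):                               # two halved fully connected layers
        x = layers.Dense(2048, activation="relu")(x)
        x = layers.Dropout(0.5)(x)
    out = layers.Dense(n_traits, activation="sigmoid")(x)  # multi-label output
    return models.Model(inp, out)
```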
3D residual network (3DResNet)
The work in [20] uses a deep residual network (ResNet) comprising an auditory stream and a visual stream that come together in a fully connected audiovisual layer to predict personality traits from facial and auditory features.
ResNets have been successfully used in a variety of computer vision tasks [31], and the possibility of using volumetric convolutions for ResNets has been successfully explored in [32] for activity recognition from video inputs. Therefore, we expanded the network of [20] into a 3DResNet making use of volumetric convolutions. The resulting architecture is further explained in Figure 3.
The 3DResNet developed in this work has 18 layers. Each convolutional layer is initialised following the He Normal initialisation, activated by a ReLU function, and followed by a batch normalisation layer. The last fully connected layer is preceded by a global average pooling layer, and it is activated by a sigmoid function for multi-label classification.
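As an illustration, one volumetric residual block of the kind stacked in such an 18-layer network could be written as below; the projection shortcut and the exact conv/BN/ReLU ordering are standard choices assumed here, not details confirmed by the paper.

```python
# Illustrative Keras sketch of a volumetric (3D) residual block.
from tensorflow.keras import layers

def residual_block_3d(x, filters, stride=1):
    shortcut = x
    y = layers.Conv3D(filters, 3, strides=stride, padding="same",
                      kernel_initializer="he_normal")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv3D(filters, 3, padding="same",
                      kernel_initializer="he_normal")(y)
    y = layers.BatchNormalization()(y)
    if stride != 1 or x.shape[-1] != filters:
        # 1x1x1 projection so the shortcut matches the residual branch shape
        shortcut = layers.Conv3D(filters, 1, strides=stride, padding="same")(x)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))
```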
VGG with descriptor aggregation network (VGG DAN+)
The implemented version of VGG analysed in this work is a VGG DAN+ architecture, based on the model defined in [18], which was successful in the ChaLearn 2016 competition.
The work of [18] modifies a traditional VGG-16 architecture with what is defined as a Descriptor Aggregation Network (DAN+). The main difference between the original VGG-16 architecture and the VGG DAN+ is that the last three fully connected layers of the VGG-16 are dropped and replaced by a concatenation layer, as exemplified by Figure 4.
Until the first convolution of the fifth block, the architecture follows a standard VGG-16 [33]. The difference between [33] and [18] is that, after conv5b and after pool5, a DAN+ block is added. The two DAN+ blocks are then concatenated in the last step of the architecture, right before the fully connected layer. The DAN+ blocks perform a global average pooling and a global max pooling, both followed by an L2 regularisation step, in parallel and on the same input they receive: the first time on the output of conv5b, and the second time on the output of pool5. The last step of the architecture is the concatenation of four outputs, two coming from the first DAN+ block and two coming from the second DAN+ block. Hence, the concatenation layer concatenates two global average pooling outputs and two global max pooling outputs. The last fully connected layer outputs the probability scores for the five personality traits in the clip and is activated by a sigmoid function for multi-label classification.
The network implemented in this work differs from [18] by the following modifications: the use of volumetric 3D convolutions instead of classic 2D convolutions; the use of the Normal He kernel initialisation and the ReLU activation function for the convolutional layers; the addition of a batch normalisation layer for each of the 5 convolutional blocks, similarly to what has been done for the 3DCNN. Moreover, an additional zero-padding operation had to be carried out between conv5c and pool5, to ensure the continuity of size between the output of the convolution and the pooling layer.
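A rough Keras sketch of the DAN+ head described above, applied to two volumetric feature maps standing in for the outputs of the last convolution and of pool5; interpreting the "L2 regularisation step" as L2 normalisation of the pooled vectors is an assumption made here.

```python
# Illustrative DAN+ head: parallel global average and global max pooling, each
# L2-normalised, on two feature maps, concatenated and fed to a sigmoid output.
import tensorflow as tf
from tensorflow.keras import layers

def dan_plus(feature_map):
    l2 = layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=-1))
    avg = l2(layers.GlobalAveragePooling3D()(feature_map))
    mx = l2(layers.GlobalMaxPooling3D()(feature_map))
    return avg, mx

def dan_plus_head(conv5_out, pool5_out, n_traits=5):
    feats = [*dan_plus(conv5_out), *dan_plus(pool5_out)]   # four pooled vectors
    x = layers.Concatenate()(feats)
    return layers.Dense(n_traits, activation="sigmoid")(x)
```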
CNN + LSTM network (CNN + LSTM)
Another approach to overcome the inability of 2D convolutions to capture temporal information is to combine 2D convolutional layers with recursive layers, used to learn the temporal patterns of the input [34]. This idea contrasts with the one of employing volumetric convolutions, as explored in the previous three architectures.
The use of an architecture concatenating CNN layers with a final LSTM layer has been explored before for action recognition in videos [35], and it is further explored in [19] for learning first impressions of personality. The architecture from [19], exemplified in Figure 5, was implemented and tested as the final model under examination in this work.
Since this architecture uses 2D convolutions instead of volumetric convolutions in each convolutional layer, the input is only one of the 16 frames of the input clip. For this reason, the convolutions, the pooling operations, and the first fully connected layer are applied 16 times in parallel, once per frame composing the clip. The 16 frames are then analysed as a sequence by the LSTM layer and by the final fully connected layer. All convolutional layers, and the first fully connected layer, are activated by a ReLU function, while the last fully connected layer is activated by a sigmoid function. Each convolutional layer is initialised following the He normal initialisation and is followed by a batch normalisation layer. After the first fully connected layer and the LSTM layer, a dropout layer where outputs are dropped at an 80% rate is added.
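A minimal Keras sketch of this CNN + LSTM arrangement; the per-frame CNN shown here is a deliberately small stand-in (filter counts and layer sizes are illustrative), while the TimeDistributed wrapping, the single LSTM over the 16 frames, the 80% dropout, and the sigmoid output follow the description above.

```python
# Illustrative CNN + LSTM: the same 2D CNN applied to each of the 16 frames,
# then an LSTM over the resulting frame-feature sequence.
from tensorflow.keras import layers, models

def build_cnn_lstm(n_traits=5):
    frame_cnn = models.Sequential([
        layers.Conv2D(32, 3, padding="same", activation="relu",
                      kernel_initializer="he_normal", input_shape=(128, 128, 3)),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu",
                      kernel_initializer="he_normal"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.8),
    ])
    inp = layers.Input((16, 128, 128, 3))
    x = layers.TimeDistributed(frame_cnn)(inp)   # same CNN on each of the 16 frames
    x = layers.LSTM(128)(x)
    x = layers.Dropout(0.8)(x)
    out = layers.Dense(n_traits, activation="sigmoid")(x)
    return models.Model(inp, out)
```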
Experimental evaluation
The DL models were trained end-to-end following a 6-fold cross-validation method, keeping one of the six groups as the test set each time and using the training sets generated by the pre-processing pipeline of Section 2. The clips in the dataset were fed to the networks in mini-batches of 12. All models were trained using the Stochastic Gradient Descent (SGD) optimiser with momentum 0.9; the loss function optimised by SGD was the binary cross-entropy. The early stopping method, monitoring the loss on the validation set, was employed to terminate training whenever there was no substantial improvement.
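The training setup described above could be reproduced roughly as follows; the learning rate shown is a placeholder (in practice it comes from the Learning Rate Finder discussed next), the patience value is an assumption, and the random arrays merely stand in for the clips and labels produced by the pre-processing pipeline. The `build_3dcnn` function refers to the earlier sketch.

```python
# Illustrative training setup: SGD with momentum 0.9, binary cross-entropy,
# mini-batches of 12, and early stopping on the validation loss.
import numpy as np
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import EarlyStopping

# Placeholder data standing in for the pipeline's output.
x_train = np.random.rand(24, 16, 128, 128, 3).astype("float32")
y_train = np.random.randint(0, 2, size=(24, 5)).astype("float32")
x_val = np.random.rand(8, 16, 128, 128, 3).astype("float32")
y_val = np.random.randint(0, 2, size=(8, 5)).astype("float32")

model = build_3dcnn()  # any of the four architectures sketched above
model.compile(optimizer=SGD(learning_rate=1e-3, momentum=0.9),  # LR chosen via the LRF in practice
              loss="binary_crossentropy")
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          batch_size=12, epochs=200,
          callbacks=[EarlyStopping(monitor="val_loss", patience=10,  # patience is an assumption
                                   restore_best_weights=True)])
```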
To determine the best learning rate to use with each of the models, a Learning Rate Finder (LRF) was implemented. The LRF is a technique introduced in [36] that makes it possible to identify, with a few iterations of training, a range of learning rates that is optimal for a given model on a given dataset.
The models were implemented using the TensorFlow open-source platform with the Keras API. The training was carried out using an Nvidia GeForce RTX 2080 GPU with 8 GB of RAM. Table 3 reports the learning rate, the number of trainable parameters, the number of epochs, the time needed to train one epoch, and the final training/validation loss and accuracy for each model averaged over the six groups. All models were trained for up to 200 epochs but, thanks to the early stopping technique, some groups needed fewer epochs to reach a satisfactory level of performance on the training set.
Evaluation metrics
Given the multi-label nature of the problem investigated in this work, the classical definition of metrics for binary classification would not be adequate to understand the performance of the architectures taken into consideration. Popular metrics used for multi-label classification are the Hamming loss, Hamming score, precision, recall, F 1 score, subset accuracy (exact match), and subset zero-one loss.
The Hamming loss gives the proportion of labels predicted incorrectly [37]; the Hamming distance between two strings of equal length measures the number of positions at which the corresponding symbols differ. The accuracy considered in this work refers to the Hamming score (defined as 1 − Hamming loss), which symmetrically measures how close the predictions are to the ground-truth labels.
On the other hand, the subset zero-one loss is a generalisation of the well-known zero-one loss to the multi-label setting [38]. It requires, for each sample, that the predicted set of labels exactly matches the true set of labels. The subset accuracy (defined as 1 − subset zero-one loss) gives the proportion of correctly classified examples [37]. The subset accuracy is a very strict evaluation measure compared to the Hamming score, especially when the label space is large.
Additionally, a per-class (binary) analysis of the performance of the models for each of the five personality traits was performed. For this, precision, recall, and F1 score were taken into consideration [39], together with the balanced accuracy [40]. Balanced accuracy is defined as the average of the recall obtained on each class, and it is used in multi-class and binary classification problems when the dataset is unbalanced. Since the problem faced in this work is a binary multi-label problem on an unbalanced dataset (see Table 1), the balanced accuracy is used instead of the classical accuracy when analysing the performance of each trait separately. Another useful tool to visualise and evaluate the performance of a classifier is the Receiver Operating Characteristic (ROC) curve [41]. In ROC curves, the true positive rate is on the Y-axis and the false positive rate on the X-axis; a larger area under the curve (AUC) generally indicates better output quality.
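These metrics can be computed, for example, with scikit-learn as sketched below; `y_true` is an (N × 5) binary array, `y_score` holds the sigmoid outputs, and the 0.5 threshold is an assumption. This illustrates the metric definitions and is not the authors' evaluation code.

```python
# Illustrative computation of the multi-label and per-trait metrics described above.
from sklearn.metrics import (hamming_loss, accuracy_score, balanced_accuracy_score,
                             precision_recall_fscore_support, roc_auc_score)

TRAITS = ["extroversion", "agreeableness", "conscientiousness", "neuroticism", "openness"]

def evaluate(y_true, y_score, threshold=0.5):
    y_pred = (y_score >= threshold).astype(int)
    print("Hamming score  :", 1.0 - hamming_loss(y_true, y_pred))
    print("Subset accuracy:", accuracy_score(y_true, y_pred))   # exact-match ratio
    for i, trait in enumerate(TRAITS):
        p, r, f1, _ = precision_recall_fscore_support(
            y_true[:, i], y_pred[:, i], average="binary", zero_division=0)
        print(f"{trait}: P={p:.2f} R={r:.2f} F1={f1:.2f} "
              f"bal.acc={balanced_accuracy_score(y_true[:, i], y_pred[:, i]):.2f} "
              f"AUC={roc_auc_score(y_true[:, i], y_score[:, i]):.2f}")
```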
Results
A summary of the performance of the models is given in Tables 4 and 5. These results show that the 3DCNN was most successful when identifying conscientiousness; the 3DResNet when identifying conscientiousness and neuroticism; the VGG DAN+ when identifying conscientiousness and extroversion; and the CNN + LSTM when identifying agreeableness.
Overall, conscientiousness is the trait for which prediction has been most successful. This is also exemplified by the ROC curves for each of the BIG5 personality traits shown in Figure 6. Looking at these results for each personality trait and each architecture, summarised in Tables 4 and 5, it can be seen that not only was conscientiousness the most successfully predicted trait, but also that the VGG DAN+ was the most successful architecture overall.
Discussion
One of the main obstacles of predicting personality traits with DL models faced by this work was obtaining appropriate training data. As previously outlined in [12], there is a lack of unified public datasets and tools to model and evaluate methodologies for APP. The MHHRI dataset used in this work was considered the most suitable for predicting the first impressions of personalities from non-verbal bodily social signals from side-view videos.
Even if the MHHRI dataset provides structured and complete data, some limitations hinder the modelling task. First of all, the structure of the data is inconsistent across the HHI sessions in terms of the number of frames, session duration, and frame rate. Therefore, we carried out a thorough pre-processing to bring the data into a consistent format. Despite this, the dataset remained unbalanced (as shown in Table 1) in terms of the representation of the 'high' and 'low' classes for each trait, leading to an additional unbalanced division of the available labels across the groups. Moreover, the visual data in the MHHRI recordings are less informative than ideal: there were few participants, and they were always portrayed sitting in front of each other, engaging in scripted conversation. This led to fewer movements, gestures, and body shifts throughout the dataset.
The lack of available datasets for personality prediction from non-verbal interactive behaviour complicates the necessary benchmarking evaluation. Two datasets that could potentially be used to evaluate this task are [42,43], where the portrayed interactions happen in a more natural way, with people interacting with each other while standing and not following a pre-scripted dialogue. However, the SALSA dataset [42] gives only self-assessment personality annotations, which is not suitable for the task of APP. Moreover, the recordings, involving 18 subjects during an indoor social event, show all subjects simultaneously from an overhead perspective, resulting in noisy and cluttered data. While the AICO corpus [43] presents a dataset very similar to the MHHRI (among others, dyadic interactions of two people standing in front of each other recorded from a side-view RGB Kinect sensor), it does not provide complete BIG5 personality annotations.
Table 4. Comparison of the four architectures in terms of the precision (P.), recall (R.), F1 score (F1), balanced accuracy (Acc.) and ROC area under the curve (AUC) of the personality trait classification task.
Comparing the performance of the four networks presented in our work with other DL approaches for APP, reviewed in [11,12], would be cumbersome, as none of the previous works were trained for the same task. However, [24] and [15] provide an evaluation on the MHHRI dataset. They evaluated the classification performance using a kernel SVM in conjunction with first-person-vision and second-person-vision individual features. A similar evaluation is performed in [14] on the ELEA corpus, using ridge regression and linear SVM regression classifiers. The best-performing classification model in these studies varies for each trait. The classification methods, datasets, and metrics differ significantly between studies, so a complete and fair comparison is not possible. Nonetheless, conscientiousness was the most successfully predicted personality trait by our models (62% balanced accuracy for VGG DAN+ and 3DResNet). This value is higher than the accuracy values reached by [15,24] and, most significantly, by [14], whose results for conscientiousness and neuroticism were not substantially different from the random baseline. For neuroticism, three of our DL models obtained better F1 scores (0.65 for 3DCNN, 0.71 for 3DResNet and 0.66 for VGG DAN+) than the best mean F1 (0.60) reported on the acquaintance labels by [24]. However, we found an overall worse performance of our models on extroversion, on which [14,15,24] obtained their best results, and on openness to experience, also considered the most challenging trait to predict by [15].
These results can be explained by taking into consideration two problems equally contributing to the performance of our architectures: the problem of unbalanced data and of the subjectiveness and lack of consistency in the annotations. As shown in Table 1, extroversion and openness to experience are the most unbalanced and less represented traits in the 'high value' class, while conscientiousness is one of the traits that is more equally balanced. This reinforces the theory that having unbalanced data resulted in under-performance. In addition, an analysis of the annotations provided in the original work describing the MHHRI dataset [24] showed that conscientiousness had the highest self-acquaintance agreement (similarity between the personality judgements made by self and acquaintances) among all traits. This means that the labels for this trait can be considered the most reliable in the dataset. Other traits, like openness to experience, presented low self-acquaintance agreement among the annotations, meaning that their labels cannot be considered as reliable as the ones of the conscientiousness trait. This underlines that the task of APP is challenging even among human annotators.
Overall, the VGG DAN+ and the 3DResNet outperform the other models, with an overall accuracy (average Hamming score) of 58% and 55%, respectively. Besides the overall accuracy scores, this finding is supported by their more consistent and relatively higher values across the personality traits, as seen in the ROC curves (Figure 6) and in the results in Tables 4 and 5. This is especially significant for the conscientiousness trait, where both reached 62% balanced accuracy. Finally, the hybrid CNN + LSTM model performs the worst. It performs inconsistently across all trait predictions, achieving an average Hamming score of 45% and an average subset accuracy of 7%. Moreover, as shown in Figure 6, it even underperforms a random classifier for most of the traits. We found that volumetric convolutions led to better performance for the APP task from the body language of human-human interactions than combining classical 2D convolutions with an LSTM network. In [32], an empirical study of the effects of different spatiotemporal convolutions for action recognition in video found a noticeable gap between the performance of 2D models and that of 3D or mixed convolutional models, suggesting that motion modelling is important for action recognition. Although interpretability for deep video architectures is still in its early stages and the DL community does not yet have a clear concept of how to decode spatiotemporal features, works such as [44] give insights as to why volumetric convolutions seem to work better for our task: on average, networks powered by 3D convolutions focus on shorter and more specific sequences than networks using 2D convolutions and LSTM cells.
Conclusions and future work
In this paper, we analysed the effectiveness of body-related non-verbal cues and DL architectures for predicting apparent personality traits in social human-robot interactions. We customised four state-of-the-art DL architectures for the APP task on side-view videos from the MHHRI dataset.
If social robots could form a first impression of the personality of their users, based on the BIG5 personality traits, they could be enhanced with the ability of approaching and relating to their users in a more personalised way, adding value to the human-robot interaction itself, as the first encounter between a robot and a human can be crucial for both short-term engagement and long-term interactions [45].
Although the performance varied between models and personality traits, our evaluation showed the potential of the analysed architectures in predicting the different personality traits. These results are a starting point for discussing the benefits that using bodily signals features from video data input can have for APP, and its application to adaptive social robots.
In the future, we plan to use the AICO corpus [43], fully annotated with the BIG5 personality trait labels, to further verify these results and to perform an empirical search for the hyperparameters that optimise the performance of these models. Moreover, our work sets a good starting point to study optimal strategies for adapting robots to the personality of human users in human-robot interactions. In psychology, various theories have argued about what could be the best personality match for each trait. Implementing our models in realistic human-robot setups will help to understand how these theories apply to robots and what traits their users expect to see, leading to an empirical analysis of how to build better human-robot companionship [46]. There have already been efforts in HRI to adapt the personality of robots to the personality of their users, demonstrating that this approach is beneficial for the overall interaction [47]. However, personality matching was achieved based on previously given measures of personality. This approach presents three problems: first, the measures may not match the actual personality of the person; second, they cannot be adapted; third, they cannot be used to interact with people the robots are meeting for the first time. To that end, our work takes a step forward by providing an instrument for the community to automatically predict the apparent personality traits of robots' users.
Note 1. https://gesture.chalearn.org/2016-looking-at-people-eccv-workshop-challenge.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
This work was partially supported by a grant of AIST-AIRC (Japan) for the collaboration with the University of Manchester. The study is based on the results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO). The work was also supported by the EPSRC UKRI TAS Node on Trust and the European Research Council (H2020) projects PERSEO ETN and eLADDA ETN. | 8,458.8 | 2021-10-02T00:00:00.000 | [
"Computer Science",
"Psychology"
] |
Molds with Advanced Materials for Carbon Fiber Manufacturing with 3D Printing Technology
Fused Deposition Modeling (FDM) 3D printing is the most widespread additive manufacturing technology worldwide thanks to its low costs, its applications for finished components, and its support of the production process of other parts. The need for lighter and higher-performance components has led to an increased use of polymeric-matrix composites in many fields, ranging from automotive to aerospace. The molds used to manufacture these components are made with different technologies, depending on the number of pieces to be made. Usually, they are fiberglass molds with a thin layer of gelcoat to lower the surface roughness and obtain a smooth final surface of the component. Alternatively, they are made from metal, which makes a single carbon fiber prototype very expensive due to the cost of building the mold. Making the mold using FDM technology can be a smart solution to reduce costs, but due to the layer deposition process, the roughness is quite high. The surface can be improved by reducing the layer height, but it is still not possible to reach the same degree of surface finish as metallic or gelcoat molds without the use of fillers. Thermoplastic polymers, also used in the FDM process, are generally soluble in specific solvents. This aspect can be exploited to perform chemical smoothing of the external surface of a component. The combination of FDM and chemical smoothing can be a solution to produce low-cost molds with a very good surface finish.
Introduction
The FDM process was first patented by Stratasys in the 1990s to build three-dimensional plastic objects without the use of a mold. The parts are produced layer by layer through the extrusion of thermoplastic filaments, usually wound on spools [1]. This is the most popular additive manufacturing technique nowadays, as it offers a wide range of thermoplastic material choices, from common PLA [2] up to engineering-grade materials such as Nylons [3]. This manufacturing process can be used to create solid components with complex shapes and geometries, as highlighted by the studies of SAVU et al. [4] and Brian et al. [5]. Additive Manufacturing (AM) processes are nevertheless challenged because of their low productivity, inferior surface quality, dimensional instability, and the internal anisotropy that decreases the mechanical properties of the products [6]. Even so, the process has proven suitable to produce end-use parts and for small-series production [7,8].
3D Printing for Supporting the Component Manufacturing Process
FDM 3D printing is the most cost-effective additive manufacturing process for thermoplastic materials on the market. This is primarily due to the relative simplicity of the hardware compared to other technologies; for example, a closed and heated chamber is not always necessary, as it is in Selective Laser Melting (SLM). Filament production is simple, starting from polymer granules, and hence there are dozens of manufacturers and a large number of thermoplastic polymers available on the market. The FDM process also offers safety advantages: unlike powder-based technologies, which always require personal protective equipment to protect the respiratory system during powder handling and component cleaning, filament 3D printing poses no problems during material handling, since the polymer is wound on a spool and is easy to replace once the filament runs out. Any fumes produced during printing can be effectively removed using specific filters installed on the printer. Nevertheless, the main drawback of FDM printing is the surface quality of the component, which is lower than that obtained with other 3D printing technologies such as SLM, Stereolithography (SLA) [9], and Multi Jet Fusion from HP [10]. Some research findings suggest that surface noise given by spikes and peaks in the component during modelling could lead to improper print quality [11]. The difficulty of correctly predicting residual stresses [12] and deformations of printed components has so far limited the use of FDM printing for structural components, requiring numerous trials before obtaining the finished component with the desired quality level.
Overall, FDM printing can be used as a support procedure to create other components, especially in the case of very limited batches or prototypes. Other findings by Komineas et al. [13] can help to accurately calculate the overall build time needed to industrialize elements built with AM. A noteworthy application case is the manufacturing of metal components with the lost-wax casting technique [14]. In this case, the starting component on which the ceramic mold is built is no longer made of wax but is 3D printed using low-melting thermoplastic polymers such as PLA or ABS [15], suitably modified to reduce the creation of ashes during the polymer removal phase from the mold, which takes place in the furnace. Here, FDM 3D printing avoids the need to create a metal mold (otherwise made in aluminum or steel) in which the wax is cast to obtain the desired model, leading to a significant reduction in costs. The alternative to 3D printing would be to make the wax models manually, in which case the printing process offers greater reproducibility and better dimensional tolerances. Additionally, it should be remembered that SLA technology is also used to produce models intended for the traditional lost-wax casting process [16], obtaining finished components with a surface roughness identical to that obtained with wax models. However, the SLA technique is limited to small components due to the large deformations that would occur during the resin printing process; it is usually used in the goldsmith and medical sectors [17], and the cost of the resin is also usually higher than that of FDM printing filaments.
Another example of how the FDM process can be used as a supporting technology is the mold creation for silicone components [18]. The advantages are similar to the previous example.
Mold to Create Carbon Fiber Components
This study was mainly focused on building a custom mold for polymer-matrix composites through FDM and chemical smoothing. This solution led to a high surface finish that can guarantee the production of continuous fiber-reinforced components such as carbon fiber composites with an epoxy matrix. The mold is necessary because current moldless technologies cannot produce carbon fiber components starting from a fabric: they are constrained to a single continuous fiber and can only produce reinforcements [19].
Mold manufacturing is usually very expensive. Material options are fiberglass or metal. For a fiberglass mold, a starting model is required; it is usually CNC-machined from wood or high-density polyurethane foam. The model is then covered with a layer of release agent (polyvinyl alcohol) or wax. A layer of gelcoat is applied over the model to obtain a shiny and homogeneous surface on the mold, and finally the fibers are soaked in epoxy or another thermosetting resin to give rigidity and consistency to the mold. The whole process has to be done manually by experienced operators and is difficult to mechanize. The creation of single prototypes in carbon fiber is therefore extremely expensive, since it is necessary to make the mold and amortize it over a single piece. The aim of this research is to introduce a solution to this problem using 3D printing technology.
FDM 3D Printed Mold
Cost-effective, high-quality molds are needed to reduce the costs of prototypes or small-batch production. Making molds with FDM technology is a smart solution [20,21]. The main challenge is the high surface roughness, which would be transferred directly to the final component. A filler could be used on the mold, followed by manual sandblasting, to improve the surface finish. This technique can be applied to components with relatively simple geometry and low tolerance requirements, but either way it would still require significant manual intervention.
From Solvent Bonding to Chemical Smoothing
Thermoplastic polymers are generally soluble in specific solvent compounds. The application of the solvent on the surface of a plastic component softens its surface. If two components whose surfaces have been treated with the solvent are compressed against each other, a mutual diffusion of the polymer chains is obtained, and the result is a very strong adhesion once the solvent has evaporated. Solvent bonding differs from adhesive bonding in that no adhesive remains permanently bonded to the substrate. A further advantage is that this softening usually occurs well below the glass transition temperature (Tg), and therefore the overall component integrity is maintained. This process can also be used to smooth the surface of thermoplastic components [22]. It is superfluous for parts made with injection molding, whose surface roughness is already low, but it can become a way to improve the surface characteristics of a component made by FDM [23]. This process is called chemical smoothing, and it allows a localized reaction on the surface of the component only, keeping the main structure unchanged. The process, used in the research of Kuo et al. [24], can ensure that watertight surfaces are achieved on the mold. Once the desired smoothing quality is reached, the component must be cooled in open air or under forced ventilation to promote the evaporation of the solvent from the surface. In this research, the chosen machine for the vapor smoothing is the Polymaker Polysher (Polymaker Inc., Shanghai, China), designed specifically for polyvinyl butyral (PVB) smoothing using isopropyl alcohol (IPA).
Material Choice
Polyvinyl butyral (PVB) FDM filament (Polymaker Inc., Shanghai, China) is the chosen material for this application. It is the result of a reaction between polyvinyl alcohol and butyraldehyde. This polymer is usually used in the creation of multilayer safety glass [25] in the automotive sector due to its high transparency, but it is not popular in FDM printing. Currently, there are only two filament manufacturers available, and the cost of this product is higher than that of the most common PLA, but considerably lower than that of Nylon and other engineering materials. Printability is excellent compared to PLA, and its mechanical properties are similar to the latter [26]. Having a low glass transition temperature (Tg), the deformations in the printing phase (warping) are limited, even on medium-sized prints, similar to what occurs with PLA. Additionally, like every thermoplastic material, it is soluble in a specific solvent, in this case IPA. IPA is a very volatile solvent, but it is not very harmful in contact with human skin and is sold without special regulations. PLA is also soluble in chloroform [27], but this liquid is much more dangerous and is not freely sold. Other polymers such as ABS or ASA could be considered a valid alternative to PVB, as they are soluble in acetone [28]. The high Tg of the latter two materials greatly increases the thermal resistance at the expense of printability. However, ASA and ABS require very high printing and bed temperatures, as well as a heated chamber for medium-sized components (150 × 150 × 150 mm). Nevertheless, the onset of warping and delamination phenomena between the layers remains a serious problem. Finally, ASA and ABS contain styrene, which is toxic, and the production of fumes during printing could lead to various respiratory system diseases; for safety reasons it is therefore necessary to have a suitable device for filtering the fumes. Overall, PVB offers the best tradeoff between PLA and ABS [29], obtaining the good printability of the former and the solubility of the latter in readily available solvents. In Figure 1 it is possible to see the effect of smoothing in an image taken using an optical microscope at 20× magnification.
Case Study: Manufacturing of a Carbon Fiber Fuel Tap Protection for a Racing Motorbike
Light-weight carbon fiber protections are necessary to shield exposed components, preventing debris or contact with other riders from causing the part to break or malfunction. The fuel cap is particularly exposed on the Husqvarna TC 85 motorbike, and it is therefore necessary to protect it to avoid dangerous fuel leakage. The goal is to produce a protection to be installed on that motorbike for the European and World championships. This case study was followed by further components designed to be produced in a very limited series for the exclusive use of the team.
Mold Geometry
The starting point was the CAD drawing of the fuel tap guard. The software used is PTC Creo (PTC Inc., Boston, MA, USA), and the overall dimensions were acquired directly on the fuel tank by means of a caliper. Once the protection geometry was created, a Boolean approach was chosen for the construction of the mold. The CAD file of the protection was modified with the addition of material and draft angles to obtain the correct geometry for the slot on the mold. Finally, the addition of fillets made it possible to avoid ripples in the fabric that could give rise to defects in the final component. The overall process is summarized in Figure 2.
Printing Strategy and Settings
The printing strategy adopted for this component could also be generalized to other parts with similar characteristics. Table 1 shows the printing parameters used to create the component. The first key point is the orientation of the part with respect to the build platform. Although it is a relatively simple component, there are four possible part orientations with respect to the print bed, as shown in Figure 3. The software used for slicing was Cura v4.9.1 (Ultimaker, Zaltbommel, The Netherlands). In the first case (Figure 3A), the mold is placed flat on the printing surface. The idea is to minimize the height of the printed component, thus reducing the total number of layers to be created. This printing mode reduced the printing time, but it did not reproduce very well the curvature of the component in the build direction, due to the so-called staircase effect. Chemical smoothing can improve the surface roughness of the component and reduce the staircase effect; in general, however, it is necessary to obtain the best possible surface prior to treatment in order to reduce the exposure time to solvent vapors, since high exposure to the solvent could irreparably damage the surface.
In the second case (Figure 3B), the mold is placed vertically onto the build platform, leading to a 5% increase in printing time compared to the previous condition, but significantly reducing the staircase-effect problem. As a drawback, many supports were generated, meaning a waste of material and a poor surface finish on the supported surfaces. A better mold finish could be achieved with the use of soluble supports, at the expense of a significant increase in the cost of the mold. The result in the third case (Figure 3C) was similar to the previous one, but the generation of supports was reduced, although the staircase effect was still present at some points. Finally, the printing position used in the fourth case (Figure 3D) minimized the generation of supports and allowed the maximum resolution of the curvature of the mold.
In addition, the settings in Table 1 were adopted to further improve the quality of the mold before the chemical smoothing process and to reduce possible imperfections in the cavity. The seam of the outermost wall was preferentially positioned in the rear corner of the mold to avoid seams in the cavity, as seen in Figure 4. Thereafter, it was decided to use a variable layer height, as seen in Figure 5, to further improve the fidelity of the curvature while speeding up the printing process at the same time. The molds were not 100% filled; a 20% gyroid-type infill with three contour lines was used. This allowed us to obtain molds that could withstand the vacuum lamination process while minimizing the material used.
Chemical Smoothing of the Mould Surface
The mold was initially placed inside the Polysher device in the same position in which it was printed, and a 10-min smoothing cycle was performed. It was then turned upside down, and a second 10-min smoothing cycle was carried out. Two molds were printed with the same printing parameters (and thus the same gcode), and both underwent the smoothing process in order to verify the reproducibility of the process. Both were left to dry for 24 h at ambient temperature. Figure 6 shows the differences before and after the smoothing process.
Figure 6. (A) Mold immediately after printing; (B) mold after the smoothing treatment. The small circular spots on top of mold (B) are due to the support surface used during smoothing and the 180-degree rotation. In the lower part of mold (A) the individual layers are visible because of the staircase effect; in (B) they have disappeared thanks to the smoothing process.
Dimensional Verification of a 3D-Printed Mold with an Optical 3D Scanner
A Faro 3D scanner was used to check the fidelity of the printed model against the designed CAD file. Figure 7 shows the point cloud obtained from the scan of the first mold. In order to evaluate the reproducibility of the process, two molds were printed from the same gcode. The comparison with the theoretical CAD file can be appreciated in Figure 8: the matching was good overall, with an absolute deviation within 0.05 mm over most of the mold. There were areas on the inner boundary, highlighted in blue, where the matching was less accurate. Nevertheless, the results showed how accurate the printing process was. On the other hand, the overall printing precision could still be improved; a so-called loop optimization could be carried out to obtain greater fidelity to the CAD file. Although the staircase effect was reduced to a minimum, it persisted at some points, where the differences between the CAD file and the printed model were larger.
The visualization tool seen in Figure 9 allowed us to observe small oscillations on the surface of the two molds. These were probably due to the printing parameters and could be reduced by lowering acceleration and jerk. The oscillations are less visible in Figure 8 because most deviation values fall in the range of −0.025 to +0.025 mm, with only the peak values outside this range. Furthermore, Figure 10 shows the comparison of the two mold point clouds with each other. The reproducibility of printing with this specific material was very high: the two molds could be superimposed, as shown by the almost completely green color of the image.
Dimensional Verification of the 3D-Printed, Chemical-Smoothed Mold with an Optical 3D Scanner
After the first dimensional verification, both molds were subjected to the chemical smoothing process described in Section 2.3.3. Dimensional verification against the CAD file then gave the results visible in Figure 11. As expected, the chemical smoothing process reduced the peaks of inaccuracies and filled the valleys, reducing the overall external dimensions of the component. Once again, the verification was carried out on both printed molds to evaluate the repeatability of the process, and comparisons were made both with the CAD file and with the scans performed before the smoothing process.
Thanks to the Geomagic visualization tool (3D Systems Inc., Valencia, CA, USA), a comparison with Figure 9 was performed in Figure 12, where a greater homogenization of the surfaces can be seen from the zebra stripes. This was due to the smoothing process, which made the surfaces smoother and glossier. Additionally, the effect of smoothing on the dimensional variation of the component can be appreciated in Figures 13 and 14, in which the molds before and after treatment are compared.
From Figure 13 it can be deduced that, overall, the dimensional tolerance of the component after smoothing stayed within an absolute value of 0.1 mm, with both molds completely colored green. Figure 14 shows that the treatment was uniform; the positive or negative values also depend on the way in which the software superimposed the two scans and do not indicate removal or addition of material, so they must be read in an absolute sense. In fact, the software minimizes the distance between the points of one mesh and those of the other, obtaining the result shown.
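The superimposition step mentioned above amounts to a rigid best-fit alignment. The sketch below shows one such step (the Kabsch algorithm) under the simplifying assumption that the two clouds are already in point-to-point correspondence; tools such as Geomagic typically iterate this together with nearest-neighbour matching (ICP).

```python
# A minimal sketch of one rigid best-fit step (Kabsch algorithm), assuming the
# two clouds are already in point-to-point correspondence. This is only an
# illustration of the distance-minimizing superimposition idea.
import numpy as np

def best_fit_transform(source, target):
    """Return rotation R and translation t minimizing ||R @ source_i + t - target_i||."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Example: recover a known rotation/translation between two corresponding clouds.
rng = np.random.default_rng(1)
cloud = rng.uniform(0, 50, size=(1000, 3))
angle = np.deg2rad(3.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
moved = cloud @ R_true.T + np.array([0.5, -0.2, 0.1])
R_est, t_est = best_fit_transform(cloud, moved)
print("max alignment residual:",
      np.abs(cloud @ R_est.T + t_est - moved).max())
```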
In Figure 15A the reproducibility of the process can be evaluated: the two post-treatment molds differed by less than 0.1 mm in absolute value, with the comparison almost completely colored green. Figure 15B shows the areas with slightly positive and slightly negative deviations, but overall a good matching was obtained, making it possible to guarantee narrow tolerances.
Figure 15. Comparison between the two molds (mold 1 (A), mold 2 (B)) after smoothing.
Carbon Fiber Vacuum Lamination Process
The mold in this study was used to create a part with the vacuum lamination process, in order to make the component of Figure 2A. It is worth noting that the FDM-produced mold did not require the use of a release agent, unlike conventional fiberglass-and-gelcoat or metal molds. A resin-impregnated carbon fiber cloth was therefore placed directly in the cavity of the mold, after which a layer of peel-ply fabric and a layer of absorbent tissue were placed and the vacuum was applied. The resin used was AERO68® (Rius Composites SRL, Italy), suitable for wet-layup laminations at room temperature with carbon fiber, glass fiber, and Kevlar. The component was then extracted from the mold with the aid of a plastic wedge, finished, and mounted on the fuel tank. Figure 16 shows a series of images that summarize the production process of the component.
Conclusions
The procedure highlighted here proved to be valid for creating carbon fiber components produced in very small numbers. The choice of FDM-printed molds allowed us to reduce time and costs considerably compared to conventional methods. Obtaining a smooth surface on the final component is also fundamental to guarantee its optical and mechanical properties. The chemical smoothing allowed us to obtain a smooth surface without using gelcoat or other products/techniques for reducing surface roughness. Overall, the quality of the final component was high, but there are margins for improvement, especially in terms of dimensional tolerances. By knowing approximately the effect of the smoothing process at a dimensional level, it is possible to modify the starting CAD file to obtain more accurate final dimensions of the mold.
The printing parameters used for the creation of the mold could also be improved. Setting the printing parameters to obtain a better surface quality would guarantee shorter smoothing times and tighter dimensional tolerances, and would ensure less alcohol absorption by the surface and therefore less time to get the component finished and ready to be used.
Future Developments
Future developments include further printing methodologies for this material, e.g., the adoption of a lower layer height, which would lead to much longer printing times but an even higher surface quality.
Additional tests of this approach are needed, e.g., to verify whether it would be possible to make the bulk of the mold in cheap PLA and only the outer layer in PVB, after checking that the two materials adhere well to each other. A possible alternative to PLA could be PETG, in order to further reduce the cost of a single mold and take full advantage of existing FDM 3D-printing technologies and the smoothing technique.
Further research is needed on the vapor-process parameters and on how they influence the surface roughness, as well as on the actual fabrication of parts with other materials. Mechanical tests could be carried out to evaluate the mechanical properties of the carbon fiber components obtained with this technology. | 8,604.4 | 2021-10-27T00:00:00.000 | [
"Materials Science"
] |
Connecting reservoir computing with statistical forecasting and deep neural networks
Standfirst: Among the existing machine learning frameworks, reservoir computing demonstrates fast and low-cost training and suitability for implementation in various physical systems. This Comment reports on how aspects of reservoir computing can be applied to classical forecasting methods to accelerate the learning process, and highlights a new approach that makes the hardware implementation of traditional machine learning algorithms practicable in electronic and photonic systems.
The idea is that the reservoir performs nonlinear transforms on the input and that, if the network is appropriately chosen and the readout sampled correctly, the desired output can be approximated. In methods such as NVAR, the nonlinear transforms of the input are chosen directly and make up the so-called feature vector, the elements of which are linearly combined to produce the output (see Fig. 1c). A certain similarity between RC and NVAR is apparent, and it was recently shown that there are conditions under which these methods are equivalent 5.
Inspired by the results of Bollt 5, the authors of Gauthier et al. 1 applied aspects of RC to NVAR and thereby introduced what they call Next Generation Reservoir Computing (NG-RC). Specifically, Tikhonov regularization is used and the role of correlations in the feature vector is also considered. With their approach, the authors achieve good results for typical time-series prediction tasks, while simultaneously having several advantages compared with conventional RC. Firstly, the absence of a reservoir means that there are fewer hyperparameters to tune. Secondly, the authors show that, at least for their chosen tasks, shorter training data sets are required and that the dimension of the output vector is smaller than the number of nodes for comparable reservoir computers. These two factors also lead to shorter computation times. If similar reductions in the required training data sets are also viable for real-world problems, where training data is often limited, then NG-RC could indeed be favorable. However, using this approach essentially trades the optimization of reservoir hyperparameters for the optimization of the elements of the feature vector, and the latter is still very much an open problem. It remains to be seen if the choice of the feature vector is generally an easier process than hyperparameter optimization for reservoir computers.
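To make the NVAR idea concrete, the sketch below builds a feature vector from time-delayed inputs and their quadratic products and fits a linear readout with Tikhonov (ridge) regularization on a toy task. The delay depth, feature set and task are illustrative and do not reproduce the exact construction of Gauthier et al.

```python
# A minimal NVAR-style sketch: a feature vector made of delayed inputs and
# their unique quadratic products, plus a Tikhonov-regularized linear readout.
import numpy as np

def make_features(x, delays=2):
    """Linear part: [x_t, x_{t-1}, ...]; nonlinear part: their pairwise products."""
    rows = []
    for t in range(delays, len(x)):
        lin = x[t - delays:t + 1][::-1]                       # current and delayed values
        quad = np.outer(lin, lin)[np.triu_indices(len(lin))]  # unique quadratic terms
        rows.append(np.concatenate(([1.0], lin, quad)))       # constant + features
    return np.array(rows)

def ridge_fit(features, targets, ridge=1e-6):
    """Tikhonov-regularized least-squares readout."""
    A = features.T @ features + ridge * np.eye(features.shape[1])
    return np.linalg.solve(A, features.T @ targets)

# One-step-ahead prediction of a noisy sine as a toy task.
t = np.linspace(0, 60, 3000)
x = np.sin(t) + 0.01 * np.random.default_rng(2).normal(size=t.size)
delays = 2
X = make_features(x, delays)[:-1]       # features at time t
y = x[delays + 1:]                      # target is x at time t + 1
w = ridge_fit(X[:2000], y[:2000])
pred = X[2000:] @ w
print("test RMSE:", np.sqrt(np.mean((pred - y[2000:]) ** 2)))
```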
Although the authors of Gauthier et al. 1 refer to their method as RC, they also state that it most closely resembles NARX, and it could be argued that it is in fact also a statistical learning method. However, one important difference between machine learning in general and statistical methods is the intended purpose 6.
For NARX methods, the selection algorithms are designed such that only a few terms are selected and the resulting model is compact and transparent enough to gain insights into the relationships between elements of the underlying system 7, whereas in machine learning the goal is exclusively the optimization of the performance for a given task. The latter approach is also taken for the NG-RC method, and its feature vectors have far more terms than typical NARX models. It is the difference in the choice of feature vector terms and the use of Tikhonov regularization that sets NG-RC apart from the well-established statistical methods. The authors of ref. 1 do, however, address that the number of components in the feature vector can possibly be reduced without significantly influencing the error, as some of the components are very small. In this regard, NG-RC could be of interest to the statistical forecasting community, if it is possible to reduce the resulting NG-RC to a manageable model from which inferences about the underlying system can be made.
A lot of the interest in RC in recent years stems from the possibility of hardware implementation, as there are substantial gains to be made in terms of speed and power consumption compared with the implementation on a traditional computer 4. Particularly suited to this is the concept of delay-based RC, which was introduced in ref. 8 (see Fig. 2a). In this case, the reservoir need only consist of one nonlinear element with time-delayed self-feedback. For small inputs, the RC performance can be deduced from the linear response of the physical node 9. Adding delay to any system makes it technically infinitely dimensional. In practical terms the systems do not have infinite dimensions; however, if the parameters are chosen correctly, such a system can exhibit complex, high-dimensional transient dynamics and can therefore perform well on various machine learning benchmarking tasks, see for example 10. Using just a single node with a delay instead of a network of randomly coupled nodes is a great simplification that makes this scheme especially suited for hardware implementation with optical devices 11.
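A discrete-time caricature of such a delay-based reservoir is sketched below: a single nonlinear node, a delay line of virtual nodes and a fixed random input mask. Coupling between neighbouring virtual nodes is neglected (the large-node-separation limit) and all parameter values are illustrative.

```python
# A discrete-time caricature of the delay-based reservoir sketched in Fig. 2a:
# one nonlinear node, a delay line holding N "virtual nodes", and a fixed random
# input mask. Inter-node coupling (relevant for small node separation) is omitted.
import numpy as np

def delay_reservoir(inputs, n_virtual=50, feedback=0.8, scale=0.5, seed=0):
    """Return one reservoir state vector (the N virtual nodes) per input sample."""
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1, 1, n_virtual)      # fixed random input mask
    delay_line = np.zeros(n_virtual)          # virtual-node states one delay ago
    states = []
    for u in inputs:
        delay_line = np.tanh(feedback * delay_line + scale * mask * u)
        states.append(delay_line.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave with a linear least-squares readout.
u = np.sin(np.linspace(0, 30, 601))
inputs, targets = u[:-1], u[1:]
S = delay_reservoir(inputs)
w, *_ = np.linalg.lstsq(S[:500], targets[:500], rcond=None)
pred = S[500:] @ w
print("test RMSE:", np.sqrt(np.mean((pred - targets[500:]) ** 2)))
```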
In ref. 2 the authors take this idea of using only a single physical node with delay and extend it to emulate deep neural networks, an approach which they have coined the Folded-in-time Deep Neural Network (Fit-DNN). This is achieved by having multiple delay loops with adjustable feedback strengths, while the physical node supplies the nonlinearity (see Fig. 2c for the Fit-DNN and Fig. 2b for the corresponding DNN). A network node of the Fit-DNN is now defined as the system state at a certain time, and all nodes are sampled sequentially. Coupling between the layers is achieved by coupling back the appropriate time-delayed signals.
The authors show that if the temporal separation between the nodes, i.e. the time intervals at which the system is sampled, is sufficiently large compared with the characteristic timescales of the physical node, then the Fit-DNN is equivalent to a deep neural network. When the node separation is small, there are additional inter-layer and intra-layer connections because temporally adjacent nodes are not fully independent, and a modified back-propagation method needs to be used to train the system. For small node separation and for the case of a sparse DNN, i.e. a Fit-DNN with a reduced number of delay loops, the authors test the performance on various benchmark tasks. For the sparse Fit-DNN they find good performance, but emphasize that removing/adding delay loops changes an entire diagonal of a coupling weight matrix and that a new training method is still required for this. In the case of small node separation, the performance is diminished; however, this needs to be weighed against the decreased computation time that can be achieved by reducing the time between the sampling of the nodes. For a fully connected Fit-DNN with large node separation, the conventional DNN is fully reproduced with unaltered performance.
Overall, we find the novel method introduced in ref. 2 very interesting and hope to see hardware implementations of this approach in the near future. However, at this point, we must also mention a significant drawback that the authors also discuss. Although this approach is well suited for implementation in hardware, for example in photonic or optoelectronic setups, the training must still be performed using a conventional computer. Furthermore, due to the need to solve a delay differential equation, the training time can be significantly increased. Therefore, the speed and efficiency of the final trained system need to be weighed against the training process.
To summarize, the two newly introduced non-conventional computing schemes, i.e. NG-RC and Fit-DNN, suggest ways to realize effective and high-performing machine learning applications with small ecological footprints. Furthermore, bringing together knowledge from different communities, here the statistical learning, the nonlinear dynamics, and the machine learning communities, has led to cross-fertilization with high innovative potential.
Fig. 1 Visualization of different machine learning architectures. Topology and training scheme for the three different concepts discussed in this Comment. DNN: deep neural network; RC: reservoir computing; NVAR: nonlinear vector auto-regression.
Fig. 2 Deep neural networks via time delay and sequential sampling. Illustration of (a) a delay-based RC scheme with a delay loop and one physical node (blue circle), (b) a deep neural network (DNN) with three layers (each layer contains two physical nodes), and (c) the corresponding folded-in-time deep neural network (Fit-DNN) realized via three feedback loops with time-varying feedback strengths and sequential sampling of the physical node (iterations 5 and 6 are shown). | 1,831.2 | 2022-01-11T00:00:00.000 | [
"Computer Science"
] |
Consumption and hysteresis: the new, the old, and the challenge
Abstract Consumers are reluctant to change their consumption patterns immediately when confronted with budgetary changes, in spite of fluctuating economic conditions. Their reluctance evokes the notion of hysteresis used by economists to describe the persistent influence of past economic events. The importance of hysteresis in economic research represents a natural consequence of the development of economic sciences and of the pursuit of understanding economic systems’ evolution by taking into account their ‘memory’, their conscience of the past. The present paper represents an attempt to review some of the most relevant approaches to hysteresis in economics and to emphasise the impact of the phenomenon on macroeconomic consumption in Romania. The paper aims at reviewing the application of hysteresis to economic models, and subsequently at constructing a two-phase study of households’ individual final consumption in Romania between 1990 and 2016, employing both the unit root and the so-called ‘true’ approach to hysteresis. The research results indicated the existence of hysteresis at the macroeconomic consumption level, thus revealing several implications for economic policy that are inaccessible through the standard economic models.
Introduction
The history of hysteresis started in the nineteenth century with the research of the physicist James Alfred Ewing, who introduced the new term to define irreversibility. Being a visionary, he rejected the successive attempts made by his peers to drop the new notion of hysteresis, arguing that they were dealing with a generic phenomenon and that it would enter other domains as well. Indeed, shortly afterwards, hysteresis crossed the borders of ferromagnetism and came into prominence in conductivity, ferroelectricity, biology, chemistry and, last but not least, social sciences.
Its implantation in the fertile soil of economic science is not at all surprising since the neoclassical economists enthusiastically adopted concepts, metaphors, and equations from physics in their endeavour to establish economics as a science. Hysteresis, as an enlarged concept and subsequently as terminology, therefore entered the field of economics, and starting with the 1980s, even became a favourite topic of research.
Its importance in the economic research represents a natural consequence of the development of economic sciences and of the growing concern to understand economic systems' evolution by taking into account their 'memory', their conscience of the past. Hysteresis represents one of the most important path-dependency forms in economics (Lang, 2009). The inclusion of hysteresis in economic models is a complex activity, and although a plethora of approaches already exists, hysteresis has not yet been formally incorporated into orthodox economic models. Nonetheless, its relevance cannot be denied, since history and expectations play a most important role in determining economic outcomes (Dutt, 1997).
The paper represents an attempt to review some of the most relevant approaches to hysteresis in economics and to emphasise the impact of the phenomenon on macroeconomic consumption in Romania. The results revealed that past economic events affect consumption in Romania, with significant implications for economic policy.
Hysteresis and economics: significant landmarks
The term comes from the Greek 'hysterein' translated as 'that which comes later'. The physicist James Alfred Ewing used it for the first time to describe the persistent effects of the temporary exposure of ferric metals to magnetic fields. The subsequent states of the material were better understood by reference to their past states. Ewing (1881, pp. 122-123) mentioned in his work: The same tendency towards persistence of previous state is exhibited whenever we change the magnetisation of a piece of iron or steel by the alternate application and removal of any kind of stress [ … ] and accordingly I have called it Hysteresis.
Although the term per se was only introduced in 1881, the concept has a history that precedes the nineteenth century. Leibniz enunciated the antithesis between hysteresis and equations of state as early as the seventeenth century, assessing that due to ontological reasons the past in itself cannot have a greater influence on the present than that determined by the traces left by the past in the present (Elster, 1976).
Considering that ontological hysteresis is impossible, one cannot implicitly argue the existence of epistemological hysteresis. Even admitting that the past can only influence the present by the persistent effects of the past in the present, the characterisation of present phenomena exclusively with current variables values may prove incomplete compared to that which can be achieved by evoking hysteresis. Therefore, hysteresis is used for explaining the functioning of systems for which no unhistorical explanation may be thought as viable. According to Franz (1990, p. 110), these systems have a long-run memory and may be considered as 'historical' systems.
Strong epistemological hysteresis is characteristic of systems whose functioning cannot be explained by any possible set of equations of state. Weak epistemological hysteresis is typical of those systems whose working may be described by a set of equations that contain past values of some of the variables, but by no known set of equations containing exclusively present values of all variables (Elster, 1976). The interest for the study of hysteresis may be therefore assimilated to the one for the past up to the extent to which it potentiates the proper ways to understand the present.
Subsequently to its introduction to magnetism, the ease of perceiving intuitively the hysteresis effect facilitated its entrance to various domains: conductivity; ferroelectricity; biology; chemistry; and social sciences.
As in other such domains, the phenomenon was also observed in economics and used to describe the persistent influence of past economic events. Moreover, it was considered as a significant progress in economic theory and credited with the potential to reduce the distance between economic modelling and reality (Cross et al., 2009).
The term may be considered as a new addition to the economic vocabulary, with only rare occurrence before 1970. The notion was, however, present especially in the study of consumption, starting in the late 1940s. Duesenberry (1948) in his relative consumption theory and Modigliani (1949) proved that households have the tendency to maintain consumption when faced with an income reduction, consumption behaviour being influenced by customs. Brown (1952) also observed, both in cases of income growth and reduction, the lack of promptness in consumer reactions attributed to a kind of inertia he called hysteresis. The idea of the influence of previous events on the present was explored by Georgescu-Roegen (1950) by raising the question regarding the dependence of indifference varieties on the economic experience of an individual, and emphasising that they were not invariant because the temporary experiences of a person were visible even after the initial conditions had been restored. The notion, although not the term itself, was also used in the works of Haavelmo (1970) and von Weizsäcker (1971), focused on consumer behaviour. The establishment of the term hysteresis in consumer behaviour was completed in Georgescu-Roegen's 1971 work, The Entropy Law and the Economic Process, in which, although he did not offer a consistent definition of hysteresis, he described a general framework for its application to social sciences and especially to consumer behaviour theory. He signalled the difficulties in assessing hysteresis in human behaviour, due to the impossibility of evaluating the effect of the latest experience on consumer behaviour until it actually took place, that is, until one observed exactly what was intended to be predicted (Georgescu-Roegen, 1996).
After 1970, hysteresis had become the usual practice, especially in the fields of unemployment and international trade. Edmund Phelps (1972) found new opportunities to use the term for describing dependence from the past in unemployment, while Murray Kemp and Henry Wan (1974) consecrated hysteresis in international trade. In international trade, hysteresis denotes the persistent influence of temporary factors such as exchange-rate variations and their impact on prices and quantities, with the most illustrative example being that of sunk costs. In the 1980s, pieces of empirical evidence were brought demonstrating that unemployment did not return to natural or equilibrium levels following the implementation of disinflation strategies, but remained at a high level or even increased. These proofs marked the moment when economists started developing alternative explanations for the persistence of unemployment based on the accumulation of consequences of the most significant previous shocks experimented by the economy, thus introducing hysteresis (Lang, 2009).
Later on, hysteresis has also been used to explain several phenomena in foreign investments, capital formation or marketing. But although its relevance to economic systems has been acknowledged, it has not yet been incorporated in formal economic models (Cross et al., 2010). The current economic conditions do, however, create an excellent framework for empirical testing as well as for a deeper theoretical development of hysteresis in economic science.
Consumption and hysteresis: the research methodology
3.1. Methodological considerations: the models
In economics, hysteresis was incorporated in formal models in two approaches: the first is based on the existence of zero/unit roots of differential equations, while the second describes 'true' hysteresis.
In the first case, hysteresis is illustrated as a natural consequence of the Cauchy-Lipschitz theorem on the existence of solutions of systems of linear differential equations.
Let us consider the equation:

$$x_t = a x_{t-1} + b + e_t \qquad (1)$$

where $b$ is a constant and $e$ is a stochastic variable. By repeated substitution, Equation (1) may be written as:

$$x_t = a^t x_0 + b \sum_{i=0}^{t-1} a^i + \sum_{i=0}^{t-1} a^i e_{t-i} \qquad (2)$$

If the equation has a unit root ($a = 1$) then the solution of (2) is:

$$x_t = x_0 + b t + \sum_{i=1}^{t} e_i \qquad (3)$$

This solution points out that the current value of $x$ depends on past values, thus signalling hysteresis. If $a < 1$ and $e = 0$ the solution of (2) becomes:

$$x_t = a^t x_0 + b \frac{1 - a^t}{1 - a} \qquad (4)$$

so $x$ depends exclusively on $a$ and $b$. For small values of $t$, the past influences present values of $x$; in the limit, however, $x$ tends to $b/(1-a)$, a value given by (4) that does not indicate a dependence on the past, confirming the presence of hysteresis only as a particular case.
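A small simulation of equation (1) illustrates the point: with a unit root the effect of a one-off shock is fully remembered, while for a < 1 it dies away. The parameter values below are arbitrary.

```python
# A small simulation of x_t = a*x_{t-1} + b + e_t, contrasting the unit-root
# case (a = 1, shocks persist) with a stationary case (a < 1, shocks fade).
# The shock size and timing are illustrative.
import numpy as np

def simulate(a, b=0.0, x0=0.0, shock_at=50, shock=5.0, steps=200):
    x = np.empty(steps)
    x[0] = x0
    for t in range(1, steps):
        e = shock if t == shock_at else 0.0   # single one-off shock
        x[t] = a * x[t - 1] + b + e
    return x

unit_root = simulate(a=1.0)
stationary = simulate(a=0.8)
print("effect of the shock 100 steps later:")
print("  a = 1.0:", unit_root[150] - unit_root[49])    # shock fully remembered
print("  a = 0.8:", stationary[150] - stationary[49])  # shock has died away
```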
Criticism was brought to this approach, the most important referring to the oversimplification of the concept and the impossibility of characterising the structural changes of hysteresis effects, since it only takes into account the application to linear systems (Setterfield, 2009; Cross, 1993). This alienation from its usages in physics and mathematics determined the approach to hysteresis through unit/zero roots in linear differential equations to be labelled as 'bastard' usage (Piscitelli et al., 2000).
The contributions of Vito Volterra, especially the predator-prey model (Volterra, 1927), demonstrate that hysteresis was a major preoccupation for mathematicians as early as the beginning of the 1900s. An important temporal lag between physical hysteresis and mathematical hysteresis is noticed, however. Moreover, although applicative studies used mathematical approaches of hysteresis, these were mere calculations and not functional analyses. Only in 1966 did hysteresis become the object of the functional analysis when R. Bouc modelled a series of hysteretic phenomena, regarding hysteresis as a map between function spaces (Visintin, 2006).
Between 1970 and 1980, M.A. Krasnosel'skii, A.V. Pokrovskii and their colleagues elaborated a formal model with hysteresis operators starting from the magnetic hysteresis model of Franz Preisach (Preisach, 1935), and conducted a systematic analysis of the mathematical properties of these operators. Their efforts were concretised in a monograph published in 1983 and translated into English in 1989. This model elaborated by Krasnosel'skii and his collaborators represented the conceptual basis for the introduction of 'true' hysteresis in economics by Cross (1993) and Amable et al. (1993, 1994, 1995). In the following paragraphs, the analysis elaborated by Krasnosel'skii is presented, using the explanations offered by Mayergoyz (1986, pp. 604-605) and Cross (1993, pp. 59-66).
The system is considered to be affected by pairs of expansionary and contractionary shocks such as that a value a of the shock will raise the output, while a value b of the shock will determine the output's decrease. The combinations of critical values of a and b are denoted by Hab, which defines a set of hysteresis operators (hysterons).
The economic agent is affected by a shock r t . When the shock reaches the critical value a, then the agent's output will rise by 1. When r t drops to the critical value b then the agent's output will decrease by 1.
The hysteron Hab describes how the aggregate shock r t determines the increase or the decrease of the output for a certain agent, and this output is denoted as Hab r t .
The shocks' intensity required to determine the output increase or decrease, respectively, differs between agents and also, in the case of the same agent, over time, which imposes the necessity to define a function $g(a,b)$ specifying the relative weight of the output of each agent in the aggregate output $y_t$. Then, the total output may be written as $y_t = \iint g(a,b)\, H_{ab} r_t \, da\, db$. The hysteresis effect may be illustrated by the following situation: a first expansionary shock determines the augmentation of the output for those agents whose values of a are less than the aggregate shock. A subsequent contractionary shock will not surpass the b values of all agents which previously increased output, so the initial shock continues to influence their current output. The initial condition is that the first shock r 0 is less than b 0 so that the entire system is affected, determining the decrease of the output of all agents, all Hab carrying the value -1. A second shock, r 1 , then affects the system. The outputs of the agents with values of a less than or equal to r 1 will increase while the other agents will continue to reduce output, which will lead to the subdivision of agents in two categories: the category of agents increasing output (S+) and the category of agents decreasing output (S-). A subsequent contractionary shock r 2 will determine the agents with values of b greater than or equal to r 2 to decrease output, while the rest will continue to increase output, thus modifying the subdivision of agents in the two categories. For some of the agents, the effects of the initial expansionary shock have been annulled by the effects of the subsequent contractionary shock. One may, therefore, assert that the memory of the system is selective, and only the non-dominated maximum and minimum values of previous shocks are remembered, thus affecting current output.
A second expansionary shock r 3 will determine the output to increase for agents with values of a less than or equal to r 3 . A second contractionary shock r 4 will determine the output to decrease for those agents with values of b greater or equal to r 4 . The new conditions create the opportunity for a new subdivision of agents in the two categories. Continuing the process and allocating decreasing values for input maxima and increasing values for input minima will lead to a new division between categories.
Aggregate output is determined by the subdivision of agents, which is in turn determined by the extreme values of the experienced shocks. In other words, the system's memory records only the non-dominated maxima and minima experienced. The aggregate output can be written as $y_t = \iint g(a,b)\, H_{ab} r_t \, da\, db$. Given that $H_{ab} r_t = +1$ for agents in the category of increasing output (S+) and $H_{ab} r_t = -1$ for agents in the category of decreasing output (S-), the aggregate output can be written as $y_t = \iint_{S+} g(a,b)\, da\, db - \iint_{S-} g(a,b)\, da\, db$. According to the compelling mathematical definition of hysteresis given by Krasnosel'skii and Pokrovskii (1989) and supported by Mayergoyz (1991), a system with memory is considered to be a hysteresis system if it has two properties: remanence and selective memory. Remanence is best illustrated by the first example of magnetic hysteresis given by Ewing: after successively applying to a probe two opposite magnetic fields of the same intensity, the probe would not return to the initial state. Similarly, one may interpret remanence in economic systems: if the system experiences successively two equal but opposite shocks it will not return to the initial state. The selective memory refers to the system's property to retain only the non-dominated maxima and minima, that is, the most significant previous shocks experienced. Piscitelli et al. (2000, pp. 63-71) took a step further in the arguments exposed by Krasnosel'skii (1983, 1989), Mayergoyz (1986, 1991), and Cross (1993) and developed an algorithm for computing hysteresis variables for time series.
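A toy discrete version of this hysteron aggregation is sketched below. The threshold distribution, weights and shock path are illustrative, and the code is not the exact Piscitelli et al. algorithm, only a demonstration of remanence and selective memory.

```python
# A toy discrete Preisach-type aggregation in the spirit of the construction
# above: each agent is a hysteron H_ab switching to +1 when the shock exceeds
# its threshold a and to -1 when it falls below b. All values are illustrative.
import numpy as np

def hysteron_output(shocks, a, b, state=-1):
    """Output path of one hysteron with up-threshold a and down-threshold b."""
    out = []
    for r in shocks:
        if r >= a:
            state = 1
        elif r <= b:
            state = -1
        out.append(state)            # between b and a the previous state is remembered
    return np.array(out)

def aggregate_output(shocks, thresholds, weights):
    """Weighted sum of hysteron outputs, a discrete stand-in for the double integral."""
    outputs = np.array([hysteron_output(shocks, a, b) for a, b in thresholds])
    return weights @ outputs

rng = np.random.default_rng(3)
thresholds = [(a, a - rng.uniform(0.5, 1.5)) for a in rng.uniform(0.0, 2.0, 200)]
weights = np.full(200, 1.0 / 200)

# A temporary expansionary shock followed by a return to zero: the aggregate
# output does not return to its initial level (remanence), and only the
# extreme shock values matter for the final state (selective memory).
shocks = np.array([0.0, 1.5, 0.0, -0.5, 0.0])
print(aggregate_output(shocks, thresholds, weights))
```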
Methodological considerations: the economic background
The importance of consumption within national economies sustains its continuous study. Romania is no exception with final consumption accounting for approximately 70% of the GDP. The recent evolution of consumption is consistent with the general evolution of both the Romanian economy and the international economic context. The most important turning points in the evolution of Romanian consumption offer an accurate reflection of the social, political, and economic state of the country.
Following the dismissal of the Communist Party at the end of 1989, new political factors opted for orientation toward a market economy, but as in the case of other former socialist countries, this alternative generated serious negative consequences for the population. During the 1990s, Romania faced significant gaps compared to the Western European countries as far as economic development was concerned. The accumulation of disequilibria caused by the slow rhythm of the reforms rather frequently doubled by inconsistent public policies reflected upon the evolution of economic phenomena and processes. A significant problem faced by the Romanian economy after 1990 was inflation. The phenomenon was definitely present before the year 1990, but the specific mechanisms of the socialist economy kept it under control. Once released from this artificial restraint the inflation rate reached high levels during the 1990s, with a peak of 256.1% in 1993.
Although in the late 1990s GDP continued to drop, and the inflation rate was on a rather upwards path, the year 1999 laid the foundations for economic growth at the beginning of the twenty-first century. The year 2000 marked a growth of 1.6% of the GDP, after three years of involution. In addition, the inflation rate, the budget deficit, and the unemployment rate dropped. The priorities of the new government formed following the elections organised in 2000 targeted economic growth and the reduction of the inflation rate as preliminary objectives for EU accession.
Positive economic results generated by the EU accession objective characterised the period 2000-2006. The major economic indicators grew and consolidated the positive trend initiated at the beginning of the new century.
Following EU accession on 1 January 2007, the positive trend continued up to and including 2008. Although starting with the second half of 2007 the effects of the global crisis became apparent, the Romanian economy experienced the first signs of the crisis only in 2009 when the GDP dropped, the budget deficit increased, and the national currency faced depreciation. According to Duhnea (2012), the net direct investments, which recorded unprecedented growth between 2006 and 2008, amounted to only 3.5 billion euros in 2009, lower by 61.9% compared to the previous year, signalling a compression trend that persisted throughout 2010 and 2011.
As for the evolution of consumption in Romania, one may notice a close interdependence with the overall development of the national economy. In the early 2000s the proliferation of credit opportunities and the growth of real salary determined an increase in consumption, the annual growth rate attenuated during 2005-2006 and dropped in 2009 (by 5.4%), when the effects of the economic crisis became obvious. The growth rate of households' consumption, which surpassed the growth rate of GDP, illustrated the accentuated dynamism of consumption during 2000-2008. In 2010, households' consumption in constant prices increased slightly, only to return in 2011 to the level from 2009. The period 2009-2016 shows an oscillatory evolution of households' consumption, with levels below those registered in 2008, although GDP has been on an obvious ascendant path following 2012 (Figure 1). Figure 2 shows the households' consumption share in GDP, for the period 1990-2016.
Methodological considerations: the data
For the research presented in this paper, official annual data for the period 1990-2016 on households' actual individual final consumption (C), on the disposable income of households (Y), and on the monetary aggregate M 1 , were used.
According to the methodology of the National Institute for Statistics, actual final consumption 'comprises the households' actual individual final consumption and the government's actual collective final consumption. The households' actual individual final consumption includes households' expenditure for purchasing goods and services to meet their members' needs, expenditure for individual consumption of general government (education, health, social-security and social activities, culture, sport, recreation, waste collection) and expenditure for individual consumption of nonprofit institutions serving households (religious organisations, trade unions, political parties, unions, foundations, cultural and sport associations)' (National Institute of Statistics, Monthly Statistical Bulletin, No 2/2012, p. 140). The households' actual individual final consumption accounts for more than 90% of the actual final consumption, which justifies our choice to use this variable in the study.
According to the definition provided by the European Central Bank (www.ecb.int), M 1 (narrow money) 'includes currency such as banknotes and coins, as well as balances which can immediately be converted into currency or used for cashless payments, i.e. overnight deposits'. The choice for the latter indicator is justified by the fact that 'money (M 1 ) is no doubt a dominant asset: the store of value' (Dwivedi, 2005, p. 248). Moreover, the 'monetary theory has emphasized two different, but not mutually exclusive, functions of money: a medium of exchange and a store of wealth' (Batten & Thornton, 1985, p. 30) and is simultaneously related to the current households' consumption. Therefore, it represents the most suitable option given the particularities of the Romanian economy, in which case the aggregate M 1 is best represented compared to the additional elements of the M 3 aggregate, considered by other authors as a representation of wealth. According to the most recent Annual Report of the National Bank of Romania (2016, p. 67), the weight of M 1 in M 3 has continued its ascendant path of the past five years, reaching, at the end of the period, the record of the last 22 years (57.3%).
The sources for the data were the Statistical Yearbooks of Romania (1995-2014), the Monthly Statistical Bulletins (2012-2016), and the Monthly Bulletins of the National Bank of Romania (1998-2016). All the data were deflated and the series were stationarised by taking the first order differences. The research was conducted on the time series for households' consumption, disposable income of households, and the monetary aggregate M 1 , used to approximate wealth, containing 27 observations. The number of observations is suitable for a reliable analysis (Tiron, 1976).
Methodological considerations: the research goal and hypotheses
The goal of our research was to reveal the presence of the hysteresis phenomenon in Romanian households' final consumption. To this end, we employed an integrated unit root -'true' hysteresis approach, where the unit root theory was partially applied to evaluate the time series used for research. Both the unit root and the 'true' hysteresis approaches were presented earlier in the paper.
Initially, a statistical analysis was performed for testing the following hypotheses about the study series: the autocorrelation (using the autocorrelation function and the Durbin-Watson test for first-order autocorrelation), the existence of a unit root against stationarity (by the Augmented Dickey-Fuller and KPSS tests), and the series' homogeneity (by the Pettitt, Buishand, and Standard Normal Homogeneity Test (SNHT) tests). The latter tests were performed since hysteresis is associated with structural changes determined by historical experience (Setterfield, 2009).
All tests were performed at a significance level of 0.05. Emphasis will not be placed on these tests since they are well known in statistics. The reader may refer to Pfaff (2008).
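For illustration, unit-root checks of this kind can be run with statsmodels, as sketched below on a synthetic random walk; the Romanian series themselves are not reproduced here.

```python
# An illustrative unit-root check with statsmodels on a synthetic random walk,
# mirroring the ADF and KPSS tests used in the paper.
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(4)
series = np.cumsum(rng.normal(size=200))      # random walk: has a unit root

adf_stat, adf_p, *_ = adfuller(series)
kpss_stat, kpss_p, *_ = kpss(series, regression="c", nlags="auto")
print(f"ADF  p-value: {adf_p:.3f}  (H0: unit root; large p -> do not reject)")
print(f"KPSS p-value: {kpss_p:.3f}  (H0: stationarity; small p -> reject)")

# First-differencing, as done in the paper, removes the unit root.
diff = np.diff(series)
print(f"ADF p-value after differencing: {adfuller(diff)[1]:.3f}")
```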
The research was conducted in two phases.
In the first phase, we aimed at testing the relationship between the dependent variable households' actual individual final consumption (C) and both the current and previous values of two independent variables: disposable income of households (Y) and wealth (W).
The following hypotheses were formulated:
Hypothesis 1. Consumption is influenced by current income.
Hypothesis 2. Consumption is influenced by both current and previous income.
Hypothesis 3. Consumption is influenced by current income and previous consumption.
Hypothesis 4. Consumption is influenced by current income and current wealth.
Hypothesis 5. Consumption is influenced by current income, current wealth, previous income, and previous wealth.
XLStat and E-views Enterprise software (Edition 7.0) were used for performing the statistical analysis and the modelling.
In the second research phase, we took the 'true' hysteresis approach to test for the presence of hysteresis in Romanian households' final consumption. To this end, we applied the algorithm elaborated by Piscitelli et al. (2000, pp. 63-71) (Reprinted by permission from Springer Nature: Springer Nature COMPUTATIONAL ECONOMICS, A test for Strong Hysteresis, Piscitelli, L., Cross, R., Grinfeld, M., Lamba, H., COPYRIGHT © Kluwer Academic Publishers (2000)).
Finally, the cointegration test of Johansen was performed.
The research results
The results of the statistical analysis for the initial series of income, consumption, and wealth were the following.
All the series were autocorrelated. In Figure 3 we present the autocorrelograms of the consumption and wealth series. The dashed lines represent the limits of the confidence interval at the confidence level of 0.95.
The Durbin-Watson test confirmed the existence of first-order autocorrelation. The ADF test did not reject the unit-root hypothesis and the KPSS test rejected the stationarity hypothesis for all the series. The non-stationarity was also confirmed by the slow damping of the autocorrelogram. Therefore, the series were stationarised by taking first-order differences.
After testing the hypothesis H 0 : the series is homogenous (there is no change point in the time series) against the alternative H 1 : the series is not homogenous (there is at least a change point in the series), the null hypothesis was rejected for all the series. The results of the Pettitt, Buishand, and SNHT tests are presented in Table 1, together with the change points.
In the following, we denote the series of independent variables by: income, with current and previous levels (Y t , Y t-1 ); previous consumption (C t-1 ); and wealth (W), with current and previous levels (W t , W t-1 ).
In the first research phase, a series of econometric models having households' consumption as the dependent variable were tested. Considering the dependent variable to be consumption ($C_t$), the following linear models, corresponding to Hypotheses 1-5, were created:

$$C_t = \beta_0 + \beta_1 Y_t + u_t \qquad (\text{Model 1})$$
$$C_t = \beta_0 + \beta_1 Y_t + \beta_2 Y_{t-1} + u_t \qquad (\text{Model 2})$$
$$C_t = \beta_0 + \beta_1 Y_t + \beta_2 C_{t-1} + u_t \qquad (\text{Model 3})$$
$$C_t = \beta_0 + \beta_1 Y_t + \beta_2 W_t + u_t \qquad (\text{Model 4})$$
$$C_t = \beta_0 + \beta_1 Y_t + \beta_2 Y_{t-1} + \beta_3 W_t + \beta_4 W_{t-1} + u_t \qquad (\text{Model 5})$$

where $u_t$ is the residual random variable. For each model, the significance of the coefficients was tested using the Student t-test, and the significance of the model as a whole using the F-test. The results of these tests are shown in Table 2.
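Regressions of this form and the associated t- and F-tests can be estimated, for example, with statsmodels, as in the sketch below; the data are synthetic stand-ins for the actual series and only Model 4 is shown.

```python
# A sketch of how a regression of the Model 4 form can be estimated with
# statsmodels. The data below are synthetic stand-ins, not the actual
# Romanian series.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 27                                             # same length as the 1990-2016 sample
Y = rng.normal(size=n).cumsum()                    # stand-in for disposable income
W = rng.normal(size=n).cumsum()                    # stand-in for wealth (M1)
C = 0.7 * Y + 0.2 * W + rng.normal(scale=0.3, size=n)

df = pd.DataFrame({"C": C, "Y": Y, "W": W}).diff().dropna()   # first differences

X = sm.add_constant(df[["Y", "W"]])                # Model 4: current income and wealth
model4 = sm.OLS(df["C"], X).fit()
print(model4.summary().tables[1])                  # coefficient estimates and t-tests
print("F-statistic:", model4.fvalue, "p-value:", model4.f_pvalue)
```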
From Table 2, it results that only in Models 1 and 4 are the variables significant. Therefore, consumption is influenced by current income and wealth, while previous consumption, previous income, and previous wealth do not have a significant influence. The relationship between consumption and income is obvious and unquestioned. The relationship between consumption and wealth (with M 1 used as a proxy for wealth) is consistent with previous studies conducted in Romania (Moraru & Moise-Titei, 2012).
In the second research phase, for the 'true' hysteresis approach, only the variables found significant at the previous stage were taken into account as independent variables. These are income and wealth. During this phase, the stationarised series of income and wealth were transformed into hysteresis time series (HY and HW) using the algorithm suggested by Piscitelli et al. (2000), as presented in the previous sections of this paper. Figures 4 and 5 show the evolutions of income and wealth between 1990 and 2016, before and after the hysteresis transformation, respectively.
The Augmented Dickey-Fuller (ADF) test was applied to the hysteresis transformations of income (HY) and wealth (HW), and the hypothesis of stationarity was not rejected.
Subsequently, the existence of cointegration relationships between the studied series was tested using the Johansen cointegration test, and the results are presented in Table 3.
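For illustration, the Johansen test is available in statsmodels; the sketch below applies it to two synthetic series sharing a common stochastic trend, with illustrative choices for the deterministic term and lag order.

```python
# A hedged illustration of the Johansen cointegration test with statsmodels on
# two synthetic series that share a common stochastic trend.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(6)
trend = np.cumsum(rng.normal(size=300))            # shared stochastic trend
x = trend + rng.normal(scale=0.5, size=300)
y = 2.0 * trend + rng.normal(scale=0.5, size=300)
data = pd.DataFrame({"x": x, "y": y})

result = coint_johansen(data, det_order=0, k_ar_diff=1)
for r, (stat, crit) in enumerate(zip(result.lr1, result.cvt[:, 1])):
    print(f"rank <= {r}: trace statistic {stat:.2f}, 5% critical value {crit:.2f}")
```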
The existence of cointegration relationships was not rejected at the significance level of 0.05.
Conclusions
We may therefore conclude that hysteresis in the case of consumption in Romania cannot be ruled out; in other words, temporary influences on consumption determinants seem to have remanent effects on consumption. Our study revealed that income and wealth strongly influence consumption in Romania and that the presence of hysteresis cannot be denied. Therefore, the non-dominated shocks affect the equilibrium of the system; in other words, severe changes occurring in one or both of the independent variables have a lasting impact on consumption.
It is our strong belief that decision makers should be aware of this reality and act accordingly; up to the present moment, however, this seems not to be the case. The effects of the economic crisis determined the government to adopt a series of austerity measures. Among those, some of the most controversial measures adopted in 2009 and especially in 2010 had a strong, unfortunate, and, one may add, lasting impact on consumption. These include, but are not limited to, the increase of VAT from 19% to 24%, the extensively debated and contested measure of cutting budgetary salaries by 25%, a 16% tax on deposit interests, capital market and monetary market operations income, and notable increases of local taxes as well. To sum up, the economic recovery policy focused on cutting salaries, pensions, subventions, social allowances, and unemployment benefits and at the same time on increasing numerous taxes. Even though some of the measures have been reversed since then, their effects are most likely to last for a significant length of time. Up to 2016, the level of households' consumption has remained below the peak level registered in 2008, before the economic crisis.
Considering that the current economic context represents a favourable framework for empirical testing of hysteresis and may not only herald new opportunities for study but also set new directions for economic policy orientation, the present paper aimed at following up the inclusion of the phenomenon at the level of macroeconomic consumption in Romania. The research results admitted the presence of hysteresis, thus revealing implications inaccessible by a different approach and pointing out several concerns regarding the effects of recently adopted economic measures on consumption in Romania. | 7,011.2 | 2018-01-01T00:00:00.000 | [
"Economics"
] |
Drug repositioning based on individual bi-random walks on a heterogeneous network
Background Traditional drug research and development is costly, time-consuming and risky. Computationally identifying new indications for existing drugs, referred to as drug repositioning, greatly reduces the cost and attracts ever-increasing research interest. Many network-based methods have been proposed for drug repositioning, and most of them apply random walks on a heterogeneous network consisting of disease and drug nodes. However, these methods generally adopt the same walk-length for all nodes and ignore the different contributions of different nodes. Results In this study, we propose a drug repositioning approach based on individual bi-random walks (DR-IBRW) on the heterogeneous network. DR-IBRW firstly quantifies the individual walk-length of random walks for each node based on the network topology and the knowledge that similar drugs tend to be associated with similar diseases. To account for the inner structural difference of the heterogeneous network, it performs bi-random walks with the quantified walk-lengths to identify new indications for approved drugs. An empirical study on public datasets shows that DR-IBRW achieves much better drug repositioning performance than other related competitive methods. Conclusions Using individual random walk-lengths for different nodes of the heterogeneous network indeed boosts the repositioning performance. DR-IBRW can be easily generalized to prioritize links between nodes of a network.
Background
Traditional drug research and development depends on cell-based or target-based screening of chemical compounds to identify a small subset of 'hits'. The identification process aims to further increase their affinity, efficacy and selectivity before moving forward to animal tests and clinical trials [1]. Drug development in general is complicated, time-consuming and expensive, with high risk [2]. In light of these difficulties in traditional drug discovery, identifying new indications for existing drugs, also known as drug repositioning, has attracted increasing interest from both the pharmaceutical industry and the research community [3]. Drug repositioning is much more economical compared with traditional approaches; it offers a promising alternative to reduce the cost and time, since the repositioned drug has already passed the required safety tests.
However, most successfully repositioned drugs to date have been the consequence of incidental observations of unexpected efficacy and side effects during development or on the market [4]. For example, Sildenafil was originally tested for angina and is now indicated for erectile dysfunction and pulmonary hypertension [2]; Minoxidil was originally tested for hypertension and is now indicated for hair loss [5]. With the influx of big biochemical and phenotypic data, drug repositioning holds great potential for precision medicine. It is profitable and promising to develop computational methods to predict new indications for approved drugs on a large scale. Several computational drug repositioning methods have been proposed, and they can be roughly divided into two categories: those focusing on the interactions between drugs and targets, and those focusing on exploiting the knowledge of diseases and drugs [6]. To name a few, Bleakley and Yamanishi [7] developed a bipartite local model (BLM) to predict target proteins of a given drug and target drugs of a given protein, and then combined these two predictions to give a final prediction for each candidate drug-target interaction. Cheng et al. [8] used drug-target bipartite network topology similarity and a network-based inference algorithm (NBI) to infer new targets for known drugs. Wang et al. [9] used known drug-target interactions as well as drug-drug and target-target similarities to construct a heterogeneous network, and then introduced a Heterogeneous Graph Based Inference (HGBI) method to iteratively update the strength between unlinked drug-target pairs based on all the paths in the network connecting them. These drug-target prediction methods can be readily adopted for drug repositioning.
Chiang et al. [10] attempted to predict novel associations between drugs and diseases based on the widely-adopted 'guilt-by-association' principle. This principle assumes that if a drug can treat one of two similar diseases, then it might also treat the other; alternatively, a disease can be treated by two similar drugs. Following this principle, Gottlieb et al. [11] measured the similarity between the drug and disease of drug-disease pairs known to be associated, based on multiple drug-drug and disease-disease similarity metrics, and then ranked the accumulative evidence for association using a logistic regression scheme to predict novel drug indications. Wang et al. [1] integrated omics data about diseases, drugs and drug targets to construct a heterogeneous network and then applied random walks on the network to replenish missing associations between drugs and diseases. Martinez et al. [6] integrated information on diseases, drugs and targets (proteins) to construct a heterogeneous network and then performed propagation flow on the network to prioritize candidate associations between diseases and drugs according to their interconnections in the network. Luo et al. [12] proposed MBiRW to predict drug-disease associations. MBiRW employs known drug-disease associations to improve the drug-drug and disease-disease similarity measures; it then integrates the similarity networks and drug-disease associations to build a drug-disease heterogeneous network; after that, it performs bi-random walk with restart on the network to predict novel potential drug-disease associations. Liu et al. [13] performed a drug-centric random walk and a disease-centric random walk to obtain the association confidence between the disease nodes and drug nodes of a heterogeneous network.
Most of the aforementioned methods are in essence random walk based solutions. Although they make use of the network topology from different perspectives, they ignore the different contributions of different nodes in transferring information on the network, and almost all adopt a fixed walk-length for all nodes. To overcome this issue, we propose a novel drug repositioning approach (called DR-IBRW) that performs bi-random walk with restart on a heterogeneous network with a quantified individual walk-length for each node. DR-IBRW uses disease symptom information [14] and drug chemical fingerprints [15] to construct a composite disease-disease similarity network and a drug-drug similarity network. It then quantifies the individual walk-length for each node based on the topology of the known drug-disease association network. Next, it constructs a heterogeneous network based on these three networks. After that, it performs bi-random walks with the quantified walk-lengths to account for the structural differences of these networks and the contribution differences of different nodes (including diseases and drugs), and to predict new associations between drugs and diseases, and thus to accomplish drug repositioning. We evaluate and compare the performance of DR-IBRW on several public datasets. DR-IBRW obtains much better performance than other related comparing methods [7-9, 12, 16] in identifying new indications for existing drugs, and the quantified individual walk-length indeed contributes to an improved prediction performance. We want to remark that the proposed individual bi-random walk solution is different from existing personalized random walk solutions [17,18] that mainly focus on setting different restart probabilities for different nodes.
Dataset
The datasets used in this work include drug-disease associations, drug fingerprints and disease symptoms. We collected 4219 diseases from MeSH [19] and 322 symptoms for each disease from the supplementary material of [14]. The drug-disease association dataset was obtained from [20]; it includes 3250 known drug-disease associations involving 799 drugs and 719 diseases. We also collected 881 fingerprints for each drug from PubChem [15]. Since relevant symptom information could be found for only 525 diseases in the supplementary material of [14], the final processed dataset includes 525 diseases, 718 drugs and 2177 drug-disease associations. All these data were collected on November 1st, 2017.
Similarity measures
We separately apply a four-step measurement to quantify the inner-similarity between diseases and between drugs.
The first three steps are based on the comprehensive similarity measurement used by Luo et al. [12]. In the fourth step, we use Gaussian interaction profile kernel similarity [21] to measure the similarity between drugs and diseases. Finally, we combine these similarities to form the composite similarity between diseases and between drugs. The four-step procedure of measuring the similarity between drugs is briefly introduced as follows.
Step 1: Based on the chemical fingerprints of the drug molecules, we initially measure the similarity S^1_r ∈ R^{n_r × n_r} between the n_r drugs via the widely used cosine similarity metric [22]. Let r_i and r_j denote the fingerprint vectors of drugs r_i and r_j; the chemical similarity between them is defined as S^1_r(r_i, r_j) = (r_i · r_j) / (||r_i|| ||r_j||).

Step 2: Too small a similarity provides little information for drug repositioning and can be set to zero for accurate prediction [9,12]. We partition S^1_r into ten subranges ((0, 0.1], (0.1, 0.2], etc.) and calculate the average similarity of drug pairs with shared diseases for each subrange. We also randomly shuffle S^1_r and repeat the partition and calculation process. If the average of a non-shuffled subrange is smaller than that of the respective shuffled subrange, the drug similarities falling into this subrange are viewed as not informative; otherwise, they are informative. We then adopt a logistic function [23] to shrink the non-informative similarities towards zero and to enlarge the informative ones. The logistic function is controlled by two tunable parameters c and d, where c is the upper bound of the first subrange whose average similarity is smaller than that of the respective shuffled subrange and d = log(999). After this adjustment, we obtain an updated drug similarity matrix S^2_r.
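As an illustration of Step 1, the cosine-similarity matrix can be computed directly from the binary fingerprint matrix. The sketch below is our own minimal example, not the authors' code; `fingerprints` is an assumed NumPy array of shape (n_r, 881) with one PubChem fingerprint per row.

```python
import numpy as np

def cosine_similarity_matrix(fingerprints: np.ndarray) -> np.ndarray:
    """Step 1 sketch: S1_r[i, j] = cos(r_i, r_j) over fingerprint rows."""
    norms = np.linalg.norm(fingerprints, axis=1, keepdims=True)
    norms[norms == 0] = 1.0      # guard against drugs with all-zero fingerprints
    unit = fingerprints / norms
    return unit @ unit.T
```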
Step 3: Two drugs are more similar if they are grouped into the same cluster. To make use of this assumption, we first construct a weighted drug-sharing network with drugs as nodes and edge weights reflecting the number of diseases shared by the respective pair of nodes. We then adopt a graph clustering method, ClusterONE [24], to identify potential drug clusters on the network, and increase S^2_r by the clustering cohesiveness of a cluster if and only if the two drugs belong to that cluster. The cohesiveness of a cluster C is defined as f(C) = W_in(C) / (W_in(C) + W_bound(C) + P(C)), where W_in(C) denotes the total weight of edges within the cluster, W_bound(C) represents the total weight of edges connecting nodes of this cluster to nodes of other clusters, and P(C) is a penalty term. Suppose that drugs r_i and r_j are located in the same cluster C; the comprehensive drug similarity between them is then defined as S^3_r(r_i, r_j) = (1 + f(C)) * S^2_r(r_i, r_j). In this way, we obtain an improved drug similarity matrix S^3_r.
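The Step 3 boost can be sketched as follows. This is our own illustration, in which `clusters` is assumed to be a list of drug-index sets returned by ClusterONE and `cohesiveness` a function implementing f(C) as defined above.

```python
import numpy as np

def apply_cluster_boost(S2_r: np.ndarray, clusters, cohesiveness) -> np.ndarray:
    """Boost the similarity of drug pairs that fall into the same cluster."""
    S3_r = S2_r.copy()
    for C in clusters:
        f = cohesiveness(C)                      # f(C) in [0, 1]
        idx = np.array(sorted(C))
        # Only pairs (i, j) with both drugs in cluster C are scaled by (1 + f(C)).
        S3_r[np.ix_(idx, idx)] = (1.0 + f) * S2_r[np.ix_(idx, idx)]
    return S3_r
```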
Step 4: Based on the assumption that similar drugs tend to show similar interaction and non-interaction profiles with diseases, we further use the Gaussian interaction profile (GIP) kernel similarity to measure the similarity between drugs [21, 25, 26]. The interaction profile IP(r_i) of drug r_i is defined as a binary vector encoding the presence or absence of known associations between the drug and the n_d diseases. The Gaussian interaction profile kernel similarity between two drugs r_i and r_j is computed as S^KR_r(r_i, r_j) = exp(-γ_r ||IP(r_i) - IP(r_j)||^2), where the kernel bandwidth γ_r is obtained by normalising a reference bandwidth by the average number of diseases associated with each drug.
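A hedged sketch of the Step 4 kernel is given below, following the Gaussian interaction profile formulation of [21]; `W_rd` is the binary drug-disease association matrix (one interaction profile per row), and the bandwidth normalisation shown is an assumption consistent with that reference rather than a quotation of the paper's equation.

```python
import numpy as np

def gip_kernel(W_rd: np.ndarray, gamma_prime: float = 1.0) -> np.ndarray:
    """GIP kernel between drugs: exp(-gamma * ||IP(r_i) - IP(r_j)||^2)."""
    sq_norms = np.sum(W_rd ** 2, axis=1)              # = number of associations per drug (binary profiles)
    gamma = gamma_prime / sq_norms.mean()             # bandwidth normalised by the average profile size
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (W_rd @ W_rd.T)
    return np.exp(-gamma * sq_dists)
```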
To this end, we combine S^3_r and S^KR_r into the composite similarity matrix S_r between the n_r drugs. Following the above four steps, we can also compute the composite similarity S_d ∈ R^{n_d × n_d} between the n_d diseases, based on the symptom information of these diseases and the drug-disease associations.
Quantifying individual walk-length
Network-based drug repositioning methods generally apply random walks on a network with a fixed walk-length for all nodes to explore the network topology [12,27,28]. They ignore the different contributions of different nodes to some extent. Given that, we introduce an individual walk-length measure and try to make better use of the topology of the known drug-disease association bipartite network W_rd ∈ R^{n_r × n_d} of n_r drugs and n_d diseases, where W_rd(r_i, d_j) = 1 if the association between drug r_i and disease d_j is known, and 0 otherwise.
The walk-length of a node generally depends on its influence in the network [29]. We extend the Jaccard index measure introduced by Lu et al. [16] to quantify the individual walk-length of nodes. Let N_r(r_i) denote the set of neighbours of drug r_i and N_d(d_j) denote the set of neighbours of disease d_j; if r_i and d_j share many common neighbours, they are more likely to influence each other. For a randomly selected feature of either r_i or d_j, the traditional Jaccard index measures the probability that both r_i and d_j have that feature [30]: J(r_i, d_j) = |N_r(r_i) ∩ N_d(d_j)| / |N_r(r_i) ∪ N_d(d_j)|. Since there are no links between diseases or between drugs in the drug-disease bipartite network, N_r(r_i) ∩ N_d(d_j) is an empty set. For this reason, we have to modify the definition of the Jaccard index for a bipartite graph. Particularly, we define the set of drugs associated with r_i's neighbouring diseases and compute the Jaccard index between this set and the neighbours of d_j; the resulting value represents the influence between the drug-disease pair (r_i, d_j). We assume that a node with high quantified influence has a higher probability of interacting with others during the random walk process, and such a node should have a larger walk-length. Based on this assumption, we measure the walk-length of each node from these influence values, obtaining L_r ∈ R^{n_r} and L_d ∈ R^{n_d}, which store the individual walk-lengths of the n_r drugs and n_d diseases, respectively.
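The sketch below gives one plausible reading of the bipartite Jaccard influence described above; it is our own illustration and does not reproduce the paper's exact Eqs. (7-9), which also map the influence values to integer walk-lengths.

```python
import numpy as np

def bipartite_jaccard(W_rd: np.ndarray, i: int, j: int) -> float:
    """Influence between drug i and disease j on the bipartite association graph."""
    disease_neighbours = np.flatnonzero(W_rd[i, :])           # diseases linked to drug i
    # Drugs associated with drug i's neighbouring diseases (one interpretation of N_r(...)).
    drugs_via_neighbours = set(np.flatnonzero(W_rd[:, disease_neighbours].any(axis=1)))
    drugs_of_disease_j = set(np.flatnonzero(W_rd[:, j]))       # neighbours of disease j
    union = drugs_via_neighbours | drugs_of_disease_j
    if not union:
        return 0.0
    return len(drugs_via_neighbours & drugs_of_disease_j) / len(union)
```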
Individual bi-random walk
Based on the inner similarity network (defined by S_r) of drugs, the inner similarity network (defined by S_d) of diseases, and the drug-disease bipartite network initialized by the known drug-disease associations, we can construct a heterogeneous network of drugs and diseases (see Fig. 1 for an example). We adopt a bi-random walk with restart procedure [27] on this heterogeneous network. Compared with the traditional random walk with restart, the bi-random walk with restart can propagate information separately in the different subnetworks instead of the global network [28]. For this reason, the bi-random walk can separately account for the inner structure of the disease similarity network and of the drug similarity network, and also make use of the associations between drugs and diseases. A random walker can take a drug as the starting node, its associated diseases as intermediate nodes, and then traverse to other disease nodes. In this way, we obtain probabilistic associations between the drug and new diseases, and thus identify potential new indications of the drug. To mimic this process, we perform random walk with restart starting from drug nodes and then traversing to disease nodes, based on the quantified individual walk-lengths and the heterogeneous network topology (Eq. (10)). In Eq. (10), F^t_r(r_i, d_j) is the predicted relevance between drug r_i and disease d_j in the t-th iteration, F^0_r = W_rd, and α > 0 controls the probability of a walker staying at the starting node; once the iteration count exceeds the walk-length of r_i, the random walker starting from r_i does not jump any more. We want to remark that, unlike traditional random walks and bi-random walks that adopt the same walk-length for all the nodes, the walk-length of a node in Eq. (10) is adaptively set based on its topological relationship with other nodes and differs from the walk-lengths of other nodes.

Fig. 1 A heterogeneous network consists of a drug similarity network S_r ∈ R^{n_r × n_r} with n_r drugs, a disease similarity network S_d ∈ R^{n_d × n_d} with n_d diseases, and a drug-disease association network W_rd ∈ R^{n_r × n_d} between the n_r drugs and n_d diseases. Each circle represents a drug, each hexagon represents a disease. In the drug (disease) similarity network, the solid edges describe the similarities of drug (disease) pairs. In the drug-disease association network, the solid edges indicate the known drug-disease associations, and the dashed edges indicate the potential associations between drugs and diseases, which are the new indications of drugs.
Similarly, a random walker can also start from a disease node and then traverse to drug nodes based on the known drug-disease relationships and the drug similarity network. In this way, we obtain another probability between the disease and drug. To simulate this process, we perform random walk with restart from the disease node d_j (Eq. (11)), where F^t_d(r_i, d_j) is the predicted relevance between drug r_i and disease d_j in the t-th iteration, and the same normalization procedure is applied to S_r. After iteratively applying Eqs. (10-11) with the individual walk-lengths, we obtain F_r and F_d, which separately reflect the association confidences between the n_r drugs and n_d diseases from the perspective of the disease similarity network and from that of the drug similarity network, along with the known drug-disease associations. To this end, we integrate F_r and F_d into the final association score matrix F. Obviously, the larger the value of F(r_i, d_j), the larger the probability that drug r_i is associated with disease d_j. In this way, we can finally identify new indications for existing drugs. The whole procedure of DR-IBRW is described in Algorithm 1.
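The following is a hedged reconstruction of the individual bi-random walk, patterned on an MBiRW-style update with a restart term rather than the paper's exact equations, which may differ. `S_r_n` and `S_d_n` are assumed to be the normalised drug and disease similarity matrices, `L_r`/`L_d` the per-node walk-lengths, and `alpha` the restart parameter.

```python
import numpy as np

def individual_birw(W_rd, S_r_n, S_d_n, L_r, L_d, alpha=0.1):
    """Bi-random walk with restart where each node stops after its own walk-length."""
    F_r = W_rd.astype(float).copy()
    for t in range(1, int(L_r.max()) + 1):
        prev = F_r.copy()
        for i in range(W_rd.shape[0]):
            if t <= L_r[i]:                       # drug-specific walk-length
                F_r[i, :] = alpha * prev[i, :] @ S_d_n + (1 - alpha) * W_rd[i, :]
    F_d = W_rd.astype(float).copy()
    for t in range(1, int(L_d.max()) + 1):
        prev = F_d.copy()
        for j in range(W_rd.shape[1]):
            if t <= L_d[j]:                       # disease-specific walk-length
                F_d[:, j] = alpha * S_r_n @ prev[:, j] + (1 - alpha) * W_rd[:, j]
    return (F_r + F_d) / 2.0                      # combined association scores
```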
Performance comparison with other methods
DR-IBRW is compared with five related and recent methods (MBiRW [12], BLM [7], JI (Jaccard Index) [16], HGBI [9] and NBI [8]) on the processed dataset. MBiRW, BLM, HGBI and NBI were introduced in the Introduction; the last four methods were originally developed for predicting drug-target interactions and can be directly adopted to predict drug-disease associations. Parameters of the comparing methods are set (or optimized) as the authors suggested (or provided) in their respective papers or codes. As for DR-IBRW, the restart probability α of the random walk is set to 0.1. To reach a comprehensive evaluation, we use six widely used metrics, namely AUROC, AUPR, Macro-F1, Micro-F1, Precision, and Recall. These metrics are commonly used in related studies [7-9, 12, 16]. The formal definitions of these metrics are omitted here, but interested readers can find them in these references and the references therein. All methods follow the ten-fold cross-validation experimental protocol; the average results and standard deviations are reported in Table 1. In addition, we also plot the receiver operating characteristic (ROC) curve and the precision-recall (PR) curve, together with the area under the respective curve, in Fig. 2.

Algorithm 1: DR-IBRW
Input: Drug set R, disease set D, drug-disease association matrix W_rd and parameter α
Output: predicted drug-disease association matrix F
1 Calculate the drug-drug (disease-disease) similarity matrix S_r (S_d) based on Eqs. (1-6);
2 Quantify the individual walk-lengths L_r and L_d for drugs and diseases using Eqs. (7-9);
3 Normalize S_r and S_d column-wise; initialize F^0_r = F^0_d = W_rd;
4 for t = 1 to max L_r do
5   for i = 1 to n_r do
6     Update F^t_r(r_i, :) using Eq. (10);
7   end
8 end
9 for t = 1 to max L_d do
10   for j = 1 to n_d do
11     Update F^t_d(:, d_j) using Eq. (11);
12   end
13 end
14 Integrate F_r and F_d to obtain F;
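For completeness, the two headline metrics can be computed with scikit-learn as in the sketch below; this is our own example, with `y_true` and `y_score` assumed to be flattened held-out labels and predicted scores.

```python
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate(y_true, y_score):
    # AUROC: area under the ROC curve; AUPR: area under the precision-recall curve.
    return {
        "AUROC": roc_auc_score(y_true, y_score),
        "AUPR": average_precision_score(y_true, y_score),
    }
```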
We can easily find that DR-IBRW achieves better performance than the comparing methods. Although both DR-IBRW and MBiRW utilize the drug similarity network, the disease similarity network and the drug-disease association network to construct a heterogeneous network, and then apply bi-random walks with restart to account for the structural difference of this network, DR-IBRW still performs significantly better than MBiRW. That is because DR-IBRW takes into account the different contributions of different nodes and applies individual walk-lengths to them, whereas MBiRW treats all nodes equally and applies the same walk-length. In addition, DR-IBRW uses the Gaussian interaction profile kernel similarity to strengthen the effect of known drug-disease associations.
HGBI also applies random walks with restart on the heterogeneous network, but it does not take into account the structural difference between the drug similarity network and the disease similarity network. BLM tries to build a separate classifier for each drug and each disease, but it still suffers from biased training data, since there are more negative samples than positive samples (known associations); in fact, a number of the negative samples should be positive ones. For this reason, BLM has a high Precision and Recall but a low AUPR value. JI takes into account the influence of a node in the bipartite network and uses common neighbours to predict drug-disease associations. NBI only utilizes known drug-disease associations to run a two-step diffusion model on the bipartite graph, and it cannot predict new associations for a drug without known associations. For these reasons, both JI and NBI are outperformed by DR-IBRW.
Individual walk-length analysis
To study the contribution of our proposed individual walk-lengths, we also test the performance of DR-IBRW with fixed walk-lengths for all the nodes, varying the walk-length in the disease network and in the drug network from 0 to 10, respectively.
Drug and disease similarity analysis
As introduced in Section 5, we measure the composite inner similarity between diseases and drugs in four steps.
To investigate the impact of these four steps and the contribution of the Gaussian interaction profile kernel similarity, we introduce three variants (DR-IBRW123, DR-IBRW124, DR-IBRW134) of DR-IBRW. Particularly, DR-IBRW123 only uses the first three steps (as done by Luo et al. [12]), i.e., it excludes the Gaussian interaction profile kernel similarity when measuring the inner similarity between diseases and between drugs. Similarly, DR-IBRW134 excludes the second step, i.e., it does not shrink low similarities and enlarge high similarities. DR-IBRW124 follows the same naming rule. The AUROC and AUPR values of DR-IBRW and its variants under ten-fold cross-validation are shown in Fig. 4. Obviously, the AUROC and AUPR values of DR-IBRW123 are lower than those of the other methods, which shows the contribution of the Gaussian interaction profile kernel similarity for drug repositioning. Another interesting observation is that DR-IBRW134 has a higher AUPR value than the other variants and DR-IBRW. The cause is that AUPR and AUROC measure performance from different perspectives and under varying thresholds; the second step may wrongly shrink low similarities and enlarge high similarities, and thus compromise the performance.
Experiments on another two datasets
We collected another two datasets to further study the performance of DR-IBRW. The first dataset (named 'Gottlieb's Dataset') was obtained from [11]. It contains 1933 known drug-disease associations involving 593 drugs registered in DrugBank and 313 diseases listed in the Online Mendelian Inheritance in Man (OMIM) database. The other dataset ('Luo's Dataset') was obtained from [12]; it includes 663 drugs registered in DrugBank, 409 diseases listed in the OMIM database and 2352 known drug-disease associations. Table 2 reports the results of ten-fold cross-validation of DR-IBRW and the comparing methods on these two datasets. The experimental setups are kept the same as in the previous experiments. From this table, we can also find that DR-IBRW again obtains much better performance than the comparing methods across the different evaluation metrics.
Case study
To further demonstrate that the drug-disease associations predicted by DR-IBRW can be confirmed by biological experiments, we apply DR-IBRW to prioritize potential drug-disease pairs. Here, we use all the collected drug-disease associations as training samples, and then select the top 10 drug-disease pairs with the largest association probabilities as the predicted drug-disease associations.
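A minimal sketch of this ranking step is given below, assuming the score matrix F and the known-association matrix W_rd from the previous sections; known pairs are masked out so that only novel candidates are ranked.

```python
import numpy as np

def top_k_novel_pairs(F: np.ndarray, W_rd: np.ndarray, k: int = 10):
    """Return (drug_index, disease_index) pairs with the largest predicted scores."""
    scores = np.where(W_rd == 0, F, -np.inf)       # mask out known associations
    flat = np.argsort(scores, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(idx, F.shape)) for idx in flat]
```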
After that, we manually check these associations by referring to the associations stored in the Comparative Toxicogenomics Database (CTD) [31]. Particularly, we use the chemical-disease associations labeled as therapeutic downloaded from CTD; the therapeutic label indicates a chemical that has a known or potential therapeutic role in a disease. For the predicted associations that cannot be found in CTD, we further manually check them on PubMed and list the supporting PubMed IDs. We highlight in boldface the drug-disease associations supported by recent papers in PubMed but not included in CTD. The currently supported and unsupported associations are listed in Table 3. From Table 3, 6 out of the top 10 predicted associations are supported by associations in CTD, and another two drug-disease pairs are supported by recent papers in PubMed but not included in CTD. For instance, Labetalol is an effective agent in essential hypertension, as documented in open and controlled studies [32]. For another instance, Greminger et al. confirmed the high efficacy of captopril in the treatment of severe hypertension refractory to conventional drugs [33]. Meanwhile, ranolazine therapy was safe and well tolerated in a pilot study involving pulmonary arterial hypertension [34]. Although we could not find direct evidence for the association of flurandrenolide and scalp dermatoses, topical flurandrenolide is used to treat the itching, redness, dryness, crusting, scaling, inflammation, and discomfort of various skin conditions [35].
These predicted results confirm the capability of DR-IBRW in identifying novel drug-disease associations with high confidence. We want to remark that the two unsupported associations should not be viewed as incorrect associations; as more experimental evidence becomes available, they may be further supported.
We also report the top 10 repositioned examples made by the other comparing methods, and then manually check these examples by referring to the associations stored in CTD. We further check the associations that cannot be found in CTD on PubMed and list the supporting PubMed IDs. We highlight in boldface the drug-disease associations supported by recent papers in PubMed but not included in CTD. Tables 4, 5, 6, 7 and 8 list the currently supported and unsupported associations for MBiRW, BLM, JI, HGBI and NBI, respectively.
From Table 4, 5 out of the top 10 predicted associations are supported by associations in CTD, and another two drug-disease pairs are supported by recent papers in PubMed but not included in CTD. From Table 5, we can clearly see that 1 out of the top 10 predicted associations is supported by CTD and the other six associations are supported by recent papers in PubMed. From Table 6, JI finds a total of 6 drug-disease pairs with evidence among the top 10 predicted associations. From Table 7, 5 out of the top 10 predicted associations are supported by associations in CTD, and another two drug-disease pairs are supported by recent papers in PubMed but not included in CTD. From Table 8, NBI finds 6 associations with evidence. In summary, DR-IBRW makes more confident drug-disease repositioning predictions than these comparing methods.
Quantified individual walk length is reasonable
The drug-disease association prediction task is frequently modeled as a link prediction problem in a heterogeneous graph [36-38]. Link prediction relies on calculating the similarity between nodes. The number of paths between nodes and the walk lengths are regarded as effective similarity metrics in social and biological networks [36,39,40]. The similarities between drugs and diseases can be measured based on the number of walks that connect drug nodes and disease nodes in the network. Integrating the number of walks and their lengths can more comprehensively quantify the potential association probability of a drug-disease pair. In addition, the contribution of different nodes in the heterogeneous network is different; in other words, the information carried by each node in the heterogeneous network is imbalanced. Therefore, adopting a fixed walk-length for all nodes is an issue in link prediction.
To explain why the choice of quantified individual walk-lengths is reasonable, we calculate the shortest path for each drug and disease node and measure the difference between the shortest path and the quantified individual walk-length. We use the matrix SP ∈ R^{(n_r+n_d) × (n_r+n_d)}, where SP(r_i, d_j) represents the shortest path from the i-th drug to the j-th disease. To calculate SP, we first construct the block adjacency matrix W = [W_rr, W_rd; W_dr, W_dd], where W_rr ∈ R^{n_r × n_r} contains the shortest paths between each two drug nodes, W_dd ∈ R^{n_d × n_d} contains the shortest paths between each two disease nodes, W_rd is the drug-disease association matrix and W_dr is the transpose of W_rd. Then, we adopt the Dijkstra algorithm to compute the shortest path between each two nodes in W. Let P_r = (rp_1, rp_2, ..., rp_{n_r}), where rp_i is the longest of the shortest paths between the i-th drug and all the diseases, and P_d = (dp_1, dp_2, ..., dp_{n_d}), where dp_j is the longest of the shortest paths between the j-th disease and all the drugs. In other words, rp_i is the maximum shortest path for drug i, which includes nearly all the path information with diseases, and dp_j is the maximum shortest path for disease j, which approximately represents the paths between disease j and all the drugs. L_r and L_d store the quantified individual walk-lengths of the n_r drugs and n_d diseases. After that, we calculate the margin between P_r and L_r for drugs, and that between P_d and L_d for diseases. The statistical results are shown in Fig. 5. We find that for nearly 60% of the nodes the difference is no larger than one, which indicates that the quantified individual walk-lengths of most nodes are in line with the shortest paths between the respective nodes. However, the maximum shortest path can only partially represent the path information from a drug node to a disease node. L_r places more emphasis on shorter paths between diseases and drugs than the maximum shortest path does, and it generally has smaller values than P_r. It is recognized that the shorter the distance between two nodes, the larger the similarity between them. For these reasons, our random walk with individual walk-lengths achieves more prominent performance than a random walk with a fixed walk-length (as shown in Fig. 3). We also perform a correlation analysis on the drug similarity matrix S_r and the drug shortest-path matrix W_rr. We firstly partition S_r into ten subranges ((0, 0.1], (0.1, 0.2], etc.) and then partition W_rr into ten subranges so that all the drug pairs in each subrange of S_r fall into the corresponding subrange of W_rr. Next, we calculate the average shortest path of each subrange of W_rr, and compute the correlation between the average shortest paths of W_rr and the drug similarities of S_r in each subrange. Similarly, we conduct the correlation analysis on the disease similarity matrix S_d and the disease shortest-path matrix W_dd in the same way and report the results in Fig. 6. We can clearly observe that the average shortest paths between drug pairs or disease pairs decrease as their similarities increase. This observation also differentiates the contribution of different walk-lengths, based on the assumption that nodes with shorter walk lengths contribute more to the similarity between two nodes.
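The shortest-path computation can be sketched with SciPy as below. For simplicity this illustration connects nodes only through known drug-disease links (i.e., it treats the drug-drug and disease-disease blocks as empty), which is an assumption rather than the authors' exact construction.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def max_shortest_paths(W_rd: np.ndarray):
    """Return P_r and P_d: each node's longest shortest path to the other node type."""
    n_r, n_d = W_rd.shape
    W = np.zeros((n_r + n_d, n_r + n_d))
    W[:n_r, n_r:] = W_rd                 # drug -> disease edges
    W[n_r:, :n_r] = W_rd.T               # disease -> drug edges
    SP = dijkstra(csr_matrix(W), directed=False, unweighted=True)
    drug_block = SP[:n_r, n_r:]          # drug-to-disease shortest paths
    drug_block = np.where(np.isfinite(drug_block), drug_block, np.nan)
    disease_block = SP[n_r:, :n_r]       # disease-to-drug shortest paths
    disease_block = np.where(np.isfinite(disease_block), disease_block, np.nan)
    P_r = np.nanmax(drug_block, axis=1)  # rp_i for each drug
    P_d = np.nanmax(disease_block, axis=1)  # dp_j for each disease
    return P_r, P_d
```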
Conclusion
In this paper, we proposed a computational drug repositioning approach that encodes drug chemical structure information, disease symptom information and known drug-disease association information into a heterogeneous network. Our approach accounts for the structural difference of the subnetworks of the heterogeneous network by bi-random walk, and for the contribution differences of different nodes by assigning quantified individual walk-lengths to them. Experimental studies demonstrate that our approach performs better than other related competitive methods and that the individual walk-lengths contribute to an improved performance. We want to remark that our proposed approach can be easily generalized to predict links between nodes of a heterogeneous network.
Abbreviations AUPR: area under the precision-recall curve; AUROC: area under the receiver operating characteristic curve; CTD: Comparative Toxicogenomics Database; DR-IBRW: drug repositioning approach based on individual bi-random walks; OMIM: Online Mendelian Inheritance in Man; PR: precision-recall; ROC: receiver operating characteristic
"Computer Science"
] |
An Unusual Case of Severe Debilitating Arthritis in a Patient with End-Stage Renal Disease
Severe arthropathy can occur in patients with end-stage renal disease (ESRD) receiving hemodialysis. While the general work-up for arthritis includes markers of inflammation, serologies and imaging of the affected joints, in patients with ESRD it is important to consider arthritis caused by deposition of uremic toxins, such as beta-2 microglobulin causing amyloidosis, or by secondary hyperparathyroidism. We present a unique case of a patient with ESRD who developed a severe, progressive polyarthritis shortly after starting hemodialysis. This prompted a work-up that confirmed the diagnosis of amyloidosis. Arthritis in ESRD requires a careful work-up to establish the underlying etiology that will direct the most effective treatment.
Introduction
Amyloidosis is a rare condition caused by the deposition of fibrils, causing numerous systemic manifestations including inflammatory arthritis and organ damage to the kidney, heart, nervous system and gastrointestinal tract. The severity depends on the organs affected and how much amyloid has accumulated. Actual deposition of amyloid fibrils in the synovium of joints is rarely reported [1]. In this case report, we describe a 61-year-old man with end-stage renal failure presenting with severe arthralgias and synovitis, manifested as localized soft tissue swelling causing marked reduction in the use of his hands, shoulders and knees. The inflammatory work-up was negative; hence, a synovial biopsy was done, which confirmed amyloidosis. Given that he was receiving hemodialysis and met the criteria for the diagnosis of dialysis-related amyloidosis, an attempt was made to treat with intensive dialysis and anti-inflammatory agents, with poor response. The amyloid biopsy was sent for mass spectrometry, which confirmed the diagnosis of AL amyloidosis. Although AL amyloidosis is a rare condition, it should be considered while evaluating atypical symptoms in patients presenting with rheumatic complaints. A high index of suspicion is necessary for proper diagnosis, as delay in diagnosis will yield a worse treatment outcome.
Case Presentation
RJ is a 61-year-old male with a history of alcohol abuse and smoking, and ESRD secondary to hypertension, who started dialysis in March 2018 using a right internal jugular perm-cath. Symptoms of carpal tunnel syndrome (CTS), manifested by sharp pain in both hands relieved by running them under warm water or by a dependent position, began in December 2018. The hand pain became worse after he had a left upper extremity arteriovenous graft placed, with Doppler evaluation showing no evidence of vascular steal. In 2019, he developed olecranon bursitis and worsening CTS requiring bilateral carpal tunnel release.
He established care with rheumatology in 2020 for "arthralgia." He complained of "pain all over," mostly in the ankles, bilateral knees, wrists and shoulders. The joint symptoms were associated with swelling but reportedly improved with rest, without any predominance of symptoms in the morning hours. The patient denied any extra-articular symptoms concerning for a systemic rheumatic disease, including rashes, sicca, nasal or oral ulcers, upper respiratory symptoms, pleurisy, etc. The initial differential included mechanical joint symptoms from past trauma associated with his work in construction, during which the patient had hurt his ankles and hands.
On physical exam, RJ is an elderly-appearing man who looks much older than his stated age. At his initial presentation, he had no malar rash, skin ulcerations, nasal or oral ulcerations, pharyngeal exudates or lymphadenopathy. The skin examination raised concern for slight skin thickening with darkening; otherwise, his exam was notable for synovial thickening at the bilateral wrists with tenderness to palpation (Figure 1a) and joint effusion identified on ultrasound (Figure 1b). Both knees were tender to palpation, with flexion limited by pain and effusion. The other upper and lower extremity joints were without increased warmth, swelling or signs of effusion. Given the aforementioned symptoms of polyarticular joint swelling and the physical examination and ultrasound findings showing joint inflammation, the differential diagnoses included various rheumatic diseases such as rheumatoid arthritis and seronegative spondyloarthropathy, among others. Scleroderma was also considered given the findings of skin thickening. There was a marginal erosion at the first metacarpal head and a potential small erosion along the medial base of the fourth and fifth proximal phalanges, raising concern for erosive arthritis.
Results
Serological evaluation included rheumatoid factor, CCP, ANA, scleroderma-70 antibody, anti-centromere antibody, and SSA and SSB antibodies, which were all negative. Infectious work-up was negative for hepatitis, GC/Chlamydia, HIV, TB, CMV and EBV. SPEP/UPEP showed monoclonal free kappa present in the gamma globulin region at a concentration of 0.15 mg/dL and immunoglobulin A at 0.47 mg/dL, with immunoglobulin G and M within range. ESR was 61 mm/hr and CRP was 39 mg/dL. CBC showed a normocytic anemia; creatinine was elevated and estimated GFR was low, as expected with ESRD. PTH was drawn monthly and ranged from 111-320 pg/mL, phosphorus was 5.2 mg/dL and corrected calcium was 9.9 mg/dL (Table 1).
X-rays of the hand, wrist and knee were ordered, which suggested an inflammatory arthropathy. X-ray of the wrists showed multiple bony cysts, metacarpophalangeal erosions and periosteal reactions along the metatarsal diaphysis bilaterally that developed over 1.5 years (Figure 2a). MRI showed severe tenosynovitis bilaterally in the wrists (Figure 2b). Aspiration of the synovial fluid from the right and left wrists showed 178/uL and 1013/uL WBC, respectively, with macrophage and neutrophil predominance; no organisms or crystals were seen. Knee arthrocentesis also showed 1000/uL WBC with macrophage and neutrophil predominance.
In June 2021, the patient had a biopsy of a left dorsal extensor mass from the wrist that was positive for amyloid deposition in a background of synovial fluid, and Congo red staining was positive. The B2M level was elevated at 14.6 mg/dL. The specimen was sent for amyloid typing by liquid chromatography tandem mass spectrometry (LC MS/MS), which detected a peptide profile consistent with AL (kappa)-type amyloid. A bone marrow biopsy was done, which showed a plasma cell neoplasm with 12-15% kappa light chain-restricted, CD19- CD56+ plasma cells involving a normocellular marrow (10-20% cellular). Plasma IgG, IgA and IgM were low, and free kappa light chains and the kappa/lambda ratio were elevated (Table 1).
Treatment Plan
After evaluation by Rheumatology, the prevailing diagnosis was dialysis-related amyloidosis (DRA), given the characteristic triad of scapulohumeral arthritis, CTS and flexor tenosynovitis of the hand, together with rapidly enlarging bone cysts. The patient had an evaluation by Hematology/Oncology, and initially there was low suspicion of primary amyloidosis given that the patient's preliminary cancer screening work-up had been negative. There was minimal evidence of other forms of amyloidosis, specifically AL amyloidosis, with the exception of a slight increase in a serum kappa light chain (0.15 g/dL), which can also be seen in the setting of DRA [2]. The patient was started on a systemic corticosteroid course along with intra-articular steroid injections for certain medium and large joints. However, the patient experienced only temporary relief and had recurrence of joint pain and swelling soon after the corticosteroid taper. Methotrexate was avoided given the history of ESRD, and the patient was started on a course of leflunomide as a steroid-sparing agent along with doxycycline 100 mg twice a day, based on evidence that it can alleviate joint pain from DRA [3]. The patient received hemodialysis three times a week with excellent urea clearance and a Kt/V of over 1. On this regimen, together with prednisone 5 mg, symptoms improved slightly from severe (rated 10/10) to moderate (rated 6/10). Due to ongoing swelling and pain, most noted in the left knee, with an exam consistent with tenosynovitis and synovitis, treatment with adalimumab was started [4]. Given the poor response to aggressive anti-inflammatory agents, the biopsy specimen was sent for LC MS/MS, with findings of AL kappa-type amyloid deposits. A bone marrow biopsy was done, which confirmed multiple myeloma, and the patient was started on chemotherapy with daratumumab, cyclophosphamide, bortezomib and dexamethasone (Dar-CyBorDex) [5].
Discussion
Amyloidosis can have multi-organ involvement with varied presentations in different patients, often making efficient diagnosis difficult. Currently, 36 different insoluble proteins are recognized to have amyloidogenic potential, and it is important to identify the protein, as it directs treatment options [6]. In the past, little effective treatment could be offered to amyloidosis patients; consequently, relying on characteristic clinical findings of the different amyloid types was often sufficient for supportive patient management [6]. However, more selective therapies have evolved rapidly, and accurate determination of the amyloid protein type has become mandatory [6]. As exemplified by the case we presented, the most commonly recognized amyloid type is systemic AL amyloidosis, due to deposition of immunoglobulin light chains produced by a bone marrow-based monotypic plasma cell population. Primary AL amyloidosis most commonly affects the heart, kidney, liver, nervous system and GI tract. This disease requires systemic chemotherapy directed at the bone marrow plasma cells, which the patient is now receiving, with the future option of stem cell transplantation pending his response.
Inflammatory polyarthritis is a rare manifestation of AL amyloidosis but can occur, mimicking primary rheumatic diseases such as rheumatoid arthritis or seronegative spondyloarthropathy [7]. A careful clinical exam including symptom onset, joint involvement and symmetry, along with serologic, tissue and imaging studies, should be done. Radiographic signs of amyloid arthropathies are often absent or nonspecific [8]. When they exist, the findings are diverse and may include bone erosions and cysts, bone demineralization, or thickening of the soft tissues [9].
Dialysis-related amyloidosis (DRA) is a serious complication of long-term dialysis therapy and a debilitating disease characterized by the deposition of amyloid fibrils of beta-2 microglobulin (B2M) in the bone, osteoarticular structures and viscera of patients [10]. B2M is normally cleared by glomerular filtration, with reabsorption and catabolism in the proximal tubules [11]. Consequently, B2M is inversely related to the glomerular filtration rate (GFR), and residual renal function is the best determinant of B2M levels [12]. For this reason, one of the therapeutic goals in the treatment of patients with chronic kidney disease is to maintain the GFR as long as possible. In patients who develop end-stage renal disease (ESRD), microglobulins accumulate in the plasma and may result in slow but extensive tissue deposition, with resultant arthropathy and synovial thickening [13]. B2M levels can increase up to 60-fold in patients on dialysis [14]. However, it has been demonstrated that a high plasma B2M level is not a reliable predictive marker of DRA [15], and tissue biopsy of the affected areas remains the gold standard for the diagnosis [14]. Despite some guidance regarding diagnostic tools for DRA, the pathologic events that trigger DRA remain in question. Pathogenicity has been attributed to several factors, including inadequate clearance by high-flux dialyzers, intradialytic B2M production, advanced glycation end products and elevated levels of cytokines [16]. Common clinical symptoms and findings are carpal tunnel syndrome (CTS), bone cysts, scapulohumeral periarthritis, peripheral joint arthropathy, and destructive spondyloarthropathy, similar to the symptoms in the case presented [17-19]. Regarding incidence, postmortem studies have shown that 21% of patients develop amyloid deposition less than 2 years after starting dialysis, 50% at 4-7 years after dialysis initiation, and up to 100% at more than 13 years [19]. European studies have suggested that DRA can be seen in as many as 20% of patients after 2-4 years on dialysis [14]. The criteria for the diagnosis of DRA include at least two or more major findings among: multiple joint involvement, carpal tunnel syndrome, trigger finger, spinal lesions and bone cysts [17]. While DRA was considered in the differential for amyloidosis and the patient met the criteria, the diagnosis was excluded as the LC MS/MS did not detect the B2M peptide [6].
In summary, amyloidosis is a rare, complex disease which requires a high index of suspicion for diagnosis. Technical advances have produced mass spectrometry-based proteomics to identify the amyloid fibrils implicated in the disease process. As amyloid disease continues to take a toll on quality of life, we need to utilize these diagnostic methods to make a timely diagnosis and prevent further disease. Finally, given the rarity of this disease, collaborative efforts across institutions will be necessary to best identify the underlying biologic pathways.
"Medicine",
"Biology"
] |
RadCalNet: A Radiometric Calibration Network for Earth Observing Imagers Operating in the Visible to Shortwave Infrared Spectral Range
Vicarious calibration approaches using in situ measurements saw first use in the early 1980s and have since improved to keep pace with the evolution of the radiometric requirements of the sensors being calibrated. The advantage of in situ measurements for vicarious calibration is that they can be carried out with traceable and quantifiable accuracy, making them ideal for interconsistency studies of on-orbit sensors. The recent development of automated sites to collect the in situ data has led to an increase in the number of datasets available for sensor calibration. The current work describes the Radiometric Calibration Network (RadCalNet), which is an effort to provide automated surface and atmosphere in situ data as part of a multi-site network for the purpose of radiometric calibration of optical imagers in the visible to shortwave infrared spectral range. The key goals of RadCalNet are to standardize protocols for collecting data, to process the data to top-of-atmosphere reflectance, and to provide uncertainty budgets for the automated sites traceable to the international system of units. RadCalNet is the result of efforts by the RadCalNet Working Group under the umbrella of the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV) and the Infrared Visible Optical Sensors (IVOS) subgroup. Four instrumented radiometric calibration sites located in the USA, France, China, and Namibia, which were used as the initial sites for prototyping, are presented here. An example is shown demonstrating how top-of-atmosphere data from RadCalNet can be used to determine the interconsistency between two sensors.
Introduction
Earth observation (EO) is used for a wide range of societal and commercial applications. Increasingly, users are combining data from different sensors, either to provide long-term records of environmental change that require time series longer than the lifetime of a single sensor, or to provide operational services using data from different sensors (on different satellites) to maximize coverage. Both space agencies and commercial operators are launching a wider range of sensors with different spectral bands, and increasingly small, relatively cheap sensors are being launched with no onboard calibration (e.g., onboard CubeSats). Post-launch radiometric performance assessment of Earth observation optical sensors is essential to understand changes occurring after preflight laboratory characterizations and calibrations. EO optical sensors can benefit from the use of vicarious calibration approaches whether or not they rely on onboard calibration devices. The work here concentrates on the radiometric calibration of Earth imagers operating in the visible to shortwave infrared spectral range (400 nm to 1000 nm).
Vicarious techniques concentrating on radiometric calibration in the visible to shortwave infrared spectral range have been developed to provide calibration for a single sensor, including absolute calibration [1], band-to-band calibration [2], and detector-to-detector relative calibration for a single band [3,4]. Multiple-sensor approaches have been developed to ensure consistency between multiple sensors of different types [5,6].
The work presented here traces its heritage to the long history of vicarious calibration methods linking in situ radiometric measurements to a spaceborne sensor, either directly or via a reference sensor. One of the first applications of such techniques dates to [7], in which the radiance above a ground target was measured from a high-altitude aircraft to verify the degradation of the response of the Coastal Zone Color Scanner's shorter-wavelength bands [8]. Sensors onboard an Earth Resources (ER-2) aircraft provided cross-calibration of the Advanced Very High Resolution Radiometer (AVHRR) [9]. The basic concept of such a comparison involves two sensors viewing the same instrumented site at the same time from the same viewing geometry with identical spectral bands. Teillet et al. developed a variation of the approach to account for the unavoidable small differences in view and solar geometry [10]. The method relies on spatially and spectrally characterizing the surface reflectance from aircraft data and using the reflectance with coincident atmospheric data to predict the at-sensor radiance. Such an approach was used to compare data from a wide array of sensors viewing the Railroad Valley instrumented site on a single day [11] and has been used to cross-compare data from sensors viewing the site on different dates [12].
The reflectance-based approach was developed in the 1980s for the calibration of the Landsat 4 and 5 Thematic Mapper [13,14]. The reflectance-based method relies on measurements of surface reflectance and atmospheric conditions at a selected instrumented site nearly coincident in time with the overpass of the imager to be calibrated. The use of a radiative transfer code permits the calculation of the top-of-atmosphere (TOA) radiance that is used to evaluate the radiometric calibration of the sensor being evaluated. The method has been successfully used for many sensors, relying on several instrumented sites with data collected by groups in multiple countries [15-21]. One drawback of the reflectance-based approach is that poor weather, budget limitations, and personnel constraints can limit the number of successful data collections. It is also not feasible to apply the approach to a large number of sensors that do not share overpass times and dates. Automated approaches have been developed to address these issues; they rely on slightly different processing approaches but essentially the same types of ground-based data, and provide results similar in absolute and relative uncertainty to those of the traditional reflectance-based approach [22].
Methods are now needed that permit comparisons of a wide range of sensors that do not concentrate on one-to-one comparisons between sensors. Automated collections over multiple sites are well-suited to allow intercomparisons of many sensors. Ensuring that the in situ data from those sites are traceable to the international system of units (SI-traceable) with known uncertainties allows the combination of results from the multiple sites in a straightforward fashion.
This paper describes the overall Radiometric Calibration Network (RadCalNet) concept, including descriptions of the initial four sites that are part of the network. The processing approach for RadCalNet is presented along with sample uncertainties, and more importantly, the approaches that are being implemented within RadCalNet to ensure that network members are following appropriate methods for determining uncertainties. The last section of the paper shows an example interconsistency study between two sensors and two independent instrumented sites to illustrate how RadCalNet users will benefit from the data available from the network.
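As a simple illustration of how a user might exploit such data, a sensor band can be compared with a RadCalNet top-of-atmosphere spectral reflectance by weighting with the band's relative spectral response. The sketch below is our own example and assumes `wavelength`, `toa_reflectance`, and `rsr` are NumPy arrays on a common wavelength grid.

```python
import numpy as np

def band_averaged_reflectance(wavelength, toa_reflectance, rsr):
    """Band-average a TOA spectral reflectance with a relative spectral response."""
    return np.trapz(rsr * toa_reflectance, wavelength) / np.trapz(rsr, wavelength)
```

The ratio of the sensor-measured band reflectance to this band-averaged value is then one simple way to express the comparison between a sensor and the site-predicted TOA reflectance.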
RadCalNet Overview
RadCalNet, the Radiometric Calibration Network, is a network of sites that can be used to compare different satellite sensors to each other and to a common reference. RadCalNet was proposed and implemented within the Infrared Visible Optical Sensors (IVOS) Subgroup of the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV) and was made publicly available in June 2018 as a culmination of almost two decades' effort within WGCV and IVOS, beginning with the definition of the original LandNet sites [23].
RadCalNet is based on the reflectance-based approach described above, with continuous deployment of automated instrumentation that is calibrated traceably to SI and has known, peer-reviewed uncertainties. Results from previous automated sites [24,25] have shown that they can achieve uncertainties similar to those of traditional collections [26]. The RadCalNet concept has been developed to address temporal sampling issues, and to provide global consistency, by networking the measurements from several automated sites and making these data widely available to teams responsible for the in-flight monitoring of sensors as well as to users of those sensors. Data from the different automated instrumented sites are quality controlled and processed to TOA reflectances (see Section 2.3) that are suitable for vicarious calibration or radiometric monitoring of a sensor viewing the sites. There are admittedly limitations as to how these data can be used for the in-flight assessment of spaceborne sensors, as they are provided only at a 30-min interval and for a nadir viewing configuration. Although this limits the possibilities of matching the data with spaceborne sensors with off-nadir viewing angles, it should allow for intercomparison with sensors viewing any of the sites at low viewing angles. While additional data might be gathered by site owners at each site, allowing for off-nadir and higher temporal or spectral resolution TOA reflectance simulations, these are not available through RadCalNet (site owners should be contacted directly). Moreover, the RadCalNet data are representative of areas on the ground that can only be matched by a sensor having sufficiently high spatial resolution. In addition, the RadCalNet TOA reflectances are derived through the reflectance-based methodology (e.g., [26,27]), in which the typical uncertainty in predicting TOA reflectance is estimated to be around 5%. While the RadCalNet TOA reflectances and their associated uncertainties should be sufficient to assess the in-flight radiometric performance of spaceborne sensors operating in the visible to shortwave infrared designed with an absolute radiometric accuracy requirement of the order of 5% (as typically required for land-monitoring space missions), they might not be able to address the needs of sensors with higher absolute radiometric calibration requirements of around 2% (such as ocean color space missions). The data are made available through a web portal to registered users [28]. This paper describes this networked approach along with the methods applied to ensure consistency of the results from the different sites and the SI-traceability of the data. There are six parts of RadCalNet: 1) Instrumented sites and their operators; 2) RadCalNet Quality Control and Processing; 3) RadCalNet Data Distribution; 4) RadCalNet Working Group; 5) RadCalNet Admission Review Panel; and 6) RadCalNet users. Figure 1 illustrates the relationships between the different parts. The instrumented sites and their operators are the basis of the data that are key to RadCalNet. Participation in RadCalNet is on a voluntary basis, and as part of membership in RadCalNet, the instrumented sites agree to provide their data and uncertainties in a specific format and to undergo peer-review evaluation of the uncertainties associated with their data. The initial instrumented site membership includes the four sites described below.
The data from the sites are provided to the RadCalNet Processor: a central facility that ensures the data from the individual sites are processed with the same approach to provide TOA reflectances and associated uncertainties based on the inputs from the instrumented site. The results from the RadCalNet processing are provided to the RadCalNet Data Distribution that provides the interface to the user community to distribute the results. The RadCalNet Data Distribution also provides quality assurance (QA) on the products and provides metadata and documentation of the RadCalNet approach.
It is expected that new sites will seek inclusion in RadCalNet. The RadCalNet Admission Review Panel will evaluate requests from an instrumented site to become part of RadCalNet based on a review of the site's uncertainty budget, provision of operational data, comparison results, and assurances that the site will operate for at least five years. The panel consists of IVOS and WGCV members. The panel makes recommendations to the full WGCV membership, and the WGCV determines whether to admit an instrumented site to RadCalNet. Figure 1 shows the panel overlapping with the RadCalNet Working Group to indicate that members of the panel can also be members of the working group.
RadCalNet Working Group
The large ellipse in Figure 1 represents the RadCalNet Working Group. The working group, which was established in 2013, includes members of IVOS and WGCV, the site operators, and RadCalNet Processor and Data Distribution developers. An initial meeting took place in January 2014 with representatives of three automated sites, a national metrology institute, and interested space agencies. The objective of the meeting was to define the architecture shown in Figure 1 and the processing methodology described below (and shown in Figure 2). The meeting also produced a roadmap to demonstrating a RadCalNet operational concept and recommendations to CEOS/WGCV/IVOS and CEOS/WGCV for evolution of the initial RadCalNet into an operational network.
The role of the RadCalNet Working Group is one of evaluation and support through two-way interaction with the three components of RadCalNet: the instrumented sites, the processor, and data distribution. The working group provides guidance and assistance in developing sites and encourages and facilitates communication between sites. The site operators provide the working group with descriptions of their site and uncertainty assessments for their specific instrumentation and processing approaches, which the working group reviews and makes recommendations on. The working group plays a similar two-way communication role between the processor and data distribution. Communication of lessons learned from the processor and data distribution teams to the site operators is also facilitated by the working group.
The current RadCalNet Working Group (WG) consists of the authors of this paper. Four radiometric calibration instrumented sites (one each in China, France, Namibia, and the USA) were selected as the starting point for RadCalNet. The RadCalNet WG used its initial prototyping years to 1) define the architecture of RadCalNet, 2) demonstrate the operational concept using currently available satellite sensors, 3) provide recommendations to CEOS/WGCV for evolving RadCalNet towards an operational project, and 4) provide guidelines for the addition of new sites.
RadCalNet Instrumented Sites
There is no requirement from RadCalNet for the site operators to provide bottom-of-atmosphere (BOA) reflectance at a specific level of uncertainty. They should, however, demonstrate that the BOA reflectance they provide is SI-traceable and within a verified uncertainty budget. Site operators are responsible for collecting surface and atmospheric data at their sites and providing surface reflectance and atmospheric parameters to the RadCalNet Processor. The inputs from the sites must include uncertainties as determined by the site operator. The site operators best understand their own site and its instrumentation; thus, RadCalNet relies on their expertise to provide QA of the data being provided to RadCalNet. Any data provided should be of sufficient quality to warrant processing by RadCalNet. As mentioned above, there are currently four sites in operation. Overviews of the sites are provided here, along with references providing greater detail.
The longest-operating automated site of the four is the La Crau site in France. The La Crau site has been used since 1987 for the vicarious calibration of SPOT cameras [29]. Early calibration activities were conducted during field campaigns devoted to the characterization of the atmosphere and the site reflectance. In 1997, an automatic photometric station (ROSAS) was set up on the site on top of a 10 m high post [30] in the center of a 400 m × 400 m area (0.16 km²) in the "La Crau Sèche", a 60 km² flat, pebbly area in southeastern France (longitude 4.87° E, latitude 43.50° N, altitude 20 m). The area has a dry and sunny Mediterranean climate. The soil is mainly composed of pebbles and is sparsely covered by low vegetation. The surface optical properties do vary within the year but are continuously monitored. The photometer measures spectral solar extinction and sky radiance to characterize the optical properties of the atmosphere. It also views the upwelling radiance over the ground to derive the surface reflectance. The photometer samples the spectrum from 380 nm to 1650 nm in 12 narrow bands. The photometer automatically and sequentially performs measurements on every non-cloudy day. Data are transmitted by GSM (Global System for Mobile communications) to the Centre National d'Etudes Spatiales (CNES). The photometer calibration is performed in situ, using sun measurements for irradiance and cross-band calibration and Rayleigh scattering for radiance calibration at short wavelengths. This calibration is validated by measurements made by the instrument manufacturer, taking advantage of the good atmospheric conditions of a high-altitude site (Izaña, Tenerife) for irradiance calibration and by using an integrating sphere for radiance calibration. The data are processed by operational software that calibrates the photometer, estimates the atmosphere properties, and computes a modelled bidirectional reflectance distribution function of the site.
The Railroad Valley Playa site in Nevada, USA has been used for reflectance-based vicarious calibration since the mid-1990s [1]. Automated solar transmittance data collections began in 2001, and early versions of automated, downward-looking radiometers were deployed in 2002. The current Radiometric Calibration Test Site (RadCaTS), which relies on a suite of four ground-viewing radiometers (GVRs), was completed in 2011. The 1 km² area of RadCaTS is centered at 38.497° N and 115.690° W at an altitude of 1435 m. The site is in a remote area with limited access. The average surface reflectance is typical of clay-based playas, with lower values in the blue portion of the spectrum, increasing to values in excess of 0.3 in the near-infrared and shortwave infrared. The surface reflectance is generally stable under dry conditions but changes due to periodic rain and snowfall events. The surface is approximately Lambertian out to view angles approaching 30° [31]. The yearly average aerosol optical depth is typically low, on the order of 0.060 at 550 nm [32]. The GVRs are multispectral radiometers with seven bands in the visible and near-infrared (VNIR) and one shortwave infrared (SWIR) channel. Atmospheric data are collected using the default acquisition scheme of the AERONET network [33], and the GVRs acquire data every two minutes. Data are transmitted to the University of Arizona via satellite uplink on a regular basis. The data are processed using operational software that determines a site reflectance based on the average of the GVR suite, and that average is used to determine an optimal hyperspectral reflectance.
The Baotou site is the only site to rely on artificial surfaces [34]. There are four gravel targets, each 48 m × 48 m in size. The "white" area has a reflectance of approximately 0.6, while the "gray" and "black" areas have reflectances of 0.25 and 0.10, respectively. Only the gray target is considered a RadCalNet site, as its spectral characteristics best match those of the site surroundings, thus limiting the impact of the adjacency effect on TOA simulations. The site is located approximately 50 km from Baotou City at latitude 40.84° N and longitude 109.46° E, with an altitude of 1270 m. The surface of the surrounding area is mainly desert, bare land, grassland, and farmland. Baotou features a cold semi-arid climate, with average precipitation of 300 mm. The surface bidirectional reflectance factor (BRF) of the site is obtained with a commercially available field spectrometer. The spectrometer was deployed in 2016 and measures the ground-reflected radiance every two minutes. The spectral range of the spectrometer is 380 nm to 1080 nm, with a sampling interval of 2 nm. Atmospheric parameters are derived from AERONET solar photometer measurements [33]. Automated measurements are transmitted to the Academy of Opto-Electronics, Chinese Academy of Sciences via the 4G mobile network.
The fourth site is in Gobabeb, Namibia. This site was set up during the RadCalNet prototype phase and has been providing data since July 2017 [35,36]. It is operated jointly by the European Space Agency (ESA) (subcontracting to the National Physical Laboratory (NPL) of the United Kingdom) and CNES, has the same type of instrument and operational protocol as the La Crau ROSAS system, and its data are analyzed using the same analysis software at CNES. It is the only site of the four to have been selected through a global search, relying primarily on quantitative assessments of spectral characteristics, spatial uniformity, probability of clear skies, and other key site parameters. The Gobabeb site was used by the RadCalNet Working Group to develop the requirements and guideline documents for candidate sites to the network.
RadCalNet Data Product
RadCalNet provides SI-traceable, spectrally resolved TOA reflectance for a nadir view at 30 min intervals from 9 am to 3 pm local standard time for a given site. The TOA reflectance has a spectral sampling of 10 nm intervals and covers the 400 nm to 1000 nm spectral range (required for all sites), with data for longer wavelengths (up to 2500 nm; also 10 nm intervals) provided in those cases where the site provider is capable of a broader spectral range. The area over which the TOA reflectance is applicable is defined as appropriate for each site, but in all cases is at least 45 m × 45 m.
The input data provided by the site operators are nadir-viewing BOA reflectance along with atmospheric parameters such as surface pressure, columnar water vapor, columnar ozone, aerosol optical depth, and the Angstrom coefficient to define aerosol sizes. Simulation to TOA reflectance is performed through the central RadCalNet Processing for all sites using the method described below. TOA reflectance and BOA reflectance data are provided, along with their associated uncertainties, to users through the RadCalNet portal [28]. The data are in ASCII format.
TOA Reflectance Computation
An overview of the TOA reflectance determination is provided in this section and in more detail in the subsections below. The basic approach is illustrated in Figure 2 for two representative automated sites (Site 1 and Site 2) operating independently. They both collect their Level 0, or raw data, using instrumentation and techniques suitable for their individual site. Likewise, they independently process these data to a physical property such as bidirectional reflectance factor or site radiance, along with atmospheric parameters, the Level 1 product. All groups include some level of quality check in their processing when proceeding from Level 0 to Level 1. Sites may choose to perform their own processing to TOA reflectance from this Level 1 product, and such data may be made available separately to users and tailored to an individual satellite sensor (e.g., spectral band integration, temporal interpolation, viewing angle corrections). Such products are not part of the RadCalNet data dissemination.
Sites belonging to RadCalNet provide the RadCalNet processor with nadir-view BOA reflectance data and atmospheric parameterizations in a common format. This is the Level 2 product, as shown in Figure 2, and comprises the surface and atmospheric parameterizations in the format specified by RadCalNet and described above. The Level 2 data can be very similar to what is produced as part of the site's TOA radiance/reflectance predictions, but it is not identical. Figure 2 illustrates this by showing separate data products, the L2 output provided to RadCalNet and the TOA radiance used by the site operators for their projects.
The Level 2 data from the individual sites are delivered and archived at the RadCalNet processing facility. All data are processed through the same processing code to produce the TOA reflectance at the RadCalNet spectral sampling, resolution and range at the 30 min interval. The results are processed to include uncertainties for each individual data point. The results, along with the input data used to derive them, are then provided to the user community via the RadCalNet portal.
RadCalNet Site Field Collection Data
Each of the four current RadCalNet sites makes independent measurements using methods and techniques that are slightly different to one another, but which follow the general philosophy of the reflectance-based vicarious method [37]. The reflectance-based approach relies on surface bidirectional reflectance factor derived from measurements of upwelling radiance coupled with atmospheric parameterizations derived primarily from solar transmittance within a radiative transfer model to predict TOA radiance for sensor-sun geometry [14].
Details of the specific instrumentation, measurement approaches, and processing specific to each instrumented site are documented in the site description and uncertainty statement documents, which are reviewed by the RadCalNet Working Group, can be found on the RadCalNet portal [28], and have also been introduced in Section 2.2. All current sites deploy downward-looking radiometers to retrieve the upwelling radiance near ground level. Some sites use multispectral radiometers, others use spectrometers, and some sites use fixed, vertically viewing sensors, while others use multi-angle approaches. The upwelling radiance is converted to a nadir-viewing BOA reflectance using the ratio of the upwelling radiance to the downwelling total irradiance from both the sun and sky. The downwelling irradiance is typically derived through radiative transfer modeling using the measured atmospheric parameters, measurements of hemispheric downwelling irradiance, or integration of multi-angle sky radiance coupled with direct solar irradiance. Conversion of multispectral data to hyperspectral coverage suitable for RadCalNet is typically obtained using spectral models of the site based on hyperspectral ground-based measurements of the site reflectance.
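As a rough illustration of this conversion (not the site-specific processing actually used by any operator), the sketch below assumes a Lambertian-equivalent surface so that the nadir reflectance factor is simply π times the ratio of the measured upwelling radiance to the total downwelling irradiance; the function name and numerical values are illustrative only.

```python
import numpy as np

def boa_reflectance_factor(l_up, e_down):
    """Nadir-view bottom-of-atmosphere reflectance factor (simplified sketch).

    l_up   : upwelling radiance measured near the ground  [W m-2 sr-1 nm-1]
    e_down : total (sun + sky) downwelling irradiance      [W m-2 nm-1]

    For a Lambertian-equivalent surface the reflectance factor is
    pi * L_up / E_down; real site processing is more involved.
    """
    return np.pi * np.asarray(l_up, dtype=float) / np.asarray(e_down, dtype=float)

# Illustrative single-wavelength example:
rho = boa_reflectance_factor(l_up=0.08, e_down=1.1)  # ~0.23
```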
Atmospheric retrievals rely on measurements of directly transmitted solar irradiance to retrieve spectral aerosol optical depth, aerosol size information, and column amounts of gaseous absorbers such as ozone and water vapor. The same atmospheric measurements are also used to screen out cloud-contaminated data through continuous measurements of the atmospheric extinction. The detailed cloud-screening approach is site-dependent and left to the responsibility of the site owner, as it might require site-specific implementation. Ancillary information such as surface pressure measurements, site altitude, or numerical weather predictions provides molecular scattering optical depths. Sky radiance measurements are used at some sites to provide aerosol size information, while at others they serve as validation of the aerosol size distribution retrieved from solar transmittance. Still others use the sky radiance as a means to quality-check their assumptions regarding aerosol composition.
Site Data Provided to RadCalNet
In general, the data produced specifically for internal use by the site operators are different from those required for RadCalNet processing. It should be emphasized that RadCalNet does not specify to the site operators how they are to convert their instrumented site measurements to the input requirements for RadCalNet. Of greater importance to RadCalNet is a defensible uncertainty budget for the data that are provided.
The TOA reflectance processing for RadCalNet requires the site operators to provide a nadir-viewing surface reflectance at 10 nm intervals from 400 nm to 1000 nm, from 9 am to 3 pm local standard time at 30 min intervals. A goal of RadCalNet is to obtain data beyond 1000 nm when possible. The reflectance reported to RadCalNet is required to be valid for an area of at least 45 m × 45 m. All sites currently sample the reflectance at the requested times (and, in reality, at higher temporal frequency), and no interpolation in time is required for the typical case where all data are successfully acquired by the instrumentation. Three of the sites collect the reflectance data at multispectral wavelengths, and these data are combined with representative spectra derived from site measurements to provide the 10 nm hyperspectral data needed by RadCalNet.
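The way each site converts its multispectral measurements into the required 10 nm data is site-specific and documented by the operator; purely as a hypothetical illustration of one possible scheme, the sketch below scales a representative hyperspectral site spectrum so that it reproduces the multispectral band measurements and then resamples it onto the 10 nm RadCalNet grid.

```python
import numpy as np

def to_10nm_reflectance(band_wl, band_refl, ref_wl, ref_spectrum):
    """Hypothetical sketch only: combine multispectral band reflectances with
    a representative hyperspectral site spectrum to fill the 10 nm grid.

    band_wl, band_refl   : multispectral band centres [nm] and reflectances
    ref_wl, ref_spectrum : representative hyperspectral spectrum of the site
    """
    band_wl = np.asarray(band_wl, float)
    band_refl = np.asarray(band_refl, float)
    # Ratio of measured band reflectance to the reference spectrum at the bands
    ref_at_bands = np.interp(band_wl, ref_wl, ref_spectrum)
    correction = band_refl / ref_at_bands
    # Interpolate the correction across wavelength and apply it to the reference
    out_wl = np.arange(400, 1001, 10)
    corr_interp = np.interp(out_wl, band_wl, correction)
    return out_wl, corr_interp * np.interp(out_wl, ref_wl, ref_spectrum)
```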
RadCalNet requires the following atmospheric information: aerosol optical depth at 550 nm, Angstrom coefficient, column water vapor, and column ozone. An aerosol type corresponding to one of the standard inputs that are part of the MODTRAN radiative transfer code is also needed. The aerosol parameters are derived from the closest measurement in time at the sites, but this time difference is typically less than a few minutes at each of the sites. In addition to the atmospheric parameters and surface nadir reflectance provided for each valid time and date, the sites provide surface pressure and temperature, column water vapor, and column ozone for each data point. The site operators also provide RadCalNet with the site elevation, latitude, and longitude representing the center of the area that is valid for their provided BOA reflectance.
The site operators perform quality assessments on their data to evaluate which data are suitable for inclusion in the RadCalNet processing and archive. Any input data provided for inclusion in RadCalNet by the site operators are assumed to be valid and to have the reported input parameter uncertainties. The derivation of the uncertainties associated with each site's surface reflectance measurements depends on the site-specific instrumentation, processing methods, and site characteristics.
RadCalNet Processing Methodology
Once ingested into the RadCalNet archive, the input data are used with the MODTRAN V5.3 radiative transfer code (RTC). Processing all RadCalNet data using the same RTC provides consistency in the predicted TOA reflectance. The risk of using the same processing approach is that it can create a systematic error (or colloquially, a bias), but evaluation of the impact of the radiative transfer code on reflectance-based results has shown greater sensitivity to the uncertainty of the input parameters than to the choice of specific radiative transfer code [38], and the differences between RTCs are known.
During initial set up for data processing, an input file for all the site-specific default MODTRAN parameters is generated. It is not expected that these parameters will change value, but the file is provided so that the MODTRAN input file for any data run can be recreated. The site-specific data include a site identifier, latitude, longitude, and altitude. The processing is specified for the UTC time, day, and year, allowing calculation of the solar zenith angle for each data point.
MODTRAN has numerous options, and the description given here is not designed to allow the reader to reproduce exactly the tape5 file needed to produce TOA radiance, but rather is a summary of the configuration used for RadCalNet as guidance on how RadCalNet uses MODTRAN. MODTRAN is configured for RadCalNet processing to run in multiple-scattering mode using the eight-stream option for the DISORT (discrete ordinates) computations. One of the advantages of MODTRAN is that the absorption by gases is included within the radiative transfer calculations. Absorption is calculated using the MODTRAN band model as opposed to the correlated-k option. Auxiliary gaseous species (OH, HF, HCl, HBr, HI, ClO, OCS, H2CO, HOCl, N2, HCN, CH3Cl, H2O2, C2H2, C2H6, PH3) are not included, to improve processing time with minimal impact on results in the VNIR and SWIR. The carbon dioxide mixing ratio is assumed to be 390 ppmv.
Vertical profile information is based on the midlatitude summer model for all sites. The profile is scaled according to surface temperature, pressure, and column water vapor. The impact of the specific vertical distribution of atmospheric constituents is minimal for scattering-dominated radiative transfer calculations, and the midlatitude summer option allows for the combination of surface temperatures and total columnar water vapor found at typical vicarious calibration sites.
The MODTRAN calculations are performed for a "vertical slant path to space" with multiple scattering including both thermal emission and solar scattering. The surface is assumed to be Lambertian, and no adjacency effects are included (i.e., the underlying assumption is that the spectral properties of the site surroundings are identical to those measured at the site). The near-Lambertian nature of the sites included in RadCalNet, coupled with the typically low aerosol loading and moderate to high surface reflectance, means that the impact of non-Lambertian effects on the diffuse-light field is minimal [32]. Adjacency effects are omitted because most sites have a low adjacency effect and because this simplifies processing.
The spectral resolution used for the MODTRAN calculations is a triangular-shaped response with a 20 nm full width at half maximum. Spectral sampling is 400 nm to 1000 nm at 10 nm intervals for all sites, with additional spectral bands available at those sites providing inputs beyond the required RadCalNet spectral range. The Chance/Kurucz solar irradiance spectral model is used but has no impact on the predicted TOA reflectance for RadCalNet, since the same solar irradiance is used to scale the MODTRAN results to reflectance.
The given Angstrom coefficient and 550 nm aerosol optical depth (AOD) are used within MODTRAN to compute the spectral variation in AOD, which is combined with the aerosol model to determine the scattering characteristics needed for the DISORT calculations. The aerosol model used in RadCalNet is selected by the site operator as one of six standard MODTRAN types (rural with 23 km visibility, rural with 5 km visibility, maritime with user-defined visibility, maritime with 23 km visibility, urban with 5 km visibility, and tropospheric with 50 km visibility). To date, all of the sites rely on the rural model with 23 km visibility.
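For illustration, the relation implied by these two inputs is the Angstrom power law shown below; the spectral AOD actually used in the processing is computed internally by MODTRAN from the same parameters together with the selected aerosol model, so this sketch is only indicative.

```python
import numpy as np

def spectral_aod(wavelength_nm, aod_550, angstrom_alpha):
    """Angstrom power-law extrapolation of aerosol optical depth from 550 nm:
    tau(lambda) = tau(550) * (lambda / 550)**(-alpha).
    Illustrative only; MODTRAN handles the spectral AOD internally.
    """
    wl = np.asarray(wavelength_nm, dtype=float)
    return aod_550 * (wl / 550.0) ** (-angstrom_alpha)

# e.g., a clear, low-aerosol atmosphere (values are illustrative)
tau = spectral_aod([450, 550, 650, 850], aod_550=0.06, angstrom_alpha=1.3)
```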
RadCalNet Output
The MODTRAN output in the '.7sc' file is extracted to compute the TOA reflectance. For each output wavelength λ, the TOA reflectance is calculated as

ρ_TOA(λ) = π ∫ L_TOA(l) ζ(l, λ) dl / [ cos(θ) ∫ E_sun(l) ζ(l, λ) dl ],

where L_TOA(l) and E_sun(l) are the top-of-atmosphere spectral radiance and incoming solar spectral irradiance as calculated by MODTRAN, respectively, ζ(l, λ) is the MODTRAN spectral bandpass function centered on the wavelength λ, and θ is the solar zenith angle at the time of the measurement. An output flag is provided for any times and dates omitted by the site operators. Data are typically omitted due to cloudy weather, instrument malfunction, surface moisture effects, and so forth. Currently, the spectral bandpass function ζ(l, λ) is a triangular function centered on λ with a half-base width of 20 nm (a reduction to 10 nm is under discussion for the next RadCalNet reprocessing).
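A minimal sketch of this band-convolved computation is given below. It assumes that the spectral radiance and solar irradiance have already been extracted from the radiative-transfer output onto a common wavelength grid (parsing of the '.7sc' file itself is not shown) and uses the triangular bandpass with a 20 nm half-base width described above.

```python
import numpy as np

def toa_reflectance(wl, l_toa, e_sun, centres, sza_deg, half_base_nm=20.0):
    """Band-convolved TOA reflectance following the relation above:
    rho(lambda) = pi * sum(L_TOA * zeta) / (cos(theta) * sum(E_sun * zeta)),
    with zeta a triangular bandpass of half-base width 20 nm centred on each
    output wavelength. Inputs are assumed to be 1-D arrays on the same grid.
    """
    mu = np.cos(np.deg2rad(sza_deg))
    rho = []
    for c in centres:
        zeta = np.clip(1.0 - np.abs(wl - c) / half_base_nm, 0.0, None)  # triangle
        rho.append(np.pi * np.sum(l_toa * zeta) / (mu * np.sum(e_sun * zeta)))
    return np.array(rho)
```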
SI-Traceability and Uncertainty Analysis
Providing well-understood and SI-traceable uncertainties is as important as providing the TOA reflectance itself. The key element to the success of RadCalNet is its adherence to SI-traceability with quantitative uncertainties. Ensuring SI-traceability is necessary to support the establishment of the Global Earth Observation (GEO) System of Systems (GEOSS). The Quality Assurance framework for Earth Observations [39] was established at the request of GEO, with the key principle that "all Earth observing (EO) data and derived products should have associated with them a quality indicator based on a documented quantitative assessment of its traceability to internationally agreed upon reference standards (e.g., SI units)".
Input Data Quality
The input data provided by the site operators include uncertainties for each input parameter at each time. The uncertainties are based upon the expertise of the instrumented site operators with their specific instrumentation, processing methods, and instrumented site characteristics. All instrumented sites must have their uncertainty budgets peer reviewed by the RadCalNet Working Group and documented in an uncertainty summary report (and, where appropriate, a peer-reviewed publication).
The uncertainty analysis for a site considers the calibration of the site instruments (both permanent and any used in site characterization, e.g., to provide high spectral resolution measurements between the spectral bands of the permanent instruments), uncertainties associated with the use of that instrumentation in the field (including aging and temperature sensitivities), and uncertainties associated with the determination of ground reflectance from the raw measurements (e.g., due to sun and sky irradiance calculations). The uncertainty analysis and summary report document the traceability of the measurement of surface reflectance to SI.
Output Data Quality
The surface and atmosphere measurements provided as input to RadCalNet for the generation of simulated reference TOA reflectances are provided with associated uncertainties. These uncertainties are propagated to TOA via a lookup table (LUT) approach containing precomputed uncertainties corresponding to a set of combinations of surface/atmospheric parameters and their associated uncertainties. The sampling of the input parameter space corresponding to the LUT nodes was adjusted to allow a nearest-neighbor interpolation of the LUT in the RadCalNet processing scheme while providing the RadCalNet TOA reflectance uncertainty to within 0.5%.
The TOA uncertainties stored in the LUT were derived via a Monte Carlo approach. Each LUT entry corresponds to the TOA reflectance uncertainty associated with a particular combination of input parameters (specific atmospheric and surface conditions for an individual data point), derived from multiple runs of MODTRAN with simulated input parameter errors drawn from probability distributions describing the uncertainties associated with the input parameters. This approach is applicable to all sites and provides a computationally efficient solution to the propagation of input parameter uncertainties.
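A minimal sketch of the Monte Carlo step behind a single LUT entry is shown below, assuming Gaussian input errors and a callable run_rtc standing in for a full MODTRAN run; the distributions, sampling strategy, and number of runs used in the actual RadCalNet implementation may differ.

```python
import numpy as np

def mc_toa_uncertainty(run_rtc, nominal_inputs, input_uncerts, n_runs=200, seed=0):
    """Sketch: propagate input-parameter uncertainties to TOA reflectance by
    perturbing the inputs with Gaussian errors, rerunning the radiative
    transfer (run_rtc is a stand-in), and taking the spread of the results.
    """
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_runs):
        perturbed = {k: rng.normal(v, input_uncerts[k]) for k, v in nominal_inputs.items()}
        samples.append(run_rtc(**perturbed))   # returns TOA reflectance spectrum
    samples = np.asarray(samples)
    return samples.std(axis=0)                 # standard uncertainty per wavelength
```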
The uncertainty associated with the RadCalNet TOA reflectance in spectral regions strongly affected by gaseous absorption, in particular water vapor, is significantly larger than in other spectral regions. These regions are therefore systematically flagged in the RadCalNet TOA uncertainty products and should be avoided when using the RadCalNet TOA reflectance.
Uncertainties in the TOA reflectance due to the modelling of atmospheric scattering and absorption by MODTRAN are not included in the RadCalNet TOA uncertainty computation. It is expected, however, that they are not the main contributor to the total TOA uncertainty.
Consistency between RadCalNet Sites
To validate the uncertainty statements for the sites (associated with both ground measurements and TOA reflectance values), it is important to check the consistency between the measurements carried out at each RadCalNet site. The RadCalNet WG is developing procedures for ground-based site-to-site comparisons involving multispectral or hyperspectral travelling reference radiometers, but these are yet to be implemented.
The radiometric consistency is currently verified at TOA reflectance level by comparing the RadCalNet TOA reflectance products to observations from the Multi Spectral Instrument (MSI) onboard the Sentinel-2 satellites. These sensors were chosen because their performance has been thoroughly characterized in-flight and they are continuously monitored.
Calibration and Harmonization Using RadCalNet
The basic processing approach to calibrate a selected sensor using RadCalNet consists of six basic steps (a simplified sketch of this workflow is given after the list):
1. Extract the predicted TOA reflectance, including uncertainties, for the dates and times corresponding to when the sensor under study imaged the selected RadCalNet site.
2. Determine the test sensor output for the selected RadCalNet site and associated uncertainties.
3. Perform a temporal correction to the RadCalNet TOA reflectances to account for time differences between the sensor's imaging of the site and the RadCalNet 30 min interval.
4. Convolve the RadCalNet TOA reflectance with the test sensor's spectral response to determine the band-integrated TOA reflectance and associated uncertainty.
5. Convert the RadCalNet TOA reflectance and associated uncertainty to appropriate units for comparison to the test sensor output found in step 2.
6. Compare the imaging sensor output to the corresponding RadCalNet-based TOA reflectance and determine the uncertainty associated with the comparison.
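To make the workflow concrete, the hypothetical sketch below applies the six steps to a single image and a single band of a sensor whose product is already a TOA reflectance (so step 5 reduces to a no-op), using a nearest-in-time temporal match and a simple SRF-weighted average; every input name is illustrative and the uncertainty handling is deliberately simplified.

```python
import numpy as np

def calibrate_against_radcalnet(site_times, site_rho, site_u, img_time, img_refl, srf_wl, srf):
    """Illustrative sketch of the six-step comparison for one image and band.

    site_times : RadCalNet 30 min time stamps (same units as img_time)
    site_rho   : RadCalNet TOA reflectance spectra, one row per time stamp,
                 on the 400-1000 nm, 10 nm grid
    site_u     : corresponding TOA reflectance uncertainties
    img_time, img_refl : acquisition time and band TOA reflectance of the test sensor
    srf_wl, srf        : tabulated relative spectral response of the band
    """
    # Steps 1-3: RadCalNet prediction (and uncertainty) nearest in time to the acquisition
    i = int(np.argmin(np.abs(np.asarray(site_times) - img_time)))
    wl = np.arange(400, 1001, 10)                           # RadCalNet spectral grid [nm]
    # Step 4: band-integrate the RadCalNet spectrum over the sensor's spectral response
    srf_on_grid = np.interp(wl, srf_wl, srf, left=0.0, right=0.0)
    band_rho = np.sum(site_rho[i] * srf_on_grid) / np.sum(srf_on_grid)
    band_u = np.sum(site_u[i] * srf_on_grid) / np.sum(srf_on_grid)
    # Step 6: ratio of predicted to measured TOA reflectance, with a simplified uncertainty
    ratio = band_rho / img_refl
    return ratio, ratio * band_u / band_rho
```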
The examples shown in this section follow the approach above, as applied to data from two of the RadCalNet sites and two currently operating, on-orbit satellite sensors. The specific sites and sensors used are not given, to emphasize the overall RadCalNet concept as opposed to specific methodologies, sites, or sensors. The two sensors are well-understood, multispectral imagers with similar spatial resolutions of 10 m to 30 m. Both imagers acquired at least nine representative images for each of the two sites over an 18 month time period. The imagery from both sensors was obtained via each sensor's public data distribution website. All available images for both sites were obtained from the data provider, and dates for which the images showed obvious presence of clouds have been excluded from the study.
The dates and acquisition times for those scenes that appear not to be affected by clouds were used to locate matching RadCalNet output data. The approach used here to determine the predicted TOA reflectance from RadCalNet output was a nearest-in-time approach. Such an approach is reasonable for the sites that will be part of RadCalNet, since TOA reflectance varies more slowly with time than does the TOA radiance. Dates for which the RadCalNet TOA reflectance varied by more than 10% within 60 min of the image acquisition were omitted from the results shown here. Figure 3 shows typical TOA reflectance predicted from one of the instrumented sites as a function of time for four randomly selected dates. These data represent a variety of seasons, surface moisture conditions, and various clear-sky atmospheric conditions. The wavelength shown is in the midvisible. The first thing to note is that the reflectance of this particular location can vary by >30% depending on date, demonstrating the utility of having automated measurements to assess such changes. The second noticeable feature is that the variation in reflectance can be as large as 20% over a single day, caused by surface bidirectional reflectance effects and atmospheric angular scattering effects.
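A minimal sketch of this matching and screening is shown below, under the assumption that times are expressed in seconds and that a single-band reflectance series is screened over a 60 min window centered on the acquisition; the exact windowing used for the results in this paper may differ in detail.

```python
import numpy as np

def select_match(times, rho, acq_time, window_s=3600.0, max_var=0.10):
    """Sketch: take the RadCalNet entry nearest in time to the acquisition,
    but reject the date if the TOA reflectance varies by more than 10%
    within the 60 min window around the acquisition. Names are illustrative.
    """
    times, rho = np.asarray(times, float), np.asarray(rho, float)
    in_window = np.abs(times - acq_time) <= window_s / 2
    if not np.any(in_window):
        return None                              # no RadCalNet data near this acquisition
    subset = rho[in_window]
    if (subset.max() - subset.min()) / subset.mean() > max_var:
        return None                              # too variable: date omitted
    return rho[np.argmin(np.abs(times - acq_time))]
```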
The variation over a 30 min time period is typically <5% and is typically smaller for the overpass times of most sun-synchronous imagers. Simplistic approaches such as using the closest data point in time or linear interpolation are sufficient for most applications to ensure that the temporal variation in TOA reflectance is not the dominant uncertainty source. Applications using times of day for which the solar illumination angle changes significantly over time may necessitate more sophisticated approaches, depending upon the user's uncertainty requirements.
Figure 4 shows a sample TOA reflectance as a function of wavelength obtained from RadCalNet. The results are given at the native resolution of RadCalNet. Also shown in the figure are relative spectral response function (SRF) data for what is referred to here as sensor 1, as representative of a set of spectral bands used in earth imagers. The TOA reflectance for a selected sensor band can be derived through a variety of approaches, including choosing the closest RadCalNet wavelength to the sensor's center wavelength, band-averaging across the sensor's spectral response at the RadCalNet 10 nm interval, or interpolating the RadCalNet output to the same spectral resolution as the sensor's spectral response data.
The relatively smooth nature of the TOA reflectance seen in Figure 4 is a result of the smoothly varying spectral reflectance of the surface coupled with the lack of strong atmospheric absorption features. The few spectral regions affected by water vapor absorption (720 nm, 830 nm, 940 nm) and oxygen at 760 nm are visible as broad, shallow features due to the 20 nm spectral averaging used to obtain the TOA reflectance in RadCalNet. Increased uncertainties in bands affected by gaseous absorption are reported within the RadCalNet output, and users should exercise care in these spectral regions to mitigate the increasing uncertainties resulting from atmospheric absorption.
The relative spectral response data shown in Figure 4 were obtained from the same data source as that used to obtain the imagery. The method for band-averaging of the RadCalNet TOA reflectance data used in this work is to linearly interpolate the TOA reflectance data to match the spectral sampling of the imager's SRF to determine a band-weighted TOA reflectance. Such an approach is suitable here because the bands being studied are in regions of the spectrum not affected by absorption.
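A minimal sketch of this band-averaging step is given below, assuming the RadCalNet reflectance and the sensor SRF are available as simple arrays; the function and variable names are illustrative.

```python
import numpy as np

def band_weighted_reflectance(radcalnet_wl, radcalnet_rho, srf_wl, srf):
    """Band-averaging as described in the text: linearly interpolate the 10 nm
    RadCalNet TOA reflectance onto the wavelengths at which the sensor's
    relative spectral response is tabulated, then form the SRF-weighted mean.
    """
    rho_on_srf = np.interp(srf_wl, radcalnet_wl, radcalnet_rho)
    return np.sum(rho_on_srf * srf) / np.sum(srf)
```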
Once the band-averaged TOA reflectance is computed from RadCalNet data for the sensor of interest, the results are compared to the reported TOA reflectance from the sensor under test. The site is located for each image, and a spatial average of the area recommended by the site operator is used to determine the image sensor's output for each spectral band. Both sensors used in the current work provide an at-sensor reflectance as their data product, thus negating the need for including a solar irradiance model. If the imagery data are provided as a spectral radiance, it would be converted to TOA reflectance using the solar irradiance model recommended by the data provider.
Three examples are presented here to demonstrate the utility of RadCalNet for sensor radiometric calibration as well as harmonization of multiple sensors. Section 5.1 provides results for a single multispectral imaging sensor (Sensor #1) from multiple days from one of the current RadCalNet instrumented sites (Site A). The second example presented in Section 5.2 gives results from Sensor #1 from two of the sites (Sites A and B). Section 5.3 describes the results from a second sensor (Sensor #2) using multiple dates at both Sites A and B and compares the results to those obtained from Sensor #1. The specifics regarding the locations and properties of Sites A and B or the characteristics of Sensors #1 and #2 are not critical to this discussion, beyond the fact that both sensors have been demonstrated not to be suffering from temporal degradation, and the sites have been audited through RadCalNet to ensure their SI-traceable uncertainties.
Single Sensor, Single Site
Employing the approach described above for Sensor #1 at Site A provided 10 near-nadir data images coinciding with available RadCalNet TOA reflectance predictions over approximately a 14-month period in 2015 and 2016. The maximum number of near-nadir views for this sensor that could have been available over that time period is 30 scenes. The 20 "missing" data points are due to cloudiness over the site; instrumentation at the RadCalNet site undergoing maintenance; or unsuitable surface conditions, indicated by anomalously low reflectance (wet surface), anomalously high reflectance (snow), or rapidly changing values on a given date. The gap in results late in 2015 to early in 2016 is due primarily to cloudy conditions coupled with surface variability. Figure 5 shows the results of these 10 dates as a function of time for a single spectral band of the imager, centered approximately at 650 nm. The data are displayed as the ratio of the predicted TOA reflectance to that reported by the imaging sensor. The error bars shown are the nominal absolute uncertainties for a single measurement currently reported by RadCalNet. The uncertainties represented in Figure 5 do not include the uncertainty related to Sensor #1's image data, such as that caused by geolocation uncertainty, sensor noise, absolute uncertainty, and so forth.
One noticeable feature in Figure 5 is the scatter of the data. Such scatter is also present in reflectance-based results relying on measurements made with personnel located at the instrumented site [27,29] and can result from uncertainties associated with the radiometric calibration and stability of the on-ground instrumentation and with the site area sampling strategy, both typically estimated to be of the order of a few percent. On top of these on-ground measurement uncertainties, the scatter is also explained by (a) the uncertainty associated with the propagation of the surface reflectance measurements to the top of the atmosphere via radiative transfer simulation (due to uncertainty in the measurement of atmospheric parameters, such as aerosol optical properties and gaseous absorption) and (b) the uncertainty associated with the spaceborne sensor calibration. The encouraging result is that the RadCalNet results agree with the imaging sensor's absolute radiometric calibration to better than 5%, except for the initial two data points. This result is in line with the overall expected uncertainty of the reflectance-based methodology used by RadCalNet [27,29]. The data also indicate a trend of about 1% per year, which is not statistically significant over such a period when compared with the uncertainty of about 5% of the RadCalNet TOA simulations.
Averaging the 10 data points for each of the spectral bands leads to Figure 6, which shows the average ratio between the RadCalNet predictions and Sensor #1's reported TOA reflectance for each spectral band. Such an average assumes that the sensor being calibrated is not changing in time, as is the case given prior knowledge of the radiometric stability of Sensor #1. The average of the data shown in Figure 5 is shown as the fourth band, centered near 650 nm. The error bars shown in this figure are the 1-σ standard deviations of the temporal averages. The standard deviations shown here match well with those reported in past work for reflectance-based methods with on-site personnel [1].
The dominant result to note is that the averages all agree to within their associated standard deviations (just below 5%), which is in line with the overall expectation of the reflectance-based methodology used by RadCalNet; a spectral trend can also be seen. Such a trend could indicate a band-to-band calibration feature of the sensor or be an indication of a systematic spectral effect in the processing of the RadCalNet data. Statistical evaluation shows that the means of the 450 nm, 480 nm, and 550 nm bands are not different at the 5% level. Taken individually, the 650 nm and 850 nm bands are statistically different from the three shorter-wavelength bands at the 5% level. Results for all bands, however, agree with the RadCalNet predictions to within the absolute uncertainty for the site, as reported by the site operator and evaluated by the RadCalNet Working Group. Evaluation of the measurement methodology, instrumentation, and processing does not indicate that a bias is present.
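The specific statistical test used for these comparisons is not detailed here; as one plausible way to test whether two sets of ratios differ at the 5% level, the sketch below applies a two-sample Welch t-test to purely illustrative values for two bands.

```python
import numpy as np
from scipy import stats

# Ratios (predicted / measured TOA reflectance) for two bands of the same
# sensor at one site; the values below are purely illustrative.
band_450 = np.array([0.97, 0.99, 1.00, 0.98, 0.99, 1.01, 0.98, 0.99, 1.00, 0.98])
band_650 = np.array([1.01, 1.03, 1.02, 1.04, 1.02, 1.01, 1.03, 1.02, 1.04, 1.03])

t_stat, p_value = stats.ttest_ind(band_450, band_650, equal_var=False)  # Welch's t-test
significant = p_value < 0.05  # "statistically different at the 5% level"
```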
One approach to understand the origin of any bias is to examine the behavior of the same sensor at multiple sites that follow similar measurement and uncertainty protocols.
Single Sensor, Multiple Sites
One goal of RadCalNet is to increase the number of available calibration opportunities through networking results from multiple sites. Such a goal requires that the results from different sites can be used interchangeably to assess the radiometric calibration of a sensor. One method to assess the suitability of using multiple sites is to compare results from the sites using a well-understood sensor that is not changing significantly with time. A similar approach has been used for reflectance-based results at two sites in the western USA [32]. The comparison of sensor #1 measurements to site A and B RadCalNet TOA simulation is first presented, and then similarly for sensor #2 again against the RadCalNet simulation from site A and B. Figure 7 repeats the results for RadCalNet Site A and Sensor #1 given in Figure 5, while including results from RadCalNet Site B. Ideally, one would have the same number of data points when comparing the impact of site-related uncertainties, but weather conditions and other factors limit the opportunity for such a controlled evaluation. Of course, the discussion above depends on there being no systematic differences larger than absolute uncertainties at the multiple sites. Visual inspection of the data in Figure 7 indicates that the results from Site B appear to have a lower ratio than those from Site A. The difference is readily apparent in the data from 2015. A first evaluation of the two datasets examined whether either site indicates a trend in the sensor's radiometric response or possibly a nonlinear response in relation to the surface reflectance levels of site A and B. As mentioned previously, the data from Site A for this spectral band near 650 nm do not indicate a temporal change in the ratio that is statistically different from zero at the 5% confidence level. The results from Site B also show no trend at the 5% confidence level, as does the combined Site A and B results. Figure 8 helps to understand the significance of the possible systematic differences between the two sites. The figure shows the average ratio for the two sites based on the seven and 10 days shown in Figure 7. The means for the 650 nm and 850 nm bands are statistically different at the 5% confidence level, while those of the other bands agree. However, it should be made clear that the results for all bands agree to within the absolute uncertainties determined by each site operator for their site and agreed upon by the RadCalNet Working Group (in the range of 3% to 5%, depending of the spectral region here covered and the sites). The first feature to note is that the gap in data from late 2015 to early 2016 is present for both Sites A and B. Such a lack of data from two widely separated sites over the same time period is rare and is one motivation for expanding RadCalNet to include sites with varying climatologies (and particularly the use of both southern hemisphere and northern hemisphere sites) to reduce the chance of such gaps occurring. Figure 7 helps to illustrate one of the advantages of providing data from multiple sites. Consider the time period near the middle part of 2016. Three data points should be sufficient to derive a radiometric calibration with statistical confidence [40], but the opportunity to include additional data points of similar absolute and relative uncertainties improves the understanding of the sensor's radiometric behavior. Likewise, consider the results shown from 2015 for the three data points from Site A. 
The noise in those results could be interpreted to imply a trend in the sensor's response as a function of time. Combining the results from both sites would reinforce the lack of a statistical trend from Site A.
Of course, the discussion above depends on there being no systematic differences larger than absolute uncertainties at the multiple sites. Visual inspection of the data in Figure 7 indicates that the results from Site B appear to have a lower ratio than those from Site A. The difference is readily apparent in the data from 2015. A first evaluation of the two datasets examined whether either site indicates a trend in the sensor's radiometric response or possibly a nonlinear response in relation to the surface reflectance levels of site A and B. As mentioned previously, the data from Site A for this spectral band near 650 nm do not indicate a temporal change in the ratio that is statistically different from zero at the 5% confidence level. The results from Site B also show no trend at the 5% confidence level, as does the combined Site A and B results. Figure 8 helps to understand the significance of the possible systematic differences between the two sites. The figure shows the average ratio for the two sites based on the seven and 10 days shown in Figure 7. The means for the 650 nm and 850 nm bands are statistically different at the 5% confidence level, while those of the other bands agree. However, it should be made clear that the results for all bands agree to within the absolute uncertainties determined by each site operator for their site and agreed upon by the RadCalNet Working Group (in the range of 3% to 5%, depending of the spectral region here covered and the sites). Figure 9 shows the same as Figure 8, except for Sensor #2 at Sites A and B. The number of datasets for Sensor #2 at Site A is 16, and that for site B is five. None of the means from the two sites for a given band are found to be different statistically at the 5% confidence level. The agreement for several of the bands is close enough that the two symbols are not discernible. The disagreements are again largest at 650 nm, indicating a possible feature in this part of the spectrum between the results of the two sites. It is possible that there is a systematic effect that has been missed in the uncertainty analysis. One advantage of RadCalNet is that the availability of data from other sites and sensors will allow improvements in understanding the RadCalNet uncertainty budgets as well as possible sensor features that could cause site-dependent differences.
It is noted above that the band-to-band differences for Sensor #1 for both sites were not statistically different. That is, the radiometric calibration of all bands is within the uncertainties of the predicted TOA reflectance from the two RadCalNet sites across all spectral bands. The Sensor #2 data in Figure 9 show similar agreement to the RadCalNet TOA reflectance. The band-to-band differences also agree to within the absolute uncertainties, but the relative differences between the average ratios for spectral bands in the 650 nm to 800 nm spectral region are statistically different from those of the shortest-wavelength bands for Site A. Results from Site B confirm the statistical difference between bands between 700 nm and 800 nm and those between 480 nm and 600 nm. Such a result, confirmed at two different sites, should lead to an examination of the band-to-band radiometric calibration of Sensor #2.
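The trend and site-difference checks described above are straightforward to reproduce. The following is a minimal sketch, assuming a hypothetical table of per-date sensor-to-RadCalNet ratios for one band; the dates, sites, and values are illustrative only and are not the study data. A linear fit tests whether the slope with time differs from zero, and a Welch two-sample t-test compares the mean ratios of the two sites.

```r
## Hypothetical per-overpass ratios (sensor TOA / RadCalNet-predicted TOA) for one band
ratios <- data.frame(
  date  = as.Date(c("2015-03-01", "2015-06-10", "2015-08-22",
                    "2016-04-05", "2016-06-18", "2016-07-30", "2016-09-12")),
  site  = c("A", "A", "A", "B", "B", "A", "B"),
  ratio = c(0.996, 1.004, 0.991, 0.982, 0.987, 1.002, 0.985)
)

# (a) Temporal trend: regress the ratio on time and test whether the slope differs from zero
trend_fit <- lm(ratio ~ date, data = ratios)
summary(trend_fit)$coefficients["date", ]   # slope estimate, std. error, t value, p value

# (b) Site-to-site difference in the mean ratio (Welch two-sample t-test, unequal variances)
t.test(ratio ~ site, data = ratios)
```

In the analysis above, such tests would be evaluated at the 5% confidence level for each spectral band separately.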
Multiple Sensors, Single Site
Data from a single RadCalNet site can be used to examine the calibration of multiple sensors. Such results have been obtained in the past by groups using the reflectance-based method with on-site personnel, as well as by the RadCalNet site operators relying on their automated systems. Figure 10 is similar to Figures 8 and 9 but shows the results from Site A for the two sensors and illustrates one of the advantages of using data from RadCalNet for sensor radiometric comparisons. The results for the two sensors are plotted as a function of the band-averaged center wavelength obtained from the sensor's relative spectral response data. Recall that the ratios are obtained from similar band-averaging approaches and thus the results inherently contain spectral band adjustment factors [41], allowing direct comparisons between the sensors against a common reference.
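As a concrete illustration of the band averaging referred to above, the sketch below weights a TOA reflectance spectrum by a sensor's relative spectral response to obtain a band-averaged prediction and then forms the ratio to the sensor's measurement. All numbers and the Gaussian response are placeholders, not actual RadCalNet outputs or sensor SRFs.

```r
## Hypothetical sketch of the band-averaging used to form sensor-vs-RadCalNet ratios.
band_average <- function(wl, srf, spectrum) {
  # SRF-weighted mean of the spectrum over the band (simple sum weights for brevity)
  sum(srf * spectrum) / sum(srf)
}

# Made-up example on a 1 nm grid around 650 nm
wl            <- 630:670
srf           <- exp(-((wl - 650) / 10)^2)      # notional Gaussian relative spectral response
toa_radcalnet <- 0.30 + 0.0005 * (wl - 630)     # notional RadCalNet-predicted TOA reflectance

rho_pred <- band_average(wl, srf, toa_radcalnet)  # band-averaged RadCalNet prediction
rho_meas <- 0.305                                 # sensor's reported band reflectance
ratio    <- rho_meas / rho_pred                   # quantity of the kind plotted in Figures 5-12
ratio
```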
Analysis of the results in Figure 10 indicates no statistically significant difference between the similar bands of the two sensors at the 5% confidence level when relying on either the standard deviation of the average or the estimated absolute uncertainty. Likewise, the results are not statistically different from a ratio of unity for any of the spectral bands.
The results in Figure 10 also confirm the discussion of the previous section related to the band-to-band agreement for Sensor #2. The power of RadCalNet is well demonstrated by the band-to-band differences seen for Sensor #2. One would expect that processing data through an algorithm relying only on the near-identical bands of Sensors #1 and #2 would give similar results for either sensor because the radiometric calibrations agree. One advantage of Sensor #2 is that the added bands from 700 nm to 800 nm allow unique algorithms to be developed, making use of this spectral information. Conclusions drawn from algorithms using these bands could be suspect because of the lack of consistent radiometric calibration across the sensor. The ability to reference all of the spectral bands of a sensor to a common SI-traceable reference such as RadCalNet allows for comparisons such as this.
As has been emphasized repeatedly, another advantage of RadCalNet is the use of multiple sites. Figure 11 shows the comparison between Sensors #1 and #2, except in this case relying on Site B. Comparing the ratios for similar center wavelengths shows that the calibrations of both sensors agree at the 5% confidence level, both in a relative and in an absolute sense. The three spectral bands of Sensor #2 in the 700 nm to 800 nm spectral region again differ significantly from the shorter-wavelength bands of Sensor #1, as well as from the 850 nm band, at this site.
Two Sensors, Two Sites
The final way to examine the data from the two sensors and the two sites is to combine all of the data from both sites for a single sensor, for both sensors of interest, as shown in Figure 12. The data points represent the average of the ratios from all dates, weighted by the absolute uncertainty for each site. Similarly, the error bars shown are a weighted variance based on the absolute uncertainties for each site. Such an approach is simplistic but serves as an example of how the data from two sites could be combined. Efforts are taking place within the RadCalNet Working Group to determine methods to represent the uncertainties and averages from combinations of data from multiple sites. For example, one could readily compute the average ratio from the data for each site and then determine the average of those two results (i.e., the average of the results shown in Figures 10 and 11).

The results shown in Figure 12 are very similar to, but noticeably different from, those in Figures 10 and 11. None of the results shown in Figure 12 are statistically different in either a relative or an absolute sense, and this is not a surprise since Figure 12 represents a combination of the data in the other two figures. The band-to-band agreement within a sensor is within the absolute uncertainty for each sensor. Likewise, the sensor-to-sensor agreement for similar bands is within the absolute uncertainties. The same conclusions are drawn when using an unweighted average and a standard deviation computed from the unweighted average, in essence providing a relative comparison between bands and sensors. All bands across both sensors agree in an absolute sense as well; however, the three bands from Sensor #2 between 700 nm and 800 nm differ significantly from Band 1 of Sensor #1 in a relative sense using the unweighted standard deviation of the averages.
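One simple way to carry out the weighted combination described above is sketched below. The ratios and per-site absolute uncertainties are invented for illustration, and the weighted-variance expression shown is only one candidate formulation of the kind the Working Group is considering.

```r
## Hypothetical combination of per-date ratios from two sites for one spectral band,
## weighted by each site's absolute uncertainty (as in Figure 12).
ratio_A <- c(0.996, 1.004, 0.991, 1.002)   # per-date ratios, Site A (made up)
ratio_B <- c(0.982, 0.987, 0.985)          # per-date ratios, Site B (made up)
u_A <- 0.035                                # absolute uncertainty of Site A (fractional)
u_B <- 0.045                                # absolute uncertainty of Site B (fractional)

ratios  <- c(ratio_A, ratio_B)
weights <- c(rep(1 / u_A^2, length(ratio_A)), rep(1 / u_B^2, length(ratio_B)))

combined_mean <- weighted.mean(ratios, weights)
# One simple weighted spread about the weighted mean
combined_var  <- sum(weights * (ratios - combined_mean)^2) / sum(weights)

c(mean = combined_mean, sd = sqrt(combined_var))
```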
Interconsistency
The example shown in Figure 12 is a demonstration of how data from RadCalNet can be used for harmonization, or interconsistency, studies. The SI-traceable nature of RadCalNet with its known uncertainties allows the network to be treated as a common reference to understand the consistency between sensors. One advantage of RadCalNet is that no spectral band correction factor [41] is needed because the results for each sensor already include the sensor SRF as part of the band weighting. Additionally, the data from the sensors under study need not be from the same dates and, more importantly, the data from the sensors need not even be from the same sites. A similar philosophy has been recommended for use of results from the reflectance-based approach with personnel present on the site [12] and allowed the intercomparison of a dozen satellite and airborne sensors [42]. The examples above provide a demonstration of the utility of RadCalNet and show how SI-traceability provides the best methodology for ensuring data interconsistency across bands and sensors.
Conclusions
This paper has presented information about the new RadCalNet network of instrumented sites. RadCalNet is designed to provide predicted TOA reflectance suitable for the radiometric calibration or monitoring of Earth imagers in the reflected solar portion of the spectrum. The initial network of sites consists of four locations: Baotou (China), Gobabeb (Namibia), La Crau (France), and Railroad Valley Playa (USA). Data from the four sites are available via the RadCalNet portal, along with documentation providing descriptions regarding the instrumentation at each site and SI-traceable uncertainties for the RadCalNet results from the sites.
RadCalNet data have been processed by many users; at the time of writing, there are more than 300 registered users of RadCalNet from over 35 countries. The data from RadCalNet are well suited to assess the radiometric stability and accuracy of individual space sensors and to assess the radiometric consistency of multiple sensors, without relying on the assumption of temporal invariance of the observed site or requiring near-simultaneous observations of the site by the sensors being compared. The SI-traceable nature of the RadCalNet TOA reflectance and its associated uncertainties allows for the consistent radiometric monitoring, comparison, and calibration of space sensors over multiple sites.
RadCalNet data are already being used for comparisons, calibrations, and validations (e.g., [43]). It is expected that over the next few years more sites will be admitted as RadCalNet sites, increasing the range of sites available for comparison. Research is ongoing on how to improve the accuracy and consistency of RadCalNet sites and how to make the most of the available data.
While RadCalNet was designed for the comparison of Level 1 satellite products of TOA reflectance, the data are also being used for other applications. The BOA observations (site raw data) are of interest to those generating Level 2 products (atmospherically corrected BOA reflectance).

Funding: The Gobabeb site was set up and instrumented thanks to funding from the European Space Agency Technology and Research Programme, contract 4000110704. The RadCalNet archive and portal are maintained through the European Space Agency Earthnet Programme, contract CCN5 4000110704. NPL received funding for this work from the Metrology for Earth Observation and Climate project (MetEOC-2), Grant Number ENV55 532, within the EMRP programme. It also received funding from the MetEOC-3 project, grant number 16ENV03, under the EMPIR programme. The EMRP and EMPIR programmes are jointly funded by the EMRP participating countries within EURAMET and the European Union's FP7 and H2020 programmes. NPL was also funded by the European Space Agency Technology and Research Programme through the ACTION project and by the UK Government's Department for Business, Energy and Industrial Strategy (BEIS) through the UK's National Measurement System programmes. AOE's work was supported by the Bureau of International Co-operation, Chinese Academy of Sciences (Grant No. 181811KYSB20160040). The University of Arizona received funding from NASA Research Grants NNX14AE20G, NNX15AM86G, NNX16AL25G, and USGS Research Cooperative Agreement G14AC00371. We would like to thank AERONET for processing the sun photometer data, and also the Bureau of Land Management (BLM), Tonopah Nevada Office, for assistance and access to Railroad Valley.
"Environmental Science",
"Engineering"
] |
Behavioral and Cognitive Performance Following Exposure to Second-Hand Smoke (SHS) from Tobacco Products Associated with Oxidative-Stress-Induced DNA Damage and Repair and Disruption of the Gut Microbiome
Exposure to second-hand smoke (SHS) remains prevalent. The underlying mechanisms of how SHS affects the brain require elucidation. We tested the hypothesis that SHS inhalation drives changes in the gut microbiome, impacting behavioral and cognitive performance as well as neuropathology, in two-month-old wild-type (WT) mice and mice expressing wild-type human tau, a genetic model pertinent to Alzheimer's disease, following chronic SHS exposure (10 months to ~30 mg/m³). SHS exposure impacted the composition of the gut microbiome as well as the biodiversity and evenness of the gut microbiome in a sex-dependent fashion. This variation in the composition and biodiversity of the gut microbiome was also associated with several measures of cognitive performance. These results support the hypothesis that the gut microbiome contributes to the effect of SHS exposure on cognition. The percentage of 8-OHdG-labeled cells in the CA1 region of the hippocampus was also associated with performance in the novel object recognition test, consistent with urine and serum levels of 8-OHdG serving as a biomarker of cognitive performance in humans. We also assessed the effects of SHS on the percentage of p21-labeled cells, an early cellular marker of senescence that is upregulated in bronchial cells after exposure to cigarette smoke. Nuclear staining of p21-labeled cells was more prominent in larger cells of the prefrontal cortex and CA1 hippocampal neurons of SHS-exposed mice than in sham-exposed mice, and there was a significantly greater percentage of labelled cells in the prefrontal cortex and CA1 region of the hippocampus of SHS- than air-exposed mice, suggesting that exposure to SHS may result in accelerated brain aging through oxidative-stress-induced injury.
The gut microbiome plays an important role in the synthesis of enzymes, amino acids and neurotransmitters, as well as the production of metabolites promoting epithelial barrier integrity, immune modulation, and cognition [13,14]. As a result, the gut microbiome can impact behavioral and cognitive performance through the gut-brain axis, which includes vagal nerve innervation, neurotransmitter production, endocrine signaling, and inflammation [15-20]. Perturbations to the gut microbiome can result in dysbiosis, wherein the microbiome drives a variety of pathologies that can negatively affect the brain via the gut-brain axis. In humans, the gut microbiome diversifies with age, reflecting healthy aging, and predicts survival [21], while increasing evidence supports the role of an altered gut microbiome in neurodegenerative conditions [22].
Smoking affects the gut microbiome [23], and smoking behaviors may be influenced by specific taxa that produce neurotransmitter-associated metabolites, such as tryptophan and tyrosine [24]. The inhalation of SHS affects both the gut tissue and the composition of the gut microbiome. These observations raise the question of whether environmental exposure to SHS might impact the gut microbiome and thus cause cognitive injury. We hypothesize that SHS inhalation drives changes in the gut microbiome, impacting behavioral and cognitive performance and neuropathology.
In our earlier study [25], we started to assess the effects of chronic SHS exposure (10 months to ~30 mg/m³) on behavioral and cognitive performance, metabolism, and neuropathology in 2-month-old wild-type (WT) mice and mice expressing wild-type human tau, a genetic model pertinent to Alzheimer's disease in which there is a spread of neurofibrillary tangles, consisting of hyperphosphorylated tau aggregates, that is associated with disease severity [26,27]. It is unclear whether SHS induces dysregulation of wild-type human tau. In our current gut microbiome analysis, we included a non-mutant human tau (htau) mouse model that exhibits age-dependent tau dysregulation, neurofibrillary tangles, neuronal loss, neuroinflammation, and oxidative stress starting at 3-4 months and in which tau dysregulation and neuronal loss correlate with synaptic dysfunction and cognitive decline [28].
In addition to behavioral and cognitive performance and neuropathology, the lungs of mice were examined for pathology and alterations in gene expression. We originally hypothesized that WT mice would be less susceptible to the effects of chronic SHS than human tau mice and that WT male mice would be less susceptible to the effects of chronic SHS than WT females. However, our results revealed that the brains of WT mice, and especially WT male mice, were susceptible to the effects of chronic SHS exposure [25]. In a follow-up study, we reported increased levels of 8-hydroxy-2′-deoxyguanosine (8-OH-dG), a marker of oxidative DNA damage and a biomarker of DNA damage following exposure to cancer-causing agents [29], generated following oxidative-stress-induced damage to 2′-deoxyguanosine, in the prefrontal cortex of SHS-exposed mice as compared to air controls, and a trend towards increased levels in the CA1 area of the hippocampus [30]. In the prefrontal cortex, levels of the oxidative DNA repair marker AP endonuclease 1 (APE1), which is involved in the repair of oxidative DNA damage, were also higher in SHS- than air-exposed mice [30]. SHS might also increase various markers of cell senescence in the brain following the oxidative DNA damage, such as β-galactosidase [31-33].
In the current study, we assessed whether the composition of the gut microbiome from the mice in this prior study (1) varies as a result of SHS exposure and (2) is linked to behavioral or cognitive phenotypes in these mice. In addition, we assessed whether the oxidative-stress-induced DNA damage that was higher in the hippocampus and prefrontal cortex of SHS-exposed mice correlated with performance in the object recognition test. As β-galactosidase is a marker of senescent cells [34] and cigarette smoke induces p21 expression [35], an early marker of senescence, we also assessed whether β-galactosidase and p21 levels were elevated in the hippocampus or prefrontal cortex following SHS exposure.
SHS Exposure and Behavioral and Cognitive Data
We collected tissues and fecal samples from all 62 mice (n = 8 mice/genotype/sex/exposure condition; 1 htau male and 1 wild-type male mouse exposed to SHS died) following open field testing, generated as part of a previously NIEHS-funded R21 proposal for the current study. In this project, we assessed the effects of chronic SHS exposure (SCIREQ inExpose system, Montreal, Quebec H2S 3G8, Canada; 90% sidestream, 10% mainstream; 10 months (312 days), 7 days per week, to ~30 mg/m³) on behavioral and cognitive performance, metabolism, and neuropathology in 2-month-old wild-type (WT) and human tau mice. For a detailed description of the SHS chemical composition, animal survival, behavioral tests and related results, please see our earlier reported study [25]. To avoid possible withdrawal symptoms in the mice, the mice were behaviorally tested during the last part of the SHS or air exposures. In addition, the lungs of mice were examined for pathology and alterations in gene expression. Details regarding the exposures and analyses reported so far are described in [25]. Briefly, mice were assigned to SHS or air (control) exposure and exposed to SHS (90% sidestream and 10% mainstream, using the SCIREQ® inExpose™ system) or air for 168 min per day. Each exposure day, a cigarette-smoking robot (CSR) and a CSR lighter from SCIREQ® were used to light twenty-four 3R4F certified cigarettes (University of Kentucky, Lexington, KY, USA). There was one puff per minute; the flow rate was 2 L/min. Gravimetric analysis was used to analyze the particulate matter concentration monthly. All procedures were performed according to the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals following approval from the Oregon Health & Science University (OHSU) Institutional Animal Care and Use Committee (IACUC) and consistent with the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines.
16S Gut Microbiome Analysis
The 16S rRNA gene sequence data were generated from stool collected from mice following our prior work [36]. Briefly, we followed the Earth Microbiome Project protocols to extract DNA using the QIAamp PowerFecal Pro DNA extraction kits and amplify the V4 hypervariable locus using the Polymerase Chain Reaction (PCR). Cleaned amplicons were pooled at equimolar concentrations and subjected to DNA sequencing on an Illumina MiSeq via the Center for Quantitative Life Sciences at Oregon State University. Sequence data were then demultiplexed, adapter trimmed, and subjected to Amplicon Sequence Variant (ASV) clustering using the DADA2 workflow in the R programming environment.
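For readers unfamiliar with this workflow, the following is a minimal, hedged sketch of a DADA2-style ASV pipeline in R; the file paths, truncation lengths, and filtering thresholds are placeholders rather than the parameters used in this study.

```r
## Hedged sketch of a DADA2 ASV workflow; paths and parameters are placeholders.
library(dada2)

fnFs <- sort(list.files("fastq", pattern = "_R1.fastq.gz", full.names = TRUE))
fnRs <- sort(list.files("fastq", pattern = "_R2.fastq.gz", full.names = TRUE))
filtFs <- file.path("filtered", basename(fnFs))
filtRs <- file.path("filtered", basename(fnRs))

# Quality filter/trim the demultiplexed, adapter-trimmed reads
filterAndTrim(fnFs, filtFs, fnRs, filtRs, truncLen = c(200, 150), maxEE = c(2, 2))

# Learn error rates and infer amplicon sequence variants (ASVs)
errF <- learnErrors(filtFs); errR <- learnErrors(filtRs)
ddF  <- dada(filtFs, err = errF); ddR <- dada(filtRs, err = errR)

# Merge read pairs, build the samples-by-ASV table, and remove chimeras
merged <- mergePairs(ddF, filtFs, ddR, filtRs)
seqtab <- makeSequenceTable(merged)
seqtab <- removeBimeraDenovo(seqtab, method = "consensus")
```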
We evaluated how the α-diversity and β-diversity of the microbiome associate with various experimental covariates, including cohort, sex, and genotype, as well as 10 behavioral and physiological covariates. α-diversity is a measure of the biodiversity of the microbiome (i.e., how many different taxa reside in the community). This measurement is agnostic to the specific types of bacteria that comprise the microbiome and instead summarizes ecological characteristics of the community, such as the carrying capacity. β-diversity, on the other hand, is a measure of the composition of the microbiome. In particular, it assesses which specific taxa reside in a community and how the taxonomic composition of the community differs from that of other communities. In effect, β-diversity allows us to assess whether the types of taxa that comprise a microbiome vary as a function of different experimental parameters (e.g., exposure to SHS). For both α- and β-diversity, we evaluated a variety of metrics that similarly summarize these properties of the microbiome but differ in their specific mathematical form, which allows us to evaluate different aspects of the ecology of the communities.
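As an illustration, both α-diversity metrics used here can be computed directly from a samples-by-ASV count table; the sketch below uses a small random stand-in for the real table.

```r
## Minimal sketch: computing richness and Shannon entropy from a samples-by-ASV count table
library(vegan)

set.seed(1)
counts <- matrix(rpois(5 * 20, lambda = 3), nrow = 5,
                 dimnames = list(paste0("mouse", 1:5), paste0("ASV", 1:20)))

richness <- specnumber(counts)                     # number of distinct ASVs per sample
shannon  <- diversity(counts, index = "shannon")   # richness weighted by evenness

alpha <- data.frame(sample = rownames(counts), richness = richness, shannon = shannon)
alpha
```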
Our analysis of the gut microbiome data followed our prior work [36]. Briefly, we used robust hypothesis tests or linear regression to associate the α-diversity with study covariates, including genotype, exposure to SHS, and sex, using a step-wise model construction framework that zeroes in on the set of covariates, as well as their potential interactions, that significantly explain the variation in α-diversity across samples (p < 0.05). We used conditional correspondence analysis via a related step-wise multivariate regression approach to link covariates or their statistical interaction to β-diversity (p < 0.05).
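A hedged sketch of this kind of step-wise association testing is given below. The α-diversity model uses base R step-wise selection, and the β-diversity model uses distance-based redundancy analysis on Bray-Curtis dissimilarities (matching the ordination in Figure 3) with permutation-based step-wise selection; the count table, covariates, and column names are hypothetical stand-ins for the actual study data, and dbRDA is used here as one possible stand-in for the conditional correspondence analysis described above.

```r
## Hedged sketch of step-wise association testing for α- and β-diversity (made-up data)
library(vegan)

set.seed(1)
counts <- matrix(rpois(24 * 40, lambda = 3), nrow = 24)
meta   <- data.frame(exposure = rep(c("Air", "SHS"), each = 12),
                     sex      = rep(c("F", "M"), times = 12),
                     genotype = rep(c("WT", "htau"), each = 6, times = 2),
                     shannon  = diversity(counts, index = "shannon"))

# α-diversity: step-wise selection of covariates (and interactions) explaining Shannon entropy
alpha_model <- step(lm(shannon ~ 1, data = meta),
                    scope = shannon ~ exposure * sex + genotype,
                    direction = "both", trace = FALSE)
summary(alpha_model)

# β-diversity: dbRDA on Bray-Curtis dissimilarities with permutation-based step-wise selection
bray <- vegdist(counts, method = "bray")
m0   <- dbrda(bray ~ 1, data = meta)
m1   <- dbrda(bray ~ exposure * sex + genotype, data = meta)
beta_model <- ordistep(m0, scope = formula(m1), direction = "both", permutations = 199)
beta_model$anova                              # summary of the selection path (if any terms kept)
anova(m1, by = "terms", permutations = 999)   # permutation test of each term in the full model
```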
Immunohistochemistry
As the most profound effects of SHS in our earlier published study were seen in wild-type male mice, we only used wild-type male mice for the immunohistochemical analyses (n = 3 mice/exposure condition).
Antibodies. The immunohistochemistry was carried out using a mouse monoclonal antibody (15A3) raised against 8-hydroxyguanosine (8-OHdG) from Santa Cruz Biotechnology (Dallas, TX, USA), anti-APE1 polyclonal antibodies from ThermoFisher and an antibody to the senescence marker p21 from Abcam (ab107099), as described in [25,30]. The quantification of the results of the immunohistochemical analyses for oxidative-stress-induced markers in the CA1 region of the hippocampus and prefrontal cortex is summarized in Table 1 [30]. The brains (right hemispheres) of the male wild-type mice were submerged in 4% buffered paraformaldehyde, cryoprotected in sucrose (30%) and frozen with Tissue-Tek®. Brains from male wild-type mice were selected as this was the most affected group for earlier reported outcome measures in mice chronically exposed to SHS. Cryopreserved brain tissue sections (20 µm) were placed on Superfrost® Plus (VWR/Avantor, Radnor, PA, USA) glass slides (2 sections per slide). The slides were air-dried and, the next day, processed for staining or immunohistochemistry. The slides were warmed to room temperature and the sections were outlined using an ImmEdge® PAP Pen (Vector Labs, Newark, CA, USA) before staining or immunohistochemistry. Brain tissue sections were subjected to antigen retrieval for 45 min using a heated citrate-based unmasking solution (H-3301, Vector Labs, Newark, CA, USA) and subsequently blocked with Tris-buffered saline with 0.1% Tween® (TBST) (Millipore Sigma, Burlington, MA, USA) for 15 min. The unmasked sections were incubated overnight at 4 °C with the primary antibodies described above and subjected to avidin-biotin peroxidase staining by quenching them for 30 min before adding a biotinylated secondary antibody (Vectastain® Elite ABC Kit, Newark, CA, USA) and blocking with Bloxall (Vectastain® Elite ABC Kit, Newark, CA, USA) for 10 min. Slides were incubated with Vectastain® Elite ABC reagent and NovaRed® peroxidase substrate (red/brown staining) (Vector Labs, Newark, CA, USA). The slides were washed with deionized/distilled water, dehydrated with alcohol and xylene substitute, mounted with Cytoseal60® (Fisher Scientific, Norwalk, CA, USA) and imaged using a Leica DM100 microscope (Deerfield, IL, USA) at various magnifications.
Labeled cells in three representative images from the prefrontal cortex and hippocampal areas (CA1, CA3, CA4) were manually counted in a blinded fashion, based on the staining intensity compared to the background signal.
Histochemical Staining
Senescence-associated β-galactosidase (SA-β-gal) staining was performed on brain tissue sections (as described above) using a Cellular Senescence Assay Kit (CBA-230; Cell Biolabs Inc., San Diego, CA, USA) according to the manufacturer's protocol with minor modifications [37]. Cryopreserved brain tissue sections from both air- and SHS-exposed male wild-type mice were incubated for 8 h at 37 °C (in the dark) with the cell staining solution, the staining solution was removed, and the sections were washed with PBS. The stained sections were imaged on a Leica DM100 microscope (Deerfield, IL, USA) at various magnifications. Three representative images from the prefrontal cortex of air- and SHS-exposed mice were manually counted, the counts averaged, and the data analyzed using R as previously described [30].
Statistical Analyses
The statistical analyses for the 16S gut microbiome data are described above. For the statistical analyses of the staining and immunohistochemistry, the counts from each brain region and exposure were averaged and the data were analyzed using a Welch t-test in R (v 4.2.3). The figure representing the quantification of the corresponding data (expressed as means ± SD) was generated using GraphPad Prism (v 9.5.1) software. Correlations between the percentage of 8-OH-dG-labeled cells in the CA1 region of the hippocampus and APE1-labeled cells in the prefrontal cortex with performance in the object recognition test were analyzed using Pearson correlations and GraphPad Prism software (v 8).
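For reference, the two tests named above can be run in a few lines of R; the counts and correlation values below are made-up numbers used only to show the calls.

```r
## Minimal sketch of the tests described above, with hypothetical per-mouse values
pct_air <- c(4.1, 5.3, 3.8)    # % p21-labeled cells, air-exposed wild-type males (made up)
pct_shs <- c(9.7, 11.2, 8.9)   # % p21-labeled cells, SHS-exposed wild-type males (made up)

# Welch t-test (unequal variances is the default in R)
t.test(pct_shs, pct_air)

# Pearson correlation between 8-OH-dG labeling and object recognition performance
pct_8ohdg  <- c(12, 18, 25, 31, 40, 47)   # hypothetical % labeled cells, CA1
novel_freq <- c(22, 20, 17, 15, 12, 10)   # hypothetical novel-object exploration frequency
cor.test(pct_8ohdg, novel_freq, method = "pearson")
```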
α-Diversity of the Gut Microbiome Correlates with Cognitive Performance
The microbiome samples we interrogated comprised taxa typically observed in mouse gut microbiome studies [38,39]. For example, the top ten most abundant genera based on the median relative abundance across all samples were Faecalibaculum (28.4%), Lactobacillus (12.4%), Turicibacter (6.15%), Alistipes (3.31%), a genus within the Lachnospiraceae_NKA136_group (2.49%), Lachnoclostridium (1.69%), Roseburia (1.45%), a genus within the Lachnospiraceae_FC020_group (0.96%), Muribaculum (0.96%), and Desulfovibrio (0.52%). For all analyses, the microbiome α- and β-diversity did not vary as a function of mouse genotype, indicating that the associations presented here are agnostic to the mouse htau genotype. SHS did not significantly affect the α-diversity of the gut microbiome in a general sense, but it did affect the α-diversity of the gut microbiome in a sex-dependent manner. Specifically, for mice in the air exposure group, differences in α-diversity between female and male mice were negligible. However, for male mice exposed to SHS, the α-diversity of their microbiomes was significantly higher than that of female mice (Figure 1). The diversity of the female mouse microbiomes was comparable to the diversity of the air-exposed mouse microbiomes. We observed this pattern for both measures of α-diversity we considered (Figure 1): richness, which measures the number of distinct microbes present in a community, as well as Shannon entropy, which additionally reflects the evenness of the community in terms of the relative abundance of the various taxa that are present.
The α-diversity of the gut microbiome is also associated with several measures of cognitive performance (Figure 2). In the water maze test, hippocampus-dependent spatial retention is assessed in a probe trial (no platform present). Microbiome richness was negatively linked to the cumulative distance to the target location in the first water maze probe trial (p = 0.019), while it was positively linked to the same cognitive measure in the second water maze probe trial (p = 0.006). Richness was also negatively linked to the percentage of hippocampus-dependent spontaneous alternations in the Y maze (p = 0.048).
Measures of Shannon entropy were negatively correlated with the cumulative distance to the target location in the first water maze probe trial (p = 0.031) as well as the percentage of spontaneous alternations in the Y maze (p = 0.033) (Figure 2).
β-Diversity of the Gut Microbiome Correlates with Cognitive Performance
The β-diversity analysis revealed that the composition of the gut microbiome (the specific assemblage of microbial taxa that reside in the gut) varies as a function of exposure to SHS (p = 0.001). However, the composition of male and female mouse microbiomes responded differently to SHS exposure (p = 0.002). In addition to multivariate regression, ordinations were used to visualize the similarity in the composition of microbiome samples. In ordinations, each microbiome sample is illustrated as a single point in a coordinate space, wherein the distance between two points within this space represents the extent of difference in the composition of the two microbiome samples these points represent. Points were color coded to illustrate how samples from mice exposed to SHS differ in their composition relative to samples collected from air-exposed mice (Figure 3). Data points were colored based on the exposure group the corresponding sample was collected from. Moreover, the shape of a point represents whether the corresponding sample was collected from a male or female mouse. To aid in interpretation of the data, 95% confidence intervals are placed around each group of points based on the corresponding sample's exposure (Air versus SHS) and sex (Male versus Female) characteristics. This analysis shows that while male and female mouse microbiomes appear to be statistically indistinguishable for air-exposed mice, male and female mouse microbiomes take on distinct compositions relative to one another in SHS-exposed mice.
The composition of the gut microbiome is also associated with behavioral and cognitive measures: the total distance moved in an open field (p = 0.022), a measure of activity and response to novelty, and the cumulative distance to the target location in the second water maze probe trial (p = 0.041), a cognitive measure. The results of this analysis are visually illustrated in Figure 3B,C. For these ordinations, points are colored based upon the performance score that the corresponding mouse received in each of these two tests. Higher test scores are illustrated with brighter colors. A regression analysis evaluated how these test scores were distributed within the ordination space to clarify the relationship between the β-diversity and test performance. In Figure 3B,C, an arrow overlaid onto this ordination space illustrates the results of this regression. For example, mice with a higher cumulative distance to the target location (worse cognitive performance) in the second water maze probe trial tend to be oriented on the right-hand side of the ordination space, whereas the mice with lower scores for this test (better cognitive performance) tend to be oriented on the left-hand side of this ordination space.
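The projection of behavioral scores onto the ordination (the arrows in Figure 3B,C) can be approximated with a vector-fitting step such as vegan's envfit, sketched below. This is one possible stand-in for the regression analysis described above; all data and column names are invented for illustration.

```r
## Hedged sketch: fitting behavioral scores as vectors onto a β-diversity ordination
library(vegan)

set.seed(1)
counts <- matrix(rpois(24 * 40, lambda = 3), nrow = 24)
scores <- data.frame(OpenField_DistMoved     = rnorm(24, mean = 2500, sd = 400),
                     Probe2_CumDistSEPlat.cm = rnorm(24, mean = 900,  sd = 150))

bray <- vegdist(counts, method = "bray")
ord  <- dbrda(bray ~ 1)                          # unconstrained ordination of the dissimilarities

fit <- envfit(ord, scores, permutations = 999)   # vector fit with permutation p-values
fit

plot(ord, display = "sites")
plot(fit, add = TRUE)                            # arrows point toward increasing test scores
```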
β-Galactosidase
Increased activity of β-galactosidase (β-gal) is a common biomarker of senescent cells in the brain [40]. We assessed the effects of SHS on the percentage of senescence-associated β-galactosidase (SA-β-gal)-labeled cells in wild-type male mice. There was no effect of SHS on the percentage of SA-β-gal-labeled cells in the prefrontal cortex (Figure 4A) or the CA1 region of the hippocampus (Figure 4B). SA-β-gal staining was also similar across other regions of the hippocampus.
Relationship between Percentages of p21-Labeled Cells in the CA1 Region of the Hippocampus and the Prefrontal Cortex
We also assessed the effects of SHS on the percentage of p21-labeled cells, an early cellular marker of senescence [41] that is upregulated in bronchial cells after exposure to cigarette smoke [42]. Nuclear staining of p21-labeled cells was more prominent in larger cells of the prefrontal cortex (Figure 5A) and CA1 hippocampal neurons (Figure 5B) of SHS-exposed mice than in sham-exposed mice. This was consistent with a significantly greater percentage of labelled cells in both the prefrontal cortex and hippocampus (CA1 neurons) of SHS- than air-exposed mice (p = 0.007 and p = 0.0191, respectively) (Figure 5C). In contrast, p21 staining was not different in other regions of the hippocampus.
Relationship between Percentages of 8-OH-dG-Labeled Cells in the CA1 Region of the Hippocampus and APE1-Labeled Cells in the Prefrontal Cortex with Performance in the Object Recognition Test
We also examined the effect of oxidative-stress-induced DNA damage and repair (Table 1) on both behavioral performance and cognition in SHS-exposed mice. We assessed the relationships between these biomarkers of SHS exposure and performance in the object recognition test. The percentage of 8-OH-dG-labeled cells in the CA1 region of the hippocampus was negatively correlated with the frequency of exploring the novel object (r = −0.8482, p = 0.0326, Pearson; Figure 6A) and the time spent exploring the novel object (r = −0.9182, p = 0.0098, Pearson; Figure 6B) in the object recognition test. In addition, the percentage of APE1-labeled cells in the prefrontal cortex was negatively correlated with the time spent exploring the familiar object (r = −0.8176, p = 0.0469, Pearson; Figure 6C).
Figure 4. Senescence-associated β-galactosidase (SA-β-gal) staining in the prefrontal cortex (A) and the hippocampus (B) of mice chronically exposed to SHS. Higher magnifications of the prefrontal cortex are shown in the boxes (34.7×). Stained sections from air- and SHS-exposed mice (n = 3/treatment) were manually counted for SA-β-gal staining, as previously described [30]. There was no effect of SHS on SA-β-gal staining in either brain region.
Figure 5. p21 staining in the prefrontal cortex (A) and the CA1 region of the hippocampus (B) of mice chronically exposed to SHS. These brain areas were stained for the senescence marker p21. Note the prominent staining of cells within the prefrontal cortex and the hippocampal CA1 region of SHS- relative to air-exposed mice (arrows). Higher magnifications (boxes) of the prefrontal cortex (middle and right images in (A)) indicate that the nuclear staining was more prominent in larger cells. (C) Stained sections from both air- and SHS-exposed mice were manually counted for p21 staining, as previously described [42]. The percentages of labelled cells in the prefrontal cortex and hippocampal CA1 region of SHS-exposed mice were significantly greater than in air-exposed mice. Values are expressed as percentages of labeled cells. * p < 0.05, ** p < 0.01, 2-tailed t-tests.
There was no correlation between the percentage of β-gal-labeled cells in the prefrontal cortex and the time or frequency exploring the novel object in the object recognition test. The quantification of the results of the immunohistochemical analyses for oxidative-stress-induced markers in the CA1 region of the hippocampus and prefrontal cortex is summarized in Table 1.
Discussion
The results of this study indicate that SHS exposure significantly impacts the composition of the gut microbiome and that these changes are linked to cognitive impairments. The α-diversity analyses revealed that SHS impacts the biodiversity and evenness of the gut microbiome in a sex-dependent fashion and that these same measures of the gut microbiome broadly link to cognitive performance. Consistent with our earlier cognitive, lung and brain pathological data and plasma and brain metabolomics data [25], chronic SHS exposure also had a significant impact on the gut microbiome of male mice, as indicated by the richness, evenness, and composition of the gut microbiome, which were associated with several measures of cognitive performance. These observations are consistent with prior studies that link SHS exposure to changes in the microbiome, as well as prior studies, including our own, that link mouse cognitive performance and the gut microbiome. However, our analysis is the first to consider how chronic SHS exposure impacts the gut microbiome, and it is also the first to intertwine measures of cognition into our understanding of the impacts of SHS on the microbiome. Collectively, these results support the hypothesis that the gut microbiome contributes to the effect of SHS exposure on cognition. Additional studies will be needed to verify the causal role of the microbiome in this process.
The water maze and Y maze assess hippocampus-dependent learning and memory, so the results of this study generally point to a connection between the α-diversity of the gut microbiome and hippocampus-dependent learning and memory. That said, the opposing associations between richness and the two subsequent water maze probe trial results indicate that the observed relationship between α-diversity and spatial memory retention may be complex; while the first water maze probe trial occurred 24 h after the mice were trained for two days to locate a hidden platform, the second water maze probe trial took place 72 h after the mice were trained for three days to locate the hidden platform. We postulate that the different associations between performance in these two probe trials and the gut microbiome relate to the different challenges associated with these two spatial memory retention tests due to the distinct intervals between the last hidden platform training trial and the spatial memory assessment.
We recognize that because of the environmental challenge in the SHS study, 10 months of SHS exposure for 7 days per week, there could be an effect of aging as well. In light of the long SHS exposure, the genotype difference in the effects of SHS on the gut microbiome might be subtle. This is of translational relevance considering environmental exposure to SHS in humans with different genetic backgrounds. What is striking is that there is a profound sex difference that is apparently stronger than a possible htau genotype effect. This is an important result because our prior gut microbiome studies also found that genotype and sex effects on the gut microbiome significantly correlated with behavioral changes [38,39,43]. The percentage of 8-OH-dG-labeled cells in the CA1 region of the hippocampus, a region involved in the formation, consolidation, and retrieval of hippocampus-dependent memories and the main output of the hippocampus [44], also correlated negatively with the frequency of exploring the novel object and the time spent exploring the novel object in the object recognition test. Consistent with these data, elevated urinary 8-OHdG levels, a biomarker of smoking status [45], were associated with lower global cognitive scores in 45-75-year-old adults, after adjustment for age, education, and apolipoprotein E4, a genetic risk factor for age-related cognitive decline and Alzheimer's disease [46]. Urinary 8-OHdG levels have also been used as a biomarker to distinguish Alzheimer's patients from cognitively healthy controls [47,48]. Similarly, higher serum levels of 8-OH-dG 24 h after hospital admission were associated with early cognitive impairments, as assessed by the Mini-Mental State Exam 30 days later, in patients with stroke [49]. However, this biomarker is not specific for cognitive injury, as higher serum levels of 8-OH-dG 24 h after hospital admission were also associated with depression, as assessed by the Hamilton Depression Score, 30 days later [50]. These data are consistent with the hypotheses that particular microbiota might induce oxidative stress in the brain [51,52] and, vice versa, that brain injury might affect the gut microbiome [53].
Oxidative DNA damage is repaired by both the base-excision pathway and APE1-initiated activation of the non-homologous end joining pathway in cortical neurons [54,55]. The percentage of APE1-labeled cells in the prefrontal cortex also correlated negatively with the time spent exploring the familiar object. Unless exploring the familiar object less is associated with exploring the novel object more, this is not a cognitive measure. These data suggest that APE1 might be more of a general marker of oxidative-stress-induced DNA damage and senescence than of cognitive performance.
Oxidative-stress-induced DNA damage can also induce changes in the neuronal cell cycle to activate a persistent DNA damage response, leading to neuronal senescence [56]. SHS exposure increased the levels of p21-labeled cells in the prefrontal cortex and hippocampus, but not β-galactosidase activity, a widely used marker of cellular senescence that is also elevated in neurons with high energetic and metabolic demands [57]. As p21 is an early marker of cell senescence [58], these results suggest that exposure to SHS might result in accelerated brain aging through oxidative-stress-induced injury.
While a unique strength of the experimental design used is that it involves an environmentally controlled exposure pertinent to humans, we recognize the following limitations: (1) as mice were exposed for 10 months, we cannot distinguish chronic effects of SHS from the effects of aging and interactions between the two; future studies, starting the exposure either earlier or later in life, might help address this concern. (2) While used as a mouse model pertinent to Alzheimer's disease, the human tau mice express wild-type tau; future efforts are warranted that include mouse models expressing human mutations of genes that modulate the risk of developing neurodegenerative disease. (3) In our paradigm, mice were exposed for 168 min per day; it is conceivable that exposure of mice for longer periods of time per day might cause more pronounced effects on the gut microbiome and brain. Finally, (4) while the current study revealed associations between the gut microbiome and behavioral measures, this does not prove causality; future studies involving fecal transplants from mice exposed to SHS into germ-free mice should be considered to determine whether alterations in the gut microbiome are sufficient to induce phenotypes in the recipient mice.
Conclusions
In summary, SHS exposure impacts the composition of the gut microbiome, and this in turn is linked to cognitive impairments. The increased levels of 8-OHdG in the CA1 region of the hippocampus were also associated with performance in the novel object recognition test, consistent with urinary and serum levels of 8-OHdG serving as a biomarker of cognitive performance in humans. SHS exposure also increased APE1 in the prefrontal cortex. Nuclear staining of p21-labeled cells was more prominent in larger cells of the prefrontal cortex and CA1 hippocampal neurons of SHS-exposed mice than in sham-exposed mice, and there was a significantly greater percentage of labelled cells in the prefrontal cortex and CA1 region of the hippocampus of SHS- than air-exposed mice, suggesting that exposure to SHS might result in accelerated brain aging through oxidative-stress-induced injury.
Figure 1 .
Figure 1.Microbiome richness (i.e., number of taxa in a sample) (Left and Shannon entropy (i.e., number of taxa in a sample weighted by taxon abundance) (Right) were increased in male mice exposed to SHS.In contrast, SHS had no detectable effect on measures of α-diversity in female mice.Points represent an α-diversity measure from a single mouse in the study.Horizontal lines represent 95% confident intervals based on a bootstrapping analysis.Samples from SHS-exposed mice are in orange, while samples from air controls are in green.The results are shown with genotype collapsed.
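As a concrete gloss on the two α-diversity measures named in this caption, here is a minimal sketch in Python (the function names and the example count vector are illustrative assumptions, not the authors' code): richness counts the taxa present, while Shannon entropy weights each taxon by its relative abundance.

```python
import numpy as np

def richness(counts):
    """Richness: the number of taxa observed (nonzero counts) in a sample."""
    return int(np.count_nonzero(np.asarray(counts)))

def shannon_entropy(counts):
    """Shannon entropy: H = -sum(p_i * ln p_i) over taxa with nonzero abundance."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical taxon count vector for one fecal sample
sample = [120, 30, 0, 5, 45]
print(richness(sample))                   # -> 4 taxa present
print(round(shannon_entropy(sample), 3))  # entropy in nats
```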
Figure 2. Scatterplots illustrating the relationship between measures of mouse microbiome α-diversity and measures of cognitive performance. The top row of plots illustrates results based on microbiome richness (i.e., number of taxa in a sample), while the bottom row illustrates results based on Shannon entropy (i.e., number of taxa weighted by taxon abundance). Each column represents one or more plots based on a particular measure of cognition (Y.maze_PctSponAlt: percentage of spontaneous alternations in a Y maze; Probe1_CumDistSEPlat.cm: cumulative distance to the target location in the first water maze probe trial; Probe2_CumDistSEPlat.cm: cumulative distance to the target location in the second water maze probe trial). Points represent individual microbiome samples. Blue lines illustrate the line of best fit as determined by linear regression. Only significant associations after multiple test correction are shown here.
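The caption describes per-measure linear fits followed by multiple-test correction. As a rough illustration of that kind of workflow (not the authors' actual pipeline; the data are simulated, and the choice of Benjamini-Hochberg correction is an assumption, since the paper does not name its method here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated data: one α-diversity measure and three cognitive measures
alpha_div = rng.normal(50, 10, size=24)
cognition = {
    "Y.maze_PctSponAlt": rng.normal(60, 8, size=24),
    "Probe1_CumDistSEPlat.cm": rng.normal(300, 50, size=24),
    "Probe2_CumDistSEPlat.cm": rng.normal(280, 60, size=24),
}

# One simple linear regression per cognitive measure; collect raw p-values
results = {name: stats.linregress(alpha_div, y) for name, y in cognition.items()}
pvals = np.array([r.pvalue for r in results.values()])

# Benjamini-Hochberg step-up adjusted p-values (assumed correction method)
order = np.argsort(pvals)
m = len(pvals)
q = pvals[order] * m / np.arange(1, m + 1)
adjusted = np.empty(m)
adjusted[order] = np.minimum(np.minimum.accumulate(q[::-1])[::-1], 1.0)

for (name, r), p_adj in zip(results.items(), adjusted):
    print(f"{name}: slope={r.slope:.3f}, raw p={r.pvalue:.3f}, adjusted p={p_adj:.3f}")
```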
Figure 3. Ordinations of microbiome β-diversity. The Bray-Curtis dissimilarity metric quantified the dissimilarity in community composition across samples while weighting these differences based on the abundance of taxa. These Bray-Curtis dissimilarities were used to generate the distance-based redundancy analysis ordination illustrated here, wherein samples are represented by points in the ordination. This ordination is color coded in three forms representing three different analyses: (A) the relationship between β-diversity and SHS exposure as well as mouse sex; (B) the relationship between β-diversity and distance moved in the open field test; (C) the relationship between β-diversity and cumulative distance to the target location in the second water maze probe trial. The main text provides specific information on how to interpret each of the above plots.
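Since this caption leans on the Bray-Curtis metric, a minimal sketch of how it is computed may help (illustrative only; the count vectors are invented and this is not the authors' code):

```python
import numpy as np
from scipy.spatial.distance import braycurtis  # reference implementation

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity: sum|u_i - v_i| / sum(u_i + v_i), in [0, 1]."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return np.abs(u - v).sum() / (u + v).sum()

# Two hypothetical taxon count vectors from different samples
a = [10, 0, 4, 6]
b = [2, 8, 4, 0]
print(bray_curtis(a, b))  # ~0.647: fairly dissimilar communities
print(braycurtis(a, b))   # matches the hand-rolled version
```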
Figure 4. Senescence-associated β-galactosidase (SA-β-gal) staining in the prefrontal cortex (A) and the hippocampus (B) of mice chronically exposed to SHS. Higher magnifications of the prefrontal cortex (boxes: 34.7x). Stained sections from air- and SHS-exposed mice (n = 3/tx) were manually counted for β-Gal staining, as previously described [30]. There was no effect of SHS on β-Gal staining in either brain region.
Figure 5. p21 staining in the prefrontal cortex (A) and CA1 region of the hippocampus (B) of mice chronically exposed to SHS. These brain areas were stained for the senescence marker p21. Note that there was more prominent staining of cells within the prefrontal cortex and the hippocampal CA1 region of SHS- than air-exposed mice (arrows). Higher magnifications (boxes) of the prefrontal cortex (middle and right images in (A)) indicate that the nuclear staining was more prominent in larger cells. (C) Stained sections from both air- and SHS-exposed mice were manually counted for p21 staining, as previously described [42]. The percentages of labeled cells in the prefrontal cortex and hippocampal CA1 region of SHS-exposed mice were significantly greater than in air-exposed mice. Values are expressed as percentages of labeled cells. * p < 0.05, ** p < 0.01, 2-tailed t-tests.
Figure 6. (A) The percentage of 8-OH-dG-labeled cells in the CA1 region of the hippocampus was negatively correlated with the frequency of exploring the novel object (r = −0.8482, p = 0.0326, Pearson). (B) The percentage of 8-OH-dG-labeled cells in the CA1 region of the hippocampus was negatively correlated with the time spent exploring the novel object (r = −0.9182, p = 0.0098, Pearson). (C) The percentage of APE1-labeled cells in the CA1 region of the hippocampus was negatively correlated with the time spent exploring the familiar object (r = −0.8176, p = 0.0469, Pearson).
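The Pearson statistics in this caption can be checked with a few lines. The per-mouse values below are invented placeholders, but the sanity check at the end uses only the reported r together with n = 6, which is the sample size consistent with the reported p-values:

```python
import numpy as np
from scipy import stats

# Invented per-mouse values (illustrative only, not the study's data)
pct_8ohdg_ca1 = [12.0, 18.5, 22.1, 25.3, 30.0, 34.2]  # % labeled cells, CA1
novel_time    = [41.0, 38.0, 30.5, 27.0, 22.0, 18.5]  # time exploring novel object (s)

r, p = stats.pearsonr(pct_8ohdg_ca1, novel_time)
print(f"invented data: r = {r:.4f}, p = {p:.4f}")

# Sanity check against panel (B): r = -0.9182 with n = 6 gives
# t = r*sqrt(n-2)/sqrt(1-r^2), and a two-tailed p close to the reported 0.0098.
r_rep, n = -0.9182, 6
t = r_rep * np.sqrt(n - 2) / np.sqrt(1 - r_rep**2)
p_rep = 2 * stats.t.sf(abs(t), df=n - 2)
print(f"reported r with n = 6 implies p = {p_rep:.4f}")  # ~0.0098
```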
Table 1. Summary of immunohistochemical analyses reported in [27]. Oxidative DNA damage and repair biomarkers in the air- and SHS-exposed mouse prefrontal cortex (PFC) and various regions of the hippocampus (HIPP).
"Biology",
"Psychology"
] |
The Happiness that Qualifies Nonduality: Jñāna, Bhakti, and Sukha in Rāmānuja’s Vedārthasaṃgraha
The great eleventh-century figure, Rāmānuja, belonged to the Śrīvaiṣṇava community that worshiped the divine as Viṣṇu-with-Śrī, the Lord-and-Consort. But he also embarked on a project to develop an interpretation of the first-century Vedāntasūtra, which presented the supposedly core teachings of the major Upaniṣads, traditionally the last segment of the sacred corpus of the Vedas. Rāmānuja sought to reconcile the devotional commitments of Śrīvaiṣṇavism—which was built on the human yearning for the divine that was incomprehensibly Other while graciously accessible—with the conceptual demands of the Vedānta in which a profound identity between the individual self (ātman) and the impersonal, ultimate explanatory principle (brahman) was taught. This reconciliation of difference and identity came to be called "Qualified Nondualism." In his earliest work, the Vedārthasaṃgraha, one of the ways Rāmānuja points to reconciliation is through identifying a single state of consciousness as both cognition of nonduality (the Vedāntic project) and the emotionally valent experience of happiness (the supreme expression of the human encounter with the divine). He does not seem to pursue systematically this conception of how a state can be both cognitive and affective; such an account requires independent philosophical analysis. This article argues that if a state of consciousness were indeed both cognitive and affective in this way, we would have a full explanation of how the devotional approach of a human being to Viṣṇu-with-Śrī can also be the self's realization of identity with brahman.
A fair amount is known about the great Hindu figure, Rāmānuja (although there are competing theories about how and why he is traditionally assigned the extraordinarily long life span of one hundred and twenty years and what a more realistic dating might be). This is because Rāmānuja was born in the Tamil land among those who had already coalesced in the previous three hundred years into a mainly Brāhmaṇa community devoted to the worship of Viṣṇu and his consort Śrī (and therefore called Śrīvaiṣṇavas). The community also ensured that his works were carefully noted, and there is only some controversy about Rāmānuja's oeuvre. His critical works are his first composition, an independent summary of his views, the Vedārthasaṃgraha; his commentary on the Vedānta Brahmasūtra, the Śrībhāṣya; and the commentary on the Bhagavadgītā, the Gītābhāṣya. He is generally supposed to have written two shorter commentaries on the Brahmasūtra, the Vedāntadīpa and the Vedāntasāra. There is controversy over whether he wrote the rather different prose hymns, the Gadyatraya, which introduce ideas not found in the acknowledged works (Lester 1966). Unlike Śaṅkara (whom history sees as his great rival in competing visions of the Vedānta), whose teachings appear not to have had an immediate effect on the community that he left behind on becoming a renunciate, Rāmānuja continued to work among his community even after renunciation and strengthened their theological identity immeasurably. Although competing interpretations of his views fed into the formation of two subgroups within the community, there continues to be a close association between the philosophical theology he developed, called Viśiṣṭādvaita or Qualified Nonduality, and the Śrīvaiṣṇava community.
It is a matter of interpretation whether Rāmānuja was primarily a Vedāntin who assimilated popular Vaiṣṇava devotionalism into the Brāhmaṇa community (van Buitenen 1966) or whether he was a Vaiṣṇava who took the Vedānta as "the general framework" to present his sectarian religion (Kumarappa 1934: 185). The community finds these mono-directional aetiologies baffling, taking him to synthesize the philosophical and devotional aspects of his tradition harmoniously within his theology (Narasimhachary 2004). The Western scholarly understanding of this integrated view is first and best articulated in a classic work by John Braisted Carman (1974).
As Swāmī Ādidevānanda says about the Vedārthasaṃgraha, it "occupies a unique place inasmuch as this work takes the place of a commentary on the Upaniṣads, though not in a conventional sense or form. The work mirrors a total vision of the Upaniṣads, discussing all the controversial texts in a relevant, coherent manner. It is in fact an independent exposition of the philosophy of the Upaniṣads" (1956: i). Drawing not only on the Upaniṣads themselves but also on many previous teachers (many of whose works are not extant), as well as the Mahābhārata, the Rāmāyaṇa, and the Viṣṇu Purāṇa, Rāmānuja seeks to contextualize the various lines of thought in the Upaniṣads in such a manner as to present them as being reconciled within his system. In brief, he argues that passages which affirm the complete difference between the grounding principle brahman, world, and self, and those that speak of the complete subsumption of the latter two within the first, are reconcilable with the teachings where brahman is presented as the essential self of selves and the world (Ādidevānanda 1956: ii). Brahman, so understood, is God Viṣṇu. As such, the text lays out a comprehensive, theistic rendering of Rāmānuja's original reading of the doctrines of the Upaniṣads.
In this article, I focus on a specific concept in his masterful début, the Vedārthasaṃgraha. Towards its peroration, 1 Rāmānuja says: We have already said that the means of attaining brahman is supreme devotion in the form of intense meditation that enters into perception of utmost pellucidity, immeasurably and preeminently dear to the devotee; it is attained by steadfast devotion that is preceded by awareness of the real as learned from sacred teachings and furthered by the performance of one's apt actions. The term "devotion" has the sense of a distinguishing love. And so, too, "love" of a distinguishing awareness. "However, love is commonly related to a worthless happiness, and the happiness that is to be won by a distinguishing awareness is something else altogether." Not so. For whatever distinguishing awareness it is that is said to win it [that is, happiness], that distinguishing awareness itself is happiness (141). 2 The identification made here between awareness-specifically, the gnosis of nondifference-and devotion through the twinned notion of love (prīti) and happiness (sukha) is critical to Rāmānuja. While it would be a far larger undertaking to demonstrate that this identification is central to his lifelong project, for this article I suggest that it is helpful to have a general idea of this project, at least to see how he lays out the connection between this particular construal of cognition and this particular emotional nexus of love and happiness.
What is this project? I propose to look at it as the ultimate harmonization of ontic nonduality and devotional relationality. For Rāmānuja, there are two different aspects of belonging to his tradition. The first is the commitment to an account of reality founded on his reading of the Brahmasūtras as the essence of the teachings of the Upaniṣads. The other is the call to love of God as Viṣṇu-with-Śrī. He is therefore left with these two contrary considerations. Let me try and explain what I mean.
Unicity and Otherness: The Project of Reconciling Vedānta and Bhakti
One side of Rāmānuja's project is driven by what might be called the "Vedāntic imperative." He has to offer an account of how our being is encompassed by the ground of all being that is brahman. This is laid out, deriving from Rāmānuja's reading of the Upaniṣadic terms of the Brahmasūtras, as a metaphysics of consciousness. Drawing on my previous work (Ram-Prasad 2013), we can see this Vedāntic work as gnoseology combined with ontotheology. I take the term "ontotheology," as described by Martin Heidegger, to mean the subsumption of all being in the Being of God. In other words, this is the taking of God as the supreme Being who encompasses all other beings and modes of being, a reflex that is widely shared across theistic traditions; so much so that to this day, people in different religions take this to be the most basic definition of God. (Heidegger subjects this understanding to probing criticism that has influenced many subsequent Christian theologians. In my examination of Rāmānuja's commentary on the Bhagavadgītā (Ram-Prasad 2013: Chapter 2), I showed that Rāmānuja was unique in inaugurating a theology that simultaneously granted a role for this ontotheology-because of his theistic reading of the Vedāntic identification of ātman as individual self and brahman as Lord Nārāyaṇa/Viṣṇu-while also limiting it by saying [as some Christian theologians only in the past few decades have done] that the God of love is beyond being, beyond metaphysical understanding.) Rāmānuja locates this ontotheology within a gnoseology, that is to say, a system of gaining insightful awareness (the Greek "gnosis" here functioning exactly like its cognate "jñāna"). Since the Vedānta has the authority of sacred text, its teachings make it function as the prompt to the revelation of an account of reality for the one who seeks to know brahman. Upon approaching the texts with proper preparation and investigation with the appropriate virtues of conduct, the authorized individual comes to the point of having self-consciousness of the self's non-separateness from the consciousness of brahman (Ram-Prasad 2013: 41-47). As is well known in the tradition, Rāmānuja makes several pedagogic moves to convey the nature of this relationship of non-separateness. (This is, of course, quite distinct from-and explicitly opposed to-Śaṅkara's denial of any internal relationship in the nonduality of self and brahman.) The most celebrated such move in itself also provides a multilevel ontology: this is Rāmānuja's mapping of the individual self's relationship to its material body onto brahman's relationship with all being. That is to say, just as ātman is to body, so supreme brahman is to all reality (which encompasses all selves and the whole world). 3 So we have this Vedāntic side of Rāmānuja. In it, he does two things: firstly, he develops an ontotheology of how all being is contained in God's Being, thereby uniting individual selves within divine brahman while also distinguishing us from brahman; secondly, he locates this within the ancient Upaniṣadic practice of self-realizing inquiry-that is, gnoseology-about the self-as-brahman.
The Vedāntic imperative for Rāmānuja leads to the unicity of self and brahman (even if this is qualified by a distinguishability that does not collapse all differences). But on the other hand, we have what we could call the Śrīvaiṣṇava imperative. This is to explain the profound, existential orientation of the person in a loving relationship with God, Nārāyaṇa-with-Śrī. Here, Rāmānuja presents not primarily the Vedāntic exposition of a gnostic ontotheology but the lyrical adoration of a God available to the devotee in an intricate, richly detailed personal presence. Within his community, this is elaborated through ritualized devotional practices that express an intense connection with God. What is critical here is that such a connection, by the very structure of love and grace, is not one of nonduality (howsoever qualified). The divine person is the focus of the highly charged emotional attention of the individual who possesses the entire panoply of bodily means of expression-sight to see the deity's concretized icon, tongue to give word to the devotional song, the senses all to be saturated in worship, and the cognitive capacities to receive the divine presence. 4 Here, otherness is integral to Rāmānuja's worldview: one does not love oneself but the Other in all its mysterious disclosure to oneself.
I have previously argued that the culmination of Rāmānuja's vision is not his Viśiṣṭādvaitic ontotheology but the loving relationship with a God who is beyond Being, a conception amply testified in Rāmānuja's commentary on the Gītā (Ram-Prasad 2013: 71-75). There, Rāmānuja's main line of thought is that the attainment of qualified nondual consciousness between the being of the individual self and the being of brahman-as-God is a preparation for the proper expression of loving devotion by the individual towards God. In the passages from the Vedārthasaṃgraha that we will now consider, it seems as if this construal is already present. But I also think that it is possible to see a more radical identity between Vedāntic cognition and devotional love: that is to say, Rāmānuja could be saying that the gnostic, or self-realizing, awareness of qualified nondifference is also, properly speaking, the love of God. To my knowledge, no attempt has been made to look closely at this conceptual move, largely implicit in Rāmānuja.
This article will explore the relevant passages in more detail and conclude by laying out these alternative readings of Rāmānuja's view of the relationship between Vedāntic awareness of unicity and loving bhakti towards the divine Other.
The General Theory: Happiness as Awareness
To recapitulate, Rāmānuja claims that "whatever distinguishing awareness it is that is said to win it [that is, happiness], that distinguishing awareness itself is happiness" (141). He places this claim within a very general theory regarding the nature of awareness.
That is to say, awareness of objectual content comes under happiness, sorrow, and intermediate categories. They become individuated according to their objectual content. If awareness is individuated by particular objectual content that generates happiness, it is accordingly held dear; awareness that has that objectual content is itself happiness, for we observe no other corresponding thing. This is because what occurs is just the practical behavior of being happy (142). 5 "Objectual content" (viṣaya) is a compendious term for what makes a state of awareness that individuated state by virtue, in some way, of being of a particular object and not another. Such a catch-all definition does not tell us specifically how a state of a subject is "of" that subject (for example, later Viśiṣṭādvaita sharply distinguishes itself from Advaita on the relationship between particular states of awareness and the subject who is aware, as well as whether the subject is simply awareness or possesses awareness as a quality). Nor does it tell us the exact way in which an object becomes content (for example, whether it is by being an external cause, whether it is by a representation, or whether this implies a realist ontology of objects independent from cognitions of them, and so on). Since Rāmānuja does not specify further, we can see here that there is some systematic correlation between the things there is awareness of and the awareness being of that thing and not another: the awareness of an object individualizes that (state of) awareness.
Rāmānuja uses this formal understanding of objectual content to make two bold moves. The first move is to assert that a tripartite division can thematize awareness according to the emotional character (the nature of emotion) and valence (the value of emotion to the subject). All awareness is either happy, sorrowful, or something in-between. It must be acknowledged that one might want to stop and protest that this is inadequate when thinking of emotional character as a whole (especially given the rich understanding of rasas and bhavas available in the surrounding culture). There are at least two kinds of objections along these lines: (i) it is unclear what the emotion or emotions might be that could be identifiable with cognitions with "intermediate" objectual content; and (ii) it seems a highly schematic view of emotions to think of them as entirely lying on a continuum between happiness and sorrow. Nevertheless, let us grant that the notion of the "intermediate" is sufficiently flexible to allow abstract space for a richer depiction of emotions; there is perhaps nothing conceptually incoherent in such a claim.
The second move is ingenious. If we grant that objectual content is saturated with emotional character-because objects are bearers of emotional valence-then the awareness of an object is also in some way an emotional orientation towards that object. This, in turn, can only mean that the awareness of the object also carries emotion concerning that object. Emotion is the awareness commensurate with the object, and we should not distinguish between awareness of objects and emotions regarding them as two separate subjective states. This is because there is no second or separate subjective state found; Rāmānuja argues that one cannot identify an emotional orientation towards an object over and above the awareness of it. We find a situation where just how a person is aware of what makes her happy manifests that happiness. Becoming aware of the happy-making thing is nothing more than being happy, and being happy is showing happiness. And so, Rāmānuja argues, showing happiness is nothing other than showing awareness of the happy-making thing.
The Equation of Brahman and Happiness
The equation of awareness and happiness is not a theory for its own sake but a step in Rāmānuja's larger argument. He indicates this by now drawing attention to the familiar Vedāntic identification of brahman and ultimate happiness.
The individuation of awareness as intrinsically happy in the case of anything other than brahman is inferior and unfixed, but with brahman itself, it is preeminent and fixed. It is said, "brahman is bliss" (Taittirīya Upaniṣad 3.6). Since awareness is really happiness if its objectual content is intrinsically of happiness, brahman itself is happiness (142 continued). 6 We now see the beginning of what will become the crucial problem Rāmānuja realizes he faces once he equates awareness with happiness: if the general theory is accepted, and the valency of objects determines the emotional character of awareness of them, where does that leave the gnoseological path to brahman? To preserve the general theory while also carving out the special nature of the awareness of brahman, Rāmānuja needs to point out that all objects, in general, produce emotional awareness, including happiness. However, such happiness is fundamentally different from the happiness of brahman-awareness.
He returns to this issue later. But first, he must also clearly signal yet again that brahman, as presented here, is Lord Nārāyaṇa: indeed, he certainly identifies a series of sacred terms (Vedārthasaṃgraha 6) with one another: the self of all (sarvātmā), supreme brahman (paraṃ brahma), supreme self (paramātmā), Being (sat), and First Person (puruṣottama). Once more, it would be a different task to explore in detail the theological potency of the semantics involved; 7 here, we only need to remember that the ontotheology of brahman and the devotional theology of Nārāyaṇa are two aspects of the approach to the one God who is Being, yet also the Person beyond Being.
Thus it is said, "It is the essence, for only when one has got that essence will one become blissful" (Taittirīya Upaniṣad 2.7); the meaning of this is that brahman alone is happiness, and getting brahman one becomes happy. The Supreme Person, being by himself and on his own, boundless and preeminent happiness, becomes the happiness of another too, being of intrinsic happiness quite generally (not in any particular manner). This means that one for whom brahman becomes the objectual content of awareness becomes happy (142 continued). 8 This is the crux upon which the ontotheology of nondifference between selfawareness and awareness of brahman turns into the qualified relationship of the self's happiness in awareness of brahman's intrinsic happiness. The crux is the obtainment, the "getting," of brahman. Obtaining brahman for Rāmānuja is not the switch in consciousness whereby awareness realizes its nonduality, as Ś aṅkara would have it; it is entering into a certain relationship with brahman in which ultimate happiness occurs in and as self-awareness. That relationship, of course, is Rāmānuja's celebrated concept that I translate as "supplementarity" (śeṣatva) but can also be taken as "being an accessory."
Happiness and the Question of Independence
We need not here delve into the details of supplementarity, where the self is a supplement/accessory and the Lord is the principle (the technical definition-based on a theological rereading of the relationship between the purposes of sacrificial ritual-is introduced by Rāmānuja at Vedārthasaṃgraha 121). What is important here is that such a relationship means that the self is entirely dependent upon the Lord. The supplement or accessory is the locus of pervasion (vyāpya) by the principal, which is to say, the occurrence of the former is invariably simultaneous with that of the latter. (Hence my choice of "supplement"-think of the supplement of a book: there can be a book without a supplement, but a supplement is a supplement only under the book.) When it is realized that the brahman of maximal divine qualities is the principal (śeṣin) of all, of which selves are supplementary (śeṣa), and is the object of preeminent love (atiśayaprītiviṣayaṃ), then that supreme brahman makes itself attainable to the self (sat paraṃ brahmaivainam ātmānaṃ prāpayatīti) (142 continued).
Rāmānuja now steps back and imagines that an opponent might find happiness and dependence incompatible. He summarizes the objection thus: "we see that all conscious beings only wish for independence" (sarveṣām eva cetanānāṃ svātantryam eva iṣṭatamaṃ dṛśyate), for other-dependence (pāratantryaṃ) is sorrow (duḥkam) (143).
But he is quick to respond that this anxiety for independence is due to an existential misreading: But this is from someone who has the appropriative conceit that knits together the flesh and the self, not having learned that the intrinsic nature of the self is distinct from the body. It is thus: the body, which is just a lump upon which qualities such as generic humanness, etc., dwell, is considered independent. Those who are in the bonds of rebirth take it to be the "I" (143). 9 Ordinary existence is marked by "appropriative conceit" (abhimāna)-the particular implication of the more general translation of abhimāna as "conception"-wherein self-awareness is vitiated by conflation with awareness of the body. Egoity-self-centredness-arises when what the self is centered on is not-self; for if the self were centered on what it is, that would be nothing other than to be centered on its relationship with God, and then there would be no misleading conceit. Egoity generates a misplaced sense of independence, and it is only then that the anxious question put by the opponent arises.
Whatever conceit one has of the self, one holds the ends of life that are congenial to it. 10 This conceit about the self settles what happiness is, according to whether it is a lion, tiger, boar, man, semi-divine attendant, demon, fiend, god, antigod, male, female, and so on. But these [ends] are mutually incompatible (143 continued). 11 With a flourish, Rāmānuja points out the explanatory power of the concept of appropriative conceit. The diversity of life forms is due to that primal tendency of the self to seek an independent existence as one or another creature. The diversity, however, is united by the same cosmic error, which takes self-awareness to be aware of the bodily form in life.
Therefore all is put together by whatever aim of life is peculiar to the appropriative conceit of the self (143 continued). 12
Awareness of Supplementarity: The Path of Happiness
Rāmānuja then explains why this longing for independence within a life-form's aims of life-which arises from the appropriative conceit of taking the self to be one or another bodily being-is wrong. As we would expect, this explanation takes us back to supplementarity, which is the very opposite of mistaken independence. The intrinsic form of the self is solely of the aspect of awareness, characteristically other than the body of deities, etc. Its intrinsic form is only of supplementarity to the Other (143 continued). 13 Self-awareness, properly speaking, is awareness of self, not bodily being; this proper self-awareness is constitutively and veridically awareness of self as a supplement to the divine, the supreme self, the Supreme Person. In answer to the implied question on the persistence of the existential error, Rāmānuja cannot do more than say that the prideful conceit of independence (svātantryābhimāna) should be understood as erroneous cognition (viparītajñāna) due to the operation of consequentiality (karmakṛtva). In other words, we can point to the process by which the error persists, which is the tie between how individuals act and the existential consequences that are visited upon them across different bodily lives. But we cannot find the originary point of disclosure. 10 ātmābhimāno yādṛśas tadanuguṇaiva puruṣārthapratītiḥ / 11 siṃhavyāghravarāhamanuṣyayakṣarakṣaḥpiśācadevadānavastrīpuṃsavyavasthitātmābhimānānāṃ sukhāni vyavasthitāni / tāni ca parasparaviruddhāni / 12 tasmād ātmābhimānānuguṇapuruṣārthavyavasthayā sarvaṃ samāhitam /
Consequentiality as an Explanation of Variable Happiness
The resort to karman loops back to the earlier point that there are two classes of happiness-the intrinsic, eternal happiness of the (self-)awareness of God, and the contingent, transient happiness from objects of bodily awareness. Now, why would we ever choose the latter?
Thus, it is also due to consequentiality that objects other than the Supreme Person induce happiness (143 continued). 14 This entanglement of happiness in karman causes persistent orientation to objects, even though the happiness they yield is as nothing to the happiness due to God.
All things other than brahman lack intrinsic happiness and produce it only due to consequentiality (143 continued). 15 He goes on to say the very structure of objectuality is given by the operation of consequentiality: the identity of objects is determined through how they affect and are affected by the self-awareness that is misplaced in bodily being.
What makes an entity-that is exactly of the form of happiness or suffering-that very entity? Its singularity is due to the operation of good or evil [consequentiality]. 16 This is what generates the diversity of experience. The happiness of one is the sorrow of another, and the happiness of one at one time is her sorrow at another time. All the variation between and within people of happiness and suffering concerning the same entities can be explained through the working out of karman, not due to the intrinsic nature of those entities. They might remain stable in their other qualities, but the removal of karman removes the happiness or suffering due to them. This is because happiness-the quality of producing happiness-as-awareness-is not the intrinsic form (svarūpa) of these objects. When consequentiality ends through self-awareness (that is, the gnoseological attainment discussed at the beginning of this article), then the happiness yielded by objects goes away, too (143 continued).
Serving, Knowing, and Loving God
All this is really about the fundamental distinction between the happiness of God and that of the objects of the world, for Rāmānuja has already said earlier in 143: Then, it is also due to the operation of consequentiality that objects other than the Supreme Person bring happiness. They are thus mean and unfixed, while the Supreme Person alone is happiness in himself. 17 All entities other than brahman are intrinsically lacking in happiness due to the operation of consequentiality and unfixed.… 18 He then grants that dependence as such is not a desirable condition. The opponent's earlier quotation of Manusmṛti 6.160 that "all subservience is sorrow" (sarvaṃ paravaśam duḥkhaṃ) (at 143) does hold, for the mutuality of supplement and principal (parasparaśeṣaśeṣibhāva) does not hold for the self with anything other than the Supreme Person (144). In all other cases, supplementarity does produce sorrow because it lacks mutuality with God. Rāmānuja has already said that, in the case of God, dependence does not produce sorrow. 14 ataḥ karmakṛtam eva paramapuruṣavyatiriktaviṣayāṇāṃ sukhatvam / 15 brahmavyatiriktasya kṛtsnasya vastunaḥ svarūpeṇa sukhatvābhāvaḥ karmakṛtatvena ca / 16 sukhaduḥkhādyekāntarūpiṇo vastuno vastutvaṃ kutaḥ / tadekāntatā puṇyapāpakṛtetyarthaḥ / 17 ataḥ karmakṛtam eva paramapuruṣavyatiriktaviṣayāṇāṃ sukhatvam / ata eva teṣām alpatvam asthiratvaṃ ca paramapuruṣasyaiva svata eva sukhatvam / 18 brahmavyatiriktasya kṛtsnasya vastunaḥ svarūpeṇa sukhatvābhāvaḥ karmakṛtatvena cāsthiratvaṃ…
Let us sum up the sequence of ideas so far. Intentional awareness takes on the content of its object. Objects have emotional valency, resulting in happiness, sorrow, or something intermediate. But emotion is no different from awareness since no extra subjective state is detected; the manifestation of awareness is nothing other than the expression of the relevant emotion. The Vedānta says (on Rāmānuja's reading) that self-awareness is awareness of brahman.
Given that (for Rāmānuja) brahman is the Supreme Person, Nārāyaṇa, the assertion of sacred text that brahman is intrinsic happiness amounts to saying that God is happiness. Consequently, self-awareness is awareness of God. If awareness of happiness is identical to happiness, the awareness of God just is the ultimate happiness. Our tendency to look for happiness in other things is because we lack true self-awareness and instead take awareness to be bodily happiness given by objects. But objects do not generate happiness, for brahman alone is intrinsically happy; instead, beginningless consequentiality traps us in our misconstrued state. In this trapped state, we are dependent upon objects, and our sense of independence is wrong because the happiness of objects is transient, not intrinsic to them, and variable according to karman. By contrast, dependence on God is not harmful but, rather, is a fulfilling mutuality in which awareness is equivalent to eternal happiness.
Finally, Rāmānuja turns to the question of how we may cultivate this fulfilling mutuality in which we gladly acknowledge our supplementarity to God. And in these concluding passages, we see the resolution of that tension between nondualistic ontotheology and the qualified otherness in loving devotion with which I began this article: it is clear that the resolution's core is the identification of awareness and the emotion of happiness.
All those who know the true nature of the self serve the Foremost Person alone. As the Lord has said, "When one serves me unerringly through the discipline of devotion, he will pass beyond the [existential] qualities and become brahman" (Bhagavadgītā 14.26). This service in the form of devotion is denoted in the word "knowing" when it is said in the sacred text that "One who knows brahman obtains the ultimate," "The knower of this becomes immortal," "Knowing brahman he becomes brahman," and so on. In the specific text, "One whom he chooses may attain him," we understand from "whom he chooses" that one must be chosen. And the one chosen is the beloved (144). 19 In this dense passage, Rāmānuja assembles sacred texts around the suggestive but elusive description of the ultimate relationship between self and brahman/Lord: it is one of "becoming" (bhūya), "obtaining" (āpana), "attaining" (lābha) brahman/the Lord. Here is a creative lacuna that is at the heart of Viśiṣṭādvaita. The relationship can be seen exegetically as the realization of the ontological closure of nondifference, but it can also be seen as the achievement of supplemental intimacy with the Other. Indeed, the qualification of nonduality that gives the system its name is the qualification due to that intimacy. (To reiterate that simplistic analogy: a book supplement has an identity-but only by virtue of being a supplement of that book.) Rāmānuja points out that the sacred texts stipulate the approach to the mystery of our relationship with God through the cognitive register of knowledge of self and the emotional register of devotion. The final sacred passage he quotes indicates that they are not two different existential modes of being a self. The cognitive attainment of "becoming" aware of brahman is precisely also the sign of the emotional attainment of being chosen out of God's love. And this is not surprising-we have been told of the general argument that awareness and emotion are not two different existential modes.
It is on account of such an identity of the cognitive and the emotional that the path is clear for the Viśiṣṭādvaitin. From the general theory of their identity, Rāmānuja has argued that self-awareness is the same as happiness. However, mistaken (bodily) self-awareness is the same as karman-variegated and transient happiness, while true (brahman-becoming) self-awareness is the same as eternal happiness. Now there is a switch of perspective from the theologian's argument to God's sacred assurance: The beloved of the Lord is the one in whom boundless and preeminent love of God has been bred. Thus has the Lord said, "For I am the dearly beloved of one who is wise, and he is beloved by me (Bhagavadgītā 7.17)." Therefore, it is knowledge that has reached the form of ultimate devotion that is truly the means of attaining the Lord (144). 20 The Lord himself says that being wise and aware of him is identical to loving and being loved by him, which is to be in a relationship of eternal mutual happiness with him. So the reception of the self by the divine also equates to the cognitive and emotional states of the self. 19 sarvair ātmayāthātmyavedibhiḥ sevyaḥ puruṣottama eka eva / yathoktaṃ bhagavatā māṃ ca yo 'vyabhicāreṇa bhaktiyogena sevate / sa guṇān samatītyaitān brahmabhūyāya kalpate // itīyam eva bhaktirūpā sevā brahmavid āpnoti param tam evaṃ vidvān amṛta iha bhavati brahma veda brahmaiva bhavatītyādiṣu vedanaśabdenābhidhīyata ityuktam / yam evaiṣa vṛṇute tena labhya iti viśeṣaṇād yam evaiṣa vṛṇuta iti bhavagatā varaṇīyatvaṃ pratīyate / varaṇīyaś ca priyatamaḥ / 20 yasya bhagavaty anavadhikātiśayā prītir jāyate sa eva bhagavataḥ priyatamaḥ / tad uktaṃ bhagavatā priyo hi jñānino 'tyartham ahaṃ sa ca mama priyaḥ / iti / tasmāt parabhaktirūpāpannam eva vedanaṃ tattvato bhagavatprāptisādhanam /
Rāmānuja then quotes from the Mokṣadharma of the Mahābhārata: 21 "His form is not beheld; no one sees him with the eye; only one who has concentrated, with discrimination and devotion on the self, sees steadily the intrinsic form of awareness." 22 This does not quite say the same thing as the previous statement that Rāmānuja quoted since awareness is not equated with the emotional quality of devotion but is given as a requisite quality. But perhaps we should permit some slippage in the rhetoric, for Rāmānuja's primary concern is to show the fundamental concomitance of the cognitive and the emotional, even when the exact nature of their co-occurrence is underdetermined between identity and qualification. That is what we have seen throughout: Viśiṣṭādvaita leaves what I have called a creative lacuna at the heart of the matter. There is some profound but mysterious truth where the unicity of self and brahman is qualified by the nonidentity implied by intimate relationship; there is an equation of awareness and happiness that is also rephrased as a qualification of the former with the latter.
The meaning is that having concentrated on the self with discrimination, one sees the Supreme Person-makes him immediate-through devotion, attains him… Everything follows from devotion taken as a distinguishing awareness (144 continued). 23 While acknowledging that there is this lacuna, perhaps deliberately left at the center of this account of truth and love, I would like to conclude my exposition with some tentative remarks on two possible readings of what Rāmānuja might mean by saying that "everything follows from devotion taken as a distinguishing awareness."
Two Readings of the Central Claim
The first reading may be called one of "causal succession." According to it, first, there is the attainment of true awareness. (Later, the commentator Sudarśanasūri [1924] glosses jñāna at 144 in terms of the effective function of the system of epistemic validation [pramāṇatantra]. This strict epistemological definition is absent in Rāmānuja, who generally takes the truth theologically as apt in relation to God.) Then there is the generation of emotion: the happiness intrinsic to brahman is expressed as mutual love with God. This is the straight reversal of the Śaṅkarite sequence in the Gītābhāṣya. 24 Śaṅkara can afford to found devotion on gnosis because, for him, the world order of devotion is less than ultimately real, and therefore falls away with ultimate true cognition. In Rāmānuja's system, however, there is a problem: if self-awareness as awareness of the unicity of self and brahman is merely a preparation for divine love-where devotion structurally implies a difference between a human person and the Supreme Person-that might make sense as a devotional process, but it does not prima facie map on to the formal theory of the identity of awareness and emotion that he offered in the first place. 21 van Buitenen (Rāmānuja 1956) helpfully points out that dhṛti should be "discrimination" because in the Gītābhāṣya at 6.25, Rāmānuja "glosses buddhyā dhṛtigṛhītayā by vivekaviṣayayā": footnote 814, page 299. 22 na saṃdṛśo tiṣṭhati rūpam asya na cakṣuṣā paśyati kaścanainam / bhaktyā ca dhṛtyā ca samāhitātmā jñānasvarūpaṃ paripaśyatītīha // 23 dhṛtyā samāhitātmā bhaktyā puruṣottamaṃ paśyati sākṣātkaroti prāpnotītyarthaḥ / …bhaktiś ca jñānaviśeṣa eveti sarvam upapannam /
One could perhaps defend this reading by adding a step to Rāmānuja's formal theory of awareness and emotion: while in general, they are the same, and perfect self-awareness is also the happiness of brahman, ordinary awareness must be perfected first, stripped of its bodily identification and its subjection to karman. There is then a long period of preparatory awareness, which will manifest emotions too. Still, the emotion of mutual love with God is generated only by the occurrence of realized self-awareness. We could therefore try and defend Rāmānuja by saying that awareness is indeed the cause of emotion when we look solely at the process by which awareness gains the discrimination required to achieve the ultimate emotion of mutual love with God. Still, throughout the process, any awareness is also an emotion (just not the ultimate emotion of mutual love of God).
This, however, does strike me as special pleading, for it is difficult to get around the explicit claim Rāmānuja makes that it is not just preparatory awareness but perfect-discriminating-awareness that still precedes the emotion of devotional love of God. And that does run counter to even the tweaked theory of causal sequence that I have suggested.
The second reading may be termed that of supplemental identity. The attainment of perfect awareness is the expression of the emotion of love: to know that I love is for me to love, and to know that I have become brahman is to love the Supreme Person who is brahman. In this reading, supplementarity is crucial. As I have indicated, it does the crucial job of providing the qualificatory restriction of nonduality that Rāmānuja requires. Indeed, the Vedāntic notion of an "attainment" or "becoming" of brahman makes ontotheological sense only because the being of the self is nothing other than the being of brahman. 25 However, since the self "becoming (or 'attaining') brahman" for Rāmānuja is the awareness of being supplementary to brahman (in contrast to being indistinguishably [that is, Advaitically] identical with brahman), an intimate space opens up between self and God, which is articulated by the emotion of loving devotion (and reciprocal divine love). The identity of awareness and emotion (specifically, of true self-awareness of brahman and mutual love of God) is consistent from the espousal of the general theory at the end of 141 right through to the assertion of love arising from discriminating awareness at the end of 144.
In short, just as there is an asymmetry in the unicity of self and brahman when the former is supplemental to the latter, there is also an asymmetry between emotion and awareness. Emotion-the mutual love of the God who is intrinsic happiness-is supplementary to awareness (that is, true self-awareness as a nondifference from brahman). That is why Rāmānuja can talk about awareness without happiness/love when mentioning the path of discrimination while also asserting that love is not different from awareness.
Concluding Remarks
The challenge for Rāmānuja, therefore, is how to reconcile ontotheological unicity with the differentiating relationship of love between humans and the divine. This has been recognized in the tradition from the beginning, so I am not making an original point. But all too often, we have tended to hold them together by fiat, as if declaring that the mere identification of the brahman of Vedānta and the Śrīman Nārāyaṇa of Śrīvaiṣṇavism suffices to explain how Rāmānuja's vision holds them together. I have previously argued that the culmination of Rāmānuja's vision is not that ontotheology but the loving relationship with a God who is beyond Being, a conception only recently emerging in Christian theology but that is amply testified in Rāmānuja's commentary on the Bhagavadgītā. But I had not understood then that he does not provide in that commentary an indication of how jñāna is intrinsic to bhakti, only that the attainment of the former is preparatory to the proper expression of the latter. But it seems as if the explanatory move is evident here in his first work. We have now glimpsed how key constituents of his hermeneutic are brought together.
In sum, Rāmānuja takes the gnostic awareness that is the culmination of nondualism and equates it with participation in the emotional essence-rasa-of the divine (through the evocation of the Taittirīya Upaniṣad 2.7, raso vai saḥ / raso hyevāyaṃ labdhvānandī bhavati). He thereby not only reconfigures our understanding of the nature of happiness but also of the ontological state. This understanding of self and divinity is exactly what "distinguishes" his vision of nonduality, for the self's participation in divine essence is both a matter of copresence and difference. Here is a powerful articulation of why his vision is called "viśiṣṭādvaita." But we must acknowledge that this requires developing, on its own terms, a detailed and nuanced theory of the equation of cognition and emotion that nevertheless preserves a distinction between them. It is unclear what salience the tradition has accorded this issue, but it would be a philosophical program worth undertaking.
"Philosophy",
"History"
] |
Investigation on Impact of Heat Input on Microstructural, Mechanical, and Intergranular Corrosion Properties of Gas Tungsten Arc-Welded Ti-Stabilized 439 Ferritic Stainless Steel
In the present study, gas tungsten arc welding was employed to weld Ti-stabilized 439 ferritic stainless steel using a 308L austenitic stainless steel filler electrode with varying heat input, i.e., low heat input (LHI) and high heat input (HHI). The optical microstructure revealed the formation of retained austenite (RA) and ferrite in the weld zone (WZ), whereas a peppery structure consisting of chromium-rich carbides was observed in the heat-affected zone for both weldments. The volume fraction of RA was calculated using X-ray diffraction analysis. The RA content decreased, whereas the grain size in the WZ increased, with an increase in heat input. The local misorientation and grain boundary distribution in the welded region were investigated by electron backscattered diffraction. The LHI weldment exhibited higher micro-hardness and tensile strength, attributed to its higher content of RA as compared to HHI; however, the opposite trend was observed for intergranular corrosion resistance.
Introduction
In recent years, there has been a significant increase in the demand for ferritic stainless steels (FSSs), attributed to their economic advantage and a good combination of mechanical and corrosion properties as compared to austenitic stainless steels (ASSs) (Ref 1). FSSs, with chromium (11-30%) and little or no nickel content, are used in corrosive environments in applications such as vehicles, pressure vessels, and power generation (Ref 2). FSSs also possess good oxidation resistance at higher temperatures, and therefore these steels are widely used in automobile exhaust systems (Ref 3).
Fusion welding is the most widely adopted technique to join FSSs at an industrial scale. However, the major problems in FSS welding are grain coarsening in the weld zone (WZ) and heat-affected zone (HAZ), as grain refinement is restricted by the absence of a phase transformation from liquid to solid (Ref 4); this consequently reduces the mechanical strength as well as the corrosion resistance. Grain coarsening can be minimized by limiting the supplied heat input during fusion welding (Ref 1). Moreover, FSSs also suffer from sensitization during welding, resulting in intergranular corrosion (IGC) (Ref 5). It is reported that carbide precipitation occurs faster in FSSs due to the lower solubility of carbon compared to ASSs (Ref 6). Therefore, proper control of the carbon content, or the addition of titanium (Ti) and/or niobium (Nb) as stabilizing elements, has proved to be the most viable option to reduce IGC without compromising the ductility and corrosion resistance of FSS weldments (Ref 7).
During fusion welding, the selection of the welding process and parameters plays a crucial role in producing a sound joint. Various studies have examined the influence of welding processes and parameters on the microstructural, mechanical, and corrosion properties of FSSs. Mohandas et al. (Ref 8) studied the effect of welding processes on 430 FSS and reported that gas tungsten arc welding (GTAW) welds showed higher ductility and strength as compared to shielded metal arc welding (SMAW). The authors attributed this to the formation of equiaxed grains in the fusion zone and the restricted entry of atmospheric gases by protecting the weld pool via shielding gases. Silva et al. (Ref 9) investigated the microstructural characterization of the HAZ of 444 FSS welded by the SMAW process. They observed the presence of carbides, nitrides, and carbonitrides, some secondary precipitates such as chi and sigma, and needle-like Laves phase precipitates near the partially melted zone. Alizadeh-Sh et al. (Ref 10) investigated the phase transformations and mechanical properties of similar welded 430 FSS joints produced by resistance spot welding. They stated that the grain coarsening and grain refinement in the high-, medium-, and low-temperature HAZ depend on the solidification and cooling rate. Lakshminarayanan et al. (Ref 11) examined the effect of different welding processes, namely GTAW, SMAW, and gas metal arc welding (GMAW), on the microstructural and mechanical properties of 409M FSS. The authors reported that the lower heat input associated with the GTAW process resulted in a faster cooling rate and enhanced mechanical properties. Kim et al. (Ref 12) investigated the IGC behavior of 409L FSS. The authors reported that these types of steels suffer IGC when used in a temperature range of 400-600°C for a long time after exposure to high temperatures such as during arc welding. Lakshminarayanan et al. (Ref 13) compared the sensitization kinetics of 409M FSS welded by GTAW, friction stir welding, laser beam welding, and electron beam welding and reported that friction stir welding exhibited lower sensitization, attributed to the formation of more refined grains.
In the welding of FSSs, the use of a filler material with the same chemical composition as the base metal (BM) results in grain coarsening in the WZ and, subsequently, reduced mechanical and corrosion properties of the weldments (Ref 14). Therefore, the selection of the filler electrode plays an essential role in imparting better mechanical and corrosion properties to FSS joints. Shojaati studied the effect of heat input on the microstructural characterization of the HAZ of 439 FSS welded by manual GMAW using a 308L-Si filler electrode, and observed carbide and nitride precipitates in the HAZ, with the width of the HAZ and the grain size increasing with increasing heat input. However, systematic investigation of the weldability of Ti-stabilized 439 FSS is still scanty. Hence, there is a need to understand the microstructural evolutions and their subsequent effect on the mechanical properties and intergranular corrosion resistance of welded 439 FSS. Therefore, the present study focuses on the impact of heat input on the similar welding of Ti-stabilized 439 FSS by the GTAW technique using a 308L filler electrode.
Materials and Method
The BM, 439 FSS sheet, was procured commercially with an initial thickness of 3 mm. The rolled or milled sheets were solution annealed at 850°C for 1 h in a furnace (Carbolite Gero) followed by water quenching to eliminate prior thermal effects (Ref 19). The sheets were cut to dimensions of 75 mm × 100 mm using a wire-cut electro-discharge machine. The oxide layer was removed with a 100 ml acid pickling solution consisting of 15 ml HNO3, 5 ml HF and water.
The chemical compositions of the BM and filler electrode are given in Table 1. The GTAW technique was employed to perform butt welding of the 439 FSS plates using a 308L filler electrode. The welding was performed with two different heat inputs, i.e., low heat input (LHI) and high heat input (HHI), in a single pass with a root gap of 1.2 mm. The welding parameters are summarized in Table 2. The heat input was then calculated from the welding parameters using Eq 1 (Ref 20):

Heat input (J/mm) = (η × V × I)/v (Eq 1)

where "v" is the welding speed in (mm/s), "I" is the welding current in amperes (A), "V" is the arc voltage in volts (V), and η is the welding efficiency, taken as 60% for the GTAW technique (Ref 20). After welding, the edges of the plates were discarded, removing 10 mm from both the top and bottom sides of the weldments to account for possible non-uniformity during welding; samples for metallography, mechanical and corrosion studies were then extracted. The schematic diagram of the weld joints and the sample locations are shown in Fig. 1. The samples were polished with a series of polishing papers (200, 400, 800, 1200, 1500, 2000 grit) followed by velvet-cloth polishing using 0.75 µm alumina slurry, and were then cleaned ultrasonically in distilled water for 5 min. Electrochemical etching was performed in a potentiostat (Solartron 1285) to reveal the microstructural evolution of the weldments: a potential of 6 V was applied for 30 s in a solution of 90 ml ethanol and 10 ml HCl. The optical microstructures of the WZ, HAZ and BM were observed with an optical microscope (OM, Leica DMi8 C), and the phases present in the BM and WZ were studied by X-ray diffraction (XRD, X'Pert PRO PANalytical). A scanning electron microscope (SEM, JEOL 6380A, Japan) coupled with energy dispersive spectroscopy (EDS) was employed to study the microstructural characteristics and the chemical composition of secondary phases in the different zones. Electron backscattered diffraction (EBSD, Oxford Instruments) was employed to analyze the grain orientation and secondary phase formation. The micro-hardness of the welded samples was measured using a Vickers micro-hardness tester (Shimadzu); a 300 g load was applied for 10 s to estimate the micro-hardness values of the different zones. Tensile test specimens were prepared according to ASTM E8M-04 and tested using an Instron-4467 universal testing machine, followed by fractographic investigation using SEM. The double-loop electrochemical potentiokinetic reactivation (DLEPR) test was performed to examine the susceptibility to IGC. A conventional three-electrode cell was used, with the welded sample as the working electrode, a platinum electrode as the counter electrode and a saturated calomel electrode (SCE) as the reference electrode. The test was carried out in a solution of 0.5 M H2SO4 + 0.01 M NH4SCN at room temperature (27 ± 1°C). A potential range of −500 mV(SCE) to +300 mV(SCE) was used for the forward scan, and the same range was used for the reverse scan (+300 mV(SCE) to −500 mV(SCE)) at a scan rate of 1.67 mV/s (Ref 21).
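As a minimal illustration of Eq 1, the sketch below computes the heat input per unit length from generic welding parameters. The function name and the example voltage, current and speed values are placeholders for illustration only, not the actual parameters of Table 2.

def heat_input_j_per_mm(voltage_v, current_a, speed_mm_per_s, efficiency=0.6):
    """Heat input (J/mm) = efficiency * V * I / v, as in Eq 1 (efficiency = 0.6 for GTAW)."""
    return efficiency * voltage_v * current_a / speed_mm_per_s

# Placeholder values, for illustration only
print(heat_input_j_per_mm(voltage_v=14.0, current_a=100.0, speed_mm_per_s=2.0))  # 420 J/mm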
Microstructural Evolution
3.1.1 Optical and Scanning Electron Microscope Analysis. Figure 2(a) depicts an optical micrograph of the BM, 439 FSS, with a fine-grained and homogeneous structure. Kaltenhauser (Ref 22) developed an equation to estimate the tendency for martensite formation in FSS, known as the Kaltenhauser factor or K-factor (Eq 2). The K-factor has critical values of 13 and 17 for low- and medium-chromium FSS, respectively, above which martensite formation is prevented. In the present study, a medium-chromium 439 FSS is used, and the calculated value of the K-factor is 20.55, which is higher than the critical value; therefore, no martensite is observed in the optical microstructure of the BM. Moreover, the XRD pattern of the BM (Fig. 2(b)) further confirms the presence of a single-phase body-centered cubic ferritic structure.
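Since Eq 2 is not reproduced above, the sketch below uses a commonly cited form of the Kaltenhauser ferrite factor, K = Cr + 6Si + 8Ti + 4Mo + 2Al + 4Nb − 2Mn − 4Ni − 40(C + N), computed from the composition in wt.%. The coefficients and the example composition are assumptions for illustration, not the expression of Ref 22 or the values of Table 1.

def kaltenhauser_factor(comp):
    """Kaltenhauser ferrite factor from composition in wt.% (commonly cited form, assumed here)."""
    g = comp.get
    return (g("Cr", 0) + 6 * g("Si", 0) + 8 * g("Ti", 0) + 4 * g("Mo", 0)
            + 2 * g("Al", 0) + 4 * g("Nb", 0)
            - 2 * g("Mn", 0) - 4 * g("Ni", 0) - 40 * (g("C", 0) + g("N", 0)))

# Hypothetical 439-type composition (wt.%), for illustration only
k = kaltenhauser_factor({"Cr": 17.5, "Si": 0.5, "Ti": 0.4, "Mn": 0.4, "Ni": 0.2,
                         "C": 0.02, "N": 0.015})
print(k)  # a value above 17 indicates no martensite for a medium-Cr FSS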
In stainless steels (SSs), the chemical compositions of the BM and filler electrode play a significant role in determining the solidification mode in the WZ, which can be predicted from the WRC-1992 diagram by calculating the ratio of the chromium equivalent (Creq) to the nickel equivalent (Nieq) (Ref 23). Creq is calculated from the weight percentages of the ferrite-stabilizing elements (Eq 3), and Nieq from the weight percentages of the austenite-stabilizing elements (Eq 4). In the present study, the Creq/Nieq ratio is found to be 2.63.
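A minimal sketch of this calculation follows, assuming the conventional WRC-1992 expressions Creq = Cr + Mo + 0.7Nb and Nieq = Ni + 35C + 20N + 0.25Cu (these forms, and the composition values below, are assumptions for illustration, not the exact Eq 3 and 4 or the data of Table 1).

def wrc1992_ratio(comp):
    """Cr_eq/Ni_eq per the conventional WRC-1992 expressions, composition in wt.%."""
    g = comp.get
    cr_eq = g("Cr", 0) + g("Mo", 0) + 0.7 * g("Nb", 0)
    ni_eq = g("Ni", 0) + 35 * g("C", 0) + 20 * g("N", 0) + 0.25 * g("Cu", 0)
    return cr_eq / ni_eq

# Hypothetical diluted weld-metal composition (wt.%), for illustration only
print(wrc1992_ratio({"Cr": 17.8, "Ni": 4.5, "C": 0.025, "N": 0.03, "Cu": 0.1}))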
Creq/Nieq > 1.95 (Eq 5)

Figure 3 depicts the microstructural and XRD analysis of the WZ for both weldments. Figure 3(a, b) shows that the WZ microstructures exhibit the evolution of mixed microstructures during GTAW with varying heat input. In both weldments, the microstructure illustrates the formation of a mixed microstructure, i.e., retained austenite (RA) at the grain boundaries and ferrite in the grain interiors. Hence, it can be stated that during solidification of the 439 FSS weldments the solidification mode is F: ferrite solidifies first and partially transforms via δ → γ + δ in the WZ on cooling. However, it can be noticed from Fig. 3(b) that the volumetric fraction of RA decreased for the HHI weldment. This may be attributed to the slower cooling rate in the HHI process, which offers sufficient time for the transformation of the RA phase to the ferrite phase.
The XRD spectra of the WZ are shown in Fig. 3(c). In the WZ of both the LHI and HHI weldments, austenite (γ) peaks appear in addition to the single ferrite phase (α peaks, Fig. 2(b)), further confirming the formation of RA in the WZ. Also, the intensity of the γ peak decreased for the HHI weldment compared to the LHI weldment, indicating that the volumetric fraction of RA decreased with increasing heat input, as evidenced in the optical microstructures (Fig. 3a, b). The volumetric fraction of RA is estimated according to Eq 6 (Ref 25). The authors of Ref 16 stated that solidification would proceed via Eq 7 if the ferrite-promoting elements dominate over the austenite-promoting elements and austenite formation is suppressed entirely. However, when austenite-promoting elements are present, the microstructure evolves in a mixed mode (ferrite and RA), and solidification occurs via Eq 8 or 9. This solidification path promotes the nucleation of austenite at ferrite grain boundaries at elevated temperature, which prevents the formation of a single ferrite phase. The authors further stated that solidification would occur via Eq 9 for high-carbon FSS (carbon content above 0.15%) or when austenite-promoting elements are present in large quantities. Therefore, in the present study, solidification most likely occurred via Eq 8, as the BM and filler electrode have low carbon contents and the contents of the austenite-promoting elements (Ni and Mn) are not very high. Hence, it can be stated that during solidification of the 439 FSS weldments, ferrite solidifies first and partially transforms via δ → γ + δ in the WZ. Moreover, the face-centered cubic (FCC) structure of the 308L filler material promotes the mixed-mode microstructure. Figure 4(a, b) shows SEM micrographs of the WZs with precipitates, coupled with EDX spectra, and the variation in the chemical composition of different elements in the grain interior and at the grain boundary is shown in Fig. 4(c, d). It can be observed from Fig. 4(c, d) that the grain boundary has higher Ni and Mn contents and a lower Cr content, whereas the opposite holds inside the grain. Ni and Mn are austenite-promoting elements, and Cr is a ferrite-promoting element (Ref 16). This further confirms the formation of a mixed microstructure, i.e., RA at the grain boundaries and a ferritic structure within the grains, in the WZ of both weldments, consistent with the XRD analysis (Fig. 3c). Similar results have been obtained by other researchers (Ref 8, 26).
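Since Eq 6 is not reproduced above, the rough sketch below estimates the austenite fraction from the integrated intensities of the γ and α XRD peaks using a simple intensity ratio; both this expression and the intensity values are assumptions for illustration, not the exact formula of Ref 25 or the measured data of Fig. 3(c).

def austenite_fraction(gamma_peak_intensities, alpha_peak_intensities):
    """Rough RA fraction from integrated XRD peak intensities (simple intensity ratio)."""
    i_gamma = sum(gamma_peak_intensities)
    i_alpha = sum(alpha_peak_intensities)
    return i_gamma / (i_gamma + i_alpha)

# Placeholder integrated intensities (arbitrary units), for illustration only
print(100 * austenite_fraction([120, 60], [400, 180]))  # RA fraction in %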
The optical microstructures of the HAZ region for both weldments are shown in Fig. 5(a-d). The HAZ is divided into two parts, namely (i) the coarse-grain heat-affected zone (CGHAZ) and (ii) the fine-grain heat-affected zone (FGHAZ), for both weldments. The area near the fusion boundary possesses coarse grains, attributed to the elevated temperature and slower cooling rate, and is termed the CGHAZ. The area next to the CGHAZ is termed the FGHAZ; the temperature reached there during welding is comparatively low and the cooling rate is faster than in the CGHAZ, resulting in finer grains (Ref 27). Also, some precipitates, called "peppery structures" (Ref 28), can be observed inside the grains. Figure 5(a-d) shows that these precipitates form away from the grain boundaries, and the areas adjacent to the grain boundaries are free from carbides in the HAZ of both weldments; such a region is called a precipitate-free zone (PFZ) (Ref 10). The peppery structure represents carbides that may have formed during cooling due to the lower solubility of carbon in the ferrite phase (Ref 6).
The measured grain sizes of the WZ and HAZ (averages of 10 readings) and the widths of the different HAZ zones are shown in Fig. 5(e, f). Figure 5(e) shows that the LHI weldment exhibits a smaller grain size in the WZ and HAZ than the HHI weldment. The grain size in the WZ during welding of FSS is greatly affected by the formation of austenite at elevated temperatures during solidification: once austenite forms along the grain boundaries at elevated temperature, it restricts the grain growth of the ferrite (Ref 29). In the present study, the WZ of the LHI weldment exhibited a higher volumetric fraction of RA than the WZ of the HHI weldment, hence resulting in a smaller grain size. Moreover, a significant difference in the widths of the different HAZ zones is shown in Fig. 5(f). The width of the HAZ (combined width of CGHAZ and FGHAZ) increased with increasing heat input. This can be ascribed to the slower cooling rate of the HHI weldment, which resulted in a larger HAZ width (Ref 30). The SEM-EDX analysis of the carbide precipitates in the HAZ of the weldments is shown in Fig. 6(a, b), and the variation in the chemical composition of different elements in the grain interior and at the grain boundary is shown in Fig. 6(c, d). It can be observed from Fig. 6(c, d) that the peppery structures are Cr-rich carbides, since higher Cr and C contents are observed in the peppery structure than at the ferrite grain boundaries. Also, the Cr-rich carbide precipitation is higher in the HAZ of the LHI weldment, and its concentration decreases with increasing heat input. Khorrami et al. (Ref 5) reported that the number of grain boundaries is mainly responsible for short-distance diffusion and the formation of Cr-rich precipitates. Figure 5(e) shows that the grain size in the HAZ of the LHI weldment is significantly smaller than in the HHI weldment, and the smaller grain size results in a higher number of grain boundaries. These grain boundaries are mainly responsible for more short-distance diffusion of C, which leads to the formation of a greater amount of precipitates. Hence, it can be stated that the heat input plays an important role in the formation of a higher concentration of Cr-rich precipitates within the ferrite grains of the HAZ of the LHI weldment. Moreover, no precipitation is observed at the ferrite grain boundaries of the HAZ for either weldment.
Electron Backscattered Diffraction Analysis.
The EBSD analysis was used to study the micro-texture (orientation of crystallographic planes) of the weldments. The area from the WZ to the HAZ of the weldments was scanned, and the inverse pole figure (IPF) maps in the Z direction are shown in Fig. 7. In Fig. 7(a), the IPF of the LHI WZ shows finer ferrite grains, whereas the HHI WZ illustrates a coarser structure, as shown in Fig. 7(b); this is attributed to the increased heat input and slower cooling rate. The average local misorientation (LAM) method was used to quantify the strain generated during the thermal weld cycle, as shown in Fig. 7(c, d): Fig. 7(c) shows the LAM distribution of the BCC structure (ferrite) and Fig. 7(d) that of the FCC structure (RA) for both weldments. Figure 7(c) shows that the BCC distribution of the LHI weldment is broader than that of the HHI weldment, and its peak is shifted slightly toward the right (Ref 31). This can be attributed to the higher strain accumulated in the LHI weldment during solidification due to the presence of a higher volumetric fraction of RA. The FCC peaks (Fig. 7d) of the two weldments show no appreciable difference, which may be due to the lower fraction of the FCC structure in the weldment compared to the BCC structure. Figure 7(e-h) shows the variation in the different types of grain boundaries for both weldments, which are classified into low-angle grain boundaries (LAGB, θ < 5°), medium-angle grain boundaries (5°-15°), and high-angle grain boundaries (HAGB, θ > 15°) (Ref 32). Figure 7(e, f) depicts the grain boundary distributions for the LHI and HHI weldments, and Fig. 7(g, h) illustrates the variation in grain boundary fractions for the BCC (ferrite) and FCC (RA) structures. The fraction of each grain boundary type depends on the welding parameters, and the fraction of LAGB varies oppositely to the fraction of HAGB. Saha et al. (Ref 33) stated that a higher fraction of HAGB restricts the motion of dislocations across the grain boundary area, and the resulting dislocation pile-ups restrict grain growth during solidification, whereas a low fraction of HAGB is not capable of restricting the dislocation pile-ups. Figure 7(g, h) shows that a higher fraction of HAGB is observed for the LHI weldment than for its counterpart. Therefore, a smaller grain size is observed in the WZ and HAZ of the LHI weldment compared to its counterpart (Fig. 5e).
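As a small illustration of how boundaries are binned by misorientation angle in this kind of EBSD analysis (θ < 5° as LAGB, 5°-15° as medium-angle, θ > 15° as HAGB), the sketch below computes the fraction of each class from a list of boundary misorientations. The angle list is synthetic, for illustration only, and does not correspond to the measured data of Fig. 7.

def boundary_fractions(misorientations_deg):
    """Fractions of low- (<5 deg), medium- (5-15 deg) and high-angle (>15 deg) boundaries."""
    n = len(misorientations_deg)
    lagb = sum(1 for t in misorientations_deg if t < 5)
    magb = sum(1 for t in misorientations_deg if 5 <= t <= 15)
    hagb = sum(1 for t in misorientations_deg if t > 15)
    return lagb / n, magb / n, hagb / n

# Synthetic misorientation angles (degrees), for illustration only
print(boundary_fractions([2, 3, 8, 12, 20, 35, 40, 55]))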
Micro-Hardness Analysis.
The micro-hardness varies with heat input, and the variation is strongly affected by the phases formed during solidification, as shown in Fig. 8(a); the variation in the average micro-hardness values of the various regions is shown in Fig. 8(b). The higher micro-hardness of the LHI weldment compared to the HHI weldment is ascribed to the higher volumetric fraction of RA formed in the WZ of the LHI weldment. The formation of RA at the grain boundaries restricts grain growth, consequently resulting in a larger number of grains and grain boundaries, which act as barriers to dislocation motion. Therefore, the higher volumetric fraction of RA and the smaller grain size (Fig. 5(e)) in the WZ of the LHI weldment result in higher micro-hardness values compared to the HHI weldment. Furthermore, the average micro-hardness of the WZ decreased by 10.53% for the HHI weldment. A similar trend is also observed in the HAZ of the weldments: the average micro-hardness of the HAZ, which contains intragranular precipitates, decreased by 4.89% for the HHI weldment. In the HAZ, the variation in micro-hardness between the CGHAZ and FGHAZ can be ascribed to the variation in grain size (Fig. 5e).
The formation of a higher fraction of HAGB restricts dislocation motion across the grain boundaries (Ref 34), and a higher fraction of HAGB is observed from the WZ to the HAZ for the LHI weldment (Fig. 7(e, h)). Therefore, the micro-hardness values are higher for the LHI weldment than for its counterpart across the WZ and HAZ, and the results are consistent with the grain boundary EBSD analysis. The tensile properties of the weldments are shown in Fig. 9(a). Figure 9(a) shows that the LHI weldment exhibits 1.60% higher ultimate strength, 7.80% higher yield strength, and better ductility than the HHI weld joint. The fracture location is the BM for both weldments, which clearly shows that the strength of the WZ is higher than that of the BM. Compared to the single-phase ferritic BM, the strength of the WZ was significantly improved by the formation of the mixed microstructure (ferrite and RA) in the WZ (Fig. 3a, b). From Fig. 5(a-d), carbide precipitates are observed to be distributed over a wide area within the grains of the HAZ for both weld joints, which significantly contributes to strengthening the HAZ (Ref 26). Hence, it can be stated that the strength of the weldments is significantly enhanced by the presence of different phases in the WZ and by the presence of carbides in the HAZ.
The fractographs of the different weldments are shown in Fig. 9(b, c). Dimples together with planar facets can be observed in both weldments, indicating that fracture occurred in a mixed ductile and brittle mode. The tensile properties (Fig. 9(a)) show that ductility and strength decreased with increasing heat input, consistent with the larger proportion of planar facets in the HHI weldment.
Corrosion Behavior: Double-Loop Electrochemical Potentiokinetic Reactivation (DLEPR) Test Analysis
The DLEPR test was performed to evaluate the susceptibility of the different welded samples to IGC. The formation of chromium-rich carbides or other precipitates during welding of stainless steels depletes chromium in the nearby regions. This depletion generates a potential difference between the grain interior and the grain boundary, accelerating grain boundary attack, which further depends on the availability of anodic sites near the grain boundary (Ref 21). In the DLEPR test, two scans were performed, i.e., a forward and a reverse scan. In the forward scan, the whole surface of the test sample is passivated, and the maximum current density reached during passivation is taken as Ia. In the reverse scan, depassivation starts and the passive layer breaks down at the anodic sites, i.e., the chromium-depleted areas, because the passive film dissolves easily in these areas with decreasing potential; the maximum current density during depassivation is taken as Ir.
The ratio of these two peak current densities gives quantitative data on the degree of sensitization (DOS), calculated as DOS (%) = (Ir/Ia) × 100 (Ref 34). The DLEPR test results are shown in Table 3. The DLEPR curves of the WZ for the LHI and HHI weld joints and for the BM are shown in Fig. 10(a-c), and the DOS and its correlation with the volumetric fraction of RA are shown in Fig. 10(d). The BM shows the lowest DOS due to its single-phase ferrite microstructure, with no carbides and/or precipitates found in the optical microstructure or XRD peaks (Fig. 2). The DOS of the WZ of both weldments is higher than that of the BM, attributed to the formation of the mixed microstructure. The presence of RA at the grain boundaries results in chromium depletion at the grain boundaries, as observed in the EDS analysis (Fig. 4) of both weldments. The WZ of the LHI weldment has a higher volumetric fraction of RA, resulting in a higher number of grain boundaries and, consequently, more sensitization sites. Hence, the DOS is proportional to the volumetric fraction of RA, and a higher DOS is therefore observed in the LHI weldment than in the HHI weldment. With the increase in heat input, the DOS decreased from 16% for the WZ of the LHI weldment to 5.42% for the WZ of the HHI weldment.
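A minimal sketch of how the DOS is obtained from the two DLEPR loops: the peak current densities of the forward (activation, Ia) and reverse (reactivation, Ir) scans are extracted and DOS = (Ir/Ia) × 100. The current-density arrays below are synthetic placeholders, not the measured curves of Fig. 10.

def degree_of_sensitization(forward_current_density, reverse_current_density):
    """DOS (%) = (Ir / Ia) * 100 from the peak current densities of the two scans."""
    i_a = max(forward_current_density)   # peak (activation) current density, forward scan
    i_r = max(reverse_current_density)   # peak (reactivation) current density, reverse scan
    return 100.0 * i_r / i_a

# Synthetic current densities (mA/cm^2), for illustration only
print(degree_of_sensitization([0.1, 2.5, 18.0, 9.0], [0.05, 1.2, 2.9, 0.8]))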
Conclusions
1. In the present work, the GTAW technique was employed for welding of Ti-stabilized 439 FSS using 308L filler material with varying heat input.
2. The microstructure of both weldments consists of a mixed microstructure (ferrite + RA), with the grain size varying with heat input. The RA formed at the ferrite grain boundaries, and its volumetric fraction decreased with increasing heat input; the volumetric fractions of RA estimated from the XRD analysis are 33.33% and 20.42% in the WZ of the LHI and HHI weldments, respectively.
3. The HAZ can be divided into CGHAZ and FGHAZ owing to the variation in temperature gradient, and the HAZ width is larger for the HHI weldment. The peppery structure was observed in the HAZ of both weldments; however, its concentration was higher in the LHI HAZ.
4. The local misorientation distribution is higher for the LHI weldment than for the HHI weldment, attributed to the higher volumetric fraction of RA.
5. The micro-hardness results showed higher values for the LHI weldment, ascribed to the higher volumetric fraction of RA at the grain boundaries, which restricts grain growth and acts as a barrier to dislocation movement.
6. The WZ of the LHI weldment showed a higher DOS (%) due to the higher volumetric fraction of RA at the ferrite grain boundaries, which is responsible for greater Cr depletion near the grain boundaries.
Funding
Open access funding provided by Manipal Academy of Higher Education, Manipal.
Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 6,626.2 | 2021-12-23T00:00:00.000 | [
"Materials Science"
] |
Pineapple core from the canning industrial waste for bacterial cellulose production by Komagataeibacter xylinus
To address the high production cost associated with bacterial cellulose (BC) production using the Hestrin-Schramm (HS) medium, alternative agricultural wastes have been investigated as potential low-cost resources. This study aims to utilize pineapple core from pineapple canning industry waste as a carbon source to enhance the bacterial growth of Komagataeibacter xylinus and to characterize the physical and mechanical properties of the resulting BC. To assess growth performance, commercial sugar at concentrations of 0, 2.5, and 5.0 % (w/v) was incorporated into the medium. Fermentation was conducted under static conditions at room temperature for 5, 10, and 15 days. The structural and physical properties of BC were characterized using SEM, FTIR, XRD, and DSC. With the exception of crystallinity, BC produced from the pineapple core medium exhibited comparable characteristics to BC produced in the HS medium. These findings highlight the potential of utilizing pineapple core, a byproduct of the canning industry, as an economically viable nutrient source for BC production.
Introduction
Bacterial cellulose (BC) is a cellulose produced during acetic fermentation in a medium of Acetobacter xylinus, now commonly known as Komagataeibacter xylinus [1]. As a natural polymer, BC has physicochemical properties superior to those of plant cellulose while sharing a similar molecular formula [2]. Unlike plant cellulose, BC does not require purification because it is free of lignin and hemicellulose. Additionally, since BC occurs as nano-sized cellulose fibers with high crystallinity [3], high Young's modulus [4], and biocompatibility [5], it is regarded as an excellent renewable biomaterial [6]. Various efforts have been reported to use BC as a sustainable and functional material, for example in electromagnetic interference shielding [7], osmotic energy harvesting [8], colorimetric sensors [9], removal of bisphenol A [10], dental applications [11], medical applications [12], the dairy industry [13], wastewater treatment [14], textile materials [15], and anti-microbial food packaging [16].
Although it possesses remarkable characteristics and is promising for a broader range of commercial applications, BC is somewhat costly to produce due to the expensive Hestrin-Schramm (HS) culture medium and the low yield of the bacterial strains used [17]. It has been calculated that the culture medium accounts for around 30% of the total BC production expense [18]. Hence, numerous agricultural industry by-products have been used to decrease the BC production cost and improve the yield. In terms of agro-industry waste utilization, several substrates have been evaluated, such as corn stover [19], waste from the date syrup and food-grade sucrose industry [6], rotten guava [20], oat hulls [21], confectionery wastes [22], and even kitchen waste [23].
Another potential agro-industrial waste is pineapple residue obtained from the pineapple canning industry, which generates enormous amounts of solid waste [24]. This waste contains protein, fiber, vitamins, minerals, amino acids, phenolic compounds, oligosaccharides, and polysaccharides [25]. Because carbohydrates are found in pineapple waste, it can be used as a carbon source for microbial growth during fermentation [19]. Additionally, it has potential as a source of growth medium for K. xylinus.
Moreover, using these wastes would be a great opportunity to create new products while reducing the amount of waste generated by the pineapple canning industry [26]. Pineapple canning industrial waste has also been used for the production of biogas [27], ethanol [28], lactic acid [29], vanillin and vanillic acid [30], and polyhydroxybutyrate. The use of pineapple industrial wastes such as peel, crown, and their mixture with the core has been studied for BC production [31][32][33]. However, the utilization of the core from pineapple canning industrial waste for BC production has not been reported. Meanwhile, Indonesia is the largest pineapple producer in the Asia Pacific, with a total production of 2,886,416 tonnes [34] (Fig. 1). From pineapple processing, it is calculated that the core composes ca. 7% of the total waste [35]. Additionally, the pineapple core contains 41.1-43.5% cellulose, 28.5% hemicellulose, 5.78% lignin, and 0.85% protein [36]. It was also reported that the core contains 84.90% moisture, 83.03% carbohydrate, 2.35-4.78% crude fat, and 42.02% crude fiber [37]. The core is also rich in glucose and fructose and could be invaluable in fermentation processes [38]. Therefore, the pineapple core may be used as a raw material for BC production.
This research focuses on the potential of pineapple core from canning industrial waste, which could provide an economical source of nutrients to boost BC production from available resources. The microstructure of BC synthesized by K. xylinus on this medium is described and compared to BC synthesized in a standard HS medium.
Materials
Pineapple core was obtained as a by-product of the pineapple canning industry from a small and medium enterprise, Alamsari Serba Nanas, in West Java, Indonesia. The pineapple (Ananas comosus) was obtained from the Subang Plantation (6°34′16.6224″ S, 107°45′34.8876″ E) and harvested 6 months after cultivation, in March 2022. The main components of the pineapple core were analyzed: ash content (SNI 01.28911), total carbohydrate (AOAC 982.14), total fat (AOAC 925.12), crude protein (AOAC 992.15), and water content (SNI 01.28911), as shown in Table 1. The starter bacterium used as the working inoculum was K. xylinus strain CGMCC 2955, purchased from a nata de coco small industry in Cianjur, West Java, Indonesia [39].
A typical synthetic Hestrin-Schramm (HS) medium, frequently used for BC production in laboratories, was also employed for comparison. The modified HS medium contained glucose 25 g/L, peptone 5 g/L, yeast extract 5 g/L, citric acid 1.15 g/L, and disodium hydrogen phosphate 2.7 g/L [40]. Local commercial table sugar (Gulaku), purchased from a local supermarket, was used to replace the glucose commonly used in the HS medium and was used without further treatment or purification.
BC production and harvesting
The pineapple core was placed in a beaker and water was added at a ratio of fresh pineapple core to distilled water of 1:2 (w/v). To compare with the glucose used in the HS medium, sugar was added to the mixture at concentrations of 0, 2.5, and 5.0% (w/v), and the mixture was boiled for 30 min. After cooling and filtering, glacial acetic acid was added to the filtrate until its pH reached 4. The fermentation was carried out in 500 mL beakers containing 200 mL of medium, which was inoculated with 40 mL of the commercial bacterial starter culture and incubated for 5, 10, and 15 days under static conditions at room temperature.
At the end of the fermentation phase, the BC pellicle was taken from the medium and washed with water to remove the leftover medium. As described previously, to remove impurities and residual bacteria, the pellicle was boiled in water for 10 min and the water was then discarded; this process was repeated until the pH of the boiling water was the same as that of fresh water [39]. To remove the excess water, the wet BC pellicle was squeezed by vacuum filtration under nylon fabric, followed by microwave oven-drying (300 W, 3 min) and then heat-pressing (125 °C, 30 kgf/cm², 10 min). The dry weight of BC per liter of medium (g/L) was used to calculate the BC yield.
Reducing sugar analysis
Reducing sugar analysis of medium for K. xylinus growth was carried out by the Nelson-Somogyi method as described elsewhere [41].
Morphological analysis
The morphology of BC was observed by a scanning electron microscope (SEM) (JEOL JSM IT-300, Japan). The specimens were coated with a thin layer of gold (±10 nm). The SEM micrographs were taken at an acceleration voltage of 20 kV.
Fourier transform infrared (FTIR) spectroscopy
Fourier transform infrared (FT-IR) spectroscopy (Thermo Scientific™ Nicolet™ iS™5 FTIR spectrometer) in attenuated total reflectance (ATR) mode, at wavenumbers ranging from 4000 to 400 cm−1, was used to characterize the functional groups of BC [42].
X-ray diffraction (XRD)
XRD measurements were performed on a Bruker D8 Advance X-ray diffractometer operating at 40 kV and 30 mA, with CuKα radiation as the X-ray source and a LYNXEYE XE-T detector, to determine the crystalline fraction of the samples. Samples were scanned over a 2θ range of 5 to 40° at a scanning rate of 3°/min.
Differential scanning calorimetry (DSC)
The thermal behavior of BC was analyzed with a differential scanning calorimeter (DSC-214 Polyma, Netzsch, Germany). An amount of BC (5-7 mg) was placed in an alumina pan and heated from 30 to 250 °C at a rate of 5 °C/min under a nitrogen atmosphere (50 mL/min).
Statistical analysis
Statistical analysis was performed using one-way analysis of variance (ANOVA), followed by the post hoc Tukey HSD procedure. Differences within groups were considered significant at a probability level of 0.05 (p ≤ 0.05). The assumption of equal variances was verified by the Brown-Forsythe test and the Welch test, respectively.
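A minimal sketch of this statistical workflow using SciPy and statsmodels is given below (the Brown-Forsythe test is Levene's test with median centering; the Welch test mentioned above is not included in this sketch). The replicate yield values and group labels are synthetic placeholders, not the data of Table 2.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic replicate yields (g/L) for three hypothetical treatments, for illustration only
groups = {"S0": [1.9, 2.1, 2.0], "S2.5": [2.6, 2.8, 2.7], "S5": [3.1, 3.0, 3.3]}

# Brown-Forsythe test for equal variances (Levene's test with median centering)
print(stats.levene(*groups.values(), center="median"))

# One-way ANOVA
print(stats.f_oneway(*groups.values()))

# Post hoc Tukey HSD at alpha = 0.05
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))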
Results and discussion
The industrial production of BC is hindered by the growth medium cost, roughly 30% of the total BC production budget when standard nutrients are used [43]. To minimize the raw material cost, free pineapple core from the pineapple canning industrial waste was used to replace the commercial and expensive HS medium. Utilizing this by-product could result in more cost-effective and sustainable BC production. Additionally, the pineapple core medium was not sterilized in an autoclave, reflecting the simplicity of the method. As for the after-treatment, BC was boiled in water rather than soaked in hot 2% NaOH, as reported previously [44]. The yield of BC produced at various sugar concentrations and fermentation times, together with the sample labelling, is shown in Table 2.
BC yield
Yield is an important aspect of BC production, especially for commercial purposes. Therefore, the utilization of cost-effective media, such as pineapple core, plays a significant role in BC production. The influence of pineapple core supplemented with sugar on BC yield is presented in Table 2. The pineapple core medium used in this experiment is suitable for BC production, as described here. Our findings demonstrate that the yield obtained from the medium without sugar addition is comparable to that of the standard Hestrin-Schramm (HS) medium. This observation can be attributed to the presence of carbohydrates in the pineapple core, which serve as a carbon source for bacterial growth. The pineapple core contains approximately 4.5% fiber, which further contributes to the availability of carbon sources necessary for bacterial growth, as fibers also serve as carbon sources [45,46]. This achievement highlights the potential of utilizing pineapple core, a by-product of the pineapple canning industry, as an alternative to the HS medium, which is typically considered costly for BC production. Previous studies have demonstrated comparable yields using pineapple peel extract waste (40% v/v) as the medium, resulting in a yield of 2.42 g/L when incubated for 6 days at pH 4.5 using K. xylinus [47]. Additionally, a slightly higher yield of 3.24 g/L was achieved from pineapple peel by utilizing a 1% v/v inoculum of Gluconacetobacter medellinensis and incubating for 13 days [48]. The gradual increase in BC yield with prolonged fermentation time is consistent with previous findings using K. xylinus and various bio-wastes as substrates for BC production [49] (Fig. 2). Statistical analysis indicated significant differences (p < 0.05) in the yields of the pellicles, as shown in Table 2.
Analysis of glucose content in pineapple core media
The glucose content of all treatments, shown in Fig. 3, decreased significantly during the first 5 days of fermentation, indicating that the microorganisms were growing rapidly and therefore consuming a large amount of glucose. By day 10, the glucose content increased slightly, possibly due to glucose regeneration from the breakdown of other carbohydrate sources in the waste. From the 10th to the 20th day of fermentation, all curves showed a tendency for glucose levels to decrease at a slower rate, as the bacteria were no longer focused on growing or multiplying. Increased BC production was nevertheless already observed from day 5 of fermentation, as presented in Table 2.
The media without added sugar and with 2.5% added sugar showed similar glucose-content curves; however, the sugar consumption rate in the medium with 2.5% added sugar was higher than in the medium without added sugar. The 5-day sugar consumption rates of the fermentations in the media with 0, 2.5, and 5% added sugar were 10.51, 34.84, and 14.59 mg/mL per day, respectively. Additionally, the oxygen required by K. xylinus for further metabolism or growth may be reduced by the formation of a BC layer on the surface of the medium.
A number of bacterial strains use mannitol, glucose, fructose, sucrose, and other carbon sources as building blocks to generate the biopolymer BC as a primary metabolite [50]. When bacteria first begin to proliferate, primary metabolites are created as the cell mass or number increases [51]. In aerobic environments, oxygen contributes to the ongoing metabolic processes in addition to serving as a source of sustenance. The rate of BC synthesis is determined by the oxygen transfer rate, which declines as the broth viscosity increases and CO2 accumulates along with BC buildup [52].
Morphological analysis
The properties of BC are closely related to its morphological characteristics, which play an important role in its assessment. Figure 4(a-c) presents surface micrographs of BC derived from pineapple core at different sugar concentrations on days 5, 10, and 15 of fermentation. From the observations in Fig. 4(a-c), it is evident that there are no structural differences among the fibers in BC gels formed with or without the addition of sugar. The BC fibers exhibit an irregular three-dimensional network composed of densely packed and disordered fibrils. Interestingly, the BC produced from pineapple core exhibits a heterogeneous three-dimensional network structure similar to that of BC produced in the Hestrin-Schramm (HS) medium. This indicates that pineapple core, derived from canning industrial waste, has the potential to serve as a low-cost growth medium for K. xylinus in cellulose production. This finding aligns with a previous report in which BC synthesis was conducted using corn steep liquor [53]. Additionally, although the diameter may differ, the irregular three-dimensional network of disordered fibrils in BC bears resemblance to bamboo cellulosic fiber [54].
The nanofibril diameters of BC derived from pineapple waste were compared with those of fibrils obtained from the HS medium. The frequency histograms in Fig. 5(a-c) present the distribution of BC fibril diameters from the pineapple waste medium at various sugar concentrations on days 5, 10, and 15 of fermentation. The observed fibril diameters ranged from 23 to 85 nm. The diameter of the BC fibers produced in this study is comparable to that of BC obtained from crude distillation waste after oven-drying at 50 °C for one and a half hours [55]. This result suggests that the utilization of pineapple waste as a growth medium for BC production can yield fibrils with diameter characteristics similar to those obtained from the conventional HS medium. The consistent fibril dimensions indicate the potential of pineapple waste as a suitable and cost-effective alternative substrate for BC synthesis.
Fourier transform infrared (FTIR) spectroscopy
Fourier-transform infrared spectroscopy (FTIR) is a suitable technique for analyzing the chemical bonds and specific functional groups in a molecule, providing insights into the alterations induced by the fermentation treatment. Figure 6(a-c) illustrates the FTIR spectra of BC obtained from the pineapple core medium at different sugar concentrations on days 5, 10, and 15 of fermentation. The FTIR spectra exhibit distinct absorption peaks, with the characteristic absorption at 3341 cm−1 attributed to the intra- and inter-molecular O-H stretching vibrations in cellulose I [56]. A weaker peak observed around 2918 cm−1 corresponds to the C-H stretching of CH2 and CH3 groups. Furthermore, a strong peak at 1636 cm−1 indicates the H-O-H bending vibration of absorbed water [57]. The absorption at approximately 1107 cm−1 is associated with the C-C vibration of the monomer units of polysaccharides [58]. The bands observed in the region of 1030-1060 cm−1 are linked to the stretching vibrations of C-O-C and C-O bonds [59]. Additionally, the peak at 663 cm−1 corresponds to the O-H out-of-phase bending vibration [57]. From the FTIR spectra presented in Fig. 6, it is evident that BC derived from pineapple core exhibits absorption features similar to those of BC obtained from the HS medium. This similarity in FTIR spectra indicates a comparable chemical composition and functional groups between BC produced from pineapple core and BC produced from the HS medium. Consequently, pineapple core can serve as a low-cost and promising alternative growth medium for BC production, potentially replacing the traditional HS medium.
X-ray diffraction analysis
X-ray diffraction (XRD) analysis is a widely used technique for characterizing the structure of BC and assessing its changes. Figure 7(a-c) shows the XRD curves for BC produced in pineapple core medium with varying sugar additions. The XRD patterns of all samples exhibit the characteristic reflections of the (100) and (110) planes, which are indicative of the cellulose Iα structure [60]. The diffractograms obtained in this study closely resemble the XRD pattern of BC produced using K. xylinus in kombucha medium [61] and the XRD pattern of cellulose derived from plants [62]. Additionally, Fig. 7(a-c) demonstrates that the diffractogram of BC obtained from pineapple core is nearly identical to that of BC prepared using the HS medium.
These results indicate that the sugar concentration influences the X-ray diffraction pattern of BC. Furthermore, the crystal planes of the BC membranes obtained from the HS medium are similar to those of the pineapple core medium, although the peak heights differ. This difference in peak height can be attributed to variations in the nutrient composition of the HS medium, which might impact the cellulose structure [63]. The XRD analysis confirms that BC produced from pineapple core exhibits a structure resembling that of BC produced using the traditional HS medium. The crystalline structure of BC is influenced by the sugar concentration, with higher peak intensities observed for BC membranes derived from the HS medium. These findings shed light on the impact of the growth medium composition on the properties of BC, providing valuable insights for further optimization of BC production processes.
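The exact crystallinity evaluation used in the study is not detailed in the text above; as a rough sketch, a quick crystallinity index can be obtained from a cellulose diffractogram using the Segal peak-height method, CI (%) = (I_peak − I_am)/I_peak × 100. Both the choice of the Segal method and the synthetic diffractogram below are assumptions for illustration only.

import numpy as np

def segal_crystallinity(two_theta, intensity, peak_window=(21.5, 23.5), amorph_window=(17.5, 18.5)):
    """Segal crystallinity index CI (%) from raw XRD data (peak-height method)."""
    two_theta = np.asarray(two_theta)
    intensity = np.asarray(intensity)
    i_peak = intensity[(two_theta >= peak_window[0]) & (two_theta <= peak_window[1])].max()
    i_am = intensity[(two_theta >= amorph_window[0]) & (two_theta <= amorph_window[1])].min()
    return 100.0 * (i_peak - i_am) / i_peak

# Synthetic diffractogram (2-theta in degrees, counts), for illustration only
tt = np.linspace(5, 40, 701)
counts = 200 + 1800 * np.exp(-((tt - 22.6) / 0.6) ** 2) + 600 * np.exp(-((tt - 14.5) / 0.6) ** 2)
print(segal_crystallinity(tt, counts))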
Differential scanning calorimetry (DSC)
Differential scanning calorimetry (DSC) is a thermodynamic technique used to directly assess the heat energy absorbed or released by a sample as it undergoes temperature changes. Figure 8(a-c) presents the DSC analysis of BC produced in pineapple core medium with varying sugar concentrations. As depicted in Fig. 8(a-c), all BC samples exhibit similar patterns of broad endothermic and exothermic peaks. The endothermic peaks observed in the temperature range of 30-105 °C for all samples can be attributed to water dehydration [64]. This behavior is commonly observed in cellulosic materials due to the interaction between water molecules and non-substituted hydroxyl groups. The broad endothermic peak at approximately 65 °C in BC derived from pineapple core may be attributed to random chain breakage [65]. On the other hand, the slightly sharper and higher endothermic peak at around 74 °C is characteristic of BC produced from the HS medium; the higher peak temperature suggests a more homogeneous distribution of heat and stronger chain-to-heat interactions. Additionally, a distinct exothermic peak observed at approximately 155 °C, between the two endothermic peaks, could be attributed to chain-breaking termination [65]. This DSC analysis reveals similar thermal behavior in BC samples obtained from pineapple core and from the HS medium. The endothermic peaks associated with water dehydration and chain breakage indicate typical characteristics of cellulosic materials, while the sharper and higher endothermic peak observed in HS-based BC suggests a more robust chain-to-heat interaction; the prominent exothermic peak between the endothermic events further indicates the occurrence of chain-breaking termination. These findings provide valuable insights into the thermal properties and behavior of BC produced from pineapple core, demonstrating its potential as a viable alternative to the HS medium in BC production. In the current study, BC production using the pineapple core medium was successfully achieved. The cost of the pineapple core from the canning industrial waste was calculated, revealing a production cost about 68% lower than that of the HS medium. These results show the potential of pineapple core utilization for BC production, with the carbohydrate in the pineapple core serving as a carbon source to replace the glucose used in the HS medium.
Conclusion
In conclusion, the findings of this study highlight the potential of utilizing low-cost substrates, specifically pineapple core derived from canning industrial waste, as an alternative medium for BC production under static conditions.These underutilized by-products serve as a viable replacement for HS medium.The addition of sugar to the pineapple core media effectively stimulated BC pellicle formation, with fermentation time playing a crucial role in this process.Importantly, the structural, physical, and thermal characteristics of BC produced from the pineapple core medium were found to be comparable to those obtained from the HS medium.This demonstrates that pineapple core can serve as a sustainable and cost-effective alternative substrate for BC production, offering comparable BC quality to the conventional HS medium.These findings contribute to the advancement of BC production processes by utilizing agricultural waste and promoting a more sustainable approach in the field of biomaterials.
Fig. 2. BC production based on the fermentation days.
Fig. 3. The changes of glucose contents in the medium with no addition of glucose, and with 2.5 and 5% addition of sugar, during the fermentation process.
Fig. 4. SEM images of BC from pineapple core and HS medium at (a) day 5, (b) day 10, and (c) day 15 of fermentation.
Fig. 5. Frequency histograms of BC from pineapple core and HS medium at (a) day 5, (b) day 10, and (c) day 15 of fermentation.
Fig. 6. FTIR spectra of BC from pineapple core and HS medium at (a) day 5, (b) day 10, and (c) day 15 of fermentation.
Table 1
Main component of pineapple core.
Table 2
Yield of BC from various sugar concentration.
Each value represents the mean of replicates; superscript letters across rows indicate a significant difference (p ≤ 0.05) by Tukey test. Notes: BC, produced in pineapple core medium; HS, produced in Hestrin-Schramm medium. | 5,231.8 | 2023-11-01T00:00:00.000 | [
"Biology",
"Chemistry",
"Environmental Science",
"Materials Science"
] |
Gravitational Wormholes
Spacetime wormholes are evidently an essential component of the construction of a time machine. Within the context of general relativity, such objects require, for their formation, exotic matter -- matter that violates at least one of the standard energy conditions. Here, we explore the possibility that higher-curvature gravity theories might permit the construction of a wormhole without any matter at all. In particular, we consider the simplest form of a generalized quasi topological theory in four spacetime dimensions, known as Einsteinian Cubic Gravity. This theory has a number of promising features that make it an interesting phenomenological competitor to general relativity, including having non-hairy generalizations of the Schwarzschild black hole and linearized equations of second order around maximally symmetric backgrounds. By matching series solutions near the horizon and at large distances, we find evidence that strong asymptotically AdS wormhole solutions can be constructed, with strong curvature effects ensuring that the wormhole throat can exist.
I. INTRODUCTION
It has long been known that if spacetime is to have closed timelike curves in some local regions [1], then wormholes are an essential part of this construction [2,3]. However, a key characteristic of such objects is that they require exotic matter that does not respect the energy conditions. Despite the challenges presented in constructing wormholes [4], the search nevertheless continues in the hopes of evading the constraints imposed by quantum physics in Einsteinian geometries [5].
Both Lovelock and quasi-topological theories have been shown to be particular cases of a more general class called Generalized Quasi-Topological gravity (GQTG) [31][32][33][34][35]. These theories are characterized by second-order linearized equations around maximally symmetric backgrounds and admit single-function (g_tt g_rr = −1) non-hairy generalizations of the Schwarzschild black hole. They are ghost-free on constant-curvature backgrounds but, on a generic background, will have ghosts; however, such ghosts cannot escape to infinity in spacetimes that are asymptotically of constant curvature. The effects of the additional degrees of freedom in GQTGs have not been fully explored, but it is known that they significantly modify the thermodynamics of black holes for small masses [34] and, in the cubic and quartic cases, exhibit a number of interesting features [34,[36][37][38][39][40][41]. A comprehensive list of their properties has been given [35,42]. A key advantage of GQTGs is that they have non-trivial field equations (and solutions) in D = 4 dimensions.
In this paper, we carry out the first investigations of wormhole solutions in D = 4 Generalized Quasi-Topological gravity. For specificity, we consider the simplest GQTG, a theory known as Einsteinian Cubic Gravity (ECG) [31,43], whose action (1) supplements the Einstein-Hilbert term with three densities cubic in the Riemann curvature, multiplied by coupling constants α, β and γ. For a general static spherically symmetric (GSSS) ansatz, the densities C and C′ do not contribute in a linearly independent way to the field equations, and both of these terms become trivial when the metric function N(r) is constant. This is the situation generally considered in ECG, and it clearly does not admit a vacuum wormhole. Note that, regardless of the values that the higher-curvature couplings take, the Einstein-AdS limit of the theory at large distances is preserved (albeit with a modified cosmological constant), since the contributions from the cubic terms fall off much more rapidly than the contributions from the Einstein-Hilbert part of the action if AdS asymptotic behaviour is imposed as a requirement.
We therefore consider the GSSS ansatz with N′ ≠ 0. We find that this situation is possible if the spacetime has a spherical deficit/surfeit angle in the asymptotic large-r region. We shall specifically consider solutions whose metric functions have the asymptotic form (6), where l is the AdS length scale and δ ≠ 0 parametrizes the deficit/surfeit angle.
We find that Einsteinian Cubic Gravity (and, by implication, higher-order GQTGs) admits wormhole solutions that are purely gravitational, without any exotic matter. The solutions we obtain are asymptotically anti-de Sitter, with a spherical deficit angle resembling that of a global monopole. Unlike other solutions with radial symmetry, these solutions have non-zero values of the coupling parameter β. We find that including β provides a sufficiently large number of parameters to match the series solutions for the two metric functions over a broad range of radii at which the matching takes place.
II. THE NON-LINEAR ODE SYSTEM
For the ansatz (5), we find two independent field equations, (7) and (8). Without loss of generality, we can set γ = 0 in (1), since its inclusion simply reproduces the preceding equations with β → β + γ/2.
If N (r) is constant, then these equations become linearly dependent, and a wormhole solution is not possible since there will be a single metric function whose largest root corresponds to the horizon of a black hole.
We can see this by considering the series expansions in the asymptotically distant region at large r, where asymptotically flat solutions have Λ = 0. Inserting these expansions into the field equations (7) and (8) yields, in the limit r → ∞, a set of relations in which all β-dependent terms vanish.
From these formulae, we observe several aspects. First, a power-series solution in 1/r implies only a single independent function f(r), which is the hallmark of GQTGs. Second, for asymptotically flat solutions Λ = 0, which in turn implies Λ0 = 0; we also have h′ = 1, and so the asymptotically flat solution can be immediately obtained from (11) by setting Λ = 0. However, note that the converse is not true: even if Λ0 = 0, it is possible to have asymptotically de Sitter solutions with Λ = 3/√(8|α|), provided that α < 0.
We seek solutions that have the asymptotic form (6), where N(r) is not constant, so as to obtain wormhole solutions. The presence of the wormhole must be manifest at large r in a way that differs from that of a spherically symmetric star or black hole. To this end, we consider the ansatz (14), in which the quantity K = 1 + δ parameterizes the spherical deficit/surfeit angle produced by the wormhole. The effect is analogous to that produced by a global monopole [44,45]: far from the wormhole, all light rays are deflected by the same angle regardless of their impact parameter.
Inserting the ansatz (14) into the field equations yields, from both equations to leading order,

b1 L (24L²α + 1) = 0.    (15)

This equation is satisfied by choosing either b1 = 0 or 24L²α + 1 = 0. However, it is straightforward to show that the next order forces b1 = 0 regardless of the value of 24L²α + 1, and that if this latter quantity is non-zero, then there is no deficit angle and all bi coefficients must vanish, as per the discussion above.
We now see that the condition h′(Λ) = 0 from (13) allows N to be a non-constant function, opening up the possibility of obtaining a wormhole solution. We pursue this in the next section.
III. SERIES SOLUTIONS
Anticipating the asymptotic behaviour (14), we rewrite the general static spherically symmetric (GSSS) ansatz (5) in a compactified form via a coordinate transformation, in terms of metric functions n(x) and g(x) defined on x ∈ [0, 1], with r0 a positive constant.
For wormhole solutions [1], these continuous functions must be everywhere positive in the interior of the domain, with n vanishing and g taking a finite positive value at x = 0, which locates the position of the wormhole throat. Under this map, r → ∞ is compactified to x = 1. With this new ansatz, the boundary condition (6) is equivalent to (20) as x approaches 1. The effect of δ is analogous to that produced by a global monopole [44,45], which deflects light rays by the same angle regardless of their impact parameter.
The advantage of the ansatz (18) is clear: it compactifies the domain so that numerical and semi-analytic solutions become more easily attainable. We now employ this ansatz to obtain series solutions for the functions n and g. The field equations for g(x) and n(x) are given in Appendix A.
A. Large-r Solution
To obtain solutions asymptotic to (20), we substitute a formal power series in (1 − x) into the equations and solve them order by order. Note that we have set b_1 = b_2 = 0 in light of the discussion following condition (15). The lowest two orders yield the two constraints h(a_0) = 0 and (24 α L^2 + 1)(a_2 − 1) = 0. The first of these simply defines Λ_0 in terms of the other parameters. Solutions with a vanishing deficit, a_2 = 1, satisfy the second constraint but force b_{n≥3} = 0 or, in other words, n = 1. The only alternative, non-trivial solution occurs when a_2 ≠ 1, yielding the series solutions (22) and (23), in which Λ_0 and α are replaced by expressions in terms of L via the two constraints h(a_0) = 0 and (24).
The parameters a_0 = L r_0^2, a_2 and b_3 are the only free variables in this solution; we also have n(1) = n_0 = 1. Furthermore, β is an independent coupling parameter. In a non-linear system it can happen that fewer constants of integration appear in a solution than the differential order of the equations. Due to the non-linearity, a spontaneous singularity could appear, which means that the radii of convergence for g and n depend on the initial conditions, namely the values of these parameters. We do not expect vanishing radii of convergence for all values of the parameters, since the series with b_3 = 0 converges to the AdS solution with a deficit.
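The order-by-order logic can be illustrated with a toy Wolfram Language computation; the differential equation below is a hypothetical stand-in, not field equation (A1), and only the mechanics of extracting and solving the coefficient conditions matter:

    (* Truncated ansatz near x = 1 (b1 = b2 already set to zero). *)
    g[x_] := a0 + a2 (1 - x)^2 + a3 (1 - x)^3;
    (* Stand-in ODE; the actual field equation is far more involved. *)
    ode[x_] := (1 - x) g''[x] + 2 g'[x];
    (* Expand about x = 1 and collect coefficients order by order in eps. *)
    coeffs = CoefficientList[Normal[Series[ode[1 + eps], {eps, 0, 2}]], eps];
    (* Each order gives a constraint on the series coefficients. *)
    Solve[Thread[coeffs == 0], {a2, a3}]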
B. Near-Throat Solution
There is likewise a near-throat solution for 24 L^2 α + 1 = 0. Local solutions near x = 0 compatible with (22) and (23) are necessarily Taylor series. In this case, the desired boundary conditions at the throat require a Taylor-series ansatz, which yields the two series (26) and (27), whose coefficients are fully determined by A_1, B_0 and a_0 = L r_0^2. Higher-order terms are very lengthy and cumbersome to write; we present some of them in Appendix B.
Note that in both series solutions (near x = 1 and near x = 0), the independent parameters r_0 and β always appear in the combinations L r_0^2 and L^2 β. Consequently, we can set L = 1 and regard r_0 and β as independent parameters without loss of generality.
IV. MATCHING THE SOLUTIONS
As a consequence of the uniqueness of Taylor series, we expect that the Taylor expansions of (22) and (23) at x = 0 can be matched with (26) and (27), as long as a wormhole solution (analytic on [0, 1]) exists for some values of (β, r_0, a_2, b_3, A_1, B_0). We achieve this matching by minimizing, as a function of these parameters, a quantity ∆ built from the mismatches ∆F ≡ F_∞ − F_th at a matching point x_0, where F runs over the metric functions and their derivatives up to second order. By matching the second derivatives, we ensure that there are no discontinuities in the Riemann curvature.
The presence of the coupling parameter β, irrelevant for asymptotically AdS solutions (with K = 1), has a profound effect insofar as it yields a sufficient amount of freedom in the parameter space to minimize ∆ to high precision: our matching is accurate to one part in 10^15 at worst. Note from (24) that each solution appears for a specific choice of α. We have found a broad range of wormhole solutions using this method; these are illustrated in Figures 1-3, which correspond, respectively, to matching for small, mid-range, and large x_0.
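A minimal toy version of this matching step in Wolfram Language (the two expansions below are stand-ins for the actual series (22)-(23) and (26)-(27), and the matching point x_0 = 1/2 is arbitrary):

    (* Stand-in for the large-x series, with free parameters a2, b3. *)
    fInf[x_] := 1 + a2 (1 - x)^2 + b3 (1 - x)^3;
    (* Stand-in for the near-throat series, with free parameters A1, B0. *)
    fTh[x_] := B0 + A1 x + x^2/2;
    (* Squared mismatch of the value and first two derivatives at x0 = 1/2. *)
    delta = Sum[((D[fInf[x], {x, k}] - D[fTh[x], {x, k}]) /. x -> 1/2)^2, {k, 0, 2}];
    (* Minimize the mismatch over the free parameters of both series. *)
    NMinimize[delta, {a2, b3, A1, B0}]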
V. CONCLUSIONS
We have shown that Einsteinian Cubic Gravity contains wormhole solutions that are purely gravitational. Unlike wormholes obtained in generic higher-curvature gravity theories, our solutions are in (3 + 1) dimensions and require no exotic matter in their construction.
In contrast to previous solutions obtained in the theory, our wormhole solutions require three special characteristics. One is that their asymptotic behaviour is that of AdS spacetime with a global monopole deficit. The second is that the coupling parameter α is related to the effective cosmological constant via (24). The third is that the coupling parameter β ≠ 0. Our wormhole solutions have no horizons or singularities and so are traversable in principle. Imposing more stringent traversability requirements (such as requiring that the gravity at the throat not exceed Earth's gravity) will reduce the range of allowed solutions; we have not imposed this constraint on the solutions that we have obtained.
Although our series-matching approach has yielded candidate gravitational wormholes, a full wormhole solution to the field equations (A1) and (A2) remains to be obtained. This can be done numerically, but it presents a computational challenge. A solution in terms of Chebyshev polynomials requires many coefficients to obtain high accuracy, and we were not able to achieve this using the computational resources available. While it is straightforward to apply the shooting method to an ODE system, here we have a double shooting problem, which is considerably more challenging. The solution necessarily depends on tuning the constants of integration appearing in the local series expansions of the metric functions near x = 1 such that the solution integrated from x = 1 satisfies the boundary condition at the other end (or vice versa, beginning at x = 0). We did attempt such solutions but found that we typically encountered at least one spontaneous singularity of g between x = 0 and x = 1. Some of these cases might be indicative of a new class of black holes, which merits further investigation.
Files used to generate the various graphs presented in this paper are available on request.
ACKNOWLEDGMENTS
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. Mengqi Lu was also supported by the China Scholarship Council. We would like to thank Niayesh Afshordi, Jianhui Qiu, and Robie Hennigar for helpful discussions.
Appendix A: On-Shell Field Equations

Using the ansatz (18), field equations (7) and (8) become, for γ ≠ 0, Equations (A1) and (A2). The coefficients B_4 and A_5 require about 5 and 10 pages, respectively, to present, so we do not present them here.
"Physics"
] |
FlexibleSUSY extended to automatically compute physical quantities in any Beyond the Standard Model theory: Charged Lepton Flavor Violation processes, Higgs decays, and user-defined observables
FlexibleSUSY is a framework for the automated computation of physical quantities (observables) in models beyond the Standard Model (BSM). This paper describes an extension of FlexibleSUSY which allows users to define and add new observables that can be enabled and computed in applicable user-defined BSM models. The extension has already been used to include Charged Lepton Flavor Violation (CLFV) observables, but further observables can now be added straightforwardly. The paper is split into two parts. The first part is non-technical and describes, from the user's perspective, how to enable the calculation of predefined observables, in particular CLFV observables. The second part of the paper explains how to define new observables such that their automatic computation in any applicable BSM model becomes possible. A key ingredient is the new NPointFunctions extension, which allows tree-level and loop calculations to be used in the model-independent setup of observables. Three examples of increasing complexity are fully worked out. This illustrates the features and provides code snippets that may be used as a starting point for the implementation of further observables.
Introduction
When exploring the parameter space of Beyond the Standard Model (BSM) theories, researchers frequently employ software packages to automate both intricate calculations and parameter scans. Nevertheless, these software tools are often restricted to a limited range of models closely related to the Standard Model (SM), such as the Minimal Supersymmetric Standard Model (MSSM), the Two-Higgs-Doublet Model (2HDM), and their extensions involving higher-dimensional operators. This covers only a slice of the intriguing possibilities spanned by the broad set of conceivable models.
To address this limitation, FlexibleSUSY [1,2] was created to be usable for a broad class of supersymmetric (SUSY) and non-SUSY models, providing a tool for the comprehensive investigation of diverse theoretical scenarios. It is a software application primarily implemented in the Wolfram Language [7] and C++, based on SARAH [3,8-10] and components from SoftSUSY [11,12], designed to produce an efficient and accurate C++ spectrum generator (a program searching for a consistent set of model parameters and calculating the pole mass spectrum and a set of observables) for physical models specified by the user.
The produced C++ program applies user-defined boundary conditions at up to three distinct energy scales within the model, incorporating Renormalization Group Equation (RGE) evolution between these scales. It further generates a collection of mixing matrices, pole masses, and auxiliary quantities. Recent versions of FlexibleSUSY have introduced the computation of several important observables that are suitable for phenomenological investigations and comparisons with experimental data. In particular, we highlight the extensions FlexibleAMU and FlexibleCPV introduced in Ref. [2] (responsible for the calculations of the anomalous magnetic moment and Electric Dipole Moment (EDM) of leptons), an update of FlexibleMW on the precise calculation of the W-boson pole mass from Ref. [13], and FlexibleDecay [14] (a tool to calculate decays of scalars in a broad class of BSM models).
A key point of these observables is that they are integrated into FlexibleSUSY such that they are ready to be computed for any desired BSM scenario FlexibleSUSY is applied to. To achieve this, the mentioned extensions store information about the observables in suitable Wolfram Language and C++ meta code, which is then automatically converted into actual C++ code specifically for each BSM scenario.
So far, new observables were added by individually modifying the internals of FlexibleSUSY; hence, users could not add new observables in such a model-independent way. In the present paper, we explain a new FlexibleSUSY 2.8 design structure which solves this problem. It allows new observables to be integrated at the meta-code level, and it provides powerful options to define and fine-tune the computation of new observables without having to touch the internals of FlexibleSUSY.
A number of new observables have already been integrated using this new structure (they correspond to various Charged Lepton Flavor Violation (CLFV) processes), and in the future further observables may be integrated either by FlexibleSUSY developers or by users of FlexibleSUSY.
To streamline and modularize the integration of new observables into FlexibleSUSY, an extension named NPointFunctions [15] was developed. This extension serves to automate the calculation of amplitudes, and of other quantities that rely on them, for any high-energy model supported by FlexibleSUSY. In essence, NPointFunctions incorporates a well-defined approach based on the widely used packages FeynArts [16], FormCalc [17], and ColorMath [18] into FlexibleSUSY (up to technical implementation details to be mentioned later in the appropriate sections).
The article is separated into two parts, depending on the reader's interests:

1. Section 2 describes how to install and use FlexibleSUSY to calculate any of the available observables. This section is of interest for all users of FlexibleSUSY who may want to switch on the computation of observables. Reading it does not require knowledge of the internal structure of FlexibleSUSY. For physical insights about the new CLFV observables, the reader is invited to consult Appendix A.
2. Section 3 presents details on how to implement new observables. It provides a general outline and background information relevant for all observables, and it covers three specific examples of increasing complexity. It thus illustrates the range of possibilities and equips users with code snippets which can be used as a basis for further developments. Interesting features improving the functionality of FlexibleSUSY are described separately in Appendix B.
Available observables and how to calculate them with FlexibleSUSY
Observables that are currently available in FlexibleSUSY are shown in Table 1. This section shows the reader how to calculate them with FlexibleSUSY.
Installation
There are no changes to the mandatory installation steps with respect to the previous FlexibleSUSY version of Refs. [2,14]. All missing dependencies will be highlighted by FlexibleSUSY during the execution of the configure script (their list, and hints about their installation, can be found at the developer's repository; see the program summary). To use observables that rely on the NPointFunctions module (in particular, the CLFV ones), FeynArts and FormCalc must be installed.
Outputting a defined observable with FlexibleSUSY
To switch on the calculation of a desired, predefined observable O_i (for example, bsgamma or LToLConversion from Table 1) for a physical model M_a (like the SM or the MSSM), one needs to modify either model_files/M_a/FlexibleSUSY.m or models/M_a/FlexibleSUSY.m.

Table 1: Observables currently available in FlexibleSUSY.

| Observable | FlexibleSUSY name | Loop level | Hints and comments |
| ∆a_ℓ | -- | -- | -- |
| EDM of a lepton | EDM | -- | Electric dipole moment of a lepton [2]; see Table 2 and Appendix A.2 |
| µ-e conversion | LToLConversion | 0-1 | See Table 3 and Appendix A.3 |
| b → sγ | bsgamma | 1 | -- |
| -- | FlexibleDecay | known SM (0-4), LO (0-1) for BSM | Decays of scalars [14] |

These files define the C++ spectrum generator output in the SUSY Les Houches Accord (SLHA) [19,20] or the Flavour Les Houches Accord (FLHA) [21] formats. The first file mentioned above is used by the script createmodel (see Sections 3.7.1-3.7.3) to create both the directory models/M_a/ and the second file, while the latter is executed by the commands configure and make that create the C++ spectrum generator itself. This means that, to always include the desired observables after the directory models/M_a/ is purged or cleaned, one modifies the first file.
The required modification of the file FlexibleSUSY.m is in the list ExtraSLHAOutputBlocks. Each observable that should be computed and appear in the spectrum generator output needs a corresponding entry which specifies the details of the observable and how it should appear in the output. For example, µ-e conversion is switched on in the Minimal R-Symmetric Supersymmetric Standard Model (MRSSM) [22] (for a particular FlexibleSUSY configuration of this model) by an entry of the kind sketched below. The numerical value of µ-e conversion after the execution of the spectrum generator will be stored in SLHA format under the user-chosen number 41 in the block FlexibleSUSYLowEnergy (see meta/WriteOut.m). More details on the arguments of LToLConversion are provided in Table 3.
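A sketch of such an entry in models/M_a/FlexibleSUSY.m (the argument list of LToLConversion is assumed here for illustration only; see Table 3 for the actual arguments):

    (* Sketch: enable µ-e conversion output under entry 41 of
       FlexibleSUSYLowEnergy; the LToLConversion arguments are assumptions. *)
    ExtraSLHAOutputBlocks = {
       {FlexibleSUSYLowEnergy,
          {{41, FlexibleSUSYObservable`LToLConversion[Fe, 1 -> 0, Al, All, 1]}}
       }
    };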
To numerically calculate all added observables (apart from FlexibleDecay), the observable calculation must be enabled in the SLHA input file for the C++ spectrum generator.

[Overview of the relevant files (see also Figure 1): the C++ template files @O_i_filename@.hpp.in and @O_i_filename@.cpp.in reside in templates/observables/; the generated files M_a_@O_i_filename@.hpp and M_a_@O_i_filename@.cpp reside in models/M_a/observables/; the coefficients are defined in the files meta/Observables/O_i/WriteOut.m.]
File structure of new observables O_i
Two new FlexibleSUSY features are described in this paper: a way to add new observables to FlexibleSUSY (implying new auxiliary files), and a way to generate C++ code to numerically calculate amplitudes that are required by these observables (the NPointFunctions extension). This section addresses both extensions and explains how to implement new observables with the help of toy examples and code snippets from already implemented observables.
To create a spectrum generator, FlexibleSUSY usually first runs Wolfram Language code (located in meta/) to obtain model-specific expressions for mass matrices, self-energies, amplitudes, etc. The obtained expressions are converted to C++ form and are filled into C++ template files (located in templates/) to generate model-specific C++ code (located in models/M_a/). Figure 1 shows the files relevant for the calculation of observables. The top left block shows Wolfram Language files in meta/ that contain general routines and observables implemented in previous versions of FlexibleSUSY. The top right block shows the corresponding observable-specific Wolfram Language files (located in meta/Observables/O_i/) and the necessary C++ template files (located in templates/observables/). The bottom block shows the generated C++ files that are eventually compiled and combined with other C++ files into the final spectrum generator executable (models/M_a/run_M_a.x).
In the following, by creating a new observable we mean the generation of a specific C++ function that calculates the observable within the final spectrum generator. That is, one needs to create several C++ code blocks in appropriate places of the generated C++ spectrum generator, as the following template listing with observable- and model-specific blocks shows.

Listing 4: C++ template code to calculate an observable for a specific model M_a.
    // 1. Function definition in Ma_@Oi_filename@.cpp:
    namespace Ma_@Oi_namespace@ {
       @Oi_output_type@ @Oi_calculate(...)@ {
          // Complicated function body generated during the meta phase
          // and specific for Oi and Ma
       }
    }
    // 2. Internal calculation of the observable:
    observables.@Oi_name@ = Ma_@Oi_namespace@::@Oi_calculate(...)@;
    // 3. Writing the observable in Les Houches format to an output stream:
    block << FORMAT_*(observables.@Oi_name@, ...);

In the C++ source code snippet above, @O_i_name@ is replaced by the name of the observable and @O_i_output_type@ is its C++ type. The @O_i_calculate(...)@ pattern will be replaced by the name of the function that performs the calculation, which is defined in the M_a_@O_i_namespace@ namespace. The body of the function depends on the observable and the model under consideration.
In the following subsections it is described how the expressions that replace the different patterns in the above C++ template code snippet are determined. The files where these expressions are determined are the Wolfram Language files located in the directory meta/Observables/O_i/, which will be described in the following subsections. In general, to define a new observable one creates five new files filled with the content described in the following steps:

1. In Section 3.1 we show how to define the C++ name of the observable, the observable's C++ type, etc. (done in the file meta/Observables/O_i/Observables.m).
2. In Section 3.2 we show how to connect the calculation of the observable to the spectrum generator output (done in the file meta/Observables/O_i/WriteOut.m).
3. In Sections 3.3-3.4 we describe how to fill the body of the function that calculates the numeric value of the observable for a given parameter point (done in the meta-phase file meta/Observables/O_i/FlexibleSUSY.m in Section 3.3 and in two C++ template files in Section 3.4).
4. As a last step, we show how to modify the model-specific FlexibleSUSY model file named models/M_a/FlexibleSUSY.m to register the observable for the model under consideration.
Each subsection begins with general explanations and definitions. Then three recurring examples are used for illustration. The first example illustrates the definition of an observable with minimal complexity: the output of a single numerical constant, where the numerical constant is not hard-coded but can be specified separately for each model FlexibleSUSY is applied to. The second example outputs the value of fermion masses, where the choice of which fermions to select can be specified separately for each model. The third example outputs the value of one-loop self-energies and thus illustrates how to use loop calculations in the definition of observables.
Content of the file O_i/Observables.m

General case
The creation of a new observable O_i starts with the file meta/Observables/O_i/Observables.m. This file is supposed to define the C++ tokens, such as the observable name (@O_i_name@) or the observable's C++ type (@O_i_output_type@). The file contains only one call to the function DefineObservables, which defines all C++ tokens. The general structure of this function call (Listing 5: General content of meta/Observables/O_i/Observables.m) is sketched below.
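A schematic sketch of the general shape of this call, based on the description that follows (the rule-style syntax for the arguments is an assumption; the observable name O_i and the parameters parA, parB are placeholders):

    (* Sketch of the single DefineObservables call in Oi/Observables.m. *)
    DefineObservables[FlexibleSUSYObservable`Oi[parA_, parB_],
       GetObservableName        -> "name_with_parAparB",
       GetObservableDescription -> "description mentioning parA and parB",
       GetObservableType        -> {1},
       GetObservablePrototype   -> "calculate_oi_parA(int parB)"
    ];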
All further arguments of the function DefineObservables define how the C++ tokens (for example, @O_i_output_type@) for the observable O_i are replaced in the code from Listing 4. Note that the names of the parameters defined for the observable (like parA, parB, etc.) are replaced in the strings by the values given in the model-specific FlexibleSUSY model file (the only exception: arguments in the prototype). For example, if one writes FlexibleSUSYObservable`O_i[Fe,3] in the FlexibleSUSY model file, then the strings parA and parB are replaced by Fe and 3, respectively, in all strings on the r.h.s. of the function DefineObservable, so that, for example, the execution of the function GetObservableName produces "name_with_Fe3" (see footnote 4). The meaning of the two other mandatory arguments of the function DefineObservables is the following: GetObservableType defines the C++ type of the observable and replaces @O_i_output_type@. In general, it can be any (real or complex) scalar, array or matrix C++ type. The syntax {1} in Listing 5 defines a complex-valued array of length 1 (corresponding to the C++ type Eigen::Array<std::complex<double>,1,1>). See also BrLToLGamma/Observables.m for another example. Note that array, vector or matrix types are convenient for storing Wilson coefficients, debugging information, etc. As described later, the connection between the observable type and the FlexibleSUSY Les Houches output is specified in the file meta/Observables/O_i/WriteOut.m.
GetObservablePrototype defines the C++ prototype for the function that calculates the observable (@O_i_calculate(...)@). Note that the function name is modified with the values of the parameters of the first argument of the function DefineObservable (parA is replaced with the value of parA_ in Listing 5), while the function arguments are kept intact (parB is kept; see footnote 5). Let us now illustrate how to create new observables in FlexibleSUSY, starting from the usage of the function DefineObservable in the Observables.m file. The examples are of increasing complexity and will be continued in the next subsections.
Example 1: output a given number
Let us create an observable, called ExampleConstantObservable, that outputs a numerical constant specified by the user in the FlexibleSUSY.m file, so that the value of the numerical constant is not hard-coded in the body of the observable C++ code but rather configured with the Wolfram Language files. This example demonstrates the basic usage of the function that calculates an observable and allows us to familiarize ourselves with the workflow, as no physical calculation is required. First, one creates the meta/Observables/ExampleConstantObservable/ directory. Afterwards one creates the file Observables.m inside this directory; a sketch of its content is given below. As in the general case discussed above, the function call of DefineObservables defines the Wolfram Language symbol of the observable to be ExampleConstantObservable, and it specifies that the observable depends on one parameter, num_; the meaning of this parameter is the value of the numerical constant to be output. Accordingly, as the C++ return type for the function that calculates the observable, we use an array of size one in GetObservableType. GetObservablePrototype defines the prototype of the function that performs the calculation, and we use double num as function argument.

Footnote 4: More advanced ways to name the C++ code parts are supported: 1. One may use an expression like "$(parA+1)", which evaluates the content inside the parentheses after substitution of the observable pattern values (parA_). 2. The description of the observable in the SLHA output can be generated from its name automatically by replacing underscores with spaces; this can be redefined by GetObservableDescription. 3. Both the names for C++ templates and the C++ namespace can be automatically generated from the Wolfram Language name of the observable by inserting an underscore before capital letters (or before numbers in front of them) and lowering the letter case. If the generated replacement for @O_i_namespace@ (@O_i_filename@) leads to undesired side effects, then one may use GetObservableNamespace (GetObservableFileName) to override the default behaviour. 4. GetObservableName uniquely defines the C++ name of the observable, which replaces @O_i_name@; this name is used to store the numerical value of the observable internally and is generated automatically but can be overridden.

Footnote 5: The following two additional function arguments may be added: auto model and auto qedqcd. The first allows access to all numeric quantities of the model (model parameters, masses, etc.). The second provides access to known low-energy input observables.
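A sketch of this file's content (Listing 6 in the original numbering), following the same assumed rule-style syntax as the Listing 5 sketch above; the function name calculate_example_constant_observable is taken from Section 3.4.1:

    (* Sketch of meta/Observables/ExampleConstantObservable/Observables.m. *)
    DefineObservables[FlexibleSUSYObservable`ExampleConstantObservable[num_],
       GetObservableType      -> {1},  (* Eigen::Array<std::complex<double>,1,1> *)
       GetObservablePrototype -> "calculate_example_constant_observable(double num)"
    ];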
Example 2: show fermion masses
Let us turn to a slightly more involved example to illustrate the usage of model-specific quantities in the calculation of an observable and the possibility of filling the C++ template files with model-specific content. As an example, we define an "observable" which can output several masses whose values are determined in FlexibleSUSY. Specifically, we choose to allow the output of the Modified Minimal Subtraction Scheme (MS)/Dimensional Reduction Scheme (DR) mass of a user-selected fermion at a given renormalization scale and/or the pole mass of a SM lepton. Again, similarly to the example above, the fermion field and its generation number will be specified by the user in the FlexibleSUSY.m file, so that these values are not hard-coded in the body of the observable C++ code but are configured with the Wolfram Language files.
To set up the observable, we create the directory meta/Observables/ExampleFermionMass/ and the file Observables.m within it; a sketch of its content is given below. Again, the function call of DefineObservables first defines the Wolfram Language symbol of the observable to be ExampleFermionMass, and it specifies that the observable depends on two arguments, merged into one Wolfram Language function. We assume that the concrete name of the fermion to output will be defined in the FlexibleSUSY model file models/M_a/FlexibleSUSY.m, and that it has the form fermion[gen], where fermion is the name of the fermion multiplet and gen is its index in the multiplet. Therefore, the observable ExampleFermionMass has the form of a function with the fermion_[gen_] pattern as argument, which matches the multiplet name with fermion_ and the index in the multiplet with gen_. We would like to return two numerical values, corresponding to the MS/DR mass of the fermion fermion[gen] at a given renormalization scale and the pole mass of a SM lepton of generation gen. Hence we define the observable type to be an array of length 2 in GetObservableType. The prototype of the function that calculates the observable is defined in GetObservablePrototype. The function takes the index of the fermion in the multiplet as its first argument. The remaining two arguments, model and qedqcd, allow access to the model parameters, masses, mixing matrices, etc. In the name of the function prototype ex_fermion_mass, the string fermion is replaced by the value of the variable fermion, which may be Fe, Fd, etc., depending on the user-selected model M_a. As we plan to calculate different masses depending on the SLHA output block (see Section 3.2.2), we implement the description of the observable as specified with GetObservableDescription, to have an explicit indication of the generated numerical values. There, the strings fermion and gen are replaced by the user-specified values of fermion and gen, respectively.
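A sketch of this file's content (Listing 7 in the original numbering), in the same assumed syntax; the argument types in the prototype are illustrative:

    (* Sketch of meta/Observables/ExampleFermionMass/Observables.m. *)
    DefineObservables[FlexibleSUSYObservable`ExampleFermionMass[fermion_[gen_]],
       GetObservableType        -> {2},  (* MS/DR mass and SM-lepton pole mass *)
       GetObservablePrototype   -> "ex_fermion_mass(int gen, auto model, auto qedqcd)",
       GetObservableDescription -> "fermion(gen) running mass and lepton(gen) pole mass"
    ];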
Example 3: lepton self-energy with NPointFunctions
As a third example, let us implement an observable to calculate the self-energy of a user-specified lepton at the one-loop level. This example illustrates how to use the NPointFunctions extension in both simple and advanced ways, as well as how to output quantities in the FLHA format. As in the previous examples, we start by defining the observable and a suitable C++ function prototype that allow us to select the lepton (from a multiplet) whose self-energy shall be calculated and the contributions that should be taken into account. We call the observable ExampleLeptonSE; it takes the multiplet name (field_) and the index of the particle in the multiplet (gen_) as its first argument, as in the previous example. Furthermore, we would like to be able to select a subset of the full one-loop contribution. To achieve this, we provide a second parameter (contr_), which we will use later to calculate only a certain part of the full one-loop self-energy. In general, a fermion self-energy can be split into different covariants involving left-handed or right-handed projection operators P_{L,R} and possibly the covariant \slashed{p}, where p^µ is the external fermion momentum. For simplicity, in this example we output only the coefficients of the two covariants \slashed{p} P_L and \slashed{p} P_R, so we choose an array of length 2. The C++ function needs the index of the particle in the multiplet (gen) and the model object (model) as arguments.
Content of the file O_i/WriteOut.m

General case
After the definition of the observable name, type, etc. in the file O_i/Observables.m, one can now connect the numerical value(s) of the observable to the Les Houches output of the C++ spectrum generator in the file meta/Observables/O_i/WriteOut.m. Currently, there are two automated ways to output the results of calculations in FlexibleSUSY: via the SLHA format (available since FlexibleSUSY 1.0) and via the FLHA format (whose usage is explained in this article). Both are defined in the meta/WriteOut.m file, which provides functions to write numbers, arrays or matrices to specified output blocks.
In the simplest case, the value of an observable O_i[parA, parB, ...] is a single complex number (see Examples 1-2 below or the ℓ_i → ℓ_j γ decay; their Les Houches output is shown in Sections 3.7.1-3.7.2). If we want to write its real part to the FlexibleSUSYLowEnergy SLHA output block, we define a function called WriteObservable in meta/Observables/O_i/WriteOut.m (a concrete sketch is given in Example 1 below). The first argument of the WriteObservable function is a string with the SLHA output block name (here: "FlexibleSUSYLowEnergy"), which reflects the SLHA block name in the FlexibleSUSY model file models/M_a/FlexibleSUSY.m. Since the observable type is a complex-valued array of length 1 in this example, we have to output the 0-th element of the array (in the C++ index convention). We obtain the real part of the array element by applying the Re function.
As another example, we consider the output of the real parts of several Wilson coefficients (see also Example 3 below and its Les Houches output in Section 3.7.3). This requires a more involved definition of the function WriteObservable (the head of the definition is reconstructed here from the description below):

    WriteObservable["FWCOEF", obs : FlexibleSUSYObservable`Oi[___]] :=
       StringReplace[
          {
             "fermions1, operator1, Oalpha1, Oalphas1, contributions1, num_value, \"comment1\"",
             "fermions2, operator2, Oalpha2, Oalphas2, contributions2, num_value, \"comment2\"",
             ...
          },
          {
             "num_value" -> "Re(observables." <> Observables`GetObservableName[obs] <> ")",
             ...
          }
       ];

We define the function WriteObservable to write the real parts of the Wilson coefficients to the FWCOEF Les Houches output block. The return value of WriteObservable is a list of strings, with each string consisting of a comma-separated tuple of a fermion name (fermions1), an operator name (operator1), etc., which are replaced by appropriate values. See Ref. [21] for a description of the FLHA format. The numbering convention for the FLHA format in FlexibleSUSY is the following: Wilson coefficients must occupy the positions from the end of the C++ output array. In the definition of the function WriteObservable, the substring "num_value" is replaced by the real parts of the Wilson coefficients (accessed via GetObservableName) via the function StringReplace. Note that there is no need to explicitly specify the numbering of the Wilson coefficients in the replacement rule for "num_value", as this is done automatically.
Example 1: output a given number
We continue our minimal Example 1 from Section 3.1.1, where the observable is defined to be just a user-defined numeric constant specified in the FlexibleSUSY model file. The second step is to connect the numeric value of the observable to the SLHA (or FLHA) format, which is the output of the spectrum generator. This is done by a definition placed in the file meta/Observables/ExampleConstantObservable/WriteOut.m; a sketch is given below. As the first argument of the function WriteObservable shows, the numeric value of the observable is written to the SLHA output block FlexibleSUSYLowEnergy, which matches a corresponding definition in the model file models/M_a/FlexibleSUSY.m. The function returns a string (which must be valid C++ syntax), where we take the real part of the first entry from the array of length 1 in which the observable is stored (with the zero-based index convention in C++). Since no information about the observable is needed for the output, one uses the _ pattern in the specification of ExampleConstantObservable. This definition leads to the usage of the C++ parser for the SLHA format named FORMAT_ELEMENT, see Listing 4.
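A sketch of this definition (the C++ element-access syntax in the returned string is an assumption):

    (* Sketch of meta/Observables/ExampleConstantObservable/WriteOut.m. *)
    WriteObservable["FlexibleSUSYLowEnergy",
       obs : FlexibleSUSYObservable`ExampleConstantObservable[_]] :=
       "Re(observables." <> Observables`GetObservableName[obs] <> "(0))";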
Example 2: show fermion masses
Now we turn back to Example 2 from above and connect our observable (two given fermion masses) to the output of the spectrum generator. Similarly to Example 1, one creates the file meta/Observables/ExampleFermionMass/WriteOut.m and places the definitions of the function WriteObservable there. In this example we would like to output two different masses at the same time. To store them internally, we have defined the observable type to be an array of length 2; see the specification of GetObservableType in line 3 of Listing 7. To write the two fermion masses to the output, one could proceed as sketched below. Here, we define two distinct behaviours of our observable with two different Les Houches output blocks (which must be reflected by appropriately named lists in models/M_a/FlexibleSUSY.m): "FlexibleSUSYLowEnergy" and "ExampleLeptonMass". In both cases, the definitions lead to the C++ parser named FORMAT_ELEMENT, see Listing 4. The fermion mass in the first entry of our two-component observable array is written to the block FlexibleSUSYLowEnergy, while the second entry is written to the block ExampleLeptonMass. Note that the name of this second block is not part of the official SLHA standard, but is a non-standard addition we use for this example.
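A sketch of the two definitions, in the same assumed syntax as the Example 1 sketch:

    (* Sketch of meta/Observables/ExampleFermionMass/WriteOut.m:
       entry 0 of the array goes to FlexibleSUSYLowEnergy,
       entry 1 to the non-standard block ExampleLeptonMass. *)
    WriteObservable["FlexibleSUSYLowEnergy",
       obs : FlexibleSUSYObservable`ExampleFermionMass[_]] :=
       "Re(observables." <> Observables`GetObservableName[obs] <> "(0))";

    WriteObservable["ExampleLeptonMass",
       obs : FlexibleSUSYObservable`ExampleFermionMass[_]] :=
       "Re(observables." <> Observables`GetObservableName[obs] <> "(1))";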
Example 3: lepton self-energy with NPointFunctions
In Example 3, we aim to output the two components of a lepton self-energy. We can use the automatic FLHA output format to achieve this; the definition follows the FWCOEF pattern shown in the general case above. In the definition of WriteObservable we provide a pattern for the index of the lepton in the lepton multiplet (gen_) explicitly in the first argument of the observable name ExampleLeptonSE, because the concrete value of this index must be known to select the appropriate self-energy for the output.
Content of the file O_i/FlexibleSUSY.m

General case
In the previous sections it was described how to define a new observable (done in the file Observables.m) and how to write the numeric value of the observable to the Les Houches output of the spectrum generator (defined in the file WriteOut.m). In this section we describe how to generate the content of the C++ function that calculates the numeric value of the observable. This C++ function is defined in the C++ template files located in templates/observables/, which contain placeholders that are replaced by expressions to calculate the observable. The rules that specify how to replace the placeholders by appropriate expressions are defined in the observable-specific file meta/Observables/O_i/FlexibleSUSY.m. This file contains at least the function WriteClass, which performs the following tasks:

1. Fill the C++ templates from templates/observables/ with appropriate C++ code.
2. Move the filled C++ templates into models/M_a/observables/.

3. Return a list of replacement rules that can be re-used later (see Task 3 below).
The function WriteClass has three parameters: the explicit name of the observable (obs), the list of all observables that are requested to be calculated (allObs_), collected via the variable ExtraSLHAOutputBlocks from the file models/M_a/FlexibleSUSY.m, and a list of the C++ template file names in which the replacements should be performed, together with the corresponding output file names (files_). The body of the function WriteClass consists of three parts: 1. The first part is the If statement, where all C++ expressions are created (Task 1). In the corresponding source code listing (Listing 14), the prototype strings are created by replacing the "@type@" and "@prototype@" tokens by the observable's C++ type and by the prototype of the function that calculates the numeric value of the observable, respectively. 2. In the second part (Task 2 indicated in the listing), the C++ tokens ("@npf_headers@", "@npf_definitions@", etc.) are replaced in the C++ template files (files) that are passed to the function, with the help of the ReplaceInFiles function.
3. The third part (Task 3) specifies the function's return value. In the example above, the function returns a list of replacement rules, whose use is described below.
Note that in Task 2 the tokens in the C++ template files can be replaced by strings that can in principle be arbitrarily large. However, we recommend putting as much generic information as possible into the C++ template files and replacing the tokens in the template files only by model-specific information. For the latter, we recommend using the full power of FlexibleSUSY's helper routines and functions located in meta/TextFormatting.m, meta/CConversion.m and meta/Utils.m. We also refer the reader to Section 3.5, where we describe the NPointFunctions extension and how it can be used to generate C++ code for amplitudes.
The expression returned by the WriteClass function (Task 3) should be a list of replacement rules, which gets stored internally in the variable ObservablesExtraOutput["O_i"]. The replacement rules stored in the returned list can be accessed and re-used later, if needed; for example, one can access the expression stored in the rule defined in line 40 of Listing 14. Besides the possible manual re-use of the returned expressions, FlexibleSUSY automatically performs the following two tasks with the returned list of replacement rules: 1. If there exists a replacement rule of the form "C++ vertices" -> list1, then the expression list1 must be a list of vertices that are required to calculate the observable. Each vertex is represented by a list of SARAH fields, e.g. {bar[Chi], Chi, VP} in the MRSSM (representing the χ⁰_a χ⁰_b γ vertex with two neutralinos and a photon). For each required vertex, FlexibleSUSY creates a corresponding C++ function for its numerical evaluation when calculating the numeric value of the observable. 2. If there exists a replacement rule of the form "C++ replacements" -> list2, then the expression list2 must be a list of replacement rules that are applied to all C++ template files; see Appendix B.3 for an example.
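A sketch of such a returned list (the vertex is the MRSSM example mentioned above; the token @example_token@ is hypothetical):

    (* Sketch of a possible return value of WriteClass (Task 3). *)
    {
       "C++ vertices"     -> {{bar[Chi], Chi, VP}},
       "C++ replacements" -> {"@example_token@" -> "/* model-specific C++ code */"}
    }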
Enabling optional NPointFunctions extension, simple settings
If the observable relies on tree-level or one-loop Feynman diagrams, then there is an automated way to generate them in FlexibleSUSY with the help of the NPointFunctions extension already announced in Ref. [15]. NPointFunctions is typically used from within the WriteClass function. Internally, the extension calls FeynArts [16], FormCalc [17], and ColorMath [18] to generate analytic expressions and converts them into C++ form for their numeric evaluation in FlexibleSUSY. It thus provides access to Feynman diagrammatic computations in the definition of observables.
NPointFunctions can be used in two modes, which we refer to as the simple and advanced modes. The modes differ by the accessible settings. Both types of settings serve to modify the calls of the FeynArts and FormCalc routines: in the simple mode only the settings listed in Table 4 are accessible, which allow topology-independent modifications. The advanced mode allows many more settings, enabling in particular the selection of specific option values for selected topologies. In the present section, we focus on the simple settings from Table 4. These simple settings are also illustrated with the help of Example 3 in Section 3.3.3. The advanced settings are explained in detail in Section 3.5 and exemplified in the dedicated Section 3.5.7.
ZeroExternalMomenta can currently be True, False, OperatorsOnly, or ExceptLoops. This option also specifies the way external masses are treated for specific topologies, fermion bispinors, and loop integrals. If set to True, all external momenta are set to zero everywhere, while if set to ExceptLoops, the scalar products in loop integrals are kept. This allows one to correctly implement the expressions for self-energy-like diagrams, which are relevant for CLFV processes in particular.
UseCache, if enabled, stores the FeynArts and NPointFunctions output for future use. This speeds up the code generation if models/M_a/ is purged or parts of the meta code are modified.
KeepProcesses allows selecting certain topologies to keep (cf. Section 3.3.3). Observable specifies the mode of NPointFunctions: there are two modes to calculate an amplitude, using simple settings only (Observable -> None) or with the help of the advanced settings defined in the meta/Observables/O_i/NPointFunctions.m files (Observable -> O_i[]); see Section 3.5 and footnote 8.
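For orientation, here is a sketch of a simple-mode call for the Fe -> Fe amplitude of Example 3, assuming the option names of Table 4 and the function name NPointFunctions`NPointFunction (exact spellings and signatures may differ):

    (* Sketch: one-loop Fe -> Fe amplitude in the simple mode. *)
    npf = NPointFunctions`NPointFunction[{Fe}, {Fe},
       NPointFunctions`UseCache            -> False,
       NPointFunctions`OnShellFlag         -> True,
       NPointFunctions`ZeroExternalMomenta -> NPointFunctions`ExceptLoops,
       NPointFunctions`LoopLevel           -> 1,
       NPointFunctions`Observable          -> None,
       NPointFunctions`KeepProcesses       -> {Irreducible}
    ];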
Example 1: output a given number
Each observable undergoes various calculational steps before being evaluated into an actual numerical result. In general, the definitions of the calculational steps are distributed between the Wolfram Language file meta/Observables/O_i/FlexibleSUSY.m and the C++ template files discussed below. Specifically, the WriteClass function in the FlexibleSUSY.m file is supposed to generate model-specific expressions to be placed into the C++ template files from Section 3.4. As stated before, in general it is recommended to put as much information as possible into the C++ template files and keep the WriteClass function minimal. In the case of Example 1, however, the example is so simple that we let WriteClass generate the complete C++ code for all calculations (just outputting a number in this case), so the C++ template files will not contain much code. This code generation is done with the definition of

    definitions = TextFormatting`ReplaceCXXTokens["
       @type@ @prototype@ {
          @type@ res {num};
          return res;
       }",
       {
          "@type@"      -> CConversion`CreateCType[Observables`GetObservableType[#]],
          "@prototype@" -> Observables`GetObservablePrototype[#]
       }
    ] & /@ observables;
    ...
    (* Task 3: return an empty list *)
    {}

Footnote 8: The simple mode with the simple settings discussed here was developed mainly to give users a simple starting point. It is also used in unit testing of FlexibleSUSY; currently, FlexibleSUSY performs unit tests of self-energy expressions in the SM/MSSM and of Z-boson penguins in the MRSSM. It is also used in Example 3 to demonstrate the basic usage of the NPointFunctions package, and it can serve to write first iterations of code that relies on NPointFunctions. Typically, the code for actual physics observables such as the ones discussed in Section 2 will use the advanced settings of NPointFunctions.
In the above source code listing, the variable definitions is defined to contain the entire definition of the function that calculates the observable (including the function body). It is generated by replacing, in a generic string, the @type@ token by the concrete C++ type Eigen::Array<std::complex<double>,1,1> and the @prototype@ token by the function name (including its argument double num), as specified by the definitions of Section 3.1.1. In the body of the function, we initialize a local variable res of type @type@ and set its value to the value of num. The value of res is then returned to further fill the SLHA block entries. The final function definition in the definitions variable will be used later in Section 3.4.1, together with the C++ template files, to construct the complete C++ code that outputs the numerical value of the observable.
The WriteClass function finally returns an empty list (Task 3), because we do not need any of the defined expressions anywhere else in this example.
Note that the function definition generated in Task 1 is later put into the C++ template files in templates/observables/ and is thus closely connected to their content, as shown in Section 3.4.1.
Example 2: show fermion masses
Now we return to Example 2, where we output fermion masses. In this example we follow the general recommendation to put as much C++ code as possible into the C++ template files and as little as possible into FlexibleSUSY.m. Thus, in this example we fill the following code into the WriteClass function in the FlexibleSUSY.m file of Listing 14:

    definitions = TextFormatting`ReplaceCXXTokens["
       @type@ @prototype@ {
          return forge<@type@, fields::@fermion@>(gen, model, qedqcd);
       }",
       {
          "@fermion@"   -> SymbolName[Head[First[#]]],
          "@type@"      -> CConversion`CreateCType[Observables`GetObservableType[#]],
          "@prototype@" -> Observables`GetObservablePrototype[#]
       }
    ] & /@ observables;
    ...
    (* Task 3: return an empty list *)
    {}

Again, the variable definitions contains, for each observable, the definition of the C++ function that calculates its numerical value. This function has the name and type that were specified via GetObservablePrototype and GetObservableType, respectively. In this example the function body contains only a single line: a call of the template function forge. By this call we delegate the computation of the observable to the forge function, which is defined entirely at the C++ level in Section 3.4.2. The same strategy is applied in several of the predefined observables discussed in Section 2, such as ℓ_i → ℓ_j ℓ_k ℓ_k^c or µ-e conversion. The template parameter fields::@fermion@ of the forge function above is an internal C++ type that represents the fermion whose mass shall be output (in the MRSSM, for example, the token @fermion@ is replaced by Fe).
Again, we do not want to use any expressions defined in the WriteClass function anywhere else, so we return an empty list from the function (Task 3).
Example 3: lepton self-energy with NPointFunctions (Observable -> None)
This example illustrates the use of the NPointFunctions extension of FlexibleSUSY. We will use the NPointFunctions extension in the simplified mode, where we do not make use of the advanced settings in NPointFunctions.m and just specify the option Observable -> None. The following code snippet shows the content of the WriteClass function (elided parts are marked by ...):

    definitions = TextFormatting`ReplaceCXXTokens["
       @type@ @prototype@ {
          const auto npf = npointfunctions::@name@(model, {gen, gen}, {});
          ...
       }",
       {
          ...
          "@prototype@" -> Observables`GetObservablePrototype[#],
          "@name@"      -> name
       }
    ] & /@ observables;

First of all, as NPointFunctions returns the same code for different generations of external particles, we remove potential duplicates by replacing the integer generation with the _ pattern for simplicity (it could have been any symbol or number); see Task 1a.
Afterwards, we run NPointFunctions (Task 1b) to generate the relevant C++ code and all required C++ vertices. For this we create a local function (defined as a Module), which we map over all observables in the observables list. Note, however, that in more involved calculations it may be better to define a non-local, separate function to process all observables, instead of a local one via a Module, as done here. The local function that obtains the C++ code to numerically evaluate the observable(s) stores its results in the following local variables:

field represents the lepton particle that we specify in the definition of the observable (like Fe, for example, in the SM or MRSSM).
contr contains an expression that allows one to select certain contributions from the self-energy that should be output. In this example we do not make use of this selection feature; see Section 3.5.7 for an advanced example.
npf contains the main result of NPointFunctions: an object with generic amplitudes and, sometimes, replacement rules. See Table 4 for a list of all possible options. In this example, we calculate the Fe -> Fe amplitude, from which we will extract the self-energy (note that the outgoing particle is typed as Fe and not bar[Fe]). We do not store the results in the cache. OnShellFlag is used internally by the option FormCalc`OnShell of the function FormCalc`CalcFeynAmp. ZeroExternalMomenta is passed to the function FormCalc`OffShell. The loop level is explicitly specified to be 1. The renormalization scheme is left to be set by FlexibleSUSY.
We chose a simple operation mode of the NPointFunctions extension by setting Observable -> None. This implies that no optional file O_i/NPointFunctions.m with advanced settings is used; see Section 3.5 for more details and the usage example in Section 3.5.7. The option KeepProcesses allows selecting certain topologies, provided by the list of default values within the function GetExcludeTopologies in meta/NPointFunctions/Topologies.m.
To extend this list, we refer to the usage of FeynArts`$ExcludeTopologies.
name contains the C++ name of the NPointFunctions-generated function that calculates the numerical value of the observable.
basis is a list of replacement rules used to extract certain sub-expressions of the amplitude and store them in local C++ variables. The r.h.s. of each replacement rule defines the expression whose prefactor should be extracted. The l.h.s. defines the name of the C++ variable in which this prefactor shall be stored. Note that patterns are currently not supported for selecting the sub-expression. The replacement rules stored in the basis variable should be passed to the InterfaceToMatching function, which performs the extraction of the prefactors.

cxxVertices contains the list of all vertices that are required to numerically calculate the observable. They have to be returned from the WriteClass function (see Task 3), because FlexibleSUSY stores the C++ code for the vertices in other files.
npfDefinitions contains strings with all generated amplitude-related C++ functions required for the numerical evaluation of the amplitude. The function CreateCXXFunctions converts the NPointFunctions object into C++ code. The user provides the name of the C++ function and specifies which color structures are expected (Identity or SARAH`Delta).
definitions contains the C++ function that calculates the numerical value of the observable with the help of the C++ code created by the NPointFunctions extension. The function body consists of a call to the generated irr_se function (the function name is stored in the name variable), to which the model parameters (the model object) and the indices of the incoming and outgoing particles (both gen here) are passed. The last argument contains the set of momenta of the external particles; in the current implementation external momenta are assumed to be zero, so the last function argument should be an empty initializer list {}.
Since basis contains two replacement rules that define prefactors of certain Lorentz structures, irr_se returns a two-valued array. The generated function eventually returns the two components of this array.
In Section 3.5.7 the example described here is extended to use advanced NPointFunctions options.
Content of the C++ template files

General case
The function WriteClass, defined in the file meta/Observables/O_i/FlexibleSUSY.m in the previous section, fills C++ templates with model-specific information and places the resulting files into the directory models/M_a/. The content of the C++ template files is discussed in this section.
The numerical calculation of the observable might require further auxiliary C++ functions or classes. To use extra functions/classes, one can add appropriate preprocessor #include directives.
Note again that there is some freedom to move code between O_i/FlexibleSUSY.m and the C++ template files @O_i_filename@.hpp.in and @O_i_filename@.cpp.in. As stated before, we recommend moving as much C++ code as possible into the dedicated directory templates/observables/, as C++ source code is often more robust than Wolfram Language scripts due to static type checking.
Example 1: output a given number
To output a single number, we create the two C++ template files in templates/observables/ stated above. They will be filled by the WriteClass function defined in the FlexibleSUSY.m file created in Section 3.3.1. The content of the header template file example_constant_observable.hpp.in is the same as in the general Listing 19. In principle, we could fill the C++ definitions template file with the default content provided in Listing 20. Nevertheless, as we do not use the NPointFunctions module for the observable calculation (realized by lines 3-6 in Listing 16), we can simplify the file as follows:

Listing 21: Content of the C++ definitions template example_constant_observable.cpp.in.

    #include "@ModelName@_@filename@.hpp"

    namespace flexiblesusy::@namespace@ {

    @calculate_definitions@

    } // namespace flexiblesusy::@namespace@

In this file, the template C++ token @calculate_definitions@ will be replaced with the content of the variable definitions defined in the file O_i/FlexibleSUSY.m described in Section 3.3.1. In this way, during the meta phase of FlexibleSUSY, the complete C++ code will be generated for the function calculate_example_constant_observable (defined in Listing 6), which outputs the number num. We do not need to provide any additional observable-specific code, as everything is already provided by the WriteClass function in O_i/FlexibleSUSY.m during the Wolfram Language meta phase.
Example 2: show fermion masses
As stated above, we need to provide two C++ template files in templates/observables/. They will be filled during the FlexibleSUSY meta phase and embedded into the rest of the C++ spectrum generator. In this example, the content of the template header file example_fermion_mass.hpp.in is the same as in the general Listing 19. The template file example_fermion_mass.cpp.in, however, contains two changes compared to the generic example from Listing 20. First of all, all commands related to NPointFunctions can be omitted, since we do not use Feynman diagrammatic calculations in this example (as in Example 1). Apart from this, our goal with this example is to demonstrate how one can move as many calculations as possible to the C++ template file, while keeping the flexibility to insert model-specific information. As discussed in Section 3.3.2, we prepared for this by having a minimal function body defined at the Wolfram Language level in the file ExampleFermionMass/FlexibleSUSY.m, where the function body only calls another C++ function named forge. This function shall now be defined. The following source code listing shows the content of the C++ template file that contains this definition:

Listing 22: Content of the C++ definitions template example_fermion_mass.cpp.in.

    #include "@ModelName@_mass_eigenstates.hpp"
    #include "cxx_qft/@ModelName@_qft.hpp"
    #include "@ModelName@_@filename@.hpp"
    #include "error.hpp"

    namespace flexiblesusy {
    using namespace @ModelName@_cxx_diagrams;
    namespace @namespace@ {

    template <typename RTYPE, typename FIELD>
    auto forge(int idx, const @ModelName@_mass_eigenstates& model,
               const softsusy::QedQcd& qedqcd)
    {
       context_base context {model};
       auto context_mass = context.mass<FIELD>({idx});
       std::complex<double> lepton_mass;
       switch (idx) {
          case 0: lepton_mass = qedqcd.displayPoleMel();   break;
          case 1: lepton_mass = qedqcd.displayPoleMmuon(); break;
          case 2: lepton_mass = qedqcd.displayPoleMtau();  break;
          default: throw OutOfBoundsError("fermion index out of bounds");
       }
       RTYPE res {context_mass, lepton_mass};
       return res;
    }

    @calculate_definitions@

    } // namespace @namespace@
    } // namespace flexiblesusy

In the code listing above, the template C++ token @calculate_definitions@ will be replaced by the content of the variable definitions, defined in the ExampleFermionMass/FlexibleSUSY.m file in Section 3.3.2, which calls the forge function. The forge function defined above illustrates how to access two different types of particle masses. First, the running MS/DR mass of the fermion specified by FIELD and idx (which can correspond to, e.g., Fe[2] or Fd[3] at the Wolfram Language level in the selected BSM model) is obtained by calling the mass function template. Afterwards, the pole mass of the SM lepton of generation idx is obtained from the qedqcd object. The forge function finally returns an array of length 2 containing these two masses.
Example 3: lepton self-energy with NPointFunctions
As in the examples above, we have to create two C++ template files that are responsible for the numerical calculation of the observable. This example uses NPointFunctions to generate analytical expressions for the self-energies, but it is otherwise structurally simple. Hence, as in Example 1 (but unlike Example 2), we do not delegate computations to a forge function; rather, the main definition of the observable is done by the function WriteClass in the FlexibleSUSY.m file created in Section 3.3.3. We recall that in this example we use NPointFunctions in the simple mode specified by the option Observable -> None; the example will be continued in Section 3.7.3, which shows how to enable and use the computation of the self-energy in a concrete spectrum generator. An alternative version of the example is provided in Section 3.5.7, where the example is modified to illustrate the use of the advanced settings of NPointFunctions.
Content of the optional file O i /NPointFunctions.m, advanced settings
The NPointFunctions extension allows one to generate Feynman diagrammatic calculations in order to obtain analytical expressions for observables in any specific model M_a when the FlexibleSUSY meta phase is executed. The simple usage of NPointFunctions was demonstrated in Section 3.3.3, but frequently it is necessary to fine-tune calculations done with NPointFunctions. Such fine-tuning is possible via the advanced settings of NPointFunctions explained in this section. It may involve selecting subsets of Feynman diagrams, removing contributions, selecting the regularization scheme, or specifying the fermion order in, e.g., four-fermion amplitudes. Parts of the fine-tuning may be accessible to users of the observable (such as the arguments Vector, Scalars, etc. of observables in Tables 2-3); other parts may be found only in the definition of the advanced settings.
In the following we begin by explaining how to enable the advanced settings and give an overview; we then explain all advanced settings and finally provide an example.
Enabling advanced settings, overview
We begin by explaining how to enable and access the advanced settings when setting up the definition of an observable. The NPointFunctions extension is called from the observable-specific file O_i/FlexibleSUSY.m discussed in Section 3.3. Listing 18 combined with Listing 14 provides an example for using NPointFunctions in its simple mode without advanced settings. To use the advanced mode, this file must be modified as in Listing 23. There, the line Observable -> O_i[] enables the advanced settings (note that the variable obs was defined in the first line of Listing 14). This option requires the existence of the corresponding file meta/Observables/O_i/NPointFunctions.m, which should contain the detailed advanced settings described further below.
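A hedged sketch of the modified call follows. Only the two option lines are dictated by the text above; the external fields and the remaining options are placeholders carried over from the simple mode:

npf = NPointFunctions`NPointFunction[{lep, lep}, {lep, lep},
   NPointFunctions`Observable -> obs,   (* obs stands for the O_i[...] expression from Listing 14 *)
   NPointFunctions`KeepProcesses -> If[ListQ@contr, contr, {contr}]
   (* further options exactly as in the simple mode of Listing 18 *)
];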
The second line defines the option KeepProcesses via the variable contr (instead of the hard-coded value Irreducible in Listing 18) and opens up an important way to access advanced settings. To explain the meaning of this line, we briefly recall the overall structure of setting up an observable. From the user's perspective, an observable is ultimately called as described in Section 2, and observables may have options contr (see, e.g., Tables 2-3), with possible values being a keyword or a list of keywords such as Vectors, Scalars, Boxes, etc. Using these options influences how the observable is evaluated. The definition of such keywords and how they influence the observable is part of the advanced settings of NPointFunctions.
Using such a contr option has to be prepared in the file O_i/Observables.m, as exemplified in Section 3.1.3, by specifying the appropriate argument in the definition of the observable. Via line 9 of Listing 18 and line 3 of Listing 23, the variable contr propagates into the option KeepProcesses. The construction in the code makes sure that the option is always a list.
The advanced settings are listed in Table 5 and explained in detail in the following Sections 3.5.2-3.5.6. As an overview, NPointFunctions relies on FeynArts and FormCalc to produce Wolfram Language expressions for model-specific class-level amplitudes, and the advanced settings fine-tune the calls to FeynArts and FormCalc routines, so some familiarity with these tools is required. Among the advanced settings, topologies, diagrams and amplitudes can be used to define the keywords which can be set by users as values of contr, as explained above; these keywords then influence the execution of the functions FeynArts CreateTopologies, FeynArts InsertFields, and FormCalc CalcFeynAmp. The further advanced settings influence the calls to FeynArts and FormCalc and manipulate their output in ways which are fixed for the observable and can be influenced only by directly modifying the file O_i/NPointFunctions.m.
topologies
The observable calculation by the NPointFunctions extension starts from the generation of topologies. It is possible to select particular topologies in the calculation by using the option KeepProcesses in line 3 of Listing 23, as explained above. The possible values of contr are (lists of) keywords: Vectors, Scalars, TreeLevelSChannel, etc.
In this section we focus on how to define such keywords, generically called Contribution_i, and how to associate them with topologies of Feynman diagrams via FeynArts topology names. The latter uniquely define topologies through their adjacency matrices, as shown in Figure 2. This explicit naming of topologies is useful, as the names can also be used to modify diagrams and amplitudes, as described in the following subsections.
The name of a topology can be any Wolfram Language symbol that has to be connected with the actual adjacency matrix. All relevant topology names can be defined in a separate file meta/NPointFunctions/Topologies.m, as follows:

Listing 24: Example for definitions of topologies (in meta/NPointFunctions/Topologies.m).
1. Special syntax. See the text description of Figure 2. One draws the FeynArts topology with explicit vertex numbers, creates an adjacency matrix, and eliminates all entries with redundant information: the zero top-left submatrix representing propagators between external particles, and the entries below the diagonal, as the adjacency matrix is symmetric. The rest is combined line by line into a one-dimensional list that is stored in meta/NPointFunctions/Topologies.m. These steps are done by the function AdjaceTopology defined there.
2. A connection between a set of simple topologies and their group name, like treeAll. This is useful to shorten the application of other settings.
If the topology names are defined like this, an example content of the file O_i/NPointFunctions.m might define the keyword TreeLevelSChannel and associate it with the topology treeS as follows:
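A schematic association, assuming the Table 6 syntax with an illustrative loop number and mode:

topologies[0, Present] = {
   TreeLevelSChannel -> {treeS}
};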
diagrams (generic level modifications)
The keywords Contribution_i can also be associated with generic-level field patterns (in FeynArts nomenclature) to remove certain generic-level fields in the Feynman diagrams. Table 7 shows the required command in the file O_i/NPointFunctions.m to define the associations with field patterns. The logic is the same as for topologies, but the main new ingredient is the Command_b option. It defines the kind of generic fields that should be removed, and it should be a pure function which returns True or False and which takes several arguments while traversing the tree of FeynArts insertions: the FeynArts TopologyList, the current topology, and the current class-level insertion. Apart from standard Wolfram Language expressions, the following ingredients may be used to specify Command_b:

• Fields appearing in tree parts of diagrams are specified by TreeFields.
• Fields appearing in loops are specified by LoopFields.
• Any generic field of scalar type FeynArts S, fermion type FeynArts F or vector boson type FeynArts V.
• A specific field, derived from external particles. For example, one can remove from the diagram fields that correspond to the external particles numbered 1 or 3 by using the argument FieldPattern[#, 1|3] inside the function FreeQ below.
Altogether, a first example is given by the setting sketched below. With it, the function call NPointFunction[..., KeepProcesses -> {Vectors}] deletes the diagrams with the topology penguinT when the external particles 1, 3 appear inside the loop. A second example, demonstrating the Absent mode of WhenToApply, is also sketched below: if KeepProcesses -> {Vectors} is specified (thus Scalars is not specified), the corresponding rule applies and eliminates diagrams of penguinT topology which contain scalar particles in their tree-level parts.
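The two settings might look schematically as follows; the pure functions are assumptions consistent with the description above, not the verbatim source:

(* First example: with Vectors kept, drop penguinT diagrams whose loop *)
(* contains a field derived from external particle 1 or 3.             *)
diagrams[1, Present] = {
   Vectors -> {penguinT -> FreeQ[LoopFields[##], FieldPattern[#, 1|3]]&}
};

(* Second example (Absent mode): if Scalars is not kept, drop penguinT *)
(* diagrams with scalar generic fields (S) in their tree-level parts.  *)
diagrams[1, Absent] = {
   Scalars -> {penguinT -> FreeQ[TreeFields[##], S]&}
};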
amplitudes (class level modifications)
Often, the modifications at the level of topologies and generic-level amplitudes allowed by the topologies and diagrams settings, together with the other options of Table 4, provide sufficient fine-tuning. Sometimes, however, removing amplitudes at the class level is required. This is possible by specifying the amplitudes[LoopNumber, WhenToApply] setting. The syntax is identical to that of diagrams and is given in Table 7. As an example, amplitudes with massless particles can require special treatment, and a setting like the one sketched below allows one to remove them from the generated expressions (assuming they will be handled properly elsewhere):
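A sketch consistent with this description; the keyword and the predicate are hypothetical:

(* Drop class-level penguinT amplitudes containing a massless vector,  *)
(* assuming such contributions are handled properly elsewhere.         *)
amplitudes[1, Present] = {
   Vectors -> {penguinT -> FreeQ[LoopFields[##], V]&}
};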
Table 8: Syntax and options for chains. Files in column "Hints" are located in meta/NPointFunctions/.

  Abbreviation   Values                                Hints
  LoopNumber     0 or 1                                As in Table 6
  Momenta_i      any Symbol from ZeroExternalMomenta   See Table 4
  Rule_j         Special syntax                        See DiracChains.m
order, chains (Dirac algebra modifications)
NPointFunctions can be used to extract Wilson coefficients. For observables with external fermions, there may be a need to change or fine-tune fermion chains to achieve the desired operator structure. This is done by the settings order and chains, with which both the desired order of the external fermions and the chains to be dropped are specified.
The syntax for order can be understood as follows. Imagine that we would like to obtain Wilson coefficients corresponding to the process µ⁻ → e⁻e⁻e⁺, or more conveniently for the crossed 2 → 2 process µ⁻e⁻ → e⁻e⁻, where the outgoing positron is replaced by an incoming electron. That means we need to obtain coefficients of expressions ū(e⁻) Γ_i u(µ⁻) ū(e⁻) Γ_j u(e⁻), where the Γ represent structures involving Dirac matrices and momenta. For example, in the source code for the observable BrLTo3L, the NPointFunctions extension is called accordingly, with the variable lep corresponding to leptons during the meta phase execution for concrete models. Interpreting the incoming particles as muon and electron, we specify the required fermionic structure by the following line in the file NPointFunctions.m (the field numbers are as in FeynArts InsertFields):

order[] = {3, 1, 4, 2};

Once the fermionic order is fixed by order, and amplitudes are calculated, one obtains multiple chains consisting of Dirac matrix products. Often certain Dirac chains may be simplified or neglected, thanks to the desired level of precision or the choice of basis of Wilson coefficients. Dropping specific types of Dirac chains can be implemented by using the chains setting described in Table 8. An example which serves to neglect expressions proportional to the mass of the electron from the example above is sketched below. In general, the Rule_j appearing on the right-hand side (or in the general case of Table 8) must be of the form

ChainNumber[Entry1, Entry2, ...] -> 0

The syntax can be described as follows (the source code which interprets these expressions is in the file meta/NPointFunctions/DiracChains.m, and further details can be found there): ChainNumber is an integer which defines a chain number, e.g. for the fermion order {3,1,4,2} the chain numbered 1 consists of particles {3,1} and the chain numbered 2 of {4,2}.
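A loudly hypothetical sketch of such a rule, following the ChainNumber[Entries...] -> 0 form of Table 8; the concrete entries depend on the FormCalc-like notation defined in meta/NPointFunctions/DiracChains.m:

chains[1, momentaSetting] = {   (* momentaSetting: a Symbol allowed by ZeroExternalMomenta, see Table 4 *)
   2[7, k[2]] -> 0              (* hypothetical: drop structures in chain {4,2} proportional to m_e *)
};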
Entry_1 corresponds to a pattern for a Dirac chain, which may be a non-commuting product of projection operators P_{L,R}, γ^µ matrices with open indices, or γ^µ matrices contracted with external momenta. We mimic FormCalc notation inside the Dirac chains so that, for instance, the integers 6 and 7 denote the chirality projectors.
Other modifications
Here we briefly describe and exemplify a set of settings which are typically of minor importance.
In some cases, one should not sum over all generations for some generic field in the amplitude. For example, in CLFV observables there often appear penguin contributions with external self-energy-like corrections. In such diagrams, the fermion in the tree-level propagator should differ from the external fermion. This behavior can be defined via the setting sum, sketched below. This changes the expressions at the loop level LoopNumber = 1 in the following way: for the topology identified as inSelfT, the summation over generic fields in the propagator under FeynArts number 6 is modified, so that the sum over the field being equal to the first external particle is omitted at the C++ level.
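A schematic form; the right-hand-side syntax is an assumption:

sum[1] = {
   inSelfT -> {6 -> FieldPattern[#, 1]}   (* in propagator 6, omit the generation equal to external particle 1 *)
};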
The setting momenta can be used to eliminate certain external momenta by using momentum conservation for a given topology, for instance as sketched below. This modifies the options for the function FormCalc CalcFeynAmp, so that for LoopNumber = 1 and topology penguinT the momentum of the second external particle is replaced by the momentum-conservation expression.
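Schematically (syntax assumed by analogy with the other settings):

momenta[1] = {
   penguinT -> {2}   (* replace the momentum of external particle 2 via momentum conservation *)
};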
The setting mass allows one to neglect masses and to improve the readability of the code, if desired; two examples are sketched below. The first example appends an additional replacement rule that allows the use of the explicit expression for the mass of the particle in the generic propagator. The second one prevents a simplification which would otherwise lead to incorrect amplitude expressions; see the explicit implementation for more details.
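Two schematic rules matching the two described examples; every name on the right-hand side is a placeholder:

mass[F, 1] = {
   inSelfT -> ExplicitMass           (* use the explicit mass expression in the generic fermion propagator *)
};
mass[V, 1] = {
   penguinT -> ProtectFromSimplify   (* prevent the unsafe simplification mentioned above *)
};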
Finally, regularization overrides the option FormCalc Dimension for given topologies, as sketched below. This option might be useful to obtain the desired Wilson coefficients faster or more conveniently: e.g., sometimes the option chains might be skipped, as the default FormCalc setting has already produced the required expressions.
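For instance, overriding the scheme for a single topology; the value mirrors the FormCalc Dimension option (D or 4):

regularization[1] = {
   penguinT -> D
};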
Example 3: lepton self-energy with NPointFunctions (Observable -> O_i[])
Let us exemplify how one enables the usage of the advanced settings for NPointFunctions by modifying Example 3, which calculates fermion self-energies. We continue from Section 3.3.3 and Section 3.4.3, where the self-energy calculation was defined by using only the simple mode of NPointFunctions. In order to use the advanced settings, we first need to apply modifications in the file FlexibleSUSY.m analogous to Listing 23. The first lines are as explained in Section 3.5.1 and enable the advanced settings, including the variable contr, which later allows users to select different Feynman diagrams for the evaluation of the self-energies via keywords corresponding to different Contribution_i. In the last line of the listing, the name of the function is changed to reflect the different possible values of the variable contr. The goal in this example is to allow users to compute "self-energies" by including either only one-particle-irreducible diagrams, or only diagrams with a tadpole part, or both. Hence we intend to use the advanced settings to define three keywords and associate them with the appropriate topologies via the topologies setting described in Table 6.
In the following we describe a typical workflow by which one might interactively obtain the relevant information to create the advanced settings file meta/Observables/O_i/NPointFunctions.m which achieves this. We begin with the definition of the topologies we want to enable/disable for the calculation.
As the self-energy process is 1 → 1, we start by looking for already defined topologies inside meta/NPointFunctions/Topologies.m in the form AllTopologies[{1, 1}]. Currently, this definition is missing, so its specification becomes our first task. We can open a Mathematica notebook and evaluate code like that sketched below (Listing 27) to figure out which one-loop 1 → 1 topologies exist in general and which ones are of interest to us.
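A sketch of such a notebook cell. CreateTopologies and Paint are standard FeynArts routines; the variable name topologies is chosen to match Listing 28:

Needs["FeynArts`"];
topologies = CreateTopologies[1, 1 -> 1];  (* all one-loop 1 -> 1 topologies *)
Paint[topologies, Numbering -> Simple];    (* draw them for visual inspection *)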
The printed output is similar to the content of the first column in Figure 3. In this example, we are not interested in the first topology, while the others represent what we would like to include. We need to convert the FeynArts representation of the topology to the NPointFunctions one; this can be done as shown in Figure 3 or by executing the code of Listing 28 in a Mathematica notebook:
AdjaceTopology[topologies[[2]]]
AdjaceTopology[topologies[[3]]]
Let us name the obtained topologies as in Figure 3 and add their definitions to the observable-independent file meta/NPointFunctions/Topologies.m, where the last line creates a synonym for both topologies combined. Now we can finally define the content of the advanced settings file for this observable; a sketch of both steps is given below. This achieves the goal. Now the observable ExampleLeptonSE has an option contr, like all the examples in Section 2. This option may be set to the values Tadpoles, Sunsets, or Fermi. If some model-specific configuration file models/M_a/FlexibleSUSY.m contains the option Tadpoles, then only the tadpole topology will be included, and similarly for Sunsets. Once we specify Fermi (or {Tadpoles, Sunsets}), both topologies will be used to compute the (generalized) self-energy. In this setup, Sunsets has the same effect as the Irreducible setting from the simplified example.
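Both steps might look as follows. The topology names follow Figure 3; the adjacency lists (represented by placeholders here) are the output of AdjaceTopology from Listing 28, and the exact form of the definitions is an assumption:

(* In meta/NPointFunctions/Topologies.m: *)
tadpoleSE = adjacencyOfTadpole;   (* placeholder for the AdjaceTopology output *)
sunsetSE  = adjacencyOfSunset;    (* placeholder for the AdjaceTopology output *)
AllTopologies[{1, 1}] = {tadpoleSE, sunsetSE};

(* In meta/Observables/ExampleLeptonSE/NPointFunctions.m: *)
topologies[1, Present] = {
   Tadpoles -> {tadpoleSE},
   Sunsets  -> {sunsetSE},
   Fermi    -> {tadpoleSE, sunsetSE}
};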
Content of the optional file O i /FSMathLink.m
The spectrum generators created by FlexibleSUSY can be called from within a Wolfram Language notebook or kernel. The necessary definitions to output the numerical values of the generated observables at the Wolfram Language level are done in the general file meta/FSMathLink.m. For a specific observable O_i one can create an optional file meta/Observables/O_i/FSMathLink.m to specify the desired interface via the function PutObservable. We refer the reader to the existing example files shipped with FlexibleSUSY for more details.
General case
To activate the C++ code generation for a desired observable with FlexibleSUSY, one carries out the same steps as for predefined observables in Section 2 (though now it is clear where all definitions come from). For a user-selected model M_a one modifies models/M_a/FlexibleSUSY.m by changing ExtraSLHAOutputBlocks. The pattern of observable O_i is defined inside the file meta/Observables/O_i/Observables.m, while the Les Houches blocks where this observable can be placed are defined inside meta/Observables/O_i/WriteOut.m. Finally, by executing make one obtains the desired C++ spectrum generator.
Example 1: output a given number
In summary, to add the new observable corresponding to Example 1 to FlexibleSUSY, one needs to create the several files described above. After all this is done, the observable can be used in the same way as the predefined observables discussed in Section 2, and we proceed as described in that section.
We first need to choose a desired physical model to perform the calculations (to continue this example we choose the SM), create it via ./createmodel --name=SM, then configure FlexibleSUSY to make the C++ spectrum generator for this model via ./configure --with-models=SM.
Then, two model-specific settings files have to be modified to enable the computation of the observable ExampleConstantObservable. To be specific, we modify the file with the meta-level model settings, models/SM/FlexibleSUSY.m, as sketched below. In this way we specify that the observable is called twice, with two different arguments (the arguments simply correspond to the numeric constants which should be printed in the output), and that the output will be part of the block FlexibleSUSYLowEnergy with numbers 1 and 2, respectively. Finally, the calculation of all observables is enabled in the runtime model-specific SLHA input file. The execution of the make command runs the meta phase and compiles the final C++ spectrum generator. FlexibleSUSY provides shell scripts which merge these steps, as in Listing 33 (executed in a terminal from the FlexibleSUSY directory).
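A hedged sketch of the meta-level setting. The ExtraSLHAOutputBlocks structure follows the usual FlexibleSUSY model-file syntax; the observable arguments 42 and 43 are illustrative stand-ins for the two numeric constants:

ExtraSLHAOutputBlocks = {
   {FlexibleSUSYLowEnergy,
      {{1, FlexibleSUSYObservable`ExampleConstantObservable[42]},
       {2, FlexibleSUSYObservable`ExampleConstantObservable[43]}}}
};

In the runtime file models/SM/LesHouches.in.SM, the corresponding switch in the FlexibleSUSY block (cf. Listing 2) is then set to enable the calculation of all observables.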
In the resulting SLHA output we see the appearance of the block numbers and the values of the numeric constants, in agreement with the definitions given above.
Example 2: show fermion masses
In order to add the observable of Example 2 to FlexibleSUSY and obtain a SM spectrum generator including the output of this observable, one needs to go through analogous steps. Again, FlexibleSUSY is shipped with a script which merges all steps:

Listing 34: Execute in terminal from the FlexibleSUSY directory.
./createmodel --name=SM
./configure --with-models=SM
./examples/new-observable/make-observableSM-example-2
make
./models/SM/run_SM.x --slha-input-file=models/SM/LesHouches.in.SM

After executing these steps successfully, the SLHA output will contain the results corresponding to four instances of the observable ExampleLeptonMass (where the fermion index 1 here means 2nd generation due to the C++ numbering convention):

Block FlexibleSUSYLowEnergy Q= 1.73340000E+02
   1   1.04187667E-01   # Fe[1] (lepton[1] if in Block ExampleLeptonMass) mass
   2   5.71258815E-02   # Fd[1] (lepton[1] if in Block ExampleLeptonMass) mass
Block ExampleLeptonMass Q= 1.73340000E+02
   1   1.05658357E-01   # Fe[1] (lepton[1] if in Block ExampleLeptonMass) mass
   2   1.05658357E-01   # Fd[1] (lepton[1] if in Block ExampleLeptonMass) mass

Here the file models/SM/FlexibleSUSY.m defines the four instances of the observable: they are distinguished by their argument (either Fe[2] or Fd[2]) and by the Les Houches block (either FlexibleSUSYLowEnergy or ExampleLeptonMass). We recall that, in the definition of the observable, the running MS/DR mass of the fermion in the BSM model described by the argument and the SM lepton pole mass of the specified generation (in this case generation 2) are returned, see Section 3.4.2. We also recall that the output depends on the Les Houches block, see Section 3.2.2. This explains the output given above, where the FlexibleSUSYLowEnergy block shows the MS/DR masses of the muon and the strange quark in the BSM model, while the ExampleLeptonMass block shows twice the muon pole mass from the qedqcd object.
Example 3: lepton self-energy with NPointFunctions
Technically, there are two versions of this example, which differ by O_i/FlexibleSUSY.m: using the simple mode of NPointFunctions as in Section 3.3.3, or using the advanced mode as in Section 3.5.7. To add the advanced version of the observable of Example 3 to FlexibleSUSY and obtain a SM spectrum generator, one needs to go through steps analogous to the previous examples. Again, FlexibleSUSY provides a script which combines all steps:

Listing 37: Execute in terminal from the FlexibleSUSY directory.
Conclusions
In this paper, we describe two essential new features of FlexibleSUSY: a streamlined approach for incorporating new observables into FlexibleSUSY, and a simplified way to generate the C++ code for Feynman diagrams essential for the computation of these observables and other physical quantities. To enhance the accessibility of this article we have divided the content into two distinct parts.
The first part is concise, requires no detailed knowledge of the code, and is directed to readers interested in the computation of predefined FlexibleSUSY observables. The currently predefined observables include CLFV observables such as ℓ_i → ℓ_j γ, µ-e conversion and ℓ_i → ℓ_j ℓ_k ℓ_k^c, whose physics definitions are summarized in the appendix. Any user of FlexibleSUSY can now enable the computation of these observables for any desired model by just adding the appropriate flag in the model files.
The second part deals with the procedure of implementing new observables. It is more extensive and technical, and is of interest to those seeking insight into the internal structure of the NPointFunctions code used for the creation of new observables. In essence, the implementation of a new observable requires five files (three Wolfram Language and two C++ template files) which essentially define, in a model-independent way, how the observable is computed, how it is called and how its output is organized. The second part also illustrates the utilization of the NPointFunctions extension to streamline the generation of C++ code for Feynman diagrams. This is achieved with the help of FlexibleSUSY-specialized wrappers designed for the FeynArts, FormCalc, and ColorMath packages.
Three fully worked-out examples are provided. They correspond to a minimal "observable" which simply outputs a single number, an observable which outputs a set of particle masses, and an observable which outputs the values of one-loop self-energy diagrams. The code snippets can be used as efficient starting points for future implementations of further observables. In the appendix, additional features and details are discussed which may be useful to accommodate special properties of new observables. They correspond to actual use cases motivated by implementing, e.g., the b → sµµ or h → gg decays.
The FlexibleSUSY extensions presented here have been thoroughly cross-checked and used for concrete phenomenological applications in non-SUSY and SUSY models. Ref. [23] uses and validates CLFV observables in a leptoquark model, where some observables arise at the one-loop level and some at tree level. Refs. [24,25] provide validations of loop-induced CLFV observables in a model of neutrino masses where several neutrino masses are themselves loop-induced. Finally, extensive validations have been carried out in the context of a non-trivial SUSY realization by comparing with the results of Ref. [26] on CLFV phenomenology in the MRSSM. These applications demonstrate the reliability and versatility of the code for observables defined via Feynman diagrammatic calculations across a broad spectrum of models and a wide variety of observables. In the future, it is planned to add further observables to the default distribution of FlexibleSUSY. In addition, users may add observables individually. Finally, we introduce the possibility to request the implementation of desired observables via GitHub issues; see the developer's repository.
The observable defined in this example calculates amplitudes required for the h → gg process (but not the branching ratio itself) with the help of NPointFunctions. After the NPointFunctions calculations are done, one is left with amplitudes containing many abbreviations. From physics we know that the computed amplitudes must contain several covariant structures. For h → gg, the possible covariants are

\epsilon_2^\mu \epsilon_{3\mu}, \quad \epsilon_2^\mu p_{3\mu}, \quad \epsilon_3^\mu p_{2\mu}, \quad \varepsilon^{\alpha\beta\mu\nu}\,\epsilon_{2\alpha}\,\epsilon_{3\beta}\,p_{2\mu}\,p_{3\nu}, \qquad (B.1)

where \epsilon_i is the polarization vector of the corresponding gluon and \varepsilon is the Levi-Civita tensor.
It is the coefficients of these structures which are relevant for the calculation of the branching ratio. Hence we need to extract the prefactors of these structures. This can be performed as shown in Listing 40. There, we apply all remaining sub-expressions in line 1, then abbreviate all basis structures in lines 2-8, and extract the coefficients that come as prefactors of the second argument of InterfaceToMatching in line 9 (similarly to line 28 in Listing 18). Finally, the C++ code definitions are generated by the function NPFDefinitions in line 14 (similarly to line 32 in Listing 18; note that NPFDefinitions may accept strings that differ from the last argument of InterfaceToMatching):

npf = WilsonCoeffs`InterfaceToMatching[npf,
   {"eps", "e2e3", "e2m3", "e3m2"}];
...
AppendTo[npfDefinitions,
   NPointFunctions`NPFDefinitions[npf, "cpp_name", SARAH`Delta,
      {"eps", "e2e3", "e2m3_e3m2"}]];

In principle, one can apply the routines from the code snippet above to processes with external fermions as well. This might be relevant, in particular, if one prefers to ignore the setting chains from Section 3.5.5 and deal with Dirac chains in some other way.
Table 1: All observables currently supported by FlexibleSUSY.

  lep          symbol    Leptons ℓ in ℓ_i → ℓ_j ℓ_k ℓ_k^c in the SARAH model (Fe in SM or MRSSM)
  iI, iJ, iK   integer   Generations i, j, k in ℓ_i → ℓ_j ℓ_k ℓ_k^c, starting from 1
Listing 2: In models/M_a/LesHouches.in.M_a. All added observables are enabled. One can also output the Wilson coefficients used in the derivation of LToLConversion or BrLTo3L. To do that, one places the corresponding observable into the FWCOEF (IMFWCOEF) block to output its real (imaginary) part, for example:
{ "num _ value" -> "Re(observables."<> Observables GetObservableName[obs] <> ")", "leptons" -> Switch[gen, 0, "1111", 1, "1313", 2, "1515"] } ]; 1as indicated in the listing above).In general, we are interested in defining one observable O i unified by similar calculations, e.g.we define one observable O i for the set of processes ℓ i → ℓ j ℓ k ℓ c k instead of multiple observables µ → 3e, τ → 3µ, etc.Then, we enable specific realizations of O i in ExtraSLHAOutputBlocks, see also explicit examples in Listing 31 and 35.So, we start with selecting unique realizations of chosen O i from all allObs and storing them into observables.Then, inside If statement we make changes to the C++ code generation based on the possible realization-specific features (e.g. the process τ → 3µ might require additional contributions, compared to µ → 3e).Changes in C++ prototypes stored in the variable prototypes are handled automatically due to the function WriteObservable, see Listing 5, while the C++ definitions in the variable definitions usually require manual coding, see Listings16-18.
Table 4: Mandatory options for the function NPointFunction; see the function CheckOptionValues in the meta/NPointFunctions.m file for allowed values, and the meta/Observables/O_i/FlexibleSUSY.m files for examples.
Table 5: Purpose of all available advanced settings and their usage.

For this reason we use the C++ template files given in Section 3.3 without any changes in this example. Note that these general template files are generally appropriate for observables that use the NPointFunctions extension, hence we can keep the present subsection very short.
Table 6: Required command in the file O_i/NPointFunctions.m. It connects each keyword Contribution_i with certain syntax-and-options values for topologies.

  WhenToApply   Present or Absent   Apply Command_b for TopologyName_j if Contribution_i is present (absent) in KeepProcesses, see Table 4

(Fragment of the Figure 2 caption: the topology treeS and the NPointFunctions internal representation of this specific topology, which is obtained in a few steps as described in the text.)
Table 7: Syntax and options for diagrams and amplitudes.

| 17,236.8 | 2024-02-22T00:00:00.000 | [ "Physics" ] |
A fluid mechanic’s analysis of the teacup singularity
The mechanism for singularity formation in an inviscid wall-bounded fluid flow is investigated. The incompressible Euler equations are numerically simulated in a cylindrical container. The flow is axisymmetric with swirl. The simulations reproduce and corroborate aspects of prior studies reporting strong evidence for a finite-time singularity. The analysis here focuses on the interplay between inertia and pressure, rather than on vorticity. The linearity of the pressure Poisson equation is exploited to decompose the pressure field into independent contributions arising from the meridional flow and from the swirl, and from enforcing incompressibility and enforcing flow confinement. The key pressure field driving the blowup of velocity gradients is that confining the fluid within the cylinder walls. A model is presented based on a primitive-variables formulation of the Euler equations on the cylinder wall, with closure coming from how pressure is determined from velocity. The model captures key features in the mechanics of the blowup scenario.
In 1926 Einstein published a short paper explaining the meandering of rivers [1]. He famously began the paper by discussing the secondary flow generated in a stirred teacup - the flow now widely known to be responsible for the collection of tea leaves at the center of a stirred cup of tea. In 2014, Luo and Hou presented detailed numerical evidence of a finite-time singularity at the boundary of a rotating, incompressible, inviscid flow [2,3].
The key to generating this singularity is the teacup effect. The present work is not aimed at proving the existence of a singularity for this flow, nor is it aimed at generating more highly resolved numerical evidence for the singularity than already exists. Rather, I assume that the flow simulated by Luo and Hou genuinely develops a singularity in finite time. My goal is to understand, from a fluid-mechanics perspective, why.
The flow under investigation is depicted in Fig. 1. The system is initialized with a pure azimuthal flow (swirl) having a sinusoidal dependence on the axial coordinate z. A pressure field is instantaneously generated to provide the radially inward force necessary to keep fluid parcels moving along circular paths. This results in high pressure near the cylinder wall where the circulation is largest (z = ±L/4) and low pressure where there is no azimuthal flow (z = 0 and z = ±L/2). Necessarily, then, there is a vertical variation in the pressure, and this drives a secondary meridional flow. This is the teacup effect: the portion of the fluid from z = 0 to z = L/4 corresponds to a cup of tea. (In an actual cup of tea, the variation in swirl with z is due to a boundary layer at the bottom of the cup.)
Mathematical preliminaries
The fluid flow is governed by the incompressible Euler equations (1), reconstructed below, where u is the fluid velocity and p is the pressure. Without loss of generality the fluid has unit density. We work naturally in cylindrical coordinates (r, θ, z). The flow is axisymmetric (independent of θ), but has swirl (u_θ ≠ 0 in general). Hence the velocity has components u(r, z, t) = u_r(r, z, t) ê_r + u_θ(r, z, t) ê_θ + u_z(r, z, t) ê_z, where ê_r, ê_θ, and ê_z are the standard basis vectors for cylindrical coordinates. The flow takes place inside an axially periodic cylinder of period L = 1/6 and radius 1. The boundary condition at the cylinder wall is the no-penetration condition in (1). The initial condition employed by Luo and Hou, and reproduced here, is the pure swirl (2). This initial condition possesses symmetries that are preserved under evolution of (1). The most important is centrosymmetry about z = 0. The full set of symmetry planes is z_j = jL/4, j = 0, ±1, ±2; u_r is even and u_z is odd about all planes; u_θ is odd about planes z_0, z_±2 and is even about planes z_±1. The pressure p is even about all four planes.
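The equations and initial data referred to in this paragraph can be reconstructed from the surrounding text; the initial profile quoted is the one used by Luo and Hou and should be checked against Refs. [2,3]:

\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\nabla p, \qquad \nabla\cdot\mathbf{u} = 0, \qquad u_r\big|_{r=1} = 0, \qquad (1)

u_\theta(r,z,0) = 100\, r\, e^{-30(1-r^2)^4} \sin\!\big(2\pi z/L\big), \qquad (2)

with centrosymmetry meaning u_r(r,-z) = u_r(r,z), u_\theta(r,-z) = -u_\theta(r,z), and u_z(r,-z) = -u_z(r,z).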
Extensive analysis of finely resolved numerical simulations indicates that, starting from the above initial condition, the flow evolves to form a singularity on the critical ring, r = 1, z = 0, at time T ≈ 0.0035056 [2,3]. In the present work, simulations are well resolved to t = 0.0032.
To be conservative, the flow is analyzed at the early time of t = 0.0031. I rely heavily on the studies of Luo and Hou (hereafter referred to as LH) to know that the flow at t = 0.0031 is indicative of the flow all the way to t = 0.003505, extremely close to the singularity time. To be clear, the simulations presented here are not aimed at numerically establishing a singularity (LH have already done this), but instead at understanding the mechanisms at work, and for this they are fully adequate.
Pressure preliminaries
Pressure is the only stress acting within an inviscid fluid, and it is the only means to provide force to, and thereby accelerate, the flow. The role of pressure is seen by taking the divergence of (1a), which yields Eq. (3) below. Given a solenoidal field u, in general ∇·(u·∇u) ≠ 0, meaning that nonlinearity generates dilatation or compression. The pressure stress accelerates fluid exactly so as to counterbalance this effect, and it does so simultaneously everywhere. The initial flow (2) is solenoidal.
From (3), the relationship between pressure and velocity that maintains a solenoidal field is the Poisson equation in (4). This is not the full story, however. The flow of interest is wall-bounded, and this puts a condition on the stress field within the fluid. The initial flow satisfies (1c). From the ê_r component of the momentum equation at the wall, this will be maintained as long as the boundary condition in (4) holds. Thus pressure is determined by the pressure Poisson equation together with its boundary condition, Eqs. (4) below; these expressions define the source term S and the boundary term b. As long as p satisfies (4), the flow evolving under (1a) will remain incompressible and confined within the cylinder. A primary focus of this work is distinguishing stresses associated with the incompressibility constraint from those associated with fluid confinement.
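A reconstruction of (3) and (4), consistent with the definitions of S and b used throughout this paper:

\nabla^2 p = -\nabla\cdot(\mathbf{u}\cdot\nabla\mathbf{u}), \qquad (3)

\nabla^2 p = S \equiv -\nabla\cdot(\mathbf{u}\cdot\nabla\mathbf{u}), \qquad
\partial_r p\big|_{r=1} = b \equiv u_\theta^2\big|_{r=1}. \qquad (4)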
Overview of the singularity
Figure 2A shows the pressure field and meridional flow near the cylinder wall. Only one quarter of the axial period is shown; the behavior over the full period follows from symmetry.
We see the teacup effect: high pressure is generated by the rotating fluid near z = L/4, and this generates a vertical pressure gradient driving fluid near the cylinder wall downward. A secondary local pressure maximum forms on the critical ring to provide the stress necessary to bend (accelerate) the downward velocity to a radially inward velocity. In the vicinity of the critical ring the meridional flow is a saddle. Figure 2B shows an enlargement of Fig. 2A near the critical ring, and Fig. 2C shows the vorticity magnitude |ω| in this region. Here the vorticity is dominated by the radial component ω_r, which is just the axial shear of the swirl velocity, ω_r = −∂_z u_θ. See Fig. 1. As time evolves, the axial flow along the cylinder wall advects the swirl towards the critical ring, where a singularity develops in a nearly, but not exactly, self-similar way [2-5].
The pressure field shown in Fig. 2 is similar to that reported by LH at t = 0.003505, very close to the singularity time T ≈ 0.0035056. (See Ref. [3], but note that its Fig. 17 has a distorted aspect ratio.) LH note that the pressure maximum on the critical ring means that there is locally an adverse axial pressure gradient that decelerates flow on the cylinder wall. However, this does not mean that the pressure maximum inhibits the singularity. On the contrary, a pressure maximum like that in Fig. 2B will drive a singularity. This fact is central to this work.
Consider the velocity-gradient dynamics on the critical ring. Differentiating velocity gives the velocity-gradient tensor ∇u, and differentiating the pressure gradient gives the pressure Hessian ∇(∇p). Symmetries dictate that on the critical ring the only non-zero derivatives entering these are those defined below, where |_c means evaluated on the critical ring. We will refer to Q and P as pressure curvatures. Straightforward differentiation of (1a) gives evolution equations for these quantities. By incompressibility on the critical ring, V + W = 0; thus V can be eliminated, giving the velocity-gradient dynamics (5). These equations are exact, and while they are not closed ((5c) is insufficient to determine Q and P separately), they are extremely useful in examining what transpires in singularity formation. (5b) is commonly referred to as vortex stretching. For this flow, Ω = −ω_r|_c is the absolute vorticity maximum [2,3], so Ω = ω_∞. (5c) is the pressure Poisson equation on the critical ring. From Fig. 2B we see that both pressure curvatures, Q and P, are negative (a pressure maximum occurs on the critical ring), but that they are not equal. The radial curvature is larger in magnitude than the axial curvature, that is, |Q| > |P|. To understand the importance of this, suppose that for t ≥ t_0 the curvature ratio obeys (6) with a > 1 (more precisely, we need inf_{t≥t_0} a > 1). Using (6) to eliminate Q from (5c) gives P = −2W²/(a²+1), which can then be used to eliminate P from (5a). The velocity-gradient equations (5) then become (7), where γ = (a²+1)/(a²−1) < ∞.
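The quantities and equations discussed here, reconstructed from the definitions quoted in the figure captions and from the algebra described in the text (in particular, (6) is written so that eliminating Q from (5c) indeed gives P = −2W²/(a²+1)):

W \equiv \partial_z u_z\big|_c, \quad V \equiv \partial_r u_r\big|_c, \quad \Omega \equiv -\,\omega_r\big|_c, \quad P \equiv \partial_z^2 p\big|_c, \quad Q \equiv \partial_r^2 p\big|_c,

\dot W = -W^2 - P, \qquad \dot\Omega = -W\,\Omega, \qquad Q + P = -2W^2, \qquad (5)

Q = a^2 P, \quad a > 1, \qquad (6)

\dot W = -\frac{W^2}{\gamma}, \qquad \dot\Omega = -W\,\Omega, \qquad \gamma = \frac{a^2+1}{a^2-1}. \qquad (7)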
The flow at t_0 is assumed to be axially contracting: W(t_0) < 0. Without loss of generality we redefine the origin of time so that t_0 = 0. Sacrificing generality for simplicity, we take a > 1 to be constant. The solution to Eqs. (7) is then just (8), where T = −γ/W(0) > 0 is the singularity time and Ω_0 = Ω(0). These are the known divergences as t → T [2,3]. In particular, Ω = ω_∞ diverges with exponent −γ. All other divergences associated with the singularity follow immediately from invariances of the Euler equations and the value of γ. We know from LH that γ ≈ 2.46, corresponding to a ≈ 1.54.
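The corresponding solution (8), which reproduces the stated singularity time and exponent:

W(t) = -\frac{\gamma}{T-t}, \qquad \Omega(t) = \Omega_0 \left(\frac{T}{T-t}\right)^{\gamma}, \qquad T = -\frac{\gamma}{W(0)}. \qquad (8)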
The corresponding ratio of length scales is indicated in Fig. 2B. The contours do not exactly manifest this ratio of scales, in part because the contours are a finite distance from the critical ring and in part because the flow is a finite distance from the singularity. From the data at t = 0.0031, Q/P ≈ 1.62.
The fundamental point is the following. Incompressibility locks radial expansion and axial contraction together, such that it is not the signs of Q and P that are important for singularity formation; it is their mismatch. A persistent mismatch in pressure curvatures on the critical ring will drive the flow to a singularity. Of interest here is |Q| > |P|. The pressure contours in Fig. 2B are the signature of this simple mechanism. One can deduce from the results of LH that a mismatch of approximately the same amount is still in effect as close to the singularity time as they could resolve (Fig. 17 of Ref. [3]). The remainder of the paper addresses why this happens in the teacup flow.
Meridional and swirl pressures
We exploit the linearity of the Poisson equation (4) to separate pressure into contributions from distinct effects. To begin, the source is written S = S_2D + S_swirl, where S_2D depends only on the (2D) meridional flow (u_r, u_z) and S_swirl depends only on the swirl u_θ. (See Materials and Methods.) Thus p = p_2D + p_swirl, where the two contributions solve the Poisson problems (9) reconstructed below. These pressures are plotted in Fig. 3. Contours of p_2D are nearly circular arcs indicating approximate rotational symmetry about a pressure maximum on the critical ring. Contours of p_swirl are those of a saddle, with the expected high pressure along the cylinder wall where the swirl is largest. The pressure slices in Figs. 3C and 3D further demonstrate the near symmetry and the saddle. The core cause for the mismatch in pressure curvatures, |Q| > |P|, is immediately evident.
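A reconstruction of the split Poisson problems (9); the assignment of the boundary term b to the swirl part follows from b = u_θ²|_{r=1}:

\nabla^2 p_{2D} = S_{2D}, \quad \partial_r p_{2D}\big|_{r=1} = 0; \qquad
\nabla^2 p_{\rm swirl} = S_{\rm swirl}, \quad \partial_r p_{\rm swirl}\big|_{r=1} = b. \qquad (9)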
The local maximum of p_2D implies that Q_2D, P_2D < 0, while for the saddle swirl pressure Q_swirl < 0 < P_swirl.
In fact, the total curvatures are the sums Q = Q_2D + Q_swirl and P = P_2D + P_swirl (Eq. (10)). Graphically this can be seen by adding the corresponding pressure slices (blue to blue and red to red) in Figs. 3C and 3D. We will now address in more depth the two key features responsible for the pressure mismatch. Such a saddle flow is to be anticipated [6-8], and the associated approximate symmetry of p_2D is not particularly surprising.
However, it is the pressure curvatures on the critical ring that matter for singularity formation. Figure 4 shows second derivatives of p_2D along slices at the midplane, z = 0, and at the cylinder wall, r = 1. The general agreement between the two curves is a manifestation of the near symmetry of p_2D. However, the curves behave differently approaching the critical ring. Necessarily ∂²_z p_2D is even about z = 0, since p_2D is. There is no such constraint on ∂²_r p_2D about r = 1. Hence, although the slices in Fig. 3C appear nearly identical approaching the critical ring, they are not. The significant observation is that Q_2D < P_2D < 0. While this ordering does not seem a priori obvious, it appears to be a natural consequence of the conditions at the wall and the symmetry plane.
Swirl pressure
The swirl stress p_swirl both maintains incompressibility of the flow and confines the fluid within the cylinder wall. Fully decoupling these two effects is not achievable, but we can partially separate them via the decomposition p_swirl = p_a + p_b + p_c, with the three components defined by the boundary-value problems (11) reconstructed below. The signs of the curvatures Q_a < 0 < P_a can be understood in two ways. As expected, the axially varying pressure is larger away from the critical ring (Fig. 5D), where the swirl is also larger. Hence P_a > 0, and since ∇²p_a|_c = 0, Q_a < 0. We can also consider the radial dependence of p_a. Since the full boundary term b vanishes on the critical ring (u_θ = 0 there), and we have seen (Fig. 5C,E) that ∂_r p_c|_c > 0, it follows that ∂_r p_a|_c < 0. (Note that a negative gradient ∂_r p_a|_{r=1} < 0 is required to contain a "negative density" fluid within the cylinder.) The same signs of the curvatures then follow from this radial dependence. The pressure p_a is the essence of the teacup effect near the critical ring: axial variation of the swirl at the cylinder wall necessitates the confining stress p_a, whose derivative ∂_z p_a then produces axial force toward the critical ring. At the critical ring, its opposite-signed curvatures, Q_a < 0 < P_a, arise naturally and are at the heart of the pressure mismatch driving the blowup (recall (10)). The meridional stress p_2D is a more passive player. In response to the incoming axial flow generated by the teacup effect, p_2D develops a local maximum with approximate rotational symmetry in the region around the critical ring. The symmetry is only approximate (Fig. 4) and the meridional pressure curvatures satisfy Q_2D < P_2D < 0.
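A reconstruction of (11), consistent with the statements that p_a is harmonic, that p_b has a homogeneous boundary condition, and that ⟨•⟩ and tildes denote axial means and fluctuations:

\nabla^2 p_a = 0, \qquad \partial_r p_a\big|_{r=1} = \tilde b, \qquad (11a)

\nabla^2 p_b = \tilde S_{\rm swirl}, \qquad \partial_r p_b\big|_{r=1} = 0, \qquad (11b)

\nabla^2 p_c = \langle S_{\rm swirl}\rangle, \qquad \partial_r p_c\big|_{r=1} = \langle b\rangle. \qquad (11c)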
This ordering is significant since it gives |Q_2D| > |P_2D|, meaning the asymmetry in p_2D does not act against the pressure mismatch generated by the swirl; it acts to enhance it. Momentarily we will exploit this by making the symmetric approximation Q_2D = P_2D, knowing that this approximation is safe, in the sense that if the flow develops a singularity with this approximation, then it will certainly develop one in the actual asymmetric case.
One-dimensional models and closure
There is a rich literature on one-dimensional modeling of singularities in inviscid flow.
See [9] for a recent summary. For the teacup flow, LH propose the model (12) [2,3], with the identifications ω(z) ~ ω_θ|_{r=1}, θ(z) ~ u²_θ|_{r=1}, and u(z) ~ u_z|_{r=1}. (We abuse notation, both by conflicting with usage elsewhere in the paper and by not strictly distinguishing between model quantities and their full-flow counterparts.) Eqs. (12) are closed by determining u from ω via the Hilbert transform. This is natural from a vorticity-formulation viewpoint. The model captures very well features of the teacup flow and exhibits a finite-time singularity [9].
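The model (12) and its Hilbert-transform closure, written in the form standard for this family of one-dimensional models (sign conventions for H should be checked against [2,3]):

\partial_t \omega + u\,\partial_z \omega = \partial_z \theta, \qquad
\partial_t \theta + u\,\partial_z \theta = 0, \qquad (12)

\partial_z u = H(\omega).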
One can ask: what about the Hilbert transform of θ? From (4) and (11a) we have that H(θ) = H(b̃) = ∂_z p_a|_{r=1}. (We have used linearity of H and H(⟨b⟩) = H(const) = 0; the final equality is by definition.) Thus the model variable θ is equivalent to the axial gradient of p_a. This is important because for any model to capture the correct singularity mechanism, it must capture p_a. The LH model does. This also helps explain why the model can so successfully capture the singularity using only variables on the cylinder wall.
This suggests a different approach to closure: working in a primitive-variable formulation and obtaining pressure by Hilbert transform. This approach appears to be inferior to the LH model and will not be pursued here, except insofar as it provides insight into the velocity-gradient blowup. We can express Eqs. (5) in terms of quantities only at the cylinder wall under two assumptions: that the meridional curvatures are equal, Q_2D = P_2D, and that the curvature P_b is negligible. With the first assumption, (5c) gives 2P_2D = −2W². Using this to eliminate W² in (5a) and neglecting P_b gives Ẇ = −P_a. The curvature P_a can in turn be obtained by Hilbert transform as P_a = H(∂_z θ)(0). With these approximations, the velocity-gradient dynamics on the critical ring become Eqs. (13), reconstructed below. These equations make explicit the vital role of the pressure p_a and the global character of the blowup problem through dependence on the swirl along the boundary, θ = u²_θ|_{r=1}. As the flow evolves, the contracting axial velocity transports swirl towards the critical ring, while also increasing the velocity gradient on the critical ring. This will produce blowup if H(∂_z θ)(0) ~ W². More quantitatively, blowup will occur with the known exponents ((7) and following) if, approaching the singularity time, condition (14) holds. This establishes a relationship for singularity formation involving the diverging gradient W on the critical ring and the (global) gradient ∂_z θ along the cylinder wall. By writing the Hilbert transform in integral form and plotting the integrand using data from simulations, one can observe numerically that θ evolves along the cylinder wall so that the left-hand side of (14) approaches a finite value as the system approaches the singularity. This, however, is not a new result; it is a direct consequence of the known nearly self-similar collapse at the singularity [2,3]. The more important objective is to find a first-principles derivation that would explain why the left-hand side of (14) approaches a finite value, and thereby explain the selection of the exponent γ. I have been unsuccessful in this. Hou and Liu [10] were able to make progress on the selection problem by replacing the Hilbert transform with a simpler closure [7]. Perhaps that approach could be employed here, but this must await future work.
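Eqs. (13) and the blowup condition (14), reconstructed from the steps described above (the limiting value 1/γ follows from matching Ẇ = −W²/γ in (7)):

\dot W = -P_a = -H(\partial_z\theta)(0), \qquad \dot\Omega = -W\,\Omega, \qquad (13)

\frac{H(\partial_z\theta)(0)}{W^2} \;\to\; \frac{1}{\gamma} \quad \text{as } t \to T. \qquad (14)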
Discussion
The discovery of the teacup singularity by Luo and Hou [2,3] has significantly advanced our understanding of finite-time singularities in the Euler equations. However, the blowup takes place on the cylinder wall and not in the flow interior. The present analysis provides a direct connection between the singularity and flow confinement. The pressure stress at the heart of the teacup effect is present solely to confine the rotating fluid within the cylinder; it plays no role in maintaining incompressibility. This stress forces a mismatch of pressure curvatures on the critical ring, a mismatch that persists as the axial flow advects swirl toward the critical ring, driving the blowup of velocity gradients. Not only does the stress from the teacup effect arise from flow confinement, but this field also has a local minimum with second derivatives of opposite sign. Fundamentally, such a minimum can only occur at a boundary. For better or worse, no modification of the mechanism can move the singularity to the flow interior.
There is an important connection between this mechanism and recent popular models for singularity formation [2,3,7]. These models involve two variables: vorticity and square swirl on the cylinder wall. The Hilbert (or similar) transform of the vorticity is used to obtain the velocity. The Hilbert transform of the square swirl is, uniquely, the axial gradient of the confining pressure at the core of the singularity mechanism.
There are many future directions suggested by this work. Pressure could possibly provide physical insight into the role of the boundary in the rapid growth of vorticity gradients shown by Kiselev and Šverák [6]. One could simulate a cylindrical configuration with a no-penetration condition at z = 0 to impart more symmetry to the saddle in the vicinity of the critical ring (achieving something similar in spirit to [8]). Translating these results to the Boussinesq system should be straightforward [9]. The selection mechanism for the exponent γ remains open, as does the role of pressure in other configurations, such as anti-parallel vortices [11]. Finally, it should be possible to develop precise theorems along the lines of Chae et al. [12] to address the specific pressure fields observed in the teacup flow. This could possibly lead to a new line of attack on a proof of singularity in the Euler equations.
Euler equations for axisymmetric flow with swirl
The Euler equations in component form are given below, where û = (u_r, u_z) and ∇̂ = (∂_r, ∂_z).
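The component form referred to here is the standard axisymmetric-with-swirl Euler system:

\begin{aligned}
&\partial_t u_r + (\hat{\mathbf u}\cdot\hat\nabla)u_r - \frac{u_\theta^2}{r} = -\partial_r p,\\
&\partial_t u_\theta + (\hat{\mathbf u}\cdot\hat\nabla)u_\theta + \frac{u_r u_\theta}{r} = 0,\\
&\partial_t u_z + (\hat{\mathbf u}\cdot\hat\nabla)u_z = -\partial_z p,\\
&\frac{1}{r}\,\partial_r(r u_r) + \partial_z u_z = 0.
\end{aligned}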
Taking the divergence of the nonlinear terms gives the source term S on the right-hand side of the pressure Poisson equation. The first and third terms are independent of the swirl velocity u_θ, while the middle term depends only on u_θ. This leads us to define S_2D and S_swirl accordingly; thus the pressure Poisson equation, with boundary condition, is Eq. (4). The Euler equations have been simulated in the vorticity-streamfunction formulation as given by Eqs. (2) in [2]. The essential difference between the simulations here and those of LH [2,3] is that here a fixed computational grid is used. A Fourier pseudospectral representation is used in z, with dealiasing given by Hou and Li [13]. A Chebyshev grid is used in r with no dealiasing. Fourth-order Runge-Kutta time stepping is used with an adaptive time step such that the CFL number is less than 0.2. Exploiting the separation in the Fourier representation, the Poisson problem for the streamfunction is solved directly. Solving similar Poisson problems, pressure fields are computed in a post-processing step.
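For reference, the source decomposition described at the start of this passage can be written, consistent with the identification of the "middle term" with the swirl, as

S = -\frac{1}{r}\,\partial_r\!\big(r\,(\hat{\mathbf u}\cdot\hat\nabla)u_r\big)
    + \frac{1}{r}\,\partial_r\!\big(u_\theta^2\big)
    - \partial_z\big((\hat{\mathbf u}\cdot\hat\nabla)u_z\big),
\qquad
S_{\rm swirl} \equiv \frac{1}{r}\,\partial_r\!\big(u_\theta^2\big), \quad S_{2D} \equiv S - S_{\rm swirl}.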
For all results reported, the computation grid has 769 radial points for r ∈ [0, 1] and 2048 axial points for z ∈ [0, L/4). At time t = 0.003 simulations produce a vorticity maximum ω_∞ = 90846.6, agreeing to at least 5 digits of precision with the value ω_∞ = 90847 reported by LH [3]. The flow is resolved until t = 0.0032, which is sufficient for our purposes. Recall, (13), that P_a is determined from the Hilbert transform of ∂_z u²_θ(z), evaluated at zero. From this, as stated in (14), a condition for blowup is that (15) approach a finite limit as the flow evolves. Using the integral representation of the Hilbert transform, this can be written in the rescaled form (16), where the coordinate ξ is the unique rescaling of z such that the integrand h(ξ) has value 1 at ξ = 0. Points (circles) are used to show the last resolved time in the simulations, t = 0.0032.
FIG. 1: The teacup flow in a cylinder, periodic in the axial direction. The primary azimuthal flow (swirl) generates an axial variation in the pressure. This produces a secondary meridional flow that in turn drives azimuthal flow along the cylinder wall towards the critical ring. The shear of this azimuthal flow generates intense vorticity on the critical ring, ultimately leading to a singularity and a breakdown of the Euler equations. Note that by symmetry a second critical ring (not indicated) exists at z = L/2, which by periodicity is also at z = −L/2. In the actual configuration studied, the height L is only one sixth of the radius.
FIG. 2: The teacup flow at t = 0.0031. (A) The pressure field (color) and meridional-flow streamlines (black) near the cylinder wall for 0 ≤ z ≤ L/4. The behavior over a full axial period follows from symmetry. The surfaces z = 0, z = L/4, and r = 1 are flow invariant. High pressure forms near the outer wall in the vicinity of z = L/4, where the swirl is largest, and this drives meridional downward flow. A secondary pressure maximum exists on the critical ring to divert the incoming flow. (B) Enlargement near the critical ring. The length ratio 1.54-to-1 associated with the exponent γ is indicated (see text). (C) Magnitude of vorticity |ω| near the critical ring. A contour plot of just the radial component |ω_r| is nearly identical. The color bar in A is used for all plots in the paper; the values of Low and High vary. For pressure only the difference is relevant: (A) High − Low = 275, (B) High − Low = 23. (C) High = 1.54 × 10⁵, Low = 1.2 × 10⁴.
FIG. 3: Pressure components from (A) the meridional (2D) flow and (B) the swirl. The contours of p_2D are nearly circular arcs centered on the critical ring, while p_swirl is a saddle with high pressure along the cylinder wall. Colors are given by the color bar in Fig. 2A, where in (A) High − Low = 20.2 and in (B) High − Low = 7.9. Only differences in pressure are relevant. (C) Slices of p_2D at the midplane, z = 0, as a function of r (red), and at the cylinder wall, r = 1, as a function of z (blue). The z coordinate is oriented to align the slices with the critical ring on the right. The vertical bar indicates a pressure difference of 5. The near symmetry of p_2D is evident. (D) Same as (C) for p_swirl.
In (11), ⟨•⟩ denotes the axial mean and a tilde denotes axial fluctuations (see Materials and Methods). p_a contributes to confining the flow, p_b contributes to maintaining incompressibility, and p_c contributes to both. These are plotted in Fig. 5. We also decompose the pressure curvatures, Q_swirl = Q_a + Q_b + Q_c and P_swirl = P_a + P_b + P_c, with the obvious meanings. The most significant finding is best seen in Fig. 5D. Near the critical ring, the axial variation of p_swirl is almost exclusively dictated by the component p_a. As we will see, p_a is the only component that has curvatures with the signs Q_a < 0 < P_a needed to generate the pressure mismatch that drives the singularity. The component p_b has very weak variation near the critical ring; the green curve in Fig. 5D is nearly flat. By definition p_c does not vary with z, and hence P_c = Q_c = 0. We begin the discussion with p_c, since it is easy to interpret physically. p_c(r) is the axially independent pressure generated by the swirl flow whose speed at each r is the axial r.m.s. of u_θ. Although p_c contributes nothing to the pressure curvature (P_c = 0), it is by far the dominant component of the swirl stress (Fig. 5E); −∇p_c(r) is the radially inward force curving each circular streamline of the r.m.s. swirl flow, both maintaining incompressibility and confining the fluid. The important stress p_a is more difficult to interpret physically. It does nothing to maintain incompressibility. There is no physical flow that has pressure field p_a, since b̃ takes on negative values and no value of u²_θ is negative. In fact, on the critical ring: b̃|_c = b|_c − ⟨b⟩|_c = −⟨b⟩|_c < 0. (One could think of a fluctuating component of fluid in this region as having "negative density".)
FIG. 5: Components of the swirl pressure: (A) p_a, (B) p_b, and (C) p_c. Colors are given by the color bar in Fig. 2A, where in (A) High − Low = 11, in (B) High − Low = 0.5, and in (C) High − Low = 14. Note that the range in (B) is much smaller than in (A) and (C). The curvatures of p_a satisfy Q_a < 0 < P_a, even though p_a has a local minimum on the critical ring. (D) Pressure slices at the cylinder wall, r = 1, with the z coordinate oriented to agree with Fig. 3D: p_a (orange), p_b (green), and p_swirl (blue points). p_swirl is the same data as in Fig. 3D (blue curve), but here plotted as points at every fourth computational grid value so as to be visually distinguishable from p_a. (E) Pressure as a function of r at the midplane z = 0 over the full range of r. Here pressure values are aligned at r = 0: p_a (orange), p_b (green), and p_c (purple). The inset shows an enlargement near the critical ring. On the critical ring P_a = ∂²_z p_a|_c = 9.53 × 10⁶ and P_b = ∂²_z p_b|_c = −0.84 × 10⁶.
S_swirl and b are further decomposed into axial means and fluctuations: S_swirl = S̄_swirl + S̃_swirl and b = b̄ + b̃,
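This mean/fluctuation decomposition and the resulting curvature estimates are straightforward to reproduce numerically. Below is a minimal NumPy sketch, assuming the swirl pressure has been sampled on a uniform (r, z) grid; the grid sizes, array names, and placeholder data are illustrative, not the paper's actual solver output.

```python
import numpy as np

# Illustrative placeholder: p_swirl[i, j] sampled at r[i], z[j] on a uniform grid.
nr, nz = 256, 512
r = np.linspace(0.0, 1.0, nr)
z = np.linspace(-0.5, 0.5, nz)
dz = z[1] - z[0]
p_swirl = np.random.rand(nr, nz)   # stand-in for simulation data

# Axial mean (overbar) and axial fluctuation (tilde).
p_bar = p_swirl.mean(axis=1, keepdims=True)   # depends on r only
p_tilde = p_swirl - p_bar                     # depends on r and z

# Axial pressure curvature d^2 p / dz^2 by central differences,
# evaluated e.g. near the wall (last r index) at the midplane.
d2p = (p_swirl[:, 2:] - 2.0 * p_swirl[:, 1:-1] + p_swirl[:, :-2]) / dz**2
i0, j0 = nr - 1, nz // 2
print("axial curvature at sample point:", d2p[i0, j0 - 1])
```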
FIG. 6: (A) Approximate symmetry of the meridional flow near the critical ring. Contours of the Stokes streamfunction ψ(r, z) are shown in black. Also shown, in dashed green, are contours of ψ(1 − z, 1 − r). The two sets of contours are nearly identical. (B) Velocity profiles along the cuts indicated by the red and blue lines in (A). The z coordinate is oriented to align the profiles. The red curve is u_z(r, z = 3.9 × 10⁻⁴), while the blue curve is u_r(r = 1 − 3.9 × 10⁻⁴, z). Note that ∂_r u_z|_{r=1} ≠ 0, so that there is shear (vorticity) at the cylinder wall, r = 1. However, by symmetry ∂_z u_r|_{z=0} = 0, and there is no shear (vorticity) at the midplane z = 0.
Figure 8(A) shows the time evolution of ∂_z u_θ² over just a portion of the cylinder wall.
FIG. 7: Time evolution of the axial velocity u_z and the swirl velocity u_θ along the cylinder wall. (A) u_z and (B) u_θ over the full cylinder length [−L/2, L/2], at times equally spaced from t = 0 to t = 0.003. Arrows at the top indicate the direction of the axial velocity, which also naturally orders the curves in time. (C) and (D) are the same as (A) and (B) except only over the region [−L/8, L/8] and for times equally spaced from t = 0.0025 to t = 0.0032.
Figure 8(B) shows the time evolution of the integrand h(ξ) for the same data as in Fig. 8(A). The curves show convergence to a finite limit, thereby implying a finite-time blowup. Of course, we already know from LH that the flow collapses to a singularity in a nearly, but not exactly, self-similar way [2-5]. Hence this is not a new result, just a different way of looking at what is already known. The lack of exact self-similarity necessarily follows, since the data are taken from simulations in an axially periodic cylinder and not an infinite cylinder. Therefore, the integrands h(ξ) fundamentally cannot collapse, because their (very weak) tails at large ξ cannot. This lack of exact self-similarity for this flow is well known [4,5].
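A collapse of rescaled profiles onto a common curve is the usual numerical signature of (near) self-similarity. The sketch below illustrates only the generic procedure; the profile data, the assumed length scale λ(t) ~ (t* − t), and the similarity variable are placeholders and do not reproduce the paper's Eq. (16).

```python
import numpy as np

# Placeholder profiles f(z, t_k) near the wall and an assumed length scale
# lam(t) ~ (t* - t); both stand in for the simulation data, not Eq. (16).
z = np.linspace(1e-5, 0.1, 2000)
t_star = 0.0032
times = np.array([0.0025, 0.0028, 0.0030, 0.0031])
lam = t_star - times
profiles = [np.exp(-z / l) for l in lam]        # illustrative data only

# Rescale each profile onto the similarity variable xi = z / lam(t); collapse
# of the rescaled curves onto one limit curve signals (near) self-similarity.
xi = np.linspace(0.0, 2.0, 400)
rescaled = np.array([np.interp(xi, z / l, f) for l, f in zip(lam, profiles)])
print("max spread between rescaled profiles:", np.ptp(rescaled, axis=0).max())
```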
FIG. 8: (A) Time evolution of ∂_z u_θ² along the wall. These profiles determine the pressure curvature of p_a at the critical ring. (B) Plots of h versus ξ given by Eq. (16) for the same data as in (A). The bold black curve corresponds to the time t = 0.0031. Results in the main paper are all shown at this time. For reference, at t = 0.0031, ξ = 2 corresponds to z = 1.51 × 10⁻³. | 7,477.4 | 2019-02-15T00:00:00.000 | ["Physics", "Engineering"] |
Deep fake detection using a sparse auto encoder with a graph capsule dual graph CNN
Deepfake (DF) is a kind of forged image or video developed to spread misinformation and facilitate privacy hacking and truth masking, using advanced technologies including deep learning and artificial intelligence with trained algorithms. This kind of multimedia manipulation, such as changing facial expressions or speech, can be used for a variety of purposes, from spreading misinformation to exploitation. With the recent advancement of generative adversarial networks (GANs) in deep learning, DF has become pervasive on social media. Numerous methods have been developed to detect forged video and images, but they focus on particular domains and become obsolete in the face of new attacks/threats. Hence, a novel method needs to be developed to tackle new attacks. The method introduced in this article can detect various types of spoofed images and videos that are computationally generated using deep learning models, such as variants of long short-term memory and convolutional neural networks. The first phase of the proposed work extracts feature frames from the forged video/image using a sparse autoencoder with a graph long short-term memory (SAE-GLSTM) method at training time. The proposed DF detection model is tested using the FFHQ, 100K-Faces, Celeb-DF (V2) and WildDeepfake databases. The evaluated results show the effectiveness of the proposed method.
INTRODUCTION
With the advancement of technology, social networks are easily accessible to all users, and many deepfake images and videos have spread on social media platforms. Manipulation of digital images or videos on social media involves replacing the image of a person with the face of another person. This manipulation of facial images is called deepfake and has become a serious social problem. Swapping in the faces of Hollywood celebrities or politicians can mislead public opinion and create rumors about them (Wang et al., 2020; Nataraj et al., 2019). The spread of false information through synthetically created deepfake images and videos has increased daily, which poses a significant challenge for manipulation detection techniques.
Detection and prevention of deepfake images and videos on social media are essential. To support such research, various organizations, such as Facebook Inc., the US Defense Advanced Research Projects Agency (DARPA), and Google, fund researchers working on detecting and preventing deepfake images and videos (Westerlund, 2019; Kwok & Koh, 2021). Many research works have been developed for the detection of forged images and videos, focusing on keeping personal information secure. To detect the difference between real and fake images, ocular biometrics based on the CNN approaches of SqueezeNet, DenseNet, ResNet and light CNN (Nguyen, Yamagishi & Echizen, 2019) are used. To ensure personal data security and avoid deepfakes, researchers are motivated to develop an efficient deepfake detection system using deep learning approaches. The main contributions of this research work are as follows.
1. Implementing the deepfake face detection method based on the SAE-GLSTM and capsule dual graph CNN model in which dimensionality reduction is used for the features in the face image.
2. To improve accuracy, preprocessing in this work implements an adaptive median filter and uses GLSTM to extract image features.
3. Compared with existing research works, the proposed deepfake detection system is efficient, combining deep-learning-based preprocessing, feature extraction and detection, and it ensures efficiency, reliability and integrity.
The article is organized as follows. "Review of Literature" describes a review of the literature, "Proposed Methodology" introduces deep detection using SAE-GLSTM and the capsule dual graph CNN model, "Experimental Result & Discussions" discusses the experimental results, and "Conclusion" concludes the article with future directions.
REVIEW OF LITERATURE
Deepfake combines the concepts of "deep learning" and "fake": a technique for synthesizing videos or images using deep learning. Deepfake enthusiasts swap the face in an image or perform video forgery to spread misinformation, mask real information, and undermine privacy using advanced techniques such as artificial intelligence and deep learning. This swapping of faces in images or videos has become an annoyance for social media platforms, which have had difficulty with users publishing forged images or videos developed by combining celebrity or politician faces into video (Kaur, Kumar & Kumaraguru, 2020).
Detection of face tampering using a CNN model has been based on the concept of two streams, face classification and patch triplets; in the training process, the face image was classified as having been tampered with (Zhou et al., 2017). One article (Korshunova et al., 2017) proposed the transformation of an original face image into a fake image using a face-swap process that nonetheless preserves facial expressions, lighting, and the position of the face from the source image to the destination image. The first deepfake video was released in 2017, in which a celebrity's face was swapped with the face of a porn actor. Additionally, deepfake videos of famous politicians were created as fake speeches. This poses a great threat to the security and privacy of world-famous actors and politicians (Howcroft, 2018; Chesney & Citron, 2019; Soare & Burton, 2020). Identifying the original face image in a deepfake video is evaluated based on the properties of deepfake videos, which undergo an affine warping process to match the face of the original image.
Fake face videos have been exposed using deep generative networks based on eye blinking features. To achieve this, faces are detected frame by frame, aligned to the same coordinates, and head movement is observed in a realistic manner (Li, Chang & Lyu, 2018). From each frame of a video, eye blinking is detected using the long-term recurrent convolutional neural network (LRCN) method, with which the swapping of faces in videos is exposed more easily. A pipeline-based temporal-aware system was proposed to detect deepfake videos automatically, with features extracted frame by frame (Güera & Delp, 2018). Table 1 shows the results of a survey of deepfake detection methods.
PROPOSED METHODOLOGY
The proposed deepfake detection method shown in Fig. 1 is suitable for video and images. In the pre-processing phase, the deepfake video input is split into frames, and detection is performed from the frames. During the training phase, the features of the video frames or images are extracted using the graph LSTM method. The extracted feature subset is given as input to phase II for detection using a capsule network, the capsule dual graph CNN, which identifies the frame sequence/image as real or fake.
Preprocessing
The preprocessing of fake videos/images is shown in Fig. 2. Initially, the input fake videos are split into frames. Using multitask cascaded convolutional neural networks (MTCNNs) (Zhang et al., 2016), the face is detected and cropped from the video frames. MTCNN is a Python face detection module with a reported accuracy of 95%; it extracts the face region on which the subsequent computer vision transformations focus. The extracted face is preprocessed using the sequence of operations listed below to enhance image quality, and the feature vector is then generated by extracting the computer vision features from the preprocessed image using the graph LSTM method.
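As a sketch of this stage, the snippet below splits a video into frames with OpenCV and crops faces with the MTCNN implementation from the facenet-pytorch package; the choice of that package, the sampling stride, and the file handling are illustrative assumptions, not the authors' exact pipeline.

```python
import cv2
from PIL import Image
from facenet_pytorch import MTCNN  # one common MTCNN implementation

mtcnn = MTCNN(image_size=160, margin=0)  # face detector + cropper

def extract_faces(video_path, every_n=10):
    """Split a video into frames and return cropped face tensors."""
    faces, cap, idx = [], cv2.VideoCapture(video_path), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:                        # sample every n-th frame
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            face = mtcnn(Image.fromarray(rgb))        # None if no face found
            if face is not None:
                faces.append(face)
        idx += 1
    cap.release()
    return faces
```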
Image rescaling
The input images/frames consist of RGB values in the range 0 to 255. The values are rescaled to the interval [0, 1] before being fed into the proposed model, using a 1/255 scaling.
Shear mapping
Shear mapping displaces each image in the frames along the vertical direction: relative to the original frame, this parameter controls the angle of deviation from the horizontal and the displacement rate. The shear range is set to 0.3.
Noise removal
Accurate noise removal from the input data builds a higher-quality training data set and thereby improves the accuracy of the detection system. The background noise of the normalized image is removed using an adaptive median filter. An adaptive median filter solves issues of the plain median filter, which can remove only salt-and-pepper noise and is ineffective without a properly sized smoothing kernel or when the spatial noise density is high. Adaptive median filters work with kernels of variable size, and not all pixels are replaced with median values. In Algorithm 1, we follow Soni & Sankhe (2019), in which the idea of a two-level median filter appeared. This algorithm consists of two steps: for each kernel, the median value is calculated, and the pixel value is checked for salt-and-pepper noise. We consider an input grayscale image with pixel dimensions m × n, given as the matrix G of the gray levels of all pixels, so G = (g_{x,y}), where x = 1, …, m, y = 1, …, n, and g_{x,y} is the gray level of the pixel at coordinates (x, y). Algorithm 1 transforms the input gray level g_{x,y} of the pixel at coordinates (x, y) into the noise-removed gray level of this pixel. The transformation of the value g_{x,y} is based on constructing square s-neighborhoods U_s of the pixel at coordinates (x, y) (this pixel lies at the center of these square neighborhoods), with side length s (the initial value is s = 3), and the transformation proceeds in two levels, L_A and L_B, in which the value g_{x,y} is compared with the minimal, maximal, and median grayscale values of all pixels in the neighborhood U_s for s ≥ 3. The exact procedure of Algorithm 1 is evident from the pseudocode below; note only that for the smallest s-neighborhood, with side s = 3, a pixel inside the image has eight adjacent pixels, a pixel in a corner has only three neighbors, and the other pixels at the edge of the image have five.
Data augmentation
During the training process, the augmentation methods are as follows (a combined sketch of these augmentations is given after this list): 1. Zooming augmentation: zooming is used to view the input image as larger, with the value 0.2 giving the range [0.8, 1.2]; the parameter varies from 1 − value to 1 + value.
2. Horizontal flipping: with the help of Boolean value 'true', the zoomed image is horizontally flipped.
5. Coarse dropout with the size of 0.03.
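A compact version of this augmentation chain can be expressed with torchvision transforms, as sketched below; the shear angle, zoom range, crop size, and the use of RandomErasing as a stand-in for coarse dropout are illustrative substitutions, not the authors' exact configuration.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.Resize((224, 224)),
    # shear of roughly 0.3 rad (~17 degrees) plus zoom in [0.8, 1.2]
    transforms.RandomAffine(degrees=0, shear=17, scale=(0.8, 1.2)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),                    # rescales [0, 255] -> [0, 1]
    # RandomErasing removes a small rectangular patch, akin to coarse dropout
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.03)),
])
# usage: x = augment(pil_image)  # returns an augmented float tensor in [0, 1]
```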
Algorithm 1: Two-Level Adaptive Median Noise Removal Filter.
Input: the gray level g_{x,y} of the pixel at coordinates (x, y), and the maximal value s_max of the side of the square neighborhood U_s.
Output: the noise-removed value ĝ_{x,y} of the gray level of the pixel at coordinates (x, y).
Step 1: Assume that L A and L B are the two levels of noise pollution.
Step 2: Consider an s-neighborhood U_s. Initially, we take s = 3.
Step 3: Calculate g_med, g_min, g_max as the median, minimal, and maximal gray levels of the pixels in U_s.
Step 4: Level L_A.
Step 5: if g_min < g_med < g_max then go to Step 11. // The median is not an impulse; proceed to level L_B.
Step 6: else s = s + 2 // We increase the size of the s-neighborhood U_s (s must remain odd, as the pixel (x, y) must be at the center).
Step 7: end if
Step 8: if s ≤ s_max then go to Step 3.
Step 9: else Output: ĝ_{x,y} = g_{x,y}.
Step 10: end if
Step 11: Level L_B.
Step 12: if g_min < g_{x,y} < g_max then Output: ĝ_{x,y} = g_{x,y}. // The pixel is not impulse noise.
Step 13: else Output: ĝ_{x,y} = g_med.
Step 14: end if
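A direct NumPy transcription of Algorithm 1 might look as follows; the brute-force double loop and the edge padding are simplifications for clarity, and the default s_max is an assumption.

```python
import numpy as np

def adaptive_median(G, s_max=9):
    """Two-level adaptive median filter (Algorithm 1) over grayscale image G."""
    m, n = G.shape
    pad = s_max // 2
    P = np.pad(G.astype(np.float64), pad, mode="edge")  # simple boundary handling
    out = G.astype(np.float64)
    for x in range(m):
        for y in range(n):
            s = 3
            while s <= s_max:
                half = s // 2
                U = P[x + pad - half:x + pad + half + 1,
                      y + pad - half:y + pad + half + 1]
                g_med, g_min, g_max = np.median(U), U.min(), U.max()
                if g_min < g_med < g_max:                 # level L_A passed
                    if not (g_min < G[x, y] < g_max):     # level L_B: impulse pixel
                        out[x, y] = g_med
                    break
                s += 2                                    # grow the neighborhood
            # if s exceeds s_max without passing L_A, the pixel is kept (Step 9)
    return out
```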
Feature extraction using the sparse autoencoder (SAE) with graph LSTM
The preprocessed image is fed into this stage to extract the computer vision features using the proposed sparse-autoencoder-based LSTM model. The traditional autoencoder method has some problems, such as its inability to find features beyond copying memory into an implicit layer (Olshausen & Field, 1996). This problem is resolved with a sparsity approach applied to an autoencoder, called a sparse autoencoder (SAE) (Hoq, Uddin & Park, 2021). It is an unsupervised deep learning method with a single hidden layer (Kang et al., 2017) used to encode the data for feature extraction. It extracts the most relevant features from the expressions of the hidden layer and estimates the error (Leng & Jiang, 2016). The regularization equation for the sparsity is defined in Eq. (1),
where the divergence implemented is the Kullback-Leibler (KL) divergence, ϑ̂_i is the average activation of the i-th hidden node, and ϑ is the sparsity parameter. The KL divergence is mathematically defined by Eq. (2). The sparse AE is trained with the cost function declared in Eq. (3), which consists of the mean square error defined in Eq. (4); this reconstructs the input vector X into the output vector X̂ over the entire training dataset (Ng, 2011). The LASSO regression term is also declared in Eq. (3), and the final term is the sparsity regularization defined in Eq. (1). The importance of LASSO regression in the SAE is to extract the most relevant features by driving the coefficients of features of little relevance to zero, which reduces the parameter space,
where N is the total number of input data points, X represents the input vector, X̂ represents the reconstructed output vector, α is the LASSO regression coefficient, β is the sparsity regularization coefficient, l indicates the l-th layer, n_l is the number of layers, u_l is the number of units in layer l, and w^l_{ij} is the weight between the i-th node of layer l and the j-th node of layer l + 1. LASSO regression adds the magnitude of the absolute value of the weights as a penalty term; to improve the feature extraction process, the coefficients of irrelevant features are driven to zero. The LSTM trains the SAE to improve the feature extraction process. An LSTM is trained by backpropagation and has three gates: input, forget, and output gates. The input gate modifies the memory with values decided by the sigmoid activation function, the forget gate discards features from the previous state, and the output gate controls the output features. The traditional LSTM is enhanced with a graph structure in which each node of the graph is a single LSTM unit with forward and backward directions: in one direction the node history is constructed, and in the other the response is characterized. Figure 3 shows the proposed SAE-GLSTM model used to extract deepfake image features.
An SAE consists of an encoder, a compression/parameter space, and a decoder. The encoder encodes the given input to the parameter space through hidden layers, and the decoder is responsible for decoding the parameter-space data to the output layer. Because the autoencoder must deal with negative values, a rectified linear unit (ReLU) is not suitable, and a sigmoid function is used as the activation function, although this reduces the training ability of the network. The SAE reduces the error between the input and the reconstructed data, and the sparsity constraint reduces the number of hidden units used. The regularization used here avoids overfitting and can be applied to larger datasets. The selected features are then passed to the LSTM cell to enhance the feature selection process by extracting the relevant features. The graph LSTM model comprises six layers: the input layer, four hidden layers, and the output layer. The input layer of the graph LSTM consists of the features extracted by the SAE, with each feature represented as a neuron in the input layer of the graph LSTM cell, as shown in Fig. 4.
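A minimal PyTorch sketch of this cost function (Eqs. (1), (3), and (4)) is given below; the layer sizes and the coefficients ϑ, α, and β are illustrative assumptions, and the graph LSTM stage is omitted.

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, n_in=1024, n_hidden=128):   # sizes are illustrative
        super().__init__()
        # Sigmoid activations, since (as noted above) ReLU discards negatives.
        self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.dec = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        return self.dec(h), h

def sae_loss(model, x, theta=0.05, alpha=1e-4, beta=1e-3):
    """Mean square error + LASSO (L1) penalty + KL sparsity penalty."""
    x_hat, h = model(x)
    mse = torch.mean((x - x_hat) ** 2)                       # Eq. (4)
    lasso = sum(w.abs().sum() for w in model.parameters())   # L1 term in Eq. (3)
    rho = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)                # average activations
    kl = (theta * torch.log(theta / rho)
          + (1 - theta) * torch.log((1 - theta) / (1 - rho))).sum()  # Eq. (1)
    return mse + alpha * lasso + beta * kl                   # Eq. (3)
```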
Over t iterations, the SAE-GLSTM computes the hidden forward sequence s→ and the hidden backward sequence s←, together with the output sequence. The input x_t has a hierarchical timing layer, and the transition of the node state is declared as a vector using the standard LSTM (Zhou, Xiang & Huang, 2020). Figure 5 shows the hierarchical forward and backward timing structure of the graph LSTM. Let p(t) be the parent node of t and k(t) its first child node. This hierarchical timing structure has predecessor p(t) and successor s(t), representing the forward and backward siblings, respectively. If there is no child, the values of k(t), p(t), and s(t) are set to null. For the forward GLSTM, the parameters, namely the input gate ig, the temporal forget gate fg, the hierarchical forget gate hg, the cell c, and the output op, are updated as represented in Eqs. (6) to (10), with the vectors ig_t indicating the new-information weight, fg_t the memory data of the siblings, hg_t the memory data of the parents, and σ representing the sigmoid function. For the backward GLSTM, l(t) is replaced by k(t) and p(t) is replaced by s(t), as represented in Eqs. (11) to (15), listed in Table 2.
The extracted features from SAE are enhanced with the LSTM graph by sending the SAE output as input to the LSTM unit. The optimal relevant feature subset has been selected as an output of this graph LSTM network. The extracted feature subset has numerical values of specific images in matrix form.
Deepfake detection using the capsule dual graph CNN
The proposed detection system consists of three main capsules, each containing a dual graph CNN that performs detection, and two output capsules that indicate fake and real images. The features extracted in "Feature Extraction using the Sparse Autoencoder (SAE) with Graph LSTM" are given as input to this detection model. The dual graph neural network is a variant of the traditional neural network that operates on a graph (Scarselli et al., 2008), where each node of the graph is a feature. A dual graph CNN consists of two CNNs and takes as input the set of data points X = {x_1, x_2, …, x_l, x_{l+1}, …, x_n}, the set of labels C = {1, 2, …, c}, where the first l points have labels {y_1, y_2, …, y_l} ∈ C, and a graph structure. We assume that each point has at most k features; therefore, we denote the data set as a matrix X ∈ ℝ^{n×k} and represent the structure of the graph by the adjacency matrix A ∈ ℝ^{n×n}. Using the input X, the labels, and A, our model aims to predict the labels of the unlabeled points.
The model is constructed with local consistency (LC) and is a type of feed-forward network that incorporates global consistency (GC) and a regularizer for the ensemble. The feature vector FV and the adjacency matrix A are the inputs of the DGCNN model. The local-consistency output of hidden layer i of the network, Z^(i), is declared in Eq. (16) (Kipf & Welling, 2017): Conv_LC^(i)(X) = Z^(i) = σ(D̂^{−1/2} Â D̂^{−1/2} Z^{(i−1)} W^(i)), where Â = A + I_n is the adjacency matrix A with self-loops, I_n is the identity matrix, D̂^{−1/2} Â D̂^{−1/2} is the normalized adjacency matrix with D̂_{i,i} = Σ_j Â_{i,j}, Z^{(i−1)} represents the output of the (i−1)-th layer with Z^(0) = X, W^(i) represents the trainable parameters of the network, and σ indicates the activation function (ReLU). The output of the DGCNN can be visualized on the karate club network, as shown in Fig. 6, where red indicates a labeled node and green an unlabeled node. The local consistency network is complemented with PPMI (positive pointwise mutual information) in the global consistency layer.
Global consistency is formed with PPMI to encode semantic information and is denoted by a matrix P ∈ ℝ^{n×n}. Initially, the frequency matrix FM is calculated by random walks, and on the basis of FM we calculate the matrix P. Furthermore, we define the convolution function Conv_GC based on P. We can calculate the matrix FM as follows. A random walker chooses a random path: if the walker is at node x_i at time t, we define the state as s(t) = x_i, and we set the probability of transition from node x_i to one of its neighbors x_j as p(s(t+1) = x_j | s(t) = x_i) = A_{i,j} / Σ_j A_{i,j}. The frequency matrix is computed for all pairs of nodes along the paths generated by the random walks; the i-th row of the frequency matrix corresponds to the i-th node, and the j-th column corresponds to the j-th node regarded as context c_j. This frequency matrix is used to calculate the PPMI matrix P, as shown in Eqs. (18) to (21): P_{i,j} = max{log(p_{i,j} / (p_{i,*} p_{*,j})), 0}, where p_{i,j} is the estimated probability that node x_i occurs in context c_j, p_{i,*} is the estimated probability of node x_i, and p_{*,j} is the estimated probability of context c_j.
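The construction of the frequency matrix by random walks and the PPMI transformation of Eqs. (18)-(21) can be sketched as follows; the walk length, number of walks, and context window are illustrative parameters, and a graph in which every node has at least one edge is assumed.

```python
import numpy as np

def ppmi_from_walks(A, walk_len=40, n_walks=10, window=3, seed=0):
    """Frequency matrix from random walks on adjacency A, then PPMI."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    prob = A / A.sum(axis=1, keepdims=True)   # row-normalized transitions
    F = np.zeros((n, n))
    for start in range(n):
        for _ in range(n_walks):
            walk = [start]
            for _ in range(walk_len - 1):
                walk.append(rng.choice(n, p=prob[walk[-1]]))
            for i, node in enumerate(walk):   # count node/context co-occurrences
                for ctx in walk[max(0, i - window):i + window + 1]:
                    F[node, ctx] += 1
    p_ij = F / F.sum()
    p_i = p_ij.sum(axis=1, keepdims=True)     # p_{i,*}
    p_j = p_ij.sum(axis=0, keepdims=True)     # p_{*,j}
    ratio = np.divide(p_ij, p_i * p_j, out=np.zeros_like(p_ij), where=p_ij > 0)
    return np.log(np.maximum(ratio, 1.0))     # P_{i,j} = max(log ratio, 0)
```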
The PPMI matrix P captures the relationships between data points better than the adjacency matrix A. Using the PPMI matrix, global consistency is calculated as in Eq. (22): Conv_GC^(i)(X) = Z^(i) = σ(D^{−1/2} P D^{−1/2} Z^{(i−1)} W^(i)), where P represents the PPMI matrix and D_{i,i} = Σ_j P_{i,j} is used for normalization. To combine the local and global consistency convolutions in the dual graph convolutional network, a regularizer is used. The loss function with this regularizer is represented in Eq. (23), where c is the number of different labels to predict, Z_A ∈ ℝ^{n×c} is the output given by Conv_LC, Ẑ_A ∈ ℝ^{n×c} is the output of the softmax layer, y_L represents the set of data indices whose labels are observed for training, and Y ∈ ℝ^{n×c} is the ground truth,
where Ẑ_P ∈ ℝ^{n×c} is the output of applying the softmax activation function to the result of Conv_GC (the vectors Ẑ_{A,i*}, Ẑ_{P,i*} ∈ ℝ^n are the i-th columns of the matrices Ẑ_A and Ẑ_P, respectively). For the calculation of L_0(Conv_LC) and L_reg(Conv_LC, Conv_GC), the ReLU activation function is used; after applying the activation function, the output matrices are Z_A ∈ ℝ^{n×c} and Z_P ∈ ℝ^{n×c}. The structure of the capsule dual graph CNN is shown in Fig. 7. The proposed network consists of three main capsules, each containing a dual graph CNN, and two capsules representing real and fake images or videos. The output of each CNN capsule, OC_{j|i}, is directed through dynamic routing to produce the detected output O_j over r iterations, as described in Algorithm 2. Input: FM, A, PPMI, y_L, r, λ(t) and hidden convolution layers (H). Output: trained model with the best features.
Step 9: W′ ← W′ + rand(size(W′))
Step 10: for all input capsules i and all output capsules j do
Step 11: OC_{j|i} ← W′_j squash(OC_{j|i}) // where W′_j ∈ ℝ^m
Step 12: end for
Step 13: for all input capsules i and all output capsules j do
Step 14: b_{i,j} ← 0
Step 15: end for
Step 16: for r iterations do
Step 17: for all input capsules i do c_i ← softmax(b_i)
Step 18: for all output capsules j do s_j ← Σ_i c_{i,j} OC_{j|i}
Step 19: for all output capsules j do O_j ← squash(s_j) // where squash(s) = (‖s‖² / (1 + ‖s‖²)) s/‖s‖
Step 20: for all input capsules i and output capsules j do
Step 21: b_{i,j} ← b_{i,j} + OC_{j|i} · O_j
Step 22: end for
Step 23: Return O_j
Step 24: end for
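The routing loop of Algorithm 2 (Steps 14-23) together with the squash nonlinearity can be sketched in PyTorch as below; the capsule counts and pose dimension are illustrative, and the squash form shown is the standard one rather than necessarily the authors' exact variant.

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    """squash(s) = (|s|^2 / (1 + |s|^2)) * s / |s| (standard capsule form)."""
    sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq / (1.0 + sq)) * s / torch.sqrt(sq + eps)

def dynamic_routing(oc, r=3):
    """oc: predictions OC_{j|i} with shape (n_in, n_out, dim); returns O_j."""
    n_in, n_out, _ = oc.shape
    b = torch.zeros(n_in, n_out)                 # Step 14: b_ij <- 0
    for _ in range(r):                           # Step 16
        c = torch.softmax(b, dim=1)              # Step 17: coupling coefficients
        s = (c.unsqueeze(-1) * oc).sum(dim=0)    # Step 18: weighted sum
        o = squash(s)                            # Step 19
        b = b + (oc * o.unsqueeze(0)).sum(-1)    # Step 21: agreement update
    return o

# e.g. 3 input capsules, 2 output capsules (real/fake), 8-dimensional poses
out = dynamic_routing(torch.randn(3, 2, 8))
```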
EXPERIMENTAL RESULT AND DISCUSSIONS
The proposed deep-learning-based deepfake detection system, with its efficient feature extraction and detection process, is tested with fake and real images from the public datasets FFHQ, 100K-Faces, Celeb-DF (V2) and WildDeepfake. The proposed system is implemented using the PyTorch machine learning library.
Flickr-Faces-HQ, FFHQ
Flickr-Faces-HQ (FFHQ) is a dataset containing 70,000 high-quality, high-resolution face images, originally collected as a benchmark for generative adversarial networks (GANs).
100K-Faces
The 100K-Faces dataset contains 100,000 unique human face images generated using StyleGAN.
Celeb-DF (V2)
It is a large-scale video dataset with 590 real videos of celebrities and 5,639 high-quality deepfake videos constructed using a synthesis process, comprising over two million frames. The real videos were gathered from YouTube, and the fake videos were created by swapping the faces of each pair of subjects.
WildDeepfake
It is a real-world deepfake detection dataset collected from the Internet. Both the real and fake subjects of this dataset were collected from internet sources and cover various scenes; each scene can contain several persons with rich facial expressions.
Specificity
Specificity is the rate of true negatives (TNs) among all actual negatives, TN/(TN + FP). The accuracy comparison of the proposed SAE-GLSTM-C-DGCNN deepfake detection is shown in Table 3. The sensitivity, specificity, and ROC comparisons of various deepfake detection systems are evaluated using four different datasets, and the results are shown in Table 4 and Fig. 8. Table 4 shows the sensitivity and specificity analyses of the proposed SAE-GLSTM with the C-DGCNN system compared with the existing algorithms on the FFHQ, 100K-Faces, Celeb-DF, and WildDeepfake datasets. The proposed SAE-GLSTM with C-DGCNN obtained a sensitivity score of 91.67% on the FFHQ dataset, 89.8% on 100K-Faces, 89.1% on Celeb-DF and 93.1% on WildDeepfake. Similarly, the specificity of the proposed SAE-GLSTM with the C-DGCNN was evaluated on the FFHQ, 100K-Faces, Celeb-DF, and WildDeepfake datasets (Table 4). These datasets are used in both the training and testing processes with the baseline deepfake detection classifiers, namely VGG19, ResNet, and MobileNet, and with our proposed SAE-GLSTM with the C-DGCNN model. Table 5 shows that the proposed approach obtained a minimum error rate of 5.1 for the WildDeepfake dataset, 7.12 for Celeb-DF, 6.01 for 100K-Faces and 5.91 for FFHQ; these error rates are lower than those of the baseline deepfake detection methods VGG19, ResNet, and MobileNet. Figure 9 illustrates the equal error rate (EER) of the various approaches on the various datasets. The proposed deepfake detection system secured a minimum EER of 1.9 for the WildDeepfake dataset, 1.67 for Celeb-DF, 2.1 for 100K-Faces and 1.32 for FFHQ; these EERs are lower than those of the traditional baseline systems VGG19, ResNet and MobileNet. Overall, the results have shown that the proposed SAE-GLSTM with the capsule dual graph CNN improves sensitivity, specificity, and accuracy while minimizing the error rate, the EER, and the computation time. Ultimately, these findings prove that the proposed system efficiently detects deepfake images/videos with improved accuracy and minimum error.
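For reference, the reported metrics can be computed from labels and detection scores as in the following sketch; the decision threshold and the simple nearest-point EER estimate are illustrative choices.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve

def detection_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, and a simple EER estimate from scores."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)                   # true-positive rate
    specificity = tn / (tn + fp)                   # TN / (TN + FP)
    fpr, tpr, _ = roc_curve(y_true, y_score)
    eer = fpr[np.argmin(np.abs(fpr - (1 - tpr)))]  # point where FPR ~= FNR
    return sensitivity, specificity, eer
```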
CONCLUSION
This article demonstrated a two-level deep learning method for the detection of deepfake images and videos. From the extracted frames, face images are obtained for deepfake detection. The features of the face images are extracted using the proposed SAE method, and the most relevant features are obtained by enhancing the SAE-based feature extraction with the graph LSTM approach. These relevant extracted features are then fed as input into the capsule network for the detection of deepfakes. There are five capsules: three input capsules constructed from dual graph CNNs and two output capsules representing fake and real images or videos. Experimental analysis against various baseline deepfake detection approaches, such as VGG19, ResNet and MobileNet, using the benchmark deepfake image and video datasets FFHQ, 100K-Faces, Celeb-DF and WildDeepfake, demonstrated that the proposed two-level deepfake detection approach achieves improved accuracies of 96.2%, 97.15%, 98.12% and 98.91%, respectively, on these datasets. The proposed system obtained improved ROC values of 94.2% for the FFHQ dataset, 94.1% for 100K-Faces, 96.12% for Celeb-DF and 86.54% for WildDeepfake. In terms of the error rate, the proposed system secured corresponding values of 5.91, 6.01, 7.12 and 5.1. The proposed system achieved computation times of 8.1 ms for the WildDeepfake dataset, 10.3 ms for Celeb-DF, 12.1 ms for 100K-Faces and 21.2 ms for FFHQ. Therefore, all evaluations have shown that the proposed two-level deepfake detection method is general and effective in detecting a wide range of fake video and image attacks. In the future, the proposed system will be improved to defend against adversarial machine attacks with enhanced capabilities. | 6,820.8 | 2022-05-31T00:00:00.000 | ["Computer Science"] |
New Pharmacokinetic and Microbiological Prediction Equations to Be Used as Models for the Search of Antibacterial Drugs
Currently, the development of resistance in Enterobacteriaceae is one of the most important health problems worldwide. Consequently, there is a growing need to find new compounds with antibacterial activity. Furthermore, it is very important that such antibacterial compounds also have a good pharmacokinetic profile, which will lead to more efficient and safer drugs. In this work, we have mathematically described a series of antibacterial quinolones by means of molecular topology. We have used molecular descriptors and related them to various pharmacological properties by means of multilinear regression (MLR) analysis. The regression functions selected as presenting the best combination of quality and validation metrics allowed the reliable prediction of clearance (CL) and of the minimum inhibitory concentration 50 against Enterobacter aerogenes (MIC50Ea) and Proteus mirabilis (MIC50Pm). The results clearly reveal that the combination of molecular topology methods and MLR provides an excellent tool for the prediction of pharmacokinetic properties and microbiological activities, in both new and existing compounds with different pharmacological activities.
Introduction
The increasing emergence of resistance to known antibiotics is one of the most important problems to have appeared in recent years in the treatment of infectious diseases [1]. The emergence of resistance is associated not only with complications from infections in terms of morbidity and mortality, but also with an increase in healthcare costs [2]. In this context, rapid and inexpensive methods are necessary for the immediate expansion of the therapeutic arsenal that we currently have [3].
Traditionally, in order to develop a drug, it was necessary to synthesize and evaluate the activity of hundreds to thousands of compounds with the intention of proving their biological activity, selectivity, and bioavailability, as well as their low toxicity. On average, this process of discovering new drugs involves, in addition to a high economic cost, several years of work before, hopefully, finding a drug with the right characteristics for commercialization [4].
Virtual screening, along with combinatorial chemistry, appears as a solution to this problem by allowing researchers to identify molecules that are likely to be active from a virtual chemical library [5]. Given that these virtual libraries usually have many components, it is common to apply filters to discard molecules with a low probability of becoming a drug. In this sense, Lipinski was the first to apply such filters when describing the "Rule of Five" [6]. This standard predicts that a compound is "non-pharmacological" if it has too many hydrogen bond donors or acceptors (more than 5 and 10, respectively), a molecular mass greater than 500 Da, or a calculated partition coefficient (ClogP) greater than 5. With these rules, Lipinski created the first pre-filter prior to drug screening [7]. This rule has been revised based on data obtained from studies in rats [8]. Since then, other rules have appeared and been modified, such as the Rule of Three [9], used in "fragment-based" discovery, which classifies as "drug-like" those fragments with an average molecular weight ≤ 300 Da, a calculated partition coefficient (ClogP) ≤ 3, ≤3 hydrogen bond donors, ≤3 hydrogen bond acceptors, and <3 rotatable bonds. Another similar example is the "Rule of 3/75", which was described after analyzing 245 preclinical Pfizer compounds. This rule states that a compound will be 2.5 times safer in in vivo assays when its ClogP is <3 and its topological polar surface area (TPSA) is >75 [10].
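Such filters are easy to apply programmatically. Below is a sketch using RDKit descriptors to test the Rule of Five and the 3/75 rule; the descriptor choices (e.g., MolLogP as the calculated partition coefficient) are common but approximate stand-ins for the originally published methods.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def passes_rule_of_five(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return (Descriptors.MolWt(mol) <= 500
            and Crippen.MolLogP(mol) <= 5          # ClogP stand-in
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

def safer_by_3_75(smiles):
    """Rule of 3/75: ClogP < 3 and TPSA > 75 flag a likely safer compound."""
    mol = Chem.MolFromSmiles(smiles)
    return Crippen.MolLogP(mol) < 3 and Descriptors.TPSA(mol) > 75

print(passes_rule_of_five("CC(=O)OC1=CC=CC=C1C(=O)O"))  # aspirin, for illustration
```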
Within drug screening, it is also necessary to highlight the rise of QSAR (Quantitative Structure-Activity Relationship) methods, which have been the foundation of the design and development of new molecules with pharmacological activities [11]. QSAR modelling is a technique that allows the interdisciplinary exploration of knowledge in compounds covering chemical, physical, biological, and toxicological aspects. It also provides formalisms for the mathematical development of models based on chemical characteristics and the activity of structurally similar compounds. This context is defined by mathematical algorithms and provides a reasonable foundation for creating a prediction model [12].
The use of QSAR methods for obtaining and developing new drugs is a key tool since it provides a noteworthy saving of time and resources, considering that many pharmacological activities of compounds can be predicted without having been synthesized by virtue of its rational approach [13,14].
There are numerous examples of drug designed molecules with very different pharmacological activities which have followed the application of QSAR techniques in combination with other in-silico methods, such as zanamivir, tirofiban, imatinib, raltegravir, donepezil, boceprevir, norfloxacin, captopril and dorzolamide, all of them approved by the US Food and Drug Administration [15].
Our research group uses molecular connectivity or molecular topology (MT), developed by Kier and Hall in the mid-1970s [16], a method derived from the QSAR techniques, which is based on the calculation of molecular descriptors or topological indices (TIs) from the structure of a molecule. These methods play an important role, since they provide useful information for the rational design of new molecules with minimal cost and are an alternative and effective tool in view of the reduction of investments by the pharmaceutical industry in the area of antibiotics [17,18].
By using MT, we can classify a compound as active or not active against a certain pharmacological activity using pattern recognition techniques such as multilinear regression (MLR), discriminant linear analysis, neuronal networks, random forests, support vector machines or principal component analysis [19].
All Gram-negative bacilli of the Enterobacteriaceae family are resistant to common antimicrobials, such as penicillin, oxacillin, methicillin, macrolides, glycopeptides, linezolid, and clindamycin, among others [20]. Moreover, the World Health Organization (WHO), in its "WHO PRIORITY PATHOGENS LIST FOR R&D OF NEW ANTIBIOTICS", considers this family a critical or first-priority pathogen [3]. Both bacteria studied in this work, Proteus mirabilis and Enterobacter aerogenes, are Gram-negative bacilli that belong to the Enterobacteriaceae family and are an important cause of community and nosocomial infections, representing a major clinical and public health challenge due to the limited therapeutic arsenal and well-established resistances [21,22].
In this study, by combining MT and MLR after extending a thorough bibliographic research on physicochemical and biological properties on antibacterial quinolones from a previous study [23], we developed three linear regression functions capable of predicting clearance and minimum inhibitory concentration 50 against Enterobacter aerogenes and Proteus mirabilis. We selected quinolones to build these equations, which can be applied to other quinolones not used in the study, as well as to molecules that share no structural relationship, seeing as the selection of hits or leads considers mathematical-topological similarity patterns, despite the possible core-structure diversity [24]. Thus, the obtained functions can be used as filters in the search process of new antibacterials in large databases of molecules. Examples of some commonly used chemical libraries where new molecules could easily be found on which to apply these techniques are ZINC, Benchmark DUD, PubChem, ChemBank, ChEMBL, DrugBank and Inter-bioscreen [15].
The first quinolone to be used clinically was discovered in 1962 by G. Lesher et al., who, during the purification of the antimalarial chloroquine from mother liquor, found 7-chloroquinoline, a by-product with antimicrobial activity. Starting from this lead, nalidixic acid was obtained. Despite the good tolerance and ease of synthesis of this compound, its use has been restricted to the treatment of some urinary infections due to its tendency to select for resistance and to concentrate mainly in urine. From this discovery, derivatives with similar structures were developed, still with moderate activity against Gram-negative bacteria, but very valuable in uncomplicated urinary infections [25].
Pipemidic acid appeared in 1975, the first quinolone with a piperazine ring, with favorable activity, spectrum, and pharmacokinetics, which allowed the administration of lower doses. The appearance of bacterial resistance during treatment and the side effects were also lower [26]. A year later, flumequine was discovered, which, by incorporating a fluorine atom at position 6 of the core double ring, was able to amplify its spectrum within Enterobacteriaceae, including some resistant strains, and slightly improve the activity against Gram-positive bacteria [27].
These facts motivated the beginning of an active chemical synthesis campaign to refine the structure-activity relationships. The aim was to improve activity while optimizing the pharmacokinetic properties and reducing the toxicity and interactions with other drugs. This led to the consolidation of the second-generation quinolones, which differ from the classical quinolones in two characteristics common to all of them: the presence of a fluorine atom in position 6 and a substituted piperazine or pyrrolidine ring at position 7 of their core structure [28].
The first second-generation quinolone to be patented, in 1978, was norfloxacin, whose improved potency against Gram-negative bacteria was already in the range of the natural antibiotics, in addition to having a longer half-life (3-4 h) and lower protein binding (50%) than its predecessors [29]. Shortly after, ciprofloxacin was introduced, for many years the most potent fluoroquinolone against Gram-negative bacteria and the most widely used quinolone [30]. This generation of quinolones has much higher activity against aerobic Gram-negative bacteria and some important Gram-positive pathogens, as well as more favorable toxicological profiles and good absorption through the gastrointestinal tract [28].
In the 1990s, several quinolones, mostly di-or tri-fluorinated, were approved. They are the third-generation quinolones, which have a greater structural complexity than their predecessors. One of the first modifications was the introduction of an amino group at position 5, which caused an increase in the general activity against Gram-positive bacteria [28]. All of them have advantages in their bioavailability compared to second-generation quinolones, since the substituted rings at position 7 give them greater lipophilicity, resulting in longer elimination half-lives and greater tissue penetration [31].
In recent years, a new class of quinolones has been synthesized, since the optimization of the substituents in different positions of the core structure has allowed the suppression of the fluorine atom in position 6, thus avoiding possible toxic effects, without necessarily reducing its activity [32]. Probably, the most prominent examples of these novel desfluoro(6) quinolones are garenoxacin (initially known as BMS-284756) and DX-619, which due to their high affinity for type II topoisomerases, have shown high intrinsic activity against most Gram-negative, Gram-positive and anaerobic microorganisms with a low resistance selection potential [33,34].
The good pharmacokinetic profile of quinolones, together with their high antibacterial activity and broad spectrum of action, makes their therapeutic indications very varied. In fact, quinolones are used in the treatment of many infections, both in the hospital setting and in outpatient clinical practice. More than ten thousand quinolones have been registered; therefore, this has probably been the most widely used and most thoroughly studied class of chemotherapeutic agents [35,36]. For all these reasons, we considered quinolones the most suitable antibacterial family on which to focus our study.
Results
We performed Y-randomization, inter-correlation, and leave-one-out (LOO) cross-validation tests on all equations. On the one hand, a plot of the correlation coefficient (r²) versus the cross-validated correlation coefficient (r²_cv) can show the presence of chance correlations. On the other hand, compounds that do not fit the selected function can be easily spotted in a representation of the Y-randomization analysis.
Hence, we rejected those equations with highly dependent indices (a correlation coefficient greater than 0.8 or smaller than −0.8 between indices), suspected randomness (r²_cv smaller than 0.5 for the selected equation), or outliers in the compound set (dissimilar residual values between the selected and the cross-validated equation for one of the compounds, which is indicative of instability).
We also considered the adjusted linear coefficient of determination (r²_a), which is given by r²_a = 1 − (1 − r²)(N − 1)/(N − p′), where N is the number of observations and p′ is the number of independent variables selected in the RF (p′ = p when the y-axis intercept is zero and p′ = p + 1 when it is different from zero). The correlation is better the closer this value gets to unity. We finally selected three descriptor-independent regression functions (RFs) with high predictive capacity and significant statistical parameters. Tables 1-3 collect all the important information regarding the RFs for each of the compounds: the value of each independent variable or TI; the dependent variable, i.e., the experimental value of each property; the predicted value of each property; and the residual values (differences between experimental and predicted values).
We note that one of the calculated values is negative (clinafloxacin, Table 2), which is, of course, an impossible value for a MIC. This can happen for compounds whose dependent variable value is very close to zero, since the linear regression function may then yield a predicted value just below the x-axis, i.e., just below zero.
Discussion
Only those functions with a significance greater than 99.99% (p ≤ 0.0001) were selected. The incorporation of more than three independent variables was studied in order to improve the statistical parameters, but no improvement was achieved for any of the properties studied. Finally, the best statistical results were obtained with three independent variables in each function, which indicates that all the indices provide important structural information for quantifying each of the properties studied.
In all cases, the residual value for each compound was less than twice the standard error of estimation (±2SEE), indicating that the selected equations have high predictive ability.
The cross-validated correlation coefficient (r²_cv) was determined to study Y-randomization. A function is considered non-randomly obtained when its r²_cv > 0.5 [15]. In the three selected equations, the value of this coefficient was higher than 0.7028, which explains more than 70.28% of the variance of the group. The probability of finding a random sequence that fits the prediction model is very low, since the three selected equations had significantly higher r²_cv values than the non-selected equations (Supplementary Section S1). All these data guarantee the robustness of the selected equations.
For the clearance (CL) property, a connectivity index [43] that considers sixth-order chain-type graphs (⁶χ_ch) increases the RF's value, meaning that the presence of 6-membered cycles increases this property. On the other hand, an electro-topological index (S_>N−) [44] and an information-theory index (NI₂) [45] have a negative effect on the RF's value. The first increases with a high number of tertiary amine groups, so the presence of this group decreases the CL value. The latter can be applied to a molecule to quantify its atomic diversity [46].
For the minimum inhibitory concentration 50 against Enterobacter aerogenes (MIC50Ea), two electro-topological indices increase the RF's value. The first (S_aCa) increases with a high number of aromatic carbons, while the second (S_-NH-) increases with a high number of secondary amine groups. Thus, the presence of aromatic carbons and/or secondary amine groups has a negative effect on the activity against E. aerogenes, since the property in this case is a MIC. On the contrary, another electro-topological index (S_-CH2-), which increases with an increasing number of methylene groups, helps to decrease the RF's value and hence improves the activity against E. aerogenes.
For the minimum inhibitory concentration 50 against Proteus mirabilis (MIC50Pm), a connectivity index and an electro-topological index increase the RF's value. The valence-connectivity index [47] considers ninth-order chain-type graphs (⁹χᵛ_ch), present in garenoxacin and moxifloxacin, which show weak activity against P. mirabilis compared with most of the compounds in that data set. This is consistent with the structure-activity relationships established by Domagala et al. [48], who state that a 9-membered bicycle at the quinolone's R7 position enhances activity against Gram-positive bacteria but is detrimental against Gram-negative bacteria such as P. mirabilis. Since this bicycle is present in only two compounds, we studied developing the RF without this index, but in all attempts the statistical consistency decreased significantly. The electro-topological index S_-CH3 increases with an increasing number of methyl groups, so we could state that an excessive number of methyl groups in the quinolones' structure decreases the activity against P. mirabilis. In fact, most of the most active compounds in this data set lack this functional group, i.e., have a zero value for this index (see Table 3 for the MIC, ⁹χᵛ_ch and S_-CH3 values and Supplementary Section S2 for the quinolones' chemical structures). Nevertheless, an electro-topological index decreases the RF's value, i.e., enhances activity against P. mirabilis: S_>N−, which increases with a high number of tertiary amine groups. Therefore, we could state that a good proportion of tertiary amine groups in the quinolones' structure enhances the activity against P. mirabilis.
Compound Selection
Data were collected for a total of 85 physicochemical, pharmacokinetic, pharmacodynamic and microbiological properties of antibacterial quinolones (see Supplementary Section S2).
The experimental procedures used to obtain the experimental values of each property were reviewed. To be selected, a property had to have been obtained with comparable experimental procedures for at least 10 compounds. Sufficient data were obtained for a total of 27 microbiological properties (activities against different microorganisms, measured as different MICs) from in vitro assays performed according to CLSI protocols [49], and for 9 pharmacokinetic properties from assays performed in healthy subjects.
Due to the diversity of bibliographical data found regarding a given property for each of the selected molecules, we considered the average of those data whose values were of the same order of magnitude for the study and development of the prediction equations.
Topological Descriptors
A total of 41 different quinolones were topologically characterized using DESMOL13 [50] and MOLCONN-Z [51] programs. We removed indices with value 0 for every compound and with identical values for all compounds, leaving a total of 140 topological descriptors specific to each molecule (see Supplementary Section S3). These descriptors were computed from the adjacency topological matrix obtained from the hydrogen-depleted chemical pseudographs, drawn with the ChemBioDraw Ultra 12.0 drawing program of the ChemBioOffice 2010 program package. The molecular descriptors used are described in Supplementary Section S4 along with their definitions and references.
Multilinear Regression
We used module 9R of the statistical package BMDP [52] to perform the MLR analysis. This program is based on the Furnival-Wilson algorithm [53], which uses Mallows' Cp parameter [54] to assess how well a set of observations fits a given RF. Its purpose is to find, among all possible combinations of indices, the best one with which to generate this RF. This statistical parameter can be written as Cp = PRESS/s² − N + 2p, where PRESS is the sum of the squared residuals of the dependent variable, s² is the residual mean square obtained in the RF, p is the number of variables selected in the RF, and N is the number of observations.
In principle, the equation with the lowest Mallows' Cp value will be the optimal one. However, the residual sum of squares always decreases as variables are added to the equation; consequently, if we aimed only for minimum residual sums, we would always select the equation with the highest number of indices.
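The following sketch scores candidate descriptor subsets by Mallows' Cp through an exhaustive search; the Furnival-Wilson algorithm explores the same space far more efficiently, so this is only an illustration of the criterion, with the subset-size cap as an assumption.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

def mallows_cp_search(X, y, max_p=3):
    """Exhaustively score descriptor subsets of size <= max_p by Mallows' Cp."""
    n, k = X.shape
    full = LinearRegression().fit(X, y)
    rss_full = np.sum((y - full.predict(X)) ** 2)
    s2 = rss_full / (n - k - 1)                 # residual variance, full model
    best = None
    for p in range(1, max_p + 1):
        for cols in combinations(range(k), p):
            m = LinearRegression().fit(X[:, cols], y)
            rss = np.sum((y - m.predict(X[:, cols])) ** 2)
            cp = rss / s2 - n + 2 * (p + 1)     # +1 for the intercept
            if best is None or cp < best[0]:
                best = (cp, cols)
    return best                                  # (Cp value, selected indices)
```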
However, we must always bear in mind that the predictions made from a linear regression function are not reliable if there is not a high degree of linear correlation between the independent and dependent variables. In addition, its statistical quality must be analyzed and the higher it is, the more reliable and accurate the prediction will be.
It is also recommended that the number of independent variables chosen be well below the number of observations, to avoid possible fortuitous correlations; the optimal relationship between them is the one that achieves the best prediction with the minimum possible number of descriptors, in order to minimize random variation [55]. Randomness tests were also carried out to identify possible fortuitous regressions. To do this, the property value of each compound is randomly permuted and linearly correlated with the selected independent variables, and this process is repeated as many times as necessary. The result of the randomness test is expressed by graphically representing the correlation coefficient (r²) versus the prediction coefficient (r²_cv).
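The LOO cross-validation coefficient and the Y-randomization test described here can be sketched as follows; the number of permutations is an illustrative choice.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def q2_loo(X, y):
    """Cross-validated r^2 (r^2_cv) from leave-one-out predictions."""
    y_hat = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

def y_randomization(X, y, n_perm=100, seed=0):
    """Distribution of r^2_cv for models fitted to permuted property values."""
    rng = np.random.default_rng(seed)
    return [q2_loo(X, rng.permutation(y)) for _ in range(n_perm)]
```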
For all those reasons, the number of observations (size of the set), the effect of the different independent variables and the degree of correlation between them will influence the selection of RFs under the criterion of this algorithm.
Conclusions
The increase of resistant Enterobacteriaceae bacteria such as E. aerogenes and P. mirabilis is one of the major challenges that have appeared in recent years in infectious disease treatments. Therefore, the urge to find new compounds with antibacterial activity has not ceased to grow. It must also be noted that the pharmacokinetic profile of a compound is essential in determining the clinical outcome of an antibiotic treatment.
We have designed and developed three statistically significant, descriptor-independent regression functions with a high predictive capacity, which will help us obtain theoretical values for the selected pharmacokinetic and microbiological properties. The predictions obtained are supported by internal validation studies performed for all selected properties, as well as by all the statistical parameters, which can be classified as very satisfactory in all cases.
These equations, used as predictive models or as filters, can be extremely useful for finding new molecules with antibacterial activity, and even for drug repositioning, where thousands of candidate compounds, most of which have already undergone toxicological assays, can be screened in a short time in the search for safe antibacterial compounds.
We can conclude that the prediction functions obtained confirm molecular topology as a useful and efficient tool in the prediction of pharmacokinetic and microbiological properties. Data Availability Statement: This study did not report any data. | 5,048.4 | 2022-01-20T00:00:00.000 | ["Biology", "Medicine", "Chemistry"] |
Cancer CGH+SNP Unmasked Multiple Noncontiguous Deletions on Chromosome 7q and Cryptic Genomic Imbalances in a CMML Patient with an Apparently Balanced t(4;12) Translocation. A Case Report and Literature Review
Chronic myelomonocytic leukemia is a clonal hematopoietic stem cell disorder with overlapping features of myelodysplastic syndromes and myeloproliferative neoplasms. Median age at diagnosis is 70 years and, in many cases, it is diagnosed incidentally. The bone marrow karyotype is normal in two thirds of patients, with a few recurrent aberrations including -Y, -7, and +8. Here, we report the case of a patient affected by the dysplastic CMML-1 subtype according to the 2017 WHO classification, showing a peculiar 46,XX,del(7)(q21q36),t(4;12)(q24;q15)[18]/46,XX[2] karyotype. To assess the real nature of these chromosomal abnormalities, we performed a Cancer CGH+SNP array. On chromosome 7q, we identified three noncontiguous deletions, at bands q21.11-q22.1, q22.1-q32.2 and q34-q36.1, while we did not detect any copy-number-neutral loss. In addition, the SNP array unveiled the unbalanced nature of the t(4;12), with three cryptic genomic imbalances: two deletions on chromosome 4, at bands q13.1-q13.3 and q24, and one deletion on chromosome 12, at bands q21.33-q23.1. These three deletions are known to involve many OMIM genes, including TET2 (OMIM *612839) and NFKB1 (OMIM *164011). Chromosome 7 aberrations, detected in about 20% of CMML patients with cytogenetic abnormalities, have been recognized as an adverse prognostic factor, therefore allocating patients to the high cytogenetic risk category. Several tumor suppressor genes map to the deleted regions of chromosome 7, such as EZH2, SAMD9L and CUX1. Deletion of these regions can contribute to disease progression and could account for differences in patients' prognosis due to the variability of breakpoint regions on 7q.
Next-generation sequencing (NGS) analysis confirmed this result, revealing a double TET2 mutation. We therefore underline the role of CGH arrays in the CMML diagnostic workup. These tools, together with NGS, represent valid instruments to provide insight not only into molecular pathogenesis but also into disease progression.
Case Report
Chronic myelomonocytic leukemia (CMML) is a clonal hematopoietic stem cell disorder with overlapping features of myelodysplastic syndromes (MDS) and myeloproliferative neoplasms and an inherent leukemic risk of ~15% over 3-5 years [1,2]. The 2017 WHO classification has recommended its partitioning into three categories based on peripheral blood and bone marrow (BM) blast percentages [2]. In addition, the previously used 1994 FAB Cooperative Leukemia Group subdivision into a "dysplastic" (MD) and a "proliferative" CMML variant has been revived. Median age at diagnosis is 70 years, with a male preponderance. In many cases the diagnosis is incidental, with a median survival of 24-36 months [3]. Over the years several studies have aimed to identify clinical and biological features associated with CMML survival outcomes, leading to the development of different prognostic models for individual patients' treatment decision-making [4]. As in acute myeloid leukemia, CMML patients demonstrate ~10-15 mutations per kilobase of coding DNA regions [5], while clonal cytogenetic abnormalities are observed in 20-30% of cases, including +8, -Y, chromosome 7 abnormalities, +21, and complex karyotypes [1].
Here, we describe the case of a 76-year-old patient who was admitted to our hospital because of suspected CMML and for whom an array CGH was performed to better define the genomic imbalances at submicroscopic level and identify involved genes.
While conventional cytogenetics could not define the precise intervals and the genes involved in the chromosome 7q deletion, with array CGH we identified five genes already known to have a potential role in tumorigenesis [6,7].
In detail, EZH2 is a component of the polycomb repressive complex 2 and encodes a methyltransferase that initiates epigenetic silencing of many genes involved in different cell pathways.
CUX1 encodes a homeobox transcription factor involved in tumorigenesis, with a possible role as a tumor suppressor gene.
SAMD9 and SAMD9L show compound heterozygous deletions with high frequency in adult and childhood myeloid leukemia. In contrast with previous reports, KMT2C/MLL3, despite being an epigenetic regulator acting as a gene silencer, is not involved in our deletion. In our patient, together with the del(7q), we found an apparently balanced t(4;12) translocation, which was proved to be unbalanced by array CGH. The three deletions found on chromosomes 4 and 12 involve many OMIM genes, with TET2 and NFKB1 playing an important role in disease progression. Somatic TET2 mutations occur in ~60% of CMML cases, even if they are not specific for the disease and can also be detected as part of age-related clonal hematopoiesis. Moreover, they have not been proven to negatively impact either overall survival (OS) or leukemia-free survival [8,9]. On the contrary, in the absence of clonal ASXL1 involvement, TET2 mutations were shown to favorably impact OS [10]. Interestingly, we found the coexistent loss of EZH2 due to the 11 Mb deletion at bands q34-q36.1 of chromosome 7. Indeed, its deletion is known to contribute to myeloid tumorigenesis in association with TET2 variations. The deletion on chromosome 12, by contrast, has not been commonly described in association with hematological malignancies, so its biological significance remains unclear. At the same time, we cannot exclude that some of the involved genes could play a minor role in disease onset or progression.
In conclusion, this case shows both common recurrent rearrangements and rare copy number alterations. Clarifying the role of these alterations could help elucidate the mechanisms involved in the CMML leukemogenic network, possibly contributing to a more accurate prognosis. This case also underlines the importance of including different molecular cytogenetic tests in the CMML diagnostic workup, thus providing prognostic information and a strategy to develop personalized therapies, especially considering that NGS analysis is not always available.
Disclosure of Conflicts of Interest
All the authors declare they have no potential conflicts of interest.
Ethics Approval and Consent to Participate
This study did not involve a research intervention; therefore, ethics committee approval was not necessary.
The patient has given her written informed consent to publish her case.
Funding
The only funds used were provided by the authors' institutions. | 1,404.2 | 2021-08-31T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
MicroRNA-223 is essential for maintaining functional β-cell mass during diabetes through inhibiting both FOXO1 and SOX6 pathways
The initiation and development of diabetes are mainly ascribed to the loss of functional β-cells. Therapies designed to regenerate β-cells provide great potential for controlling glucose levels and thereby preventing the devastating complications associated with diabetes. This requires detailed knowledge of the molecular events and underlying mechanisms in this disorder. Here, we report that expression of microRNA-223 (miR-223) is up-regulated in islets from diabetic mice and humans, as well as in murine Min6 β-cells exposed to tumor necrosis factor α (TNFα) or high glucose. Interestingly, miR-223 knockout (KO) mice exhibit impaired glucose tolerance and insulin resistance. Further analysis reveals that miR-223 deficiency dramatically suppresses β-cell proliferation and insulin secretion. Mechanistically, using luciferase reporter gene assays, histological analysis, and immunoblotting, we demonstrate that miR-223 inhibits both forkhead box O1 (FOXO1) and SRY-box 6 (SOX6) signaling, a unique bipartite mechanism that modulates expression of several β-cell markers (pancreatic and duodenal homeobox 1 (PDX1), NK6 homeobox 1 (NKX6.1), and urocortin 3 (UCN3)) and cell cycle–related genes (cyclin D1, cyclin E1, and cyclin-dependent kinase inhibitor P27 (P27)). Importantly, miR-223 overexpression in β-cells could promote β-cell proliferation and improve β-cell function. Taken together, our results suggest that miR-223 is a critical factor for maintaining functional β-cell mass and adaptation during metabolic stress.
Globally, the number of adults with diabetes has quadrupled in the past three decades, and it is projected to increase to 642 million in 2040 (1). Type 1 diabetes (T1D) results from notable insulin deficiency caused by autoimmune attack on β-cells, which leads to pronounced β-cell death and dysfunction. Type 2 diabetes (T2D), which accounts for about 90% of all diabetes cases, is believed to be the consequence of insulin resistance in key organs such as liver, adipose tissue, and skeletal muscle (2). However, accumulating evidence suggests that insulin resistance leads to T2D only when accompanied by β-cell dysfunction, in which β-cells can no longer compensate for the increased demand for insulin by increasing functional output and number (3). Therefore, loss of functional β-cells is a critical culprit responsible for both types of diabetes.
Pancreatic β-cell mass is maintained through a dynamic balance of neogenesis, proliferation, and apoptosis (4). At embryonic and neonatal stages, β-cells are primarily generated via differentiation from stem or progenitor cells, a process called neogenesis; in the adult pancreas, however, cell lineage tracing studies have shown that replication/proliferation of existing β-cells is the dominant mechanism to increase β-cell mass (5). Yet the β-cell proliferation rate declines rapidly in early childhood and ultimately reaches almost zero in adults (4). Interestingly, it has been proposed that β-cell dedifferentiation is needed prior to proliferation (6,7), followed by redifferentiation into mature β-cells (8,9). In fact, β-cell dedifferentiation has been repeatedly reported to play a role in the pathogenesis of T2D in both rodent and human studies (10-12), and the redifferentiated cells are capable of correcting hyperglycemia in T1D mice (13). Thus, it is very likely that, during the early stage of diabetes, pancreatic β-cells undergo a process of dedifferentiation-proliferation-redifferentiation to expand functional β-cell mass and cope with the increasing demand for insulin. Despite years of intensive research in this field, currently available therapies fail to prevent β-cells from their inevitable deterioration. Although transplantation of β-cells has been proposed to address this problem, it is implausible because of the scarcity of β-cells from cadaveric donors and transplant rejection (14). Therefore, it is urgently needed to identify novel factors that preserve and restore functional β-cell mass, which can eventually lead to the development of β-cell replacement or regeneration therapies.
MicroRNAs (miRNAs or miRs) are non-protein-coding RNAs of ~22 nucleotides, which can degrade or inhibit the translation of hundreds of target mRNAs by binding to the 3′ untranslated region (UTR) (15). Therefore, miRNAs are able to regulate a broad spectrum of biological processes such as cellular proliferation and survival. Genetic deletion of dicer1, an essential processor of miRNAs, in pancreas or β-cells results in the development of diabetes because of smaller pancreatic size and loss of insulin mRNA and protein (16,17). This suggests that miRNAs, taken as a whole, may act as a positive regulator of β-cell development, survival, and function. Like many other miRNAs, miR-223 is ubiquitously expressed in tissues such as heart, adipose tissue, and liver (18). Consistent with its tissue distribution, miR-223 has been implicated in various physiological and pathological conditions including cardiac hypertrophy (19), systemic cholesterol homeostasis (20), development of atherosclerosis (21), and adipose tissue-associated insulin resistance (18). Furthermore, it has been reported that miR-223 plays a critical role in regulating cell proliferation in diabetic retinopathy and some cancers (22,23). However, the functional role of miR-223 in pancreatic β-cells and its underlying mechanism have never been investigated. Thus, we performed in vivo loss-of-function and in vitro gain-of-function studies to determine the role of miR-223 in maintaining functional β-cell mass. Our data unveil a unique bipartite molecular mechanism whereby miR-223 positively controls functional β-cell mass through regulating the Foxo1 and Sox6 signaling cascades, providing potential effective targets for diabetes intervention.
MiR-223 is up-regulated in β-cells treated with TNFα or high glucose as well as in islets from diabetic mouse models and human patients
To investigate the functional role of miR-223 in pancreatic β-cells, we first characterized the expression levels of miR-223 in diabetic mouse islets and β-cells (Fig. 1, A-E). We observed that, compared with chow diet (CD) control mice, levels of miR-223 were increased by 5.6-fold and 12.8-fold in islets of HFD and db/db mice, respectively (Fig. 1, A and B). Similar results were noticed in islets of Akita mice, a genetic mouse model associated with T1D because of an Insulin 2 gene mutation and increased β-cell toxicity (Fig. S1A). Consistent with these in vivo data, the expression levels of miR-223 were increased by 2.5- and 3.4-fold, respectively, in Min6 β-cells treated with TNFα or high glucose (25 mM) medium, compared with BSA or low glucose (5 mM) controls (Fig. 1, C and D). More importantly, expression levels of miR-223 were also enhanced by about 2-fold in islets from human donors with obesity and type 2 diabetes (Fig. 1E). Together, both in vivo and in vitro data clearly indicate that miR-223 expression in β-cells and islets is dysregulated under various stress conditions.
Ablation of miR-223 exacerbates β-cell dysfunction under diabetic conditions
We next went on to assess the potential role of miR-223 in the regulation of functional β-cell mass under metabolic stress. A global miR-223 KO mouse model was used to avoid confounding effects from organ crosstalk, since miR-223 can be transported via exosomes or released into the circulation (24,25). The results of qRT-PCR analysis confirmed that miR-223 was deleted (Fig. 2A and Fig. S1B). After 18 weeks of HFD feeding, we did not observe any differences in body weight changes between WT and KO mice (Fig. S1C). Interestingly, when compared with chow diet-fed WT mice, miR-223 KO mice showed higher fasting blood glucose but lower insulin levels (Fig. 2, B and C). More importantly, HFD-fed KO mice displayed an exacerbated response, with 31% higher fasting glucose levels, whereas insulin levels were 44% lower when compared with WT HFD mice (Fig. 2, B and C). Next, we performed homeostasis model assessment (HOMA) to estimate insulin resistance (HOMA-IR) and steady-state β-cell function (HOMA-%β) as described previously (6). The results showed that KO CD mice exhibited a dramatic increase in insulin resistance and an 81% decrease in β-cell function compared with chow diet-fed WT mice, which were further aggravated upon HFD feeding (Fig. 2, D and E). In accordance with the HOMA index, KO HFD mice showed an overall significantly greater increase in glucose levels during the insulin tolerance test, suggesting more severe systemic insulin resistance (Fig. 2, F and G). Furthermore, we observed higher blood glucose levels during the glucose tolerance test in KO HFD mice, when compared with WT HFD controls (Fig. 2, H and I), indicating blunted β-cell function to secrete insulin in response to glucose injection. Taken together, these data reveal that miR-223 ablation provokes insulin resistance and β-cell dysfunction, which are worsened under HFD-induced conditions.
MiR-223 deficiency leads to maladaptive β-cell proliferation and apoptosis
The impaired insulin secretion from β-cells in KO mice could be because of either smaller β-cell mass or β-cell dysfunction. To assess the in vivo relevance of miR-223 in the adaptive β-cell proliferative response under metabolic stress conditions, we adopted two approaches. First, we fed 5-/6-week-old WT mice and miR-223 KO mice with HFD for 18 weeks and analyzed β-cell proliferation by immunofluorescent staining and flow cytometry. We noticed a significant reduction in Ki-67 signal, a marker of cell proliferation, in miR-223 KO mice under basal conditions (Fig. 3, A-D). More importantly, we detected a ~36% decrease in insulin-positive cells (Fig. 3, A and B) and a ~39% reduction in the number of Ki-67-positive cells (Fig. 3, A and C) in HFD-fed miR-223 KO mice, when compared with age-matched HFD-fed WT mice, suggesting impaired compensatory β-cell proliferation. This was further confirmed by staining with the additional proliferation marker pHH3 and by decreased cyclin D1 mRNA levels (Fig. S2, A-C). Similarly, H&E staining of pancreas tissue showed reduced islet sizes in KO mice (Fig. 3E). In the second model, we injected WT and miR-223 KO mice with STZ to deplete β-cells. These mice became and remained diabetic (blood glucose >250 mg/dl) 10 days after injection. Although no spontaneous reversal of diabetes was observed, there was a noticeable amount of proliferating β-cells coping with hyperglycemia in WT mice. However, such adaptive β-cell
proliferative response was diminished in miR-223 KO STZ mice, as supported by a reduction in the number of β-cells that costained positive for insulin and Ki-67 (29% lower in KO STZ compared with WT STZ mice) (Fig. 3, A and C), leading to an overall smaller insulin-positive islet size (Fig. 3, A and B). In addition, TUNEL staining revealed more β-cell apoptosis in KO mice when compared with WT controls. A similar trend was observed in mice fed with HFD (Fig. S2, D-F). Together, these data provide evidence for the necessity of miR-223 to preserve β-cell proliferation and survival.
MiR-223 directly targets Foxo1 and Sox6 pathways
To seek potential mechanisms underlying the miR-223 KO-mediated decrease in β-cell mass, we first performed bioinformatics analysis using the TargetScan database. Two potential targets, forkhead box protein O1 (Foxo1) and SRY-box 6 (Sox6), were identified. As shown in Fig. 4, A and B, the 3′-UTRs of Foxo1 and Sox6 contain a miR-223-interacting region that is highly conserved among human, mouse, and rat.
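For illustration, the following toy Python sketch shows the kind of seed-match scan that underlies target prediction tools such as TargetScan; the mature miR-223-3p sequence is quoted from memory and should be verified, and the UTR fragment is a hypothetical placeholder.

# Toy seed-match scan (illustrative only, not the TargetScan algorithm itself).
MIR223 = "UGUCAGUUUGUCAAAUACCCCA"  # mature miR-223-3p (assumed sequence)
seed = MIR223[1:8]                  # seed region, nucleotides 2-8: "GUCAGUU"

comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
site = "".join(comp[b] for b in reversed(seed))  # reverse complement: "AACUGAC"

def find_seed_sites(utr: str) -> list[int]:
    """Return 0-based positions of 7mer seed matches in a 3'-UTR (RNA alphabet)."""
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

# Hypothetical UTR fragment containing one match.
print(find_seed_sites("GGGAACUGACAUUU"))  # -> [3]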
Using the luciferase reporter assay, we validated that miR-223 directly recognized both 3′-UTRs of Foxo1 and Sox6 in HEK293 cells (Fig. 4, C and D). Cotransfection of miR-223 strongly inhibited luciferase activity from the reporter constructs harboring 3′-UTR segments of Foxo1 and Sox6, whereas no effect was observed when a control miRNA was cotransfected with either reporter construct (Fig. 4, C and D). Similar results were also obtained when tested using Min6 β-cells (Fig. 4, E and F).
Remarkably, immunofluorescence analysis of pancreatic tissue showed a significantly increased number of insulin-positive cells that also expressed Foxo1. Of more interest, islets of miR-223 KO mice showed higher levels of nuclear localization of Foxo1 (Fig. 5A). Consistently, Western blotting results showed that the expression levels of Foxo1 and Sox6 were increased by 2- and 3.3-fold, respectively, in the pancreas of miR-223 KO mice (Fig. 5B), when compared with WT controls. Considering that 1) the activity and stability of Foxo1 are regulated by posttranslational modifications such as phosphorylation (26); 2) Akt-mediated phosphorylation of Foxo1 results in Foxo1 translocation from the nucleus to the cytoplasm, thereby inactivating its transcriptional activity (26,27); and 3) miR-223 increases phosphorylation of Akt in cardiomyocytes (19), we next hypothesized that deletion of miR-223 might reduce Akt phosphorylation, resulting in lower levels of Foxo1 phosphorylation, and subsequently increase its accumulation in the nucleus. Indeed, Western blot results revealed that phosphorylation levels of Akt at both the Thr-308 and Ser-473 sites were lower in the pancreas of KO mice than in WT controls, leading to a 47% decrease in the phosphorylation levels of Foxo1 protein (Fig. 5B).
Foxo1 is known to inhibit the expression of Pdx1, a master regulator of β-cell growth and function (28), and a recent publication has shown that impaired β-cell function is mediated by the Foxo1-Pdx1-Glut2 pathway (29). On the other hand, it has been reported that Sox6 can control β-cell function and proliferation via repressing Pdx1 and cyclin D1 (30,31). Therefore, we next determined whether miR-223 deficiency affects the expression levels of Pdx1, a common downstream target of Foxo1 and Sox6. The results of Western blotting and qPCR analysis showed that the expression levels of Pdx1 were markedly decreased in the pancreas of miR-223 KO mice compared with WT controls, consistent with the existing literature (29). As a consequence, levels of Glut2 protein were greatly reduced (Fig. 5, C and D). Additionally, protein levels of the cell cycle inhibitor p27, another downstream target of Foxo1, were 2.4-fold higher in the pancreas of KO mice than in WT controls (Fig. 5C). Overall, these results indicate that the repressed proliferative response of β-cells in miR-223 KO mice is mainly mediated by activated Foxo1 and Sox6 signaling cascades.
To further assess the effects of miR-223 KO on β-cell identity, we performed qRT-PCR analyses to determine the expression levels of various β-cell markers. Surprisingly, the results showed a 2.4-fold increase in Ucn3 levels, but no change in the expression of Mafa, whereas levels of Nkx6.1 and neurogenin 3 (Ngn3) were significantly reduced in islets of miR-223 KO mice, compared with WT controls (Fig. 5D). These data suggest bidirectional effects of miR-223 on β-cell identity, with an overall net effect of reduced functional β-cell mass upon miR-223 ablation.
MiR-223 regulates β-cell growth in a cell-autonomous manner
To further clarify the in vivo findings, we utilized Min6 β-cells, which were infected with adenovirus encoding an inhibitory miR-223 sequence (Ad.223off) for 48 h to knock down the expression of miR-223, followed by a series of experiments. qRT-PCR analysis validated that the expression levels of miR-223 were reduced by about 50% in Ad.223off-infected Min6 β-cells, compared with control cells (Fig. 6, A and B). Consistent with our in vivo findings, knockdown of miR-223 in Min6 β-cells significantly suppressed proliferation, as indicated by a 40% decrease in Ki-67-positive cells by FACS analysis, when compared with Ad.GFP-infected control cells (Fig. 6, C and D). In addition, down-regulation of miR-223 increased Min6 cell death, as evidenced by MTS assay and FACS analysis with propidium iodide staining (Fig. 6E and Fig. S3, A and B). Moreover, cell death was exacerbated upon H2O2 and palmitate treatment (Fig. 6E). Similar to the in vivo findings on the molecular change pattern, Western blot results showed that down-regulation of miR-223 resulted in significantly higher levels of Foxo1, Sox6, and p27, whereas phosphorylated Foxo1, Pdx1 and Glut2 protein levels were decreased (Fig. 6, F and G). In addition, the protein levels of phosphorylated Akt were reduced in the Ad.223off group, when compared with Ad.GFP controls (Fig. S3C). Furthermore, Ucn3 expression levels were greatly increased, and the expression levels of Nkx6.1, Ngn3, cyclin D1, and cyclin E1 were severely reduced when miR-223 was down-regulated (Fig. 6H). Overall, these data clarify that cell-autonomous mechanisms regulated by miR-223 contribute to the diminished functional β-cell mass exhibited by KO mice.
Overexpression of miR-223 rescues Min6 β-cells by improving proliferation and viability
Given that deficiency of miR-223 displayed an overall detrimental effect on β-cells, we tested whether up-regulating miR-223 could rescue β-cell growth and function. Using adenovirus encoding a miR-223 construct (Ad.miR-223), we overexpressed miR-223 in Min6 cells for 48 h (Fig. 7, A and B). Cell proliferation was increased by about 2-fold in Ad.miR-223 β-cells, compared with control Ad.GFP cells (Fig. 7, C and D). Importantly, results of the MTS assay revealed that elevation of miR-223 could promote Min6 cell viability under basal conditions.
Such protective effects were also observed when Min6 cells were treated with H2O2 and palmitate (Fig. 7E). In contrast to KO mice, Ad.miR-223 β-cells showed a significant decrease in protein levels of Foxo1, Sox6, and p27, whereas p-Foxo1, Pdx1, Glut2, and phosphorylated Akt were dramatically increased (Fig. 7, F and G and Fig. S3D). qRT-PCR results also showed a reversed expression pattern of β-cell markers: Ucn3 levels were reduced, whereas the levels of Nkx6.1, Ngn3, cyclin D1, and cyclin E1 were increased. Expression of Mafa and NeuroD1 was unaffected when miR-223 was up-regulated (Fig. 7H). Moreover, as the glucose-stimulated insulin secretion assay showed, overexpression of miR-223 could rescue and further augment insulin secretion (Fig. 7I). Taken together, these results demonstrate that up-regulation of miR-223 is able to promote β-cell proliferation, viability, and function.
Discussion
Our study is the first attempt to elucidate the effects of miR-223 on functional β-cell mass. We observed higher miR-223 levels in islets of diabetic mice and humans as well as in Min6
β-cells treated with TNFα and high glucose. We also demonstrate that deletion of miR-223 interrupted glucose homeostasis. Further investigation revealed that miR-223 deficiency reduced the proliferation rate and increased β-cell death under basal conditions, and that the compensatory response of β-cells to HFD-induced insulin resistance was blunted. Insights into the mechanism involved in the regulation of the proliferative response in β-cells have been provided by examining Foxo1 and Sox6, two direct targets of miR-223. Our data showed that ablation of miR-223 caused increased expression and nuclear localization of Foxo1, which suppressed Pdx1 and Glut2 expression but increased p27 protein levels. In addition, increased protein levels of Sox6 also contributed to the down-regulation of Pdx1 and cyclin D1. Furthermore, overexpression of miR-223 in β-cells was able to improve their proliferation and function. Taken together, our data unveil a critical molecular mechanism whereby miR-223 is essential for maintaining functional β-cell mass by controlling the Foxo1 and Sox6 signaling cascades (Fig. 8), providing potential effective targets for diabetes intervention.
It has been proposed that transient β-cell dedifferentiation is required prior to proliferation, which plays a pivotal role in their in vivo dynamics (6,7). A recent study reported that TGFβR3 could reverse β-cell dedifferentiation ex vivo, as evidenced by restoration of Ucn3 (32). In addition, the TGFβ receptor I inhibitor, Alk5 inhibitor II, reversed β-cell dedifferentiation and restored key transcription factors such as Pdx1 and Nkx6.1 (32). However, Alk5 inhibitor II can also inhibit SMAD7-induced β-cell proliferation, which reiterates the intriguing hypothesis that β-cell dedifferentiation precedes proliferation. Indeed, as Alk5 inhibitor II reverses β-cell dedifferentiation under cytokine stress conditions, it increases gene expression of Foxo1, which has been widely reported as a repressor of β-cell proliferation (26,33). In contrast, it has been shown that β-cells with Foxo1 deficiency can undergo dedifferentiation (10). In the present study, we also showed that miR-223 could directly target TGFβR3 (Fig. S4, A and B), which was up-regulated in miR-223 knockdown β-cells (Fig. S4C). As shown in Fig. 7H, gene expression of Ngn3 was enhanced when miR-223 was up-regulated, suggesting that miR-223 might be able to induce β-cell dedifferentiation; further investigation is required to confirm this hypothesis.
Foxo family proteins play critical roles in the regulation of whole-body energy metabolism. Of interest, Foxo1, a key transcription factor in insulin signaling, is highly expressed in pancreatic β-cells, in which Foxo1 is best known for its role in regulating the expression of Pdx1 (26,28). Nonetheless, numerous studies have suggested that the multifunctional roles of Foxo1 may be attributed to its involvement in crosstalk between various intracellular signaling pathways. For example, Foxo1 inhibition is required for the proliferative effects of GLP-1 on β-cells (34). It has also been reported that lipotoxicity-triggered β-cell apoptosis, an ER stressor, could be attenuated by overexpressing dominant-negative Foxo1 (27). Furthermore, Foxo1 inhibits cell proliferation via up-regulating cyclin-dependent kinase inhibitors such as p27 (26). In fact, inactivation of the Foxo1-p27 cascade mediates the proliferative action of liraglutide in β-cells (33). In accordance with the current literature, our study shows that miR-223 is essential for β-cell proliferation via targeting the Foxo1 signaling pathway; inhibition of Foxo1 by overexpression of miR-223 may therefore underlie the proliferative effects observed here. Given that miR-223 directly targets Sox6, a key molecule known to suppress β-cell redifferentiation (8), up-regulation of Sox6 in miR-223 KO β-cells might inhibit the process of their redifferentiation. Nonetheless, further experiments and a proper animal model would be needed to validate this conclusion. There is another limitation in this work that needs to be addressed. In vitro evidence of the beneficial effects of miR-223 was based on experiments involving substantial overexpression in Min6 cells; such findings should be evaluated in a more physiological setting, ideally using a β-cell-specific miR-223 overexpression animal model.
It is important to note that previous studies have shown up-regulation of miR-223 in the blood, heart, and adipose tissue of type 2 diabetic patients (36-38). In line with our present findings, these results suggest that increased miR-223 levels in diabetic conditions may represent a compensatory mechanism to protect against T2D, as miR-223 could augment functional β-cell mass to cope with the increasing demand for insulin under obese/T2D conditions. Furthermore, miR-223 has been reported to protect against inflammatory responses in adipose tissue and insulin resistance (18). Therefore, increased levels of miR-223 may be the compensatory outcome to mitigate the progression of T2D. Indeed, our data presented here demonstrate that down-regulation of miR-223 resulted in the loss of
functional β-cell mass during diabetes. Taken together, the current and prior observations consistently indicate that miR-223 is a critical factor for maintaining functional β-cell mass and adaptation to metabolic stress.
Mouse manipulation
Pre-miR-223 knockout mice (B6.Cg-Ptprca MiR223tm1Fcam/J) and WT mice (C57BL6 background) were purchased from The Jackson Laboratory (Bar Harbor, ME). All animal protocols followed the Guidelines for the Care and Use of Laboratory Animals prepared by the National Institutes of Health and were approved by the University of Cincinnati Animal Care and Use Committee.
Diabetic mouse models
To generate a model of type 1 diabetes, male adult mice (5-6 weeks old) were intraperitoneally injected with STZ (50 mg/kg body weight; S0130-1G, Sigma-Aldrich), or citrate buffer as control, daily for 5 days. At days 7 and 10 after the first injection, blood glucose levels were measured. Mice with glucose above 250 mg/dl were considered diabetic and used for experiments. For the high-fat diet feeding paradigm, starting at the age of 5 or 6 weeks, male mice were given 60% kcal HFD (D12492, Research Diet, New Brunswick, NJ) for 18 weeks; mice fed with normal CD were used as controls. In addition, db/db mice (stock no. 000697) were purchased from The Jackson Laboratory; islets were isolated from 14-week-old male mice for experiments.
Islet isolation
Mouse islets were isolated as described previously (35). Briefly, mice were anesthetized and the pancreas was perfused with collagenase and neutral protease (CIzyme RI, catalog no. 005-1030, VitaCyte, Indianapolis, IN), followed by incubation in a 37 °C water bath for 15 min with gentle swirling every 3 min. Then HBSS (with 0.3% BSA) was added to the pancreas and centrifuged at 290 × g for 1 min. The supernatant was discarded, and cell pellets were resuspended in 10 ml HBSS, filtered through a plastic tea strainer, and rinsed with another 10 ml HBSS. After centrifugation at 330 × g for 2 min, the supernatant was discarded and cell pellets were resuspended in 10 ml cold Histopaque-1100 and 10 ml HBSS. After centrifugation at 900 × g for 18 min, the entire 20 ml of supernatant was collected and passed through an inverted 70-µm filter, which was rinsed into a Petri dish with 10 ml media. Islets were then handpicked for further experiments. Human islets from male and female healthy donors (ages 29-57 years, BMI 21.2-25.9 kg/m2, HbA1c 4.6-5.7%), obese donors (ages 20-62 years, BMI 31-36.9 kg/m2, HbA1c 5-5.8%), and type 2 diabetic donors (ages 58-66 years, BMI 30.3-32.7 kg/m2, HbA1c 6.5-6.7%) were purchased from Prodo Laboratories.
RNA isolation and quantitative RT-PCR (qRT-PCR)
Total RNA was extracted from pancreatic islets or Min6 cells with miRNeasy Mini Kit (217004, Qiagen, Venlo, Netherlands) following the manufacturer's protocol, and qRT-PCR was performed using miScript PCR Starter Kit (218193, Qiagen). U6 and GAPDH were used as internal control. The primer sequences are shown in Table S1.
Blood glucose and insulin measurement
Mice were fasted overnight, and blood glucose was measured through tail tip bleeding with the use of an Accu-Chek Smartview Nano meter (Roche Diabetes Care). For insulin measurement, whole blood was drawn from the submandibular vein, and plasma was separated by centrifugation at 12,000 rpm at 4 °C for 15 min in microvette tubes coated with EDTA (KMIC-EDTA, Kent Scientific, Torrington, CT). Plasma insulin levels were measured by mouse ELISA kit (cat. no. 90080, Crystal Chem, Elk Grove Village, IL) according to the manufacturer's instructions. The HOMA index was used to calculate insulin resistance (HOMA-IR) and β-cell function (HOMA-%β). The following formulas were used: HOMA-IR = (fasting glucose × fasting insulin)/22.5; HOMA-%β = (20 × fasting insulin)/(fasting glucose − 3.5). Units of glucose and insulin are mM and milliunits/liter, respectively.
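For reference, a minimal Python sketch of the HOMA formulas quoted above (glucose in mM, insulin in milliunits/liter); the input values are hypothetical.

def homa_ir(glucose_mm: float, insulin_mu_l: float) -> float:
    # HOMA-IR = (fasting glucose x fasting insulin) / 22.5
    return glucose_mm * insulin_mu_l / 22.5

def homa_beta(glucose_mm: float, insulin_mu_l: float) -> float:
    # HOMA-%beta = (20 x fasting insulin) / (fasting glucose - 3.5)
    return 20.0 * insulin_mu_l / (glucose_mm - 3.5)

# Hypothetical fasting values: 8.0 mM glucose, 12.0 mU/liter insulin.
print(homa_ir(8.0, 12.0), homa_beta(8.0, 12.0))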
Intraperitoneal glucose tolerance testing (IPGTT) and insulin tolerance testing (IPITT)
Mice were fasted overnight (for IPGTT) or for 6 h (for IPITT) prior to injection. Glucose (2 g/kg body weight) or insulin (0.75 units/kg body weight) was injected intraperitoneally. Glucose levels were measured in blood collected from tail tip prior to and at 15, 30, 60, 90, and 120 min after injection.
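Glucose excursions from such tests are commonly summarized as an area under the curve; below is a minimal trapezoidal-rule sketch in Python over the sampling times listed above. The glucose readings are hypothetical, and this summary statistic is an illustration rather than part of the authors' stated analysis.

import numpy as np

# Sampling times (min) used in the IPGTT/IPITT protocol above.
t = np.array([0, 15, 30, 60, 90, 120])
# Hypothetical blood glucose readings (mg/dl) for one mouse.
glucose = np.array([90, 250, 310, 280, 210, 160])

auc = np.trapz(glucose, t)  # trapezoidal-rule area under the curve
print(f"AUC = {auc:.0f} mg/dl x min")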
Histological analysis
Immunohistochemistry staining was performed as described previously (19). For measuring cell apoptosis, TUNEL staining was performed using the DeadEnd Fluorometric TUNEL System (catalog no. G3250, Promega) following the manufacturer's protocol.
Images of insulin-positive cells were taken from pancreatic sections using a Zeiss LSM710 LIVE Duo confocal microscope (20× objective, Live Microscopy Core, University of Cincinnati). Zen/ZenLight software or ImageJ was used to quantify all insulin-positive cells or cells costained with insulin and Ki-67, pHH3, or Foxo1. Images of sections from a minimum of three mice per group, with three to five pancreas sections per mouse, were captured.
Luciferase reporter assay for validation of miR-223 targets
Luciferase reporter experiments were performed as described previously (21). Briefly, 3′-UTR segments of Foxo1 and Sox6 and their respective mutants were amplified and validated prior to transfection into HEK293 or Min6 cells. 100 nM mimic miR-223 or mimic miR control (Thermo Fisher Scientific) was added to each well in 12-well plates. Cell lysates were prepared 48 h later, and luciferase activity was measured and expressed as relative light units. All transfections were performed in triplicate in three independent experiments.
Western blotting
Western blotting was performed as described previously (19). The antibodies used are listed in Table S2.
MTS assay
Min6 β-cells were plated on 96-well plates at a seeding density of 10^4 cells per well and transduced with adenovirus for 48 h, followed by 24 h palmitate (0.5 mM) or 1 h H2O2 (200 mM) treatment. Cell viability was assessed using the MTS incorporation assay kit (Promega) according to the manufacturer's protocol.
Flow cytometry
Mice were anesthetized and pancreases were perfused with collagenase and neutral protease, followed by incubation at 37 °C for 30 min with agitation set to 700. Digestion was stopped by adding an excess volume of cold HBSS, and digested tissue was filtered through a 70-µm cell strainer and washed with HBSS. Cells were then stained with anti-insulin (catalog no. FAB1544P, R&D Systems, Minneapolis, MN) and anti-Ki-67 (catalog no. 61-5698-82, Invitrogen) antibodies.
For in vitro experiments, after adenovirus infection, cells were fixed with cold 75% ethanol and stored at −20 °C for at least 2 h. Cell proliferation was determined by staining with Alexa Fluor 647 mouse anti-Ki-67 antibody (catalog no. 558615, BD Biosciences). For cell death analysis, cells were stained with propidium iodide staining solution (00-6990, eBioscience, Waltham, MA). Flow cytometry was performed using an LSRII Analyzer (SHC Flow Cytometry Core, Cincinnati) and analyzed with FCS Express software.
Statistical analysis
Animals were randomly assigned to groups, and sample size estimates were not used. All data were analyzed by two-tailed Student's t test and reported as mean ± S.E. or S.D. as specified. A p < 0.05 was considered statistically significant. | 6,553.4 | 2019-05-22T00:00:00.000 | [
"Biology",
"Medicine"
] |
MHV Gluon Scattering Amplitudes from Celestial Current Algebras
We show that the Mellin transform of an $n$-point tree level MHV gluon scattering amplitude, also known as the celestial amplitude in pure Yang-Mills theory, satisfies a system of $(n-2)$ linear first order partial differential equations corresponding to $(n-2)$ positive helicity gluons. Although these equations closely resemble Knizhnik-Zamolodchikov equations for $SU(N)$ current algebra, there is also an additional "correction" term coming from the subleading soft gluon current algebra. These equations can be used to compute the leading term in the gluon-gluon OPE on the celestial sphere. Similar equations can also be written down for the momentum space tree level MHV scattering amplitudes. We also propose a way to deal with the non-closure of subleading current algebra generators under commutation. This is then used to compute some subleading terms in the mixed helicity gluon OPE, and our results match with those obtained from an explicit calculation using the Mellin MHV amplitude.
Appendix A: A brief review of celestial or Mellin amplitudes for massless particles. A.1: Comments on notation in the paper.
Introduction
It is generally believed that any consistent theory of quantum gravity on a space-time with an asymptotic boundary should have a holographic dual description in terms of a theory living on the boundary at infinity. The dual theory can compute all the observables which make sense in the bulk theory of quantum gravity. In the case of asymptotically flat space-time the observables are the S-matrix elements, and it has been proposed that the dual theory is a conformal field theory [11-20, 23-25, 30-50], dubbed "celestial conformal field theory" (CCFT), which lives on the celestial sphere. The Lorentz group acts on the celestial sphere as the group of global conformal transformations and, when the bulk space-time is four dimensional, one can show, starting from the subleading soft graviton theorem [10], that CCFT should have a stress tensor [12-14] which generates local conformal transformations on the two dimensional celestial sphere. So CCFT should have the full Virasoro symmetry, just like a more conventional two dimensional CFT. But this is not the end of the story. Since the asymptotic symmetry group in four dimensions contains supertranslations [26,27], CCFT should also have supertranslation symmetry [2,5,7,8,28,29]. On top of that, there are various other infinite dimensional current algebra symmetries, coming from soft factorisation theorems, under which CCFT should be invariant [1,3,5,19-22,60-69]. Now, the CCFT is supposed to compute bulk scattering amplitudes. So a natural question arises as to how the bulk scattering amplitudes in four dimensions are constrained by the infinite dimensional symmetries of the dual CCFT. For recent developments in this direction please see [3-9]. The study of CCFT is facilitated by going to Mellin space. In Mellin space, scattering amplitudes can be written as the correlation functions of conformal primaries. Conformal primaries are Mellin transforms of Fock space creation (annihilation) operators which create (annihilate) asymptotic free particle states in a scattering event. They are called conformal primaries because, under Lorentz transformations, which act on the celestial sphere as global conformal transformations, they transform like primary operators in a conformal field theory. Let us now briefly describe the main results of this paper.
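For reference, a conformal primary is obtained from a momentum-space operator by the standard Mellin transform (normalization conventions vary between references):

$$ O^{a}_{\Delta,\sigma}(z,\bar z) \;=\; \int_{0}^{\infty} d\omega\, \omega^{\Delta-1}\, A^{a}(\epsilon\,\omega, z, \bar z, \sigma) $$

Under $SL(2,\mathbb{C})$ Lorentz transformations this operator transforms as a two dimensional conformal primary of weight $\Delta$, which is the statement made in the paragraph above.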
In this paper we focus on the Maximal Helicity Violating (MHV) gluon scattering amplitudes in pure Yang-Mills (YM) theory with gauge group G = SU(N). These are amplitudes of the form (− − + + + · · ·), where two gluons have negative helicity and the rest have positive helicity. Their explicit expressions are given by the famous Parke-Taylor formula [77] (quoted below). This is the first non-trivial helicity configuration because, at tree level in pure YM theory, amplitudes of the form (− + + + · · ·) with only one negative helicity gluon, and (+ + + · · ·) with all positive helicity gluons, vanish. As a result, at tree level there is no negative helicity soft gluon in the MHV sector, and the set of MHV amplitudes is closed under taking collinear limits. This allows us to define, just like in the case of gravity [5], an autonomous MHV sector of the CCFT which computes the MHV gluon scattering amplitudes. The gluon MHV sector is characterised by the fact that it is governed by the G current algebra [1,60-65] at level zero, which arises from the leading positive helicity soft gluon theorem. There is also the subleading positive helicity soft gluon theorem [3,66-69] which gives rise to another infinite set of currents that play an equally important role in the theory. The significance of the autonomous MHV sectors for gluons and gravitons [5] is that they behave somewhat like minimal models of two dimensional conformal field theories and hence are exactly solvable.
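For reference, the color-ordered Parke-Taylor expression for the tree-level MHV amplitude with negative helicity gluons $i$ and $j$ reads, in spinor-helicity notation (up to the coupling and the momentum-conserving delta function):

$$ A_n(1^+, \ldots, i^-, \ldots, j^-, \ldots, n^+) \;=\; \frac{\langle i\, j\rangle^4}{\langle 1\,2\rangle \langle 2\,3\rangle \cdots \langle n\,1\rangle} $$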
Let us now consider an n-point MHV amplitude with two negative helicity gluons and (n − 2) positive helicity gluons. The main result of this paper is a system of (n − 2) coupled first order linear partial differential equations satisfied by the tree-level MHV amplitudes. In Mellin space they are given by equation (1.1), where i ∈ (1, 2, · · · , n − 2) runs over the (n − 2) positive helicity gluons in the MHV amplitude. In that equation, $O^a_{h,\bar h}(z,\bar z)$ is a gluon conformal primary with scaling dimension $(h,\bar h)$ and a is a Lie algebra index, $T^a$ is the Lie algebra generator in the adjoint representation, and $C_A$ is the quadratic Casimir. We have also introduced the symbol $\epsilon_j$, which is ±1 depending on whether $O^{a_j}_{h_j,\bar h_j}(z_j,\bar z_j)$ is outgoing or incoming. The operator $P^j_{-1}$ acts on a gluon conformal primary $O^{a_k}_{h_k,\bar h_k}(z_k,\bar z_k)$ as specified in (1.2), and similarly the global time translation generator $P_{-1,-1}(i)$ acts on the i-th positive helicity gluon conformal primary according to (1.3). Equation (1.1) can be easily transformed to momentum space, and for the momentum space MHV scattering amplitude it takes the form (1.4), written in terms of the momentum space amplitude of the operators $A^{a_k}(\epsilon_k \omega_k, z_k, \bar z_k, \sigma_k)$. Here the null momentum of an on-shell gluon has been parametrised in terms of the energy ω and the celestial coordinates $(z,\bar z)$ (one common convention is quoted below), with ǫ = ±1 for an outgoing (incoming) gluon. We have also denoted the momentum space creation (annihilation) operators by $A^a(\epsilon\omega, z, \bar z, \sigma)$, where σ is the helicity of the gluon. Note that the MHV amplitude in (1.4) includes the overall momentum conserving delta function. Now (1.1) and (1.4) are examples of (holographic) constraints on (hard) scattering amplitudes coming from the infinite dimensional symmetries of CCFT. They are obtained in the same way as the differential equations for MHV graviton scattering amplitudes were obtained in [5]. Presumably (1.1) or (1.4), together with the Ward identities coming from Poincare invariance, can be solved to obtain the MHV gluon scattering amplitudes. Along this line we make a preliminary check in which we determine the leading gluon-gluon OPE coefficients from the differential equations; our results match with those of [3,53].
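One common parametrization of the null momentum in terms of the energy ω and celestial coordinates, consistent with the notation above (the overall normalization is convention dependent):

$$ p^{\mu} \;=\; \epsilon\,\omega\left(1 + z\bar z,\; z + \bar z,\; -i(z - \bar z),\; 1 - z\bar z\right) $$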
The origin of equations (1.1) and (1.4) will be discussed in great detail in section (6) but let us mention a few things before we close this section.
1. The first two terms of (1.1) closely resemble the Knizhnik-Zamolodchikov (KZ) equation [72] satisfied by the correlation functions of current algebra primaries in the WZW model. The only difference is the prefactor $h_i$ in the second term in (1.1), and we also have to set the level k of the current algebra to 0.
2. In WZW models the scaling dimensions of the primaries are determined in terms of the level of the current algebra and the representation of the zero mode algebra under which the primary transforms. But this is not the case here: (1.1) holds for any value of the scaling dimension $(h,\bar h)$ of the gluon primary. This is consistent with the fact that in CCFT the scaling dimension $\Delta = h + \bar h$ of a (hard) primary is a continuously varying (complex) number and therefore should not be constrained in any way.
3. The third term in (1.1) is an additional contribution coming from the (local) subleading soft gluon symmetry. This has no analog in the usual KZ equation and is most likely related to the fact that there is no Sugawara stress tensor in CCFT. This is also very different from pure gravity where the corresponding differential equations for MHV amplitudes do not have any contribution from the subsubleading soft graviton theorem.
An outline of this paper is as follows. We begin in Section (2) by discussing the action of Poincare generators on gluon primary operators on the celestial sphere. The definition of a conformal primary operator is given explicitly in Section (2.1). In Section (3) we consider the leading conformal soft gluon theorem which is equivalent to the Ward identity for a level zero Kac-Moody algebra on the celestial sphere. We specify here the commutators involving modes of the Kac-Moody current and the Poincare generators. Using the current algebra Ward identity we also relate here the celestial correlators involving Kac-Moody descendants to correlation functions of gluon primary operators. Section (3.1) contains the definition of a primary operator under the current algebra. In Section (4) we consider a set of currents (J a , K a ) on the celestial sphere which arise from the subleading conformal soft gluon theorem. In Section (4.1) we derive the OPE between a subleading soft gluon and a hard gluon primary which yields an important constraint on the OPE of hard gluons in the subleading conformal soft limit. From this OPE we also extract the definition of descendants created by modes of (J a , K a ) and use the Ward identity, corresponding to the subleading soft gluon theorem, to obtain correlation functions with insertions of these descendants. In Section (??) the definition of a primary under the subleading soft symmetry algebra is provided. Section (4.2) lists various useful commutation relations. In Section (5), we discuss the interpretation of the commutation relations between modes of the subleading soft symmetry generators in the light of the fact that these generators do not close to form a Lie algebra in the conventional sense. In Sections (6) and (7) we derive the differential equations (1.1) and (1.4) for tree-level MHV gluon amplitudes in Mellin space and Fock space respectively. In Section (8) we use the differential equation (1.1) to determine the structure of the leading OPE for gluon primaries in Yang-Mills theory. In particular, Section (8.1) deals with the case where both gluons in the OPE are either incoming or outgoing and in Section (8.2) we consider the case where one of the gluons in the OPE is outgoing and the other is incoming. In Section (9), we illustrate how some descendant OPE coefficients in the OPE between gluons of opposite helicities can be determined using the underlying infinite dimensional symmetry algebras. We end the paper with a set of Appendices. Appendix A contains a brief review of celestial amplitudes and comments on some of the notation used in this paper. In Appendix B we present a detailed calculation of the first subleading correction to the leading celestial OPE of positive helicity gluons using the Mellin transform of the 5-point tree level MHV gluon amplitude in Yang-Mills theory. In Appendix C we use the 4-point MHV Mellin amplitude to extract the first set of subleading terms in the OPE between opposite helicity gluon primaries.
Poincare invariance
Since the scattering amplitudes are Poincare invariant, generators of the Poincare group act on the conformal primaries which live on the celestial sphere. The Lorentz group SL(2, C) acts on the celestial sphere as the global conformal group, and we denote its generators by $L_0, L_{\pm 1}, \bar L_0, \bar L_{\pm 1}$; their commutation relations, and their action on a gluon conformal primary $O^a_{h,\bar h}(z,\bar z)$, take the standard global conformal form (see the block below). The four global space-time translation generators will be denoted by $P_{m,n}$ with m, n = 0, ±1; they are mutually commuting, and their commutation relations with the Lorentz generators take the standard form. The translation generators $P_{m,n}$ act on conformal primaries with ǫ = ±1 for an outgoing (incoming) gluon.
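For reference, the standard global conformal relations alluded to above read (this is textbook material, not specific to this paper):

$$ [L_m, L_n] = (m - n)\, L_{m+n}, \qquad [\bar L_m, \bar L_n] = (m - n)\, \bar L_{m+n}, \qquad [L_m, \bar L_n] = 0, \qquad m, n = 0, \pm 1 $$

$$ [L_{-1}, O^a_{h,\bar h}(z,\bar z)] = \partial_z O^a_{h,\bar h}, \qquad [L_0, O^a_{h,\bar h}(z,\bar z)] = (z\partial_z + h)\, O^a_{h,\bar h}, \qquad [L_1, O^a_{h,\bar h}(z,\bar z)] = (z^2\partial_z + 2hz)\, O^a_{h,\bar h} $$

with the analogous antiholomorphic relations for $\bar L_{0,\pm 1}$.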
Poincare primary
It follows from the definition of a conformal primary operator that the standard primary-state relations hold; a Poincare primary [8] must in addition satisfy further conditions, which follow from (2.6).
Leading soft gluon
The leading conformally soft [51-55] gluon operator $j^a(z)$ with positive helicity is defined as $j^a(z) = \lim_{\Delta \to 1} (\Delta - 1)\, O^a_{\Delta,+}(z,\bar z)$, where $O^a_{\Delta,+}(z,\bar z)$ is a positive helicity gluon primary with scaling dimension ∆. The soft operator $j^a(z)$ is a Kac-Moody current [1,60-65] whose correlation function with a collection of gluon primaries is given by the leading soft gluon theorem and takes the standard current-algebra form in Mellin space, with the gluons transforming in the adjoint representation. Now let us consider the modes $j^a_n$ of the above current. They satisfy a level-zero Kac-Moody algebra and act on a gluon primary in the adjoint representation (see the block below). We note that the level of the current algebra here is zero, which will be further justified by the form of the gluon-gluon OPE [9] in the MHV sector.
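The mode algebra and Ward identity referred to above take the standard level-zero Kac-Moody form; schematically (overall signs and normalizations are convention dependent and should be checked against the original references):

$$ [j^a_m, j^b_n] = i f^{abc}\, j^c_{m+n} \quad (\text{level } k = 0), \qquad \Big\langle j^a(z) \prod_{k=1}^{n} O^{a_k}_{h_k,\bar h_k}(z_k,\bar z_k) \Big\rangle = \sum_{k=1}^{n} \frac{-i f^{a a_k b}}{z - z_k}\, \Big\langle \cdots O^{b}_{h_k,\bar h_k}(z_k,\bar z_k) \cdots \Big\rangle $$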
The commutation relations with the (Lorentz) global conformal generators are given by $[L_m, j^a_n] = -n\, j^a_{m+n}$ and $[\bar L_m, j^a_n] = 0$, and the generators $\{P_{m,n},\ m, n = 0, \pm 1\}$ of global space-time translations commute with the current modes: $[P_{m,n}, j^a_p] = 0$ (3.7). For our purposes an important role is played by the correlation functions of the descendants $j^a_{-p}\, O^b_{h,\bar h}(z,\bar z)$, $p \geq 1$, with a collection of gluon primaries. These follow from the current algebra Ward identity and are expressed through the action of a differential operator $J^a_{-p}(z)$ on the correlation function of the primaries. (Conformally soft graviton theorems have been studied in [51,56,57].)
Leading current algebra primary
A current algebra primary $O^a_{h,\bar h}(z,\bar z)$ is defined by the standard conditions $j^a_n\, O^b_{h,\bar h}(0) = 0,\ \forall n \geq 1$ (3.10), together with the condition (3.11) that the zero modes $j^a_0$ act on it in the adjoint representation.
Subleading soft gluon
The subleading conformally soft [51-55] gluon operator $S^{+a}_1(z,\bar z)$ with positive helicity is defined as $S^{+a}_1(z,\bar z) = \lim_{\Delta \to 0} \Delta\, O^a_{\Delta,+}(z,\bar z)$, where $O^a_{\Delta,+}(z,\bar z)$ is a positive helicity gluon primary with scaling dimension ∆. The correlation function of $S^{+a}_1(z,\bar z)$ with a collection of primary gluon operators is given by the subleading soft gluon theorem in Mellin space [3]. Here $\epsilon_k = \pm 1$ depending on whether the gluon primary $O^{a_k}_{h_k,\bar h_k}(z_k,\bar z_k)$ is outgoing or incoming; for simplicity of notation we keep the additional label ǫ implicit when we write the correlation functions of the gluons. Now, following [3,5], we expand the R.H.S. of (4.2) in powers of the coordinate $\bar z$ of the subleading conformally soft gluon operator $S^{+a}_1(z,\bar z)$ and define two currents $J^a(z)$ and $K^a(z)$ through their Ward identities; this is equivalent to expanding the soft operator $S^{+a}_1(z,\bar z)$ in $\bar z$ (see the block below). We can now define the modes of the currents, $J^a_n$ and $K^a_n$, in the standard way; their actions on a gluon primary are given by the corresponding commutation relations. Although the generators $(J^a_n, K^a_n)$ do not form a Lie algebra under commutation, we will see in section (5) that when the commutators $[J^a_m, J^b_n]$, $[J^a_m, K^b_n]$ and $[K^a_m, K^b_n]$ act on a gluon conformal primary or its descendants, the results are given by simple expressions which look almost like closure. This is crucial for our purpose in this paper.
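The $\bar z$ expansion referred to above is, schematically (the relative sign is convention dependent):

$$ S^{+a}_1(z,\bar z) \;=\; J^a(z) \;+\; \bar z\, K^a(z) $$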
4.1 OPE between $S^{+a}_1(z,\bar z)$ and a hard gluon conformal primary Suppose we want to compute the OPE between $S^{+a}_1(z,\bar z)$ and the gluon primary $O^{a_1}_{h_1,\bar h_1}(z_1,\bar z_1)$. For this we have to expand the R.H.S. of (4.2) in powers of $(z - z_1)$ and $(\bar z - \bar z_1)$, which yields the expansion (4.10) as $(z,\bar z) \to (z_1,\bar z_1)$, with the differential operators $J^a_{-p}(1)$ and $K^a_{-p}(1)$ defined in (4.11) and (4.12). From (4.10) the OPE between $S^{+a}_1(z,\bar z)$ and $O^{a_1}$ can be extracted as (4.13), where the correlation functions with the insertion of the descendants are given by (4.11) and (4.12), respectively. Now (4.13) acts as a boundary condition on the OPE of two hard gluon primaries, one of which has positive helicity: using the definition (4.1) of the subleading conformally soft gluon, it leads to the constraint (4.14). As we will see, (4.14) is a nontrivial constraint on the OPE of two hard gluons. Inside correlation functions, (4.14) means that we first take the OPE limit $(z,\bar z) \to (z_1,\bar z_1)$ and then take the subleading conformal soft limit. In that limit we should always get back (4.10).
We also note that the operator product expansion (4.13) leads to a set of conditions satisfied by any gluon primary $O^a_{h,\bar h}(z,\bar z)$.
Commutation relations involving subleading generators
In this section we collect some useful commutation relations. These are the "classical" commutators which can be easily obtained from the action of the generators on a primary operator, with ǫ = ±1 depending on whether the gluon is outgoing or incoming. Using the actions of the Poincare and current algebra generators on a gluon primary gathered in the preceding sections, we arrive in particular at the following commutation relations between generators:

$$ [\bar L_1, J^a_n] = 0, \qquad [\bar L_0, J^a_n] = -\tfrac{1}{2}\, J^a_n, \qquad [\bar L_{-1}, J^a_n] = -K^a_n \qquad (4.24) $$

$$ [\bar L_1, K^a_n] = J^a_n, \qquad [\bar L_0, K^a_n] = \tfrac{1}{2}\, K^a_n, \qquad [\bar L_{-1}, K^a_n] = 0 \qquad (4.26) $$
How to interpret the commutators of subleading symmetry generators
It is well known [66] that the subleading symmetry generators $J^a_n$ and $K^a_n$ do not close under commutation, so they are not the generators of a Lie algebra symmetry in the ordinary sense. To see this we note that the scaling dimensions of $J^a_n$ and $K^a_n$ are given by $(-n - 1/2, -1/2)$ and $(-n - 1/2, 1/2)$, respectively. Therefore the antiholomorphic scaling dimension of a commutator amongst these generators must be an integer. But there is no generator with integer antiholomorphic scaling dimension that can appear here, and so the generators $(J^a_m, K^a_m)$ cannot form a Lie algebra. Now, at least for the purposes of this paper, what we really need to know is how the commutator of two subleading generators acts on a gluon primary or its descendants. For example, in the OPE of two outgoing gluon primaries of opposite helicities, in order to calculate the OPE coefficient multiplying a given descendant, one has to know the structure of the corresponding term in the OPE, and we need simplified expressions for the commutators acting there. To do this we start by computing the relevant commutators. Let us focus on the first one. Using (4.21) and the Jacobi identity we arrive at (5.5); using (4.21) again, together with $\epsilon^2 = 1$, and taking the limit $(z,\bar z) \to (0,0)$ in (5.5), we get (5.6). Since the mode $J^a_n$ of the current $J^a$ is defined with respect to the point (0,0), (5.6) can be interpreted as the relation (5.7) between descendants. Applying the same argument, we get the other two relations, (5.8) and (5.9). These are the relations that will be used in the following sections to obtain recursion relations for subleading OPE coefficients and to define null states or primary descendants.
We can see that the L.H.S. of (5.7), (5.8) and (5.9) are linear in the subleading symmetry generators. But they also depend on the scaling dimension of the gluon primary on which the commutators act, and the gluon primary appearing on the R.H.S. has a shifted dimension compared to the one appearing on the L.H.S. This is a signature of the fact that the generators do not form a Lie algebra. This is perhaps the closest one can come to forming a closed algebra out of $J^a_n$ and $K^a_n$ when they act on states in a Hilbert space and, as we will see in section (9), this is sufficient for finding (subleading) OPE coefficients.
Before we end this section we would like to mention the result when the commutator acts on a level-1 descendant; the corresponding relations, for example (5.10), can be obtained by starting from the appropriate commutator and repeating the steps above. These relations have obvious generalisations to a general descendant, given in (5.20), (5.21) and (5.22). We do not need these more general relations in this paper, but it will be interesting to check their consistency with explicit calculations performed using scattering amplitudes. For example, a good check will be to compute subleading OPE coefficients in the gluon-gluon OPE directly from the (MHV) scattering amplitude and compare them with the results from the recursion relations obtained using (5.20), (5.21) and (5.22). Our derivation of these relations has been somewhat hand-waving; we leave a more rigorous derivation to future work.
Differential equation for MHV gluon amplitudes in Mellin space
In this section we will derive a differential equation for Mellin transformed tree level n-point MHV gluon amplitudes in Yang-Mills theory.
Consider the celestial OPE between two positive helicity outgoing gluon primaries in Yang-Mills theory. We will denote these operators below as $O^a_{\Delta,+}(z,\bar z)$ and $O^{a_1}_{\Delta_1,+}(z_1,\bar z_1)$, where the subscript (+) denotes that both operators have positive helicity. In order to arrive at the proposed differential equation, we will be interested in the contribution from descendants which constitute the first subleading correction to the leading singular term in this OPE. This was recently obtained in [9] and is given by (6.1), where the operator appearing on its R.H.S. is a positive helicity (outgoing) gluon primary. The leading primary OPE coefficient is given by the Euler beta function, $B(\Delta - 1, \Delta_1 - 1)$. The dots in (6.1) denote contributions from descendants at further subleading orders in $(z - z_1)$, $(\bar z - \bar z_1)$. In [9] the above OPE was extracted from the Mellin transform of the tree-level 4-point MHV gluon amplitude. We refer the reader to Section B of the Appendix of this paper for a derivation of (6.1) from the Mellin transform of the 5-point MHV gluon amplitude. Now let us take the subleading conformal soft limit ∆ → 0 in the above OPE; we then obtain (6.2). According to our discussion in Section (4.1), in the subleading conformal soft limit the OPE should obey the general constraint given by equation (4.14). Therefore as ∆ → 0 in (6.1) we should get (6.3). Comparing (6.2) and (6.3), we see that the leading singular terms in the OPE match, but the subleading terms appear to be different. Therefore, in order for the OPE in (6.1) to be consistent with the subleading soft gluon theorem, we must have the relation (6.5). Multiplying (6.5) by $if^{aa_1b}$ and using the relation (6.6), where $C_A$ is the quadratic Casimir of the adjoint representation, we can express (6.5) as (6.7). Further shifting $\Delta_1 \to \Delta_1 + 1$ in (6.7) we get (6.8). Up to this point we have been considering the gluon primary in (6.7) to be outgoing, but this can be easily generalised to the case of an incoming positive helicity gluon: in that case we simply get an additional minus sign in front of the third term in (6.8), giving (6.9). Therefore in general we have the condition (6.10), where $P_{-1,-1}$ is the global time translation generator which acts on a gluon primary with ǫ = ±1 for an outgoing (incoming) gluon. Note that in obtaining (6.10) we have used the definition (3.11) of a current algebra primary. Now consider the linear combination of descendants denoted as $\Psi^a(z,\bar z)$ in (6.10). Applying the definitions of a Poincare and current algebra primary from Sections (2.1) and (3.1) and using the commutation relations given in Section (4.2), it can be easily checked that

$$ L_1 \Psi^a(z,\bar z) = \bar L_1 \Psi^a(z,\bar z) = P_{0,-1} \Psi^a(z,\bar z) = P_{-1,0} \Psi^a(z,\bar z) = 0 \qquad (6.12) $$

Equations (6.12) and (6.13) together imply that $\Psi^a(z,\bar z)$ is in fact a primary operator with respect to the Poincare group and the current algebra associated to the leading soft gluon theorem. In fact, one can easily check that the null state $\Psi^a$ is uniquely determined by the primary-state conditions under the Poincare group and the leading soft gluon SU(N) current algebra. Thus $\Psi^a(z,\bar z)$ is a null field and we can consistently set it to zero within celestial MHV gluon amplitudes. Now let us insert (6.10) into Mellin transformed tree-level MHV gluon amplitudes. Below we will denote the gluon primaries as $O^{a_k}_{h_k,\bar h_k}(z_k,\bar z_k)$. The negative helicity gluons in the MHV amplitude will be labelled by (n − 1) and n.
Then for every positive helicity gluon i ∈ {1, 2, …, n − 2} we get a decoupling relation, equation (6.14), where we have used 2h_i = Δ_i + 1 for a positive helicity gluon primary. The index (i) accompanying L_{−1}, j^a_0, j^a_{−1}, J^a_{−1} and P_{−1,−1} in (6.14) denotes that these modes act on the i-th positive helicity gluon. Then, using the representation of L_{−1}, j^a_{−1}, J^a_{−1} in terms of differential operators, we obtain from (6.14), for i ∈ {1, 2, …, n − 2}, the differential equation (6.15). We have thus obtained (n − 2) linear first order partial differential equations (PDEs) for tree-level n-point gluon MHV amplitudes in Mellin space. The (n − 2) PDEs correspond to the (n − 2) positive helicity gluons in the n-point MHV amplitude. Now let us note that the structure of (6.15) is similar to the Knizhnik-Zamolodchikov (KZ) equation [72] obeyed by correlation functions of primary operators in WZW theory. In standard conventions the KZ equation reads

(k + C_A/2) ∂_{z_i} ⟨Φ^{a_1}(z_1) ⋯ Φ^{a_n}(z_n)⟩ = Σ_{j≠i} (t^a_i t^a_j)/(z_i − z_j) ⟨Φ^{a_1}(z_1) ⋯ Φ^{a_n}(z_n)⟩,    (6.16)

where the Φ^{a_i}(z_i) are the primary operators, t^a_i denotes the action of the Lie algebra generator t^a on the i-th primary, and k is the level of the current algebra. In the context of our paper, we will take these operators to transform in the adjoint representation of the zero mode algebra and so the superscript a_i is a Lie algebra index. Let us now compare the differential equation (6.15) and the KZ equation (6.16).
First of all, for an n-point correlation function of primaries in WZW theory there are n differential equations, because every primary in WZW theory is degenerate. This should be contrasted with the case of MHV amplitudes, where an n-point MHV amplitude satisfies (n − 2) differential equations corresponding to the (n − 2) positive helicity gluons. This is related to the fact that within the MHV sector, governed by the leading and subleading current algebras coming from positive helicity soft gluons, negative helicity gluons have no null states. This is a major difference.
Secondly, note that the coefficient of the ∂_{z_i} term in (6.15) is C_A/2, where C_A is the quadratic Casimir of the adjoint representation. In the KZ equation (6.16), this coefficient is given by (k + C_A/2). At a superficial level this is consistent with the fact that in our case the SU(N) current algebra has level k = 0. Now let us consider the second term within the square brackets in (6.15). This arises from the j^a_{−1}(i) j^a_0(i) piece in (6.14). The analogous term is also present in the KZ equation (6.16), but the coefficient of this term in our case depends on the holomorphic conformal weight h_i of the primary operator O^{a_i}_{h_i, h̄_i}(z_i, z̄_i) whose null state gives rise to (6.15). This is an important difference which plays a crucial role, for the following reason.
In WZW theory, the KZ equation follows from the existence of a Sugawara stress tensor. From the expression of the Sugawara stress tensor it also follows that the holomorphic conformal weight of a current algebra primary is given by [72,73]

h_r = C_r / (2k + C_A),    (6.17)

where C_r is the quadratic Casimir of the representation r under which the primary operator transforms. Here we are considering r to be the adjoint representation and so C_r = C_A.
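As a quick arithmetic aside (our own observation, following directly from (6.17)): at the level k = 0 relevant for the celestial current algebra, the Sugawara weight of an adjoint primary would be fixed to

\[
h_{\mathrm{adj}}\big|_{k=0} \;=\; \frac{C_A}{2\cdot 0 + C_A} \;=\; 1,
\]

whereas the gluon primaries entering (6.15) carry arbitrary scaling dimensions. This makes concrete why a standard Sugawara-type null state cannot be responsible for (6.15), and why the h_i-dependent coefficient discussed next is needed.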
The null state relation which gives rise to the usual KZ equation holds only when the primary operator, with respect to which the null state is defined, has (holomorphic) weight given by (6.17). But (6.14), and consequently (6.15), hold for arbitrary values of the scaling dimensions of the positive helicity gluon primaries in the MHV amplitude.⁶ The coefficient h_i in front of the second term in (6.14), (6.15) plays a crucial role in ensuring this.
Finally let us discuss the third term within the square brackets in (6.15). This arises due to the subleading soft gluon symmetry. There is no counterpart of this term in the KZ equation (6.16). Thus compared to the usual KZ equation, this can be regarded as a correction term. Another consequence of this term is that unlike the KZ equation, the differential operators acting on the celestial MHV amplitude are not purely holomorphic.
Recently in [70] a Sugawara construction of the stress tensor was performed for celestial CFTs by studying Mellin transformed gluon amplitudes in Yang-Mills theory in the limit where a pair of gluons become conformally soft. However it was observed that within correlation functions, this stress tensor generates the correct conformal transformations only for the (leading) conformally soft gluons but fails to do so for the hard gluon primaries. This indicates that the Sugawara construction does not yield the full stress tensor in the celestial CFT putatively dual to Yang-Mills theory. The possibility that the full stress tensor may include contributions in addition to the Sugawara stress tensor was also pointed out in [32,71]. The additional term coming from the subleading soft gluon symmetry in the differential equation (6.15) that we have obtained, further suggests that the standard form of the Sugawara construction involving only the leading current j a (z), may not apply to celestial CFTs and most likely the subleading currents (J a (z), K a (z)) play an important role in any such construction.
In the following sections we will study the implications of this differential equation for the celestial OPE of gluon primary operators in Yang-Mills theory. In particular we will show that the leading celestial OPE of gluons can be determined using this equation.
Differential equation for MHV gluon amplitudes in momentum space
The differential equation (6.15) was derived for the Mellin-transformed gluon amplitude. We can also write down an equivalent form of this equation for the amplitude in Fock space. Let us denote the tree-level Fock space MHV amplitude as

⟨ ∏_{k=1}^{n} A^{a_k}(ε_k ω_k, z_k, z̄_k, σ_k) ⟩_{MHV},    (7.1)

where ε = ±1. A^{a_k}(ω_k, z_k, z̄_k, σ_k) and A^{a_k}(−ω_k, z_k, z̄_k, σ_k) are annihilation and creation operators respectively for the external gluons with helicity σ_k in the S-matrix. In (7.1) we will take gluons (1, 2, …, n − 2) to have positive helicity. Then gluons (n − 1) and n have negative helicity. Now, in order to recast (6.15) in momentum space, we make the substitutions given in (7.2). Applying an inverse Mellin transform we can replace the correlation function in (6.15) with the Fock space amplitude (7.1). Thus we obtain the differential equation (7.3). As in (6.15), here we have (n − 2) partial differential equations, labelled by the index i in (7.3), which runs over the (n − 2) positive helicity gluons in the MHV amplitude. Also note that the amplitude appearing above is the full tree level scattering amplitude and so explicitly includes the delta function which imposes overall energy-momentum conservation. Therefore the partial derivative with respect to z̄_j in (7.3) has a nontrivial action on the amplitude (7.1).
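Although the explicit substitutions (7.2) are not reproduced above, the dictionary they implement is the standard Mellin one. As a reminder (our summary; the conventions are illustrative), writing Ã(Δ) = ∫_0^∞ dω ω^{Δ−1} A(ω),

\[
\int_0^\infty d\omega\,\omega^{\Delta-1}\,\omega A(\omega) = \tilde A(\Delta+1),
\qquad
\int_0^\infty d\omega\,\omega^{\Delta-1}\,\omega\,\partial_\omega A(\omega) = -\Delta\,\tilde A(\Delta),
\]

where the second identity assumes vanishing boundary terms. Hence a dimension shift Δ_i → Δ_i + 1 corresponds to multiplication by ω_i, while a factor of Δ_i, and hence of h_i = (Δ_i + 1)/2, corresponds to the scaling operator −ω_i ∂_{ω_i} acting on the Fock space amplitude (7.1).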
Leading OPE from differential equation
In this section we will show that the leading structure of the celestial OPE of gluons can be determined using the differential equation (6.15).
OPE for outgoing (incoming) gluons
Let us first consider the celestial OPE between a positive helicity gluon primary O^{a_1}_{Δ_1,+}(z_1, z̄_1) and another gluon primary denoted by O^{a_2}_{Δ_2,σ_2}(z_2, z̄_2). The spin σ_2 of the second gluon primary will be left unspecified for now. We will also take both these gluons to be outgoing. Then let us assume that the leading OPE in this case takes the following general form:

O^{a_1}_{Δ_1,+}(z_1, z̄_1) O^{a_2}_{Δ_2,σ_2}(z_2, z̄_2) = if^{a_1 a_2 x} C_{p,q}(Δ_1, Δ_2, σ_2) z_{12}^p z̄_{12}^q O^x_{Δ,σ}(z_2, z̄_2) + ⋯ ,    (8.1)

where z_{12} = (z_1 − z_2), z̄_{12} = (z̄_1 − z̄_2). O^x_{Δ,σ}(z_2, z̄_2) is the leading primary operator that can appear in the OPE and C_{p,q}(Δ_1, Δ_2, σ_2) is the associated OPE coefficient. The dots denote possible contributions from descendants. The conformal dimension and spin of O^x_{Δ,σ}(z_2, z̄_2) are given by

Δ = Δ_1 + Δ_2 + p + q,    σ = 1 + σ_2 + p − q.    (8.2)

Our objective now is to determine the values of p, q and the OPE coefficient C_{p,q}(Δ_1, Δ_2, σ_2). In carrying out this analysis, we will assume that the structure of the OPE in (8.1) holds for arbitrary values of the dimensions Δ_1 and Δ_2 with p, q and σ_2 fixed. This was also done in [5] in the context of the celestial OPE of gravitons in Einstein gravity. As in [5], our results below will further justify this assumption. We will see that the values of p, q and C_{p,q}(Δ_1, Δ_2, σ_2) obtained using the differential equation (6.15) precisely match the corresponding results of [53], where the leading celestial OPE was derived from the Mellin transform of the splitting function which appears in the leading collinear limit of gluon scattering amplitudes in Yang-Mills theory [74].
Let us now write the differential equation (6.15) in the form (8.3), with ε_1 = ε_2 = 1. Using (8.3) we can derive a recursion relation for the leading OPE coefficient as follows. Let us substitute the leading OPE (8.1) inside the correlator in (8.3). Then it is evident that at leading order in the OPE limit, the z_{12}, z̄_{12} dependence on the L.H.S. of (8.3) will be of the form z_{12}^{p−1} z̄_{12}^q. Assuming that (8.3) is satisfied order by order in the OPE regime (z_{12} → 0, z̄_{12} → 0), we can then set the coefficient of the z_{12}^{p−1} z̄_{12}^q term to zero. Consequently we obtain the recursion relation (8.4), where 2h̄_1 = Δ_1 − 1 and 2h̄_2 = Δ_2 − σ_2. Then applying the identity

f^{acd} f^{bcd} = C_A δ^{ab},    (8.5)

where C_A is the quadratic Casimir of the adjoint representation, we get from (8.4) the relation (8.6). In the ensuing discussion, it will be convenient for us to express (8.6) in another form. For this purpose, let us first note the following relation, which follows from the invariance of the OPE under global time translations [3]:

C_{p,q}(Δ_1 + 1, Δ_2, σ_2) + C_{p,q}(Δ_1, Δ_2 + 1, σ_2) = C_{p,q}(Δ_1, Δ_2, σ_2).    (8.7)

Then, shifting Δ_2 → Δ_2 + 1 in (8.6) and using (8.7), we get (8.8). We shall now derive another recursion relation for the leading OPE coefficient by appealing to the subleading soft gluon theorem in a similar fashion as in [3]. Consider the action of the subleading soft symmetry generator J^a_0 on a gluon primary. This is given by (4.8), recorded here as (8.9), where ε = ±1 for an outgoing (incoming) gluon. Then requiring both sides of the OPE (8.1) to transform in the same way under the action (8.9) we get (8.10).⁷ Multiplying both sides of (8.10) by f^{yba} and using the identity (8.5) we then obtain (8.11). In order to easily determine the values of p and q, let us bring (8.11) into the same form as (8.8). To achieve this, we shift Δ_1 → Δ_1 + 1 in (8.11). This gives (8.12). Then using (8.6) we can eliminate C_{p,q}(Δ_1 + 1, Δ_2 − 1, σ_2) from (8.12) and obtain (8.13). Now we can solve for p, q using equations (8.8) and (8.13). These equations admit nontrivial solutions provided the consistency condition (8.14) holds. Note that the differential equation (8.3) holds for any value of Δ_1. Further, as mentioned before, the values of p, q in the celestial OPE (8.1) do not depend on Δ_1, Δ_2. Consequently we can vary Δ_1 and Δ_2 independently in (8.14). Thereby the only non-trivial solution of the above equation is⁸

p = −1,    q = 0.    (8.15)

This is precisely what we expect in pure Yang-Mills theory. Then substituting (8.15) in (8.2) we immediately get

Δ = Δ_1 + Δ_2 − 1,    σ = σ_2.    (8.16)

The leading OPE (8.1) for outgoing gluon primaries then takes the form

O^{a_1}_{Δ_1,+}(z_1, z̄_1) O^{a_2}_{Δ_2,σ_2}(z_2, z̄_2) = if^{a_1 a_2 x} C_{−1,0}(Δ_1, Δ_2, σ_2) z_{12}^{−1} O^x_{Δ_1+Δ_2−1,σ_2}(z_2, z̄_2) + ⋯ ,    (8.17)

where σ_2 = ±1. Now we can determine the OPE coefficient as follows. After substituting p = −1, q = 0, equation (8.8) as well as (8.13) reduces to

(Δ_1 − 1) C_{−1,0}(Δ_1, Δ_2, σ_2) = (Δ_1 + Δ_2 − σ_2 − 1) C_{−1,0}(Δ_1 + 1, Δ_2, σ_2).    (8.18)

Then using the above in the recursion relation (8.7) we obtain

(Δ_2 − σ_2) C_{−1,0}(Δ_1, Δ_2, σ_2) = (Δ_1 + Δ_2 − σ_2 − 1) C_{−1,0}(Δ_1, Δ_2 + 1, σ_2).    (8.19)

Recursion relations of the form (8.18) and (8.19) were also obtained using time translation invariance and the global subleading soft gluon symmetry in [3]. The solution of these equations is given by

C_{−1,0}(Δ_1, Δ_2, σ_2) = α B(Δ_1 − 1, Δ_2 − σ_2),    (8.20)

where B(x, y) is the Euler beta function. The constant α is as of yet undetermined. We can fix it by using the leading conformal soft limit. Consider Δ_1 → 1 in (8.17). Then matching with the Ward identity (3.2) gives α = 1. Thus for outgoing gluons, we get

C_{−1,0}(Δ_1, Δ_2, σ_2) = B(Δ_1 − 1, Δ_2 − σ_2).    (8.21)

In the case where both gluon primaries are incoming, an identical analysis again gives p = −1, q = 0. The OPE coefficient also takes the same form as in (8.20). However, in order to determine the overall constant we should note that the Kac-Moody current for an incoming gluon is given by (8.22), where the superscript (−) denotes an incoming gluon and which carries an additional minus sign relative to the definition (3.1) for an outgoing gluon.
Due to this minus sign in comparison to (3.1) for an outgoing gluon, the leading OPE coefficient for incoming gluons, given in (8.23), differs from (8.21) by an overall sign, C_{−1,0}(Δ_1, Δ_2, σ_2) = −B(Δ_1 − 1, Δ_2 − σ_2). The OPE coefficients in (8.21) and (8.23) precisely match the corresponding results derived in [3,53].
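As a quick sanity check on the reconstructed recursions (8.18) and (8.7) and their beta-function solution (8.21), the shift identities of the Euler beta function can be verified numerically. This is an illustrative sketch with generic (hypothetical) test values, not data from the paper:

```python
# Numerical check that C(D1, D2, s2) = B(D1 - 1, D2 - s2) satisfies the
# reconstructed recursions: the shift relation (8.18) and relation (8.7).
from math import gamma

def B(x, y):                      # Euler beta function
    return gamma(x) * gamma(y) / gamma(x + y)

def C(d1, d2, s2):                # leading OPE coefficient, eq. (8.21)
    return B(d1 - 1.0, d2 - s2)

d1, d2, s2 = 2.3, 3.7, 1.0        # generic (hypothetical) test values
ok_818 = abs((d1 - 1) * C(d1, d2, s2)
             - (d1 + d2 - s2 - 1) * C(d1 + 1, d2, s2)) < 1e-12
ok_87 = abs(C(d1 + 1, d2, s2) + C(d1, d2 + 1, s2) - C(d1, d2, s2)) < 1e-12
print(ok_818, ok_87)              # True True
```

Both checks reduce to the classical identities B(x+1, y) = x/(x+y) B(x, y) and B(x+1, y) + B(x, y+1) = B(x, y).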
Outgoing-incoming OPE
We will now deal with the case where one of the gluon primaries in the celestial OPE is outgoing and the other is incoming. Here we will take the outgoing gluon primary to have positive helicity and denote it as O^{a_1}_{Δ_1,+}(z_1, z̄_1). The incoming gluon primary will be denoted by O^{a_2,−}_{Δ_2,σ_2}(z_2, z̄_2), where the superscript (−) denotes that it is incoming. We will not fix the spin of the incoming gluon and so σ_2 = ±1.
In this case, both an outgoing and an incoming gluon primary can contribute to the OPE at leading order. Then, as in the previous subsection, we begin by assuming a general form of the leading OPE, equation (8.24), where O^{x,+}_{Δ,σ} and O^{x,−}_{Δ,σ} on the R.H.S. of (8.24) respectively denote the outgoing and incoming primaries which contribute to the leading OPE. Their conformal dimension and spin are given in (8.25). In (8.24), C^±_{p,q}(Δ_1, Δ_2, σ_2) is the OPE coefficient corresponding to the outgoing (incoming) primary that appears in the OPE. The dots denote possible contributions from descendants. Now, following exactly the same steps as in subsection (8.1), we can obtain a recursion relation for the leading OPE coefficients using the differential equation (8.3); here we get (8.26). Then applying the global subleading soft symmetry generator J^a_0 to the OPE (8.24) we get another recursion relation, (8.27), analogous to (8.11). Let us also note that invariance of the OPE (8.24) under global time translations yields (8.28). Then, using (8.28) and performing similar manipulations as before, we can rewrite (8.26) and (8.27) as follows:

(2p + Δ_1 + 1) C^±_{p,q}(Δ_1, Δ_2, σ_2) = ±(Δ_1 + Δ_2 + 2p + q − σ_2 + 1) C^±_{p,q}(Δ_1 + 1, Δ_2, σ_2)    (8.29)

and (8.30). The system of equations (8.29) and (8.30) has the same form as the analogous equations (8.8) and (8.13) obtained in the case of the OPE between two outgoing (incoming) gluons. It then follows by similar arguments that they admit non-trivial solutions iff

p = −1,    q = 0.    (8.31)

This is again the expected result in Yang-Mills theory. The OPE coefficients can now be obtained by solving (8.28) and (8.29) with p = −1, q = 0, in the same way as shown in subsection (8.1). We then get (8.32), where α, β are constants. These can be fixed by using the leading conformal soft limit. In order to determine β we can put p = −1, q = 0 in the OPE (8.24) and then take Δ_1 → 1. The leading conformal soft theorem (3.2) then implies β = 1. Similarly, considering the limit Δ_2 → 1 in (8.24) and comparing with the Ward identity (3.2) gives α = −1. Thus, finally, the leading OPE coefficients in the case of the outgoing-incoming gluon OPE are given by (8.33). Again, the above results for the OPE coefficients are in perfect agreement with [3].
Subleading OPE coefficients from symmetry
In this section we will illustrate how the OPE coefficients of descendants in the celestial OPE between gluon primaries can be determined using the underlying symmetries. For the OPE between positive helicity gluons, some of the descendant OPE coefficients were obtained in [9] using translation, global conformal and leading soft gluon current algebra symmetries.
Here we will consider the mixed helicity case, i.e., the OPE between a positive helicity and a negative helicity gluon primary. We will see that the subleading soft gluon symmetry plays a crucial role here. Let us denote the gluon primaries whose OPE we want to consider as O^a_{Δ_1,+}(z_1, z̄_1) and O^b_{Δ_2,−}(z_2, z̄_2). We will also consider both of these to be outgoing. Then, as shown in Section C of the Appendix in this paper, up to the first subleading order this OPE is given by equation (9.1). In the Appendix, Section C, we have derived this result from the Mellin transform of the 4-point MHV gluon amplitude in Yang-Mills theory. Although we have obtained this from the 4-point Mellin amplitude, the above form of the mixed helicity OPE is expected to hold within any n-point tree level MHV gluon amplitude in Yang-Mills. Now it is important to note that in the OPE (9.1), at order O(z_{12}^0 z̄_{12}^0), we encounter descendants associated to both the leading soft gluon current algebra and the subleading soft gluon symmetry algebra; these are created by the modes j^a_{−1} and J^a_{−1}, respectively, in (9.1). We will now show that the OPE coefficients for these descendants can also be determined using symmetries, as follows.
In general, we can have the descendants listed in (9.2) appearing at O(1) in the mixed-helicity OPE. These operators are linearly independent. This is because the vanishing condition (6.10) holds only for a positive helicity gluon primary. This will be further justified by our analysis below. Then the general form of the O(1) term in the mixed helicity OPE can be written as in (9.3), where α_1, α_2, α_3 are constants which we want to determine. The leading OPE coefficient B(Δ_1 − 1, Δ_2 + 1) can be obtained using the differential equation (6.15) as shown in Section 8. Also note that we have placed the operator O^b_{Δ_2,−} at the origin, without loss of generality. Now let us apply the subleading soft symmetry mode J^c_1 to the OPE in (9.3). Then, using (4.15) and applying the commutation relations listed in Sections (4.2) and (5), we get the recursion relation (9.4). The coefficients α_1, α_2, α_3 in this equation do not carry any Lie algebra indices. The equation should then hold for any allowed values of the free indices (a, b, c, d). Thus we can set, for example, a = c in (9.4). The structure constants being antisymmetric then immediately gives us

α_1 = 0.    (9.5)

Similarly, setting a = b in (9.4) we get (9.6). Now it can be easily checked that applying the current algebra mode j^c_1 yields exactly the same recursion relation as (9.4). Then, in order to fix α_2, we can apply L_1 to both sides of (9.3). Again using the relevant commutation relations from Section 4.2 we obtain (9.7). Substituting α_1 = 0 from (9.5) into this equation we get (9.8). Finally, we can solve for α_3 from (9.4) by putting in the value of α_2 obtained above; this yields (9.9). We thus find that the values of α_1, α_2, α_3 obtained in (9.5), (9.8) and (9.9) precisely agree with those extracted directly from the Mellin amplitude. Now it is easy to see that (9.3) is already invariant under the action of the translation generator P_{−1,0}. It is also straightforward to check that these values of the descendant OPE coefficients satisfy the recursion relations that follow from applying the translation generator P_{0,−1} to the OPE (9.3). The fact that all these recursion relations are mutually consistent and admit a unique solution further justifies the absence of the null state relation (6.10) for a negative helicity gluon primary.
Note Added
In a previous version of the paper we demanded that the conditions (4.15) and (4.16), satisfied by gluon primaries, should also hold for any primary descendant or null state of a gluon primary. In other words, we demanded that any primary descendant should also be primary under the subleading soft gluon symmetry. We feel that a naive imposition of this condition is somewhat restrictive, and in this version we have removed it. A more precise statement will appear elsewhere.
The main results of the paper remain unchanged. For example, the null state (6.10) is uniquely determined by the primary-state conditions with respect to the Poincare group and the leading soft gluon SU(N) current algebra.
A Brief review of Celestial or Mellin amplitudes for massless particles
The celestial or Mellin amplitude for n gluons in four dimensions is defined as the Mellin transformation of the n-particle S-matrix element, given by [30,31]

M_n({z_i, z̄_i, h_i, h̄_i}) = ∏_{i=1}^{n} ∫_0^∞ dω_i ω_i^{Δ_i − 1} S_n({ε_i ω_i, z_i, z̄_i, σ_i}),

where σ_i = ±1 denotes the helicity of the i-th gluon and the on-shell momenta are parametrized as

p_i^μ = ε_i ω_i (1 + z_i z̄_i, z_i + z̄_i, −i(z_i − z̄_i), 1 − z_i z̄_i).

a_i denotes the Lie algebra index carried by the i-th gluon. The scaling dimensions (h_i, h̄_i) are defined as

h_i = (Δ_i + σ_i)/2,    h̄_i = (Δ_i − σ_i)/2.

The Lorentz group SL(2, C) acts on the celestial sphere as the group of global conformal transformations, z_i → (a z_i + b)/(c z_i + d), and the Mellin amplitude M_n transforms as

M_n({(a z_i + b)/(c z_i + d), (ā z̄_i + b̄)/(c̄ z̄_i + d̄), h_i, h̄_i}) = ∏_{i=1}^{n} (c z_i + d)^{2h_i} (c̄ z̄_i + d̄)^{2h̄_i} M_n({z_i, z̄_i, h_i, h̄_i}).

This is the familiar transformation law for the correlation function of primary operators of weight (h_i, h̄_i) in a 2-D CFT under the global conformal group SL(2, C).
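A one-line symbolic check of the parametrization reconstructed above (a sketch using sympy; the mostly-plus-to-minus (+,−,−,−) signature and the overall normalization are our assumptions):

```python
# Verify that the celestial parametrization of a massless momentum is null.
import sympy as sp

w, z, zb = sp.symbols('omega z zbar', real=True, positive=True)
p0 = w * (1 + z * zb)
p1 = w * (z + zb)
p2 = -sp.I * w * (z - zb)   # real once zbar is the complex conjugate of z
p3 = w * (1 - z * zb)

p_squared = p0**2 - p1**2 - p2**2 - p3**2  # Minkowski norm, (+,-,-,-)
print(sp.simplify(sp.expand(p_squared)))   # -> 0
```

Treating z and zbar as independent real symbols is the usual trick; p2 then carries an explicit factor of I, but p2**2 is real and the norm cancels identically.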
We can also define a modified Mellin amplitude⁹ as in [34,35],

M̃_n({u_i, z_i, z̄_i, h_i, h̄_i}) = ∏_{i=1}^{n} ∫_0^∞ dω_i ω_i^{Δ_i − 1} e^{−i ε_i ω_i u_i} S_n({ε_i ω_i, z_i, z̄_i, σ_i}),

where u can be thought of as a time coordinate and ε_i = ±1 for an outgoing (incoming) particle. Under a (Lorentz) conformal transformation the modified Mellin amplitude M̃_n transforms as in (A.6). Now, in order to make manifest the conformal nature of the dual theory living on the celestial sphere, it is useful to write the (modified) Mellin amplitude as a correlation function of conformal primary operators. So let us define a generic conformal primary operator as

O^a_{h,h̄}(z, z̄) = ∫_0^∞ dω ω^{Δ−1} A^a(εω, z, z̄, σ),

where ε = ±1 for an annihilation (creation) operator of a massless gluon of helicity σ and Lie algebra index a. Under a (Lorentz) conformal transformation the conformal primary transforms like a primary operator of scaling dimension (h, h̄). Similarly, in the presence of the time coordinate u we have the u-dependent primary of (A.10). In terms of (A.8), the Mellin amplitude can be written as the correlation function of conformal primary operators; similarly, using (A.10), the modified Mellin amplitude can be written as such a correlation function.

A.1 Comments on notation in the paper

Note that conformal primaries carry an additional index ε which distinguishes between an incoming and an outgoing particle. In the paper, for notational simplicity, we omit this additional index unless it plays an important role. So in most places we simply write the (modified) Mellin amplitude as

M_n = ⟨ ∏_{i=1}^{n} O^{a_i}_{h_i, h̄_i}(z_i, z̄_i) ⟩.

Similarly, in many places in the paper we denote a gluon conformal primary of weight Δ = h + h̄ by O^a_{Δ,σ}, where σ = ±1 is the helicity (= h − h̄). Since we are considering pure Yang-Mills, we can further simplify the notation to O^a_{Δ,±} by omitting σ = ±1.
B OPE from 5-point MHV gluon amplitude
In this section of the Appendix we will consider the Mellin transform of the 5-point tree-level MHV gluon amplitude in Yang-Mills theory. Our main objective here is to show that the OPE in (6.1) which was obtained in [9] from the 4-point MHV amplitude also holds within the 5-point Mellin amplitude.
B.1 5-point MHV gluon amplitude
The tree-level 5-point gluon amplitude in Yang-Mills theory can be expressed in the DDM colour basis as [75]

A(1^{a_1}, 2^{a_2}, 3^{a_3}, 4^{a_4}, 5^{a_5}) = (ig)^3 Σ_{ρ∈S_3} f^{a_1 a_{ρ(2)} x_1} f^{x_1 a_{ρ(3)} x_2} f^{x_2 a_{ρ(4)} a_5} A(1, ρ(2), ρ(3), ρ(4), 5),    (B.1)

where g is the Yang-Mills coupling and the f^{abc}'s are Lie algebra structure constants. The sum above runs over a basis of 3! colour-stripped partial amplitudes denoted by A(1, ρ(2), ρ(3), ρ(4), 5). This is an over-complete basis: owing to the BCJ relations [76], any 4 of the sub-amplitudes in (B.1) can be written in terms of 2 linearly independent sub-amplitudes. Let us choose this BCJ basis to be given by the sub-amplitudes listed in (B.2). Here s_{ij} = (p_i + p_j)^2, with p_i being the momenta of the external particles. Now our interest here will be in the MHV configuration. For this we will take the helicities of the gluons in (B.1) to be σ_1 = σ_2 = −1, σ_3 = σ_4 = σ_5 = 1. Then using (B.3) we obtain the representation (B.5), where the c_i's denote colour structures built out of the structure constants. Now parametrising the null momenta of the external gluons in the amplitude as

p_i^μ = ε_i ω_i (1 + z_i z̄_i, z_i + z̄_i, −i(z_i − z̄_i), 1 − z_i z̄_i),

where ε_i = ±1, and using the Parke-Taylor formula for MHV amplitudes [77], we can write (B.5) as (B.8). The corresponding Mellin amplitude is given by (B.11), where O^{a_i}_{Δ_i,±}(i) denotes a gluon primary operator with dimension Δ_i = 1 + iλ_i. The subscript (±) here denotes the spin of the operator. The label (i) collectively denotes the coordinates (z_i, z̄_i, u_i) at null infinity where the i-th gluon primary is inserted. Now we are interested in extracting the celestial OPE between the gluons (4^{+a_4}, 5^{+a_5}). We will further take them to be outgoing and so ε_4 = ε_5 = 1. Then let us define

ω_4 = t ω_P,    ω_5 = (1 − t) ω_P,

where t ∈ [0, 1]. The delta function imposing overall energy-momentum conservation in (B.11) can be expressed in a factorized form; the integral over δ^{(4)} can then be easily done and we get (B.17). Then, using (B.17) and the expression of the 5-point MHV amplitude obtained in (B.8), we finally get (B.19). In the integral (B.19) the theta functions simply impose the condition that the energies ω*_i and ω_P are non-negative. This is required because in the original integral (B.11) the energy variables satisfy ω_i ≥ 0. The prefactor N in (B.19) is given in (B.20), and I_1(t), I_2(t) in the integrand of (B.19) take the form

I_1(t) = 1 + (σ_{1,2}/σ_{1,1}) z_{45} t + (σ_{1,3}/σ_{1,1}) z̄_{45} t + (σ_{1,4}/σ_{1,1}) z_{45} z̄_{45} t + ⋯ ,    (B.22)

with an analogous expression for I_2(t) given in (B.23).
B.3 Mellin transform of 4-point gluon MHV amplitude
Here we will obtain the modified Mellin transform of the tree-level 4-point MHV gluon amplitude which is required in order to extract the OPE from the 5-point Mellin amplitude (B.19).
B.4 OPE decomposition of 5-point Mellin amplitude
We shall now extract the celestial OPE between gluon primaries O^{a_4}_{Δ_4,+} and O^{a_5}_{Δ_5,+} from the 5-point Mellin amplitude. For this purpose we will set u_{45} = 0 and expand (B.19) around z_{45} = 0, z̄_{45} = 0. Now note that expanding the theta functions in (B.19) generates delta function contributions whose arguments are functions of z_{ij}, z̄_{ij} with (i, j) ∈ (1, 2, 3, 5). We will assume that the operators which do not participate in the OPE are all inserted at separated points at null infinity, and thereby such contact terms can be ignored. The contributions of interest here will then only come from expanding the integrands I_1(t), I_2(t) in (B.19).
Here we will restrict attention to the leading and the first subleading terms in the OPE regime. The leading term corresponds to the Mellin transform of the collinear splitting function [74]. The subleading terms in the Mellin amplitude can likely be related to the subleading corrections to the leading collinear limit of the momentum space amplitude, which have been studied in [59].
B.4.1 Leading term
Using (B.22) and (B.23), it is easy to see that the only terms from I_1(t) and I_2(t) which contribute at leading order in the OPE limit (z_{45} → 0, z̄_{45} → 0) are the O(1) pieces, I_1(t), I_2(t) → 1.

B.4.2 Terms at O(z_{45}^0 z̄_{45}^0) in the OPE decomposition

These will correspond to the contribution of descendants in the celestial OPE. Let us first gather the relevant terms from I_1(t) and I_2(t) which can contribute at this order. From (B.22) we get the corresponding contribution, written in terms of a quantity Z, and from (B.23) we obtain several further contributions of the same order. Thus, up to the first subleading order, the celestial OPE between outgoing gluon primaries of opposite helicities is given by (9.1). That OPE manifestly satisfies both the leading and the subleading conformal soft gluon theorems.
"Physics"
] |
Phased Array Antenna Beam-Steering in a Dispersion-Engineered Few-Mode Fiber
We present, for the first time to our knowledge, experimental demonstration of tunable optical beamforming for phased array antennas using a few-mode fiber. The double-clad step-index few-mode fiber is dispersion engineered such that it operates as a continuously tunable 5-sample true-time delay line, enabling continuous steering of the beam-pointing angle. Using this approach, we measure the radiation pattern from 5 elements of an in-house fabricated 8-element phased array antenna at the radiofrequency of 26 GHz and demonstrate continuous beam-steering over a 59$^\circ$ range by sweeping the optical wavelength from 1543 nm up to 1560 nm. Such a few-mode fiber-based beamformer could be beneficial to next-generation fiber-wireless communications and radar systems, as it provides further versatility and capacity along with reduced size, weight and power consumption.
I. INTRODUCTION
BEAMFORMING for phased array antennas (PAAs) is a key technology in radar and wireless communications systems, enabling the shaping and steering of beams towards specific directions. Traditional analog beam-steering techniques based on phase shifters are prone to beam squinting, and therefore unsuitable for wideband operation [1]. To address this issue, true-time delay lines (TTDLs) are often used instead, as they introduce frequency-independent delays and thereby accommodate broadband beamforming [2]. Optical approaches are considered as the most promising technology to realize TTDLs, as they offer immunity to electromagnetic interference, along with a broader bandwidth and lower losses compared to their electronic counterparts [3].
Different schemes have been used to develop optical TTDL beamformers, including those based on dispersive fibers [4], fiber Bragg gratings [5], stimulated Brillouin scattering [6], optical ring resonators [7], and subwavelength grating waveguides [8], among others. Moreover, multicore (MCF) and few-mode fibers (FMFs), which have found applications in various fields such as optical transmission [9], imaging [10], sensing [11] and nonlinear switching [12], have also proven useful for microwave photonics signal processing [13]. In the context of microwave photonics, the parallelism inherent to the spatial dimension of these fibers can be used to implement TTDLs [14], [15], [16], and therefore allows simultaneous distribution and processing of microwave signals within the same fiber link. This approach, which eliminates the need for an external device to process the signal, is particularly important in scenarios where the radio-over-fiber link is required anyway. Moreover, taking advantage of the spatial diversity in MCFs and FMFs makes it possible to use only one laser source, while other fiber-based TTDLs that exploit optical wavelength diversity require several lasers [4], [5]. Recently, this scheme was used to demonstrate a TTDL beamformer for PAAs in a custom dispersion-engineered heterogeneous MCF [17]. A few beamforming networks using FMFs have also been reported so far [18], [19], [20]. However, their main drawback is that the tunable TTDL operation is not solely provided by the FMF, and external delay control is required by either changing the FMF length [18], using FMF Bragg gratings [19], or employing optical switches [19], [20]. We have recently proposed [21] and experimentally demonstrated [16] a dispersion-tailored FMF that overcomes this limitation by acting as a tunable TTDL, offering continuous time-delay tunability over a broad range through varying the optical wavelength. Here, we use this FMF-based TTDL to realize an optical beamforming network. To the best of our knowledge, this is the first experimental demonstration of tunable optical beamforming for PAAs using a FMF, where the FMF itself provides the required tunable TTDL operation, making the system reconfigurable and yet simple. Such a scheme, which allows simultaneous processing and distribution of microwave signals, could be particularly interesting for next-generation wireless communications systems, such as Beyond 5G, as it offers increased versatility, capacity and compactness.
II. FEW-MODE FIBER TRUE-TIME DELAY LINE
The different spatial modes of a FMF have distinct group delays and propagate at different velocities in the fiber. Therefore, at the FMF output, different time-delayed replicas of an input signal are provided by the different modes. We can utilize this property to implement a tunable TTDL. In this case, through custom fiber design, two specific conditions should be met. First, the dispersion properties of the modes must be tailored such that the differential time delay Δτ between adjacent replicas (or the differential group delay (DGD) between adjacent modes) is constant at a given wavelength. Second, to obtain continuous delay tunability, Δτ should vary linearly with the optical wavelength, which means that the differential dispersion slope between adjacent modes should be minimized.
In mathematical terms, the group delay of spatial mode n can be expanded in a first-order Taylor series around an anchor wavelength λ_0 as

τ_n(λ) = [τ_{0,n} + D_n (λ − λ_0)] L,    (1)

where τ_{0,n} and D_n indicate the group delay and chromatic dispersion per unit length of mode n at λ_0, respectively, and L is the fiber length. When exploiting the space dimension, the basic time delay among adjacent samples at a particular wavelength is given by the differential group delay between each pair of adjacent modes, expressed as Δτ(λ) = τ_n(λ) − τ_{n−1}(λ). If the FMF is designed such that all modes have the same group delay at the anchor wavelength (identical τ_{0,n} value), then we have

Δτ(λ) = ΔD (λ − λ_0) L,    (2)

where ΔD = D_n − D_{n−1} is the differential chromatic dispersion between neighboring modes. Based on (2), to fulfill the two above-mentioned conditions for tunable TTDL operation, ΔD should be constant [21].
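A small numerical sketch of this delay model, using the ΔD = 1.7 ps/nm/km and L = 1 km values quoted below for the fabricated fiber; the anchor wavelength is our illustrative assumption, back-solved from the ~66.7 ps average delay reported at 1543 nm:

```python
# Tunable TTDL model of eq. (2): basic differential delay vs. wavelength.
DELTA_D = 1.7e-12 / 1e-9 / 1e3   # differential dispersion, 1.7 ps/nm/km in s/m/m
L = 1e3                          # fiber length: 1 km
LAMBDA_0 = 1503.8e-9             # assumed anchor wavelength (illustrative)

def delta_tau(lmbda):
    """Differential group delay between adjacent modes, eq. (2), in seconds."""
    return DELTA_D * (lmbda - LAMBDA_0) * L

for lam_nm in (1543, 1550, 1560):
    print(f"{lam_nm} nm -> {delta_tau(lam_nm * 1e-9) * 1e12:.1f} ps")
# 1543 nm -> 66.6 ps ; 1550 nm -> 78.5 ps ; 1560 nm -> 95.5 ps
```

The predicted delays track the 66.7-95.6 ps range reported in Section IV for the 1543-1560 nm sweep.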
Considering these requirements, we have recently developed a few-mode fiber with a double-clad step-index profile that features relatively evenly-spaced incremental chromatic dispersion values of ΔD = 1.7 ps/nm/km among 5 of its linearly polarized (LP) modes, namely, LP01, LP11, LP21, LP31 and LP41. The modal crosstalk is minimized among these modes as they belong to different mode-groups. Fig. 1(a) shows the refractive index profile of the 1-km long FMF, fabricated by the YOFC company, where the radii of its silica core, inner and outer claddings are 9.1, 13 and 62.5 μm, while their GeO2 doping concentrations are 7.79, 5.93, and 0.22 mol%, respectively. The measured differential group delays of the modes with respect to LP01 are displayed in Fig. 1(b), where for every wavelength we can see that the group delay increases in constant steps as the mode number increases. The average Δτ value among neighboring modes at different wavelengths, together with its standard deviation, is shown in Fig. 1(c).
III. PHASED ARRAY ANTENNA
Phased array beamforming is obtained when a constant time delay Δτ is introduced between the microwave signals radiating from adjacent antenna elements, corresponding to a progressive phase shift of Δφ = 2πf_RF Δτ among them, where f_RF is the operating radiofrequency (RF). The total radiation pattern from a PAA is the radiation pattern from a single element, known as the element factor, multiplied by an array factor (AF). The AF, which depends on the geometry of the array and the excitation of the elements in terms of amplitude and phase, is given by [22]

AF(θ) = Σ_{n=1}^{N} a_n exp[ j 2π f_RF (n − 1) ( d sin θ / c − Δτ ) ],    (3)

where θ is the far-field angular coordinate, N is the number of radiating elements, a_n represents the amplitude of the signal radiated by element n, d is the spacing between the antenna elements, and c is the speed of light in vacuum. As we can understand from (3), the beam-pointing angle, at which the maximum radiation occurs, depends on Δτ. Thus, when using our developed FMF-based TTDL to implement a beamformer for PAAs, the beam can be steered by changing the differential group delay among the spatial modes, which we can realize by simply varying the operational optical wavelength, as seen in Fig. 1(c).
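The following sketch evaluates the reconstructed array factor (3) numerically and locates the beam-pointing angle. The geometry matches the paper (N = 5, d = 5.77 mm, f_RF = 26 GHz), but the uniform amplitudes and the neglect of the element factor and of fixed electrical path offsets are simplifying assumptions, so the angles it returns are illustrative rather than reproductions of the measured ones:

```python
# Array factor of eq. (3) for a 5-element linear array at 26 GHz.
import numpy as np

C = 3e8          # speed of light (m/s)
F_RF = 26e9      # operating radiofrequency (Hz)
D = 5.77e-3      # element spacing (m), lambda_RF/2 at 26 GHz
N = 5            # number of fed elements

def array_factor(theta, dtau, a=None):
    """|AF(theta)| for a progressive inter-element delay dtau (s)."""
    a = (np.ones(N) if a is None else np.asarray(a))[:, None]
    n = np.arange(N)[:, None]
    phase = 2 * np.pi * F_RF * n * (D * np.sin(theta) / C - dtau)
    return np.abs(np.sum(a * np.exp(1j * phase), axis=0))

theta = np.radians(np.linspace(-90, 90, 3601))
for dtau_ps in (66.7, 95.6):                 # average delays at 1543 and 1560 nm
    af = array_factor(theta, dtau_ps * 1e-12)
    peak = np.degrees(theta[np.argmax(af)])
    print(f"dtau = {dtau_ps} ps -> beam peak near {peak:.1f} deg")
```

With d = λ_RF/2, the delay only enters modulo the RF period 1/f_RF ≈ 38.5 ps, which is why fixed differences in the electrical paths of the five feeds (discussed in Section V) shift the measured pointing angles relative to this idealized model.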
A microstrip PAA, with 8 elements separated by d = 5.77 mm (λ_RF/2 at 26 GHz), was designed and fabricated at our facilities, using a milling machine on a Rogers RT5880 substrate with a height of 0.381 mm and relative permittivity of 2.2. A picture of the fabricated 8-element PAA is shown in Fig. 2(a). The measured radiation patterns of 4 representative antenna elements are displayed in Fig. 2(b). These patterns, which should ideally be similar, are very different and have considerable power variations. This non-uniformity affects the total radiation pattern from the PAA, as will be discussed in the following sections.
IV. OPTICAL BEAMFORMING EXPERIMENTAL SETUP AND RESULTS
We use our developed FMF-based TTDL to experimentally demonstrate 5-element optical beamforming for the PAA, using the setup of Fig. 3. The optical signal from a tunable laser is intensity modulated by a 10 dBm RF signal at 26 GHz, generated by a vector network analyzer (VNA). The modulated signal is amplified and split into 5 paths. After controlling the polarization, the signal in each path is injected into one of the 5 modes of interest (LP01, LP11, LP21, LP31 and LP41) of the 1-km FMF using a mode multiplexer. After propagating through the FMF, the signals are extracted from the modes using a demultiplexer. The multiplexer/demultiplexer pair are fabricated by Cailabs, with an average back-to-back modal crosstalk and average insertion loss of −19 dB and 9.3 dB at 1565 nm, respectively. Among every two degenerate asymmetrical LP_lm modes (l ≥ 1), the power from only one of them is collected. Thus, we maximize the power in that specific mode using the polarization controller placed before the multiplexer. The variations in its power, caused by the inevitable mode coupling with the other degenerate mode, can also be controlled using this approach. At the FMF output, the powers coming from the different modes are equalized using variable optical attenuators. The 5 optical signals are then directed to an anechoic chamber, where each one is detected by an individual photodiode and converted back to the electrical domain. After amplification, the RF signals are fed into 5 consecutive elements of the 8-element PAA. For measuring the radiation patterns, the PAA is mounted on an antenna positioner that is capable of 180-degree azimuth rotation. The received power is measured using a receiver horn antenna placed 3 meters away from the transmitter antenna. The power is then amplified by a low-noise amplifier (LNA) and the radiation pattern is measured by the VNA. Beam-steering is realized by sweeping the optical wavelength of the laser.
Using (3) and the DGD values extracted from the fitted lines of Fig. 1(b), the normalized radiation patterns and beam-pointing angles for different wavelengths are calculated and illustrated in Fig. 4(a) and (b), respectively. The results indicate that, theoretically, beam-steering from approximately −90° to 90° could be achieved. However, as seen in Fig. 4(a), the farther the angle is from the broadside direction, the broader the main lobe becomes and the lower the main-to-side-lobe ratio (MSLR), thus limiting the operational range. Additionally, in practice, the beam-steering range is restricted by the characteristics of the single-element radiation patterns, which vary significantly from one another.
To evaluate the performance of the FMF-based beamformer, we vary the optical wavelength from 1543 nm to 1560 nm, corresponding to average time delays between 66.7 ps and 95.6 ps. This steers the beam-pointing angle from 22° down to −37°, as we can see in Fig. 5, where the normalized measured radiation patterns at 8 different wavelengths are displayed. The corresponding beam-pointing angles are shown by red squares in Fig. 4(b). The results show main-lobe broadening at angles farther from the broadside direction, as expected from the simulations of Fig. 4(a). This can be improved by using a TTDL with a higher number of samples, which could be achieved by increasing the number of modes, but at the expense of an increase in the modal crosstalk. An alternative is to increase the number of samples by using optical lasers with different wavelengths at the input (combining wavelength division multiplexing and space division multiplexing). For example, using two lasers instead of one would double the number of samples. Another technique would be to use a combination of cores and modes, i.e., a customized multicore fiber in which each core supports a few modes. This way, through proper design, we can ensure a low level of crosstalk among the modes within each core and among different cores.
V. DISCUSSION
In our beamforming experiment, the system was initially optimized to perform the measurements using 5 specific elements of the 8-element PAA. In this case, the beam-pointing angle had good agreement with the theoretical values at different wavelengths, but in some cases high power was observed in the side lobes, decreasing the MSLR. As Fig. 4(c) shows, this is due to non-uniform power distribution among the 5 signals.
Even though the powers of the signals radiated from the elements were equalized before starting the measurements, their power distribution varied throughout the measurements at different beam-pointing angles, which is mostly attributed to the considerably different power levels of the single-element radiation patterns. Therefore, to achieve uniform power distribution, it was necessary to make up for the element-factor variations at every step, which required a lot of time and was not feasible for us. Hence, to reduce the effect of the dissimilar element radiation patterns, for steering the beam to a specific beam-pointing angle, rather than using a fixed set of antenna elements, we chose the 5 consecutive elements whose radiation patterns were least different at that particular angle. For example, for steering the beam to −30°, according to Fig. 2(b), we avoided the element whose radiation pattern is shown in green, as it experiences a significant power drop at this angle. This notably improved the MSLR compared to the case where a fixed set of elements was used for all angles. However, the downside of this approach is that using different elements for different angles introduces slight variations in the electrical paths of the 5 signals, leading to minor changes in the time delay Δτ among them. According to (3), this causes a shift in the beam-pointing angle, explaining the offset observed between theoretical and measured values in Fig. 4(b). Employing a PAA that exhibits better performance in terms of its element radiation patterns would overcome these issues and further improve our results in terms of steering range and MSLR. Nonetheless, our results show the applicability of a FMF-based TTDL in tunable beamforming for a PAA. Moreover, besides the single-element radiation patterns, which we believe are the main restriction in this experiment, minor variations in Δτ between neighboring modes could cause a slight shift in the beam-pointing angle. Also, coupling between the modes could affect the power distribution among the signals and reduce the MSLR. However, for several wavelengths, we repeated the experiment after 10 minutes and obtained quite similar results, even though the FMF spool was not placed on a vibration-isolating table. This indicates that linear mode coupling did not notably affect the radiation patterns. Keeping the FMF spool in a controlled environment would reduce the effect of external coupling sources.
VI. CONCLUSION
We have experimentally presented continuously tunable optical beamforming for a phased array antenna using a 1-km few-mode fiber link. To the best of our knowledge, this is the first demonstration in which the FMF itself provides the tunable time delay required for beam-steering, without requiring external delay control. This was made possible by the unique dispersion properties of the custom-designed double-clad step-index FMF, which allow it to operate as a tunable sampled true-time delay line. The 5-element radiation pattern of a PAA was measured in an anechoic chamber at an RF frequency of 26 GHz, where the beam-pointing angle was steered within a 59° range, from 22° to −37°, by sweeping the optical wavelength of the laser from 1543 nm to 1560 nm. The radiation patterns of different individual elements show different behavior, indicating that our in-house fabricated PAA is not ideal. Thus, we believe a wider steering range could be obtained by employing a PAA with improved performance.
Such FMF-based TTDLs offer a promising approach for implementing fiber-distributed signal processing in a compact and versatile manner, where the distribution and processing functionalities are realized simultaneously within the same fiber medium. We have previously demonstrated the applicability of this technique to tunable microwave signal filtering [16]. Arbitrary waveform generation and shaping, as well as time differentiators/integrators, are among other signal processing functionalities that could benefit from this approach.
Fig. 1. (a) Refractive index profile of the developed double-clad step-index FMF at 1550 nm, along with the transverse mode profiles of the 5 modes of interest that are suitable for true-time delay line operation. (b) Differential group delays of the modes of interest, with respect to LP01. The circles represent measured data, while the solid lines are the fitted trend lines. (c) Measured (red dots) and fitted (dotted blue line) values for the average Δτ. The corresponding error bars for the measured values are also shown.
Fig. 2. (a) The in-house built 8-element phased array antenna. (b) Radiation pattern of a single antenna element, measured for 4 different individual elements.
Fig. 4. (a) Simulated radiation patterns of the phased array antenna at different wavelengths. The DGD values extracted from the fitted lines of Fig. 1(b) are used to perform these simulations. (b) Variations of the beam-pointing angle (shown in degrees) with the optical wavelength, found through simulations (blue line) and measurements (red squares). (c) Simulated radiation patterns of the phased array antenna with different input power distributions (a_n is the normalized amplitude of the signal fed to each of the 5 elements).
Fig. 5. Radiation patterns of the phased array antenna, measured at different optical wavelengths at an RF operating frequency of 26 GHz.
Computer-Aided Sketching: Incorporating the Locus to Improve the Three-Dimensional Geometric Design
This article presents evidence of the convenience of implementing the geometric places of the plane into commercial computer-aided design (CAD) software as auxiliary tools in the computer-aided sketching process. Additionally, the research considers the possibility of adding several intuitive spatial geometric places to improve the efficiency of three-dimensional geometric design. For demonstrative purposes, four examples are presented. A two-dimensional figure positioned on the flat face of an object shows the significant improvement over tools currently available in commercial CAD software, both vector and parametric: it is more intuitive and does not require the designer to execute as many operations. Two more complex three-dimensional examples are presented to show how the use of spatial geometric places, implemented as CAD software functions, would be an effective and highly intuitive tool. Using these functions produces auxiliary curved surfaces with points whose notable features are a significant innovation. A final example solves a geometric place problem using our own software designed for this purpose. The proposal to incorporate geometric places into CAD software would lead to a significant improvement in the field of computational geometry. Consequently, the incorporation of geometric places into CAD software could increase technical-design productivity by eliminating some intermediate operations, such as symmetry, among others, and improving the geometry training of less skilled users.
Introduction
Mathematics has been used to solve geometric problems throughout history, something that has proliferated with the possibility of establishing metric (e.g., distances or angles) and geometric (e.g., parallelism or perpendicularity) variables for analysis, or implementing such variables in the development of computer-aided design (CAD) software. Such augmentations have become increasingly powerful, such that neither hand drawing, whether with classical instruments or Euclidean tools (compass, set square, and bevel), nor in-depth knowledge of classical-geometry procedures is essential for designers anymore.
Hence, many design-related institutions have replaced some of their classical-geometry material to ensure that students receive proper training on the use of the CAD tools involved in three-dimensional design.
Background
Although graphic entities meeting certain geometric criteria have previously been suggested by renowned mathematicians and geometers, the difficulty of obtaining the graphic part has complicated their use. In the past, problems were always reduced to the plane, and geometers believed that the use of curves not drawn with a compass was inelegant.
A common consensus regarding the definition of a geometric place or locus describes it as a set of points that satisfies one or more of these properties [35]: distance, angle, or some other parametric feature. A practical locus problem is found in Aristotle's Meteorology [36].
It is common to speak of equidistance in the context of the property of distance, that is, points that are at the same distance. Thus, given a fixed point, a locus can be derived from the points that meet the condition of being at a fixed distance from the given point; this is the circumference. The concept can be expanded further: in three-dimensional space, the third dimension, z, can be used to obtain the expression r² = x² + y² + z²; this is the sphere equation, defined by points that are equidistant from a fixed point, which is the center of the sphere.
All conical curves can be defined as loci relating two entities, either as points to each other (ellipse as a sum of distances), point and line (parabola), or the difference in distances to two points (hyperbola). Furthermore, Figure 1 shows that the condition of the point changes according to the circumference.
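As a small illustration of the sum-of-distances definition (our own sketch, not part of the original article), the ellipse can be generated numerically as the locus of points whose distances to two fixed foci add up to a constant; the foci and the constant below are hypothetical values chosen for the example:

```python
# Locus sketch: points P with |P - F1| + |P - F2| = 2a trace an ellipse.
import numpy as np

F1, F2 = np.array([-3.0, 0.0]), np.array([3.0, 0.0])  # fixed points (foci)
A = 5.0                                               # half the constant sum

def on_locus(p, tol=1e-9):
    """True if point p satisfies the sum-of-distances condition."""
    return abs(np.linalg.norm(p - F1) + np.linalg.norm(p - F2) - 2 * A) < tol

# Parametrize the expected ellipse (b^2 = a^2 - c^2 with c = 3) and verify.
b = np.sqrt(A**2 - 3.0**2)
t = np.linspace(0, 2 * np.pi, 100)
points = np.stack([A * np.cos(t), b * np.sin(t)], axis=1)
print(all(on_locus(p) for p in points))  # True
```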
Equally, just as the extension of the circumference to three-dimensional space produced the sphere, the ellipse would produce the ellipsoid of revolution; the parabola, the paraboloid of revolution; and the hyperbola, the hyperboloid of revolution.
The geometric places described thus far use the property of equidistance. However, Figure 2 presents some examples of how loci may be derived from other distance properties (minimum, maximum, or proportional distance) or from angles: (c) the locus of the points of the plane that observe a circumference under the same angle; (d) lines of a given length that are tangential to a circle; (e) the construction of a triangle from a segment on a given surface.
As the geometric places method has become the common ground for solving geometric problems in engineering in the last 30 years, the most powerful software applications have begun to explicitly or implicitly exploit different possibilities of the locus concept, such as in auxiliary tools for 2D design. However, the concept has not been fully developed, leading this article to emphasize its broad possibilities, such that the most useful geometric places for 3D design are visible and implemented in commercial CAD software. The article hypothesizes that implementing a set of functions to illuminate geometric places in commercial CAD software will simplify and facilitate the design process for users. Therefore, the main objective of the paper is to demonstrate the theoretical benefits of implementing loci functions in CAD software.
The remainder of the paper is structured as follows: Section 2 explains the methods; Section 3 reflects on the findings; Section 4 presents the research's conclusions.
Methods: A Novel Focus of Locus
As shown above, the number of possible loci is virtually unlimited. This article intends to develop the theory using some of the most practical geometric places as a model, in order to implement them in 3D design CAD software in the future. Among these are those formed by lines or circumferences, which are well known in classical geometry. However, those presented here are curvilinear. The only requirement is that the CAD software can draw the curves, whether circumferences or another kind of curve. The theory must be supported with examples, presented from lesser to greater complexity, that allow intuitive understanding of the advantages of implementation, and it must include proposals for accessing the new tools so that they can be integrated as options (e.g., menus or sidebar tools) in the CAD software.
To validate the hypothesis, three examples are presented: one on the plane and two in space. The methodology for solving the examples consisted of individually considering the different geometric places involved as equations, and then finding the solution of these equations graphically [36]. The examples were solved graphically using Solidworks 2019 Premium (Dassault Systèmes, Waltham, MA, USA). Editing the images required 3ds Max from Autodesk (San Rafael, CA, USA).
In addition, to facilitate understanding of the scientific proposal and its benefits, an own-software tool developed in Adobe Authorware v7.0.2 (Adobe, San José, CA, USA) is also presented. The tool has not been implemented in any CAD software; its purpose is purely academic and demonstrative. Although the software application does not yet have all the geometric places functions implemented, its data structure is shown in Figure 3. The structure of the computer application is similar to others shown in published data [37], and it was divided into four levels of functional hierarchy. The first level is the main structure of the program. The second level serves to lead to the proper set of functions according to the number of entities involved. The third level allows the selection of the desired function, and the last one allows the user to select the entities and solve the problem. In order to clarify how the developed tool works, an extra locus example is provided and solved.
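The four-level hierarchy described above can be pictured as a small dispatch table. This is a hypothetical sketch of the idea only; the function names and the single registered locus are ours, not taken from the Authorware tool:

```python
# Minimal sketch of a four-level locus-function hierarchy:
# main program -> number of entities -> chosen function -> entity selection.
LOCUS_FUNCTIONS = {}  # levels 2/3: (n_entities, name) -> solver

def register(n_entities, name):
    def wrap(fn):
        LOCUS_FUNCTIONS[(n_entities, name)] = fn
        return fn
    return wrap

@register(2, "perpendicular_bisector")
def perpendicular_bisector(p1, p2):
    """Locus of points equidistant from two points: midpoint + direction."""
    mid = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    direction = (-(p2[1] - p1[1]), p2[0] - p1[0])  # normal to segment p1-p2
    return mid, direction

def solve_locus(n_entities, name, *entities):   # level 1 entry point
    return LOCUS_FUNCTIONS[(n_entities, name)](*entities)

print(solve_locus(2, "perpendicular_bisector", (0.0, 0.0), (4.0, 0.0)))
# -> ((2.0, 0.0), (0.0, 4.0)): the vertical line x = 2
```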
Plane Example
The first example is rather elementary: two geometric places incorporating the concept of distance, the perpendicular bisector and the bisector.
The challenge was, given the outer perimeter of the flat face of an object, which was formed by three straight edges and two arcs of circumference (Figure 4a), to locate the slot of known dimensions so that its center was on the bisector of the angle formed by the non-perpendicular straight edges and at the same distance from the perimeter cut-off points (Figure 4b).
Using parametric CAD software such as SolidWorks, Solid Edge, Autodesk Inventor or CATIA, among others, we would initially draw a similar shape and then fit it to the real model by applying dimensional and geometric restrictions. To draw the bisector line, we started from a given point (the center of the left-end circumference); however, if we needed to find another point, we would, for example, draw another circle tangential to the two lines, join the centers of both circles, and extend the line to the perimeter. Then, the slot could be drawn and placed at the midpoint of that line.
Using a vector program (e.g., AutoCAD), the bisector could be drawn traditionally or using a tangent circle. Then, we would extend one or both and cut off the bisector to locate the midpoint. To find the centers of the circumferences of radius 12 of the slot, we would have to use another circumference of radius 45/2, draw both circumferences, trace the tangent lines on both sides, and then cut to eliminate the excess lines.
Thus, a possible alternative based on the geometric places method is to combine two of the most obvious geometric places: the bisector, the geometric place of the points at the same distance from two lines, and the perpendicular bisector, the geometric place of the points at the same distance from two points. The intersection of these two geometric places determines the center of the slot.
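In equation form, the construction reads as follows (a minimal sketch; the labels r_1, r_2 for the two non-perpendicular edges and A, B for the perimeter cut-off points are our own):

```latex
% Bisector: points P equidistant from the two lines r_1, r_2
d(P, r_1) = d(P, r_2)
% Perpendicular bisector: points P equidistant from the two points A, B
\lVert P - A \rVert = \lVert P - B \rVert
% The centre C of the slot is the intersection of the two loci:
C:\; d(C, r_1) = d(C, r_2) \quad \text{and} \quad \lVert C - A \rVert = \lVert C - B \rVert
```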
The simplicity of this approach makes it difficult to appreciate the advantages of using these geometric places. However, it can be demonstrated that this approach can be applied to a multitude of cases that previous methods could not so easily solve. Most relevant, however, is that this methodology can be easily applied to three-dimensional space, where it is more complicated to find the correct solution intuitively.
First, it is necessary to demonstrate the usefulness of the concept in three-dimensional space. Two examples are presented. In the first example, curved geometric places are not needed; however, the second example does involve curved surfaces.
Spatial Example 1
One of the simplest spatial geometric places is the plane bisecting two planes, defined as the plane formed by the points that are equidistant from the two planes. When the planes are parallel, the bisecting plane is also parallel to them, and very often the nerves (the stiffening elements of a part) are located in this bisecting plane. An example is shown in Figure 5. This geometric place would directly provide the plane that positions the nerve, without having to rely on parallelism or perpendicularity restrictions, so that it could be generated even with more than one pair of faces (the outer or the inner planes of the fins). The advantage over other options stands out.
Figure 5. This is an example of a part with a nerve whose placement would be much easier if based on the bisecting plane concept.
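For parallel faces, this locus has a simple closed form (a minimal sketch with generic plane equations of our own choosing):

```latex
% Parallel planes sharing the unit normal (a, b, c):
\pi_1:\; ax + by + cz = d_1, \qquad \pi_2:\; ax + by + cz = d_2
% Their bisecting plane is the locus d(P, \pi_1) = d(P, \pi_2):
\pi_b:\; ax + by + cz = \tfrac{1}{2}(d_1 + d_2)
```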
Spatial Example 2
Figure 6 shows a pyramidal structure with a quadrangular base that was modeled to fulfill the following requirements: first, a quadrilateral base defined by two horizontal diagonals perpendicular to each other (Figure 6a); second, a vertex of the pyramid, where the four lateral edges meet, located at the center of a spherical ball held by a mechanical arm attached to the ceiling at a given place (Figure 6b). The mechanical arm had to have a fixed length and be able to rotate in any direction (L = the distance from the ball to the rotation center of the arm). Furthermore, the lateral edges had to start from each diagonal, forming a 100° angle at the vertex.
The problem here was positioning the small sphere so as to meet the conditions described while keeping the lengths of the four lateral bars of the pyramidal structure as short as possible.
The following geometric places were involved:
• Locus 1: the geometric place of the points in space that observe one of the segments at a 100° angle (a spatial "arc capable"; in other words, the surface of revolution generated by a 100° capable arc that spans one of the arms of the cross);
• Locus 2: the geometric place of the points in space that observe the other segment at a 100° angle (a 100° capable arc that spans the other arm of the cross);
• Locus 3: the geometric place of the points in space that are at a distance L from the rotation center of the rotating arm (a sphere).
Once the geometric places involved have been determined, all possible solutions can be found. Different approaches are equally valid: solving the intersection of Locus 1 and Locus 2 and then finding its intersection with the sphere (there would be two solutions on the sphere's surface); solving the intersection of Locus 1 and Locus 3 and then finding its intersection with Locus 2; or solving the intersection of Locus 2 and Locus 3 and then finding its intersection with Locus 1.
This research uses the last approach as an example, through a series of images (Figure 7) combining solid and wireframe models and transparencies for a better understanding of the shapes and the intersections of the geometric places.
The initial data were the two cross-shaped segments, the point on the ceiling on which the arm can rotate, and the length of the arm (Figure 7a). A perspective view was chosen as more intuitive than dihedral projections. Figure 7d shows several superior (top-view) projections so that the surfaces generated through the "arc capable" loci (Locus 1 and Locus 2) are not confused with spheres (they resemble rugby balls more than spheres). Figure 8a adds a hemisphere as the geometric place where the lower end of the fixed-length arm can move when rotating on its ceiling anchor. Figure 8b uses transparent solid models to enable a comparison of the three geometric places. To better illustrate the example, Figure 9 shows wireframe models of the different geometric places. Any of the three procedures would lead to an identical solution: the location of the exit points. In this case, the only two possible points are those that appear in Figure 9h as points A and B; these are the result of finding the intersection of the three geometric places.
Finally, we considered the lengths of the resulting bars. The two solutions meeting the initial requirements were deemed correct.
The procedure's mathematics would comprise solving the equation system generated by applying the boundary conditions. In other words, the three graphic elements would be taken to an analytical level. However, it has been demonstrated that the images are much more illustrative than the mathematics would be, especially for the geometric places that correspond to a spatial "arc capable".
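As a sketch of that analytical formulation (the labels are our own: the diagonals as segments A_1A_2 and B_1B_2, the ceiling anchor O, and the sought vertex V):

```latex
% Loci 1 and 2: V sees each diagonal under a 100-degree angle
\angle A_1 V A_2 = 100^\circ \;\Longleftrightarrow\;
\frac{(A_1 - V)\cdot(A_2 - V)}{\lVert A_1 - V\rVert\,\lVert A_2 - V\rVert} = \cos 100^\circ
% (and the analogous condition for B_1, B_2)
% Locus 3: V lies on the sphere of radius L about the anchor O
\lVert V - O \rVert = L
% Radius of the capable arc over a chord of length c seen under 100 degrees:
R = \frac{c}{2\sin 100^\circ}
```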
Locus Example Solved with Developed Tool
In this example, a basic case of a locus equidistant from two entities is presented. Two circumferences are given (with known diameters and centers), and the task is to find the locus of the centers of all circumferences that are externally tangent to the upper circumference and internally tangent to the circumference placed at the bottom. The developed tool provides the solution to this problem after the user clicks on the proper function icon and then clicks on the two given entities in the proper order. The tool automatically represents the solution by generating a construction curve (Figure 10a, in green) on which lie all the centers of the circumferences tangent to both entities under the conditions of the statement (some of them are represented in Figure 10b, in black).
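This particular locus also has a classical closed form, which explains why the construction curve is a smooth conic (a minimal sketch; C_1, r_1 denote the bottom circumference and C_2, r_2 the upper one, labels of our own choosing):

```latex
% Let the sought circle have centre P and radius r.
% Internal tangency to circle (C_1, r_1):  \lVert P - C_1 \rVert = r_1 - r
% External tangency to circle (C_2, r_2): \lVert P - C_2 \rVert = r_2 + r
% Adding the two conditions eliminates r:
\lVert P - C_1 \rVert + \lVert P - C_2 \rVert = r_1 + r_2
% i.e. an ellipse with foci C_1, C_2 and major axis r_1 + r_2.
```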
Reflections
Before the advent of computers, the geometric places that could be represented using traditional instruments or Euclidean tools (compass, set square, and bevel) were used in the service of drawing designs. These geometric places were based on very simple concepts, such as distance or angle equality, and could be easily drawn with such tools.
Though the first personal computers had little graphical processing power, when graphical processing became more powerful, applications that reproduced human design work were released, albeit with limited scope. Over time, they evolved to take advantage of their increasing calculation capacity, providing automatic solutions to the problems of classical geometry. Still, the lines or curves implied by those calculations were not made visible.
More recently, CAD software has been significantly improved through the incorporation of relative-position relationships between entities; these relationships include parallelism, perpendicularity, concentricity, tangency, and distance equality.
Methodologies have changed in recent years, allowing graphic entities to adapt to any given design condition through restrictions or parameters, which improves functionality. However, the inability to use equidistant curves complicates the choice of appropriate initial positions for the entities, which must later be adjusted to their final state by incorporating said restrictions or parameters. Hence, this research has substantial potential for practical application.
Upon understanding the procedure's dynamics-especially given the infinite number of geometric places that can be determined-it is convenient to limit the system to illuminating only the most useful geometric places, a feature that 3D design experts would like CAD software to have.
A broader analysis could be carried out in space alone, since a planar locus can be regarded as the intersection of the corresponding spatial geometric place with a plane. However, given that planar loci are more recognizable, they are considered separately.
Given that this article demonstrates that the incorporation of geometric places into CAD software represents an improvement in 3D CAD design, the main objective of this research is to promote the functionality and competitiveness achieved by taking the system to 3D space. In this context, the first example enabled the identification of auxiliary operations of classical geometry, including the bisector of an angle formed by two edges and the perpendicular bisector of a segment, which have not yet been implemented in the most commonly used CAD software. The second example identified the intuitive functions of geometric places in space, including the bisecting plane of two planes, the plane equidistant from two points, and the paraboloid equidistant from a point and a plane. Currently, these geometric places are not obtained in CAD software through the methodology used in this research; instead, they are achieved through the designer's selection of basic entities, such as line and point, three points, point and perpendicular to a line, or a parallel line. The contribution of these new functions would provide alternative options for the designer. The third example showed that space loci that have not been previously used have properties that make them valuable for the development of complex designs; these enable a designer to use points with simultaneous properties, located at the intersections of the geometric places, without the need for mathematical operations and substantial numbers of intermediate operations. Finally, the last example served to identify the ease with which geometric place problems could be solved if loci functions were implemented in CAD software.
Conclusions
This article has presented the geometric places of the plane and the possibilities for improvement derived from their implementation in commercial CAD software as auxiliary tools in the CAD process. The possibility of adding several extremely intuitive curved geometric places to extend the limited options of traditional geometry has also been considered.
Some examples have been used to demonstrate these ideas. First, an example of a 2D figure positioned on an object's flat face represents a significant improvement over the vector and parametric tools used in commercial CAD software because it is more intuitive and does not require the designer to execute as many operations.
Second, two more complex 3D examples were presented. In the last one, the use of spatial geometric places not previously used as auxiliary elements in the design process was shown to be an effective and highly intuitive tool. This approach provides curved surfaces whose points have notable features, a considerable innovation. Its incorporation into CAD software in the near future could greatly simplify the work of designers.
Third, a locus example solved with a custom software tool has shown the potential of implementing loci in CAD software.
The research has demonstrated the convenience of carrying out a systematic study of spatial geometric places and testing their relevance to improving 3D-design generation in commercial CAD software.
Thus, it has been made clear that the functionality of both 2D design and 3D design would be significantly improved by automating the process of finding the geometric places in CAD software. The design process would also become more logical and intuitive, and the number of operations would be reduced; designers would be able to find solutions that current parametric design programs do not provide.
The following actions are proposed following the incorporation of the geometric places of the plane:
• An analytical and graphical study of the most relevant spatial geometric places to take advantage of current calculation capacities; we also propose the creation of new initiatives promoting spatial geometric places in technical design;
• Communication with 3D CAD software developers about the convenience of incorporating the most relevant plane geometric places, including those currently deemed ineffectual due to being curved lines not generated by circumferential arcs;
• An investigation into the best approach to introducing loci functions within CAD programs to users, and insight into situations where these functions are able to solve a geometric problem, especially in more complex loci cases.
"Computer Science",
"Engineering"
] |
Fast Pedestrian Recognition Based on Multisensor Fusion
A fast pedestrian recognition algorithm based on multisensor fusion is presented in this paper. Firstly, potential pedestrian locations are estimated by laser radar scanning in world coordinates, and their corresponding candidate regions in the image are then located through camera calibration and a perspective mapping model. To avoid the time consumed in training and recognition by high-dimensional feature vectors, a region-of-interest-based integral histogram of oriented gradients (ROI-IHOG) feature extraction method is then proposed. A support vector machine (SVM) classifier is trained on a novel pedestrian sample dataset adapted to the urban road environment for online recognition. Finally, we test the validity of the proposed approach with several video sequences from realistic urban road scenarios. Reliable and timely performance is shown by our multisensor fusion method.
Introduction
Pedestrians are vulnerable participants among all objects involved in the transportation system when crashes happen, especially those in motion under urban road scenarios [1]. In 2009, the first global road safety assessment report of the World Health Organization found that traffic accidents are among the major causes of death and injury around the world. Pedestrians are involved in 41% to 75% of fatal road traffic accidents, and the fatality risk of pedestrians is four times that of vehicle occupants. Therefore, pedestrian safety protection should be taken seriously [2].
System Architecture
The research on pedestrian recognition is carried out on the multisensor vehicle platform shown in Figure 1. This experimental platform is a modified Jetta. It is equipped with a vision sensor, a laser scanner, and two near-infrared illuminators to detect pedestrians within a 90° field in front of the vehicle.
The architecture of the proposed multisensor pedestrian detection system is shown in Figure 2. The system runs on a PC with an Intel Core i5 CPU at 2.27 GHz and 2.0 GB of RAM. The system includes offline training and online recognition. For offline training, a novel pedestrian dataset adapted to the urban road environment is established first, and the pedestrian classifier is then trained by SVM. For online recognition, a Sony SSC-ET185P camera installed at the top front of the experimental vehicle captures continuous 320 × 240 images. Potential pedestrian candidate regions are identified in the image using radar data from a SICK LMS211-S14 laser scanner and the perspective mapping model between world coordinates and image coordinates. For each image, all candidate regions are scaled to 64 × 128 pixels and judged by the classifier trained offline.
Sensor Selection
The Sony SSC-ET185P camera was chosen for several reasons. It has high color reproduction and produces sharp images. It includes an 18× optical zoom and a 12× digital high-quality zoom lens with autofocus, so the camera can capture high-quality color images during the day. Although the system is currently tested under daylight conditions, two near-infrared illuminators are mounted on both sides of the laser radar in front of the vehicle, providing specific illumination for object detection so that the application can be extended to night use.
The laser scanner is a SICK LMS211-S14. Its detection capabilities (a scanning angle of 90°, a minimum angular resolution of 0.5°, and a range of up to 81.91 m) are suitable for our goal. The laser scanner scans a single plane of data; its ranging principle is the time-of-flight method, in which light pulses are emitted toward the target and the round-trip time of flight is measured to determine the distance. One scan takes 13 ms, which meets real-time needs.
Vehicle Setup
The laser scanner and the two near-infrared illuminators are mounted horizontally in the front bumper, as shown in Figure 3(a). The camera is placed at the top front of the vehicle, on the same centerline as the laser scanner, as shown in Figure 3(b). The horizontal distance between the camera and the laser scanner is 2.3 m, and the camera height is 1.6 m; these are two key parameters of the camera calibration.
A MINE V-cap 2860 USB device connects the camera to the PC. An RS-422 industrial serial interface and a MOXA NPort high-speed card provide an easy connection between the laser scanner and the PC. Figure 4 shows the hardware integration of the proposed system.
Potential Pedestrian Location Estimation
Most current pedestrian detection methods depend solely on visual sensors and cannot meet real-time requirements. In our work, we use the laser radar sensor to detect obstacle locations as potential pedestrian positions in world coordinates, and we then use the camera calibration and the space-image perspective mapping model to mark the pedestrian candidate regions in the image. The pedestrian recognition algorithm proposed later is performed only on the candidate regions instead of the entire image, which effectively reduces the computational time cost and yields good real-time performance.
In our experimental platform, a SICK LMS211-S14 laser scanner is utilized. The scanning angle is 90° in front of the host vehicle, with a minimum angular resolution of 0.5°, as shown in Figure 5. Thus, we obtain 181 data arrays from one radar scan. Each data array includes two parameters: the angle and the distance between the obstacle and the host vehicle. The scan can be denoted as {(ρ_i, θ_i) | i = 1, 2, …, m}, where m is the total number of arrays and (ρ_i, θ_i) is the data of the ith array. Obviously, a set of laser beams reflected from the same target should have similar distances and similar angles. Based on this, a clustering method is applied to the 181 data points to determine which of them belong to the same target; the clustering criterion compares adjacent returns using the minimum angular resolution φ of the radar, the distance r_k of the kth array, and two distance thresholds D_1 and D_2. According to the installation height of the radar, the scan plane passes at about the height of a pedestrian's knees. Taking into account the actual physical characteristics of pedestrian legs, which may be separated or closed in space, D_1 and D_2 are set to 10 cm and 70 cm, respectively. Then, the potential pedestrian location parameters (the start data, the end data, and the number of data points of each target) are recorded. The target distance is expressed as the average distance of all beams from the target, ρ_a = (ρ_1 + ρ_2 + ⋯ + ρ_n)/n, and its direction is represented as θ_a = (θ_j1 + θ_j2)/2, where θ_j1 is the first angle value of the target and θ_j2 is the last one. Finally, the radar data (r, θ) are converted from polar coordinates to Cartesian coordinates (x, y), which represent the target location in space. The possible pedestrian locations are 2D data in world coordinates. Their corresponding regions in the image are then located by a piecewise camera calibration and the perspective space-image mapping model, which projects each location into the image in order to identify the region and the scale at which to search for pedestrians. The camera height is 1.6 m, which is a parameter of the camera calibration. The space-image mapping model relates the location parameters (X_w, Y_w, Z_w) in world coordinates to the corresponding parameters (u, v) in image coordinates. We divided the detection area into four sections in order to determine the mapping model parameters for (u, v) more accurately, section by section, using the least squares method.
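To make the segmentation step concrete, here is a minimal Python sketch. The two-stage grouping rule and the orientation convention (θ measured from the x-axis, so x = ρ·cos θ, y = ρ·sin θ) are our own illustrative assumptions, since the paper's exact clustering inequality is not reproduced in this text; only the thresholds D_1 = 0.1 m and D_2 = 0.7 m and the averaging formulas come from the paper.

```python
import math

def cluster_radar_returns(scan, d1=0.10, d2=0.70):
    # scan: list of (rho, theta) tuples ordered by angle; 181 returns for a
    # 90-degree scan at 0.5-degree resolution. Hypothetical two-stage rule:
    # adjacent returns whose range difference is below d1 form one segment
    # (e.g., one leg); neighbouring segments whose ranges differ by less
    # than d2 are merged (two separated legs of one pedestrian).
    segments, current = [], [scan[0]]
    for prev, cur in zip(scan, scan[1:]):
        if abs(cur[0] - prev[0]) < d1:
            current.append(cur)
        else:
            segments.append(current)
            current = [cur]
    segments.append(current)

    targets, current = [], segments[0]
    for seg in segments[1:]:
        if abs(seg[0][0] - current[-1][0]) < d2:
            current = current + seg
        else:
            targets.append(current)
            current = seg
    targets.append(current)
    return targets

def target_location(cluster):
    # Mean range of all beams, mean of the first/last angle, then the
    # standard polar-to-Cartesian conversion (theta from the x-axis).
    rho_a = sum(r for r, _ in cluster) / len(cluster)
    theta_a = 0.5 * (cluster[0][1] + cluster[-1][1])
    return rho_a * math.cos(theta_a), rho_a * math.sin(theta_a)
```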
In order to detect pedestrians more accurately and faster, we determine the size of the candidate pedestrian imaging region at different distances in front of the vehicle. We assume a pedestrian template 2 m high and 1 m wide (slightly larger than a real pedestrian). The relationship between the width and height of the pedestrian imaging region and the pedestrian location in space was found by a calibration experiment. The potential pedestrian region's width and height in the image can be denoted as h = 1402·y^(−0.97) and w = 723.2·y^(−0.99), where y is the forward distance from the target to the host vehicle.
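The fitted power laws translate directly into a candidate-window helper; this is a minimal sketch using the paper's coefficients (the rounding convention is our own choice):

```python
def candidate_region_size(y):
    """Width and height (pixels) of the candidate pedestrian region at
    forward distance y (m), from the calibration fits h = 1402*y**-0.97
    and w = 723.2*y**-0.99."""
    h = 1402.0 * y ** -0.97
    w = 723.2 * y ** -0.99
    return int(round(w)), int(round(h))

# e.g. at y = 10 m the region is roughly 74 x 150 pixels
```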
Feature Representation
In 2005, Dalal and Triggs [22] proposed grids of histogram of oriented gradients (HOG) descriptors for pedestrian detection. Experimental results showed that HOG feature sets significantly outperformed existing feature sets for human detection. However, the HOG-based algorithm is too time consuming, especially for multi-scale object detection, and must be further optimized to be suitable for real-time pedestrian safety protection.
In this paper, for fast pedestrian detection, the region of interest (ROI) of a pedestrian sample is found by calculating the average gradient of all positive samples in the RSPerson dataset mentioned below. We find that the gradient features at the head and limbs of pedestrian samples are the most distinctive. On the other hand, the gradients of the background area in the sample image contribute little to pedestrian detection and may even disturb processing performance. Therefore, in order to reduce the HOG feature vector dimension of a whole image (3780 dimensions), several important areas are selected as ROIs of a sample image for calculating the HOG feature. Accordingly, the computation of the HOG feature is greatly reduced and pedestrian recognition speed is improved. Through analysis of the average gradient value of pedestrian samples, shown in Figure 6, four regions of interest are identified: the head region, the leg region, the left arm region, and the right arm region. These regions may partially overlap each other and basically cover the body's contours. For a color image, the gradients of each color channel are calculated, and the gradient with the largest magnitude among the three color channels is selected as the gradient vector of the pixel. The optimal ROI locations, widths, and heights in a sample image are shown in Table 1.
Similar to Dalal's method, for calculating the feature vector of the ROIs in a detection window, the cell size is defined as 8 × 8 pixels and the block size as 2 × 2 cells. The window's scan step is 8 pixels, the width of a cell. A total of 49 blocks can be extracted in a detection window. For each pixel (x, y) in the image, the gradient vector is denoted as ∇g(x, y) = (∂f(x, y)/∂x, ∂f(x, y)/∂y). In general, the one-dimensional centrosymmetric template operator (−1, 0, 1) is used to calculate it:

g_x(x, y) = f(x + 1, y) − f(x − 1, y),  g_y(x, y) = f(x, y + 1) − f(x, y − 1). (4.1)
Accordingly, the gradient magnitude is calculated as

|∇g(x, y)| = √(g_x(x, y)² + g_y(x, y)²). (4.2)
The gradient orientation is unsigned; it is defined as

θ(x, y) = arctan(g_y(x, y) / g_x(x, y)) + π/2. (4.3)
To compute the gradient histogram of a cell, each pixel casts a vote, weighted by its gradient magnitude, for the bin corresponding to its gradient orientation. All gradient orientations are grouped into 9 bins. Thus, every block has a gradient histogram with 36 dimensions, and the ROI-HOG feature vector has 49 × 36 = 1764 dimensions. Furthermore, integral histograms of oriented gradients (IHOG) [23] are utilized to further speed up the feature extraction process. The histogram of oriented gradients of the pixel (x, y) can be expressed as

T(x, y) = (g_1, …, g_i, …, g_9), (4.4)

where g_i is the gradient magnitude of the pixel if its orientation falls into bin i, and zero otherwise. The integral feature vectors in the x-orientation are

S(x, y) = S(x − 1, y) + T(x, y), (4.5)

and the integral feature vectors in the y-orientation are

H(x, y) = H(x, y − 1) + S(x, y). (4.6)

As shown in Figure 7, the IHOG of a cell with opposite corners (x_1, y_1) and (x_2, y_2) can be calculated as

HOG_CELL = H(x_2, y_2) − H(x_1, y_2) − H(x_2, y_1) + H(x_1, y_1). (4.7)

Accordingly, the IHOG of a block is calculated as

HOG_BLOCK = (HOG_CELL-1, HOG_CELL-2, HOG_CELL-3, HOG_CELL-4). (4.8)
The IHOG method only needs to scan the entire image once and store the integral gradient data. The HOG feature of any area can then be obtained with simple addition and subtraction operations, without repeatedly calculating the gradient orientation and magnitude of each pixel.
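As an illustration of this bookkeeping, a minimal NumPy sketch follows (NumPy and the use of arctan2 to get an unsigned orientation in [0, π) are our own choices; the bin count and the rectangle formula follow Equations (4.1)-(4.7)):

```python
import numpy as np

def integral_hog(gray):
    """Integral histogram of oriented gradients, 9 unsigned bins."""
    gray = gray.astype(float)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # (-1, 0, 1) operator, eq. (4.1)
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)                      # eq. (4.2)
    ang = np.mod(np.arctan2(gy, gx), np.pi)     # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * 9).astype(int), 8)

    h, w = gray.shape
    T = np.zeros((h, w, 9))                     # per-pixel votes, eq. (4.4)
    T[np.arange(h)[:, None], np.arange(w)[None, :], bins] = mag
    return T.cumsum(axis=0).cumsum(axis=1)      # eqs. (4.5)-(4.6)

def cell_hog(H, x1, y1, x2, y2):
    """HOG of the rectangle with exclusive top-left (x1, y1) and inclusive
    bottom-right (x2, y2), by four lookups as in eq. (4.7)."""
    return H[y2, x2] - H[y1, x2] - H[y2, x1] + H[y1, x1]
```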
Sample Selection for Training
For pedestrian recognition in the urban road environment, we built a pedestrian sample dataset called RSPerson (Person Dataset of Road System). In this dataset, the positive samples include walking pedestrians, pedestrians standing still, and groups of pedestrians with different sizes, poses, gaits, and clothing. Some pre-experimental studies have shown that the selection of negative samples is particularly important for the reduction of false alarms. Thus, tree trunks, trash cans, telegraph poles, and bushes, which are likely to be mistaken for pedestrians, as well as some normal objects such as roads, vehicles, and other infrastructure, are selected as negative samples. This is most beneficial for our pedestrian detection system. In the RSPerson dataset, each sample image is normalized to 64 × 128 pixels for training. Figure 8 shows some samples of the RSPerson dataset.
Pedestrian Recognition with SVM
For online recognition, once the potential pedestrian locations have been found by the laser radar, candidate regions in the image are determined accordingly by the perspective mapping model. Each candidate region is scaled to 64 × 128 pixels, and its ROI-IHOG feature vector is extracted. Based on these steps, we judge whether the candidate is a true pedestrian using the classifier trained with SVM.
Experimental Results
To test the validity of the proposed method, several video sequences from realistic urban traffic scenarios are used to assess the performance of our pedestrian recognition experimental platform. Firstly, candidate pedestrian locations are estimated based on laser radar data processing and the space-image perspective mapping model. Some candidate region segmentation results are shown in Figure 10. In this way, potential pedestrian regions are located in the image, but some other obstacles (poles, shrubs, etc.) are also marked as candidates.
Secondly, the proposed ROI-IHOG + SVM algorithm is tested on several video sequences. In this step, pedestrian recognition depends only on ROI-IHOG + SVM applied to the entire image, without fusing the laser information. The recall reaches 93.8% at 10⁻⁴ FPPW. The image size is 320 × 240 pixels, and the average detection time is about 600 ms/frame. Some detection results are shown in Figure 11.
Finally, fusing information from the laser and vision sensors, each detected candidate region is scaled to 64 × 128 pixels and its ROI-IHOG feature is extracted. According to our recognition method, the candidate region is judged as a pedestrian or not by the classifier trained with SVM. Based on multisensor fusion, the average detection time is about 18 ms per candidate. Thus, with an average of 5 candidate regions per image, the processing time is about 90 ms per frame, i.e., roughly 11 frames/s, which satisfies the real-time requirement. Several recognition results (Figure 12) indicate that the proposed multisensor-fusion pedestrian detection approach performs well and can provide effective support for active pedestrian safety protection.
Conclusions
A fast pedestrian recognition algorithm based on multisensor fusion is developed in this paper. Potential pedestrian candidate regions are located by laser scanning and the perspective mapping model, and a ROI-IHOG feature extraction method is proposed to reduce the computational time cost. Moreover, SVM is utilized with a novel pedestrian sample dataset adapted to the urban road environment for online recognition. Pedestrian recognition is tested with radar only, vision only, and the two sensors fused. Reliable and timely performance is shown for fusion-based pedestrian recognition; the processing speed reaches 11 frames/s, which satisfies the real-time requirement. In future work, we will further study key technologies for pedestrian safety, such as pedestrian tracking, pedestrian behavior recognition, and conflict analysis between pedestrians and the host vehicle.
Figure 2: Architecture of the proposed multisensor pedestrian recognition system.
Figure 3: (a) Installation location of the laser scanner and near-infrared illuminators. (b) Installation location of the camera.
Figure 4: Hardware integration of the proposed system.
Figure 5: The sketch of radar scanning.
Figure 10: Pedestrian candidate region estimation results under different urban scenarios.
Table 1: Location of ROI in the sample.
Before recognizing pedestrians online, a classifier is constructed offline and trained by the SVM algorithm. Firstly, a training dataset and a test dataset are built from the RSPerson dataset. The training dataset includes 2000 pedestrian and 2000 non-pedestrian samples, and the testing dataset includes 500 pedestrian and 500 non-pedestrian samples. The training samples are processed, and features are extracted to form training vectors. With cross-validation based on a grid search method, the proper parameters of the SVM are selected: the RBF kernel is chosen as the kernel function, with penalty factor C = 1024 and kernel parameter g = 0.0625. After that, the pedestrian classifier is constructed. Finally, the testing samples are used to assess the performance of the classifier. We use the DET curve, which contains two indicators, miss rate and FPPW (false positives per window), to evaluate the performance of the SVM classifiers. The performance of pedestrian recognition based on ROI-IHOG is shown in Figure 9.
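As a concrete illustration of this training step, here is a minimal scikit-learn-style sketch using the reported kernel parameters (scikit-learn and the file names are our own illustrative choices; the authors' actual implementation is not named in the text):

```python
from sklearn.svm import SVC
import numpy as np

# X_train: (4000, 1764) ROI-IHOG vectors; y_train: 1 = pedestrian, 0 = not.
# Shapes follow the dataset sizes and the 1764-dimensional feature above.
X_train = np.load("rsperson_train_features.npy")   # hypothetical file names
y_train = np.load("rsperson_train_labels.npy")

clf = SVC(kernel="rbf", C=1024, gamma=0.0625)      # parameters from the grid search
clf.fit(X_train, y_train)

def is_pedestrian(roi_ihog_vector):
    """Classify one 1764-dimensional candidate feature vector."""
    return clf.predict(roi_ihog_vector.reshape(1, -1))[0] == 1
```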
"Computer Science",
"Engineering"
] |
A null mutation in CABP4 causes Leber's congenital amaurosis-like phenotype.
PURPOSE
To describe the finding of a novel calcium binding protein 4 (CABP4) mutation in a family with Leber congenital amaurosis (LCA) phenotype.
METHODS
Homozygosity mapping was performed in a consanguineous family with four affected members originally referred as cases of LCA. Detailed electroretinographic recordings were obtained.
RESULTS
A novel homozygous single base-pair insertion was identified in all four siblings. The patients had an LCA-like phenotype, including either flat or greatly diminished electroretinographic activity.
CONCLUSIONS
This report significantly expands on the phenotype associated with calcium binding protein 4 mutations, which has so far been limited to congenital stationary night blindness, and further demonstrates how molecular data often blur the boundaries between what are believed to be clinically distinct retinal disorders.
The remarkable organization of retinal cells in layers ensures the efficient transmission of the phototransduction cascade that is initiated in photoreceptors to second- and third-order neurons, before it is transmitted through the optic nerve. Perturbation of the phototransduction cascade or of its transmission adversely affects vision. The resulting phenotype is known as congenital stationary night blindness (CSNB). Unlike retinal dystrophies, CSNB represents a functional retinal defect, since the clinical appearance of the retina in these patients is largely normal and a detailed functional assessment of the retina is required to make the diagnosis. Several proteins involved in this transmission have been implicated in CSNB [7][8][9]. These proteins are encoded by genes that are both autosomal and X-linked, which explains why CSNB follows different modes of inheritance. The gene encoding calcium-binding protein 4 (CABP4) was recently identified by the candidate gene approach as a disease gene in autosomal recessive CSNB [10]. Several characteristics made it an attractive candidate: 1) its cellular localization to the synaptic terminals of photoreceptors [11]; 2) its physical association with, and capacity to modulate the activity of, Cav1.4α, a calcium channel that mediates the synaptic release of the neurotransmitter glutamate from photoreceptors in the dark [11]; and 3) the CSNB phenotype of Cabp4 knockout mice [12]. Compared to the wild type, Cabp4−/− mice manifest a 50% reduction of the "a" wave (generated by photoreceptors), and their "b" wave (generated by bipolar cells) is even more markedly reduced [12]. So far, a total of three human mutations have been reported in this gene [10,13]. Despite the variability of the phenotype associated with these mutations, electroretinography (ERG) findings were consistent with CSNB. The purpose of this paper is to describe a novel frameshift mutation in CABP4 predicted to be more severe in nature than those reported previously. The four patients had a clinical presentation highly reminiscent of Leber congenital amaurosis (LCA: congenital onset, nystagmus, severe visual impairment, and severely diminished or extinguished ERG), which expands the spectrum of the CABP4-associated phenotype.
METHODS
Human subjects: Four siblings (3 sisters and 1 brother) ranging in age from 6 to 16 years were referred to King Faisal Specialist Hospital and Research Center for workup of nystagmus, photophobia, and decreased visual acuity; the referral diagnosis was LCA. They were recruited under an IRB-approved research protocol (REC#2070023, KFSHRC) and a written informed consent was signed by each of the subjects. All four patients underwent a comprehensive ophthalmological evaluation that included best-corrected visual acuity (VA), slit-lamp examination, funduscopy, and full-field ERG (in accordance with International Society for Clinical Electrophysiology of Vision standards) under both scotopic and photopic conditions. Dilated fundoscopic photographs were acquired using Topcon Fundus Camera (Topcon Medical Systems Inc., Paramus, NJ). DNA extraction and genotyping: DNA was extracted from whole blood using a Gentra Puregene Blood Kit (Qiagen, Valencia, CA). Genomewide genotypes were obtained using the Affymetrix SNP 250K Chip platform (Affymetrix, Santa Clara, CA) following the manufacturer's instructions. Blocks of homozygosity were identified using the Affymetrix® Genotyping Console™ (Affymetrix) as described in previous studies [14,15]. Briefly, we use the default settings of the Genotyping Console™ to identify runs of homozygosity. These are cross checked against a comprehensive list obtained from the literature of all genes known to cause hereditary retinal disorders to prioritize genes for sequencing. Mutation analysis: CABP4 was PCR-amplified using primers that covered the entire coding sequence, as well as the flanking intronic sequences. PCR amplification was performed on a thermocycler (Applied Biosystems, Foster City, CA) in a total volume of 25 µl. PCR primers, as well as reaction conditions, are available upon request. PCR amplicons were submitted for bidirectional sequencing using an Amersham ET Dye Terminator Cycle Sequencing Kit (Amersham Biosciences; Piscataway, NJ) following the manufacturer's instructions. Sequence analysis was performed using the SeqManII module of the Lasergene (DNA Star Inc., Madison, WI) software package, with a normal sequence used for comparison.
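To illustrate the run-of-homozygosity idea behind this step, here is a minimal toy sketch (the genotype encoding and the length threshold are our own illustrative assumptions; the actual Genotyping Console algorithm and its default settings are not reproduced here):

```python
def runs_of_homozygosity(calls, min_snps=50):
    """Find runs of consecutive homozygous SNP calls.

    calls: list of (snp_id, genotype) where genotype is e.g. 'AA', 'AB', 'BB'.
    Returns (start_index, end_index) pairs for runs of at least min_snps
    homozygous calls; min_snps = 50 is an arbitrary illustrative threshold.
    """
    runs, start = [], None
    for i, (_, gt) in enumerate(calls):
        homozygous = len(set(gt)) == 1          # 'AA' or 'BB'
        if homozygous and start is None:
            start = i
        elif not homozygous and start is not None:
            if i - start >= min_snps:
                runs.append((start, i - 1))
            start = None
    if start is not None and len(calls) - start >= min_snps:
        runs.append((start, len(calls) - 1))
    return runs
```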
RESULTS
Clinical phenotype: Four siblings from a single Bedouin family, ranging in age from 6 to 16 years, were enrolled in this study. The parents are first cousins and belong to a large tribe in the central region of Saudi Arabia. To the best of our knowledge, there is no history of CSNB among the extended family members. The parents were unaffected, and the sibship includes two unaffected siblings (Figure 1).
The index patient (II-3) was a 15-year-old Saudi girl with an unremarkable perinatal history. The family noticed her poor vision around the age of 2-3 months, evidenced by nystagmus and a lack of fixation and tracking. She continued to have very poor vision as a child and, despite prior evaluations, no specific diagnosis was given to this patient other than that she had a "retinal disorder." There was a history of photophobia, but not night blindness; color vision was normal. Our ophthalmological evaluation revealed the presence of nystagmus, eccentric fixation, and poor visual acuity, with a best-corrected VA of 20/400 in both eyes. This is no different from the VA obtained in early childhood, indicating that her vision loss was stationary in nature, consistent with the history volunteered by the family. Fundoscopy was largely normal, apart from a decreased foveal reflex. ERG was extinguished under both photopic and scotopic conditions. The clinical profile of the other affected siblings was strikingly similar (Table 1 provides detailed ophthalmological findings for all affected siblings). However, the ERG pattern of II-4 was notably different in that it was not extinguished, but rather severely decreased under photopic conditions with normal implicit time on photopic flicker, whereas scotopic ERG was borderline, with normal oscillatory potential. Two girls (II-3 and II-4) were noticed early by the family to have strabismus, and surgical correction of the strabismus was performed in II-3. All four patients had normal intellect and growth. Other systemic examinations were unremarkable.
Molecular analysis: Only one block of homozygosity was identified as shared between the four siblings. The minimal area of overlap was 3.29 Mb on chromosome 11q13, encompassing 157 genes, including CABP4 (accession number NM_145200.2). In view of the recently described phenotypes associated with CABP4 mutations in humans, this appeared to be a good candidate. Direct sequencing revealed the presence of a single base-pair insertion (c.81_82insA) in a homozygous state in all affected siblings, while the parents were heterozygous carriers (Figure 1). This insertion introduces 44 novel amino acids before prematurely terminating the protein (p.Pro28ThrfsX44) (Figure 2). The transcript is, therefore, a potential target of nonsense-mediated decay (NMD), but due to the lack of available RNA, we were unable to test this experimentally.
DISCUSSION
CSNB is a rare retinal disorder characterized by functionally rather than structurally defective photoreceptors that are unable to relay their electrical impulses to the adjacent neurons. This disorder is therefore suspected when patients present with a history suggestive of retinitis pigmentosa (nyctalopia and decreased visual acuity) in the setting of a normal fundus exam. ERG is considered diagnostic, with different patterns reported in the different subtypes of CSNB [13]. Recently, Littink et al. reported a family with a novel nonsense mutation in CABP4 and a clinical picture of decreased visual acuity, photophobia, and nystagmus, but no nyctalopia [13]. The authors of that paper noted that two of the three families reported so far with CABP4 mutations lacked nyctalopia in their clinical presentation and suggested that the term congenital cone-rod synaptic disorder be used as a more accurate name than CSNB2 for this subtype. We are in agreement with the suggestion, since this name reflects the true pathogenesis of this condition. Table 1 summarizes the molecular and clinical characteristics of human patients with CABP4 mutations. Three mutant alleles have been described in five patients representing three Caucasian families. Our report adds a fourth allele and four patients representing one Arab family with this disorder. Interestingly, the three previously reported mutations were most likely hypomorphic in nature. In the case of c.800_801delAG, it was predicted to elongate the otherwise 276 amino acid-long CABP4 by another 91 novel amino acids (p.Glu267fsX91). The introduction of the novel amino acids was downstream of the last Ca-binding motif, making it unlikely to directly interfere with that function of the protein [10]. However, this mutation was found to significantly reduce the abundance of the wild-type transcript, so the authors speculated that a combination of change in the tertiary structure and a reduction in transcriptional efficiency was responsible for the pathogenicity of this mutation [10].
Interestingly, this mutation was found to display a founder effect in another patient who was compound heterozygous for the only missense mutation reported in CABP4 (Arg124Cys) [10]. It is unclear how this missense mutation adversely affects protein function, since it does not reside within any of the four Ca-binding motifs. Indeed, the authors speculated that the effect may in fact be at the level of transcription, because compound heterozygosity for this mutation and c.800_801delAG, just like homozygosity for the latter, was associated with decreased abundance of the wild-type transcript [10]. The third mutation, p.Arg216X, was also speculated to be hypomorphic in nature, since it does not undergo NMD and only truncates two of the four Ca-binding motifs [13]. Our mutation, on the other hand, is very likely a complete null. Even if the mutant transcript escapes NMD, the introduction of novel amino acids very early in the protein will abolish all Ca-binding motifs and essentially render the protein functionally absent (Figure 2). Therefore, our mutation is predicted to result in the most dramatic functional perturbation of this important protein, which may explain the more severe phenotype observed in our patients compared to others (Table 1). LCA and CSNB are usually considered distinct clinical entities. The congenital onset of symptoms in our patients and their extinguished ERG (Figure 3) argue for their classification as LCA [16]. However, the lack of enophthalmos and of Franceschetti's oculo-digital sign in our patients must be noted. The normal appearance of the fundus in the four patients should not be viewed as evidence against LCA, since GUCY2D-related LCA is known to be associated with a normal-looking fundus, even in adults [17]. However, the stationary nature of the disease observed in our four patients does complicate their classification as LCA, which is known to be progressive in nature. Nonetheless, this family provides a unique opportunity to observe a severe retinal phenotype that resembles LCA caused by a mutation in a gene hitherto described as a CSNB gene. In summary, our study is in line with our previous reports of the clinical utility of homozygosity mapping in the setting of genetically heterogeneous disorders. It expands on the phenotype associated with CABP4 mutations.
Figure 2. Schematic of the CABP4 gene. Previously reported mutations are indicated by empty triangles, and our novel mutation is indicated by a solid triangle.
Figure 3. ERG findings in a family with an LCA-like phenotype secondary to the CABP4 mutation. Scotopic and photopic ERG readings are shown for each of the four affected members, who are referred to using the IDs in Figure 1 and Table 1 (II-2, II-3, II-4, and II-6).
"Medicine",
"Biology",
"Chemistry"
] |
Impulse tests to determine the mechanical properties of aluminum alloys used to manufacture auto rims
The purpose of this paper is to analyze the quality improvement of rims made of different aluminum alloys for road vehicles. The modulus of elasticity, the shear modulus and Poisson's ratio are mechanical parameters that, together with mass and geometrical dimensions, characterize the dynamic behavior of auto rims. To reduce the weight of the rims, steel is replaced with aluminum and magnesium alloys or composite materials. The natural frequencies and the shapes of the natural modes are determined for different aluminum samples using the Finite Element Method and modal analysis in the ANSYS software. The vibroacoustic signal is recorded through a condenser microphone with high fidelity in the audible range of 20-20,000 Hz. The vibration responses of the aluminum specimens, in free-free conditions, are analyzed using algorithms based on the Fast Fourier Transform (FFT) and Prony's series.
Introduction
After iron, aluminum is now the second most widely used metal in the world. Aluminum and its alloys have become widely used in recent decades, mainly due to their properties, among which the most important are low density and thus low weight, high strength, high resistance to corrosion, and easy mechanical processing. It is noteworthy that aluminum has very good weldability and, not least, a high rate of recyclability.
The popularity of aluminum and its alloys derives from their use in various areas: first of all in machine construction, as well as in the automotive, food, chemical, civil construction, shipbuilding and aerospace, and railway transport industries, where the use of aluminum is substantial.
In commercial form, this material is called unalloyed aluminum. It is very easy to process, has very good anticorrosive properties, and can be anodized, painted, and varnished. It also has excellent bending and weldability properties. It is used for various surfaces in advertising production, and frequently for plating common surfaces.
When using unalloyed aluminum, the main concern is that the surface should not become matt during the corrosion process. This material can be anodized very well in the natural or painted state. Anodized sheets not only look much better aesthetically, but also offer much better anticorrosive and erosion resistance than natural aluminum, and they can be maintained and cleaned much more easily.
Aluminum and its alloys have good machinability: they can be excellently processed by turning, and they weld well with the MIG/WIG procedures. For these reasons, these materials are widespread in industrial applications, including shipbuilding, as shown by T. Anderson in [1]. MIG arc welding is the most common method of joining the aluminum alloys used in shipbuilding, and research on the mechanical properties of aluminum alloys and their MIG-welded joints, carried out using static tensile tests, was reported by K. Dudzik [2].
Meeting the new fuel-efficiency standards requires vehicle weight reduction, which involves the use of new materials and alloys. Aluminum alloy 6082, according to SR 1706:2000, is one such alloy and is used in manufacturing car rims by hot plastic deformation [3]. The performance of rims is closely related to the mechanical properties of the material, and any change in chemical composition intended to improve the structure of aluminum alloys must be reflected in increased mechanical property values, as presented by R. Pradeep, B.S. Praveen Kumar and B. Prashanth in [4], and by N. Bularda and T. Hepuţ in [5] and [6].
In all these works, classical methods based on shear and tensile tests, respectively, were used to determine the elastic properties of aluminum and its alloys.
In recent years, nondestructive methods based on resonance tests for determining the elastic properties of metals and their alloys have developed strongly. In [7], M. Alfano and L. Pagnotta review the latest patents in the field of non-destructive tests for the determination of the mechanical characteristics of materials. The use of vibroacoustic signals for determining the elasticity modulus of hydroxyapatite or of welded joints is presented in [8] by L. Bereteu, M. Vodă and Gh. Draganescu, and in [9] by O. Suciu, L. Bereteu and Gh. Draganescu.
The purpose of this paper is to show that, by analyzing the vibration signal obtained after applying a mechanical impulse to the sample, all three elastic properties of the material can be determined: the longitudinal (Young's) modulus, the shear modulus and the Poisson coefficient. This is possible by validating the results given by the spectrum analysis against the results of a modal analysis based on the Finite Element Method. For signal acquisition, an acoustic sensor (a condenser microphone) is used, and the signal is processed with adequate software [10].
Theoretical background

Resonant vibration tests
In the following, the samples, consisting of welded aluminum, are considered as rectangular beams of constant cross-section, not subject to external forces. To study the free vibration of these beams, different boundary conditions can be considered; the best-known case in the literature is the free-free condition. The elastic properties of the material can be determined by measuring the frequencies of the freely vibrating bar excited by a mechanical impulse. The vibratory motion of the beam is governed by the Euler-Bernoulli partial differential equation, as given by Meirovitch [11]:
$$EI\,\frac{\partial^{4}Y(x,t)}{\partial x^{4}}+\rho A\,\frac{\partial^{2}Y(x,t)}{\partial t^{2}}=0 \qquad (1)$$

where Y(x,t) is the displacement of the neutral axis of the beam at distance x from the left end, E the longitudinal elastic modulus, I the geometric moment of inertia of the beam's cross-section, A the area of this section, and ρ the material density. The characteristic equation for free-free boundary conditions is

$$1-\cosh X\,\cos X=0 \qquad (2)$$

with

$$X=\sqrt[4]{\frac{(2\pi f)^{2}L^{4}\rho A}{EI}} \qquad (3)$$

where f is one of the bending resonance frequencies and L is the length of the beam.
The first two roots of the characteristic Eq. (2) are X1 = 4.730 and X2 = 7.853, and for natural modes r > 2 it is easy to show that

$$X_{r}=\frac{(2r+1)\pi}{2} \qquad (4)$$

From Eq. (3) and Eq. (4), the relation between the free-vibration frequency f_r of the sample and its Young's modulus can be obtained:

$$E=\frac{4\pi^{2}f_{r}^{2}L^{4}\rho A}{I\,X_{r}^{4}} \qquad (5)$$

This relationship is valid only for the bending modes of the sample. By a similar procedure, the relation (6) between the shear modulus and the corresponding resonance frequencies of the torsion modes can be obtained, where r is the torsion mode number and a shape coefficient accounts for the geometry of the cross-section; this coefficient is equal to 1 only for a circular section, while for a rectangular section it is a function of the sample breadth b and the sample thickness h.
To determine Poisson's ratio, denoted ν, the well-known relationship between the three mechanical parameters that characterize the behavior of a metal under simple tensile and torsion tests is used:

$$\nu=\frac{E}{2G}-1$$
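Taken together, Eqs. (2)-(5) and the relation above turn measured resonance frequencies directly into elastic constants. Below is a minimal sketch of that computation; it assumes the sample dimensions given in the numerical-results section, a typical aluminum density of 2700 kg/m³ (not stated in the text), and an illustrative, not measured, first bending frequency and E/G ratio. The function names are our own.

```python
import numpy as np

# Sample geometry from the numerical-results section; density is an
# assumed typical value for rolled aluminum (not given in the text).
L, b, h = 0.206, 0.0395, 0.0015      # length, breadth, thickness [m]
rho = 2700.0                          # density [kg/m^3] (assumption)
A = b * h                             # cross-section area
I = b * h**3 / 12.0                   # second moment of area of the section

def X_root(r):
    """Roots of the free-free characteristic equation (2)."""
    known = {1: 4.730, 2: 7.853}
    return known.get(r, (2 * r + 1) * np.pi / 2.0)   # Eq. (4) for r > 2

def young_modulus(f_r, r=1):
    """Eq. (5): Young's modulus from the r-th bending frequency f_r [Hz]."""
    return 4 * np.pi**2 * f_r**2 * L**4 * rho * A / (I * X_root(r)**4)

def poisson_ratio(E, G):
    """Isotropic relation between the three elastic parameters."""
    return E / (2.0 * G) - 1.0

# Illustrative frequency only; real values come from the FFT/Prony analysis.
E = young_modulus(185.0)
print(f"E = {E / 1e9:.1f} GPa, nu = {poisson_ratio(E, E / 2.6):.2f}")
```

With these numbers the sketch returns roughly 70 GPa, the expected order of magnitude for aluminum, which is a quick sanity check on the reconstructed formulas.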
Finite Element Analysis
To validate the mode shapes of the sample and to correlate them with the experimental resonance frequencies, it is necessary to perform a modal analysis. For an n-degree-of-freedom system obtained by Finite Element Method meshing (Fig. 1), the free-motion equation of the sample can be expressed as follows:

$$[M]\{\ddot{q}\}+[C]\{\dot{q}\}+[K]\{q\}=\{0\}$$

where {q} is the vector of displacements, and [M], [C] and [K] are the mass, damping and stiffness matrices, respectively. In this approach the mechanical system is modeled with Rayleigh damping, in which the symmetric matrix [C] is proportional to the [M] and [K] matrices.
Fig. 1. Finite Element Method meshing of the sample
The resonance frequencies and modal shapes are obtained by Finite Element Analysis, using ANSYS [12].
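For the undamped case, the free-motion equation above reduces to the generalized eigenvalue problem [K]{φ} = ω²[M]{φ}, which is what a FEM package such as ANSYS solves internally. A minimal sketch with toy 3-degree-of-freedom matrices standing in for the FEM-assembled ones:

```python
import numpy as np
from scipy.linalg import eigh

# Toy stand-ins for the FEM-assembled mass and stiffness matrices.
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

# [K]{phi} = omega^2 [M]{phi}; eigh exploits the symmetry of both matrices.
omega2, phi = eigh(K, M)
print("natural frequencies [Hz]:", np.sqrt(np.abs(omega2)) / (2 * np.pi))
print("first mode shape:", phi[:, 0])
```

Inspecting the eigenvectors is what allows the bending modes to be distinguished from the torsional ones, which is exactly the role of the ANSYS analysis in this paper.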
Experimental setup
The experimental stand for non-contact measurement of the free vibrations of the samples is shown in Fig. 2 below. It is composed of: the sample (1), which is the mechanical structure to be analyzed; an impulse mini-hammer (2); brackets to support the sample (3); elastic threads that suspend the structure in free-end boundary conditions (4); the acoustic sensor, a condenser microphone (5); and the computer with an embedded acquisition board (6).
Fig. 2. Experimental stand for vibroacoustic measurements

Numerical results
The dimensions of the laminate aluminum sample analyzed with the ANSYS software are: L = 206 mm, b = 39.5 mm, h = 1.5 mm. The frequencies and mode shapes are given in Table 1. This analysis is necessary to determine the order of the natural modes, that is, to determine which modes correspond to bending vibrations and which to torsional vibrations.
Experimental results
Equation (5) is used to determine the experimental values of the longitudinal elasticity modulus E, with the frequencies corresponding to the first three bending modes of vibration, which are obtained from the Fast Fourier Transform of the vibroacoustic signal recorded in the experimental measurements. From Fig. 3, correlated with the results in Table 1, one can identify the frequencies corresponding to the bending modes and to the torsion modes, respectively.
Fig. 3. Fourier Frequency Spectrum
The same formula is used to determine the experimental values of Young's modulus with frequencies obtained by analyzing the acquired signal with Prony's series method, D.J. Trudnowski [13]. Equation (6) is used to determine the experimental values of the shear modulus G, corresponding to the first two torsion modes of vibration, which are also obtained from the Fast Fourier Transform, Fig. 3.
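Both signal-processing steps mentioned above admit compact implementations. The sketch below shows a simplified FFT peak-picking routine and a basic linear-prediction form of Prony's method; the windowing, model order, and function names are our assumptions, not the authors' exact algorithms.

```python
import numpy as np

def fft_peaks(signal, fs, n_peaks=5):
    """Pick the strongest spectral lines of the vibroacoustic record."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Crude local-maximum detection, then keep the n_peaks largest lines.
    idx = [i for i in range(1, len(spec) - 1)
           if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
    idx = sorted(idx, key=lambda i: spec[i], reverse=True)[:n_peaks]
    return np.sort(freqs[idx])

def prony_frequencies(signal, fs, order=8):
    """Estimate modal frequencies by Prony's method (linear prediction)."""
    N = len(signal)
    # Solve the linear-prediction system x[n] = sum_k a_k x[n-k] for a.
    A = np.column_stack([signal[order - k - 1:N - k - 1]
                         for k in range(order)])
    b = signal[order:N]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Roots of the prediction polynomial give the damped exponentials.
    roots = np.roots(np.concatenate(([1.0], -a)))
    freqs = np.abs(np.angle(roots)) * fs / (2 * np.pi)
    return np.sort(np.unique(np.round(freqs, 1)))
```

Prony's method fits damped exponentials rather than pure spectral bins, which is why it can resolve closely spaced, damped modes better than the raw FFT, at the cost of choosing a model order.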
Fig. 4. Young's modulus (E) by different methods
The values of the modulus of elasticity obtained by three methods, the Euler-Bernoulli equation (E-EB), the Fast Fourier Transform (E-F) and the Prony series method (E-P), for seven samples are represented in Fig. 4.
Fig. 5. Shear modulus (G) by different methods
The same three methods are used to determine the shear modulus of elasticity. Their values for seven samples of molten batches are shown in Fig. 5.
Conclusion
The method of measuring Young's modulus, the shear modulus and the Poisson coefficient by modal analysis of impulse-excited vibrations is extremely simple to use.
The results for the elastic moduli are better than those obtained using standard tensile or shear tests. This method does not require a specific sample shape and does not introduce errors due to the fastening system.
The variation of the moduli with the various alloying elements of aluminum will be of particular interest and will be addressed in future work.
Table 1. The shape modes and their frequencies for the laminate aluminum sample
"Engineering",
"Materials Science"
] |
Alterations of specific biomarkers of metabolic pathways in vascular tree from patients with Type 2 diabetes
The aims of this study were to check whether different biomarkers of inflammatory, apoptotic, immunological or lipid pathways had altered expression in the occluded popliteal artery (OPA) compared with the internal mammary artery (IMA) and femoral vein (FV), and to examine whether glycemic control influenced the expression of these genes. The study included 20 patients with advanced atherosclerosis and type 2 diabetes mellitus, 15 of whom had peripheral arterial occlusive disease (PAOD), from whom samples of OPA and FV were collected. PAOD patients were classified based on their HbA1c as well-controlled (HbA1c ≤ 6.5) or poorly controlled (HbA1c > 6.5). Controls for arteries without atherosclerosis comprised 5 IMA from patients with ischemic cardiomyopathy (ICM). mRNA, protein expression and histology were analyzed in IMA, OPA and FV. After analyzing 46 genes, OPA showed higher expression levels than IMA or FV for genes involved in thrombosis (F3), apoptosis (MMP2, MMP9, TIMP1 and TIMP3), lipid metabolism (LRP1 and NDUFA), immune response (TLR2) and monocyte adhesion (CD83). Remarkably, MMP-9 expression was lower in OPA from well-controlled patients. In FV from diabetic patients with HbA1c ≤ 6.5, gene expression levels of BCL2, CDKN1A, COX2, NDUFA and SREBP2 were higher than in FV from those with HbA1c > 6.5. The atherosclerotic process in OPA from diabetic patients was associated with high expression levels of inflammatory, lipid metabolism and apoptotic biomarkers. The degree of glycemic control was associated with gene expression markers of apoptosis, lipid metabolism and antioxidants in FV. However, the effect of glycemic control on pro-atherosclerotic gene expression was very low in arteries with established atherosclerosis.
Introduction
Cardiovascular diseases (CVD) are highly prevalent in the general population, affecting most adults over 60 years of age. Vascular endothelium has unique responses to hemodynamic forces. The flow and hemodynamic forces are not uniform in the vascular system. The endothelium of the vascular circulation is exposed to hemodynamic forces of greater magnitude than in other human tissues. Hemodynamic forces play an important role in vascular diseases, especially in the location of atheromas [1].
The pathophysiology of arterial thrombosis is different from that of venous thrombosis. In the arteries there is altered endothelium-platelet adhesion, greatly influenced by hemodynamic forces, while the major factors for venous thrombosis are the phenomena of slow or stagnant blood flow, combined with hypercoagulability situations [2].
Sustained flow with high shear stress upregulates gene and then protein expression in endothelial cells, which has a protective effect against the atherosclerotic process [3]. In the venous system, a disturbed flow leads to inflammation and venous thrombosis, and therefore the development of chronic vessel disease, such as peripheral arterial disease.
Atherosclerosis is associated with processes such as inflammation [4], lipid metabolism [5], apoptosis [6] and immune system responses [5]. Epidemiologic studies show a consistent association between diabetes and cardiovascular disease [7]. The effect of tight glycemic control on the atherosclerotic process has been less convincing in clinical trials [8]. Hyperglycemia could lead to vascular complications via several mechanisms: hyperglycemia per se activates transcription factors that modulate the expression of a number of genes in endothelial cells, monocyte-macrophages and vascular smooth muscle cells, favoring the atherosclerotic process.
In this study our aims were to determine whether different biomarkers (inflammatory, oncogenic, immunological or lipid) had altered gene expression in different atherosclerotic blood vessels (artery vs. vein) and to examine whether glycemic control influenced the expression of these genes.
Subjects
All patients were hospitalized in the Cardiovascular Surgery Department of Carlos Haya Hospital (Malaga, Spain) between February 2007 and June 2008. Those diagnosed with an advanced atherosclerotic process and type 2 diabetes mellitus were recruited to this study. Two types of vascular biopsies were collected from patients with clinical stage IV peripheral arterial occlusive disease and lower limb amputation (PAOD): 1) occluded popliteal artery (OPA) with atherosclerotic plaque and 2) femoral vein (FV). Both the OPA and the FV were obtained from the vascular package of each patient (n = 15). Control arteries with no atherosclerosis consisted of internal mammary artery (IMA) biopsies collected from 5 diabetic patients with good glycemic control and severe ischemia due to ischemic cardiomyopathy (ICM) who were undergoing coronary revascularization.
Patients were included if they were aged 18-80 years (all patients were over 60) and provided written informed consent. Patients were excluded if they had associated diseases such as alcoholism, drug addiction or HIV. Individuals who refused to participate in the study were considered losses.
We evaluated the presence of atherosclerotic risk factors using the definitions of the Spanish Society of Hypertension (blood pressure, systolic ≥140 and/or diastolic ≥90 mmHg), the ADA (fasting blood glucose level ≥126 mg/dl), and the NCEP-ATP3 criteria (triglycerides ≥150 mg/dl and HDL cholesterol <40 mg/dl in men or <50 mg/dl in women). A BMI (kg/m²) >30 was used to define obesity, and patients were considered smokers if they had smoked up to 6 months before hospital admission. Anthropometric and biochemical parameters included sex, age, waist circumference, systolic and diastolic blood pressure, glucose, HbA1c, total cholesterol, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol and triglycerides. The HOMA (Homeostasis Model Assessment) index, a method used to quantify insulin resistance and beta-cell function, was also recorded. The approximating equation for insulin resistance used a fasting blood sample and was derived from the insulin-glucose product divided by a constant: (glucose × insulin)/405, where glucose is given in mg/dL and insulin in μU/mL.
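The HOMA-IR formula quoted above translates directly into code; a one-function sketch (the function name is ours):

```python
def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """HOMA insulin-resistance index: (glucose [mg/dL] x insulin [uU/mL]) / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

# Example: fasting glucose 126 mg/dL and insulin 10 uU/mL give HOMA-IR ~ 3.1.
print(round(homa_ir(126.0, 10.0), 2))
```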
The patients had had type 2 diabetes mellitus for 12 ± 7 years. Type 2 diabetes mellitus was diagnosed according to the ADA definition (2010) by the presence of repeated fasting glucose levels ≥126 mg/dl, by treatment with oral antidiabetic agents or insulin at the time of the study, or by an HbA1c >6.5%. This definition was corroborated with the patient's medical history in order to avoid misdiagnosis due to stress hyperglycemia during hospital admission. The patients with PAOD were divided into well-controlled or poorly-controlled groups, depending on their glucose and HbA1c (Table 1). All the diabetic patients were treated with metformin, and 70% of the poorly-controlled diabetic patients were treated with insulin. The patients were admitted to the hospital 72 h before surgery, and their treatment was usually modified in order to prepare them for surgery. There was a drug washout period of 12 h prior to blood collection.
Control patients (patients with ICM) were aged 63 ± 13 years and had baseline characteristics similar to those of the patients with PAOD; patients' characteristics did not differ significantly between the groups.
The study protocol complied with the principles of the Helsinki Declaration. The study was approved by the hospital ethics committee and all the patients gave written informed consent to participate in the study.
Isolation of human mRNA from biopsies
Samples of OPA, FV and IMA vessels were homogenized on ice using the TriPure Isolation Reagent (Roche Molecular Biochemicals, Barcelona, Spain) according to the manufacturer's instructions, using a laboratory batch mixer (T25 Ultra-Turrax Basic; IKA Laboratory Equipment).
Real-time PCR
A total of 46 genes were studied in these vascular biopsies. The genes were classified according to the metabolic pathway involved; the housekeeping genes were 18S and GAPDH, and the lipid metabolism genes included PPARg and PTGS1, among others. RNA purity was determined by measuring the A260/A280 ratio; RNAs with ratios between 1.7 and 2 were considered adequate for quantification of mRNA expression. cDNA was obtained from 1 μg RNA using the High Capacity cDNA Archive Kit (Applied Biosystems, San Francisco, CA, USA), following the protocol provided with the High Capacity cDNA Reverse Transcription kit (Applied Biosystems, Foster City, CA, USA). Recombinant RNasin Ribonuclease Inhibitor (Applied) was added to prevent RNase-mediated degradation, and the cDNA was stored at -20°C.
Gene expression analyses were performed at the mRNA level by TaqMan Low-Density Array (TLDA). Predesigned TaqMan probe and primer sets for the target genes were chosen from an online catalogue (Applied). Once selected, the sets were factory-loaded into the 384 wells of the TLDA card, which was configured into eight identical 24-gene sets in duplicate. Twenty-two genes were chosen based on literature reviews of key molecules in inflammation and immunology, and each set of genes also contained the two housekeeping genes, GAPDH and 18S rRNA.
Five μl of single-stranded cDNA (equivalent to 100 ng of total RNA) were mixed with 45 μl of nuclease-free water and 50 μl of TaqMan Universal PCR Master Mix. After gentle mixing and centrifugation, 100 μl of the mixture was transferred into a loading port on a TLDA card. The card was centrifuged twice for 1 minute at 1100 rpm to distribute the samples from the loading port into each well.
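The text does not specify the quantification model applied to the TLDA Ct values; if the common 2^(-ΔΔCt) relative-quantification scheme were used with GAPDH or 18S as the reference gene, the normalization would look like the sketch below (function name and Ct values are hypothetical):

```python
def relative_expression(ct_target, ct_housekeeping,
                        ct_target_ref, ct_housekeeping_ref):
    """2^-ddCt: target gene normalized to a housekeeping gene, then to a
    reference sample (e.g. IMA as the non-atherosclerotic control)."""
    d_ct_sample = ct_target - ct_housekeeping
    d_ct_ref = ct_target_ref - ct_housekeeping_ref
    return 2.0 ** -(d_ct_sample - d_ct_ref)

# Hypothetical Ct values: MMP9 in OPA vs. IMA, normalized to GAPDH.
print(relative_expression(24.0, 18.0, 27.0, 18.5))  # > 1 means overexpression
```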
Western blot analysis
The proteins from IMA, FV and OPA biopsies were analyzed by western blot as described previously [9]. Blots were incubated with monoclonal antibodies against human tF (ADI, Ref: 4501; dilution 1:1000; American Diagnostica Inc., Stamford, CT, USA) and LRP-1 (RDI; PR-61067; dilution 1:50; Fitzgerald Industries International, North Acton, MA, USA). Equal protein loading in each lane was verified by staining the filters with Ponceau and also by incubating the blots with monoclonal antibodies against β-actin (clone AC-15, Sigma). Western blot bands were quantified with a Chemi-Doc (Bio-Rad) using Quantity One 1-D Analysis Software. Results are expressed as arbitrary units (AU), referring to units of intensity/mm².
Histological analysis
The vascular biopsies, removed immediately after surgery, were immersed in 2-methylbutane on liquid nitrogen. Histological studies were performed on 4-μm-thick sections of the vessels and atheromatous plaque, cut in a cryostat at -20°C and thaw-mounted onto poly-L-lysine-treated slides. Tissues were then stained with Masson's trichrome and photographed under routine light microscopy (Leica Microsystems Ltd.).
Statistical analysis
Results are expressed as mean ± standard deviation (SD). The baseline clinical characteristics of each group were analyzed by one-way analysis of variance (ANOVA). All probability values were two-tailed, and all confidence intervals were computed at the 95% level; differences were considered significant if the p value was less than 0.05. Chi-square analysis was used to compare qualitative variables. Relationships between cell biomarkers and continuous variables were examined by Spearman correlation analysis, which measures the monotonic relation between two quantitative variables (−1 < r < 1). Multiple regression models were used to correct for confounding factors when assessing the association between mRNA expression levels of the different biomarkers, risk factors or drugs. Statistical analyses were performed with SPSS for Windows, version 11.5 (IBM Corporation, Somers, NY, USA).
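As a small, self-contained illustration of the correlation analysis described above (the numbers below are synthetic, not patient data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for, e.g., systolic blood pressure and MMP9 mRNA
# expression in one vessel type (n = 15 PAOD patients).
sbp = rng.normal(140.0, 15.0, 15)
mmp9 = 0.02 * sbp + rng.normal(0.0, 0.3, 15)

rho, p = stats.spearmanr(sbp, mmp9)   # monotonic association, -1 < rho < 1
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}, significant: {p < 0.05}")
```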
Results
The baseline biological characteristics of the ICM and PAOD patients are shown in Table 1 (variables are mean ± SD). No significant differences were found for any of the variables studied between ICM and PAOD patients (data not shown). In contrast, there were significant differences in glycemia and HbA1c between well- and poorly-controlled PAOD diabetic patients.
Histological characterization of vascular biopsies from atherosclerotic patients
All vessel biopsies were stained with Masson's trichrome and photographed under light microscopy (Figure 1).
The IMA showed a normal histology, with the adventitia composed of connective tissue as well as collagen and elastic fibers, the media layer composed of smooth muscle and elastic fibers, and the intima, the inner layer, composed of an elastic membrane lining and smooth endothelium covered by elastic tissues (Figure 1A).
The section corresponding to the OPA had soft plaques with a lipid nucleus and calcium deposits. The capsule had few collagen fibers and VSMC, although an excess of macrophages and lymphocytes (Figure 1B). In the FV biopsies, three layers were observed: the adventitia, the media, and the intima lined by endothelium (Figure 1C).
Both IMA and FV were composed of the same type of layers, although the IMA contained elastic layers that make it better adapted to blood pressure variation.
mRNA expression of different biomarkers in the vascular system from atherosclerotic vessels of diabetic patients

To investigate the possible alteration in gene expression of different biomarkers, we analyzed mRNA levels in IMA, OPA and FV from ICM and PAOD patients.
Expression was obtained for all the genes studied (46 genes), but only 9 genes were significantly different across all tissues. These altered genes, which showed greater levels in OPA compared to IMA or FV, were those involved in thrombosis (F3), apoptosis (MMP2, MMP9, TIMP1 and TIMP3), lipid metabolism (LRP1 and NDUFA), immune response (TLR2) and monocyte adhesion (CD83) (Figure 2).
Protein expression of tF and LRP-1 in the vascular system from atherosclerotic vessels of diabetic patients
The inflammatory process and lipid metabolism are two of the key signaling pathways in the formation of atheromas in the vascular wall. Therefore, in view of the mRNA expression results obtained for two biomarkers of these pathways (tF and LRP-1), we studied whether the pattern of overexpression shown in OPA was repeated at the protein level. We performed a western blot analysis of both markers, confirming in all analyzed vessels the effect previously observed for mRNA expression. OPA had greater protein expression of tF than the FV and the control artery (IMA) (1.93 ± 0.12 vs. 0.20 ± 0.12 and 0.34 ± 0.23, respectively; p<0.04). The same happened when we studied the levels of LRP-1: OPA had greater protein expression of LRP-1 than the FV and the control artery (IMA) (2.38 ± 0.11 vs. 0.14 ± 0.07 and 0.30 ± 0.17, respectively; p<0.05). The results obtained are shown in Figure 3.
Correlation analysis between anthropometric and biochemical parameters and biomarkers
Different anthropometric and clinical parameters (sex, age and blood pressure) were correlated with the genes studied. In all tissues (IMA, FV and OPA), only blood pressure showed a significant correlation with the different biomarkers. In IMA biopsies, systolic blood pressure correlated negatively with MMP-9, TIMP-1, TIMP-3, PPARg and VEGFA mRNA expression (p=0.02, p=0.03, p=0.03, p=0.03 and p=0.005, respectively). FV biopsies showed a positive correlation between diastolic blood pressure and HIF1A (p=0.04). OPA biopsies showed a positive correlation between systolic blood pressure and MMP9 (p=0.03), and between diastolic blood pressure and SREBP1 (p=0.02).
Of all the biochemical parameters studied, only fasting blood glucose levels and HbA1c correlated significantly with the biomarkers studied. IMA biopsies showed no significant correlation with any of the biomarkers. However, FV biopsies showed a negative correlation between glycemia levels and BCL2, CDKN1A, COX2 and SREBP2 (p=0.02, p=0.04, p=0.05 and p=0.03, respectively), and HbA1c correlated negatively with BCL2 (p=0.05). In OPA biopsies only the glycemia levels were influential, not the HbA1c: fasting blood glucose levels showed a positive correlation with CD83, LRP1, NDUFA2 and TIMP1 (p<0.0001, p=0.05 and p=0.03, respectively).
mRNA expression was obtained from FV and OPA to compare the two diabetic groups (Figure 4).
Discussion
In this study, IMA and FV had a similar gene expression pattern, but one very different from that seen in OPA.
Atherosclerosis is the consequence of excess lipid accumulation in vessels, which triggers an immune response and the secretion of inflammatory cytokines that promote its development. The arteries and veins differ in that the atherosclerotic process is accentuated in large arteries where hemodynamic forces are exerted. We found significant differences between these different vessels as a result of an advanced atherosclerotic process.
Lesion disruption facilitates the interaction between circulating blood and prothrombotic substances, such as tissue factor (tF), present within the atherosclerotic lesion. Indeed, an increase in tF levels (mRNA and protein expression) was found in OPA with respect to IMA or FV, in agreement with previous studies of coronary arteries [10].
Our interest in inflammation as a component of CVD led us to study additional mediators, such as the metalloproteinases (MMP), a family of zinc-dependent endopeptidases that degrade vascular extracellular matrix and basement membrane components and play a central role in tissue repair and vascular remodelling. It is widely accepted that plaque rupture plays a crucial role in the pathogenesis of vascular events and that atherosclerotic plaque destabilization is mediated by MMP. In the present study, we found higher MMP2 and MMP9 mRNA in OPA, consistent with previously published results [11]. MMP2 and MMP9 are regulated by tissue inhibitors, pro-inflammatory cytokines and other factors such as oxLDL or situations of insulin resistance in which oxidative stress is increased [12], and they have both pro- and anti-inflammatory actions. These actions are associated with arterial stiffness and essential hypertension [13]. It has been shown that MMP inhibitors and eNOS inhibit smooth muscle cell migration in vitro and neointima formation in vivo [14]. Moreover, NO was shown to attenuate gene expression associated with insulin resistance [15].
As expected, gene expression levels of TIMP-1 and TIMP-3, regulators of the cytoskeleton, were also increased in OPA with respect to IMA and FV. TIMP-1 and TIMP-3 are endogenous inhibitors regulated by oxLDL in vascular endothelial cells [16] and act as compensatory mechanisms to prevent the breakdown of the atheroma plaque.
Cholesterol and LDL concentrations have indubitable value as risk markers for future cardiovascular events. Recent studies have demonstrated that increased levels of oxLDL are markedly associated with MMP-9 activation, and that statins reduce inflammatory responses. The relationship between lipid metabolism and popliteal plaque has been poorly studied. In the present study, NDUFA and LRP1 mRNA expression levels were highly increased in the occluded OPA. LRP1 is upregulated by cardiovascular risk factors such as hypercholesterolemia [17] and hypertension [18]. Additionally, this receptor contributes to the uptake of aggregated LDL [19], one of the main modifications of LDL in the arterial intima. The increase of LRP1 expression in OPA (mRNA and protein) suggests that LRP1 may play a crucial role in atherosclerosis progression, as previously demonstrated in other studies [20,21].
Delivery of excess free fatty acids to peripheral tissues can worsen insulin resistance and may play a role in activating inflammatory processes through activation of toll-like receptors [22]. TLRs are a family of pattern recognition receptors found in various inflammatory cells [23]. In our study, TLR2 showed increased levels in the popliteal artery compared with vessels without atheroma.
Different biomarkers were studied in the same environment, i.e., OPA and FV biopsies from the same diabetic patient. For this reason, it is very interesting to study what happens to the biomarkers of different metabolic pathways when the patient has good or poor glycemic control. It is likely that hyperglycemia-induced intra- and extracellular changes lead to alterations in signal transduction pathways, affecting gene expression and protein function and causing cell dysfunction and damage. Moreover, in patients with type 2 diabetes mellitus, the pathways involved in diffuse vasculopathy and the associated mRNA alterations are already established even in non-atherosclerotic arterial tissue [24].
Our results demonstrated that patients with good glycemic control had greater SREBP2 expression levels in FV. Hyperinsulinemia is related to an up-regulation of SREBPs [25], which could conflict with our results, but we previously showed that SREBP2 controls the expression of some LDL receptor genes, such as CD36 gene expression. A strong relation between SREBP2 and CD36 was found in FV but not in OPA [26]. Sampson et al. [27] showed that diabetic subjects with good glycemic control had higher CD36 expression, which could reflect a post-transcriptional efficiency of this receptor and thus there would be greater metabolism of oxLDL in these patients.
Our patients with good glycemic control showed an increased expression from genes involved in protection against apoptosis and cell turnover. Our data agree with the results recently published by Redondo et al. (2011), who demonstrated that there is a link between inflammation (COX-2) and apoptosis resistance (BCL2) in the vessels of diabetic patients [28].
Some drugs (e.g., metformin, thiazolidinediones and statins) used in the treatment of diabetes and atherosclerosis are able to up-regulate both processes. These may exert their protective effects through activation of AMPK, which has potentially beneficial antiatherosclerotic effects, such as reducing the adhesion of inflammatory cells, lipid accumulation and the proliferation of inflammatory cells [29-31]. Recently, it has also been shown that the adiponectin receptors ADIPOR1 and ADIPOR2, through AMPK, may modify the risk of CVD in individuals with IGT, possibly through alterations in mRNA expression levels [32]. Notably, AMPK has recently been proposed as a therapeutic target for diabetic vascular disease [33].
Results from the present study show that glycemic control only exerts a significant effect on MMP-9 expression in OPA. These results are in agreement with previous studies showing that strict glycemic control does not improve cardiovascular disease progression in situations of advanced atherosclerosis [34,35].
Our study has certain limitations. The study population consists of diabetic patients with advanced atherosclerosis; therefore, the results may not be extrapolated to other populations. In addition, the sample size was small, although the significant differences found contribute to the strength of the results.
In conclusion, when we compared occluded arteries with vessels without atheromas in diabetic and atherosclerotic patients, we found significant differences in biomarkers involved in inflammation, lipid metabolism and apoptotic pathways. In addition, compensatory mechanisms could exist that prevent the rupture of the atheromatous plaque in peripheral arterial occlusive disease. On the other hand, when the atherosclerotic process was studied in terms of good or poor glycemic control in the context of diabetes, we observed that the expression of genes involved in inflammation and apoptosis protection was increased in veins from patients with good diabetic control. In contrast to veins, in arteries with advanced thrombosis, like OPA, where the lumen is almost completely occluded, glycemic control did not seem to exert any effect on the gene expression profile.

LB and FT oversaw manuscript construction and supervised the experiments. All authors read and approved the final manuscript.
"Biology",
"Medicine"
] |
A Design of Functional Layer with Robust Constitutive Parameters for Multilayer Metamaterials
We propose a functional layer design with robust effective parameters for multilayer metamaterials. The functional layer consists of two identical dielectric material layers and one layer of metallic structures sandwiched in between. The symmetric design ensures that, following the standard retrieval technique, effective parameters retrieved for a single functional layer in vacuum can be used to characterize its electromagnetic contribution when stacked in a multilayer system. When applied to fishnet structures, the effective parameters of the symmetric functional layer system show great robustness against variation of the number of layers. The symmetric functional layer design is also investigated for multilayer metamaterials consisting of several layers of different kinds of metallic structures. Transmission and reflection spectra are obtained for the real structures and their effective models by finite-difference time-domain simulation and transfer-matrix calculation, respectively. It turns out that the effective model shows great equivalency to the real structures, and the effective parameters of the symmetric functional layer design are robust at both normal and oblique incidence. Our work provides a practical approach to design and characterize multilayer metamaterials with the well-known effective parameter retrieval technique.
Since single-layer structures can be described with their effective constitutive parameters, the design and characterization of multilayer MMs should, in principle, be easily accomplished with the effective parameter retrieval technique. However, the retrieved parameters are directly related to a particular field configuration, and interaction between building blocks from different layers is inevitable in a multilayer MM. Thus, effective parameters obtained for a single-layer structure in vacuum cannot characterize its contribution when that layer is stacked in a multilayer system; in other words, the effective parameters are not robust. For example, previous research on multilayer fishnet structures [12,36-38] pointed out that, due to the coupling between different layers, effective parameters change as the number of layers increases. Though the effective parameters eventually converge when a sufficient number of layers is reached, the convergence results are usually quite different from those of the monolayer fishnet. Consequently, it is impractical to estimate the properties of multilayer MMs from the effective parameters of a single-layer structure, even if all of the stacked layers are the same. Moreover, for multilayer MMs consisting of several layers of different metal structures, the coupling between layers is much more complex, making the effective parameters of a single layer almost useless for multilayer MM design.
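For reference, the standard retrieval mentioned here inverts a slab's complex S-parameters into an effective index and impedance. Below is a minimal sketch of one common form of this inversion; branch handling is simplified, sign conventions differ between papers, and the function name is ours:

```python
import numpy as np

C0 = 3e8  # speed of light [m/s]

def retrieve(S11, S21, d, freq, branch=0):
    """Effective n, z, eps, mu of a homogeneous slab of thickness d [m]
    from its complex reflection/transmission coefficients."""
    k0 = 2 * np.pi * freq / C0
    z = np.sqrt(((1 + S11)**2 - S21**2) / ((1 - S11)**2 - S21**2))
    z = np.where(z.real < 0, -z, z)             # passivity: Re(z) >= 0
    cos_nkd = (1 - S11**2 + S21**2) / (2 * S21)
    n = (np.arccos(cos_nkd) + 2 * np.pi * branch) / (k0 * d)
    return n, z, n / z, n * z                   # n, z, eps, mu
```

Because arccos is multivalued, the integer branch must be chosen to keep n continuous in frequency; this branch ambiguity is precisely what makes thick, strongly coupled stacks hard to characterize and what the symmetric functional layer design sidesteps.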
In this paper, we propose a symmetric functional layer design for multilayer MMs. The functional layer is comprised of two identical dielectric layers and one layer of metallic structure sandwiched between them. We find that the effective parameters of these symmetric functional layers are robust against changes of the surroundings. Effective parameters of multilayer fishnet structures applying this symmetric functional layer design are almost immune to variation of the number of layers. Furthermore, the robustness of the effective parameters remains intact in multilayer MMs consisting of several symmetric functional layers of different metallic structures, at both normal and oblique incidence. Therefore, this symmetric functional layer design offers a practical approach to construct multilayer MMs with the well-known effective parameter retrieval technique [26,28,39].
Symmetric Functional Layer
In Figure 1(a), we present a schematic of a multilayer fishnet structure. The periods along the lateral plane are set to 4 mm, and the square holes of the fishnet have widths of 3.6 mm. The thickness of the metallic fishnet structures is 18 μm. The metal used for the fishnet and all other metallic structures in this paper is silver, with an electric conductivity of 6.3 × 10⁷ S/m. The dielectric spacers are made of FR4 (orange; relative permittivity 4.3, loss tangent 0.025) with thickness hf = 1.6 mm. The propagation direction is perpendicular to the surface of the fishnet. In the following discussion, the number of metal structure layers is used to specify multilayer fishnet structures with different actual layer counts; as an example, the two-layer slab contains 2 metal fishnet layers.
For a homogeneous material slab, the impedance and refractive index (or the permittivity and permeability) do not depend on its thickness. In contrast, MMs must acquire a number of layers to achieve convergence of their electromagnetic properties and qualify as bulk material [7,12]. Due to the interaction between building blocks from different layers, effective parameters vary as the number of layers increases; generally speaking, effective parameters of monolayer structures are not robust. In Figure 1(c), we present the retrieved real part of the effective refractive index for fishnet structures with different numbers of layers (2, 3, 4, 5). Notice that the effective parameters gradually converge as the number of layers increases; however, the convergence results are completely different from those given by the 2-layer structure. These features were reported in many prior studies [7,12,28].
The interaction of metallic structures with EM waves is the essence of the exotic electromagnetic properties of MMs. Though the fishnet structures in each layer are exactly the same in their geometric parameters, the surrounding environments are not always the same. As shown in Figure 1(b), while the inner metallic fishnet layers are located symmetrically between two dielectric spacers, the two outermost layers of metallic fishnet are sandwiched between air and a dielectric spacer, which is clearly asymmetric. Intuitively, when interacting with electromagnetic waves, the induced electromagnetic fields for the inner and outermost metallic fishnet structures are expected to be symmetric and asymmetric, respectively; thus, the resonant features differ between the two kinds of fishnet layers. For the 2-layer fishnet system, both fishnet layers are asymmetric, so the electromagnetic behavior is purely dominated by asymmetric layers. As the number of layers increases, the symmetric layers gradually become the majority; correspondingly, the electromagnetic behavior gradually becomes dominated by the symmetric layers, and, in other words, the retrieved parameters converge. Furthermore, the interaction between layers can also induce hybridization of resonance modes, which shows up as ripples in the effective parameter diagrams.
These properties are examined in a simple multilayer system composed of two kinds of homogeneous medium slabs. The thickness of each layer is 1.6 mm, while the constitutive parameters of the two media are set to εA = 2, μA = 1 (orange) and εB = 16 + i, μB = 1.5 + 0.1i (blue), respectively. As shown in Figure 1(d), the multilayer system is arranged in a BA-(ABA)^n-AB pattern, which has inner symmetric ABA blocks and outer asymmetric AB and BA blocks. Compared with the multilayer fishnet structure, the B layers correspond to the metallic fishnet structure layers, and the building blocks with two A layers correspond to the dielectric spacers. Despite the fact that no resonant structure is included, this system retains the geometric essence of the fishnet structure shown in Figure 1(a). The calculated effective refractive index for the multilayer system with different numbers of B layers is shown in Figure 1(e). It is clear that this simplified system shows convergence and hybridization features similar to those of the fishnet system.
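Since every layer of the simplified system is homogeneous, its spectra follow from a textbook transfer-matrix calculation. A minimal sketch for the n = 1 instance of the BA-(ABA)^n-AB stack at normal incidence, using the constitutive parameters quoted above; the 10 GHz evaluation frequency is illustrative only:

```python
import numpy as np

C0 = 3e8

def layer_matrix(eps, mu, d, freq):
    """Characteristic matrix of one homogeneous layer, normal incidence."""
    k0 = 2 * np.pi * freq / C0
    n, Y = np.sqrt(eps * mu + 0j), np.sqrt((eps + 0j) / mu)  # index, admittance
    delta = k0 * n * d
    return np.array([[np.cos(delta), 1j * np.sin(delta) / Y],
                     [1j * Y * np.sin(delta), np.cos(delta)]])

def rt(layers, freq):
    """r and t of a stack in vacuum; layers is a list of (eps, mu, d [m])."""
    M = np.eye(2, dtype=complex)
    for eps, mu, d in layers:
        M = M @ layer_matrix(eps, mu, d, freq)
    (m11, m12), (m21, m22) = M
    denom = (m11 + m12) + (m21 + m22)        # vacuum on both sides (Y = 1)
    return ((m11 + m12) - (m21 + m22)) / denom, 2.0 / denom

# n = 1 instance of the BA-(ABA)^n-AB pattern of Figure 1(d).
A = (2.0, 1.0, 1.6e-3)
B = (16 + 1j, 1.5 + 0.1j, 1.6e-3)
r, t = rt([B, A, A, B, A, A, B], 10e9)       # 10 GHz, illustrative
print(f"|r|^2 = {abs(r)**2:.3f}, |t|^2 = {abs(t)**2:.3f}")
```

Feeding the resulting r and t into the retrieval sketch given earlier yields the effective index of the whole stack, which is essentially how convergence curves like those in Figure 1(e) can be reproduced.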
At this stage, we can conclude that the conventional assembly manner of multilayer MMs causes the resonance features of identical structures to differ depending on their location and introduces interactions between different layers, which result in a change of electromagnetic behavior as the number of layers increases and in poor robustness of the effective parameters of monolayer structures. To overcome this problem, we propose a symmetric functional layer design, which consists of two identical dielectric media layers and one layer of metallic structure sandwiched in between. Figure 2(a) shows a sketch of a unit cell of the symmetric fishnet functional layer. The geometric parameters of the metallic fishnet structure are the same as those in Figure 1(a). The two dielectric layers are also made of FR4, and each has a thickness of df = hf/2 = 0.8 mm, which ensures that the thickness of the dielectric spacer between two adjacent metallic structures is the same as in Figure 1(a). A schematic of a 5-layer structure applying this design is shown in Figure 2(b). Obviously, all the metallic structure layers are now located in the same symmetric environment. Compared to the conventional design, the modification is relatively minor: only two dielectric layers are added, at the top and bottom. Consequently, the effect on the electromagnetic behavior is also small. The retrieved indexes of such multilayer systems with different numbers of functional layers are shown in Figure 2(c). Clearly, the results are almost unchanged, and the hybridization of resonant modes is also eliminated, since the resonant features of each functional layer are the same. This indicates that the effective parameters of the symmetric fishnet functional layer are robust. To further assess this robustness, we introduce a hypothetical homogeneous medium slab with the thickness of the 5-functional-layer fishnet structure and the effective parameters of the one-functional-layer structure. The transmission and reflection spectra of the 5-functional-layer fishnet structure and of the hypothetical slab are shown in Figure 2(d); the results agree with each other very well over a relatively wide frequency range.
In our opinion, the underlying mechanism for the robustness of our symmetric functional layer design is twofold. First, the symmetric design ensures that the resonant features are unaffected when the layers are assembled into multilayer systems. Second, different functional layers essentially interact through the evanescent scattering field components around the metal structures; the two identical dielectric layers provide space for these components to damp down, further reducing the coupling between metallic structures, which makes the retrieved parameters robust against variation of the number of functional layers. Furthermore, we investigated the dependence of the robustness of the effective parameters on the dielectric spacer in fishnet structures. Compared to the structure in Figure 2(a), we varied the dielectric layer thickness of the symmetric fishnet functional layer to a thicker (df = 1.0 mm) and a thinner (df = 0.5 mm) case, with all other parameters fixed to their respective values in Figure 2(a). Symmetric functional layers with the thicker dielectric layer have more robust effective parameters, as we expected, since the coupling is weaker in this case. On the other hand, the thinner dielectric layer case is accompanied by strong coupling between metallic structures, and the robustness of the effective parameters is affected, as shown in Figure 3(c). However, compared to conventional fishnet structures with the same spacer thickness between metallic structures (shown in Figure 3(d)), the robustness of the effective parameters of the symmetric functional layer system is still greatly improved in this case. The material parameters of the dielectric spacer are another important factor affecting the coupling between metallic structures, since the damping of evanescent fields scales with the optical distance. We also investigated a higher-index spacer case (ε = 8, blue), with a dielectric layer thickness of df = 0.6 mm; the optical distance between adjacent metallic layers in this case is nearly the same as in Figure 2(a). As shown in Figure 3(e), the obtained effective indexes are robust against variation of the number of layers. This implies that more compact functional layers with robust effective parameters can be built with higher-index spacers.
Since the symmetric functional layer design can significantly reduce the coupling between metallic structures, the robustness of effective parameters of symmetric functional layer should be maintained when different kinds of metal structures are used to construct multilayer MMs.In the insets of Figure 4(a), we present the unit cell of symmetric functional layer consisting of square patch structure.The length of patch is = 2.9 mm, the thickness of patch structures is 18 m, and the periods along the lateral plane are the same as the aforementioned fishnet structure.FR4 layers with the thickness dp = 0.8 mm are set on both sides of the metallic square patch.The symmetric fishnet functional layer shown in Figure 4(b) is exactly the same as the one in Figure 2(a).The effective parameters of patch and fishnet functional layers recovered from simulation data are shown in Figures 4(a) and 4(b).In principle, the imaginary parts of permittivity and permeability (black and blue dashed line) for both structures are rather small and can be neglected.The real parts of permeability of both slabs are near to 1 (nonmagnetic) [40].
For the patch structure, the real part of the permittivity (black solid line) is positive, while it is negative for the fishnet (blue solid line). By stacking these two functional layers, we obtain a two-layer composite material; the unit cell is shown in the inset of Figure 4(c). A corresponding hypothetical material can be built by stacking two layers of homogeneous medium with the same thicknesses and the recovered parameters of the patch and fishnet functional layers. Considering an electromagnetic wave normally incident on the real material and on the hypothetical patch-fishnet slab, the transmission and reflection spectra are obtained by numerical simulation and by transfer-matrix calculation, respectively. The results (see Figure 4(c)) are almost identical to each other.
In addition, split-ring resonators (SRRs) and SRR/wire structures [2,3,9,13,14,16,19,26,32,33] are used to test the robustness of the effective parameters of the symmetric functional layer. Figures 5(a) and 5(d) present the unit cells of the SRR and SRR/wire functional layers, with a dielectric layer on both sides. The lateral periods are 4 mm, the same as for the aforementioned fishnet and patch functional layers, and the SRR structures have a thickness of 18 μm. The outer ring of the SRR has an outside radius of 1.5 mm, an inside radius of 1.1 mm, and a gap of 0.2 mm; the inner ring has an outside radius of 0.75 mm, an inside radius of 0.55 mm, and a gap of 0.1 mm. The width of the wire structure is 0.2 mm, and it runs the length of the unit cell. As shown in Figures 5(b) and 5(e), the effective permittivity and permeability of the SRR and SRR/wire functional layers have strong resonant features and exhibit magnetic-negative and double-negative behavior around the resonant frequencies. We pair the SRR (SRR/wire) functional layer with the fishnet functional layer to form a two-layer slab; the unit cells are shown in the insets of Figures 5(c) and 5(f). The calculated transmission and reflection results of the effective models match very well with those of the real structures (seen in Figures 5(c) and 5(f)), even around the resonant frequencies where the effective parameters change rapidly. Thus we conclude that the symmetric functional layer design provides robust effective parameters and makes it possible to estimate the macroscopic electromagnetic behavior of multilayer MMs from the effective parameters of all the included functional layers. Conversely, it also provides a practical route to design multilayer MMs tailored for unprecedented applications.
Oblique Incidence Case
The implementation of MMs is not restricted to the normal-incidence condition. Due to the specific shape and orientation of MM structures, the effective parameters retrieved from the transmission and reflection coefficients of a MM slab are directly related to the incident fields; in principle, the retrieved complex effective parameters are not global material parameters. For instance, effective MM parameters for oblique incidence are determined by the incident angle and polarization, i.e., they are functions of the lateral wave vector [28].
For the oblique incidence case, conventionally designed multilayer MMs also have to accumulate enough layers to reach convergence of the effective parameters [28]. On the multilayer fishnet structures, we perform a comparison between the conventional design and the symmetric functional layer design. In Figure 6, we present the retrieved real part of the effective index for both polarizations at a 30-degree incident angle. The plane of incidence is spanned by the surface normal and one lateral axis, the incident angle is defined as the angle between the surface normal and the wave vector, and the electric field vectors of both polarizations are denoted by blue arrows. Clearly, the robustness of the effective parameters of the symmetric functional layer is unaffected in the oblique incidence situation for both polarizations. In particular, for the strong resonance mode induced by TM-polarized incidence, the symmetric functional layer design also provides robust effective parameters and avoids the hybridization that, in the conventional design, affects the convergence over a wide frequency range.
Furthermore, we investigate the symmetric functional layer for multilayer MM design and characterization in the oblique incidence case. If we try to build an effective model of a multilayer MM for oblique incidence, the crux is to determine the effective parameters for each slab. When a plane wave enters a multilayer system at an arbitrary angle of incidence, the lateral wave vector is preserved in all spatial regions. Therefore, a reasonable hypothesis is that each layer of the effective model should be a homogeneous medium with the effective parameters retrieved for the corresponding functional layer at the same lateral wave vector. It should be kept in mind, however, that this hypothetical effective model is only valid for multilayer MMs under this angle of incidence.
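Under the hypothesis just stated, the effective model at oblique incidence can be evaluated with the same transfer-matrix machinery used earlier, replacing the normal-incidence layer matrix by one built from the conserved lateral wavevector kx = k0 sin θ. A sketch, with our own normalization (the function name is hypothetical):

```python
import numpy as np

C0 = 3e8

def layer_matrix_oblique(eps, mu, d, freq, theta, pol):
    """Layer matrix for oblique incidence; theta is the incidence angle
    in vacuum [rad], pol is 'TE' or 'TM'."""
    k0 = 2 * np.pi * freq / C0
    kx = k0 * np.sin(theta)                     # conserved in every layer
    kz = np.sqrt(eps * mu * k0**2 - kx**2 + 0j) # normal wavevector in the layer
    # Admittance seen by the tangential fields, polarization-dependent.
    Y = kz / (k0 * mu) if pol == 'TE' else eps * k0 / kz
    delta = kz * d
    return np.array([[np.cos(delta), 1j * np.sin(delta) / Y],
                     [1j * Y * np.sin(delta), np.cos(delta)]])
```

The vacuum admittances on either side of the stack must be replaced by their oblique-incidence values in the same way (cos θ for TE, 1/cos θ for TM), after which the r and t formulas of the normal-incidence sketch carry over unchanged.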
The robustness of the effective parameters of the symmetric functional layer is also tested with the patch-fishnet structure in the oblique incidence case. The structure is exactly the same as that in Figure 4. Effective parameters of each functional layer are obtained at the same lateral wave vector as the full system. Figure 7 presents the transmission and reflection spectra for the effective model and the real structure at several different incident angles for both polarizations. Clearly, the effective model is equivalent to the real structure for both polarizations over a wide range of incident angles.
SRR and SRR/wire structures are typical MM building blocks whose resonant features vary considerably as the incident angle increases. A SRR/wire-fishnet two-functional-layer structure under oblique incidence is investigated to demonstrate the robustness of the effective parameters of the symmetric functional layer design. The structure is exactly the same as in Figure 5(f). The transmission and reflection spectra of the effective model and the real structure at incidence angles of 30 and 60 degrees are shown in Figure 8. It is clear that symmetric ring-resonator functional layers retain excellently robust effective parameters in oblique incidence cases, and the electromagnetic behavior of the real structure can be well characterized by the effective model.
Figure 8: Transmission and reflection spectra of the two-layer structure comprised of SRR/wire and fishnet functional layers (solid lines) and the corresponding effective model (dashed lines) at oblique incidence (30° and 60°) for both polarizations.
In addition, a three-functional-layer MM consisting of patch, SRR/wire, and fishnet structures is investigated to give a further demonstration. Each functional layer has the same geometric parameters as those given above. In Figure 9, we plot the transmission and reflection results for both polarizations at incidence angles of 30 and 60 degrees. The effective parameters of the symmetric functional layers still show great robustness in such complex multilayer structures. The effective model results match very well with those of the real system throughout the frequency range we investigated, especially for the TM polarization. The SRR/wire structures have a strong electric resonance for the TE polarization, which causes the resonance band to shift in frequency with increasing angle of incidence. Even so, the effective model remains in excellent agreement with the real structure.
Conclusion
We have proposed a symmetric functional layer design with robust effective parameters for multilayer MMs. The robustness of the effective parameters of the symmetric functional layer is examined in multilayer MMs comprising several layers of different kinds of metallic building blocks, for both normal and oblique incidence. In general, the symmetric functional layer provides the opportunity to straightforwardly design and characterize the electromagnetic properties of multilayer MMs by means of effective parameters.
Figure 1 :
Figure 1: (a) and (b) Schematic of a conventional fishnet structure with 5 metallic layers. (c) Real part of the effective index for 2-layer, 3-layer, 4-layer, and 5-layer fishnet structures retrieved from the simulated scattering parameters. (d) Sketch of the corresponding simplified multilayer system. (e) Real part of the retrieved effective index for simplified multilayer systems containing 2, 3, 4, 7, 9, 11, and 14 layers of medium B.
Figure 2 :
Figure 2: (a) Schematic of a unit cell of the symmetric fishnet functional layer and (b) sketch of a unit cell of the 5-functional-layer fishnet structure. (c) Real part of the effective index for 2-functional-layer, 3-functional-layer, 4-functional-layer, and 5-functional-layer fishnet structures retrieved from the simulated scattering parameters. (d) Transmission and reflection spectra of the real 5-functional-layer fishnet slab and of a hypothetical homogeneous slab with the thickness of the 5-layer fishnet structure and the effective parameters of the one-layer structure. The results are calculated for normal incidence of a plane wave, and the spectra are normalized to the intensity of the incident wave.
Figure 5 :
Figure 5: (a) and (d) Schematics of the unit cells of the symmetric SRR (a) and SRR/wire (d) functional layers. (b) and (e) Retrieved effective parameters of the symmetric SRR (b) and SRR/wire (e) functional layers. (c) and (f) Transmission and reflection spectra of the real SRR (SRR/wire)-fishnet 2-functional-layer structure (lines) and of the effective model (circles).
Figure 6 :
Figure 6: The definition of the polarizations is shown in the inset of (b). Real part of the effective index of conventional fishnet structures at 30-degree oblique incidence for TE (a) and TM (b) polarizations. The corresponding results for the symmetric fishnet structures are shown in (c) and (d).
"Materials Science"
] |
Probabilistic Optimal Power Flow Solution Using a Novel Hybrid Metaheuristic and Machine Learning Algorithm
This paper proposes a novel hybrid optimization technique based on a machine learning (ML) approach and transient search optimization (TSO) to solve the optimal power flow problem. First, the study aims at developing and evaluating the proposed hybrid ML-TSO algorithm. To do so, the optimization technique is implemented to solve the classical optimal power flow problem (OPF), with an objective function formulated to minimize the total generation costs. Second, the hybrid ML-TSO is adapted to solve the probabilistic OPF problem by studying the impact of the unavoidable uncertainty of renewable energy sources (solar photovoltaic and wind turbines) and time-varying load profiles on the generation costs. The evaluation of the proposed solution method is examined and validated on IEEE 57-bus and 118-bus standard systems. The simulation results and comparisons confirmed the robustness and applicability of the proposed hybrid ML-TSO algorithm in solving the classical and probabilistic OPF problems. Meanwhile, a significant reduction in the generation costs is attained upon the integration of the solar and wind sources into the investigated power systems.
Introduction
The optimal power flow (OPF) problem is classified as a nonlinear optimization problem, and it can be used as a power system tool that aims to determine the best possible values of the decision variables for a given objective function while satisfying the system constraints [1,2]. In the last decade, several solution approaches have been presented in the literature for the OPF problem, as reviewed in detail in [3,4]. In the same vein, different objective functions have been studied, such as the minimization of the generators' real power costs, daily operational costs, production emission rate, and power losses [5,6]. However, an optimal decision for the dispatchable distributed generation units has not yet been achieved [7]. Previously, deterministic approaches were used to solve the classical OPF problem without integrating renewable energy sources (RESs) [8]. Motivated by the ambitious new climate change policies and regulatory schemes that encourage the diversification of energy sources for energy security and carbon emission mitigation purposes, the high penetration of RESs into grids has been promoted [9,10]. However, this results in uncertainties and statistical changes of the parameters [11,12]. Consequently, the probabilistic power flow (PPF) [13,14] or probabilistic OPF (POPF) problems should be solved from a probabilistic point of view rather than with the traditional deterministic approaches, as reviewed in detail by Ramadhani et al. [15] and Prusty and Jena [16,17]. The stochastic variations in wind speed and solar irradiance motivate researchers to seek sophisticated statistical models for simulating solar and wind power generation.
The major classes of PPF solution methods are the analytical, approximate, numerical, and heuristic methods, as categorized in the large reviews [18,19]. In the analytical method, a relation between the input and the output is obtained, and the output statistics are then calculated directly from the input variables. For example, [20] proposed a data-driven approach for probabilistic forecasts of the distribution grid state and the PPF solution.
Ref. [21] developed a fast specialized point estimate method, compared against the Monte Carlo simulation (MCS) approach, to solve the POPF for the IEEE 69-bus distribution system in the presence of RESs. Ref. [22] presented a clustering-based analytical method for PPF and interval power flow, in which the uncertainties of load demands and wind power outputs were adequately handled. Ref. [23] introduced a new framework based on the relevance vector machine (RVM), compared against the Newton-Raphson method, to calculate the PPF and the multivariate distribution of wind speed, considering uncertainties associated with multi-dimensional wind turbine (WT) farms and studying the correlation between wind speeds in different regions. The linearization in the analytical method makes the power flow calculations simpler; however, the accuracy of the PPF solution is poor. The approximate method computes the moments of the output from the PDF of the input variables; although it solves the PPF problem without large-scale computations, the computational burden increases as the number of stochastic variables increases. The third type, the MCS, is a frequently utilized numerical method for the solution of the PPF problem [24] and is a more reliable method to solve it. According to the recent review by Skolfield and Escobedo [19], advances in metaheuristic algorithms for power system applications have proven to offer superior advantages over conventional methods in solving the classical and stochastic OPF [18].
Considering the scope of the current study, the review analysis focuses only on the most recent studies on PPF or POPF. For example, [25] presented differential evolutionary particle swarm optimization to solve the multi-objective (fuel cost, emission, and prohibited operating zones of thermal generators) POPF problem, in which the uncertain solar irradiance and wind speed were simulated via log-normal and Rayleigh probability distributions, validated on the IEEE 30, 57, and 118 test systems. Ref. [26] proposed a novel barnacles mating optimizer (BMO) to solve the POPF with stochastic solar power to minimize either the generation cost, power loss, voltage deviation, emissions, or combined cost and emissions of power generation for the IEEE 30-bus system. Similarly, the BMO algorithm was used by [27], while that study incorporated a stochastic small hydro power generator in addition to wind and solar. Ref. [28] developed a hybrid algorithm that combines the Moth Swarm Algorithm (MSA) and the Gravitational Search Algorithm (GSA), with the Weibull Distribution Function (WDF) used to model the alternating nature of wind farms integrated with the studied power systems. Moreover, [29] introduced a hybrid methodology based on the differential evolution (AGDE) algorithm and the Fitness-Distance Balance (FDB) method to solve the POPF involving wind and solar energy systems on the IEEE 30-bus test system. Ref. [30] developed a combination of phasor particle swarm optimization and a gravitational search algorithm, namely a hybrid PPSOGSA algorithm, to calculate the POPF for the IEEE 30-bus system, considering the forecasted WT and PV power generation as uncertain variables. Most recently, Ref. [31] came up with a novel algorithm called the Heap Optimization Algorithm (HEAP) for OPF solutions when solar and wind generators are added; the method was validated on three standard systems (IEEE 30, 57, and 118) and compared with a genetic algorithm. Ref. [32] solved the POPF problem for the modified IEEE 39-bus system while incorporating the uncertainty-related expense incurred due to the stochastic behavior of PV and WT generation, in which the solar radiation and wind speed are modelled using WPD and normal distributions and the uncertainty is simulated using the MCS method. Ref. [33] employed a hybrid Point Estimate Method (PEM)/Ant Lion Optimization (MALO) approach for handling load, wind, and solar uncertainties and solving the multi-objective POPF for a modified IEEE 33-bus islanded microgrid system. Ref. [34] developed a general, computationally efficient copula-polynomial chaos expansion to solve the PPF, including both linear and nonlinear relations of stochastic wind and PV power generation; the Rosenblatt transformation is employed to convert the correlated variables into independent ones while keeping the dependence structure in mind, and the systems used are the IEEE 57- and 118-bus systems. Ref. [35] presents a PPF solution based on a scaled unscented transformation, applied to AC/DC networks and tested on the modified IEEE 1354-bus (PEGASE) system. Ref. [36] introduces a PPF solution including hundreds of uncertain variables, using Zhao's point estimate technique; as the number of PPF inputs increases, the computational burden increases linearly.
The studied cases are investigated on a modified IEEE 118-bus system. Ref. [37] presents a comparative analysis of Monte Carlo simulation with the Latin Hypercube sampling method and unscented transformation methods; the comparisons are made against the results achieved by the classical Monte Carlo simulation method. The test systems used are the IEEE 14- and 30-bus systems, and the results confirm the superiority of the unscented transformation method in terms of speed and reliability over the other two methods. Table 1 summarizes the optimization methods used in this literature.
Table 1 lists, for each reference, the approach used and its main advantage:
• [19] Data-driven approach — no need for the inversion of the power-flow equations' Jacobian.
• [20] Fast specialized point estimate method compared to the MCS approach — uses deep-rooted linear programming commercial solvers.
• [22] New framework based on the RVM compared to the Newton-Raphson method — guaranteed to yield a solution if one exists.
• [24] Differential evolutionary particle swarm optimization — combines the advantages of PSO and DE.
• [30] HEAP algorithm — flexible and applicable.
• [32] Hybrid PEM and MALO — the distribution curve of a variable is plotted from the obtained moments, and the probability of occurrence of the variable over a specific range can be determined.
• [33] General computationally efficient copula-polynomial chaos expansion and Rosenblatt transformation — faster in terms of computational time than other methods.
This paper introduces a new hybridization of the self-organizing map (SOM) machine learning technique and the transient search optimization (TSO) algorithm. ML provides systems with the ability to learn and then estimate unknown outputs [38,39]. The SOM technique is considered one of the important artificial neural network (ANN) architectures and features the ability to process data for visualization. It is classified as an unsupervised ANN and is utilized for knowledge extraction in order to determine the best areas, with the objective of reducing the exploration field. The TSO algorithm was first proposed in 2020 by Qais, Hasanien, and Alghuwainem [38,40] and was inspired by the transient behavior of first- and second-order circuits, which include energy storage elements (e.g., inductors and/or capacitors). Using the ML-TSO in optimization, one reaches the global solution efficiently without getting stuck in a local solution. The main contributions of this paper are as follows:
• Proposing a novel hybrid optimization approach based on the ML technique and the TSO algorithm, namely ML-TSO, for the optimal solution of the classic OPF and POPF problems.
• Formulating the ML-TSO algorithm to consider the integration of conventional generators, renewable sources (PV panels and WTs), and time-varying load profiles.
• Introducing statistical models for the PV panels and WTs using the Beta and Weibull distribution functions based on real historical data, which allows the generated electrical power of the RESs to be calculated accurately while solving the PPF problem.
• Validating the robustness of the proposed hybrid algorithm, built in MATLAB, on the IEEE 57- and 118-bus test systems, with a comparative analysis against the most recent literature.
The rest of the paper is organized as follows: the problem formulation is given in Section 2; the modeling of the WT and PV generation systems is presented in Section 3; Section 4 describes the proposed hybrid ML-TSO algorithm; the simulation results are demonstrated in Section 5; and, finally, the paper is concluded in Section 6.
Problem Formulation
Considering the study objectives, the proposed hybrid ML-TSO optimization algorithm is first implemented to solve the classical OPF, with the objective function formulated to minimize the total generation costs. For a realistic solution of the POPF problem, the algorithm structure is then adapted to consider the unavoidable uncertainty of solar PV and WT generators and time-varying load profiles.
The Classical OPF Problem
In this analysis, the OPF is solved for the IEEE 57- and 118-bus systems with fixed demand and conventional generation only. The objective function and the constraints of the problem are explained in the following subsections.
The Objective Function
Herein, the objective function is formulated as the sum of the total costs of the power generated by the conventional generators. The defined cost function is a quadratic function of the active powers generated in the system. It is mathematically expressed in Equations (1) and (2) [41].
where J represents the objective function, i.e., the total power generation cost; NG refers to the number of conventional generators in the system under study; and P_Gi,h refers to the power generated by the generator at bus 'i' at hour 'h'. For part 'B' of the problem formulation, this objective function is recalculated hourly in steps of h = 1.
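Since Equations (1) and (2) are not reproduced in this excerpt, a minimal sketch of the usual quadratic generation-cost objective consistent with the symbols above is given here; the cost coefficients a_i, b_i, and c_i are the standard coefficients of generator i and are an assumption, not values taken from the paper:

$$ J = \sum_{h} \sum_{i=1}^{NG} \left( a_i + b_i\, P_{Gi,h} + c_i\, P_{Gi,h}^{2} \right) $$

For the classical (part 'A') OPF, the sum over h collapses to a single snapshot, whereas for part 'B' it runs over the 24 hourly intervals.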
The System Constraints
The OPF problem has equality constraints on the active and reactive power, in addition to the thermal limits of the transmission lines, as expressed in Equations (3), (4), and (8). There are also inequality constraints representing the limits on the active and reactive power of each generator, as well as the bus voltage constraints, given in Equations (5)-(7) [41].
where P_injk,h and Q_injk,h represent the active and reactive power injections into bus 'k'. In part 'B' of the problem formulation, when the OPF is repeated hourly, the symbol 'h' refers to the hour at which the simulation is run. The symbols V_k,h and V_l,h represent the voltages of buses 'k' and 'l'. G_kl refers to the conductance and B_kl to the susceptance of the branch between buses 'k' and 'l'. δ_k,h and δ_l,h are the voltage angles of buses 'k' and 'l', and 'h' is the simulation hour in part 'B'.
where P_Gmin is the minimum allowable active power that can be generated by generator 'i' and P_Gmax is its maximum active power; Q_Gmin and Q_Gmax are the minimum and maximum reactive power limits of the 'i'th generator; and P_limkl denotes the thermal limit of the transmission line between buses 'k' and 'l'.
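For readers without access to the numbered equations, a hedged reconstruction of the standard AC power-flow and limit constraints matching the symbols above is given below; the notation may differ from Equations (3)-(8) of the original, and the injection terms follow the textbook form:

$$ P_{injk,h} = V_{k,h}\sum_{l} V_{l,h}\left[ G_{kl}\cos(\delta_{k,h}-\delta_{l,h}) + B_{kl}\sin(\delta_{k,h}-\delta_{l,h}) \right] $$
$$ Q_{injk,h} = V_{k,h}\sum_{l} V_{l,h}\left[ G_{kl}\sin(\delta_{k,h}-\delta_{l,h}) - B_{kl}\cos(\delta_{k,h}-\delta_{l,h}) \right] $$
$$ P_{Gmin} \le P_{Gi,h} \le P_{Gmax}, \qquad Q_{Gmin} \le Q_{Gi,h} \le Q_{Gmax}, \qquad V_{min,k} \le V_{k,h} \le V_{max,k}, \qquad |S_{kl,h}| \le P_{limkl} $$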
OPF with RES Considering the Uncertainty (Probabilistic OPF Problem)
The OPF formulation is adapted to consider the unavoidable uncertainty of solar PV and WT generators and time-varying load profiles [42,43]. For an extensive analysis, four different cases of the OPF problem are constructed and solved: (i) case 1: the load changes hourly through the day and no RESs are integrated into the system; (ii) case 2: the load changes hourly and PV is integrated into the system; (iii) case 3: the load changes hourly and WT is integrated into the system; (iv) case 4: the load changes hourly and both solar PV and WT are integrated into the system. All these cases are studied to investigate how much the generation costs change with the load and with the connection of renewable energy sources to the systems under study. The addition of the RESs increases the model complexity of the OPF problem.
In the latter three cases with PV and/or WT integration, the output powers of these RESs are modeled as non-dispatchable. To do so, the generation values of the PV and WT are applied to the system as forecasted values at certain preset buses, where they supply part of the system demand; this correspondingly reduces the amount of electrical power required from the conventional generators. The time interval for repeating the OPF solution is set to one hour over a day. In every time interval 'h' (1 h), there are 6 wind speed readings and 60 solar irradiance readings. The PV and the WT are assumed to generate only active power, with no reactive power generation. The forecasted power generation of the PV and WT sources is calculated according to probabilistic models of the wind speed and the solar irradiance, together with some technical characteristics of each source [31].
The classic OPF and POPF are solved using MATLAB and the MATPOWER library. The computer used for this study is a Lenovo IdeaPad 330 (15", Intel) with an 8th Gen Intel Quad-Core i7-8550U CPU. The proposed ML-TSO algorithm generates random solutions and guarantees that these candidate solutions lie between the maximum and minimum limits defined in Equation (5). The constraints defined in Equations (3), (4), and (6) are satisfied by the Newton-Raphson power flow. Moreover, penalties are added to the main objective function, defined in Equation (2), to penalize any limit violation of the other dependent variables and to mark such a candidate solution as infeasible. These penalties are defined in Equation (9).
where K_v and K_l refer to predefined large positive numbers, 9 × 10^15 and 9 × 10^13, respectively; Sa_j refers to the apparent power of branch 'j'; and Sa_rated,j refers to the maximum apparent power of branch 'j'.
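A minimal sketch of such a penalty-augmented objective is shown below. Only the penalty weights K_v and K_l are taken from the text; the function and variable names are illustrative, and the quadratic form of the penalties is an assumption rather than a reproduction of the paper's MATLAB implementation.

```python
import numpy as np

K_V = 9e15   # penalty weight for bus-voltage violations (value from the text)
K_L = 9e13   # penalty weight for branch-loading violations (value from the text)

def penalized_cost(gen_cost, v_bus, v_min, v_max, s_branch, s_rated):
    """Return the generation cost plus quadratic penalties for constraint violations."""
    v_viol = np.maximum(v_bus - v_max, 0.0) + np.maximum(v_min - v_bus, 0.0)
    s_viol = np.maximum(s_branch - s_rated, 0.0)
    return gen_cost + K_V * np.sum(v_viol ** 2) + K_L * np.sum(s_viol ** 2)
```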
Modeling of the PV and WT Generation
In this paper, the integration of wind turbines and PV generation is considered in the PPF solution. However, as mentioned earlier, because the RES power generation depends on the meteorological conditions, the power generation profiles of the WT and PV are highly uncertain [44,45]. Therefore, it is important to model the PV and WT generators accurately, as follows:
Modelling of PV Power Generation
The active power generated by the PV panel depends on the solar irradiance (S) in W/m^2 and can be calculated as in [29], where P_pvn refers to the nominal power of the PV panel, S_stc represents the solar irradiance at standard conditions, and R_c defines a certain irradiance point. The solar irradiance is modelled by the Beta PDF f_s(S); in Equation (11), S is expressed in kW/m^2, α and β are the shape parameters of the Beta PDF, and Γ is the Gamma function.
The parameters α and β are calculated using the mean and the standard deviation of the solar irradiance observations during a periodic time h. The Beta PDF is sampled into N_s samples, and the probabilistic solar irradiances corresponding to these samples are then used to calculate the forecasted active power generation of the PV panel, where S_h_g represents the solar irradiance of sample 'g' at hour 'h' and f_s(S_h_g) represents the probability of that irradiance sample.
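A minimal numerical sketch of this procedure is given below. The method-of-moments fit for α and β and the piecewise PV power curve are commonly used forms and are assumptions here, since Equations (10)-(15) are not reproduced; S_stc = 1000 W/m^2 and R_c = 120 W/m^2 follow the values quoted later in the paper, while P_pvn is a placeholder rating.

```python
import numpy as np

def beta_shape_params(mu, sigma):
    """Method-of-moments Beta shape parameters from the hourly mean/std of irradiance (kW/m^2)."""
    k = mu * (1.0 - mu) / sigma ** 2 - 1.0
    return mu * k, (1.0 - mu) * k                 # alpha, beta

def pv_power(S, P_pvn=100.0, S_stc=1000.0, R_c=120.0):
    """Piecewise PV output (kW) versus irradiance S (W/m^2)."""
    if S < R_c:
        return P_pvn * S ** 2 / (S_stc * R_c)
    return P_pvn * S / S_stc

def forecast_pv(mu, sigma, n_samples=1000, rng=np.random.default_rng(0)):
    """Expected hourly PV power averaged over Beta-distributed irradiance samples."""
    alpha, beta = beta_shape_params(mu, sigma)
    S_kw = rng.beta(alpha, beta, n_samples)       # sampled irradiance in kW/m^2
    return np.mean([pv_power(1000.0 * s) for s in S_kw])

# Example: mean irradiance 0.55 kW/m^2 with std 0.12 kW/m^2 at a given hour
print(forecast_pv(0.55, 0.12))
```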
Modelling of WT Power Generation
The power generated by the wind turbine depends on the wind speed (v) and can be calculated as in [30], where P_wtn represents the nominal wind turbine power, v_n refers to the nominal wind speed, and v_ci and v_co are the cut-in and cut-off wind speeds.
The wind speed is modelled by the Weibull PDF f_v(v), where C and k represent the scale and shape parameters of the Weibull PDF and r represents a uniform random number distributed between 0 and 1. The scale and shape parameters, C and k, are obtained from the mean µ_h_v and the standard deviation σ_h_v of the wind speeds measured at hour 'h'. The Weibull PDF is sampled into N_v samples, and the probabilistic wind speeds corresponding to these samples are then used to calculate the forecasted active power generation of the wind turbine, where v_h_g represents the wind speed of sample 'g' at hour 'h' and f_v(v_h_g) represents the probability of that wind speed sample.
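Analogously to the PV case, the following sketch illustrates the Weibull-based wind forecast. The shape-parameter approximation k ≈ (σ/µ)^(−1.086) and the linear segment of the power curve are standard choices assumed here; the cut-in, nominal, and cut-off speeds (2.7, 10, and 25 m/s) are those stated later in the paper, while P_wtn is a placeholder rating.

```python
import numpy as np
from scipy.special import gamma

def weibull_params(mu, sigma):
    """Weibull scale C and shape k from the hourly mean/std of wind speed (Justus approximation)."""
    k = (sigma / mu) ** (-1.086)
    C = mu / gamma(1.0 + 1.0 / k)
    return C, k

def wt_power(v, P_wtn=50.0, v_ci=2.7, v_n=10.0, v_co=25.0):
    """Piecewise wind-turbine power curve (kW)."""
    if v < v_ci or v > v_co:
        return 0.0
    if v < v_n:
        return P_wtn * (v - v_ci) / (v_n - v_ci)
    return P_wtn

def forecast_wt(mu, sigma, n_samples=1000, rng=np.random.default_rng(0)):
    """Expected hourly WT power averaged over Weibull-distributed wind speeds."""
    C, k = weibull_params(mu, sigma)
    v = C * rng.weibull(k, n_samples)
    return np.mean([wt_power(vi) for vi in v])

# Example: mean wind speed 8 m/s with std 2.5 m/s at a given hour
print(forecast_wt(8.0, 2.5))
```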
Proposed Solution Method
This paper proposes a new hybrid ML-TSO optimization algorithm in order to combine the features of ML and TSO and to introduce an efficient tool for finding optimal solutions to the classic OPF and POPF problems.
TSO Algorithm
The transient search optimization algorithm is inspired by the transient behavior of circuits that include energy storage elements in their configuration [46]. The transient behavior depends on the circuit order, i.e., whether it is a first-order or a second-order circuit; the order is determined by the number of energy storage elements (inductors and capacitors) in the circuit. The overall response consists of a transient part and a steady-state part.
For the first-order circuits, the differential equation describing the transient behavior can be expressed as follows [46].
This equation can be solved for x(t) as a function of time, where x(t) represents the capacitor voltage or inductor current, τ is the time constant, K is a constant that depends on the initial condition, and x(∞) is the steady-state value of x.
The second-order differential equation can likewise be solved, where α is the damping coefficient, ω_0 and f_d are the resonant and damped frequencies, and B_1 and B_2 are arbitrary constants. The responses of the circuits are shown graphically in Figure 1.
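Because the solved expressions did not survive in this excerpt, the standard first-order and underdamped second-order responses consistent with the listed symbols are reproduced below as an assumption of their likely form:

$$ x(t) = x(\infty) + K\, e^{-t/\tau} $$
$$ x(t) = x(\infty) + e^{-\alpha t}\left[ B_1 \cos(2\pi f_d t) + B_2 \sin(2\pi f_d t) \right] $$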
The TSO Inspiration
Similar to other metaheuristic optimization algorithms, the first step of the TSO is to generate random agents whose values lie between predefined boundaries [47]. The algorithm then searches for the optimal solution through exploration and exploitation steps: the exploration step is inspired by the oscillatory response of second-order circuits, the exploitation step is inspired by the exponential decay of first-order circuits, and the optimal solution is reached after a predefined number of iterations. Mathematically, the exploration and exploitation steps can be expressed in terms of T, C_1, r_1, r_2, and r_3, which refer to random numbers; Y_l and Y*_l represent the population and the best population up to the lth iteration, respectively; and k refers to a counter that starts from 0. The algorithm stops when the iteration count reaches L_max. Y*_l corresponds to x(∞). Additionally, 'T' is a variable that lies between −2 and 2 and is used to balance the exploration and exploitation processes; the effect of changing 'T' is shown in Figure 2. The pseudo-code of the TSO is provided in Algorithm 1. Moreover, modifications to the existing TSO algorithm are made using the Levy [48] and Weibull distribution functions [49], as described below.
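A minimal sketch of one TSO iteration is given below, assuming the commonly published form of the update rules; the exact constants and the branch condition of Equations (25)-(28) are not reproduced in this excerpt, so the sketch is illustrative rather than definitive.

```python
import numpy as np

def tso_step(Y, Y_best, l, L_max, lb, ub, rng=np.random.default_rng()):
    """One TSO update: exploration (oscillatory) or exploitation (exponential decay) per agent."""
    a = 2.0 - 2.0 * l / L_max                         # decreases linearly from 2 to 0
    Y_new = np.empty_like(Y)
    for i in range(len(Y)):
        r1, r2, r3 = rng.random(3)
        T = 2.0 * a * r1 - a                          # balancing variable in [-2, 2]
        C1 = round(r2) * a * r3 + 1.0                 # assumed form of the C1 coefficient
        if r1 < 0.5:                                  # exploitation: first-order-like decay
            Y_new[i] = Y_best + (Y[i] - C1 * Y_best) * np.exp(-T)
        else:                                         # exploration: second-order-like oscillation
            Y_new[i] = Y_best + np.exp(-T) * (np.cos(2 * np.pi * T) + np.sin(2 * np.pi * T)) \
                       * np.abs(Y[i] - C1 * Y_best)
    return np.clip(Y_new, lb, ub)                     # keep agents inside the search bounds
```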
Levy Function
Part of the population is modified using the Levy function with the factor 'LF', where v and u are random numbers in [0, 1].
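Since Equation (29) is not reproduced here, the widely used Mantegna construction of the Levy-flight factor with index 1.5 is sketched below as an assumption; note that it draws u and v from normal distributions, whereas the text states they are uniform in [0, 1], so the paper's exact formula may differ.

```python
import numpy as np
from math import gamma, sin, pi

def levy_factor(size, beta=1.5, rng=np.random.default_rng()):
    """Mantegna-style Levy-flight step factor LF with stability index beta."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)
```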
Weibull Distribution Function
Another part of the population is modified using the Weibull distribution function with the factor W_D (Equation (30)), where v_1 and η refer to the Weibull distribution constants, whose values are 2 and 1, respectively.
The modified TSO populations, using the Levy and Weibull distribution functions, are calculated as shown in Equation (31). The population of the next iteration is updated by one of four rules, according to a random number r_1. When r_1 is less than 0.25, the population is modified using the Levy function based on the current best population, the parameter 'P', and the parameter 'CF'. When r_1 is between 0.25 and 0.5, the population is modified using the Weibull function based on the current best population and stepsize2_l. When r_1 is between 0.5 and 0.75, the population is modified using the Levy function based on the current best population and stepsize3_l. Finally, when r_1 is greater than 0.75, the population is modified using the Weibull function based on the current best population and stepsize4_l.
where P is a constant set to 0.5 in this research and CF is a constant that depends on the current iteration number and the maximum number of iterations. The stepsize_l is calculated from the current population 'Y_l' and the best population obtained so far, 'Y*_l'. 'W_D' is the Weibull distribution function, whose inputs are the dimension of the problem, the population size, and the location, shape, and scale parameters of the Weibull distribution, set to 0, 2, and 1, respectively. 'LF' is the Levy function, whose inputs are the dimension of the problem, the population size, and a constant set to 1.5.
Machine Learning Approach
Self-organizing maps (SOMs) are an important technique in the machine learning approach [50]. SOMs can be classified as unsupervised neural networks and are utilized as a knowledge-extraction technique to determine the best areas and thereby reduce the exploration field [51,52]. Multidimensional data can be analyzed with SOMs. They have advantages over other knowledge-extraction methods, such as preserving the original organization of the data elements. Consequently, SOMs are commonly used as an approach for data reduction.
The SOM comprises two layers: the first is called the input layer, and the second is the competitive layer, which involves a number of neural units ordered on a 2-D lattice. The construction of the SOM is shown in Figure 3. The SOM has an output lattice of h × h neural elements; the input is a vector x of dimension n, and the data of vector x are connected to each neural element. Every element of the lattice contains a weight vector with n units. The training of a SOM includes the weight adjustment and the neighborhood definition [50]. The learning steps contain two phases: the similarity calculation and the weight adaptation. First, the initial weights are small and random. After that, an input vector is presented to the neural network, and the weights of the neural elements are adapted. The effect of data reduction and clustering during the training process is visualized in Figure 4.
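The following is a minimal sketch of the described SOM training loop, assuming the standard Kohonen update rule; the lattice size, learning-rate schedule, and Gaussian neighborhood are illustrative choices, not the settings of the paper.

```python
import numpy as np

def train_som(X, h=10, n_iter=500, lr0=0.5, sigma0=3.0, rng=np.random.default_rng(0)):
    """Train an h x h SOM on data X (n_samples x dim) and return the weight lattice."""
    n, dim = X.shape
    W = rng.random((h, h, dim)) * 0.01                       # small random initial weights
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(h), indexing="ij"), axis=-1)
    for t in range(n_iter):
        lr = lr0 * (1.0 - t / n_iter)                        # decaying learning rate
        sigma = sigma0 * (1.0 - t / n_iter) + 1e-3           # shrinking neighborhood radius
        x = X[rng.integers(n)]                               # pick a random training vector
        d = np.linalg.norm(W - x, axis=-1)
        winner = np.unravel_index(np.argmin(d), d.shape)     # best-matching (winner) unit
        dist2 = np.sum((grid - np.array(winner)) ** 2, axis=-1)
        hfun = np.exp(-dist2 / (2.0 * sigma ** 2))           # Gaussian neighborhood function
        W += lr * hfun[..., None] * (x - W)                  # weight adaptation step
    return W
```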
Proposed Hybrid ML-TSO
In this paper, a new hybridization between a modified version of the SOM and the TSO is made. First, an initial random population is assumed, and the machine learning part of the optimization code is applied: small random values are assigned to all weights, the initial learning rate and topological structure are determined, and a number of iterations for the learning process is set. During training, an input vector is selected randomly from the training set, the winner neuron is defined as the one nearest to the input, and the weights of the winner unit and its neighbors are updated. Then, the fitness function is calculated, and it converges towards the best solution found by the SOM; the populations are updated correspondingly. Employing the SOM before starting the optimization with the TSO makes the convergence of the TSO more efficient. The population of the TSO is then modified and updated by the Levy function and the Weibull distribution function, with four possible update rules selected according to a random number. This process is repeated until the maximum number of iterations is reached. At every iteration, the best solution found so far is compared with the fitness calculated in the current iteration; if the fitness of the current iteration is better, it replaces the saved best solution. It should be mentioned that metaheuristic optimization methods are significantly affected by the initialization process. The flow chart of the proposed hybrid ML-TSO algorithm is provided in Figure 5.
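Putting the pieces together, the outline below shows how the SOM pre-step and the TSO loop could be coupled in code. It reuses the illustrative functions sketched earlier (train_som and tso_step) and is a high-level assumption about the control flow, not a reproduction of the authors' MATLAB implementation.

```python
import numpy as np

def ml_tso(objective, lb, ub, pop_size=30, L_max=600, rng=np.random.default_rng(0)):
    """Hybrid ML-TSO outline: the SOM narrows the search region, then the TSO refines the solution."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    Y = lb + (ub - lb) * rng.random((pop_size, dim))      # random initial agents within the limits
    W = train_som(Y)                                      # SOM prototypes cluster the agents
    Y = np.clip(W.reshape(-1, dim)[:pop_size], lb, ub)    # re-seed agents from SOM prototypes
    fitness = np.array([objective(y) for y in Y])
    Y_best, f_best = Y[fitness.argmin()].copy(), fitness.min()
    for l in range(L_max):
        Y = tso_step(Y, Y_best, l, L_max, lb, ub, rng)    # TSO exploration/exploitation step
        fitness = np.array([objective(y) for y in Y])
        if fitness.min() < f_best:                        # keep the best solution found so far
            f_best, Y_best = fitness.min(), Y[fitness.argmin()].copy()
    return Y_best, f_best
```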
Results and Discussion
In this section, the results and discussion of the main findings are comprehensively demonstrated in three subsections. The first subsection presents the classical OPF problem; it aims to examine, analyze, and evaluate the performance of the newly developed hybrid ML-TSO optimization method in solving the OPF problem compared to other methods in the existing literature, and it shows how applicable the ML-TSO method is to further optimization problems in the field of power systems. The validation systems adopted in this study are the standard IEEE 57-bus and IEEE 118-bus test systems. The second subsection presents the simulation results of the POPF problem after integrating the RESs of PV and/or WT; the uncertainties caused by the random variability of the load demand and the stochastic behavior of the solar and wind power are adequately handled and modelled, and the effect of the RES integration on the overall operating costs of the power system is shown. The final subsection provides a statistical analysis of the numerical optimization results obtained for the OPF in the case of the 57- and 118-bus systems. The main specifications of the adopted test systems are listed in Table 2, including the number of buses in each system, the number of generators, the number of branches, the number and capacity of the connected loads, and the system losses. The hyperparameters of the machine learning code are the number of neurons and the number of sampling neurons; they are set to 10 for the IEEE 57-bus system and to 5 for the 118-bus system, selected by trial and error.
The GA used in this study is the conventional GA. The mutation operator is set to 10% of the population size, the crossover operator to 65% of the population size, and the selection process depends on the uniform distribution function. Table 3 summarizes the selected type and parameters of the GA used in this study.
• GA type: conventional GA
• Mutation operator: 10% of the population size
• Crossover operator: 65% of the population size
• Selection process: based on the uniform distribution function
Regarding the PSO algorithm, the variant used is the global-best PSO. The inertia coefficient is set to 1, the damping ratio of the inertia coefficient to 0.99, and the personal and social acceleration coefficients to 2. The swarm size is set to 15.
The following Table 4 summarizes the selected type and parameters of the PSO used in this study.
• PSO type: global-best PSO
• Inertia coefficient: 1
• Damping ratio of the inertia coefficient: 0.99
• Personal acceleration coefficient: 2
• Social acceleration coefficient: 2
Classical Optimal Power Flow (Base Case)
The results obtained by the ML-TSO are compared with the solutions achieved by single optimization algorithms such as the GA and PSO. The stopping criterion for all algorithms is the number of iterations, which is set to 600. The comparisons in Tables 5 and 6 list the values of the design variables, i.e., the output power required from each conventional generating unit in each system to meet the network demand, together with the fuel cost needed to operate the generators in each system. Furthermore, Figures 6 and 7 show the convergence performance of the proposed hybrid algorithm in comparison with the other methods while solving the OPF problem; clearly, the proposed hybrid ML-TSO obtains the optimal results in both applied cases. In the case of the IEEE 57-bus system, the PSO reached the worst result after 600 iterations. The GA and the proposed ML-TSO reached close results, but the ML-TSO result is better: the ML-TSO outperformed the GA by 0.0441% and the PSO by 0.93%. The ML-TSO convergence performance is also better, as it needed about 100 iterations to settle, whereas the GA needed about 450 iterations.
With respect to the IEEE 118-bus system, the GA reached the worst result after 600 iterations, while the proposed ML-TSO method reached the best result: the ML-TSO result is better than that of the GA by 6.4% and better than that of the PSO by 2.56%. Hence, the ML-TSO convergence performance is better, as it needed about 300 iterations to settle.
In general, the convergence of the cost function curves in the two standard systems using the ML-TSO method is fast and smooth.
Probabilistic OPF with RESs Uncertainties and Time-Varying Loads
The proposed hybrid ML-TSO algorithm has been applied to solve the POPF for the modified IEEE 57- and 118-bus test systems. The RESs (PV and/or WT) are integrated at certain buses of the two test systems, as given in Table 7, while the hourly load demand variation based on typical-day forecasting is taken into account. The hourly active and reactive power demand variations are given in Figures 8 and 9 for the 57- and 118-bus systems, respectively. The active output power of the PV and WT generators varies through the day according to the irradiance and wind speed profiles [53,54], so in this study the uncertainties of the renewable energy resources [55] are considered when forecasting the hourly active output power of the solar and wind systems. Different cases of POPF are evaluated in this scenario. In the first case, the OPF solution is evaluated with only the hourly variable load and no RESs integrated. In the second and third cases, the OPF solution is performed with the integration of either PV or WT. Finally, the proposed hybrid ML-TSO is implemented for the POPF solution with both PV and WT added to the studied test systems. The nominal, cut-in, and cut-off wind speeds are assumed to be 10 m/s, 2.7 m/s, and 25 m/s, respectively. The generated active power of the PV system is calculated using Equation (15); at the standard irradiance condition (S_stc), the solar irradiance is taken as 1000 W/m^2, and the certain irradiance point (R_c) is assumed to be 120 W/m^2. This part of the research investigates the uncertainty of the RESs and the effect of their variable active power generation on the power generation of the conventional generators, which in turn is reflected in the total fuel cost. In this study, it is assumed that the renewable energy sources were already installed when defining the cost function. The POPF is performed sequentially through a whole day divided into preset time intervals of 1 h each. The wind speed data were taken at Zafarana, Egypt, on 25 November 2014, and the solar irradiance data were obtained from the Natural Energy Laboratory of Hawaii Authority (NELHA) on 3 January 2022. Figures 10 and 11 show the variation of the wind speed and the solar irradiance throughout a day. From the measured data, the PDFs of the wind speed and the solar irradiance are determined, and a sample of these PDFs, at hour 17, is illustrated in Figures 12 and 13. The forecasted active powers generated by the wind turbine and the PV panel are then calculated using Equations (15) and (19). Implementing the proposed methodology on the IEEE 57-bus system, the results in Figure 14 show the hourly cost for the four investigated scenarios: the system without RESs, the system with only PV or only WT, and the system with both types of RESs integrated. A reduced cost is obtained when the PV panel contributes between hours 9 and 18, the period when the solar irradiance is sufficient to generate maximum power. In contrast, the effect of the wind turbine is noticeable over the entire time slot, although the maximum cost reduction is recorded between hours 10 and 17. Similarly, the comparison of hourly costs for the four scenarios in the case of the 118-bus system is shown in Figure 15. It can be seen that the proposed hybrid ML-TSO method provides the best optimal solutions for the PPF problem, and the illustrated results show the impact of renewable energy integration on fuel cost reduction.
Statistical Analysis of the Classical OPF Results
In order to examine and verify the performance of the proposed hybrid ML-TSO optimization approach, the simulations are repeated for the three optimization methods (ML-TSO, PSO, and GA) on the investigated test systems, i.e., the IEEE 57- and 118-bus systems. The performed statistical analysis gives valuable insights regarding the best value, worst value, mean value, median value, and standard deviation.
Tables 8 and 9 show the statistical indicators for the two investigated test systems, where the statistics obtained by the proposed ML-TSO method outperform the other techniques. For instance, the standard deviation calculated for the ML-TSO method is the lowest in both investigated systems. The obtained results prove the consistency, relevance, and robustness of the proposed optimization method. Metaheuristic techniques are known for the uncertainty of their results over repeated simulation runs. Wilcoxon's rank-sum test is one of the tests used to verify how robust a metaheuristic algorithm is; it provides a fair comparison between the introduced ML-TSO method and the PSO and GA optimization methods. Twenty independent runs are performed for the test, and the selected level of significance is 5%. The p-values determined by Wilcoxon's rank-sum test are shown in Table 10. The h-values obtained from the test are '1', which means that the null hypothesis is rejected among the optimization algorithms. It can be concluded from the test results that the ML-TSO is superior to the PSO and GA optimization methods when applied to solve the OPF and PPF problems under the different scenarios stated previously in the problem formulation.
Conclusions
This paper has introduced a novel hybrid optimization method based on the combination of TSO and machine learning techniques, namely, a hybrid ML-TSO algorithm, with the main target of solving the classic OPF and POPF problems optimally. The main inferences of the paper are as follows:
• The implementation of the proposed hybrid ML-TSO algorithm for the solution of the OPF and PPF problems was verified on the standard IEEE 57-bus and 118-bus systems.
• The results and statistics demonstrate the applicability of the hybrid ML-TSO algorithm, as its convergence performance was observed to be better than that of the other optimization algorithms.
• The application of the ML-TSO to the OPF resulted in a reduction in fuel costs of 0.0441% to 0.93% for the IEEE 57-bus system (relative to the GA and PSO, respectively).
• The application of the ML-TSO to the OPF resulted in a reduction in fuel costs of 2.56% to 6.4% for the IEEE 118-bus system (relative to the PSO and GA, respectively).
• The ML-TSO effectively solves the PPF by using distribution functions to simulate the stochastic nature of solar irradiance and wind speed over an entire day.
In the PPF solution strategy, different cases were investigated and analyzed based on the penetration level of the RESs, and the implications of each case for the calculations were analyzed and evaluated in detail. The findings show that the fuel cost is minimized most significantly when the solar and wind turbine generators are added simultaneously to either the IEEE 57- or 118-bus system, in comparison with a single integration of each source. Lastly, the statistical analysis confirms the superior performance of the proposed ML-TSO over the PSO and GA algorithms. Finally, it is recommended to use the hybrid ML-TSO optimization method to solve further optimization problems in the field of renewable energy systems and smart grids.
Figure 1 .
Figure 1. Transient response of the circuits.
Figure 2 . Algorithm 1 .
Figure 2. Effect of the variable 'T' on the exploration and exploitation processes.
Algorithm 1. Pseudo code of the TSO algorithm:
Initialize the first population and obtain the best agents
Compute the fitness function
While the iteration number < the max. number of iterations
  Use Equations (26)-(28) to calculate T, C1, and a
  Do for each population
    Use Equation (25) to calculate the updated positions
  End do
  Obtain the fitness of the updated population
  IF the new fitness is better than the best fitness Then
    Replace the fitness and the population
  End IF
  l = l + 1
end while
return Y_l*
Figure 3 .
Figure 3. Representation of a SOM.
Figure 4 .
Figure 4. Data reduction effect: Training process function.
Figure 5 .
Figure 5. The flowchart of the proposed algorithm.
Figure 6 .
Figure 6. Cost function vs. iterations in the 57-bus system.
Figure 7 .
Figure 7. Cost function vs. iterations in the 118-bus system.
Figure 9 .
Figure 9. Hourly demand of the 118-bus system.
Figure 10 .
Figure 10. Measured wind speed throughout the day.
Figure 11 .
Figure 11. Measured sun irradiance throughout the day.
Figure 13 .
Figure 13. PDF of wind speed at hour 17.
Figure 14 .
Figure 14. Hourly fuel cost in the case of the 57-bus system.
Figure 15 .
Figure 15. Hourly fuel cost in the case of the 118-bus system.
Table 1 .
Summary of some of the references stated in the literature.
Table 2 .
Main specifications of the IEEE 57 and 118 bus systems.
Table 3 .
Main parameters of the GA.
Table 4 .
Main parameters of the PSO.
Table 5 .
Minimum fuel cost and optimal design variables for the IEEE 57-bus system.
Table 6 .
Minimum fuel cost and optimal design variables for the IEEE 118-bus system.
Table 7 .
Locations of the RESs in the 57-and the 118-bus systems.
Figure 8 .
Hourly demand of the 57-bus system.
Table 8 .
The best, worst, mean, median, and standard deviation values in the case of the 57-bus system.
Table 9 .
The best, worst, mean, median, and standard deviation values in the case of the 118-bus system.
"Engineering",
"Computer Science"
] |
Montelukast Nanocrystals for Transdermal Delivery with Improved Chemical Stability
A novel nanocrystal system of montelukast (MTK) was designed to improve the transdermal delivery, while ensuring chemical stability of the labile compound. MTK nanocrystal suspension was fabricated using an acid-base neutralization and ultra-sonication technique and was characterized as follows: approximately 100 nm in size, globular shape, and amorphous state. The embedding of MTK nanocrystals into xanthan gum-based hydrogel caused little change in the size, shape, and crystalline state of the nanocrystals. The in vitro drug release profile from the nanocrystal hydrogel was comparable to that of the conventional hydrogel because of the rapid dissolution pattern of the drug nanocrystals. Drug degradation under visible light exposure (400–800 nm, 600,000 lux·h) was markedly reduced in the case of the nanocrystal hydrogel, yielding only 30% and 50% of the amounts of cis-isomer and sulfoxide, the major degradation products, formed in the drug alkaline solution. Moreover, there was no marked pharmacokinetic difference between the nanocrystal and the conventional hydrogels, which exhibited an equivalent extent and rate of drug absorption after topical administration in rats. Therefore, this novel nanocrystal system can be a potent tool for transdermal delivery of MTK in the treatment of chronic asthma or seasonal allergies, with better patient compliance, especially in children and the elderly.
Introduction
Montelukast (MTK) is a selective leukotriene receptor antagonist that has been commonly prescribed in the treatment of chronic asthma and symptoms of seasonal allergies in children and adults [1][2][3]. Moreover, recently, MTK is clinically being investigated as a novel medication in the treatment of Alzheimer's disease in elderly people [4]. In preclinical studies, the leukotriene antagonist was found effective in stroke models and was beneficial in improving cognition in older rats, owing to its anti-inflammatory and neuroprotective effects [5,6]. Considering its frequent prescription to children in the age group of 6-24 months [7] and potential dosing in elderly patients with difficulty in swallowing, the transdermal dosage form can be an alternative for better patient compliance. However, it is quite challenging to formulate the therapeutic agent into a transdermal dosage form because of its poor aqueous solubility (0.2-0.5 µg/mL in water at 25 °C) [8,9] and chemical instability [10,11]. MTK is extremely susceptible to light- or oxygen-induced degradation, and mainly breaks down into sulfoxide and cis-isomer by the oxidation of the mercapto group and by isomerization of the double bond.
Preparation of MTK Nanocrystal Suspension and Hydrogel
Drug nanocrystal suspension was fabricated using the acid-neutralization method as reported previously [26,27], with slight modification. At first, drug alkaline solution (5-20 mg/mL as MTK) was prepared by dissolving the sodium salt of MTK powder (62.4-249.6 mg) in 10 mM NaOH solution (4 mL, pH 12). Different kinds of stabilizers (PVP K30, Kollidon VA64, Poloxamer-188, Poloxamer-407, Kolliphor RH40, Kolliphor EL, Kolliphor HS 15, HPMC 2910, polysorbate 20, and polysorbate 80; Table 1) were separately dissolved in 60 mM lactic acid solution at concentrations ranging from 0.05 to 2.0 (%, w/v). The drug alkaline solution was then added to the stabilizer-containing acidic solution at a rate of 2 mL/min, under ultra-sonication. The probe-type ultra-sonicator (Vibracell VC-505, Sonics & Materials, Newtown, CT, USA), equipped with a 1/2-inch (13 mm) probe, was placed into the acid solution and sonication was performed at different amplitudes (11-33 watts) for 3 min, with different cycles (1-3 times). To prevent temperature elevation, the samples were placed in an ice bath during the sonication procedure. The acidity of the MTK nanosuspensions was adjusted to pH 4.0-5.0, to suspend the weak-acid compound in the aqueous vehicle. MTK nanosuspensions were then stored at room temperature for further experiments, under light protection.
MTK nanocrystal-loaded hydrogel was prepared by adding xanthan gum (1.0%, w/v) to the drug nanocrystal suspension and then stirring overnight at 700 rpm (IKA RCT basic, Staufen, Germany). Prepared hydrogels were incubated overnight at room temperature to remove the air bubbles.
MTK nanocrystal-loaded hydrogel was prepared by adding xanthan gum (1.0%, w/v) to the drug nanocrystal suspension and then stirring for overnight at 700 rpm (IKA RCT basic, Staufen, Germany). Prepared hydrogels were incubated overnight at room temperature to remove the air bubbles.
Preparation of Drug Solution and Conventional Hydrogel
Drug solution (10 mg/mL as MTK) was prepared by adding MTK-Na powder (124.8 mg) to 10 mM NaOH solution (12 mL) and stirring for 10 min at room temperature. The pH of the solution was adjusted to 11.5-12.0, to completely dissolve the weak-acid compound. To formulate the conventional hydrogel, 120 mg of xanthan gum powder (1.0% w/v) was added to 12 mL of drug solution and stirred at 700 rpm for 12 h.
Particle Size and Zeta Potential of MTK Nanocrystals
Average particle size and polydispersity index (PDI) value, a measure of the width of the size distribution of the MTK nanocrystal suspension, were determined using a Zetasizer Nano dynamic light scattering particle size analyzer (Malvern Instruments, Worcestershire, UK) [28][29][30]. Each sample was loaded onto disposable cells with no additional dilution at a scattering angle of 90°. The zeta potential of the MTK nanocrystals (approximately pH 4.0) was also estimated using the Zetasizer Nano at 25 °C. Samples (100 µL) were loaded into the capillary cell after 10-fold dilution with distilled water, and 20 runs were performed for each measurement. All measurements were carried out in triplicate at 25 °C.
Morphology of MTK Nanocrystals
The morphology of MTK nanocrystals in the aqueous suspension or hydrogel was observed using transmission electron microscopy (TEM, Tecnai F20 G2, FEI, Hillsboro, OR, USA). Approximately 50 µL of sample was loaded onto a copper grid and gently dried for 20 min to remove the aqueous vehicle. Samples were then fixed on the sample holder and observed under an accelerating voltage of 80 kV.
X-ray Powder Diffraction Analysis
The crystalline states of the drug powder, nanocrystal suspension, and nanocrystal-embedded hydrogel were analyzed using an X-ray diffractometer (XRD, Model Ultima IV, Rigaku, Japan) with Cu Kα radiation generated at 30 mA and 40 kV [31]. To remove the aqueous vehicle and thickening agent from the preparations, each sample was 10-fold diluted with distilled water and centrifuged at 3500× g for 10 min, leading to the settling of the drug nanocrystals. Afterward, the collected MTK nanocrystals were dried in an oven at 40 °C for 12 h. Samples were loaded on the glass plate, and the diffraction pattern over a 2θ range of 5°-45° was determined using a step size of 0.02°. Scan speed was fixed at 1.0 s/step.
Thermal Analysis
The thermal behaviors of the drug powder or dried nanocrystal formulations were analyzed using a differential scanning calorimeter (DSC, Model DSC 50, Shimadzu, Japan) [32,33]. Nanocrystals were collected by dilution, ultra-centrifugation, and subsequent drying processes as described in Section 2.4.3. Each sample (about 2 mg) was put in an aluminum pan and heated at a rate of 10 °C/min over the temperature range of 0-200 °C. An empty aluminum pan was used as reference.
Drug Content in the Nanocrystal Suspension
The amount of MTK suspended in the preparations was determined by HPLC assay. Nanocrystal suspension (1 mL) was ultra-centrifuged at 13,000 rpm for 10 min to settle the nanocrystals in the aqueous vehicle. Methanol was then added to the precipitate and vortexed for 30 min to dissolve the drug nanocrystals. The concentration of MTK in the organic solvent or in the supernatant was analyzed using a Waters HPLC system composed of a pump (Model 515), a UV-VIS (ultraviolet-visible) detector (Model 486), and an autosampler (Model 717 plus). The mobile phase, comprising acetonitrile and distilled water at a volume ratio of 4:6 with 0.15% v/v of trifluoroacetic acid, was passed through a reversed-phase C18 column (4.6 mm × 50 mm, 1.8 µm, Agilent, Santa Clara, CA, USA) at a flow rate of 1.2 mL/min. The column temperature was set to 25 °C. A 20 µL aliquot was injected, and the column eluent was monitored at a wavelength of 238 nm. The sharp peak of MTK was detected at 10.7 min. The least-squares calibration curve was linear over the MTK range of 1.0-100 µg/mL, with a coefficient of determination (r²) value of 0.997.
In Vitro Dissolution Profiles of MTK Nanocrystal Formulations
The in vitro dissolution profile of MTK from the nanocrystal formulations (suspension or hydrogel) was comparatively evaluated against the conventional preparations (drug solution or conventional hydrogel) under sink conditions, according to the USP II paddle method (Model DT 720, Erweka). To maintain sink conditions during the release test, 0.5% w/v of sodium lauryl sulfate was dissolved in 10 mM phosphate buffered saline (pH 7.4) as the solubilizing agent. Each formulation containing 5 mg of MTK was added to 500 mL of dissolution medium kept at 37 ± 0.5 °C and stirred at 50 rpm. At predetermined times (0, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, and 6.0 h), the dissolution medium was withdrawn and replaced with an equal volume of pre-warmed medium. Withdrawn samples were then centrifuged at 13,000 rpm for 5 min to remove undissolved materials, including the drug nanocrystals. The supernatant was then 2-fold diluted with ACN and analyzed using HPLC as described earlier.
Photo-Stability of MTK Nanocrystal Formulations
The photo-stability of the MTK nanocrystal formulations was comparatively evaluated against that of the conventional preparations by determining the drug remaining and the degradation product formation under stress conditions. For the photo-stability study, samples additionally containing Sunset Yellow FCF as a photo-stabilizer, at a concentration of 0.2% w/v, were prepared. A total of eight samples (nanocrystal suspension, nanocrystal hydrogel, drug alkaline solution, and conventional hydrogel, each with or without Sunset Yellow FCF) were placed into scintillation vials and kept in a DYX 500A solar simulator (DY-Tech, Seoul, Korea) under exposure to simulated sunlight at a wavelength of 400-800 nm (8 h, 600,000 lux·h, 25 °C). At predetermined times, samples were withdrawn and then diluted with methanol. The drug remaining in each sample was analyzed using HPLC as described earlier. Additionally, the two main degradation products (cis-isomer and sulfoxide) of MTK in each sample were quantified by the area percentage method [34]. Each sample was diluted with methanol to a concentration of 100 µg/mL and analyzed by an HPLC gradient method. Mobile phase A was distilled water containing 0.15% trifluoroacetic acid, and mobile phase B was acetonitrile containing 0.15% trifluoroacetic acid. The mobile phase was passed through the reversed-phase C18 column (4.6 mm × 50 mm, 1.8 µm, packing L11, Agilent) following this gradient: 60% mobile phase A for the first 3 min, a linear decrease to 49% over 16 min, return to 60% at 16.1 min, and holding for 5 min. Other parameters, such as the flow rate, column temperature, and wavelength, were identical to those of the drug content analysis method. The retention times of the cis-isomer and the sulfoxide were 5.7 min and 8.2 min, respectively.
Animals and Experimental Protocols
The in vivo pharmacokinetic study of the MTK-loaded nanocrystal preparation was carried out in healthy rats after approval from the Institutional Animal Care and Use Committee (IACUC) of Dankook University (Cheonan, Korea) (DKU-19-032, 8 October 2019). Six-week-old male Sprague-Dawley rats (SD rats, 250 ± 20 g) obtained from Samtako (Kyungki-do, Korea) were acclimated for at least 3 days, with free access to tap water and standardized chow. The rats were then divided into two groups (n = 6 per group) and the hair on the dorsal region was removed with hair clippers without damaging the skin, prior to topical application. MTK nanocrystal hydrogel or conventional hydrogel was applied to the dorsal skin (20 cm²) at a dose of 25 mg/kg. To avoid drying of the nanocrystal or conventional hydrogels, a thin film made from polyurethane and acrylamide (Tegaderm®, 10 × 12 cm) was attached onto the application site. At predetermined time intervals (0, 0.5, 1, 2, 3, 4, 5, 8, 12, and 24 h), blood samples (about 0.2 mL) were withdrawn from the submandibular vein using a 28 G heparinized syringe. At 6 h post-administration, rats were allowed free access to water and standardized chow. During the experimental period (24 h), individual rats were monitored for changes in skin and fur, behavior pattern, morbidity, and mortality. Blood samples were centrifuged at 13,000 rpm for 5 min and stored at −70 °C until analysis. The drug concentration in the plasma was determined by the liquid chromatography-tandem mass spectrometry (LC-MS/MS) method described below.
LC-MS/MS Analysis and Calculation of Pharmacokinetic Parameters
The MTK levels in rat plasma were determined using a liquid chromatography-multiple reaction monitoring (LC-MRM) method [35]. Briefly, 20 µL of thawed rat plasma was mixed with 500 µL of the extraction solution (25% (v/v) dichloromethane and 75% (v/v) ethyl acetate), containing zafirlukast as the internal standard (IS, 60 ng/mL), and vortexed for 60 s. After centrifugation at 12,000× g and 4 °C, the whole supernatant was dried under a nitrogen stream. The residue was then reconstituted with 100 µL of methanol and the final solution was analyzed on the LC-MS/MS system, composed of a Nexera X2 UHPLC and an LCMS-8050 mass spectrometer (Shimadzu, Tokyo, Japan). Electrospray ionization in positive ion mode was employed as the interface between LC and MS. Sample separation was performed with a Luna C18 column (2.0 × 150 mm, 5 µm, Phenomenex, Torrance, CA, USA). Two mobile phases, 5 mmol/L ammonium formate in water (A) and methanol (B), were used with a gradient program. The flow rate, the autosampler temperature, the volume of sample injected, the column oven temperature, and the total analysis time were 0.25 mL/min, 4 °C, 5 µL, V were used as its confirmatory transitions. Additional parameters for LC-MS/MS were as follows: nebulizing gas flow of 3 L/min, heating gas flow of 10 L/min, interface temperature of 300 °C, desolvation line temperature of 250 °C, and heating block temperature of 400 °C. All LC-MRM data were acquired and analyzed using LabSolutions software (version 5.93, Shimadzu, Kyoto, Japan). For the MTK concentrations, the screening transition peak area ratio of MTK to IS from a sample was compared with the calibration curve built from those of matrix-matched standard solutions.
From the plasma concentration data, pharmacokinetic parameters such as the area under the plasma concentration versus time curve (AUC) and the terminal half-life (T1/2) were calculated by non-compartmental analysis using WinNonlin software (Version 5.2.1, Pharsight Co., Mountain View, CA, USA). The maximum plasma concentration (Cmax) and the time needed to reach it (Tmax) were determined directly from the mean plasma concentration-time profile.
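For orientation, the sketch below reproduces the standard non-compartmental estimates of Cmax, Tmax, AUC(0-24 h) (linear trapezoidal rule) and terminal half-life (log-linear fit of the last points); it is not the WinNonlin implementation, and the concentration values are hypothetical.

import numpy as np

t = np.array([0.5, 1, 2, 3, 4, 5, 8, 12, 24], dtype=float)      # h
c = np.array([2.0, 4.5, 5.9, 4.1, 3.2, 2.8, 1.5, 0.8, 0.2])     # ng/mL (placeholder profile)

cmax = c.max()
tmax = t[c.argmax()]
auc_0_24 = np.trapz(c, t)                       # ng*h/mL, linear trapezoidal rule

# terminal slope (lambda_z) from the last three points on a log scale
lambda_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
t_half = np.log(2) / lambda_z

print(f"Cmax={cmax} ng/mL, Tmax={tmax} h, AUC(0-24h)={auc_0_24:.1f} ng*h/mL, t1/2={t_half:.1f} h")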
Statistical Analysis
Each experiment was performed at least three times and the data are presented as the mean ± standard deviation (SD). Statistical significance was determined using a one-way analysis of variance (ANOVA), and differences were considered significant at p < 0.05, unless otherwise indicated.
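For reference, a minimal sketch of the one-way ANOVA described above is shown here; the three groups and their values are hypothetical placeholders.

from scipy import stats

group_a = [98.1, 97.5, 98.8]   # e.g., % drug remaining, formulation A (placeholder)
group_b = [70.2, 68.9, 71.5]   # formulation B (placeholder)
group_c = [48.7, 50.1, 49.3]   # formulation C (placeholder)

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}, significant = {p_value < 0.05}")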
Selection of Steric Stabilizer of MTK Nanocrystal Suspension
MTK free acid nanocrystals dispersed in aqueous media with different steric stabilizers were fabricated using acid-base neutralization and ultra-sonication techniques. In our preliminary study, anti-solvent precipitation provided a more uniform drug crystal size compared to top-down methods, including the bead-milling process, and was therefore selected for further preparation of the MTK nanocrystal suspension (data not shown). In this method, the weak-acid MTK compound dissolved in an alkaline solution (10 mM NaOH) was added dropwise to a stabilizer-containing acidic solution (10 mM lactic acid). The drug solubility was drastically diminished in the acidic condition, triggering rapid drug re-crystallization [27]. The drug nanocrystals formed by nucleation and crystal growth were then ultra-sonicated to decrease the crystal size and uniformly disperse the nanocrystals with the aid of the steric stabilizers.
In formulating a drug nanosuspension, the type of dispersant and its interactions with the drug nanocrystals strongly influence the colloidal stability in the vehicle [21]. Thus, different stabilizers, including hydrophilic polymers and surfactants, were screened for formulating the MTK nanosuspension in terms of crystal size, homogeneity, and dispersibility. The stabilizer concentration for the screening test was set to 0.5% w/v based on a preliminary experiment (data not shown). When no stabilizer was included, irregular drug crystals over 1.6 µm in size were formed (Table 1). The drug crystals with no stabilizer were thermodynamically unstable, forming agglomerates and/or precipitates after 7 days of storage under accelerated conditions (40 °C). The addition of steric stabilizers, such as HPMC-2910, Poloxamer-188 and 407, Polysorbate 20 and 80, Kolliphor RH40 and EL, and Solutol HS 15, markedly decreased the crystal size below 500 nm by hindering the interaction of dissolved drug molecules with crystal surfaces. However, these surface stabilizers could not provide an even dispersion of the MTK nanocrystals in the aqueous vehicle, forming drug aggregates after a week of storage at 40 °C. On the other hand, when PVP K30 or Kollidon VA64 was dissolved in distilled water at a concentration of 1.0% w/v, the MTK nanocrystals were physically stable with excellent re-dispersibility in the vehicle. These hydrophilic hydrogen-bond-acceptor polymers might adsorb onto the surface of the MTK nanocrystals through hydrogen bonding and/or van der Waals interactions, and predominantly contribute to dispersing the hydrophobic nanocrystals in the aqueous media with no aggregation [36,37]. Of the two hydrophilic polymers, PVP K30 was selected for further preparation of the MTK nanocrystal suspension, as it provided smaller and more uniform nanocrystals than the VA64 polymer. The concentration of stabilizer in the formulation was set to 0.5% w/v. (Table 1 footnotes: data are expressed as mean ± SD (n = 3); polydispersity index, calculated by dividing the weight-average molecular weight by the number-average molecular weight; dispersibility visually assessed after a week of storage at 40 °C; one preparation contained no stabilizer.)
Effect of Process Parameters on Size and Homogeneity of MTK Nanocrystal Suspension
The effect of formulation and process variables, such as the sonication power, the number of sonication cycles, and the drug and PVP polymer concentrations, on crystal size and homogeneity is presented in Figure 1. The four variables were set based on a previous report [38] and our preliminary experiments. As expected [38], as the sonication power increased from 13 to 33 W, the drug crystal size markedly decreased, providing nano-sized drug crystals below 150 nm at a sonication power of 33 W (Figure 1a). The drug nanocrystal size also decreased as the number of sonication cycles was increased, reaching a plateau at three cycles (6 min in total), with a particle size below 150 nm (Figure 1b). The drug concentration in the nanosuspension was set to 10 mg/mL, which gave a narrower size distribution (PDI value of 0.2) (Figure 1c). Under these sonication conditions, the MTK crystal size was effectively tuned by the concentration of PVP K30 polymer in the aqueous vehicle. When the concentration was increased from 0.05% to 0.5%, the nanocrystal size decreased from about 900 nm to 150 nm (Figure 1d), as the polymeric stabilizer covered the surface of the nanocrystals and thus prevented and/or retarded drug crystal growth. Taken together, the optimized MTK nanocrystal suspension was prepared with a sonication power of 33 W, 6 min of sonication, a drug concentration of 10 mg/mL, and a PVP K30 concentration of 0.5% w/v.
Morphological and Physicochemical Characteristics of MTK Nanocrystal Suspension and Hydrogel
The optimized MTK nanocrystal suspension prepared with 0.5% w/v PVP K30 polymer was further characterized in terms of morphology, particle size, zeta potential, drug content, and drug crystallinity. The morphological features of the drug nanocrystals suspended in the aqueous vehicle or hydrogel matrix were examined using TEM. In both the aqueous suspension and the hydrogel, the MTK nanocrystals prepared by the anti-solvent precipitation and ultra-sonication techniques were spherical and/or elliptical, with homogeneous and smooth surfaces (Figure 2A,B). The dimensions of the MTK nanocrystals were uniform in the range of 100-200 nm, with no marked differences in nanocrystal size between the two formulations. Particle size analysis also revealed that MTK nanocrystals with a median diameter of 102.3 nm were obtained, with a PDI value < 0.3 (Table 2). This particle size is considered appropriate for transdermal delivery [23,39,40]. In previous reports, nanocrystals with a size range of 200-400 nm offered enhanced permeability across the skin and mucosal membranes by increasing the saturation solubility and, consequently, the dissolution rate, with a reduced diffusional distance [39,40]. Moreover, nanocrystals of an appropriate size (approximately 700 nm) can deposit into skin shunt routes (such as hair follicles), which act as a depot from which the drug can diffuse into the surrounding cells for extended release [23]. Pireddu et al. reported that a 280 nm-sized nanocrystal formulation provided a higher accumulation of diclofenac in the skin than both coarse suspensions and a commercial formulation in ex vivo experiments [39].
Figure 1. Effect of process parameters on the size and homogeneity of the MTK nanocrystal suspension. (b) The sonication power, drug concentration, and PVP K30 concentration were fixed at 33 W, 10 mg/mL, and 0.5% w/v, respectively. (c) The sonication power, sonication time, and PVP K30 concentration were fixed at 33 W, 6 min, and 0.5% w/v, respectively. (d) The sonication power, sonication time, and drug concentration were fixed at 33 W, 6 min, and 10 mg/mL, respectively. Three batches of each sample were prepared, and the data are presented as mean ± SD (n = 3).
Drug content analysis in the suspension revealed that over 99.5% of MTK was present in the solid state in the formulation, owing to the poor solubility of MTK, a weak-acid compound, in the acidic environment (pH 4.1) (Table 2). The zeta potential of the drug nanocrystals stabilized with PVP K30 polymer was nearly neutral (−3.6 mV) (Table 2). The crystalline state of the MTK nanocrystals dispersed in the aqueous media or hydrogel matrix was further evaluated by comparing the diffraction spectra and thermal behavior of the MTK nanocrystals with those of the MTK-Na and MTK free acid powders (Figure 2C,D). The X-ray diffraction spectra of the dried MTK nanocrystal suspension and hydrogel were analogous to those of the MTK-Na and MTK free acid powders, presenting no distinctive diffraction peaks over the 2θ range of 5°-45° (Figure 2C). These results denote that the raw materials used in our study existed in an amorphous state and that the form of the leukotriene antagonist did not alter during the nanocrystallization and subsequent gelation processes, retaining its intrinsic amorphous form in both nanocrystal formulations. The drug solid state was further evaluated by analyzing the thermal behavior of the nanocrystal formulas using DSC measurements (Figure 2D). The DSC curves of the raw materials showed broad endothermic peaks between 50 and 60 °C with no sharp peaks. These thermal patterns are consistent with previous reports showing that MTK powder is in an amorphous state, possessing a glass transition temperature between 50 °C and 60 °C [41]. The DSC pattern of the MTK nanocrystals suspended in the aqueous vehicle and embedded in hydrogels was quite analogous to that of the MTK free acid powder, with no other distinctive peaks. In TEM observations, the MTK nanocrystals embedded in hydrogel were physically stable, with no changes in crystal size over 2 months (data not shown). Taken together, we concluded that the MTK free acid powder formed by acid-base neutralization was successfully reduced below 200 nm, with no crystalline changes and/or polymorphic transitions during the ultra-sonication and subsequent embedding processes.
In Vitro Drug Release Profile from MTK Nanocrystal Suspension and Hydrogel
In vitro drug release profiles from the MTK nanocrystal suspension and hydrogel were compared with those from the drug solution and the conventional hydrogel under sink conditions. To ensure sufficient drug solubility in the dissolution media, 0.5% w/v sodium lauryl sulfate (SLS) was included in 10 mM phosphate-buffered saline (pH 7.4). As a control, the drug solution was prepared by dissolving the weak-acid compound in alkaline solution (10 mM NaOH, pH 11.2). The conventional hydrogel was formulated by adding xanthan gum (0.5% w/v) to the drug alkaline solution at the same drug concentration as the nanocrystal hydrogel.
Under sink conditions, the nanocrystal suspension was completely dissolved and released at the first sampling time (5 min), exhibiting a dissolution profile comparable to that of the drug alkaline solution (Figure 3). This rapid dissolution of the nanocrystal suspension under sink conditions can be explained by the Noyes-Whitney equation, dM/dt = k·S·Cs, where dM/dt is the dissolution rate, k the rate constant, S the surface area of the drug particles, and Cs the drug solubility in the dissolution medium. The reduction in drug particle size leads to a drastically increased surface area, thus enhancing the dissolution rate of the hydrophobic compound in the dissolution media [42]. The embedding of drug molecules or drug nanocrystals in the xanthan gum matrix markedly hindered drug release; the cumulative amount of drug released from the hydrogels increased gradually over 3 h, reaching over 80% drug release. Although drug release from the nanocrystal hydrogel was slower than that from the conventional hydrogel, owing to the additional dissolution step prior to diffusion out of the matrix, the extent of drug released after 2 h was quite comparable to that of the conventional hydrogel.
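To make the surface-area argument concrete, the short sketch below estimates how much the total surface area S (and hence dM/dt, with k and Cs held constant) grows when a fixed drug mass is split into ~150 nm spheres instead of ~5 µm crystals; the particle sizes and density are illustrative assumptions, not measured values.

import numpy as np

def total_surface_area(mass_mg, radius_um, density_mg_per_um3=1.2e-9):
    """Total surface area (µm^2) of a fixed mass split into monodisperse spheres."""
    v_particle = 4.0 / 3.0 * np.pi * radius_um**3
    n_particles = (mass_mg / density_mg_per_um3) / v_particle
    return n_particles * 4.0 * np.pi * radius_um**2

mass = 5.0  # mg of drug (illustrative)
s_coarse = total_surface_area(mass, radius_um=2.5)     # ~5 µm coarse crystals
s_nano = total_surface_area(mass, radius_um=0.075)     # ~150 nm nanocrystals

# For spheres of equal density, S scales as 1/radius, so dM/dt = k*S*Cs scales the same way.
print(f"Surface-area (and dissolution-rate) ratio: {s_nano / s_coarse:.0f}x")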
Photo-Stability of MTK Nanocrystal Formulations
The chemical stability of the MTK nanocrystal and conventional preparations was comparatively evaluated under light exposure, as the leukotriene antagonist is reported to be extremely susceptible to light, heat, and oxidative degradation. As previously reported, light exposure of the drug alkaline solution resulted in marked degradation of MTK, showing an over 55% decrease in residual drug content (Figure 4A). Correspondingly, the amounts of cis-isomer and sulfoxide in the drug solution sharply increased by 10.5% and 2.9%, respectively, after 8 h. In contrast, photo-induced degradation of MTK in the nanocrystal suspension was reduced compared to that in the drug solution, preserving over 60% of the drug after 4 h. In addition, the amounts of cis-isomer and sulfoxide in this preparation were markedly decreased, to 3.9% and 2.2%, respectively. This result can be explained by the structural characteristics and the low drug solubility in the nanocrystal suspension. In the nanocrystal suspension, only the drug molecules present in the outer region of the nanocrystals and the small fraction of drug dissolved in the aqueous phase are exposed to light [25]. In contrast, in the drug alkaline solution, the labile compound exists at a molecular level, so MTK molecules are exposed to UV-Vis rays more extensively, causing extensive degradation into cis-isomer and sulfoxide in the aqueous solution. On the other hand, embedding the drug nanocrystals or molecules into the hydrogel matrix considerably improved the photo-stability of MTK. The nanocrystal hydrogel retained over 70% of the drug after 8 h, which is over 40% higher than the nanocrystal suspension (Figure 4A). The cis-isomer and sulfoxide contents in the nanocrystal hydrogel were noticeably lowered to 2.9% and 1.5%, respectively. The physical barrier formed around the drug nanocrystals diminished light transmission within the preparation, alleviating light exposure of the nanocrystals and/or drug molecules. The overall order of the chemical stability of MTK was as follows: nanocrystal hydrogel > conventional hydrogel > nanocrystal suspension > drug alkaline solution. These findings suggest that formulating the MTK nanosuspension and further incorporating it into the polymeric matrix were effective in stabilizing the labile leukotriene antagonist in an external preparation.
The photo-stability of MTK in the nanocrystal and conventional preparations was further evaluated in the presence of Sunset Yellow FCF as a photo-stabilizing agent. Employing excipients that exhibit absorption spectra similar to that of the targeted therapeutic agent reduces undesirable light exposure, hampering photo-degradation of the active compound. As expected, the degradation of MTK was markedly retarded in all formulations, with over 60% of the drug remaining after 8 h (Figure 4B). The MTK contents of the nanocrystal suspension and hydrogel were approximately 76% and 81% in the presence of the UV/Vis absorber, respectively, markedly higher than the 49% and 70% of the corresponding formulations without Sunset Yellow FCF. Correspondingly, the cis-isomer and sulfoxide contents in the MTK nanocrystal suspension with Sunset Yellow FCF were markedly decreased, to less than 20% and 50% of the levels in the nanocrystal suspension with no photo-stabilizer. This agrees with a previous report in which the chemical stability of MTK in an oral liquid syrup was markedly improved by adding coloring agents that absorb light in the UV and visible region ranging from 350 to 500 nm [43].
In Vivo Pharmacokinetic Profile after Topical Administration of MTK Nanocrystal Hydrogel in Rats
The plasma concentration-time profiles of MTK following topical administration of the nanocrystal or conventional hydrogels to the dorsal skin of rats are depicted in Figure 5, and the relevant pharmacokinetic parameters calculated from these profiles are presented in Table 3. There were no abnormal signs in the physical condition, behavior, morbidity, or mortality of the rats during the pharmacokinetic study (data not shown). The drug dose topically administered to the skin was 25 mg/kg, which is lower than the reported oral no-observed-adverse-effect level (NOAEL) of MTK [44]; accordingly, no marked adverse effects were observed following a single topical administration. In a long-term chronic toxicity test in rats and mice for 12 months, the NOAEL value was estimated at 50 mg/kg [44].
Note (Table 3): Data are expressed as mean ± standard error (n = 6). Abbreviations: AUC(0-24 h), area under the plasma concentration-time curve from zero to 24 h; AUC(0-inf), area under the plasma concentration-time curve extrapolated to infinite time; Cmax, maximum plasma concentration; Tmax, time to reach the maximum plasma concentration; T1/2, elimination half-life of the drug.
In both groups, the plasma levels of the leukotriene antagonist rose steeply after topical application, reaching maximum levels within 2 h. There was no significant difference in the Cmax values between the nanocrystal and conventional hydrogel-treated groups, with Cmax values of 5.9 and 5.3 ng/mL, respectively. We then observed that the MTK levels in the plasma fluctuated between 2 and 6 ng/mL up to 5 h post-dosing, probably due to a complementary process between absorption and metabolism and/or elimination. The plasma concentration-time profiles of MTK obtained from both groups fluctuated considerably, with large deviations between individuals. As the leukotriene antagonist is an extremely lipophilic compound (log P value of 8.79) [8], it may not easily cross the skin layers. In general, small compounds (molecular weight below 500 Da) with appropriate lipophilicity (log P value between 1 and 3) and a low melting point are more suitable for skin penetration [45]. Moreover, MTK has been reported to be extensively metabolized by multiple cytochrome P450s, such as CYP2C8, CYP2C9, and CYP3A4, and by glucuronidases including UGT1A3, which are also abundant in the skin [46]. Thus, extensive metabolism of MTK molecules in the liver and even in skin tissue might contribute to the large variations. Afterward, the drug concentration in the plasma decreased to less than 1 ng/mL by 12 h post-dosing, with elimination T1/2 values between 6.2 and 9.7 h. There was no significant difference in the AUC(0-24 h) values between the nanocrystal and conventional hydrogel-treated groups (20.1 and 23.5 ng·h/mL, respectively). The absence of significant differences in the extent and rate of drug permeation after topical administration of the nanocrystal or conventional hydrogels is consistent with the in vitro dissolution profile of MTK described above, denoting that the nanocrystallized drug was rapidly dissolved and/or absorbed in the relevant skin layer by forming a high concentration gradient between the hydrogel and the stratum corneum. Moreover, intact MTK nanocrystals and/or dissolved drug molecules might have penetrated into the systemic circulation through the surrounding follicular epithelium, yielding plasma concentration profiles comparable to those of the conventional hydrogel. From these findings, we concluded that the nanocrystal hydrogel could be a potent tool providing skin permeability comparable to the conventional hydrogel with markedly improved chemical stability of MTK.
Conclusions
A novel nanocrystal system of MTK was successfully prepared by an acid-base neutralization and ultra-sonication method. The drug nanocrystals stabilized by PVP K30 polymer were approximately 130 nm in size, globular in shape, and in an amorphous state. The incorporation of MTK nanocrystals into the xanthan gum hydrogel did not alter the physical characteristics of the drug nanocrystals. The nanocrystal hydrogel exhibited markedly improved photo-stability compared to the drug solution, the conventional hydrogel, and even the nanocrystal suspension, in terms of the drug remaining and the generation of the two main degradation products, cis-isomer and sulfoxide. Moreover, there was no noticeable difference in the plasma concentration-time profiles between the nanocrystal and the conventional hydrogels, which exhibited equivalent AUC and Cmax values. Therefore, the novel nanocrystal hydrogel represents a promising tool for transdermal delivery of MTK, a poorly soluble and labile compound, potentially improving patient compliance, especially in children and the elderly. | 9,119.6 | 2019-12-23T00:00:00.000 | [
"Medicine",
"Chemistry",
"Materials Science"
] |
Two-stage Federated Phenotyping and Patient Representation Learning
A large percentage of medical information is in unstructured text format in electronic medical record systems. Manual extraction of information from clinical notes is extremely time consuming. Natural language processing has been widely used in recent years for automatic information extraction from medical texts. However, algorithms trained on data from a single healthcare provider are not generalizable and are error-prone due to the heterogeneity and uniqueness of medical documents. We develop a two-stage federated natural language processing method that enables utilization of clinical notes from different hospitals or clinics without moving the data, and demonstrate its performance using obesity and comorbidities phenotyping as the medical task. This approach not only improves the quality of a specific clinical task but also facilitates knowledge progression in the whole healthcare system, which is an essential part of a learning health system. To the best of our knowledge, this is the first application of federated machine learning in clinical NLP.
Introduction
Clinical notes and other unstructured data in plain text are valuable resources for medical informatics studies and machine learning applications in healthcare. In clinical settings, more than 70% of information is stored as unstructured text. Converting the unstructured data into useful structured representations will not only help data analysis but also improve efficiency in clinical practice (Jagannathan et al., 2009; Kreimeyer et al., 2017; Ford et al., 2016; Demner-Fushman et al., 2009; Murff et al., 2011; Friedman et al., 2004). Manual extraction of information from the vast volume of notes in electronic health record (EHR) systems is too time consuming.
To automatically retrieve information from unstructured notes, natural language processing (NLP) has been widely used. NLP is a subfield of computer science that has been developing for more than 50 years and focuses on intelligent processing of human languages (Manning et al., 1999). A combination of hard-coded rules and machine learning methods has been used in the field, with machine learning currently being the dominant paradigm.
Automatic phenotyping is a task in clinical NLP that aims to identify cohorts of patients that match a predefined set of criteria. Supervised machine learning is currently the main approach to phenotyping, but the limited availability of annotated data hinders progress on this task. In this work, we consider a scenario where multiple institutions have access to relatively small amounts of annotated data for a particular phenotype and this amount is not sufficient for training an accurate classifier. On the other hand, combining data from these institutions can lead to a high-accuracy classifier, but direct data sharing is not possible due to operational and privacy concerns.
Another problem we are considering is learning patient representations that can be used to train accurate phenotyping classifiers. The goal of patient representation learning is mapping the text of notes for a patient to a fixed-length dense vector (embedding). Patient representation learning has been done in a supervised (Dligach and Miller, 2018) and unsupervised (Miotto et al., 2016) setting. In both cases, patient representation learning requires massive amounts of data. As in the scenario we outlined in the previous paragraph, combining data from several institutions can lead to higher quality patient representations, which in turn will improve the accuracy of phenotyping classifiers. However, direct data sharing, again, is difficult or impossible.
To tackle the challenges we mentioned above, we developed a federated machine learning method to utilize clinical notes from multiple sources, both for learning patient representations and phenotype classifiers.
Federated machine learning is an approach in which machine learning models are trained in a distributed and collaborative manner without centralising the data (Liu et al., 2018a; McMahan et al., 2016; Bonawitz et al., 2019; Konečnỳ et al., 2016; Huang et al., 2018; Huang and Liu, 2019). The federated learning strategy has recently been adopted in the medical field for machine learning tasks on structured data (Liu et al., 2018a; Huang et al., 2018; Liu et al., 2018b). However, to the best of our knowledge, this work is the first time a federated learning strategy has been used in medical NLP.
We developed our two-stage federated natural language processing method based on previous work on patient representation (Dligach and Miller, 2018). The first stage of our proposed federated learning scheme is supervised patient representation learning. Machine learning models are trained using medical notes from a large number of hospitals or clinics without moving or aggregating the notes. The notes used in this stage need not be directly relevant to a specific medical task of interest. At the second stage, representations from the clinical notes directly related to the phenotyping task are extracted using the algorithm obtained from stage 1 and a machine learning model specific to the medical task is trained.
Clinicians spend a significant amount of time reviewing clinical notes. This time can be saved or reduced with reasonably designed NLP technologies. One such task is phenotyping from medical notes. In this study, we demonstrate, using phenotyping from clinical notes as the clinical task (Conway et al., 2011; Dligach and Miller, 2018), that the method we developed makes it possible to utilize notes from a wide range of hospitals without moving the data.
The ability to utilize clinical notes distributed across different healthcare providers not only benefits a specific clinical practice task but also facilitates building a learning healthcare system, in which meaningful use of knowledge in distributed clinical notes speeds up the progression of medical knowledge into translational research, tool development, and healthcare quality assessment (Friedman et al., 2010; Blumenthal and Tavenner, 2010). Without the need for data movement, the speed of information flow can approach real time and make a rapid learning healthcare system possible (Slutsky, 2007; Friedman et al., 2014; Abernethy et al., 2010).
Study Cohorts
Two datasets were used in this study. The MIMIC-III corpus (Johnson et al., 2016) was used for representation learning. This corpus contains information for more than 58,000 admissions of more than 45,000 patients admitted to Beth Israel Deaconess Medical Center in Boston between 2001 and 2012. Relevant to this study, MIMIC-III includes clinical notes, ICD9 diagnostic codes, ICD9 procedure codes, and CPT codes. The notes were processed with cTAKES to extract UMLS concept unique identifiers (CUIs). Following the cohort selection protocol from (Dligach and Miller, 2018), patients with over 10,000 CUIs were excluded from this study. We obtained a cohort of 44,211 patients in total.
The Informatics for Integrating Biology to the Bedside (i2b2) Obesity challenge dataset was used to train phenotyping models (Uzuner, 2009). The dataset consists of 1237 discharge summaries from Partners HealthCare in Boston. Patients in this cohort were annotated with respect to obesity and its comorbidities. In this study we consider the more challenging intuitive version of the task. The discharge summaries were annotated with obesity and its 15 most common comorbidities, the presence, absence or uncertainty (questionable) of which were used as ground truth labels in the phenotyping task in this study. Table 1 shows the number of examples of each class for each phenotype. Thus, we build phenotyping models for 16 different diseases.
Data Extraction and Feature Choice
At the representation learning stage (stage 1), all notes for a patient were aggregated into a single document. CUIs extracted from the text were used as input features. ICD-9 and CPT codes for the patient were used as labels for supervised representation learning. At the phenotyping stage (stage 2), CUIs extracted from the discharge summaries were used as input features. Annotations of being present, absent, or questionable for each of the 16 diagnoses for each patient were used as multi-class classification labels.
Two-stage federated natural language processing of clinical notes
We envision that clinical textual data can be useful in at least two ways: (1) for pre-training patient representation models, and (2) for training phenotyping models.
In this study, a patient representation refers to a fixed-length vector derived from clinical notes that encodes all essential information about the patient. A patient representation model trained on massive amounts of text data can be useful for a wide range of clinical applications. A phenotyping model, on the other hand, captures the way a specific medical condition works, by learning the function that can predict a disease (e.g., asthma) from the text of the notes.
Until recently, phenotyping models have been trained from scratch, omitting stage (1), but recent work (Dligach and Miller, 2018) included a pretraining step, which derived dense patient representations from data linking large amounts of patient notes to ICD codes. Their work showed that including the pre-training step led to learning patient representations that were more accurate for a number of phenotyping tasks.
Our goal here is to develop methods for federated learning for both (1) pre-training patient representations, and (2) phenotyping tasks. These methods will allow researchers and clinicians to utilize data from multiple health care providers, without the need to share the data directly, obviating issues related to data transfer and privacy.
To achieve this goal, we design a two-stage federated NLP approach (Figure 1). In the first stage, following (Dligach and Miller, 2018), we pre-train a patient representation model by training an artificial neural network (ANN) to predict ICD and CPT codes from the text of the notes. We extend the methods from (Dligach and Miller, 2018) to facilitate federated training.
In the second stage, a phenotyping machine learning model is trained in a federated manner using clinical notes that are distributed across multiple sites for the target phenotype. In this stage, the notes mapped to fixed-length representations from stage (1) are used as input features and whether the patient has a certain disease is used as a label with one of the three classes: presence, absence or questionable.
In the following sections, we first describe a simple notes pre-processing step. We then discuss the method for pre-training patient representations and the method for training phenotyping models. Finally, we describe our framework for performing the latter two steps in a federated manner.
Figure 1: Two-stage federated natural language processing for clinical notes phenotyping. In the first stage, a patient representation model was trained using an artificial neural network (ANN) to predict ICD and CPT codes from the text of the notes from a wide range of healthcare providers. The model without its output layer was then used as a "representation extractor" in the next stage. In the second stage, a phenotyping support vector machine model was trained in a federated manner using clinical notes for the target phenotype distributed across multiple silos.
Pre-processing
All of our models rely on standardized medical vocabulary automatically extracted from the text of the notes rather than on raw text.
To obtain medically relevant information from clinical notes, Unified Medical Language System (UMLS) concept unique identifiers (CUIs) were extracted from each note using Apache cTAKES (https://ctakes.apache.org). UMLS is a resource that brings together many health and biomedical vocabularies and standardizes them to enable interoperability between computer systems.
The Metathesaurus is a large, multi-purpose, and multi-lingual vocabulary that contains information about biomedical and health related concepts, their various names, and the relationships among them. The Metathesaurus structure has four layers: Concept Unique Identifiers (CUIs), Lexical (term) Unique Identifiers (LUIs), String Unique Identifiers (SUIs), and Atom Unique Identifiers (AUIs). In this study, we focus on CUIs, where each concept corresponds to a medical meaning. Our models use UMLS CUIs as input.
Representation learning
We adapted the architecture from (Dligach and Miller, 2018) for pre-training patient representations: a deep averaging network (DAN) that consists of an embedding layer, an average pooling layer, a dense layer, and multiple sigmoid outputs, where each output corresponds to an ICD or CPT code being predicted.
This architecture takes CUIs as input and is trained using a binary cross-entropy loss function to predict ICD and CPT codes. After the model is trained, the dense layer can be used to represent a patient as follows: the model weights are frozen and the notes of a new patient are fed into the network; the patient representation is collected from the values of the units of the dense layer. Thus, the text of the notes is mapped to a fixed-length vector using a pre-trained deep averaging network.
Algorithm 1: Two-stage federated natural language processing.
for t = 1 to T do
    for k = 1 to K in parallel do
        Train model f_k on the data at site k
    end
    Aggregate the models from all sites: W_ag^t = sum_{k=1}^{K} (n_k / N) * w_k^t
end
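As a concrete illustration of the architecture just described, the sketch below builds a DAN in Keras and shows how the dense layer can later be reused as a representation extractor. The vocabulary size, embedding width, dense-layer width, and number of ICD/CPT outputs are placeholders, not the hyperparameters used in this work.

import tensorflow as tf
from tensorflow.keras import layers, Model

n_cuis, emb_dim, hidden_dim, n_codes = 30000, 300, 1000, 500  # assumed sizes

cui_ids = layers.Input(shape=(None,), dtype="int32", name="cui_ids")
x = layers.Embedding(n_cuis, emb_dim, mask_zero=True)(cui_ids)
x = layers.GlobalAveragePooling1D()(x)                         # average the CUI embeddings
patient_repr = layers.Dense(hidden_dim, activation="relu", name="patient_repr")(x)
code_probs = layers.Dense(n_codes, activation="sigmoid")(patient_repr)  # one sigmoid per code

dan = Model(cui_ids, code_probs)
dan.compile(optimizer="adam", loss="binary_crossentropy")
# dan.fit(cui_sequences, code_labels, ...)   # multi-hot ICD/CPT labels per patient

# After training, freeze the weights and reuse the dense layer as the extractor:
extractor = Model(cui_ids, dan.get_layer("patient_repr").output)
extractor.trainable = False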
Phenotyping
A linear-kernel support vector machine (SVM), taking as input the representations generated with the pre-trained model from stage 1, was used as the classifier for each phenotype of interest. No regularization was used for the SVM, and stochastic gradient descent was used as the optimization algorithm.
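A minimal sketch of such a classifier is given below, assuming scikit-learn's SGD-trained linear SVM stands in for the implementation; the representation matrix and the three-class labels are randomly generated placeholders.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_repr = rng.normal(size=(300, 1000))   # stage-1 patient representations (placeholder)
y = rng.integers(0, 3, size=300)        # 0 = absent, 1 = present, 2 = questionable

# Hinge loss gives a linear SVM; penalty=None disables regularization, as stated in the text
# (older scikit-learn versions spell this penalty="none").
clf = SGDClassifier(loss="hinge", penalty=None, max_iter=1000, tol=1e-3)
clf.fit(X_repr, y)
print("training accuracy:", clf.score(X_repr, y))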
Federated machine learning on clinical notes
To train the ANN model in either stage 1 or stage 2, we simulated sending out models with identical initial parameters to all sites, such as hospitals or clinics. At each site, a model was trained using only the data from that site. Only the model parameters were then sent back to the analyzer for aggregation, not the original training data. An updated model was generated by averaging the parameters of the distributively trained models, weighted by sample size (Konečnỳ et al., 2016; McMahan et al., 2016). In this study, sample size is defined as the number of patients. After model aggregation, the updated model was sent out to all sites again to repeat the global training cycle (Algorithm 1). Formally, the weight update is specified by W_ag^t = sum_{i=1}^{K} (n_i / N) * W_i^t, where W_ag is the parameter vector of the aggregated model at the analyzer site, K is the number of data sites (in this study, the number of simulated healthcare providers or clinics), n_i is the number of samples at the i-th site, N is the total number of samples across all sites, W_i is the parameter vector learned from the i-th data site alone, and t is the global cycle number in the range [1, T]. The algorithm minimizes an objective function of the form sum_j sum_{p=1}^{M} L(f_p(x_j), y_{j,p}), where x_j is the feature vector of CUIs, y is the class label, p is the output index, M is the total number of outputs, L is the per-output loss, and f is the machine learning model, such as an artificial neural network or an SVM. Code that accompanies this article can be found at our GitHub repository (https://github.com/kaiyuanmifen/FederatedNLP).
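A minimal sketch of the sample-size-weighted aggregation step in Algorithm 1 is shown below; the per-site parameter lists and sample counts are illustrative names, and the training of each local model is assumed to happen elsewhere.

import numpy as np

def federated_average(site_params, site_sizes):
    """Aggregate per-site parameters: W_ag = sum_k (n_k / N) * W_k.

    site_params: list over sites; each element is a list of numpy arrays
                 (one array per model layer, identical shapes across sites).
    site_sizes:  number of patients contributing at each site.
    """
    total = float(sum(site_sizes))
    aggregated = [np.zeros_like(layer) for layer in site_params[0]]
    for params_k, n_k in zip(site_params, site_sizes):
        for i, layer in enumerate(params_k):
            aggregated[i] += (n_k / total) * layer
    return aggregated

# One global cycle: each site trains locally, then the analyzer aggregates and redistributes, e.g.
# new_params = federated_average([site1_params, site2_params, site3_params], [n1, n2, n3])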
Experiments
To imitate a real-world medical setting where data are distributed across different healthcare providers, we randomly split the patients in the MIMIC-III data into 10 sites for stage 1 (federated representation learning). The i2b2 training data were split into 3 sites for stage 2 (phenotype learning) to mimic obesity-related notes distributed across three different healthcare providers. i2b2 notes were not included in the representation learning because, in clinical settings, information exchange routes for disease-specific records are often not the same as those for general medical information, and ICD/CPT codes were not available for the i2b2 dataset.
Experiments were designed to answer three questions: (1) whether clinical notes distributed in different silos can be utilized for patient representation learning without data sharing; (2) whether utilizing data from a wide range of sources helps improve the performance of phenotyping from clinical notes; and (3) whether models trained in a two-stage federated manner have inferior performance to models trained with centralized data.
To answer these questions, two-stage NLP algorithms were trained. The performance of models trained using only i2b2 notes from one of the three sites was compared with the two-stage federated NLP results. Furthermore, the performance of machine learning models using distributed or centralized data at the patient representation learning stage or the phenotyping stage was compared.
Results
4.1 Two-stage federated natural language processing improves performance of automatic phenotyping
We looked at the scenarios where no representation learning was performed. In those cases, standard TF-IDF-weighted sparse bag-of-CUIs vectors were used to represent the i2b2 notes. The sparse vectors were used as input to the phenotyping SVM model. We also looked at the scenarios where representation learning was performed by predicting ICD codes. For each of these conditions, we trained our phenotyping models using centralized vs. federated learning. Finally, we considered a scenario where the phenotyping model was trained using the notes from a single site (the metrics we report were averaged across the three sites).
To summarize, seven experiments were conducted (Table 2): (1) no representation learning + centralized phenotyping learning; (2) no representation learning + federated phenotyping learning, where the i2b2 training data were randomly split into 3 silos; (3) no representation learning + single-source phenotyping learning, where the i2b2 data were randomly split into 3 silos but the phenotyping algorithm was trained using data from only one of the silos; (4) centralized representation learning + centralized phenotyping learning; (5) centralized representation learning + federated phenotyping learning; (6) federated representation learning + centralized phenotyping learning, where the MIMIC-III data were randomly split into 10 silos; and (7) federated representation learning + federated phenotyping learning, where the MIMIC-III data were randomly split into 10 silos and the i2b2 data into 3 silos. The results of our experiments are shown in Table 3. First of all, we looked at whether phenotyping model training can be conducted in a federated manner without compromising performance. When only i2b2 data from one of the three silos was used for phenotyping training (experiment 3), an F1 score of 0.542 was achieved. When data from all three i2b2 sites were used for phenotyping model training (experiment 1), the F1 score improved to 0.634, which suggests that more data did improve the model. If we assume data from the three i2b2 silos cannot be moved and aggregated, the model trained in a federated manner (experiment 2) achieved a comparable F1 score of 0.632. This suggests that federated learning worked for phenotyping model training.
Previous work showed that using representations learned from clinical notes from a different source via a transfer learning strategy helps to improve the performance of phenotyping NLP models (Dligach and Miller, 2018). When patient representations learned from centralized MIMIC-III notes were used as features and centralized phenotyping training was conducted (experiment 4), the phenotyping performance increased significantly, with an F1 score of 0.714, which is consistent with previous findings (Dligach and Miller, 2018).
When a federated approach was applied at both the representation learning and phenotyping stages, the algorithm achieved an F1 score of 0.724. It is worth pointing out that the F1 scores from experiment 7, where both representation and phenotyping training were conducted in a federated manner, were not statistically different from the F1 scores of experiment 4 over multiple rounds of experiments using different data shuffling and initialization. In comparison, when only data from a single simulated silo was used, the average F1 score was 0.634. When the centralized approach was taken at both stages, the precision, recall, and F1 score were 0.718, 0.711, and 0.714, respectively. These results suggest that utilizing clinical notes from different silos in a federated manner did improve the accuracy of the phenotyping NLP algorithm, and the performance is comparable to NLP trained on centralized data. The performance of federated NLP on each single obesity comorbidity is shown in Table 3. It is necessary to point out that it was impractical to conduct federated phenotyping training when the number of "questionable" cases for many diseases is small (Table 1). This is true for many diseases in the i2b2 dataset. In such situations, "questionable" cases were excluded from the training and testing process. Instead of 3-class classification, a 2-class binary classification of "presence" or "absence" was conducted. Therefore, the performance metrics cannot be directly compared with results in the original i2b2 challenge, though the scores were similar.
Conclusion
In this article, we presented a two-stage method that conducts patient representation learning and obesity comorbidity phenotyping, both in a federated manner. The experimental results suggest that federated training of machine learning models on distributed datasets does improve performance of NLP on clinical notes compared with algorithms trained on data from a single site. In this study, we used CUIs as input features into machine learning models, but the same federated learning strategies can also be applied to raw text.
Acknowledgement
Research reported in this publication was supported by the National Library Of Medicine of the National Institutes of Health under Award Number R01LM012973. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. | 4,720.8 | 2019-08-01T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Magnetic-Mixed Convection in Nanofluid-Filled Cavity Containing Baffles and Rotating Hollow-Cylinders with Roughness Components
Introduction
Convective heat transfer in confined enclosures is of great interest to engineers and researchers as it is frequently encountered in heat transport processes in various engineering fields such as heat exchangers, electronic cooling, energy storage systems, solar technologies, and nuclear reactor systems. Among heat transfer processes, convective heat transfer is a complex phenomenon which develops through the interaction of thermal buoyancy flow caused by temperature differences and shear flow caused by additional forces such as moving surfaces, rotating surfaces, or inlet-outlet flow. In addition, heat transfer associated with magnetic fields has attracted researchers' attention considering its extensive applications in engineering, for example, crystal growth in liquids, electronic packaging, solar technology, nuclear reactor cooling, molten metal purification, and the petroleum industry. In this case, the magnetic force interacts with the buoyancy flow and produces a Lorentz force which influences the fluid flow and heat transport mechanisms. Recently, researchers found a limitation in using conventional liquids such as water, oil, and ethylene glycol as coolants, namely their low thermal conductivity. They have tried to overcome this limitation and developed novel heat transfer fluids, named nanofluids, by dispersing nanoparticles in different base fluids; these fluids have superior properties such as higher thermal conductivity, improved stability, minimal clogging in the flow domain, and reduced pumping power. In this context, many researchers and engineers have carried out theoretical and experimental studies of natural, mixed, and forced convection in different configurations filled with different nanofluids, with or without the influence of a magnetic field. Some of the related studies are presented here.
Fereidoon et al. [1] investigated mixed convection in a double lid-driven square cavity by using the finite volume method and reported that heat transfer increases with solid volume fraction at a fixed Reynolds number. They also noted that it increases with Ri and Re at a fixed solid volume fraction. Later on, Muthtamilselvan and Doh [2] conducted a similar study considering a magnetic field effect in the vertical direction and found that the flow, heat, and mass transfer characteristics strongly depend on the magnetic field strength. Ismael et al. [3] examined the effects of partial slip and an inclined magnetic field on mixed convection and showed that the convection due to partial slip is controlled by the strength and orientation of the magnetic field. Kasaeipoor et al. [4] utilized a finite volume approach to study mixed convection in a T-shaped cavity under a magnetic field effect. They demonstrated that the heat transfer in the nanofluid increases as the cavity aspect ratio is increased. Kefayati [5] used Buongiorno's mathematical model to investigate mixed convection in non-Newtonian nanofluid flow and observed that heat and mass transfer are enhanced with the buoyancy ratio number, whereas mass transfer is improved with thermophoresis and Brownian motion. Öztop et al. [6] investigated mixed convection in a wavy-walled cavity filled with nanofluid and noted that the enhancement of heat transfer with volume fraction depends on Ha and Ri. Armaghani et al. [7] studied natural convection in a nanofluid-filled baffled L-shaped cavity. In their study, the finite volume method was implemented and heat transfer was found to increase with the aspect ratio and baffle length. Karimdoost Yasuri et al. [8] conducted a similar analysis in a square cavity with a baffle and an external magnetic field. They recorded that heat transfer increases for increasing Ra and baffle length but decreases with Ha. After that, Hussein et al. [9] numerically investigated natural convection in an inclined cavity having a cold baffle and filled with Al2O3/water nanofluid and found that the Nusselt number increases with Ra and ϕ but decreases with the cavity aspect ratio and inclination angle. Ziam et al. [10] proposed a single-phase nanofluid model to analyze natural convection in a baffled U-shaped cavity. They considered the Brinkman and Wasp models and suggested that the heat transfer rate increases for increased Ra and ϕ but diminishes for Ha. Aljabair et al. [11] used a FORTRAN code to solve mixed convection in a nonuniformly heated arc-shaped cavity. They reported that the local and average Nusselt numbers are increasing functions of ϕ, Re, and Ra. They also presented numerical results based on correlation equations for the average Nusselt number. Al-Farhany et al. [12] studied magnetohydrodynamic natural convection in an inclined porous cavity with conducting horizontal fins and showed that the greatest length and the widest gap of the fins cause a superior heat transfer situation within the cavity. Shah et al. [13] analyzed the magnetized viscous flow of a water-based hybrid nanofluid containing single-wall carbon nanotubes through a permeable channel by implementing a nanolayer approach and demonstrated that heat flux increases with the induction of single-wall carbon nanotubes in water, and that the nanolayer as well as the particle radius causes an enhancement in the thermal conductance of the base fluid. Bilal et al.
[14] investigated natural convection flow of power-law fluids in a trapezoidal cavity with a nonuniform U-shaped fin by using the finite element method and COMSOL Multiphysics software (5.6) and highlighted that heat transportation and the momentum profile increase with the increase in the Rayleigh number but decline for increased viscosity of the fluid. After that, Bilal et al. [15] performed a similar investigation for a square cavity with a nonuniform T-shaped fin. Later on, Shah et al. [16] studied natural convection flow of power-law fluids in an isosceles triangular cavity by using similar software and found a noticeable influence of the governing parameters on the local and average heat flux coefficients and kinetic energy. It was also inferred that a higher Rayleigh number causes enrichment in local heat transfer and kinetic energy, whereas the reverse phenomenon occurs for the power-law index. Another numerical study [17] pointed out that more heat transfer is achieved with the addition of metallic particles to the base fluid compared to nonmetallic particles. Moreover, an elevated Nusselt number is found for an increasing magnitude of nanoparticle volume fraction, but it depreciates the skin friction coefficient. Qureshi et al. [18] optimized entropy generation for the induction of hybrid magnetized nanoparticles between two coaxially rotating porous disks and showed that entropy generation is reduced with an upsurge in the inducted hybridized particle volume fraction, and the Bejan number is found to be inversely related to entropy generation. After that, Qureshi et al. [19] examined three-dimensional triadic hybrid nanofluid flow between two coaxially rotating disks with magnetic field effects and found that hybrid nanofluids containing Cu-Al2O3 nanoparticles showed significantly better results than those of other hybrid nanoparticles. It was also recorded that the thermal conductance of nanofluids is directly correlated with the morphology of any hybrid nanoparticles. Bilal et al. [20] considered the Cattaneo-Christov heat flux model and a transverse magnetic field to analyze heat transfer of Williamson fluid flow over an exponentially stretched surface by the shooting and Runge-Kutta methods and showed that the velocity field is suppressed by the magnetic field and the Williamson fluid parameter. Later on, Bilal et al. [21] performed finite difference simulations of viscous fluid flow over a permeable rotating disk to explore the flow in the presence of a transverse magnetic field and recommended that the tangential and radial velocity components are enhanced with the magnetic strength parameter, whereas the opposite trend occurs in the axially directed velocity. The flow and thermal fields within simple geometrical models with/without baffles or fins were numerically explored in these studies by using different nanofluids in the presence or absence of a magnetic field. It needs to be highlighted that a geometrical configuration with an appropriate obstruction was not considered in those investigations.
Insertion of detached obstructions of various shapes such as circular, square, tilted-square, and triangular has an impact on controlling fluid flow and heat transfer characteristics inside confined enclosures where fluid flow needs to be restricted or bifurcated. Thus, configurations with appropriate obstructions can significantly affect the flow and temperature fields compared to other simple shapes. A large number of studies have been conducted considering these phenomena. Rahman et al. [22] investigated mixed convection in a rectangular cavity with a circular cylinder and pointed out that the dimensionless temperature and heat transfer were strongly dependent on the dimensionless parameters and configurations studied. Majdi et al. [23] used the Fluent 6.3 commercial program to analyze combined convection in a triangular cavity with an insulated cylinder. They illustrated that the heat transfer rate is enhanced with the volume fraction of nanoparticles but decreases with Ri. Ishak et al. [24] studied mixed convection in a trapezoidal cavity having a solid cylinder and filled with nanofluids containing Al2O3 nanoparticles. Their results showed that the heat transfer and Bejan number improved with the location and size of the solid cylinder. Aly et al. [25] used the ISPH method to investigate mixed convection in a lid-driven cavity with circular cylinders in motion and suggested that the SPH tool can be applied easily to solve complicated 2D and 3D problems. Bilal et al. [26] analyzed the flow and heat transfer of a nonlinear fluid in a square cavity with a cold cylinder and revealed that convective heat transfer and kinetic energy rise with Ra but decrease with the power-law index. The problem of heat transportation of mixed convection in a Newtonian fluid enclosed by a square cavity with a placement of an adiabatic square cylinder was investigated by Khan et al. [27], who revealed that an increased Reynolds number causes a decrement in the kinetic energy of the fluid, whereas the thermal buoyancy forces and Nusselt number increase for an increasing Grashof number. Shah et al. [28] examined the heat and mass transportation of double diffusive natural convection in a Casson non-Newtonian fluid enclosed by a hexagonal enclosure with an isothermally heated cylinder in the presence of an inclined magnetic field by practicing COMSOL Multiphysics software and found that the heat and mass flux coefficients diminish with the magnetic field effect, whereas the heat and mass flux distributions are found to upsurge with the Casson fluid parameter. Bansal and Chatterjee [29] performed a numerical study of magneto-convective flow in a lid-driven cavity that included a rotating and heat conducting cylinder. In their study, the flow and thermal fields were found to change qualitatively with the mixed convective strength, magnetic field, and concentration of nanoparticles. Another numerical study [30] pointed out 14.2% more heat transfer in a rotating state compared to a motionless state. Later on, Malla et al. [31] studied a similar problem considering a fluid-porous layer using ANSYS Fluent software, and a stable system was found at lower Da and minimum heat transfer at about Ri = 10. Ali et al. [32] numerically analyzed the flow and heat transfer in mixed convection through grooved channels, and they noticed that the flow and heat transfer mechanisms, due to the governing parameters, are more effective in the presence of a rotating heat source. Al-Kouz et al.
[33] investigated irreversibilities and heat transfer of mixed convection in a wavy enclosure containing a rotating cylinder and showed that the flow function and heat transfer enhancement become maximum at increasing Ra. Thus, it is clearly seen that fluid motion and temperature distribution inside a cavity are influenced by the presence of internal blockage. However, irregular surfaces of internal blockage were not considered in these studies, although they can be used to improve or control the flow function and heat transfer enhancement in a closed or open cavity.
Irregular surfaces, either static or in motion, of a geometrical configuration generate contributory recirculating regions that can enhance flow mixing within the cavity and heat transfer compared to other smooth configurations. Thus, such complicated configurations can cause substantial improvement in the heat transfer performance and flow mixing and play significant roles in engineering processes such as heat exchangers, electronic cooling systems, solar technologies, nuclear reactors, lubrication systems, and chemical processing equipment. Based on this idea, many studies have been accomplished by researchers. Gangawane and Manikandan [34] studied natural convection in a square cavity containing a hexagonal block with different thermal conditions and demonstrated that the flow and temperature pattern changes were remarkable for both thermal conditions. It was also found that heat transfer decreases with the power-law index. Sheikholeslami et al. [35] investigated the transportation of nanoparticles in a circular porous cavity with a complex inner shape in the presence of a magnetic force and illustrated that nanofluid motion decreases with the magnetic force, and the Nusselt number increases for reduced cavity porosity. Later on, Alhashash [36] accomplished a similar study considering a square cavity with a hot corrugated cylinder and recommended that heat transfer from a corrugated cylinder is better than from a smooth cylinder under specific circumstances. Tayebi and Chamkha [37] carried out a parametric analysis of natural convection in a nanofluid-filled enclosure with a wavy cylinder and pointed out that the flow and heat transfer characteristics are controlled by the presence of the wavy conductive cylinder. Ismael [38] utilized a finite difference method to examine mixed convection in an enclosure with an arc-shaped moving wall and showed that the heat transfer due to rotational speed is insignificant at lower Ra but significant at higher Ra. Ali et al. [39] numerically investigated mixed convection in a concentric rotating cylinder and an inner sinusoidal cylinder. In their study, COMSOL 5.2a was used to solve the modeled governing equations, and the formation of stream and temperature lines was illustrated, as well as how the Nusselt number was affected by the number of corrugations of the inner cylinder and the governing parameters studied. Hamzal et al. [40] analyzed the mixed convection of a rotating cylinder immersed in a nanofluid-filled cavity associated with a magnetic field and heat flux. They concluded that the heat transfer processes are enhanced with nanoparticle volume fraction, which is promoted with increasing Ra.
Based on the presented literature review, it is evident that mixed convective heat transfer in the presence of a magnetic field is of considerable interest to researchers and engineers and is applicable in science and engineering fields. A lot of research has been conducted on mixed convection in different geometries filled with different fluids. Though mixed convection in different geometries is considered in the open literature, mixed convective heat transfer in a lid-driven nanofluid-filled partially heated cavity having a rotating cylinder with roughness components in the presence of baffles has never been investigated. Yet it has a significant influence on improving the heat transfer efficiency by enhancing the effective flow circulation, owing to the presence of roughness components and the nanofluid thermal conductivity, and it is widely applicable in engineering fields. Accordingly, the authors have been interested in examining the fluid flow and heat transfer behaviours in a nanofluid-filled partially heated cavity equipped with a centred rotating cylinder with roughness components and a moving top wall, and also a pair of baffles attached to the cavity vertical walls, under the influence of a transverse magnetic field. As per the literature survey and the authors' knowledge, no such work has been reported yet. Galerkin's finite element method is utilized to simulate the modified governing equations in this study. The numerical results are obtained for different physical parameters and explained via streamlines, isotherms, and average Nusselt number bar charts. This type of configuration may be set up in engineering equipment such as high-performance heat exchangers, energy storage systems, cooling of electronic equipment, solar collectors, space thermal management, and reactor safety devices. Figure 1 represents the geometrical configuration of a two-dimensional square cavity of length L. A pair of horizontal baffles of 20% cavity length is fixed to the vertical walls of the cavity, and a circular obstacle of radius R (= 10% L) with triangular roughness components is positioned at its centre and rotated at angular velocity ω. The free region between the cavity and the obstacle is filled with alumina-water nanofluid that is heated partially from the cavity bottom wall and cooled from its top wall, which moves at a uniform velocity U0. The remaining walls and baffles are maintained at the no-slip condition and kept adiabatic. The mixed convection is induced by the rotating rough cylinder and the top moving wall, along with the temperature differences of the active thermal condition. A uniform magnetic field affects the flow field, and gravitational acceleration acts in the downward direction. The thermophysical properties of the nanoparticles and water are available in [41, 42].
Mathematical Analysis
The vector form of the conservation equations representing the flow model is defined considering the Boussinesq approximation and the described physical model as follows [29, 42-44]. The source terms used in the governing equation (2) are presented in Table 1.
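The governing equations themselves did not survive text extraction. As a hedged sketch only, for an incompressible nanofluid under the Boussinesq approximation they typically take the form

\[ \nabla\cdot\mathbf{V} = 0, \]
\[ \rho_{nf}\left(\frac{\partial \mathbf{V}}{\partial t} + (\mathbf{V}\cdot\nabla)\mathbf{V}\right) = -\nabla p + \mu_{nf}\nabla^{2}\mathbf{V} + (\rho\beta)_{nf}\,g\,(T-T_{c})\,\hat{\mathbf{j}} + \mathbf{F}_{L}, \]
\[ (\rho c_{p})_{nf}\left(\frac{\partial T}{\partial t} + (\mathbf{V}\cdot\nabla)T\right) = k_{nf}\nabla^{2}T, \]

where the buoyancy term and the Lorentz force \( \mathbf{F}_{L} = \sigma_{nf}\,(\mathbf{V}\times\mathbf{B})\times\mathbf{B} \) for the uniform applied field \( \mathbf{B} \) correspond to the source terms collected in Table 1, \( \hat{\mathbf{j}} \) is the upward unit vector, and the subscript nf denotes effective nanofluid properties. The exact form and nondimensionalization used in this study may differ from this sketch.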
The properties of the nanofluid used are as follows [30, 32, 41, 42, 45-48]. The corresponding boundary conditions based on the physical model are specified as follows. On the vertical walls and baffles: V = 0 and ∇T · n = 0. On the cylinder: the velocity matches that of the rotating rough surface. The dimensionless variables in equation (6) are then introduced into the governing equations to obtain their dimensionless form, following [43, 44].
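The property relations were also lost in extraction. As a hedged sketch, the single-phase models usually adopted in the cited references take the form

\[ \rho_{nf} = (1-\phi)\rho_{f} + \phi\rho_{s}, \qquad (\rho c_{p})_{nf} = (1-\phi)(\rho c_{p})_{f} + \phi(\rho c_{p})_{s}, \]
\[ (\rho\beta)_{nf} = (1-\phi)(\rho\beta)_{f} + \phi(\rho\beta)_{s}, \qquad \mu_{nf} = \frac{\mu_{f}}{(1-\phi)^{2.5}}, \]
\[ \frac{k_{nf}}{k_{f}} = \frac{k_{s} + 2k_{f} - 2\phi(k_{f}-k_{s})}{k_{s} + 2k_{f} + \phi(k_{f}-k_{s})}, \]

where the subscripts f and s denote the base fluid and the solid nanoparticles. The Brinkman viscosity and Maxwell conductivity models are assumptions here; the exact correlations used in this study cannot be confirmed from the extracted text.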
The modified forms of the source terms of Table 1 and equation (8) are provided in Table 2.
With the help of equation (6), the dimensionless boundary conditions (shown in equation (5)) are written as follows. On the top wall: the cooled moving-lid condition. On the vertical walls and baffles: no slip and adiabatic (∇θ* · n = 0). On the cylinder: V*_c = (Sr sin θ, Sr cos θ) and ∇θ* · n = 0.
Evaluation of Average Nusselt Number.
The local Nusselt number is computed at the heated midsectional wall of the cavity and is defined in nondimensional form as follows [51, 52], and the corresponding average Nusselt number is estimated as follows [51, 52].
3.2. Numerical Procedure. The numerical simulation of the governing equations (7)-(9) along with the boundary conditions (10a)-(10d) has been performed using Galerkin's weighted finite element method. In this procedure, the governing partial differential equations are converted into integral equations by implementing the Galerkin weighted residual method [53], which yields the integral equations (13)-(16). Here, A is the element area, N_α (α = 1, 2, ..., 6) are the element shape (interpolation) functions for the velocity components and temperature, and H_λ (λ = 1, 2, 3) are the element shape functions for the pressure. The Gaussian divergence theorem has been applied in equations (13)-(16) to generate the boundary integral terms associated with the surface tractions and heat flux in the momentum and energy equations, giving equations (17)-(20), where the surface tractions (A_x, A_y) act along the outflow boundary A_0, and the velocity components and the fluid temperature or heat flux (q_w) flow into or out of the domain along the wall boundary A_w. The basic unknowns for the above differential equations are the velocity components (U, V), the temperature θ, and the pressure P. For the development of the finite element equations, the six-node triangular element is used in this work. All six nodes are associated with the velocities and temperature, while only the three corner nodes are linked with the pressure. This means that a lower order polynomial is chosen for the pressure, consistent with the continuity equation. The velocity components, temperature profiles, and linear interpolation for the pressure distribution, according to the highest derivative orders of the differential equations (7)-(9), are expanded in these shape functions, where β = 1, 2, ..., 6 and λ = 1, 2, 3. Substituting the element velocity, temperature, and pressure distributions from equations (21)-(24) into equations (17)-(20), the finite element equations can be written in the standard discrete form [43, 44, 49, 50].
Now, we consider the coefficients in the above governing equations as follows. These element matrices are evaluated in closed form for numerical simulation. Details of the derivation of these element matrices are omitted for brevity.
With the help of the above coefficients, the finite element equations can be written in the following form:
Using the Newton-Raphson method explained by Reddy [54], the obtained nonlinear equations (29)-(32) are converted into linear algebraic equations. Finally, these linear equations are solved by employing the triangular factorization method and the reduced integration method expressed by Zienkiewicz et al. [55]. The convergence criterion of the numerical solution, along with the error estimation, has been set to |φ^(m+1) − φ^m| ≤ 10^(-5), where m is the iteration number and φ stands for U, V, and θ. The details of the computational procedure are also available in the earlier studies [32, 56, 57] and are well described in [58, 59]. The simple algorithm of this study is illustrated in the flowchart of Figure 2.
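To make the outer nonlinear loop concrete, the following is a minimal sketch with the stated stopping criterion; residual and jacobian stand in for the assembled finite element system and are assumptions, not the actual solver of this study.

```python
import numpy as np

def newton_raphson(residual, jacobian, phi0, tol=1e-5, max_iter=50):
    """phi stacks the nodal unknowns U, V and theta; see equations (29)-(32)."""
    phi = np.asarray(phi0, dtype=float)
    for m in range(max_iter):
        # Linearise and solve for the correction (LU is a triangular factorization).
        delta = np.linalg.solve(jacobian(phi), -residual(phi))
        phi_new = phi + delta
        # Convergence criterion |phi^(m+1) - phi^m| <= 1e-5, applied component-wise.
        if np.max(np.abs(phi_new - phi)) <= tol:
            return phi_new, m + 1
        phi = phi_new
    return phi, max_iter
```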
Mesh Generation.
In two-dimensional configurations, mesh generation is a procedure of subdividing the geometrical domain into triangular or quadrilateral elements, called finite elements. These elements are connected to their neighbouring elements through characteristic points known as "nodes". The required values of the physical quantities of a problem are calculated at every node. Meshing a complicated geometry into a significant number of elements is essential to the numerical simulations, which makes the FEM a powerful tool for solving engineering problems encountered in practical applications. The mesh configuration with triangular elements used in this study is presented in Figure 3.
Grid Sensitivity Test.
A grid sensitivity test is performed to acquire a grid-independent solution. In this regard, the average Nusselt number has been estimated for different mesh systems, and five of them are presented in Table 3 and Figure 4, which confirm that further meshing has a very small impact on the computed average Nusselt number. Based on the presented mesh configurations and the corresponding average Nusselt numbers, it can be decided that a mesh with 24444 nodes and 47648 elements is appropriate for an accurate solution of this study.
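A minimal sketch of the acceptance test implied above; mesh_results would hold the (nodes, elements, average Nusselt number) entries of Table 3, which are not repeated here, and the 1% threshold is an illustrative assumption.

```python
def select_mesh(mesh_results, rel_tol=0.01):
    """mesh_results: list of (nodes, elements, nu_avg) ordered by increasing refinement."""
    for (_, _, nu_coarse), (nodes, elems, nu_fine) in zip(mesh_results, mesh_results[1:]):
        change = abs(nu_fine - nu_coarse) / abs(nu_coarse)
        print(f"{nodes} nodes / {elems} elements: relative change in Nu_avg = {change:.3%}")
        if change <= rel_tol:
            # Further refinement changes Nu_avg negligibly; adopt this mesh.
            return nodes, elems
    return None
```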
Validation of Computational Procedure.
As code validation is required to evaluate the accuracy of the computational procedure of a problem, we have accomplished comparisons considering mixed convection in different configurations. 3.5.1. Validation Case One. Rashad et al. [60] simulated mixed convection in a lid-driven cavity filled with nanofluids by implementing the finite volume method and compared their results with the earlier studies reported by Khanafer and Chamkha [61] and Iwatsu et al. [62]. We have simulated similar cases to those of [60-62] by using our finite element method-based numerical code. The obtained results are compared with theirs in Table 4. In addition, comparative isotherm plots are also presented in Figure 5(a). In both cases, numerically and graphically, reasonable agreement is found.
Validation Case Two.
Du et al. [63] used the finite difference method to investigate the influence of a magnetic field on open cavity flow and validated their numerical code against the numerical results available in [50, 64]. They also validated their numerical code by comparing stream function and temperature contours against those reported by Ghasemi et al. [50]. In this study, we have applied our numerical procedure to the problems of [50, 63, 64] in special cases, and the obtained results are validated by the comparisons shown in Table 5 and Figure 5(b).
Thus, the above comparisons ensure that the present numerical procedure is suitable for simulating mixed convection in a nanofluid-filled square domain equipped with baffles and a rotating hollow cylinder having roughness components under an external magnetic field, which allows us to carry on our investigation.
Results and Discussion
In this section, simulated results are obtained for mixed convection of Al2O3-water nanofluid in a lid-driven cavity with horizontal baffles and a centred rotating rough cylinder, and the flow and thermal fields are demonstrated using streamlines, temperature contours, and average Nusselt number bar charts. During the simulation and graphical presentation, variation in two parameters is considered simultaneously while the values of the others are kept fixed as Pr = 6.2, Gr = 100, Sr = 10, Ha = 10, Re = 100, ϕ = 1%, Bh = 0.20 L, and Δ = 0.0275 L. As a result, the combined effects of the two governing parameters are exhibited. The parametric ranges of the analysis are presented in Table 6.
Effects of Speed Ratio (Sr) and Magnetic Field (Ha) on Streamlines.
Figure 6 illustrates the flow field by plotting streamlines for different values of the speed ratio of the rotating rough cylinder and the magnetic field strength in the presence of mounted horizontal baffles. In Figure 6(a), at the motionless state of the rough cylinder and Ha = 0, heated fluid ascends near the left vertical wall due to thermally induced buoyancy flow and then descends quickly from top to bottom near the right vertical wall, as the top moving cold wall and the insulated vertical wall drive the circulation downward. As a result, a clockwise streamline circulation develops within the cavity, which is also confirmed by the negative flow strength. This convective circulation occupies the whole cavity, and it is found to be suppressed by the presence of the horizontal baffles. Moreover, the highest flow strength is noticed at -0.065 within the core circulation near the top moving wall. When the rough cylinder rotates at speed Sr = 5, the flow circulation is restructured by compressing the clockwise circulation toward the top wall and forming an anticlockwise circulation around the rough cylinder under the influence of the rotating inertia of the rough cylinder. It is also found that the flow velocity increases rapidly, and the strength as well as the intensification of the flow circulation around the hollow cylinder is much stronger than that close to the top wall. Later on, when the speed ratio is increased to Sr = 10, more rotating inertia is produced within the cavity, and hence the effect of the moving wall is dominated. As a result, the streamline circulation becomes more intensified with higher strength around the rotating obstacle. With a further increase in rotating speed to Sr = 20, the anticlockwise circulation attains maximum size and strength, whereas the clockwise circulation becomes small and close to the top wall. In order to visualize the impact of the speed of the rotating rough cylinder, we have recorded the numerical flow velocities, which are 0.065 at Sr = 0 and 0.060 at Sr. At Ha = 25, streamlines are spaced out with lower strength compared to the case at Ha = 10. In addition, streamlines are more affected and spaced out at Ha = 50, where they also attain minimum strength. The physical consequence behind this is that the Lorentz force, produced by the interaction of the nanofluid buoyancy flow and shear flow with the imposed magnetic force, retards the flow velocity, and the Lorentz force escalates with each increment in the magnetic effect. As a result, the flow circulation declines with an increase in Ha for all Ra. Thus, one can conclude that the variation in flow characteristics is more noticeable at greater magnetic field strengths than at lower ones, when compared with the case without a magnetic field.
Effects of Speed Ratio (Sr) and Magnetic Field (Ha) on Isotherms.
Figure 7 represents the thermal field via temperature contour plots, as well as its topology, due to combined variation in the rotational speed of the rough cylinder and the magnetic field strength at fixed values of the other parameters, such as Pr = 6.2, Re = 100, Gr = 100, and ϕ = 1%. In Figure 7(a), at Sr = 0 and Ha = 0, isotherms are densely visible over the heated wall and are then shifted upward near the left vertical wall as a result of the imposed thermal condition and the clockwise flow circulation due to the top moving wall. In the case of the rough cylinder rotating at Sr = 5 (in the counter-clockwise direction), the isotherms are reshuffled, forming a spinning shape around the rough cylinder that occupies the fluid domain inside the cavity. Moreover, isotherms close to the cold moving wall are found to twist in the direction of the lid-wall motion. In all diagrams, especially while the rough cylinder is in motion, the isotherm distribution is found to reform in the presence of the baffles, where the lower isotherm pack seems to be connected to the lower end of the left adiabatic baffle and another isotherm pack seems to initiate from the top end of the right adiabatic baffle, completing the isotherm circulation in the core region. It is worth noting that the cylinder's rotating inertia dominates the lid force from the core region to the bottom region, and hence the isotherms tend to spin in the direction of the rough cylinder. With each subsequent increment in rotation speed, the isothermal distribution within the cavity is significantly affected, the core circulation becomes much stronger, and the isothermal pack near the bottom region is also more condensed toward the heated wall. As a result, more convective heat is released from the heated wall for each increment in the rotational speed of the rough cylinder. When the magnetic field is imposed at strength Ha = 10, small changes are noticed in the temperature contour distribution for different speed ratios, but for Ha = 25 remarkable changes are recorded in the isotherm distribution for all Sr, especially in the core distribution of the temperature contours. With a further increment in magnetic field strength to Ha = 50, significant changes are observed in the isotherm distributions within the closed domain, which reflects that the magnetic field has a significant impact on the distribution of isotherms as well as the temperature zones inside the cavity. Moreover, the temperature plots are found to differ for both increasing speed of rotation and strength of the magnetic field. Consequently, convective heat transfer reduces with Ha. In addition, the different colours of the temperature zones in the fluid domain indicate the topologies of the thermal field for varying amounts of Sr and Ha, respectively.
Effects of Speed Ratio (Sr) and Baffles Length (Bl) on Streamlines.
The streamline plots of the nanofluid flow circulation at different speed ratios of the rotating rough cylinder and baffle lengths are presented in Figure 8. In all diagrams, the streamline circulations are found to be squeezed by the baffles, and this effect increases with the increase in length of both baffles. As a result, the flowing nanofluid encounters more resistive force for longer baffles, and hence the strength of the flow circulation is reduced noticeably. Moreover, the streamline pattern of the core circulations is unchanged except in the regions where the baffles are horizontally located. In addition, the secondary circulations above the core circulation are also more affected by the increased length of the baffles. Thus, longer baffles cause more changes in the flow velocity as well as the circulations compared to shorter baffles. Moreover, the impact of the baffles on the flow field is more effective while the rough cylinder is rotated at the highest speed ratio than at lower ones, as a 44.44% reduction is found in flow strength at Sr = 20, which becomes 17.24% at Sr = 0.
Effects of Speed Ratio (Sr) and Baffles Length (Bl) on Isotherms.
Besides this, Figure 9 displays the isotherm distributions when the speed ratio and baffle length are increased simultaneously. The distribution of temperature contours is found to change gradually, especially in the core region, with the increase in length of the baffles, and the dense isotherms are also found to move closer to the bottom heated wall, which results in an increase in heat transfer with longer baffles. It is worth noting that the impact of the baffle length on the thermal field decreases as the speed ratio is lowered, and it seems to be insignificant in the motionless state of the rough cylinder.
Effects of Reynolds Number (Re) and Magnetic Field (Ha) on Streamlines.
Figure 9: Variation in temperature contour plots for different Sr and Bl (panels (a)-(d)).

The plots of streamlines due to combined variation in the Reynolds and Hartmann numbers are demonstrated in Figure 10. In Figures 10(a)-10(d), strong convective counter-clockwise flow circulations are observed around the rough cylinder due to the influence of the cylinder's rotating inertia and the imposed thermal condition. In addition, two symmetric clockwise flow circulations are also visualized over the centred anticlockwise circulation with the assistance of the lid-wall inertia. However, the strength of the convective flow circulations is observed to increase with the increase in Reynolds number, and the streamlines are also intensified toward the cylinder, because an increase in Re causes an increment in fluid inertia, so that the rotating inertia and lid inertia are both increased; hence the strength as well as the concentration of the streamlines increases, although the pattern of the streamlines remains similar for varying Re. Besides this, it is noticed that the flow velocity attains lower strength, and the distribution of streamlines is spaced out toward the boundaries, with increasing Ha for all Re. These variations are more noticeable at higher magnetic strength than at comparatively lower ones.
Effects of Reynolds Number (Re) and Magnetic Field (Ha) on Isotherms.
On the other hand, isotherm distributions at different Re and Ha are depicted in Figure 11. It is observed that isotherms are generated from the heated wall and distorted around the rough cylinder, covering the fluid domain, due to the rotating rough cylinder and moving wall in the presence of the baffles. As lower Re causes lower fluid inertia, the rotating inertia and lid inertia lose their strength at Re = 10, and hence a weak isotherm circulation is visible around the cylinder and the isotherms become wavy with a lower gradient near the top wall (as seen in Figure 11(a) at Ha = 0). As a result, lower heat transfer occurs at Re = 10. After that, increasing Re increases the fluid inertia; consequently, both the rotating inertia and the lid inertia become prominent, and hence the isotherm circulation around the cylinder becomes stronger and the isotherms are more twisted near the top moving wall up to Re = 100. Then, at Re = 200, the circulation and distortion of the isotherms become even stronger with a higher thermal gradient, which results in more convective heat being released at Re = 200. When the magnetic field is imposed at Ha = 10, the isotherm distribution changes slightly for all Re. With a further increase in Ha, the circulation and distortion of the isotherms are remarkably affected, and the effect of Ha is found to be more prominent at lower Re than at higher Re. Moreover, the dense circulation surrounding the rough cylinder tends to disappear, and the isotherms are spaced out with lower curvature at Ha = 50, which confirms the substantial effect of the magnetic field on the isotherm distribution for all Re.
Effects of Volume Fraction (ϕ) and Magnetic Field (Ha) on Streamlines.
Figure 12 delineates the streamline plots for different volume fractions of nanoparticles in the lid-driven cavity having a rotating cylinder with triangular components, with and without a magnetic field. Due to the induced thermal and velocity boundary conditions, a primary strong convective flow circulation along with two small secondary circulations is visible in the base fluid at fixed parametric values of Pr = 6.2, Gr = 100, Sr = 10, Re = 100, Bh = 0.20 L, and Δ = 0.0275 L, which remain almost similar in the nanofluids at different concentrations (1%, 3%, and 5%), but the flow strength is found to decline remarkably for the additional nanoparticles at volume fractions of 1%, 3%, and 5% in the base fluid. These results are expected since the amalgamation of nanoparticles increases the density of the working fluid and decelerates the flow velocity within the cavity. In order to understand the flow magnitude at different amounts of nanoparticles more accurately, one can find the maximum velocity of the core flow circulations, which is 1.20, 1.10, 0.90, and 0.80, respectively, at volume fractions of 0%, 1%, 3%, and 5%. It is also found that the flow velocity decreases when a magnetic field is imposed transverse to the flow circulation and its strength is increased. Moreover, the denseness of the streamlines reduces with each increment in Ha, and they are spaced out toward the sidewalls. In addition, the maximum declination in flow magnitude occurs for simultaneous changes in the volume fraction and magnetic field, recorded as 1.2 (ϕ = 0%, Ha = 0), 1.0 (ϕ = 1%, Ha = 10), 0.70 (ϕ = 3%, Ha = 25), and 0.45 (ϕ = 5%, Ha = 50).
Effect of Volume Fraction (ϕ) and Magnetic Field (Ha) on Isotherms.
Besides these, the distributions of temperature contours in the base fluid and the nanofluid with 1% volume fraction are found (in Figure 13(a)) to be qualitatively similar, with only minute differences. Figure 14 illustrates the heat transfer in the fluid flow domain due to the physical parameters via average Nusselt number bar charts. Figure 14(a) confirms that the heat transfer rate augments rapidly with increasing rotational speed of the rough cylinder, because the increased rotational inertia due to Sr increases the flow circulation significantly. It is also visualized in Figure 14(a) that higher magnetic strength decreases the heat transfer rate at each Sr, as the interaction of the magnetic force with the convective flow circulation generates a Lorentz force, which reduces the flow velocity and produces more temperature within the fluid domain. From Figure 14(b), it is seen that the heat transfer rate augments monotonically with increasing Re, where maximum heat transfer is recorded in the forced convection dominated regime and minimum heat transfer in the mixed convection regime. The physics behind this is that higher Re increases the fluid inertia, which accelerates the cylinder's rotating inertia and also the lid inertia, causing maximum flow circulation and heat transfer within the cavity. In Figure 14(c), the heat transfer rate is found to be an increasing function of the amount of nanoparticles. This phenomenon is expected as a higher amount of nanoparticles improves the nanofluid thermal conductivity, and hence the capability of energy transportation in the flow domain. A similar trend is also observed in Figure 14(d) for the presence and increase in height of the triangular components on the rotating cylinder, as greater heights of the triangular components increase the rotating inertia and the flow velocity as well. In order to understand the impact of the roughness components on heat transfer precisely, it can be recorded that 13.20% (at Ha = 0) more heat transfer takes place for the rotating rough cylinder compared to a rotating smooth cylinder, and this becomes 10.14% when the magnetic field is activated at strength Ha = 50. On the other hand, an enhancement in heat transfer is found for enlarging the baffle length (seen in Figure 14(e)), as the temperature contours were found to get closer to the heated wall with longer baffles and also to concentrate in their distribution around the rough cylinder, which leads to an increase in heat transfer inside the cavity (shown in Figure 9). Moreover, the heat transfer rate increases by 42.42% when the baffle length changes from 0.10 L to 0.25 L in the absence of a magnetic field, but this reduces to 34.53% when the magnetic field is activated at a strength of Ha = 50. The results in Figure 14(f) indicate that the average Nusselt number strongly depends on the orientation of the baffles, and maximum heat transfer occurs when the baffles are horizontally fixed at the cavity's vertical walls. In all Nusselt number bar charts, it is observed that the heat transfer rate decelerates monotonically with the increase in magnetic field strength, owing to the active Lorentz force produced by the magnetic field.
Conclusions
In this study, the impacts of the magnetic field, the rotating rough cylinder, and the amount of nanoparticles on the fully developed flow and temperature fields in a nanofluid-filled, partially heated lid-driven cavity with horizontal baffles are numerically investigated. The Galerkin finite element method is implemented to simulate the governing equations, and the obtained results are validated against existing results available in the literature. A detailed parametric discussion has been performed from a physical point of view. The major findings based on the obtained results are as follows:
(i) Intense streamline circulation and high fluid flow velocity occur at higher speed ratios and Reynolds numbers, while the reverse phenomenon occurs at higher magnetic field strengths, nanoparticle concentrations, and baffle lengths.
(ii) The temperature contour plots change significantly with the change in the speed of the rotating rough cylinder, the Reynolds and Hartmann numbers, and the baffle length while the rough cylinder is in motion, but they change only minutely with the increase in the concentration of nanoparticles in the base fluid.
(iii) The heat transfer rate is augmented substantially at higher speed ratio, height of the triangular components, length of baffles, and Reynolds number, and it is respectively maximized and minimized at each increment in nanoparticle volume fraction and magnetic field strength.
(iv) The heat transfer rate is optimum when the baffles are horizontally fixed at the cavity walls, compared with the other cases.
(v) Maximum heat transfer occurs when triangular components are attached to the rotating cylinder rather than for a smooth rotating cylinder.
(vi) Optimization of heat transfer is correlated with the directions of rotation of the rough cylinder and the lid wall. | 9,352 | 2022-12-14T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Effect of normal stresses on the results of thermoplastic mold filling simulation
The paper deals with the effect of the normal stresses on the predicted flow front during the filling stage of thermoplastic injection molding. The normal stresses are predicted using the nonlinear Criminale-Ericksen-Filbey model (a variant of the second-order fluid rheological model with viscosity and first and second normal stress coefficients dependent upon the magnitude of the shear rate) incorporated into a comprehensive 3D simulation software for mold-filling analysis. The additional stress term allows the prediction of the so-called ear-flow effect (melt racing on the edges of the cavity).
Introduction
Thermoplastic injection molding is the most common manufacturing process for producing plastic parts. Material is fed into a heated barrel, mixed, and forced into a mold cavity where it cools and hardens to the configuration of the cavity. Significant progress has been achieved in three-dimensional finite element simulation of plastic filling the mold (mold filling analysis) [1]. Typically in commercial simulations the polymer melt is considered as a generalized Newtonian fluid, where the deviatoric stress tensor is proportional to the deviatoric deformation rate; the scalar coefficient connecting the shear rate and shear stress, known as viscosity, is dependent upon temperature, shear rate invariants, pressure and other factors [2]. However, the generalized Newtonian fluid model does not predict any normal stress differences during simple shear flow, whereas real polymers usually exhibit significant normal stress differences.
In the current work, we develop a finite element mold-filling program that allows incorporation of the Criminale-Ericksen-Filbey viscoelastic model, which can accurately predict normal stress differences in a wide range of temperatures and shear rates [3]. One of the motivators for this development was potentially improving the prediction of ear-flow, a little understood phenomenon of the more rapid advance of the flow front on the edge of a mold cavity than in the center of the cavity [4].
Ear-Flow Phenomenon
Numerous cases have been observed in industrial injection molding practice of amorphous materials which exhibit a race-track or ear-flow effect. This is the more rapid advance of the flow front at the edges of the molding cavity than in the center of the cavity, usually observed at elevated injection speed. This flow-leading effect at the edge cannot be explained by differences in cavity wall section thickness. In the worst cases, this race-tracking leads to air-traps and visual defects in the molded part. A typical flow front propagation demonstrating the ear-flow in a polystyrene material is shown in Figure 1.
As was originally suggested by the experimental observations of Murata et al. [5], and confirmed by the mold-filling simulations of Costa et al. [4], in many cases the ear flow is caused by higher polymer temperatures in the edge region. The temperature rise is caused by shear heating in the high-shear region (close to the boundary) of the runners and gates. This temperature rise is then convected into the cavity, favoring the cavity edge due to the distribution pattern in the gate. The effect of the preferential convection of the temperature rise from the shear regions of the runners is also similar to the development of flow imbalance in geometrically balanced feed systems explained by Cook et al. [6].
However, in many practical injection molding cases the ear flow phenomenon is observed under conditions where very little shear heating is predicted, thus raising suspicions that there is also another mechanism responsible for the ear flow effect. Since normal stress differences caused by in-plane shear near the cavity edges may push polymer perpendicular to the flow, they are a candidate for such a mechanism. Our simulation was used to test this possibility.
Figure 1 Flow front shapes obtained by Murata et al. [5] for a Polystyrene material in a glass insert mold.
Mathematical model
The filling stage of the injection molding process is described by the combination of the momentum equation (1), the continuity equation (2) and the energy equation (3), together with appropriate material equations and boundary conditions described in [1] and [2]. In the simplest mold filling case the material equations include the generalized Newtonian rheological equation (4). Boundary conditions are set on the mold-plastic interface, on the melt flow front (F), where n_F is the normal to the flow front F, and on the injection surface (I), where n_I is the normal to the injection surface I and Q(t) is the injection flow rate. The system of equations (1)-(12) is solved using a specialized finite element method customized for the typical conditions of the injection molding process described in [1].
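The equations themselves did not survive extraction. As a hedged sketch only, the standard incompressible generalized Newtonian formulation used in mold-filling analysis (e.g. [1, 2]) has the form

\[ \rho\,\frac{D\mathbf{v}}{Dt} = -\nabla p + \nabla\cdot\boldsymbol{\tau} + \rho\,\mathbf{g}, \qquad \nabla\cdot\mathbf{v} = 0, \]
\[ \rho c_{p}\,\frac{DT}{Dt} = \nabla\cdot(k\nabla T) + \eta\,\dot{\gamma}^{2}, \qquad \boldsymbol{\tau} = 2\,\eta(T,\dot{\gamma},p)\,\mathbf{D}, \]

where D is the rate-of-deformation tensor and γ̇ its magnitude. The exact forms and the numbering (1)-(4) in the original paper may differ (for example, melt compressibility is often retained).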
In order to incorporate the effect of normal stresses we implemented a material rheological function connecting the stress tensor with the flow conditions that follows the Criminale-Ericksen-Filbey model [3], an extension of the generalized Newtonian equation (4). Following established practice for mold-filling simulation, we use the Cross-WLF model [2] for the viscosity, where D and n are empirical coefficients.
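As an illustration of the viscosity model referred to above, the following is a minimal sketch of a Cross-WLF evaluation; the parameter names (D1, D2, A1, A2, tau_star, n) follow the common convention for this model, and the numerical values are placeholders, not data from the Moldflow material library.

```python
import math

def cross_wlf_viscosity(T, gamma_dot, D1=1.0e12, D2=373.0, A1=30.0, A2_tilde=51.6,
                        tau_star=2.5e5, n=0.3):
    """Cross-WLF viscosity eta(T, gamma_dot) in Pa.s; pressure dependence omitted."""
    T_star = D2                       # reference (glass transition-like) temperature, p-dependence dropped
    A2 = A2_tilde                     # A2 = A2~ + D3*p, with the pressure term omitted here
    if T <= T_star:
        return float("inf")           # below T*, treat the melt as solidified
    eta0 = D1 * math.exp(-A1 * (T - T_star) / (A2 + (T - T_star)))    # WLF zero-shear viscosity
    return eta0 / (1.0 + (eta0 * gamma_dot / tau_star) ** (1.0 - n))  # Cross shear-thinning

# Example: viscosity of a generic melt at 230 C and 1000 1/s (placeholder parameters).
print(cross_wlf_viscosity(T=230 + 273.15, gamma_dot=1.0e3))
```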
The Autodesk Moldflow material library stores quite extensive data of the Cross-WLF parameters for thousands of material grades. The set of equations (1)-(17) was integrated into a special build of the 3D flow solver of Autodesk Moldflow Insight 2017 software, and the resulting program was used for simulation of injection molding processes. The addition included calculation of the additional normal stress tensor field using equations (15)-(18). We then apply additional nodal forces N_i to each of the filled nodes i, calculated as in equation (19), where S_i is the surface of the control volume of node i. At each time step the algorithm iterates the velocity-pressure solver together with the calculation of the normal stress by equations (18) and (19) until they converge. The rest of the mold filling simulation algorithm described in [1] was left intact. All other rheological, thermal and mechanical material parameters were taken from the standard material library.
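A minimal sketch of the per-time-step coupling loop described above; the solver object, its method names and the tolerance are illustrative assumptions and do not correspond to the actual Moldflow Insight internals.

```python
def advance_time_step(solver, filled_nodes, tol=1e-3, max_iters=20):
    """Iterate the velocity-pressure solution and extra normal-stress forces to convergence."""
    forces = {i: (0.0, 0.0, 0.0) for i in filled_nodes}        # extra nodal forces N_i
    for _ in range(max_iters):
        velocity, pressure = solver.solve_velocity_pressure(extra_forces=forces)
        sigma_n = solver.normal_stress_field(velocity)          # CEF normal stresses, eq. (18)
        new_forces = {i: solver.integrate_traction(sigma_n, i)  # surface integral over S_i, eq. (19)
                      for i in filled_nodes}
        change = max(max(abs(a - b) for a, b in zip(new_forces[i], forces[i]))
                     for i in filled_nodes)
        forces = new_forces
        if change < tol:                                        # converged for this time step
            break
    return velocity, pressure, forces
```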
Filling of a thin rectangular cavity
Simulations of all three cases do not show any significant shear heating in the edge area (see Figure 3), as expected, because the runners and gates were not modeled.
No normal stresses case
The simulation without normal stresses does not predict any ear flow phenomenon as shown in Figure 4.
First normal stress differences only
When only the first normal stress difference is included, the flow front propagation pattern is very similar to the case of no normal stress. As shown in Figure 5, there is no ear flow phenomenon in this case either.
First normal stress differences and maximal second normal stress differences
Figure 6 Flow front positions for the case with the first and second normal stress differences.
The flow front distribution from the simulation using the first and second normal stress differences is shown in Figure 6. A significant ear flow effect can be seen. The effect is quite prominent despite a moderate magnitude of normal stresses up to ~30 kPa (see Figure 7). The results of these simulations show that the second normal stress difference can be an important contributing factor to the ear flow phenomenon.
Conclusions
An integrated system of 3D mold filling simulation that takes into account nonlinear rheological properties of normal stress differences is presented.
The second normal stress difference appears to be an important contributing factor to the ear flow phenomenon, while the first normal stress difference has little effect on the flow front propagation.
Boundary conditions are set on the mold-plastic interface, on the melt flow front and on the injection surfaces.
Here σ is the deviatoric stress tensor; D is the deformation rate tensor, equation (14); v is the velocity; and the remaining functions are the first and the second normal stress difference functions. Equation (13) is an extension of the generalized Newtonian equation (4).
To estimate the first normal stress difference function when the viscosity is known, we followed the so-called Cox-Merz Abnormal Rule as described by V. Sharma and G.H. McKinley [7].
As shown in [7], equation (7) allows a relatively accurate estimation of the first normal stress difference function if the Cross-WLF parameters are known. Finally, to estimate the second normal stress difference function we assume that it is proportional to the first normal stress difference function.
4.1 Case study
In order to estimate the effect of normal stresses on flow front propagation we use a filling simulation of a simple thin rectangular plaque, 100 mm in length, 20 mm in width and 2 mm in thickness, shown in Figure 2. The plaque is filled with a generic polymethylmethacrylate (PMMA) material. The filling time is 1 second. The cavity is meshed by 4-node tetrahedral elements with at least 10 layers of elements through the thickness. Melt inlet boundary conditions are applied along one of the short edges of the plaque in the style of a film or fan gate.
Figure 2 Illustrative molding case
Three rheological models were considered:
x No normal stresses
x First normal stress difference estimated from the Cox-Merz Abnormal Rule but no second normal stress difference (proportionality coefficient set to 0)
x First normal stress difference estimated from the Cox-Merz Abnormal Rule and a large second normal stress difference (proportionality coefficient set to -0.5)
Figure 3 Temperature distribution at the end of fill. No normal stresses case. The cutting plane is in the mid-thickness plane of the cavity.
Figure 4 Flow front positions for the case with no normal stress.
Figure 5 Flow front positions for the case with the first normal stress differences only. | 2,121 | 2016-01-01T00:00:00.000 | [
"Materials Science"
] |
N=2 higher-derivative couplings from strings
We consider the Calabi-Yau reduction of the Type IIA eight-derivative one-loop stringy corrections, focusing on the couplings of the four-dimensional gravity multiplet with vector multiplets and a tensor multiplet containing the NS two-form. We obtain a variety of higher derivative invariants generalising the one-loop topological string coupling, $F_1$, controlled by the lowest order Kähler potential and two new non-topological quantities built out of the Calabi-Yau Riemann curvature.
Introduction and summary
The quantum corrections in N = 2 theories have received a great deal of attention. These are of two types: corrections proportional to the inverse tension of the string and corrections proportional to the string coupling constant. The former arise from perturbative and instantonic world-sheet corrections and are encoded in higher derivative terms in the ten-dimensional supergravity action, while the latter come from string loops and brane instantons. Perturbative low energy effective actions are expanded in a double perturbation series in the inverse tension and the coupling constant.
These corrections not only affect the moduli spaces of N = 2 theories, but are manifested in the higher-derivative couplings. A better control of these couplings is hence essential, as is demonstrated by the study of terms involving the Weyl chiral (supergravity) superfield W. However, most of the other (higher-derivative) couplings in N = 2 theories and their relation to string theory remain largely unexplored. We take some steps in this direction. Our study is mostly restricted to string one-loop results and Calabi-Yau compactifications, and will not cover gauged N = 2 theories.
The better-understood structures, involving W holomorphically, are captured by the topological string theory. In particular, the F-term in the low energy effective action in four dimensions is related to the scattering amplitude of 2 selfdual gravitons and (2g − 2) selfdual graviphotons in the zero-momentum limit and is computed by the genus-g contribution F g to the topological string partition function [1,2]. Crucially, the genus-g contribution F g also determines the partition functions of N = 2 global gauge theories.
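Schematically, and only as a hedged reminder of the structure referred to here (conventions and normalisations vary between references), these F-terms take the chiral superspace form

\[ S_{F} \;\sim\; \sum_{g\geq 1} \int d^{4}x\, d^{4}\theta\; F_{g}(X^{I})\,(W^{2})^{g} \;+\; \mathrm{c.c.}, \]

where X^I are the vector multiplet chiral superfields and W is the Weyl superfield introduced above; the g = 1 term is the R^2 coupling controlled by F_1 discussed in the text.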
There exists, however, a continuous deformation of the gauge theory which uses nontrivially the manifest SU(2) R-symmetry of the theory. This is what happens in the so-called Omega background [3,4]. The two-parameter gauge theory partition function in the Omega background has been computed recently and reduces to the standard gauge theory partition function only when the two parameters are set equal. It is an outstanding open problem to find a string theory realisation of these backgrounds and understand the extension of the genus-g function F g which determines the general N = 2 partition function, and which should involve scattering amplitudes among 2 gravitons, (2g − 2) graviphotons, and 2n gauge fields in vector multiplets. These considerations have led to a recent interest in explicit realisations of couplings F g,n W 2g V 2n [5][6][7][8][9].
Let us recall that the genus one partition function F 1 is special due to the fact that it is the only perturbative four-dimensional contribution which survives the five-dimensional decompactification limit. The ten/eleven-dimensional origin of these couplings is related to M5-brane anomalies, and they lift to certain eight-derivative terms in the effective action [10,11]. Until very recently only the gravitational part of these couplings was known (and it was checked that their reduction on CY manifolds does correctly reproduce F 1 ). At present, we have a much better control of the more general form of the couplings in general string backgrounds with fluxes turned on, so that an explicit calculation of the one-loop four-, six- and eight-derivative couplings in N = 2 theories, which should lead to the generalisation of F 1 , is now within reach.
In the four-dimensional setting, recent developments in going beyond chiral couplings described by integrals over half of superspace [12] allow us to extend the list of higher derivative terms in several ways. The new couplings are constrained by N = 2 supersymmetry to be governed by real functions of the four-dimensional chiral fields. The latter naturally include vector multiplets and two types of chiral backgrounds, one of which is the Weyl background, W 2 , introduced above. The second chiral background we consider is constructed out of the components of a tensor multiplet containing the NS two-form, the so-called universal tensor multiplet 1 , and contains four-derivative terms on its components, such as (∇H) 2 , where H = dB and B is the NS two-form. These ingredients then allow us to describe couplings which are characterised by polynomials of the type [F 2 + R 2 + (∇H) 2 ] n , generalising the purely gravitational R 2 couplings discussed above. The function of the vector multiplet scalars and the Weyl background controlling these couplings directly corresponds to the extended couplings F g,n W 2g V 2n when the tensor multiplet is ignored. Inclusion of the latter results in more general couplings that have not yet been discussed in the literature.
From a quantum gravity point of view, higher-derivative corrections serve as a means of probing string theory at a fundamental level. Even though the complete expansion involves all fields of the theory, so far the attention has been mostly concentrated on the gravitational action. In particular, the one-loop eight-derivative R 4 (O(α′ 3 )) terms stand out among the stringy quantum corrections. Due to being connected to anomaly cancellation, they are not renormalised at higher loops and survive the eleven-dimensional strong coupling limit. These couplings also play a special role in Calabi-Yau reductions to four-dimensional N = 2 theories. Firstly, they have been instrumental in understanding the perturbative corrections to the metrics on moduli spaces. In addition, they give rise to the four-derivative R 2 couplings, and as mentioned above agree with F 1 W 2 .
In order to understand the stringy origin of more general higher derivative couplings in N = 2 theories, one needs to go beyond the purely gravitational couplings in ten dimensions. In the NS-NS sector of string theory, H 2 R 3 couplings are specified by a five-point function [13]. Direct amplitude calculations beyond this order are exceedingly difficult, but recent progress in the classification of string backgrounds using generalised complex geometry and T-duality covariance provides rather powerful constraints on the structure of the quantum corrections in the effective actions. A partial result for the six-point function, obtained recently, together with T-duality constraints and the heterotic/type II duality beyond leading order, allows one to recover the ten-dimensional perturbative action almost entirely [14] (the few yet unfixed terms mostly vanish in CY backgrounds and hence are not relevant for the current project). The eleven-dimensional lift of the modified coupling leads to the inclusion of the M-theory four-form field strength; the subsequent reduction on a nontrivial KK monopole background allows the incorporation of the full set of RR fields in the one-loop eight-derivative couplings. This knowledge will be crucially used for obtaining the relevant four-dimensional N = 2 couplings.
The goal of this paper is to bring together some of these recent developments. In the process, we shall:
- Confirm and specify some of the predictions of general N = 2 considerations and fix the a priori arbitrary quantities constrained solely by supersymmetry in terms of Calabi-Yau data
- Discover new terms and couplings that have not been previously considered
- Provide some tests and justification for the proposed lift of type IIA R 4 terms to eleven dimensions
A brief comment on the last point. Since the lift from ten to eleven dimensions involves a strong coupling limit, one is normally suspicious of simple-minded arguments associated with just replacing the string theory NS three-form H by a four-form G. In N = 2 theories, however, the three- and four-form give rise to fields in the same supermultiplet, namely the (real part of the) scalars u I and the vectors A I in the vector matter respectively (here the index I spans the vector multiplets). Hence verifying that the respective couplings involving u I and A I are supersymmetric completions of each other provides a test of the lifting procedure.
We conclude this section by a summary of our results. A variety of four dimensional higher derivative terms of the type [F 2 + R 2 + (∇H) 2 ] n are characterised by giving the relevant functions of four dimensional chiral superfields that control them. From the point of view of the CY reduction, the order of derivatives of all terms in four dimensions is controlled solely by the power of the CY Riemann tensor appearing in the internal integrals. We therefore find that the eight, six and four derivative terms are controlled by the possible integrals involving none, one or two powers of the internal Riemann tensor, respectively.
We find, in particular, that only the lowest order Kähler potential is relevant for the eight-derivative terms, since this is the natural real function of vector multiplet moduli arising in Calabi-Yau compactifications, describing the total volume of the internal manifold. At lower orders in derivatives, the Kähler potential still appears as part of the functions describing the various invariants, combined with the Riemann tensor on the CY manifold X, denoted by R mnpq . Given that all traces of the latter vanish, the relevant internal integrals must necessarily contain the harmonic forms on the CY manifold. In the case at hand, the relevant forms are the h (1,1) two-forms ω I ∈ H 2 (X, Z), where I, J = 1, . . . , h (1,1) , since we ignore the hypermultiplets arising from the (2, 1) cohomology. We then obtain the tensorial objects (1.1), which control all the couplings that we were able to describe within N = 2 supergravity at six- and four-derivative order respectively. Similar to the standard derivation of the lowest order Kähler potential, one can deduce the existence of corresponding real functions whose derivatives lead to the couplings (1.1). Finally, the inclusion of the Weyl and tensor superfields through additional multiplicative factors leads to the functions that characterise the corresponding couplings involving R 2 and (∇H) 2 respectively, for example the R 2 F 4 and R 2 F 2 couplings.
Table 1: A summary of higher-derivative couplings discussed here. The first row corresponds to chiral couplings involving the Weyl and tensor multiplets. The next three rows display the known non-chiral N = 2 invariants at each order of derivatives, while the last row summarises the currently unknown invariants that can arise. The double appearance of R 4 at the eight-derivative level corresponds to two different invariants (see (4.2) below).
There are further invariants arising from the reduction that cannot currently be described in components within supergravity, and are associated with terms involving H 2n with n odd, such as H 2 F 2 , H 2 R 2 etc. We comment on some of these terms, either giving the leading terms that characterise them, or pointing out their apparent absence.
An inventory of the four- and six-derivative couplings studied in this paper is given in Table 1. The first line in this table describes the terms based only on holomorphic functions of vector moduli and the two chiral backgrounds. These are the only couplings that are controlled by a topological quantity, namely the vector of second Chern classes of the Calabi-Yau four-cycles. The gravitational R 2 coupling is the first nontrivial coupling F g W 2g above, related to the topological string partition function [1,2]. The second, third and fourth lines correspond to the non-holomorphic couplings of [12], where the tensor multiplet background is included. Note the diagonal of underlined invariants of the type R 2 F 2n , which correspond to the first nontrivial couplings F g,n W 2g V 2n , for g = 1, recently discussed in [6,8,9]. The diagonal of boxed invariants gives the one-loop couplings of vector multiplets only; these are controlled by the tensors (1.1) and define the structure of all other invariants in each line. To the best of our knowledge, the string origin of such couplings has not been discussed in the literature. The remaining invariants in the last line can arise a priori and their description remains unknown within supergravity. We comment on the expected structure of some of these below.
The structure of the paper is as follows. In the next section we briefly review the one-loop R 4 couplings as well as some of our conventions and the reduction ansätze. The structure of known higher derivative couplings in four-dimensional N = 2 theories is presented in section 3. We then proceed to consider the various higher derivative terms arising from the Calabi-Yau compactification of the one-loop term, organised by the order of derivatives. Hence, in section 4 we consider the eight derivative terms, while in sections 5 and 6 we discuss the six and four derivative invariants respectively. Some open questions are listed in section 7. The extended appendices contain further technical details of the structures appearing in the main text. In particular, appendix A contains the fully explicit expressions for the quartic one-loop terms in 10D. Appendices B and C deal with chiral couplings of general chiral multiplets and the composite chiral background of the tensor multiplet respectively. Finally, appendix D reviews the structure of the kinetic chiral multiplet and the various invariants that can be constructed based on it, up to the eight derivative level.
Higher derivative terms in Type II theories
The starting point for our considerations are the ten-dimensional eight-derivative terms that arise in Type II string theories. The structure of the gravitational part of these couplings has been known for a long time, but the coupling to the remaining Type II massless fields was not explicitly known. Recently, a more concrete understanding of the terms involving the NS three-form field strength, H, has been achieved [14]. The structure of the corresponding terms involving RR gauge fields is constrained to a large extent, using arguments based on the eleven-dimensional uplift to M-theory.
Upon reduction on a Calabi-Yau manifold without turning on any internal fluxes, the NS three-form leads to two types of objects in the four dimensional effective theory, namely a lower dimensional three-form field strength and h 1,1 scalars. The former is naturally part of a tensor multiplet, while the latter are part of vector multiplets in Type IIA and tensor/hyper multiplets in Type IIB.
In this section, we start by giving an overview of the ten-dimensional eight-derivative terms in section 2.1, from which all the lower dimensional higher-derivative terms arise. In section 2.2 we then turn to a discussion of the reduction procedure on Calabi-Yau three-folds, which is central to the derivation of four dimensional couplings.
The eight-derivative terms in ten dimensions
In summarising the structure of R 4 with the NS three-form H included, it is most convenient to start by introducing the connection with torsion, Ω ± = Ω ± (1/2) H, and the curvature R(Ω ± ) computed out of it. Denoting the Riemann tensor by R_{µν}{}^{ν1ν2}, one may write R(Ω ± ) in components; note that the first and last term in this expression satisfy the pair exchange property, while the second term is antisymmetric under pair exchange due to the Bianchi identity on the three-form. The Type II eight-derivative terms can be written in terms of two standard "N = 1 superinvariants", J 0 and J 1 , with
J_0(Ω) = t_8 t_8 R^4 + (1/8) ε_{10} ε_{10} R^4 ≡ (t_8 t_8 + (1/8) ε_{10} ε_{10}) R^4 ,
which provide a convenient way of encoding the kinematic structure of R 4 terms. The tensor t 8 and the associated tensorial structures appearing here are spelled out in appendix A. Note that at this stage the terms (2.4) are built from Levi-Civita connections only, and the three-form H is not included. It has been argued in [14] that these will be completed with the B-field as displayed in (2.5). Note that J 0 (Ω + ) + ∆J 0 (Ω + , H) appears at tree level both in IIA and IIB and at one loop in IIB, while J 0 (Ω + ) − 2J 1 (Ω + ) + ∆J 0 (Ω + , H) appears at one loop in IIA. The structure of ∆J 0 (Ω + , H) is more elaborate and kinematically different from the standard (1/8) ε_{10} ε_{10} R^4(Ω) terms, and in fact it is the only part of the eight-derivative action that is not written purely in terms of R(Ω ± ). Here one should also use the full six-index un-contracted combination of H 2 . These structures receive contributions starting from five-point odd-odd amplitudes. The order H 4 R 2 contribution is known up to some ambiguities, while the terms with higher powers of H remain a conjecture. Luckily these terms play little role in N = 2 reductions and we comment on the cases where they are relevant below.
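For orientation, the torsionful curvature whose pair-exchange properties are invoked above takes the following schematic form (a standard expression; the normalisation of the H-dependent terms is our assumption and may differ from the conventions of the original equations):
R(\Omega_\pm)_{\mu\nu}{}^{ab} \;=\; R_{\mu\nu}{}^{ab} \;\pm\; \nabla_{[\mu} H_{\nu]}{}^{ab} \;+\; \tfrac{1}{2}\, H_{[\mu}{}^{ac} H_{\nu]c}{}^{b}\,, \qquad \Omega_\pm = \omega \pm \tfrac{1}{2} H\,.
The first and last terms are symmetric under exchange of the index pairs (\mu\nu) and (ab), while the middle term is antisymmetric under this exchange by virtue of dH = 0, as stated in the text.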
The last term in (2.5b), coming from the worldsheet odd-even and even-odd structures, corresponds to the gravitational anomaly-canceling term. The relative sign between the two terms is fixed by the IIA GSO projection, so that the coupling contains only odd powers of the B-field. The explicit contribution to the effective action is given in (2.7). Since t_8 R^4 ∼ (1/4) p_1^2 − p_2 is made of characteristic classes and H enters in (2.7) like a torsion in the connection, its contribution amounts to a shift by exact terms. For completeness, we record the complete expression, since its reduction will be useful in the following. The eight-derivative (tree level and one-loop) terms are the origin of the only perturbative corrections to the metrics on the N = 2 moduli spaces. The corrections respect the factorisation of the moduli spaces, and the classical metrics on the moduli spaces of vectors and hypers receive respectively tree-level and one-loop corrections, both of which are proportional to the Euler number of the internal Calabi-Yau manifold [11,15]. Needless to say, our discussion is consistent with these corrections, and from now on we shall concentrate only on the higher-derivative terms. Recent progress in understanding the hypermultiplet quantum corrections is reviewed in [16].
As already mentioned, the reduction of the type IIA superinvariant J 0 (Ω) − 2J 1 (Ω) on Calabi-Yau manifolds yields the one-loop R 2 terms in the four-dimensional N = 2 theory, and this is so far the only known product of the reduction that leads to higher derivative terms in 4D. We shall return to the four-dimensional R 2 terms in section 6. Clearly, the inclusion of the B-field leads to further couplings to matter upon dimensional reduction, to which we now turn.
Reduction on Calabi-Yau manifolds
We now consider the reduction of the ten-dimensional eight-derivative action on a Calabi-Yau threefold X, and its relation to the N = 2 action. The metric can be reduced in the standard way, with g mn the metric on the Calabi-Yau manifold, X, which we will not need explicitly. The three-form H reduces to a four-dimensional part plus an expansion along the internal two-forms, where the four-dimensional H is part of the tensor multiplet, and the one-forms f I can be locally written as f I = du I , with u I being a part of the vector multiplet scalars. The index I runs over h 1,1 (X), and ω I ∈ H 2 (X, Z). Hence, reducing the terms built out of R(Ω ± ) and H 3 (where Ω ± = Ω ± (1/2) H), one expects at a given order of derivatives various couplings involving the Riemann tensor, R, as well as the tensor multiplet and vector multiplets. For example, at the four-derivative level one recovers the four-dimensional R 2 couplings and expects to obtain further couplings quartic in tensor multiplet and vector multiplets, as well as mixed terms. We use the symbolic computer algebra system Cadabra [17,18] to systematically derive the structure of these terms.
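As a minimal sketch of the reduction ansatz just described (warp factors and the dilaton dependence are suppressed, which is an assumption on our part):
ds^2_{10} = g_{\mu\nu}(x)\,dx^\mu dx^\nu + g_{mn}(y)\,dy^m dy^n\,, \qquad H_{(10)} = H_{(4)} + du^I \wedge \omega_I\,,
so that the internal legs of H carry the scalars u^I, while the purely external part H_{(4)} sits in the tensor multiplet.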
The vector moduli shall be denoted z I = u I + it I , where t I are the Kähler moduli, defined through the decomposition of the Calabi-Yau Kähler form, J, as J = t I ω I . The total volume, V, of the CY manifold is given by the standard volume form, cubic in J, in terms of which we also define the 4D Kähler potential. We shall not need the vector fields themselves, but only their field strengths, denoted as F A , where the index A = 0, I runs over the h 1,1 (X) + 1 vector fields (in places where the shorthand notation is used, F will stand for the entire multiplet).
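Explicitly, the volume and the lowest-order Kähler potential take the standard form (the additive constant in K is convention dependent and is left unspecified here):
\mathcal{V} = \frac{1}{6}\int_X J\wedge J\wedge J = \frac{1}{6}\,C_{IJK}\,t^I t^J t^K\,, \qquad C_{IJK} = \int_X \omega_I\wedge\omega_J\wedge\omega_K\,, \qquad K = -\ln \mathcal{V} + \text{const}\,,
with C_{IJK} the triple intersection numbers that also appear in the cubic prepotential discussed in section 3.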
Reducing the NS eight-derivative couplings we obtain couplings that contain u I and t I . The couplings to F I can be recovered by thinking of the (one-loop) couplings as being reduced from five (or eleven) dimensions. In practical terms, one has to add an extra index, f I µ → F I µν . Since (the affected parts of) the expressions are even in powers of F I , the extra index will always be contracted with a similar counterpart. Moreover, most of the expressions are only quadratic in F , hence the lifting is unique. A little combinatorial imagination is needed for F 4 terms. This procedure follows the lifting of one-loop NS couplings to eleven dimensions outlined in [14] and is analogous to the way one can recover graviphoton couplings from the R 2 term -- one just has to think of the lifting of the couplings to five dimensions and their subsequent reduction. As already mentioned, here we can benefit from the explicit N = 2 formalism in verifying that the couplings involving t I and F I complete each other supersymmetrically, and hence provide a verification of the lifting of the complete one-loop eight-derivative terms from type IIA strings to M-theory.
Since we are focusing on Calabi-Yau compactifications without flux, different pieces in the reduction will involve integrating over X expressions containing some power of the internal curvature and ω I ∈ H 2 (X, Z). We shall start with the familiar integrals.
At the four derivative level, one needs to consider terms with exactly two powers of the Riemann tensor in the internal Calabi-Yau manifold. In the purely gravitational sector, one then finds an R 2 term in four dimensions, originating from the R 4 couplings in ten dimensions. In this case, we note that only terms completely factorised in internal and external objects contribute. The function F 1 is an integral over the internal directions, given in (2.13), where the first equality holds up to Ricci terms and in the second equality we evaluated the integral. The fine balance between the two a priori different terms in (2.13) can be extended to more complicated integrals that are relevant in the reduction of the non-purely gravitational terms. In this case, we have checked explicitly the identity (2.14) up to Ricci terms. Note that the structure of spacetime indices is different on the two sides, while the remaining terms are purely internal. Upon contraction with the Kähler moduli, each side of (2.14) reduces to the expression in (2.13).
Even further, the identity in (2.15) can be generalised to an identity involving eight indices, as follows.
again up to Ricci-like terms. These two expressions are relevant for the terms in R(Ω + ) that are odd or even under pair exchange, respectively. The reduction to six- and eight-derivative couplings will require integration over expressions linear or zeroth order in the Riemann tensor of the internal Calabi-Yau manifold. In view of the vanishing of the Calabi-Yau Ricci tensor, these are essentially unique and are given by the integrals defining G IJ and R IJ , where G IJ ultimately leads to the vector multiplet Kähler metric and R IJ is a new coupling to be discussed in due time.
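As a sketch of the first of these integrals (the precise normalisation and volume factors are our assumption), one expects the familiar expression
G_{IJ} \;\sim\; \frac{1}{\mathcal V}\int_X \omega_I \wedge *\,\omega_J\,,
which, suitably normalised, reproduces the lowest-order vector multiplet Kähler metric; the integral linear in the internal Riemann tensor defines R IJ and its precise form is given in (5.2) below.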
The four dimensional action
We now describe the structure of the effective N = 2 supergravity action in four dimensions, which arises from the reduction of the one-loop Type IIA Lagrangian. Given that the original ten dimensional action contains eight derivatives, one obtains a variety of higher derivative couplings, next to the lowest order two derivative action. In order to describe these in a systematic way, we will consider the off-shell formulation of the theory, which allows one to construct infinite classes of higher derivative invariants without modifying the supersymmetry transformation rules. However, since the higher dimensional one-loop action and the reduction scheme are on-shell, one has to deduce the off-shell invariants from the desired terms that result upon gauge fixing to the on-shell theory. In the following, we take the pragmatic approach of matching the leading, characteristic terms in each invariant and promoting to off-shell variables by standard formulae for special coordinates for the vector multiplet scalars. In practice, these choices are essentially unique, and below we comment on this issue in the examples where this is relevant. The defining multiplet of off-shell N = 2 supergravity is the Weyl multiplet, which contains the graviton, e a µ , the gravitini, gauge fields for local scale and R-symmetries and various auxiliary fields. Of the latter, only the auxiliary tensor T ab ij is directly relevant, since it is identified with the graviphoton in the on-shell formulation of the theory, at the two-derivative level. The reader can find a short account of the Weyl multiplet in Appendix B. In what follows, we will mostly deal with the covariant fields of the Weyl multiplet, which can be arranged in a so-called chiral multiplet (see Appendix B for more details), which contains the auxiliary tensor T ab ij and the curvature R(M ) µν ab . The latter is identified with the Weyl tensor, up to additional modifications. These observations will be very useful in the identification of the various higher derivative couplings.
There are various matter multiplets that can be defined on a general supergravity background. Here, the fundamental matter multiplets we consider are vector multiplets and a single tensor multiplet, corresponding to the universal tensor multiplet of Type II theories. Both these multiplets comprise 8 + 8 degrees of freedom and are defined in appendices B and C respectively, to which we refer for further details. Moreover, they can be naturally viewed as two mutually non-compatible projections of a chiral multiplet, which is central to our considerations.
All Lagrangians considered in this paper are based on couplings of chiral multiplets, which contain 16 + 16 degrees of freedom and can be defined on an arbitrary N = 2 superconformal background. We refer to appendix B for more details on chiral multiplets. Here, we simply state that these multiplets are labeled by the scaling weight, w, of their lowest component, A, and that products of chiral multiplets are chiral multiplets themselves, obtained by simply considering functions F (A), which must be homogeneous, so that a weight can be assigned to them. As mentioned above, the matter multiplets we consider are also chiral multiplets of w = 1, on which a constraint projecting out half of the degrees of freedom is imposed and the same property holds for the covariant components of the Weyl multiplet. This implies that actions for all the above multiplets can be generated by considering expressions constructed out of chiral multiplets, which are invariant under supersymmetry.
Two derivatives
The prime example is given by the invariant based on a w = 2 chiral multiplet, implying that its highest component, C, has Weyl weight 4 and chiral weight 0, as is appropriate for a conformally invariant Lagrangian in four dimensions. It can be shown that the expression (3.1) is the bosonic part of the invariant, including a conformal supergravity background described by the auxiliary tensor T ab ij of the gravity multiplet. The two derivative action for vector multiplets is now easily constructed, by setting the chiral multiplet in this formula to be composite, expressed in terms of vector multiplets labeled by indices I, J, · · · = 0, 1, . . . , n v . It is possible to show (cf. (B.6)) that the relevant terms of such a composite multiplet are given in terms of F I and F IJ , the first and second derivatives of the function F , known as the prepotential, and B ij I , G − ab I , the remaining bosonic components of the chiral multiplets (which in this case are constrained by (B.7) for vector multiplets). As the bottom composite component, A, has w = 2, the function F (X) must be homogeneous of degree two in the vector multiplet scalars X I . Taking into account the constraints in (B.7), the bosonic terms of the Lagrangian following from (3.1) read as in (3.3), where in the last line we added the hermitian conjugate to obtain a real Lagrangian.
Here, F I ab are the vector multiplet gauge field strengths, R is the Ricci scalar and D is the auxiliary real scalar in the gravity multiplet. This Lagrangian is invariant under scale transformations and can be related to an on-shell Poincaré Lagrangian by using a scale transformation to set the coefficient of the Einstein-Hilbert term, Im(F I X̄ I ), to a constant. For standard Calabi-Yau compactifications of Type II theories, one obtains a cubic prepotential, given in (3.4), where the constant tensor C IJK stands for the intersection numbers of the manifold.
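For reference, the cubic prepotential referred to here takes the familiar large-volume form (overall sign and normalisation are convention dependent; we quote one common choice):
F(X) \;=\; -\,\frac{1}{6}\,\frac{C_{IJK}\,X^I X^J X^K}{X^0}\,,
which is indeed homogeneous of degree two in the scalars X^0 , X^I.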
As it turns out, the Lagrangian (3.3) is inconsistent as it stands, so that one needs to add at least one auxiliary hypermultiplet, which is to be gauged away by superconformal symmetries, similar to the scalar Im(F I X̄ I ) above. In addition, in this paper we consider a single tensor multiplet, corresponding to the universal hypermultiplet upon dualisation of the tensor field. We refer to appendix C for more details on this multiplet. For later reference, we also need the bosonic action for the auxiliary hypermultiplet and the physical tensor multiplet that must be added to (3.3) to obtain a consistent on-shell theory with a physical tensor multiplet. Here, A i α is the hypermultiplet scalar, described as a local section of SU(2) × SU(2), and Ω αβ is the invariant antisymmetric tensor in the second SU(2). The on-shell fields of the tensor multiplet are the triplet of scalars L ij and the two-form gauge field, B µν , while E µ is the dual of its field strength and G is a complex auxiliary scalar. The functions F (2) and F (3) can be viewed as the second and third derivatives of a function of the L ij , which can easily be generalised to an arbitrary number of tensor multiplets [19] (see (C.4)-(C.5)). For a single tensor multiplet, there is a unique choice, given in (3.7), which we will assume throughout. However, as we ignore all tensor multiplet scalars in our reduction scheme, all scalars and F (2) are kept constant and only appear as overall factors.
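As a sketch of the unique choice referred to in (3.7) (the overall normalisation is our assumption), SU(2) invariance and homogeneity of degree −1 fix the single-tensor-multiplet function to be of the form
\mathcal{F}(L) \;=\; \frac{1}{L}\,, \qquad L \equiv \sqrt{\tfrac{1}{2}\,L_{ij}L^{ij}}\,,
i.e. the familiar function underlying the improved tensor multiplet.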
Higher derivatives
In this paper we construct higher derivative actions based on the properties of chiral multiplets, as discussed above. One way of doing this is to consider the function F in (3.2) to depend not only on vector multiplets, but also on other chiral multiplets, which are treated as background fields. Alternatively, one may consider invariants more general than (3.1), containing explicit derivatives on the chiral multiplet fields. Here we use both structures, which we discuss in turn, emphasising the methods and the structure of invariants rather than details, which can be found in [12,19,20]. We consider two chiral background multiplets, one constructed out of the Weyl multiplet and one constructed out of the tensor multiplet, whose lowest components we denote as A w and A t respectively. These are proportional to the auxiliary fields (T ab ij ε ij ) 2 of the Weyl multiplet and G of the tensor multiplet, and we refer to appendices B and C for more details on their precise definition. Considering a function F (X I , A w , A t ) leads to a Lagrangian of the form (3.3), where the set of vector field strengths is extended to include the Weyl tensor R(M ) ab cd in (B.3) and the combination ∇ [a E b] , so that four derivative interactions of the type (3.8) are generated. The explicit expressions for the relevant chiral multiplets can be found in (B.11) and (C.11) respectively. These couplings are distinguished, in the sense that they are described by a holomorphic function and correspond to integrals over half of superspace. The R 2 term has been studied in detail, especially in connection with BPS black holes, see e.g. [11,21-24]. The full function F (X I , A w ) is in this case related to the topological string partition function [1,2]. We will only be concerned with the linear part of this function, originating in the one-loop term in section 2.1, which controls the R 2 coupling through (3.8). The (∇E) 2 term has appeared more recently [19], without any coupling to vector multiplets.
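Schematically, and with normalisations that are our own choice, the chiral-background function referred to here can be organised as an expansion in the Weyl background,
F(X^I, A_w) \;=\; F^{(0)}(X) \;+\; F^{(1)}(X)\,A_w \;+\; F^{(2)}(X)\,A_w^2 \;+\; \ldots\,,
where F^{(0)} is the two-derivative prepotential, the term linear in A_w controls the R^2 coupling discussed in the text, and the higher coefficients correspond to the higher-genus couplings F g W 2g mentioned in the introduction; only the linear term is relevant for the one-loop reduction considered here.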
More general higher derivative couplings can be constructed by looking for invariants of chiral multiplets that contain explicit derivatives, unlike (3.1). Indeed, such invariants can be derived by considering a chiral multiplet whose components are propagating fields, i.e. described by a Lagrangian containing derivatives. This can be done in the standard way, by writing a Kähler sigma model for two multiplets, Φ and Φ̄, both of which must have w = 0 for the integral to be well defined. In the second form of the integral we defined a new chiral multiplet, T(Φ̄), the so-called kinetic multiplet, since it contains the kinetic terms for the various fields. This multiplet was constructed explicitly in [12] and is summarized in appendix D below (see also [25] for a recent generalisation).
In practice, one can think of the operator T as an operator similar to the Laplacian, acting on the components of the multiplet; we display only the leading terms of its action.
One can now simply declare the chiral multiplets Φ, Φ̄ to be composite by imposing (3.2), where the corresponding functions F , F̄ can depend on vector multiplet scalars, as well as the Weyl and tensor multiplet backgrounds, exactly as described above. As described in section D and in [12], this leads to a real function H = F F̄ + c.c., homogeneous of degree zero, which naturally describes a variety of higher derivative couplings, corresponding to the combinations of selfdual and anti-selfdual parts (denoted by ±). Each of these is controlled by a function of the vector multiplet moduli, whose characteristic terms at each order we display in (3.12). Note that we consider a function at most quadratic in A w , A t , since a higher polynomial would lead to the same terms, multiplied by additional powers of these auxiliary scalars. These are analogous to the non-linear parts of the chiral coupling in (3.8) and go beyond one-loop terms, so we do not consider them in the following. Finally, note that due to the expansion (3.12), the functions H (i,j) are not homogeneous of degree zero for i, j ≠ 0, but we will always refer to the corresponding degree zero monomial in (3.12), for clarity. The invariants based on (3.10) are the simplest ones containing the kinetic multiplet. It is straightforward to construct more general integrals, for example the cubic and quartic invariants discussed in section D. In exactly the same way as above, the first of these integrals leads to a homogeneous function of degree −2, describing couplings cubic in F 2 , R 2 and (∇E) 2 . Only some of these are relevant in the following, in particular the (R 2 + (∇E) 2 )F 4 and F 6 , since the rest contain more than eight derivatives. Finally, the last two integrals describe couplings with at least eight derivatives and lead to homogeneous functions of degree −4. Only the last integral is relevant for us, namely for the F 8 term.
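To fix ideas, a schematic form of the expansion referred to as (3.12), with weights and coefficients left implicit (the precise homogeneity assignments are our reading of the text), is
\mathcal{H}(Y,\bar Y; A_w, A_t) \;=\; \sum_{i,j} \mathcal{H}^{(i,j)}(Y,\bar Y)\; A_w^{\,i}\, A_t^{\,j} \;+\; \text{c.c.}\,,
with only low powers (at most quadratic in the backgrounds) retained, so that H^{(0,0)} controls the purely vector multiplet terms, H^{(1,0)} the terms involving one power of the Weyl background (R^2-type), and H^{(0,1)} the terms involving one power of the tensor background ((∇E)^2-type).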
Eight derivative couplings
We start by considering terms containing the maximum number of derivatives appearing in the one-loop correction, i.e. we consider the possible eight derivative invariants in four dimensions. This may seem counterintuitive at first and in fact some of these invariants have not been described explicitly. However, the terms that are known in four dimensions are the simplest to describe, setting the stage for the more complicated structures to follow. Applying the rules and assumptions spelled out in section 2.2, one can characterise the various terms appearing in the reduction by the order of Riemann tensors, tensor multiplet field strengths and vector multiplet field strengths arising in four dimensions. Schematically, we then find a decomposition of the type (4.1), where we write in blue the terms which correspond to the known four-dimensional invariants. The supersymmetric invariants for the underlined (red) terms are not known.
Gravity and tensor couplings
The most obvious and simplest term is the R 4 term, which arises by trivial reduction of the corresponding ten dimensional term. Note that only the even-even contribution survives the reduction and leads to a four dimensional R 4 term, displayed in (4.2). The second line corresponds to two different invariants in four dimensions, each with its own supersymmetric completion, corresponding to the double appearance of R 4 in (4.1). The supersymmetrisation of the second term is not known in N = 2 supergravity (see however [26] for a discussion in the N = 1 setting). The supersymmetric completion of the first term was found in [12], where it was shown that it is governed by a homogeneous degree zero real function of the vector multiplet moduli and the Weyl multiplet scalar A w .
In the present case, however, (4.2) does not depend on moduli other than the total volume of the CY manifold, so that we can immediately identify the relevant function (4.3) as depending only on the Kähler potential. Here, the function of the off-shell scalars K(Y,Ȳ ) is very closely related to the lowest order Kähler potential associated with the prepotential (3.4), and is equal to it once special coordinates are chosen (for Y 0 = 1). Note, however, that this is only the most natural choice that results in the first coupling in (4.2) upon taking the on-shell limit, and one might consider more elaborate off-shell functions leading to the same result. Upon taking derivatives of this function with respect to the vector multiplet moduli, various couplings involving vector multiplet field strengths and auxiliary fields arise at the off-shell level, resulting in further eight derivative terms in the on-shell theory. The corresponding purely tensor coupling is the eight derivative term of the tensor multiplet, which takes the form (4.5). These couplings can be described in a way completely analogous to the R 4 term, through a homogeneous real function (4.6) corresponding to (4.3). The final possible combination at the eight derivative level for NS fields is the R 2 H 4 coupling, which in 4D is characterised by the corresponding term quadratic in the curvature and quartic in the tensor multiplet field strength. These couplings can be described by the obvious mixed combination of the two functions (4.3) and (4.6) above. The last function can be straightforwardly added to the functions above, to define a total function of the vector multiplet moduli and the Weyl and tensor multiplet backgrounds, describing the eight derivative couplings of NS sector fields. At this point it is worth pausing to note that the form of these equations exhibits a correspondence between the graviton and the B-field, since the complete eight derivative action for the gravity and tensor multiplet is controlled by the combination A w + A t . This will appear in several instances below, at all orders of derivatives, and reflects the structure of the 10D Lagrangian, which is controlled by the combination R(Ω + ).
Couplings involving vector multiplets
We now turn to some of the eight derivative terms involving derivatives on vector multiplet fields. We start with mixed terms between NS and RR fields, namely the ones where the order of derivatives is balanced between the two sectors. Indeed, it is straightforward to obtain the function characterising the R 2 F 4 coupling, which in 4D is described by the cubic invariant in appendix D, where one considers one of the chiral multiplets to be the Weyl multiplet. The R 2 F 4 coupling is then characterised by the corresponding terms. In this case, only the Ricci-like terms contribute to the reduction altogether, so that we obtain a relevant coupling proportional to the lowest order Kähler metric G IJ . This result determines the coupling of the vector multiplet scalars and the corresponding vector fields, but we still need to fix the couplings to the Type IIA RR gauge field, labeled by 0 in four dimensions. These can be derived by the observation that all field strengths can be introduced by lifting the three-form H µ1µ2µ3 to the eleven dimensional four-form field strength G µ1µ2µ3µ4 and reducing back on a circle, keeping all components. The result of the reduction of the four-form to 4D gauge fields, F I µν , naturally leads to the combination F I µν + u I F 0 µν , which should replace the field strengths in the couplings shown above, so that the full coupling becomes (4.12). Combined with the fact that the relevant function depends on the Weyl multiplet background only linearly, as implied by (4.10), one can now integrate to obtain the function (4.13). This form is in line with the observation that the R 2 F 4 coupling can be roughly seen as the product of the chiral R 2 term with the real F 4 term. Note that, unlike for the lowest order Kähler potential, the 0I- and 00-components of the second derivative H AwAB in (4.10) are physical in this case, since they describe the couplings of the RR one-form gauge field. In fact, the coupling H (8) AwAB is proportional to the real part of the period matrix, which describes the theta angles in the two derivative theory.
The natural extension of (4.13) to a function where A w is replaced by A t , and which thus describes couplings of the type (∇E) 2 (∇F ) 2 , is straightforward. However, in the compactification we consider, all such terms cancel identically, in a nontrivial way. Similarly, there are no parity odd terms of this type either, so that this particular coupling seems to be absent in four dimensions.
The same conclusion seems to hold for terms of the type (∇H) 2 H 2 F 2 , which would in principle be characteristic of the H 6 F 2 term in (4.2), even though this coupling is not known in four dimensions. Terms of this order in fields do not appear in the odd sector either.
Finally, we consider the purely vector multiplet eight derivative couplings, corresponding to an F 8 term. This can be obtained by a trivial dimensional reduction, leading to the four dimensional coupling (4.14), which can be described by the second quartic invariant in (D.11). Since the coupling above is given purely in terms of the product of the (1, 1) forms, ω I · ω J , the relevant real function is related to the Kähler potential. This function is consistent with (4.14) for the I, J indices and naturally extends to the 0-th gauge field in four dimensions as seen above, but we have not checked those couplings explicitly.
Six derivative couplings
At the six derivative level, we need to saturate two of the derivatives in the internal directions, so that exactly one Riemann tensor will appear in the relevant integrals on the Calabi-Yau manifold. This requirement turns out to be quite restrictive, since all traces of the Calabi-Yau curvature vanish. It follows that the internal integrals must also involve harmonic forms on which the indices of the Riemann tensor are contracted. Given that we do not consider any complex structure deformations, this observation directly implies that no invariants involving only NS-NS fields, such as R 2 H 2 or H 6 , can arise in four dimensions. However, mixed couplings involving fields from both the NS-NS and the R-R sector are nontrivial and a priori include three types of couplings, namely R 2 F 2 , (∇E) 2 F 2 and H 2 F 4 . The latter has not been described in the context of N = 2 supergravity, while the former two can be constructed using the techniques in [12]. In addition, a purely vector multiplet coupling including six derivatives on the component fields, i.e. an F 6 invariant, arises.
In particular, the R 2 F 2 term was already constructed explicitly in [12], and is governed by a function, H(X, A w ; X̄), that is linear in the Weyl multiplet, while the vector multiplet scalars appear through a holomorphic function of degree −2 and an anti-holomorphic function of degree 0. The relevant R 2 F 2 coupling contains a part coming from the R-R fields which, as derived from the reduction, is given in (5.2); in the second equality there we defined the tensor R IJ for later convenience. This tensor clearly describes a non-topological coupling, since it depends on the curvature of the Calabi-Yau manifold explicitly. In fact, the definition (5.2) is invertible, as one can reconstruct the Riemann tensor R mnpq from R IJ by contracting with the harmonic two-forms. We record the properties of R IJ which will be useful in the discussion below, where t I are the Kähler moduli and G IJ denotes the inverse of the Kähler metric G IJ .
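As a rough sketch (index placement and normalisation are our assumption), the tensor defined in (5.2) is of the form
R_{IJ} \;\sim\; \int_X \sqrt{g}\; R_{mnpq}\; \omega_I{}^{mn}\, \omega_J{}^{pq}\,,
and on this basis one property one would expect is t^I R_{IJ} = 0: contracting the internal Riemann tensor with the Kähler form J = t^I ω_I produces the Ricci form, which vanishes on a Calabi-Yau manifold.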
In order to extend (5.2) to include the 0-th gauge field, we follow the same procedure as in (4.12), to obtain the additional couplings. We then obtain for the function describing the R 2 F 2 invariant the expression (5.5), where R IJ (Y,Ȳ ) = R IJ (t) is viewed as a function of the t I = Im(Y I /Y 0 ), as obtained in the standard special coordinates. Note that (5.5) is manifestly homogeneous in the holomorphic scalars Y A , but non-homogeneous in the anti-holomorphic scalars Ȳ A , as expected.
It is straightforward to obtain a term of the type (∇E) 2 F 2 by simply replacing A w → A t in (5.5), in line with previous observations. It turns out that this invariant is also generated by the reduction, with the corresponding couplings H Aw ĪJ and H At ĪJ being equal. By the same argument as above, the function (5.5) can be extended to include the tensor multiplet coupling, which describes the first row in the six-derivative part of Table 1.
We now turn to the F 6 term, which is computationally more challenging than the couplings described above. This is due to the fact that there are no terms cubic in the three-form field strength H in ten dimensions, so that (∇F ) 3 terms do not arise in four dimensions. This is consistent with the fact that similar terms cancel in the F 6 coupling that follows from the cubic invariant described in appendix D. One is therefore forced to consider terms of the type (∇F ) 2 F 2 , which are quartic in the (1, 1) forms ω I , from the point of view of the Calabi-Yau reduction.
The result is a coupling containing all possible combinations of an internal Riemann tensor and four ω I , as in
ω_I{}^{mn} ω_{J\,mn}\, R^{pqrs}\, ω_{K\,pq}\, ω_{L\,rs}\,, \qquad ω_I{}^{mn} ω_{J\,np}\, R^{pqrs}\, ω_{K\,qm}\, ω_{L\,rs}\,, \qquad \ldots\,, \qquad (5.8)
which in principle determine the function controlling the F 6 coupling. However, we also find nontrivial odd terms for the scalars resulting from (2.7), in contrast to the known coupling in section D. These terms include combinations of the type displayed in (5.9), where the dots stand for terms containing the same objects in double traces rather than a single one. We observe that a term completely antisymmetric in the three indices I, J, K arises and conclude that the known coupling is not sufficient to describe these terms. We leave it to future work to determine the possible new coupling(s) that can complete the structure. Finally, it is worth discussing in brief the invariant of the type H 2 F 4 , which is not known explicitly in supergravity. Such terms do appear and seem to be controlled by the same tensor R IJ in (5.2) above, since we find the characteristic couplings R IJ E 2 ∇F I ∇F J for all possible contractions of indices between the vector and tensor multiplet field strengths. Similarly, we find parity odd terms of this type, where in the last integral we used conventions similar to those in (5.9) above. Indeed, the two integrals appear to be closely related, so that the two couplings may have a similar origin in terms of superspace invariants.
Four derivative couplings
In order to obtain four derivative couplings in four dimensions from the 10D R 4 invariant, one needs to consider terms that include exactly two Riemann tensors in the internal directions. It follows that the integrals controlling the 4D couplings are quadratic in the Calabi-Yau curvature, in the same way as the six derivative couplings of the previous section are controlled by the Calabi-Yau Riemann tensor through (5.2) above. At this level in derivatives, four structures can appear, namely R 2 , F 4 , H 2 F 2 , and H 4 . Given our assumption of no hyper/tensor multiplets other than the universal tensor multiplet, all of these structures will be described by functions involving vector multiplet scalars, but only the latter two involve the tensor multiplet explicitly. All couplings except the H 2 F 2 terms can be described straightforwardly in N = 2 supergravity, and we now discuss each in turn.
The R 2 term
The R 2 term has been known for quite some time [11,27], and arises from terms that can be completely factorised in internal and external indices, where we used (2.13) and the coefficients c_{2I} are the second Chern classes of the Calabi-Yau four-cycles. Note that this is a topological quantity, unlike the objects controlling the higher derivative couplings described above, as e.g. in (5.2). The supergravity description requires allowing the lowest order prepotential to depend on the Weyl multiplet through A w [20], so that the explicit prepotential arising from (6.1) is given in (6.3), where we remind the reader that the physical moduli are given by z I = Y I /Y 0 in terms of the scalars Y A above.
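Schematically, and with the overall normalisation of the one-loop piece left as an unspecified constant c (our notation), the prepotential including the Weyl background then takes the form
F(Y, A_w) \;=\; -\,\frac{C_{IJK}\,Y^I Y^J Y^K}{6\,Y^0} \;+\; c\;\frac{c_{2I}\,Y^I}{Y^0}\; A_w\,,
so that the R^2 coupling is governed by the topological numbers c_{2I}, in contrast to the non-topological tensor R_{IJ} that controls the six-derivative couplings above.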
The H 4 term
Turning to the tensor multiplet sector, an explicit computation using (2.14) leads to the corresponding terms in four dimensions, where, in complete analogy with the R 2 terms, only factorised traces contribute. It follows that the four dimensional Lagrangian contains the four derivative tensor multiplet invariant arising from (C.11), controlled by exactly the same prepotential in (6.3), upon extending the term containing the Weyl background to include the tensor background, with A t as in (C.7). This function describes the couplings in the first line of Table 1.
Once again we observe the close relation between the R 2 and tensor multiplet couplings, which are characterised by exactly the same functional form in terms of the corresponding chiral backgrounds. This structure arises despite the fact that in N = 2 supergravity in four dimensions the tensor H and gravity are no longer in the same multiplet, so that, a priori, more flexibility, parametrised by two independent functions, is allowed. However, we find that the Calabi-Yau reduction leads to a single function of vector multiplet scalars for both couplings.
The F 4 term
In the purely RR sector, an invariant quartic in derivatives on the vector multiplet components exists in 4D, which is characterised by an (∇F ) 2 coupling. As above, we analyse the terms arising from the odd term under pair exchange in R(Ω + ) in order to obtain these explicitly. Using (2.14), the terms coming from non-Ricci combinations cancel, and the remaining ones are Laplacians of four dimensional fields. Explicitly, we obtain that the total (∇F ) 2 term (6.6) is controlled by the tensor
X_{IJ} = X^{mn\,m_1\ldots m_4}{}_{pq\,n_1\ldots n_4}\, R_{m_1 m_2}{}^{n_1 n_2}\, R_{m_3 m_4}{}^{n_3 n_4}\, ω_{I\,mn}\, ω_J{}^{pq}\,, \qquad (6.7)
which we also evaluate explicitly. The gauge field partner of these scalar couplings is obtained by lifting the original expression to eleven dimensions and reducing back on a circle times a Calabi-Yau. It then follows that the result for gauge fields takes the form (6.9). Comparing (6.6) and (6.9) to the known F 4 term in supergravity, given in (D.7), we find that the interactions of the vector multiplets arising from expansion along the second cohomology are governed by the tensor X IJ above.
We now turn to the three-index structure, and compute the parity odd term quadratic in 4D field strengths. Following the same lifting and reducing procedure, we find that there is a single even-odd term quadratic in 3-form field strengths, which upon reduction to 4D gives rise to a term of the type (6.11), where in the second step we partially integrated and the dots denote terms involving the derivative of the coupling H IJ,K . The explicit expression for this three-index coupling follows from the corresponding internal relation, together with some important identities. The remaining interactions with the ten dimensional RR gauge field, F 0 , described by H 0I and H 00 , are obtained by viewing F 0 as a Kaluza-Klein gauge field, coming from the reduction from 11D. As these must necessarily be quadratic in the Kaluza-Klein gauge fields, only the factorised term in the 10D invariant contributes. It then follows that, as far as terms quadratic in Riemann tensors are concerned, the lifting and reducing procedure is identical to the 4D/5D connection studied in [24]. Therefore, we can simply add the couplings H 0I and H 00 found in that work, given in (6.14), to the ones in (6.6) and (6.9) above. After adding the extra contribution in (6.14) to (6.6), and performing the by now standard shift in (4.12) to account for the axionic coupling to the 0-th gauge field strength, we obtain the final form of the coupling H AB in (6.15), where in the second step we passed from the special coordinates z I to the projective coordinates Y A . Note that the coupling X IJ is real and depends only on the Kähler moduli t I , similar to the lowest order Kähler potential. The couplings (6.15) satisfy the condition Y A H AB = 0, so that they belong to the class of [12]. Recently, a more general class of F 4 invariants appeared in [25], which allows for Y A H AB ≠ 0 and contains additional terms quadratic in the Ricci tensor. However, we find that no such extra terms appear in the reduction of the 10D action, beyond the one in the familiar (Weyl) 2 term, consistent with the properties of H AB above.
On H 2 F 2 terms
Finally, we comment on the possible four derivative terms which mix tensor and vector multiplets. Such terms have not been explicitly constructed in the literature and it is an interesting open problem to tackle, even for rigidly supersymmetric theories. Indeed, a construction of such an invariant is likely to lead to insight into more general mixed terms of the type H 2n F 2m , where n is odd, examples of which have been mentioned above (e.g. the H 2 F 4 term).
An explicit computation of the terms arising from reduction of the parity even terms at this order reveals that terms involving derivatives of H and F do not arise. However, we do find nontrivial terms involving field strengths only, e.g. the one in (6.16), and terms related to it by introducing the gauge field strengths, i.e. X_{IJ} E^µ E^ν F^I{}_µ{}^ρ F^J{}_{νρ} , where X IJ is the integral defined in (6.7) above. In order to obtain this result, we used (2.14) and we note that the additional terms ∆J 0 (Ω + , H) in (2.6) are nontrivial in this case.
In addition, the parity odd terms are also nontrivial for these couplings, since one can easily verify that the parity odd term (2.7) leads to couplings of the type where Y IJ is the integral In the last relation, the two-forms ω I are viewed as vector valued one-forms, for convenience. Note that the term (6.17) is linear in the tensor field strength, unlike the parity even coupling (6.16). This may seem counterintuitive, but we stress that our simplifying choice of ignoring the scalars in the tensor multiplet may obscure the connection between tensor multiplet couplings that are expected to be controlled by appropriate functions of these scalars. Finally, we point out that Y IJ is by definition antisymmetric in its indices, which is similar to the corresponding six derivative terms in (5.9)-(5.10) above. This type of odd terms is somewhat unconventional in the N = 2 setting and may point to a common origin of all these unknown invariants.
One possible way to construct couplings of this type is to make use of the results of [28], on arbitrary couplings of vector and tensor multiplet superfields. In terms of the superfields G and W A describing the tensor and vector multiplets respectively, one may consider an integral of the type
\int d^4\theta\, d^4\bar\theta\;\; H(W, \bar W)\; G^2\,, \qquad (6.19)
in order to describe couplings such as the above, where the function H must be such that the couplings (6.16)-(6.17) are reproduced. It is worth mentioning that including kinetic multiplets in (6.19) may lead to even higher derivative couplings that can account for some of the unknown couplings pointed out above, i.e. of the type H 2n F 2m , where n is odd. The explicit realisation of the possible Lagrangians following from the integral (6.19) in components would require the construction of a density formula for a general real multiplet of N = 2 supergravity and falls outside the scope of the present work.
Some open questions
We shall conclude with a list of some open questions.
One immediate consequence of this work is the prediction of new four-dimensional higher-derivative N = 2 invariants. It would be nice to be able to verify this prediction by explicitly constructing some of these terms, either using the structure in (6.19), or new techniques. It is interesting to point out that the new invariants involve terms, descending from the eleven-dimensional anomalous terms C 3 ∧ X 8 , which are top-form Chern-Simons-like couplings. Examples of these at the six- and four-derivative level are discussed in sections 5 and 6 respectively. It would also be very interesting to verify whether the terms that we find to be vanishing but could in principle be nontrivial, such as the H 6 F 2 and (∇H) 2 (∇F ) 2 terms, do exist or not. Moreover, we stress that we have been focusing on the leading terms, matching to the invariants constructed in [12] and disregarding the possibility of more detailed structures that might appear. While we have not found any inconsistencies, we cannot exclude the existence of subleading terms that are not captured here. For example, the types of invariants recently constructed in [25] allow for additional couplings proportional to the square of the Ricci tensor, rather than the Weyl tensor alone.
There are a number of important omissions here. We have worked exclusively with one-loop terms, and avoided the discussion of the dilaton. Our excuse can be that the tree-level terms neither survive the eleven-dimensional limit, nor contribute to the well-studied R 2 terms in four dimensions. Yet they are important for understanding the corrections to the moduli spaces. In addition, the dilaton is subtle and important enough to merit a discussion.
As already mentioned, we have largely ignored the complex structure deformations of the internal CY. It might be of some interest to extend our results to generic hypermatter, since that would most likely turn on the couplings that we find to be vanishing.
We have concentrated only on CY compactifications and hence ungauged N = 2 theories. Quantum corrections to the superpotential have been much studied and are of obvious interest. It would be very interesting to extend the discussion of (at least some of) the higher derivative couplings to the gauged theories. The fact that the couplings described here have an off-shell formulation is helpful in that respect.
The relation of our calculation to the topological string calculations needs further elucidation. Most of our CY integrals are not topological and one may ask if there is an extension or refinement of topological strings that may capture the physical string theory couplings described here. Our calculations are exclusively one-loop, but one might hope that the structure of the terms discussed here, and the relations between different supersymmetric invariants are sufficiently restricted by supersymmetry to extend to all genus calculations.
The structure of the various functions describing the coupling of the gravity and tensor multiplets seems to treat the two backgrounds on the same footing, somehow reflecting the structure of the ten dimensional action built out of the torsionful curvature tensor R(Ω + ). Given that this structure was instrumental in checking T-duality in [14], it would be interesting to consider the properties of our couplings under the c-map, which is the analogous lower dimensional operation. Note that this would explicitly relate the vector and tensor multiplets, especially in view of the fact that the various couplings mix the two kinds of multiplets.
The new terms discussed here are not relevant for BPS black hole physics, at least at the attractor [12,29,30], as they vanish by construction on fully BPS backgrounds and do not affect the entropy and charges. However, our results are relevant for non-BPS black holes and may be related to the one-loop modifications to the entropy of such objects, as in [31].
A Explicit expressions for the quartic one-loop terms in 10D
The even-even term quartic in R is written in terms of the combinations R i defined in (A.4) below. Similarly, we display for completeness the full expression for the odd-odd term quartic in R, where R_{µ1µ2} = R_{µ1µ3µ2}{}^{µ3} is a non-symmetric tensor corresponding to the Ricci tensor and the scalar R is its trace. The various non-Ricci combinations appearing in both the even-even and odd-odd structures are defined for any tensor R_{µ1µ2µ3µ4} that is antisymmetric in each pair of indices, but does not satisfy the Bianchi identity, and we use the shorthand notation R̃_{µ1µ2µ3µ4} = R_{µ3µ4µ1µ2} in order to keep expressions compact. Note that if R is identified with a Riemann tensor, all tilded quantities become equal to their untilded counterparts.
B Off-shell N = 2 supergravity and chiral multiplets
In this appendix we summarise some general formulae on the N = 2 Weyl multiplet in four dimensions and the chiral multiplets in a general superconformal background. Our Weyl multiplet conventions are as in [12], where the reader can find a more detailed account.
N = 2 superconformal gravity
The off-shell formulation of four-dimensional N = 2 supergravity is based on the Weyl multiplet of conformal supergravity, whose components are given in Table 2. This consists of the vierbein e µ a , the gravitino fields ψ µ i , the dilatational gauge field b µ , the R-symmetry gauge fields V µi j (which is an anti-hermitian, traceless matrix in the SU(2) indices i, j) and A µ , an anti-selfdual tensor field T ab ij , a scalar field D and a spinor field χ i . All spinor fields are Majorana spinors which have been decomposed into chiral components. The three gauge fields ω µ ab , f µ a and φ µ i , associated with local Lorentz transformations, conformal boosts and S-supersymmetry, respectively, are not independent as will be discussed later.
The infinitesimal Q, S and K transformations of the independent fields are parametrized by spinors ε i and η i and a vector Λ K a , respectively. Two covariant derivatives appear in these variations: the full superconformally covariant derivative, and a covariant derivative D µ with respect to Lorentz, dilatation, and chiral SU(2)×U(1) transformations only; a schematic example of the latter is given below.
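As a minimal illustration of the restricted covariant derivative (signs and the precise charge assignments are convention dependent and are our assumption here), acting on a scalar φ of Weyl weight w and chiral U(1) weight c one has
D_\mu \phi \;=\; \big(\partial_\mu - w\, b_\mu - i\,c\, A_\mu\big)\,\phi\,,
with additional spin-connection and SU(2) terms acting on any Lorentz or SU(2) indices that φ may carry.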
Under local scale and U(1) transformations the various fields and transformation parameters transform as indicated in Table 2. The quantities R(Q) and their analogues appearing in the supersymmetry variations above are the supercovariant curvature tensors corresponding to each generator, whose detailed definitions can be found in [12]. Here, we only give the expressions necessary to introduce the conventional constraints defining the composite gauge fields associated with local Lorentz transformations, S-supersymmetry and special conformal boosts, ω µ ab , φ µ i and f µ a , respectively.
Chiral multiplets
Chiral multiplets are the basic building blocks of all supersymmetric invariants in this paper. We therefore give a concise overview of their most basic properties, to be used in the various constructions. Chiral multiplets are complex, carrying a Weyl weight w and a chiral U(1) weight c, which is opposite to the Weyl weight, i.e. c = −w, while anti-chiral multiplets can be obtained from chiral ones by complex conjugation, so that anti-chiral multiplets will have w = c. The components of a generic scalar chiral multiplet are a complex scalar A, a Majorana doublet spinor Ψ i , a complex symmetric scalar B ij , an anti-selfdual tensor G − ab , a Majorana doublet spinor Λ i , and a complex scalar C. The assignment of their Weyl and chiral weights is shown in table 3. The Q- and S-supersymmetry transformations for a chiral multiplet are given in (B.5). Any homogeneous function of chiral superfields constitutes a chiral superfield, whose Weyl weight is determined by the degree of homogeneity of the function at hand. Indeed, one can show that a function G(Φ) of chiral superfields Φ I defines a chiral superfield, whose component fields take the form given in (B.6), where G I , G IJ etc. are the derivatives of the function G with respect to the scalars A I and we omitted all terms nonlinear in fermions for brevity. Chiral multiplets of w = 1 are special, because they are reducible upon imposing a reality constraint. The two cases that are relevant are the vector multiplet, which arises upon reduction from a scalar chiral multiplet, and the Weyl multiplet, which is a reduced anti-selfdual chiral tensor multiplet.
Table 4: Weyl and chiral weights (w and c) and fermion chirality (γ 5 ) of the vector multiplet and the tensor multiplet.
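Schematically, and keeping only the bosonic terms with the coefficients of the quadratic pieces left implicit (they are convention dependent and not fixed here), the composite components referred to in (B.6) take the form
A|_G = G(A)\,, \quad B_{ij}|_G = G_I\, B^I_{ij} + \ldots\,, \quad G^-_{ab}|_G = G_I\, G^{-\,I}_{ab} + \ldots\,, \quad C|_G = G_I\, C^I + G_{IJ}\,\big(B^I\!\cdot\!B^J \ \text{and}\ G^{-I}\!\cdot\!G^{-J}\ \text{terms}\big) + \ldots\,,
so that the higher components involve second derivatives of G contracted with bilinears of the lower components.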
The constraint for a scalar chiral superfield implies that C| vector and Λ i | vector are expressed in terms of the lower components of the multiplet, and imposes a reality constraint on B| vector and a Bianchi identity on G − | vector [32-34], as in (B.7), where F µν = 2∂ [µ A ν] is the field strength of a gauge field, A µ . The corresponding Bianchi identity on G ab can be written as in (B.8), where in both (B.7) and (B.8) we again omitted terms nonlinear in fermions. The reduced scalar chiral multiplet thus describes the covariant fields and field strength of a vector multiplet, which encompasses 8+8 bosonic and fermionic components. Table 4 summarizes the Weyl and chiral weights of the various fields belonging to the vector multiplet: a complex scalar X, a Majorana doublet spinor Ω i , a vector gauge field A µ , and a triplet of auxiliary fields Y ij . The Q- and S-supersymmetry transformations for the vector multiplet, for w = 1, are in clear correspondence with the supersymmetry transformations of generic scalar chiral multiplets given in (B.5).
We now turn to the covariant fields of the Weyl multiplet, which can be arranged in an anti-selfdual tensor chiral multiplet. Note that all quantities involved in the components of this multiplet are either manifestly supercovariant curvatures or (covariant) auxiliary fields of the Weyl multiplet. In particular, R(S) abi is the curvature of the S-supersymmetry gauge field, which is solved in terms of the derivative of the gravitino curvature, R(Q) abi , due to the conventional constraints. All higher derivative terms involving powers of the Weyl tensor in this paper are constructed from couplings of the scalar chiral multiplet with w = 2 that is obtained by squaring the Weyl multiplet above; its scalar chiral multiplet components are given in (B.11). In practice, we will only use the lowest component, A w , to construct functions that define composite chiral multiplets, as in (B.6), which determines completely all instances of the higher components in the relevant couplings. The components (B.11) can then be substituted straightforwardly in the final expressions to obtain the explicit couplings to the fields of the Weyl background.
C Tensor multiplet as a chiral background
We now turn to the tensor multiplet, which is also defined as an off-shell multiplet in an arbitrary superconformal background. The field content of this multiplet includes a pseudoreal triplet of scalars, L ij , a two-form gauge potential, B µν , a Majorana fermion doublet, ϕ i , and an auxiliary complex scalar, G, with the Weyl and chiral assignments given in table 4. The corresponding supersymmetry transformation rules are given in (C.1), and we refer to [19] for the precise definitions of the superconformally covariant derivatives on the various fields. The vector Ê µ is the superconformal completion of the dual of the three-form field strength, Ê_µ = (i/2) e^{−1} ε_{µνρσ} ∂^ν B^{ρσ} . The couplings of the tensor multiplets are given in terms of composite vector multiplets [19], described by functions of a set of tensor multiplets, labeled by I. To this end, we define the first component, the scalar X I , in (C.2), which, by (C.1), transforms according to the first of (B.5) into the remaining bosonic components of the vector multiplet, as in (C.3), where we suppressed all fermions and the component C I is consistent with (B.7). In order for this multiplet to be well defined, the first derivative of F I,J (L) with respect to L K ij , denoted by F I,J,Kij , must satisfy the constraints (C.4)-(C.5), which imply that the function F I,J is SU(2) invariant and homogeneous of degree −1, so that it has scaling weight −2.
The expressions for the composite chiral supermultiplet above can be used to construct actions with higher derivative couplings. In general, one can use (C.2)-(C.3) on the same footing as any vector multiplet to obtain actions containing vector-tensor couplings. This is beyond the scope of this paper, where we only consider a background chiral multiplet containing four derivatives on the components of a single tensor multiplet, similar to [19] but allowing for couplings depending on vector multiplet scalars as well.
For a single tensor multiplet, the functions F_{I,J} in (C.2) reduce to a single function F(L), while the constraints (C.4)–(C.5) imply a single constraint on it. We then consider the chiral multiplet of w = 2 defined by its first component as the square of (C.2), through (C.7), where we defined the function H(L) = [F(L)]² and its derivatives. As noted in (3.7), for a single tensor multiplet the function F is essentially unique, so that H is simply given by its square; in the reduction we consider in the main text, the scalars L_ij contain the dilaton and are kept constant throughout. The remaining components of this composite background multiplet are given by (B.6) for G(A) = A², as follows from (C.7). For completeness, we display their form for a general function H, both for the lower components and for the top component.
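The uniqueness statement above can be made concrete: since the only SU(2) invariant built from the triplet L_ij is its norm, a degree −1 homogeneous, SU(2)-invariant function is fixed up to normalization. The following display is a schematic sketch (our normalization conventions):

```latex
% |L| is defined through the unique quadratic invariant of the triplet,
% so F(L) and hence H(L) = F(L)^2 are fixed up to overall constants.
|L| \;\equiv\; \sqrt{\tfrac{1}{2}\, L_{ij} L^{ij}}\,, \qquad
F(L) \;\propto\; \frac{1}{|L|}\,, \qquad
H(L) \;=\; \big[F(L)\big]^{2} \;\propto\; \frac{1}{|L|^{2}}\,.
```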
D The kinetic multiplet and supersymmetric invariants
The central object in constructing the various higher derivative invariants of the type R^{2n} F^{2m} in this paper is the so-called kinetic chiral multiplet. The term 'kinetic' multiplet was first used in the context of the N = 1 tensor calculus [35], because this is the chiral multiplet that enables the construction of the kinetic terms, conventionally described by a real superspace integral, in terms of a chiral superspace integral. In [12,34] a corresponding kinetic multiplet, T(Φ), for a chiral w = 0 multiplet, Φ, was identified for N = 2 supersymmetry, which now involves four rather than two covariant θ̄-derivatives. It follows that T(Φ) contains up to four space-time derivatives, so that the expression corresponds to a four-derivative coupling. Expressing the chiral multiplets in terms of (functions of) reduced chiral multiplets, (D.1) leads to higher-derivative couplings of vector multiplets and/or the Weyl multiplet.
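In flat-superspace language the construction can be summarized schematically as follows; this display is an illustrative paraphrase in our notation (the paper itself works in the superconformal component formalism, and (D.1) is the precise statement):

```latex
% Schematic: the kinetic multiplet trades a full-superspace integral for a
% chiral one, with T(\bar\Phi') carrying four covariant theta-bar derivatives.
e^{-1}\mathcal{L} \;\sim\; \int \mathrm{d}^{4}\theta\; \Phi\, \mathrm{T}(\bar\Phi')
\;\;\cong\;\; \int \mathrm{d}^{4}\theta\, \mathrm{d}^{4}\bar\theta\;\; \Phi\, \bar\Phi'\,,
\qquad
\mathrm{T}(\bar\Phi') \;\sim\; \bar D^{4}\, \bar\Phi'\,,
```

which makes it manifest that the result contains up to four space-time derivatives.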
Denote the components of a w = 0 chiral multiplet by (A, Ψ, B, G−, Λ, C), out of which we construct the components of T(Φ_{w=0}), denoted by (A, Ψ, B, G−, Λ, C)|_{T(Φ)}. In [12] the relation (D.2) between the two sets was established, in which terms nonlinear in the covariant fermion fields are suppressed. Observe that the right-hand side of these expressions is always linear in the conjugate components of the w = 0 chiral multiplet, i.e. in (Ā, Ψ̄^i, B^{ij}, G+_{ab}, Λ̄^i, C̄). Using the result (D.2) one can construct a large variety of superconformal invariants with higher-derivative couplings involving vector multiplets, as well as the tensor and Weyl chiral backgrounds. The construction of the higher-order Lagrangians therefore proceeds in two steps. First one constructs the Lagrangian in terms of unrestricted chiral multiplets of appropriate Weyl weights, in the form (D.3). Here, the n-th power of the kinetic multiplet is defined recursively as T^{(n)} = T(Φ_n T^{(n−1)}) for Φ_n of appropriate weight. Subsequently, one expresses the unrestricted supermultiplets in terms of the reduced supermultiplets in section B. In these expressions it is natural to introduce a variety of arbitrary homogeneous functions, so that the resulting final Lagrangian is controlled by a function of given homogeneity and holomorphicity in the various fields, corresponding to the original structure in (D.3).
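The family of integrals referred to as (D.3) can be indicated schematically as below; again, this is an illustrative paraphrase in our notation rather than the precise superconformal expression:

```latex
% Schematic structure of the two-step construction: a chiral superspace
% integral over a chiral multiplet times a (possibly nested) kinetic multiplet.
e^{-1}\mathcal{L} \;\sim\; \int \mathrm{d}^{4}\theta\;\; \Phi'\; \mathrm{T}^{(n)}\,,
\qquad
\mathrm{T}^{(n)} \;=\; \mathrm{T}\!\big(\Phi_{n}\, \mathrm{T}^{(n-1)}\big)\,,
\qquad
\mathrm{T}^{(0)} \;\equiv\; 1\,,
```

with the Weyl weights of the Φ's chosen so that the integrand has the weight appropriate for a chiral density.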
In this work, we will make use of invariants of the type (D.3) in which one, two or three kinetic multiplets appear; these are naturally quadratic, cubic and quartic in chiral multiplet components, respectively. While the first of these was described in detail in [12], the other two have not appeared in the literature. They are straightforward to write using the formulae above and in [12], but are rather unilluminating, so we prefer to emphasise the structure of the corresponding Lagrangians, restricting ourselves to the leading terms.
The quadratic invariant
The simplest case of a Lagrangian involving a kinetic multiplet is the one in (D.1), where a w = 0 chiral multiplet is multiplied with the kinetic multiplet of an antichiral one. In components, the leading bosonic terms in the resulting Lagrangian are given by (D.4), where we suppressed the prime on the second chiral multiplet indicated in (D.1) for brevity. The next step is to consider the components of the chiral and anti-chiral multiplet in (D.4) to be composite, given as holomorphic and anti-holomorphic functions, F and F̄, of the fundamental vector, tensor and Weyl multiplets respectively, as in (D.5). The result is a Lagrangian that is controlled by a homogeneous function of degree zero, which depends on the vector multiplet scalars, X^A, and the Weyl and tensor multiplet composites, A_w and A_t. This invariant corresponds to higher derivative couplings that are quadratic in the leading terms: F², R² and (∇E)², respectively. The arbitrariness of the function in A_w is analogous to the similar dependence of the chiral couplings, F(X^A, A_w), which describes the full topological string partition function. Note that the various combinations have different orders of derivatives: e.g. F⁴ comprises only four derivatives, while R²F² and (∇E)²F² contain six derivatives, and R⁴, R²(∇E)² and (∇E)⁴ contain eight derivatives. However, all these invariants have a common structure, found by substituting the definitions of the chiral multiplets in terms of F, F̄ and H in (D.4). This was done in [12], where the F⁴ coupling was constructed, based on a real function H(X, X̄), which plays the role of a Kähler potential, as it is defined up to Kähler transformations, H(X, X̄) → H(X, X̄) + Λ(X) + Λ̄(X̄). (D.6) The explicit form of the Lagrangian is given in (D.7). One can obtain the more general couplings as discussed above, resulting in similar expressions. For example, the R²F²- and R⁴-type couplings feature terms found by substituting F² → R², and similarly for the other components in (D.7); they are discussed in [12].
The cubic invariant
The next more complicated example of Lagrangians containing kinetic multiplets is an integral quadratic in kinetic multiplets, as in (D.9), where Φ₀ is a w = −2 chiral multiplet, while Φ₁ and Φ₂ are w = 0 anti-chirals, as above. It is straightforward to apply the multiplication rule for chiral multiplets to obtain the analogous master formula of the type (D.4) in this case. The result, (D.10), is manifestly quadratic in holomorphic and linear in anti-holomorphic components. Note that we again use a simplified notation that naively identifies the three a priori independent multiplets, despite the fact that the chiral multiplet is of weight −2, while the anti-chiral ones are of w = 0. The most general invariant follows by completing the combinations given above with the components of the kinetic multiplet given in (D.2) and viewing the holomorphic components as quadratic forms in the components of the two chiral multiplets in (D.9), as done in (D.4).
It is now straightforward, if cumbersome, to consider the three multiplets in (D.9) as functions of the vector multiplets, the tensor multiplet and the Weyl multiplet, as done in (D.5), leading to a Lagrangian described by a function H(X^A, A_w, A_t, X̄^A, Ā_w, Ā_t), which is homogeneous of degree zero in the holomorphic components and of degree −2 in the anti-holomorphic components. We refrain from giving the analogue of the expression (D.7) in this case, since we will only be dealing with the leading terms and the properties of the corresponding function H.
Once again, the generic function of all available multiplets leads to various invariants, which contain different orders of derivatives but share the same structure, as in (D.10). The prototype of these terms is the F⁶ invariant, arising by taking H(X^A, X̄^A), i.e. a function of vector multiplet scalars only. Allowing for holomorphic/anti-holomorphic dependence on the scalars A_w and A_t leads to terms of the type R²F⁴, R⁴F², (∇E)²F⁴ and so on, for all possible combinations. Note that many of these contain more than eight derivatives and therefore fall outside the scope of this work.
The quartic invariants
We finally consider integrals of the type (D.3) that are cubic in the kinetic multiplet operator, T, in which case we find two possibilities. Indeed, this is the first case where one needs to consider nested kinetic multiplets, since the two possible integrals in (D.11) are not equivalent upon partial integration. Here, the first integral is the straightforward extension of (D.1) and (D.9), while in the second integral Φ₀ and Φ̄₀ are w = −2 (anti-)chiral multiplets, while Φ₁ and Φ₂ are w = 0 chirals, as above.
Once again, one can apply the multiplication rule for chiral multiplets to obtain the analogous master formula of the type (D.4) in these cases. The expression for the first integral is similar to (D.10), where three chiral multiplets appear, and is not used in this paper. The second integral is more cumbersome, but can be easily computed by an iterative procedure, by noting that Φ̄₀T(Φ₁) and Φ₀T(Φ̄₂) are w = 0 multiplets, so that (D.4) applies to their components. One can then obtain the result for the integral by making the substitutions (D.12) in (D.4), where the components labeled with |_{T(Φ)} are as in (D.2). As above, allowing the four chiral multiplets involved to depend on the vector, tensor and/or the Weyl multiplet, exactly as in (D.5), one obtains various higher derivative invariants sharing the same structure. However, all but one of the invariants described by each of the two integrals in (D.11) necessarily contain more than eight spacetime derivatives if the Weyl and tensor multiplet backgrounds are allowed, so that they are not relevant for our consideration. The exception is the case where all the composite chiral multiplets depend only on the vector multiplets, in which case we obtain two F⁸ invariants from (D.11). | 19,056.8 | 2013-11-19T00:00:00.000 | [
"Mathematics"
] |
A deep learning framework for accurate reaction prediction and its application on high-throughput experimentation data
In recent years, artificial intelligence (AI) has started to bring revolutionary changes to chemical synthesis. However, the lack of suitable ways of representing chemical reactions and the scarceness of reaction data have limited the wider application of AI to reaction prediction. Here, we introduce a novel reaction representation, GraphRXN, for reaction prediction. It utilizes a universal graph-based neural network framework to encode chemical reactions by directly taking two-dimensional reaction structures as inputs. The GraphRXN model was evaluated on three publicly available chemical reaction datasets and gave on-par or superior results compared with other baseline models. To further evaluate the effectiveness of GraphRXN, wet-lab experiments were carried out for the purpose of generating reaction data. The GraphRXN model was then built on high-throughput experimentation data and a decent accuracy (R2 of 0.712) was obtained on our in-house data. This highlights that the GraphRXN model can be deployed in an integrated workflow which combines robotics and AI technologies for forward reaction prediction. Supplementary Information The online version contains supplementary material available at 10.1186/s13321-023-00732-w.
Introduction
Organic synthesis is the foundation for the development of life science, such as pharmaceutics and chemical biology [1,2]. For decades, the discovery of chemical reactions was driven by serendipity and intuition stemming from expertise, experience and mechanism exploration [3]. However, professional chemists sometimes have a hard time predicting whether a specific substrate can indeed undergo a desired reaction transformation, even for some well-established reactions [4,5]. When optimizing reaction yield or selectivity, small changes in reaction factors, including catalysts, temperature, ligands, solvents, and additives, may result in outcomes that deviate from the intended target.
With the development of artificial intelligence (AI), computational methods to predict reaction outcomes and retro-synthesis routes have been proposed to accelerate chemical research [6][7][8][9][10][11][12]. There is a rich history of computer-assisted chemical synthesis. Jorgensen and coworkers introduced Computer Assisted Mechanistic Evaluation of Organic Reactions (CAMEO [13]). This and other early approaches, including SOPHIA [14] and Robia [15], attempted to employ expert heuristics to define possible mechanistic reactions. A shortcoming they suffered in common was the difficulty of enabling prediction of novel chemistry. For specific reaction classes with sufficiently detailed reaction condition data, machine learning can be applied to the quantitative prediction of yield [16]. As a sub-domain of AI, deep learning technologies boomed in the last decade and have made a huge impact on reaction prediction and retrosynthesis modelling. For retro-synthesis planning, there are two types of deep learning model. One type is the so-called template-based model, which combines reaction templates with deep neural networks [17][18][19]. Reaction templates are the classic approach to codifying the "rules" of chemistry [20][21][22][23], and are extensively applied in computer-aided synthesis planning [24,25]. In contrast, without using any pre-defined reaction templates, various deep-learning-based machine translation models were employed to learn chemical reactions from data directly and can also be used for synthesis planning. These are called template-free models.
For the prediction of reaction outcomes, quantum-mechanics (QM) based descriptors, representing electrostatic or steric characterizations, calculated by density functional theory (DFT) or other semi-empirical methods [26][27][28][29], are frequently used for modelling. Doyle et al. [16] utilized QM-derived descriptors to build a random forest model, which achieved good prediction performance on the Buchwald-Hartwig cross-coupling of aryl halides with 4-methylaniline. Sigman et al. [30] defined four important DFT parameters to capture the conformational dynamics of the ligands, which were fed into multivariate regression modelling for the correlation of ligand properties and relative free energy. Denmark et al. [10] generated a set of three-dimensional QM descriptors to develop an AI-based model for enantioselectivity prediction. Applying QM descriptors to modelling offers the advantage of model interpretability, but it usually requires a deep understanding of reaction mechanisms, which may be difficult to transfer to other reaction prediction tasks. Another popular kind of descriptor is the so-called reaction fingerprint. Glorius and co-workers [31] developed multiple fingerprint features (MFFs) as molecular descriptors, concatenating 24 different fingerprints, to predict the enantioselectivities and yields for different experimental datasets. Although good results were observed, this method can be a time- and resource-intensive process, as a single molecule was represented in a 71,374-bit array. Reymond et al. [32] reported a molecular fingerprint called the differential reaction fingerprint (DRFP), which takes reaction SMILES as input and embeds them into an arbitrary binary space via set operations for subsequent hashing and folding, to perform reaction classification and yield prediction. Though reaction fingerprints are easily built, they may lose certain chemical information due to the limited predefined substructures, and thus a task-specific representation that can be learned from the dataset is needed.
One possible solution to the issue of universal reaction descriptors is to apply graph neural networks (GNNs) to the reaction prediction task [33,34]. Owing to their powerful capacity for modelling graph data, GNNs have recently become one of the most popular AI methods and have achieved remarkable prediction performance on several tasks [11,[35][36][37]. Various graph-based models, such as the graph convolutional network (GCN) [11,38], GraphSAGE [39], the graph attention network (GAT) [40] and the message passing neural network (MPNN) [41], have been proposed to learn a function of the entire input graph over molecular properties, by either directly applying a weight matrix on the graph structure or using a message passing and aggregation procedure to update node features iteratively. A molecule is regarded as a graph, where atoms are treated as nodes and bonds are treated as edges. Node and edge features are influenced by proximal ones, and these features are learned and aggregated to form the embedding of the entire molecule graph [41,42]. It is worth mentioning that, in addition to the above-mentioned graph model architectures, the transformer neural network [43] was adopted for the direct processing of molecular graphs as sets of atoms and bonds [44,45]. For example, the transformer-based model Graphormer-Mapper [46] was proposed for reaction featurization, which is similar to the idea of learning molecular graph features from reaction data, but based on a transformer architecture.
In this work, we propose a modified communicative message passing neural network (GraphRXN), which is used to generate reaction embeddings for reaction modelling without using predefined fingerprints. For chemical reactions comprised of multiple components, reaction features can be built up by aggregating the embeddings of these components together and correlated to the reaction output via a dense-layer neural network.
Another major challenge for reaction prediction is access to high-quality data [47,48]. Though numerous data have been accumulated, bias toward positive results in the literature has led to unbalanced datasets. What's more, extracting valid large-scale data from the literature requires substantial human intervention. High-throughput experimentation (HTE) is a technique that can perform a large number of experiments in parallel [49,50]. HTE could serve as a powerful tool for advancing AI chemistry as it has the capability to significantly increase experiment throughput and ensure data integrity and consistency. With this technology, several high-quality reaction datasets were reported [47], including Buchwald-Hartwig amination [16,51,52], Suzuki coupling [9,53,54], and photoredox-catalysed cross coupling [55]. These datasets contain both successful and failed reactions, which is critical for building forward reaction prediction models. Three public HTE datasets were used as proof-of-concept studies for our method and encouraging results were demonstrated. As further verification, we used our in-house HTE platform to generate data for the Buchwald-Hartwig cross-coupling reaction. The GraphRXN methodology was then applied to the in-house dataset and a decent prediction model was obtained (R 2 of 0.713), which highlights that our method can be integrated with a reaction robotics system for reaction prediction. We expect that deep learning based methods like GraphRXN, combined with data-on-demand reaction machines, could potentially push the boundary of reaction methodology development [56,57].
GraphRXN framework
A deep-learning graph framework, GraphRXN, is proposed that is capable of learning reaction features and predicting reactivity (Fig. 1).
The input of GraphRXN is the reaction SMILES, where each reaction component (either reactant or product) is represented by a directed molecular graph G(V, E) [58]. For each individual graph of the reaction, learning proceeds in three steps: message passing, information updating, and readout. All node features (x(v), ∀v ∈ V) and edge features (x(e_{v,w}), ∀e_{v,w} ∈ E) are propagated in the message passing and updating stages as shown in Algorithm 1: (c) after K iteration steps, the message vector m(v) is obtained by aggregating the hidden states h^K(e_{u,v}) of its neighbouring edges. The node message vector m(v), the current node hidden state h^K(v) and the initial node information x(v) are fed into a communicative function to form the final node embedding h(v). (d) A Gated Recurrent Unit (GRU) is chosen as the readout operator to aggregate the node vectors into a graph vector. The length of the molecule feature vector is adjustable (here it is set to 300 bit).
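To make the update rules concrete, the following is a minimal PyTorch sketch (not the authors' released code) of steps (a)-(d) of Algorithm 1; class and argument names are hypothetical, and edge connectivity is assumed to be given as source/destination index arrays:

```python
import torch
import torch.nn as nn

class GraphRXNEncoderSketch(nn.Module):
    """Illustrative CMPNN-style encoder: message passing, update, GRU readout."""
    def __init__(self, node_dim, edge_dim, hidden=300, steps=3):
        super().__init__()
        self.node_init = nn.Linear(node_dim, hidden)
        self.edge_init = nn.Linear(node_dim + edge_dim, hidden)
        self.W = nn.Linear(hidden, hidden, bias=False)        # weight for edge messages
        self.comm = nn.Linear(2 * hidden, hidden)             # communicative function
        self.final = nn.Linear(2 * hidden + node_dim, hidden) # final node embedding
        self.gru = nn.GRU(hidden, hidden, batch_first=True)   # readout operator
        self.steps = steps

    def forward(self, x_v, x_e, src, dst):
        # x_v: (N, node_dim) atom features; x_e: (E, edge_dim) bond features
        # src, dst: (E,) long tensors; edge e runs from atom src[e] to atom dst[e]
        h_v = torch.relu(self.node_init(x_v))
        h_e = torch.relu(self.edge_init(torch.cat([x_v[src], x_e], dim=-1)))
        h_e0 = h_e
        for _ in range(self.steps):
            # (a) node update: aggregate incoming edge states, then "communicate"
            m_v = torch.zeros_like(h_v).index_add_(0, dst, h_e)
            h_v = torch.relu(self.comm(torch.cat([h_v, m_v], dim=-1)))
            # (b) edge update: m_e = h(start node) - h_e(previous step),
            #     then h_e = ReLU(h_e0 + W * m_e)
            h_e = torch.relu(h_e0 + self.W(h_v[src] - h_e))
        # (c) final node embedding from aggregated message, hidden state, input
        m_v = torch.zeros_like(h_v).index_add_(0, dst, h_e)
        h_v = torch.relu(self.final(torch.cat([m_v, h_v, x_v], dim=-1)))
        # (d) GRU readout: fold the node vectors into one 300-dim graph vector
        _, g = self.gru(h_v.unsqueeze(0))
        return g.squeeze(0).squeeze(0)
```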
The molecular feature vectors are then aggregated into one reaction vector by either a summation or a concatenation operation (named GraphRXN-sum and GraphRXN-concat, respectively). The length of the GraphRXN-sum vector is set to 300 bit, and that of GraphRXN-concat is a multiple of 300 (depending on the maximal number of reaction components). Taking a two-component reaction (A + B → P) as an example: when summation is selected to aggregate the features of A, B and P, the length of the reaction vector is 300 bit; when concatenation is selected, the length of the reaction vector is 900 bit. In addition, for some reaction components that are inappropriate to depict by a graph structure, such as inorganic reagents or catalysts, one-hot embedding is used to characterize them. Finally, a dense layer is used to fit reaction outcomes, including reaction yield and selectivity.
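As a toy illustration of the two aggregation schemes (tensor names are ours), assuming 300-dimensional component embeddings from the encoder:

```python
import torch

g_A, g_B, g_P = (torch.randn(300) for _ in range(3))  # embeddings of A, B, P

rxn_sum = g_A + g_B + g_P                # GraphRXN-sum: stays 300-dimensional
rxn_concat = torch.cat([g_A, g_B, g_P])  # GraphRXN-concat: 3 x 300 = 900-dim
print(rxn_sum.shape, rxn_concat.shape)   # torch.Size([300]) torch.Size([900])
```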
Data preparation
As shown in Table 1, in total, four reaction datasets were used to validate the performance of our GraphRXN model. Three of them are open-source HTE datasets and one was generated from the in-house HTE platform (in Additional file 1).
The original outcome values x (including yields, selectivities, and ratios) were then treated with z-score normalization, z = (x − µ)/σ, where µ is the mean of all samples and σ is the standard deviation of all samples.
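A one-line sketch of this preprocessing step, keeping µ and σ so that model predictions can later be mapped back to the original scale:

```python
import numpy as np

def zscore(y):
    mu, sigma = y.mean(), y.std()
    return (y - mu) / sigma, mu, sigma  # normalized targets plus stats to invert

z, mu, sigma = zscore(np.array([0.12, 0.55, 0.31, 0.90]))  # toy outcome values
```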
Each dataset was split into a training set and a test set in a ratio of 80:20. Note that a validation set (20% of the training set) was held out to avoid overfitting, i.e. when the model performance on the validation set became stable, the training process would stop. From the k-fold cross-validation (CV) task, we obtained averaged errors, rather than depending on one random split. To make a strict comparison, ten-fold CV was adopted on datasets 1-2, consistent with the reported Yield-BERT study by Reymond et al. [8,59], and on dataset 3, consistent with the reported study by Perera et al. [53]. Five-fold CV was adopted for the in-house dataset.
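A possible sketch of this splitting protocol (the `reactions` array below is a placeholder for the encoded dataset; fold counts follow the text above):

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

reactions = np.arange(100)  # placeholder for the encoded reaction records

kf = KFold(n_splits=5, shuffle=True, random_state=0)  # 5-fold: 80:20 per fold
for train_idx, test_idx in kf.split(reactions):       # (10-fold for datasets 1-3)
    train, test = reactions[train_idx], reactions[test_idx]
    # hold out 20% of the training fold as a validation set for early stopping
    train, valid = train_test_split(train, test_size=0.2, random_state=0)
```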
Baseline models
Two previously published reaction prediction methods, Yield-BERT [8,59] and DeepReac+ [12], were used as baseline models for comparison.
(1) Yield-BERT is a sequence-based model which employs a natural language processing architecture to predict reaction-related properties given a text-based representation of the reaction, using an encoder transformer model combined with a regression layer. The source codes of Yield-BERT were downloaded from https://rxn4chemistry.github.io/rxn_yields/. (2) DeepReac+ is also a graph-based model. In terms of model architecture, unlike the message passing neural network used in GraphRXN, DeepReac+ employed the GAT model, a variant of GNN, as the core building block. The source codes of DeepReac+ were downloaded from https://github.com/bm2-lab/DeepReac.
A hyper-parameter search and minor modifications were conducted to resolve some incompatibility issues of the Python environment. Other training details about the four models (including hyper-parameter selection and training logs) are supplemented in part 2 of the supplementary materials.
Model evaluation
The GraphRXN method, along with the two baseline models, was applied to all four datasets. Regarding the performance measures, three evaluation metrics on the test set were used: the coefficient of determination (R 2), the mean absolute error (MAE) and the root mean squared error (RMSE).
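These three metrics can be computed, for instance, as follows (the function name is ours):

```python
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

def evaluate(y_true, y_pred):
    return {"R2":   r2_score(y_true, y_pred),
            "MAE":  mean_absolute_error(y_true, y_pred),
            "RMSE": np.sqrt(mean_squared_error(y_true, y_pred))}

print(evaluate([0.1, 0.5, 0.9], [0.2, 0.4, 0.8]))  # toy example
```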
HTE platform
HTE, operated under standard codes, has been used to perform parallel experiments for rapid screening of arrays of reactants or conditions, generating large amounts of high-quality reaction data [60,61]. We have developed an in-house HTE platform by assembling various state-of-the-art automated workstations/modules. All experiments in this study were carried out using HTE, including solid dispensing, liquid dispensing, heating and agitation, reaction workup, sample analysis and data analysis (Fig. 2). Careful design of experiment was required before HTE [62].
Solid dispensing
Solid samples were stored in the dispensing containers. Then an overhead gravimetric dispensing unit delivered target amounts of samples from the dispensing containers to the designated 4 mL vials.
Liquid dispensing
Liquid samples were stored in uniform bottles. Then the liquid-handling robot transferred target volumes of samples to the designated 4 mL vials in a programmed manner. With the amounts of solid and liquid samples dispensed in the 4 mL vials, the liquid-handling robot was used again to make stock solutions accordingly. All stock solutions were mixed thoroughly using a vortex mixer. Stock solutions were transferred into the designated glass tubes of 96-well aluminium blocks for reaction setup using the liquid-handling robot.
Heating and agitation
The 96-well aluminium blocks were placed on orbital agitators at the pre-set temperature for the pre-set time.
Reaction workup

After the reactions were stopped and cooled down, a pipetting workstation was used to process the reaction mixtures in batches, including quenching, dilution and filtration. Then samples were prepared in 96-well plates for UPLC-MS analysis.
Sample analysis
Samples were sequentially injected into UPLC-MS for expected substance determination and quantification.
Data analysis
Raw data generated by UPLC-MS were fed into Peaksel [63], analytical software developed by Elsci capable of executing batch-level integration, which rendered the UV response area of the target substance.
Experimental preparation
The Buchwald-Hartwig coupling reaction was used as the examined reaction in this study, to further evaluate GraphRXN on the in-house dataset (Fig. 3).
Experimental workflow on HTE platform
In this study, all reactions were carried out at 0.016 mmol scale in 96-well aluminum blocks using the HTE platform. For reaction setup, all robots were embedded in a glovebox filled with N 2. The 96-well aluminum blocks were sealed under N 2 and then subjected to orbital agitators with the pre-set parameters of 850 rpm and 65 °C. After 16 h, the 96-well aluminum blocks were cooled down to room temperature. In total, 2127 reactions were set up. For each glass tube, 0.0625 equivalents of 4,4ʹ-di-tert-butyl-1,1ʹ-biphenyl was added as internal standard (IS). Reaction solutions were then transferred to filter plates and the filtrates were collected in 96-well plates. The sample plates were then analyzed by UPLC-MS. The UV responses of product and IS were obtained using Peaksel. The ratios of the UV response of product over IS (ratio UV) were calculated from A product, the response area of the target product at the wavelength of 254 nm, A IS, the response area of the IS at the wavelength of 254 nm, and c, a constant (0.0625 eq.) which represents the mole ratio of IS and product at 100% theoretical yield. During the course of data analysis, 569 reaction data points derived from abnormal spectra were discarded. Eventually, 1558 reaction data points were obtained.
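The equation itself did not survive extraction; a plausible reconstruction consistent with the stated roles of A product, A IS and c (i.e. a quantity that approaches 1 at 100% theoretical yield) is sketched below, and should be read as our assumption rather than the verbatim formula:

```python
def ratio_uv(a_product, a_is, c=0.0625):
    # assumed form: product/IS response scaled by the IS stoichiometry c,
    # so that 100% theoretical yield corresponds to a ratio near 1
    return c * a_product / a_is

print(ratio_uv(a_product=3200.0, a_is=210.0))  # toy UV response areas
```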
For more details about the experiments, please see part 1 in supplementary materials.
Performance on public datasets
Four models, including GraphRXN-concat, GraphRXN-sum, Yield-BERT and DeepReac+, were built on the three public datasets. Datasets 1 and 2 are collections of reaction yields from coupling reactions, while Dataset 3 is a collection of stereo-selectivities from asymmetric reactions. The average R 2, MAE and RMSE values for the respective test sets throughout the ten-fold CV procedure are listed in Table 2.
For Dataset 1, the performance of the GraphRXN-concat model (R 2 of 0.951) was similar to the baseline method Yield-BERT (R 2 of 0.951), but better than the GraphRXN-sum (R 2 of 0.937) and DeepReac+ (R 2 of 0.922) models. For Dataset 2, both GraphRXN-concat (R 2 of 0.844) and GraphRXN-sum (R 2 of 0.838) outperformed the Yield-BERT (R 2 of 0.815) and DeepReac+ (R 2 of 0.827) methods. For Dataset 3, the R 2 of GraphRXN-concat was 0.892, which was better than GraphRXN-sum (0.881), Yield-BERT (0.886) and DeepReac+ (0.853). Among these three metrics, we believe that MAE is more meaningful for chemists, as it gives a possible error between the observed and predicted values. MAE/RMSE may better serve as reference values for chemists to decide whether to conduct an experiment or not. Our GraphRXN-concat model gave better MAE and RMSE values than Yield-BERT and DeepReac+, which demonstrates that the GraphRXN model can provide on-par or slightly better performance over the baseline models. Details of model prediction on each fold are included in Additional file 2: Tables S6-S8.
HTE results
Wet-lab experiments were conducted in this study, and 1558 data points were collected into the ultimate dataset (see Additional file 1). According to the substituted aromatic amines/bromides of the reactants, reactions can be grouped into four groups (G1-G4), i.e. diphenylamine derivatives (reactions between Ph-NH 2 and Ph-Br, G1), phenylpyridine amine derivatives (reactions between Ph-NH 2 and Py-Br, G2), phenylpyridine amine derivatives (reactions between Py-NH 2 and Ph-Br, G3) and 2,2ʹ-dipyridylamine derivatives (reactions between Py-NH 2 and Py-Br, G4). G1 contains 317 reaction points, while the G2, G3 and G4 groups have 419, 401 and 421 reactions, respectively. Figure 4 shows the ratio UV distribution for all four groups, where light color represents low values and dark color corresponds to high values, ranging from 0 to 1. The grey grids represent failed reactions or discarded data; the data filtering policy is supplemented in part 1.6 of the supplementary materials.
For the entire dataset, half of the reaction ratios lie in the range from 0 to 0.2. The ratio UV distribution was not balanced, being heavily condensed at low values, which makes modelling challenging. Among these, 13% of reactions in G1 gave ratios ≥ 0.5, while only 0.7%, 8% and 5% did for G2, G3 and G4, respectively, which indicates that the chosen reaction conditions in HTE may be more suitable for reactions between Ph-NH 2 and Ph-Br.
Performance on in-house HTE dataset
The in-house dataset of 1558 data points was used for modelling, and five-fold CV without replacement was done for the train-test split. Results of GraphRXN and the baseline models are shown in Fig. 5, with per-fold details in Additional file 2: Table S9.
Performance on scarce data
It is well known that deep learning relies on large amounts of data to discover the relationship between variables and outcomes; data scarcity remains a challenging problem in modelling processes in certain areas, especially in the field of reaction prediction. Here, we discuss the stability of the four aforementioned deep learning methods when handling scarce data.
The four groups of the in-house dataset (G1-G4), which are smaller than the other published datasets, were evaluated separately. The performances of GraphRXN and the other baseline models are listed in Table 3; for results of each CV fold on the test set see Additional file 2: Tables S10-S13. The performance of GraphRXN-concat was superior to the other models on G2 and G3, but slightly worse on G1 and G4. It seems that R 2 on small-size datasets can fluctuate considerably, e.g. the R 2 values of the four groups are rather different from each other, while the values of MAE and RMSE are similar across all four groups. The results indicate that a smaller dataset with limited structural diversity might deteriorate the prediction accuracy, while a larger dataset with diverse structures allows a better model to be learned from a larger reaction space. In general, GraphRXN-concat showed superior or on-par performance in handling scarce data, compared to the other deep learning methods.
Variable-length graph representation
Our GraphRXN algorithm can provide variable-length representations that are relevant to the task at hand. Usually, a good representation should be small but dense enough to contain a wealth of information for downstream modelling [69]. Thus, we compared the model accuracy over different sizes of the learned feature, regardless of other aspects of modelling (Fig. 6A). As the vector size climbs from 100 to 900 bits, the results of GraphRXN-concat and GraphRXN-sum remain steady at around 0.7. This diagram points out that vector size only causes subtle changes in model performance. Additionally, GraphRXN-concat still provided the higher accuracy at every vector size. The curves reached a peak at the size of 300, which may indicate that 300 is a suitable representation size at the molecule level. Detailed values of the evaluation metrics are supplemented in Additional file 2: Table S14.
Aggregation methods for reaction vector
Model processing is sensitive to the ordering of vectors [69], and a different order of vectors would render different results, all else being equal. In this study, two aggregation methods were utilized to encode the reaction vector once the graph representation was ready. A specific order must be set when concatenating reaction vectors; for example, in this study we used the vector order aromatic amines, bromides and products. Summing all components' vectors together is a possible way to eliminate the effect of the input order. We then compared the two aggregation methods at the same total length (Fig. 6B). When the downstream model took over the same length of reaction vectors, GraphRXN-concat still provided the higher accuracy, except at 100 bits, where the vector is too small for molecules to contain complete information. A possible explanation is that summing all the vectors up may weaken the representation ability bit-wise and neglect the relationships between reaction components. According to the existing results, concatenation would be more suitable for characterizing chemical reactions.
Conclusion
In this work, GraphRXN, a novel computational framework, is proposed to assist the automation of chemical synthesis. Regardless of the reaction mechanism, GraphRXN directly takes the 2D molecular structures of organic components as input, learns task-related representations of chemical reactions automatically during training, and achieves on-par or slightly better performance over the baseline models. In addition, we used the HTE platform to build a standardized dataset, and GraphRXN also delivered good correlations. Although a chemical reaction goes through certain transition states, it seems that the model can directly predict reaction outcomes using structural features of reaction components without the guidance of mechanism. This study has demonstrated that deep learning models can yield moderate to good accuracy in reaction prediction regardless of the limited size of the datasets and the many complex influencing variables. These results have motivated us to apply this HTE + AI strategy to enable cost reduction and liberate the scientific workforce from repetitive tasks in the future. The source code of GraphRXN and our in-house reaction dataset are available online.
(a) For the node v at step k, its intermediate message vector m^k(v) is obtained by aggregating the hidden states of its neighbouring edges at the previous step, h^{k−1}(e_{u,v}); the previous hidden state h^{k−1}(v) is then concatenated with the current message m^k(v) and fed into a communicative function to obtain the current node hidden state h^k(v). (b) For the edge e_{v,w} at step k, its intermediate message vector m^k(e_{v,w}) is obtained by subtracting the previous edge hidden state h^{k−1}(e_{v,w}) from the hidden state of its starting node h^k(v); the initial edge state h^0(e_{v,w}) and the weighted vector W · m^k(e_{v,w}) are then added up and fed into an activation function (ReLU) to form the current edge state h^k(e_{v,w});
Fig. 1
Fig. 1 Model architecture of GraphRXN
Fig. 2
Fig. 2 General workflow of HTE process
Fig. 3
Fig. 3 Reaction scheme and substrate scope
Fig. 4
Fig. 4 Heatmap of the ratio UV distribution for the in-house reaction dataset, where the prefix "A" on the X-axis represents amine, and the prefix "B" on the Y-axis represents bromide
Fig. 5
Fig. 5 Results of GraphRXN and other baseline models on the in-house HTE dataset. A Evaluation metrics over five-fold CV on the test set. B Test set plots over five-fold CV of GraphRXN-concat and GraphRXN-sum
Fig. 6 A
Fig. 6 A Variance of model performance with different vector sizes. Vector size ranges from 100 to 900 bit, with 100 bit as the interval. B Model performance when using either concatenation or summation to construct reaction vectors
Table 1
Description of reaction datasets. Datasets 1-3 are publicly available datasets, and dataset 4 was generated by our in-house HTE platform
Table 2
Comparison of model performance on public datasets 1-3. The values of R 2, MAE and RMSE refer to the mean and standard deviation across the folds
Table 3
Comparison of model performance over the four separate reaction groups of our in-house dataset. The values of R 2, MAE and RMSE refer to the mean and standard deviation across the folds. Bold emphasis represents the best model performance in each group
"Computer Science"
] |
Propagation in Diagonal Anisotropic Chirowaveguides
A theoretical study of electromagnetic wave propagation in a parallel plate chirowaveguide is presented. The waveguide is filled with a chiral material having diagonal anisotropic constitutive parameters. The propagation characterization in this medium is based on an algebraic formulation of Maxwell's equations combined with the constitutive relations. Three propagation regions are identified: the fast-fast-wave region, the fast-slow-wave region, and the slow-slow-wave region. This paper focuses entirely on propagation in the first region, where the modal dispersion equations are obtained and solved. The cut-off frequency calculation leads to three cases of plane wave propagation in the anisotropic chiral medium. The particularity of these results is the possibility of controlling the appropriate cut-off frequencies by choosing adequate physical parameter values. The specificity of this study lies in the confirmation of the bifurcated modes and the possible contribution to the design of optical devices such as high-pass filters, as well as in the appearance of both positive and negative propagation constants. The negative constant is an important feature of metamaterials which shows the phenomenon of backward waves. Original results for the biaxial anisotropic chiral metamaterial are obtained and discussed.
Introduction
Bianisotropic materials are special types of media whose physical parameters (permittivity, permeability, and magnetoelectric parameters) are tensors. They are characterized by constitutive equations that present a coupling between electric and magnetic fields [1]. These materials exhibit interesting applications in electromagnetic wave propagation [2][3][4].
Indeed, the chiral medium is a subset of the bianisotropic case. By definition, chirality is a purely geometrical notion, due to the lack of bilateral symmetry of an object [2]. So, a chiral object is a three-dimensional body that cannot be superposed on its mirror image by translation or rotation [2]. Furthermore, the chirality concept leads to the notions of left and right waves, where RCP (i.e., right circularly polarized) and LCP (i.e., left circularly polarized) waves each have a different refractive index and phase velocity. The two corresponding refractive indices are n± = √(µrεr) ± κ [5] (µr: relative permeability, εr: relative permittivity, κ: chirality parameter).
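In display form, and anticipating the condition exploited below for backward-wave behaviour (the display notation is ours):

```latex
n_{\pm} \;=\; \sqrt{\mu_{r}\,\varepsilon_{r}} \;\pm\; \kappa\,,
\qquad
n_{-} < 0
\quad\Longleftrightarrow\quad
\sqrt{\mu_{r}\,\varepsilon_{r}} \;<\; \kappa\,.
```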
In fact, the anisotropic chiral medium plays a crucial role in obtaining negative refractive index (left-handed) materials, which have opened new horizons in optics and become the subject of important scientific interest [6][7][8][9]. It was stated through theoretical and experimental results that anisotropic chiral media can have a negative refraction index like isotropic media with both negative permittivity and permeability. The negative refractive index can be reached by either increasing the chirality parameter or operating near the electric and/or magnetic resonance frequency zones, where the value of √(εµ) becomes smaller than the chirality parameter κ, which becomes strong around the resonance frequencies, as reported in [10,11]. Generally, natural chiral materials, such as quartz and sugar solution, have κ < 1 and √(εµ) > 1, so negative refraction is not possible in these materials [12]. However, with artificial chiral metamaterials, macroscopic parameters can be clearly identified. Moreover, the notion of chiral nihility, where the values of ε and µ of the medium are small and very close to zero, makes the refraction index negative for one of the circular polarization modes, even when κ is small [5,13]. In addition, it is reported that it is simpler to achieve and realize negative refraction in chiral materials than with regular metamaterials [14]. In this work, a special interest is given to the study of wave propagation in a chiral-core waveguide due to the diversity of its physical parameters. In this scope, wave propagation in an anisotropic chiral medium is modeled and studied, where the tensors of chirality, permittivity, and permeability are diagonal. The A-formalism [15] related to the proposed structure is used to facilitate the analytical calculation procedure of Maxwell's equations. Curves of normalized propagation constants are plotted with respect to normalized frequency, where positive and negative propagation constants are presented.
Formulation of the Problem
In this section, we analyze the parallel plate chirowaveguide depicted in Figure 1, with infinite perfectly conducting planes separated by the chiral material thickness d (placed at ±d/2). Propagation is along the waveguide axis, whereas the field quantities are all independent of the transverse coordinate parallel to the plates [2].
In general, a bianisotropic medium is characterized by the following constitutive equations, as presented in [15]:

D = ε₀[ε]E + √(ε₀µ₀)([χ] − j[κ])H,
B = µ₀[µ]H + √(ε₀µ₀)([χ] + j[κ])E, (1)

where E, H, D, and B are, respectively, the electric field, the magnetic field, the electric flux density, and the magnetic flux density.
[ε] and [µ] are, respectively, the electric permittivity and the magnetic permeability tensors; ε₀ and µ₀ are the free space permittivity and permeability, respectively.
[χ] is the nonreciprocity (Tellegen) tensor and [κ] is the chirality (Pasteur) tensor. In fact, this study is based on a Pasteur medium, which is a reciprocal anisotropic chiral medium (i.e., [χ] = 0 and [κ] ≠ 0). Hence, (1) becomes

D = ε₀[ε]E − j√(ε₀µ₀)[κ]H,
B = µ₀[µ]H + j√(ε₀µ₀)[κ]E, (2)

and the permittivity, permeability, and chirality tensors of the considered medium are diagonal, [ε] = diag(εx, εy, εz), [µ] = diag(µx, µy, µz), [κ] = diag(κx, κy, κz) (3). After substantial algebraic manipulations of Maxwell's equations, considering the constitutive equations, we obtain a set of two coupled differential equations for the components of the electric and magnetic fields. The examination of these two coupled equations shows that the transverse chirality component is the only coupling parameter which enables the appearance of the bifurcated modes; its cancellation suppresses the coupling even in the presence of the other chirality parameters. Introducing suitable wave-field combinations, the following decoupled equations (6) are obtained, where k± are the right and left wave numbers, n± are the refractive indices of the RCP and LCP plane waves, and k₀ and β are the free-space and medium propagation constants, respectively.
Table 1: Conditions and solutions for the three chiral regions.
Cases Conditions and velocities Types of solutions
Fast-fast-wave region, fast-slow-wave region, and slow-slow-wave region (see text). V is the waveguide phase velocity; V_LCP and V_RCP are the LCP and RCP velocities, respectively, along the propagation axis; the remaining symbols in the solutions are integration constants.
The solutions of the differential equations (6) are given in Table 1, taking into account the three cases imposed by the chiral medium [2]: the fast-fast-wave, fast-slow-wave, and slow-slow-wave regions.
In this work, we deal only with the first case (the fast-fast-wave region). The longitudinal and transverse field components can then be expressed from the solutions of Table 1, with the propagation constant β assumed to be a real-valued quantity. The boundary conditions imposed by the adopted structure [2,16,17] require the tangential electric field components to vanish at ±d/2, where d is the chiral material thickness. The enforcement of these conditions leads to a homogeneous 4 × 4 matrix system equation, which has a nontrivial solution only if the corresponding determinant equations (14) are satisfied. The solutions of these equations lead to two modes: RCP and LCP.
Results and Discussions
In order to investigate the propagation characteristics in the anisotropic chiral medium, different types of chiral media are chosen, considering various values of the physical parameters.
Bi-Isotropic Case.
In the bi-isotropic case, the chirality parameter κ, permeability µ, and permittivity ε are scalars; the dispersion equation (14) and the resulting cut-off frequencies then reduce exactly to those reported in [2], where Ω = ωd√(µ₀ε₀) (so that Ω/2π = fd√(µ₀ε₀)) is the normalized frequency.
Our results for the simple case (chiral isotropic medium) shown in Figure 2 are in good agreement with those presented in [2]; this confirms our calculations. The bifurcated modes (LCP and RCP) are well distinguished and start from the same cut-off frequencies; this is an essential feature of chiral materials.
As illustrated in Figure 3, the effect of the chirality on the RCP and LCP branches of the first mode is quite different. For the first one (Figure 3(a)), the RCP mode decreases, keeping the same shape, until the condition √(εµ) > |κ| is no longer satisfied for sufficiently large chirality, where the mode becomes evanescent; whereas for the second mode (Figure 3(b)), the LCP one becomes quasi-constant with the increase of κ and changes sign (β_LCP < 0) for sufficiently large chirality. This can be explained by the curves and signs of n+ and n− shown in Figures 3(c) and 3(d).
It is worth noting that for high values of κ, such that √(εµ) < |κ|, the chiral medium behaves as a metamaterial, for which the first mode becomes evanescent and the second becomes a backward wave.
Bianisotropic Case.
In this case the chirality parameter κ, permeability µ, and permittivity ε are tensors. Original results concerning the expressions of the cut-off frequencies have been achieved. The particularity of these results is the possibility of controlling the specific cut-off frequencies by the choice of adequate physical parameters. For each case, the specific cut-off frequency as a function of the constitutive parameters is clearly shown in Table 2. Our cut-off frequency calculation in the first case results in an expression that is a function of the optic-axis components of ε and µ, which coincides with the conventional bi-isotropic formula [2]; it is obtained from the bianisotropic one when the products √(εµ) along the optic axes are equal and √(εµ) ≠ |κ|. The second case results in a new and interesting expression of the cut-off frequency that is a function only of the chirality parameter κ; the latter cancels the direct effect of the two parameters (permeability µ and permittivity ε) on the cut-off frequency value. The chiral parameter remains the only influencing factor. Therefore, it is easier to obtain much higher cut-off frequencies with a low chiral parameter, leading to important and interesting results that can be used in designing optical devices such as high-pass filters. The third case is a combination of the other two cases.
Considering the conventional cut-off frequency formula obtained in the bianisotropic case (row 1 of Table 2), the effect of chirality on the propagation constant in the fast-fast-wave region is treated through the three following examples. Figure 4 shows curves of conventional RCP and LCP propagation constants, even with different values of the physical parameter tensors. In Figure 5, we notice that the two modes behave differently even in this case. The LCP appears earlier as a backward mode (β_LCP < 0) with the RCP as an evanescent mode (α_RCP ≠ 0 and β_RCP = 0, where α_RCP represents the losses); then the latter turns itself into a backward mode (α_RCP → 0 and β_RCP < 0). So the phase velocities of both backward modes (RCP and LCP) are negative (i.e., ω/β_LCP < 0 and ω/β_RCP < 0). This means that backward wave propagation, or a negative refraction index (metamaterial medium), can be achieved using a bianisotropic chiral medium with √(εµ) < |κ|. We notice that this result agrees with the result of the isotropic case presented in [18]. The same behaviour is confirmed in Figure 6 (appearance of the backward modes). Consequently, κ is the only parameter influencing the nature of the propagating modes that allows switching from an anisotropic chiral medium to a metamaterial.
Conclusion
This study deals with different cases of wave propagation in a parallel plate waveguide filled with an anisotropic chiral medium, where three cases of study are considered using specific physical parameters. Original results for these cases have been obtained from the examination of the cut-off frequencies. The first originality of this research work is the consideration of the three constitutive biaxial tensor parameters. This case of anisotropy has led to original and interesting results, where it is possible to control the specific cut-off frequencies by the choice of adequate physical parameters. The second originality is the newly calculated expression of the cut-off frequency versus the chirality in a special case. This result will undoubtedly contribute to the design of optical devices such as high-pass filters, since the effect of the chirality cancels the direct effect of the electric permittivity and magnetic permeability in the cut-off frequency expression. The third originality is the possibility of switching from the conventional anisotropic chiral medium to a left-handed medium by a simple choice of the physical parameters satisfying the condition √(εµ) < |κ|; κ, the coupling parameter, is the only parameter influencing the nature of the propagating modes and allows the switch to metamaterial behaviour.
Figure 1 :
Figure 1: Parallel plate waveguide filled with anisotropic chiral material. | 2,681 | 2017-03-08T00:00:00.000 | [
"Physics",
"Engineering"
] |
Maximization of Service Flows Rates as a Solution of Network Capacity Allocation Problem
: The problem of network capacity allocation is a well-known problem in the information and telecommunication area. An emerging set of new services, such as cloud-based services, Internet-of-Things services, health-care services, and delivery-support services, is the key motivator of network capacity growth. In general, network capacity allocation is the static solution to the general dynamic resource-balancing problem. As user demands change rapidly, the network should be capable of supporting all traffic in accordance with service quality indicators. The solution of the formulated problem is the set of maximum rates of the corresponding service flows that compete with other service flows. This information is essential for the implementation of service-oriented resource planning, as it enables calculation of the customer audience if the values of service popularity and the subscriber distribution in the access network are predefined.
Introduction
In modern telecommunication networks, data flow analysis for service quality provision to end users is an important and sophisticated task. The flow rate depends on the number of users connected to the local network segment, the types of services and the data they transmit. Modern telecommunication networks are content-oriented; therefore, it is important for users to get high-quality information and telecommunication services. Some amount of resource has to be allocated to the service flows to deliver these services [1]. Consequently, management and allocation of network resources among multiple service flows to assure high-quality service delivery is an urgent scientific challenge [2,3].
The problem considered in the paper may actually be formulated as a question: in the case of multiple service flows, how much throughput can we allocate to each service flow at every point of the network?
The paper is related to the research area that defines the throughput fractions used when users access services [4,5]. Some researchers propose methods and techniques of network resource allocation excluding the impact of the routing process and multiservice traffic characteristics [6,7]. However, these papers do not consider the impact of the logical structure on service flow transmission.
It is a very important point to provide an abstraction of the problem to assure its independence from any network technologies and/or protocols; these can only be applied at the stage of some case study.
The authors of the current paper have previously proposed a method of service-oriented resource planning [8], a concept of service assurance in heterogeneous service-oriented systems [9] and a methodological basis for heterogeneous network capacity distribution among service flows [10]. Based on the mentioned papers, we performed a detailed numerical analysis of the allocation of network resources to service flows using a representation of the problem as a linear programming problem. This approach has a limitation in use because of the nonlinear and stochastic nature of network processes. Nevertheless, it can be applied when service flow rates are constant values. This situation can only be observed during very short time series. Therefore, the solution of the problem has a temporal effect and should be recalculated each time the service flow rates change significantly. The main advantage of the proposed approach lies in the simplicity of its operations and, consequently, in short computation time. Typically, computing the solution of a nonlinear optimization problem is much more time-consuming, even though it can be applied over a long-term perspective. Its computational complexity is much higher too. In many cases, it is too hard to implement the nonlinear solution in practice. Therefore, a major part of the scientific community tries to convert nonlinear problems to linear ones if possible. This process requires the simplification of solution limitations to linear functions too.
Methods and Techniques
The network resource is represented by the overall capacity of the channels that connect network devices. The amount of resource allocated to a flow depends on the predefined network structure (network topology). The service flow rate depends on the channel throughput and is limited by other flows between the same pair of nodes. The set of all routes forms the network logical structure. This logical structure is the limiting factor, as physical resources cannot be utilized completely after the logical structure is organized. Thus, the process of logical structure formation is the determining factor in service flow rate maximization. The problem is common for all types of networks, whether wired or wireless. The distinction only comes when we analyse the network-specific protocols used for logical structure formation. In the current case, we will analyse the most common network layer protocols, such as RIP v.2 and OSPF. However, the covered problem is not limited to such protocols; it may be extended using wireless resource allocation protocols. In general, the current problem arises whenever multiple access to shared resources is performed. Its solution may be achieved by maximizing the resources allocated to a specific demander. In our case, the demander is a service flow which should be transmitted over the network.
It is known that the formation of the logical structure is either static (which is very rare), or dynamic (using dynamic routing protocols), or a combination of both approaches. Dynamic routing can be applied with different route selection criteria: the metric of the lowest number of hops along the route (RIP - Routing Information Protocol), or the channel conditions (OSPF - Open Shortest Path First). The logical structures formed by each of the protocols are usually not the same, and therefore the fraction of allocated physical resources may differ because of different network loads; hence the routing protocol directly impacts the desired rate of each service flow.
By a flow we mean the traffic of information packets between a pair of nodes which is transmitted over a single route.
Identifying the flow with the route, the problem of network resource allocation can be formulated in terms of a linear programming problem as follows: maximize the rate of each service flow at the input of some network node, providing simultaneous existence and equitable competition of the flows, and estimate the maximum number of clients using the service with multiple levels of service quality.
A telecommunication network is represented by a graph $G(V, E)$ with nodes $i, j \in V$; the routes $\mu(i, j)$ between node pairs are formed by the dynamic routing algorithm. The logical structure restricts the use of the physical resources, represented by a vector $x$. The elements of this vector may be computed by solving a linear programming problem in the standard form
$$\min f(x) \quad \text{subject to} \quad A x \le b, \quad A_{eq} x = b_{eq}, \quad lb \le x \le ub,$$
where $f(x)$ is the objective function; $A$ and $A_{eq}$ are the coefficient matrices of the linear inequality and equality constraints; $x$ is the vector of decision variables; and $lb$ and $ub$ are its lower and upper bounds. The linear objective function is defined as
$$f(x) = -\sum_{i,j} x_{i,j},$$
where $i, j$ are node numbers and $x_{i,j}$ is a variable defining the rate of the service flow between the pair of nodes $(i, j)$; minimizing $f(x)$ maximizes the total flow rate. The solution must satisfy a system of limitation conditions: a limitation on each node's performance; a limitation on channel throughput,
$$\sum_{\mu(i,j)\,\ni\,(k,l)} x_{i,j} \le C_{k,l},$$
where $C_{k,l}$ is the capacity of the channel $(k, l)$ (in arbitrary units, a.u.) and $\mu(i, j)$ is the route between the node pair $(i, j)$; and a limitation on the competition of information flows, which caps each flow at a proportional share of a shared channel,
$$x_{i,j} \le \frac{C_{k,l}}{N^{\max}_{\mu}(k,l)},$$
where $N^{\max}_{\mu}(k,l)$ is the maximum number of flows transmitted in the channel $(k, l)$ belonging to a route $\mu(i, j)$ (e.g., a coefficient of 0.5 when two flows share a channel). These conditions were imposed to ensure equal competition among network flows within the framework of linear programming: we observed that without them, when two or more flows are transmitted in the same channel, the flow on the shortest path takes all the resources and forces down the other flows.
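To make the formulation concrete, the following minimal sketch assembles and solves a toy instance of this LP in Python with SciPy's linprog (an analogue of the MATLAB workflow described below); the three flows, two channels, and capacities are invented for illustration and do not correspond to the paper's 9-node network.

    # Sketch of the flow-rate maximization LP (illustrative; toy instance).
    from scipy.optimize import linprog

    capacities = [10.0, 8.0]          # channel capacities C (a.u.)
    routes = [[1, 0],                 # flow 0 uses channel 0
              [1, 1],                 # flow 1 uses channels 0 and 1
              [0, 1]]                 # flow 2 uses channel 1
    n_flows, n_channels = len(routes), len(capacities)

    # Objective: maximize the sum of flow rates, i.e. minimize f(x) = -sum(x).
    c = [-1.0] * n_flows

    # Throughput limits: flows routed through a channel must fit its capacity.
    A_ub = [[float(routes[k][ch]) for k in range(n_flows)]
            for ch in range(n_channels)]
    b_ub = list(capacities)

    # Equal-competition limits: a flow sharing a channel with N_max flows in
    # total may take at most C / N_max of that channel's capacity.
    for ch in range(n_channels):
        n_max = sum(routes[k][ch] for k in range(n_flows))
        for k in range(n_flows):
            if routes[k][ch] and n_max > 1:
                row = [0.0] * n_flows
                row[k] = 1.0
                A_ub.append(row)
                b_ub.append(capacities[ch] / n_max)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_flows)
    print(res.x)                      # maximal rates under equal competition

In this toy instance the solver returns rates of 5, 4, and 4 a.u.: flow 1, which crosses both shared channels, is held to its proportional share instead of being forced down by the flows on shorter paths.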
A solution to the formulated problem may be obtained using the MATLAB system, chosen because it includes functions for linear optimization and enables operations with graphs. For this purpose we used two libraries, the Bioinformatics Toolbox [11] and the Optimization Toolbox [12]. The Bioinformatics Toolbox contains methods for creating, analysing, processing, and visualizing graphs; the Optimization Toolbox implements numerous algorithms of linear and nonlinear optimization. Using these methods, we developed a software model of the telecommunication network and solved the problem of allocating network resources among non-prioritized service flows so as to maximize their rates.
Using the biograph() function, the network is represented as a graph object, and using the shortestpath() function we calculated the shortest paths between each pair of nodes in this graph. The resulting routes, together with the weights of the graph edges, are the input data for the optimization task. Since the problem belongs to the class of linear programming, we used the linprog() function based on an interior-point algorithm [12].
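As an illustration of the graph-and-routing step, the sketch below uses the Python library networkx in place of the Bioinformatics Toolbox functions; the topology and capacities are hypothetical, with hop-count paths approximating RIP's metric and inverse-capacity edge weights approximating OSPF's throughput-aware metric.

    # Illustrative analogue of the biograph()/shortestpath() workflow.
    import networkx as nx

    G = nx.Graph()
    edges = [(1, 2, 10), (2, 3, 4), (1, 3, 6), (3, 4, 8), (2, 4, 2)]
    for u, v, cap in edges:                    # (node, node, capacity in a.u.)
        G.add_edge(u, v, capacity=cap, cost=1.0 / cap)  # OSPF-like edge cost

    src, dst = 1, 4
    rip_route = nx.shortest_path(G, src, dst)                  # fewest hops
    ospf_route = nx.shortest_path(G, src, dst, weight="cost")  # throughput-aware
    print(rip_route, ospf_route)               # e.g. [1, 2, 4] versus [1, 3, 4]

Computed over every node pair, these routes and the edge weights form exactly the input that the LP above consumes.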
Solving the capacity allocation problem should be performed for a specific implementation of the telecommunication network. In our case study it is represented by a weighted graph with 9 vertices and 16 edges (Figure 1). Based on this graph we calculated the routes between each pair of nodes for two cases: the use of the RIP routing protocol and the use of the OSPF routing protocol. We assume that the network uses only single-path routing.
Results and Discussion
According to the formulation above, the objective function and the linear restrictions are formed for the given graph. Based on them we solved the problem of allocating the network capacity among service flows, with the set of their rates as the quantity to be maximized; these rates are represented by the vector $x$. The solutions of the optimization problem obtained with the interior-point method are given in Tables 1 and 2. The optimal solution of the presented linear programming problem is reached after 10 iterations in the case of OSPF routing and after 9 iterations in the case of RIP routing.
The solution for the RIP protocol is depicted in Figure 2 (bidirectional flows; blue corresponds to the flow rates from source vertex to destination, yellow vice versa) and for the OSPF protocol in Figure 3.
The presented results show the maximal flow rates that may be achieved in the network of Figure 1 when using either the RIP or the OSPF protocol. We observe that the rates differ in each case, which means that the studied protocols impose different restrictions on network resource utilization, as noted above. The numerical results give a better understanding of the traffic processes in the given network. Knowing the maximal rates of the service flows transmitted in a specific network makes it possible to manage services in the upper network model layers so as to match user demands. For example, if the total user demand is much higher than the achievable maximal service flow rate, we may either apply a prioritization strategy, serving users with high priority first, or lower the service quality to ensure that the required number of network sessions can be served.
Figure 4 depicts a comparative analysis of the results. The logical next step is to determine the reasons for the observed distinctions. From what is known about network routing, they may be caused by the metrics that RIP and OSPF use to calculate the shortest path: OSPF uses a metric that takes channel throughput into account, whereas RIP operates with the hop count. Therefore, channels (edges, in graph terms) with low throughput may remain unused when the logical structure is formed by OSPF. Let us check this assumption by correlating the obtained optimization solutions with the network structure, in order to analyze the impact of the calculated service flows on the use of network resources.
When flows with the calculated rates are transmitted across the studied network fragment, the following resource utilization is observed in each node: resources occupied by the outgoing flow, resources occupied by the arriving flow, resources occupied by transit flows, and unused resources. For flows transmitted across the network with RIP routing, the obtained resource allocation has the form shown in Figure 5a.
Service flows with the calculated rates in a logical structure formed by the RIP protocol use all available network resources (in terms of throughput), because the nodes contain no unused resources. As shown in Figure 5, node №5 is involved only in transmitting input and output flows, while the other nodes also carry transit traffic. Since each node is the initiator of a flow, the presence of transit flow at a node limits the resources that can be allotted to the service flow formed at that node. We observed that as the degree of a node (vertex, in graph terms) increases, the fraction of transit traffic at that node also increases (№1, №4, №6). This happens due to the peculiarities of RIP operation: it calculates routes by the criterion of the minimal number of hops, maximally utilizing the vertices with the highest degree. These vertices (switching centers) can thus become bottlenecks; the obtained solution of the formulated problem can therefore reveal weaknesses in the design of the physical structure of a telecommunication network. For physical structures containing vertices of mixed degrees, it is recommended to identify those whose degree is maximal and to provide the adjacent channels with slightly higher throughput than the rest. This reserves throughput for transit traffic and reduces its impact on the rate of the service flow formed at the node. The formulated problem was also solved for the logical structure formed by the OSPF protocol, and the obtained results differ from those described above for RIP: as can be seen in Figure 5b, almost every node contains unused resources.
Detailed analysis showed that unused resources are present at nodes whose adjacent channels have minimal throughputs. As expected, these channels are rejected by the OSPF protocol during shortest-path selection (the edges marked with negative weights in Figure 5b). The explanation is that OSPF selects routes without taking into account the other flows carried on individual channels. Also, as shown in Figure 5b, a significant fraction of transit traffic passes through nodes №2 and №5, because their adjacent channels have the highest throughputs. OSPF thus reveals redundant edges in the physical structure of the network that could be removed; on the other hand, the result shows an inconsistency between the physical and logical structures. In that case we recommend using weighted multipath routing to maximize the utilization of physical resources. We thus observe that the logical structure formed by a routing protocol imposes significant limitations on the availability of physical resources for service flow transmission.
Conclusions
This paper proposes a solution to the important problem of resource allocation among network service flows that provides them equal competition for the given physical resources, in order to estimate the maximum number of clients using a service with multiple levels of service quality. The task is formulated in terms of graph theory and solved by linear programming for a telecommunication network with 9 nodes and 16 channels of predetermined throughput.
The paper considers the impact of the logical structure on the availability of network resources. The problem solution shows that when the logical structure is formed by the RIP protocol, the service flows under equal competition reach their maximum rates and use all the available network resources. However, for a logical structure formed by the OSPF protocol, not all available physical resources are used, as the routing protocol ignores the edges with minimal throughput, and these edges are therefore not included in the problem solution. This fact makes it possible to identify redundancy in the physical topology and reduce the cost of the designed network, or to change the routing policy by introducing k-path routing through the unused channels, thereby increasing network performance.
Further work is related to developing methods of service quality assurance for end users within the calculated rate of the group service flow (each user will obtain a portion on the basis of its priority). We assume the use of nonlinear optimization methods to describe the models of service flow distribution among end users.
Figure 1: The graph of the studied telecommunication network.
Figure 2: Service flow rates between each pair of nodes in the logical structure formed by the RIP protocol.
Figure 3: Service flow rates between each pair of nodes in the logical structure formed by the OSPF protocol.
Figure 4: Comparison of the results for RIP and OSPF protocols.
Figure 5: The network resource allocation in the logical structure formed (a) by the RIP protocol and (b) by the OSPF protocol (blue: service flow; light blue: transit traffic from other service flows).
Table 1. The solution of the service flow rate maximization problem in the case of OSPF routing.
Table 2. The solution of the service flow rate maximization problem in the case of RIP routing.
"Computer Science",
"Engineering"
] |
Segmentation of Passenger Electric Cars Market in Poland
Striving to achieve sustainable development goals and incorporating care for the environment into the policies of car manufacturers have forced the search for alternative sources of vehicle propulsion. One way to implement a sustainable policy is to use electric motors in cars. The observable development of the electric car market provides consumers with a wide spectrum of choice of a specific model that meets their expectations. Currently, there are 53 different electric car models on the primary market in Poland. The aim of the article is to present the performed market segmentation, focused on identifying similarities in the characteristics of electric car models on the Polish market and proposing their grouping. Based on classification by a hierarchical cluster analysis algorithm (Ward's method, squared Euclidean distance), a division of the market into 2, 3, and 4 groups is proposed. The segmentation of the Polish EV market occurs not only in terms of the size and class of the car but primarily in terms of the performance and overall quality of the vehicle. The performed classification did not change when the price was additionally included as a variable. It is also proposed to divide the market into 4 segments named Premium, City, Small, and Sport. The segmentation carried out in this way helps to better understand the structure of the electric car market.
Introduction
The digital economy and society, which are expanding on an increasing scale, are striving to achieve sustainability by searching for appropriate solutions that will ensure sustainable development. This applies not only to the economic sphere but also to logistics. As more than 90% of vehicles all over the world run on oil [1], there is a noticeable desire to power vehicles with alternative energy sources. As a result, the subject of electric vehicles (EVs) is gaining popularity. Modern ways of powering vehicles, such as electrically powered engines, have the potential to be a solution consistent with the sustainable development policy [2,3]. As the problem of rising levels of global air pollution is serious, the use of electric cars can be a response to the achievement of sustainable development goals. While driving, electric cars do not emit harmful gases (such as carbon dioxide or nitrogen oxides), thus contributing to the limitation of the expansion of urban smog and the greenhouse effect [4][5][6]. Furthermore, the purchase, possession, and use of an EV may entail positive effects, such as tax breaks, the ability to drive on bus lanes, or free parking in city centers [7]. The most-explored subjects of scientific research in the field of the EV market are the role of electrification in public transport [8][9][10][11][12][13][14], personal transport devices (for example, electric scooters) [15,16], electric bicycles [17][18][19], and electric cars. Most studies on electric cars raise issues such as battery life and its optimization [20][21][22][23][24], charging speed [25], EV pricing policy [26], charging stations for electric cars [27], legal regulations and facilitations for holders [7], business models in electric mobility [5,28], and ecology and environmental impact [1].
However, from the perspective of economic sciences, the factors that affect a consumer's final choice of an electric car are the most significant. Price is the basic and most important determinant, with the greatest impact on the decision to purchase a given good, and this also applies to electric vehicles [29,30]. Other factors that may increase a consumer's willingness to buy a given car model include car range, flexibility (including acceleration), boot size, and brand prestige [29,31].
It was noticed that the growth in sales of electric cars in Poland and other Central and Eastern European countries is not very dynamic. The conducted analysis may be seen as a starting point for examining the need for the implementation of subsidies by decision-making bodies and for investments in infrastructure for electric cars.
The research gap identified on the basis of the literature review is the lack of a comprehensive segmentation of the Polish electric car market. For example, the conventional segmentation of automotive markets carried out in Europe (which distinguishes "classes" of cars) is usually based on prices and model sizes only. The segmentation proposed by the authors also takes into account a taxonomic analysis of similarities across many other vehicle characteristics, such as engine power, load capacity, or battery and charging parameters. The study carried out in this way has an applied character and shows, for example, that some vehicle models differ little in their parameters but vary in terms of price, which may indicate a luxury good in the case of the more expensive model. This study fills this gap, and its purpose is to present the performed segmentation, during which the similarity of the characteristics of electric car models on the Polish market was identified and a classification of these cars was proposed. The following research questions were posed:
What are the technical parameters of an electric car in Poland? What are the similarities among the available EVs in Poland? What kind of car groups can be distinguished? How will the consideration of the price change the affiliation to the specified car groups?
Full Electric Cars' Properties
Only full electric vehicles (FEVs) were included in the dataset, which was prepared specifically for the study. This means that the collection does not contain data on plug-in hybrid cars or electric cars with so-called "range extenders" (please see Appendix A for more information about "range extenders"). Hydrogen cars were also not included in the dataset, due to the insufficient number of mass-produced models and the different (compared to EVs) specificity of the vehicle, including the different charging methods.
The data in the collection were gathered specifically for the purpose of the study and come from the primary market. The study is complete, relevant, and up to date. The dataset includes cars that, as of 2 December 2020, could be purchased new at an authorized dealer, and those available in public and general presale, but only if a publicly available price list with equipment versions and full technical parameters existed. The list does not include discontinued cars that cannot be purchased new from an authorized dealer (including when they are not available in stock). The subject of the study is only passenger cars whose main purpose is to transport people (thus excluding, among others, vans intended for professional deliveries).
The dataset of electric cars includes all fully electric cars on the primary market, compiled from official materials (technical specifications and catalogs) provided by automotive manufacturers licensed to sell cars in Poland. These materials were downloaded from their official websites. Where the data provided by the manufacturer were incomplete, the information was supplemented with data from the SAMAR AutoCatalog (additional information about the SAMAR AutoCatalog can be found in Appendix A). The database, consisting of 53 electric cars and 22 variables describing them, is presented in Table 1. The prices in the dataset are the base prices for the given engine (a commentary on the relation between vehicle equipment and engine version can be found in Appendix A); government surcharges and other subsidies, as well as manufacturer/dealer discounts, are not taken into account.
If the manufacturer provided two values of maximum range (distinguishing between the urban and combined cycles), the maximum range in the combined cycle was taken into account. When only one range was given by the car producer (without defining whether it refers to the urban or combined cycle), the maximum value of that range was applied for research purposes.
The luggage compartment capacity is based on the VDA standard (with the seats folded out, up to the window line). The minimum vehicle weight was taken without the driver (if the manufacturer reported the weight with the driver, 60 kg was subtracted from this value to normalize the data). The capacity of the battery is taken as its total value, not its usable value. When the manufacturer provides both nominal power and instantaneous power (using launch control), the nominal power value is used; nonetheless, the acceleration figure reflects the maximum engine power (with launch control).
In the dataset, the tire size refers to the basic tires for the given engine version. Energy consumption was taken according to the WLTP standard in the combined cycle; when the manufacturer provided a range of values, its middle value was adopted for research purposes.
In addition to the remarks on the overall data collection methodology, a few unusual cases should be identified. The Hyundai Kona Electric has two versions of the 64 kWh engine, produced in South Korea and in the Czech Republic; they do not differ in power but do differ in range and price, and the list adopts the Hyundai Kona Electric with the Czech-made engine. As the Peugeot, Citroën, and Opel brands currently belong to the same concern (PSA), it was assumed that the Peugeot e-Traveler, Citroën ë-Spacetourer, and Opel Zafira-e Life would be treated as one vehicle model. Due to the presence of several length versions of the Mercedes-Benz EQV and Citroën ë-Spacetourer models, the "long" length was assumed for the first model and the "M" length for the second. For the Smart fortwo, only the hard-top version was considered, not the convertible. In addition, the Citroën ë-C4 is equipped with 16-inch front wheels and 17-inch rear wheels (in all trim levels); 16 inches was used for unification. For all additional comments on the unusual cases, see Appendix A.
The Course of the Study
The first step of the analysis was to examine the basic characteristics of the distributions of the variables. Descriptive statistics served to measure the values of the metric variables, and frequency distributions were used to describe the non-metric variables. To be included in the classification process, a quantitative variable had to meet the following conditions: 1. kurtosis not exceeding 2.5, 2. coefficient of variation of at least 10%, 3. not more than 10% of the data missing.
Meeting these criteria ensures that the information on a given car parameter is characterized by satisfactory differentiation and suitable variability, which optimizes the classification; a large agglomeration of results with low variability, i.e., heavy clustering around the mean value, would call the sense of segmentation into question. A variable with missing data for many models is not worth taking into account, because it would make the segmentation ineffective from the classification perspective. For determining relationships and testing values, a significance level of 0.05 was assumed.
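A minimal sketch of this screening step is shown below, assuming the dataset has been loaded into a pandas DataFrame; the column names and values are invented, and Fisher (excess) kurtosis is assumed, since the text does not state which kurtosis convention was used.

    # Sketch of the variable-screening step on a toy stand-in for the dataset.
    import pandas as pd
    from scipy.stats import kurtosis

    cars = pd.DataFrame({
        "engine_power_hp": [82, 136, 204, 408, 772, 150],
        "height_cm":       [155, 157, 156, 154, 156, 155],    # low variation
        "max_load_kg":     [None, 440, None, 500, None, 420], # many gaps
    })

    def keep_variable(s: pd.Series) -> bool:
        """Inclusion criteria: <=10% missing, CV >= 10%, kurtosis <= 2.5."""
        if s.isna().mean() > 0.10:
            return False
        vals = s.dropna()
        cv = vals.std() / vals.mean()            # coefficient of variation
        return cv >= 0.10 and kurtosis(vals) <= 2.5   # Fisher (excess) kurtosis

    selected = [c for c in cars.columns if keep_variable(cars[c])]
    print(selected)   # -> ['engine_power_hp']

Here the low-variation size variable and the gap-ridden load variable are screened out, mirroring the exclusions reported for "width", "height", "wheelbase", and the load-related variables below.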
The second stage of the study consisted of segmenting the selected variables with and without price consideration (for similar research based on the electric scooter market with the use of the same methodology, see Reference [32]). In this research, classification methods such as hierarchical cluster analysis were used. In order to calculate the distance between objects, the squared Euclidean distance, which is broadly described and adopted in scientific research [33][34][35], was used [36,37]. The squared Euclidean distance is defined by the following formula:
$$d(x, y) = \sum_{k=1}^{p} (x_k - y_k)^2,$$
where $d(x, y)$ is the distance, $k$ indexes the successive variables, and $p$ is the number of all variables in the cluster. In order to use a Euclidean distance metric, the different variables must be comparable; therefore, it is advisable to standardize the variables beforehand [38]. The following formula was used to perform the standardization:
$$z = \frac{x - \bar{x}}{s},$$
where $\bar{x}$ is the mean value and $s$ is the standard deviation.
The Ward hierarchical method was adopted as the classification method for this study [39,40]. The measure of the diversity of a cluster in relation to the mean values is the ESS (Error Sum of Squares), expressed by the formula
$$ESS = \sum_{i=1}^{n} (x_i - \bar{x})^2,$$
where $x_i$ is the value of the variable serving as the segmentation criterion for the $i$-th object and $n$ is the number of objects in the cluster. The Ward method uses an ANOVA-style approach to estimate the distances between clusters: in simplified terms, its goal is to minimize the sum of the squared deviations within any two clusters that may be formed at any stage. The method is considered highly effective, although it tends to create clusters of small size. Applying hierarchical methods yields a dendrogram, which is regarded as a convenient visualization [41]: it illustrates the hierarchical structure of a set of objects according to the decreasing similarity between them [41]. In terms of the accuracy with which it reproduces the actual data structure, Ward's method is commonly seen as the most effective among agglomeration methods [42]. In order to compare prices in individual classes, the study used the Kruskal-Wallis test [43], while for pairs the Mann-Whitney U test was used [44].
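The clustering step can be sketched as follows in Python with SciPy, which implements Ward linkage on standardized data; the input matrix is randomly generated as a stand-in for the real 53 × 8 table of selected variables.

    # Sketch of the classification step: standardize, then Ward clustering.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    X = rng.normal(size=(53, 8))       # 53 models x 8 selected variables (toy)

    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # z = (x - mean) / std

    # Ward linkage; SciPy computes it from Euclidean distances, which for
    # Ward's variance criterion gives the same merge order as the
    # squared-Euclidean formulation above.
    merges = linkage(Z, method="ward")

    labels4 = fcluster(merges, t=4, criterion="maxclust")   # cut into 4 groups
    labels2 = fcluster(merges, t=2, criterion="maxclust")   # or into 2

A dendrogram of the merge hierarchy can then be drawn from `merges` with scipy.cluster.hierarchy.dendrogram, corresponding to the figures discussed below.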
Structure of the EV Market in Poland Dataset
The structure of the set was presented using descriptive statistics. For the non-metric variables ("Number of seats", "Number of doors", "Type of brakes", "Tire size", "Drive type"), the frequencies of individual features are presented (see Figure 1), while the metric variables were arbitrarily divided into 4 groups: 1. Battery: "Battery capacity", "Range" (Table 2); 2. Size: "Wheelbase", "Length", "Width", "Height", "Minimal empty weight", "Permissible gross weight", "Maximum load capacity", "Boot capacity" (Tables 3 and 4); 3. Performance: "Engine power", "Maximum torque", "Maximum speed", "Acceleration 0-100 kph" (Tables 5 and 6); 4. Others: "Minimal gross price", "Maximum DC charging power", "Energy consumption" (Table 2). The number of doors in electric cars varies; most cars have 5 doors (88.7%). Every third electric car on the primary market has 4-wheel drive, while the largest share has 2-wheel drive (45.3% front axle and 20.8% rear axle). All cars have brakes both at the front and rear (13.5% of them have a rear drum brake, in the case of Volkswagen, Škoda, and Smart). The vast majority (38 out of 53) have 5 seats, and only 10 out of 53 have 4 seats. Electric cars in Poland have various tire sizes; it is worth emphasizing that the Citroën ë-C4 model is equipped with 16-inch front wheels and 17-inch rear wheels (in all trim levels). Among the variables, the number of doors, the number of seats, and the type of brakes are not very diverse across the analyzed cars. The "tire size" variable, in turn, does not contribute much to segmentation, as a consumer can equip a car with larger tires depending on their needs and individual preferences. Therefore, it was decided to take into account the "drive type" variable, which brings greater value to the analysis.
Electric car prices in Poland start from PLN 82,050 (Škoda Citigo-e iV), while the most expensive car costs a minimum of PLN 794,000 (Porsche Taycan Turbo S). Electric cars cost PLN 244,271.72 on average, and prices deviate from this average by PLN 149,634.43 on average. Half of the cars are cheaper, and half more expensive, than the median price of PLN 169,700. Electric car prices are right-skewed, which means that there are distinctly fewer cars more expensive than the average price, and there is a large concentration around the mean value.
The Smart forfour EQ has the smallest range in the dataset, specified by the manufacturer as 148 km, while the Tesla Model S Long Range Plus has the greatest range (652 km). The distributions of battery capacity and range are not significantly asymmetric; however, the battery capacity distribution is less concentrated around the mean value.
Manufacturers use different combinations of battery capacity and engine power to increase a vehicle's range. Based on the calculated correlation coefficients, it should be concluded that there is a strong positive relationship between battery capacity and vehicle range: the greater the battery capacity, the greater the car's range on average. Despite the strong dependence, it was decided to retain both of these variables in the further analysis, as battery capacity, along with the possibility of charging with a higher current, increases the competitiveness of a given model when the car requires recharging, especially on longer journeys. Moreover, the correlation analysis shows that a weaker motor is characterized by lower energy consumption.
The maximum DC charging power values are characterized by right-hand asymmetry, reflecting the Porsche manufacturer's pursuit of charging speed and efficiency: the maximum DC charging power is as much as 270 kW, while the smallest is more than ten times lower (22 kW, in the case of the Smart models). Electric cars show little differentiation in terms of energy consumption; due to the adopted criteria, this variable was not included in the further analysis. Descriptive statistics for the size group are presented in Table 3. The size of the car is the basis for determining its segment in terms of size. The EV models available in Poland are characterized by low variability in wheelbase, width, and height; in contrast, the distribution of length among models is not so concentrated around the mean value. The shortest available model is the Smart fortwo EQ (269.5 cm), while the longest is the Mercedes-Benz EQV (514 cm). Electric cars in Poland weigh from 1035 to 2710 kg, while the boot capacity, according to the VDA standard (for a broader description see Appendix A), ranges from 171 to 870 L. Table 4 shows the values of the correlation coefficients of the individual variables classified to the size group. The "width", "height", and "wheelbase" variables are characterized by low variability and, according to the adopted criterion, are not included in the classification. There are insufficient observations for the "permissible gross weight" and "maximum load capacity" variables due to truncated data provided by manufacturers; moreover, "permissible gross weight" is strongly correlated with "minimal empty weight", and "maximum load capacity" with "boot capacity". Descriptive statistics for the identified performance group are presented in Table 5. The models of the Smart manufacturer, the fortwo EQ and forfour EQ, have the lowest engine power (82 hp), while the most powerful cars in the set are the Tesla Model X Performance and Tesla Model S Performance (772 hp). Electric cars in Poland reach a maximum speed of 123 to 261 kph, with the maximum torque varying from 160 to 1140 Nm; acceleration from 0 to 100 kph takes from 2.5 to 13.1 s. When analyzing the individual performance characteristics, it should be stated that despite the large variation in engine power (67.2%), the variation in maximum speed is low, although these are absolute values with different units of measurement. The dependencies between the individual performance parameters are presented in Table 6. All the variables are highly correlated with engine power, which makes it possible to state that "engine power" is the variable determining the remaining performance parameters. For this reason, it is treated as the factor determining the similarity of particular models, so that information from the other performance variables does not distort the classification process.
Market Segmentation
In connection with the analysis of the dataset structure, the following variables were used for classification: "engine power", "minimal empty weight", "length", "maximum DC charging power", "battery capacity", "range", "drive type", and "boot capacity". The number of specified groups results from an arbitrary decision, supported by the analysis of coefficient quotients (see Figure 2; the analysis of coefficient quotients with the "boot capacity" variable included is presented in Figure A1 in Appendix B). The process of merging into clusters is presented in Figure 3 (another variant of the merging process, with the "boot capacity" variable included, is presented in Figure A2 in Appendix B). Based on Figures 2 and 3, division of the dataset into 4 variants (subgroups) was considered. The populations of the individual clusters are presented in Table 7.
It was arbitrarily concluded that in the case of a division into 7 classes there are large differences in their sizes, which prevents intuitive interpretation. Therefore, the first classification considered was the division into 4 classes. The cars belonging to the particular groups are presented in Table 8.
Group 1 "Premium"
Group 2 "City" Group 3 "Small" Group 4 "Sport" The cars in individual groupings have certain common features that allowed the formation of the names of these clusters. Their names are as follows: "premium", "city", "small", and "sport". Cars belonging to the "premium" group are very expensive to buythe cheapest of them (Volkswagen ID.3 Pro S) costs PLN 179,900, while the average minimum purchase price for these cars is PLN 277,630. These are also largely cars of brands seen as luxurious ones [45]. Although these EVs have large dimensions and robust components (such as a high-power engine, or usually 4-wheel drive), their main purpose is not to drive dynamically, but to provide driving pleasure in ultra-comfortable conditions. The "city" group consists of compact vehicles with a universal character, mostly for daily urban but also extra-urban driving as well as most everyday applications. The "small" cluster includes cars with small dimensions, practically suitable only for urban driving, as their range does not allow for a longer trip without the need for additional charging. The "sport" group includes cars whose main purpose is dynamic or fast driving. Vehicles from this segment are characterized by sporty attributes, i.e., have above-average engine power and torque. They are also similar in terms of their wheelbase of 290 cm and more.
The prices of the models were compared across the classes defined in Table 8. It should be emphasized that in the split variants with 2, 3, and 4 clusters, prices differ significantly across classes; detailed results are presented in Table 9. The next step was to compare the classification with and without the "price" variable. The results are presented for 2 classes in Table 10 and for 4 classes in Table 11. It is worth noting that in both cases including the price changes neither the sizes of the groups in the classification nor the assignment of any single model. It can therefore be argued that the models within a group are also similar in terms of price.
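A sketch of these price comparisons using SciPy's implementations of both tests is given below; the per-segment prices are invented placeholders, not the values behind Tables 9-11.

    # Sketch of the price comparisons across segments (toy prices in PLN).
    from scipy.stats import kruskal, mannwhitneyu

    prices = {
        "premium": [279_000, 341_000, 265_000],
        "city":    [149_000, 166_000, 171_000],
        "small":   [ 83_000,  99_000, 125_000],
        "sport":   [465_000, 794_000, 521_000],
    }

    h_stat, p_global = kruskal(*prices.values())   # do all four classes differ?
    u_stat, p_pair = mannwhitneyu(prices["premium"], prices["city"])  # one pair
    print(f"Kruskal-Wallis p = {p_global:.4f}; premium vs city p = {p_pair:.4f}")

The nonparametric tests are a natural design choice here, since the small per-segment samples and right-skewed prices make normality assumptions hard to defend.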
Discussion
The research, as one of the few in this field, is complete, which means that all fully electric passenger car models sold in Poland were used. The basis for its conduct was the creation of a complete dataset using all the basic technical parameters of individual models, which proves its versatility. In Poland, the EV market of passenger cars and buses is the most developed. A specific gap in the market is the lack of electric trucks that would represent a high added value to achieve the sustainable development goals in inland freight transport.
The dataset that contains electric car data and that has been prepared for segmentation is almost complete. The deficiencies noted were minor and did not significantly affect the reliability of the results. However, attention was drawn to the fact that the structure of the official technical data varies depending on the manufacturer. Often, it does not contain all the technical data or the methodology for measuring some parameters (e.g., in terms of boot capacity, range, or energy consumption) differs. This may be a suggestion for both manufacturers and control (approval) authorities to unify and standardize the structure of technical data provided by manufacturers.
It is worth mentioning that after performing a complex segmentation that considers various vehicle parameters, cars of different sizes appear in the individual segments. For example, the "city" group contains small city cars (such as the Opel Corsa-e or Renault Zoe), crossovers (such as the Peugeot e-2008 or Hyundai Kona Electric), as well as passenger vans (such as the Nissan e-NV200 Evalia or Citroën ë-Spacetourer). This shows that suitability for urban traffic does not necessarily appear only in models with "urban" sizes. Vehicles in this group have, among other things, very similar engine power and range, and are also relatively little differentiated in terms of price.
Electric cars should be characterized primarily by the safety of passengers and other road users. Having disc brakes in the front and rear wheels should be standard. It has been recognized as a determinant of the higher quality of a given vehicle model. It has been noticed that the vast majority of electric vehicles have disc brakes in the front and rear axles, which proves the manufacturers' high care for the safety and prioritizing this aspect.
The type of drive (for one or both axles) depends primarily on the main purpose of the vehicle. Hence, cars whose main application is sports driving, on light terrain or for longer distances, usually have an all-wheel-drive which guarantees greater vehicle grip and better driving characteristics. Vehicles moving mainly in urban conditions generally have two-wheel drive which has the advantage of lower production costs. The use of front-or rear-wheel drive, however, depends solely on the engineering ideas of the manufacturer.
Most EVs have 5 doors, which means they have two pairs of access doors and a trunk that is treated as an additional door. Only the Smart fortwo EQ has three doors, as it is a two-seater; even 4-seater cars have two pairs of access doors. This may indicate that an electric car is treated as a practical means of transport, regardless of its main purpose. The Porsche Taycan has an unconventional (front) luggage compartment, so the tailgate was not considered a door in this case; thus, all Porsche Taycan variants were assigned a value of 4 for this variable. It should be noted that this model is primarily intended for sports racing or extreme driving, as evidenced by its belonging to the extracted "sport" segment (see Table 8).
The size of the wheel depends almost entirely on customer preferences. Larger wheels are usually available at an extra cost on all equipment levels of a car, but the higher the level, the larger the wheel.
DC charging is available in all electric vehicles analyzed. Nevertheless, the higher the charging power, the faster the charging process of the battery. This turns out to be a significant advantage while traveling long distances, as then kilometers of range are recovered in a shorter time. It should be indicated that the charging time also depends on the generation capacity of the charging station. Moreover, manufacturers, willing to increase the range of their models, create combinations of engines with lower power and a more capacious battery, which is confirmed by their close correlation (see Figure A3 in Appendix B for further explanation).
The price parameter, as an important determinant of the purchase decision, was taken into account in the classification. It shows that the vehicles in specific groups do not vary much in terms of price. As a result, potential customers are able to compare cars from the preferred price group according to their own expectations for the car. The comparison can be carried out in many aspects, based on the parameters of the vehicle that are most important for a specific consumer: engine power, range, acceleration, drive type, etc.
Cars from a separate "premium" category belong mainly to the popular and generally recognized segment of SUVs, i.e., large vehicles with increased ground clearance. They will find their supporters among wealthy people who care about driving dynamics, but above all about high comfort and a sense of luxury. Rich people may also opt for an EV from the "sport" group, but then, despite the fancy nature of these cars, such customers should prioritize sports driving experience, fast driving, and dynamic acceleration over comfort.
For buyers with a slightly smaller budget to buy a car, it is proposed to consider electric cars from the "city" and "small" groups. The first group includes vehicles suitable for those who value versatility and practical use of the car in many everyday situations. They all offer a reasonable range and decent performance for a moderate price. It includes both crossovers (such as Peugeot e-2008) and city compact vehicles (such as Opel Corsa-e), which means a wider spectrum of choice for the consumer, taking into account stylistic and practical qualities. People who only need a car for urban transport and who do not care about the long range and practical aspects can choose from the offer of cars belonging to a separate "small" segment. This is the group of the smallest and cheapest electric cars, but for those who appreciate style, it also includes more expensive vehicles (such as the Honda e or Mini Cooper SE).
The lack of uniformity in the assignment of passenger electric vans should also be noted. The Nissan e-NV200 Evalia and the Citroën ë-Spacetourer (and its twin models) are allocated to the "city" segment, while the Mercedes-Benz EQV belongs to the "premium" segment. This should not come as a surprise: the first two models are used only for the ordinary transport of people with moderate comfort, while the Mercedes-Benz electric van, given the make's sumptuous roots, is clearly geared toward comfortable traveling in business conditions. Different target customers and a slightly different vehicle purpose confirm the assignment to these groups.
Taking into account the macro environment of the electric car market in Poland, a few remarks arise. A potential barrier to the development of the electric car market is the sparse network of charging stations, which is incomparably smaller than the network of petrol stations: there are 1294 publicly available charging stations in Poland (of which only 32% are DC fast-charging stations) [46], while there are 7681 petrol stations [47] (see Appendix A for further explanation). Expanding the network of electric car charging stations should be an element of investment not only in the private sector but also in the interest of the state.
It has been noticed that, on the Polish market, despite the presence of many models of electric cars belonging to different segments, they are noticeably more expensive than their analogous competitors with internal combustion engines. Therefore, the governments of countries should consider preparing a number of subsidies and economic privileges (like a tax exemption) for buyers of cars with electric motors, so that they can become a real competition in terms of prices for vehicles powered by gasoline or diesel oil. By regulating the broadly understood benefits for owners and buyers of electric cars, they may also turn out to be a real response to the desire to achieve sustainable development goals. This can be an incentive to lead an eco-friendly lifestyle, which will further intensify the growing trend of buying green cars-not only fully electric but also hydrogen cars as well as conventional or plug-in hybrids.
Conclusions
The added value of the study is a complete and comprehensive study, which means that it includes all models of electric cars in Poland from the primary market. The methodology and framework used in the study are universal, which means that it is enough to collect the data according to the variables used to perform the analysis with miscellaneous measures. Moreover, it is worth noting that despite the fact that the study was conducted in Poland, it can also be applied in other (wider) territories, which proves its versatile character.
One of the practical implications of the study is that the segmentation of the electric vehicle market in Poland carried out in this way may allow consumers to comprehensively compare all fully electric cars sold domestically on the basis of their broad characteristics. This analysis can help potential consumers answer the question of which electric car in a given price group is characterized by better or worse performance, and how significant the differences and similarities between them are. In addition, this segmentation may prove beneficial and helpful from the point of view of automotive manufacturers, as they are able to check how their cars compare to the competition; they can thus also determine whether their car models require any improvement, and whether it is worth initiating the production of a new model in a different price segment.
The conducted analysis may prove useful for central authorities that want to introduce or better adapt benefits (such as subsidies, tax exemptions, free parking zones) for EV users, but also for other entities (private or state-owned) that want to invest in the development of infrastructure networks for electric cars. In Poland, there is a system of government subsidies for cars using alternative energy sources (such as electricity or hydrogen). However, these surcharges are not so high as to match the price of the petrol equivalent of an EV.
A significant limitation of the study was its territorial scope, which covers only one country: Poland. Despite the completeness and comprehensiveness of the conducted segmentation, it should be noted that it includes only fully electric cars (FEVs) available on the primary market; hydrogen cars, electric cars with so-called "range extenders", and plug-in hybrids were excluded. The data acquired to create the dataset of electric car parameters are based on official information provided directly by the manufacturers, but at times these data were incomplete. Data not included in the prospectuses provided by car manufacturers were sourced from the SAMAR AutoCatalog. Although this source is characterized by extensive information on the cars offered in Poland, it should be noted that it is not official data (for a further description of the SAMAR AutoCatalog, see Appendix A).
Further research may focus on extending the study to other markets (also from the perspective of the entire European Union). A juxtaposition of the performance and prices of electric cars with comparable combustion-engine models, as well as an analysis of the used EV market, could also be subjects of further research. A subsequent study may be extended to other classification methods and distance types (such as Chebyshev and Manhattan).
Conflicts of Interest: Data have been provided by automotive manufacturers in Poland and are publicly available on their official websites. Additional data have been provided by the SAMAR AutoCatalog at no cost and without any obligations. The authors declare no conflict of interest.
Appendix A
This appendix contains all the additional information not contained in the main text of this paper but which should be mentioned and clarified. The "range extender" is an additional internal combustion engine fitted to an electric car that acts as a generator to power the electric motor [48].
The SAMAR AutoCatalog, as can be read on the https://autokatalog.pl website (accessed on 21 November 2020), "contains the most accurate technical database of passenger cars and vans sold on the Polish market by authorized dealers". Nevertheless, it should be borne in mind that the only official sources of the technical parameters of cars are materials supplied directly by car manufacturers or authorized dealers.
WLTP is an abbreviation of Worldwide Harmonized Light-Duty Vehicle Test Procedure. From 1 September 2018, all new vehicles placed on the European Union market must be tested and approved in accordance with the WLTP procedure set out in Commission Regulation (EU) 2017/1151.
VDA is an abbreviation of Verband der Automobilindustrie, which translates from German as the German Association of the Automotive Industry. Its method of measuring boot capacity uses one-liter wooden blocks measuring 200 mm × 50 mm × 100 mm: the blocks filling the load space are counted, and the numerical result is expressed in liters.
In some cases (such as, for example, Nissan Leaf and Nissan Leaf e+ or Renault Zoe R110 and R135 pairs), a more powerful engine or a larger battery implies the need to use a richer version of the equipment of a given model (more powerful engines are sometimes not available for the basic equipment). It results directly from the official price lists. In such cases, the minimum price of the poorest equipment version for a given engine version (or battery) is taken into account.
When specifying the number of public charging stations, the status is provided at the end of October 2020 [46]. The number of petrol stations is at the end of September 2020 [47].
Appendix B
Another variant of the analysis would be to take the boot capacity variable into account. However, due to one missing observation, segmentation would then be performed on 1 car fewer, and the market segmentation would not be complete. The classification process taking boot capacity into account is described in Figures A1 and A2, while Table A1 shows the assignment of individual cars to the separate groups. In the variant of clustering into 2 subgroups, clusters 2 and 4 were combined into one, as were clusters 1 and 3, respectively. Figure A1. Coefficient quotients in the one-by-one merging steps of clustering (including boot capacity, without the price variable). Figure A2. The process of merging into clusters (including boot capacity, excluding price).
Based on the available data, the relationship between the "energy consumption" and "engine power" variables was also examined (see Figure A3). However, due to the large amount of missing data (over 10%), this analysis was not presented in the main text. As engine power increases, energy consumption increases on average, and this relationship is approximately linear. It should be indicated that a strict correlation occurs when outliers in the form of representatives of the so-called "passenger van" segment (including the Nissan e-NV200 Evalia, Mercedes-Benz EQV, and Citroën ë-Spacetourer) are rejected. These are vehicles of large dimensions and weight in which engines of relatively low power were used; due to their large weight and low aerodynamics, their energy consumption remains relatively high.
"Business",
"Environmental Science",
"Economics"
] |
Madison’s Constitution Under Stress: A Developmental Analysis of Political Polarization
We present a "developmental" approach to understanding why rising polarization in the United States has not been self-correcting but instead continues to intensify. Under specified conditions, initial increases in polarization may change the meso-environment, including such features as state parties, the structure of media, and the configuration of interest groups. These shifts can in turn influence other aspects of politics, leading to a further intensification of polarization. This analysis has four important benefits: (a) It directs our attention to the meso-institutional environment of the American polity; (b) it clarifies the features of the polity that have traditionally limited the extent and duration of polarization, and the reasons why their contemporary impact may be attenuated; (c) it helps us analyze asymmetrical, or party-specific, aspects of polarization; and (d) it provides an analytic foundation that connects discussions of American politics to the comparative politics literature on democratic backsliding.
INTRODUCTION
Much of modern political analysis has, explicitly or implicitly, taken an approach to understanding the American polity that emphasizes its tendencies toward moderation and stability. Some analyses emphasized the stabilizing impact of Madisonian institutions of fragmented and overlapping authority within a highly diverse society (Dahl 1961, Polsby 1997). Others drew on the seminal work of Anthony Downs-which Fiorina & Abrams (2009, p. vii) rightly described as a sort of "master theory" for a generation of Americanists. That framework suggested that American institutions of representation and electoral competition not only created powerful incentives for a two-party system but also induced the two parties to operate close to the middle of the distribution of preferences within the electorate or face electoral retribution. Both these approaches anticipated that efforts by major parties to stake out political projects unable to command broad support would face strong resistance and growing backlash. The center would hold.
The rise of durable polarization between the parties-a major preoccupation of Americanists for the past two decades-has raised obvious issues for these frameworks. Americans grumble about polarization, yet it persists. The absence of any correction has forced acknowledgment that we have shifted from a system marked by low polarization to one of high polarization. American politics scholars have worked, with some success, to advance understanding of the characteristics of this new high-polarization setting (e.g., Lee 2015, Hopkins 2018, McCarty 2019) as well as to understand the forces that moved the system from one arrangement to the other (McCarty et al. 2006, Schickler 2016).
In many respects, the literature on today's polarized system is actually reassuring. The year 2020 marks the twenty-fifth anniversary of Newt Gingrich's ascendance to the Speakership, so intense polarization is at least a quarter-century old. If our politics has been polarized for that long, one could argue that perhaps we have adapted. From this vantage point, a long period of polarization reaffirms the flexibility of our Madisonian framework. A number of observations bolster this position, ranging from the assertion that American politics has often been equally polarized in the past to the claim that partisan divisions in the United States are often more symbolic or tactical than substantive (Lee 2009, 2016).
This article outlines a different possibility. Rather than describing a simple shift from a nonpolarized system to a polarized but nonetheless stable one, we should consider the conditions under which polarization might feed on itself. Our concern is that with polarization come new developments in the polity that encourage further polarization. Moreover, the self-correcting mechanisms of the Madisonian polity so often celebrated in the past may have either weakened or themselves been transformed into engines of polarization.
Our starting point is to adopt a developmental approach to polarization. Rather than treating polarization as a static point on some continuum, we treat it as an ongoing developmental process. Pursuing a truly developmental approach requires grappling with the ways in which high levels of polarization might become not just self-reinforcing but susceptible to intensification. This in turn requires careful attention to the potential for institutional configurations to either dampen or exacerbate the process of polarization once it begins. 1 In past eras, what we term meso-institutions acted as countervailing mechanisms, fostering deep factional divisions in the parties that undermined polarization. Today's polarization, by contrast, has fostered the rise of new organizations and transformed existing ones, creating new relationships, balances of political power, and incentives. These changes, in turn, have intensified divisions between the parties, their supporting coalitions, and voters.
Our goals are to identify these new relationships and incentives and to explore how they might lead to qualitative changes in the character of politics. We argue that, in the contemporary party system, polarization has indeed become self-reinforcing. Instead of mirroring (and thus bolstering) the fragmentation that Madison famously argued was encouraged by federalism and separation of powers, parties now operate in precisely the opposite fashion. This change in party behavior is itself a reflection of very significant changes in civil society. Today, many different actors across institutions see their interests as dependent on the success of their party. Put more ominously, they see the costs of the other party's success as unacceptable. Interest groups, state and local parties, rank-and-file politicians, campaign donors, and media outlets that in the past had exercised-at least in part-a centrifugal influence on party politics increasingly contribute to a single, nationalized, polarized politics. Rather than introducing cross-cutting cleavages and a diversity of concerns, these actors behave in ways that reinforce or intensify partisan divides.
Our approach stands in contrast to the standard Madisonian story told about American politics: one of stability induced by the configuration of political institutions and reinforced by a broadly supportive political culture. If convincing, this developmental analysis of polarization has at least four important benefits: (a) It usefully directs our attention to the meso-institutional environment of the American polity; (b) it clarifies the self-correcting or countervailing features of the American polity that have traditionally limited the extent and duration of polarization, and the reasons why their contemporary impact may be attenuated; (c) it helps us analyze asymmetrical, or party-specific, aspects of polarization; and (d) it provides an analytic foundation that connects discussions of American politics to traditions in comparative politics that seek to assess a polity's vulnerability to democratic backsliding.
Our discussion is presented in four parts. First, we ask what historical evidence has already taught us about contemporary polarization. Second, we review the characteristics of our Madisonian system that were widely believed to make polarization difficult to produce and even harder to sustain. Third, we outline a developmental account of contemporary polarization, focusing on some of the major changes in what we call the meso-institutional environment (interest groups, state parties, media) that we believe were first encouraged by polarization and now serve to intensify it. 2 Fourth, we outline some of the significant implications of this account.
HISTORICAL PERSPECTIVES ON POLARIZATION
Our developmental framework could easily be mistaken for a simple call for a turn to political history. Briefly considering how historical evidence has informed recent discussions of contemporary polarization can clarify the distinctiveness of this analysis. Comparisons with earlier periods can offer a useful counterpoint to the tendency to see today's politics as without precedent. In our view, however, rather than providing reassuring evidence that polarization is the norm, the historical evidence more often highlights the distinctiveness of recent developments-particularly how polarization has fed on itself rather than dissipating in the face of the countervailing forces operating within much more decentralized political configurations.
A key focus of historical work on polarization has been to ask whether the party divisions observed today are unprecedented, or even unusual. Many such analyses focus on measures of polarization derived from congressional roll call voting, such as Poole and Rosenthal's Nominate methodology (Poole & Rosenthal 1997). These measures, for example, compare the difference in the estimated ideology of the median Democrat and median Republican in Congress, or consider the percentage of partisans who are closer to the other party's center than to their own party. Scholars working in this tradition have observed that "Congress is now more polarized than at any time since the end of Reconstruction" (Hare et al. 2014; see also McCarty et al. 2006, pp. 23-29; Schaffner 2011; McAdam & Kloos 2014). This understanding has become a working assumption underlying a range of studies analyzing the impact of polarization, such as Levitsky & Ziblatt's How Democracies Die (2018, p. 204), which notes that contemporary "polarization, deeper than at any time since the end of Reconstruction, has triggered the epidemic of norm breaking that now challenges our democracy" (see also Mason 2018, p. 137).
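To make these two summary comparisons concrete, the following minimal sketch (our own illustration in Python, not Poole and Rosenthal's estimation code) computes both quantities from one-dimensional ideal-point estimates; the function names and toy scores are hypothetical, not actual Nominate values.

from statistics import median

def median_gap(dem_scores, rep_scores):
    # Distance between the median Democrat and the median Republican.
    return abs(median(rep_scores) - median(dem_scores))

def share_overlapping(dem_scores, rep_scores):
    # Share of all members who sit closer to the other party's median
    # than to their own party's; this share shrinks as polarization rises.
    dem_med, rep_med = median(dem_scores), median(rep_scores)
    crossers = sum(abs(x - rep_med) < abs(x - dem_med) for x in dem_scores)
    crossers += sum(abs(x - dem_med) < abs(x - rep_med) for x in rep_scores)
    return crossers / (len(dem_scores) + len(rep_scores))

# Hypothetical first-dimension scores on the usual [-1, 1] scale.
dems = [-0.6, -0.4, -0.3, -0.1, 0.1]
reps = [-0.05, 0.2, 0.35, 0.5, 0.6]
print(median_gap(dems, reps))         # 0.65: wider gap, more polarization
print(share_overlapping(dems, reps))  # 0.2: fewer "crossers", more polarization

Actual Nominate estimation is far more involved; this sketch only shows how the two comparisons described above are read off a set of estimated ideal points.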
Yet even as Nominate scores suggest that today's polarization is high by historical standards, a second lesson often drawn from these measures is that polarization is the normal state of affairs in American politics. For example, Aldrich et al. (2002, p. 20) characterize the 1930s-1970s "as a lengthy, singular exception to what had typically been essentially constant before World War II. . . . It thus appears that a Congress divided more or less sharply, but almost invariably divided, by parties is the historical norm." Brady & Han (2007, p. 506) concur, noting that "the recent period of polarization mirrors patterns of polarization that have prevailed throughout most of congressional history. In fact, the truly unusual historical period is the bipartisan era immediately following the Second World War" (see also Hetherington 2009; Schaffner 2011; Disalvo 2012, p. 185; McAdam & Kloos 2014, ch. 2; Tucker et al. 2018). Although skeptical of Nominate-based measures, Lee (2015, p. 262) suggests that other indicators-such as party platforms-also suggest that polarization has been common in American history; intense party conflict "may well be the normal state of affairs, and the long postwar period of muted party conflict may constitute a mere exception" (see also Karol 2015).
Yet historical time series tracking polarization levels only take us so far. Nominate scores reveal how differently Democrats and Republicans vote in Congress but not whether the parties are fighting about big or small policy issues (Lee 2009, 2016). They offer a snapshot of behavioral differences at a given point in time, but much more needs to be done to understand the developmental aspects of polarization. Moving beyond such time series indicators, scholars have considered whether specific historical periods offer a useful analogy for today's politics. The most common analogies have been to the lead-up to the Civil War and the turn of the twentieth century. When the Civil War is the point of comparison, most observers conclude that contemporary divisions are far less serious. For example, Hetherington (2009, p. 417) writes that "the substance and intensity of the conflicts today are certainly nothing like those in the years leading up to the Civil War, which ranged from the unpleasant to the dangerous" (see also Brady & Han 2006; McAdam & Kloos 2014, p. 253). The historian Joanne Freeman (2018, p. xiii), however, notes that while "there are worlds of difference between the pre-Civil War Congress and the Congress of today . . . the similarities have much to tell us about the many ways in which the People's Branch can help or hurt the nation." More specifically, Freeman's (2018, p. 229) analysis reveals the dangers posed when party divisions are overlaid on an intense sectional cleavage. Under such circumstances, compromise can be seen as a sacrifice of honor, and the logic of conflict can escalate far beyond previously accepted bounds of political contestation (Freeman 2018, p. 248).
Given that contemporary polarization has not generated anything resembling an actual civil war, the period of high party voting and centralized leadership in Congress spanning the late nineteenth and early twentieth century is more often seen as the best analog for today. Azari & Hetherington (2016, p. 92) note that the politics of the post-Reconstruction era are "strikingly similar to what we have in the present day. Elections were consistently close; race, culture, immigration, and populism were salient issues; and states almost always voted for the same party in election after election." Furthermore, congressional polarization "reaches its peak in the late Gilded Age and the contemporary period as well" (p. 93). Azari & Hetherington (2016) suggest that the close party balance nationally and the salience of racial concerns were critical sources of polarization both in the late nineteenth century and today (see also Disalvo 2012, Karol 2015, Hopkins 2017, Hall 2019 on the analogy between these two periods).
While specific periods, such as the 1850s or the 1890s-1900s, provide valuable lessons about what polarized politics look like in practice, any given analogy breaks down when pushed too far. The Civil War era represents an obvious extreme point in the intensity of divisions, yet the period of partisan polarization was remarkably brief: The major American parties featured deep internal divisions on slavery up until the mid-to-late 1850s, and the new Republican majority became deeply divided over Reconstruction and key economic questions soon after the war ended. By the 1870s, these within-party divisions among Republicans rivaled the interparty cleavage in importance. The 1890-1910 analogy also has evident limitations. These years of supposed peak polarization actually featured serious regionally based intraparty divisions over economic development and regulatory policies (Sundquist 1983, Bensel 2001). These divisions became unmanageable with the rise of progressive insurgency after 1906 and the Taft-Roosevelt split in 1910-1912 (Mayhew 2002). Intense partisan polarization, such as that existing just before and after the Civil War and at the turn of the twentieth century, proved short-lived and deeply vulnerable to regionally based factional interests. This vulnerability of previous episodes of polarization is what has led some of the most astute observers of our politics to conclude that polarization will ultimately falter. After highlighting the intensity of today's divisions, Azari & Hetherington (2016, p. 108) note that "the fate of previous eras of division suggests that this brand of politics is rarely sustainable in the long term. If not in 2016, it seems change is likely to come soon." We are far less confident that polarization is likely to recede, or even stabilize. Our main point, however, is that any such claim cannot rest on the observation that "this is what has happened before." Instead, it requires a careful identification and assessment of the mechanisms that either attenuate or intensify polarization. This permits consideration of the prospect that within contemporary party politics the institutional configurations that previously acted as countervailing mechanisms have been displaced by alternative structures that instead reinforce or even intensify polarization. We can sharpen this point before considering the specific changes we have in mind by briefly discussing why so many have seen our Madisonian institutions as a strong protection against intense polarization.
THE MADISONIAN SYSTEM: A SHIELD AGAINST POLARIZATION?
The dynamics of contemporary polarization raise important questions about the standard, Madisonian account of American politics, which emphasizes the ways in which core political institutions encourage compromise and stability. The Founders were, of course, preoccupied with the question of how to create a stable republic. Understanding that factional divisions are inevitable, Madison famously argued that American political institutions could prevent all-out conflict between competing camps. Critical mechanisms that would tend to attenuate or countervail against polarization, rather than reinforce it, were built directly into the constitutional system. Others, such as the development of what were by comparative standards highly decentralized and geographically factionalized political parties, were crucial (if unintended) outgrowths of the constitutional framework.
Most obviously, separation of powers, checks and balances, and federalism divide power, making it less likely that any single group will gain control of the entire government. This, in turn, means that governance will routinely require accommodating a range of group interests. The rules structuring elections also require building different kinds of coalitions for different offices, discouraging the emergence of a single coherent and dominant cleavage.
The creation of an extended republic with immense geographic diversity reinforced the institutional obstacles to polarization. Madison lays out the logic of this argument in Federalist 10: The scope of the new nation ensured a heterogeneity of viewpoints, which would make the emergence of a majority faction unlikely (Kernell 2003). While a small republic might be characterized by a single intense cleavage-over religion, wealth, or some other characteristic-an extended republic would tend to give rise to cross-cutting cleavages. Creating a majority would require broad appeals to widely shared interests, rather than narrow, parochial appeals to a particular faction. As Dahl & Lindblom (1953, p. 307) argue, social pluralism, when combined with America's fragmented constitutional structure, forces bargaining among diverse groups in order to achieve policy success (see also Truman 1951, p. 514; Gunnell 2004, p. 224).
Federalism, from this perspective, interacts with the extended republic in critical ways. It is not just that the national government shares power with 50 separate state governments. The diversity of state circumstances and the relative autonomy of state political institutions promote carefully brokered compromises that are mindful of an array of distinctive interests (Anderson 1955, pp. 135-36; Elazar 1966, pp. 6, 203; Truman 1966).
These core institutions of American government tend to frustrate efforts to consolidate the power of a particular individual or coalition; each puts a premium on finding ways to accommodate opposing interests. Under many conditions, we can think of them as functionally equivalent mechanisms for attenuating polarization. Even if one mechanism fails to operate in a particular context-such as when unified government reduces Congress's incentive to check the president-there are built-in redundancies that reinforce the overall tendency toward stability and moderation. Moreover, these institutions have a homeostatic quality. Given the independence and diversity of political settings and roles, politicians unwilling to engage in compromise are likely to trigger an increasingly powerful backlash.
The Framers, of course, did not anticipate several major developments that might have undermined the Madisonian formula. Most notably, political parties were not part of the Constitution, yet developed soon after the Founding. Even so, the constitutional system profoundly shaped these new parties in ways that made them unlikely vessels for intensely and durably polarized politics. From the start, American political institutions helped produce parties that were federal in character and decentralized in many of their operations (Disalvo 2012). In the words of V.O. Key (1964, p. 315), American parties were "confederative," consisting "of a working coalition of state and local parties" that provided pluralistic representation of diverse interests (see also Epstein 1982, Schattschneider 1942). A critical source of power and independence for state parties has been their control of nominations and, more generally, their role in shaping career paths for ambitious politicians. Truman (1966, p. 92) observes that "the basic political fact of federalism is that it creates separate, self-sustaining centers of power, privilege, and profit . . . [and] bases from which individuals may move to places of greater influence and prestige in and out of government." In addition to the formal leverage created by state parties' control of nominations, the need to compete for power across a wide span of very different states forced American parties to take the form of catch-all organizations that accommodated a range of ideologies and social groups.
While there also was an important national aspect to these party organizations-particularly rooted in competition for the presidency-they retained substantial autonomy throughout much of their history. Polsby (1997, p. 40), echoing an earlier comment from Dwight Eisenhower, argues that "one may be justified in referring to the American two-party system as masking something more like a hundred-party system." A Massachusetts Democrat and an Alabama Democrat might belong to the same formal organization at the national level, but they need not agree on much of anything when it comes to policy. Hershey (2017, p. 26), in her textbook on American parties, concludes that federalism and separation of powers mean that American legislative parties "can rarely achieve the degree of party discipline that is common in parliamentary systems." This feature of party politics has historically lowered the stakes of political conflict-an effect that comparativists have long stressed is conducive to democratic stability (Acemoglu & Robinson 2006). Even if one party wins power, it is forced to accommodate a diverse array of interests that likely will make its ultimate policies broadly acceptable. Furthermore, the cross-cutting cleavages and fluidity of alliances ensure that even if one's side loses today, the outcome could easily change soon (Latham 1952).
This account of the American political system always had critical blind spots. It tended to overlook the systematic biases in representation and inequalities in social and economic power that ensured that even if there were no permanent winners, there were plenty of permanent losers. Institutionalized white supremacy was the most glaring refutation of this pluralist faith, but not the only one. The Madisonian account also did not easily accommodate social movements that, in important moments, challenged the legitimacy and stability of the American political system. The political violence of southern "redeemers" during and after Reconstruction, the rise and suppression of labor militance in the late nineteenth century, and the civil rights-and later, black power and antiwar-movements of the 1960s each presupposed a kind of politics that is not comfortably categorized in terms of pluralist stability. And, of course, the Civil War constitutes a crucial moment when the Madisonian system catastrophically failed to contain conflict, as a single, overarching division broke the polity apart.
But for all of these failures, the Madisonian system was, for much of American history, a robust obstacle to the consolidation of power. The operations of the constitutional system might be remade on the ground over time by assertive presidents (see Skowronek 1997 on "Reconstructive" leadership) or new ideological formations (e.g., the New Deal), but the core features that gave rise to pluralism and fragmented power remained: separation of powers, checks and balances, territorially grounded representation, and the extended republic. The modern presidency might be a much more powerful office than Madison anticipated, yet modern presidents continued to be frustrated by the need to deal with contending power centers in Congress, the Courts, the bureaucracy, and the states (Neustadt 1960, Moe 1985, Skowronek 1997). The New Deal remade the role of the national government, yet also had to confront fundamental limitations imposed by separation of powers, federalism, and the Democrats' north-south regional coalition (Weir 2005).
In sum, while political parties might bridge the differences across branches, institutions, or localities in a way that the Framers had not anticipated, sustained, intense policy polarization at the national level has been rare. Even in periods of high party voting in Congress, there remained substantial intraparty divisions that limited the scope of partisan battles. A fragmented party and interest group system meant that national party lines failed to capture or contain many of the critical disputes animating politics-and these disputes countered the force of national party polarization.
American politics scholars are not the only ones to highlight the impact of these features on the stability of American democracy. The comparative literature on presidentialism also showed appreciation for the apparent robustness of the Madisonian system. In pathbreaking work, Linz (1990) argues that presidential systems tend to be less stable due to dueling bases of legitimacy. Viewing the United States as an apparent exception, Linz suggests that our weak and fragmented parties have prevented this kind of all-or-nothing showdown between branches under the control of competing parties. We argue below that this confidence in the moderating influence of American political institutions may no longer be justified.
THE DEVELOPMENT OF MODERN POLITICAL POLARIZATION
Standard accounts of polarization emphasize the sorting of the parties, at both the elite and mass levels, which flowed from realignment of the political parties around issues of race. This itself was a very long-term process (Schickler 2016), but in national politics the critical events surrounding the parties' repositioning on civil rights occurred in the 1960s and early 1970s. By the end of this period, national party leaders had clearly placed themselves on the conservative and liberal sides of racial issues. In doing so, they made the Republicans and Democrats, respectively, more clearly conservative and liberal parties. This new clarity in turn helped trigger the well-known processes of elite and mass sorting, and elite replacement, that analysts associate with modern partisan polarization (McCarty et al. 2006).
Although it has received less discussion in the analysis of polarization, a second development in the 1960s and early 1970s-what Skocpol (2003, p. 135) has termed the "long 1960s"-was also critical: a dramatic expansion and centralization of public policy (Melnick 1994, Jones et al. 2019). Civil rights legislation was only the entering wedge. During the long 1960s, liberal Congresses enacted, often on a bipartisan basis, major new domestic spending programs (especially Medicaid and Medicare, which now account for roughly a quarter of federal spending as well as, in the case of Medicaid, a big share of state spending). They greatly enlarged the regulatory state, creating powerful new federal agencies (such as the Environmental Protection Agency) and enacting extensive rules covering environmental and consumer protection as well as workplace safety. Finally, federal courts introduced or expanded a range of rights (most dramatically, reproductive rights enshrined in Roe v. Wade), which essentially nationalized policy making with respect to a host of controversial social issues. By the late 1970s, Washington had become a much more prominent force, across a much wider range of issues, than it had been two decades earlier.
The expanded role of Washington became a critical issue dividing the parties, contributing to polarization both directly and indirectly (Hopkins 2018). Directly, it reinforced the process of sorting between the consolidating liberal and conservative parties. Increasingly, the two parties diverged on fundamental questions regarding this emerging activist federal state. Indirectly, this new government activism encouraged the mobilization and nationalization of interest groups in response to the growing stakes in national-level political contestation (Leech et al. 2005). As we shall see, over time these two dynamics merged. These extensive and now nationalized groups faced growing incentives to align themselves with one party or the other.
THE TRANSFORMATION OF MESO-INSTITUTIONS
The initial development of polarization and of nationalized policy contestation had profound consequences for the American polity. In this necessarily abbreviated account, we wish to stress the changes it encouraged in what we are calling meso-institutions: interest groups, state parties, and the media. In earlier eras, these arrangements had been crucial bulwarks of the formal institutions of our Madisonian system, tending to attenuate partisan polarization. Today, they have changed in ways that instead encourage further national party polarization. As we briefly note, these changes help to account for an increasingly tribal mass politics marked by negative partisanship, which further intensifies polarization.
The Shifting Relationship Between Interest Groups and Parties
The expanding role of the national government led to a proliferation and nationalization of interest group activity. Over time, the new environment produced a second major shift: a growing inclination of powerful groups to draw closer to one or the other of the major parties. In an increasingly polarized environment, these groups faced incentives to pick a party-to try to achieve their policy goals by working with, and in support of, a durable political coalition. Rather than being a source of incentives and action that cross-cut parties and thus restricted polarization, interest groups became another factor reinforcing the divide between them.
Of course, some groups have always aligned closely with parties-organized labor's attachment to the Democrats offers a clear mid-twentieth-century example (Karol 2009, Schlozman 2015). Yet core understandings of politics emphasized the strong incentives for most groups to avoid close alignment with parties (see, e.g., Hansen 1991).
There is broad consensus today that the interest group environment has become more polarized and party-aligned (Krimmel 2017). The list of prominent groups that have moved into tighter alliance with a single political party as the nation has polarized is long. For Republicans it includes such influential organizations as the National Rifle Association (NRA) (Karol 2009), the Koch brothers network (Skocpol & Hertel-Fernandez 2016), the oil and gas industries, and major conservative Christian organizations (Schlozman 2015). The powerful US Chamber of Commerce provides a striking illustration of the broader trend. Traditionally conservative but studiously nonaligned, it now carefully coordinates its extensive electoral activities with the Republican Party, and its political director (a former GOP operative) can refer unselfconsciously to Republican Senate candidates as "our ticket" (Hacker & Pierson 2016).
The Democratic coalition has, since the New Deal, been aptly characterized as a coalition of distinct policy-demanding groups (Grossmann & Hopkins 2016). However, we contend that several key groups that had maintained substantial ties to both parties prior to the 1960s and 1970s are now more firmly tied to the Democratic Party. The list would include major civil rights, environmental (Karol 2019), and reproductive rights groups.
These growing attachments partly reflect the fact that as the parties polarized and offered clearer choices, more groups found one party's policy preferences to be a much better fit than the other's. A second factor inducing groups to ally with one party is that the previously dominant strategy of maintaining a collection of friends in both parties became far less effective. There used to be major bipartisan roads to policy making in Washington that did not run through the party leadership. Indeed, a "middle-out" strategy was often the best road to political success-a political reality that underpinned Mayhew's (1991) finding that divided government was uncorrelated with the production of major laws in the 1940s-1980s. While middle-out coalitions are not entirely obsolete (Lee & Curry 2019), party leaders now have much greater agenda control, and leaders look at which groups are solidly in their party's corner. The same incentives influence appointments to courts and regulatory bodies. This tightening party control makes a party-based alliance far more valuable for groups. The rise of negative partisanship further changes interest group incentives. For a party's core supporters, it has become increasingly suspect to work with politicians of the other party (Lee 2009). In such a setting, many groups will calculate that achieving their goals depends on helping their team win rather than reaching out to members of the other party.
There is a powerful self-reinforcing logic at play. The deeper and more intense the partisan divide, the stronger the incentives for most interest groups to join a team, and the more closely aligned groups are with parties, the stronger the incentives for them to do all they can to help their team win. As an interest group's party moves closer to its preferred policy positions (and the other party moves in the opposite direction), the stakes in the outcome of interparty conflict increase. As groups join teams, see increasing benefits of victory by their team, and thus work to ensure those victories while punishing defectors, interest group political behavior can intensify polarization rather than moderating it. Indeed, the transformed interplay between groups and parties does more than just remove one of the traditional mechanisms that limit polarization. Many of these contemporary groups are national in scope and invested in an ambitious policy agenda. They eagerly push their partisan allies to advance that agenda wherever possible (Grumbach 2018, Hertel-Fernandez 2019). Groups may also depend on member mobilization strategies that rely on the intensification of conflict. These tendencies have generated concern among analysts about a possible "hollowing out" of the parties (Schlozman & Rosenfeld 2019). Parties have contracted out voter mobilization to groups, which may also have considerable influence over fundraising and candidate recruitment. The fixation on winning elections that characterized many traditional party elites encouraged moderation. Party networks, however, increasingly lack the kind of robust organizational infrastructure that might limit extremism. Under conditions of polarization, they may cede power to groups that are more accepting of electoral risk to achieve potentially extreme ends (Hacker & Pierson 2014, Schlozman & Rosenfeld 2019).
Shifts in State Parties and Federalism
For much of its history, America's federal party system tended to act as a countervailing mechanism limiting partisan polarization. Even when the national parties were relatively polarized on a given set of issues-such as the tariff in the late nineteenth century-state and local parties provided a partially independent, geographically rooted power base to represent competing interests that cross-cut that division. For example, in the 1870s and early 1880s, Pennsylvania Democrats representing industrial districts constituted an important protariff bloc within the mostly antiprotectionist party. This bloc repeatedly allied with Republicans to frustrate their own party's efforts to lower rates, undercutting an otherwise intense cleavage (Peters 1990).
Perhaps even more importantly, the geographically decentralized party system attenuated polarization by providing a mechanism to incorporate new interests that fit uncomfortably with existing national party coalitions. From the start, American mass parties were premised as much on attempting to suppress issues that the two national parties preferred not to fight about as they were on highlighting the issues that neatly divided the parties. Martin Van Buren, for example, believed that the Democrats' cross-regional coalition would keep slavery off the political agenda (Shade 1998). Yet even as Democratic and Whig national leaders emphasized economic development issues, rank-and-file politicians attentive to local constituencies repeatedly brought the slavery issue to the fore. Much of this political work was done through third parties with distinct geographic bases (Brooks 2016), though these parties' efforts also pressured members of the two major parties to adapt. Fights over the gag rule, Wilmot Proviso, and Speakership elections, for example, repeatedly provided opportunities for the slavery issue to surface, with locally rooted party politicians often taking positions that departed from national party lines (Jenkins & Stewart 2013).
In the post-Civil War era, national party polarization on economic issues was limited by regionally based intraparty cleavages that reflected sectionally distinctive political economies. Currency policy had very different implications depending on the economic development of a particular region. State parties provided a mechanism to represent these particular interests within each party. Similarly, midwestern progressives emerged as a distinctive faction in the early-twentieth-century Republican Party. Although Nominate scores would mark this as a time of high polarization, this progressive faction capitalized on its partially independent electoral base to pursue regulatory policies that conservative leaders opposed. Put simply, the national parties lacked a veto over state party positions and over the entry of new groups into a party coalition, even when those positions and groups undermined an existing line of cleavage. This process of geographically rooted factional entry repeatedly undercut partisan polarization (Schickler 2016).
Today's state parties no longer are well situated to play this countervailing role. Instead, they are far more integrated into national party networks in which key resources are outside the control of state party leaders. Part of this story has involved strengthening formal coordinating mechanisms at the national level, such as the Democratic National Committee, Republican National Committee, and congressional campaign committees. These organizations have become more active as a source of funds and professional services for local candidates, encouraging greater coordination across states (Lunch 1987, Paddock 2005). Nomination process reforms that empower ordinary voters have also shifted influence within states from professional, locally rooted politicians to policy-oriented activists who have often focused on hot-button issues that divide the parties nationally (Paddock 2005, Layman et al. 2006, La Raja & Schaffner 2015). Meanwhile, fundraising has been nationalized. Drawing on Federal Election Commission data, Hopkins (2018) finds that the share of itemized campaign contributions that cross state lines went from 31% in 1990 to 68% in 2012. This, again, undercuts the connection between politicians' ambitions and geographically specific interests.
More generally, the nationalization of politics, including of communication networks, has made it harder for state parties and politicians to tailor their identity to local conditions. As Hopkins (2018, p. 15) notes, "state parties themselves . . . especially as voters perceive them, have increasingly come to mirror their national counterparts" (see also Caughey et al. 2018). Three empirical indicators point to this nationalization pattern.
First, state party platforms are more similar across states and more distinctive across parties than in earlier eras (Paddock 2005, 2014; Hopkins & Schickler 2016). Even when a state party finds a national position to be disadvantageous locally-such as abortion for northeastern Republicans or guns for Democrats in rural states-it is more likely to avoid the issue than to articulate a stance that departs from its national party.
Second, these platform differences are finding their way into state policy. Grumbach (2018) shows that state-level policies are increasingly polarized by party. He argues that these differences reflect responsiveness to intense demands from national groups allied with each party. Hertel-Fernandez (2019) documents how a national network of conservative interests has played an increasingly prominent role in shaping state policy agendas and legislation when Republicans gain power.
Third, the decline in state party autonomy is reflected in the increased alignment of state and national election outcomes (Hopkins 2018). Although there are occasions when a Democratic governor is elected in a red state or vice versa, the correlation between vote share across state and national offices has increased substantially in recent decades (Hopkins 2018). Studies have also shown that national-level factors are now having a bigger impact on state legislative elections than does the state party's own positioning (Zingher & Richman 2018).
As with the transformed role of interest groups, these changes in state party politics help make polarization self-reinforcing. When it became harder for state politicians to distinguish themselves from the national party brand in the eyes of voters, their incentives began to change. As Hopkins (2018, p. 6) puts it, state politicians "may well come to see their ambitions as tethered more closely to their status in the national party than their ability to cater to the state's median voter." When an issue potentially separates the state's median voter from the position of the national party, politicians' incentives to toe the national party line are likely to be stronger as voters prove less attentive to state-level differences and as the relevant audience for their behavior (interest groups, donors, etc.) becomes more nationalized. Indeed, state-level politicians will have an incentive to highlight national cleavages where their party has an advantage in their state, again furthering the polarization spiral (Hopkins 2017).
These changes have resulted in a more integrated party system. What were once relatively autonomous state and local party organizations that provided a basis for dissident factions to form and challenge national party lines now appear to "be rather small cogs" in a nationally oriented network (Paddock 2014, p. 165). Within this new more integrated system, state-level politicians find it in their interest to reinforce or even intensify existing national alignments.
Transformation of the Media
The presence of news outlets that are allied with a party or ideological cause is nothing new in American history. Nineteenth-century newspapers were, in many cases, clearly associated with parties, and often embraced sensationalistic material attacking the other party. These press outlets at times behaved in ways that exacerbated partisan and sectional divisions. Freeman (2018) shows, for example, how the press amplified the crisis of the 1850s, promoting conspiracy theories, surfacing evidence of pro-or antislavery plots, and spreading stories of sectional violence on the House floor.
Even so, the party press of the nineteenth century was not nationalized. Although more research is required on this topic, case study evidence suggests that voters in different regions who belonged to the same party did not necessarily receive the same messages about key issues. For example, as the fifty-first Congress debated the tariff and currency, GOP newspapers were divided regionally and thus provided a cross-cutting set of cues for many voters (Schickler 2001).
In addition to media outlets being more likely to follow the national party line than in the past, today's media landscape is more dominated by national news. This is primarily a matter of audiences shifting away from print newspapers and local television news sources, which, due to their geographic boundaries, continue to provide more state and local coverage than do other kinds of media sources (Hopkins 2018). The net result, however, is reduced information and engagement with state and local politics on the part of voters, which reinforces politicians' incentives to focus on national issues and cleavages. These changes may contribute to increasing the salience of nationally oriented political and social identities, in contrast to geographically rooted identities, furthering the trends toward nationalization and polarization (Darr et al. 2018, Hopkins 2018). Beyond these shifts in audience and emphasis, technological and commercial developments, such as the rise of cable news, talk radio, and social media, encourage the growth of an "outrage industry" that appeals to partisans (Berry & Sobieraj 2013). This industry has powerful incentives to intensify polarization in two respects. First, it attracts an audience by inflaming negative views about political opponents and making exaggerated claims about the political stakes involved. Second, it holds its audience by delegitimating other sources of information (Benkler et al. 2018). To the extent that a party's voters come to rely on media outlets with incentives to polarize, and increasingly treat alternative sources of information as illegitimate, polarization is likely to become more intense and durable.
The Landscape of Modern Polarization
A developmental perspective emphasizes that processes of polarization and nationalization have been deeply intertwined over the past half century. To understand polarization, many have, understandably, focused on southern realignment. Among other effects, this created more of a 50-50 nation, increasing incentives for elite partisan behavior (roll call voting, procedural hardball, etc.) to gain an advantage in the tightly contested battle for majority control (Lee 2016). The southern realignment can be viewed as the starting point for many, if not all, of the linked transformations we have discussed. It turned the GOP into a party whose base of rural evangelical whites tended to take the conservative position on a wide range of social and cultural issues, in addition to race (Schickler 2016, O'Brian 2019). We have also stressed the growth of the role of the federal government, which happened at about the same time. That growing role meant that more was at stake in politics. In response, groups got stronger and became more focused on controlling outcomes at the federal level. The mechanisms for self-reinforcement are clear. The more the parties diverged over increasingly consequential things, the more it mattered which side won. As new issues came on the agenda, the push for Republicans to take the conservative side and Democrats the liberal side became stronger. Due to the changing demographic/geographic bases of the parties and the growing incentives for both voters and groups to stay aligned with parties, guns, abortion, gay rights, and feminism all got absorbed into the party system in the same direction, with southern evangelicals who were becoming a key part of the GOP base on one side and coastal social liberals, who were increasingly likely to be Democrats, on the other (O'Brian 2019). All of this induced yet further changes in the organizational landscape and in the incentives for interest groups, state parties, and the media.
These coevolving forces have, in part through their impact on these meso-institutions, fundamentally changed the way the American polity fits together. In many cases, they did not just weaken the traditional generators of Madisonian pluralism; they transformed them into generators of intensified polarization. Interest groups and issues do not cross-cut; they stack, one on top of the other, along partisan lines. When new issues arise, existing groups, as well as politically aligned (and increasingly national) media, have incentives to push them into existing lines of cleavage (Layman et al. 2010).
Geography no longer encourages pluralism, as it often did even during what are typically characterized as highly partisan eras. Some of this may be just bad luck: Domains that used to run counter to, or orthogonal to, one another geographically (with the South conservative on race and culture, but more liberal or at least moderate on economics) no longer cross-cut. But much of it, we argue, reflects the growing forces of nationalization at work in our polity. If state party competition focuses on intrastate dynamics, it will tend to be multidimensional, distinctive across states, and a source of moderation and plausible bipartisanship at the national level. However, where media and interest groups are nationalized (and help create incentives for local parties to exploit even modest geographically based partisan inequalities), the role of geography may reverse. Nationalization puts the focus of state politics on the main national dimension, which means that even modest geographically based partisan inequalities may intensify over time (D.A. Hopkins 2017, D.J. Hopkins 2018). In time, the strong role of territorially grounded representation in the American system may come to act as an engine of polarization rather than a brake.
The Link to Mass Behavior
The meso-level changes in the organizational landscape described above are, in our view, critical to understanding how polarization could become more intense over time. These organizational changes have important ramifications not only for elite behavior in Washington and state houses across America but also for mass opinion and behavior, which-despite the well-known observation that most voters are not particularly "ideological" (in the conventional sense) or politically attentive-have also become contributors to increasing polarization.
Beyond the much-studied roll call record, arguably the facet of polarization that has been studied in greatest detail is mass-level changes in attitudes and behavior. Research in this area began with an extended debate concerning whether the mass public was, in fact, polarized, or instead, whether fierce divisions over policy were confined to a narrow segment of highly engaged activists and elites (see Abramowitz 2010, Fiorina & Abrams 2009). More recent work, however, has emphasized how even if most ordinary voters are not consistent liberals or conservatives with sharply polarized policy views, changes at the mass level have interacted with elite-level polarization in critical ways.
Indeed, growing polarization produced several dynamics at the level of voters that reinforce the initial polarization. One critical aspect has been the stacking of cleavages in a manner that encourages tribalism. In contrast to the cross-cutting cleavages that (often) attenuated the intensity of mass-level divisions in the past, social identities now line up more crisply with partisan divisions, including race, ideology, geography, religion, and education (Mason 2018). Mason argues persuasively that this alignment-and the associated degradation of cross-cutting social ties-has turned partisanship into a "mega identity." Mason suggests that social sorting is self-reinforcing. For example, as Republicans became firmly affiliated with conservative Christianity, individuals increasingly defined their partisanship in terms of religious-linked issues and images. Some even shifted their religious identity to line up with their partisan commitments (Margolis 2018). When a range of social identities all push in a single direction, it becomes much easier to see one's opponents as socially distant and deserving of hatred. Numerous studies have documented the increase in "negative partisanship" and "affective polarization" that is evident as more voters associate the other party's adherents with social groups that they dislike (Iyengar et al. 2012).
There are many interconnections between these developing mass attitudes and the changed institutional environment we have described. For now, we mention just one example of how mass polarization may short-circuit the system's traditional Madisonian features: This stacking of cleavages and growth in affective polarization may weaken the role of elections in moderating polarization. The pull of the median voter in incentivizing politicians to avoid extreme positions depends on citizens penalizing candidates when they move away from the center. But if there are fewer swing voters, this pull toward the center will weaken. 3 When an increasing share of voters see the other party as an alien force hostile to their core values, the willingness to punish one's own party's politicians for taking an extreme position will weaken accordingly. Bawn et al. (2012) argue that parties have always sought to capitalize on an electoral "blind spot" that allows them to serve intense policy demanders without alienating voters. When voters are more clearly sorted into enemy camps defined by stacked social identities, the size of this blind spot grows, weakening one of the most important self-correcting mechanisms to high polarization.
IMPLICATIONS
Conceptualizing polarization as a dynamic process has four important implications: (a) It directs our attention to the meso-institutional environment of the American polity; (b) it highlights the potential that these dynamic effects may disrupt the operation of self-correcting or countervailing processes; (c) it helps us account for and analyze asymmetrical, or party-specific, aspects of polarization; and (d) it provides a firmer analytic foundation for exploring the potential for democratic backsliding in the American polity.
The Significance of Meso-Institutions
The initial polarization of American politics triggered broad changes in major interest groups, state parties and governments, and media. These social arrangements are not formal (constitutional) rules, yet they play a crucial and often underappreciated role in mediating interactions among American citizens, among political elites, and between elites and ordinary citizens. Considering these meso-institutions is especially vital when we seek to explain large-scale political change, since the formal institutions of American politics are, for the most part, fixed. Over the past generation, changes in these meso-institutions have generally worked to intensify partisan polarization.
The Weakening of Self-Correcting Mechanisms
A striking feature of earlier periods of polarization is the extent to which meso-institutions operated as countervailing mechanisms that (often quickly) dampened the intensity and breadth of partisan warfare. Within a fragmented, pluralist polity, partisan pushes away from centrist or consensus positions, in support of more distinctive and aggressive policy agendas, have tended to generate a reaction.
Crucially, these reactions did not simply depend-as they do in many traditional (Downsian) models of party competition-on the median voter's political moderation. Instead, they worked through the meso-institutional features described above, often primarily within parties rather than (as Downs postulated) between parties through the constraints directly imposed by electoral competition. In large part because of the incentives created by institutional design, parties have been pluralistic and resistant to central direction. This pluralism has reflected the competing concerns of interest groups, geographically diverse state parties, locally embedded media, and the distinct institutionally derived interests of politicians situated in different positions within our fragmented system of political authority (Schickler 2001, 2016).
If polarization helps transform these intermediary institutions and their associated incentives, however, these self-correcting processes may cease to operate. When interest groups have strongly committed to a party and regard the stakes of party defeat as very high, they may find it prohibitively costly to push back against unwanted initiatives. State parties, operating in an increasingly nationalized system of incentives, may cease to produce the political diversity that would generate backlash. The same would be true for highly partisan media. In a transformed polity, all of these forces, which might in the past have generated dissent and signaled to voters that a party had moved to the extreme, may no longer operate in the traditional fashion.
To put the basic issue more formally, a developmental analysis of polarization points to a more complex view of causal relationships in a polity. We may see a causal relationship in which a change in X changes Y. However, we should not assume that if X later returns to a prior value, it will have the same impact on Y, because the initial change in X may have had important effects on other variables as well. A central contribution of developmental analyses is to push us to question simple, symmetrical notions of causality in which the relationship between X and Y is fixed over time and across contexts. In fact, many of the developments that have been triggered or accelerated by polarization may be very difficult to reverse.
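To state the point schematically (our notation, purely illustrative), suppose the outcome of interest depends not only on the proximate cause but also on a slowly accumulating institutional state:

\[
Y_t = f(X_t, Z_t), \qquad Z_t = g(X_{t-1}, Z_{t-1}).
\]

Because $Z_t$ stores the history of $X$, returning $X$ to an earlier value $x_0$ need not restore the earlier outcome: in general $f(x_0, Z_t) \neq f(x_0, Z_0)$ whenever the accumulated state has shifted. In our account, polarization-induced transformations of the meso-institutional environment play the role of $Z$.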
The Analysis of Asymmetrical Polarization
In recent years, Americanists have grappled with growing evidence that polarization is asymmetrical: The Republican Party has made a much more pronounced shift toward extremism (Hacker & Pierson 2005; Mann & Ornstein 2012; Grossmann & Hopkins 2016; McCarty 2019). For conventional models celebrating the political dominance of the median voter, such asymmetry constitutes a considerable puzzle.
Exploring how polarization develops over time offers traction on this puzzle in two respects. First, it clarifies why a party's movement away from the center might not be self-correcting, even if the opposition party is not moving as far or as fast. In an intensely polarized environment, where party officials, aligned interest groups, and sympathetic media have strong incentives to stay with their team, the countervailing pressures that would generate a homeostatic political process may be weak or absent. In addition, negative partisanship may thrive in this kind of environment, again limiting the force of moderating pressures.
Second, appreciating the developmental elements of polarization may provide leverage for understanding the sources of asymmetry between the parties. Many frameworks for studying American politics are institutionally "thin," focusing on the mass electorate and a set of formal institutions structuring competition between elites. These thin formulations are almost intrinsically symmetrical, since competing teams of elites operate within the same rule structure and face the same electorate. Once we incorporate meso-arrangements, including interest groups, media, and diverse state parties and governments, however, the presumption of symmetry makes less sense. Instead, we can see the possibility that different parties face different structures, and therefore their incentives and behavior may differ as well.
For reasons rooted in differences in party coalitions, ideological orientations, and historical trajectories, these meso-arrangements do in fact look quite different for the two major political parties. This is perhaps clearest with respect to the media. As Grossmann & Hopkins (2018) show, the conservative media ecosystem, which developed partly in response to the perception of mainstream media bias, created new outlets that were explicitly tied to conservative organizations and causes, and focused on discrediting alternative sources of information. Grossmann & Hopkins (2018, p. 11) note that "the strategy was self-reinforcing, as right-leaning citizens came to rely more on conservative media and become less trusting of other news sources." The media ecosystem on the right is far more isolated from the informational mainstream than that of the left (Benkler et al. 2018). Messages from conservative media sources have worked to activate the existing symbolic predispositions of their audience, insulated viewers from countervailing forms of influence, and increased viewers' vulnerability to conspiracy thinking (Muirhead & Rosenblum 2019). Recent evidence suggests that Fox News-itself just one part of the conservative media ecosystem-has in fact pushed viewers' opinions further to the right (Martin & Yurukoglu 2017). As with interest groups, partisan media on the right has become tightly intertwined with the GOP, with increasingly open coordination and exchange of personnel (Skocpol & Williamson 2013).
Although empirical research on the interest group side is less developed, these differences seem likely to exist there as well. Republican networks seem to involve fewer but very powerful groups-especially the Christian right and economic elites-with ambitious policy agendas that drive the entire party rightward. The Democratic coalition, by contrast, is made up of a wider range of interests, none of which (especially given the decline of organized labor) are large enough to dictate priorities. Thus, the interest group structure of the Democratic coalition makes it more conducive to compromise or log rolling (Grossmann & Hopkins 2016; see also Hertel-Fernandez et al. 2018).
An important contributor to the asymmetry may be the shift of political resources to the wealthy and corporations that has accompanied rising income inequality (Hacker & Pierson 2014). This shift has different effects on the two parties, accelerating polarization in the GOP while introducing an important cross-cutting cleavage for the Democrats. Like the Republicans, Democrats rely on wealthy donors, but those donors are unlikely to push the party to the left on regulatory and budgetary issues (Broockman et al. 2017). Thus, for a left-leaning party, growing inequality in economic and political power exerts a moderating influence on at least some important issues. In the case of the GOP, however, a growing concentration of economic power, mobilized by organizations like the Koch network and the Chamber of Commerce, has pushed a right-leaning party farther to the right (Skocpol & Hertel-Fernandez 2016).
In addition, a number of the Madisonian countervailing mechanisms that limited polarization in the past may remain more relevant on the Democratic side. For instance, the polarizing role of contemporary federalism that we have noted operates more weakly for Democrats, given the unfavorable geographic distribution of the party's voters. The growing concentration of Democratic voters in urban areas is, within the American electoral framework, politically inefficient (Rodden 2019). As a result, a Democratic victory in Congress requires winning red-leaning districts and states, creating an incentive to moderate and/or tolerate heterogeneity within the party. Republicans, by contrast, receive an electoral bonus from this political geography, facilitating a move to the right.
The Risks of Democratic Backsliding
Finally, exploring the dynamics of polarization may help equip Americanists to address an issue that comparativists, drawing on the experiences of other democracies, have recently brought to the fore: the prospect that the United States may be vulnerable to "democratic backsliding" (Mickey et al. 2017, Levitsky & Ziblatt 2018, Roberts 2019). Gradual democratic erosion is not a scenario that the approaches traditionally employed by Americanists are well-equipped to analyze. A developmental perspective on polarization provides a basis for seriously considering the comparativists' concerns. Backsliding-the gradual undermining of rules, norms, and pluralistic organizational arrangements that sustain open political contestation-is (as the term implies) a dynamic process. True, American politics has been polarized since the early 1990s, but many of the key features of the polity, as well as the nature of partisan competition, are now quite different. The early 1990s preceded the development of the electoral weaponry that has further nationalized American politics, from Super-PACs to Fox News. In short, the social infrastructure that can deepen animosity between the parties and diminish the prevalence or impact of countervailing pressures is far more developed than it was two decades ago.
Prominent analysts of American politics have questioned such challenges to democratic stability by emphasizing that the parties may not be as far apart as their rhetoric or roll call votes make them appear. As Lee (2009, 2016) has forcefully argued, much of what looks like polarization is better characterized as "teamsmanship." The durably close electoral balance between the two parties that distinguishes contemporary politics heightens incentives to engage in zero-sum contestation. From this perspective, conflict between the parties is often intense but not necessarily deep. Fierce jockeying for majority status obscures consensus on basic questions of government, and there is little reason to anticipate that polarization will destabilize political arrangements.
Lee is right to warn that we should not assume that fierce battles necessarily reflect deep disagreements. Yet even acknowledging the roles of close party balance and teamsmanship in accentuating partisan behavior, there is still good reason to be concerned about the intensity of the forces pulling the parties apart, the growing size of the divide between them, and the diminishing effectiveness of many of the mechanisms that would traditionally have pushed back against these developments.
Our analysis supports the view of Roberts (2019) and others that the dangers of backsliding are closely related to developments at work in the contemporary Republican Party. Powerful organized interests tightly aligned with the GOP have pushed the party rightward and have made the party vulnerable to what comparativists call bandwagoning. Bandwagoning is a process in which disparate elites within a coalition face growing incentives to go along with extremist or antidemocratic practices (Levitsky & Ziblatt 2018). A developmental perspective suggests that the prospects for bandwagoning are much greater in today's GOP than they might have been a few decades ago. Formidable, intensely partisan media have blossomed on the right. Along with the growing role of intense organized groups like the Koch network, the NRA, and organized evangelicalism, these media forces have fueled negative partisanship. What Roberts (2019) calls the "movementization" of the GOP creates new incentives for political elites to stick with their team on matters-including challenges to established norms of restraint and tolerance, the rule of law, and the integrity and autonomy of core democratic institutions-where previously they might have chosen to dissent.
There has long been a view, common among both comparativists (famously Linz 1990) and Americanists, that the US system of checks and balances with its radical dispersal of political authority constituted a formidable barrier against democratic backsliding. We have already discussed why many of the stabilizing forces that traditionally were linked to these institutions seem much weaker today. In fact, in some cases (as with federalism), these arrangements now introduce new polarizing elements. We close by noting three additional reasons for questioning the extent to which Madisonian institutions-under the conditions operating today-offer effective insurance against democratic backsliding.
First, our electoral arrangements, under current circumstances, provide an unusual gateway to power for a politician with authoritarian inclinations. Presidential elections in a context of high negative partisanship facilitate a "half of a half" strategy (Pierson 2017). Donald Trump's populist, ethnonationalist posture proved highly effective within a "movementized" GOP nomination process. Once he was the nominee, both unenthusiastic elites and skeptical Republican voters felt compelled to come aboard, given their intense dislike of the alternative in the November 2016 election. As Roberts (2019) points out, parliamentary systems are much less vulnerable to this sequence of events.
Second, the Madisonian system of territorial representation may create a powerful incentive for bandwagoning that is absent in systems lacking that feature. The stacking of cleavages in our polarized system has helped to deepen the territorial divide between the electorates of the two parties. Increasingly, elected officials find themselves facing local electorates dominated by a single party, further undercutting the effectiveness of traditional Downsian mechanisms for limiting extremism.
Finally, the emergence of hyperpartisanship means that the check on authoritarian developments in the presidency that the Madisonian system relies on most, Congress, may not work. Instead, GOP members of Congress face multiple incentives to bandwagon rather than resist. Among those incentives are the intense preferences of the party's interest groups, the heavily "red" and negatively partisan electoral bases of these politicians, and the likelihood that influential partisan media will exact a very high price for defection. Given these realities, it is perhaps unsurprising that even the most extreme and disquieting behavior in the White House fails to shake the solidarity of Republican members of Congress. In short, the developmental perspective we offer raises a disturbing prospect: Under conditions of hyperpolarization, with the associated shifts in meso-institutional arrangements and the growth of tribalism, the Madisonian institutions of the United States may make it more vulnerable to democratic backsliding than many other wealthy democracies would be.
"Physics"
] |
Hydrothermal Carbonization of Waste Biomass: A Review of Hydrochar Preparation and Environmental Application
The concept of a bio-based economy has been adopted by many advanced countries around the world, and thermochemical conversion of waste biomass is recognized as the most effective approach to achieve this objective. Recent studies indicate that hydrothermal carbonization (HTC) is a promising method for the conversion of waste biomass towards novel carbonaceous materials known as hydrochars. This cost-effective and eco-friendly process operates at moderate temperatures (180–280 °C) and uses water as a reaction medium. HTC has been successfully applied to a wide range of waste materials, including lignocellulose biomass, sewage sludge, algae, and municipal solid waste, generating desirable carbonaceous products. This review provides an overview of the key HTC process parameters, as well as the physical and chemical properties of the obtained hydrochar. It also explores potential applications of produced materials and highlights the modification and functionalization techniques that can transform these materials into game-changing solutions for a sustainable future.
Introduction
Over the years, human activities have generated an increasing amount of waste biomass. The traditional management of waste biomass via composting or disposal in open landfills causes environmental pollution, economic losses, and health problems. Despite being considered a waste, biomass is a valuable renewable resource and energy source [1–3]. Currently, different methods and technologies are being researched globally to reduce the use of fossil fuels and mitigate environmental risks such as global warming. Some methods aim to improve crude oil extraction and reservoir reconstruction, while others focus on utilizing CO₂ emissions for energy purposes [4,5]. In addition, the use of waste biomass as a sustainable resource has gained recognition due to its potential to decrease greenhouse gas emissions [6]. However, low energetic potential and stability, high ash content, hygroscopic nature, storage issues, and volatiles released during combustion impair the motivation for its utilization [7]. Significant strides have been made in the adoption of thermochemical conversion processes that can convert biomass into multifunctional products [1,7]. These highly effective processes utilize heat to transform biomass into desirable biofuels, adsorbents, or valuable chemicals (bio-oil, aldehydes, phenols, ketones, acids, and furan derivatives) [1,2,8]. Different techniques have been developed for the carbonization of biomass, including combustion, torrefaction, pyrolysis, gasification, and hydrothermal carbonization [2,8–10].
The last mentioned, HTC, is a highly effective technology for biomass utilization, making it an essential player in waste treatment and solid biofuel production. It is a cost-effective and environmentally friendly process that operates at moderate temperatures (180–280 °C) and uses water as a reaction medium (Figure 1). The HTC process offers significant advantages, including the carbonization of wet biomass without the need for drying and the absence of gas emissions due to the dissolution of oxides in process water [2,7]. During the HTC process, the raw material undergoes several reactions that include hydrolysis, dehydration, decarboxylation, aromatization, and condensation [11,12]. Temperature and residence time are the main parameters that control the HTC process and affect the structure and characteristics of obtained products [13–15]. Functionalized hydrochars suitable as pollutant adsorbents are produced at lower HTC temperatures, whereas higher temperatures promote hydrochars with enhanced fuel properties [12,14,16,17]. Reaction time shows a similar but milder influence than HTC temperature. During hydrochar formation, a longer reaction time causes yield reduction but results in a more aromatic structure [2]. Furthermore, under HTC conditions, subcritical water behaves as a non-polar solvent and facilitates the hydrolysis of organic compounds of biomass, leading to rapid depolymerization into water-soluble products [14]. Not only is water, as a solvent, an inexpensive option and biomass constituent, but it is also an environmentally friendly and non-toxic one. Moreover, carbonizing in an aqueous medium generates oxygenated functional groups on a solid hydrochar surface [18].

The process produces energy-dense, coal-like hydrochar without gas emissions or the need for feedstock drying. These features are particularly significant; they reduce the process's cost and energy consumption and increase application versatility, placing HTC ahead of conventional thermal treatments [11,15]. The derived solid residue exhibits highly hydrophobic and friable properties, which facilitate its separation from the liquid phase. It outperforms raw biomass with higher mass and energy density, better dewaterability, and improved combustion performance as a solid fuel [2,3,19,20]. Due to its characteristics, hydrochar is used for carbon sequestration, soil improvement, bioenergy production, and wastewater pollution remediation. Aside from solid hydrochar, a certain amount of process water (PW) is also created [1,2,11,15,19]. The PW can contain various polluting organic compounds in considerable amounts and can show potential ecotoxicity [19]. Generation of these dissolved organic fragments in PW can impede clean production of hydrochar from biomass and also represents the main deficiency of the HTC process. Therefore, it is necessary to additionally treat PW before its release into the environment.

This obstacle can be overcome by utilizing the obtained chemicals or by recirculating PW, thus reducing water consumption during the HTC process [15,20,21]. This review paper summarizes the impact of HTC parameters on hydrochar structure and characteristics, focusing on lignocellulosic biomass conversion. It discusses challenges in clean production and potential applications of the produced materials. This paper provides fundamental knowledge and highlights the need for sustainable, environmentally friendly carbonaceous material production.
Influence of Process Parameters
The chemical and physical properties of hydrochars produced from the same feedstock can differ considerably if some of the operating parameters are changed. Therefore, it is very important to understand the influence of each parameter to optimize the HTC process and produce the desired product (hydrochar) quality [22,23]. Apart from the set temperature and residence time, which are the most important, the characteristics of the hydrochars are influenced by other factors such as type and amount of feedstock, pressure, and presence of catalysts. Table 1 summarizes the influence of process parameters on the main hydrochar characteristics. A deeper interpretation of the impact is given in the following paragraphs.

Temperature is a crucial parameter that determines the structure and characteristics of the resulting products. It regulates ionic and radical reactions in the supercritical water region as well as the degree of precursor degradation and conversion. Higher temperatures have a significant impact on the degradation process and the number of compounds that can be hydrolyzed [2,33]. When the temperature reaches 180 °C, the degradation process of lignocellulosic biomass becomes more intense. At these operating conditions, hydrolysis of the thermally least stable hemicellulose occurs, while cellulose and lignin degradation require more intense conditions [11,34]. During the hydrolysis stage, biomass materials are degraded into smaller molecules including oligosaccharides and amino acids. These products are further dehydrated and then polymerized and condensed to form hydrochar [33]. Longer reaction times and higher temperatures also increase the intensity of these biomass transformation reactions.
In addition, a higher carbonization temperature leads to hydrochar with higher carbon content, but at the same time it reduces the yield of the solid phase due to increased degradation (Table 1) [2,20,28,32]. Moreover, elevated temperature conditions intensify the dehydration and decarboxylation reactions, thus reducing oxygen content and causing changes in the O/C and H/C atomic ratios in biomass. The result is the production of hydrochar with improved fuel properties, especially with higher HHV (Table 1) and LHV [11,20,35].
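To make the last point concrete, the short sketch below computes the O/C and H/C atomic ratios (as used in van Krevelen diagrams) and an approximate HHV from an ultimate analysis. The Dulong-type correlation and the example compositions are illustrative assumptions, not values taken from the studies cited in this review.

```python
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def atomic_ratios(c_wt, h_wt, o_wt):
    """Return (O/C, H/C) molar ratios, as used in van Krevelen diagrams."""
    c_mol = c_wt / ATOMIC_MASS["C"]
    h_mol = h_wt / ATOMIC_MASS["H"]
    o_mol = o_wt / ATOMIC_MASS["O"]
    return o_mol / c_mol, h_mol / c_mol

def hhv_dulong(c_wt, h_wt, o_wt, s_wt=0.0):
    """Approximate higher heating value in MJ/kg (Dulong-type correlation)."""
    return 0.3383 * c_wt + 1.443 * (h_wt - o_wt / 8.0) + 0.0942 * s_wt

# Hypothetical ultimate analyses (wt%), for illustration only.
samples = {"raw biomass": (45.0, 6.0, 42.0), "hydrochar": (62.0, 5.5, 25.0)}
for name, (c, h, o) in samples.items():
    oc, hc = atomic_ratios(c, h, o)
    print(f"{name}: O/C = {oc:.2f}, H/C = {hc:.2f}, HHV ~ {hhv_dulong(c, h, o):.1f} MJ/kg")
```

The hypothetical hydrochar shows the expected trend: lower O/C and H/C ratios and a higher estimated HHV than the raw biomass.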
Numerous studies have examined the influence of this process parameter (Table 1). They reveal that each constituent of biomass is individually affected under HTC conditions because their degradation originates at different temperature ranges. Summarizing the literature, temperatures from 180 to 280 °C are most often used to obtain solid hydrochar from biomass precursors. Table 1 shows selected biomasses and their carbonization temperatures. Nakason et al. [36] investigated the effect of temperature (140–200 °C) on fuel characteristics of hydrochars prepared from coconut husk and rice husk. They revealed that an increase in process temperature enhanced the degradation of biomass, leading to a decrease in hydrochar yield, volatile matter, and oxygen content, and this enhanced the carbon content and thermal stability of hydrochar. Moreover, the inorganic elements in the raw material could also significantly affect the product characteristics [36]. During thermal treatment of biomass at temperatures above 180 °C, inorganic compounds are leached from biomass constituents, primarily hemicellulose and cellulose, into the process water. This results in lower ash content in hydrochars [12,37]. Nonetheless, higher temperatures can result in the enhanced dissolution of organic compounds into water, leading to an increased inorganic concentration in the hydrochar and greater ash content [38]. Another study monitored the possibility of renewable energy generation from municipal solid food waste by HTC treatment in the temperature regime from 180 to 260 °C [26]. The obtained results indicated that the highest HHV and fixed carbon were acquired at 260 °C but with the lowest mass yield. Further, the highest solid biofuel production rate was attained at 180 °C. Summarizing all the parameters, the authors concluded that the most reliable temperature for obtaining an energy source is 225 °C. Similar results were obtained by other authors who examined the influence of temperature on the characteristics of the obtained hydrochars. Petrović et al. [12] and Mihajlović et al. [19] in separate studies revealed that temperature governs energetic potential, increased porosity, and re-adsorption ability while lowering volatiles, ash, and moisture in hydrochars obtained from grape pomace and Miscanthus × giganteus, respectively.
Lang et al. [39] studied dissolved organic matter (DOM) from hydrochars made from cow manure, corn stalk, and Myriophyllum aquaticum at three temperatures (180, 200, and 220 °C). The study found that the hydrochars' dissolved organic carbon content decreased as HTC temperature increased. On the contrary, increased HTC temperatures increased the relative proportion of aromatic substances and the humification degree of cow manure hydrochar DOM, while adversely affecting the DOM from hydrochars made from corn stalk and Myriophyllum aquaticum [39].
Pressure
During the HTC process, pressure is self-generated and primarily influenced by the initial biomass and carbonization temperature. Therefore, this parameter has no considerable impact on the process itself. Being autogenous, the pressure increases with a rise in reaction temperature. However, in addition to drawing insight into feedstock-water interactions, more knowledge about the obtained pressure is also crucial from the safety and cost aspects for designing effective equipment [23,40]. The amount of pressure that is reached depends on the amount and type of used feedstock, initial feedstock/water ratios, used reaction temperature, and residence time [40]. In pressurized HTC systems, three types of products occur during the chemical destruction of the feedstock: a solid hydrochar, water with dissolved simple organic compounds, and gases [22]. Among the gaseous products, CO₂ is the most represented and, along with water vapor, its amount affects the pressure increase [40]. Therefore, the pressure at the reaction temperature depends on the saturated water vapor pressure, the amount of partially soluble gases produced by HTC reactions, and some inert, non-soluble gas if it is added to a pre-pressurized system (such as nitrogen). Also, the pressure is higher with a decreased density of biomass and free headspace of the reactor and an increased volume of the liquid phase [23,40]. HTC systems use reaction temperatures lower than many alternative thermochemical processes, but much higher pressures (10–65 bar) are needed since combustion, pyrolysis, or gasification are performed at atmospheric pressure [40].
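Since the saturated water vapor pressure dominates the autogenous pressure, a first-order estimate at a given HTC temperature can be obtained from the Antoine equation, as in the sketch below. The Antoine constants are approximate handbook values for water above 100 °C, used here purely for illustration and not drawn from the cited studies.

```python
# Antoine constants for water, roughly valid from ~100 to ~374 °C
# (P in mmHg, T in °C); approximate handbook values, for illustration only.
A, B, C = 8.14019, 1810.94, 244.485

def saturation_pressure_bar(temp_c):
    """Saturated water vapor pressure (bar) from the Antoine equation."""
    p_mmhg = 10.0 ** (A - B / (C + temp_c))
    return p_mmhg * 0.00133322  # mmHg -> bar

for t in (180, 220, 250, 280):
    print(f"{t} °C -> ~{saturation_pressure_bar(t):.0f} bar")
```

Across the typical HTC window of 180–280 °C, this estimate spans roughly 10 to 65 bar, consistent with the pressure range quoted above.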
Although pressure is a consequent parameter, a few papers describe its influence on the HTC process. Güleç et al. [41] reported that, in addition to higher temperatures, an increase in pressure (temperature set to 250 °C while pressure is set to 50 or 240 bar) leads to a better transformation of biomass into hydrochars. However, the degree of these structural modifications depends on feedstock composition. Biomass structures richer in hemicellulose-cellulose were more affected by the changing temperature and pressure than those richer in cellulose-lignin [41]. Also, Minaret and Dutta [42] in their BET analysis evaluated that the surface area of corn husk hydrochar was reduced when raising the pressure of the HTC process (from 7.4 to 4.8 m²/g). The HHV of the sewage sludge hydrochar showed a continual decrease (from 8.0 to almost 6.5 MJ/kg) with increasing pressure (from 0.1–0.9 to 3.1–5.4 MPa), while the dewatering performance was improved [43].
Residence Time
A longer residence time increases the severity of the reaction, affecting solid recovery and forming more stable hydrochars with a polyaromatic structure. For lignocellulosic biomass, secondary hydrochar formation depends on the residence time, while non-dissolved monomers rely more on temperature [44,45]. The process of hydrochar formation is enhanced by the increase in residence time, which leads to the release of more intermediate products. A study conducted by Gao et al. [46] found that the characteristics of hydrochar from water hyacinth were influenced by the residence time, while Zhang et al. revealed that shorter residence times resulted in cracks on the hydrochar surface, while microspheres were formed after 6 h [47]. The residence time controls both polymerization and hydrolysis, and after 24 h, the formed microspheres aggregate, providing different textures of hydrochar. Furthermore, the diameter of the microspheres was also affected by the residence time [44]. However, from the results summarized in Table 1, it can be concluded that residence time shows a smaller influence on the hydrochar characteristics than temperature. Islam et al. [31] found that, during the carbonization of banana stalks, an increase in reaction time from 60 to 180 min (200 °C) decreased hydrochar yield from 61.8 to 57.8%, while HHV and fixed carbon content increased from 18.7 to 18.9 MJ/kg and from 35.0 to 44.3%, respectively. On the other hand, an increase in reaction temperature from 160 to 200 °C (180 min) in the same study significantly affected the mentioned parameters: yield was reduced from 72.8 to 57.8%, while HHV and fixed carbon were increased from 18.4 to 18.9 MJ/kg and from 22.5 to 44.3%. In conclusion, residence time must be chosen together with temperature to reach the desired degree of polymerization in the final hydrochar.
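A convenient way to compare such runs is the energy yield (mass yield multiplied by the energy densification ratio). The sketch below applies it to the yields and HHVs quoted above; the feedstock HHV of 18.0 MJ/kg is an assumed illustrative value, not a figure from the cited study.

```python
def energy_yield(mass_yield_pct, hhv_hydrochar, hhv_feedstock):
    """Energy yield (%) = mass yield x energy densification ratio."""
    return mass_yield_pct * (hhv_hydrochar / hhv_feedstock)

HHV_FEEDSTOCK = 18.0  # MJ/kg, assumed illustrative value for the raw stalks

# (mass yield %, hydrochar HHV MJ/kg) quoted above from Islam et al. [31]
runs = {
    "200 C, 60 min": (61.8, 18.7),
    "200 C, 180 min": (57.8, 18.9),
    "160 C, 180 min": (72.8, 18.4),
}
for label, (y, hhv) in runs.items():
    print(f"{label}: energy yield ~ {energy_yield(y, hhv, HHV_FEEDSTOCK):.1f}%")
```

Under this assumption, extending the residence time at 200 °C trades a few percent of energy yield for a slightly more carbonized product, mirroring the milder influence of time relative to temperature noted above.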
Catalyst
Catalysts, categorized as organic and inorganic, can speed up a chemical reaction in hydrothermal carbonization and improve hydrochar properties. Their addition promotes a reduction in reaction temperature, enhances hydrolysis, upgrades denitrogenation and deoxygenation, increases hydrochar yield, and functionalizes the hydrochar. An organic catalyst (organic acids, alcohols, etc.) is a type of organic compound that can initiate or accelerate a chemical reaction. Organic acids dissociate in water, creating an acidic solvent that significantly enhances reaction rates [48]. Citric acid is a safe and inexpensive organic acid catalyst that promotes biomass transformation during the HTC process by hydrolysis of biopolymers, dehydration, and carbonization, which enhances hydrochar formation and increases its carbon content [38]. Sarrion et al. revealed that increasing the concentration of citric acid from 0.1 to 0.5 M during HTC of dewatered waste-activated sludge significantly increases the carbon content and mass yield of hydrochar [49]. Furthermore, the inclusion of citric acid during the carbonization of sludge provokes the formation of additional acids, such as formic or acetic acid. These acids interact with mineral compounds, impacting hydrolysis, dehydration, condensation, and polymerization processes, making the process autocatalytic [49]. Further, citric acid may introduce more functional groups into the carbon skeleton and remove some minerals and organic groups in the biomass feedstock, leading to rougher and more porous structures [50,51]. Faradilla et al. [52] showed that the addition of citric acid during the HTC treatment of cellulose nanofiber and softwood pulp increased the diameter of formed carbon spheres. The addition of citric acid as a catalyst significantly enhances the hydrolysis of cellulose into soluble oligomers and glucose. These molecules then undergo subsequent dehydration, condensation, and polymerization processes, ultimately resulting in the formation of carbon spheres [52]. Another acid that affects the hydrothermal carbonization process and hydrochar properties is acetic acid. Although this catalyst increased thermal stability and carbon content, it gave a lower hydrochar yield than other catalysts. Thus, acetic acid may be better suited for reducing yield via fragmentation reactions than for promoting polymerization. Citric acid's structure leads to more carbon content during HTC, and its three acid functionalities enhance the dissolution of inorganic materials in the feedstock more than acetic acid's single functionality. In addition to organic acids, a protic solvent (methanol, ethanol, and other lower-molecular-weight alcohols) has hydrogen bound to electronegative atoms like oxygen or nitrogen. It can participate in hydrogen bonding and hydrogen donation, promoting dehydration reactions and improving hydrochar from high-protein and carbohydrate feedstocks [38].
The addition of inorganic acidic reagents improves hydrochar properties by leaching inorganic compounds into process water, providing acidic mineral conditions, removing ash from hydrochar, depolymerizing cellulose, and enhancing hydrolysis and dehydration. Inorganic catalysts include strong mineral acids and bases, metal chlorides, sulfate and nitrate salts, metal oxides, and hydrogen peroxide. Strong mineral acids improved the fuel properties of hydrochar and enhanced nutrient release and solubilization of nitrogen and phosphorus, enhancing porosity while lowering volatile matter content during HTC, making the hydrochar more stable. Wilk et al. [25] investigated the addition of sulfuric acid as a catalyst during the carbonization of sewage sludge. The authors revealed that the specific surface area was significantly increased upon the addition of the catalyst as a result of pronounced degradation and transformation of the feedstock. Furthermore, the addition of the catalyst improves the migration of elements, so increases in zinc, copper, and lead oxides and decreases in chromium and nickel compounds in hydrochars were noticed, while an abundance of phosphorus, magnesium, calcium, and zinc was observed in post-processing water. Strong mineral bases, like CaO, raise the hydrochar yield and ash content while reducing the organic matter and polycyclic aromatic hydrocarbon content, while NaOH can reduce sulfur dioxide and nitrogen oxide emissions during hydrochar combustion as well as hydrochar moisture diffusivity [2,53]. Metal chlorides impact hydrochar morphology and surface properties, lower the starting temperature, catalyze dehydration and decarboxylation, and enhance the thermal characteristics and combustion properties by facilitating furfural derivative formation and further pseudo-lignin structure development [53,54].
In general, studies show that the feedstock type and HTC temperature have the most significant effects on hydrochar properties [1–3,7]. Thus, optimization of the process is crucial in achieving characteristic products for specific applications based on the precursor biomass and the end use of the material.
Solid Fuel
The valorization of waste biomass as a renewable energy source and raw material for biofuel production has become one of the goals of many advanced countries that have adopted the concept of a bio-based economy. HTC is widely recognized as a highly suitable method for the improvement of biomass fuel properties and the preparation of novel biofuels. Hydrochar shows superior fuel performance, increased carbon content and energy density, lower ash and volatiles, better reactivity, dewaterability, and material stability, and overall improved fuel properties in comparison to raw biomass. In addition, produced hydrochars exhibit comparable or better properties than those of commercial coal and lignite. An important feature of the HTC process is the possibility of leaching inorganic elements from the starting biomass [12,39]. This results in a reduction in the ash content in the obtained hydrochars. Petrović et al. [12] revealed that carbonization of grape pomace at 200 °C causes a reduction in ash content in the produced hydrochar (from 6.48% in grape pomace to 3.55% in hydrochar) due to partial leaching of inorganics (K, Mg, Ca, Fe, Si, P) into process water. Lower ash content is a highly recommended characteristic for solid fuels since an increased amount of specific inorganic elements (Si, K, Na, S, Cl, P, Ca, Mg, and Fe) in fuels and biomass can cause emission issues or corrosion, clogging, fouling, and/or clinker formation in combustors during direct combustion [7,17]. The abovementioned problems increase maintenance costs and reduce the combustion efficiency of the fuel. Raw biomass has a highly volatile nature, resulting in inefficient combustion and increased GHG emissions compared to coal. Results from the literature indicate that HTC decreases the volatile content in biomass [12,39,55]. However, Ischia et al. [56] reported that, during HTC of municipal solid waste at 180 °C, volatile matter increases from 80.7% to 82.2%. This result was unexpected since lignocellulose starts to hydrolyze at 180 °C. A possible cause is the accumulation of leached volatile compounds from process water onto the formed hydrochar surface [57]. Wang et al. [58] found that volatile adsorption was most prominent during HTC treatment of sludge at 180 °C, while higher HTC temperatures reduced the peak of the hydrochar devolatilization rate. The authors suggest that HTC of biomass above 220 °C leads to the devolatilization of biomass macromolecules and the creation of more stable forms of hydrochars. This could have a positive impact on the environment by improving the production and use of biomass-based products as solid fuel. According to the authors, during HTC, the carbonization of biomass begins with hydrolysis, which leads to the devolatilization of biomass macromolecules into oligomers and monomers. These fragments undergo subsequent degradation mechanisms, including dehydration and decarboxylation, followed by aromatization, which results in the formation of a carbon-rich solid. These degradation reactions occur simultaneously during HTC, leading to the formation of hydrochars with a lower volatile content compared to their corresponding biomasses [2,12,44,45].
Adsorbent of Pollutants from Aqueous Solutions
Among other potential applications, hydrochars show a promising ability to remove different pollutants from aqueous solutions. Their structure, abundant in different oxygenated functional groups, proved to be very suitable for additional functionalization and surface area modification using various physical and chemical processes. For this purpose, various modification methods, including alkali (KOH, NaOH), acidic (H₃PO₄, HCl), metal salt (ZnCl₂, MgCl₂, FeCl₃, K₂CO₃), and polymerization treatments, have been adopted so far [45,56–61]. KOH, as a commonly used activator, cleans partly blocked pores and incorporates novel ions onto hydrochar surfaces [58]. In addition, alkali treatment increases surface area due to the removal of organic fragments from prepared hydrochars, while acidic treatment affects the mesoporous structure. Moreover, the suggested treatments incorporate new functional groups onto hydrochar surfaces and provide more sites for the binding of selected pollutants [1,3,45]. Appropriate modification provides the preparation of novel, effective sorbent materials with tailored structures and advanced performance toward the removal of a wide range of contaminants.
Heavy Metals
Over the years, industrial water contaminated with heavy metals has represented a serious threat to the environment and humans. With the increasing development of industrialization and urbanization, the continuous discharge of pollutants like heavy metals, organic dyes, pesticides, and other organic compounds into river watercourses has become a huge concern. These pollutants come from industrial and mining wastewater, such as that from the textile industry, electroplating, paper production, leather tanning, and food technology [45,59–61]. These toxic and non-biodegradable pollutants tend to accumulate in plant and animal tissues, as well as in the human body, causing significant harm that includes damage to the central nervous system, dermatitis, cancer, kidney dysfunction, and damage to the liver and reproductive organs [61–63]. Therefore, the development of efficient and economical methods for the removal of heavy metals and other contaminants from industrial wastewater before discharge into watercourses becomes essential. In order to overcome the shortcomings of traditional purification methods, the application of new sorbents obtained from renewable sources, such as hydrochars, has recently been extensively investigated [1,3]. Although they exhibit low surface area and porosity, the chemically active functional groups on hydrochars' surfaces (ketones, COOH groups, and hydroxyl) provide satisfying adsorption capability [57]. Numerous studies so far have tested the ability of hydrochars from different precursors to remove heavy metals from aqueous solutions. Moreover, to increase capacity towards selected pollutants, a number of modification methods were applied. Table 2 summarizes the application of hydrochars obtained from different precursors as heavy metal sorbents, while Figure 2 shows potential binding mechanisms.

Qin et al. investigated phosphate-modified poplar sawdust hydrochar as a potential adsorbent for Pb(II) ions [64]. FTIR analysis revealed that P-containing groups were involved in Pb(II) removal by surface complexes, while aromatic structures participated in cation-π interaction. Qin et al. [64] also show that the -COOH in the P-hydrochar can more easily adsorb Pb(II) compared to the -OH group. The achieved adsorption capacity for Pb(II) ions was 119.61 mg/g (Table 2), and the main mechanisms include precipitation, π-π interaction, and complexation. Examining the effect of the modification on the hydrochar adsorption capacity, Petrović et al. [61] revealed that KOH-modified grape pomace hydrochar achieves a five times better lead removal capacity (137 mg/g) than unmodified grape pomace hydrochar (27.8 mg/g) (Table 2). Alkali modification caused different structural changes that included the incorporation of oxygenated functional groups onto the hydrochar's surface, binding of K⁺ ions that participate in ion exchange reactions with heavy metals, and the cleaning of partially blocked pores. All the mentioned structural modifications provided more active sites for the binding of Pb²⁺ ions and thus enabled significantly better adsorption. Adsorption of selected ions was achieved through the ion exchange mechanism, chemisorption, and Pb(II)-π interaction, while the Sips isotherm model gave the best fit with the experimental data [61]. KOH-modified hydrochar from Sedum alfredii Hance also showed a higher adsorption performance (17 times) towards Cd(II) ions in comparison to pristine hydrochar [65]. Cd(II) ions were also effectively removed from an aqueous solution by using magnetic watermelon seed hydrochar [66], while Mn(II) ions were treated by a magnetic hydrochar nanocomposite from pineapple leaves (Table 2) [67]. Moreover, alkali modification using NaOH proved to be an effective route towards the production of highly effective sorbents from a hydrochar derived from paulownia leaves for Pb(II) ion removal. Mechanisms responsible for the binding include complexation and/or Pb-π electron interaction [68].
In order to provide environmental sustainability, Kim et al. simultaneously removed Cu(II) and Cr(VI) ions using an N-doped hydrochar derived from corncob [69]. The hydrochar was modified by NH₄Cl, which contributed to a better adsorption ability (1.223 mmol/g for Cu(II) and 1.995 mmol/g for Cr(VI)) than that of the pristine hydrochar. Findings from the infrared spectra suggest that the redox reaction caused an increase in the number of deprotonated imine groups on the surface of the N-doped hydrochar, which could provide additional binding sites for Cu(II), while under pH < pHpzc, the N-doped hydrochar possessed a greater adsorption affinity toward Cr(VI) than Cu(II) due to electrostatic attraction [69]. In addition, a protonated amino-modified bamboo hydrochar was prepared through the interaction of bamboo hydrochar with acryloyl chloride, amine, and hydrochloric acid and employed to adsorb Cr(VI) ions [70]. The authors stated that the incorporation of amine groups onto the hydrochar surface was pivotal for the high adsorption capacity (523.57 mg/g) due to the elimination of pollutants by electrostatic interaction.
A novel cost-effective sawdust hydrochar composite (MgSi-HC) was used to remove Cu(II) and Zn(II). The modification provides a large specific surface area and a well-developed pore structure, while adsorption isotherms showed that the maximum adsorption capacity of MgSi-HC for Cu(II) and Zn(II) was 214.7 mg/g and 227.3 mg/g, respectively (Table 2) [71]. The adsorption mechanism suggested that electrostatic interaction, hydrogen bonding, π-π interaction, and pore filling were involved in the adsorption process [71]. A composite hydrochar, a calcium-pyro-hydrochar from spent mushroom substrate, was tested as a functional sorbent of Pb(II) and Cd(II) ions from aqueous solutions. According to the Freundlich isotherm model, the maximum sorption capacities of the Ca-modified hydrochar for Pb(II) and Cd(II) were 297 mg/g and 131 mg/g, respectively (Table 2), while the binding was achieved by the ion exchange mechanism, surface complexation, mineral precipitation, and cation-π interaction [72].
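Capacities such as these are typically extracted by fitting isotherm models to equilibrium data. The sketch below fits the Freundlich and Sips models mentioned in this section; the data points are synthetic placeholders, not measurements from the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(ce, kf, n):
    """Freundlich isotherm: qe = KF * Ce^(1/n)."""
    return kf * ce ** (1.0 / n)

def sips(ce, qm, ks, n):
    """Sips isotherm: qe = qm * (Ks*Ce)^n / (1 + (Ks*Ce)^n)."""
    return qm * (ks * ce) ** n / (1.0 + (ks * ce) ** n)

# Synthetic equilibrium data (Ce in mg/L, qe in mg/g), placeholders only.
ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
qe = np.array([40.0, 65.0, 100.0, 125.0, 140.0, 148.0])

(kf, n_f), _ = curve_fit(freundlich, ce, qe, p0=(10.0, 2.0), maxfev=10000)
(qm, ks, n_s), _ = curve_fit(sips, ce, qe, p0=(150.0, 0.05, 1.0), maxfev=10000)

print(f"Freundlich: KF = {kf:.2f}, n = {n_f:.2f}")
print(f"Sips: qm = {qm:.1f} mg/g, Ks = {ks:.4f} L/mg, n = {n_s:.2f}")
```

The fitted qm of the Sips model corresponds to the maximum capacities reported in Table 2, which is why it is the figure most often quoted when comparing sorbents.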
Dyes
Along with heavy metals, dyes represent an additional frequent group of industrial pollutants. Even at low concentrations, most dyes are serious contaminants and pose a significant threat to all living organisms due to their high toxicity, carcinogenicity, and non-biodegradable effects [1,2]. Textile industries use various synthetic dyes that can degrade water bodies and enter the food chain, posing a threat to aquatic organisms. Therefore, it is essential to remove dyes and purify industrial water before discharging to prevent environmental contamination. Previous studies confirmed that hydrochar can be considered for adsorbing these molecules from aqueous systems.
Table 3 summarizes the application of hydrochars obtained from different precursors as dye sorbents, while potential binding mechanisms are shown in Figure 3. A functional hydrochar from olive waste was prepared by HTC at 250 °C and tested for removal of Methylene blue (MB) and Congo red (CR). NMR analysis revealed that the hydrochar produced at 250 °C exhibited pronounced functionality, especially in carbonyl and carboxylic functional groups, compared to the raw olive waste. Almost 100% dye removal was achieved within 120 min (MB) and 180 min (CR), due to the presence of carbonyl and carboxylic functional groups on the hydrochar surface [73]. The removal of MB was also investigated using oxone-treated pine wood hydrochar and KHCO₃-modified hydrochar from industrial laundry sludge [74,75]. The highest adsorption capacities for MB sourced from the separate studies were 86.7 mg/g and 808.83 mg/g (Table 3) [74,75]. Pine wood hydrochar showed a substantial increase in carboxylic content upon oxidation. This surface modification provides more sites for pollutant binding and, thus, a higher adsorption capacity [74]. Camilo et al. [75] revealed that an increased surface area (3005.57 m²/g) upon activation was the reason for such a high adsorption capacity. In the study of Petrović et al. [60], it was pointed out that Mg-doped pyro-hydrochars prepared from waste grape pomace, corn cobs, and Miscanthus × giganteus can be used as efficient MB sorbents with capacities of 289.65 mg/g, 262.30 mg/g, and 232.48 mg/g, respectively (Table 3). It was confirmed that magnesium (Mg) was bound to the surface of the hydrochars, contributing to ion exchange with MB ions. Further analysis pointed to the complex mechanism involved in dye removal (hydrogen bonding, the π-π interaction between aromatic groups of the Mg-hydrochar surface and MB, electrostatic interaction, and surface complexation) [60]. In addition, Kozhadi et al.
[76] investigated the removal of Rhodamine B from an aqueous solution by an Fe-modified wheat straw hydrochar. Furthermore, magnetic watermelon seed hydrochar-grafted chitosan was successfully prepared and applied for the adsorption of malachite green [77]. The optimal parameters for reaching a capacity of 420.02 mg/g were pH 7.5, 420 min, and an adsorbent dose of 20 mg at 298 K [77]. Bimetal organic framework (NiFe-MOF) incorporation on sugarcane bagasse hydrochar provides a dual 3-D morphological structure of the material with numerous functional groups and high removal ability towards Crystal violet dye (395.9 mg/g) [78]. Modification results in a material with a 2.3 times higher surface area enriched in carboxylic and metal-carboxylate groups, displaying high thermal stability and adsorption through chemisorption [78]. Similarly, Crystal violet was adsorbed using a NaOH-activated sugarcane bagasse hydrochar. Cold alkali modification was chosen to increase oxygenated functional groups and enhance material porosity by removing pore blockages. The authors report that the modified hydrochar has a porous structure with microsphere-like particles, functional groups, strong π-π interaction, and high thermal stability [79]. The knowledge from several studies indicates that removal of cationic dyes is achieved by a complex mechanism involving electrostatic attraction between positively charged MB amino groups and negatively charged functional groups on hydrochar surfaces, hydrogen bonding, π-π dispersion interactions between the aromatic hydrochar structure and MB rings, surface complexation, and the ion exchange mechanism (Figure 3). In general, studies have shown that hydrochar can be an effective material for dye removal, regardless of the mechanism of adsorption. Hydrochars can easily be further tailored and modified to specifically remove a targeted dye. This carbonaceous material exhibits great potential for treating wastewater that contains dyes, particularly from the textile industry. However, more research is required to determine the remedial effectiveness of hydrochars in wastewater with multiple contaminants as well as their recyclability.
Pharmaceuticals
Pharmaceuticals and personal care products, which include antibiotics, analgesics, antidepressants, and more, are a growing concern in many countries due to their classification as emerging organic pollutants [80]. These substances are widely used for the improvement of human and animal health and daily life. In addition, a significant portion (30% to 90%) ends up in domestic sewage through urine, feces, and baths since it cannot be absorbed by humans or animals [80]. For this reason, pharmaceuticals are frequently detected in wastewater. To avoid their adverse impacts, it is necessary to remove them. So far, different methods (precipitation, barrier separation, and adsorption) have been used for this purpose [64,80,81]. Currently, hydrochars obtained by the HTC process are being examined as effective adsorbents for this group of pollutants. The application of different hydrochars and the achieved adsorption capacities for pharmaceutical removal are shown in Table 4. Delgado-Moreno et al. [81] compared the adsorption capability of unmodified biochars and a hydrochar from olive oil waste for diclofenac and ibuprofen removal (Table 4) and revealed that the hydrochar had a higher adsorption capacity despite a lower surface area. Due to its pronounced functionality, the hydrochar removed 68% of diclofenac and 43% of ibuprofen, while chemisorption mechanisms governed the adsorption capacity. In addition, an unmodified hydrochar from green tea waste produced at 200 °C also demonstrated removal of ibuprofen [82]. Lowering the pH increases ibuprofen adsorption due to electrostatic interactions. Adsorption remains stable after six regeneration cycles [82]. Along with unmodified hydrochars, Qin et al. [64] demonstrated ciprofloxacin adsorption behavior by utilizing a phosphate-modified poplar sawdust hydrochar. According to the Hill model, an achieved adsorption capacity of 98.38 mg/g was influenced by hydrogen bonding, pore filling, and electrostatic attraction between the hydrochar's surface and the selected antibiotic. In addition, NaOH activation was applied to a sugarcane bagasse hydrochar to provide a valuable sorbent for tetracycline removal [79]. Higher NaOH concentrations lower the mass ratio, and longer activation times increase hydroxyl functional groups, pores, and surface area, leading to increased tetracycline removal [79]. A batch adsorption test using an immobilized bamboo hydrochar was performed for the investigation of potential paracetamol removal from water. An adsorption capacity of 48.12 mg/g (Table 4) was estimated during homogeneous interaction between the pharmaceutical and the beads' surface [83]. In addition, Hayoun et al. [84] prepared a high-performance hydrochar from loquat cores and examined its potential for the removal of diclofenac, antipyrine, and prednisolone (Table 4). The prepared hydrochar was modified using citric acid, H₃PO₄, and HCl. The results showed that treatment with 3 M citric acid was significantly better compared to the other two agents, with a removal efficiency of 95.88% for diclofenac, 76.21% for antipyrine, and 80.29% for prednisolone. Treatment with 3 M citric acid increases the surface area from 0.0048 to 19.2261 m²/g and provides a porous structure abundant in oxygenated active sites. The structural changes caused by citric acid treatment improved the hydrochar's pollutant removal efficiency [84].
In general, based on the knowledge from numerous previous studies, it can be concluded that the mechanisms responsible for the removal of pollutants from aqueous solutions using hydrochars include the ion exchange mechanism, hydrogen bonding, surface complexation, strong π-π interaction, and electrostatic interaction. Figure 4 shows the aforementioned interactions.
Conclusions
HTC represents an efficient thermochemical transformation of wet biomass into valuable products, hydrochars, suitable for various applications such as solid biofuels, adsorbents, carbon sequestration agents, and soil remediators/conditioners. This article summarizes the knowledge about hydrothermal process variables that influence hydrochar formation (temperature, pressure, residence time, catalyst) and different methods of tailoring hydrochar surfaces for specific environmental applications. Considerable strides have already been made in comprehending the hydrothermal conversion process, hydrochar formation mechanisms, and the crucial structural characteristics of hydrochar. The performance of hydrochars is governed by their specific physical, chemical, and structural characteristics. In addition, hydrochars can be easily adapted to particular applications since their surface is highly susceptible to modification and improvement.
Considering the activation of hydrochars' structure, acquiring additional knowledge of tailoring the surface and its regeneration is crucial in promoting the stability and utilization of hydrochars as sorbents of pollutants from aqueous environments in large-scale applications.
Figure 1. Schematic representation of the HTC process.
Figure 2. Potential interaction between hydrochar surface and heavy metals.
Figure 4. Potential interaction between hydrochar surface and pharmaceuticals.
Table 1. Influence of process parameters on selected characteristics of hydrochars produced from different precursors.
Table 2. Application of different hydrochars as heavy metal sorbents.
Table 3. Application of different hydrochars as dye sorbents.
Table 4. Application of different hydrochars as pharmaceutical sorbents.
"Environmental Science",
"Chemistry"
] |
Convexity properties of the condition number
We define, in the space of n by m matrices of rank n, n less than or equal to m, the condition Riemannian structure as follows: for a given matrix A, the tangent space at A is equipped with the Hermitian inner product obtained by multiplying the usual Frobenius inner product by the inverse of the square of the smallest singular value of A, denoted sigma_n(A). When this smallest singular value has multiplicity 1, the function A -> log(sigma_n(A)^(-2)) is a convex function with respect to the condition Riemannian structure, that is, t -> log(sigma_n(A(t))^(-2)) is convex in the usual sense for any geodesic A(t). In a more abstract setting, a function alpha defined on a Riemannian manifold (M, <,>) is said to be self-convex when log alpha(gamma(t)) is convex for any geodesic gamma in (M, <,>). Necessary and sufficient conditions for self-convexity are given when alpha is C^2. When alpha(x) = d(x,N)^(-2), where d(x,N) is the distance from x to a C^2 submanifold N of R^j, we prove that alpha is self-convex when restricted to the largest open set of points x where there is a unique closest point in N to x. We also show, using this more general notion, that the square of the condition number ||A||_F / sigma_n(A) is self-convex in projective space and the solution variety.
Introduction
Let two integers 1 ≤ n ≤ m be given and let us consider the space of matrices K^{n×m}, K = R or C, equipped with the Frobenius Hermitian product ⟨A, B⟩_F = trace(B*A). Given an absolutely continuous path A(t), a ≤ t ≤ b, its length is given by the integral ∫_a^b ‖Ȧ(t)‖_F dt, and the shortest path connecting A(a) to A(b) is the segment connecting them. Consider now the problem of connecting these two matrices with the shortest possible path staying, as much as possible, away from the set of "singular matrices", that is, the matrices with non-maximal rank.
The singular values of a matrix A ∈ K^{n×m} are denoted in non-increasing order: σ_1(A) ≥ σ_2(A) ≥ · · · ≥ σ_n(A) ≥ 0. We denote by GL_{n,m} the space of matrices A ∈ K^{n×m} with maximal rank: rank A = n, that is, σ_n(A) > 0, so that the set of singular matrices is N = K^{n×m} \ GL_{n,m} = {A ∈ K^{n×m} : σ_n(A) = 0}.
Since the smallest singular value of a matrix is equal to its distance to the set of singular matrices, σ_n(A) = d(A, N), given an absolutely continuous path A(t), a ≤ t ≤ b, we define its "condition length" by the integral L_κ(A) = ∫_a^b ‖Ȧ(t)‖_F σ_n(A(t))^{-1} dt. A good compromise between length and distance to N is obtained in minimizing L_κ. We call "minimizing condition geodesic" an absolutely continuous path, parametrized by arc length, which minimizes L_κ in the set of absolutely continuous paths with given end-points, and condition distance d_κ(A, B) between two matrices the length L_κ of a minimizing condition geodesic with endpoints A and B, if any. In this paper our objective is to investigate the properties of the smallest singular value σ_n(A(t)) along a condition geodesic. Our main result says that the map log(σ_n(A(t))^{-1}) is convex. Thus σ_n(A(t)) is log-concave, and its minimum value along the path is reached at one of the endpoints.
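To make the definition concrete, a minimal numerical sketch follows: it approximates L_κ for a matrix path sampled at times t_0 < t_1 < ... < t_K, using the smallest singular value at each step. The discretization, step count, and midpoint rule are illustrative choices, not from the paper.

```python
import numpy as np

def condition_length(path, ts):
    """Approximate the condition length L_kappa of a discretized matrix path.

    path: array of shape (K+1, n, m), samples A(t_k) of the path
    ts:   array of shape (K+1,), the sample times t_k
    """
    total = 0.0
    for k in range(len(ts) - 1):
        # Finite-difference approximation of the velocity A'(t) on [t_k, t_{k+1}]
        dA = (path[k + 1] - path[k]) / (ts[k + 1] - ts[k])
        speed = np.linalg.norm(dA, "fro")
        # Smallest singular value at the midpoint matrix (= distance to N)
        mid = 0.5 * (path[k] + path[k + 1])
        sigma_n = np.linalg.svd(mid, compute_uv=False)[-1]
        total += speed / sigma_n * (ts[k + 1] - ts[k])
    return total

# Example: the straight segment between two full-rank 2x3 matrices
rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 2, 3))
ts = np.linspace(0.0, 1.0, 201)
segment = np.array([(1 - t) * A + t * B for t in ts])
print(condition_length(segment, ts))
```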
Note that a similar property holds in the case of hyperbolic geometry where, instead of K^{n×m}, we take R^{n−1} × [0, ∞[, instead of N we have R^{n−1} × {0}, and where the length of a path a(t) = (a_1(t), . . . , a_n(t)) is defined by the integral ∫ ‖da(t)/dt‖ a_n(t)^{-1} dt.
Geodesics in that case are arcs of circles centered at R^{n−1} × {0} or segments of vertical lines, and log(a_n(t)^{-1}) is convex along such paths. The approach used here to prove our theorems is heavily based on Riemannian geometry. We define on GL_{n,m} the following Riemannian structure: ⟨M, N⟩_{κ,A} = σ_n(A)^{-2} Re⟨M, N⟩_F, where M, N ∈ K^{n×m} and A ∈ GL_{n,m}. The minimizing condition geodesics defined previously are clearly geodesics in GL_{n,m} for this Riemannian structure, so that we may use the toolbox of Riemannian geometry. In fact things are not so simple: the smallest singular value σ_n(A) is a locally Lipschitz map in GL_{n,m}, and it is smooth on the open subset GL^>_{n,m} = {A ∈ GL_{n,m} : σ_{n−1}(A) > σ_n(A)}, that is, when the smallest singular value of A is simple. On the open subset GL^>_{n,m} the metric ⟨·, ·⟩_κ defines a smooth Riemannian structure, and we call "condition geodesics" the geodesics related to this structure. Such a path is not necessarily a minimizing geodesic. Our first main theorem establishes a remarkable property of the condition Riemannian structure: Theorem 1. σ_n^{-2} is logarithmically convex on GL^>_{n,m}, i.e., for any geodesic curve γ(t) in GL^>_{n,m} for the condition metric, the map log(σ_n^{-2}(γ(t))) is convex.
Problem 1. The condition Riemannian structure ⟨·, ·⟩_κ is defined in GL_{n,m}, where it is only locally Lipschitz. Let us define condition geodesics in GL_{n,m} as the extremals of the condition length L_κ (see for example [3], Chapter 4, Theorem 4.4.3, for the definition of such extremals in the Lipschitz case). Is Theorem 1 still true for GL_{n,m}? All the examples we have studied confirm that convexity holds, even if σ_n^{-1}(γ(t)) fails to be C^1; see Boito–Dedieu [2]. We intend to address this issue in a future paper.
In a second step we extend these results to other spaces of matrices: the sphere S_r(GL^>_{n,m}) of radius r in GL^>_{n,m} in Corollary 6, and the projective space P(GL^>_{n,m}) in Corollary 7. We also consider the case of the solution variety of the homogeneous equation Mζ = 0, that is, the set of pairs {(M, ζ) ∈ K^{n×(n+1)} × K^{n+1} : Mζ = 0}. Now our function α is the square of the condition number studied by Demmel in [4]. This is done in the affine context in Theorem 3 and in the projective context in Corollary 8.
Since σ_n(A) is equal to the distance from A to the set of singular matrices, a natural question is to ask whether our main result remains valid for the inverse of the distance from certain sets, or for more general functions. Given a Riemannian manifold (M, ⟨·, ·⟩) and a positive C^2 function α on M, we equip the tangent space at each x ∈ M with the new inner product ⟨·, ·⟩_{κ,x} = α(x) ⟨·, ·⟩_x, called the condition Riemann structure; the resulting Riemannian manifold is denoted M_κ. We say that α is self-convex when log α(γ(t)) is convex for any geodesic γ in M_κ.
For example, with M = {x = (x_1, . . . , x_n) ∈ R^n : x_n > 0} equipped with the usual metric, α(x) = x_n^{-2} is self-convex. The space M_κ is the Poincaré model of hyperbolic space.
In the following theorem we prove self-convexity for the distance function to a C^2 submanifold without boundary N ⊂ R^j. Let us denote by ρ(x) = d(x, N) the distance from x to N, and let α(x) = ρ(x)^{-2}. Let U be the largest open set in R^j such that, for any x ∈ U, there is a unique closest point in N to x. When U is equipped with the new metric α(x)⟨·, ·⟩ we have: Theorem 2. The function α is self-convex on U \ N. Theorem 2 is then extended to the projective case. Let N be a C^2 submanifold without boundary of P(R^j). Let us denote by d_R the Riemannian distance in projective space (points in the projective space are lines through the origin, and the distance d_R between two lines is the angle they make). Let us denote d_P = sin d_R (this is also a distance), define α(x) = d_P(x, N)^{-2}, and let U be the largest open subset of P(R^j) such that for x ∈ U there is a unique closest point from N to x for the distance d_P. Then we have: Corollary 1. The function α is self-convex on U \ N. The extension of Theorem 1 and Theorem 2 to other types of sets or functions is not obvious. In Example 1 we prove that α(A) = σ_1(A)^{-2} + · · · + σ_n(A)^{-2} is not self-convex in GL_{n,m}.
In Example 2 we take N ⊂ R^2 the unit circle, and U the unit disk, so that U contains a point (the center) which has many closest points in N. In that case the corresponding function α : U \ N → R is self-convex, but it fails to be smooth at the center of the disk.
In Example 3 we provide an example of a subset N ⊂ R^2 such that the corresponding function α(x) = d(x, N)^{-2} fails to be self-convex near points with more than one closest point in N. Our interest in considering the condition metric in the space of matrices comes from recent papers by Shub [8] and Beltrán–Shub [1], where these authors use condition length along a path in certain solution varieties to estimate step size for continuation methods to follow these paths. They give bounds on the number of steps required in terms of the condition length of the path. If geodesics in the condition metric are followed, the known bounds on polynomial system solving are vastly improved. To understand the properties of these geodesics, we have begun in this paper with linear systems, where we can investigate their properties more deeply. We find self-convexity in the context of this paper remarkable. We do not know if similar issues may naturally arise in linear algebra, even for solving systems of linear equations. Similar issues do clearly arise when studying continuation methods for the eigenvalue problem.
Self-convexity
Let us start by recalling some basic definitions about convexity on Riemannian manifolds. A good reference on this subject is Udrişte [9].
A function f : M → R defined on a Riemannian manifold M is said to be convex when f(γ_{xy}(t)) ≤ (1 − t) f(x) + t f(y) for every x, y ∈ M, for every geodesic arc γ_{xy} joining x and y, parametrized so that γ_{xy}(0) = x and γ_{xy}(1) = y, and for every 0 ≤ t ≤ 1.
The convexity of f in M is equivalent to the convexity, in the usual sense, of f ∘ γ_{xy} on [0, 1] for every x, y ∈ M and every geodesic γ_{xy} joining x and y, or also to the convexity of f ∘ γ for every geodesic γ ([9] Chap. 3, Th. 2.2). Thus, we see that Lemma 1. Self-convexity of a function α : M → R is equivalent to the convexity of log ∘ α in the condition Riemannian manifold M_κ.
When f is a function of class C^2 on the Riemannian manifold M, we define its second derivative D²f(x) as the second covariant derivative. It is a symmetric bilinear form on T_xM (see [9]). This second derivative depends on the Riemannian connection on M. Since M is equipped with two different metrics, ⟨·, ·⟩ and ⟨·, ·⟩_κ, we have to distinguish between the corresponding second derivatives; they are denoted by D²f(x) and D²_κ f(x) respectively. No such distinction is necessary for the first derivative Df(x).
Convexity on a Riemannian manifold is characterized by the second derivative (see [9] Chap. 3, Th. 6.2): Proposition 1. A C^2 function f : M → R is convex if and only if D²f(x) is positive semidefinite for every x ∈ M. We use this proposition to obtain a characterization of self-convexity: α is self-convex if and only if the second derivative D²_κ(log ∘ α)(x) is positive semidefinite for any x ∈ M_κ. We get Proposition 2. For a function α : M → R of class C^2 with positive values, self-convexity is equivalent to 2α(x) D²α(x)(ẋ, ẋ) + ‖Dα(x)‖²_x ‖ẋ‖²_x − 4 (Dα(x)ẋ)² ≥ 0 for any x ∈ M and for any vector ẋ ∈ T_xM, the tangent space at x.
Proof. Let x ∈ M be given. Let ϕ : R^m → M be a coordinate system such that ϕ(0) = x, with first fundamental form g_ij(0) = δ_ij (Kronecker's delta) and Christoffel symbols Γ^i_{jk}(0) = 0. Such coordinates are called "normal" or "geodesic"; note that this implies ∂g_ij/∂z_k(0) = 0 for all i, j, k. We denote by g_{κ,ij} and Γ^i_{κ,jk} respectively the first fundamental form and the Christoffel symbols for ϕ in M_κ. Let us compute them. Note that g_{κ,ij} = α g_ij, so that, at the origin, Γ^i_{κ,jk}(0) = (1/(2α)) (δ_ij ∂α/∂z_k + δ_ik ∂α/∂z_j − δ_jk ∂α/∂z_i). The second derivative of the composition of two maps is given by the identity (see [9] Chap. 1.3, Hessian) D²(ψ ∘ f)(x) = ψ''(f(x)) Df(x) ⊗ Df(x) + ψ'(f(x)) D²f(x). This gives in our context, that is, when f = α and ψ = log, D²_κ(log ∘ α)(x)(ẋ, ẋ) = D²_κ α(x)(ẋ, ẋ)/α(x) − (Dα(x)ẋ)²/α(x)². According to Proposition 1, our objective is now to give a necessary and sufficient condition for D²_κ(log ∘ α)(x) to be positive semidefinite for each x ∈ M. In our system of local coordinates, the components of D²_κ α(x) are (see [9] Chap. 1.3) (D²_κ α(x))_{jk} = ∂²α/∂z_j∂z_k − Σ_i Γ^i_{κ,jk} ∂α/∂z_i. If we replace the Christoffel symbols in this last sum by the values previously computed, the cases j = k and j ≠ k are subsumed in the identity Σ_{i,j,k} Γ^i_{κ,jk} (∂α/∂z_i) ẋ_j ẋ_k = (1/(2α)) (2 (Dα(x)ẋ)² − ‖Dα(x)‖²_x ‖ẋ‖²_x). Putting together all these identities gives the expression D²_κ(log ∘ α)(x)(ẋ, ẋ) = (1/(2α(x)²)) (2α(x) D²α(x)(ẋ, ẋ) + ‖Dα(x)‖²_x ‖ẋ‖²_x − 4 (Dα(x)ẋ)²), so that positive semidefiniteness is equivalent to 2α(x) D²α(x)(ẋ, ẋ) + ‖Dα(x)‖²_x ‖ẋ‖²_x − 4 (Dα(x)ẋ)² ≥ 0 for any x ∈ M and for any vector ẋ ∈ T_xM. This finishes the proof.
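As a sanity check on the criterion of Proposition 2 (as reconstructed above), the following sketch evaluates the quantity 2α D²α(ẋ, ẋ) + ‖Dα‖²‖ẋ‖² − 4(Dα ẋ)² by finite differences for the self-convex example α(x) = x_n^{-2} of the Poincaré model. The step size, tolerance, and test points are arbitrary choices.

```python
import numpy as np

def self_convexity_defect(alpha, x, xdot, h=1e-5):
    """Evaluate 2*alpha*D2alpha(xdot,xdot) + |Dalpha|^2|xdot|^2 - 4*(Dalpha.xdot)^2
    with central finite differences; self-convexity requires this to be >= 0."""
    n = len(x)
    grad = np.array([(alpha(x + h * e) - alpha(x - h * e)) / (2 * h)
                     for e in np.eye(n)])
    # Second derivative of alpha in the direction xdot (second difference)
    d2_dir = (alpha(x + h * xdot) - 2 * alpha(x) + alpha(x - h * xdot)) / h ** 2
    a = alpha(x)
    return 2 * a * d2_dir + grad @ grad * (xdot @ xdot) - 4 * (grad @ xdot) ** 2

alpha = lambda x: x[-1] ** (-2)          # Poincare half-space weight
rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.uniform(0.5, 2.0, size=3)    # points with x_n > 0
    v = rng.standard_normal(3)
    print(self_convexity_defect(alpha, x, v) >= -1e-4)   # True up to FD error
```

Analytically the defect equals 4 x_n^{-6}(‖v‖² − v_n²) ≥ 0 here, so the checks pass up to discretization noise.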
An easy consequence of Proposition 2 is the following. See also Example 3.
Corollary 2. If a function α : M → R of class C^2 is self-convex, then any critical point of α has a positive semidefinite second derivative D²α(x). Such a function cannot have a strict local maximum or a non-degenerate saddle.
Proposition 3. The following condition is equivalent for a C^2 function α = 1/ρ² : M → R to be self-convex on M: for every x ∈ M and ẋ ∈ T_xM, ρ(x) D²ρ(x)(ẋ, ẋ) + (Dρ(x)ẋ)² ≤ ‖Dρ(x)‖²_x ‖ẋ‖²_x, or, what is the same, D²ρ²(x)(ẋ, ẋ) ≤ 2 ‖Dρ(x)‖²_x ‖ẋ‖²_x. Proof. Note that Dα = −2ρ^{-3} Dρ and D²α = 6ρ^{-4} Dρ ⊗ Dρ − 2ρ^{-3} D²ρ. Hence, the necessary and sufficient condition of Proposition 2 reads 4ρ^{-6} (‖Dρ(x)‖²_x ‖ẋ‖²_x − (Dρ(x)ẋ)² − ρ(x) D²ρ(x)(ẋ, ẋ)) ≥ 0, and the proposition follows.
Corollary 3. Each of the following conditions is sufficient for a function α = 1/ρ² : M → R to be self-convex at x ∈ M: for every ẋ ∈ T_xM, D²ρ(x)(ẋ, ẋ) ≤ 0, or, for every ẋ ∈ T_xM, D²ρ²(x)(ẋ, ẋ) ≤ 0. Indeed, by the Cauchy–Schwarz inequality (Dρ(x)ẋ)² ≤ ‖Dρ(x)‖²_x ‖ẋ‖²_x, so either condition implies the criterion of Proposition 3. In the following proposition we obtain a weaker condition on α giving convexity in M_κ instead of self-convexity: Proposition 4. A C^2 function α : M → R with positive values is convex in M_κ if and only if 2α(x) D²α(x)(ẋ, ẋ) + ‖Dα(x)‖²_x ‖ẋ‖²_x − 2 (Dα(x)ẋ)² ≥ 0 for any x ∈ M and any vector ẋ ∈ T_xM.
Proof. We follow the lines of the proof of Proposition 2 with ψ equal to the identity map instead of ψ = log.
3 Some general formulas for matrices

Proposition 5. Let A = (Σ, 0) ∈ GL^>_{n,m}, where Σ = diag(σ_1 ≥ · · · ≥ σ_{n−1} > σ_n) ∈ K^{n×n}. The map σ_n : GL^>_{n,m} → R is a smooth map and, for every U ∈ K^{n×m}, Dσ_n(A)U = Re(U_{nn}) and D²σ²_n(A)(U, U) = 2 ‖u_n‖² + 2 Σ_{k<n} |σ_n U_{kn} + σ_k Ū_{nk}|² / (σ_n² − σ_k²), where u_n denotes the n-th row of U. Proof. Since σ²_n is an eigenvalue of AA* with multiplicity 1, the implicit function theorem proves the existence of smooth functions σ²_n(B) ∈ R and u(B) ∈ K^n, defined in an open neighborhood of A and satisfying BB*u(B) = σ²_n(B) u(B), ‖u(B)‖ = 1, u(A) = e_n. Differentiating these equations at B = A gives the stated formulas for any U ∈ K^{n×m}. Corollary 4. Let A = (Σ, 0) ∈ GL^>_{n,m}, where Σ = diag(σ_1 ≥ · · · ≥ σ_{n−1} > σ_n > 0) ∈ K^{n×n}. Let us define ρ(A) = σ_n(A)/‖A‖_F. Then, for any U ∈ K^{n×m} such that Re⟨A, U⟩_F = 0, we have Dρ²(A)U = Dσ²_n(A)U / ‖A‖²_F, and the first assertion of the corollary follows from Proposition 5. For the second one, note that h = h_1/h_2 (for real valued C^2 functions h, h_1, h_2 with h_2(0) ≠ 0) implies h_2 D²h = D²h_1 − 2 Dh ⊗ Dh_2 − h D²h_2, and D²σ²_n(A)(U, U) is known from Proposition 5. The formula for D²ρ²(A) follows after some elementary calculations.
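The first-derivative formula of Proposition 5 (as reconstructed above) is easy to test numerically; the sketch below compares a finite-difference derivative of σ_n along a random direction with Re(U_nn) for a matrix of the form A = (Σ, 0). Sizes, singular values, and step are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 5
sigma = np.array([4.0, 2.5, 1.0])                      # sigma_{n-1} > sigma_n
A = np.hstack([np.diag(sigma), np.zeros((n, m - n))])  # A = (Sigma, 0)

U = rng.standard_normal((n, m))
h = 1e-6
# Central finite difference of sigma_n along the direction U
s = lambda B: np.linalg.svd(B, compute_uv=False)[-1]
fd = (s(A + h * U) - s(A - h * U)) / (2 * h)

print(fd, U[n - 1, n - 1])   # both agree: D sigma_n(A)U = Re(U_nn)
```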
The affine linear case
We consider here the Riemannian manifold M = GL^>_{n,m} equipped with the usual Frobenius Hermitian product. Let α : GL^>_{n,m} → R be defined as α(A) = 1/σ²_n(A). Corollary 5. The function α is self-convex in GL^>_{n,m}. Proof. From Proposition 3, it suffices to see that σ_n(A) D²σ_n(A)(U, U) + (Dσ_n(A)U)² ≤ ‖Dσ_n(A)‖²_F ‖U‖²_F. Since unitary transformations are isometries in GL^>_{n,m} with respect to the condition metric, we may suppose, via a singular value decomposition, that A = (Σ, 0) ∈ GL^>_{n,m}, where Σ = diag(σ_1 ≥ · · · ≥ σ_{n−1} > σ_n) ∈ K^{n×n}. Now, the inequality to verify is obvious from Proposition 5, as ‖Dσ_n(A)‖_F = 1 and σ_n(A) D²σ_n(A)(U, U) + (Dσ_n(A)U)² = ½ D²σ²_n(A)(U, U) ≤ ‖u_n‖² ≤ ‖U‖²_F. Corollary 6. Let r > 0. The function α is self-convex in the sphere S_r(GL^>_{n,m}) of radius r in GL^>_{n,m}. Proof. It is enough to prove that any geodesic in (S_r(GL^>_{n,m}), α) is also a geodesic in (GL^>_{n,m}, α). Indeed, suppose that A and B are matrices in S_r(GL^>_{n,m}) and the minimal geodesic in (GL^>_{n,m}, α) between A and B is X(t), a ≤ t ≤ b. Then we claim that L_κ(rX(t)/‖X(t)‖_F) ≤ L_κ(X(t)): for any t, the radial projection X ↦ rX/‖X‖_F scales σ_n by r/‖X‖_F and does not increase the norm of the velocity relative to this scaling, so the integrand of L_κ does not increase. Therefore X(t) can only be a minimizing geodesic if it belongs to S_r(GL^>_{n,m}). Since all geodesics are locally minimizing geodesics, Corollary 6 follows.
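Corollary 5 can also be probed numerically: by Proposition 3 with ρ = σ_n, self-convexity needs D²σ_n²(A)(U, U) ≤ 2‖Dσ_n(A)‖²_F ‖U‖²_F = 2‖U‖²_F. The sketch below tests this with a finite-difference second derivative of σ_n² at random full-rank matrices; sizes, step, tolerance, and trial count are arbitrary choices.

```python
import numpy as np

def second_dir_diff(f, A, U, h=1e-4):
    # Central second difference of f at A in the direction U
    return (f(A + h * U) - 2 * f(A) + f(A - h * U)) / h ** 2

sn2 = lambda B: np.linalg.svd(B, compute_uv=False)[-1] ** 2

rng = np.random.default_rng(3)
for _ in range(5):
    A = rng.standard_normal((3, 5))
    U = rng.standard_normal((3, 5))
    lhs = second_dir_diff(sn2, A, U)
    rhs = 2 * np.linalg.norm(U, "fro") ** 2
    print(lhs <= rhs + 1e-3)   # True: the criterion of Proposition 3 holds
```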
The following gives an example of a smooth and non-self-convex function in GL_{n,m}. Example 1. The function α(A) = σ_1(A)^{-2} + · · · + σ_n(A)^{-2} is not self-convex in GL_{n,m}. Proof. For simplicity we consider the case of real square matrices; the verification consists in exhibiting a matrix and a direction violating the criterion of Proposition 2.

The projective case

The matter of this subsection is mainly taken from Gallot–Hulin–Lafontaine [6], sect. 2.A.5.
Let V be a Hermitian space of complex dimension dim_C V = d + 1. We denote by P(V) the corresponding projective space, that is, the quotient of V \ {0} by the group C* of dilations of V; P(V) is equipped with its usual smooth manifold structure, with complex dimension dim P(V) = d. We denote by p the canonical surjection.
Let V be considered as a real vector space of dimension dim_R V = 2d + 2 equipped with the scalar product Re⟨·, ·⟩_V. The sphere S(V) is a submanifold in V of real dimension 2d + 1. This sphere, equipped with the induced metric, becomes a Riemannian manifold and, as usual, we identify the tangent space at z ∈ S(V) with T_z S(V) = {u ∈ V : Re⟨u, z⟩_V = 0}. The projective space P(V) can also be seen as the quotient S(V)/S¹ of the unit sphere in V by the unit circle in C for the action given by (λ, z) ∈ S¹ × S(V) → λz ∈ S(V). The canonical map, denoted by p_V, is the restriction of p to S(V).
The horizontal space at z ∈ S(V) related to p_V is defined as the (real) orthogonal complement of ker Dp_V(z) in T_z S(V). This horizontal space is denoted by H_z. Since V is decomposed in the (real) orthogonal sum V = Rz ⊕ Riz ⊕ z^⊥, and since ker Dp_V(z) = Riz (the tangent space at z to the circle S¹z), we get H_z = z^⊥. There exists on P(V) a unique Riemannian metric such that p_V is a Riemannian submersion, that is, p_V is a smooth submersion and, for any z ∈ S(V), Dp_V(z) is an isometry between H_z and T_{p(z)}P(V). Thus, for this Riemannian structure, one has ⟨Dp_V(z)u, Dp_V(z)v⟩_{p(z)} = Re⟨u, v⟩_V for any z ∈ S(V) and u, v ∈ H_z. Proposition 6. Let z ∈ S(V) be given. 1. The map u ∈ H_z ↦ p(z + u) ∈ P(V) is a local diffeomorphism at 0. 2. Its derivative at 0 is the restriction of Dp(z) to H_z, which is an isometry.
The following result will be helpful. Proposition 7. Let p : M → B be a Riemannian submersion, and let α : B → R be positive and of class C². If α ∘ p is self-convex in M, then α is self-convex in B. Corollary 7. The function α₂ induced on P(GL^>_{n,m}) by A ↦ 1/σ²_n(A) is self-convex. Proof. Note that p : S(GL^>_{n,m}) → P(GL^>_{n,m}) is a Riemannian submersion and α₂ ∘ p = α, where α is as in Corollary 6. The corollary follows from Proposition 7.
The solution variety.
Let us denote by p₁ and p₂ the canonical maps p₁ : S₁ → P(K^{n×(n+1)}) and p₂ : S₂ → P^n(K), where S₁ is the unit sphere in K^{n×(n+1)} and S₂ is the unit sphere in K^{n+1}. Consider the affine solution variety Ŵ^> = {(M, ζ) ∈ K^{n×(n+1)} × K^{n+1} : M ∈ GL^>_{n,n+1} and Mζ = 0}. It is a Riemannian manifold equipped with the metric induced by the product metric on K^{n×(n+1)} × K^{n+1}. The tangent space to Ŵ^> is given by T_{(M,ζ)}Ŵ^> = {(Ṁ, ζ̇) : Ṁζ + Mζ̇ = 0}. The projective solution variety considered here is W^> = {(p₁(M), p₂(ζ)) ∈ P(K^{n×(n+1)}) × P^n(K) : M ∈ GL^>_{n,n+1} and Mζ = 0}, which is also a Riemannian manifold equipped with the metric induced by the product metric on P(K^{n×(n+1)}) × P^n(K).
The statement in the affine context is Theorem 3: the square of the condition number, regarded as a function on Ŵ^>, is self-convex, which is a consequence of our Proposition 5.
6 Self-convexity of the distance from a submanifold of R^j

Let N ⊂ R^j be a C^k submanifold without boundary, k ≥ 2. Let us denote by ρ(x) = d(x, N) = inf_{y∈N} ‖x − y‖ the distance from x ∈ R^j to N (here d(x, y) = ‖x − y‖ denotes the Euclidean distance). Let U be the largest open set in R^j such that, for any x ∈ U, there is a unique closest point from N to x. This point is denoted by K(x), so that we have a map K : U → N defined by ρ(x) = d(x, K(x)).
Classical properties of ρ and K are given in the following (see also Foote [5], Li and Nirenberg [7]).
Proposition 8. Let x ∈ U \ N. Then: 1. ρ² is of class C^k on U and ρ is of class C^k on U \ N; 2. Dρ(x) = (x − K(x))/ρ(x), so that ‖Dρ(x)‖ = 1; 3. K is differentiable on U; 4. D²ρ²(x)(ẋ, ẋ) = 2 ‖ẋ‖² − 2 ⟨DK(x)ẋ, ẋ⟩, and ⟨DK(x)ẋ, ẋ⟩ ≥ 0 for every ẋ ∈ R^j. The last claim follows because K(x) minimizes y ↦ ½ d(x, y)² on N, whose second derivative at the minimizer is positive semidefinite by the second order optimality condition.
Proof of Theorem 2 and Corollary 1. We are now able to prove our second main theorem. Let us denote α(x) = 1/ρ(x)². We shall prove that α is self-convex on U \ N. From Proposition 3 it suffices to prove that, for every ẋ ∈ R^j, 2 ‖ẋ‖² ‖Dρ(x)‖² ≥ D²ρ²(x)(ẋ, ẋ), or, according to Proposition 8.4 and ‖Dρ‖ = 1, that 2 ‖ẋ‖² ≥ 2 ‖ẋ‖² − 2 ⟨DK(x)ẋ, ẋ⟩. This is obvious from Proposition 8.4. Now we prove Corollary 1. Let S₁(R^j) be the sphere of radius 1 in R^j and let p_{R^j} denote the canonical projection p_{R^j} : R^j → P(R^j). Note that the preimage of N by p_{R^j} satisfies d(y, p⁻¹_{R^j}(N)) = d_P(p_{R^j}(y), N) ‖y‖.
As in the proof of Corollary 6, the mapping 1/ρ(x)² is self-convex in the set S₁(R^j) ∩ p⁻¹_{R^j}(U). Now, apply Proposition 7 to the Riemannian submersion p_{R^j} to conclude the corollary.
Two examples. Example 2. Take U the unit disk in R² and N the unit circle. The corresponding function is given by α(x) = (1 − ‖x‖)^{-2} for x ∈ U. According to Theorem 2, the map log α(x) is convex along the condition geodesics in U \ {(0, 0)} = {x ∈ R² : 0 < ‖x‖ < 1}.
Example 3. Take N ⊂ R² equal to the union of the two points (−1, 0) and (1, 0). In that case α(x) = d(x, N)^{-2}, and along the vertical axis α(0, t) = (1 + t²)^{-1}. It may be shown that for any 0 < a ≤ 1/10, the straight line segment is the only minimizing geodesic joining the points (0, −a) and (0, a). Since log α(0, t) = −log(1 + t²) has a maximum at t = 0, the map g(t) = log α(0, t), −a ≤ t ≤ a, cannot be convex. Here {0} × R is the locus in R² of points equally distant from the two nodes, which is the set we avoid in Theorem 2. | 5,674.2 | 2008-06-02T00:00:00.000 | [
"Mathematics"
] |
A hybrid algorithm for clinical decision support in precision medicine based on machine learning
Purpose The objective of the manuscript is to propose a hybrid algorithm combining the improved BM25 algorithm, k-means clustering, and the BioBert model to better rank biomedical articles in the PubMed database, so that more of the retrieved articles contain information closely matching a query about a specific disease. Design/methodology/approach In the paper, a two-stage information retrieval method is proposed to improve text ranking. The first stage employs the improved BM25 algorithm to assign scores to biomedical articles in the database and identify the 1000 publications with the highest scores. The second stage employs a cluster-based abstract extraction method to reduce article abstracts to the input constraints of the BioBert model, after which the BioBert-based document similarity matching method is used to obtain the search outcomes most similar to the retrieval morphemes. To support reproducibility, the code is available at https://github.com/zzc1991/TREC_Precision_Medicine_Track. Findings The experimental study is conducted on the TREC2017 and TREC2018 data sets to train the proposed model, with the TREC2019 data used as a validation set, confirming the effectiveness and practicability of the proposed algorithm for clinical decision support in precision medicine and its generalizability. Originality/value This research integrates multiple machine learning and text processing methods into a hybrid method applicable to specific medical literature retrieval. The proposed algorithm improves P@10 by 3% over the state-of-the-art algorithm on TREC 2019.
Introduction
Precision medicine is a new medical paradigm that integrates modern scientific and technological means with conventional medical methods by detailing human bodily functions and the nature of diseases scientifically, thus optimizing systematically the principles and practices of human disease prevention and health care to eventually maximize both individual and social health benefits with more effective, safer, and more economical medical services [1,2]. In precision medicine, diagnostic methods are appropriately selected for each patient to realize minimal iatrogenic damage, minimum medical costs, and optimal patient recovery [3,4]. Besides, utilizing both genomic profiles and healthcare data sources of patients to a large extent leads to personalized treatments [5]. Hence, the clinical system adopting this new approach mainly pays attention to all types of useful information regarding genes, microbiomes, environmental conditions, family history, and lifestyles of patients to pick precise diagnoses and therapeutic alternatives that individually result in better treatments [6]. In other terms, precision medicine is considered a tool that could be used for several purposes such as predictive, preventive, personalized, and participatory healthcare service utilizing all available data sources such as genetics, omics, and patients' history [7].
Precision medicine has been covering various areas ranging from drug discovery, design, and development, the analysis of drug sensitivity in pharmacology, and the construction of clinical decision support systems in health analytics to a better understanding of several diseases and their relationships with genes, family history, and other attributable factors in medicine [8][9][10][11].
With the advancement of medical technologies, the number of biomedical articles has grown exponentially, so finding relevant articles matching the symptoms of a patient in massive article databases becomes increasingly difficult. For example, when just "precision medicine" is entered in the search bar of the Science Direct database, 229,126 articles are found. Therefore, extracting both useful and practical insights from such an immense collection requires finely devised methods and approaches.
Information retrieval (IR) plays a significant role in precision medicine and refers to the process and technology to organize and access information according to the requirements of users. The main goal of information retrieval is to obtain the required information as accurately, quickly, and comprehensively as possible. Moreover, since data accumulation grows sharply, big data-based crunching and modeling have been gaining momentum, especially after 2008 [12]. Hence, more precise, and refined outcomes could be potentially reached by employing finely devised methods or algorithms.
Even though the BM25 algorithm is the first and most widely used algorithm on which better text-ranking algorithms are built, most BM25 variants only consider abstracts and ignore the possible search morphemes and their co-occurrence relationships that could be found in chemicals, MeSH terms, and keywords. Zhang [13] proposed an improved BM25 algorithm that computes three scores, for the vocabulary, co-word, and expanded word, leading to a composite retrieval function whose parameters are optimized by the cuckoo optimization algorithm and which retrieved better search outcomes. The model was trained on the 2017 dataset. The results showed that the trained parameters produced improvements in the search results when both the 2018 and 2019 datasets were used, so this research provided a reference for parameter selection for the BM25 algorithm. Several of the available algorithms utilize the BM25 algorithm as the first step of a search algorithm and then employ a deep learning model to obtain more accurate matchings. Besides, it should be kept in mind that the effect of deep learning models depends on how well the models are trained; similarity results can therefore be highly affected by the method employed in the first stage. Consequently, the improved BM25 used in the first stage helps attain better search results in the proposed algorithm.
This manuscript builds on the improved BM25 approach to pick the 1000 highest-scoring articles in PubMed and applies a clustering algorithm to split each abstract into N clusters so as to meet the input constraints of the pre-trained BioBERT model, generating better text-ranking results from search terms for diseases, genes, and individual traits. Similarity-matching results are then obtained by running the BioBERT model, which calculates the similarity between the article abstract/title and the retrieval morpheme as a score. Because the input length of the BERT model is restricted to 512 tokens (words or characters) per article abstract, negative samples for the training data set are generated to improve the training effect.
The motivation of the research is to propose a hybrid algorithm consisting of a two-stage information retrieval method based on the improved BM25 algorithm, k-means clustering, and the BioBert model to better determine the biomedical articles most relevant to specific diseases, genes, and individual traits.
The sections of the article are organized as follows: Section "Related work" presents the related works. Section "Method" describes the improved BM25 algorithm and the proposed stages, called document similarity matching and cluster-based abstract extraction. Section "The proposed method and its implementation" describes the proposed method with a flow chart and its execution details, including the data structure and the negative training sample generation method. Section "Experimental results" describes the experimental comparison between the proposed algorithm and the selected algorithm presented in Track 2020, as well as the data and parameters used by the proposed algorithm. Section "Summary and future work" concludes the research.
Preliminary
In this subsection, we present a brief overview of the development of text retrieval. The Boolean model constitutes the original information search model; it was used for information retrieval as early as 1957 and is a simple retrieval model based on set theory and Boolean algebra, whose basic idea is to represent the query of a user and a document by a set of words. Then, the similarity of the two sets is determined by Boolean operations. Moreover, the Boolean model is a keyword-matching type of information retrieval, that is, documents containing the keywords in a query will be retrieved. However, there usually exists a low correlation between the retrieved results and the target. In some research fields, weighting the index terms has been shown to greatly improve the retrieval results, which has led to the development of vector models [14,15].
BM25 and its modified versions, which are characterized by conventional probabilistic models employing the two-Poisson approximation of the term-frequency distribution, have been long effective tools in text ranking and the BM25 algorithm is generally used to compare the performance of the newly introduced models [16,17]. Besides, typical vector models include the term frequency-inverse document frequency (TF-IDF) approach and the BM25 model have been widely studied based on this approach. As a result, the emergence of vector models has substantially increased the relevance of retrieved documents to the retrieval target and led to the concepts of document scoring and ranking [18][19][20].
With the advancement of machine learning algorithms in recent years, several ranking algorithms have been developed by aiming at better ranking the texts in the search of matching the query with the most relevant articles. Besides, when machine learning algorithms are implemented, more automatic processes are expected to attain better outcomes. Learning-to-rank methods are generally classified into three categories according to the training methods: pointwise, pairwise, and listwise [21][22][23]. In the pointwise method, each document in the training set is treated as a separate sample, which is essentially a single-document classification and regression problem. Some widely implemented pointwise algorithms include Prank [24], McRank [25], and Rank-Prop [26]. In the pairwise method, document pairs with different labels for the same query in the training set are trained as one sample. Based on two documents with different labels, the ranking problem is finally transformed into a binary classification problem. Some broadly utilized algorithms include the rank boost algorithm [27] and the frank algorithm [28]. In the listwise method, the entire document sequence is taken as a sample, and the evaluation of the information retrieved is optimized by defining a loss function. Some widely conducted research includes ListNet [29], SVMMAP [30], and the ADA rank algorithm [31].
When machine learning algorithms are implemented, the pre-training process contributes to their success [32][33][34]. A pre-trained language representation approach, called BERT (a multilayer bidirectional transformer encoder stack), was proposed by [35], and BERT's performance was found to be better than the available approaches in the literature. Park et al. [36] used a bidirectional encoder representation from transformers (BERT) classifier to train retrieved articles and word vectors to represent medical articles. The studies were ranked according to similarity scores between query semantic elements and the article. The results showed that the accuracy was greatly improved over existing algorithms. Pan et al. [19] combined patient health records with biomedical articles and used three methods to expand the phrases used in queries, and the experimental results showed that the proposed model yielded a promising average weighted accuracy, better stability, and applicability. Maciej et al. [37] investigated the effectiveness of a BERT-based ranking model on different platforms. The results verified the accuracy of the BERT model for precision medicine too. Bayesian networks were introduced into query expansion and probabilistic models to expand query semantic elements and increase query accuracy [9]. Two types of BERT models, BERT BASE and BERT LARGE, are available [38]. Some articles covering various related modifications of BERT can be found in [39][40][41][42].
BioBert model
With the implementation of the BioBERT model [43][44][45][46], natural language processing tasks extract better relations and generate more accurate outcomes. Instead of pre-training on generic data sets, BioBert requires domain-derived data sets to perform well; otherwise, poor performance would be expected. The BioBERT model is used for various improvement purposes. For example, the identification of functional links between proteins has recently been conducted by fine-tuning weights from BioBERT [44]. Besides, several research manuscripts have reported better outcomes when the BioBERT model is implemented [47][48][49][50] in the literature.
Baseline algorithm
Our baseline algorithm employs the improved BM25 algorithm previously proposed by the author. To ensure the integrity of the paper, the fundamental aspects of the improved BM25 algorithm are revisited [13]. First, we defined the abstract score Score_abs(Q, d) = Σ_i IDF(q_i) · f_i (k_1 + 1) / (f_i + k_1 (1 − b_1 + b_1 · dl/avgdl)), where IDF(q_i) is the inverse document frequency of the search morpheme q_i, k_1 and b_1 are the adjustment factors, which are usually set according to the experience of users, and f_i is the frequency of q_i in d. The IDF for a particular word is obtained by dividing the total number of documents by the number of documents containing the searched word and then taking the logarithm of the quotient. dl is the text length of document d, and avgdl is the average text length of all documents. We propose a wordlist to combine the chemical words, MeSH headings, and keywords of a retrieved document, and the wordlist score is defined analogously as Score_wl(Q, d) = tfw (k_1 + 1) / (tfw + k_1 (1 − b_1 + b_1 · dwl/avgdwl)), where tfw is the sum of the IDF values of each retrieved morpheme in the wordlist, and k_1 and b_1 are adjustment factors, which are usually set according to the experience of users. dwl is the number of words in the wordlist of document d, and avgdwl is the average number of words in the wordlists of all documents.
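As an illustration of the BM25-style abstract scoring described above, the following sketch computes scores for tokenized documents. The defaults k1 = 1.2 and b = 0.75, the tokenization, and the function name are illustrative choices, not parameters from the paper.

```python
import math
from collections import Counter

def bm25_abstract_scores(query_terms, docs, k1=1.2, b=0.75):
    """docs: list of token lists; returns one BM25 score per document."""
    n_docs = len(docs)
    avgdl = sum(len(d) for d in docs) / n_docs
    # Document frequency of each query term
    df = {q: sum(q in d for d in docs) for q in query_terms}
    scores = []
    for d in docs:
        tf = Counter(d)
        dl = len(d)
        s = 0.0
        for q in query_terms:
            if df[q] == 0:
                continue
            idf = math.log(n_docs / df[q])   # IDF as described in the text
            fi = tf[q]
            s += idf * fi * (k1 + 1) / (fi + k1 * (1 - b + b * dl / avgdl))
        scores.append(s)
    return scores

docs = [["melanoma", "braf", "v600e", "inhibitor"],
        ["breast", "cancer", "brca1", "therapy"]]
print(bm25_abstract_scores(["melanoma", "braf"], docs))
```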
We also defined the co-word score: when the disease and a gene in the search morphemes (including expansion words) co-occur in the abstract and the wordlist, the co-occurrence score is recorded as Score_cw(Q, d) = Σ_i IDF_word(g_i, d), where IDF_word(g_i, d) represents the score based on the expressed gene g_i for query Q; the summation is used since some tasks contain several genes.
To bring the three scores to the same level as the scores of the similarity method in the manuscript, we standardize their sum using the max-min method, as shown in Eq. (4): x_norm = (x − min(X)) / (max(X) − min(X)), (4) where x_norm represents the normalized value, x represents the value before normalization, min(X) represents the minimum value of the sequence to be standardized, and max(X) represents the maximum value of the sequence to be standardized. In the algorithm, we also added query expansion to extend the MeSH terms. The algorithm and its performance evaluation are described in detail in [13].
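Eq. (4) is a one-liner in code; the sketch below normalizes a list of combined scores to [0, 1] so they can be added to the similarity scores on a common scale. The sample values are placeholders.

```python
def max_min_normalize(xs):
    """Map each score x to (x - min) / (max - min), Eq. (4)."""
    lo, hi = min(xs), max(xs)
    if hi == lo:                      # degenerate case: all scores equal
        return [0.0 for _ in xs]
    return [(x - lo) / (hi - lo) for x in xs]

print(max_min_normalize([2.1, 7.4, 5.0]))   # [0.0, 1.0, 0.547...]
```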
Document similarity matching
Similarity matching between articles and retrieval tasks is an important step in the information retrieval process. In [24], the Bidirectional Encoder Representation from Transformers (BERT) model is employed to train on the abstracts/titles and query tasks. The model structure is shown in Fig. 1. [CLS], which is a special vector, is prepended to the input before it is sent to BERT, and [SEP], which is a special tag separating sentences, is added as a separator between the abstract/title and the query. Then, the output of the BERT model (the embedding of the sentence pair) is taken, and the [CLS] position is used to complete the similarity calculation task. A sigmoid of the output is computed to obtain the similarity between the abstract/title and the query, which is considered the matching score between the input abstract/title and the query.
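A minimal sketch of this [CLS]-based scoring with the Hugging Face transformers library follows. The checkpoint name dmis-lab/biobert-base-cased-v1.1 and the single-logit regression head are assumptions for illustration; the head is randomly initialized and would first have to be fine-tuned on query-abstract pairs before its scores are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed BioBERT checkpoint; the classification head must be fine-tuned.
name = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=1)

query = "melanoma BRAF V600E 64-year-old male"
abstract = "We report response to BRAF inhibition in V600E-mutant melanoma..."

# Sentence pair: [CLS] query [SEP] abstract [SEP], truncated to 512 tokens
inputs = tokenizer(query, abstract, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    logit = model(**inputs).logits        # built on the [CLS] embedding
score = torch.sigmoid(logit).item()       # similarity score in (0, 1)
print(score)
```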
Clustering-based abstract extraction
Because the BERT model is limited to 512 tokens (words or characters), the abstract needs to be further streamlined and the key content extracted. An extractive abstract generation method is employed to preserve the writing style and meaning of the original abstract to the greatest extent. The article therefore adopts the clustering-based abstract extraction method, whose specific process is described as follows: 1. The BioBert pre-trained model is utilized to generate a sentence vector for each sentence in the abstract to obtain a sentence-level vector representation, which is a 1 × 768 dimensional vector.
2. Sentences are clustered by using the K-means clustering to obtain N categories, where the number N is preassigned by the implementer.
3. The sentence closest to the center of each cluster is selected, category by category, until the overall length reaches 512 tokens (words or characters), forming a new abstract text. A minimal sketch of this procedure is given after this list.
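The sketch below implements the three steps with scikit-learn's KMeans; the sentence-embedding function is stubbed out (in the paper it is BioBERT's sentence vector), and N and the token budget are the knobs described above.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_abstract(sentences, embed, n_clusters=3, budget=512):
    """Pick one sentence per cluster (closest to the centroid) until the
    token budget is reached. `embed` maps a sentence to a 768-dim vector."""
    X = np.vstack([embed(s) for s in sentences])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    picked, used = [], 0
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        # Sentence of this cluster closest to the cluster center
        best = idx[np.argmin(np.linalg.norm(
            X[idx] - km.cluster_centers_[c], axis=1))]
        n_tokens = len(sentences[best].split())   # crude token count
        if used + n_tokens > budget:
            break
        picked.append(best)
        used += n_tokens
    return " ".join(sentences[i] for i in sorted(picked))

rng = np.random.default_rng(0)
fake_embed = lambda s: rng.standard_normal(768)  # stand-in for BioBERT vectors
sents = ["Background sentence.", "Methods sentence.", "Results sentence.",
         "More results.", "Conclusion sentence."]
print(extract_abstract(sents, fake_embed, n_clusters=3))
```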
The proposed algorithm
This research integrates multiple machine learning and text processing methods to devise a hybrid method applicable to domains of specific medical literature retrieval. The flow chart of the algorithm is depicted in Fig. 2. The hybrid algorithm consists of a two-stage information retrieval method based on the improved BM25 algorithm, k-means clustering, and the BioBert model to better determine the biomedical articles most relevant to specific diseases, genes, and individual traits.
The improved BM25 algorithm computes three scores, for the vocabulary, co-word, and expanded word, leading to a composite retrieval function whose parameters are optimized by the cuckoo optimization algorithm. Afterward, the BioBert pre-trained model is utilized to generate a sentence vector for each sentence in the abstract to obtain a sentence-level vector representation, which is a 1 × 768 dimensional vector, and sentences are clustered by k-means, the sentence closest to the center of each cluster being retained.

To exemplify what has been conducted: first, patient information and medical articles are input into the system; patient information covers disease, demographics, genes, and other attributes, while medical article information includes the title, abstract, MeSH headings, chemical list, and keyword list. The patient information was input into the MeSH library to obtain the expanded query information, and the patient information and the expanded word information were input into the improved BM25 algorithm [13] to obtain the abstract score, word score, and co-word score, which were then standardized according to the standardization process. Afterward, the top 1000 articles were sorted in descending order by their composite retrieval scores. The abstract and title similarity scores of each document and the query were calculated by the BioBert document similarity matching method for the top 1000 articles. The standardized scores were then added to the improved BM25 scores, and the final scores were sorted in descending order to reflect the similarity.

Table 1 summarizes the evaluation results obtained from 2017 through 2019 for the initial screening of the literature. Being human is a screening factor for precision medicine (PM), and the co-occurrence of disease and genes is also an important factor for determining the correlation; therefore, the co-word method proposed in the improved BM25 algorithm [13] can increase the scores of potentially relevant articles. When the search elements are defined, the term "human" is used as one of the search elements of the baseline to distinguish between humans and animals. Because the PM tasks in 2020 and those from 2017 through 2019 were different, with demographics replaced by treatment, the 2020 tasks are excluded and the tasks from 2017 through 2019 are used as the PM retrieval tasks for the research data. Table 2 shows the PM retrieval tasks from 2017 through 2019. It is observed that diseases and genes are fixed expressions, while age and gender need to be classified during retrieval; the classification criteria are shown in Table 3. A regular expression extracts the age from the abstract, matching patterns such as years-old/year-old/years old to form the corresponding category, and the word stemmer of nltk is used to extract words expressing gender in the abstract, such as woman, man, girl, and boy. If the abstract does not contain demographic information, matching items from the MeSH terms are searched for.
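A minimal sketch of the demographic extraction described above: a regular expression for the age patterns and a small keyword set for gender. The exact pattern and keyword lists are illustrative guesses at the paper's implementation.

```python
import re

AGE_RE = re.compile(r"(\d{1,3})\s*-?\s*years?\s*-?\s*old", re.IGNORECASE)
FEMALE = {"woman", "women", "girl", "female"}
MALE = {"man", "men", "boy", "male"}

def extract_demographics(abstract):
    age = None
    m = AGE_RE.search(abstract)
    if m:
        age = int(m.group(1))
    words = set(re.findall(r"[a-z]+", abstract.lower()))
    gender = ("female" if words & FEMALE
              else "male" if words & MALE else None)
    return age, gender

print(extract_demographics("A 64-year-old man with melanoma..."))  # (64, 'male')
```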
Generation of the training sample
Through the analysis of the data sets between 2017 and 2019, we divided the search tasks into two types: the same gene with different diseases, and the same disease with different genes. Different diseases with the same gene are shown in Table 4, while different genes with the same disease are presented in Table 5. To eliminate the interference between the search task and document matching, disease, gene, and demographic information are extracted from the head of the abstract, and negative samples are generated for content with the same disease but different genes, or with different diseases, as shown in Table 6.
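A sketch of this negative-sample generation: given labeled (query, abstract) positives, swap in a different gene (same disease) or a different disease to form hard negatives. The data structure and swap policy are illustrative; the paper's Tables 4-6 define the actual pairings.

```python
import random

def make_negatives(positives, diseases, genes, seed=0):
    """positives: list of dicts {"disease": ..., "gene": ..., "abstract": ...}.
    Returns one hard negative per positive by corrupting disease or gene."""
    rng = random.Random(seed)
    negatives = []
    for ex in positives:
        neg = dict(ex, label=0)
        if rng.random() < 0.5:
            # Same disease, different gene
            neg["gene"] = rng.choice([g for g in genes if g != ex["gene"]])
        else:
            # Different disease, same gene
            neg["disease"] = rng.choice([d for d in diseases
                                         if d != ex["disease"]])
        negatives.append(neg)
    return negatives

pos = [{"disease": "melanoma", "gene": "BRAF", "abstract": "...", "label": 1}]
print(make_negatives(pos, ["melanoma", "gastric cancer"], ["BRAF", "KRAS"]))
```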
The data sources are mainly divided into baseline data and evaluation datasets. The baseline data set uses the PubMed literature metadata download provided by the organizing committee of TREC; the specific data are shown in Table 7. The metadata used include PMIDs, titles, abstracts, chemical words, MeSH words, and keywords.
In the 2017-2019 TREC-PM tasks, a total of 120 patient cases and 63,387 qrels (document relevance judgments) were available, as shown in Table 8.
The parameter setting of the proposed algorithm
The adjustment factors of our baseline improved BM25 algorithm [13] use common conventional parameters, presented in Table 9. In the document similarity matching algorithm, we performed similarity matching separately for the abstract and the title against the query, because the lengths of the abstract and the title are significantly different. Therefore, we used different parameters for training; the settings of the training parameters of the matching algorithm are shown in Table 10.
Experimental comparison
Similar to the literature [58], we used the data of 2017 for evaluation and the data of 2018 for training. While 80% of the data is used for the training phase, 20% is utilized for validation. We used the BioBert model as a pre-training model to generate word vectors, as shown in Table 11. The precision of the proposed method is slightly lower than that of the method proposed in the literature [58], but the recall rate and F1 score on the training set, and the accuracy, recall rate, and F1 score on the validation set, are higher: the negative sample generation method reduces the interference between similar samples, and the official Bert-base-uncased is replaced by the BioBert model.
The overall curve shows a downward trend with some slight fluctuations. When the PR curve is located above the other PR curves, it means that the performance would reach higher than the other methods. Figure 4 shows that the red curve after sample optimization is located above the curve of the baseline (black) and the one obtained before sample optimization (blue). Table 14 shows the experimental comparison between the proposed algorithm and the state-of-the-art algorithm selected [59] in the 2019 TREC PM track. Even though the results of the proposed algorithm are lower than those of the algorithms selected in the 2019 TREC meeting, the evaluations were conducted by a software called the trec_eval software. Seen that the proposed algorithm uses the result of the addition of the baseline score and the abstract similarity score, which are 0.635 (P@10) and 0.344 (R-Prec). These two indicators are slightly inferior to the optimal results of the selected algorithm in that year, which is ranked second. However, we found that among the top 10 articles of the 40 topics, 366 documents that existed in qrels and 34 documents that did not exist in qrels were retrieved, as shown in Fig. 5. Namely, all the 34 documents used to calculate P@10 that did not participate in the evaluation are judged irrelevantly. However, the proposed algorithm still achieved a P@10 of 0.635 without it. If these non-participating documents had been removed from the top10, the P@10 and R-Prec scores of the proposed algorithm would reach 0.68 and 0.4823, respectively. Figure 5 shows that topics have more relevant articles, such as topic 1, topic 4, topic 7, and topic 16, the uninvolved articles still have the potential to be identified as relevant articles. If the title similarity scores had been added, P@10 would decrease to 0.605, but the R-Prec would increase to 0.352, which is already very close to the optimal values of the selected method in that year. Figure 6 shows that the addition of the abstract and title scores to the baseline score significantly improves the P@10 and R-Prec of the information system. When P@10 is a concern, the stability of baseline + abstract and baseline + abstract + title is found to be similar. However, there are more uninvolved studies in baseline + abstract + title than in baseline + abstract, which leads to a decrease in P@10. Because the baseline + abstract + title was optimized twice, it was easier to improve the ranking of the potentially relevant literature, but it also increased the ranking of the highly distracting literature, so it looks more polarized than the baseline + abstract.
To further verify the effectiveness of the proposed algorithm, we also selected 80% of the data in the 2017-2018 qrels as the training set and 20% as the validation set, and used the 2019 PM task [58]. Only the literature that participated in the evaluation was used as the baseline, and the top 500 retrieved documents were submitted for evaluation. The experimental comparison results are shown in Table 15. The P@10 and R-Prec of the first search were relatively low at 0.52 and 0.2307, respectively. After using the secondary sorting algorithm, the P@10 and R-Prec were significantly improved, reaching 0.6750 and 0.3912 with Baseline + REL, while Baseline + REL + ABS reached 0.6985 and 0.3627. In contrast, the baseline retrieval of the proposed algorithm achieves a P@10 of 0.5775 in a single retrieval.
(Figure: box-plot representation of P@10 and R-Prec for the three algorithms.)
Baseline + Abstract reached 0.6725 and 0.4636, and Baseline + Abstract + title reached 0.6725 and 0.4716, respectively. It is seen that the P@10 of the proposed algorithm is slightly lower than that of the algorithm proposed in the literature [58], while the R-Prec is much higher.
There are two main reasons: (1) the algorithm used in the first round of the search in the literature [58] did not perform well; (2) regarding implementation details, [58] states: "All parameter choices were made based on the best practices from prior efforts and experiments to optimize P@10 on validation subsets". Because of the intervention of manual experience and the special optimization of the P@10 index, a higher P@10 resulted. However, optimizing for a specific indicator reduces the universality of the implemented algorithm.
The proposed algorithm, in contrast, does not conduct any optimization to increase the P@10 index, does not carry out any manual intervention or specified optimization scheme for the indexes, and uses conventional parameters directly; it therefore has a stronger universality than the selected method [58]. Table 15 shows that the optimization of P@10 produces a certain decrease in R-Prec. Therefore, to comprehensively evaluate the quality of the proposed algorithm, we refer to the calculation method of the F1 score and add an evaluation index represented by P@10*R-Prec. The optimal P@10*R-Prec of the proposed algorithm is found to be 0.3172, while that in the literature [58] is 0.2533, so the proposed algorithm has advantages in terms of universality and comprehensive performance.
Summary and future work
The manuscript proposes a hybrid algorithm consisting of a two-stage information retrieval method based on the improved BM25 algorithm, k-means clustering, and the BioBert model to better determine the biomedical articles most relevant to specific diseases, genes, and individual traits.
The improved BM25 algorithm computes three scores, for the vocabulary, co-word, and expanded word, leading to a composite retrieval function whose parameters are optimized by the cuckoo optimization algorithm, which retrieved better search outcomes. Afterward, the BioBert pre-trained model is utilized to generate a sentence vector for each sentence in the abstract to obtain a sentence-level vector representation, which is a 1 × 768 dimensional vector. Sentences are then clustered by k-means, the sentence closest to the center of each category being selected until the overall length reaches 512 tokens, to form a new abstract text. Finally, the BioBert-based document similarity matching method is utilized to obtain the similarity between the document and the retrieved morphemes. Besides, negative sampling for the training data is implemented to enhance the accuracy of the proposed method. The proposed algorithm does not carry out any manual intervention or special optimization schemes to increase the index scores and uses conventional parameters to attain better search and text-ranking outcomes, which guarantees its universality.
To verify the effectiveness of the proposed algorithm, a comparison study is conducted with the state-of-the-art algorithm [58]; the proposed algorithm has advantages in terms of universality and better measurement scores. The comprehensive performance analysis shows that a 3% increase in P@10 over the state-of-the-art algorithm in TREC 2019 is achieved. Moreover, to comprehensively evaluate the quality of the proposed algorithm, we refer to the calculation method of the F1 score and add an evaluation index represented by P@10*R-Prec. The optimal P@10*R-Prec of the proposed algorithm is found to be 0.3172, while that in the literature [58] is found to be 0.2533.
Consequently, the proposed algorithm has advantages in terms of universality and comprehensive performance.
In future work, the tasks that were negatively affected by the proposed algorithm will be analyzed to improve its performance. Besides, different combinations of algorithms dealing with different retrieval scenarios will be investigated to further improve retrieval accuracy.
TF-IDF: Term frequency-inverse document frequency
BM25: Best matching 25
IR: Information retrieval
TREC: The text retrieval conference
BERT: Bidirectional encoder representation from transformers | 6,956.6 | 2023-01-03T00:00:00.000 | [
"Computer Science"
] |
A Numerical Study on Performance of Dental Air Turbine Handpieces
Introduction
Dental air turbine handpieces have evolved significantly over the years and remain a vital part of dentistry today. They have been widely used in clinical dentistry as the main cutting tool for more than 50 years [1]. The handpiece is a standard instrument in dentistry that is used to remove caries, prepare cavities, grind tooth tissue, and perform most dental cutting practices [2], and it plays an effective role in dental remedial operations [3][4]. The source of rotation power is the conversion of high-pressure air into mechanical work via a micro rotor. The rotor blades rotate at high speed, and the bur is applied to the teeth. Usually, high-speed air-turbine dental handpieces operate at 200,000 to 400,000 revolutions per minute [5]. The main part of the dental air turbine is its high-speed rotor head, consisting of a casing with air inlet and outlet nozzles, impeller, spindle, chuck, and bearings in millimeter size, as shown in Fig. 1. The most important factors determining air turbine performance and efficiency are speed, torque, and power. Experimental determination of dental air turbine characteristics such as free running speed, torque, power, and bearing resistance has been carried out in detail by Brockhurst et al. [1] and Dyson et al. [6][7]. Some researchers studied the vibration characteristics of dental high-speed turbines [8]. Seto investigated the influence of further design factors, e.g. air inlet and outlet diameters and spacer thickness, on dental turbine performance with an experimental testing system [9]. Juraeva et al. presented an optimum blade shape of the air turbine that maximizes the torque using the design of experiments (DOE) [5]. Muller et al. performed a parameter performance study on an air turbine for a high-frequency air bearing spindle by computational fluid dynamics (CFD) [10]. Chiang et al. [11] and Hsu et al. [12] focused on the turbine blade and flow channel designs using CFD simulations and experiments. Wei et al. studied the influence of various factors on dental handpiece bearing failure [13].

The high-speed rotor head is one of the key components of dental air turbines. Some studies have numerically investigated the influence of the air turbine blade shape and flow channel design on turbine performance [10][11]. Although there exist many studies on optimizing blades of other kinds of turbo-machinery [14], there is no comprehensive study on the pressure, temperature, and air density distribution in the casing and the forces applied to the impeller in dental handpieces. Therefore, the purpose of this study is to evaluate the effects of air inlet pressure and the key design parameters on the torque, the noted air distribution characteristics, and the forces applied to the impeller using 3D finite volume analysis.

Fig. 2. 3D models of three impellers
Modelling
Three impeller samples used in commercial dental turbines (as listed in Table 1) have been selected to be modeled and utilized for simulation purposes. In order to achieve an accurate modeling, first of all, the turbine handpieces were disassembled, and the impellers were 3D scanned using a digitizer apparatus (Renishaw Company, model: Cyclone 2). Then, the outputs were handled using CATCAM software to generate the corresponding 3D models (Fig. 2).
Governing equations
In this study, GAMBIT 2.4.6 and FLUENT 6.3.26 have been used to generate the grid and to solve the governing equations, respectively. The governing equations of fluid flow in the cartridge (around the impeller) include the continuity, momentum, and energy equations. The continuity equation for a compressible fluid is ∂ρ/∂t + ∂(ρu_i)/∂x_i = 0. For a turbulent flow, the Reynolds-averaged momentum equation is ∂(ρu_i)/∂t + ∂(ρu_i u_j)/∂x_j = −∂p/∂x_i + ∂/∂x_j [μ(∂u_i/∂x_j + ∂u_j/∂x_i) − ρ⟨u_i′u_j′⟩], where the angle brackets denote Reynolds averaging. The expression in brackets is the shear stress. There are two types of shear stress in turbulent flow: the laminar shear stress and the turbulent shear stress. The first term in the brackets is the shear stress describing molecular diffusion. The second term in the brackets is the Reynolds stress tensor, caused by fluctuations in velocity, whose normal components are always positive.
In turbulent flow, the Reynolds stress is often much greater than the stress caused by molecular viscosity, except in regions near the walls. The k-ε model is applied to close the Reynolds stress tensor. The values of k and ε are determined from semi-empirical transport equations whose closure coefficients C1, C2, and C3 are determined experimentally.
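The continuity, momentum, and k-ε transport equations referred to above did not survive extraction. The following is a hedged reconstruction, assuming the standard compressible RANS and k-ε formulations implemented in FLUENT (bulk-dilatation and compressibility-correction terms omitted for brevity; G_k denotes the production of turbulence kinetic energy and μ_t = ρC_μ k²/ε the turbulent viscosity):

```latex
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_i)}{\partial x_i} = 0, \qquad
\frac{\partial (\rho u_i)}{\partial t} + \frac{\partial (\rho u_i u_j)}{\partial x_j}
  = -\frac{\partial p}{\partial x_i}
  + \frac{\partial}{\partial x_j}\left[ \mu \left( \frac{\partial u_i}{\partial x_j}
  + \frac{\partial u_j}{\partial x_i} \right) - \rho\, \overline{u_i' u_j'} \right],

\frac{\partial (\rho k)}{\partial t} + \frac{\partial (\rho k u_j)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\left[ \left( \mu + \frac{\mu_t}{\sigma_k} \right)
  \frac{\partial k}{\partial x_j} \right] + G_k - \rho \varepsilon,

\frac{\partial (\rho \varepsilon)}{\partial t} + \frac{\partial (\rho \varepsilon u_j)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\left[ \left( \mu + \frac{\mu_t}{\sigma_\varepsilon} \right)
  \frac{\partial \varepsilon}{\partial x_j} \right]
  + C_1 \frac{\varepsilon}{k} G_k - C_2 \rho \frac{\varepsilon^2}{k}.
```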
Numerical simulation
Due to the complexity of the geometry, dimensions, and computations, tetrahedral elements of appropriate size were used to generate the grid (Fig. 3, a). Because of the complexity of the impeller profile and its change over time, the moving reference frame (MRF) method was used for the numerical simulation. The momentum equations (in the three directions x, y, z) and the continuity, energy, k, and ε equations were solved simultaneously. The air is treated as an ideal gas; owing to its high speed (more than 100 m/s), it is assumed to be compressible, and the integrated system is considered adiabatic. The effects of the (relative) air inlet pressure versus the air outlet pressure (1 bar absolute) and of the other parameters (inlet diameter and gap) on the torque, the pressure distribution over the blades, the equivalent forces applied to the bearings, and the air density and temperature distribution around the blades were then studied for the three models (Types 1, 2, 3), and the corresponding charts were extracted. Using the finite volume method (FVM) and applying the conservation laws, the field values are computed on the mesh in the passages between the parts. Because the flow field of the dental air turbine is turbulent and the boundary values of ε and k depend on both the flow and the geometry, their direct specification is cumbersome; instead, the alternative specification in terms of the hydraulic diameter (taken as the diameter of the inlet nozzle) and the turbulence intensity was used. For validation, the numerical results must be independent of the grid size, the time-step size, and the extent of the computational domain, and the smallness of the remaining residuals underpins the accuracy of the numerical solution.
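As an illustration of the finite-volume post-processing implied above, the sketch below (not the authors' code; all face data are hypothetical placeholders for quantities exported from the CFD solver) integrates a discretized blade-surface pressure field into a net torque about the rotor axis:

```python
import numpy as np

# Minimal sketch: torque about the rotor axis (z) from a discretized pressure
# field on the blade surfaces. Face centroids r, outward unit normals n, areas A,
# and static pressures p are placeholder values standing in for solver exports.
rng = np.random.default_rng(0)
n_faces = 1000
r = rng.normal(size=(n_faces, 3)) * 1e-3          # face centroids [m]
n = rng.normal(size=(n_faces, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)     # outward unit normals
A = np.full(n_faces, 1e-8)                        # face areas [m^2]
p = 2.5e5 * rng.random(n_faces)                   # static pressures [Pa]

F = (p * A)[:, None] * (-n)      # pressure force acts opposite the outward normal
torque = np.cross(r, F)          # torque contribution of each face [N m]
Tz = torque[:, 2].sum()          # component about the rotor axis
print(f"net axial torque: {Tz:.3e} N m")
```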
Simulation results
In this section, the simulation results are presented. All results were obtained at a constant speed of 250,000 rpm, which is considered the nominal operating speed. The inlet-nozzle angle is 10 degrees and is held constant in all simulations.
In this paper, the effect of the air inlet pressure on the torque of the handpiece has been studied. The inlet air pressure is increased from 1.5 to 3.5 bar in steps of 0.5 bar for the three types of cartridge. The results show that the torque increases with the inlet pressure in each case, following a consistent trend (Fig. 4). Moreover, the effect of the air inlet pressure on the force applied to the impeller has also been studied. As in the previous case, the inlet pressure is increased from 1.5 to 3.5 bar in steps of 0.5 bar, and the forces applied to the impeller in the three directions x, y, and z are extracted. It is important to note that the z-component of the force (the axial component) is much smaller than the other two components and is negligible by comparison; this is a consequence of the symmetry of the problem in the z direction. Therefore, the two force components in the x and y directions are depicted in Figs. 5, 6, 7. The results show that the total applied force FT is proportional to the air inlet pressure and behaves almost linearly (Fig. 8). This force must evidently be tolerated by the bearings and can therefore be an important factor affecting bearing life and intensifying the adverse effects of fatigue.
Fig. 5 The radial force (total force and its x and y components) applied to impeller type 1 vs. air inlet pressure
Fig. 6 The radial force (total force and its x and y components) applied to impeller type 2 vs. air inlet pressure
The rotational speed is kept the same as in the previous subsection. At the specified velocity, the pressure distribution within the casing of the cartridge and the corresponding maximum pressure applied to the impeller were determined for the model Type 2 (Fig. 3, b). The results show that as the inlet pressure increases, the maximum pressure also intensifies and always occurs on the blade opposite the inlet nozzle (Fig. 9).
The distribution of the air temperature in the cartridge is also studied in this article. The rotational speed is again the same as in the previous subsections. At the specified velocity, the temperature distribution within the casing of the cartridge and the corresponding maximum temperature were determined for the model Type 2 (Fig. 3, c). The results show that the maximum temperature occurs in the vicinity of the shell (around the gap), and the temperature intensifies as the air approaches the outlet nozzle (Fig. 10).
Fig. 7 The radial force (total force and its x and y components) applied to impeller type 3 vs. air inlet pressure
Fig. 8 The radial total forces applied to the three types of impeller vs. air inlet pressure
Fig. 9 Maximum air pressure variation on the blade vs. the inlet pressure
Fig. 10 The maximum air temperature variation in the cartridge vs. inlet pressure
Fig. 11 The maximum air density variation in the cartridge vs. inlet pressure
The air density distribution in the cartridge has also been simulated. The rotational speed is again the same as in the previous subsections. At the specified velocity, the air density distribution within the casing of the cartridge and the corresponding maximum density were determined for the model Type 2 (Fig. 3, d). The results show that the maximum air density increases linearly with the inlet air pressure. The maximum air density occurs around the blade opposite the inlet nozzle (Fig. 11).
Finally, the effect of the inlet-nozzle diameter on the torque and the force on the impeller has been studied. The inlet pressure and rotational velocity are the same as in the previous subsection. The diameter of the inlet nozzle is varied from 1.5 to 2.5 mm in steps of 0.25 mm, and the torque and force applied to the impeller were determined for the model Type 2. The study shows that as the diameter of the inlet nozzle increases, the torque and the resultant force increase in a consistent manner (Figs. 15, 16).
The effect of the gap on the torque, force, and temperature distribution within the cartridge has also been studied. The simulation was performed for an inlet pressure of 2.5 bar and a constant rotational velocity of 250,000 rpm. The gap is varied from 0.2 to 0.6 mm in steps of 0.1 mm, and the torque, the force (both applied to the impeller), and the temperature distribution in the casing of the cartridge were determined for the model Type 2 (Fig. 3, e and 12). The results demonstrate that as the gap increases over the specified range, the torque and force decrease in a consistent manner, and the maximum temperature occurs at the base of the blade opposite the outlet nozzle (Figs. 13, 14).
Discussion
The simulation results show that, for a given rotational velocity, increasing the air inlet pressure increases the torque and the force applied to the impeller in a consistent manner. Fig. 4 shows that the torque values and their variation trends are in good agreement with experimental results from other studies [1] and [9]. The results also show that the z-component of the force can be neglected, while the resultant force is dominated by the x component. The applied force must clearly be tolerated by the bearings; given the high operating speed of the system, any increase in this force can have a significant impact on bearing life and can aggravate the side effects of fatigue. The forces obtained from the simulation are in good agreement with experimental results reported in previous studies [16,17].
Since the fluid density depends on the pressure, and since the pressure is highest at the inlet nozzle, the maxima of both the pressure and the air density are expected in front of the air inlet nozzle, which is indeed observed in the results (Fig. 3, b and 9). Likewise, considering the impact of the compressed-air particles on the impeller, the passage of the fluid through the small gap, and the high flow resistance of this region, the air temperature is expected to increase gradually, with the maximum temperature occurring in front of the outlet nozzle; the simulation results confirm this. The results also show that increasing the gap reduces the torque, the force applied to the impeller, and the temperature. This can be interpreted as follows: as the gap increases, the resistance of this region to the passing fluid decreases and the air flows more easily. On the other hand, a portion of the fluid then passes the blade without performing work on it and thus does not contribute to the total torque. Furthermore, the results show that increasing the diameter of the inlet nozzle (and its corresponding angle) over the specified ranges increases the torque and the force applied to the impeller. A larger nozzle diameter provides a better and more uniform supply of air, yet it does not dramatically affect the impeller torque: as the nozzle diameter increases, the inlet air velocity decreases, but the inflow increases, intensifying the momentum delivered to the system, and the wider jet strikes a larger portion of the blades, which increases the torque.
Simulation studies show that at very high speeds the results may not follow this pattern; for example, it has been shown that increasing the nozzle diameter leaves the torque unchanged, or even reduces it, at speeds above 300,000 rpm [12]. It is therefore emphasized that the results and figures obtained in this study correspond to the specified rotational speed and are valid for the aforementioned parameters and conditions; the current results are in good agreement with previously performed studies [9,12].
Conclusion
In this study, the effects of the air inlet pressure, the gap, and the nozzle diameter on the impeller torque and on the force applied to it in the three directions x, y, and z were studied. Moreover, the pressure, temperature, and air-density distributions within the casing of the cartridge were numerically investigated. The studies showed that as the inlet pressure increases, the torque and the resultant force acting on the impeller follow consistent patterns. The maximum values of the air pressure and density occur in the region near the blade wall facing the inlet nozzle and decrease away from this position. As the gap decreases, the temperature gradient around the blade is intensified, and the maximum temperature occurs in front of the outlet nozzle. Increasing the inlet-nozzle diameter increases the torque at the specified rotational speed and inlet pressure. Furthermore, the torque and the forces applied to the impeller also increase as the cartridge gap is reduced. In the present work, original dental air-turbine handpieces were disassembled and their parts were modeled using reverse engineering; the dental cartridge was then simulated and the effect of the different parameters on its performance was investigated.
Fig. 3 (a) Fluid geometry and grid system for computation; (b) Distribution of air pressure; (c) Distribution of air temperature; (d) Distribution of air density; (e) Temperature distribution in the cartridges with a gap of 0.5 mm
Fig. 12 Max temperature in the cartridge for different gaps
Fig. 14 Applied forces to the impeller vs. gap
B. Beigzadeh, S. Derakhshan, D. Zia Shamami, A NUMERICAL STUDY ON PERFORMANCE OF DENTAL AIR TURBINE HANDPIECES, Summary | 4,270 | 2017-10-25T00:00:00.000 | [ "Materials Science", "Engineering" ] |
Non-Gaussian behaviour of a self-propelled particle on a substrate
The overdamped Brownian motion of a self-propelled particle which is driven by a projected internal force is studied by solving the Langevin equation analytically. The "active" particle under study is restricted to move along a linear channel. The direction of its internal force is orientationally diffusing on a unit circle in a plane perpendicular to the substrate. An additional time-dependent torque is acting on the internal force orientation. The model is relevant for active particles like catalytically driven Janus particles and bacteria moving on a substrate. Analytical results for the first four time-dependent displacement moments are presented and analysed for several special situations. For vanishing torque, there is a significant dynamical non-Gaussian behaviour at finite times t, as signalled by a non-vanishing normalized kurtosis in the particle displacement, which approaches zero for long times with a 1/t long-time tail.
I. INTRODUCTION
The Brownian motion of self-propelled ("active") particles [1,2] bears much richer physics than the traditional diffusive dynamics of passive particles. Active particles can be modelled by moving under the action of an internal force sometimes combined with an internal or external torque. Realizations in nature are certain bacteria [3,4,5,6,7] and spermatozoa [8,9,10] which swim in circles when confined to a surface [11]. In the colloidal world, it is possible to prepare catalytically driven Janus particles [12,13,14,15] or biometric particles [16] which perform self-propelled Brownian motion. For a recent investigation including confinement see [17]. On the macroscopic scale vibrated polar granular rods [18] on a planar substrate and even the trajectories of completely blinded and ear-plugged pedestrians [19] can be considered as rough realizations of self-driven Brownian particles. If the particle is embedded in a liquid (a "swimmer"), as characteristic for colloids, the direction of its driving force is fluctuating, in general, according to orientational Brownian motion [20,21,22].
This gives rise to a non-ballistic translational motion of the particles which is coupled to the fluctuating orientational degree of freedom.
In most cases the direction of the self-propelling force is within the plane of motion. For colloidal particles, however, it is possible to confine the particle on a substrate by using, e.g., strong gravity such that the particles are still freely rotating [15,23,24] though they are confined in a planar monolayer. In this situation the component of the self-propelling force which is normal to the surface is compensated by the substrate, i.e., only the projection of the self-propelling force onto the plane is driving the particle. Therefore the translational motion is coupled to the (Brownian) orientational motion [25].
In this paper we consider a one-dimensional model [26] for the Brownian dynamics of a self-propelled particle on a substrate. The particle is self-propelled along its orientational axis, which itself is subjected to Brownian orientational diffusion. The particle is confined to a channel, however, such that only the projected force in channel direction is acting to drive the particle. The present study is more general than earlier work in reference [25]: first of all, the present calculation resolves the Cartesian components of the isotropic model on an unconfined plane. Second, an arbitrary time-dependence of the external torque is included here while this torque was constant in [25]. Finally, we calculate time-dependent moments of the particle displacement up to fourth order as compared to results up to second order in reference [25]. The results are discussed for several special cases. In general, long-time self-diffusion is found. Non-Gaussian behaviour is found for intermediate times as signalled in the corresponding fourth cumulant. The normalized kurtosis is positive for small times, then changes sign and approaches zero from below at long times with a 1/t long-time tail.
This can be compared to recent investigations for an undriven Brownian ellipsoid [27]. In the latter case, the kurtosis was found to be positive approaching zero from above for long times with the same 1/t long-time tail.
This work represents a first step towards a many-body situation of interacting self-propelled particles. These are also realizable in experiments (see, e.g., [12,15,18]). The suitable theoretical framework is the many-body Smoluchowski equation [28], from which one can derive a coupled hierarchy of equations for the set of many-body distribution functions similar in spirit to the traditional BBGKY (Bogolyubov-Born-Green-Kirkwood-Yvon) hierarchy [29,30,31] for Liouville dynamics, see also Felderhof [32] for a discussion in the context of Brownian motion. Therefore we think that this paper is particularly appropriate for this issue dedicated to the 100th anniversary of Prof. N. N. Bogolyubov. This paper is organized as follows: In section II, we propose and motivate the model. The first four displacement moments are calculated analytically for the torque-free case in section III, while section IV contains the results for a general time-dependent torque. Finally, in section V, we conclude and give an outlook on possible future activities.
II. THE MODEL
The model system under study consists of a self-propelled colloidal sphere of radius R, which is confined to an infinite linear channel in the x-direction, where it undergoes completely overdamped Brownian motion (for a sketch see figure 1). Whereas the motion of the center-of-mass position x is constrained to one dimension, the orientation vector u = (cos φ, sin φ, 0) is constrained to rotate in the xy-plane. The self-propulsion of the particle is modelled by a constant effective force along the particle orientation, F = F0 û, and a generally time-dependent effective torque in the z-direction, M = M ê_z. Because the particle is confined, only the projected force F · ê_x = F0 cos φ ê_x drives the particle systematically along the channel. Based on these considerations, the translational and orientational motion is modelled by Langevin equations for the center-of-mass position x and the orientation vector û.
Fig. 1 (caption): The particle is driven by a constant effective force F0 along the particle orientation û, which is constrained to rotate in the xy-plane. Only the projected force F0 cos φ drives the particle along the channel. A systematic, time-dependent torque M(t) = M(t) ê_z is also indicated.
Here, f(t) is a zero-mean, Gaussian white-noise random force and g(t) a zero-mean, Gaussian white-noise random torque, where angular brackets denote a noise average. β⁻¹ = k_B T denotes the thermal energy, and D and D_r are the translational and rotational short-time diffusion constants, respectively; for a sphere of radius R in the three-dimensional bulk the two quantities fulfil the relationship D_r = 3D/(4R²). Due to the constraint on the orientational motion, the vector equation (2) reduces to a Langevin equation for the orientational angle φ (equation (4)). If the initial time t0 is set to zero, the Langevin equations (1) and (4) can be integrated formally for x(t) and φ(t). The translation-rotation coupling between these two equations, which is due to the cosine in equations (1) and (6), leads to nontrivial results for the mean position ⟨x − x0⟩ and the mean square displacement ⟨(x − x0)²⟩ of the particle, as is shown in the following sections. Furthermore, the presence of the coupling term leads to non-Gaussian behaviour, which is reflected in a non-zero kurtosis. The latter is obtained by calculating the fourth moment of the particle displacement distribution further down.
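The display equations referred to in this section did not survive extraction. A reconstruction consistent with the quantities defined in the text (the standard overdamped active-particle model, with noise strengths fixed by the fluctuation-dissipation relation) reads:

```latex
\dot{x}(t) = \beta D \left[ F_0 \cos\varphi(t) + f(t) \right], \qquad
\dot{\varphi}(t) = \beta D_r \left[ M(t) + g(t) \right],

\langle f(t) f(t') \rangle = \frac{2\,\delta(t - t')}{\beta^2 D}, \qquad
\langle g(t) g(t') \rangle = \frac{2\,\delta(t - t')}{\beta^2 D_r},

x(t) = x_0 + \beta D \int_0^t \mathrm{d}t' \left[ F_0 \cos\varphi(t') + f(t') \right], \qquad
\varphi(t) = \varphi_0 + \beta D_r \int_0^t \mathrm{d}t' \left[ M(t') + g(t') \right].
```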
We start our analysis in section III by studying the special case of a vanishing systematic torque, M(t) ≡ 0.
III. RESULTS FOR A VANISHING TORQUE
In this section, the simplest case with a vanishing torque is covered. Solving equation (4) for M(t) ≡ 0 and averaging gives ⟨φ(t)⟩ = φ0, and for the second moment ⟨(φ(t) − φ0)²⟩ = 2D_r t. As φ(t) is a linear combination of Gaussian variables g(t′), it is, according to Wick's theorem [20], Gaussian as well. Thus the probability distribution of φ proves to be a Gaussian of mean φ0 and variance 2D_r t. Now the mean position of the particle can be calculated, where we made use of equation (3): for short times one obtains a ballistic drift proportional to cos(φ0) t, and for t ≫ D_r⁻¹ the φ0-dependent mean position converges towards a constant. The trajectory of the mean position ⟨x(t)⟩ is shown in figure 2, where the time t is given in units of D_r⁻¹, while the length x is scaled by the particle radius R. To calculate the mean square displacement, three summands have to be integrated (equation (14)). The third summand can be calculated easily and equals 2t/(β²D). As cos φ(t) only depends on the random torque g(t), cos φ(t) and f(t) are statistically independent, so the second summand vanishes. To calculate the first summand in equation (14), the time correlation function ⟨cos φ1 cos φ2⟩ is used, with φ1 ≡ φ(t1) and φ2 ≡ φ(t2); the required average is obtained from the Green function G(φ1, φ2, t1 − t2) of free orientational diffusion. The expression for ⟨cos φ1 cos φ2⟩_{t2>t1} is obtained in exactly the same way by interchanging t1 and t2. The first summand in formula (14) then follows by simple integration, and the mean square displacement can be written in a closed final form, which defines the long-time diffusion coefficient D_l. Figure 3 displays the results for the same cases that were examined in figure 2; the graph for φ0 = π coincides with the graph for φ0 = 0. As can be seen in the logarithmic plots, the initial angle φ0 is not relevant for times much longer than D_r⁻¹. In the following, the non-Gaussian behaviour of the particle is investigated. For this purpose the skewness S and the kurtosis γ are calculated; the non-Gaussian behaviour is clearly signalled by nonzero values of these quantities. In general, the skewness is given by S = ⟨(x − ⟨x⟩)³⟩/⟨(x − ⟨x⟩)²⟩^{3/2} and the normalized kurtosis by γ = ⟨(x − ⟨x⟩)⁴⟩/⟨(x − ⟨x⟩)²⟩² − 3. For the third and fourth moments of x, in analogy to equation (14), one has to solve triple and quadruple time integrals, respectively. Before solving the time integrals over the first summands, the time correlation functions ⟨cos φ1 cos φ2 cos φ3⟩_{t1>t2>t3}, which contains the contribution (1/2) cos(φ0) e^{−D_r(t1−t2+t3)}, and ⟨cos φ1 cos φ2 cos φ3 cos φ4⟩_{t1>t2>t3>t4} have to be evaluated. Here the notation φi ≡ φ(ti) with i ∈ {1, 2, 3, 4} is used again. In both equation (22) and equation (23), the remaining terms can be calculated easily using the expressions already obtained for the first and second moments. The complete analytical results for the third and fourth moments (and for the skewness and kurtosis) are presented in the appendix. Figure 4 shows that the sign of the skewness depends on φ0.
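Several of the displayed results in this section were also lost in extraction. Assuming the reconstructed model equations above, the following expressions are consistent with the surrounding prose; they are a hedged reconstruction rather than a verbatim restoration of the original display equations:

```latex
\langle \cos\varphi(t) \rangle = \cos(\varphi_0)\, e^{-D_r t}, \qquad
\langle x(t) \rangle - x_0 = \frac{\beta D F_0}{D_r} \cos(\varphi_0) \left( 1 - e^{-D_r t} \right),

\langle \cos\varphi_1 \cos\varphi_2 \rangle_{t_1 > t_2}
  = \tfrac{1}{2}\, e^{-D_r (t_1 - t_2)}
  + \tfrac{1}{2} \cos(2\varphi_0)\, e^{-D_r (t_1 + 3 t_2)},

D_l = D + \frac{(\beta D F_0)^2}{2 D_r}
    = D \left( 1 + \tfrac{2}{3} a^2 \right),
\qquad a \equiv \beta R F_0, \quad D_r = \tfrac{3D}{4R^2}.
```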
If the x-component of the initial orientation is positive (−0.5π < φ0 < 0.5π), the skewness is negative, while initial angles between 0.5π and 1.5π lead to positive S, for symmetry reasons. At long times the skewness decays proportionally to t^{−3/2}, where the abbreviation a ≡ βRF0 enters the prefactor. A similar analysis of formula (43) for the kurtosis γ(t) reveals the long-time behaviour γ(t) = −[21a⁴/(9 + 12a² + 4a⁴)] (D_r t)⁻¹ + o((D_r t)⁻¹). First of all, as can be seen from this formula and in figures 6-8, the kurtosis does not depend on φ0 for long times. The long-time tail, being proportional to 1/t, is more pronounced than that of the skewness. Moreover, as displayed in figures 6 and 8, for initial angles φ0 = 0.5π the distribution is leptokurtic (positive kurtosis) at relatively short times and platykurtic (negative kurtosis) at relatively long times. Thus at intermediate times a change of sign occurs, such that the kurtosis approaches its asymptotic value of zero from below. This is in contrast to passive ellipsoidal particles in two dimensions [27], where non-Gaussian behaviour is due to dissipatively coupled translational and rotational motion. In the latter case, the same 1/t scaling of the long-time tail is found for the kurtosis, but it approaches zero from above. We expect that the different sign is linked to the one-dimensionality of our model rather than to the qualitatively different translation-rotation coupling, which is due to the driving force in our model as opposed to the different transverse and parallel short-time translational diffusivities in the passive ellipsoidal particle model. In particular, we expect the negative kurtosis at long times t ≫ D_r⁻¹ to reflect a broad translational van Hove function [33] with shorter tails compared to a Gaussian distribution, which is attributed to the non-linear cosine term in equation (1).
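A quick way to probe these statements numerically is a direct Euler-Maruyama integration of the reconstructed Langevin equations. The sketch below (illustrative parameter values, not the paper's analytical route) estimates the mean square displacement and the normalized kurtosis at a finite time:

```python
import numpy as np

# Euler-Maruyama sketch of the torque-free model:
#   dx   = beta*D*F0*cos(phi) dt + sqrt(2D) dW
#   dphi = sqrt(2Dr) dW'
rng = np.random.default_rng(1)
a = 2.0                    # a = beta*R*F0 (dimensionless, assumed value)
R, D = 1.0, 1.0
Dr = 3 * D / (4 * R**2)    # bulk relation for a sphere
v0 = a * D / R             # drift speed beta*D*F0
dt, n_steps, n_part = 1e-3, 15000, 5000
phi0 = 0.5 * np.pi

x = np.zeros(n_part)
phi = np.full(n_part, phi0)
for _ in range(n_steps):
    x += v0 * np.cos(phi) * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_part)
    phi += np.sqrt(2 * Dr * dt) * rng.normal(size=n_part)

dx = x - x.mean()
m2, m4 = np.mean(dx**2), np.mean(dx**4)
kurtosis = m4 / m2**2 - 3.0   # should be slightly negative, approaching 0 from below
# compare with the asymptote -21 a^4 / (9 + 12 a^2 + 4 a^4) / (Dr * t) quoted above
print(f"MSD = {m2:.3f}, kurtosis = {kurtosis:.4f}")
```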
IV. RESULTS FOR A TIME-DEPENDENT TORQUE
Let us now assume an additional internal or external torque. Before considering the case of an arbitrarily time-dependent torque M(t), we first consider a constant torque M. Solving the Langevin equations (1) and (4) under this assumption yields a uniformly drifting mean orientation characterized by the frequency ω = βD_r M. By replacing φ0 in formula (9) with φ0 + ωt, the updated probability distribution of φ is obtained, from which the mean position follows. In figure 9 this result is plotted for different values of the dimensionless quantity βM, which is the ratio of the external torque to the thermal energy. The long-time mean position saturates at a constant value, while the behaviour for short times is the same as in formula (12) for a vanishing torque. Following the notation introduced in formula (15), the Green function now acquires the drift φ0 → φ0 + ωt; this leads to the required orientational correlation function and, by integration, to the mean square displacement, which contains oscillatory terms proportional to cos(ωt) and sin(ωt). The result is displayed in figure 10. In this case, the long-time diffusion coefficient is reduced with respect to its torque-free value. To generalize the preceding considerations, the torque M(t) is now assumed to be arbitrarily time-dependent. Similarly to the two special cases investigated so far, the mean position of the particle can be obtained in closed form. The calculation of the mean square displacement starts from formula (14) again. The first summand is the most interesting one, because the other ones can be treated as before. Introducing the accumulated drift angle βD_r ∫ M(t′) dt′ as a shorthand, the problem can be solved in a similar way as for a constant M, and the mean square displacement follows.
V. CONCLUSIONS
In conclusion, motivated by recent experiments on catalytic colloidal particles [15,23,24], we have proposed and solved a model for a self-propelled colloidal particle on a substrate. An internal or external time-dependent torque, which can arise, e.g., from an external magnetic field, is also included in the most general version of the model. The first four moments of the particle displacement distribution were calculated analytically. Significant non-Gaussian behaviour was found at intermediate times.
The normalized kurtosis changes sign and approaches zero from below with a massive long-time tail inversely proportional to time.
Future work should address several generalizations of the model. First of all, the one-dimensionality of our model can be generalized towards higher dimensions, both for the translational and the orientational degrees of freedom. In particular, the translational degrees of freedom can be considered to be two-dimensional (in a plane), and the orientational ones on a sphere. For the latter case, first analytical results have been obtained [34]. Also, e.g. for weak gravity, the third translational dimension perpendicular to the substrate becomes important, which results in unusual sedimentation effects [35]. Furthermore, the self-propelled particle can be confined in the lateral direction [24], which leads to a finite mean square displacement. This effect should be incorporated into a model study as well.
First results have been obtained for a circle-swimmer in planar circular geometry [36] and for swimmers in cuspy environments leading to self-rotating objects [37].
Last but not least, the collective behaviour of many interacting self-propelled particles is expected to lead to novel characteristic nonequilibrium effects both without [38,39,40,41] and with confinement [37,42]. As stated in the introduction, the Smoluchowski equation, suitably generalized to self-propelled particles [42], is an appropriate starting point here, and the general hierarchy of Bogolyubov-Born-Green-Kirkwood-Yvon [29,30,31] is expected to be a valuable tool for deriving approximations in a systematic way. This after all clearly links the present paper to the 100th anniversary of N. N. Bogolyubov.
Acknowledgements
We thank L. Baraban, A. Erbe and P. Leiderer for helpful discussions which have stimulated the study of our model. We further thank H. H. Wensink and U. Zimmermann for helpful suggestions. This work has been supported by the DFG through the SFB TR6. We dedicate this work to the 100th anniversary of N. N. Bogolyubov.
APPENDIX
Using the notation a = βRF0 and the scaled time τ = D_r t, we summarize here the analytical results for the third and fourth moments as well as for the skewness S and the kurtosis γ. | 3,948 | 2009-06-18T00:00:00.000 | [ "Physics" ] |
A catalogue of high-mass X-ray binaries in the Galaxy: from the INTEGRAL to the Gaia era
Introduction
Since the birth of X-ray astronomy after the discovery of the first extrasolar X-ray source in the early 1960s, thousands of high-energy astrophysical objects have been observed and revealed to be of various natures (see the broad review of accreting binaries in Chaty 2022). Of these, high-mass X-ray binaries (HMXBs) are powered by the accretion of material from a massive donor star (M ≥ 8 M☉) onto a compact object, usually a neutron star (NS) and rarely a black hole (BH). HMXBs are usually divided into subclasses, of which BeHMXBs (see the review by Rivinius et al. 2013) host a fast-rotating Be star, and sgHMXBs (see the review by Chaty 2013) host a supergiant companion. Before the launch of the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL), sgHMXBs used to be outnumbered about 1 to 10 by BeHMXBs. BeHMXBs transfer matter through the interaction of a compact object with a decretion disk, while sgHMXBs generally transfer mass via an intense stellar wind. In some rare instances, accretion in sgHMXBs may take place via Roche-lobe overflow, which produces higher X-ray luminosities than wind accretion. This is the case for Cen X-3, and it was more recently suggested for IGR J08408-4503 at periastron (Ducci et al. 2019). Accretion through a Be disk is much more efficient at transporting angular momentum than accretion via wind. The spin of the compact object is therefore correlated with the orbital period in BeHMXBs, but not in sgHMXBs (see e.g. Corbet 1984).
An online version of the catalogue is publicly available at https://binary-revolution.github.io/HMXBwebcat and the database in the associated GitHub repository will be continuously updated based on community inputs.
The INTEGRAL satellite (see numbers given in Bird et al. 2016) has a higher sensitivity at high energies than previous generations of hard X-ray observatories. sgHMXBs are therefore no longer a minority. Notably, INTEGRAL allowed the discovery of highly obscured sgHMXBs (Filliatre & Chaty 2004) and supergiant fast X-ray transients (SFXTs; Negueruela et al. 2006b).
The discovery and subsequent unambiguous identification of an HMXB requires several observations at various wavelengths. This is usually performed by independent teams of astronomers, and it can take several years before an HMXB is securely associated with a hard X-ray source. This is mainly due to the difficulty of associating soft X-ray, optical, infrared, and radio counterparts with high-energy detections, as the astrometrical precision of hard X-ray observatories, which are physically unable to focus the radiation, is systematically outperformed by at least an order of magnitude by focusing observatories.
This leads to a lag in the information available on HMXBs and candidate HMXBs, which is spread throughout the literature; the more time passes, the more tedious it is to recover valuable parameters characterising the binaries, such as the various counterparts, the spectral type of the companion star, the orbital solution, or the detection of a pulse period. Collecting this information in a single place is necessary for a proper overview of the current observational knowledge on HMXBs, and catalogues dedicated to these peculiar sources have therefore been assembled in the past.
The first edition of such a catalogue was compiled by Bradt & McClintock (1983). Following this, van Paradijs (1995) proposed a second edition, which was then further improved by a third (Liu et al. 2000). Eventually, Liu et al. (2006) compiled the fourth and latest edition to date (although we note that Raguzova & Popov 2005 proposed a similar work immediately before). We hereby present a catalogue of HMXBs in the Galaxy that covers the new information brought during the era from INTEGRAL to Gaia (2006-2022).
We can identify various arguments for the necessity of building an updated catalogue of HMXBs. Firstly, the aforementioned catalogues are still being used today, even though they have not been updated for more than 15 years. The absence of any recent update pushed us to begin compiling recent information on HMXBs in Fortin et al. (2022b) to infer natal kick properties, not only for individual systems but for the populations of BeHMXBs and sgHMXBs. We are still missing crucial parameters for many binaries, however, which narrows the number of systems available for population studies. This shows the need for such catalogues to identify good HMXB candidates to follow up, and to identify which information to look for in order to complete our knowledge of these sources. Secondly, with the arrival of the new generation of observing facilities dedicated to high-energy and/or transient astronomy, such as the Space Variable Objects Monitor (SVOM), the Large Synoptic Survey Telescope (LSST), or eROSITA (for the latter, see e.g. Maitra et al. 2023 for a study of a new BeHMXB in the Large Magellanic Cloud, LMC), and with the nascent gravitational astronomy of LIGO, Virgo, KAGRA, and LISA, a contemporaneous view of the current HMXB landscape is valuable in the scope of population studies. Catalogues of HMXBs have already been used to constrain their properties as a population (see e.g. Coleiro et al. 2013 and Fortin et al. 2022a). HMXBs are also representative of a source category that is directly related to supernova explosions as well as to compact binaries that finally merge as gravitational-wave sources (see a recent review by van den Heuvel 2019). Comparing the current population of HMXBs with the population of gravitational mergers that is going to build up in the years to come may yield insightful results on stellar evolution in general.
We therefore suggest that, for an evolutionary snapshot of the current population of HMXBs, it is necessary to compile measurements of the intrinsic binary parameters (orbital period, eccentricity, and systemic radial velocity) as well as measurements of the individual components, such as the mass of the compact object (Mx) and its spin period, and the mass of the optical companion (Mo), its spectral type, and its luminosity class. The latest data release of the Gaia satellite (Gaia Collaboration 2022) has made the distances to Galactic binaries widely available, giving access to their 3D spatial distribution and therefore their place in the Galactic ecology.
We note that many HMXBs are known in the Magellanic Clouds (MCs) and that previous catalogues (Liu et al. 2005) may also benefit from an update. As stated in Liu et al. (2006), the sheer number of new data justifies splitting these works, especially in our case, where Gaia plays an essential role in the Galaxy (distance determination) that is not applicable to the MCs. The data-mining strategy to recover information about MC HMXBs should also be adapted. Lastly, the population of MC HMXBs is known to be quite different from the Galactic population and therefore deserves a dedicated discussion and paper.
(1) Laser Interferometer Gravitational-Wave Observatory; (2) Kamioka Gravitational Wave Detector; (3) Laser Interferometer Space Antenna.
In this paper, we build an updated catalogue of HMXBs and candidate HMXBs in the Galaxy. We also include systems identified as high-mass gamma-ray binaries (HMGBs), which are thought to be powered by the spin-down of a pulsar and not by direct accretion onto it (see e.g. the review in Dubus 2013). Since the publication of the last HMXB catalogues, high-energy observations (e.g. INTEGRAL, Chandra, XMM-Newton, Swift, the Monitor of All-sky X-ray Image MAXI, the Nuclear Spectroscopic Telescope Array NuSTAR, Suzaku, or Fermi) and optical/near-infrared (nIR) follow-ups have allowed astronomers to discover new HMXBs. Many of the parameters mentioned above, such as spectral type, period, or eccentricity, were accurately determined. While the catalogue contents proposed here will remain fixed (last updated in September 2022), we also host a dynamic version of the catalogue online that is regularly updated when new observations of HMXBs are performed, to add new systems or complete the list of known parameters. We strive to cite the original references for each measurement we present, and not just previous catalogues. In Section 2 we describe how the catalogue is built and how we attempted to automatise the search for the multi-wavelength counterparts to HMXBs. We briefly discuss the resulting catalogue and its uses in Section 3 before concluding in Section 4.
Building the catalogue
We describe in this section the steps we took to build the catalogue. We first used existing catalogues dedicated to HMXBs and cross-matched them with more recent catalogues of hard X-ray sources. We used the services of the Centre de Données Astronomiques de Strasbourg (CDS), namely Simbad (Wenger et al. 2000) and Vizier (Ochsenbein et al. 2000), to search for updated content on the sources and to search for missing HMXBs. We semi-automatically searched for known counterparts from hard X-rays to the near-infrared. To complete this, we manually compiled all the known parameters on the HMXBs that we were able to find in the literature and list the proper references to the original papers.
The following Section 2.1 is quite similar to what is described in a previous work (Fortin et al. 2022b), in which we built what can be seen as a precursor to this catalogue. We provide a summary of what has been done and focus on the additions brought in the present work.
Liu et al. (2006) is the most commonly referenced catalogue of HMXBs, listing 114 systems in the Galaxy (including candidates). To build a working base, we added to this catalogue the sources seen by INTEGRAL as of 2016 (Bird et al. 2016). Many of the 939 hard X-ray sources presented in that catalogue are already identified, and nearly 40% are active galactic nuclei. We thus only added the sources labelled HMXBs, low-mass X-ray binaries (LMXBs), cataclysmic variables (CVs), or still unidentified. Misidentification of the exact type of X-ray binary is not unheard of, so we kept all X-ray binaries at this step and discarded non-HMXB sources only after reviewing the new results published in the literature since then. We performed a positional cross-match using Topcat (Taylor 2005) to find the bulk of sources common to both catalogues, and we manually confirmed any duplicates or sources that were left out because of poor astrometrical constraints. Identifiers of the sources were especially useful in this task because they are often similar from one catalogue to the next. This produced a working base of 128 HMXBs.
In parallel, we queried the Simbad database for sources of the type (or subtype) labelled HXB, the identification associated with HMXBs in Simbad. We retrieved 1288 sources in this way. Most of them are extragalactic; they are usually bundled in very tight regions of the sky associated with nearby galaxies, forming dense patches of extragalactic HMXBs. A simple way to automatically detect and remove them is to discard sources with neighbours closer than 6″. We verified that even in the Galactic plane, the sources we retrieved from Liu et al. (2006) and Bird et al. (2016) are typically twice as separated (around 15″). This left us with 175 sources, several of which are isolated extragalactic HMXBs, which we discarded later. We note that only 109 of the base HMXBs were found in this way in Simbad; the remaining 19 are simply not labelled HXB. We individually investigated the 66 additional Simbad HXBs in order to supplement our catalogue.
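For concreteness, the Simbad query and the 6″ neighbour filter described above could be reproduced along the following lines. This is a sketch under assumptions: astroquery's query_criteria interface and the RA/DEC column names vary between versions, and the exact selection the authors used is not specified.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astroquery.simbad import Simbad

# Fetch sources typed as HMXBs ("HXB") in Simbad.
result = Simbad.query_criteria("otype='HXB'")   # older astroquery interface; newer
                                                # versions favour query_tap instead
coords = SkyCoord(result["RA"], result["DEC"], unit=(u.hourangle, u.deg))

# Nearest-neighbour separation within the sample (nthneighbor=2 skips the self-match);
# dense extragalactic clumps are removed by keeping only isolated sources.
_, sep2d, _ = coords.match_to_catalog_sky(coords, nthneighbor=2)
isolated = sep2d > 6 * u.arcsec
galactic_candidates = result[isolated]
print(f"{isolated.sum()} isolated sources kept out of {len(result)}")
```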
In effect, a majority of these 66 Simbad HXBs are actually LMXBs. Their primary type in Simbad is nevertheless still set to HMXB, even though a spectral type is often available and clearly corresponds to a cool main-sequence star. We discarded them, but kept the remaining entries even when no precise information on the spectral type was available in Simbad, because we performed a thorough manual search for this information later. At this point, we had a set of 145 HMXBs and candidate HMXBs.
Finding an unambiguous chain of counterparts
We considered that a secure identification of an HMXB partly comes from having an unambiguous list of its detections from hard X-rays down to the near-infrared. This ensures that none of them are blended with close-by high-energy sources, and it efficiently removes sources listed as HMXBs in the literature that were detected only once in hard X-rays 40 years ago and have had no new detection since then.
Hence, we verified each of the HMXBs in the present catalogue for their counterparts at various wavelengths. In order of increasing typical astrometric precision, we cross-matched the available positions of the HMXBs with the catalogues listed in Table 1. Independently of the origin of the positional data that were retrieved, we first queried each catalogue in a cone whose angular size varied depending on 1) the typical astrometrical accuracy of the queried catalogue and 2) the accuracy of the initial positional data. If the positional data were more accurate than the queried catalogue, the cone size was set to the radii given in Table 1, which are about twice the worst astrometric performance in the corresponding catalogue. If the astrometric precision of the queried catalogue was more accurate than the positional data, the cone size was set to the error available in the positional data.
Then, after reviewing the counterparts found at high energies, we performed a recursive search, from the least accurate counterparts to the most accurate catalogues (2MASS and Gaia). This allowed us to recover the chain of detections from high energies down to the optical/nIR wavelengths, as well as the soft X-ray detections, whose astrometrical accuracies (particularly from Chandra and XMM-Newton) can rival optical telescopes.
There is a limit to this process, because the automatic query can generate false counterparts: the typical astrometrical accuracies that we used are based on the worst performance of each facility, so that any systematic errors in the astrometric calibration between catalogues can be taken into account. Systematic errors in astrometry appear to be especially large in older catalogues (Uhuru, the High Energy Astronomy Observatory HEAO, or Ariel V), because we often find that the historical detections of high-energy sources are not exactly compatible with more recent detections (e.g. INTEGRAL or Swift) when considering their 90% positional uncertainty. We also note that for observing facilities with astrometrical accuracies of about 1″ or better (Swift, XMM-Newton, Chandra, 2MASS, and Gaia), we added 0.5″ to the positional uncertainty when validating the chain of counterparts. For instance, some XMM-Newton, Chandra, 2MASS, and Gaia detections of the same source can be so precise that they are not technically compatible with one another; for Galactic sources, even when looking towards the Galactic plane in crowded regions, it is unlikely that two separate sources lie closer than 0.5″. Using this value of systematic error was already successful in Fortin et al. (2022b), who searched for unambiguous Gaia counterparts to 2MASS sources. We verified each individual result of this automatic counterpart search. We manually removed false detections of counterparts and searched for actual counterparts in the literature when necessary. When we manually input coordinates from specific publications, we added a reference to them in the online catalogue; they usually come from Astronomer's Telegrams and are therefore not necessarily present in the queried catalogues.
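A minimal sketch of the compatibility criterion described above (assumed logic, not the authors' pipeline; the example coordinates and errors are hypothetical):

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

def compatible(pos1: SkyCoord, err1: u.Quantity, pos2: SkyCoord, err2: u.Quantity,
               systematic: u.Quantity = 0.5 * u.arcsec) -> bool:
    """Positional compatibility of two detections within their 90% errors.

    A 0.5" systematic term is added to each sub-arcsecond error, following the
    criterion described in the text.
    """
    e1 = err1 + systematic if err1 < 1 * u.arcsec else err1
    e2 = err2 + systematic if err2 < 1 * u.arcsec else err2
    return pos1.separation(pos2) < (e1 + e2)

# Hypothetical example: an XMM-Newton detection against a Gaia source.
xmm = SkyCoord("17h00m00.10s", "-45d00m00.2s")
gaia = SkyCoord("17h00m00.12s", "-45d00m00.5s")
print(compatible(xmm, 0.8 * u.arcsec, gaia, 0.02 * u.arcsec))
```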
Retrieving binary parameters and new HMXBs
We made extensive use of NASA's Astrophysics Data System (ADS) to recover the parameters and their corresponding references. Some papers greatly facilitated the process, as they already listed information on some of the HMXBs in our catalogue. Orbital periods, spin periods, and spectral types are found in Belczynski & Ziolkowski (2009), spin periods of pulsars are reported in Annala & Poutanen (2010), spectroscopic information on Ae/Be stars is given in Fairlamb et al. (2015), tabulated data on BeHMXBs are presented in Tsygankov et al. (2017) and Reig et al. (2017), HMXBs detected by INTEGRAL are reported in Sidoli & Paizis (2018), an overview of SFXT candidates is given in Sguera et al. (2020), much information on radio pulsars is collected in van den Eijnden et al. (2021), XMM-Newton and Swift observations of sgHMXBs are reported in Ferrigno et al. (2022), and HMXBs seen by Fermi are presented in Harvey et al. (2022).
For each piece of information we compiled (spectral type, systemic radial velocity, masses, orbital period, spin period, and eccentricity), we provide the reference to the paper that first reported the measurement. While the articles listed above greatly sped up the process, we still manually checked each and every listed source in ADS and Simbad to search for any missing measurement and/or reference. This step is crucial not only to gather the most complete set of data on HMXBs in one place, but also to ensure that we do not cite papers in which no actual measurement was made. This facilitates determining the original source.
Furthermore, we also searched for papers announcing the detection of new HMXBs between 2016 and 2022, and added any new entry to the catalogue after performing the same precautionary steps described in this section. We mention for instance HD 96670, which was recently identified as a new BH HMXB candidate by Gomez & Grindlay (2021).
Contents of the catalogue
In Table A.1 we provide a single identifier, which is either the historical name of the HMXB, the most commonly used one (e.g. for INTEGRAL sources), or the main identifier as queried in Simbad. This service can be used to retrieve the other identifiers available for the HMXBs. The "Spectype" column refers to the spectral type of the donor star in the binary. We also provide an indication of the subclass of the HMXB: Be, supergiant (sg), supergiant fast X-ray transient (SFXT), and a few peculiar subclasses such as sgB[e] or Wolf-Rayet (WR). Most of the subclass information comes from the spectral type of the companion; if no spectral type is provided, a reference may be given to motivate the choice of subclass. The sky coordinates of the most accurate counterpart we found are listed alongside their 90% positional error. We also include distance inferences from Bailer-Jones et al. (2021) when a Gaia DR3 counterpart is available. These distances are based on Gaia EDR3, and as a result, they cannot be directly retrieved using the Gaia DR3 identifiers we provide in the full catalogue; instead, we first retrieved the Gaia EDR3 identifiers using a cone sky match, and then queried the distances in Bailer-Jones et al. (2021). Finally, Table A.1 provides a variability flag ("Var") that summarises whether the HMXB was flagged as a variable source in the INTEGRAL, 4XMM DR11, or Chandra catalogues, or whether the ratio of the peak to mean flux in the Swift 2SXPS catalogue is greater than 5. Detailed information about the individual variability flags is given in the online version of the catalogue.
In Table A.2 we introduce the orbital characteristics of the catalogued HMXBs. We have separated this information from Table A.1 for readability, but the full online catalogue contains the information from both tables together. First, indications are given of the mass of the compact object (Mx) and of the companion star (Mo). Companion masses that were broadly inferred from the spectral type by us are labelled with a dagger; we used the atlas of Be stars from Porter (1996) and the stellar parameters for O stars available in Martins et al. (2005). The orbital period, eccentricity, spin period, and systemic radial velocity are given as available in the literature.
In addition to all the information in Tables A.1 and A.2, the online version of the catalogue provides a list of the multi-wavelength counterparts to each HMXB. For each counterpart, we provide the right ascension and declination in J2000, the 90% positional error, and the identifier as listed in the queried catalogues. This can facilitate any further cross-match, because sky matches can produce false associations, and identifiers help to identify any mistake in this matter.
The full catalogue content is available on Vizier in a fixed version. We also host a dynamic version that can be browsed online and that will be updated upon the request of users. New versions will be regularly published on the website and will be available for download in various formats.
Results, discussions, and byproducts
In this catalogue, we present 152 HMXBs and candidates in the Galaxy. This is a 33% increase over Liu et al. (2006) for the whole sample. We can also compare the increase in securely identified HMXBs: the 2006 catalogue mentions that only 63 were confirmed systems, the remaining 51 being candidates at that time. In the current catalogue, if we consider as confirmed the HMXBs for which we have a spectral type indicative of a massive star, then we count 126 confirmed HMXBs. If we add those with a detected orbital period and spin period, the number of confirmed HMXBs reaches 134, more than twice that of Liu et al. (2006). The Galactic sky map of HMXBs is shown in Figure 1. We note that 111 of the HMXBs have a Gaia DR3 counterpart, of which 4 do not have a parallax estimate. We show the face-on Galactic distribution of HMXBs seen by Gaia in Figure 2. We find that the current number of BeHMXBs in the Galaxy is 74; there are 52 sgHMXBs, of which 21 are SFXT candidates and 5 are sgB[e] systems. Two HMXBs have a Wolf-Rayet companion. We also note that the spectral type of the companion is poorly constrained in 28 HMXBs, which indicates that optical/nIR identification campaigns are still very necessary. Liu et al. (2006) listed 50 BeHMXBs and 16 sgHMXBs according to the listed spectral types. This means that we improve the census of these subclasses by 50% and by more than 200%, respectively. The dramatic increase in the number of known sgHMXBs over the past two decades is largely due to the capabilities of INTEGRAL.
Examples of use
This catalogue was compiled to facilitate the retrieval of information on HMXBs and to allow for considerations to be made not only on individual systems, but on Galactic HMXBs as a population. We provide two examples of how this catalogue can be used for this purpose.
A first example of how the data may be used is to build a Corbet diagram, shown in Figure 3 with the 38 HMXBs in Liu et al. (2006) (top panel) and the 75 HMXBs from the current catalogue (bottom panel) for which both the orbital period and the spin period are determined. It is a great tool for visualising the effect of mass transfer via wind accretion versus via a decretion disk. Because the angular momentum is transferred very efficiently when an NS interacts with a decretion disk, BeHMXBs present a strong correlation (P_spin ∝ P_orb², Corbet 1984); on the other hand, sgHMXBs do not show a significant correlation, as wind accretion is inefficient at transferring angular momentum.
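Rebuilding such a Corbet diagram from the downloadable catalogue might look as follows. This is a sketch: the file name and column names are hypothetical placeholders for the actual export.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Log-log spin period vs. orbital period, split by subclass.
cat = pd.read_csv("hmxb_catalogue.csv")            # assumed export of the online catalogue
cat = cat.dropna(subset=["Porb_d", "Pspin_s"])     # keep systems with both periods

fig, ax = plt.subplots()
for subclass, marker in [("Be", "o"), ("sg", "^"), ("SFXT", "s")]:
    sel = cat["subclass"] == subclass
    ax.scatter(cat.loc[sel, "Porb_d"], cat.loc[sel, "Pspin_s"],
               marker=marker, label=subclass)
ax.set(xscale="log", yscale="log",
       xlabel="Orbital period [d]", ylabel="Spin period [s]")
ax.legend()
plt.show()
```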
As expected, the updated Corbet diagram shows a dichotomy between sgHMXBs, which tend to have shorter orbital periods and host more slowly spinning NSs, and BeHMXBs, with longer orbital periods but somewhat faster-spinning NSs. Even with the greatly improved census of sgHMXBs, they do not show any particular correlation in the Corbet diagram, as opposed to BeHMXBs (see e.g. Cheng et al. 2014), whose orbital period generally increases with spin period. A few remarkable systems can be quickly identified: the two millisecond pulsars SAX J0635.2+0533 and PSR B1259-63 (the latter orbiting its companion in more than 1000 d), and at the opposite end, the very slowly rotating 1A 0114+650, IGR J19140+0951, 1H 1249-637, and 4U 1954+31. The last system is also peculiar because it is the only HMXB in the Galaxy with a confirmed M I massive supergiant donor star (Hinkle et al. 2020). As a second example, we built a distribution of the soft X-ray luminosities of HMXBs in the Galaxy. The current catalogue does not list common high-energy information such as X-ray fluxes, hardness ratios, or hydrogen column densities. HMXBs can be variable or obscured sources, and the modelling of their high-energy emission requires a case-by-case approach, which is why we do not provide such high-level information. However, with the provided list of their counterparts in soft and hard X-ray catalogues, users can easily find this information. First, we use the distances in Table A.1, which were queried in Bailer-Jones et al. (2021) using the Gaia DR3 positions. Then, we query the Swift 2SXPS catalogue using the available Swift identifiers and fetch the value of the unabsorbed flux in the 0.3-10 keV band (column apec_flux_b). In Figure 4 we present the distribution of X-ray luminosities that can be derived from the Swift and Gaia data. We note that the source showing an extreme X-ray luminosity of >10^40 erg/s is IGR J16318-4848, the prime example of an absorbed sgB[e] HMXB (see Fortin et al. 2020 for recent broadband observations of this binary). This luminosity should clearly be considered with caution because of the uncertainties on the very high absorption along the line of sight and on the distance to the source. This is but a very crude example, as users might wish to consider other X-ray bands or hardness ratios from their preferred observatories, or might consider the exact models used to infer the fluxes (de-reddened or not, power law vs. black body, etc.). There are many other possible uses of this catalogue, depending on the user's goal.
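The luminosity estimate described above reduces to L_X = 4πd²F; a minimal sketch with placeholder flux and distance values:

```python
import numpy as np
import astropy.units as u

# Hypothetical Swift 2SXPS unabsorbed 0.3-10 keV flux and Gaia-based distance.
flux = 1.2e-11 * u.erg / (u.cm**2 * u.s)   # placeholder apec_flux_b value
distance = (2.5 * u.kpc).to(u.cm)
L_x = (4 * np.pi * distance**2 * flux).to(u.erg / u.s)
print(f"L_X ~ {L_x:.2e}")
```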
Conclusion
After more than 15 years of multi-wavelength observation campaigns, the landscape of Galactic HMXBs has changed significantly, and much information about them is available throughout the literature. We present an updated catalogue of HMXBs in the Milky Way containing not only basic information such as identifiers, subclasses, and positions, but also multi-wavelength counterparts and orbital binary parameters. These result from an in-depth survey, both automatised and manual, performed across published papers and catalogues of high-energy sources.
Compared to the last published catalogue of HMXBs by Liu et al. (2006), the total number of HMXBs known in the Galaxy has increased by roughly 33% (see Figure 5), by a factor of two when considering confirmed systems, and by a factor of three in the particular case of sgHMXBs. The latter most definitely benefited from the capabilities of INTEGRAL, while HMXBs in general benefited from many focused optical/nIR identification campaigns, as well as from multiple follow-up efforts in the soft X-ray band, which are essential to constraining the exact positions of hard X-ray sources in the sky. In addition, the data collected by the Gaia satellite since 2015 offer unrivalled estimates of positions and velocities, including distances to HMXBs across two-thirds of the catalogue, which was not achievable at this scale before.
The search for new X-ray binaries and information on them is still active, and the arrival of new observing facilities will ensure continued interest in this field. The eROSITA, SVOM, and LSST observatories will not only contribute to studying currently known or new persistent systems, but will also provide much more information on transient sources, and therefore insight into other stages of binary evolution such as supernova explosions or merger events. The addition of the gravitational messenger by the LIGO/Virgo/KAGRA observatories will work in synergy with electromagnetic transient-sky facilities to constrain the endpoint of binary evolution; we will soon, if we do not already, have access to observational data on phases spanning the entire life of massive binary stars. The coming years will thus provide many opportunities for studying the evolution of massive stars in binaries, which contribute to the Galactic ecology through their X-ray emission, heavy-nucleus formation, and possible retro-action on the interstellar medium.
Fig. 1: Positions of the HMXBs in the Galaxy, indicating a positional correlation between Galactic spiral arms and HMXBs that was recently explored in Fortin et al. (2022a) along with Galactic stellar clusters to retrieve the possible birthplaces and ages of the binaries.
Fig. 2: Edge-on view of the 152 HMXBs in the Galaxy. Galactic latitudes are indicated in degrees. Background image credits: ESA/Gaia/DPAC.
Fig. 3 :
Fig. 3: Corbet diagram of the 38 HMXBs in the Liu et al. (2006) catalogue (top panel) and the 75 HMXBs in the current catalogue (bottom panel). BeHMXBs are shown as blue dots, sgHMXBs (SFXTs) are shown as green triangles (squares), HMGBs are shown as pink squares, and the remaining HMXBs with a peculiar and/or unclear spectral type are shown as red crosses ("Indef").
Fig. 4 :
Fig. 4: Distribution of soft X-ray luminosities of the Galactic HMXBs seen by Swift and Gaia (N=89).
Fig. 5 :
Fig. 5: Evolution of the number and nature of HMXBs in the Galaxy with an identified spectral type from 2006 (left) to 2022 (right).
Table 1 :
List of queried catalogues for the counterpart search.
"Physics"
] |
The Attack-Block-Court Defense Algorithm: A New Volleyball Index Supported by Data Science
Spiker–blocker encounters are a key moment for determining the result of a volleyball rally. The characterization of such a moment using physical–statistical parameters allows us to reproduce the possible ball trajectories and then make calculations to set up the defense in an optimal way. In this work, we present a computational algorithm that shows the possible worst scenarios of ball trajectories for a volleyball defense, in terms of the area covered by the block and the impact time available to the backcourt defense to contact the ball before it reaches the floor. The algorithm is based on the kinematic equations of motion, trigonometry, and statistical parameters of the players. We have called it the Attack-Block-Court Defense algorithm (the ABCD algorithm), since it only requires the 3D coordinates of the attacker and the blocker, and a court discretized into a number of cells. With those data, the algorithm calculates the percentage of the area covered by the blocker and the time at which the ball impacts the court (impact time). More specifically, the structure of the algorithm consists of setting up the spiker's and blocker's locations at the time the spiker hits the ball, and then applying the kinematic equations to calculate the worst scenario for the team in defense. The case of a middle-hitter attack with a single block over the net is simulated, and an analysis of the space of input variables for such a case is performed. We found a strong dependence of the average impact time and the covered area on both the attack–block height ratio and the attack height. The standard deviation of the impact time was the variable that showed the most asymmetry with respect to the input variables. An asymmetric case considering more variables, with a wing spiker and three blockers, is also shown in order to illustrate the potential of the model in a more complex scenario. The results have potential applications as a supporting tool for coaches in the design of customized defense or attack systems, in the positioning of players according to prior knowledge of the opponent team, and in the development of replay and video-game technologies in multimedia systems.
Introduction
Data science and numerical modeling in sports are powerful tools for sports performance and have been widely used [1][2][3]; for example, in baseball [4][5][6][7] and, in a particular case, in volleyball [8,9]. In volleyball, data analysis of the plays that occur during games has been determinant in defining play strategies.
Efficacy in volleyball has been partially determined by prior knowledge regarding the opponent and the corresponding customization, which are characteristics that show successful results when the players have reached a good visual anticipation of the attack that is based on patterns of previous action outcomes [10].This skill and others are expected to be developed in players thanks to the coach's experience and training.In this context, an important resource that assists coaches to make decisions is Data Volley, a computational software designed to analyze performance in volleyball, allowing for the transformation of actions performed by players during a match or training into a standard code that is statistically analyzed [11,12].Additionally, this software has been utilized for video montages and graphical analysis to analyze the opponent team s offense and defense before official games [13,14].
For each match, although the opponent's strengths and weaknesses are important factors to consider, it is also fundamental to design robust strategies for offense and defense.Ideally, both of them should be at the highest level of effectiveness if it is required to have a winning team.Offense and defense are complementary and are divided by a thin line, because when an attack is conducted, the objective is to prevent the opponent from getting the point, which is a type of defensive action, and on the other hand, when a strictly defensive action is performed, the ultimate goal is to win the point, which is a type of offensive action.
Attack-perspective supporters emphasize the importance of creating offensive situations [15,16], while some defensive-style supporters, even though they accept that the attack efficacy in the game is decisive, also agree that defensive actions are fundamental for competitive success [17], and a more radical view in this last perspective is introduced by Terry Liskevych, a recognized USA Olympic Coach during 1984 and 1996.He is convinced that defense determines the winning team, and his argument is also supported by a dependence on the interplay and coordination of block and backcourt defense [18,19].Liskevych s perspective is partially supported by several scientific studies; one of them, a study obtained from the men's 2005 World League, found that defense, together with attack and block, are the three terminal actions that make teams win sets [20].In contrast to men's volleyball, in women's volleyball, there is a more balanced relationship between the attack and the defense, having a predominance of placed and slower attacks [21,22], which will not be considered in this work.Now, if one deals on the defense, blocking (obstruction to the opponent's attack) is usually understood as the characteristic defense factor, "due to the block being the first line of defense" [23], the reason that justifies the existence of several works in the literature studying its efficacy [15,24].Although this will be detailed in the methodology of this work, it is important to emphasize the importance of blocking for volleyball teams [24,25], especially for top-level teams, where it is the most important skill in opposition to attack [26].Blocking as a defense strategy must be performed in the best proper way.This action starts when the blocker jumps in front of the net, extending the hands upward, trying to avoid the opponent's spike.Performing a kill block, in which the ball ends on the opponent's court, is the best option; however, the second option is to block the ball in such a way that it remains in the blocking team's court but has the chance of organizing their offensive game.In a successful block, the ball ends up crashing against the blocker's hands, and in the best scenario, it returns to the opponent s side.Thus, it is said that the key to blocking is determined by jumping at the correct time and choosing the right jumping position [27,28].
Achieving high blocking performance is also usually associated with several abilities more commonly observed in expert players; tasks such as perceptual speed and anticipation of the ball direction are performed outstandingly well by them [29]. On the other hand, although various perspectives could be considered to characterize performance in a volleyball match, the analysis of the ball's motion provides relevant information associated with those abilities, involving variables such as ball distance, speed, and deceleration [30]. Of course, the locations of all players when the ball is attacked, and the block height, are also important [18,31].
However, when the ball passes the block, the time it takes the ball to fall to the floor (impact time) is another important issue, so that if it is just an instant, it could be an advantage for the spiker, because it would be less probable that the opponent team can react to it.In the opposite way, if the ball takes more time to fall, it would be more probable that the defense can recover the ball.The impact time during blocking is closely associated with block height [31,32], and although both could be controversial, block height can be a critical point to be analyzed, which is part of the purpose of this research.
Having as a background the use of valuable statistical tools such as Data Volley for the analysis of specific actions in volleyball matches, this work proposes a model to characterize the block action from a defense perspective, focusing on attending the controversy of the importance of block height, and providing a computational algorithm as a tool for supporting the decisions of volleyball coaches.The algorithm requires the 3Dcoordinates of the attacker and the blocker(s) as input data, and provides the worst scenario for the defense team in terms of some quantitative variables related to the covered area by the block and the impact time that it produces for each cell of the court.It is important to note that the input data, the actual or statistical values of players, can be simulated, which allows for the use of the algorithm for different situations; in turn, our research outcomes can be applied for defensive as well as for offensive rolls; however, the discussion is oriented to the defensive perspective.Starting from the fact that a volleyball attack is a complex phenomenon that is composed of several components, multiple variables, and unpredictable factors, the hypothesis of our method considers blocking as one key component, which is analyzed in a separated way in order to enclose the relation between its inherent variables, allowing the parametrization of the attacker-blocker interaction.In this sense, our algorithm generates statistical data by modeling the possible worst scenarios that could occur for the specific situation of the block in a match.The structure of the paper is as follows: Section 2 describes the methodology to implement the algorithm, Section 3 shows the results of applying the algorithm for some cases studies, Section 4 is dedicated to discussing the applications and limitations of the model, and our conclusions are shown in Section 5.
Methodology
This section describes the general concepts and assumptions for modeling the ball's motion, and defines variables and the set of equations, together with the conditions to be fulfilled, in order to have a solution.There is also a description on the discretization of the court.The pseudo-code of the ABCD algorithm is shown at the end of this section.
Physical Model
The model considers the kinematic equations of parabolic trajectories in Cartesian coordinates and the MKS system of units.The concepts of "heat", "dink", and "dump" should be interpreted as in [33].Key variables and assumptions are detailed below.
The ball.The volleyball ball is considered as a perfect spherical ball whose center of mass agrees with its geometrical center, such that the kinematic equations of motion can be applied to its mass point representation m.In turn, the ball is also characterized by its radius r, in order to compute the distance measurements of the model.
The attacker.The 3D-coordinates at which the attack occurs, a ≡ a x , a y , a z , is defined by the 2D location at which the spiker hits the ball (a x , a y ), and the spike's height a z , where a z is taken in such a way that the center of the ball agrees with the center of the attacker's hand, as illustrated in Figure 1a.
The blocker.The block is modeled by a rectangle over the net (at y = 0 m, without penetration), so that it covers the entire cross-section of the rectangle without space within it, blocking all of the balls trying to pass through it with any part of its circumference.This leads to a coverage discontinuity on the border of the rectangle; see Figure 1a.In this way, the block is characterized by its 2D location (b x , b y ), its maximum height b z , and its coverage width b w , so that b 1 ≡ b 1 x , 0, b 1 z , b 1 w represents a single block; see Figure 1b.The distance of the arms of blocker 1 from the net is b 1 z − h net , where h net is the height of the net.In case of double or triple blocks, the second and third blockers are characterized in a similar way by b 2 and b 3 , respectively.Even though it will not be considered in this paper, three more blocks could be added for the case in which the arms of the blocker are uneven, so that each block would represent an arm of a player.
The court.Hereafter, we will refer to the court on the defending team's side simply as "the court".Then, in order to properly illustrate the relations between variables, the court with an area of 9 m × 9 m is discretized into a cell array with n cells per side, which leads to a refinement of n/9 cells per meter and a total of n 2 cells.The ij-th cell is defined by its 3D-coordinates c ij ≡ c x , c y , 0 in meters, according to the system of coordinates shown in Figure 1b.
Ball's motion. The model considers two types of spike, which lead to two different types of ball motion. The first one refers to hard spikes (better known as "heats"). In this case, it is assumed that the ball travels in a straight line and that the time it takes to reach the ij-th cell of the court is t_ij = 0. This assumption agrees with real heats in elite volleyball matches, in which the players acting in the role of backcourt defense do not have time to change their position and cover only their individual spots. For each ij-th cell of the court and a fixed attack a, the straight line mentioned above is then the line through a and c_ij. On the other hand, the second type of spike refers to soft hits in general ("dinks") and those near the net ("dumps"). In those cases, we define the movement of the ball by the kinematic equations of motion [34]:

x_f = x_i + u t,   (1)
y_f = y_i + v t,   (2)
z_f = z_i + w_i t - (1/2) g t^2,   (3)

where g = 9.81 m/s^2 is the gravitational acceleration; u, v, and w are the velocity components of the ball (only w changes with time, due to g); and the sub-indexes i and f refer to the initial and final values, respectively.
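A minimal sketch of this parabolic-motion model, assuming the form of Eqs. (1)-(3) given above: the horizontal velocity components are constant and only the vertical one changes under gravity. The function and variable names are ours, not the paper's.

```python
# Minimal sketch of the parabolic-motion model of Eqs. (1)-(3): horizontal
# velocity components u, v are constant, the vertical component changes under
# gravity. Names are ours, not the paper's.
import numpy as np

G = 9.81  # m/s^2

def position(p_i, u, v, w_i, t):
    """Position of the ball at time t from initial point p_i = (x, y, z)."""
    x_i, y_i, z_i = p_i
    return np.array([x_i + u * t,
                     y_i + v * t,
                     z_i + w_i * t - 0.5 * G * t**2])

def velocity_for(p_i, p_f, t):
    """Initial velocity (u, v, w_i) carrying the ball from p_i to p_f in time t."""
    dx, dy, dz = np.subtract(p_f, p_i)
    return dx / t, dy / t, (dz + 0.5 * G * t**2) / t

# Example: a soft hit from (0, -1, 3.2) m that reaches (2, 4, 0) m in 0.9 s
u, v, w = velocity_for((0.0, -1.0, 3.2), (2.0, 4.0, 0.0), 0.9)
print(position((0.0, -1.0, 3.2), u, v, w, 0.9))  # approximately [2, 4, 0]
```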
Covered Area and Impact Time
The area of the court is represented by binary matrix A, with entries A ij , such that the covered cells are tagged by 1 and the uncovered cells are tagged by 0. Since we are looking for the worst scenario to the team in defense, it is considered that the attacker performs a heat in direction to each ij-th cell whenever possible; i.e., the ABCD algorithm supposes that a heat in c ij occurs if there is not the net or a block between a and c ij .If this occurs, c ij is tagged as not covered, then A ij = 0 and t ij = 0 s, as mentioned above.In accordance with Figure 1c, this happens when constraint (4) is not satisfied: where d ac ij and d ab ij are the distances on the plane xy from the position of a to a zone c ij , and from a to the point where the ball crosses the center line s ≡ (s x , 0, s z ), respectively, such that: In turn, s x is also geometrically defined by a linear interpolation between (a x , a y ) and (c x , c y ), whereas s z takes its value depending on whether the ball crosses the centerline over the block or over the net: where h net is the height of net, and the terms b x ± b w 2 indicate the width of each side (left and right arms) of the block.The radius of the ball, r, is added to the vertical and horizontal limits of the block and the net because any ball that touches the blocker (or also the net) with any part of its circumference is blocked, according to the definition of block in Section 2.1.Now, for each ij-th cell of the court in which a heat is not possible, the corresponding cell is tagged as covered by A ij = 1, and the time the ball takes to reach that cell (the impact time) is calculated by solving for the time the kinematic Equations ( 1)-(3), matching (in position and velocity) the trajectories: • from a to s; taking a as the initial point and s as the final point, • from s to c ij ; taking s as the initial point and c ij as the final point.
In this way, the impact time t_ij for the ij-th cell is obtained from Eq. (9), where ξ_ij is the ratio defined in Eq. (10). Note that s_z must be the minimal height at which the ball can cross the center line, in order to prevent negative times t_ij, which could result from computing s_z directly from Equation (8) in the cases specified in Eq. (11). Thus, s_z must be increased by a small increment dz until reaching the minimal height that gives t_ij ≥ 0. Finally, the average impact time µ_t = (1/n²) Σ_ij t_ij, the standard deviation of the impact time σ_t, and the percentage of covered area A = (1/n²) Σ_ij A_ij × 100% are defined as the output variables that provide a global measure for the court. In turn, the ratio ρ = a_z/b_z is defined in order to establish a measure of the relative height of the attack.
2.3. The Attack-Block-Court Defense Algorithm (The ABCD Algorithm)
The pseudo-code of the ABCD algorithm, according to Equations (4)-(11), is presented in Algorithm 1. Given the 3D positions of the attack and block, the algorithm calculates the output variables related to the impact time and the covered area for that simulation. Our current model, as shown in the pseudo-code, accepts up to three blockers. The parameters regarding the ball, the court, the net, and the ball's trajectory can be modified.
Algorithm 1 Pseudo-code of the ABCD algorithm for one simulation.
System initialization: set r, n, h_net, dz and initialize A and t.
Read the entry values (positions of the attack and of the block(s)).
For each ij-th cell of the court:
    Calculate the distances d_ac_ij and d_ab_ij and the ratio ξ_ij (10).
    If the ball is in the xy-direction of the k-th block:
        Define s_z (8).
        Calculate the impact time (9) and save it as t^k_ij.
        Tag the ij-th cell as covered: A_ij = 1.
    If tag = 0 (the ball is not in the xy-direction of any block yet considered):
        Define s_z (8).
        Calculate the impact time (9).
        Tag the ij-th cell as covered: A_ij = 1.
In summary, the algorithm consists of the following steps (a code sketch of this loop is given after the list):
1. Input data: 3D positions of the attacker and blocker(s);
2. Consider one cell of the court;
3. Calculate the ball's trajectory of the worst scenario;
5. Repeat Steps 3-5 for the rest of the cells;
7. Calculate the output data.
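The sketch below illustrates, under our own reading of the geometry described in Section 2.2, how the per-cell coverage test behind constraint (4) can be looped over a discretized court: a heat toward a cell is possible only if the straight line from the attack point to the cell clears the net or block (plus the ball radius) where it crosses the centre line. The names, grid layout, and simplifications are assumptions, and the parabolic impact-time calculation of Eq. (9) is not reproduced here.

```python
# Sketch of the per-cell coverage test (our reading of constraint (4)):
# a "heat" toward cell c is possible only if the straight line from the
# attack point a to c clears the net/block (plus the ball radius r) where
# it crosses the centre line (y = 0). Impact times are not computed here.
import numpy as np

R, H_NET = 0.1035, 2.43          # ball radius and net height [m]
N = 180                          # cells per court side

def coverage_map(a, b, n=N):
    """a = (ax, ay, az) attack point; b = (bx, bz, bw) one block on the net."""
    ax, ay, az = a
    bx, bz, bw = b
    xs = np.linspace(-4.5, 4.5, n)   # cell centres across the court width
    ys = np.linspace(0.0, 9.0, n)    # defending side: y in [0, 9] m
    A = np.zeros((n, n), dtype=int)
    for i, cx in enumerate(xs):
        for j, cy in enumerate(ys):
            # crossing point of the straight line a -> c with the centre line
            frac = (0.0 - ay) / (cy - ay)
            sx = ax + frac * (cx - ax)
            d_ac = np.hypot(cx - ax, cy - ay)
            d_ab = frac * d_ac
            # obstruction height at the crossing point (block or net, plus r)
            sz = bz + R if abs(sx - bx) <= bw / 2 + R else H_NET + R
            # height of the straight line a -> c (which drops to z = 0 at c)
            line_z = az * (1.0 - d_ab / d_ac)
            if line_z < sz:          # heat blocked -> cell tagged as covered
                A[i, j] = 1
    return A

A = coverage_map(a=(0.0, -1.0, 3.2), b=(0.0, 3.1, 1.0))
print("covered area: %.1f%%" % (100.0 * A.mean()))
```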
Results
Numerical simulations have been conducted to show the behavior of the output variables for different case studies.For all of the simulations, the physical parameters have been set with the aim to illustrate conditions that are close to the range of elite men's volleyball matches: r = 10.35 cm, h net = 2.43 m [35].The algorithm parameters were defined as n = 180 cells and dz = 0.1 mm, in order to obtain a good resolution.
The system could accept up to 15 input variables: three for the position of the attacker, nine for the position of the three blockers (three variables for each one) and three variables for the corresponding block width.However, in order to show the potential of the algorithm and to have a comprehensive study of the dependence of the variables, in this paper, we show only the case studies fixing both the x-position of a middle-hitter attack at x = 0 m, corresponding to a x = 0 m, and a single block located at (b 1 x ,b 1 y ) = (0,0).Block assist is then not considered, so that the second and third blockers are introduced in the algorithm, assuming b 2 z = b 3 z = h net .This reduces the number of effective input variables to four: a y , a z , b 1 z , b 1 w .Since only one blocker is considered, here after, we omit the superscript.The algorithm was programmed in R statistics [36], while the resulting data were plotted using Gnuplot [37]: Figures 2-7 and A1-A4.
Figure 2 shows four maps of the impact time on the court, corresponding to four simulations.All the cases consider that the middle hitter hits the ball at a height a z = 3.2 m.Due to the symmetry of the setup, the maps show only one half part of each solution and one half of the domain for each solution.Case 1 considers that the attacker hits the ball at one meter from the net a y = −1 m, while the block height is b z = 3.1 m and the block width is b w = 1 m.In Case 2, a higher and wider block is considered so that b z = 3.For all the cases, one can see a gradient for µ t in the y-direction, having the largest times near the centerline, located at the x-axis.The gradient is cut by the diagonal projection of the block's coverage, leading to a discontinuity in the map.Such slopes differ according to the lateral extension of the block b w , when comparing cases at the same a y .Even more, for the uncovered cells by the block, the difference in the separation of the attack from the net a y leads to a steeper gradient in Cases 1 and 2. The uncovered cells, neither by the block nor by the net, are identified by the darkest blue color, corresponding to t ij = 0. On the other hand, there is a smoother gradient for the covered cells by the block.Indeed, the minimum values of time reached are close to 1 s, in Cases 1 and 3, and more than 1 s in Cases 2 and 4, because of the higher block b z in these latest cases.The integration of those results helps to visualize and quantify the multi-dependence of the average impact time and its importance in a match, e.g., when comparing the impact time with the reaction time and the explosive speed of the players in the backcourt defense.The maps are color coded according to the bar at the bottom, in units of seconds.For all of the cases, the cells near the net have µ t > 2 s.However, the color scale has 2 s as the upper limit in order to have a proper visualization of the gradient.The four cases show a similar relation in shape between the covered area and the average impact time: there is a first part in which µ t increases linearly, with similar slopes and with almost no dispersion with respect to A, and then there is a break point that defines the beginning of a second part in which the linear relation remains, but with a clear dispersion.Nevertheless, the shape in each case has a different position and size.Therefore, even though the possible scenarios of A vs. µ t do not vary in a qualitatively manner for the backcourt defense, the attack position from the centerline a y and the block width b w modify the limits of the space of solutions.Analyzing the effect of the variables, larger average impact times could be reached when the block is wider, as in Cases (b) and (d), which is intuitive because more area is covered.On the other hand, comparing Cases (a) and (b) with Cases (c) and (d), we can observe that the farther the attack from the net, the larger the standard deviation of the impact times.In this way, Figure 3 focuses more on relating the space of solutions than on highlighting the dependence of the output variables on the input ones.Figures A1 and A2 in Appendix A show similar graphs but with color coding in accordance to variables a z and b z , respectively.
Figures 4-6 illustrate the results of the numerical simulations as shown in Figure 3, but plotting the output variables with the dependence of some input variables.Figure 4 relates the output variables A and µ t with the input variables a z and b z through the ratio ρ.The solutions of each case look like a piecewise function with a break point.The break points occur at values of ρ ≈ 1.1 when a y = −1 m, and ρ ≈ 1.2 when a y = −2 m.Moreover, the left part from the break does not depend on ρ, which means a constant range of covered area A: Case (a), a range of 70-80%; Case (b), more than 95%; Case (c), a range of 70-90%; and Case (d), more than 90%.The right part presents a non-linear decrease in A for increasing values of ρ, reaching values of A = 25% when a z = −1 m and A = 50% when a z = −2 m.In turn, the color map shows that µ t decreases when increasing ρ.The way of that decrease is non-linear and smooth, but different for each case, as illustrated in Figure A3.The effect of b w is shown when comparing Cases (a) and (c) with Cases (c) and (d), so that wider blocks lead to larger average impact times and larger covered areas.
Figure 5 is similar to Figure 4, but substitutes the output variable µ t by the input variable a z in the color code.The color gradient indicates that A depends also on a z , such that the general behavior of A as a function of ρ consists of a family of curves, each of them having higher values of A for lower values of a z .The standard deviation, σ t , shows also a similar dependence on a z , while µ t has an inverted dependence on it; see Figures A3 and A4, respectively.
Figure 6 shows the behavior of σ t and A regarding ρ.The resulting graphs for such a visualization present the largest asymmetry when comparing between cases.Here, there is an inflection point that causes a change in the concavity of the curves.Such a point coincides with the corresponding break points in the previous figures.For the cases when a z = −1 m, the right side of the inflection point is approximately constant ∼ 0.55 s, while for the cases when a z = −2 m, it increases up to 0.75 s.The left side of each graph shows a concave shape with a large dispersion of points.In turn, the color of the points represents the value of the percentage of the covered area A, which allows an alternative visualization of the results shown in Figures 4 and 5.
To break the symmetry considered in previous cases, and to show the potential of the algorithm, we performed simulations for the case of a wing spiker at position a = (−4.2,−0.55, 3.45) covered by three blockers at positions b 1 = (−4.3,0, 3.1), b 2 = (−3.5,0, 3.2), and b 3 = (−2.7,0, 3.0), with the coordinates in meters, and with b w = 0.5 m for all of them.The 10,000 numerical simulations were conducted, varying the parameters a z and b z inside the same ranges than the previous simulations.The results are shown in Figure 7.The left panel contrasts with previous cases (see Figure 2); it has a more complex pattern of zones, with µ t = 0; the contours correspond to the area where µ t = 0.This case also shows the unpredictability reached by adding more variables to the algorithm, from 4 to 15, when comparing the right panel with the graphics in Figure A1 in the appendix.Therefore, in this case, we cannot see a clear interplay between a z , µ t , and A.
Discussion
In order to provide a practical application guide of the results for volleyball coaches and players, in this section, we give some tips on how to interpret the obtained graphs for real matches and training, remarking that the purpose of our algorithm is to allow the coaches to establish defensive strategies to detect weak points of the defense in a parametric way.In turn, we mention the limitations of the current version of our model.
Application in Matches and Training
Due to the complexity and multi-dependence of the output variables on the input ones, fitting an equation to represent the relationship between them could not be the best option to make decisions in matches.On the other hand, the implementation of a graphical user interface (GUI), where the input variables can be introduced in a friendly manner, allows for choosing or supporting the coach's decision in an efficient way.As an example, Figure 8 shows a GUI prototype based on our entry data that would generate graphics such as those shown in this work, highlighting the specific solution of the entry variables according to the needs of the coach.The GUI in Figure 8 was implemented in C sharp [38].In the following, we will discuss a practical use for the graphics shown in this work.Figures 3, A1 and A2, are useful views for a quick detection of the possible combinations of solutions close to the result obtained with the entry data.For example, consider a training session in which the coach of the team in defense has statistical knowledge of the possible opponent spikers per rally in the next match.Thus, the ABCD algorithm could be applied for each possible attack to identify clusters in the plots of Figures 3, A1 and A2, and then to train the backcourt defense for such specific resultant scenarios; i.e., to optimize the defense in terms of A, µ t and/or σ t by customizing the backcourt location.
Figure 4 can be applied in real-time match rallies according to the needs of the defense.Supposing similar assumptions as in the above paragraph regarding a priori knowledge of the opponent team, Figure 4a,b could advise the middle blocker on determining the height of his/her jump: if the coach instructions are to maximize A without considering µ t or σ t , then the blocker could manage his/her energy in looking for a value of around ρ = 1.1 units, but not lower, and focusing on maximizing its lateral coverage b w , since the values of A for ρ < 1.1 depend mainly on the block width and the attacker height.Implementing this methodology from the training, the player could optimize his/her muscle fatigue for the next rallies.Now, if the purpose of the middle blocker is not only to maximize A but also µ t , with the aim of supporting the average reaction time of the backcourt defense, then Figure A3 should be an important output of the GUI.In such a case, the variable ρ must be considered for all of the possible scenarios, whereas a z could diminish in importance.Moreover, considering a most complex duty for the middle blocker in which his/her purpose is not only to maximize A and µ t but also focusing on σ t , with the aim of optimizing the location of the backcourt defense, then Figure 6 could support the clustering of such players, accounting the high variability of σ t .Finally, in the situation when the attack height of a specific opponent player is a risk factor for the defense, then Figures 5, A3 and A4 should be the output of the GUI, for a more customized defense and a proper positioning.
Limitations of the Model
The discussion of our algorithm is mainly focused on the four configurations of middlehitter attack and one blocker (Figures 2-6 and A1-A4), and the number of variables that we considered.Nevertheless, we presented a different configuration with a wing spiker and three blockers (Figure 7), that shows the unpredictability that can be reached when increasing the number of variables.
Our approach considers individual players; this is important since within the team, each player may have very specific behaviors; moreover, despite the results that were obtained for the parameters of men's volleyball, our algorithm is flexible on the input parameters and can also be applied to women's volleyball for instance, by considering different ranges of heights.It is important to note that our algorithm is aligned to the match analysis performed by [39] in the following aspects:
• It can consider the five categories of block opposition; here, we presented three of them: single blocking, broken double blocking, and triple blocking.
• Block position and attack zone are our input data, in the form of the 3D locations of the corresponding players.
• It considers two of the five times of an attack: the ball hit and the block time.
• It proposes three additional variables (covered area, average impact time, and its standard deviation) that have not been considered inside their wide set of variables.
Thus, the search for the relation between variables could be extended by considering our results in match analysis studies, complementing pre-existing data with a physical model.
In turn, some restrictions of our results can be seen by analyzing the maps in Figure 2. The positioning of the backcourt defense, when knowing the attack and block positions, is indirectly suggested by those maps but its feasibility is limited by the lack of additional variables, such as setting the tempo, the distance from attacker to setter, and the attack zone.In fact, [40] analyzed 4544 plays of the 2011 Volleyball World League, finding that certain attack zones are deeply associated with attack tempo, and that quicker attack plays affect defense system structuring.This means that small changes on those parameters can produce variations on the locations and heights at which blockers and spikers could make contact with the ball-recalling that these are the input data of the algorithm.The effects of those variations in the parameters have not been directly considered in our model, but they are implicit in Figures 3-7 and A1-A4.In addition to this, some general limitations of the model are discussed below.
Low impact spikes with a spin (or "off-speed hits") are not considered in this work.Serves and those types of spikes could be implemented in forthcoming communications by taking into account that the larger velocity and flight time of the ball will lead to larger drag and Magnus forces that cause a considerable deviation of the ball's trajectory, and then, these forces should be modeled.
Another limitation occurs when the ball interacts with the border of a blocker, since our block model only leads to two possible ways: the ball passes or not, so those cases when the block deviates the ball's trajectory are not considered.Such situations are more difficult to model; therefore, it should be expected that more complex solutions would result.These effects are not considered in this first approximation, in order to show the potentials of our model in a structured manner, and to provide a comprehensive relation between variables.
In this sense, the interpretation of the results is restricted by the same limitations of the model that were described above, so that unpredictability effects that are difficult to measure can occur without being caught by our deterministic model.However, our simplified block-attack configuration under ideal conditions provides significant symmetric and asymmetric patterns with useful supporting information for coaches.
Conclusions
We developed a computational algorithm named the ABCD algorithm, which characterizes and quantifies the possible scenarios for the defense of a volleyball team when an attack of the opponent is performed.The algorithm calculates the average impact time µ t , its standard deviation σ t , and the percentage of the covered area A.
The algorithm was implemented for four different conditions of game, considering a middle-hitter attack with one blocker.We found a complex-general dependence of the output variables µ t , σ t , and A on the attack-block height's ratio ρ and the attack height a z .An asymmetric case considering more variables with a wing spiker and three blockers is also shown, in order to illustrate the potential of the model in a more complex scenario.
The way of characterization and the illustrative results suggest that the ABCD algorithm is a computational tool that allows us to visualize and quantify the scenarios for defense.In this way, it could be a potential widget to adapt in a GUI for supporting or planning the decisions of coaches in games and training.As a final comment, we propose the following future lines of research:
• To add more variables and randomness to the block's model, with the aim of representing complex scenarios that are not discussed in this work, such as small modifications of the ball's trajectory caused by a light contact with the blocker's fingers.
• To illustrate representative cases of double and triple block.
• To adapt the methodology to the serve, including drag and Magnus forces in the ball's equations of motion.
• To consider more statistical data in the algorithm, such as trends in the direction of the spike and the percentage effectiveness of each player.
Figure 1 .
Figure 1.Visualization of some characteristics of the model.(a) View of an attacker-blocker encounter showing the input variables: attack height a z , block height b z , and block width b w .(b) Planar view of the court showing some input variables: a x , a y , and b x , and the xy-axes (the z-axis points outside);the origin of the coordinate system is located at the center of the entire court.(c) Lateral view of the court showing some trigonometric measures to decide whether a heat is possible for a certain cell or not: for cells in Sector 1, a heat is possible because restriction (4) is satisfied, while for Sector 2, a heat is not possible.The red dashed line is located at a height r, the radius of the ball.
3 m and b w = 1.4 m.Cases 3 and 4 simulate a back row attack at a y = −2 m, while considering similar characteristics for the block to Cases 1 and 2, so that in Case 3: b z = 3.1 m and b w = 1 m, and in Case 4: b z = 3.3 m and b w = 1.4 m.
Figure 2 .
Figure 2. Maps of the average impact time, µ t , at each cell of the court of the defending team; four cases are shown.The x-axis locates the position of the net.Case 1: position of the attack a y = −1 m, height of the attack a z = 3.2 m, height of the block b z =3.1 m, and lateral extension of the block b w = 1 m.Case 2: a y = −1 m, a z = 3.2 m, b z = 3.3 m, and b w = 1.4 m.Case 3: a y = −2 m, a z = 3.2 m, b z = 3.1 m, and b w = 1 m.Case 4: a y = −2 m, a z = 3.2 m, b z = 3.3 m, and b w = 1.4 m.The maps are color coded according to the bar at the bottom, in units of seconds.For all of the cases, the cells near the net have µ t > 2 s.However, the color scale has 2 s as the upper limit in order to have a proper visualization of the gradient.
Figure 3 .
Figure 3. Space of solutions of the three output variables of the entire court for four different cases of varying a y and b w : (a) a y = −1 m, b w = 0.6 m; (b) a y = −1 m, b w = 1.4 m; (c) a y = −2 m, b w = 0.6 m; (d) a y = −2 m, b w = 1.4 m.
Figure 4 .
Figure 4. Percentage of the covered area A and average impact time µ t in function of the ratio between heights of the attack and the block ρ, for the four different case studies: (a) a y = −1 m, b w = 0.6 m; (b) a y = −1 m, b w = 1.4 m; (c) a y = −2 m, b w = 0.6 m; (d) a y = −2 m, b w = 1.4 m.
Figure 5 .Figure 6 .Figure 7 .
Figure 5. Percentage of the covered area A in function of the ratio ρ, and the attack height a z , for the four different case studies: (a) a y = −1 m, b w = 0.6 m; (b) a y = −1 m, b w = 1.4 m; (c) a y = −2 m, b w = 0.6 m; (d) a y = −2 m, b w = 1.4 m.
Figure 8 .
Figure 8. Prototype of GUI for the input variables of the ABCD algorithm.The suggested GUI uses sliders to select the values of the entry data in a friendly visualization.
Figure A1 .Figure A2 .
Figure A1.Percentage of the covered area A and average impact time µ t at different attack heights a z , for the four different case studies: (a) a y = −1 m, b w = 0.6 m; (b) a y = −1 m, b w = 1.4 m; (c) a y = −2 m, b w = 0.6 m; (d) a y = −2 m, b w = 1.4 m.
Figure A3 .Figure A4 .
Figure A3. Average impact time µ t in function of the ratio ρ and the attack height a z , for the four different case studies: (a) a y = −1 m, b w = 0.6 m; (b) a y = −1 m, b w = 1.4 m; (c) a y = −2 m, b w = 0.6 m; (d) a y = −2 m, b w = 1.4 m.
"Computer Science"
] |
Government intervention, industrial structure, and energy eco-efficiency: an empirical research on new energy demonstration in cities
This study investigates the relationships among government intervention, industrial structure, and energy eco-efficiency (EE). Energy eco-efficiency was measured based on a non-radial directional distance function for 236 cities in China from 2005 to 2019. Additionally, the difference-in-difference model (DID) method and spatial econometric models were used to analyse the impact of government intervention and industrial structure on energy eco-efficiency and their spatial spill-over effects. Government intervention includes fiscal expenditures and policy orientation for new energy demonstration construction. Our results indicate that: China’s EE has a fluctuating upward trend and increased 17.85% in the period, and its spatial distribution imbalance gradually developed into a regional distribution balance. Moreover, government intervention and adjustment of the industrial structure improved urban energy eco-efficiency by 7.43% and 0.92%, respectively, which also has spatial spill-over effects in neighbouring regions. Furthermore, economic development, technological innovation, and foreign direct investment enable EE. However, urbanisation hinders the improvement of energy eco-efficiency. Finally, heterogeneity analysis showed that the policy of the new energy demonstration city has better effects on eastern and western cities in promoting EE.
Literature review
EE evaluation methods include the super Slacks-Based Measure Model (Super-SBM) 2 , stochastic frontier model analysis (SFA) 6 , and super-efficiency models based on NDD (Super-NDDF) 7 .An analysis of the spatiotemporal characteristics and influencing factors of EE showed that, the EE of cities in China is generally at a low level; complex fluctuations characterise the time pattern, and the EE of each city has improved to varying degrees 6 .China's EE has significant global and local spatial agglomeration characteristics, but the spatial distribution is uneven, and there are prominent spatial effects 8 .Additionally, Wang et al. 9 calculated and analysed the EE of the energy-enriched area of the Yellow River Basin using SFA and the Spatial Durbin Model (SDM), and found that the EE was relatively low and showed a downward trend.Innovation and industrial structure were found to be prominent factors enhancing EE in this region.
The academia has faced considerable controversy over the impact of government intervention on factor productivity.Some researchers have concluded that government intervention can achieve economic and environmental objectives by improving the impact of resource endowments on energy efficiency 10 , thus effectively improving the efficiency of the green economy 11 .However, some researchers have found that government intervention will disrupt the market by restricting private investment, reducing the efficiency of capital allocation, and inhibiting the increase in factor productivity 12 .The government intervenes in social development mainly through economic intervention and policy orientation, in which fiscal expenditure is the primary method of economic intervention.Researchers have studied the relationship between factor productivity and fiscal expenditure, finding that the emission of sulphur dioxide and other pollutants reduces with an increase in public financial expenditure 13 , which can significantly promote total factor energy efficiency 14 .Zhang et al. 15 found that government fiscal expenditure can improve EE; however, many researchers also concluded that fiscal expenditure will negatively affect factor productivity.The inhibitory effect on green total factor productivity increases with increasing fiscal expenditure 16 .Fang et al. 17 found that local governments may underestimate environmental protection in industrial diversification, while the decentralisation of fiscal expenditures will inhibit the improvement of EE.
By studying the impact of relevant policies and regulations with policy-oriented backgrounds on EE, researchers have found that it is difficult to meet the dual needs of improving life satisfaction and economic level by solving the conflict between the environment and energy utilisation through market mechanisms 18 .Chen et al. 7 measured the EE of 282 cities in China using Super-NDDF and found that environmental regulation has a spatial spill-over effect in improving ecological efficiency.Cui et al. 2 studied the impact of environmental regulation on EE based on Tobit and threshold regression models.They found that mandatory environmental regulations and market incentive environmental regulations have a more significant inhibitory effect on EE.In contrast, inhibiting voluntary environmental regulations has a time lag effect.
Regarding the research on industrial structure, many researchers have concluded that industrial structure adjustment positively affects EE.Guan and Xu 8 found that industrial structure is the most important factor affecting EE.Industrial transformation facilitates the effective allocation of resources through factor flow and professional division of labour to effectively improve EE 19 .Liu et al. 20 found that upgrading the regional industrial structure can considerably enhance eco-efficiency.However, some studies had different results.Meng and Zou 21 reported that the effect of industrial structure adjustment on EE is not significant.However, increasing the proportion of industrial output will increase regional social and economic benefits, and thus inflict more severe damage to the environment.The above research shows the need for a consistent conclusion on the impact of fiscal expenditure and policy-oriented government intervention and industrial structure on EE, especially in the form of budgetary expenditure.
Researchers have found that the government can effectively guide the public to improve energy efficiency through reasonable intervention.Implementing energy policies and financial subsidies can substantially improve energy efficiency.Many countries are actively committed to implementing various policies to promote energy efficiency and establish a sustainable energy mix.The policy orientation of constructing new energy demonstration cities facilitates the utilisation of renewable energy and can effectively alleviates pollutant emissions through government support, private financing, industrial structure optimisation, and resource allocation adjustment 22,23 .Yang et al. 24 concluded that policy orientation will encourage local governments to increase the intensity of environmental regulation for high-polluting industries and enterprises, and promote a certain amount of production factor resources transfer to policy-oriented new energy industries.The exploitation and utilisation of renewable energy can reduce dependence on traditional fossil fuel energy, enhance cities' energy security, and promote sustainable economic, social, and environmental development 25 .
Although the spatiotemporal characteristics of EE and its impact have been studied from different perspectives, the impact of government intervention on EE and its spatial spill-over effects have yet to be explored from the perspective of new energy demonstration city policy orientation and financial expenditure.Furthermore, the influence of industrial structure adjustment on EE requires further verification.Therefore, in this study, we constructed a NDDF model considering the system of "energy-economy-environment", and then analysed the impact of policy orientation, fiscal expenditure, industrial structure, and spatial spill-over effects of government intervention on EE and its regional heterogeneity.
DID model
Government intervention is measured by fiscal expenditure and the policy orientation of constructing new energy demonstration cities. Here, we study the impact of government intervention and industrial structure on EE. As DID can address the endogeneity problems commonly faced in the existing literature 26 , the construction of new energy demonstration cities can be viewed as a "natural experiment". In this study, fiscal expenditure, policy orientation, and industrial structure were considered core explanatory variables, and a DID model was constructed to estimate the effect of government intervention and industrial structure on EE. The equation for this model is as follows:

Y_it = β_0 + β_1 du_it + β_2 dt_it + β_3 (du_it × dt_it) + β_4 FE_it + β_5 IND_it + δ X_it + T_t + µ_i + ε_it,   (1)

where i and t denote cities and years, respectively; the explained variable Y_it is the annual EE of each city, du_it is the region dummy variable, and dt_it is the time dummy variable. The interaction coefficient β_3 reflects the net effect of policy orientation on EE. FE_it is fiscal expenditure, IND_it is industrial structure, and X_it is the control variable matrix, comprising the economic development level, urbanisation level, foreign investment, and technology level. T_t is the time fixed effect, µ_i is the individual fixed effect, and ε_it is the random disturbance term. Logarithmic processing was performed on the variables to reduce the influence of skewness and heteroscedasticity.
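For illustration, a minimal two-way fixed-effects DID regression in the spirit of Eq. (1) could be run with statsmodels as sketched below; the data frame and column names are placeholders, and clustering the standard errors by city is our own assumed choice rather than something stated in the text.

```python
# A minimal two-way fixed-effects DID sketch in the spirit of Eq. (1).
# Column names ("lnEE", "du", "dt", "lnFE", "lnIND", controls, "city", "year")
# are placeholders for the panel described in the text.
import pandas as pd
import statsmodels.formula.api as smf

def run_did(df: pd.DataFrame):
    """df: one row per city-year with the variables named in the formula."""
    formula = ("lnEE ~ du:dt + lnFE + lnIND + lnPGDP + lnURBAN + lnFDI + lnTI"
               " + C(city) + C(year)")            # city and year fixed effects
    model = smf.ols(formula, data=df)
    # cluster standard errors by city (an assumed, though common, choice)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["city"]})

# res = run_did(panel_df); print(res.params["du:dt"])  # net policy effect (beta_3)
```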
Spatial econometric model
A certain spatial correlation was observed because cities are not independent.Fiscal expenditure, policy orientation, and industrial structure affect regional EE and may also impact adjacent areas.Thus, in this study, spatial factors were incorporated into the model, taking government intervention and industrial structure as core explanatory variables.The spatial econometric model is as follows.
LnEE_it = ρ W LnEE_it + β Z_it + ξ W Z_it + µ_i + T_t + ε_it,   ε_it = γ W ε_it + ν_it,   (2)

where Z_it collects the core explanatory and control variables, W is the geospatial distance weight matrix, W LnEE_it is the spatial lag of the explained variable, and ρ and ξ are the spatial-effect coefficients; ν_it is the error term of ε_it. When ρ = ξ = 0, the model degenerates into a spatial error model (SEM); when ξ = γ = 0, it degenerates into a spatial lag model (SLM); and when γ = 0, it is the spatial Durbin model (SDM).
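As an illustration of the spatial setup, the sketch below builds a row-standardised inverse-distance weight matrix W from city coordinates and computes a spatial lag W·x; the inverse-distance specification and the haversine implementation are assumptions, since the text only states that a geospatial distance weight is used.

```python
# Sketch of a geographic inverse-distance weight matrix W (row-standardised)
# and of the spatial lag W @ x used by the SLM/SEM/SDM variants above.
import numpy as np

def inverse_distance_weights(lat, lon):
    """lat, lon: arrays of city coordinates in degrees -> row-standardised W."""
    lat, lon = np.radians(lat), np.radians(lon)
    dlat = lat[:, None] - lat[None, :]
    dlon = lon[:, None] - lon[None, :]
    h = (np.sin(dlat / 2)**2
         + np.cos(lat[:, None]) * np.cos(lat[None, :]) * np.sin(dlon / 2)**2)
    dist = 2 * 6371.0 * np.arcsin(np.sqrt(h))       # great-circle distance [km]
    W = np.zeros_like(dist)
    off = ~np.eye(len(lat), dtype=bool)
    W[off] = 1.0 / dist[off]                        # inverse distance, zero diagonal
    return W / W.sum(axis=1, keepdims=True)         # row standardisation

def spatial_lag(W, x):
    """Spatial lag of a variable observed in the same city order as W."""
    return W @ np.asarray(x)
```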
Samples and data
In this study, 3540 balanced panel observations of 236 cities in China from 2005 to 2019 were used to investigate the impact of fiscal expenditure, the policy orientation of new energy demonstration city construction, and industrial structure on EE. The investigation of the policy orientation only considered the first batch of demonstration cities, and data samples at the prefecture level were used. To ensure the robustness of the conclusions, county-level cities, districts (autonomous prefectures), and industrial park cities were eliminated from the first batch of demonstration cities established in 2014. Subsequently, 47 and 189 cities were assigned to the experimental and control groups, respectively. To investigate regional characteristics, the 30 provinces in China were divided into eastern, central, and western regions according to geographic location. Specifically, the eastern region includes 12 provinces and cities, namely Beijing, Hebei, Tianjin, Shandong, Jiangsu, Shanghai, Zhejiang, Fujian, Guangdong, Hainan, Liaoning, and Guangxi; the central and western regions comprise the remaining provinces.
Interpreted variables
The directional distance function (DDF) proposed by Chung et al. 27 is extensively used in energy and environmental efficiency studies. However, a limitation of the DDF is that all input and output elements must change in the same proportion along the direction vector. Zhou et al. 28 further proposed the NDDF, which effectively relaxes this restriction by allowing input and output factors to adjust non-proportionally. Therefore, in the present study, the NDDF was used to measure EE. The NDDF is defined as follows 28 :

D(x, y, b; g) = sup { W^T β : (x, y, b) + g · diag(β) ∈ T },   (3)

where g = (−g_x, g_y, −g_b) is the specified direction vector, W = (W_x, W_y, W_b) is the weighting vector of each input-output element, and β = (β_x, β_y, β_b)^T ≥ 0 represents the variable scaling proportions of each input and output factor.
Energy (E), capital (K), and labour (L) were taken as input factors. The gross domestic product (GDP) of each city (G) was taken as the desirable output. Sulphur dioxide (S), smoke (dust) emissions (C), and wastewater discharge (P) were the undesirable outputs of each city 2 . Using the NDDF, the DEA model of EE for the 236 cities was constructed with the direction vector g = (−K, −L, −E, +G, −C, −S, −P) and, referring to the research of Liu et al. 29 , the weight matrix W^T = (1/9, 1/9, 1/9, 1/3, 1/9, 1/9, 1/9). Substituting these into Eq. (3) and solving the linear programme yields the optimal slack variables β* of the j-th city, from which the EE of each city in the corresponding year is calculated. In measuring EE, the labour force (L) of each city was measured by the number of employees. Owing to the lack of city-scale energy consumption data in the statistical yearbooks, provincial energy consumption was allocated to each city according to nighttime-light data using a linear model without intercept 30 . The amount of capital investment was estimated using the perpetual inventory method 31 . The GDP of each city was used to express the desirable output. Sulphur dioxide (S), wastewater discharge (P), and smoke (dust) emissions (C) were estimated using data on industrial sulphur dioxide emissions, industrial wastewater discharges, and industrial smoke and dust emissions 2 . The impact of price factors was reduced by deflating all price data to 2005 levels.
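A hedged sketch of the NDDF linear programme behind the EE measure is given below, using scipy. It adopts the stated inputs, outputs, weight vector, and the evaluated city's own values as the direction vector; the detailed constraint set (in particular the equality treatment of undesirable outputs) follows a common formulation and may differ from the exact model of the paper.

```python
# Sketch of the NDDF linear programme for one evaluated city (DMU j), with
# inputs (K, L, E), desirable output G and undesirable outputs (C, S, P).
# The constraint set is a common formulation, assumed rather than quoted.
import numpy as np
from scipy.optimize import linprog

W_VEC = np.array([1/9, 1/9, 1/9, 1/3, 1/9, 1/9, 1/9])  # weights (K, L, E, G, C, S, P)

def nddf_beta(X, Y, B, j):
    """X: (n,3) inputs, Y: (n,1) desirable, B: (n,3) undesirable outputs.
    Returns the optimal slack vector beta* = (bK, bL, bE, bG, bC, bS, bP)."""
    n = X.shape[0]
    x0, y0, b0 = X[j], Y[j], B[j]
    nb = n + 7                                   # lambdas, then the seven betas
    c = np.zeros(nb); c[n:] = -W_VEC             # maximise w' beta
    A_ub, b_ub = [], []
    for m in range(3):                           # inputs: sum(l*x) + bm*x0 <= x0
        row = np.zeros(nb); row[:n] = X[:, m]; row[n + m] = x0[m]
        A_ub.append(row); b_ub.append(x0[m])
    row = np.zeros(nb); row[:n] = -Y[:, 0]; row[n + 3] = y0[0]
    A_ub.append(row); b_ub.append(-y0[0])        # output: sum(l*y) >= y0 + bG*y0
    A_eq, b_eq = [], []
    for m in range(3):                           # undesirable: sum(l*b) = b0 - bm*b0
        row = np.zeros(nb); row[:n] = B[:, m]; row[n + 4 + m] = b0[m]
        A_eq.append(row); b_eq.append(b0[m])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq), method="highs")
    return res.x[n:]                             # lambda, beta >= 0 by default bounds
```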
Core explanatory variables
(1) Policy orientation (du × dt). The policy orientation investigated in this article is the new energy demonstration city pilot dummy variable du × dt, where du is the treatment variable indicating whether the city was selected in the first batch of demonstration cities in 2014: du = 1 if selected, and du = 0 otherwise. Additionally, dt is a time dummy variable: dt = 0 for the years before the pilot selection and dt = 1 for the years after it.
(2) Fiscal expenditure. Fiscal expenditure is a government macroeconomic control measure that can intervene in economic development, pollution and carbon emission reduction, and resource allocation. By investing in new energy industry infrastructure and technological innovation, the government can continuously improve the technological innovation environment and technological service facilities, guiding the flow of innovative resources and attracting more enterprises and research institutions to increase their investment.
Increasing investment in the new energy industry will accelerate the development and utilisation of new energy, reduce pollutant emissions, and promote the improvement of EE.In this study, the level of urban financial expenditure was measured as the proportion of urban general financial budget expenditure to GDP.
(3) Industrial structure. The continuous adjustment of the industrial structure can promote the rational allocation of resource elements across industries and maintain a balance between input and output. The Theil index is an essential indicator for measuring the rational allocation of industrial resource elements in cities. In this study, the Theil index was used to measure the industrial structure, calculated as described in Gao et al. 32 , where Y_i and Y represent the added value of the i-th industry and GDP, respectively, and L_i and L are the employment of the i-th industry and total employment across the three industries. The IND value reflects the degree of rationality of the industrial structure: the larger the value, the more rational the allocation of resources and production factors among sectors.
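For illustration, a sectoral Theil-type index can be computed as sketched below; we use the standard Theil formulation based on value-added and employment shares, which may differ in sign convention or normalisation from the exact specification of Gao et al. 32.

```python
# Illustrative computation of a Theil-type index from sectoral value added Y_i
# and employment L_i for the three industries. Standard formulation, assumed;
# the paper's exact IND specification may differ.
import numpy as np

def theil_index(value_added, employment):
    """value_added, employment: length-3 arrays for the three industries."""
    y = np.asarray(value_added, dtype=float)
    l = np.asarray(employment, dtype=float)
    ys, ls = y / y.sum(), l / l.sum()
    return float(np.sum(ys * np.log(ys / ls)))

# Example: perfectly matched output and employment shares give a Theil index of 0
print(theil_index([50, 30, 20], [50, 30, 20]))   # -> 0.0
```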
Control variables
The economic development level (PGDP) was expressed in GDP per capita.Foreign direct investment (FDI) is measured as the proportion of FDI utilised by each city in regional GDP 33 .Urbanisation level (URBAN) was measured as the ratio of the urban population to the total population.Technological innovation (TI) is measured as the proportion of employees engaged in scientific research, technical services, and geological prospecting to those employed in the unit at the end of the year.To eliminate heteroscedasticity, logarithmic processing was performed for each variable.The descriptive statistical analysis of each variable is presented in Table 1.
Analysis of EE calculation results
Equations (3) and (4) were applied to measure the EE of 236 cities in China from 2005 to 2019, and the EE kernel density curves 34 were plotted, as shown in Fig. 1. Figure 1 shows the dynamic evolution characteristics of the EE levels of the sampled cities in 2005, 2010, 2014, and 2019. Overall, the main peak of the kernel density curve tends to shift to the right over time, indicating that the EE level in China is constantly improving.
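Kernel density curves of this kind can, in principle, be reproduced from the yearly vectors of city-level EE scores; the short sketch below shows one way to do this with scipy, using placeholder data.

```python
# Sketch of kernel-density curves for yearly EE scores (arrays of 236 values);
# the data passed in are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

def plot_ee_kde(ee_by_year):
    """ee_by_year: dict mapping a year to an array of city-level EE scores."""
    grid = np.linspace(0.0, 1.2, 300)
    for year, ee in ee_by_year.items():
        plt.plot(grid, gaussian_kde(ee)(grid), label=str(year))
    plt.xlabel("Energy eco-efficiency"); plt.ylabel("Kernel density"); plt.legend()
    plt.show()

# plot_ee_kde({2005: ee_2005, 2010: ee_2010, 2014: ee_2014, 2019: ee_2019})
```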
From the perspective of the distribution position, the EE kernel density curves for 2005, 2010, and 2019 show a "double peak", with a "one main and two secondary peaks" feature in 2014, where the main peak is high and to the left and the secondary peak is low and to the right (the secondary peak in 2014 refers to the right one). The part with a lower EE level has a higher kernel density peak, whereas the part with a higher EE level has a lower kernel density. This means that the current level of EE in most cities in China still needs to increase, and only a few cities have a high level of EE. The value of the main peak in each region first increased and then decreased, and its width initially decreased and then increased, indicating that the absolute difference in EE among cities in China first narrowed and then expanded. The small subpeak on the left side of the 2014 kernel density curve may be due to the new energy demonstration city policy introduced in 2014, which led some cities to start expanding their renewable energy infrastructure; the accompanying increase in energy consumption and damage to the ecological environment kept their energy eco-efficiency at a low level, which appears as the small subpeak to the left of the main peak. Vector maps of the average EE before (2005-2013) and after (2014-2019) the implementation of the new energy demonstration city policy were created using ArcGIS. Combined with the breakpoint method, the average value of EE is divided into three grades: low efficiency (below 0.6), medium efficiency (between 0.6 and 0.8), and high efficiency (above 0.8), as shown in Fig. 2.
Since 2005, China's EE has generally shown an upward trend. By 2019 it had increased by 17.85%, but it remained in the transition stage from low to medium efficiency. Comparing average EE before and after the implementation of the policy shows that the number of cities at each efficiency level changed significantly. The proportion of high-efficiency cities increased from 5.93% before the policy to 7.20% afterwards, the proportion of medium-efficiency cities increased from 12.71% to 40.68%, and the proportion of low-efficiency cities decreased from 81.36% to 59.75%. As shown in Fig. 2, the regional distribution of EE levels also changed. Before the policy, average EE was highest in the eastern region (0.5508), followed by the central (0.4887) and western (0.4828) regions. EE in China increased gradually from the western inland toward the eastern coastal cities, and medium- and high-efficiency cities were mainly concentrated in the eastern region. After the policy was implemented, average EE increased in all cities. The eastern region still had the highest average (0.5889), followed by the central region (0.5619), with the lowest value in the western region (0.5606). Medium- and high-efficiency cities have been spreading toward the western and central regions, and the spatial distribution of EE has gradually become more balanced.
In general, medium- and high-efficiency cities have shifted from regional concentration to a more balanced distribution, spreading from the eastern coastal areas toward the central and western regions. The western and central regions can benefit from their distinctive natural endowments, such as natural gas, solar energy, and clean resources such as hydroelectricity, by actively cultivating green industries and promoting a win-win relationship between industry and ecology.
Regression results and analysis
Table 2 lists the estimated results for Eq. (1). Models (1) to (4) in Table 2 introduce policy orientation, fiscal expenditure, industrial structure, and the control variables in sequence. The Hausman test indicated that a fixed-effects model should be used. The goodness of fit in Table 2 ranges from 0.6694 to 0.7630, indicating that the selected explanatory variables are key variables affecting EE. The fit is best for the regression that includes both the core explanatory variables and the control variables, indicating that the estimates of Model (4) best reflect the effect of government intervention and industrial structure on EE.
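A minimal sketch of how such a two-way fixed-effects DID regression could be estimated in Python is shown below. The variable names (lnEE, du_dt, lnFIS, lnIND, and the logged controls) and the clustered standard errors are assumptions for illustration; the source's exact Eq. (1) specification and software are not stated here.

```python
import statsmodels.formula.api as smf

# df is assumed to be the city-year panel with logged variables and the
# policy interaction du_dt = du * dt constructed beforehand.
model = smf.ols(
    "lnEE ~ du_dt + lnFIS + lnIND + lnPGDP + lnFDI + lnURBAN + lnTI"
    " + C(city) + C(year)",              # city and year fixed effects via dummies
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["city"]})
print(model.summary())
```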
Policy orientation (du × dt) and fiscal expenditure, which capture the effect of policy, have a significant positive impact on EE; both pass the 1% significance test, indicating that government intervention through policy orientation and fiscal expenditure affects EE. Specifically, policy orientation, fiscal expenditure and industrial restructuring significantly increased urban energy eco-efficiency by about 2.52%, 4.91% and 0.92%, respectively. It can be inferred that policy orientation helps improve and supplement the infrastructure and industrial environment on which the development of the new energy industry depends. Policy orientation also tends to pool innovative resources and capital, and the associated learning and sharing effects favour the upgrading and innovation of new energy technologies. Moreover, an optimal combination of conventional, renewable, and new integrated energy utilisation models can increase energy efficiency and reduce pollutant emissions, which improves urban EE. Additionally, fiscal expenditure has a "Keynesian multiplier" effect that promotes economic growth. Local governments have emphasised green energy and environmental protection in their accountability and performance appraisals; this promotes the construction of new energy projects by increasing financial investment in and support for energy conservation and environmental protection industries, actively fostering their healthy and rapid development and effectively reducing pollutant emissions.
The regression coefficient of the industrial structure is significantly positive, indicating that EE can be improved by adjusting the industrial structure. Coordinating the balanced development of industries improves the allocation efficiency of resource elements, reduces the total amount of pollutants discharged during production, and supports the healthy development of the urban ecosystem and industrial economy. Furthermore, technological innovation, the level of economic development, and foreign direct investment effectively promote EE, indicating that in the context of industrial restructuring and government intervention, EE can be improved by attracting FDI, fostering technological innovation, and promoting economic development. Urbanisation, however, has hindered EE improvement. This can be attributed to the growing urban population, the energy consumed in production and daily life, and the increase in pollution emissions, all of which hinder the improvement of urban EE. Moreover, urbanisation has led to the construction and use of large amounts of housing, entertainment, education, medical care, and transportation infrastructure, which has sharply increased the demand for energy and other resources. Therefore, in planning urban modernisation, the government should emphasise quality over quantity.
Parallel trend test
A key identifying assumption of the DID method is the parallel-trend assumption 35. Therefore, a parallel trend test was conducted following Beck et al. 36. Specifically, the regression was re-estimated after replacing the dummy variable dt in Eq. (1) with a year-specific dummy variable D_T (with 2013 as the base period), as shown in Eq. (5). The parallel-trend test results are shown in Fig. 3. The regression coefficients for the years before the policy orientation was implemented in 2014 fluctuate around the zero axis (the 90% confidence interval includes zero). The difference in EE between the treatment and control groups is therefore not apparent before implementation, indicating that the two groups followed a parallel trend prior to the policy. The estimated coefficients pass the 10% significance test from the third year after a demonstration city is established, indicating that the impact of the pilot policy on energy eco-efficiency lags by about three years and then grows, with some fluctuation. A likely explanation is that the improvement of EE relies on industrial restructuring, which requires heavy investment and long cycles; combined with the long investment and construction cycle of renewable energy infrastructure, cities cannot shed their dependence on fossil energy in the short term, and the optimisation of the energy consumption structure takes time, so the effect of the pilot policy on EE appears with a lag.
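The body of Eq. (5) did not survive extraction; a plausible reconstruction of the event-study specification described above, with 2013 as the omitted base period, individual and year fixed effects, and the same controls as Eq. (1) (the exact notation in the source may differ), is:

$$EE_{it}=\alpha_0+\sum_{T\neq 2013}\beta_T\,(du_i\times D_T)+\gamma X_{it}+\mu_i+\lambda_t+\varepsilon_{it} \qquad (5)$$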
Superposition policy inspection
Because many similar or related policies are implemented simultaneously or overlap across regions, there may be an overlaid policy effect. To exclude the impact of other policies in the same period, this study controlled for the effect on EE of the low-carbon city pilot and the carbon trading pilot policies implemented during the sample period. Specifically, the two corresponding policy dummy variables (the interaction of each policy's grouping dummy and its timing dummy) were added to the benchmark regression to examine the causal relationship between policy orientation and EE after controlling for these other policy disturbances. Table 3 presents the corresponding estimates. Models 5 and 6 contain only the policy variables, while Models 7 and 8 add the control variables. According to Table 3, after adding the other policy variables, the estimated coefficient of du × dt remains significantly positive, the same as in the benchmark regression. This shows that, after accounting for these policies, government intervention and industrial structure still significantly affect EE.
PSM-DID test
In this study, technological innovation, economic development, urbanisation level, foreign investment, fiscal expenditure, and industrial structure were used as matching variables, with 1:1 nearest-neighbour matching; the matching results are shown in Fig. 4. After matching, the probability distributions of the propensity scores in the treatment and control groups are essentially consistent, showing no significant systematic error and thus satisfying the assumptions of the PSM model. Regression estimation was then performed as in Eq. (1), and Table 4 reports the PSM-DID estimates.
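A minimal sketch of 1:1 nearest-neighbour propensity score matching of this kind is given below, assuming a covariate matrix X and a treatment indicator for demonstration cities; the function and variable names are hypothetical and the source's matching software is not stated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_match(X, treated):
    """1:1 nearest-neighbour matching on an estimated propensity score."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    treat_idx = np.where(treated == 1)[0]
    ctrl_idx = np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[ctrl_idx].reshape(-1, 1))
    _, pos = nn.kneighbors(ps[treat_idx].reshape(-1, 1))
    matched_controls = ctrl_idx[pos.ravel()]   # one matched control per treated city
    return treat_idx, matched_controls, ps
```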
Models (9) to (12) in Table 4 introduce policy orientation, fiscal expenditure, industrial structure and the control variables in sequence. The regression coefficient remains significant at the 10% level with a positive sign, indicating that the PSM-DID results corroborate the DID regression. Therefore, supporting industrial structure adjustment and government intervention has positive significance for promoting the improvement of EE.
Spatial spill-over effect and heterogeneity analysis
Spatial spill-over effect
The global and local Moran's I tests showed that the explained variable exhibits positive spatial autocorrelation, indicating a spatial layout of high-high and low-low clustering. Lagrange multiplier (LM) and robust Lagrange multiplier (R-LM) tests were performed to obtain a better-fitting specification. The LM tests significantly reject the null hypothesis, but the R-LM (error) test fails 37,38, and LM (lag) > LM (error) and R-LM (lag) > R-LM (error), indicating that the spatial lag model (SLM) should be selected 39. Furthermore, the Hausman test supports a fixed-effects specification. Based on the R2 and log-likelihood values of the SDM, SAR, and SEM models in Table 5, combined with the Hausman test results, the SLM with individual fixed effects is the most reasonable choice for empirical testing.
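For reference, a generic form of the spatial lag model with individual fixed effects consistent with the description above (the exact set of regressors in the source may differ) can be written as

$$EE_{it}=\rho\sum_{j=1}^{N}w_{ij}\,EE_{jt}+\beta_1\,(du_i\times dt_t)+\beta_2\,FIS_{it}+\beta_3\,IND_{it}+\gamma X_{it}+\mu_i+\varepsilon_{it},$$

where $w_{ij}$ are the elements of the spatial weight matrix $W$. The direct and indirect (spill-over) effects discussed below then follow from the partial-derivative matrix $(I_N-\rho W)^{-1}\beta_k$: the average of its diagonal elements gives the direct effect and the average row sum of its off-diagonal elements gives the indirect effect.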
According to the SLM results in Table 5, the regression coefficients of policy orientation, fiscal expenditure, and industrial structure on urban EE are all significantly positive, showing that industrial structure and government intervention significantly improve EE. Moreover, the spatial autoregressive coefficient ρ is significantly positive, indicating a significant endogenous spatial interaction in EE among cities: improvements in one region can drive improvements in surrounding areas and form a high-high agglomeration pattern in space. Specifically, the industrial structure and government intervention in a region directly affect local EE, indirectly affect the EE of neighbouring areas, and produce feedback effects that ultimately further shape the actual impact of industrial structure and government intervention on local EE. LeSage and Pace 40 proposed that the influence of independent variables on the dependent variable can be decomposed into direct, indirect, and total effects. We conducted such a decomposition based on the spatial lag model with individual fixed effects 37, as shown in Table 6.
Table 5. Spatial spill-over effect regression results. The values in parentheses refer to the standard errors of the regression coefficients; ***, **, and * refer to the significance levels of 1%, 5%, and 10%, respectively.
According to the decomposition results of the spatial lag model in Table 6, the direct effects of policy orientation, fiscal expenditure, and industrial structure on urban EE are positive and pass the significance test, confirming that government intervention and industrial structure promote EE within the region. Moreover, the spatial spill-over effects of government intervention and industrial structure are significantly positive, indicating that fiscal expenditure, policy orientation, and industrial structure improve the EE of the region as well as that of neighbouring regions. The construction of demonstration cities is based on the market mechanism and guided by government intervention, which encourages enterprises to pursue technological innovation, rationally allocate resource elements across industries, exert technological innovation and demonstration effects, and promote the rational use and mutual flow of high-quality resource elements among regions. Furthermore, through knowledge diffusion and technological spill-over, a strong demonstration effect and spatial radiation effect on adjacent areas can foster a "learning effect" there, reducing the difficulty of improving EE.
Analysis of regional heterogeneity
The impact of government intervention and industrial structure on EE may differ across cities in different geographical locations. We therefore divided the sample into eastern, central, and western regions and examined the effect of fiscal expenditure, policy orientation, and industrial structure on urban EE under locational heterogeneity. Table 7 presents the spatial-lag model estimates. Over the study period, the policy orientation coefficients of the demonstration cities are all positive, confirming that policy orientation is conducive to improving regional EE. Policy orientation in western and eastern cities passes the significance test, showing direct effects, and its spatial spill-over effects account for 36.23% and 39.73% of the respective total effects, whereas the driving effect of policy orientation in central cities is not significant. Fiscal expenditure significantly affects EE in eastern cities but not in the western and central regions. Additionally, although industrial structure substantially promotes EE in eastern cities, it has no significant effect in western cities and an inhibitory effect on EE in central cities.
This regional difference may arise because the central region has a high endowment of coal resources and an incomplete industrial structure. The extensive growth model gives the secondary industry a substantial share, with many high-energy-consuming and high-polluting enterprises. Fiscal expenditure and policy orientation therefore find it difficult to shift the energy structure away from traditional fossil fuels in the short term, which dampens their impact on EE. Additionally, the regional economic development model has not fully shifted from "resource factor promotion" to "green development driving": the local technology base is weak, R&D inputs have long output cycles, and a long-term mechanism for supporting R&D is not yet evident. Factor productivity and technological levels therefore do not rise as planned, which can even hinder the improvement of EE.
By relying on solar energy, wind energy, and natural gas resources, the western region actively supports green industry and promotes EE through the spatial transfer of capital, technology, and advanced concepts brought about by policy guidance. The policy benefits of the new energy demonstration city pilots have attracted high-tech projects transferred from the central and eastern regions, and the technological spill-over from these projects has improved management capabilities and technical levels in the western region, thereby improving its EE. The east is a relatively developed region with a reasonable industrial structure; it can make good use of its developed economy and strong industrial base by actively responding to policy orientation, consciously pursuing green technological innovation, accelerating scientific and technological research and development, vigorously developing clean energy, and improving green production capacity.
Conclusions and recommendations
Studying the internal relationship between industrial structure, government intervention, and EE in China has important implications for developing new energy, reducing pollutant emissions, and achieving sustainable development. In this study, we selected panel data for 236 Chinese cities from 2005 to 2019, constructed an NDDF model to calculate each city's EE, and established a DID model to empirically test the effects of policy orientation, fiscal expenditure, and industrial structure on EE. The spatial effects and regional heterogeneity of government intervention and industrial structure were analysed using a spatial econometric model. The results show that China's EE is at a low level but increased over the study period, with a spatial distribution of low-low and high-high clusters. Policy orientation, fiscal expenditure and industrial restructuring significantly increased urban energy eco-efficiency by about 2.52%, 4.91% and 0.92%, respectively, and industrial structure and government intervention have a positive spatial spill-over effect on EE improvement. This spill-over effect is transmitted between regions through "learning" and "demonstration" effects, thus promoting the improvement of regional EE. The spill-over effect is also spatially heterogeneous, with the driving impact of policy orientation being stronger for eastern and western cities. Foreign direct investment, economic development, and technological innovation help improve EE, whereas urbanisation inhibits it.
Based on these findings, the following policy recommendations are proposed to advance China's EE and promote the coupled development of energy, economy, and environment.
(1) Strengthen policy guidance and financial support for high-tech, environmental protection and energy-saving industries; actively exploit the effects of government intervention and industrial structure to cultivate new sources of economic growth; and promote EE through foreign direct investment and technological innovation. The government should use policy guidance and financial support to encourage the development of strategic emerging industries in a coordinated way, channel resource elements toward low-energy-consumption, low-pollution, high value-added environmental protection, energy conservation and high-tech sectors, reasonably promote the optimisation and upgrading of the industrial structure, and support the sustainable development of the green economy. It should also advance the coordinated development of the business environment, technological innovation, and sustainable urbanisation: improving the business environment and the compensation mechanism for technological innovation, strengthening public research and development institutions and experimental platforms, making more efficient and standardised use of funds from various sources such as foreign capital, and promoting the innovation and application of green technology in urbanisation to achieve high-quality, coordinated development of urbanisation and the environment.
(2) Actively promote the construction of new energy demonstration cities, strengthen the policy orientation effect of government intervention, and provide stronger policy guarantees for the country's overall new energy development. On the one hand, give full play to the policy-oriented resource allocation function: continue to promote the construction of new energy demonstration cities, strengthen their spatial spill-over effects, expand the reach and influence of new energy city development, and achieve high-quality development of new energy. On the other hand, continuously optimise the demonstration policies, strengthen policy systems such as intellectual property protection, ensure reasonable returns from the innovation achievements of innovating entities, and lay a policy foundation for building strong new energy cities and improving the innovation environment.
(3) Optimise the industrial structure according to local conditions, exploit the spatial spill-over effect of high-energy-efficiency regions, strengthen technical exchange and resource sharing, accelerate the coordinated development of EE across regions, and jointly improve EE. First, make good use of the radiation effect of EE, strengthen inter-regional technical exchange and resource sharing, and promote benign interaction and coordinated development between cities. Second, because the effects of government intervention and industrial structure on EE are regionally heterogeneous, cities should give full play to their resource endowment advantages and implement demonstration city construction plans suited to local conditions.
This study has the following limitations. First, it focuses on China, so its findings may be more relevant for developing countries and less applicable to developed ones. Second, it uses city-level macro data and does not consider the enterprise-level micro perspective, which imposes certain limitations.
Figure 1 .
Figure 1.Kernel density curve of energy eco-efficiency (EE) in main years.
Figure 2 .
Figure 2. Average distribution map of energy eco-efficiency (EE).(a) Average distribution of EE before the policy, (b) Average distribution of EE after the policy.NEDC refers to the new energy demonstration city.Note: The map is produced based on the standard map with review number GS(2019)1822 downloaded from the standard map service website of the State Administration of Surveying, Mapping and Geographic Information of China, with no modifications to the base map.
Figure 3 .
Figure 3.The results of the parallel trend test.
Figure 4 .
Figure 4. Kernel density distribution comparison of propensity score values before and after kernel matching.
Table 1 .
Descriptive statistics for variables.
Table 2 .
Difference-in-difference model (DID) regression results. The values in parentheses refer to the standard errors of the regression coefficients; ***, **, and * refer to the significance levels of 1%, 5%, and 10%, respectively.
Table 3 .
Superposition policy test.The values in parentheses refer to the standard error values of the regression coefficients; ***, **, and * refer to the significance levels of 1%, 5%, and 10%, respectively.
Table 4 .
PSM -DID regression.The values in parentheses refer to the standard error values of the regression coefficients; ***, **, and * refer to the significance levels of 1%, 5%, and 10%, respectively.
Table 6 .
Decomposition of spatial effects. The values in parentheses refer to the standard errors of the regression coefficients; ***, **, and * refer to the significance levels of 1%, 5%, and 10%, respectively.
Table 7 .
Regional heterogeneity analysis regression results.The values in parentheses refer to the standard error values of the regression coefficients; ***, **, and * refer to the significance levels of 1%, 5%, and 10%, respectively. | 8,534.4 | 2023-11-09T00:00:00.000 | [
"Economics",
"Environmental Science"
] |
MANEUVERS OF MULTI PERSPECTIVE MEDIA RETRIEVAL
Recently, machine learning and data mining have found successful application in multi-view representation, since the performance of machine learning methods depends heavily on the expressive power of the data representation, and multi-view representation learning has become an important and widely used topic. It is an emerging direction in machine learning that considers learning from multiple views to improve generalization performance. Multi-view learning is also known as data fusion or data integration from different feature sets. Multi-view representation learning involves two major tasks: multi-view representation alignment and multi-view representation fusion. Multi-view representation fusion has been widely applied in neural network-based sequence-to-sequence models, while multi-view representation alignment aligns the representations learned from multiple different views. In this project we apply canonical correlation analysis to retrieve multi-view data. Inspired by the success of deep learning, deep multi-view representation learning has attracted much attention in cross-media retrieval because of its ability to learn far more expressive cross-view representations, but its use also raises several challenges, including low-quality input, inappropriate objectives for multi-view embedding modelling, scalable processing requirements, and the presence of view disagreement. Canonical correlation analysis is used to align multiple forms of data in a common space, from which the data can be easily retrieved; in this way, multiple auxiliary sources such as item and user content information can usually be exploited. It is natural to use multi-view representation learning to encode these different sources so that generalization performance can be improved.
INTRODUCTION
Multi-view representation learning has become an important and widely used topic. Canonical correlation analysis (CCA) and its kernel extensions are the techniques that characterise the early studies of multi-view representation learning. While CCA and its kernel versions can successfully model the relationship between two or more sets of variables, multi-view representation learning more generally learns representations by relating data from different views in order to boost learning performance. A learning scenario that fails to match the statistical properties of multi-view data may yield representations that reduce learning performance [2]. Multi-view representation alignment methods seek to align the representations learned from different views; representative examples can be examined from two aspects, correlation-based alignment and distance- and similarity-based alignment. From the perspective of correlation-based multi-view alignment, the main methods are canonical correlation analysis (CCA), sparse CCA, kernel CCA, and deep CCA. Multi-view representation fusion methods aim to integrate multi-view inputs into a single compact representation, and representative examples can likewise be reviewed from both perspectives. Canonical correlation analysis and its kernel extensions are representative techniques in early research on multi-view representation learning. A range of theories and procedures were later introduced to analyse their theoretical properties, explain their success, and extend them to improve generalization performance on specific tasks. While CCA and its kernel variants can successfully model the relationship between two or more sets of variables, they are limited in capturing high-level associations among multi-view data, which motivated approaches inspired by the success of deep neural networks [2]. In 2016, a workshop on multi-view representation learning was held in conjunction with the 33rd International Conference on Machine Learning to promote a better understanding of the various techniques and of the challenges in specific applications. Since then, research activity in this direction has grown and a large number of multi-view representation learning algorithms have been proposed, based on the essential theory of CCA and on developments in deep neural networks. For instance, multi-view representation learning has progressed from traditional methods to multi-modal topic learning [3]. One line of work studies multi-modal representations from the perspective of encoding the explicit/implicit relevance relationships among the vertices of the click graph, in which the vertices are images or text queries and the edges indicate clicks between an image and a query [4].
Proposed System
The proposed system uses visual similarity (low-level features such as colour, shape, and texture) to retrieve images. The contents of the image itself, rather than text, are used to perform the search, so these systems rely completely on image content and no keywords are required. The image is analysed, and features are extracted and stored to retrieve similar images. The system creates a unique and compact digital signature or fingerprint of the image and matches it with other indexed images. Several important applications of multi-view representation learning are discussed. A great number of multi-view embedding methods have been proposed to cope with the associated challenges, including low-quality input, inappropriate objectives for multi-view embedding modelling, scalable processing requirements, and the presence of view disagreement. Since the performance of machine learning methods depends heavily on the expressive power of the data representation, multi-view representation learning has become a very promising topic with wide applicability.
Working Data Alignment
Given input datasets X and Y, data alignment is expressed as follows: each view has a corresponding embedding function that transforms the original space into an aligned space under certain constraints. Multi-modal documents start in their original spaces, are aligned using canonical correlation analysis, and the related documents are then mapped into a common semantic space, yielding the aligned data. Construction designs, such as interiors, ceilings, and design properties, should first be aligned using canonical correlation analysis, which is very useful for retrieving the related information.
Block diagram of media retrieval
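To make the alignment step concrete, the following is a minimal sketch using scikit-learn's CCA; the feature matrices, dimensions, and the cosine-similarity retrieval helper are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# X and Y are assumed to be feature matrices for two views of the same items,
# e.g. image descriptors and text (tag) descriptors, one row per document.
X = np.random.rand(100, 50)
Y = np.random.rand(100, 30)

cca = CCA(n_components=10)
cca.fit(X, Y)
X_c, Y_c = cca.transform(X, Y)   # projections into the shared (aligned) space

def cosine_sim(query, items):
    """Cosine similarity of one aligned query against all aligned items."""
    return (items @ query) / (np.linalg.norm(items, axis=1) * np.linalg.norm(query) + 1e-12)

scores = cosine_sim(X_c[0], Y_c)        # rank text items against an image query
print(np.argsort(scores)[::-1][:5])     # indices of the five best matches
```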
Data Fusion
All the aligned input datasets X and Y are combined using fusion, expressed as follows: data from multiple views are integrated into a single representation h, which exploits the complementary knowledge contained in the multiple views to comprehensively represent the data [1]. The converted multi-modal documents are integrated into a single element using fusion (fusion here meaning the combination of smaller components into a larger whole). Designs are combined into a single entity, so that information can be retrieved accurately in multiple views while reducing retrieval time.
Multi model Retrieval
After the alignment and fusion of the given input datasets, the data can be retrieved in multiple projected views: a single input retrieves output in multiple ways, improving both accuracy and retrieval time. For a single input, the related output is presented in multiple views, and the input may be text, an image, audio, or video. One major difficulty is blurred image input; in that case the image can be reconstructed by splitting it into seven-segment pixel groups, and information that appears repeatedly across the segments is taken as the original information, which is then used to retrieve the related data in multiple views. When the input is an image, it is first analysed and the information is then searched for and retrieved from the database, or from a large collection of information, in a multi-perspective view.
Web page created for user and admin
Search page created for user
Advantages of the Proposed System
- An effective search tool.
- Not overly complex for physically challenged people.
- Efficient for managing a particular system.
- Retrieves information in different dimensions.
- Identification of information remains under control.
CONCLUSION
Multi-view representation learning is concerned with learning representations of multi-view data that facilitate extracting readily useful information, and canonical correlation analysis has become very popular for its ability to constructively model the relationship between two or more sets of variables. Multi-view learning is also referred to as data fusion or data integration from multiple feature sets. This survey aims to provide an insightful picture of the theoretical foundations and current developments in the field of multi-view representation learning and to help identify the most appropriate methodologies for particular applications. A future enhancement of our project would be to handle an exception that occurs with non-linearity: when a blurred image is given as input, the system does not obtain accurate results.
Future work: 1. We plan to parse objects, in a language-like manner, into the framework process. 2. In the coming days this concept could be implemented without the support of a framework, reducing processing time and business logic. 3. The configuration could be made mappable, so that the process is easier to maintain across multiple servers.
Fig 1
Fig 1 Block diagram of media retrieval
Output: Fig 2 and Fig 3
Fig 2 Web page created for user and admin | 2,728.6 | 2020-09-17T00:00:00.000 | [
"Computer Science"
] |
Flood Prediction and Uncertainty Estimation Using Deep Learning
Floods are complex phenomena that are difficult to predict because of their non-linear and dynamic nature. Therefore, flood prediction has been a key research topic in the field of hydrology. Various researchers have approached this problem using different techniques ranging from physical models to image processing, but the accuracy and time steps are not sufficient for all applications. This study explores deep learning techniques for predicting gauge height and evaluating the associated uncertainty. Gauge height data for the Meramec River in Valley Park, Missouri was used to develop and validate the model. It was found that the deep learning model was more accurate than the physical and statistical models currently in use while providing information in 15-minute increments rather than six-hour increments. It was also found that the use of data sub-selection for regularization in deep learning is preferred to dropout. These results make it possible to provide more accurate and timely flood prediction for a wide variety of applications, including transportation systems.
Introduction
Floods frequently cause serious damage to various infrastructure and socioeconomic system elements, resulting in significant economic losses, both direct and indirect [1]. River flow has a complex behavior that is dependent on soil properties, land usage, climate, river basin, snowfall, and other geophysical elements [2]. It is crucial to predict floods accurately and develop the resultant flood mapping to prepare for the emergency response [3]. Currently, flood prediction is a prominent research topic in natural hazard prediction and risk management [4]. The most common types of prediction models are based on physical, statistical, and computational intelligence/deep learning algorithms.
A physical model consists of mathematical equations used to describe the physical behavior and interactions of the multiple components involved in a process. Various physical models have been developed for predicting rainfall [5] and surface water flow [6][7][8]. Further, a comprehensive physical model for coastal flooding using parallel computing was developed [9]. These models are data intensive and difficult to generalize to complex problems. Because of the nature of the flood prediction problem and the assumptions involved in physical models, they sometimes fail to make accurate predictions [10]. However, the ability of physical models to predict various hydrological events has improved through advanced simulations [11][12][13] and hybrid models [14]. Frameworks such as hybridizing Bayesian and variational data, and a priori generation of computational grids, have been shown to improve real-time estimation and forecasting [15,16].
Statistical models leverage historical data to identify underlying patterns for predicting future states. A wide range of algorithms have been used for flood modeling, including multiple linear regression (MLR), autoregressive integrated moving average (ARIMA), and a hybrid least squares support vector machine regression (LS-SVM) [17][18][19]. However, these models do not scale well and, given the increase in the size and complexity of the data available in recent years, are difficult to use. Statistical models also need many years of historical data to capture the seasonal variations needed to make accurate long-term predictions [20].
Computational intelligence techniques, such as deep learning (DL), can overcome the difficulties with scale and complexity. When applied to machine learning, these techniques can handle complexity and non-linearity without needing to understand the underlying processes. Compared to physical models, computational intelligence models are faster, require fewer computational resources, and have better performance [21]. Recently, computation intelligence models have been shown to outperform statistical and physical methods for flood modeling and prediction [22,23]. Classification and time series prediction are promising flood modeling techniques within machine learning, but have not been explored.
Some classification techniques used for flood forecasting are artificial neural networks (ANNs) [24] and fuzzy-neuro systems [25]. Classification of floods with these computational intelligence algorithms involves manually extracting features from time-series data, whereas the numerous layers in deep learning algorithms make it possible to identify patterns and trends in non-linear data without preprocessing. Long short-term memory networks (LSTMs) are a popular technique for modeling sequential data as the architecture allows the capture of long-term temporal dynamics to increase performance. LSTM models have been used for the prediction of various hydrological events, including precipitation [26] and surface runoff [27]. LSTMs have shown better performance when compared to gated recurrent neural networks and wavelet neural networks for multi-step ahead time-series prediction of sewage outflow [28]. It was also observed that LSTMs can capture long-term dependencies between inputs and outputs for rainfall runoff prediction [29].
Reliable and accurate time series prediction can help in effectively planning for disaster management and emergency relief. The major challenge for accuracy is the uncertainty that arises from a wide range of factors that affect the process being modeled. LSTM networks have been proven to capture nonlinear feature interactions, which can be useful for predicting complex processes and events [30,31]. Bayesian neural networks have been used to examine uncertainty in computational intelligence prediction, using a distribution for weights instead of point estimates. This is done by initially assigning a prior distribution to the model parameters and then calculating the posterior distribution after running the model. The number of parameters in a deep learning neural network and the associated non-linearity makes it difficult to estimate the posterior distribution [32]. A few different approaches for evaluating the inference for Bayesian neural networks were proposed including stochastic search [33], stochastic gradient variational Bayes (SGVB) [34], probabilistic back-propagation [35], the use of dropout [36], and α-divergence optimization [37,38]. The objective is to introduce an error in the model which when repeated several times can predict an interval that can capture most of the possibilities for the future. Representing this uncertainty is important when dealing with flood events because of the high level of stochasticity in the elements of the hydrological ecosystem.
The Meramec River at the intersection of Route 141 and Interstate I-44 at Valley Park, St. Louis County, MO was selected for this research. This location experiences heavy traffic flow [39] and has been impacted by flood events in recent years. The gauge height predictions at this location are developed by the advanced hydrologic prediction service (AHPS), managed by the National Weather Service (NWS), and are provided on the U.S. Geological Survey (USGS) website. These predictions are based on a physical model, developed from digital elevation maps, weather, and other geophysical properties of the given region. These predictions are 6 hours apart and are not useful for transportation networks. Further, physical models cannot be generalized and have to be developed from scratch for each new region. Therefore, there is an opportunity to develop a model with improved prediction time period, accuracy, and generalizability.
The objective of the study is to develop a methodology to predict gauge height and the uncertainty associated with the prediction. The proposed model is data driven and uses historical gauge height data from 15 May 2016 5 PM onward until 1 September 2019 4 PM for the Meramec River in Valley Park, MO. The paper also discusses the future work of incorporating gauge height data into the Flood Inundation Mapper (FIM) tool developed by the United States Geological Survey (USGS), which can then generate future flood profiles for the given region.
LSTM Network
A neural network is an artificial intelligence technique based on the functioning of the human brain. The basic unit of a neural network is an "artificial neuron." For each training sample, the neural network predicts an outcome and then adjusts the weights based on the error. Once trained, this network can be used for prediction on a new data sample (x*). Thus, a neural network represents a function that maps the input variables to the outputs. The predictions from a neural network for a new data sample can be represented as below.
y* = f^w(x*)    (1)
One shortcoming of traditional neural networks is that they cannot retain temporal information. To account for this shortcoming, recurrent neural networks (RNNs) were introduced. These networks contain loops that help retain information from previous time steps.
A simple representation of a recurrent neural network can be seen in Figure 1. At a given time i, the network makes a prediction (y_i) based on the input data (X_i) in a loop, and information is passed from previous steps to the current step; the information from the first time step is passed to the next time step and so on. This structure can make the algorithm effective for time series forecasting. The input vector (X_0) consists of inputs x_0, x_1, ..., x_m, where m is known as the lookback; in other words, the RNN looks at the past m data samples to make a prediction for the current time step. A shortcoming of this approach is its inability to retain information over the long term: as the number of steps increases, the information diminishes.
Gauge height prediction is a time-series forecasting problem that uses data for the past (n − 1) time steps to predict the gauge height at the nth step. Based on the literature review, we observed an increasing affinity toward deep learning techniques for complicated problems, especially LSTM networks for time series forecasting.
Deep learning is an advanced form of neural network that uses more layers and layer types to better model complex systems and interactions. Traditional neural networks cannot retain temporal information, so recurrent neural networks were introduced, in which information from previous time steps is used. LSTMs are a deep learning version of recurrent neural networks that are capable of retaining longer-term information. LSTM cells remove or add information through gates, along with vector addition and multiplication operations that change the data.
The input vector for the model is defined as X = {x_1, ..., x_n} and the output vector as Y = {y_1, ..., y_n}. The gates consist of a sigmoid neural network layer and a point-wise multiplication operator. A value of one lets all of the data through, while a value of zero does not allow any of the data to be used. The first gate layer (the "forget" gate layer, shown in yellow in Figure 2) takes the output from the previous step (y_{t−1}) and the current input (x_t) and outputs a value between 0 and 1, indicating how much information is to be passed on. The output of the "forget" gate is represented by f_t in Equation (2), where the matrices U and W contain the weights and recurrent connections, respectively.
The next step is identifying the information that needs to be stored. A sigmoid layer is once again used to decide which values to update, and a tanh layer then generates the new candidate values to be added to the cell state. The corresponding equations for the sigmoid and tanh layers are shown in Equations (3) and (4).
The key component of an LSTM cell is the line at the top, known as the cell state (C_t), which has only minor interactions with the rest of the components. The old state (C_{t−1}) is multiplied by f_t to allow the corresponding information to be "forgotten." In the next step, the product of i_t and Ĉ_t is added to provide new information to the cell state, as shown in Equation (5).
The final layer in an LSTM cell is the output layer, which decides the forecast for the current time step. A sigmoid layer and a tanh layer are used to generate the output (y_t), as shown in Equations (6) and (7).
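The bodies of Equations (2)-(7) did not survive extraction; the standard LSTM gate equations consistent with the description above (U: input weights, W: recurrent weights, b: biases, ⊙: element-wise product) are, as a reconstruction,

$$\begin{aligned}
f_t &= \sigma(U_f x_t + W_f y_{t-1} + b_f) &\text{(2)}\\
i_t &= \sigma(U_i x_t + W_i y_{t-1} + b_i) &\text{(3)}\\
\hat{C}_t &= \tanh(U_c x_t + W_c y_{t-1} + b_c) &\text{(4)}\\
C_t &= f_t \odot C_{t-1} + i_t \odot \hat{C}_t &\text{(5)}\\
o_t &= \sigma(U_o x_t + W_o y_{t-1} + b_o) &\text{(6)}\\
y_t &= o_t \odot \tanh(C_t) &\text{(7)}
\end{aligned}$$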
Performance Metrics
Mean absolute error (MAE) and root-mean-square error (RMSE) are the statistical measures used to quantify the capabilities of the prediction models. MAE is the average of the absolute errors between the individual predictions (ŷ_i) and the observed values (y_i), and RMSE is the square root of the mean of the squared errors. For both metrics, lower values indicate a better model fit. With RMSE, the errors are squared before averaging, which gives larger errors more weight; in situations where large differences matter most, RMSE can be the better evaluation measure, otherwise MAE is more appropriate.
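The corresponding expressions (likely the equations labelled (8) and (9) in the original) are, in their standard form,

$$MAE=\frac{1}{n}\sum_{i=1}^{n}\left|y_i-\hat{y}_i\right|, \qquad RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2}.$$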
Uncertainty Modeling
For inputs X = {x_1, ..., x_n} and outputs Y = {y_1, ..., y_n}, the function developed by the forecasting algorithm is given by Y = f^ω(X), where ω represents the parameters of the algorithm; in this case, ω represents the weights and biases of the LSTM network. With Bayesian modeling, a prior distribution over the model parameters p(ω) is assumed, and the corresponding likelihood is p(y | x, ω). The posterior distribution is then evaluated after observing the data using Bayes' theorem, as given in Equation (10).
The most probable output parameters given the input data are obtained from Equation (10). The prediction interval for a new input x* can then be calculated by integrating the output y* over all values of ω [36].
This integration is known as marginal likelihood estimation. This can be performed on simpler forecasting models, but as the number of parameters increases, it becomes computationally expensive. In such situations, an effective approximation technique is required. A probabilistic interpretation of deep learning models can be developed by inferring the distribution over the model's weights. Variational inference is the approximation technique used to make the posterior calculation tractable. Dropout is one of the most popular regularization techniques used for approximation of Bayesian inference [36].
The uncertainty in Bayesian neural networks comes from the variation in model parameters. With dropout and other regularization techniques, noise is applied in the input or feature space, either adding noise directly to the inputs or dropping out values in the network layers. This noise can be transformed from feature space to parameter space.
To estimate the uncertainty in prediction for input X, the forecasting process with variation is repeated several times (T). The average of these predictions is used to calculate the uncertainty. The posterior mean (m) and uncertainty (c) are given by the Equations (12) and (13), where f i (x) represents the network in each iteration and "p" represents the prior distribution of the network parameters.
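A minimal sketch of the repeated-prediction procedure just described: the stochastic forward pass is run T times and the sample mean and variance of the T predictions summarize the posterior. Here `stochastic_predict` is an assumed callable (e.g., a Keras model invoked with training=True so that dropout or noise stays active at test time), and the sample variance is used as a simplified stand-in for the uncertainty term of Equation (13), which may additionally include a prior/noise correction.

```python
import numpy as np

def predictive_mean_and_uncertainty(stochastic_predict, x, T=200):
    # run the stochastic forward pass T times and stack the predictions
    preds = np.stack([stochastic_predict(x) for _ in range(T)], axis=0)
    mean = preds.mean(axis=0)   # posterior mean, analogous to "m" in Eq. (12)
    var = preds.var(axis=0)     # spread of the samples, standing in for "c" in Eq. (13)
    return mean, var
```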
Stochastic regularization techniques are used to approximate Bayesian inference. In this research, a technique based on random data sub-sequencing is introduced for uncertainty estimation. The proposed methodology has the advantage of not using dropout or introducing noise to the inputs. For each iteration, a subset (x_m, . . . , x_n) of the original training data (x_1, . . . , x_n) is selected. The value of m, i.e., the starting point of the subset, is randomly drawn from a set of values; a larger range of these values produces greater variation, allowing the width of the uncertainty estimates to be controlled. Finally, the three techniques (adding input noise, dropout, and data sub-selection) are compared to identify the better model for this problem.
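A hedged sketch of the data sub-sequencing idea described above: for each of T repetitions a random starting index m is drawn and the model is refit on the sub-sequence x_m..x_n. The helper `fit_and_predict`, the query input, and the default range bounds are assumptions for illustration, not the paper's code.

```python
import numpy as np

def subsequence_predictions(fit_and_predict, X_train, x_query, T=200,
                            low=None, high=None):
    S = len(X_train)
    low = S // 1000 if low is None else low   # e.g. the (S/1000, S/2) range from the text
    high = S // 2 if high is None else high
    rng = np.random.default_rng()
    preds = []
    for _ in range(T):
        m = int(rng.integers(low, high))      # random starting point of the subset
        preds.append(fit_and_predict(X_train[m:], x_query))
    return np.asarray(preds)                  # summarize with mean / percentiles
```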
Quality of Uncertainty
Uncertainty area, empirical coverage, and the mean performance metrics are used to compare the different uncertainty estimation techniques. Uncertainty area is defined as the total area covered by the 90% uncertainty interval, whereas empirical coverage indicates how many of the observed values are captured within the uncertainty interval.
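The two quality measures can be computed as sketched below, assuming the prediction interval is represented by arrays of lower and upper bounds at each forecast step; this is an illustrative implementation, not the paper's code.

```python
import numpy as np

def uncertainty_area(lower, upper):
    # total area enclosed by the prediction interval over the forecast horizon
    return float(np.sum(np.asarray(upper) - np.asarray(lower)))

def empirical_coverage(y_true, lower, upper):
    # fraction of observed values that fall inside the interval
    y = np.asarray(y_true)
    return float(np.mean((y >= np.asarray(lower)) & (y <= np.asarray(upper))))
```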
Data
The historical gauge height data used to train the LSTM was obtained from the USGS.
Results
In this research, three evaluations are presented: (1) development and comparison of statistical and deep learning models for gauge height prediction, (2) evaluation of the effect of dropout on LSTM performance, and (3) comparison of different uncertainty estimation techniques. To ensure a relevant comparison, the validation of the models is also presented.
Comparison of Statistical and Deep Learning for Gauge Height Prediction
ARIMA and an LSTM were compared to identify the best methodology for flood prediction. The 15-minute interval gauge height data at the considered location was available starting 19 May 2016. Therefore, for the model to capture the temporal dynamics and patterns, 80% of the gauge height data was used for training. ARIMA is a regression model, and all regression models are based on the assumption that the values in the data set are independent of each other. When using regression for time series prediction, it is important to make sure that the data are stationary, meaning that statistical properties such as the variance do not change with time. In ARIMA, "AR" refers to the autoregressive component, which is the lag of the stationary series, the moving average ("MA") captures the lags of the forecast errors, and "I" represents the order of differencing needed to make the series stationary. The Dickey-Fuller test was used to verify that the time-series data were stationary. The resulting p-value for the gauge height data was lower than 0.01 and the test statistic was −8.527531, allowing the null hypothesis to be rejected and the data to be treated as stationary.
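The stationarity check described above can be reproduced along the following lines with the augmented Dickey-Fuller test from "statsmodels"; the file and column names are illustrative assumptions about how the USGS gauge-height series might be loaded.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

gauge = pd.read_csv("usgs_gauge_height.csv")["gauge_height"]  # hypothetical file/column
adf_stat, p_value, *_ = adfuller(gauge.dropna())
print(f"ADF statistic: {adf_stat:.6f}, p-value: {p_value:.4f}")
# A p-value below 0.01 (as reported in the text) rejects the null hypothesis of a
# unit root, so the series can be treated as stationary for ARIMA modeling.
```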
The ARIMA (p, d, q) model is used in this research, where "p" is the autoregressive order, "d" is the number of non-seasonal differences needed to make the series stationary, and "q" is the moving-average order. Different values of p, d, and q were tested and the results are shown in Table 1. AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion) were used to evaluate the performance of the different configurations of "p," "d," and "q" [41]. The (p, d, q) values were generated using the Python library "pmdarima." The model with parameters (1, 1, 3) gives the lowest AIC and BIC values, making it the best choice to compare with the LSTM. The final architecture for the LSTM was selected through a parameter sweep in Python using the deep learning library "keras." Different configurations of the architecture elements, such as the number of hidden layers, the width of the hidden layers (number of neurons), batch size, activation functions, and optimizers, were tested using grid search and trial-and-error approaches. The best performing architecture is shown in Table 2. The look-back for the model is 90, meaning it uses the past 90 values to predict the 91st value. The input layer, consisting of 90 neurons, passes its output to the LSTM layer, which consists of 20 neurons. The output of the LSTM layer is passed to a dense layer generating a single output, which is the 91st value, i.e., the forecast generated by the model. The other parameters were a batch size of 60 and "adam" as the optimizer. The final model consists of 120,501 trainable parameters and was trained for 100 epochs. The errors of the two prediction methods on the test data are shown in Table 3. Once trained, both the ARIMA and LSTM models were used to make predictions starting at 6 PM on 1 September until 12 AM on 3 September. Figure 5 and Table 4 show that the LSTM model performs better at predicting gauge height than ARIMA.
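The two model-selection steps described above could be set up roughly as follows, using "pmdarima" for the ARIMA order search and "keras" for the LSTM. The variable `series`, the windowing helper, and the exact layer widths are assumptions for illustration; the configuration yielding the reported 120,501 trainable parameters is not fully specified in the text, so the sketch only reproduces the stated look-back of 90, an LSTM layer of 20 units, a single-output dense layer, batch size 60, the "adam" optimizer, and 100 epochs.

```python
import numpy as np
import pmdarima as pm
from tensorflow import keras

# --- ARIMA order selection via AIC (cf. Table 1); `series` is the 15-minute
# --- gauge-height series as a 1-D array or pandas Series (assumed).
arima = pm.auto_arima(series, start_p=0, start_q=0, max_p=5, max_q=5,
                      d=None, seasonal=False,
                      information_criterion="aic", trace=True)

# --- Windowing: use the past 90 values to predict the 91st ---
look_back = 90
values = np.asarray(series, dtype="float32")
X = np.stack([values[i:i + look_back] for i in range(len(values) - look_back)])
y = values[look_back:]
X = X[..., np.newaxis]                     # shape (samples, 90, 1)

# --- LSTM architecture (cf. Table 2) ---
model = keras.Sequential([
    keras.layers.Input(shape=(look_back, 1)),
    keras.layers.LSTM(20),                 # hidden LSTM layer with 20 units
    keras.layers.Dense(1),                 # single output: the 91st value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, batch_size=60, epochs=100)
```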
Effect of Dropout on Model Performance
Monte Carlo dropout is one of the several regularization techniques used to avoid overfitting and improve LSTM performance. The values of a dropout layer range from "0" to "1," representing the proportion of the nodes from the previous layer removed at random. Dropout was applied during both training for regularization and testing for Bayesian interpretation.
The large data set used for testing makes visualization of the difference between the data challenging. To highlight the differences, Figure 6 shows a small section of the results from 22 August 2019 at 6 AM until 1 September 2019 at 4 PM. The plot shows the effect of different levels of dropout on model performance for the test data. The dropout has a negative impact on the model performance due to its regularization capability. It can be observed that the prediction error increases with an increase in the value of "dropout." The results are as shown in Table 5. The evaluation metrics were calculated based on the entire testing data.
Uncertainty Estimation
Three different uncertainty estimation techniques were used to analyze the data: data sub-selection, noise, and dropout. Several ranges for data sub-selection were tested; a range of (S/1000, S/2), where S represents the total length of the training data, performed best and was used. Dropout values from 0.1 to 0.8 were tested for the dropout layer, and normally distributed noise with mean 0 and variance between 0.01 and 0.1 was tested for the noise approach. A total of 200 simulations were performed for each model. The final parameters chosen for the models were 0.2 for dropout, a range of +/− 0.1 for noise, and (S/1000, S/2) for data sub-selection. The confidence intervals for dropout are shown in Figure 7, the noise results in Figure 8, and the data sub-selection results in Figure 9. Each prediction has a certain degree of error associated with it, and these predictions are used to make further predictions as the model continues to run. As more predictions are made using predicted values rather than true data points, these errors are propagated and accumulate, causing greater uncertainty. This can be seen as an increase in the error bounds and a corresponding loss of accuracy in Figures 7-11. The three different shades indicate the 95%, 90%, and 85% intervals, from light to dark. The 95% confidence intervals were used to compare the performance of the different approaches.
The uncertainty estimation results are shown in Table 6. The corresponding RMSE, MAE, and uncertainty area are evaluated based on the predicted mean values, shown in blue in Figures 7-9. All the models have similar RMSE and MAE values; however, there is a significant difference in the area under the 95% prediction interval. The range of uncertainty values increases with time for all the models. Data sub-selection has the smallest uncertainty area, followed by dropout, and then random noise. The mean predictions for dropout are slightly better than those of the other models. The ranges of predictions for 3 September, 12 AM from the three different models are (−9, 29) for dropout, (−12, 33) for noise, and (0, 21) for data sub-selection. While dropout has a slight benefit in accuracy, the data sub-selection model has a much smaller uncertainty area.
Model Validation
Figure 10 shows the validation results for scenario 1, a period during which the gauge height is stationary; the model captured all the observed values within the 95% confidence interval using the data sub-selection method. The mean prediction for a day ahead into the future shows a deviation of approximately 2 ft and a 95% confidence interval range of 8 ft for scenario 1. The model was also used to generate forecasts when the gauge height is increasing, with the corresponding results shown in Figure 11. The predicted mean during validation shows little deviation from the actual data, demonstrating the capability of the model for real-time gauge height prediction. The lead-time in these figures is 1 day and 1 hour, and the model allows the lead-time to be adjusted to the corresponding user's requirements.
Discussion
Deep learning models are based on the assumption that the layers and activation functions are able to capture the seasonality and trend within the data. Gauge height prediction is a complex problem, as it is an observation from an intricate system consisting of weather and geophysical elements. Obtaining all of this information is data intensive, and in most situations the data are not available, are inconsistent, or are available only for a short period. Therefore, a deep learning prediction model is an ideal solution for such problems.
The LSTM was able to predict gauge height more accurately than ARIMA or the physics-based models currently used by the USGS, likely because of its ability to capture long-term temporal dynamics. Because of the large time frame covered by the data set, the LSTM is better able to capture small variations in the predictions than the ARIMA model. The moving average used by the ARIMA model tends to discourage variation, making ARIMA less able to capture the rapid changes in water level found in the out-of-sample data. One of the challenges with predictions from deep learning models is uncertainty quantification. This can be addressed by comparing the uncertainty estimates from different regularization techniques. Dropout was explored as a method for regularizing the LSTM model, but a direct relationship between the error and the dropout value was found, showing that dropout did not perform well in this application. The data sub-selection method was shown to provide better performance when used for Bayesian inference, leading to less uncertainty than both the dropout method and the random noise method. The predicted gauge heights were validated by comparing the results of the uncertainty analysis to the actual values recorded at this location from 12 December 2019 6 AM until 13 December 2019 7 AM, using the same model architecture as was used for the dropout testing and the uncertainty analysis. This demonstrates that the LSTM model can be used with dropout and other uncertainty estimation techniques to improve the architecture and reduce the prediction uncertainty.
River gauge height is currently used by the USGS for flood inundation mapping, but inconsistency in the data available at different locations is a challenge. Many sites are operated by different organizations, resulting in variation in the type of data (weather data, gauge height, and discharge) and the time step of the recordings. For example, two gauges in Saint Louis, one on the Meramec River in Valley Park, MO, and another on the Mississippi River in downtown Saint Louis, are both operated by the United States Army Corps of Engineers-St. Louis District but have recording time steps of 15 minutes and 1 hour, respectively. This inconsistency results in the need to train the model for each gauge separately. The methodology presented here can identify the appropriate model in less time and with fewer resources.
The objective of the research was to develop a methodology to predict gauge height more accurately and develop an uncertainty measure to identify the quality of those predictions. The uncertainty interval can be controlled by regulating the variability being introduced to the data or the model. It should be noted that the model architecture was not changed when introducing these variations. This was done to show that existing models could be used to develop uncertainty estimates. We can further improve the estimates by optimizing the model architecture for each variation.
In the context of floods, accurate gauge height predictions can be used to develop flood maps and identify possible future damage. This can help emergency response and other applications to preemptively relocate people, close roads, and take other precautionary measures to save lives. The 3-dimensional digital elevation models published by the USGS can be used to develop flood maps for a given region, using software such as ArcGIS and QGIS, based on the gauge height predictions generated by the current model. The USGS recently published the "Flood Inundation Mapper" (FIM), a tool that provides this type of flood mapping for a given region at a given gauge height. A limited number of locations are currently included in FIM (Figure 12), but new regions are being added every month. As this resource becomes more widely available, it can be integrated with the gauge height predictions to generate future flood maps for a given region. One of the major reasons for deaths during floods is people underestimating the amount of water and driving onto flooded roads. The method presented here not only gives more accurate predictions but also provides gauge height predictions at a smaller interval than is currently used. This methodology can be integrated with road network models to identify flooded roads ahead of time, to preemptively close roads, put up signs, and evaluate alternative routes for travelers. | 8,479.8 | 2020-03-21T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science"
] |
Transcending the challenge of evolving resistance mechanisms in Pseudomonas aeruginosa through β-lactam-enhancer-mechanism-based cefepime/zidebactam
ABSTRACT Multi-drug resistant (MDR) Pseudomonas aeruginosa harbor a complex array of β-lactamases and non-enzymatic resistance mechanisms. In this study, the activity of a β-lactam/β-lactam-enhancer, cefepime/zidebactam, and novel β-lactam/β-lactamase inhibitor combinations was determined against an MDR phenotype-enriched, challenge panel of P. aeruginosa (n = 108). Isolates were multi-clonal as they belonged to at least 29 distinct sequence types (STs) and harbored metallo-β-lactamases, serine β-lactamases, penicillin binding protein (PBP) mutations, and other non-enzymatic resistance mechanisms. Ceftazidime/avibactam, ceftolozane/tazobactam, imipenem/relebactam, and cefepime/taniborbactam demonstrated MIC90s of >128 mg/L, while cefepime/zidebactam MIC90 was 16 mg/L. In a neutropenic-murine lung infection model, a cefepime/zidebactam human epithelial-lining fluid-simulated regimen achieved or exceeded a translational end point of 1−log10 kill for the isolates with elevated cefepime/zidebactam MICs (16–32 mg/L), harboring VIM-2 or KPC-2 and alterations in PBP2 and PBP3. In the same model, to assess the impact of zidebactam on the pharmacodynamic (PD) requirement of cefepime, dose-fractionation studies were undertaken employing cefepime-susceptible P. aeruginosa isolates. Administered alone, cefepime required 47%–68% fT >MIC for stasis to ~1 log10 kill effect, while cefepime in the presence of zidebactam required just 8%–16% for >2 log10 kill effect, thus, providing the pharmacokinetic/PD basis for in vivo efficacy of cefepime/zidebactam against isolates with MICs up to 32 mg/L. Unlike β-lactam/β-lactamase inhibitors, β-lactam enhancer mechanism-based cefepime/zidebactam shows a potential to transcend the challenge of ever-evolving resistance mechanisms by targeting multiple PBPs and overcoming diverse β-lactamases including carbapenemases in P. aeruginosa.
IMPORTANCE
Compared to other genera of Gram-negative pathogens, Pseudomonas is adept in acquiring complex non-enzymatic and enzymatic resistance mechanisms thus remaining a challenge to even novel antibiotics including recently developed β-lactam and β-lactamase inhibitor combinations. This study shows that the novel β-lactam enhancer approach enables cefepime/zidebactam to overcome both non-enzymatic and enzymatic resistance mechanisms associated with a challenging panel of P. aeruginosa. This study highlights that the β-lactam enhancer mechanism is a promising alternative to the conventional β-lactam/β-lactamase inhibitor approach in combating ever-evolving MDR P. aeruginosa.
Pseudomonas aeruginosa infections are often difficult to treat, particularly in patients admitted to intensive care units with immunosuppression and other comorbidities. Often these patients have received inappropriate empiric therapy or are infected with multi-drug resistant (MDR) or extreme-drug resistant (XDR) pathogens. The successful proliferation of high-risk MDR/XDR clones of P. aeruginosa is a consequence of the organism's ability to manifest intrinsic and acquired resistance mechanisms, in turn challenging the approaches traditionally employed for the discovery of novel antibiotics (1, 2).
Despite the introduction of several anti-pseudomonal antibiotics in the past decade, the conundrum of resistance mechanisms composed of hyper-efflux, impermeability, and β-lactamases in P. aeruginosa continues to pose therapeutic uncertainty during every treatment episode. Older anti-pseudomonal drugs (ceftazidime, cefepime, and piperacillin/tazobactam) are compromised by the hyper-production of pseudomonal-derived cephalosporinases (PDCs) (3), while OprD inactivation and/or hyper-efflux mechanisms impact carbapenems (4). Target modifications in the background of efflux compromise the activity of fluoroquinolones and aminoglycosides (5). As a result, in the United States, currently 20%-30% of P. aeruginosa isolates display an MDR phenotype, which prompted the Centers for Disease Control and Prevention (CDC) to designate this pathogen as a "serious" threat. Likewise, the World Health Organization (WHO) designates P. aeruginosa as a "critical" pathogen for which new antibiotics are urgently needed (6-8). A recent CDC report describes a disturbing trend of a 32% rise in hospital-onset infections caused by MDR P. aeruginosa in 2020, highlighting the "collateral damage" of the COVID-19 pandemic (9).
Of late, much of the antibiotic discovery effort has been directed towards finding novel β-lactam or β-lactamase inhibitor-based combinations to overcome diverse β-lactam-impacting resistance mechanisms in Gram-negatives, including carbapenem-resistant Enterobacterales, P. aeruginosa, and Acinetobacter baumannii. Such efforts have led to the development of combinations such as ceftazidime/avibactam, ceftolozane/tazobactam, imipenem/relebactam, and cefepime/taniborbactam that show improved anti-pseudomonal activity compared to older therapies. However, a "coverage gap" continues to exist, as despite the chemical diversity of newer β-lactamase inhibitors, many are unable to inhibit the entire range of clinically significant β-lactamases in this pathogen (10). Also, reports of newer PDC variants continue to challenge the inhibitory activity of novel inhibitors paired with cephalosporins (11-13).
Recently, an unconventional discovery approach based on a novel β-lactam enhancer action has been reported for phase 3-stage cefepime/zidebactam (WCK 5222) (14). The enhancer action of this combination is mediated by zidebactam, a novel bicycloacyl hydrazide (derived from diazabicyclooctane) possessing potent penicillin-binding protein (PBP) 2 binding activity. When zidebactam is combined with PBP3-targeting cefepime, a mechanistic synergy is triggered, resulting in the enhancement of cefepime's bactericidal activity, both in vitro and in vivo, against a broad spectrum of Gram-negative pathogens expressing diverse carbapenem-impacting resistance mechanisms (15, 16). Against A. baumannii, pharmacodynamic (PD) studies have established that zidebactam lowers the cefepime exposure required for in vivo bactericidal activity (17), which forms the basis for the combination's efficacy against isolates with cefepime/zidebactam minimum inhibitory concentrations (MICs) of up to 64 mg/L in translational animal infection models (18, 19).
For challenging P. aeruginosa infections involving MDR/XDR isolates, the potential clinical utility of novel agents developed through the aforementioned approaches would rely on their ability to overcome a multiplicity of resistance mechanisms. To investigate this aspect, a set of 108 whole-genome-sequenced heterogeneous P. aeruginosa isolates collected from the U.S. and harboring diverse β-lactam-impacting resistance mechanisms was assembled. The in vitro activity of cefepime/zidebactam and novel anti-pseudomonal β-lactam/β-lactamase inhibitor combinations was determined against this panel. Furthermore, isolates with cefepime/zidebactam MICs > 8 mg/L (higher than the cefepime susceptible breakpoint) were employed in a translational neutropenic murine lung infection study to assess the in vivo efficacy of a human epithelial-lining-fluid-simulated regimen (ELF-HSR) of cefepime/zidebactam. Finally, using the same model, the pharmacokinetic/pharmacodynamic (PK/PD) basis of the in vivo efficacy of cefepime/zidebactam against P. aeruginosa was deciphered by studying the impact of zidebactam on cefepime's % fT >MIC requirement. For this purpose, cefepime-susceptible P. aeruginosa isolates were used, as such isolates enable identifying standalone cefepime's % fT >MIC requirement, which can then be compared with cefepime's requirement in the presence of zidebactam.
Genetic composition of challenge isolates
Analysis of the whole genome sequences revealed that the study isolates (n = 108) belonged to a diverse genetic background with at least 29 distinct, previously reported sequence types (STs). When analyzed in comparison to the genome of the reference isolate (P. aeruginosa PAO1), the study isolates demonstrated many nucleotide changes in the genes encoding several key functional proteins (PBP2, PBP3, PDC, AmpR, MexR, MexB, NalC, and OprD) known to be associated with β-lactam resistance in P. aeruginosa (Fig. 1; Table S1).
The V126E substitution in MexR is frequently found in MDR P. aeruginosa isolates and has been associated with an increase in the MICs of imipenem in the background of impermeability (23-25). Thus, among KPC-producing isolates, the main reason for ceftazidime/avibactam and imipenem/relebactam non-susceptibility seems to be linked with changes in PBP3 and efflux in the background of impermeability (OprD mutations). Though OprD mutations were also observed in the ceftazidime/avibactam- and imipenem/relebactam-susceptible isolates, the absence of high-level resistance in the majority of those isolates suggests that the major contributors to raising their MICs are PBP3 modifications and efflux.
Comparative in vitro activity
Table 1 shows the MIC distribution of antibiotics for the isolates categorized as MBL- or KPC-producers or non-carbapenemase producers. Fig. S2 shows the MICs of cefepime/zidebactam versus other β-lactam/β-lactamase inhibitor combinations for each carbapenemase-producing isolate.
As stated above, bla KPC was found in 19 isolates, with KPC-2 in 18 isolates and KPC-5 in the remaining isolate. All were non-susceptible to ceftolozane/tazobactam. Against this subset, despite avibactam, relebactam, and taniborbactam being known to inhibit KPC, their respective combinations showed limited activity at their corresponding susceptible breakpoints. On the other hand, cefepime/zidebactam MICs ranged from 4 to 32 mg/L; 16/19 isolates were inhibited at ≤16 mg/L (Table 1).
No carbapenemase was detected in the remaining 71 isolates. Regardless, 91.5% of them were non-susceptible to meropenem, which indicates enrichment of carbapenem-impacting non-enzymatic resistance mechanisms in this panel. Unexpectedly, 29/71 (40.8%) and 36/71 (50.7%) were non-susceptible to ceftolozane/tazobactam and ceftazidime/avibactam, respectively (Table 1). As described earlier, this non-susceptible population was enriched with substitutions in PBP3 and efflux proteins as well as in PDCs reported to be linked with a rise in ceftolozane/tazobactam and ceftazidime/avibactam MICs (Table S1). Imipenem/relebactam and cefepime/taniborbactam also showed sub-optimal activity; 27/71 (38%) and 32/71 (45.1%) of isolates were non-susceptible to these combinations, respectively. With regard to non-β-lactam antibiotics, among the entire population, extreme resistance to ciprofloxacin, substantial resistance to amikacin, and potent activity of colistin were observed. The antibiotic panel also included meropenem/vaborbactam, which is not discussed above because, predictably, the addition of vaborbactam did not improve the activity of meropenem (Table 2).
Assessment of cefepime/zidebactam efficacy employing ELF-human-simulated regimen
For the in vivo efficacy study, all the isolates with elevated cefepime/zidebactam MICs (>8 mg/L, n = 15) were chosen. However, only 9/15 isolates were able to successfully infect and grow in the lungs of neutropenic mice (the others were unfit) and were included in the efficacy assessment study (Table 3). The bacterial load in the lungs at 0 h ranged from 5.4 to 6.2 log10 CFU (mean 5.8 ± 0.2 log10 CFU). In the untreated groups, all the mice succumbed to infection by 24 h. Cefepime HSR was not efficacious; a net growth of >1 log10 CFU/lung was noted for 7/9 isolates, and for the remaining two isolates, 0.84 log10 net growth (VA107) and 100% mortality (VA93) were observed. Zidebactam HSR showed bactericidal efficacy with a mean net drop of 0.75 ± 0.42 log10 CFU/lung for 8/9 isolates, while for the lone remaining isolate a net growth of 1.97 log10 CFU/lung was observed. In contrast, the cefepime/zidebactam HSR demonstrated pronounced killing of all the studied isolates, with a magnitude ranging from 1.1 log10 CFU/lung to 2.7 log10 CFU/lung (mean 1.9 ± 0.6 log10 CFU/lung), thus exceeding the translational end point of 1-log10 kill (Fig. 3).
Morphology of cefepime, zidebactam, or cefepime/zidebactam treated cells
All nine isolates treated with zidebactam transformed into spherical forms, consistent with PBP2 inactivation. Cefepime-treated cells showed elongation indicative of PBP3 binding. Cefepime/zidebactam-treated cells were pleomorphic and lysis-prone, suggesting bactericidal action associated with the synergistic effect of concurrent inactivation of multiple PBPs (Fig. 5).
DISCUSSION
Contemporary MDR/XDR P. aeruginosa phenotypes are characterized by co-expression of various resistance mechanisms, mainly increased efflux activity, OprD inactivation, PBP mutations, and ever-evolving β-lactamase variants. The daunting task has been to optimize a single antibiotic that overcomes all these resistance mechanisms. Unfortunately, older as well as newer anti-pseudomonal antibiotics are able to handle only a limited spectrum of resistance mechanisms, thus precluding their use as reliable monotherapy for contemporary pseudomonal infections. Therefore, for seriously ill patients, combination therapies are required to improve clinical outcomes. However, they pose toxicity risks, adverse PK interactions, and dosing difficulties (29). Challenged against a set of MDR P. aeruginosa isolates possessing diverse resistance mechanisms (PDC, KPC, MBLs, enhanced efflux, OprD inactivation, PBP3 and PBP2 substitutions), the present study revealed the limitations of novel anti-pseudomonal β-lactam/β-lactamase inhibitor combinations. Against this select collection, the susceptibilities to imipenem/relebactam, ceftazidime/avibactam, ceftolozane/tazobactam, and cefepime/taniborbactam were <60% even for the subset of isolates not producing any carbapenemase (overall <50%). This was somewhat unanticipated, since ceftazidime/avibactam, ceftolozane/tazobactam, and cefepime/taniborbactam are expected to overcome PDC over-expression (avibactam and taniborbactam being potent β-lactamase inhibitors and ceftolozane being stable to PDC hydrolysis) (30, 31) and OprD truncations (cephalosporins are known to be less impacted than carbapenems) (32, 33). Thus, the modest activities of these β-lactam/β-lactamase inhibitors observed in this study could be attributed to hampered target binding due to PBP3 substitutions, particularly in the background of impermeability. Compromise in the activity of imipenem/relebactam could also be attributed to PBP changes, though the contribution of concurrently operating additional resistance mechanisms cannot be ruled out. The probable role of PBP substitutions in impacting the activity of these β-lactam/β-lactamase inhibitors stems from the elevated MICs of ceftazidime/avibactam (8 to 32 mg/L) and imipenem/relebactam (16 to >128 mg/L) against ST 1801 isolates harboring KPC-2. Even though both avibactam and relebactam are potent inhibitors of KPC, and mutations in genes encoding hyper-efflux were not identified in the ST 1801 isolates, ceftazidime/avibactam and imipenem/relebactam MICs remained high (Table S1). Thus, in conjunction with already well-established resistance mechanisms, proliferation of PBP substitutions in P. aeruginosa could further add to the challenge of optimizing novel antibiotics targeted towards this problematic pathogen.
In contrast, β-lactam-enhancer-based cefepime/zidebactam demonstrated potent activity against the same set of P. aeruginosa isolates, with 86.1% susceptibility at the cefepime (2 g, q8h) breakpoint of 8 mg/L, rising to 100% at 32 mg/L, an in vivo efficacy-supported cut-off being proposed as cefepime/zidebactam's PK/PD breakpoint for P. aeruginosa. Earlier independent in vivo translational studies have established therapeutically relevant coverage by cefepime/zidebactam of P. aeruginosa isolates with MICs higher than cefepime's susceptible breakpoint (up to 32 mg/L) (34, 35).
The consistent in vitro activity of cefepime/zidebactam against MDR P. aeruginosa isolates expressing a multitude of resistance mechanisms, including PBP substitutions, is a result of β-lactamase-stable zidebactam's PBP2 binding action, which continues even in isolates expressing enhanced efflux or impermeability, as shown in this study. Likewise, cefepime's ability to engage its high-affinity PBP targets is facilitated by rapid cellular penetration and a fast rate of PBP binding (14). As a result, in combination, a PBP-level synergistic interaction enables cefepime/zidebactam to overcome multiple β-lactam-impacting non-enzymatic and enzymatic resistance mechanisms in P. aeruginosa.
Functional evidence of a multiple-PBP-binding-driven synergistic interaction between cefepime and zidebactam was also manifested through changes in the morphology of Pseudomonas cells. Upon exposure to the cefepime/zidebactam combination, we observed that the cell morphology changed from cocco-bacillary to lysis-prone spheroplasts (an indication of multiple PBP engagement). Interestingly, such morphological change was noted at a significantly lower concentration of cefepime when combined with zidebactam, as compared to the concentration of standalone cefepime required to induce elongation (an indication of PBP3 binding). Thus, we hypothesize that engagement of PBP3 and PBP2 (multiple target inactivation) by cefepime and zidebactam, respectively, has a synergistic effect by efficiently inhibiting or arresting cell wall synthesis, allowing more rapid cell wall degradation (36).
A similar activity profile of cefepime/zidebactam was also reported by Mullane et al., wherein the in vitro activity of standalone cefepime/zidebactam was compared with several combinations against a panel of 30 carbapenem-resistant P. aeruginosa (37). While 97% of isolates were inhibited by cefepime/zidebactam alone at ≤16 mg/L, the susceptibility rates to combinations of other antibiotics (cefepime, ceftolozane/tazobactam, or meropenem combined with either amikacin or fosfomycin) were lower (<70% at established breakpoints). Pathogen coverage achieved with cefepime/zidebactam alone was even broader than that achieved with the most active combination of ceftolozane/tazobactam plus amikacin or fosfomycin (37).
We further investigated the impact of the higher cefepime/zidebactam MICs of 16 and 32 mg/L obtained for nine isolates (Table 3) on its in vivo efficacy by employing a neutropenic murine pneumonia model. The results showed that, for all the isolates regardless of MICs, cefepime/zidebactam ELF-HSR caused a ≥1-log10 kill, thus exceeding the translational end point. Interestingly, for the majority of isolates, even zidebactam monotherapy showed a considerable bactericidal effect. Notably, these isolates display several resistance mechanisms such as VIM-2, KPC-2, and PBP3 and PBP2 substitutions. To investigate the PK/PD basis of the coverage of P. aeruginosa with higher cefepime/zidebactam MICs (above the cefepime breakpoint of 8 mg/L), we assessed the impact of zidebactam on cefepime's % fT >MIC requirement. This study showed that standalone cefepime fT >MIC of ≥46.8% provided merely a bacteriostatic to ~1 log10 kill effect, while in combination with zidebactam, a lowered cefepime fT >MIC of 8%-16% imparted a substantially higher kill of >2 log10 CFU. Such modulation of the partner antibiotic's PK/PD is an attribute associated with zidebactam's β-lactam enhancer action, not reported with conventional β-lactamase inhibitors. Thus, the lowered requirement of cefepime's % fT >MIC (linked with bactericidal effect) in the presence of zidebactam provides a rationale for the observed in vivo bactericidal effect of cefepime/zidebactam against P. aeruginosa with higher cefepime/zidebactam MICs through its humanized regimen (cefepime 2 g + zidebactam 1 g, TID). Moreover, the adequacy of a shorter % fT >MIC requirement of cefepime/zidebactam in rendering a bactericidal effect is expected to be beneficial in the critically-ill patient population, which is often associated with reduced drug exposures (17).
The in vivo efficacy results obtained in this study are in agreement with the Kidd et al. study, which showed pronounced in vivo efficacy (static to 2 log10 kill) of cefepime/zidebactam ELF-HSR against several carbapenem- and ceftolozane/tazobactam-resistant P. aeruginosa isolates, including isolates with cefepime/zidebactam MICs up to 32 mg/L (34). Taking into account the MIC90 of cefepime/zidebactam against global isolates (4 mg/L, n = 4808) and the MIC90 against the subset of meropenem-non-susceptible isolates (8 mg/L, n = 1147) (38), the translational efficacy of cefepime/zidebactam against P. aeruginosa isolates with MICs up to 32 mg/L demonstrated in this study potentially suggests near-total coverage of MDR/XDR P. aeruginosa. If the 108 P. aeruginosa clinical isolates included in this study were to represent pathogens causing infections in high-resistance regions or in the ICU setting, cefepime/zidebactam would be expected to be an important future arsenal for the treatment of MDR P. aeruginosa infections.
In summary, in the present study, the β-lactam enhancer-based approach showed promise in overcoming MDR P. aeruginosa regardless of the resistance mechanisms expressed. While the multitude of resistance mechanisms expressed by MDR P. aeruginosa poses severe impediments to newer anti-pseudomonal drugs, the novel β-lactam enhancer approach, as exhibited by zidebactam, shows potential to transcend this challenge.
Bacterial isolates
A collection of 108 well-characterized clinical P. aeruginosa isolates was used in this study. These isolates had been collected from northeast Ohio and the Mid-Atlantic states and were stored in the investigator's laboratory. They were previously determined by phenotypic testing to be carbapenem resistant (>90% of isolates), and most were previously described (39-42).
MICs of antibiotics were determined by the broth microdilution method as recommended by the Clinical & Laboratory Standards Institute M100 guideline (43). Cefepime/zidebactam MICs were determined at a 1:1 ratio. A fixed inhibitor concentration of 4 mg/L was used for avibactam, relebactam, tazobactam, and taniborbactam, while 8 mg/L was used for vaborbactam. FDA breakpoints were employed for determining the susceptibility rates of isolates to the comparator antibiotics.
Assessment of cefepime/zidebactam efficacy employing ELF-HSR
In vivo efficacy of HSRs of cefepime alone, zidebactam alone, or cefepime/zidebactam was evaluated in a murine neutropenic infection model as described previously (34) against isolates with cefepime/zidebactam MICs of >8 mg/L (Table 3). These dosing regimens (designed at Wockhardt) produced cefepime, zidebactam, or cefepime-plus-zidebactam exposures in murine epithelial lining fluid (ELF) comparable to the respective exposures obtained in human ELF after 2 + 1 g, q8h administration of cefepime/zidebactam. The comparability between murine and human exposures in the ELF was in terms of the proportion of time during which cefepime and zidebactam concentrations remained above the cefepime/zidebactam MICs (Table S2A and B; Fig. S1). Male/female Swiss Albino mice were rendered neutropenic by intra-peritoneal injections of cyclophosphamide at 150 and 100 mg/kg 4 days and 1 day prior to infection, respectively. The humanized regimen of cefepime/zidebactam in mice was rendered feasible by slowing renal elimination with uranyl nitrate (5 mg/kg, intra-peritoneal injection) administered 3 days before infection (46).
Animals were infected with 0.05 mL of normal saline containing ~10^7 CFU/mL of P. aeruginosa through the nostrils under isoflurane-induced transient anesthesia. Nine P. aeruginosa isolates with cefepime/zidebactam MICs of 16-32 mg/L and resistance to imipenem/relebactam and ceftolozane/tazobactam were employed in this study (Table 3). Treatment (subcutaneous injections) was initiated 2 h post-infection with cefepime HSR, zidebactam HSR, or cefepime/zidebactam combination HSR. A group of animals was administered the vehicle control. After the 24-h treatment duration, animals were humanely sacrificed, and the lung bacterial load was estimated. At the time of treatment initiation (0 h), a separate group of infected mice was sacrificed to determine the lung bacterial burden at time 0 h. Efficacy was defined as the change in bacterial load at 24 h compared to 0 h. All groups consisted of six animals.
Effect on cefepime's % fT>MIC requirement in the presence of zidebactam
The above-described murine neutropenic infection model was employed, with the exception that animals were not administered uranyl nitrate. Cefepime was administered as a single dose (q24h) or in fractionated regimens over 24 h (every 12 h [q12h], every 6 h [q6h], every 3 h [q3h], or every 2 h [q2h]), and the same regimens were combined with zidebactam given at 8.33 mg/kg every 2 h (q2h). The infecting isolates were cefepime-susceptible (n = 3), which enabled determining the efficacy-linked magnitude of % fT >MIC for cefepime monotherapy, which could then be compared with that of cefepime in the presence of zidebactam. The % fT >MIC of cefepime in the various fractionated regimens was determined using a non-linear sigmoidal Emax model (GraphPad Prism version 7) and previously reported mouse plasma PK of cefepime (18).
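For readers unfamiliar with the %fT >MIC index used in this dose-fractionation analysis, the sketch below illustrates how it can be computed as the percentage of the 24-h dosing window during which free drug concentration exceeds the MIC, assuming a simple one-compartment intravenous-bolus profile. All parameter values (dose, half-life, volume, protein binding) are illustrative assumptions, not the mouse PK values used in the study, which relied on previously reported plasma PK and an Emax model fit.

```python
import numpy as np

def percent_fT_above_MIC(dose_mg, n_doses, interval_h, half_life_h,
                         v_litres, fu, mic_mg_per_L, horizon_h=24.0):
    k = np.log(2) / half_life_h                    # first-order elimination rate
    t = np.arange(0.0, horizon_h, 0.01)            # fine time grid over 24 h
    conc = np.zeros_like(t)
    for i in range(n_doses):                       # superposition of repeated doses
        t_dose = i * interval_h
        mask = t >= t_dose
        conc[mask] += (dose_mg / v_litres) * np.exp(-k * (t[mask] - t_dose))
    free = fu * conc                               # free (unbound) concentration
    return 100.0 * float(np.mean(free > mic_mg_per_L))

# Example: a q6h regimen against an MIC of 8 mg/L (all numbers illustrative)
print(percent_fT_above_MIC(dose_mg=1000, n_doses=4, interval_h=6,
                           half_life_h=2.0, v_litres=18.0, fu=0.8,
                           mic_mg_per_L=8))
```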
Morphology of cefepime, zidebactam, or cefepime/zidebactam-treated cells
Morphological changes were studied for isolates with cefepime/zidebactam MICs of 16-32 mg/L by exposing a bacterial density of 10^6 CFU/mL in cation-adjusted Mueller-Hinton broth, under shaking conditions, to sub-inhibitory or inhibitory concentrations of zidebactam, cefepime, or cefepime/zidebactam. The treated cells were visualized after 3 h of exposure using a phase contrast microscope.
FIG 2
Comparative distribution of MICs of two cefepime-based combinations, categorized according to the major resistance mechanisms identified through whole genome sequencing of 108 P. aeruginosa isolates. Each symbol represents one isolate. FEP/ZID: cefepime/zidebactam; FEP/TAN: cefepime/taniborbactam.
TABLE 1
MIC distributions of cefepime/zidebactam and other antibiotics for major resistance groups.a
a Susceptible range (FDA criteria) for each agent except cefepime/zidebactam, cefepime/taniborbactam, aztreonam/avibactam, and meropenem/vaborbactam is depicted by boldfaced numbers; for these approved antibiotics, FDA breakpoints are consistent with CLSI breakpoints. The MIC of cefepime/zidebactam was determined at a 1:1 ratio. A fixed 4 mg/L inhibitor concentration was used for cefepime/taniborbactam, ceftolozane/tazobactam, ceftazidime/avibactam, and imipenem/relebactam. A fixed 8 mg/L inhibitor concentration was used for meropenem/vaborbactam.
TABLE 2
MIC range, MIC50, and MIC90 of cefepime/zidebactam and other antibiotics for all P. aeruginosa isolates (n = 108).
a Susceptibility interpreted against FDA criteria.
b As per the cefepime susceptible breakpoint.
c As per cefepime/zidebactam's proposed PK/PD breakpoint of ≤32 mg/L.
d As per the aztreonam standalone susceptibility breakpoint.
e Based on the meropenem susceptible breakpoint of ≤2 mg/L.
f Based on the CLSI intermediate breakpoint of ≤2 mg/L. | 5,124 | 2023-10-27T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Synthesis and evaluation of D-thioluciferin, a bioluminescent 6'-thio analog of D-luciferin
All known light-emitting firefly-bioluminescent luciferin analogs are derived from either the 6'-hydroxy- or the 6'-amino-luciferin. We report the synthesis of D-thioluciferin, a 6'-thio analog (isostere) of D-luciferin, starting from p-aminothiophenol and using a unique thioacrylate S-protecting-group strategy. Upon treatment of D-thioluciferin with purified Photinus pyralis (Ppy) luciferase (Luc), a bioluminescence emission with a red-shifted λmax relative to D-luciferin was observed. It was also shown that disulphide and sulphide analogs of D-thioluciferin did not produce bioluminescence comparable to D-thioluciferin when treated with Ppy Luc under standard conditions, thus providing a foundation for the development of D-thioluciferin-based probes exploiting disulphide reduction and S-dealkylation.
D-Luciferin is the light-emitting molecule responsible for the bioluminescence observed in the American firefly Photinus pyralis (Scheme 1).1 Biosynthesis of luciferin starts with quinone, followed by the addition of two mol equivalents of L-cysteine with concomitant loss of CO2.2 The enzyme-controlled stereochemical inversion of L-luciferin to D-luciferin occurs by activation with CoA by virtue of a thioester conjugate at the carboxyl moiety.3 The latter process can be considered a natural light switch that is utilised as a means of communication for the firefly. Both D- and L-luciferin react in the presence of O2, ATP, Mg2+ and the luciferase enzyme, but only D-luciferin (1) produces a yellow/green light. There is speculation that the firefly does not waste the resulting oxyluciferin and can use luciferin-regenerating enzyme (LRE) to produce the 6-hydroxy-1,3-benzothiazole-2-carbonitrile once again. How, or even if, the oxyluciferin is recycled is still under investigation. The chirality at the carboxyl group in natural firefly luciferin is of the S form, as established by the early chemical synthesis of D-luciferin (1) from 6-hydroxy-1,3-benzothiazole-2-carbonitrile and D-cysteine. L-Luciferin has the R form and is not used for the luminescence reaction by firefly luciferase.4 Thus, firefly luciferase oxidizes only D-luciferin, a specificity which has been exploited in gene reporter as well as cell viability assays based on ATP production.5
Scheme 1. Biosynthesis of D-luciferin (1) in the firefly, Photinus pyralis.
Modifications to the natural substrate have resulted in new luminogenic substrates with often improved properties, which have been exploited in the development of sensitive luciferin-based probes for in vivo imaging, also known as "caged luciferins".6 Most of these probes, however, are based on the release of either natural D-luciferin (1) or a 6'-amino analog, D-6-aminoluciferin (2).7,8 The natural 6-OH and synthetic 6-NH2 bioluminescent substrates have limited bioanalytical applications, particularly in terms of coupling bioluminescence activity directly with sulfur biology. The development of thiol-sensing technologies has recently become an area of increased interest because of the biological importance of thiol-containing molecules such as cysteine (Cys), homocysteine (Hcy) and glutathione (GSH).9 It is known that HC≡C−EWG compounds with strong EWGs such as SO2R and CO2R can react at ambient temperatures with relatively very weak nucleophiles in the presence of suitable catalysts.12 In the latter report, pyrrolidine-mediated deprotection of thiolacrylates has been demonstrated in organic medium. The reactions of Cys with HC≡C−CONHR, which are poorer Michael acceptors, proved to be sufficiently quick, complete, and Z-stereoselective in aqueous media.13 The kinetics of thioacrylate protection of cysteine and its release under physiological conditions have been investigated.14 Notably, peptides modified by terminal alkynones could be converted back into the unmodified peptides by treatment with thiophenol and free cysteine under mild reaction conditions.
Using the latter synthetic approach requires a p-aminothiophenol with an ideal sulfur-protecting group that could withstand the subsequent reaction conditions. Moreover, preparation of luciferin derivatives requires the use of a palladium(II) catalyst, which is expensive to use and can be poisoned by free thiol-containing reagents. There is, therefore, a need for novel synthetic methods of producing D-thioluciferins and their derivatives. We utilised an expedient synthetic approach for D-thioluciferin (3) (Jardine et al., PCT/IB2018/055542) based on the established preparation of both D-luciferin (1) and D-aminoluciferin (2).8 Most thioether-sulfur protecting groups are either too labile (e.g., trityl) or too stable (e.g., alkyl) for consideration under the latter synthetic strategy.16 In addition, thioesters could not be considered as protecting groups since the cross-coupling reaction that produces benzothiazoles from thioanilides also produces benzothiofurans.8 This methodology proceeds via the 6-allylthiobenzothiazole (IV), followed by periodate oxidation to give the allylic sulfoxide (VII), which then rearranges to an intermediate allylic sulfenate that is subsequently cleaved by triphenyl phosphite, as a reductant, to give the target intermediate 6-mercapto-1,3-benzothiazole-2-carbonitrile (VIII). Facile addition of D-cysteine gave D-thioluciferin (3).
It was anticipated that the thioacrylate-protected carbonitrile would release D-thioluciferin (3) via the addition of 2 mol equivalents of D-cysteine. The thiazoline ring formation is known to be a facile, near-quantitative reaction. One mol equivalent of cysteine adds regioselectively to complete the carboxy-thiazoline ring, giving the thioacrylate-protected thioluciferin (IX); a second mol equivalent of D-cysteine then effects a thia-Michael addition, followed by a retro-Michael reaction, resulting in the release of D-thioluciferin (3). Gratifyingly, the thioacrylate (IVd) was well tolerated by the coupling reaction, which was interesting because there are not many reported options for thiol-protecting groups that are both easily removed and stable to palladium-mediated chemistry.
The unsaturated vinyl sulfide units (IVd or IX) could be cleaved by an addition/elimination mechanism upon treatment with a thiol (RSH) (Scheme 3). Notably, D-cysteine reacts regioselectively at the nitrile group when limited to 1 mol equivalent. Thiol-sensitive, or "caged", D-thioluciferin (IX) could essentially be deprotected with any biothiol (RSH) in aqueous medium. Shiu et al. investigated the modification of cysteine-containing peptides and found that thiol protection of cysteine using electron-deficient alkynes favoured formation of the Z-isomer.14 Accordingly, peptides modified by terminal alkynones could be converted back into the unmodified peptides by treatment with thiols under mild reaction conditions. The driving force for the reaction is the elimination of the more stable thiolate anion of D-thioluciferin.
Scheme 3. Mechanism of thiol-mediated thioacrylate (IX) deprotection and simultaneous D-thioluciferin (3) synthesis.
Spectroscopic characterization
In the presence of ATP, D-luciferin (1) is oxidized by luciferase to generate oxyluciferin, thereby resulting in production of bioluminescence and loss of fluorescence proportional to the concentration of ATP. The emission wavelength of bioluminescence for D-thioluciferin (3) was then evaluated (Figure 1). It exhibited a red-shifted light emission (599 nm) when treated with purified firefly luciferase (Luc) expressed from E. coli, relative to (1) (557 nm) and D-aminoluciferin (2) (593 nm). The efficiency of a bioluminescent reaction is determined by the product of the quantum yield and the reaction rate; thus, quantitative analysis and knowledge of the quantum yield and the reaction kinetics are important. The emission intensity increased, as expected, with increasing concentrations of (3) when treated with purified luciferase under standard conditions (Figure 2a). No emission was observed for the pure enzyme in the absence of D-luciferin (1) (control 1), nor for the pure substrate in the absence of the enzyme (control 2). The burst-kinetics profile of (3) (Figure 2b) was like that reported for both (1) and (2). A rapid-injection experiment was performed in which the light output of the reaction over time was recorded. As with all known luciferins, (3) gave a robust initial burst of light followed by sustained light output of much lower intensity (Figure 2b). This trend is consistent with that previously reported for (1) and (2), where rapid decay in emission intensity post-burst corresponds to product inhibition.21 The lower emission intensity of (3) relative to the natural substrate (1) should, however, not be a deterrent to its applications in bioluminescence imaging. Such applications rely purely on light generated from the enzyme-substrate reaction and, as a result, generally have good sensitivity. Notably, (3) displayed >100-fold emission over the background.
The relative luminescence-emission intensity of natural (1) (Figure 2c) was, however, 100-fold greater than that of both (2) and (3) when treated with purified firefly luciferase, as compared with the corresponding negative controls (substrates in the absence of luciferase) (Figure 2d). As reported for (2), (3) was also found to have a 100-fold less intense emission signal when compared to (1). The reduction in light output could be due to the substrate (3)-luciferase light-emitting reaction having a lower quantum yield or because of differences in the rate of oxyluciferin production.
To lay the groundwork for biothiol-specific biosensing applications, the thioacrylate sulphide (IX) and the D-thioluciferin homodisulphide (3') (Figure 2e), prepared from an iodine oxidation, were also evaluated for the bioluminescent reaction.Neither produced a bioluminescent signal comparable to D-thioluciferin (3).
Importantly, for the purpose of thiol sensing, the luminescence output for (3) was 90-fold greater than that of its S-protected thioacrylate (IX), and 2.5-fold greater than that of the D-thioluciferin homodisulphide (3'), when treated with luciferase under physiological conditions (Figure 2d). It was demonstrated that neither pure luciferase nor pure D-thioluciferin thioacrylate (IX, control 1) emitted light. It was also demonstrated that, when a 0.1 µM thioacrylate solution was treated with luciferase in enzyme buffer, the luminescence output remained negligible. This reinforces that the thioacrylate (IX) of (3) is, indeed, not a substrate for luciferase-mediated bioluminescence, and, perhaps by extension, that all sulphides of (3) are inactive, as is the case with (1) and its 6'-O-alkyl analogues and the previously reported D-luciferin-6'-sulphides.15 Negative controls (Figure 2d) contained the homodisulphide (3') and (3), respectively. In both cases, in the absence of luciferase, a small degree of luminescence was detected (4% above background). The luminescence increased significantly when (3) and its disulphide were treated with luciferase to a final enzyme concentration of 10 nM. Notably, the homodisulphide (3') treated with luciferase emitted a degree of bioluminescence relative to the corresponding control. This effect can be ascribed to the reduction of the non-bioluminescent D-thioluciferin homodisulphide (3') to the bioluminescence-active free (3) by the reducing agent in the enzyme buffer, namely DTT. Kinetic data for the D-thioluciferin homodisulphide (3'), however, did not show an increase in bioluminescence over time, as one would expect if the free thiol were constantly being formed via disulphide reduction. Instead, the degree of bioluminescence was observed to decrease over time, and the rate of decrease in bioluminescence was comparable to that of (3). This could indicate that the reduction with DTT had occurred relatively quickly, generating a fixed amount of (3) which was not replenished via further disulphide reduction. As a result, the free thiol displayed five-fold greater luminescence than the disulphide, which is indeed a promising result for future redox-based sensing applications.
From the kinetic assays, it was also observed that the rate of decay in bioluminescence emission of D-thioluciferin (3), when treated with luciferase under standard conditions, was reduced compared to those of D-luciferin (1) and D-aminoluciferin (2). This was a particularly attractive discovery, since luciferins generally do not have a very stable bioluminescence output and therefore require constant re-supply or re-administration. These bioluminescent properties are a good starting point for D-thioluciferin (3)-based bioluminescence imaging, despite the lower bioluminescence output relative to D-luciferin (1).
It has been reported that size and hydrophobicity at the C-6 position influence the quantum yield of cyclic amino-luciferins.22 Other factors include pH and the microenvironment in the enzyme active site. Substitution of the 6'-oxygen in D-luciferin (1) with a nitrogen or sulphur resulted in a weakening of bioluminescent intensity, a phenomenon that is not yet fully understood but might involve bivalent metal ions, which are a cofactor in the enzymatic reactions of firefly bioluminescence.
Interestingly, the 6′-methylthio-luciferin reported by Miller et al.15 proved not to be a substrate for luciferase, whereas 6'-N-alkylated aminoluciferins were better in vivo substrates for bioluminescence experiments than (2) itself.23 The absorbance of the 6′-methylthio-luciferin is slightly red-shifted compared to (1), which is opposite to the trend observed with (3). The fluorescence of the S-methyl-thioluciferin is blue-shifted by ca. 40 nm, with a reduced fluorescence quantum yield. In addition to its bioluminescent emission, (3) was found to have a strong fluorescence emission, while its protected thioacrylate (IX) was only weakly fluorescent after excitation across a range of wavelengths (360-520 nm) (Figures S1-S5). Thus, the latter molecules provide further opportunities for imaging applications, the most obvious of which relate to sulphide deprotection and disulphide reduction. Since it was recently reported that 6'-sulphides could be potential inhibitors of the WT luciferase enzyme, the thioacrylate-protected D-thioluciferin (IX) was further evaluated as an inhibitor of luciferase, where it was shown to be strongly inhibitory (Figure S6). The sulphide's inhibition of luciferase could similarly be used to inform the design of D-thioluciferin (3)-based probes.
Kinetics
Using a plot of initial rates, the apparent Km of D-thioluciferin (3) was calculated, with the Km values of D-luciferin (1) and D-aminoluciferin (2) as references. The apparent Km was calculated as 0.09801 µM, which is on the same order as that previously calculated for (2) (0.39-0.69 µM) and related analogues (Figure S7), and consistent with the 0.16 µM reported by Pirrung et al.15,21 The Km was, surprisingly, much lower than that of the native substrate (1) (8.3 µM), despite the lower emission intensity at the same concentration (Table S8).21,22 The latter result, along with the fact that the luciferase light-emitting reaction with substrate (3) has a lower quantum yield compared to (1), may shed some light on the bioluminescence activity of (3).
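A minimal sketch of how an apparent Km can be extracted from initial-rate data via a nonlinear Michaelis-Menten fit is shown below. The substrate concentrations and rates are hypothetical placeholders, not the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial rate v as a function of substrate concentration s."""
    return vmax * s / (km + s)

# Hypothetical (concentration in uM, initial rate in RLU/s) pairs.
conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0])
rate = np.array([0.9, 3.2, 5.0, 8.3, 9.0, 9.8])

(vmax, km), _ = curve_fit(michaelis_menten, conc, rate, p0=(10.0, 0.1))
print(f"apparent Km ≈ {km:.3f} uM, Vmax ≈ {vmax:.2f} RLU/s")
```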
Conclusions
The syntheses of D-thioluciferin and its S-protected thioacrylate have now paved the way for the development of novel biothiol-relevant applications. The kinetics of D-thioluciferin release from the S-protected thioacrylate is expected to be more favourable than that from thiophenol and needs to be evaluated further under physiological conditions. The lower Km and red-shifted λmax relative to D-luciferin and D-aminoluciferin make D-thioluciferin a promising bioluminogenic candidate whose properties and applications should be further explored. Moreover, thioluciferin provides a unique handle that readily allows bioluminescence to be coupled to biologically relevant sulphur chemistry. By exploiting the difference in bioluminescent activity between thioluciferin and its oxidised forms, e.g., the disulphide, one can envisage several potential applications that should be investigated further. The S-protected thioacrylate can be utilized as a general thiol sensor. Furthermore, the established synthetic methodology for unsymmetrical disulfides would allow the synthesis of disulfide reductase substrates that release the bioluminescent D-thioluciferin molecule upon enzymatic cleavage. This may then lead to the development of new bioluminescent sensors based on the D-thioluciferin molecule.
Experimental Section
General. All reactions were carried out in oven-dried glassware under an inert nitrogen atmosphere, unless otherwise stated. Reagents were obtained from commercial sources (Sigma-Aldrich, Merck) and used as received unless otherwise stated. Solvents were evaporated under reduced pressure at 40 °C using a Buchi Rotavapor, unless otherwise stated. Aqueous solutions were prepared using distilled water. All reactions were monitored by TLC using aluminum-backed Merck silica-gel 60 F254 plates; compounds were visualised on TLC under a UV lamp and/or sprayed with a 2.5% solution of p-anisaldehyde in a mixture of sulfuric acid and ethanol (1:10 v/v), iodine vapour, or ceric ammonium sulphate solution, and then heated using a 1600 W heat gun. Normal-phase column chromatography was carried out using silica gel (Fluka Silica Gel 60, 40-63 microns), and compounds were eluted with the appropriate solvent mixtures. All compounds were dried under vacuum before yields were determined and spectroscopic analyses performed. Purity was determined by analytical chromatography using an Agilent HPLC 1260 equipped with an Agilent Infinity 1260 diode array detector (DAD) UV-Vis detector, with an absorption wavelength range of 210-640 nm. The compounds were eluted using a mixture of 10 mM NH4OAc/H2O and 10 mM NH4OAc/MeOH at a flow rate of 0.9 mL min−1 (10% NH4OAc/MeOH between 0 and 1 min, 10-95% NH4OAc/MeOH between 1 and 3 min, 95% NH4OAc/MeOH between 3 and 5 min).
Figure 2. a) Graph of luciferase luminescence at a final enzyme concentration of 10 nM, at varying D-thioluciferin concentrations 1 min post-enzyme addition (control 1 is the emission recorded for the enzyme solution in the absence of substrate (1), and control 2 is the recorded emission for substrate (2) in the absence of the enzyme). b) Burst-kinetics profile of purified 10 nM luciferase treated with 100 µM D-thioluciferin. c) Relative luminescence-emission intensity of the core luciferins (6-hydroxyl, 6-amino, and 6-thiol) at 0.1 µM substrate concentration and a final luciferase concentration of 10 nM. Control 3 is the recorded emission for substrate (3) in the absence of the enzyme. d) Luminescence output of 0.1 µM protected D-thioluciferin thioacrylate (IX) (sulphide), D-thioluciferin homodisulphide (3') (disulphide), and free D-thioluciferin (3) (thiol) at a final luciferase concentration of 10 nM (control readings were recorded for substrates in the absence of the luc enzyme). The Relative Light Units (RLUs) were determined in triplicate and are represented as the mean ± SEM. e) Thiol-sensitive thioacrylate-protected D-thioluciferin (IX) probe and redox reaction of D-thioluciferin (3).
| 3,782.8 | 2021-02-08T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Asymptotic Results of Some Conditional Nonparametric Functional Parameters in High-Dimensional Associated Data
In this paper, we propose to study the asymptotic properties of some conditional functional parameters, such as the distribution function, the density, and the hazard function, for an explanatory variable with values in a Hilbert space (infinite dimension) and a real-valued response variable, in a quasi-associated dependency framework. We consider the nonparametric estimation of the conditional distribution function by the kernel method in the presence of quasi-associated dependence.
Introduction
Mathematical and statistical analysis techniques have been of considerable relevance in a variety of scientific sectors in recent years, including engineering, economics, clinical medicine, and healthcare. In particular, such methods have been applied to engineering, economics, healthcare, and clinical medicine, where they have been shown to assist in such vital areas as comprehension, prediction, correlation, diagnosis, therapy, and data processing.
It is crucial to note that the study of conditional models, which falls within nonparametric functional data analysis, is one of the most significant approaches in statistical analysis. Such a study is carried out with the primary purpose of investigating and modeling the connection between a scalar response variable and a functional regressor. In addition, two essential asymptotic features of particular statistical parameter estimators are their consistency and asymptotic normality.
Functional data are the subject of this research. As defined in Ref. [1], functional data analysis (FDA) studies infinite-dimensional variables, including curves, sets, and pictures. The "Big Data" revolution has spurred its rapid expansion over the past 20 years, as may be seen by researching the topic's history (for an example, see [2]). In [3], density and mode estimation for normed-vector-space data and the problem of excessive dimensionality in functional data are discussed, and potential remedies are offered. Nonparametric models for regression estimation were investigated in [4].
The treatment of functional data today typically involves contemporary theory. For example, reference [5] presented consistency rates, uniformly over a subset of the explanatory variable, for a variety of conditional distribution functionals, such as the regression function, the conditional cumulative distribution, and the conditional density.
Uniform-in-bandwidth (UIB) consistency was extended to the ergodic scenario, and the rates of consistency for different functional nonparametric models were investigated in [6]. These models included the regression function, the conditional hazard function, the conditional distribution, and the conditional density.
In recent years there has been a surge of interest in the statistical analysis of functional data, which is used in econometrics, medicine, environmental science, and many other fields. In functional statistics, ref. [4] made the first attempt to estimate the conditional density function and its derivatives; these authors obtained almost-complete convergence rates in the i.i.d. case. Since this work was published, much more research has been done on estimating the conditional density and its derivatives, especially for computing the conditional mode. In point of fact, ref. [7] demonstrated that a kernel estimator of the conditional mode almost surely converges to the true value, taking into account α-mixing data.
The point that nullifies the derivative of the kernel density estimator was used by [8,9] to estimate the conditional mode. The outcomes of this approach were comparable, but more emphasis was placed on the estimator's asymptotic normality, which was established in the i.i.d. and mixing settings, respectively. Ref. [10] identified the exact leading terms of the quadratic error of the kernel density estimator.
We refer the reader to [11] for further information on the choice of the smoothing parameter in estimating the conditional density with a functional explanatory variable.
The concept of quasi-association refers to a variable that exhibits some degree of association with another variable; examples of research treating both positively and negatively dependent random variables are [12-14]. In Ref. [15], the authors were the first to offer the concept of quasi-association for the analysis of real-valued stochastic processes. This is a very striking illustration of the idea of weak dependence. It was used by [16] for real-valued random fields, and it provides a unified technique for studying families of positively dependent and negatively dependent random variables.
To our knowledge, the nonparametric estimation of quasi-associated random variables is addressed in only a vanishingly small number of published papers. The study conducted by [17] focuses on a limit theorem for quasi-associated Hilbertian random variables. The research conducted by [18] explores asymptotic results for an M-estimator of the regression under weak dependence; and in Ref. [19], the authors investigated quasi-associated processes and asymptotic results. Ref. [20] investigated the asymptotic normality of this final estimator as part of a study focused on the single-index structure of the conditional hazard function.
To address relative regression, ref. [21] explored nonparametric estimation for associated random variables; significant related results were obtained independently in [22,23]. The authors in [24] demonstrate the robust uniform consistency of partial derivatives of multivariate density functions under weak dependence, namely within compact subsets of R^d, and they determine the relevant rates of convergence in order to establish the asymptotic normality of these estimators.
In [25], the authors examine the application of the kernel nearest-neighbors (k-nn) technique in a single-index regression model. They specifically focus on cases where the explanatory variable takes values in a functional space, in the context of an association dependency condition. The primary outcome of this study is the determination of the asymptotic distribution of the single-index estimator using the k-nn method.
The study conducted in [26] examines the application of the k-nn approach within the single-index regression model. This particular investigation focuses on scenarios where the predictor is functional in nature and the response is scalar. The primary outcome of this study is the determination of almost-complete rates of convergence under the assumption of weak dependence.
The primary outcome of the study referenced in [27] is the establishment of the asymptotic properties, specifically the almost-complete convergence rates and the asymptotic normality, of nonparametric estimation techniques for the regression function in the context of the single functional index model (SFIM). These properties are derived under the assumption of quasi-association dependence.
It is important to keep in mind that, as with every previous asymptotic result in nonparametric functional statistics, our results are tied to the functional space underlying the model.
In this research, we establish, under general hypotheses, the almost-complete convergence, with rate, of the estimator built in the quasi-associated case, and we apply the two results on the conditional distribution function and the conditional density in [28] to estimate the conditional hazard function. We establish the asymptotic normality of the suitably normalized kernel estimator of the conditional hazard function, and we give the asymptotic variance explicitly.
In the subsequent sections of this work, we introduce our model in Section 2. In Section 3, the primary findings are presented. Confidence bands are the subject of Section 4. In Section 5, the behavior of our asymptotic normality findings on finite-sample data is analyzed and evaluated. The conclusion is articulated in Section 6. The proofs of the intermediate findings are provided in Appendix A.
Model and Estimator
To commence, we provide a precise definition of quasi-association for random variables with values in a separable Hilbert space.
Consider a separable Hilbert space (H, <·,·>) furnished with an orthonormal basis (e_k), k ≥ 1.
Let (R_n)_{n∈N} be a sequence of random variables with values in H. The sequence is said to be quasi-associated with respect to the basis (e_k) if, for any positive integer d, the d-dimensional sequence {(<R_i, e_{j1}>, ..., <R_i, e_{jd}>), i ∈ N} is quasi-associated.
In this analysis, we examine a set of n quasi-associated random variables, denoted W_i = (R_i, S_i), 1 ≤ i ≤ n. These random variables have the same distribution as the random variable W = (R, S), which takes values in H × R, where H is a separable real Hilbert space equipped with an inner product <·,·> that generates the norm. The semi-metric d is given by: for all r, r′ ∈ H, d(r, r′) = ||r − r′||.
For a fixed r in H, let N_r denote a fixed neighborhood of r, and let S be a compact subset of R. Using a sample of n dependent observations from W := (R, S), the conditional distribution function F^r(s) is estimated. We present the kernel-type estimator of F^r(s), defined as
$$ \hat{F}^r(s) = \frac{\sum_{i=1}^{n} K\big(h_K^{-1} d(r, R_i)\big)\, H\big(h_H^{-1}(s - S_i)\big)}{\sum_{i=1}^{n} K\big(h_K^{-1} d(r, R_i)\big)}, $$
where K denotes the kernel, H is a given distribution function, and the sequences of positive real numbers h_K = h_{K,n} (resp. h_H = h_{H,n}) converge to zero as n increases to infinity. We define the estimator of the conditional density f^r(s) by
$$ \hat{f}^r(s) = \frac{h_H^{-1}\sum_{i=1}^{n} K\big(h_K^{-1} d(r, R_i)\big)\, H'\big(h_H^{-1}(s - S_i)\big)}{\sum_{i=1}^{n} K\big(h_K^{-1} d(r, R_i)\big)}, $$
where H′ is the derivative of H. Finally, we obtain the conditional hazard function estimator
$$ \hat{Z}^r(s) = \frac{\hat{f}^r(s)}{1 - \hat{F}^r(s)}, \qquad \hat{F}^r(s) < 1. $$
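A minimal numerical sketch of these plug-in estimators is given below. The Euclidean norm on discretized curves stands in for the semi-metric d, and the quadratic kernel and Gaussian distribution function are illustrative choices, not the paper's prescribed ones:

```python
import numpy as np
from scipy.stats import norm

def K(u):
    """Asymmetric quadratic kernel supported on [0, 1]."""
    return 1.5 * (1.0 - u**2) * ((u >= 0) & (u <= 1))

def hazard_estimator(r, s, R, S, h_K, h_H):
    """Kernel estimators of F^r(s), f^r(s), and Z^r(s) = f / (1 - F).

    R: (n, p) array of discretized curves; S: (n,) responses.
    The semi-metric d is taken as the Euclidean norm between curves.
    """
    d = np.linalg.norm(R - r, axis=1)           # d(r, R_i)
    w = K(d / h_K)                               # curve weights
    u = (s - S) / h_H
    F_hat = np.sum(w * norm.cdf(u)) / np.sum(w)           # H = Gaussian CDF
    f_hat = np.sum(w * norm.pdf(u)) / (h_H * np.sum(w))   # H' = Gaussian pdf
    return F_hat, f_hat, f_hat / (1.0 - F_hat)
```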
Assumptions and Necessary Background Knowledge
Where there is no risk of confusion, we denote strictly positive generic constants by l and/or l′. The expression w signifies a fixed point in H, and N_w stands for a fixed neighborhood of w. We take into account the fact that the random couple {(R_i, S_i), i ∈ N} is a stationary process.
Let λ_k denote the covariance coefficient of the sequence. Let us denote the ball B(w, h) := {r ∈ H : d(r, w) < h}, that is, the ball with center w and radius h > 0.
For the purpose of establishing the almost-complete convergence of the estimator of F^r(s), and to establish the results of our research, we rely on the following assumptions.
(P1) There exists a function β(r, ·) such that: (P2) The conditional distribution function F^r(s) satisfies a Hölder condition, that is: Here, S is a fixed compact subset of R. (P3) The cumulative function H is differentiable, and its derivative H′ is a positive, bounded, Lipschitz-continuous function. (P4) The kernel K is a bounded, continuous Lipschitz function supported on [0, 1]; here, 𝟙_{[0,1]} denotes the indicator function of [0, 1]. (P5) The sequence of random pairs (R_i, S_i), i ∈ N, is quasi-associated with covariance coefficient λ_k, k ∈ N, satisfying the stated conditions.
(P6) The joint distribution functions are defined so that the stated bounds hold for every pair (i, j).
Brief Comment on the Conditions
Assumption (P1) expresses the concentration property of the explanatory variable on small balls. The function β(r, ·) is very important to any asymptotic study, especially for the variance term. Condition (P2) serves to regulate the smoothness of the functional space; it is important for accurately calculating the bias component of the convergence rates. Assumptions (P3) and (P4) similarly center on the cumulative function H and the associated kernels; thanks to these assumptions, the bias term drops out of the asymptotic normality result. Assumption (P5) represents a normative restriction on the quasi-associated dependence. Our model is asymptotically normal under quasi-association provided the joint distribution of the pair (R_i, R_j) satisfies assumption (P6); this enables us to demonstrate asymptotic normality. Assumption (P7) is required to rule out a bias term in the final asymptotic normality result, and it is well known as a classical assumption in functional estimation in spaces of finite or infinite dimension.
Almost-Complete Convergence of Ẑ^r(s)
Theorem 2. Under assumptions (P1)-(P7), we have: Proof of Theorem 2. The proof is based on the following decomposition,
and therefore the asymptotic results for the estimator Ẑ^r(s) can be readily deduced from those for F̂^r(s) and f̂^r(s).
Asymptotic Normality of the Conditional Hazard Function Estimate
Theorem 3. Under the assumptions, we obtain, for any r ∈ A: Proof of Theorem 3. The proof is based on the following decomposition.
Confidence Bands
An important aspect of statistical analysis is the establishment of confidence bands for estimates: they provide a range of values within which we can be confident that the true value lies, and calculating and interpreting them gives a better understanding of the uncertainty associated with our estimates. The objective of this section is to construct confidence intervals for the true value of Z^r(s) for a specified curve r. Nonparametric estimation relies on the asymptotic variance, which is determined by a number of unknown functions. In our situation, the quantities Z^r(s), F^r(s), C_1, and C_2 are not known in advance and must be approximated in practice. It is possible to derive confidence bands even when σ²_Z(r) is given only functionally: an estimate of σ²_Z(r) may be produced by plugging in Ẑ^r(s), F̂^r(s), Ĉ_1, and Ĉ_2 for Z^r(s), F^r(s), C_1, and C_2, respectively.
The constants C_1 and C_2 are estimated empirically. The asymptotic confidence band at level 1 − ζ for Z^r(s) is then given by
$$ \hat{Z}^r(s) \;\pm\; t_{1-\zeta/2}\, \sqrt{\frac{\hat{\sigma}^2_Z(r)}{n\, h_H\, \hat{\varphi}_r(h_K)}}, $$
where t_{1−ζ/2} denotes the 1 − ζ/2 quantile of the standard normal distribution.
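A minimal sketch of this plug-in band is given below; the normalization by √(n h_H φ̂_r(h_K)) follows the rate appearing in the asymptotic normality result, and the function and argument names are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def confidence_band(z_hat, var_hat, n, h_H, phi_hat, zeta=0.05):
    """Plug-in asymptotic band Z_hat ± t_{1-zeta/2} sqrt(var / (n h_H phi)).

    var_hat: plug-in estimate of sigma_Z^2(r); phi_hat: estimate of the
    small-ball probability phi_r(h_K), e.g. the empirical fraction of
    curves falling in the ball B(r, h_K).
    """
    t = norm.ppf(1.0 - zeta / 2.0)   # standard normal quantile
    half_width = t * np.sqrt(var_hat / (n * h_H * phi_hat))
    return z_hat - half_width, z_hat + half_width
```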
A Simulation Study
In this section, we examine how our asymptotic normality conclusions behave on finite-sample data. Our primary purpose is to demonstrate how simple the conditional hazard function is to construct and to study the effect of dependence on this asymptotic property.
For this purpose, we produce functional observations from the functional nonparametric model shown below, where ε_i ∼ N(0, 0.5). The linear process with quasi-associated variables is well known to satisfy requirement (P6); accordingly, the functional regressor with quasi-associated components shown below is constructed. By examining the error distribution (ε_i), we are able to infer the theoretical conditional distribution function of Z given W = w; this function is explicitly determined by shifting the distribution of (ε_i) by r(w). Therefore, the theoretical conditional hazard function may be easily determined.
To demonstrate this function's asymptotic normality, we fix one curve, w = W_0, and one point z = Z_0, from the generated data, then collect m independent n-samples of the same data and compute the standardized quantity, where σ̂ is the standard deviation estimate from the previous section. The collected sample is then tested for normality. With the bandwidth determined by the cross-validation approach, we utilized the quadratic kernel K(u) = (3/2)(1 − u²) 𝟙_{[0,1]}(u). The performance of our estimator demonstrates its favorable characteristics and reliable behavior in practical applications. Furthermore, the correlation of the data has a significant impact on the pace at which the asymptotic normality is reached; specifically, its magnitude diminishes in proportion to the values of m.
Table 1 summarizes the p-value from the Kolmogorov-Smirnov test for each value of m, which confirms an inverse relationship between the correlation of the data and the convergence rate of the asymptotic normality.
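The normality check described above can be sketched as follows; the replicate values, names, and noise level in the illustrative call are hypothetical, not the simulated data of the paper:

```python
import numpy as np
from scipy.stats import kstest

def normality_check(z_replicates, z_true, sigma_hat):
    """Standardize m replicates of the hazard estimate at (w0, z0) and
    test them against N(0, 1) with the Kolmogorov-Smirnov test.

    z_replicates: length-m array of Z_hat(z0 | w0) over independent
    n-samples; sigma_hat: estimated asymptotic standard deviation.
    """
    q = (np.asarray(z_replicates) - z_true) / sigma_hat
    stat, p_value = kstest(q, "norm")
    return stat, p_value

# Illustrative call, with Gaussian noise standing in for the replicates:
rng = np.random.default_rng(0)
print(normality_check(1.0 + 0.1 * rng.standard_normal(200), 1.0, 0.1))
```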
Conclusions and Perspectives
The current study examines the asymptotic properties of certain conditional functional parameters, specifically the distribution function, the density, and the hazard function. These parameters are analyzed in the context of an explanatory variable that takes values in a Hilbert space of infinite dimension and a real-valued response variable, in a quasi-associated dependency framework. The estimators' asymptotic characteristics, including almost-complete convergence and asymptotic normality, are derived under standard conditions that encompass the key components of the research, such as the functional dependence assumption and the nonparametric nature of the model. The computational aspect highlights the significance of this estimator in practical applications due to its efficiency in updating findings with each new piece of information. Furthermore, the current contribution also presents intriguing avenues for further exploration. It would be of interest to extend the asymptotic normality of the proposed estimators to incomplete data, such as missing, censored, or truncated data. Another potential trajectory for future research involves more intricate dependence structures, such as ergodic spatial dependence.
Proof of Lemma 1. We put: where Moreover, we can write: and In order to use the lemma developed by [29], we need to evaluate the variance term Var(∑_{i=1}^{n} Δ_i) as well as the covariance term Cov(∏_{i=1}^{u} Δ_{s_i}, ∏_{j=1}^{v} Δ_{t_j}) for all (s_1, ..., s_u) ∈ N^u and (t_1, ..., t_v) ∈ N^v. We explore the following cases with respect to the covariance term. If t_1 = s_u, utilizing the fact that we have: If t_1 > s_u, under (P5), the quasi-association yields: On the other hand, taking (P6) into account, we have: In addition, by taking a γ-power of (A7) and a (1 − γ)-power of (A8), we are able to derive an upper bound for the three terms as follows: for: Second, we bound the variance term Var(∑_{i=1}^{n} Δ_i) for all 1 ≤ i ≤ n. For the first term T_1, we have: As a result, considering (P2) and (P3) and integrating over the real component y gives us, for all j ≥ 1: Regarding the covariance term in (A10), the following decomposition will be utilized.
where (m_n) is a sequence of positive integers tending to infinity with n. Under assumptions (P1)-(P3) and (P6), we obtain, for i ≠ j: Since H and K are both bounded and Lipschitz kernels, we obtain: Then, by (A15) and (A16), we get: By choosing: we get: Finally, we obtain the result by combining the previous three results (A10), (A14), and (A17).
Therefore, the criteria of the lemma are satisfied by Δ_i, i = 1, ..., n. Thus, (A18) holds by (P7). Finally, with an appropriate selection of η, the Borel-Cantelli lemma makes it possible to conclude the proof of Lemma 1.
Proof of Corollary 1. We have: Therefore, since E(F̂_D(r)) = 1, we apply the result of Lemma 1 and show that: Proof of Lemma 2. We can write: Using the stationarity of the data, conditioning on the explanatory variable, and the usual change of variable, we deduce: Therefore, by (P2) we get: This inequality holds uniformly on S and, after substituting into (A22) and simplifying E(K_1), we get that: In conclusion, the proof of Lemma 2 follows from Hypothesis (P4) and Corollary 1.
Proof of Lemma 6. We denote: where Therefore, the outcome is: We employ Doob's classical technique [30]. Indeed, we select two sequences of natural numbers tending to infinity,
and we split S_n into blocks (a generic sketch of this big-/small-block decomposition is given after these proofs), where: Observe that, for k = [n/(p_n + q_n)] (where [·] stands for the integer part), we have kq_n/n → 0, kp_n/n → 1, q_n/p_n → 0, and p_n/n → 0 as n → ∞. Our asymptotic result is now founded on: and Proof of Equation (A26). Stationarity gives us: The fact that kq_n/n → 0 gives us: In another way, Cov(Γ_i(s, r), Γ_j(s, r)): We will now bound this last covariance.
where (m_n) is a sequence of positive integers tending to infinity as n → ∞.
In the term (I), we utilize (P1), (P3), and (P7) to demonstrate that, for i ≠ j: For (II), we use the Lipschitz property and the boundedness of H and K to demonstrate: Adding these inequalities together, we get: Thus, we obtain: From (A28)-(A31), we obtain: Using stationarity, consider the right-hand side of (A27).
Proof of Equation (A29)
Through a straightforward computation, we establish that: Therefore, by (P5), we obtain n h_H φ_r(h_K) (E[F̂_N(s, r)] − F(s, r)) → 0 as n → ∞. As E(F̂_D(r)) = 1, all that remains is to prove that n h_H φ_r(h_K) Var(F̂_D(r) − F̂_N(s, r)) → 0 as n → ∞. This is a direct result of: and Cov(F̂_D(r), F̂_N(s, r)) = O(1/(n φ_r(h_K))). The three proofs are all quite similar and very close to that of (A30); for brevity, we provide only the first. Indeed, for the first term: Then, it follows that: Let us now consider the asymptotic behavior of the sum in the second term of (A38). This necessitates the decomposition: where (m_n) is a sequence of positive integers tending to infinity as n → ∞.
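The big-/small-block decomposition invoked in the proof of Lemma 6 above can be written out generically as follows; this is a standard Bernstein-blocking sketch in the notation of that proof, not a reconstruction of the paper's exact display:

```latex
S_n = \underbrace{\sum_{j=1}^{k} \eta_j}_{\text{big blocks}}
    + \underbrace{\sum_{j=1}^{k} \xi_j}_{\text{small blocks}}
    + \zeta_k,
\qquad
\eta_j = \sum_{i=(j-1)(p_n+q_n)+1}^{(j-1)(p_n+q_n)+p_n} \Gamma_i,\quad
\xi_j = \sum_{i=(j-1)(p_n+q_n)+p_n+1}^{j(p_n+q_n)} \Gamma_i,\quad
\zeta_k = \sum_{i=k(p_n+q_n)+1}^{n} \Gamma_i .
```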
Figure 1. The Z_i curves, discretized on the same 100-point grid in [0, 1], for m = 1, 4, and 10. The scalar response Z_i is computed from the regression operator.
Figure 2. The QQ-plot of the obtained sample against a standard normal distribution, at the values m = 1, 4, and 10.
Table 1. The p-value given by the Kolmogorov-Smirnov test.
| 5,422.2 | 2023-10-14T00:00:00.000 | [
"Mathematics"
] |
Robust multiferroic in interfacial modulation synthesized wafer-scale one-unit-cell of chromium sulfide
Multiferroic materials offer a promising avenue for manipulating digital information by leveraging the cross-coupling between ferroelectric and ferromagnetic orders. Although ferroelectricity has been uncovered via ion displacement or interlayer sliding, the design and wafer-scale synthesis of one-unit-cell multiferroic materials have yet to be realized. Here we develop an interface-modulated strategy to grow 1-inch, one-unit-cell, non-layered chromium sulfide with unidirectional orientation on industry-compatible c-plane sapphire. The interfacial interaction between chromium sulfide and the substrate induces the intralayer sliding of self-intercalated chromium atoms and breaks the space-reversal symmetry. As a result, robust room-temperature ferroelectricity (retained for more than one month) emerges in one-unit-cell chromium sulfide with ultrahigh remanent polarization. Besides, long-range ferromagnetic order is discovered, with the Curie temperature approaching 200 K, almost two times higher than that of the bulk counterpart. In parallel, magnetoelectric coupling is certified, which makes 1-inch one-unit-cell chromium sulfide the largest and thinnest multiferroic.
fabrication. The authors also reveal that the strong interaction between Cr2S3 and the substrate induces the interlayer sliding of intercalated Cr atoms, which breaks the space-reversal symmetry, promotes the p-d orbital hybridization between S and Cr, and then results in the emergence of room-temperature ferroelectricity/multiferroicity. These results enrich the 2D multiferroic materials community and provide a platform for constructing multifunctional devices.
The reviewer thinks that this work presents a new method for wafer-scale non-layered single-crystal synthesis, room-temperature multiferroic exploration, and physical-mechanism interpretation. I recommend publishing this work in Nature Communications after addressing the following comments.
1. The authors think that the introduction of Cr changes the sapphire surface-terminated structure and increases the interfacial interaction between Cr2S3 and the substrate, which contributes to the domain-orientation control and single-crystal synthesis of Cr2S3. If the interfacial interaction is decoupled, can the domain orientation of Cr2S3 still be controlled?
2. Please give the full name of KI when it appears for the first time in the manuscript. In addition, the authors chose KI as the growth accelerant for one-unit-cell Cr2S3; what is the difference between KI and NaCl? As shown in previous literature, NaCl is commonly employed for synthesizing large-area 2D TMDC films.
3. The authors should list the growth substrates of CrmXn in Table 2, because the substrate and the interaction between CrmXn and the substrate should influence the magnetic measurements.
4. During the CVD process, what is the role of hydrogen? Please discuss this point in more detail.
5. The authors should give more details on the synthesis and magnetic exploration of chromium-based chalcogenides. Other relevant references should be added, e.g., Mater. Today 57, 66, 2022; Adv. Mater. 34, 2107512, 2022.
Reviewer #3 (Remarks to the Author): In this manuscript, the authors reported an interfacial modulated method to synthesize wafer-scale one-unit-cell Cr2S3 on c-plane sapphire. They proposed that the introduction of Cr changed the sapphire surface-terminated structure, increased the interfacial interaction between Cr2S3 and sapphire, and induced the formation of parallel steps on the sapphire surface at low temperature, which contributed to the domain-orientation control of Cr2S3. In parallel, the strong interaction between Cr2S3 and the substrate promoted the interlayer sliding of intercalated Cr atoms, which broke the space-reversal symmetry and resulted in the generation of room-temperature ferroelectricity/multiferroicity. I think that this manuscript presents a great breakthrough in the wafer-scale growth of 2D ferroelectric/multiferroic single crystals and offers a promising avenue for constructing low-power logic and nonvolatile memory devices. I recommend publishing this work in Nature Communications after minor revision.
1. The authors proposed that the sapphire surface was changed from an OH-terminated to an Al-terminated structure; how was this structure change confirmed?
2. For bulk and thick Cr2S3 nanosheets, triangular or hexagonal morphologies are frequently observed; however, in this manuscript, the obtained Cr2S3 domains are half-circular. The authors should provide some discussion.
3. In Figure 4d,f, the PFM phase and amplitude hysteresis loops are not symmetric about zero bias; the authors should offer some explanation.
4. 2D ferroelectricity has been discovered in some TMDCs (e.g., SnSe, In2Se3, and CuInP2S6); what is the superiority of Cr2S3?
5. The interface modulation method in the manuscript is very interesting. Can the authors discuss the differences between this method and traditional methods, such as https://doi.org/10.1063/1.3633103 and https://doi.org/10.1364/OL.39.005184?
Our response:
We are very grateful for the reviewer's positive evaluation of the significance of our manuscript. We also appreciate the reviewer's kind suggestions and constructive comments. The issues raised by the reviewer have been considered very carefully and are addressed point-by-point as follows.
However, before I suggest the publication of this manuscript, one major issue needs to be clarified, which is the atomic structure of the claimed Cr2S3. I am not questioning the experimental results but the first-principles ones: although the authors have shown the corresponding figures in Fig. 4, they cannot convince me at this moment. It should be clarified how the calculated structure relates to the experimental results and how the polarization reverses from one direction to the opposite one.
Especially, the initial state in Fig. 4i shows a different energy from the final states, which puzzles me as well. I feel that the atomic structure which provides the nice two-dimensional ferroelectricity is not correctly estimated.
Our response:
We are very thankful for the reviewer's constructive comment. We agree with the reviewer that the DFT calculations are crucial for expounding the origin of ferroelectricity in one-unit-cell Cr2S3. We have optimized the atomic structures to further match the experimental results and uncovered the polarization-conversion mechanism. The corresponding structure model is constructed by superimposing the atomic arrangements on the cross-sectional STEM image. As shown in Figure R1a (or Fig. 4k), the blue, yellow, and orange spheres represent the Cr and S atoms in the CrS2 layers and the self-intercalated Cr atoms, respectively. In addition, the atomic ratio of Cr and S in the theoretical model and the thickness of one unit cell are assessed to be 11:18 and 1.73 nm, respectively, consistent with the experimental results (2:3 and 1.80 nm). The theoretical spacing of the (110) plane is set to 3.04 Å, matching well with the experimental value of 3.00 Å.
The polarization reversal from one direction to the opposite one can be explained by the intercalation-driven sliding mechanism. As revealed in Figure R2a (or Supplementary Fig. 27a), the ground state of Cr2S3 is constructed by AAA stacking of CrS2, and its central inversion symmetry should not result in spontaneous polarization. However, the strong interfacial interaction between Cr2S3 and the sapphire surface induces an unusual sliding of the self-intercalated Cr atoms, and a new ABA stacking order is built. The intercalated Cr atom in the upper interlayer forms a distorted trigonal prismatic coordination with S atoms, while the intercalated Cr atom in the lower interlayer forms a distorted trigonal antiprismatic coordination with S atoms, as displayed in Figure R1b (or Fig. 4l). During the sliding of the central CrS2 layer, the coordination environment of the self-intercalated Cr atoms changes accordingly. The final state is obtained when the upper self-intercalated Cr atoms form a distorted trigonal antiprismatic coordination with S atoms and the lower self-intercalated Cr atoms form a distorted trigonal prismatic coordination with S atoms. We have optimized the atomic structures, and the same energies are observed for the initial and final states, as shown in Figure R1b (or Fig. 4l). The plane-averaged charge densities along the z direction are plotted in Figure R2b (or Supplementary Fig. 27b) to clarify the charge distributions of the two polarization states. For the initial state, the charge near the lower intercalated Cr atom is greater than that near the upper intercalated Cr atom, whereas the final state is the opposite. The net charge between the upper and lower intercalated Cr atoms results in interfacial charge transfer and induces the spontaneous polarization. A large barrier of 1.4 eV/cell is obtained, owing to the breaking and formation of covalent bonds between the intercalated Cr atoms and S atoms during the sliding process. In addition, the remanent polarization of one-unit-cell-thick Cr2S3 is calculated to be 0.10 μC/cm², consistent with the experimental result (Figure R2c or Supplementary Fig. 27c).
Figure R1 (or Fig. 4k,l). a, Atomic structure of one-unit-cell Cr2S3 with intralayer sliding of the self-intercalated Cr atoms. The blue, yellow, and orange spheres represent the Cr and S atoms in the CrS2 layers and the self-intercalated Cr atoms, respectively. b, Energy evolution between the two opposite polarization states of one-unit-cell Cr2S3. The energy decrease from the centrosymmetric to the non-centrosymmetric structure indicates a continuous and spontaneous phase transition between these two phases. We have added some discussion of the polarization-reversal mechanism in pages 10 and 11: "…Compared with the pristine AAA stacking of Cr2S3 (Supplementary Fig. 27a), the strong interfacial interaction between Cr2S3 and the sapphire surface induces the sliding of the self-intercalated Cr atoms, and a new ABA stacking order is built. The intercalated Cr atom in the upper interlayer forms a distorted trigonal prismatic coordination with S atoms, while the intercalated Cr atom in the lower interlayer forms a distorted trigonal antiprismatic coordination with S atoms (Fig. 4l). During the sliding of the central CrS2 layer, the coordination environment of the self-intercalated Cr atoms changes accordingly. The final state is obtained when the upper self-intercalated Cr atoms form a distorted trigonal antiprismatic coordination with S atoms and the lower self-intercalated Cr atoms form a distorted trigonal prismatic coordination with S atoms. The plane-averaged charge densities along the z direction are plotted in Supplementary Fig.
27b to clarify the charge distributions of the two polarization states. For the initial state, the charge near the lower intercalated Cr atom is greater than that near the upper intercalated Cr atom, whereas the final state is the opposite. The net charge between the upper and lower intercalated Cr atoms results in interfacial charge transfer and induces the spontaneous polarization. The energy difference between the polar and non-polar structures is calculated to be 1.4 eV/cell (comparable to that of the Bi6O9 film49), implying the relatively high stability of the polar phase. In addition, the remanent polarization of one-unit-cell-thick Cr2S3 is calculated to be 0.10 μC/cm², consistent with the experimental result (Supplementary Fig. 27c)…."
Our response:
We are very thankful for the reviewer's constructive comment and kind suggestion. There are two methods for automatically selecting k-points provided by VASP: Monkhorst-Pack and Gamma-centered Monkhorst-Pack grids.
During the DFT calculations, a 3 × 3 × 1 Monkhorst-Pack k-point mesh was applied. Nevertheless, in view of the odd number of k-points along each dimension, the Gamma point is included in the Monkhorst-Pack mesh, which is then equivalent to the Gamma-centered Monkhorst-Pack grid. Following the reviewer's kind suggestion, we have revised the related description in page 16: "…The Brillouin zone integration was performed using a 7 × 7 × 1 Monkhorst-Pack k-point mesh for one-unit-cell of Cr2S3…."
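For concreteness, a Monkhorst-Pack mesh of this kind can be encoded in a VASP KPOINTS file as sketched below (written via Python for illustration; the comment line and zero shift are assumptions, not the authors' actual input):

```python
# Sketch of a VASP KPOINTS file for a 7 x 7 x 1 Monkhorst-Pack mesh.
# With odd subdivisions this mesh contains the Gamma point, as noted
# in the response above.
kpoints = """7x7x1 mesh for a one-unit-cell Cr2S3 slab
0
Monkhorst-Pack
7 7 1
0 0 0
"""
with open("KPOINTS", "w") as fh:
    fh.write(kpoints)
```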
The reviewer thinks that this work presents a new method for wafer-scale non-layered single-crystal synthesis, room-temperature multiferroic exploration, and physical-mechanism interpretation. I recommend publishing this work in Nature Communications after addressing the following comments.
Our response: We are very grateful for the reviewer's positive evaluation of the significance of our manuscript. We also appreciate the reviewer's kind suggestions and constructive comments. The issues raised by the reviewer have been considered very carefully and are addressed point-by-point as follows.
The authors think that the introduction of Cr changes the sapphire surface-terminated structure and increases the interfacial interaction between Cr2S3 and the substrate, which contributes to the domain-orientation control and single-crystal synthesis of Cr2S3. If the interfacial interaction is decoupled, can the domain orientation of Cr2S3 still be controlled?
Our response: We are very thankful for the reviewer's constructive comment. To remove or reduce the interfacial interaction between Cr2S3 and sapphire, Cr2S3/WS2 vertical heterostructures were synthesized on c-plane sapphire using a two-step CVD method, so that the interfacial interaction is decoupled by the intercalated monolayer WS2. As a result, Cr2S3 nanosheets with random orientations are observed on monolayer WS2, as shown in Figure R3 (or Supplementary Fig. 9), different from the unidirectionally aligned Cr2S3 on c-plane sapphire, indicating that the domain orientation is controlled by the interfacial interaction between Cr2S3 and the substrate. We have added some discussion in page 4: "…To further confirm that the strong interfacial interaction determines the domain orientation of Cr2S3, Cr2S3/WS2 vertical heterostructures are synthesized on c-plane sapphire and the obtained Cr2S3 nanosheets possess random orientations (Supplementary Fig. 9)…."
Figure R3 (or Supplementary Fig. 9). OM image of as-grown Cr2S3/WS2 vertical heterostructures on c-plane sapphire, showing the random orientations of the Cr2S3 nanosheets.
Please give the full name of KI when it appears for the first time in the manuscript. In addition, the authors chose KI as the growth accelerant for one-unit-cell Cr2S3; what is the difference between KI and NaCl? As shown in previous literature, NaCl is commonly employed for synthesizing large-area 2D TMDC films.
Our response: We are very thankful for the reviewer's constructive comment and kind suggestion. We have added the full name of KI in page 14. We have also synthesized unidirectionally aligned Cr2S3 nanosheets on c-plane sapphire using NaCl as the growth accelerant, with the results shown in Figure R4. Nevertheless, the obtained Cr2S3 nanosheets are seriously etched by Cr atoms due to the fast evaporation rate of Cr with the assistance of NaCl. Therefore, NaCl is not suitable for synthesizing high-quality, atomically thin Cr2S3.
The authors should list the growth substrates of CrmXn in Table 2, because the substrate and the interaction between CrmXn and the substrate should influence the magnetic measurements.
Our response: We are very thankful for the reviewer's kind suggestion. We have added the growth substrates of CrmXn in Table 2.
During the CVD process, what is the role of hydrogen?
Please discuss more about this point.
Our response:
We are very thankful for the reviewer's constructive comment. During the CVD growth process, the introduction of hydrogen reduces oxygen-containing impurities, decreases the nucleation density, and increases the domain size of Cr2S3. As a comparison, small and thick Cr2S3 nanosheets are synthesized on c-plane sapphire without hydrogen assistance, as shown in Figure R5.
Our response:
We are very thankful for the reviewer's kind suggestion. We have added detailed descriptions of the synthesis and magnetic exploration of CrmXn in pages 14 and 15: "…Before heating, 500 standard cubic centimeters per minute (sccm) of argon (Ar) was purged into the chamber for 10 minutes to remove the residual air and humidity. Subsequently, the first and second zones were heated to 170 and 980 ℃, respectively, with 110 sccm Ar and 10 sccm hydrogen (H2) as the carrier gases. The growth time was set to 25 minutes. After completing the CVD growth process, the furnace cover was opened and the system cooled down to room temperature…." and in page 15: "…Low-temperature quantum transport measurements of Cr2S3 were conducted in a 9 T Physical Property Measurement System (PPMS, Quantum Design, DynaCool) by constructing a four-terminal Hall-bar device. The Hall resistances were measured with a perpendicular magnetic field of up to 9 T, and the testing temperature range was set to 2 to 250 K with a current of 10 μA…."
We have added new references in page 2 (Refs. 29, 30).
In this manuscript, the authors reported an interfacial modulated method to synthesize wafer-scale one-unit-cell Cr2S3 on c-plane sapphire. They proposed that the introduction of Cr changed the sapphire surface-terminated structure, increased the interfacial interaction between Cr2S3 and sapphire, and induced the formation of parallel steps on the sapphire surface at low temperature, which contributed to the domain-orientation control of Cr2S3. In parallel, the strong interaction between Cr2S3 and the substrate promoted the interlayer sliding of intercalated Cr atoms, which broke the space-reversal symmetry and resulted in the generation of room-temperature ferroelectricity/multiferroicity. I think that this manuscript presents a great breakthrough in the wafer-scale growth of 2D ferroelectric/multiferroic single crystals and offers a promising avenue for constructing low-power logic and nonvolatile memory devices. I recommend publishing this work in Nature Communications after minor revision.
Our response: We are very grateful for the reviewer's positive evaluation of the significance of our manuscript. We also appreciate the reviewer's kind suggestions and constructive comments. The issues raised by the reviewer have been considered very carefully and are addressed point-by-point as follows.
The authors proposed that the sapphire surface was changed from an OH-terminated to an Al-terminated structure; how was this structure change confirmed?
Our response: We are very thankful for the reviewer's constructive comment. The structure change of the sapphire surface is confirmed by multiscale characterizations. For example, the surface-terminating Al atoms are directly observed in the atomic-resolution cross-sectional STEM image in Supplementary Fig. 12. Two XPS characteristic peaks corresponding to Al-OH (531.5 eV) and Al-O-Al (530.5 eV) are detected on the sapphire surface after annealing without the presence of Cr powder, which is different from the sapphire surface after annealing with the presence of Cr powder, where only one characteristic peak (530.5 eV) is observed (Fig. 2f). These results indicate that the sapphire surface-terminated structure is changed by Cr atoms. Besides, the contact-angle discrepancy between sapphire surfaces annealed with and without the presence of Cr powder reconfirms the structure change of the sapphire surface (Fig. 2d,e).
Our response:
We are very thankful for the reviewer's constructive comment. We agree with the reviewer that triangular and hexagonal Cr2S3 nanosheets are frequently observed on mica substrates (Adv. Mater. 32, 1905896 (2020)). However, in view of the different substrate and growth mechanism, half-circular Cr2S3 nanosheets are obtained on c-plane sapphire. The evolution of atomically thin Cr2S3 on c-plane sapphire obeys the step-edge-guided mechanism, and the long domain edge of Cr2S3 is aligned with the step-edge direction of the sapphire. Therefore, the half-circular morphology of the Cr2S3 nanosheets is attributed to the high energy barrier for growth passing over the step edge. Similar phenomena are also demonstrated in the CVD synthesis of h-BN (Nature 570, 91-95 (2019)) and WSe2 (ACS Nano 9, 8368-8375 (2015)). We have added some discussion in page 4: "…The half-circular Cr2S3 nanosheets are observed on c-plane sapphire possibly due to the high energy barrier of passing over the step edge of sapphire…."
Our response: We are very thankful for the reviewer's constructive comment. The interfacial charge and internal electric field induced by the distinct electrodes (PFM tip and Au) should result in the asymmetry of the PFM phase and amplitude hysteresis loops. A similar phenomenon is also observed in other 2D ferroelectric materials, such as In2Se3 (Nano Lett. 23, 3098-3105 (2023)). We have added some discussion in page 9: "…Notably, the interfacial charge and internal electric field induced by the distinct electrodes (PFM tip and Au) result in the asymmetry of the PFM phase and amplitude hysteresis loops…."
2D ferroelectricity has been discovered in some TMDCs (e.g., SnSe, In2Se3, and CuInP2S6); what is the superiority of Cr2S3?
Our response: We are very thankful for the reviewer's constructive comment. We agree with the reviewer that 2D ferroelectricity has been discovered in some TMDCs (e.g., SnSe, In2Se3, and CuInP2S6). However, most of these studies have concentrated on layered van der Waals ferroelectric materials, which suffer from inferior stability and limited species diversity. Therefore, the emergence of room-temperature ferroelectricity in non-layered Cr2S3 expands the scope of ferroelectrics. In addition, Cr2S3 possesses the highest remanent polarization (32 μC/cm²) among TMDCs, which provides a promising avenue for constructing low-power nonvolatile memory devices. Besides, robust magnetoelectric coupling is uncovered, which makes 1-inch one-unit-cell Cr2S3 the largest and thinnest multiferroic.
Our response:
We are very thankful for the reviewer's constructive comment. The interfacial modulation method has been proposed to improve the quality of HgCdTe and enhance infrared detector performance (Appl. Phys. Lett. 99, 091101 (2011); Opt. Lett. 39, 5184-5187 (2014)). However, the traditional interfacial modulation method mainly focuses on the target material (e.g., HgCdTe); in this manuscript, the sapphire surface-terminated structure is changed by Cr atoms, which enhances the interfacial interaction between Cr2S3 and the sapphire substrate, and then determines the unidirectional growth of Cr2S3 and the intralayer sliding of the self-intercalated Cr atoms. We have added some discussion in page 5: "…The interfacial modulation method has been proposed to improve the quality of target materials (e.g., HgCdTe)38,39, which provides a new direction for understanding the growth mechanism of well-aligned Cr2S3 on c-plane sapphire…."
We have added these new references in page 5 (Refs. 38, 39).
REVIEWER COMMENTS
Reviewer #1 (Remarks to the Author): The manuscript has been quite improved and I am fine with the updated theoretical results.
However, a significant point still puzzles me, which is the amplitude of the polarization. As reported, the experimental value is at the level of 1 μC/cm², which agrees with the theoretical calculations. This value becomes 32 μC/cm² in the 45 nm sample, which surprises me a lot since it cannot be explained by the calculations.
In the calculation, a periodic model is used, implying the absence of thickness and surface effects. I believe it nicely shows what happens in the 2 nm sample.
If the authors would like to emphasize the large polarization in thicker samples, I think it is necessary to tell readers what happens when the thickness of samples increases.
Reviewer #2 (Remarks to the Author): The authors have addressed all the reviewers' concerns, and this manuscript can be published in Nature Communications. In particular, the authors have provided additional experimental and theoretical results to further clarify the interfacial modulated growth mechanism and the origin of the ferroelectric polarization in atomically thin Cr2S3. Therefore, I recommend publishing this work without further revision.
Reviewer #3 (Remarks to the Author): What are the noteworthy results? Answer: The authors provide an interface-modulated strategy to grow 1-inch one-unit-cell non-layered chromium sulfide with unidirectional orientation on industry-compatible c-plane sapphire.
Will the work be of significance to the field and related fields? How does it compare to the established literature? If the work is not original, please provide relevant references.
Answer: Yes, this work is of significance to the field and related fields. Compared to the established literature, this work has realized the design and wafer-scale synthesis of one-unit-cell multiferroic materials. Does the work support the conclusions and claims, or is additional evidence needed? Answer: The work supports the conclusions and claims and does not need additional evidence.
Are there any flaws in the data analysis, interpretation, and conclusions? Do these prohibit publication or require revision? Answer: The data analysis, interpretation, and conclusions are consistent with the experimental results. I agree to publish the paper in Nature Communications.
Is the methodology sound? Does the work meet the expected standards in your field? Answer: The methodology is sound, and the work meets the expected standards.
Is there enough detail provided in the methods for the work to be reproduced?
Answer: It provides enough detail in the methods for the work to be reproduced.
Reviewer #1 (Remarks to the Author):
The manuscript has been quite improved, and I am fine with the updated theoretical results. However, a significant point still puzzles me, which is the amplitude of the polarization. As reported, the experimental value is at the level of 1 μC/cm², which agrees with the theoretical calculations. This value becomes 32 μC/cm² in the 45 nm sample, which surprises me a lot since it cannot be explained by the calculations. In the calculation, a periodic model is used, implying the absence of thickness and surface effects. I believe it nicely shows what happens in the 2 nm sample. If the authors would like to emphasize the large polarization in thicker samples, I think it is necessary to tell readers what happens when the thickness of the samples increases.
Our response:
We are very glad to know that the supplementary theoretical results are helpful for understanding the origin of ferroelectricity in Cr2S3. We are also thankful for the reviewer's new comment regarding the remanent polarization of thick Cr2S3.
The strong interfacial interaction between Cr2S3 and sapphire induces the sliding of self-intercalated Cr atoms and of the CrS2 layers between the intercalated Cr atoms, which breaks the space-inversion symmetry and results in the emergence of ferroelectricity. As the thickness increases, the interfacial interaction and the intralayer sliding of self-intercalated Cr atoms are weakened accordingly, as confirmed by the cross-sectional STEM images in Figure 4i-j. At the same time, the influence of the depolarization field on the ferroelectricity of thick Cr2S3 is also reduced. In addition, the remanent polarization of thick Cr2S3 is the summation over the different unit cells along the z direction; therefore, a large remanent polarization is obtained in the 45 nm sample compared with one-unit-cell Cr2S3. Notably, similar phenomena have been demonstrated in other ferroelectric materials, such as PbZr0.2Ti0.8O3 films (Nat. Commun. 8, 15549 (2017)) and 3R-MoS2 nanosheets (Nat. Commun. 13, 7696 (2022)).
Following the reviewer's kind suggestion, we have added some discussion on page 10: "…The cumulative effect of the sliding of self-intercalated Cr atoms and CrS2 layers, as well as the weakened depolarization field, should result in the enhancement of remanent polarization in thick Cr2S3 42,43…".
We have added the new references on page 10 (Refs. 42, 43).
Reviewer #2 (Remarks to the Author): The authors have addressed all the reviewers' concerns and this manuscript can be published in Nature Communications. In particular, the authors have provided additional experimental and theoretical results to further clarify the interfacially modulated growth mechanism and the origin of ferroelectric polarization in atomically thin Cr2S3. Therefore, I recommend publishing this work without further revision.
Our response: We are very thankful for the reviewer's recommendation. Again, we would like to thank the reviewer for the thoughtful comments and suggestions, which we think have helped to greatly improve the readability and clarity of our manuscript.

Reviewer #3 (Remarks to the Author): Is there enough detail provided in the methods for the work to be reproduced? Answer: It provides enough detail in the methods for the work to be reproduced.
Our response:
We are very thankful for the reviewer's recommendation. Again, we would like to thank the reviewer for the thoughtful comments and suggestions, which we think have helped to greatly improve the readability and clarity of our manuscript.
I feel sorry that the current response does not fully convince me. Actually, it is unlikely to have such a large amplitude of out-of-plane polarization when there is negligible ionic polar displacement along this direction. Usually, interfacial interaction may have an effect, but with small amplitude. Besides, both the up and down polarizations are significantly enhanced, which puzzles me as well. Therefore, I still feel this part is not solid enough.
To be honest, if the interaction with the substrate can make such a significant modification, that would be big news in this field. I kindly suggest the authors confirm these results carefully, since such a big number will definitely attract plenty of attention from people working on ferroelectricity. I am OK with the rest of the results, which are enough for a nice publication in Nature Communications.
If the authors disagree about making further changes, I would like to let the Editor make the final decision.
Reviewer #1 (Remarks to the Author): I feel sorry that the current response does not fully convince me. Actually, it is unlikely to have such a large amplitude of out-of-plane polarization when there is negligible ionic polar displacement along this direction. Usually, interfacial interaction may have an effect, but with small amplitude. Besides, both the up and down polarizations are significantly enhanced, which puzzles me as well. Therefore, I still feel this part is not solid enough. To be honest, if the interaction with the substrate can make such a significant modification, that would be big news in this field. I kindly suggest the authors confirm these results carefully, since such a big number will definitely attract plenty of attention from people working on ferroelectricity. I am OK with the rest of the results, which are enough for a nice publication in Nature Communications. If the authors disagree about making further changes, I would like to let the Editor make the final decision.
Our response:
We are very thankful for the reviewer's constructive comment. We agree with the reviewer that the interfacial interaction is not the only factor that determines such a large amplitude of out-of-plane polarization in Cr2S3. The ionic polar displacement and interlayer charge transfer possibly influence the amplitude, and related theoretical explorations are expected in the future to further clarify the origin of the ferroelectricity. Following the reviewer's kind suggestion, we have replaced the polarization hysteresis loop of the 45.0 nm-thick Cr2S3 in Figure 4f with that of a thinner, 13.0 nm sample; the corresponding remanent polarization is calculated to be 4.30 μC/cm2. Even so, this value is still higher than those of other 2D ferroelectric materials, as shown in Table 1. We have also revised the relevant discussion on page 10: "…A remanent polarization as large as 4.30 μC/cm2 is observed for the Cr2S3 nanosheet with a thickness of 13.0 nm (Fig. 4f), which is higher than that of other 2D ferroelectric materials (Table 1). Besides the interfacial interaction, the ionic polar displacement and interlayer charge transfer possibly contribute to the enhanced polarization of thick Cr2S3, and related theoretical explorations are expected in the future. Furthermore, the cumulative effect of the sliding of self-intercalated Cr atoms and CrS2 layers, as well as the weakened depolarization field, should also lead to the polarization elevation, as has been demonstrated in other ferroelectric materials 42,43. In addition, the macroscopic ferroelectric hysteresis loop measurement of bulk Cr2S3 was performed, with the result shown in Supplementary Fig. 25…".
Thanks again for the reviewer's comment regarding the remanent polarization of Cr2S3, which is very helpful for understanding the origin of the ferroelectricity.
Figure R2 (or Supplementary Fig. 27). Polarization reversal mechanism of Cr2S3. a, Atomic structure of one-unit-cell Cr2S3 without intralayer sliding of self-intercalated Cr atoms. b, Plane-averaged charge densities along the z direction. c, DFT-calculated remanent polarization of one-unit-cell-thick Cr2S3.
Figure R4. OM images of as-grown Cr2S3 nanosheets on c-plane sapphire using NaCl as the growth accelerant. The obtained Cr2S3 nanosheets are seriously etched by Cr atoms.
Figure R5. OM image of as-grown Cr2S3 nanosheets synthesized on c-plane sapphire without hydrogen assistance, showing the small domain size and high thickness.
pH-sensitive Self-associations of the N-terminal Domain of NBCe1-A Suggest a Compact Conformation under Acidic Intracellular Conditions
NBCe1-A is an integral membrane protein that cotransports Na+ and HCO3- ions across the basolateral membrane of the proximal tubule. It is essential for maintaining a homeostatic balance of cellular and blood pH. In X-ray diffraction studies, we reported that the cytoplasmic, N-terminal domain of NBCe1-A (NtNBCe1-A) is a dimer. Here, biophysical measurements show that, in solution, the dimer is in a concentration-dependent dynamic equilibrium among three additional states, characterized by their hydrodynamic properties, molar masses, emission spectra, binding properties, and stabilities as a function of pH. Under physiological conditions, dimers are in equilibrium with monomers, which are pronounced at low concentration, and with clusters of molecular masses up to 3-5 times that of a dimer, which are pronounced at high concentration. The equilibrium can be shifted so that individual dimers predominate in a taut conformation by lowering the pH. Conversely, dimers begin to relax and dissociate into an increasing population of monomers as the pH is elevated. A mechanistic diagram for the interconversion of these states is given. The self-associations are further supported by surface plasmon resonance (SPR-Biacore) techniques, which illustrate that NtNBCe1-A molecules transiently bind one another. Bicarbonate and the bicarbonate analog bisulfite appear to enhance dimerization and induce a small amount of tetramers. A model is proposed in which the Nt responds to pH or bicarbonate fluctuations inside the cell and plays a role in the self-association of entire NBCe1-A molecules in the membrane.
INTRODUCTION
Bicarbonate, the anion of baking soda (sodium bicarbonate), plays an important metabolic role in acid-base balance. Solute carrier family 4 (SLC4) transporters are integral membrane proteins that transport HCO3- ions across epithelial cells throughout the body [1-6]. The SLC4 family consists of five functionally distinct groups [7,8]. Each group contains many splice variants that differ in their cytoplasmic domains [9]. The sodium bicarbonate cotransporter NBCe1-A is one major variant of the electrogenic group, which, without ATP, transports Na+ and HCO3- ions out of the cell at a ratio that yields a net negative charge across the membrane. Where NBCe1-A is located at the basolateral membrane of the proximal tubule, it reabsorbs 80% of the filtered HCO3- from the lumen to the blood, thereby playing a major role in regulating blood pH [8]. Similar to other human SLC4s, NBCe1-A contains a large (~400 aa) cytoplasmic, N-terminal domain (NtNBCe1-A), and little is known about its role in transport or regulation [9-11]. However, defects in NtNBCe1-A result in severe autosomal recessive disorders that are associated with acidic blood pH, vision loss, dental abnormalities, and mental retardation [12-18].
The function of the cytoplasmic domains of the entire human SLC4 family is under investigation [11,19,20]. Ae1, a Cl-/HCO3- exchanger also known as Band 3, is another SLC4 family member and plays a structural role in the integrity of the red blood cell (RBC) membrane. Early work on the cytoplasmic, N-terminal domain of Ae1 (NtAe1) describes NtAe1 as an anchoring site for the membrane skeleton of the RBC and several peripheral proteins [21,22]. These binding partners include ankyrin, protein 4.1, hemoglobin, and other cytoplasmic proteins. NtAe1 has also been described to exist in equilibrium among three conformations, referred to as a "conformational equilibrium", that are reversible and pH-dependent [23]. These conformations are characterized by an alkaline-induced increase of 11 Å in Stokes radius, a two-fold increase in intrinsic fluorescence that displays a biphasic or sigmoidal curve as a function of pH, an increase in protein segmental mobility, and a loss of ankyrin affinity, without any major change in protein secondary structure [19,24]. A hypothetical model, based on the NtAe1 crystal structure, was suggested to explain the radial increase at elevated pH, in which a domain movement in each monomer expands the length of the dimer [25]. Alternative studies on the hydrodynamic properties of full-length Ae1 purified from RBCs suggest that Ae1 is in a pH-dependent equilibrium among monomer, dimer, and tetrameric states [26]. NtAe1 is responsible for driving these self-associations of full-length Ae1; the membrane-spanning region by itself does not demonstrate a pH dependence of association [27].
Protein-binding-partner and biophysical studies similar to those above for NtAe1 have yet to be performed for NtNBCe1-A, although mutational analysis has provided indirect evidence that a substrate tunnel exists within NtNBCe1-A [11]. Biochemical studies have been hampered by the limited stability of NtNBCe1-A in solution. We previously introduced specific steps, such as a streptomycin precipitation in low-salt buffers, which eliminated fractions of low solubility early in the purification to reduce precipitation [9]. The studies here retrospectively illuminate the conditions necessary to keep NtNBCe1-A completely monodispersed and soluble for an indefinite period of time. For the first time, in vitro functional assays for NtNBCe1-A are presented, which should better define the role of the Nt among all SLC4 family members. This leads to a hypothetical model of an SLC4 cotransporter placed in the membrane, with bicarbonate ions modulating the self-association of Nt molecules.
METHODS AND MATERIALS

Purification and Protein Concentration Measurements of NtNBCe1-A
The purification of NtNBCe1-A (residues 1 to 362) followed the protocol detailed in Gill et al. [9], with the following adjustments: i) the wash buffer for the nickel-affinity step was adjusted to pH 7.0 instead of pH 8.0; that is, the wash buffer contained 50mM HEPES pH 7.0, 200mM NaCl, 20mM imidazole pH 7.0, 0.2%(v/v) 2-mercaptoethanol; ii) NtNBCe1-A molecules were retained on the nickel resin overnight, thereby eliminating the subsequent need for the ammonium-sulfate precipitation step that previously was necessary for stability and storage; iii) the elution buffer contained 50mM HEPES pH 7.0, 200mM NaCl, 300mM imidazole pH 7.0, 0.2%(v/v) 2-mercaptoethanol; and iv) the eluent (~10 ml) was then directly applied, without concentration, over a HiLoad Superdex-200pg (26/60) column (GE Healthcare, Piscataway, NJ) equilibrated with 50mM Tris pH 7.0, 150mM NaCl, 1mM TCEP. Protein concentration was determined by measuring the absorbance at 280 nm and applying the Beer-Lambert law with an extinction coefficient of ~730 mL/(g*cm). The extinction coefficient was measured by applying the elution fraction from the nickel column over a Superdex-200 HR (10/30) column (GE Healthcare) connected online to a refractive-index detector (Optilab, Wyatt Technologies, Santa Barbara, CA). Stokes radii were calculated by measuring peak elution volumes and comparing to protein standards (Biorad #151-1901, Hercules, CA) consisting of bovine thyroglobulin 670 kDa, bovine γ-globulin 158 kDa, chicken ovalbumin 44 kDa, horse myoglobin 17 kDa, and vitamin B12 1.35 kDa.
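For concreteness, the concentration determination above is a direct application of the Beer-Lambert law, c = A/(ε·l). A minimal sketch in Python, assuming a standard 1 cm path length (the path length is not stated in the text):

```python
# Beer-Lambert law: A = epsilon * c * l  =>  c = A / (epsilon * l)
EPSILON = 730.0   # extinction coefficient at 280 nm, mL/(g*cm), as measured above
PATH_CM = 1.0     # cuvette path length in cm (a typical value; an assumption here)

def concentration_mg_per_ml(a280: float) -> float:
    """Protein concentration in mg/ml from absorbance at 280 nm."""
    c_g_per_ml = a280 / (EPSILON * PATH_CM)  # concentration in g/mL
    return c_g_per_ml * 1000.0               # convert to mg/ml

print(concentration_mg_per_ml(1.46))  # an A280 of 1.46 corresponds to ~2.0 mg/ml
```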
Size-exclusion Chromatography with Multi-Angle Light Scattering (MALS-SEC)
For Fig. 1, from the Superdex-200 (26/60) column, samples (0.5 ml) of ~2 mg/ml NtNBCe1-A in 50mM Tris pH 7.0, 150mM NaCl, and 1mM TCEP were applied over a Superdex-200 HR (10/30) column (GE Healthcare) connected serially online with a three-angle light-scattering detector (MiniDAWN TREOS, Wyatt Technologies) and a refractive-index detector (Optilab, Wyatt Technologies), thus allowing for out-of-equilibrium molar-mass measurements. The column was repeatedly equilibrated with buffers containing 50mM Tris, 150mM NaCl, 1mM TCEP for each pH analyzed. Data were processed using the ASTRA V software package (Wyatt Technologies). A WTC-030S5 column (Wyatt Technologies) was alternatively used in MALS-SEC measurements for comparison. For Fig. 4B, samples (0.5 ml) at ~10 mg/ml were used on the same Superdex-200 HR (10/30) column.

Figure 1. The average hydrodynamic radius (r_H) of NtNBCe1-A molecules exhibits an inverted V-shaped curve over the pH range 6.5 to 11.5, as measured by dynamic light scattering (DLS; solid curve). The polydispersity (or standard deviation) of the apparent hydrodynamic radii (R_H) is relatively low at slightly acidic pH values (Pd < 10%) and relatively high at alkaline values (Pd > 15%). Note that a hot spot, or increase of R_H, occurs at pH 7.4 and that the R_H values are larger at acidic pH than at basic pH. Measurements by DLS also were compared to measurements by size-exclusion chromatography (SEC; dashed curve), which yielded a similar trend in radii corresponding to molar masses estimated from peak elution volumes.
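The molar masses that MALS reports follow from the standard dilute-solution Rayleigh relation, M ≈ R(θ)/(K*c). The sketch below shows that arithmetic; the solvent refractive index, the 658 nm laser wavelength, and the generic protein dn/dc of 0.185 mL/g are assumptions for illustration, and the real analysis was done in ASTRA:

```python
import math

NA = 6.022e23           # Avogadro's number, 1/mol
WAVELENGTH_CM = 658e-7  # laser vacuum wavelength, cm (658 nm assumed here)
N0 = 1.331              # refractive index of the aqueous buffer (assumption)
DNDC = 0.185            # dn/dc for proteins, mL/g (generic assumption)

# Optical constant K* for vertically polarized incident light
K_STAR = 4 * math.pi**2 * N0**2 * DNDC**2 / (NA * WAVELENGTH_CM**4)

def molar_mass(rayleigh_ratio: float, c_g_per_ml: float) -> float:
    """Weight-average molar mass (g/mol) from the excess Rayleigh ratio R(theta)
    and the concentration, neglecting angular dependence and second virial terms."""
    return rayleigh_ratio / (K_STAR * c_g_per_ml)
```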
Dynamic Light Scattering (DLS)
From the Superdex-200 (26/60) column, the peak fraction after purification also was analyzed in batch mode (50 µl) using a dynamic light-scattering detector (DYNAPRO, Wyatt Technologies) supplemented with a single static detector. Here, the peak fraction already contained ~2.5 mg/ml protein. Further concentration was avoided to minimize dimer self-association during hydrodynamic (R_H) measurements. The purification was repeated, equilibrating the Superdex-200 (26/60) column sequentially with buffers containing 50mM Tris, 150mM NaCl, 1mM TCEP for each pH used for R_H measurements. Alternatively, after passing NtNBCe1-A over the same column with buffer containing 50mM Tris pH 6.5, 150 mM NaCl, and 1mM TCEP, the entire set of fractions was pooled and concentrated to 2.5 mg/ml using a 30 kDa molecular-weight cutoff filter (Millipore). Samples (100 µl) were then dialyzed in micro-dialysis cups (Millipore) against 100 ml of buffer for each pH. Hydrodynamic data collection again was performed in batch mode. Finally, for in-equilibrium molar-mass measurements at pH 7.4, the peak fraction contained 1.1 mg/ml.
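The hydrodynamic radii quoted from these batch measurements follow from the Stokes-Einstein relation applied to the fitted translational diffusion coefficient, R_H = k_B·T/(6πηD). A minimal sketch, assuming 25 °C and the viscosity of water (values not reported in the text):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius_nm(d_m2_per_s: float,
                           temp_k: float = 298.15,
                           viscosity_pa_s: float = 0.00089) -> float:
    """Stokes-Einstein: R_H = kB*T / (6*pi*eta*D), returned in nm.
    Defaults assume 25 C and the viscosity of water."""
    r_m = KB * temp_k / (6 * math.pi * viscosity_pa_s * d_m2_per_s)
    return r_m * 1e9

# A diffusion coefficient of ~5.3e-11 m^2/s corresponds to R_H ~ 4.6 nm,
# the value reported for the compact dimer at slightly acidic pH.
print(round(hydrodynamic_radius_nm(5.3e-11), 1))
```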
Steady-state Fluorescence
Samples (0.5 ml) of ~2 mg/ml NtNBCe1-A in 50mM Tris pH 7.0, 150mM NaCl, and 1mM TCEP were applied over a Superdex-200 HR (10/30) column (GE Healthcare). The column was repeatedly equilibrated with buffers containing 50mM Tris, 150mM NaCl, 1mM TCEP for each pH used in the measurements. Peak fractions from each run were diluted to 70 µg/ml in a volume of 200 µl and analyzed using a fluorescence spectrophotometer (FluoroLog-3, Horiba Scientific, Edison, NJ). The Raman-scattering peak was subtracted from the spectral data. Alternatively, the samples after initial purification were diluted to 70 µg/ml in a volume of 100 µl in buffers containing 50mM Tris, 150mM NaCl, 1mM TCEP for each pH measurement and analyzed with an M1000 plate reader (Tecan, Durham, NC).
Circular Dichroism (CD) Spectroscopy
Far-UV CD spectra of 2-3 μM NtNBCe1-A in Tris-buffered saline, at varying pH values and at 25 °C, were recorded on an Aviv 202A CD spectrometer (Aviv Biomedical). CD spectra were measured between 200 and 250 nm and background-corrected.
Surface Plasmon Resonance
The surfaces of two flow cells (FC1 and FC2) of a carboxymethylated-dextran (CM-5) chip were washed in parallel with 50 mM NaOH at a flow rate of 10 μl/min for 3 min using a Biacore T-100 (GE Healthcare). NtNBCe1-A was immobilized on the surface of FC2 via amine coupling. Two types of experiments were performed. First, for simple confirmation of self-association, a resonance-unit (RU) signal of 12,000 was achieved with an NtNBCe1-A (ligand) concentration of 50 μg/ml in 10 mM acetate buffer pH 5.5 at a flow rate of 10 μl/min. The chip was then blocked with 1M ethanolamine (pH 8.5) at a flow rate of 10 μl/min for 7 min. FC1 served as a reference cell following mock immobilization with buffer alone. Serially diluted solutions of NtNBCe1-A (analyte) in 10 mM Hepes pH 7.4, 150 mM NaCl, 2 mM EDTA, 0.005% surfactant P20 (HBS-P) were passed over both flow cells. Binding traces were recorded for at least five concentrations of analyte. Each binding cycle was performed at room temperature with a constant flow rate of 30 μl/min for 8 min. Each regeneration cycle was performed with HBS-P buffer following spontaneous dissociation. Second, for determination of kinetic rate constants, NtNBCe1-A was immobilized onto a new chip, yielding an RU of 500. Under similar conditions to those above, curves obtained after subtraction of the reference and buffer signals were fitted using BIAevaluation (GE Healthcare). For bicarbonate-binding experiments, fresh solutions of bicarbonate were made and adjusted to pH 7.5. Aliquots were placed into vials sealed with rubber caps, which were punctured with needles to withdraw solution. Experiments were carried out within a few hours. Experiments were also performed similarly with bisulfite, a stable bicarbonate analog that, unlike bicarbonate, does not require the presence of CO2.

RESULTS

Fig. 1 shows a plot of hydrodynamic radius (or Stokes radius) R_H as a function of pH. Measurements using dynamic light scattering (DLS; Fig. 1, solid curve) indicate that the observed average R_H (r_H) of 4.6 nm at slightly acidic pH is in close agreement with the theoretical calculation [28] of 4.2 nm for the molecular mass expected of pure dimers. The observed increase in r_H to 5.0 nm at neutral pH is consistent with a small amount of dimer-dimer interactions (the theoretical value is 5.6 nm for the molecular mass expected of pure tetramers), with a very large conformational change within the Nt, or with both. Thereafter, the observed R_H values begin to drop as the pH is moderately elevated. This drop appears consistent with an increasing amount of monomer in a monomer-dimer mixture. The NtNBCe1-A samples are monodispersed at acidic and neutral pH values, with polydispersities (Pd) < 10% and 13%, respectively. The samples are polydispersed (Pd ~ 15 to 25%) at moderately alkaline pH and above. As shown in Fig. 1 (dashed curve), peak molar-mass measurements using size-exclusion chromatography are in agreement with the R_H measurements by DLS. Peak-elution volumes correspond to peak-elution times that ideally are proportional to R_H. For NtNBCe1-A, molecules generally shift toward higher elution volumes, corresponding to a decrease in R_H, as the pH of the column-equilibration buffer is gradually increased. Note in Fig. 1 that the change in the rate of diffusion through the column media mimics the directly observed increase, or jump, in R_H at neutral pH measured by DLS.
Molar-mass Measurements Reveal a Dynamic Equilibrium Among Three States
Out-of-equilibrium (online) and equilibrium (batch) molar-mass measurements demonstrate that NtNBCe1-A is in a dynamic equilibrium among three states. In online experiments, measurements by multi-angle light scattering with size-exclusion chromatography (MALS-SEC) demonstrate a dynamic equilibrium among three molecular masses that correspond to monomer, dimer, and dimer-dimer interaction. Fig. 2 illustrates the molar mass of NtNBCe1-A applied to a gel-filtration column as a function of pH. At neutral pH, NtNBCe1-A appears to be in equilibrium among all three states. The average molar mass varies in the range of 78.9-82.4 kDa, which is in close agreement with the theoretical value of 81.2 kDa corresponding to that of a dimer. As with the DLS experiments above, the individual monomer and dimer species can each be made to predominate by moderate acidic and alkaline changes in pH. At acidic pH, the UV trace appears most symmetrical. The uniformity is consistent with the DLS data above, which suggested that NtNBCe1-A is most monodispersed at acidic pH. As the pH is gradually increased further, the molar masses gradually decrease and the tail to the right of each peak becomes increasingly pronounced. The decrease of molar mass and the tail reflect an increase in the amount of monomer in a monomer-dimer mixture, yielding average molar-mass values that are an average of the monomer and dimer species. At pH 11.5, the observed molar mass of 42.6 kDa is in agreement with the theoretical value of 40.6 kDa corresponding to that of a monomer, suggesting that the monomer now predominates in solution.
Figure 2. The elution profiles of NtNBCe1-A are shown at varying pH with the observed average molar-mass values as measured by MALS-SEC. The molar-mass trend demonstrates equilibrium among monomers and dimers. Dimer dissociation, or an increase in monomer formation, is observed as pH increases (significantly above pH 8). Note that at pH 11.5 the molar mass is consistent with a solution of monomers. The fact that the molar-mass trend over the entire pH range differs from the trend in R_H measurements suggests that a conformational or state change is occurring. The molar-mass bars are represented on a logarithmic scale. Further, the dotted line helps to illustrate that the peak elution volume (or Stokes radius) shifts as a function of pH, as plotted in Fig. 1 (dashed curve). These shifts are due to the change in the relative amounts of monomer and dimer. Note that at pH 11.5 the peak is extremely broad in comparison to the peak at pH 6.5, which has a similar peak elution volume, suggesting non-specific interactions that would explain the only discrepancy between the SEC and DLS measurements of R_H.

In batch experiments, measurements using a single light-scattering detector also demonstrate that NtNBCe1-A is in equilibrium. As the concentration of NtNBCe1-A is decreased, the molar mass decreases. The average molar mass at pH 7.4 is observed to be 82 kDa at 1.1 mg/ml, 52 kDa at 0.2 mg/ml, and 43 kDa at 0.02 mg/ml. The decrease in molar mass is indicative of a dynamic equilibrium. Fluctuations within each concentration were observed and may reflect the self-associations that are analyzed further below.
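A simple two-state monomer-dimer model rationalises this concentration dependence. The sketch below computes the weight-average molar mass for 2M ⇌ D; the Kd used is purely illustrative, not a value fitted in this work, and, consistent with the text above, a two-state model reproduces the direction of the trend but not the exact values (higher-order associations also contribute):

```python
import math

M1 = 40.6e3  # monomer molar mass, g/mol (theoretical value from the text)

def weight_average_mass(c_tot_mg_per_ml: float, kd_molar: float) -> float:
    """Weight-average molar mass (g/mol) for a 2M <-> D equilibrium,
    with Kd = [M]^2 / [D] and total protein given in mg/ml."""
    c = c_tot_mg_per_ml / M1                # mg/ml == g/L, so this is mol/L (monomer units)
    # Mass balance c = m + 2*m^2/Kd  =>  quadratic in the free monomer conc m
    m = kd_molar * (math.sqrt(1 + 8 * c / kd_molar) - 1) / 4
    d = m * m / kd_molar
    # Weight average: Mw = M1*(m + 4d)/(m + 2d), with c = m + 2d
    return M1 * (m + 4 * d) / c

# An illustrative Kd of ~2 uM reproduces the qualitative downward trend:
for conc in (1.1, 0.2, 0.02):
    print(conc, "mg/ml ->", round(weight_average_mass(conc, 2e-6) / 1e3, 1), "kDa")
```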
Tryptophan Fluorescence and CD Spectra Indicate Conformational Changes that are pH Dependent

Fig. 3 shows the tryptophan-fluorescence spectra of NtNBCe1-A at varying pH. The spectra exhibit continuous Stokes shifts in peak emission. At acidic pH, the NtNBCe1-A dimer exhibits a peak emission at 326 nm. At neutral pH, NtNBCe1-A molecules exhibit a Stokes shift of 6 nm toward longer wavelengths. Likewise, further increases in pH cause NtNBCe1-A molecules to gradually emit light at longer wavelengths. At extreme alkaline conditions (pH 11.5), where monomers predominate, the peak emission occurs at 336 nm. For comparison, NtNBCe1-A was also unfolded in the presence of 6 M guanidine hydrochloride. Unfolded NtNBCe1-A emits light at 346 nm, typical of other unfolded proteins [29,30], supporting that the smaller peak-emission changes observed with pH are due to conformational and/or state changes rather than unfolding of the molecule. CD spectra support the robust nature of NtNBCe1-A despite extreme changes in pH. CD spectra and mean residue ellipticities were obtained over the entire range of pH values, each yielding minima at 208 and 220 nm, suggesting similar structures. The fact that there is negative ellipticity at all pH values, except when 6M guanidine hydrochloride is present, means that the secondary structure of NtNBCe1-A is indeed intact. This is consistent with Fig. 2, which shows that NtNBCe1-A still elutes within the linear separation volume of the gel-filtration column and thus that the molecules remain folded despite large pH changes. Lastly, the peak-emission shifts are fully reversible if the pH is lowered from values < 8 and largely reversible if lowered from values > 8. This reversibility also indicates that the tertiary fold is retained and that two distinct state changes occur above and below pH 8.

Figure 3. The emission spectra of NtNBCe1-A exhibit an incremental Stokes shift as pH is gradually raised from 5.5 to 11.5. The pH values indicated in the legend increase from left to right in the spectra. An 11 nm Stokes shift in total is observed. Note that the shift is continuous within the physiological pH range. The continuous transition partially reflects subtle local conformational changes. The shift above pH 8 is largely associated with an increasing population of dimers dissociating into monomers, thereby exposing W341, normally buried at the dimer interface, to solvent. The spectra were collected with a traditional fluorometer and are in agreement with the inset readings, which were sampled with a sensitive fluorescence plate reader.
Self-associations of NtNBCe1-A
Measurements by surface plasmon resonance (SPR-Biacore) directly confirm self-association interactions, which are significantly observed at neutral pH only. NtNBCe1-A (ligand) was immobilized on a chip while the same solution of NtNBCe1-A (analyte) was flowed over it. Fig. 4A shows the label-free binding of NtNBCe1-A at varying concentrations onto the NtNBCe1-A-immobilized chip. NtNBCe1-A molecules can be seen to transiently, or reversibly, bind to the chip, yielding typical curves that increase exponentially for association and decrease for dissociation. Because the chip was briefly treated with blocking agent at pH 8.5 after immobilization of compact NtNBCe1-A dimers (see METHODS AND MATERIALS), Figs. 1-3 suggest that a small amount of the dimers dissociated on the chip, and the observed self-associations of analyte may be due to monomer self-association and/or dimer self-association. Whether we are looking at monomer self-association (monomer-dimer equilibrium) or dimer self-association (dimer-tetramer equilibrium) depends on the concentration. Low concentration favors the monomer-dimer equilibrium. High concentration of the Nt dimer results in high-order self-associations, as evident in Fig. 4B and in Fig. 2b. Isolated tetramer and 3-5x dimer fractions were reapplied onto a SEC column, again resulting in a single dimer peak like those in Fig. 2, showing that the clusters are reversible.
Bicarbonate Binding
Employing the sensitivity of the SPR-Biacore, bicarbonate ions can be observed to bind NtNBCe1-A at low millimolar concentrations (~2 mM), yielding an RU of 8. This value is in agreement with the theoretical R_max value of 9, given an R_L value of 9100, which corresponds to the RU of immobilized protein. As shown in Fig. 5A, when bicarbonate is present in the solution during NtNBCe1-A self-association experiments, it helps to stabilize the self-association interaction. Dissociation of the self-association interaction is three to four times slower (600 sec) than in experiments with NtNBCe1-A alone in the analyte solution (180 sec). This binding enhancement also holds when bisulfite, a stable bicarbonate analog, is present in the analyte solution, yielding similar results. MALS-SEC indicates that bisulfite binds NtNBCe1-A, enhancing dimerization and the high-order self-associations on the chip. Fig. 5B demonstrates that the average molar mass increases by ~25 kDa at neutral pH compared to the values listed above, that the monomer-dimer equilibrium is disturbed, as apparent from the shoulder at the peak tail where monomers normally predominate, and that a minor (~2-fold) increase of high-order self-associations is observed in the MALS trace.

Figure 4. A. A blank-subtracted sensorgram of an SPR-Biacore experiment demonstrates that NtNBCe1-A molecules self-associate and dissociate. Here, NtNBCe1-A as free analyte is flowed over the same protein as immobilized ligand. Each curve from top to bottom corresponds to a different concentration of analyte: 400 μg/ml, 80 μg/ml, 1.6 μg/ml, 0.32 μg/ml, and 0.064 μg/ml, with an RU of 500 of protein immobilized on the surface. The shape of the curves demonstrates slow-on and fast-off rates, whose kinetics are complicated by a dynamic equilibrium of states. Although there may be a minor amount of nonspecific clustering at 400 μg/ml, overall the binding appears to be specific and reversible, negating the need to regenerate the immobilized-NtNBCe1-A surface between concentration cycles. As a control, free NtNBCe1-A molecules were flowed over a blank CM5 sensor chip and showed virtually no significant binding to the reference chip. The concentration series are reference-subtracted and blank-subtracted. B. The self-association of the NtNBCe1-A dimer is further illustrated by a MALS-SEC experiment at pH 7.6, where higher-order oligomers or aggregates are increasingly observed, as noted by the bars above each peak. Measurements show that these interactions can yield prominent self-associated species up to 3-5 times the mass of a dimer. The upper thick curve is the light scattering, and the lower thin curve is the UV trace. Finally, the monomer-dimer equilibrium can be easily noted in the inset by the fact that the molar-mass trend across the far-right peak slopes downward, where it is ~93 kDa to the left of the peak, corresponding to a small amount of dimer-dimer interaction, and ~74 kDa at the right shoulder of the peak, reflecting a mixture of monomer and dimer.
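The R_max consistency check quoted above follows the usual SPR stoichiometry relation R_max = R_L × (MW_analyte/MW_ligand) × n, with n binding sites per immobilized ligand. A sketch of the arithmetic (the monomer mass and the single-site assumption are ours, for illustration only):

```python
def r_max(r_ligand: float, mw_analyte: float, mw_ligand: float,
          sites_per_ligand: float = 1.0) -> float:
    """Theoretical maximum SPR response (RU) at saturating analyte."""
    return r_ligand * (mw_analyte / mw_ligand) * sites_per_ligand

# Bicarbonate (61 Da) binding immobilized NtNBCe1-A monomers (~40.6 kDa),
# with R_L = 9100 RU of protein on the chip:
print(round(r_max(9100, 61.0, 40.6e3), 1))  # ~13.7 RU per site
# Partial surface activity or sub-stoichiometric binding would bring this
# estimate toward the observed ~8-9 RU.
```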
Monomer-Dimer-High-Order Self-association Equilibrium
Based on the aforementioned data, the observed dynamic equilibrium of NtNBCe1-A under physiological conditions can be deconvoluted into three distinct states, as illustrated in Fig. 6A. In the first state, individual dimers exist in a taut conformation. In the second state, a conformational change occurs that allows dimers to relax and transiently bind one another, or self-associate. In the third state, individual dimers are relaxed to the point where they dissociate into monomers. Intracellular bicarbonate levels or pH fluctuations can influence the relative amounts of the individual states in the equilibrium. Fig. 6B gives a summary of the biophysical properties of each state, corresponding to the schematic representation in Fig. 6A.
Comparison of Hydrodynamic Radii Profiles between NtAe1 and NtNBCe1-A
Although the R_H values of NtAe1 and NtNBCe1-A are both sensitive to pH changes, the two proteins appear to display different hydrodynamic properties. In the case of NtAe1, as pH increases from low pH, the R_H of NtAe1 apparently strictly increases, yielding a sigmoidal curve. At extreme alkaline pH, the R_H of NtAe1 is at its maximum, with a value of ~6.7 nm [19]. In the case of NtNBCe1-A, as pH increases from low pH, the R_H temporarily increases at neutral pH and then slowly decreases thereafter, yielding an inverted V-shaped curve. Notably, R_H measurements at pH 11.5 are lower than those for compact dimers at pH 6.5. This smaller value at extreme alkaline pH is expected for a solution of nearly pure monomers. The unexpected difference in the trend of the R_H curves could simply reflect a difference in protein sequences or, rather, in the affinity of the monomers within the dimer complex. However, at least in the case of NtNBCe1-A, there is no evidence for a global conformational change at alkaline pH that extends the radius, as described for NtAe1 [25]. Because the MALS-SEC technique is independent of shape changes, the gradual drop of molar mass at elevated pH is in agreement with an increase in the monomer population of the monomer-dimer equilibrium. There is instead evidence for "local" conformational changes in NtNBCe1-A that have yet to be characterized for NtAe1. These subtle changes are detected here at neutral pH by R_H, fluorescence, and SPR measurements. On the other hand, artifacts of elevated pH could mislead toward a sigmoidal curve such as that for NtAe1. For example, if NtNBCe1-A had been briefly exposed to elevated pH and then concentrated, the measurements of R_H do become uniformly elevated at all pH values. These elevated R_H measurements reflect non-specific aggregation, as revealed by MALS-SEC measurements. Here, the Nt slowly becomes unstable, a property that is addressed below.

Figure 5. Bicarbonate binding. A. NtNBCe1-A self-dissociation is shown with and without the presence of bicarbonate. The upper curve, which falls to ~0 RU in 180 sec, is with 0 mM bicarbonate, and the lower curve, which falls to ~0 RU in 400 sec, is with 10 mM bicarbonate. Note that 10 mM bicarbonate slows dissociation, or rather stabilizes NtNBCe1-A self-association, compared to 0 mM bicarbonate or the curves depicted in Fig. 4A. The presence of bisulfite yielded similar results. B. The binding of NtNBCe1-A by the bicarbonate analog bisulfite is demonstrated by a MALS-SEC experiment. The right-shifted curve is NtNBCe1-A with 0 mM bisulfite, and the left-shifted curve is NtNBCe1-A with 10 mM bisulfite. Note from the bars above the peaks that the molar mass increases from ~79 to ~105 kDa in the presence of bisulfite, suggesting an increase in the amount of dimer self-association. Also note that the red and blue chromatogram traces differ in shape. The left-shifted trace, revealing a clear shoulder peak, suggests discrete species, or a decrease in the monomer population, in the presence of bisulfite.
Stability in Solution at High Concentrations (> 2 mg/ml)
As previously described in Gill et al. [9], a challenge for the storage and application of NtNBCe1-A is that it undergoes a slow, time-dependent precipitation at high concentrations, such as the ~10-15 mg/ml typically used in crystallization trials, thereby periodically changing the concentration of the protein solution until it reaches concentrations below ~2 mg/ml. This happens between 0 and 22 °C, incrementally, after ~2-5 hrs. Generally, the protein solution holds up better at room temperature. Freezing it at -20 °C results in the immediate precipitation of ~70-90% of the protein content upon thawing. Buffer stabilizers such as glycerol, varied pH conditions in the final-exchange or gel-filtration buffer, and salt variations do not prevent the slow precipitation. However, as described here in METHODS AND MATERIALS, careful attention to keeping the pH of each buffer at 7.0 or below from the beginning of the purification enabled long-term stability. Values of pH > 8 shift the equilibrium of NtNBCe1-A toward monomers. When highly concentrated, the monomers then bind non-specifically to other monomers. This can be considered a fourth state in Fig. 6A. These interactions irreversibly self-associate into insoluble complexes that fall out of solution in random waves over hours to days, even if the pH is immediately reversed to neutral. That is, monomers are not stable at high concentrations for an extended period of time. Conversely, NtNBCe1-A molecules that have only been exposed to acidic or neutral pH are stable for weeks. Although no precipitation is observed in this case, after two weeks at room temperature or 4 °C, half of the dimers have dissociated into monomers, as judged by MALS-SEC.
Intracellular pH Sensitivity by NBCe1-A
It is unclear whether NtNBCe1-A responds directly to slight changes in intracellular pH or bicarbonate levels via deprotonation of key residues, and/or to binding partners that modulate or shield charges on the Nt and thereby induce conformational changes similar to those observed here. That said, there are no known binding partners for NtNBCe1-A. Although trafficking chaperones and other integral proteins are likely needed to bind NBCe1-A, peripheral cytosolic proteins that regulate transport or increase stability, analogous to those that bind NtAe1 in the RBC, remain to be elucidated, if they exist.
Intrinsic Fluorescence Changes
There are four tryptophans, which can be grouped into two environments. W341 is located at the monomer-monomer interface, and W81, W87, and W101 are clustered and partially buried within the molecule, as judged by homology modeling in X-ray diffraction studies with NtAe1 [10]. Thus, the longer-wavelength (red-shifted) emission at alkaline pH can be accounted for by the dissociation of the dimer into monomers, which, based on the crystal models, would directly expose W341 to solvent. However, the local conformational changes at neutral pH can also be explained by changes in the environments of the three remaining tryptophans near the far cytosolic side of the Nt.

Figure 6. A schematic diagram of NtNBCe1-A in equilibrium (A) is correlated with a summary of its biophysical properties (B). MW_ave is the average molecular weight, M_RH is the measured hydrodynamic radius, Peak_em is the peak emission wavelength, SS is the change in secondary structure, and n/o means not observed. The differences in trend between MW_ave and M_RH and the shift in Peak_em suggest that conformational and/or state changes are occurring.
Model of a SLC4
The functional roles of SLC4 family members can be divided between their Nt and transmembrane domains (TMD). The Nt of an SLC4 does the work of sensing intracellular pH or bicarbonate levels, promoting self-associations, and possibly acting as a selection filter via internal substrate conduits. The TMD of an SLC4 provides a scaffold for substrate transport across the membrane and conceivably regulates transport by the occlusions that define the cotransporter class of integral membrane proteins. Structurally, as illustrated in Fig. 7, the Nt hangs like a gondola via flexible loops from the TMD. The Nt dimer is placed next to the membrane with its N-terminal ends facing away from the membrane. The helix-loop-helix, residues 146 to 155, on the opposite side from the N-termini, faces the membrane in each half of the dimer. This loop contains multiple lysines that interact with phospholipids in the membrane and may act to stabilize the Nt in the cytosol. Support for this placement comes from evolving electron microscopy studies, whose images show NtAe1 to be comparable in size and shaped like a pendulum tethered to its TMD [31].
Moreover, the dimensions of the Nt will vary according to the intracellular pH or bicarbonate levels. Each dimer in Fig. 7 is in a relaxed conformation at neutral pH. By relaxed, we mean that a local conformational change occurs at the N-termini that extends the axial dimensions; in addition, the separation between the two halves of the dimer is likely to be greater at physiological pH than at acidic pH values. This distance is governed by the pH-dependent monomer-dimer equilibrium as dimers dissociate. Finally, the relaxed dimer in Fig. 7 allows for self-association with other dimers. Support for self-association in vivo comes from evolving fluorescence fluctuation correlation spectroscopy studies, which demonstrate that full-length NBCe1-A expressed in mammalian cells forms dimers, with a fraction of those self-associated [32]. The self-association of NtNBCe1-A molecules prompts future experiments to investigate the molecular mechanisms of binding.
Finally, Fig. 7 places an SLC4 cotransporter model, such as NBCe1-A, at the basolateral membrane of the cell. To the left, the Nt exists as single dimers (or less self-associated) in a closed and taut conformation at acidic pH, as observed in the study here. To the right, base (such as ammonia) enters and becomes protonated (HB), thereby neutralizing the acid load. The cotransporters now relax, permitting bicarbonate ions to bind and stabilize dimer self-associations, clustering in an open state.

Figure 7. The Nt is shown isolated in a closed state (left) and self-associated in an open state under normal physiological conditions (right) in the cell. The self-association of dimers is governed by local conformational changes that are induced by intracellular pH or bicarbonate levels. The SLC4 is modeled using the crystal model of NtAe1 [25] and the low-resolution homology model derived from X-ray diffraction studies of NtNBCe1-A [10]. NtAe1 from the RBC shares 37% amino-acid sequence identity with NtNBCe1-A. The high identity extends to all SLC4 family members and reveals that all Nt domains share the same structural fold. Each monomer of the dimer extends an arm, which is inserted into the other monomer. The two arms are interlocked, symmetrically holding the dimer together. This interaction implies that the transmembrane domain of NBCe1-A, shown by the adjacent rectangles, must also form symmetrical dimers. The studies for NtAe1 [27] and here for NtNBCe1-A suggest that the Nt drives the self-association of NBCe1-A dimers in the membrane.

ACKNOWLEDGEMENTS

The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) of the National Institutes of Health (NIH) supported this project. The content is solely the responsibility of the author and does not necessarily represent the official views of the NIDDK or the NIH. The PEPCC Laboratory at CWRU, under the direction of HG, provided instrumentation, which was funded by OBR Action Fund #897 & 913 and ONR grants N00014-08-1-0608 & N00014-09-1-0794.
"Biology"
] |
Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models
Pretrained machine learning models are known to perpetuate and even amplify existing biases in data, which can result in unfair outcomes that ultimately impact user experience. Therefore, it is crucial to understand the mechanisms behind those prejudicial biases to ensure that model performance does not result in discriminatory behaviour toward certain groups or populations. In this work, we define gender bias as our case study. We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models. We investigate the connection, if any, between the two learning stages, and evaluate how bias amplification reflects on model performance. Overall, we find that bias amplification in pretraining and after fine-tuning are independent. We then examine the effect of continued pretraining on gender-neutral data, finding that this reduces group disparities, i.e., promotes fairness, on VQAv2 and retrieval tasks without significantly compromising task performance.
Introduction
As shown by Mitchell (1980) and Montañez et al. (2019), inductive biases are essential for learning algorithms to outperform random guessing. These task-specific biases allow algorithms to generalize beyond training data but, necessarily, they should not be conflated with prejudicial or unwanted biases. Unwanted bias, such as bias against demographic groups, can be found in many applications, from computer vision systems to natural language processing (NLP). Vision-and-language (V&L) models lie at the intersection of these areas, where one of the key challenges is deploying robust models that perform high-level reasoning based on the multimodal context instead of exploiting biases in the data (Zhao et al., 2017).
Figure 1 (caption fragment): …Both models can then be used in a two-phase analysis: 1) bias amplification is measured on the intrinsic bias of pretrained models, and 2) bias amplification, task performance and fairness are evaluated on the extrinsic performance of fine-tuned models.

Multiple studies (Lee et al., 2021; Hirota et al., 2022b; Zhou et al., 2022) have shown that models rely on spurious correlations between individuals and their context to make predictions, and thus are susceptible to unwanted biases. However, these authors do not explore the broad landscape of V&L models and focus on biases in common visual datasets (Wang et al., 2022; Hirota et al., 2022a), only on pretrained models (Zhou et al., 2022), or only on one application, e.g., image captioning (Hendricks et al., 2018; Hirota et al., 2022b) or semantic segmentation (Lee et al., 2021).
In this work, we investigate to what extent the unwanted bias in a V&L model is caused by the pretraining data. To answer this question, we focus on one important aspect of the bias encoded in V&L models, namely bias amplification. Bias amplification occurs when a model exacerbates unwanted biases from the training data and, unlike other forms of bias, it is not solely attributed to the data; it can vary greatly during training (Hall et al., 2022).

We explore bias amplification in two encoder-only V&L models, LXMERT (Tan and Bansal, 2019) and ALBEF (Li et al., 2021), and in the encoder-decoder model BLIP (Li et al., 2022). Specifically, we quantitatively and qualitatively analyse the relationship between the bias encoded in pretrained models and after fine-tuning on downstream tasks, including visual question answering, visual reasoning and image-text retrieval.

While bias can be studied with respect to any protected attribute, the majority of NLP research has focused on (binary) gender (Sun et al., 2019; Stanczak and Augenstein, 2021; Shrestha and Das, 2022). We also use gender bias as our case study but, different from previous work, we advocate for the inclusion of gender-neutral terms (Dev et al., 2021) and consider three gender categories based on visual appearance: male, female and gender-neutral (e.g., PERSON). The use of both visual and grammatical gender information across V&L tasks is needed for identifying the target of, for example, a question. But the demographics of the subject should not solely influence the outcome of the model. Otherwise, the model may reinforce harmful stereotypes, resulting in negative consequences for certain group identities (van Miltenburg, 2016).

Motivated by this argument, we investigate the effect of shifting the projection of gender marking to a gender-neutral space by continued pretraining on gender-neutral multimodal data, a form of domain adaptation (Gururangan et al., 2020), and how this reflects on task performance after fine-tuning. Figure 1 depicts an overview of our full workflow.
Contributions
We examine whether bias amplification measured on pretrained V&L models (intrinsic bias) relates to bias amplification measured on downstream tasks (extrinsic bias). We show that a biased pretrained model might not translate into biased performance on a downstream task to a similar degree. Likewise, we measure model fairness through group disparity and show that it is not unequivocally related to bias in a model. Furthermore, we empirically present a simple, viable approach to promote fairness in V&L models: performing an extra epoch of pretraining on unbiased (gender-neutral) data reduces fine-tuning variance and group disparity on VQAv2 and retrieval tasks for the majority of models studied, without significantly compromising task performance.

We make our code publicly available to ensure reproducibility and foster future research.¹
Related Work
Bias in language

In general, bias can be defined as "undue prejudice" (Crawford, 2017). Studies targeting language models (Kurita et al., 2019; Zhao et al., 2019) have shown that biases encoded in pretrained models (intrinsic bias) can be transferred to downstream applications (extrinsic bias), but the relationship between these biases is unclear.² There are several studies (Goldfarb-Tarrant et al., 2021; Delobelle et al., 2021; Kaneko et al., 2022; Cao et al., 2022; Orgad et al., 2022) showing that intrinsic bias in language models does not consistently correlate with bias measured extrinsically on a downstream task or, similarly, with empirical fairness (Shen et al., 2022; Cabello et al., 2023). Contrarily, Jin et al. (2021) observed that the effects of intrinsic bias mitigation are indeed transferable when fine-tuning language models. To the best of our knowledge, we are the first to investigate whether the same holds for V&L models.
Bias in vision & language
Prior research observed the presence of gender disparities in visual datasets like COCO (Bhargava and Forsyth, 2019; Zhao et al., 2021; Tang et al., 2021) and Flickr30k (van Miltenburg, 2016). Recent studies also revealed the presence of unwanted correlations in V&L models. Prejudicial biases found in V&L models are not attributed to one domain alone, i.e., vision or language, but are compound (Wang et al., 2019), and thus should be studied together. Srinivasan and Bisk (2021), Hirota et al. (2022b) and Zhou et al. (2022) show that different model architectures exhibit gender biases, often preferring to reinforce a stereotype over faithfully describing the visual scene. Bianchi et al. (2023) show the presence of stereotypes in image generation models and discuss the challenges of the compounding nature of language-vision biases. Another line of work addresses visual contextual bias (Choi et al., 2012; Zhu et al., 2018; Singh et al., 2020) and studies a common failure of recognition models: an object fails to be recognized without its co-occurring context. So far, little work has investigated bias amplification in pretrained V&L models. Our study is among the first to cast some light on the gender bias encoded in pretrained V&L models and to evaluate how it translates to downstream performance.

Gender-neutral language

Zhao et al. (2019) examine the effect of learning gender-neutral embeddings during training of static word embeddings like GloVe (Pennington et al., 2014). Sun et al. (2021) and Vanmassenhove et al. (2021) present rule-based and neural rewriting approaches to generate gender-neutral alternatives in English texts. Brandl et al. (2022) find that upstream perplexity substantially increases and downstream task performance severely drops for some tasks when gender-neutral language is used in English, Danish and Swedish. Amend et al. (2021) show that the substitution of gendered for gender-neutral terms in image captioning models is a viable approach for reducing gender bias. In our work, we go one step beyond and investigate the effect of continued pretraining of V&L models on in-domain data where gendered terms have been replaced by their gender-neutral counterparts (e.g., sister → sibling).
Problem Formulation
We characterize the gender bias encoded in V&L models in a two-phase analysis: i) Intrinsic bias: First, we investigate the bias encoded after the V&L pretraining phase.
ii) Extrinsic bias and task performance: Second, we fine-tune the models on common downstream tasks to further investigate how bias affects model performance.
These investigations will be performed using a set of original pretrained models M_D, and models that have been further pretrained on gender-neutral data, M_D^N, in order to mitigate any biases learned during pretraining (§4.4). We hypothesize that this bias mitigation technique will decrease both the intrinsic and extrinsic biases encoded in the models.
Data

Our analysis relies on data where the gender of the main actor of the image is known. This is, to some degree, annotated in the crowdsourced text, e.g., image captions or questions.³ Following Zhao et al. (2017) and Hendricks et al. (2018), images are labelled as 'Male' if the majority of their captions include a word from a set of male-related tokens (e.g., BOY) and no caption includes a word from the set of female-related tokens (e.g., GIRL), and vice-versa for 'Female'. Images are labelled as 'Neutral' if most of the subjects are listed as gender-neutral (e.g., PERSON), or if there is no majority gender mention in the texts. Finally, images are discarded from the analysis when the text mentions both male and female entities, or when no people are mentioned. This process can be applied to both pretraining data and downstream task data. See Appendix A for the complete word list.

³Zhao et al. (2021) annotated samples from the COCO dataset (Lin et al., 2014) with the perceived attributes (gender and skin-tone) of the people in the images. However, their gender labels agree on only 66.3% of the images compared to caption-derived annotations. To be consistent across all datasets used in our project, we do not use their human-collected annotations for analysing gender bias on COCO.
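A minimal sketch of this labelling heuristic in Python; the token sets are small illustrative excerpts, with the full lists given in Appendix A of the paper:

```python
MALE = {"man", "men", "boy", "boys", "male", "he", "his"}
FEMALE = {"woman", "women", "girl", "girls", "female", "she", "her"}
NEUTRAL = {"person", "people", "child", "children", "they"}

def label_image(captions: list[str]) -> str | None:
    """Assign a gender label to an image from its captions, or None
    to discard it (mixed mentions or no people mentioned)."""
    def mentions(caption, vocab):
        return any(tok in vocab for tok in caption.lower().split())

    n_male = sum(mentions(c, MALE) for c in captions)
    n_female = sum(mentions(c, FEMALE) for c in captions)
    n_neutral = sum(mentions(c, NEUTRAL) for c in captions)

    if n_male > 0 and n_female > 0:
        return None                 # both genders mentioned: discard
    if n_male > len(captions) / 2 and n_female == 0:
        return "Male"
    if n_female > len(captions) / 2 and n_male == 0:
        return "Female"
    if n_neutral > 0 or n_male + n_female > 0:
        return "Neutral"            # gender-neutral, or no majority mention
    return None                     # no people mentioned: discard
```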
Intrinsic Bias
When we measure the intrinsic bias of a model, we are interested in whether there are systematic differences in how phrases referring to demographic groups are encoded (Beukeboom et al., 2014). We can measure the intrinsic bias using the model's language modelling task, where the tokens related to grammatical gender are masked.⁴ Let M_D be a V&L model pretrained on corpora D. The masked words related to grammatical gender are categorised into N = 3 disjoint demographic groups A = {Male, Female, Neutral} based on reported visual appearance in the image. The gender associated with an image is considered as the ground truth (see the previous section for more details). Let g_i for i ∈ [1, N] be the categorical random variable corresponding to the presence of group i. We investigate the gender-context distribution: the co-occurrence between attributes A_i = {a_1, ..., a_{|A_i|}}, e.g., gender terms, for a demographic group g_i, and contextual words T = {t_1, ..., t_{|T|}}, e.g., objects that appear in a given text. This results in a co-occurrence matrix C^{g_i}_{a,t} that captures how often pairs of attribute-context words occur in a defined context S, e.g., an image caption in a corpus C. Formally, for every demographic group g_i, over the A_i attributes and T objects, and all possible contexts in corpus C,

$$C^{g_i}_{a_j, t_k} = \sum_{S \in C} S(a_j, t_k), \qquad (1)$$

where S(a_j, t_k) = 1 if the attribute and object co-occur, and zero otherwise. Based on C^{g_i}_{a,t}, standard statistical metrics like precision, recall and F1 can be computed. In addition, we will quantify the bias amplification in a given model M_D to better understand the degree of bias exacerbated by the model. We use the metric presented by Wang and Russakovsky (2021), which is described in more detail in the next section.
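The counts in Eq. (1) can be accumulated directly from text; a sketch in Python, where the attribute and object vocabularies stand in for the Appendix A word lists:

```python
from collections import Counter

def cooccurrence(captions, attributes, objects):
    """C[(a, t)] = number of contexts (captions) in which attribute a
    and object t co-occur. One Counter per demographic group would be
    built by first filtering captions to that group's images."""
    counts = Counter()
    for caption in captions:
        tokens = set(caption.lower().split())
        for a in attributes & tokens:
            for t in objects & tokens:
                counts[(a, t)] += 1
    return counts

C_male = cooccurrence(
    ["A man riding a skateboard", "A man with a dog"],
    attributes={"man", "boy"},
    objects={"skateboard", "dog", "kitchen"},
)
print(C_male[("man", "skateboard")])  # 1
```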
Bias Amplification
We use the BiasAmp metric introduced by Wang and Russakovsky (2021), as it accounts for varying base rates of group membership and naturally decouples the direction of bias amplification: while BiasAmp_{T→A} measures the bias amplification due to the task influencing the protected attribute prediction,[5] BiasAmp_{A→T} measures the bias amplification due to the protected attribute influencing the task prediction. We give a concise treatment of BiasAmp_{A→T} here, and refer to Wang and Russakovsky (2021) for further details.
In our setup, the set of attributes a ∈ A is given by A = {Male, Female, Neutral}, and the set of tasks (or objects) t ∈ T are the most frequent nouns co-occurring with gendered terms in the training sets (see Appendix A for details). Denote by P(T_t = 1) the probability that an example in the dataset belongs to class t and, similarly, by P(T̂_t = 1) the probability that an example in the dataset is labelled as class t by the model. Wang and Russakovsky (2021) introduce two terms to disambiguate the direction of bias amplification. The first term, Δ_at, quantifies the difference between the bias in the training data and the bias in model predictions.
The second term, y_at, identifies the direction of correlation of A_a with T_t; that is, y_at alters the sign of Δ_at to correct for the fact that the bias can have two directions. Thereby, BiasAmp_{A→T} will be positive if the model predictions amplify the prevalence of a class label t ∈ T between groups a ∈ A in the dataset. For instance, bias is amplified if A_a = MALE images are more likely to appear in the presence of a T_t = SKATEBOARD in the model predictions, compared to the prior distribution of the dataset. In contrast, a negative value indicates that model predictions diminish the bias present in the dataset. A value of 0 implies that the model does not amplify the bias present in the dataset; note that this does not imply that the model predictions are unbiased.
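The following sketch shows how BiasAmp_{A→T} can be computed from binary group and task indicator matrices, following our reading of the definitions in Wang and Russakovsky (2021); the exact published implementation may differ in details such as how soft predictions are thresholded.

```python
import numpy as np

def biasamp_a_to_t(A, T, T_hat):
    """A: (n, |A|) binary group labels; T, T_hat: (n, |T|) binary ground-truth
    and predicted task labels. Returns the aggregated BiasAmp_{A->T}."""
    n = A.shape[0]
    p_a = A.mean(axis=0)                              # P(A_a = 1)
    p_t = T.mean(axis=0)                              # P(T_t = 1)
    p_at = (A.T @ T) / n                              # P(A_a = 1, T_t = 1)
    y = p_at > np.outer(p_a, p_t)                     # direction of correlation y_at
    counts = A.sum(axis=0, keepdims=True).T           # group sizes (guard against 0)
    cond_true = (A.T @ T) / counts                    # P(T_t = 1 | A_a = 1)
    cond_pred = (A.T @ T_hat) / counts                # P(T_hat_t = 1 | A_a = 1)
    delta = cond_pred - cond_true                     # Delta_at
    return np.where(y, delta, -delta).mean()          # average over all (a, t)
```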
Extrinsic Bias & Fairness
The second phase of our analysis measures extrinsic bias amplification: downstream performance and fairness (group disparity). A given model is fine-tuned on downstream tasks that require different reasoning skills based on the image context. We evaluate model performance with respect to the three demographic groups defined in A and compare the results in search of the most equitable system.
Gender-neutral Domain Adaptation
Motivated by the fact that models are known to acquire unwanted biases during pretraining (Hall et al., 2022), we also investigate what happens if a model M_D is further pretrained for one additional epoch on gender-neutral data, with the goal of creating a more gender-neutral model M^N_D. We hypothesize that this may be sufficient to reduce the biases encoded in the original model. Given a dataset D, a new dataset D_N is created by substituting gender-related tokens in the text with gender-neutral tokens. The substitution is based on a hand-crafted lexicon,[6] e.g., woman or man may be substituted with person.[7] The new model M^N_D is used for both the intrinsic and extrinsic bias evaluations.
Models
We take the LXMERT architecture (Tan and Bansal, 2019) as a popular representative of V&L models, and build our controlled analysis on VOLTA (Bugliarello et al., 2021). VOLTA is an implementation framework that provides a fair setup for comparing V&L models pretrained under the same conditions, which enables us to compare the influence of diverse training data on representational bias. In this case, LXMERT_180K refers to the original checkpoint and LXMERT_3M to the model trained on CC3M (Bugliarello et al., 2021). We also study ALBEF in two sizes and BLIP. Table 1 lists the models included in our analysis.
Gender-neutral Data
As a natural extension of studying representational gender bias, we want to evaluate to what extent gender-neutral data helps to mitigate gender bias. Amend et al. (2021) showed that gender-neutral training might be a viable approach for reducing gender bias in image captioning models. We study its effect in more generic pretrained V&L models.
The gender-neutral pretraining data is the result of substituting terms with grammatical gender for gender-neutral equivalents, e.g., "A woman walking her dog" translates into "A person walking their dog." To this end, we create a list of gender entities[8] by merging previous hand-curated lexicons used in a similar context to ours, provided by Antoniak and Mimno (2021).[9] Starting from a pretrained checkpoint, we perform an extra epoch of pretraining. The training is based on a linear function that increases the probability for the model to learn from gender-neutral captions. The starting rate is p = 0.15 and, as the training progresses, the probability of getting a gender-neutral caption increases to p = 1.0 at the last step. Note that as the probability of getting a gender-neutral caption increases, the learning rate decreases. This methodology supports our intuition that starting with a fully gender-neutral corpus would be too drastic for the model to adapt to, and would instead cause catastrophic forgetting.
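A minimal sketch of this pipeline is given below, assuming a toy substitution lexicon (NEUTRAL_MAP is a stand-in for the merged lexicon) and the linear schedule from p = 0.15 to p = 1.0 described above.

```python
import random

# Toy stand-in for the merged gender-neutral lexicon (assumption, not the real list).
NEUTRAL_MAP = {"woman": "person", "man": "person", "her": "their", "his": "their"}

def neutralise(caption):
    """Replace gendered tokens with their gender-neutral equivalents."""
    return " ".join(NEUTRAL_MAP.get(tok, tok) for tok in caption.split())

def pick_caption(caption, step, total_steps, p_start=0.15, p_end=1.0):
    """Linearly raise the chance of drawing the neutral caption over training."""
    p = p_start + (p_end - p_start) * step / max(total_steps - 1, 1)
    return neutralise(caption) if random.random() < p else caption
```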
Finally, we continue pretraining the original model checkpoints for an extra epoch without the gender-neutral alternative (i.e., p = 0.0). The evaluation of this new checkpoint will help us draw conclusions on longer training, as well as ensure the correct implementation of our setup.
Evaluation Tasks
For the evaluation of downstream tasks, we report task performance and analyse group disparities. Bias amplification is reported on the validation splits.
MLM We follow standard practice for assessing gender bias in V&L models (Zhao et al., 2017; Hendricks et al., 2018; Wang et al., 2019; Tang et al., 2021; Srinivasan and Bisk, 2021; Agarwal et al., 2021; Cho et al., 2022) and expose representational bias in a masked language modelling (MLM) task. The masked words are gendered terms given by the same lexicon used in §5.2. Personal pronouns (if any) are also masked to avoid leaking gender information into the model representation. For example, "A woman walking her dog" would be masked as "A [MASK] walking [MASK] dog". The image associated with each sentence is also input to the model, in a setup that reflects the pretraining conditions.
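The masking rule itself is simple string substitution over the gendered lexicon; the sketch below uses a toy word set in place of the full lexicon and reproduces the example above.

```python
# Toy stand-in for the gendered lexicon, including personal pronouns.
GENDERED = {"woman", "man", "girl", "boy", "her", "his", "she", "he"}

def mask_gendered(caption, mask_token="[MASK]"):
    """Replace every gendered term (incl. pronouns) with the mask token."""
    return " ".join(mask_token if t.lower() in GENDERED else t
                    for t in caption.split())

assert mask_gendered("A woman walking her dog") == "A [MASK] walking [MASK] dog"
```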
We investigate the intrinsic bias of the models as detailed in §3, i.e., we look at the co-occurrence of context words (e.g., car, ball) with particular word choices from the model (e.g., gender words like woman, child). Previous work (Sedoc and Ungar, 2019; Antoniak and Mimno, 2021; Delobelle et al., 2021) showcases how the measurement of bias can be heavily influenced by the choice of target seed words. To avoid misleading results from low-frequency words, we define the set of target words to be the 100 most frequent common nouns that co-occur with the gender entities in the corresponding training data. Table 2 provides a summary of the gender distribution.
To evaluate intrinsic bias, we do not look at the exact word prediction but instead consider two options for annotating the gender of the predicted word. First, we can extract and sum the probabilities of all male, female and gender-neutral tokens within our set and select the most probable gender entity. However, given that the distribution of tokens follows Zipf's law, the probability mass computed for each gender group is nearly equal, yielding inconclusive results. Therefore, we use the gender category of the most probable token. The bias present in model predictions is then measured with the statistical and bias amplification metrics presented in §4.2.
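A sketch of this decision rule, assuming (on our part) that the prediction is restricted to tokens inside the gendered lexicon, could look as follows.

```python
def predicted_gender(vocab_probs, male, female, neutral):
    """vocab_probs: {token: probability} at a mask position; male/female/neutral
    are token sets. Returns the gender category of the most probable token."""
    scored = (male | female | neutral) & vocab_probs.keys()
    if not scored:
        return None                      # no lexicon token in the vocabulary
    top = max(scored, key=vocab_probs.get)
    if top in male:
        return "Male"
    if top in female:
        return "Female"
    return "Neutral"
```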
Visual Question Answering VQA (Antol et al., 2015) requires the model to predict an answer given an image and a question. LXMERT formulates VQA as a multi-answer classification task, while ALBEF and BLIP treat it as a language generation task.
We evaluate models on the VQAv2 (Goyal et al., 2017) and GQA (Hudson and Manning, 2019) datasets, and report performance as VQA-Score and accuracy, respectively. Bias amplification is measured on the subset of question-answer pairs targeting people. Gender is inferred from the question, considering all the gender entities presented in Appendix A. We filter out any answer category whose answer does not occur with gender entities at least 50 times in the training set. Finally, numerical and yes/no question-answer pairs are also removed, leaving a total of 165 answer categories in VQAv2 and 214 in GQA.
Natural Language for Visual Reasoning NLVR2 (Suhr et al., 2019) requires the model to predict whether a text describes a pair of images. The notion of bias amplification considered in this project would require us to manually annotate the gender of all the images in order to extract gender-context patterns from the training data. For this reason, we only evaluate group disparity on NLVR2 through differences in performance, reported as accuracy.
Image-Text Retrieval This retrieval task contains two subtasks: text-to-image retrieval (IR), where we query the model with a caption to retrieve an image, and image-to-text retrieval (TR), where we use an image to retrieve a suitable caption. We report Recall@1 on the Flickr30K (Plummer et al., 2015) benchmark. Bias amplification is measured on the subset of data targeting people. In IR, we query the model with captions that include a word from the set of male-related or female-related tokens and compare it to the gender annotated in the retrieved image. In TR, we query the model with images annotated as 'Male' or 'Female' and compare it to the gendered terms in the retrieved caption. Captions with gender-neutral terms are treated as a separate case to assess how often the models retrieve images from each group, since the retrieved image could potentially be valid for any gender. In both subtasks, we consider that the model does not amplify gender bias when the retrieved image or caption has a gender-neutral subject.
Intrinsic Bias
We evaluate intrinsic bias in encoder-only models. Considering that bias varies as a function of the bias in a dataset, amongst other variables (Hall et al., 2022), we define our experiments with LXMERT variants as our control setup: the same model architecture is trained with the same hyperparameters on disjoint corpora, yielding two versions of the model, LXMERT_180K and LXMERT_3M.
Gender-neutral pretraining mitigates gendered outputs Figure 2 shows results for LXMERT_180K models; complete results are in Appendix C. A model is penalised when it predicts a token from the opposite gender, but we consider a gender-neutral term a valid output.[10] The models pretrained with gender-neutral data have near-perfect F1 performance, as they learnt to predict gender-neutral tokens where their standard counterparts, LXMERT_180K and LXMERT_3M, had low confidence in the most probable token.[11] We presume these are images where the visual appearance of the main subject is unclear. Interestingly, the trade-off between precision and recall has opposite directions for the Female and Male groups vs. Neutral in LXMERT_180K and LXMERT_3M: the models tend to output female- and male-related tokens more often than neutral-related ones, even when the subject in the image was annotated as gender-neutral (low recall).
Pretrained models reflect training data biases
Table 3 shows the aggregated bias amplification measured in the encoder-only model variants. Our bias mitigation strategy has the same consistent behaviour across LXMERT models and evaluation data (COCO or CC3M): models tend to reflect the same degree of bias present in the data (BiasAmp_{T→A} closer to zero). The ALBEF^{N-COCO}_14M and ALBEF^{N-CC3M}_14M models benefit from pretraining on gender-neutral data differently, as both decrease the overall bias amplification. Wang and Russakovsky (2021) caution against solely reporting the aggregated bias amplification value, as it could obscure attribute-task pairs that exhibit strong bias amplification. We report it here as a relative metric to compare the overall amplified bias between the models, and it should not be considered on its own. See Appendix C for results broken down by gender.
We also investigated the equivalent of LXMERT^N_3M, but pretrained on gender-neutral data for a reduced number of steps to match those of LXMERT^N_180K. We verified that more pretraining steps on gender-neutral data equate to reduced bias amplification in absolute terms.
Extrinsic Bias & Fairness
Trade-offs in task performance Downstream performance on the test sets is shown in Table 4. LXMERT_180K may require more pretraining steps to converge, as we verify that the performance improvement observed in LXMERT^N_180K is mainly due to the extra pretraining steps, regardless of the gender-neutral data. Our strategy for mitigating gender bias in pretrained models generally leads to lower task performance on NLVR2 and image retrieval, revealing a trade-off between bias mitigation and task performance. The same trade-off has been observed in language models (He et al., 2022; Chen et al., 2023). However, gender-neutral models report similar or even superior performance on question answering and text retrieval tasks compared to their original versions.
Gender-neutral models consistently reduce group disparity Group performance is depicted in Figure 3 for a subset of models and tasks (Figure 3 reports VQA-accuracy in VQAv2, accuracy in GQA, and Recall@1 in F30K by gender group: male (M), female (F), and neutral (N)). Table 7 in Appendix D shows the complete results. We observe that group disparity is consistently reduced on VQAv2 and the retrieval tasks. An exception are the LXMERT models, which show a minor, undesirable increase in group disparity on VQAv2, GQA and text retrieval tasks. For instance, in question-answering tasks with LXMERT, we observe a reduction in the min-max gap of 4.5 points (LXMERT^N_180K) in VQAv2, while the min-max gap increase in GQA is only 0.4 points. Note that Tan and Bansal (2019) pretrained LXMERT_180K on GQA train and validation data, which results in very high performance (∼85.0 for all groups) on the GQA validation set. We speculate that the gains in performance equality across groups could be due to a shift of the final word representations to a vector space more equidistant between gendered terms and their context. That is, the conditional probability distribution of a gendered term given its context is smoother across different demographic groups. We leave exploration of this for future work. In recent work, Feng et al. (2023) continued pretraining language models on partisan corpora and observed that these models do acquire (political) bias from said corpora. In our case, the continued pretraining could make the M^N_D models more robust regarding gendered terms.
Gender-neutral training reduces fine-tuning variance Dodge et al. (2020) and Bugliarello et al. (2021) analysed the impact of random seeds in fine-tuning. We perform this analysis on our control setup and observe that the gender-neutral variants of LXMERT consistently report lower variance in performance on all tasks except NLVR2. We do, however, observe a strong variance in the fine-tuning process for NLVR2 due to the random weight initialisation of the classification layer. See Appendix E for specific results across 6 runs.
Intrinsic & extrinsic bias are independent We estimate bias amplification in VQA tasks by evaluating the fluctuations in the models' predictions when they differ from the correct answer; otherwise, the models are said not to amplify the bias from the data. We find that all model variants, M_D and M^N_D, reduce the gender bias across tasks.
However, contrary to what we observed in the pretrained models (Table 3), there is no evidence that the gender-neutral pretraining influenced the extrinsic bias of the models positively (or negatively): it depends on the model, downstream task and gender group (see Appendix E for results on BiasAmp_{A→T} fine-tuning variance). Figure 4 displays BiasAmp_{A→T} broken down by gender category, measured on GQA for a subset of models. Whereas the degree of bias amplification is fairly consistent between a model M_D and M^N_D on VQAv2 (see Appendix D), there is higher variance on GQA: ALBEF^{N-COCO}_14M reduces the bias amplification compared to ALBEF_14M, but we observe the opposite effect for BLIP^{N-COCO}.
In the retrieval tasks, we look into the models' behaviour when querying them with neutral instances. Regardless of the degree of intrinsic bias in the model, all models exhibit the same trend: in IR, they mostly retrieve images labelled as 'Neutral', but twice as many 'Male' images as 'Female' ones. We find similar results for TR, i.e., querying with images whose main actor is labelled as Neutral; in this scenario, however, only half of the retrieved captions relate to people. See Appendix D for detailed results.
Conclusion
This paper presented a comprehensive analysis of gender bias amplification and fairness of encoder-only and encoder-decoder V&L models. The intrinsic bias analysis shows consistent results, in terms of bias mitigation, for models trained on gender-neutral data, even if these models reflect biases present in the data instead of diminishing them (as we observed with LXMERT). In line with previous findings in language models (Goldfarb-Tarrant et al., 2021; Kaneko et al., 2022; Orgad et al., 2022), intrinsic bias in V&L models does not necessarily transfer to extrinsic bias on downstream tasks. Similarly, we find that the bias in a model and its empirical fairness (group disparity in task performance) are in fact independent matters, which is in line with the NLP literature (Shen et al., 2022; Cabello et al., 2023). Intrinsic bias can potentially reinforce harmful biases, but these may not impact the treatment of groups (or individuals) on downstream tasks. We believe that bias and fairness should always be carefully evaluated as separate matters. One of the key findings of our work is that the extra pretraining steps on gender-neutral data are beneficial for reducing group disparity in every model architecture tested on VQAv2, and in the majority of models for both retrieval tasks. Crucially, there is no penalty to pay for this fairer outcome: the overall task performance of gender-neutral models is similar to or better than that of their original versions.
Limitations
The framework to characterize gender bias in V&L models presented in this study is general and extensible to analysing other forms of bias in multimodal models.
We consider three base architectures to settle on the implementation. However, our work would benefit from analysing a wider range of models.
Studying the effects of gender-neutral pretraining on V&L models with a frozen language model, such as ClipCap (Mokady et al., 2021) and BLIP-2 (Li et al., 2023), is left as future work.
Due to computational limitations, we restricted most of our analysis to single runs. We perform a first analysis across multiple random seeds for LXMERT models in Appendix E. There, we notice that gender-neutral models seem to have lower variance after fine-tuning. Yet, the cross-seed performance of a given model can fluctuate considerably for some tasks (e.g., NLVR2), corroborating previous findings from Bugliarello et al. (2021). Likewise, bias amplification, along with other fairness metrics like group disparity, often fluctuates across runs. We report the bias amplification variance in fine-tuning of LXMERT models, but the absence of confidence intervals for all models and tasks (for the same reason stated above) should be taken into account. We hope to motivate future work to address this issue.
Moreover, despite the existence of multilingual multimodal datasets (Elliott et al., 2016; Liu et al., 2021; Bugliarello et al., 2022, inter alia), our experimental setup is limited to English datasets and models. Studies of (gender) bias using only English data are not complete and might yield inaccurate conclusions, albeit overcoming the structural pervasiveness of gender specifications in grammatical-gender languages such as German or Spanish is not trivial (Gabriel et al., 2018). Likewise, our work considers a single dimension of social bias (gender). Further research analysing social biases in V&L models should account for intersectionality: how different social dimensions, e.g., gender and race, can intersect and compound in ways that can potentially impact model performance for the most disfavoured groups, e.g., Black women, as discussed in Crenshaw (1989).
Ethics Statement
The models and datasets used in this study are publicly available, and we strictly follow the ethical implications of previous research related to the data sources. Our work is based on sensitive information, such as gender, inferred from reported visual appearance in the image captions. We would like to emphasize that we are not categorizing biological sex or gender identity, but rather using the given image captions as proxies for outward gender appearance.
B Models
In this section, we provide an overview of the models we use in our evaluation. We refer to their original works for more details.
LXMERT (Tan and Bansal, 2019) is a cross-modal architecture pretrained to learn vision-and-language representations. It consists of three Transformer (Vaswani et al., 2017) encoders, where visual and language inputs are encoded separately in two independent stacks of Transformer layers before being fed into the cross-modality encoder. The cross-modality encoder uses bi-directional cross attention to exchange information and align the entities across the two modalities. LXMERT is trained with four objectives: masked language modelling (MLM), masked object prediction, image-text matching (ITM) and image question answering.
Similar to LXMERT, ALBEF (Li et al., 2021) is a dual-stream encoder (Bugliarello et al., 2021) that first learns separate visual and textual embeddings using Transformer-based image and text encoders, and then fuses them in a cross-modal Transformer using an image-text contrastive (ITC) loss, which enables more grounded vision-and-language representation learning. The model is pretrained with two other objectives: masked language modelling (MLM) and image-text matching (ITM) on the multimodal encoder. Unlike LXMERT, ALBEF does not rely on image features extracted from an off-the-shelf object detector, but directly feeds the raw image into a Vision Transformer (Dosovitskiy et al., 2021).
BLIP (Li et al., 2022) is a versatile model based on a multimodal mixture of encoder-decoder networks that can be applied to a wide range of downstream tasks. The authors introduce a novel bootstrapping method to generate synthetic captions and remove noisy pairs from large-scale web data. Unlike LXMERT and ALBEF, BLIP is trained with an autoregressive language modelling objective that allows the generation of coherent captions given an image. The model is also pretrained using the unimodal image-text contrastive (ITC) loss and the cross-modal image-text matching (ITM) loss used by ALBEF.
MLM experiment broken down by gender Table 6 provides a more granular look at which gender groups actually amplify or decrease the bias in the pretrained models.
D Bias & Fairness in Downstream Tasks
Extrinsic Bias The following graphs complement the results shown in §6.2 for bias amplification measured on downstream tasks: Figure 6 shows results on GQA; Figure 7 shows results on VQAv2; Figures 8 and 9 show the bias revealed in image-text retrieval tasks when querying the models with a gender-neutral caption (or image), respectively.
Task performance & Fairness
We present granular results on task performance in validation in Table 7, together with group disparity, defined as the min-max difference between group performances (∆).
(Table 7 columns: Acc. and ∆(↓) per task for GQA and NLVR2; r@1 and ∆(↓) for F30K IR and TR.)
Figure 1: A V&L model pretrained on data D (M_D) is further pretrained on gender-neutral multimodal data D_N, resulting in a gender-neutral V&L model (M^N_D). Both models can then be used in a two-phase analysis: 1) bias amplification is measured on the intrinsic bias of the pretrained models, and 2) bias amplification, task performance and fairness are evaluated on the extrinsic performance of the fine-tuned models.
Figure 2: Statistical analysis of gender bias in MLM with gendered terms masked. Predicting a token from the gender-neutral set is always considered correct (Precision=1). Models report higher recall scores for Male (M) and Female (F) groups, showcasing the completeness of positive predictions; it is the opposite for Neutral (N) tokens.
Figure 4: Bias amplification measured on question answering (GQA), broken down by gender group. M^N models are gender-neutral, pretrained on COCO.
Intrinsic bias Figure 5 complements Figure 2 from the main paper, showing statistical results measured in the intrinsic bias analysis in our control setup.
Figure 5: Statistical analysis of gender bias found through masked language modelling with gendered terms masked. Prediction of a token from the gender-neutral set is always considered correct (Precision=1). Models report higher recall scores for Male (M) and Female (F) groups, showcasing the completeness of positive predictions, whereas it is the opposite for Neutral-related (N) tokens.
IR: M^N are gender-neutral models pretrained on COCO.
IR: N are gender-neutral models pretrained on CC3M.
Figure 8: Extrinsic bias measured on text-to-image retrieval (IR) on Flickr30K. Bias is measured as the percentage of images retrieved from each group when querying the models with a gender-neutral caption.
TR: M^N are gender-neutral models pretrained on CC3M.
Figure 9: Extrinsic bias measured on image-to-text retrieval (TR), panels (c)-(d), on Flickr30K. Bias is measured as the percentage of captions retrieved from each group when querying the models with a gender-neutral image.
Table 1: Summary of the models. The subscript in the model name indicates the number of images in the pretraining set.
Table 2: Summary of the gender distribution across validation splits in each dataset. Note that for COCO, this refers to the minival split of (Tan and Bansal, 2019). COCO and F30K have five captions per image. Gender was inferred from image captions for COCO, CC3M and F30K, from the questions in VQAv2 and GQA, and from the sentence given in NLVR2.
Table 3: BiasAmp_{T→A} averaged over attributes (gender entities) and tasks (top-100 nouns) for LXMERT and ALBEF_14M models. Light and dark backgrounds indicate bias amplification measured on in-domain and out-of-domain data, respectively. Negative values indicate an overall decrease of the bias in the model's predictions.
Table 4: Test results for a model M_D and its gender-neutral version M^N_D. We report VQA-accuracy on VQAv2, accuracy on GQA and NLVR2, and Recall@1 on F30K. Results for the original models were computed by us.
Emanuele Bugliarello is supported by funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199. Stephanie Brandl is funded by the European Union under Grant Agreement no. 10106555, FairER. Views and opinions expressed are those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Executive Agency (REA). Neither the European Union nor REA can be held responsible for them. This work was supported by a research grant (VIL53122) from VILLUM FONDEN.
"Computer Science"
] |
Clustering Using Student t Mixture Copulas
Tewari et al. (Parametric characterization of multimodal distributions with non-Gaussian modes, pp 286-292, 2011) introduced Gaussian mixture copula models (GMCM) for clustering problems, which do not assume normality of the mixture components as Gaussian mixture models (GMM) do. In this paper, we propose Student t mixture copula models (SMCM) as an extension of GMCMs. GMCMs require weak assumptions, yielding a flexible fit and a powerful clustering tool. Our SMCM extension offers, in a natural way, even more flexibility than the GMCM approach. We discuss estimation issues and compare Expectation-Maximization (EM)-based with numerical simplex optimization methods. We illustrate the SMCM as a tool for image segmentation.
Introduction
Data clustering is a major domain of unsupervised learning, which means that we do not observe an outcome or response variable. The aim of clustering is to segment the data into groups or "clusters" whose members are close to each other in some sense. One approach is to group the data via the famous non-parametric k-means algorithm. Roughly speaking, k-means partitions the data into k clusters by minimizing the sum of the squared Euclidean distances from each point to its cluster center. Often k-means is used as a first step or to determine the appropriate number of clusters. For more information on k-means and clustering in general, see, e.g., [1,11].
The focus in this paper, however, lies on probabilistic methods for clustering. The basic idea is to fit a multimodal mixture probability distribution to the data. The clustering then works by assigning to each data point the mixture component with the highest probability proportion. Gaussian mixture models (GMMs), where the mixture components are multivariate normally distributed, are certainly the most widely used mixture distribution model for clustering in the literature. For example, Carson et al. [6] and Permuter et al. [17] applied GMMs for clustering in image segmentation, Chen et al. [7] and Torres-Carrasquillo et al. [24] for language and accent identification, and Yeung et al. [28] for biostatistical clustering. Aggarwal [1] and Friedman [11] gave a detailed introduction to probabilistic model-based clustering. Berkhin [3] surveys several clustering algorithms, including mixture models. Steinley and Brusco [22] investigated empirical properties of mixture models for clustering in several simulation studies.
The determination of the number of clusters is a delicate issue. However, the literature offers several approaches; see [1,3,11] and the references therein. In this paper, we assume the number of clusters to be known and fixed, and refer to the extensions in the literature for unknown cluster numbers.
The underlying normality assumption of the mixture components may not be reasonable for many applications, which motivated Tewari et al. [23] to introduce Gaussian mixture copula models (GMCMs) instead. Here, the normality assumption of the mixture components is dropped (in fact there is no distributional assumption for these), with assumptions only on the dependence structure of the data. Bilgrau et al. [4] implemented the R package GMCM, which allows easy application of the GMC model. The GMCM has been applied in various contexts. For instance, Tewari et al. [23] and Bilgrau et al. [4] used it for image segmentation, Wang et al. [27] and Yu et al. [29] for wind energy predictions, and Samudra et al. [20] for surgery scheduling. Rajan and Bhattacharya [19] extended the GMCM approach to also handle mixed data (continuous and discrete), and Kasa et al. [13] improved it for high-dimensional data.
In this paper, we discuss the extension of GMCMs to Student t mixture copula models (SMCMs). We therefore combine the GMCM approach of [23], as a generalization of GMMs, and the Student t mixture models (SMMs) of [9] (who applied the SMM to image compression). The difference between GMM and SMM is the distribution of the mixture components. Both models can be learned with an appropriate version of the Expectation-Maximization (EM) algorithm [8].
The extension is to model the dependence structure via copulas rather than to model the observed data. Thus, GMCMs are capable of modeling multimodality (in contrast to, e.g., Gaussian copulas) and a dependence structure (in contrast to GMMs). The SMCM additionally allows for heavy tails of the mixture components in the dependence structure.
The remainder of this paper is organized as follows. The next section gives a brief technical introduction to general copula models and mixture distributions and proposes the SMCM. The third section discusses the estimation algorithms used to learn SMCMs. The fourth section applies the proposed SMCM to test datasets and the task of image segmentation and compares it with the GMCM. The last section concludes.
Student Mixture Copulas
First, we introduce some general background on copulas; see [16]. Sklar's theorem [21] states that a multivariate cumulative distribution function F(z_1, ..., z_d; θ) = P[Z_1 ≤ z_1, ..., Z_d ≤ z_d] of random variables Z_1, ..., Z_d can be expressed in terms of its marginal distribution functions F_j(z_j) = P[Z_j ≤ z_j] and a copula C. Here, θ denotes the parameter vector to be estimated. Precisely, the statement is

F(z_1, ..., z_d; θ) = C(F_1(z_1; θ), ..., F_d(z_d; θ)).    (1)

If the copula function C and the marginal distribution functions F_j are differentiable, we can express the joint density in terms of the copula density c and the marginal densities f_j:

f(z_1, ..., z_d; θ) = c(u_1, ..., u_d; θ) ∏_{j=1}^d f_j(z_j; θ),    (2)

where u_j = F_j(z_j; θ). Hence, the copula density is given by

c(u_1, ..., u_d; θ) = f(F_1^{-1}(u_1; θ), ..., F_d^{-1}(u_d; θ); θ) / ∏_{j=1}^d f_j(F_j^{-1}(u_j; θ); θ).    (3)

We proceed by introducing the class of finite mixture distributions. An m-component mixture distribution has a density function of the form

f(z; θ) = Σ_{k=1}^m π_k g(z; θ_k),    (4)

where π_k ≥ 0 for 1 ≤ k ≤ m and Σ_{k=1}^m π_k = 1. The multivariate density function g only depends on the parameter vector θ_k of the k-th component. If we take g to be the density of a d-dimensional normal distribution we obtain the GMM, and for the Student t distribution we obtain the SMM.
Joining the copula approach with mixture densities, we obtain the SMCM (and analogously the GMCM). Let c be as in (3) with f as in (4). If g is the normal density, c is the copula density of the GMCM, and if g is the Student t density, c is the copula density of the SMCM. More precisely, the SMCM has density function

c_sm(u_1, ..., u_d; θ) = f_sm(F_sm,1^{-1}(u_1; θ), ..., F_sm,d^{-1}(u_d; θ); θ) / ∏_{j=1}^d f_sm,j(F_sm,j^{-1}(u_j; θ); θ),    (5)

with

f_sm(z; θ) = Σ_{k=1}^m π_k g_s(z; θ_k),    (6)

where f_sm,j is its j-th marginal. Let g_s(z_1, ..., z_d; θ_k) denote the d-dimensional Student t density function with θ_k = (μ_k, vec Σ_k, ν_k), where μ_k is the location parameter vector, Σ_k is a scaling matrix and ν_k is the degree of freedom. The full parameter vector is θ = (π_1, ..., π_m, θ_1, ..., θ_m). Now let {x^(i)}_{i=1}^n denote the observed data; that is, a dependence structure is allowed along the d dimensions but not between the n vector-valued observations. Each x^(i)_j has marginal distribution function H_j, which is, for now, assumed to be known. Thus, u^(i)_j = H_j(x^(i)_j), where the u^(i)_j are distributed uniformly on (0, 1). This means that (u^(i)_1, ..., u^(i)_d) can be modeled by a copula C, where C is the copula function of, e.g., the GMCM or SMCM. In this relation, {u^(i)}_{i=1}^n is the copula process and {z^(i)}_{i=1}^n, with z^(i)_j = F_sm,j^{-1}(u^(i)_j; θ), is the latent process. When the copula level is modeled with an SMCM (GMCM), the latent process follows the SMM (GMM). Figure 1 compares the observed data, the copula level and the latent level of an SMCM with a GMCM for n = 10,000. For this figure, we simulated {u^(i)}_{i=1}^n according to a 2-dimensional and 3-modal SMCM and a GMCM using the same set of parameters (except the degrees of freedom). To simulate the observed level, we set H_j^{-1} ≡ Φ^{-1} for any j, the standard normal quantile function. We see that on the observed level the clusters of the SMCM have more outliers than those of the GMCM. This carries over to the copula and the latent levels, where the SMCM clusters have fuzzier and more overlapping boundaries than the GMCM clusters.
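For concreteness, the SMCM copula density (5) can be evaluated numerically as sketched below, using SciPy's multivariate Student t for the mixture components; the root-search bracket and the parameter containers are our choices, not prescriptions from the paper.

```python
import numpy as np
from scipy import optimize, stats

def smm_pdf(z, pis, mus, Sigmas, nus):
    """Mixture density f_sm(z; theta) as in Eq. (6)."""
    return sum(p * stats.multivariate_t(m, S, df=v).pdf(z)
               for p, m, S, v in zip(pis, mus, Sigmas, nus))

def marginal_cdf(x, j, pis, mus, Sigmas, nus):
    """j-th marginal F_sm,j: a mixture of univariate t distributions."""
    return sum(p * stats.t.cdf(x, df=v, loc=m[j], scale=np.sqrt(S[j, j]))
               for p, m, S, v in zip(pis, mus, Sigmas, nus))

def marginal_pdf(x, j, pis, mus, Sigmas, nus):
    return sum(p * stats.t.pdf(x, df=v, loc=m[j], scale=np.sqrt(S[j, j]))
               for p, m, S, v in zip(pis, mus, Sigmas, nus))

def copula_density(u, pis, mus, Sigmas, nus):
    """SMCM copula density c_sm(u; theta), Eq. (5)."""
    d = len(u)
    # Invert each mixture marginal by root search (no closed form exists);
    # the bracket is crude and may need widening for extreme parameters.
    z = np.array([optimize.brentq(
            lambda x, jj=j: marginal_cdf(x, jj, pis, mus, Sigmas, nus) - u[jj],
            -1e3, 1e3) for j in range(d)])
    denom = np.prod([marginal_pdf(z[j], j, pis, mus, Sigmas, nus)
                     for j in range(d)])
    return smm_pdf(z, pis, mus, Sigmas, nus) / denom
```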
Estimation
In this section, we follow [23] and propose two maximum likelihood estimation algorithms for the parameters of an SMCM. We aim to learn an SMCM by maximizing the log-likelihood corresponding to the copula density. First, we discuss an Expectation-Maximization-based algorithm, and second a simplex approach. Both also work well for the GMCM; we also highlight the differences. Consider the copula-level observations u^(i) = (u^(i)_1, ..., u^(i)_d), i = 1, ..., n. We assume d < n; extensions are discussed in the last section. We obtain the corresponding pseudo observations z^(i)_j = F_sm,j^{-1}(u^(i)_j; θ) for any i and j. The log-likelihood function is given by

ℓ(θ) = Σ_{i=1}^n log c_sm(u^(i)_1, ..., u^(i)_d; θ).    (7)
Pseudo EM Algorithm
We propose a Pseudo Expectation-Maximization (PEM) algorithm to estimate the parameters of the SMCM. It is important to note that we cannot apply the EM algorithm as for mixtures of Student t distributions. This is likewise the case for the GMCM, where [23] discussed why the EM algorithm does not necessarily find the true maximum. The key point is that the inputs to an EM algorithm have to remain fixed, which is indeed the case when we only consider an SMM or GMM. However, this is not the case for the SMCM (or GMCM). The problem is that, while the input u to an SMCM remains fixed, the inverse distribution values or pseudo observations z_j = F_j^{-1}(u_j; θ) depend on the parameters. Since the parameter estimate is updated after each iteration of an EM algorithm, the pseudo observations also change in every step. Thus, the assumption of fixed observations is violated.
Therefore, the PEM does not necessarily converge to a local optimum. Nevertheless, it can provide plausible starting values for simplex- or gradient-based methods, as discussed below.
Another issue is that we only observe (x^(i)_1, ..., x^(i)_d); in practice, H_j is unknown and has to be estimated. We follow [4,23] and estimate (u^(i)_1, ..., u^(i)_d) non-parametrically using the empirical cumulative distribution function

Ĥ_j(x) = (1/n) Σ_{i=1}^n 1{x^(i)_j ≤ x}.

To avoid infinities in the computations, û^(i)_j is rescaled with the factor n/(n+1).
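A minimal sketch of these rank-based pseudo-observations (ignoring ties):

```python
import numpy as np

def pseudo_observations(X):
    """Empirical-CDF values of each column of X (n x d), rescaled by n/(n+1)
    so that no value equals 0 or 1."""
    n = X.shape[0]
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1  # ranks 1..n per column
    return ranks / (n + 1.0)
```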
Algorithm 1 outlines the PEM. Remarks on the notation: F_sm,j^{-1} denotes the j-th marginal distribution function and ψ denotes the digamma function. With the index t, we denote the t-th step of the PEM. We stop the PEM algorithm after some convergence criterion d(θ^(t+1), θ^(t)) is met. Possible convergence criteria include the one used in [23] (criterion 1) and two further criteria discussed in [4] for the GMCM (criteria 2 and 3). Bilgrau et al. [4] also discussed convergence criteria based on the likelihood for the hidden GMM. However, we stick with the third criterion because in our experimental results the effect of the stopping rule was minor. The reason for this, which is more pronounced for the SMCM than for the GMCM, is that the log-likelihood of the SMCM attains a maximal value over the iterations rather early, while the log-likelihood of the hidden SMM is still increasing. This suggests using a rather high tolerance or additionally employing a deterministic stopping rule after too many iterations.
Of course, this algorithm rarely computes the true maximum, which is why we use it as a first guess in the following way. We choose different initial vectors θ^(0); for each, we run the PEM algorithm and obtain an estimate θ̂. Out of these, we choose the parameter vector with the highest log-likelihood (7). This parameter vector is then used in the numerical optimization outlined in the next subsection.
We discuss the computational complexity of the proposed PEM in Algorithm 1. The complexity of each E-step is O(mnd³) (the d³ is due to the matrix inversions Σ_k^{-1}). The complexity of each M-step is O(mnd²). Thus, the complexity of each EM iteration is O(mnd³). Given a deterministic additional stopping rule of the PEM after T iterations, the overall complexity is O(Tmnd³). This means the complexity is higher than that of the PEM for GMCMs, which is O(Tmnd²). In practice, if the dimensionality is not too high, the runtime is dominated by the ν_k root searches in the M-step.
Numerical Maximization
Since the PEM algorithm discussed above finds only a pseudo maximum, we cannot expect that the different seeds for the PEM will lead to a good estimator of θ. This motivates us to additionally apply a numerical optimization scheme, as in [4,23] for the GMCM. Tewari et al. [23] proposed a gradient descent-based algorithm, and Bilgrau et al. [4] compared the Nelder and Mead [15], simulated annealing [2] and L-BFGS-B [5] optimization procedures. In their experiments, Bilgrau et al. [4] found that the Nelder-Mead procedure worked best in terms of runtime and numerical robustness, with similar clustering accuracy. We found the same for the Nelder-Mead approach. There are additional difficulties for the SMCM in the optimization compared with the GMCM because of the m additional parameters ν_1, ..., ν_m.
As in [4], we use the Cholesky decomposition for the scaling matrices, Σ_k = A_k A_k^T, whose elements are vectorized so that the Nelder-Mead algorithm can maximize over identifiable parameters. Moreover, as [4] also noted, the GMCM (and the SMCM) are invariant to translations and scaling, meaning that only relative distances between the component modes can be inferred. Hence, we set μ_1 = 0, and Σ_1 is scaled such that it has only 1's on the diagonal. The remaining parameters are estimated via Nelder-Mead maximization.
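A sketch of one possible packing of this identifiable parameterisation for Nelder-Mead is given below; the softmax weights and log-ν transform are our additions to keep the search unconstrained, and are not spelled out in the text.

```python
import numpy as np

def unpack(theta, m, d):
    """Map a flat Nelder-Mead vector to (pis, mus, Sigmas, nus)."""
    pos = 0
    pis = np.exp(theta[pos:pos + m]); pis /= pis.sum(); pos += m  # softmax weights
    mus = [np.zeros(d)]                      # mu_1 = 0 (translation invariance)
    for _ in range(m - 1):
        mus.append(theta[pos:pos + d]); pos += d
    Sigmas, tril = [], d * (d + 1) // 2
    for k in range(m):
        A = np.zeros((d, d))                 # Cholesky factor A_k
        A[np.tril_indices(d)] = theta[pos:pos + tril]; pos += tril
        S = A @ A.T
        if k == 0:                           # unit diagonal (scale invariance)
            s = np.sqrt(np.diag(S)); S = S / np.outer(s, s)
        Sigmas.append(S)
    nus = np.exp(theta[pos:pos + m])         # nu_k > 0 via log transform
    return pis, mus, Sigmas, nus
```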
Monte Carlo Experiments
We discuss some experimental evidence for GMCM and SMCM clustering. We simulate data according to GMCMs and SMCMs, each with n = 1000, d = 2 and m = 3, for R = 1000 repetitions. The parameters π_k, μ_k and Σ_k are drawn randomly for each run. The degrees of freedom ν_k for the SMCM are fixed at 4 for all runs. As in Fig. 1, we simulate the observed level using H_j^{-1} ≡ Φ^{-1} for any j. For each dataset, we use the GMCM-PEM, SMCM-PEM (maximized over different initial values), and k-means (as a baseline) clustering algorithms. To save computational time, we omit the additional Nelder-Mead search after the PEMs. We compare the clustering results with the ground truth using the Adjusted Rand Index (ARI) of [12] and the Adjusted Mutual Information (AMI) metric of [26]. Table 1 reports the mean ARIs and AMIs averaged over the R runs. The top panel shows the results for data simulated with a GMCM and the bottom panel for data of an SMCM. The three columns stand for the different clustering algorithms. We observe that, unsurprisingly, the correctly specified model is best suited for clustering. The corresponding other mixture copula model clusters similarly well on average. However, the difference between SMCMs and GMCMs for SMCM data (ARI: 0.749 vs. 0.708) is larger than the corresponding difference for GMCM data (ARI: 0.786 vs. 0.803). For a given dataset, the results may, however, differ more substantially. Hence, in practice, it is advisable to check both models for a given dataset. k-means performs worst in this experiment.
Benchmark Data
We test the SMCM clustering on some benchmark datasets from the literature. First, we consider the artificial dataset from [23], which is 2-dimensional and 3-modal. Figure 2 shows the ground truth data and clustering results using the GMCM, the SMCM (both by applying a Nelder-Mead search after the PEM) and k-means. Figure 3 shows the copula level of the data using Ĥ_j(x^(i)_j). Table 2 presents the corresponding ARI and AMI statistics. Both GMCM and SMCM work very well. We additionally test the clustering algorithms on several publicly available benchmark datasets. We consider the artificial datasets of [25] (except the trivial one with only one cluster), which can be found at https://www.uni-marburg.de/fb12/arbeitsgruppen/datenbionik/data. Moreover, we consider the Iris flower data of [10], which can be found at https://archive.ics.uci.edu/ml/datasets/iris. Again, we compare GMCM and SMCM with k-means. We omit plotting the data and clusters here and refer to [25] for a visual inspection of the datasets (and for the true number of clusters). Instead, Table 3 compares ARI and AMI metrics between the clustering results and the ground truth. We observe that the clustering quality strongly depends on the given special case. The GMCM works best over all given examples, meaning that it never failed to provide a reasonable clustering. Of course, the artificial datasets are somewhat unrealistic, but a good clustering algorithm should handle these well, too.
As said, we advise trying different models and comparing the clustering results.
Image Segmentation
In this section, we discuss the application of learning an SMCM to segment images. The SMCM might be useful in other settings where the data exhibit multimodality, heavy tails of the mixing components due to frequent outliers, and a non-trivial dependence structure. A visual inspection might be helpful to assess whether these features are present in the data, provided the dimensionality is not too high.
Image segmentation (which is also discussed in [4,23]) is an important field of visual learning, as it aims to simplify and extract patterns from pictures. For a deeper discussion of image segmentation, see [18]. We choose 20 images from the publicly available images of the Berkeley Segmentation Dataset and Benchmark [14]. The images have 154,401 = 481 × 321 pixels, and we cluster the images in RGB space, meaning that each pixel can be represented as a 3-dimensional vector with values in [0, 1]³. As in [4], we segment the images into 10 colors. In this application, the pixels are the observed data {x^(i)_j}. We use the empirical distribution function to estimate the copula level {û^(i)_j}. We fit an SMCM to the copula levels of the images by running the PEM with 10 different initial configurations and choosing the pseudo maximum with the highest log-likelihood. We take this as the initial value for the Nelder-Mead search. Given the SMCM, we assign to each pixel the cluster k with the highest posterior probability. We choose the segmented color to be the color at the cluster location estimate μ̂_k, i.e., the color (Ĥ_1^{-1}(F_sm,1(μ̂_{k,1}; θ̂)), Ĥ_2^{-1}(F_sm,2(μ̂_{k,2}; θ̂)), Ĥ_3^{-1}(F_sm,3(μ̂_{k,3}; θ̂))). For comparison purposes, we proceed analogously for the GMCM. Furthermore, we cluster the pixel colors using k-means and assign to each pixel the color of its cluster center. Figures 4 and 5 show the original images and the segmented pictures for 6 of the 20 images (others and code are available upon request). For each figure, the top row shows the original image, the second row the GMCM segmentation, the second-to-bottom row the SMCM segmentation, and the bottom row the k-means segmentation. We observe that all three methods perform differently well for different regions of the pictures. Hence, there is no obvious first choice. However, the SMCM seems to capture more features in the pictures than the GMCM. For example, the wall in the background in Fig. 4, left column, or the hats of the soldiers in Fig. 5, left column, are less blurry. Another example is that the GMCM shows an odd boundary of the large rock in Fig. 5, middle column.
The GMCM and SMCM algorithms have a fairly long runtime compared with the k-means algorithm. While the PEM (even for different initial parameter choices) is decently fast for both GMCM and SMCM, the Nelder-Mead search for a true maximum is time-consuming given the number of parameters. The k-means clustering is, in contrast, fast. Therefore, the copula-based methods have the drawback of a longer runtime for a comparable outcome. However, the estimates of the degree of freedom parameters are in most cases between 1 and 10. This means that small ν's yield a higher likelihood, which supports the use of the SMCM; otherwise, high values of ν would be estimated.
Conclusions and Discussion
We proposed a natural extension of GMCMs using mixtures of Student t distributions. The SMCM performed well in the application of image segmentation and in several experimental setups. However, the benefits are not striking compared with existing methods. Nevertheless, it has proven reasonable to use SMCMs in addition to GMCMs (and other clustering methods) because the goodness of clustering may vary fundamentally in some scenarios.
We discuss some further limitations of the approach. As for GMCMs, the SMCM-PEM cannot handle high-dimensional data. More specifically, the PEM does not run for d > n. Note that the complexity of the SMCM-PEM is O(Tmnd³), so a high dimension is more influential in terms of complexity than for GMCMs. Kasa et al. [13] proposed an extension of the PEM for high-dimensional data in GMCM clustering. Their approach may be transferred to SMCMs so that high-dimensionality issues can be addressed.
Although Fig. 1 reveals that simulated data of GMCMs and SMCMs appear quite different, the clustering results are often very similar. One interesting research direction may be the investigation of this phenomenon and the comparison of mixture copula models using other flexible distributions.
Anticancer Activity of Solvent Extracts of Hexagonia glabra against Cervical Cancer Cell Lines
Objective: In this study, we aimed to prepare solvent extracts of the wild mushroom Hexagonia glabra and test their anticancer activity against human cervical cancer cell lines, namely HeLa, SiHa, and CaSki. Methods: The work includes cell morphology study by microscopy, nuclear morphology by DAPI staining under fluorescence microscopy, apoptosis assay by fluorescence technique, anti-proliferation by MTT assay, expression of apoptotic and anti-apoptotic genes by Western blotting, and cell cycle analysis. Results: The selected cervical cancer cells were treated separately with 150 µg/mL of three extracts, namely ethanolic (EE), ethyl acetate (EAE), and water (WE) extracts, and exhibited features such as rounding, shrinkage, and cell death. All extracts caused apoptosis in the cell lines, and EE had the highest effect in this regard. The percentages of apoptotic cells in HeLa, SiHa and CaSki at the same concentration of EE were 79.23, 75.42, and 76.36%, respectively. The cytotoxicity assay showed that all three extracts (50-250 μg/mL) were potent inhibitors of cell growth in the three cell lines, and again EE had the highest effect. The percentages of cell growth inhibition in HeLa, SiHa, and CaSki cells treated with EE for 24 h at 50 µg/mL were 45.79±4.11, 41.66±4.03, and 36.72±2.67, while they were 74.23±7.45, 62.31±5.97, and 54.23±5.04 at 150 µg/mL. At 250 µg/mL, the percentages of cell growth inhibition were 94.25±8.11, 90.02±8.67, and 85.43±6.21, respectively. The expression of the apoptotic genes (Caspase 3, 9) and the tumor guard gene (p53), as measured at the protein level by Western blotting, increased, whereas the anti-apoptotic BcL2 gene was decreased in all cell lines following treatment with the extracts. In addition, cell cycle analysis (CaSki cells) showed that EE treatment arrested the cells at the G2/M checkpoint of the cell cycle. Conclusion: All extracts of this mushroom were active in arresting the growth of the three cell lines, and EE had the highest effect, indicating that this mushroom can be a valuable source of anticancer agents.
Introduction
Cancer is now one of the most complex killer diseases among human beings. Hurdles and hopes for cancer treatment continue, and scientists keep searching for new natural compounds that can be applied as anticancer remedies (Hunt, 2002). Searching natural sources for novel bioactive anticancer compounds may provide the next generation of drugs (Liu et al., 2010; Lindequist et al., 2010; Xu et al., 2011; Aly et al., 2011). Mushrooms are macro-fungi whose so-called fruit bodies are visible to the naked eye. Nowadays, scientists pay much attention to mushrooms in cancer management because they contain hundreds of novel natural constituents with biological properties, lower toxic side effects, and even lower cost (Lee et al., 2014; Shavit et al., 2009). In addition, mushroom extracts can act as immunomodulatory factors in the management of cancer patients. Lucas et al. (1957) showed for the first time that mushrooms have anticancer properties. Mushroom-derived compounds can also prime natural killer (NK) cells and T lymphocytes, both of which are cytotoxic to tumor cells (Prestwich et al., 2008). There are more than 14,000 known mushrooms out of an estimated 5.1 million fungi (Blackwell, 2011), among which about 700 exhibit medicinal properties (Wasser, 2011). In India, about 850 mushroom species have been described (Deshmukh, 2004; Manoharachary et al., 2005). Although the application of fungi in different sectors has increased exponentially throughout the world, about 90% of fungal species have not yet been screened for medical applications such as anticancer drug development. Among edible mushrooms, members of Agaricus are produced in the largest quantities, while among non-edible medicinal mushrooms Ganoderma is produced and used the most globally. Many members of the family Polyporaceae have now been selected as the next candidate producers of potentially valuable medicines (Mizuno, 1995). In India, particularly in West Bengal, mushrooms are abundant, but a systematic screening of them for antitumor or anticancer activity has not yet been carried out. Meanwhile, every year in India, 122,844 women are diagnosed with cervical cancer and 67,477 die from the disease (ICO, 2014; Sreedevi et al., 2015). The present treatments for cancer include chemotherapy, surgery, and radiation, all of which are associated with complications; for instance, chemotherapeutic drugs are losing potency as patients become resistant to them (Ghosh, 2018). Under such circumstances, it is highly essential to search for new natural anticancer compounds that are target-specific and immune-enhancing. Screening different mushroom extracts, isolating natural compounds, and establishing their targeted anticancer effects together with immune-enhancing properties are the need of the hour. In this study, we tested extracts of the wild mushroom Hexagonia glabra, namely ethanolic (EE), ethyl acetate (EAE), and water (WE) extracts, as anticancer agents against cervical cancer cell lines (HeLa, SiHa and CaSki).
Mushroom collection
Fruiting bodies of the wild mushroom Hexagonia glabra (P. Beauv.) Ryvarden (Family: Polyporaceae) were collected in September 2018 from wooden logs in various zones of the South Twenty Four Parganas district, India. The samples were brought to the laboratory, and their identification was confirmed by consulting published keys (Ryvarden, 1999; Watling, 1973; Pacioni and Lincoff, 1981; Moser, 1983).
Extraction
The fruiting bodies of the wild mushroom (Hexagonia glabra) were washed in tap water and then in distilled water to remove impurities. They were then air-dried in an oven at 50°C for 48 h, chopped into pieces, and ground into powder using a mixer grinder. Briefly, 15 g of dried mushroom powder was soaked in 150 mL of 90% ethanol in glass bottles with tightly fitted caps, under shaking at room temperature for 3 days. The same procedure was followed for the ethyl acetate extract (EAE). For the water extract (WE), mushroom powder was soaked in boiling water for 30 min. All extracts were filtered through Whatman No. 4 followed by Whatman No. 1 filter papers, and the filtrates were collected. The solvent was removed from each filtrate using a rotary vacuum evaporator at 40°C, and the extract was then lyophilized to a dry powder. The powders of the extracts (EE, EAE, and WE) were weighed and kept airtight in a refrigerator at 4°C until further use. Extracts used for in vitro assays were dissolved in plain RPMI 1640 medium and passed through a 0.22 μm Millipore filter for sterilization. The prepared extract was further diluted with plain RPMI 1640 medium to the desired concentrations just prior to use.
Morphological examination of cancer cells after treatment with EE, EAE, and WE
Cell morphology study by phase contrast/bright field microscopy
Each cancer cell line was grown in a 60-mm tissue culture dish and treated with each extract separately at a concentration of 150 μg/mL for 24 h. Cells were examined under phase contrast/bright field microscopy, and photographs were taken using a Magna-Fire digital camera for analysis.
Nuclear morphology by DAPI staining under inverted fluorescent microscope
DAPI (4',6-diamidino-2-phenylindole) staining was performed to observe the nuclear morphology of treated (150 μg/mL of extract) cells. HeLa, SiHa, and CaSki cells were washed with cold PBS and fixed with 3.7% (w/v) paraformaldehyde in PBS for 10 min at room temperature. After permeabilization, the cells were stained with DAPI (10 μg/mL) solution at 37°C for 30 min. The cells were then washed with PBS and examined under an inverted fluorescent microscope (Olympus, Tokyo, Japan), and photographs were taken using a Magna-Fire digital camera (Optotronics, Goleta, CA, USA) for analysis.
Evaluation of apoptosis by DAPI staining under inverted fluorescent microscope
Apoptosis was evaluated from morphological changes in the nuclear structure of the three treated (150 μg/mL) cell lines stained with DAPI, compared with the control set. Apoptotic cells were analyzed and counted under an inverted fluorescent microscope; in each experiment, at least five optical fields containing a total of 200 cells were counted.
For cell cycle analysis, treated cells were incubated at 37°C and 5% CO2 for 24 h, with DMEM used as the vehicle control. After incubation, the cells were harvested, washed with Dulbecco's PBS containing 1% FBS, and re-suspended in 50 μg/mL propidium iodide (PI). Samples were analyzed on a fluorescence-activated cell sorter (FACS Calibur, BD Bioscience, USA). The fractions of cells in the various phases of the cell cycle (G0/G1, S, and G2/M) were expressed as percentages of the total cells analyzed.
Cell morphological study of all three cell lines under phase contrast microscopy
Figure 1 shows that HeLa cells treated with 150 µg/mL of EE became rounded and shrunken, with membrane blebbing. Dead cells and cell debris were also observed. Cells treated with 150 µg/mL of EAE or WE, separately, suffered a similar fate.
The other two cell lines (SiHa and CaSki), when exposed to the same concentration of these extracts for 24 h, showed the same trends. All three types of cancer cells treated with Adriamycin (positive control) became round, and many cells died (Figure 1).
Nuclear morphological study of all three cell lines under fluorescence microscopy
Nuclei of the control/untreated HeLa, SiHa, and CaSki cells appeared normal, staining light blue, round, and homogeneous, while nuclei of cells treated with EE, EAE, or WE, separately, were condensed and, in a few cases, irregular and fragmented (Figure 2). This revealed that all three extracts were effective in inducing apoptosis in all three cell lines.
Evaluation of apoptosis by DAPI staining under inverted fluorescent microscope
The percentages of apoptotic HeLa, SiHa, and CaSki cells were 79.23, 75.42, and 76.36%, respectively, at a 150 μg/mL concentration of EE. For the positive control (Adriamycin), the percentages were 88.34, 86.09, and 87.51%, respectively (Figure 3). The percentages of apoptotic HeLa, SiHa, and CaSki cells at 150 μg/mL of EAE were 55.67, 37.12, and 51.07%, respectively, and at 150 μg/mL of WE they were 65.32, 46.12, and 51.78%, respectively. This indicated that exposure of all three cell lines to these mushroom extracts (EE, EAE, and WE) for 24 h resulted in apoptosis (Figure 3).
Anti-proliferative/cytotoxic effect of mushroom extracts against HeLa, SiHa, and CaSki cell lines based on the MTT assay
We investigated the effect of each of the three extracts on the proliferation of the three cell lines. According to Table 1, the proliferation of all cell types was reduced gradually as the dosage of EE increased from 50 to 250 µg/mL.
Cell proliferation or cytotoxicity assay
The effects of each mushroom extract on the proliferation of the HeLa, SiHa, and CaSki cell lines were evaluated using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay (Sigma, USA), following the method of Mosmann (Mosmann, 1983) with some modifications. Briefly, 10×10³ cells per well of a 96-well culture plate were seeded in fresh DMEM medium containing 10% FBS and antibiotics and grown overnight to reach 80% confluency. The culture was then washed with PBS, treated at different concentrations (0, 50, 100, 150, 200, and 250 µg/mL of each extract dissolved in DMEM), and incubated at 37°C in 5% CO2 and 95% air. After 24, 48, and 72 h of treatment, cells were washed with PBS, 100 µL of 0.5% MTT solution (dissolved in RPMI 1640) was added to each well, and the cultures were incubated for a further 3 h. After discarding the media, 100 µL of DMSO was added to dissolve the formazan crystals. The plate was read with a microplate reader (Bio-Rad) at 570 nm. For normal human lymphocytes, which grow in suspension, cytotoxicity was evaluated using the water-soluble MTS dye (Vorauer et al., 1996).
The growth inhibition rate was determined by the following formula: growth inhibition (%) = [1 − (A570 of treated cells / A570 of control cells)] × 100. The concentration leading to 50% killing (IC50) was calculated from a dose-response plot of the cytotoxicity values. Data points represent the mean ± SD of one experiment repeated at least three times.
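As a concrete illustration of the two formulas above, the short sketch below computes growth inhibition from A570 readings and estimates the IC50 by linear interpolation on the dose-response points; the absorbance values are hypothetical placeholders, and linear interpolation is only one of several ways to read IC50 off the curve, not necessarily the exact procedure used in this study.

```python
import numpy as np

# Hypothetical A570 absorbance readings for one cell line at the tested doses;
# the real values come from the microplate reader.
conc = np.array([0, 50, 100, 150, 200, 250])           # extract dose, ug/mL
a570 = np.array([1.00, 0.85, 0.62, 0.41, 0.28, 0.20])  # treated wells

# Growth inhibition (%) = [1 - A570(treated)/A570(control)] * 100
inhibition = (1 - a570 / a570[0]) * 100

# IC50 by linear interpolation between the two doses bracketing 50% inhibition
ic50 = np.interp(50, inhibition, conc)
print(f"IC50 ~ {ic50:.0f} ug/mL")
```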
Western blot analysis for study of gene expression
Each cancer cell line (2×10⁵ cells) was treated with 150 µg/mL of each extract separately for 24 h. After treatment, cells were lysed with RIPA buffer (Abcam). The effect of treatment on the expression of certain proteins, such as p53, and of apoptosis-related proteins, such as Bcl-2, caspase-3, and caspase-9 (Santa Cruz Biotechnology, USA), was measured. Proteins were detected by incubation with the corresponding primary antibodies, followed by blotting with an HRP-conjugated secondary antibody. The blots were visualized using Luminol (Bio-Rad), and the intensity of each protein band was measured with ImageJ.
Cell cycle analysis
CaSki cells (7.5×10⁵) were seeded in 100-mm dishes and cultured in DMEM containing 10% FBS for 24 h. Then, cells were incubated with EE (50 and 150 μg/mL) or the positive control cisplatin (50 μg/mL) separately (Table 3), at 37°C and 5% CO2 for 24 h, with DMEM used as the vehicle control. After incubation, the cells were harvested, washed with Dulbecco's PBS containing 1% FBS, and re-suspended in 50 μg/mL propidium iodide (PI). Samples were analyzed on a fluorescence-activated cell sorter (FACSCalibur, BD Biosciences, USA). The fractions of cells in the various phases of the cell cycle (G0/G1, S, and G2/M) were expressed as percentages of the total cells analyzed. All extracts at the highest dosage (500 µg/mL) showed no cytotoxicity against normal human lymphocytes up to 72 h (data not shown).
Study of induction of apoptosis using gene expression in Western blotting assay
We found that treatment with EE, EAE, and WE at a concentration of 150 µg/mL decreased the expression of the anti-apoptotic gene Bcl-2 (Figure 4), while increasing the expression of the apoptotic genes caspase-3 and caspase-9 in all three cell lines in vitro. Moreover, p53 was up-regulated by EE, EAE, and WE in all cell lines (Figure 4). A comparative analysis of the ability of the three extracts to upregulate caspase-3, caspase-9, and p53 and to downregulate Bcl-2 in all three cancer cell lines showed that EE was the most effective, followed by WE and then EAE (Figure 4).
Cell cycle arrest by EE extract
The cell cycle distribution of CaSki cells was subsequently investigated by flow cytometry with PI staining. Based on Figure 5, the percentage of cells in the G2/M phase was 16.11% for the negative control (DMEM vehicle) (Figure 5A) and 19.55% for the positive control (cisplatin, 50 μg/mL), for which the percentage of cells in G0/G1 was 54.67% (Figure 5B). Upon treatment with EE at concentrations of 50 and 150 μg/mL, the percentage of cells in the G2/M phase increased gradually to 18.18% and 20.21%, respectively (Figure 5C, 5D). Treatment with EE at 50 and 150 μg/mL also led to a gradual decrease in the cell distribution in the G0/G1 and S phases compared with the negative control.
Discussion
The occurrence and distribution of Hexagonia speciosa were recorded earlier in China (Zhao, 1998), but there remains a lacuna in the proper recording of the distribution of H. glabra in both China and India. Our collection from fallen logs in the Twenty Four Parganas district of West Bengal, India, is tentatively the first from that locality, and the yields of EE, EAE, and WE of Hexagonia glabra obtained in our experiment have not been reported before. The isolation and structure determination of a series of oxygenated cyclohexanoids from Hexagonia speciosa were performed by Jiang et al. (2009); these include speciosins, a 5H-furan-2-one metabolite, 5′-O-acetylaporpinone A, and aporpinone A. Of these compounds, speciosin B exhibited cytotoxicity against some cancer cell lines, with IC50 values ranging from 0.23 to 3.30 μM (Jiang et al., 2009; Jiang et al., 2011). Silva et al. (2009) first reported antitumor activity of a methanolic extract (ME) of Hexagonia papyraceae against K562 (human chronic myeloid leukemia) and Daudi (human Burkitt's lymphoma) cells; the ME inhibited more than 60% of the proliferation of both K562 and Daudi cells, with an IC50 against K562 of 39.1 μg/mL. A literature review indicated that the anticancer effect of solvent extracts of H. glabra has not previously been tested against any cancer cell line or in an animal model. However, solvent extracts of other mushrooms have been applied to several cancer cell lines. The ethanol extract of Laetiporus sulphureus, a saprophytic, cultivable polypore mushroom and weak plant parasite, exhibited anti-proliferative activity against three carcinoma cell lines, HeLa, HCT 116, and MCF-7 (Younis et al., 2019). The effects of ethanol extracts from G. frondosa, G. lucidum, Hericium erinaceus, and L. edodes fruiting bodies, spores, and cultured broth on cell proliferation and apoptosis in CH72 cancer cells and C50 normal cells have been evaluated (Gu and Belury, 2005). Of these, the ethanol extract of L. edodes significantly inhibited CH72 cell proliferation, while none of the extracts had an effect on normal C50 cells; cell cycle analysis showed that the L. edodes extract induced a transient G1 phase arrest in CH72 cells. Polyozellus multiplex inhibited cell proliferation in stomach cancer through increased expression of p53 protein (Lee and Nishikawa, 2003). Lavi et al. (2006) showed that an aqueous polysaccharide extract from the edible mushroom Pleurotus ostreatus induced anti-proliferative and pro-apoptotic effects in HT-29 colon cancer cells. A mushroom extract can also be an effective therapy for malignant estrogen-independent breast cancer (Asatiani et al., 2011). Hot-water and ethanol extracts of Inonotus obliquus can induce apoptosis in human colon cancer (DLD-1) cells via prevention of reactive oxygen species (ROS)-induced tissue damage (Hu et al., 2009). Youn et al. (2009) tested the anti-proliferative effects of a water extract of I. obliquus on B16-F10 (murine melanoma) cells. The ethanolic extract of the fruiting body of P. igniarius was evaluated as an anti-proliferative and anti-metastatic agent against SK-Hep-1 (human hepatocarcinoma) and RHE (rat heart vascular endothelial) cells; it inhibited the growth of both cell lines in a dose-dependent manner, with IC50 values at 48 h of 72 and 103 μg/mL for SK-Hep-1 and RHE cells, respectively (Song et al., 2008).
The growth of CaSki (epidermoid cervical carcinoma) cells was inhibited when treated with crude dichloromethane extracts of G. lucidum (Lai et al., 2010); the dichloromethane extract contains flavonoids, terpenoids, phenolics, and alkaloids with activity against the human papillomavirus 16 (HPV 16) E6 oncoprotein. The methanol extract of Phellinus linteus and its fractions, viz., methylene chloride, ethyl acetate, and n-butanol, exhibited anti-angiogenic effects (Lee et al., 2010). Huang et al. (2011) recorded the anti-cancer effect of P. linteus and interpreted its potential mechanism, showing an increase in the number and activity of T cells and NK cells. An A. campestris extract inhibited the growth of three cell lines, namely HeLa, A549 (human lung carcinoma), and LS174 (human colon carcinoma) cells (Kosanić et al., 2017).
In this study, we screened three extracts of the H. glabra mushroom (EE, EAE, and WE) against three cervical cancer cell lines, namely HeLa, SiHa, and CaSki, by the MTT assay, and noted antiproliferative and apoptotic activity of all three extracts against all three cell lines. In our laboratory (Ghosh, 2015), ME, EE, and WE of wild Calocybe indica were tested for anticancer effects on MCF-7 and Ewing sarcoma cell lines, and antiproliferative and apoptotic effects of ME and EE of Agaricus bisporus against the CaSki cell line were similarly reported. EAE of C. indica applied against HeLa and CaSki caused changes in the cancer cells and their nuclei, induced apoptosis, inhibited proliferation of both cell lines, increased the expression of apoptotic genes (caspase-3 and -9) and the tumor suppressor gene p53, and decreased the anti-apoptotic gene Bcl-2 (Ghosh et al., 2019). In the present experiment, all mushroom extracts produced morphological changes in all cancer cells and their nuclei and showed antiproliferative and apoptotic activity, and increasing the concentration of each extract increased the anticancer activity. An antiproliferative effect, including morphological changes, against the BxPC-3 cell line was also noted (Chen et al., 2009), and Yu et al. (2012) showed that antroquinonol, a ubiquinone derivative isolated from the same mushroom, inhibited the proliferation of PANC-1 and AsPC-1 cells in a dose-dependent manner, as we noted in the MTT assay. In this study, morphological changes such as rounding, shrinkage, and membrane blebbing, and a reduction of cell confluence, were noted in all three cell lines for all extracts. A similar result was found when HeLa cells were treated with 500 or 750 µg/mL of a methanolic extract of Agaricus bisporus for 24 h under the same conditions. All extracts also modulated the expression of several genes (caspase-3, caspase-9, p53, and Bcl-2). These results are consistent with those of other researchers investigating other cancer cell lines and other mushrooms (Gu and Belury, 2005; Zaidman et al., 2005). The ethanol and ethyl acetate extracts of Coprinus comatus inhibited LNCaP prostate cancer cells (Zaidman et al., 2008); these extracts inhibited dihydrotestosterone-induced LNCaP cell viability and arrested the cell cycle at the G1 phase. In the cell cycle experiments performed in the present study, EE arrested the cell cycle of CaSki cells at the G2/M checkpoint. Similarly, Jiang and Sliva (2010) observed that a methanolic extract of a myco-complex induced significant cell cycle arrest at the G2/M phase. Furthermore, cell cycle arrest at G2/M was induced by A. blazei in gastric epithelial cells (Jin et al., 2006) and by cordycepin isolated from C. sinensis in bladder cancer cells (Lee et al., 2009). It is noteworthy that different extracts from G. lucidum demonstrated distinct effects on cell cycle progression: extracts of G. lucidum were shown to induce cell cycle arrest at the G0/G1 phase in breast cancer cells (Jiang et al., 2006), whereas arrest at the G2/M phase was induced in prostate (Jiang et al., 2004), hepatoma (Lin et al., 2003), and bladder (Lu et al., 2004) cancer cells.
Ethyl acetate and culture broth extracts of the same mushroom also showed antiproliferative activity against MCF-7 cells; one study reported IC50 values of 76 μg/mL for the culture broth extract and 32 μg/mL for the ethyl acetate extract (Asatiani et al., 2011). Among mushroom extracts, ethanol extracts may find the most extensive application. In this study, we likewise found that EE was the best anticancer agent of the three extracts against the three cervical cell lines. On the other hand, we noted that HeLa was more sensitive to EE and EAE, while the CaSki cell line was more sensitive to WE than the other two. Similar to our findings, Selvi et al. (2011) found that ethanol extracts of Pleurotus florida and Calocybe indica caused apoptosis in the T24 cell line.
In conclusion, all extracts of this mushroom were active in inhibiting the growth of all three cervical cell lines to a considerable extent at 24 h. Among the three extracts, EE showed the maximum anti-proliferative activity against all three cell lines (HeLa, SiHa, and CaSki). Their activities induced apoptosis, with upregulation of apoptotic genes and downregulation of the anti-apoptotic gene Bcl-2. In addition, EE arrested the cell cycle at the G2/M checkpoint. Further studies are suggested to examine the mycochemistry of these mushroom extracts and their application in animal models for next-generation anticancer drug development. | 5,300.4 | 2020-07-01T00:00:00.000 | [
"Biology"
] |
A left-right symmetric flavor symmetry model
We discuss flavor symmetries in left-right symmetric theories. We show that such frameworks are a rather different environment for flavor symmetry model building compared to the usually considered cases. This concerns not only the need to obey the enlarged gauge structure, but also more subtle issues with respect to residual symmetries. Furthermore, if the discrete left-right symmetry is charge conjugation, potential inconsistencies between the flavor and charge conjugation symmetries must be taken care of. In our predictive model based on $A_4$ we analyze the correlations between the smallest neutrino mass, the atmospheric mixing angle and the Dirac CP phase; the latter prefers to lie around maximal values. There is no lepton flavor violation from the Higgs bi-doublet.
I. INTRODUCTION
Despite the huge and continued success of the Standard Model (SM) in the last several decades, the flavor structure of the three generations of fermions in the SM leaves a big puzzle that remains to be understood. In particular, lepton mixing is so drastically different from quark mixing that the field of flavor symmetry model building is among the busiest ones in flavor physics. To avoid Goldstone bosons and to unify at least two different generations one typically chooses discrete non-Abelian groups as flavor symmetry [1][2][3][4].
Apart from the unusual lepton mixing structure, the second big puzzle introduced by neutrino physics is the smallness of neutrino mass. An attractive approach is to link this smallness to the parity violation of the SM. This is in fact achieved in left-right symmetric models [5][6][7][8][9] where the gauge group of the SM is extended to SU (2) L × SU (2) R × U (1) B−L .
Linking the two aspects mentioned so far, we aim in this paper at building a flavor symmetry model in a left-right symmetric model (LRSM). As Grand Unified Theories (GUTs) based on SO(10) can be broken down to the SM with an intermediate left-right symmetry, it may be possible to extend such LRSM flavor models in a bottom-up strategy to GUT flavor models. Our approach could be considered a first modest step toward unifying particle and chirality species.
The constraints that are imposed by left-right symmetry modify some of the well-known features of usually considered flavor symmetry models. For instance, a typical example [10] based on the most often used flavor group A_4 assigns the left-handed lepton SU(2)_L doublets as well as the right-handed neutrinos to the three-dimensional irreducible representation of A_4, while the right-handed charged fermions transform as the three different one-dimensional representations. This is incompatible with the fact that right-handed neutrinos and charged fermions are part of the same gauge doublet. Models that unify the different particle or chirality species are rarely considered and are in general challenging to construct.
Another issue concerns residual symmetries. Usually a discrete flavor symmetry group G is broken to two subgroups G_ℓ and G_ν which constrain the form of the mass matrices M_ℓ and M_ν for charged leptons and neutrinos, respectively. The mixing matrix is thus essentially determined by the symmetry group, and lepton mixing is then independent of the neutrino masses. In the minimal left-right symmetric models under study, however, this direct correlation of subgroups with lepton mixing typically does not exist. The reason is that the neutrino Dirac and the charged lepton mass matrices contain in general two contributions as a consequence of the Higgs bi-doublet. As a result, even though there are in principle conserved subgroups of the flavor group, they do not translate into invariance of the mass matrices. Therefore lepton mixing will depend on neutrino masses. Another issue concerns the discrete left-right symmetry in such models. If it is charge conjugation, one may encounter (depending on the chosen flavor symmetry group) potential inconsistencies between this discrete symmetry and the flavor symmetry. This is similar to the situation when flavor and CP symmetries are combined, see e.g. Ref. [11].
In this paper we will construct a flavor symmetry model based on A 4 within a left-right symmetric context. We discuss carefully the general and specific model building aspects of such scenarios and analyze several predictive solutions for the neutrino sector. We show that flavor changing currents in the lepton sector generated by the Higgs bi-doublet are absent.
The paper is organized as follows. In Sec. II we discuss left-right symmetric models and outline aspects of their impact on flavor symmetry model building. In Sec. III we present a model based on A_4 that is compatible with left-right symmetry, and in Sec. IV we analyze it numerically and analytically in order to demonstrate that it is compatible with current data. We conclude in Sec. V; some analytical details are delegated to the Appendix.
II. THE IMPACT OF LRSM ON FLAVOR SYMMETRY
In this section we first review the aspects of minimal left-right symmetric models (LRSM) that we need in this paper and then discuss their impact on building flavor symmetry models.
A. The minimal LRSM
In the minimal LRSM [5-9] the gauge group is SU(2)_L × SU(2)_R × U(1)_{B−L}. Right- and left-handed leptons ℓ_R, ℓ_L are doublets under SU(2)_R and SU(2)_L, respectively. Three Higgs multiplets are present: the triplets Δ_L ∼ (3, 1, 2) and Δ_R ∼ (1, 3, 2) and the bi-doublet Φ ∼ (2, 2, 0); the VEV of Δ_R breaks SU(2)_R × U(1)_{B−L} down to U(1)_Y, and the bi-doublet VEV breaks the SM group further to U(1)_em, respectively. We choose here the left-right parity transformation as

ℓ_L ↔ ℓ_R , Φ ↔ Φ† , Δ_L ↔ Δ_R . (1)

The Yukawa interactions of the lepton sector are

−L_Y = ℓ̄_L (Y Φ + Ỹ Φ̃) ℓ_R + Y_L ℓ̄_L^c iσ₂ Δ_L ℓ_L + Y_R ℓ̄_R^c iσ₂ Δ_R ℓ_R + h.c. , (2)

with Φ̃ = σ₂ Φ* σ₂. The above discrete left-right symmetry leads to Y = Y†, Ỹ = Ỹ† and Y_L = Y_R. This can be seen in particular by comparing the term Y_ij ℓ̄_Li Φ ℓ_Rj and its hermitian conjugate. The scalar fields acquire the following vacuum expectation values: ⟨Φ⟩ = diag(κ, κ′) and ⟨Δ_{L,R}⟩ = v_{L,R}. From now on we will assume that v_L is sufficiently small to be neglected. The neutrino Dirac mass matrix m_D and the charged lepton mass matrix M_ℓ are given as

m_D = κ Y + κ′ Ỹ , M_ℓ = κ′ Y + κ Ỹ , (3)

which implies that for given m_D and M_ℓ one can always find the associated Y and Ỹ as long as κ² ≠ (κ′)². The relative contribution to the mass matrices is determined by the ratio

tan β = κ/κ′ . (4)

The right-handed neutrinos have a Majorana mass matrix

M_R = √2 Y_R v_R , (5)

which generates the light neutrino masses via the type I seesaw

M_ν = −m_D M_R⁻¹ m_D^T . (6)

With the simple and straightforward assumption of m_D lying around the weak scale, M_R lies around 10^15 GeV, which implies that the scale of parity restoration and thus also the right-handed gauge boson masses lie around that scale.
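As a rough numerical illustration of the seesaw relation in Eq. (6), the sketch below estimates the right-handed scale implied by a ~0.05 eV light neutrino mass when m_D sits at the weak scale; the one-generation dominance and the chosen input value are simplifying assumptions, not part of the model.

```python
# Minimal one-generation seesaw estimate: m_nu ~ m_D^2 / M_R.
m_D = 174.0          # GeV, Dirac mass taken at the weak scale (assumption)
m_nu = 0.05e-9       # GeV, i.e. a light neutrino mass of 0.05 eV
M_R = m_D**2 / m_nu  # right-handed Majorana scale required by the seesaw
print(f"M_R ~ {M_R:.1e} GeV")  # ~6e14 GeV, close to the 1e15 GeV quoted above
```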
B. Left-right symmetry and flavor symmetries
We mention here some aspects that are connected to left-right symmetry and flavor symmetry model building. We focus on A 4 here, but our statements will hold for many other groups as well.
Note first that the left- and right-handed lepton doublets, as well as the left- and right-handed Higgs triplets, have to transform in the same representation of the flavor symmetry group. As the right-handed fermions now live in a gauge doublet, the right-handed neutrinos and the charged fermions of a given generation transform together. This means that popular A_4 models with the left-handed doublets as a triplet and the right-handed charged fermions as singlets are not possible. Also models in which the right-handed neutrinos transform as a triplet and the right-handed charged fermions as singlets are forbidden.
In typical flavor symmetry models, the Yukawa terms are effective in the sense that, apart from the Higgs and the left- and right-handed fermion fields, scalar flavon fields are present in addition. The full Yukawa term (keeping the bi-doublet Φ as trivial singlet of the flavor group) can be written in the usual compact form

Y_ij ℓ̄_Li Φ ℓ_Rj φ , (7)

where φ is the flavon field. If ℓ_{L,R} are multiplets and φ is a trivial singlet of the flavor group, then as usual Y = Y†. Consider now the case when ℓ_{L,R} and φ are non-trivial multiplets of the flavor group. In this case Y_ij ℓ̄_Li Φ ℓ_Rj φ should be written as Σ_k Y^k_ij ℓ̄_Li Φ ℓ_Rj φ_k, which means that there will be several Yukawa coupling matrices. For instance, in A_4 the full Yukawa term could be a triplet-triplet term, i.e. ℓ_{L,R} and φ are all triplets. Then, because the product of two triplets contains two triplets according to 3 × 3 = 3 + 3 + 1 + 1 + 1, we have two different Yukawa matrices Y^1 and Y^2. Following the steps as given after Eq. (2), one finds that the two matrices are related by hermitian conjugation,

Y^1 = (Y^2)† . (8)

Here we have actually assumed real flavon fields, but the same result applies for complex fields. As a physical result of Eq. (8), the PMNS matrix of the left-handed leptons will be equal to its right-handed analog.
We also note that the definition of the discrete left-right symmetry is not unique in LR symmetric models. One could also choose charge conjugation, which would replace the † in Eq. (8) with T. However, this choice of discrete left-right symmetry would bring along the complication that the flavor symmetry group transformations are potentially incompatible with charge conjugation, similar to the situation of combining a flavor symmetry with a CP symmetry, see e.g. Ref. [11]. In particular, for different flavor groups one would need to introduce different non-trivial charge conjugations in the LRSM. In this paper we focus only on parity as the discrete left-right symmetry, leading to Eq. (8). In more general models with different definitions of the discrete left-right symmetry, a careful check of consistency would need to be performed.
Another point we wish to make concerns residual symmetries. Typical models break A_4 in such a way that in the neutrino and charged lepton sectors subgroups of A_4 remain intact¹. In general, a flavor group G breaks to different subgroups G_ℓ and G_ν in the charged lepton and neutrino sector, respectively, generated by elements T ∈ G_ℓ and S ∈ G_ν. The eigenvectors of T are just the columns of the mixing matrix U_ℓ which diagonalizes the charged lepton sector, and likewise in the neutrino sector S determines U_ν. Thus, the PMNS matrix given by U_ℓ† U_ν is essentially determined by G_ν and G_ℓ, irrespective of the dynamical realization within a model [12-14]. This implies in particular that mixing is independent of masses. It is thus possible to reconstruct the flavor group G from the mixing matrix U, or vice versa to break G into proper subgroups to obtain U. Both the U ⇒ G and G ⇒ U procedures are well understood and there are many studies on this subject [15-26]. If in a given model with a seesaw mechanism the right-handed Majorana mass matrix is assumed to be proportional to the unit matrix, or if m_D and M_R share the same residual symmetry G_ν (hence can be diagonalized simultaneously), the above game can again be played, and by identifying the residual symmetries of M_ν and M_ℓ, information on the original flavor symmetry group can be obtained. What concerns left-right symmetric models is that the Dirac and charged lepton mass matrices are given by contributions of two fundamental terms, Y and Ỹ, see Eq. (3). Their relative contribution is governed by tan β in Eq. (4). Only in the limit tan β → ∞ is the minimal LR model similar to the SM, as in this case only Y contributes to Dirac neutrino masses and only Ỹ to charged lepton masses. In this limit of κ ≫ κ′ the symmetry of m_D is the one of Y. Once κ′/κ is non-zero, m_D has neither the symmetry of Y nor that of Ỹ. Similar statements hold for tan β → 0.
In left-right symmetric models m_D and M_R cannot share the same residual symmetry G_ν and hence cannot be diagonalized simultaneously: the fact that in Eq. (3) two contributions to M_ℓ and m_D are present means that there is no non-trivial symmetry basis in which this can happen, unless tan β → ∞ or tan β → 0.
If neutrino mass were given by a dominating type II seesaw term, i.e. if the contribution of the type I seesaw (which involves m_D) were suppressed, then in principle the residual symmetries could be well separated.
¹ Sometimes those residual symmetries are also accidental.
One can therefore conclude: if we introduce a flavor group and intend to break it into two parts, for neutrinos and charged leptons respectively, then within left-right symmetric models this is impossible unless tan β takes on extreme values or the contribution of the type I seesaw to neutrino masses is absent. If this is not the case, the simple connection between the flavor symmetry subgroups and U no longer applies. Put another way, even if some VEV alignment would lead to simple residual symmetries and a simple mixing structure in a model without left-right symmetry, the presence of a left-right symmetry leads to deviations.
As is well known, the presence of the Higgs bi-doublet, and thus of the two Dirac Yukawa contributions in Eq. (3), implies potentially dangerously large rates of lepton flavor violation (LFV); see [27] for a compilation. While the Higgs triplets and processes involving the right-handed gauge bosons and neutrinos also lead to LFV, their contributions are naturally suppressed if the scale of parity restoration lies above, say, 10 TeV. This is in fact expected from simple neutrino mass constraints, where the mass scale of the right-handed neutrinos is almost at the GUT scale, see Eq. (6). Already in the very early Ref. [9] the dangerous LFV generated by the bi-doublet was noted and taken care of by imposing a simple Z_2 symmetry to suppress µ → eγ and µ → 3e. Hence, a flavor symmetry can be very useful and important in order to avoid LFV. Generally speaking, if Y and Ỹ in Eqs. (2, 3) cannot be simultaneously diagonalized, LFV processes generated by the bi-doublet Dirac Yukawas are not suppressed; if Y and Ỹ can be made simultaneously diagonal, such processes are absent. As we will see in the next Section, our model has this feature.
III. A4-LRSM MODEL
The flavor symmetry in this model is A_4 × Z_2, and the particle content with its transformation properties is given in Tab. I. Note that the left- and right-handed lepton doublets, as well as the left- and right-handed Higgs triplets, transform in identical representations of the flavor symmetry group. In addition to the standard LRSM particles we only introduce two A_4 triplets (φ_ℓ, φ_ν) and one A_4 singlet ξ. The Lagrangian of all Yukawa interactions contains, besides the singlet terms with couplings Y_ξ, Ỹ_ξ and Y⁰_R, the flavon terms with couplings Y^{1,2}, Ỹ^{1,2} and Y^ν_R. Note the presence of two terms with ℓ̄_L φ_ℓ ℓ_R, as it is a triplet-triplet product; see the discussion around Eq. (7).
For simplicity, we suppress all flavor indices in the Lagrangian. Choosing for convenience the real 3-dimensional representation of A_4, it follows that Y_ξ, Ỹ_ξ and Y⁰_R are proportional to the unit matrix. The terms involving Y^1 are governed by one of the two triplet products of ℓ̄_L, ℓ_R and φ_ℓ; an identical flavor structure holds for Ỹ^1. The terms involving Y^2 are, obeying the consistency relation from Eq. (8), proportional to the other triplet product, with again an identical flavor structure for Ỹ^2. We assume here symmetry breaking of the flavor symmetry according to the usual vacuum expectation value alignment of Eq. (13). Combining Y_ξ with the structures of Y^1 and Y^2 gives the circulant form

Y = ( α  β  γ
      γ  α  β
      β  γ  α ) ,

with the constraints α = α* and β = γ*. Also Ỹ has this structure. Therefore, Y and Ỹ can be simultaneously diagonalized, which implies that the Dirac mass matrices of charged leptons and neutrinos can be simultaneously diagonalized. Note that this feature implies the absence of the potentially dangerous LFV processes generated by the Higgs bi-doublet, as discussed at the end of Sec. II. The remaining symmetric Yukawa matrix resulting from Y^ν_R, together with the Y⁰_R term, leads to the right-handed Majorana mass matrix M_R. Towards an explicit form of the light neutrino mass matrix we first perform the transformation with the Wolfenstein matrix U_W (here ω = e^{2πi/3}),

U_W = (1/√3) ( 1  1   1
               1  ω   ω²
               1  ω²  ω ) .

As a result of this transformation, Y, Ỹ and Y_R are transformed to Y′, Ỹ′ and Y′_R, where Y′ and Ỹ′ are diagonal matrices. Inverting the transformed expression for M_R defines the matrix X that enters the light neutrino mass matrix. As common in many A_4 models, U_W† U_13, with U_13 a rotation in the 1-3 plane, gives tri-bimaximal mixing up to the diagonal phase matrices U′ = diag(1, ω, −ω²) and U″ = diag(1, 1, i). Therefore Eq. (20) can also be written in terms of z ≡ b/a, which is in general a complex number. Since in the type I seesaw the light neutrino mass matrix is M_ν = −m_D M_R⁻¹ m_D^T, it takes the form of Eq. (25), in which only m has the dimension of mass while the other quantities (z, r_2, r_3) are all dimensionless. Note that the re-phasing M_ν → P M_ν P† with P = diag(e^{iθ₁}, e^{iθ₂}, e^{iθ₃}) has no physical meaning, so we can always assume m and r_2, r_3 in Eq. (25) to be real numbers. Finally, the light neutrino mass matrix in the charged lepton basis takes its final form, Eq. (26). Note that in the limit r_2 = r_3 = 1, M_ν = m X_TBM leads to TBM and to the neutrino mass sum-rule 1/m_1 − 1/m_3 = 2/m_2 (here the masses are understood to be complex, see e.g. [28]), since the three neutrino masses are proportional to 1/(1 + z), 1 and −1/(1 − z), respectively.
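The r_2 = r_3 = 1 limit quoted just above can be checked numerically: building M_ν from the TBM matrix and complex masses proportional to (1/(1+z), 1, −1/(1−z)) reproduces the sum-rule 1/m_1 − 1/m_3 = 2/m_2 identically. The sketch below does this for an arbitrary illustrative z; the Majorana convention M_ν = U* diag(m_i) U† is one common choice and an assumption here.

```python
import numpy as np

# Standard tri-bimaximal mixing matrix.
U = np.array([[ 2/np.sqrt(6), 1/np.sqrt(3),  0           ],
              [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
              [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])

m = 0.05          # overall mass scale in eV (illustrative)
z = 0.3 + 0.4j    # illustrative complex parameter
m1, m2, m3 = m/(1 + z), m + 0j, -m/(1 - z)  # complex masses in the r2 = r3 = 1 limit

# Light neutrino mass matrix in one common Majorana convention (assumption):
M_nu = U.conj() @ np.diag([m1, m2, m3]) @ U.conj().T

# The sum-rule 1/m1 - 1/m3 = 2/m2 holds exactly in this limit:
print(abs(1/m1 - 1/m3 - 2/m2))   # ~1e-16, i.e. zero to machine precision
```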
IV. NUMERICAL AND ANALYTICAL RESULTS
In our left-right symmetric A 4 model the light neutrino mass matrix is given by Eq. (26) while the charged leptons are diagonal with enough parameters to fully fit their masses. First we will numerically diagonalize M ν in order to find all possible parameter values. Analytical diagonalization of the general mass matrix turns out to be rather complicated, so we will only give one example. Note that, in the spirit of the discussion in Sec. II B, the VEV alignment in Eq. (13) breaks A 4 to subgroups, but they do not end up in the mass matrices. Hence, the mixing will depend on the values of the masses.
A. Numerical solutions
Varying all five free parameters (r_2, r_3, m and complex z) in Eq. (26) and comparing the mixing angles and masses with the 3σ global fit results from Ref. [29] reveals several disconnected ranges of parameters. The eight different cases for the normal ordering and the ten cases for the inverted ordering can be seen in Fig. 1, where we plot them in the parameter space of θ_23, δ and the smallest mass m_L. Note that some solutions appear to overlap, but this happens only because of the three-dimensional plot: the space of solutions is actually five-dimensional, and the areas in that parameter space do not overlap.
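A scan of this kind needs a routine mapping a candidate mixing matrix to (θ_12, θ_13, θ_23, δ). As a sketch of that step only (the model matrix of Eq. (26) is not reproduced here), the code below builds a PMNS matrix in the standard PDG parametrization and extracts the angles and the Dirac phase back via the Jarlskog invariant as a self-consistency check; the sample input values are arbitrary.

```python
import numpy as np

def pmns(t12, t13, t23, delta):
    """Standard PDG parametrization with a Dirac phase only."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    ep = np.exp(1j * delta)
    return np.array([
        [ c12*c13,                   s12*c13,                  s13*np.conj(ep)],
        [-s12*c23 - c12*s23*s13*ep,  c12*c23 - s12*s23*s13*ep, s23*c13       ],
        [ s12*s23 - c12*c23*s13*ep, -c12*s23 - s12*c23*s13*ep, c23*c13       ]])

def extract(U):
    """Recover (t12, t13, t23, delta) from a PMNS-like unitary matrix."""
    t13 = np.arcsin(abs(U[0, 2]))
    t12 = np.arctan2(abs(U[0, 1]), abs(U[0, 0]))
    t23 = np.arctan2(abs(U[1, 2]), abs(U[2, 2]))
    # Dirac phase from the Jarlskog invariant J = Im(Ue1 Umu2 Ue2* Umu1*):
    J = np.imag(U[0, 0] * U[1, 1] * np.conj(U[0, 1]) * np.conj(U[1, 0]))
    s, c = np.sin, np.cos
    sin_d = J / (s(t12)*c(t12) * s(t23)*c(t23) * s(t13) * c(t13)**2)
    return t12, t13, t23, np.arcsin(np.clip(sin_d, -1.0, 1.0))

angles = (0.58, 0.15, 0.72, 0.30)   # arbitrary test point (radians)
print(np.allclose(extract(pmns(*angles)), angles))   # True: round-trip works
```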
For the normal ordering there are four curves in the shape of a "J" and another four in the shape of a "U". All require a smallest neutrino mass above zero; the U-shaped ones have a larger minimal value than the J-shaped ones. We name the solutions A^N_{±±} and B^N_{±±}, where the subscript ±± denotes the signs of θ_23 − π/4 and δ (lying in our convention between −π and π). Interestingly, solutions of type A have values of the CP phase very close to ±π/2, where −π/2 seems to be preferred by current data [30]. The type A solutions always keep the signs of θ_23 − π/4 and δ; those of type B do so only for most of the parameter space. While the lower limit on the smallest mass is 0.034 eV for type A, it is 0.046 eV for type B.
There are similar types of solutions for the inverted mass ordering, denoted A^I_{±±} and B^I_{±±} (having smallest masses of at least 0.034 and 0.053 eV, respectively). In addition, there is a different type of solution denoted C^I_±, where the subscript denotes the sign of θ_23 − π/4. These two cases are special in the sense that they allow only a smallest mass between 0.004 and 0.013 eV. Example solutions are given in Table II. Note that some of the solutions with δ → −δ are connected by complex conjugation of the mass matrix.
The correlations between the interesting parameters θ_23, δ and m_L are shown in Figs. 2 and 3, respectively. Finally, Fig. 4 summarizes the prediction of the model for neutrinoless double beta decay [31]. We see in particular that for the inverted ordering the effective mass essentially always takes its largest possible values, and that for the normal mass ordering the effective mass is non-zero.
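The quantity in Fig. 4 is the standard effective Majorana mass, M_ee = |Σ_i U_ei² m_i|. The sketch below evaluates it for a single parameter point in the standard parametrization with Majorana phases; the input masses and phases are arbitrary illustrative numbers, not the model's actual solutions.

```python
import numpy as np

def m_ee(m1, m2, m3, t12, t13, a21, a31, delta=0.0):
    """Effective Majorana mass |sum_i U_ei^2 m_i| in the standard convention."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    return abs(c12**2 * c13**2 * m1
               + s12**2 * c13**2 * m2 * np.exp(1j * a21)
               + s13**2 * m3 * np.exp(1j * (a31 - 2 * delta)))

# Illustrative inverted-ordering point: quasi-degenerate m1 ~ m2, small m3 (eV).
print(m_ee(0.049, 0.050, 0.010, t12=0.58, t13=0.15, a21=0.0, a31=1.0))
```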
B. Analytical calculation
Now we try to analytically find approximate expressions for one of the many possible solutions. From Table II we see that there are solutions with r 2 and r 3 close to one. Focusing on this case, we introduce the small parameters in Eq. (26). In this case the neutrino mixing should be close to tri-bimaximal mixing (TBM) because if δ 2 , δ 3 = 0 the neutrino mixing is TBM. We further assume for simplicity that the neutrino mass sum-rule as discussed at the end of Sec. III holds, which is approximately true in this case as well.
The deviation from TBM can be computed perturbatively under the assumption δ 2 , δ 3 1 and some details are found in the Appendix. The result turns out to be . .
where z first appears in Eq. (24) and the f -functions are given in the appendix. The elements not given can be found there, but are not important here. The important point is that the deviations of U e1 and U e2 are proportional to (δ 2 + δ 3 ) while the deviations of U e3 , U µ3 and U τ 3 are proportional to (δ 2 − δ 3 ). Note that U e1 and U e2 determine the value of θ 12 , which should not be too far away from the TBM value sin θ 12 = 1/ √ 3. At the same time, U e3 ∝ δ 2 − δ 3 should be relatively large compared to the deviation of θ 12 . Thus, we simplify the analysis further by taking δ 3 = −δ 2 . Another assumption to make our life simpler is that |1 + z| ≈ 1, which implies z ≈ e 2iα − 1 .
The reason for this assumption is as follows: as mentioned above, we use that the actual mass spectrum (m_1, m_2, m_3) is still very close to the leading-order one, which is proportional to (1/(1 + z), 1, −1/(1 − z)); the near-degeneracy of m_1 and m_2 then implies that |1 + z| should be very close to 1. Note that if we assume z ≈ e^{2iα} − 1, we are limited to the inverted ordering, because |1/(1 − z)|² is then always less than 1. With the above assumptions (first taking δ_3 = −δ_2 and then z ≈ e^{2iα} − 1), Eq. (28) can be simplified to Eq. (29). Note that with |g_13(α)| δ_2 = sin θ_13 we can replace δ_2 with s_13/|g_13(α)| and then extract tan θ_23 and sin δ from Eq. (29). They can be expressed in terms of θ_13 and α, and the deviation of θ_23 from π/4 vanishes as α → 0. Therefore, for small |θ_23 − 45°| the smallest mass m_L is large. Furthermore, the larger α, the larger the deviation of θ_23 from π/4, which means that there should be a lower bound on m_L. The above expressions also imply that lim_{m_L→∞} |sin δ| = 1, i.e. as the neutrino mass increases the CP phase approaches one of its maximal values.
Those features can be identified from the plots in Fig. 3, showing the accuracy of the analytical study.
Figure 4. The effective Majorana mass M_ee in neutrinoless double beta decay for both normal (blue) and inverted mass ordering (red; note the isolated red points corresponding to the C^I_± solutions). The green shaded area represents the currently allowed parameter space.
C. Phenomenological summary
Let us summarize the phenomenological consequences of the model.
First of all, the lightest neutrino mass cannot be zero or too small; quantitatively, from the previous results: normal ordering: m_L ≳ 0.034 eV; inverted ordering: m_L ∈ (0.004, 0.013) eV or m_L ≳ 0.034 eV.
The lower bound on m_L for the normal ordering has an important implication, as the effective mass M_ee is then always non-zero. One finds: normal ordering: M_ee ≳ 0.036 eV; inverted ordering: M_ee ∈ (0.0482, 0.0493) eV or M_ee ≳ 0.059 eV.
We also note that for large m_L (≳ 0.1 eV) a simple relation between M_ee and m_L holds. This is due to the (approximately valid) sum-rule 2 m_2⁻¹ = m_1⁻¹ − m_3⁻¹, which gives the above relation for a quasi-degenerate spectrum [28].
Another important feature of our model is maximal CP violation. As we can see from the top plots in Figs. 2 and 3, both the A^N and A^I types of solution (green and blue points) always have maximal |sin δ| with very little uncertainty. For the B^N and B^I types of solution, if m_L is large enough, |sin δ| also approaches its maximal value. This can be understood e.g. from our previous analytic computation, which gives lim_{m_L→∞} |sin δ| = 1.
The C^I solutions in general do not have maximal CP violation. However, from the lower plot in Fig. 3 we see that δ and θ_23 are strongly correlated (black dots). If θ_23 turns out to deviate significantly from 45°, such as θ_23 < 42° or θ_23 > 48°, then the C^I solutions also predict maximal |sin δ|.
The two bottom plots in Figs. 2 and 3 show that if a large |θ_23 − 45°| is observed in the future, then |sin δ| must be close to its maximal value. For the inverted ordering this requires |θ_23 − 45°| ≳ 3°, as just discussed, while for the normal ordering it requires |θ_23 − 45°| ≳ 1.5°. It is interesting to note that such a deviation of θ_23 and a maximal |sin δ| are simultaneously (still rather mildly) preferred by the current global fit, as the best-fit of (θ_23, sin δ) is (41.4°, −0.94) for the normal ordering and (42.4°, −0.83) for the inverted ordering [29].
Finally, since in the large-m_L limit θ_23 goes to 45°, a significant deviation of θ_23 from 45° implies an upper bound on m_L. For example, if |θ_23 − 45°| ≳ 3° in the normal ordering, then from Fig. 2 we get m_L ≲ 0.06 eV, which constrains m_L to the very narrow region (0.034, 0.06) eV.
It is also possible to rule out a mass ordering in this model due to the different structures of the solutions. For example, if the future bound on m_L is pushed below 0.034 eV, then only the C^I solutions survive. Also, since the (θ_23, δ) distributions shown in the bottom plots of Figs. 2 and 3 are very different for the two possible mass orderings, it is also possible to distinguish them with precise measurements of θ_23 and δ.
In summary, if m_L is large, we have clear predictions for δ, θ_23 and M_ee, which should be close to their large-m_L limits. If m_L is small, then we have further interesting predictions among these parameters, such as large deviations from θ_23 = 45°, and correlations between θ_23 and δ as well as with the mass ordering.
V. CONCLUSION
We presented in this paper a flavor symmetry model based on A_4 within a left-right symmetric framework. Various aspects exist that make this environment different from the usual model building. This includes the necessity to treat the particles in left- and right-handed doublets, but more crucially the fact that residual symmetries from breaking the full flavor group do not make it into the mass matrices and hence do not determine the mixing. Furthermore, the discrete left-right symmetry should be parity rather than charge conjugation, in order to avoid inconsistencies between the flavor and charge conjugation symmetries.
Taking all this into account, we discussed a left-right symmetric model with A_4 flavor symmetry and analyzed its predictions. No flavor changing neutral currents from the Higgs bi-doublet are present. Several distinct solutions for the neutrino sector were found, many of which prefer maximal CP violation, as currently preferred by data. Various other predictions and correlations exist which would allow for tests of the model.
The various constraints that left-right symmetric theories impose on flavor symmetry models will allow for further analyses, both conceptual and phenomenological. The possibility to use left-right symmetry as a first bottom-up step to approach GUT flavor symmetries is another attractive option to study. Such endeavours are left for future studies.
The matrix to be diagonalized is quasi-diagonal, and we need to perform a small rotation 1 + iT, where T = T† and |T| ≪ 1, to diagonalize it; see Eq. (A6). We can assume T_ii = 0 because if we rephase each column of U, i.e. U → U diag(e^{iα₁}, e^{iα₂}, e^{iα₃}), U can still diagonalize M, and any non-zero T_ii can be absorbed into such a rephasing.
The next-to-leading order in Eq. (A6) gives

A_ij = −i(T_ji m_j + T_ij m_i) ,

together with its complex conjugate. Now we can solve these two equations with respect to T_ij and T_ji to obtain Eq. (A4). | 7,047.6 | 2015-09-10T00:00:00.000 | [
"Physics"
] |
Proof of a conjecture of Kløve on permutation codes under the Chebychev distance
Let $d$ be a positive integer and $x$ a real number. Let $A_{d,x}$ be a $d \times 2d$ matrix with entries
$$a_{i,j}=\begin{cases} x & \text{for } 1\leqslant j\leqslant d+1-i,\\ 1 & \text{for } d+2-i\leqslant j\leqslant d+i,\\ 0 & \text{for } d+1+i\leqslant j\leqslant 2d. \end{cases}$$
Further, let $R_d$ be the set of sequences of integers
$$R_d=\{(\rho_1,\rho_2,\ldots,\rho_d) \mid 1\leqslant \rho_i\leqslant d+i,\ 1\leqslant i\leqslant d,\ \text{and}\ \rho_r\ne\rho_s\ \text{for}\ r\ne s\},$$
and define
$$\Omega_d(x)=\sum_{\rho\in R_d} a_{1,\rho_1}a_{2,\rho_2}\cdots a_{d,\rho_d}.$$
In order to give a better bound on the size of spheres of permutation codes under the Chebychev distance, Kløve introduced the above function and conjectured that
$$\Omega_d(x)=\sum_{m=0}^{d}\binom{d}{m}(m+1)^d(x-1)^{d-m}.$$
In this paper, we settle this conjecture positively.
Introduction
A permutation code is a subset of the symmetric group S_n equipped with a distance metric. Permutation codes are of potential use in various applications such as power-line communications and coding for flash memories used with rank modulation [6,7], and they were extensively studied over the last decade. The Hamming metric was naturally the first to be considered; later, the Ulam metric [4] and the Kendall τ-metric [2] were introduced and are now the two most investigated metrics. In [9], however, a new metric named the Chebyshev metric was proposed by Kløve et al. when they were studying the multi-level flash memory model. A combinatorial survey of metrics related to permutations was given in [3]. The two main questions in coding theory are fundamental limits on the parameters of the code (information rate versus minimum distance) and constructions of codes that attain these limits. It turns out that both topics are difficult for permutation codes: few explicit constructions are known for the various metrics, and no general bounds better than the GV bound and the sphere-packing bound were found in [1,2,4,6,9], except for the Hamming metric [5]. Both the GV bound and the sphere-packing bound depend on the volume V(d, n) of a typical "ball", which consists of the permutations in S_n at distance at most d from the identity permutation. The calculation of the volume of that ball thus becomes a crucial problem.
The Chebychev distance d(p, q) between two permutations p = (p_1, p_2, ..., p_n) and q = (q_1, q_2, ..., q_n) is defined by d(p, q) = max_{1≤i≤n} |p_i − q_i|. The volume V(d, n) can then be expressed as the permanent of the n × n (0,1)-matrix A^{(d,n)} whose (i, j) entry is 1 if and only if |i − j| ≤ d. Although the permanent looks similar to the determinant of a matrix, it is a difficult problem to compute the permanent of a general matrix. The celebrated van der Waerden theorem gives a lower bound for the permanent of a so-called doubly stochastic n × n matrix; here doubly stochastic means that all the elements are non-negative and that the sum of the elements in any row or column is 1. Thus, if A is an n × n matrix where the sum of the elements in any row or column is a constant k, then the van der Waerden theorem gives a lower bound on the permanent of A.
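Since V(d, n) counts systems of distinct representatives of the row sets of A^{(d,n)}, it equals the permanent of that banded (0,1)-matrix; the brute-force sketch below confirms this for small n (both routines enumerate all permutations and are meant only as a sanity check).

```python
from itertools import permutations

def V_bruteforce(d, n):
    """Count permutations p of {0,...,n-1} with max_i |p_i - i| <= d."""
    return sum(all(abs(p[i] - i) <= d for i in range(n))
               for p in permutations(range(n)))

def per(A):
    """Permanent by expansion over permutations (fine for small n only)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
            if prod == 0:
                break
        total += prod
    return total

def V_permanent(d, n):
    A = [[1 if abs(i - j) <= d else 0 for j in range(n)] for i in range(n)]
    return per(A)

for n in range(1, 7):
    for d in range(n):
        assert V_bruteforce(d, n) == V_permanent(d, n)
print("V(d, n) equals the permanent of the banded 0-1 matrix for n <= 6")
```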
By noticing that most rows and columns of A^{(d,n)} have sum 2d + 1, Kløve defined a closely related matrix B^{(d,n)} in which every row and column sums to 2d + 1, so that van der Waerden's theorem can be applied. With this newly defined matrix B^{(d,n)}, Kløve [10] gave a lower bound on V(d, n). For a second bound, let R_d be the set of sequences of integers defined in the abstract and let Ω_d(x) be the associated sum; Kløve [9] also gave a lower bound on V(d, n) in terms of this function. Kløve stated the following generalized conjecture and verified it for d ≤ 9.
Conjecture 2 [10, Conjecture 1] For any positive integer d,
$$\Omega_d(x)=\sum_{m=0}^{d}\binom{d}{m}(m+1)^d(x-1)^{d-m}. \qquad (1.3)$$
In this paper, we shall prove that Conjecture 2 is true.
Proof of Kløve's Conjecture
Theorem 3 For any positive integer d, the identity (1.3) holds.
Actually, for any m × n matrix A = (a_{i,j}) with m ≤ n, the permanent of A is defined as follows (see, for example, [11]): per(A) = Σ_{σ∈P(n,m)} a_{1,σ_1} a_{2,σ_2} ··· a_{m,σ_m}, where P(n, m) denotes the set of all m-permutations of the n-set {1, 2, ..., n}.
In fact, by the definition of R_d, we know that R_d is exactly the subset of all d-permutations of the 2d-set {1, 2, ..., 2d} for which the product a_{1,σ_1} a_{2,σ_2} ··· a_{d,σ_d} can be non-zero. Hence we have Ω_d(x) = per(A_{d,x}).
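The identity Ω_d(x) = per(A_{d,x}) and the conjectured closed form (1.3) can both be checked by brute force for small d; the sketch below does this over integer x (the enumeration is exponential, so it is only feasible for d up to about 5).

```python
from itertools import permutations
from math import comb

def omega_bruteforce(d, x):
    """Permanent of the d x 2d matrix A_{d,x}: sum over injective column
    choices (rho_1, ..., rho_d) of the product of the selected entries."""
    def a(i, j):  # 1-based indices, as in the paper
        if j <= d + 1 - i:
            return x
        if j <= d + i:
            return 1
        return 0
    total = 0
    for rho in permutations(range(1, 2 * d + 1), d):
        p = 1
        for i, j in enumerate(rho, start=1):
            p *= a(i, j)
            if p == 0:
                break
        total += p
    return total

def omega_closed_form(d, x):
    """Kloeve's conjectured closed form, identity (1.3)."""
    return sum(comb(d, m) * (m + 1)**d * (x - 1)**(d - m) for m in range(d + 1))

for d in range(1, 5):
    for x in (0, 1, 2, 3):
        assert omega_bruteforce(d, x) == omega_closed_form(d, x)
print("closed form (1.3) matches the brute-force permanent for d <= 4")
```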
In order to prove Theorem 3, we first give a related combinatorial identity.
The identity (2.1) of Lemma 4 can, for example, be checked directly for small parameter values. Proof of Lemma 4 We compute the multiple sum in the order from k_m down to k_1. One can prove by induction on k_{m−1}, k_{m−2}, ..., k_{m−i−1}, respectively, that the innermost partial sums, starting with Σ_{k_m = k_{m−1}}^{n}, admit the closed form (2.2). By choosing i = m − 1 in (2.2), we complete the proof of (2.1).
Proof of Theorem 3 It is clear that (1.3) is equivalent to
$$\Omega_d(x+1)=\sum_{m=0}^{d}\binom{d}{m}(d-m+1)^d\, x^m,$$
i.e., to the statement that the coefficient b_m of x^m in Ω_d(x + 1) equals \binom{d}{m}(d − m + 1)^d.
By the definition of Ω_d(x + 1), we know that each factor of x in a term comes from the first summand of an entry (x + 1).
To compute b_m, we first choose m x's from the (x + 1) entries such that no two lie in the same row or the same column of the matrix A_{d,x+1}, and then choose (d − m) 1's in the other d − m rows so that no two 1's share a column. Suppose that the m x's are chosen from the rows indexed by d + 1 − i_1, d + 1 − i_2, ..., d + 1 − i_m with i_1 < i_2 < ··· < i_m, respectively. Noticing that the (d + 1 − i)-th row has i entries equal to (x + 1) and that all the chosen x's must lie in different columns, there are i_1 (i_2 − 1)(i_3 − 2) ··· (i_m − m + 1) ways to do this. As for the number of ways to choose the 1's in the remaining rows, we notice that the i-th row has d + i 1's (including those inside the (x + 1) entries) and that all these 1's form several right trapezoids in the matrix A_{d,x+1}. Counting these choices leads to a multiple sum of exactly the shape treated in Lemma 4, where k_s = i_s − s + 1 (s = 1, ..., m), k_0 = 1, and k_{m+1} = d − m + 1. By replacing n by d − m + 1 in (2.1), we obtain b_m = \binom{d}{m}(d − m + 1)^d. This completes the proof.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 1,930.6 | 2016-08-04T00:00:00.000 | [
"Mathematics"
] |
LSMO Nanoparticles Coated by Hyaluronic Acid for Magnetic Hyperthermia
Magnetic hyperthermia, with a treating temperature range of 41–46 °C, is an alternative therapy for cancer treatment. In this article, lanthanum strontium manganate (La1−xSrxMnO3, 0.25 ≤ x ≤ 0.35) magnetic nanoparticles coated by hyaluronic acid (HA), which possesses the ability to target tumor cells, were prepared by a simple hydrothermal method combined with a high-energy ball milling technique. The crystal structure, morphology, and magnetic properties of the HA-coated magnetic nanoparticles (MNPs), and their heating ability under an alternating magnetic field, were investigated. It was found that HA-coated La0.7Sr0.3MnO3, with a particle diameter of ~100 nm and a Curie temperature near 45 °C, gave the optimal induction heating results at a concentration of 6 mg/mL: the heating temperature saturates at 45.7 °C, and the ESAR is 5.7 × 10−3 W·g−1·kHz−1·(kA/m)−2, which is much higher than other reported results. Electronic supplementary material The online version of this article (doi:10.1186/s11671-016-1756-3) contains supplementary material, which is available to authorized users.
Background
Magnetic hyperthermia is considered an alternative therapy for cancer treatment since it has no side effects compared with traditional drug or radiation treatment [1,2]. Magnetic nanoparticles (MNPs) in stable colloidal suspension can be delivered to a tumor via a noninvasive route and heated with an external high-frequency magnetic field. The treating temperature of the target tissue should be 41-46°C to destroy the cancer cells while avoiding harm to normal cells [3]. To prevent the treating temperature from rising higher, a thermometry probe in contact with the target tissue is usually needed [4,5]; however, this approach is problematic when the tumor is inside the body. If the heating of the MNPs stops automatically once the temperature reaches 46°C, the thermometry probe can be omitted. To achieve this, the series of La1−xSrxMnO3 (LSMO) compounds has attracted great interest in recent years [6-8]. The ferromagnetic-paramagnetic transition temperature Tc (Curie temperature) of LSMO compounds ranges from 10 to 97°C with the variation of Sr content [9]. Above Tc, the MNPs lose the ability to generate heat under a magnetic field. By controlling the composition precisely, LSMO MNPs can reach a saturation heating temperature around 46°C without adjusting the external magnetic field.
The most popular method of preparing LSMO nanomaterial is the sol-gel method, but the aggregation of nanograins is then serious [10]. Another possible method is the hydrothermal method, but the grain size and shape of the product are hard to control [11]. Both of these methods need further smashing of the product if monodisperse MNPs are desired. Meanwhile, a surfactant is necessary to prevent agglomeration of the MNPs in the magnetic fluid and to improve biocompatibility [12].
In this work, LSMO MNPs were prepared by a simple hydrothermal method and then smashed by a high-energy ball milling technique. Hyaluronic acid (HA) is a naturally occurring polysaccharide present in the extracellular matrix and synovial fluids; it can specifically bind to various cancer cells, and its conjugates containing anti-cancer agents exhibit enhanced tumor-targeting ability and higher therapeutic efficacy compared to free anti-cancer agents. Here, HA was used as the surfactant for the prepared LSMO MNPs. The basic physical properties and the heating effect of the HA-coated MNPs under a high-frequency magnetic field were investigated.
Methods
La1−xSrxMnO3 (0.25 ≤ x ≤ 0.35) nanoparticles were synthesized by a simple hydrothermal method combined with a high-energy ball milling technique [13,14]. In the hydrothermal reaction, La(NO3)3·6H2O, Sr(NO3)2, KMnO4, Mn(CH3COO)2·4H2O, and KOH were added to 100 mL of deionized H2O in appropriate proportions. The total molar amount of La3+ and Sr2+ ions was 0.0125 mol. In order to keep the balance of charges, the ratio of Mn2+ to Mn7+ ions varies with the doping of Sr2+, while their total molar amount was kept at 0.0125 mol. KOH was used as a mineralizer, in an amount of 0.8 mol. The reaction mixture was then sealed in Teflon-lined stainless steel autoclaves and heated at 220°C for 48 h. The product was decanted in a magnetic field several times to remove residual ions. Before ball milling, 0.3 g of surfactant and 1.0 g of LSMO powder were added, and the pH of the mixture was adjusted to 5 with dilute hydrochloric acid. High-energy ball milling was carried out for 8 h to form the LSMO magnetic nanofluid [2,12]. To reduce agglomeration of the magnetic particles, the milled nanoparticles were dispersed in NaOH solution at pH 12. HA-coated LSMO (x = 0.25, 0.3, 0.35) MNPs are denoted S1, S2, and S3, respectively. As a contrast, OA (oleic acid)-coated La0.7Sr0.3MnO3 MNPs were also fabricated and denoted S4 [12].
The purity, homogeneity, and crystal structure of the samples were characterized by X-ray diffractometry (Philips X'pert). The Scherrer equation was used to determine the size of the crystallites from the most intense peak in all cases, which appears at approximately 32.79°. The surface morphology of the samples was observed using scanning electron microscopy (SEM, Hitachi S-4800) (Additional file 1). The particle size was determined by high-resolution transmission electron microscopy (HRTEM, Tecnai G2 F30, FEI). The hysteresis loops were measured using a vibrating sample magnetometer (ADE Model EV9 system) at 300 K. The Curie temperature was determined as the position of the minimum of the dM/dT versus T curve. The thermal heating curves of the magnetic liquids were obtained with SPG-I high-frequency induction heating equipment. The magnetic field of the coil was calculated from the standard relation for the field at the centre of a coil, H = 0.4πNI/L (H in Oe), in which N, I, and L represent the number of turns, the applied current, and the diameter of the turn in centimeters, respectively [15]. The magnetic field used here is 53.1 Oe. The temperature variation was measured using an optical fiber probe.
The heating capacity of the MNPs is quantified by the SAR (specific absorption rate) value according to the relation SAR = (C_W/m_magn)·(ΔT/Δt) [12,16,17], where C_W is the specific heat capacity of water (4.18 J g−1 K−1), ΔT/Δt is the initial slope of the time-dependent temperature curve, and m_magn is the weight fraction of the magnetically active element.
It should be noted that SAR depends strongly on the magnitude and frequency of the applied magnetic field, so results from studies using different fields are difficult to compare directly. In order to compare the SAR of our samples with other reported values, the effective specific absorption rate (ESAR) was also calculated. The ESAR characterizes the heat-transformation ability by normalizing SAR with respect to the applied magnetic field strength H_applied and the frequency f of the applied field [8,16,18,19].
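As a worked illustration of the SAR and ESAR calculation, the sketch below estimates both quantities from a heating curve in Python. The heating-curve values and the concentration are placeholders, and the ESAR normalization used here, SAR/(H²·f) (a common ILP-style convention), is an assumption rather than the exact expression from the paper.

```python
import numpy as np

# SAR / ESAR estimate from a heating curve, following the definitions above.
# The heating-curve data and the concentration are placeholders, and the
# ESAR normalization SAR / (H^2 * f) is an assumed (ILP-style) convention,
# not necessarily the exact expression used in the paper.
C_WATER = 4.18          # J g^-1 K^-1, specific heat of water
H_FIELD_OE = 53.1       # applied field amplitude (Oe)
FREQ_HZ = 217e3         # applied field frequency (Hz)
m_magn = 0.006          # weight fraction of magnetic material (~6 mg per ml of water)

# Hypothetical heating curve: time (s) versus temperature (degC)
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
T = np.array([25.0, 26.1, 27.1, 28.0, 28.8])

# Initial slope dT/dt from a linear fit to the first points of the curve
slope = np.polyfit(t[:3], T[:3], 1)[0]        # K/s

sar = C_WATER * slope / m_magn                # W per g of magnetic material
esar = sar / (H_FIELD_OE**2 * FREQ_HZ)        # normalized by H^2 * f
print(f"SAR  = {sar:.1f} W/g")
print(f"ESAR = {esar:.3e} W g^-1 Oe^-2 Hz^-1")
```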
Results and Discussion
XRD Analysis
Figure 1 shows the XRD patterns of LSMO MNPs with different compositions. All patterns correspond to the characteristic peaks of LSMO [11], except for small La(OH)3 peaks in (b) and (c). The presence of unreacted La(OH)3 is attributed to the fact that the atomic size of La
SEM and TEM Analysis
The detailed surface morphologies of the samples were examined by SEM; the corresponding images for all samples are presented in Fig. 2. Image (a) shows that the size of most grains before high-energy ball milling is at the micrometer level, while images (b)-(d) show that the particle size after high-energy ball milling with surfactant is ~100 nm. TEM images of the samples were also taken (Fig. 3). The morphology of the particles is clearly non-spherical and their connections are very loose, and the nanoparticle size of LSMO coated by OA or HA is almost the same. Previous studies have shown that particles with sizes in the range of 5-10 nm can be rapidly removed through extravasation and renal clearance, while particles with sizes >200 nm can be sequestered by the spleen and eventually removed by phagocytes, which could pose a detrimental risk of pulmonary embolism [20,21]. LSMO MNPs with a size of less than 100 nm are therefore a safe dimension for hyperthermia application.
Oe for S1, S2, S3, and S4, respectively. The M_r and H_c values further confirm that the size of the MNPs is at the nanometer level. The M versus T curves at a magnetic field of 5000 Oe were measured for all samples in order to determine T_c: the dM/dT versus T curve was calculated, and the position of the lowest value of dM/dT was taken as the average T_c of the MNPs. As Fig. 5 shows, the T_c values of S1, S2, S3 and S4 are 44°C, 44.3°C, 49.2°C, and 43.9°C, respectively. This result shows that S1, S2, and S4 are more suitable for magnetic hyperthermia.
Magnetic Heating Experiments
Figure 6 presents the temperature variation curves obtained after applying an alternating magnetic field to samples dispersed in water at a concentration of 6 mg/ml. All heating curves show a monotonic rise in temperature with time. Figure 6a shows the curves of the HA-coated samples at a field of H = 53.1 Oe and f = 217 kHz. The saturation temperature of S1 and S2 is about 46°C, which satisfies the temperature demand for mild hyperthermia treatment [15,17,22], whereas S3 keeps heating until the temperature reaches 48°C, which can be attributed to its higher T_c. Among the three samples of Fig. 6a, S2 possesses the best heating ability, since the initial slope of its temperature-rise curve is the largest. Compared with the traditional OA-coated MNP sample S4, the heating ability of S2 is also better, as shown in Fig. 6b [12]. The SAR and ESAR values of our samples are listed in Table 1, and ESAR values of LSMO MNPs reported in other articles are listed for comparison. The larger SAR of S2 may be explained by the larger area within its hysteresis loop, which is indicated by its higher remanence and a coercivity similar to those of the other samples (Fig. 4). The ESAR value of S2 is clearly higher than the other reported results, indicating a better energy-converting ability under an alternating magnetic field. For the efficacy and safety of hyperthermia treatment, the heat generated should be within the mild hyperthermia range of 41-46°C and the material should have good cell compatibility. In our experimental work, the HA-coated La0.7Sr0.3MnO3 magnetic fluid meets both demands.
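As a small illustration of the dM/dT-based Curie-temperature determination described above, the Python sketch below finds T_c as the temperature of the minimum of a numerical dM/dT curve. The M(T) data are synthetic placeholders, not the measured curves of Fig. 5.

```python
import numpy as np

# Curie-temperature determination as described above: T_c is taken as the
# temperature where dM/dT reaches its minimum. The M(T) curve below is a
# synthetic placeholder, not the measured data of Fig. 5.
T = np.linspace(20.0, 70.0, 200)                 # temperature (degC)
M = 50.0 / (1.0 + np.exp((T - 45.0) / 2.0))      # fake magnetization curve (emu/g)

dMdT = np.gradient(M, T)                         # numerical derivative dM/dT
T_c = T[np.argmin(dMdT)]                         # steepest decrease of M(T)
print(f"Estimated Curie temperature: {T_c:.1f} degC")
```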
Conclusions
La1−xSrxMnO3 (0.25 ≤ x ≤ 0.35) MNPs were successfully synthesized and coated with HA as a surfactant. The HA-coated La0.7Sr0.3MnO3 magnetic fluid, with a saturation heating temperature of 45.7°C and a magnetic particle size of ~100 nm, satisfies the requirements of the mild hyperthermia temperature range (41-46°C) and good cell compatibility for therapeutic application. Moreover, the ESAR value of the HA-coated La0.7Sr0.3MnO3 magnetic fluid is much higher than other reported experimental results. Combined with the tumor-targeting ability of HA, we believe that the HA-coated La0.7Sr0.3MnO3 magnetic fluid is a good candidate for hyperthermia treatment. The improved particle dispersibility and ESAR are favorable for future biomedical applications of coated MNPs. | 2,648.8 | 2016-12-03T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Acoustic Model Adaptation for Indonesian Language Utterance Training System
Problem statement: In order to build an utterance training system for the Indonesian language, a speech recognition system designed for Indonesian is necessary. However, the system hardly works well because the pronunciation variants in non-native utterances may lead to substitution/deletion errors. This research investigated the pronunciation variants and proposes an acoustic model adaptation to improve the performance of the system. Approach: The proposed acoustic model adaptation worked in three steps: to analyze pronunciation variants with knowledge-based and data-derived methods; to align the knowledge-based and data-derived results in order to list frequently mispronounced phones with their variants; and to perform a state-clustering procedure with the list obtained from the second step. Further, three Speaker Adaptation (SA) techniques were used in combination with the acoustic model adaptation and compared with each other. In order to evaluate and tune the adaptation techniques, a perceptual-based evaluation by three human raters was performed to obtain the "true" recognition results. Results: The proposed method achieved an average gain in Hit + Rejection (the percentage of utterances correctly accepted and correctly rejected by the system, as judged by the human raters) of 2.9 points and 2 points for native and non-native subjects, respectively, when compared with the system without adaptation. Average gains of 12.7 and 6.2 points in Hit + Rejection were obtained for native and non-native students by combining SA with the acoustic model adaptation. Conclusion/Recommendations: Performance evaluation of the adapted system demonstrated that the proposed acoustic model adaptation can improve Hit even though there is a slight increase in False Alarm (FA, the percentage of utterances incorrectly accepted by the system but rejected by the human raters). The performance of the proposed acoustic model adaptation depends strongly on the effectiveness of the state-clustering procedure in recovering only in-vocabulary words. For future research, a confidence measure to discriminate between in-vocabulary and out-of-vocabulary words will be investigated.
INTRODUCTION
In recent years, there has been increasing interest among foreign students in studying in Indonesia, especially the Indonesian language and local culture. Because of the limited duration of their study (ranging from 1 month to 1 year), it would be very beneficial for them to study the Indonesian language beforehand so that their study time becomes more effective. From this situation came the initial idea of developing an Utterance Training System (UTS) for the Indonesian language. In addition, speaking practice is a necessary skill to complement reading and listening lessons, which are available from various books and educational software. The current isolated-word recognizer for Indonesian, hereafter called the baseline system, was trained on native utterance data and suffers drastic degradation on non-native utterances. The reason is that non-native subjects often make substitution or deletion errors due to pronunciation variants contained in their utterances. Therefore, an adaptation to compensate for non-native pronunciation variants is required in order to improve the recognition accuracy of the baseline system. This study investigates an acoustic model adaptation for the Indonesian-language UTS based on non-native utterances. Various approaches have been proposed for acoustic model adaptation to non-native utterances, for example, using characteristics of the mother tongue (source language) of the non-native subject in the evaluation of his/her pronunciation (Moustroufas and Digalakis, 2006). Dictionary modification and acoustic model adaptation and manipulation are typical techniques that can improve recognition of non-native utterances, as shown in (Oh et al., 2007; Alia and Al Mograbi, 2007). In line with the main idea of these published works, the proposed acoustic model adaptation works as follows: the frequently mispronounced phones of non-native subjects, with their pronunciation variants, are identified by an alignment analysis between knowledge-based and data-derived results. The knowledge-based method employs human raters to carry out a phonetic analysis between the Indonesian language and the non-native languages. The data-derived method, on the other hand, uses the system to automatically align non-native utterances with the reference transcription of the correct utterance and to create a monophone-based confusion matrix. The result of the alignment analysis is a list of mispronounced phones with their variants, which is used to perform an acoustic model adaptation in a state-clustering procedure. The presence of human raters in the proposed acoustic model adaptation is necessary in order to provide a standard evaluation against the recognition results of the system, as mentioned in (Neumeyer et al., 1996; Franco et al., 1997). The perceptual-based evaluation by human raters not only rates non-native utterances as accepted/rejected but also analyzes and locates specific errors at the segmental level. Further, the acoustic model adaptation is combined with three speaker adaptation techniques, Maximum Likelihood Linear Regression (MLLR) as proposed in (Goronzy et al., 2004; Giuliani et al., 2006; Haraty and El Ariss, 2007), Constrained MLLR (CMLLR) and Vocal Tract Length Normalization (VTLN) as proposed in (Hariharan et al., 2002; Sundermann et al., 2003; Legetter and Woodland, 1995; Shen and Reynolds, 2008; Al-Haddad et al., 2009; Gales and Young, 2008), in order to eliminate inter-speaker variability.
Performance of the proposed acoustic model adaptation is evaluated with five measures from the alignment analysis between recognition results and the perceptual-based evaluation: Hit, False Alarm (FA), Miss, Rejection and Hit + Rejection.
MATERIALS AND METHODS
Speech database: Speech databases were constructed for training and testing purposes. The material is composed of 100 isolated words, which are used to develop a native and a non-native speech database. The data (frequently used everyday words) were collected for simple isolated-word recognition.
The native speech corpus consists of utterances from 42 native speakers, most of them with a Javanese mother tongue (Tan and Hussain, 2009). Each word was normally uttered twice. Of the 8400 native utterances, 4200 (50%) were used for training. Another native speech database, consisting of 10 native speakers (1000 utterances), was developed for evaluating the performance of the recognizer.
The non-native speech database consists of utterances from 8 male and 1 female students. These non-native students had no experience of learning Indonesian before this experiment (in other words, they are all at the same beginner level). A brief explanation and pronunciation practice under native guidance is given just before the recordings take place. Non-native students utter each word normally four times. Once a mispronunciation occurs during the process, they are required to redo the task to correct only that mistake. From the 3600 non-native utterances, 1800 (50%) are used for training purposes. Another non-native speech database containing 4 male and 1 female students (500 utterances) is developed for testing.
Acoustic model adaptation: An Acoustic Model Adaptation (AMA) method is proposed in order to improve the recognition performance of the baseline system evaluated on non-native utterances. The proposed adaptation method consists of three steps:
1. To observe the pronunciation variants made by non-native students in Indonesian with two different methods: knowledge-based and data-derived (Wester, 2003). The knowledge-based method uses general knowledge about the Indonesian language and the non-native languages, and its procedure is as follows:
• Three human raters (Indonesian graduate students whose major is engineering) are equipped with headphones, recorded speech from 5 non-native students, the list of 100 words with transcriptions and 5 lists of foreign phonology classification. A brief explanation of how to perform the evaluation is provided beforehand. Each rater is accompanied by one of the authors during the evaluation task to keep a steady performance measure
• In response, the human raters evaluate each utterance based on segmental quality. Any unusual pronunciation is noted and carefully scrutinized to find its error
• To test the reliability of each human rater, evaluations on the same set of utterances are carried out twice per rater. The intra-rater reliability is found to be 0.89; the degree of agreement among human raters (inter-rater reliability) is also high, about 0.93
• Output of this process: each human rater has 5 evaluation results for the 5 non-native students. The judgment of each unusual pronunciation by the three human raters is decided by majority rule: when two human raters agree to accept a certain pronunciation while one rater rejects it, voting determines the result. As a result, one list of mispronounced phones with their pronunciation variants, summarized over the 5 non-native students, is obtained
The data-derived method uses the baseline system, trained on pooled native and non-native utterances, to perform an automatic alignment of non-native utterances with the reference transcription of the correct utterance and to output a monophone-based confusion matrix. The confusion matrix records, for each phone, how often it is correctly classified as the same phone or incorrectly classified as another phone (a minimal sketch of this confusion-matrix construction is given after this list).
2. To carry out an alignment analysis of the knowledge-based and data-derived results. The three human raters work collaboratively to align the list of mispronounced phones obtained with the knowledge-based method with the frequently mispronounced phones obtained from the confusion matrix. As a result, the list of frequently mispronounced phones with their corresponding pronunciation variants is obtained, as shown in Table 1.
3. To perform a state-clustering procedure based on the results shown in Table 1.
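As a small illustration of the data-derived side of step 1, the Python sketch below builds a monophone confusion matrix from pairs of reference and recognized phone sequences. The phone sequences, and the assumption that the sequences have equal length, are placeholders for illustration only; the actual system derives its alignments from forced alignment with the recognizer.

```python
from collections import defaultdict

# Monophone confusion matrix for the data-derived step described above.
# The phone sequences are placeholders; a real system would also need an
# alignment step (e.g. edit-distance alignment) when the reference and
# recognized sequences differ in length.
pairs = [
    (["s", "e", "l", "a", "m", "a", "t"],    # reference transcription
     ["s", "i", "l", "a", "m", "a", "t"]),   # phones from forced alignment (e -> i)
    (["m", "a", "k", "a", "n"],
     ["m", "a", "k", "a", "n"]),
]

confusion = defaultdict(lambda: defaultdict(int))
for ref, hyp in pairs:
    for r, h in zip(ref, hyp):               # assumes equal-length sequences here
        confusion[r][h] += 1

# Frequently mispronounced phones: reference phones often mapped to another phone
for ref_phone, row in confusion.items():
    errors = {h: c for h, c in row.items() if h != ref_phone}
    if errors:
        print(ref_phone, "->", errors)
```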
The state-clustering of the proposed acoustic model adaptation works as follows:
• An initial set of 3-state left-right monophone models is created and trained on native and non-native utterances
• A set of context-dependent triphone models is made by cloning the monophone models
• In conventional state-clustering, for each set of context-dependent triphones derived from the same monophone, the corresponding states are clustered. For example, for a triphone l-c+r, clustering is performed for each center monophone /c/ in the triphone-based acoustic models and all corresponding left-right phones are tied to /c/. In the state-clustering of the proposed acoustic model adaptation, however, clustering is performed in two conditional ways: one for center monophones with pronunciation variants and the other for those without. If the center monophone /c/ has a pronunciation variant /c'/, as found in the alignment analysis of the perceptual-based and system evaluations of non-native utterances, the triphones with center monophone /c/ or /c'/ are pooled together and the corresponding left-right monophones are clustered; otherwise, conventional state-clustering is performed (a minimal sketch of this pooling decision is given at the end of this subsection)
• The number of mixture components in each state is incremented and the models are re-estimated until the best performance is reached
AMA in combination with speaker adaptation: A UTS should be speaker independent, i.e., inter-speaker variability should be eliminated. Various adaptation methods have been used to deal with inter-speaker variability. One approach is speaker adaptive training: the speech signal of a new utterance is normalized so that it resembles the average utterance. Another is parameter adaptation, in which a transformation is used to minimize the mismatch between new utterances and the average utterances. This study uses simple and commonly applied speaker adaptation techniques, MLLR (Goronzy et al., 2004; Giuliani et al., 2006), CMLLR (Hariharan et al., 2002; Sundermann et al., 2003; Legetter and Woodland, 1995; Shen and Reynolds, 2008) and VTLN, to compensate for speaker-specific differences caused by non-native language influence on isolated words. Table 2 shows the results of the baseline system adapted with the speaker adaptation techniques (MLLR, CMLLR and VTLN), and Table 3 shows the results of the baseline system adapted with the combination of AMA and the speaker adaptation techniques (MLLR, CMLLR and VTLN).
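The sketch below illustrates the pooling decision referred to above: given a hypothetical list of mispronounced phones and their variants (standing in for Table 1, which is not reproduced here), triphones whose center phone is either the target phone or its variant are assigned the same clustering pool. This is only a schematic illustration of the decision rule, not the actual HTK tying procedure.

```python
# Pooling decision of the proposed conditional state-clustering described above.
# VARIANTS is a hypothetical phone -> variant mapping standing in for Table 1,
# not the actual list used in the paper. Triphones are written "l-c+r".
VARIANTS = {"e": "i", "o": "u", "r": "l"}    # placeholder variant pairs

# Canonical pool key: a variant phone is pooled with its target phone.
CANONICAL = {}
for target, variant in VARIANTS.items():
    CANONICAL[target] = target
    CANONICAL[variant] = target

def pool_key(triphone):
    """Return the clustering key for a triphone 'l-c+r'.

    If the center phone (or its listed variant) occurs in the variant list,
    the triphone is pooled under the canonical center phone; otherwise the
    conventional per-center-phone clustering applies.
    """
    center = triphone.split("-")[1].split("+")[0]
    return CANONICAL.get(center, center)

# Triphones whose centers are /e/ and /i/ end up in the same pool:
print(pool_key("s-e+l"), pool_key("s-i+l"))   # -> e e
# A phone without a listed variant keeps conventional clustering:
print(pool_key("m-a+k"))                      # -> a
```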
Assessment and evaluation:
Automatic analysis: The HTK Tools package (Woodland et al., 1994) is used for speech analysis, acoustic model training and speech recognition.
Table 3: Results of alignment analysis between recognition results and perceptual-based evaluation for native and non-native utterances evaluated on the baseline system with Acoustic Model Adaptation (AMA) in combination with three Speaker Adaptation (SA) techniques (MLLR, CMLLR and VTLN).
There are independent programs for each step of the training and recognition processes. A set of phoneme-level HMMs is trained on the utterances (and the labels) in the training set. During the training process, each utterance is encoded and the relevant features are extracted based on the choice of features, window size and frame period. Each HMM state is initially modeled by a mixture of Gaussians of size 1 and trained using four cycles of Baum-Welch re-estimation. This is repeated until the best performance is reached. After obtaining the phoneme-level HMMs, the testing process is conducted by applying these HMMs to the test set using forced alignment and the Viterbi algorithm. The testing process generates a set of auto-labeled phones (phone name, start and end time) for each utterance. The recognition performance of the system is calculated by counting the correctly recognized words. The overall process, including the adaptation procedure, is shown in Fig. 1.
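The paper's training and decoding are carried out with HTK; purely as an illustration of the same idea (3-state left-to-right phone HMMs, single-Gaussian-mixture states, a few re-estimation cycles, Viterbi decoding), here is a minimal Python sketch using the hmmlearn library on synthetic feature vectors that stand in for MFCC frames. None of the settings below are taken from the paper.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

# Illustrative sketch only: the paper uses the HTK toolkit, not hmmlearn.
# A 3-state left-to-right phone HMM whose states start as single-Gaussian
# mixtures is re-estimated for a few EM cycles, then Viterbi-decoded.
# The random features below merely stand in for MFCC frames.
rng = np.random.default_rng(0)
utterances = [rng.normal(size=(30, 13)) for _ in range(20)]  # 20 fake utterances
X = np.concatenate(utterances)
lengths = [len(u) for u in utterances]

model = GMMHMM(n_components=3, n_mix=1, covariance_type="diag",
               n_iter=4, init_params="mcw", random_state=0)
# Left-to-right topology: start in state 0, allow only self-loops and forward moves.
model.startprob_ = np.array([1.0, 0.0, 0.0])
model.transmat_ = np.array([[0.5, 0.5, 0.0],
                            [0.0, 0.5, 0.5],
                            [0.0, 0.0, 1.0]])
model.fit(X, lengths)

# Viterbi decoding of one utterance into a state sequence (as in forced alignment)
log_prob, state_seq = model.decode(utterances[0], algorithm="viterbi")
print("log-likelihood:", round(log_prob, 2))
print("state sequence:", state_seq[:10], "...")
```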
Alignment analysis between recognition results and perceptual-based evaluation:
Human raters take part as a standard evaluation in judging non-native utterances against the recognition results of the system. The perceptual-based evaluation obtained from the human raters should be the target of the system in measuring its performance reliability. The human raters evaluated the quality of each non-native utterance for its entire content (overall pronunciation) as follows:
• The same three human raters as previously mentioned are used. They work voluntarily on this task, which takes about 3 h. A brief explanation of how to evaluate non-native utterances is given before the task. Each rater is provided with the list of 100 words with transcriptions. In total there are 5 lists of non-native students to be evaluated by each rater
• Non-native utterances are presented via headphones to the human raters, who are asked to assess each non-native utterance. All raters listen to the speech material and perform their own evaluations. Each rater is accompanied by one of the authors during the evaluation task to keep a steady performance measure
• Human raters are allowed to listen to a specific utterance many times, but once a judgment is made, it cannot be changed. Each human rater has to evaluate 100 sets of utterances from each non-native student; in total, 500 sets of utterances from the 5 non-native students are evaluated by each rater
• Evaluations are based on the understandability of each utterance: when understandable, it is accepted; otherwise, it is rejected. As a result, each human rater has a list of accepted/rejected utterances from the 5 non-native students
• To make a final evaluation, the judgment of each utterance by the three human raters is decided by majority rule: when two raters agree to accept a certain utterance while one rejects it, voting determines the result
The averages of intra-rater and inter-rater reliability for overall pronunciation are the same as those for segmental quality in the knowledge-based method, as the evaluations of overall pronunciation and segmental quality are carried out in parallel. In the perceptual-based evaluation, of a total of 500 non-native utterances, 115 (23%) are rejected with regard to overall pronunciation. These results are used in the next step, the alignment analysis with the recognition results obtained by the system. Recognition results are aligned with the perceptual-based evaluation to measure the Hit, False Alarm (FA), Miss, Rejection and Hit + Rejection rates, where Hit + FA + Miss + Rejection = 100%:
• Hit: the percentage of correctly recognized utterances; both the system and the human raters accept the utterance
• False Alarm (FA): the percentage of incorrectly accepted utterances; the system accepts utterances that the human raters reject
• Miss: the percentage of incorrectly rejected utterances; the system rejects utterances that the human raters accept
• Rejection: the percentage of correctly rejected utterances; both the system and the human raters reject the utterance
• Hit + Rejection: the percentage of correctly accepted and correctly rejected utterances; the system's decision agrees with the human raters' (both accept or both reject)
A minimal sketch of computing these rates is given below. Table 2 summarizes the alignment analysis of the baseline system and of the baseline system adapted with the three Speaker Adaptation (SA) techniques (MLLR, CMLLR and VTLN), respectively.
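The following Python sketch illustrates these definitions on placeholder data: per-utterance accept/reject decisions from a hypothetical recognizer are compared with a majority vote over three hypothetical raters, and the five rates are printed. The decisions are invented for illustration and are not the paper's data.

```python
from collections import Counter

# Hit / FA / Miss / Rejection rates as defined above. 'system' and 'raters'
# hold accept (True) / reject (False) decisions per utterance; a majority
# vote over three hypothetical raters stands in for the perceptual-based
# evaluation. All decisions below are placeholders.
def majority(votes):
    return sum(votes) >= 2                 # at least 2 of 3 raters accept

rater_votes = [(True, True, False), (True, True, True),
               (False, False, True), (True, False, False)]
raters = [majority(v) for v in rater_votes]
system = [True, True, True, False]         # hypothetical recognizer decisions

counts = Counter()
for sys_ok, rater_ok in zip(system, raters):
    if sys_ok and rater_ok:
        counts["Hit"] += 1                 # both accept
    elif sys_ok and not rater_ok:
        counts["FA"] += 1                  # system accepts, raters reject
    elif rater_ok:
        counts["Miss"] += 1                # system rejects, raters accept
    else:
        counts["Rejection"] += 1           # both reject

n = len(system)
for key in ("Hit", "FA", "Miss", "Rejection"):
    print(f"{key}: {100.0 * counts[key] / n:.1f}%")
print(f"Hit + Rejection: {100.0 * (counts['Hit'] + counts['Rejection']) / n:.1f}%")
```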
As shown in the Hit + Rejection column, the SA techniques provide a gain of about 4.2 points (68→72.2%) and of about 2.3 points (15.4→17.7%) in FA, corresponding to decreases in Miss of about 6.5 points (16.6→10.1%) and in Rejection of about 2.3 points (7.6→5.3%), when the system is evaluated on non-native students. A positive improvement also occurs for native speakers, with a reduction of about 10.7 points (13.8→3.1%) in Miss and a corresponding gain in Hit + Rejection. From these results it can be seen that the SA techniques improve the recognition performance on native and non-native utterances; in other words, the performance of the baseline systems adapted with SA techniques is satisfactory.
Table 3 summarizes the alignment analysis of the baseline system adapted with Acoustic Model Adaptation (AMA) and of the baseline system adapted with the combination of AMA and the three SA techniques (MLLR, CMLLR and VTLN), respectively. For the baseline system adapted with AMA, Hit + Rejection increases by 2.0 points (68→70%) over the baseline system when evaluated on non-native students. For native speakers, Hit + Rejection also increases by about 2.9 points (86.2→89.1%). For the baseline system adapted with the combination of AMA and the SA techniques (MLLR, CMLLR and VTLN), there is an improvement over the baseline system in Hit + Rejection of about 6.2 points (68→74.2%) and of 3.6 points (15.4→19%) in FA, corresponding to decreases in Miss of about 9.8 points (16.6→6.8%) and in Rejection of about 3.6 points (7.6→4%) when the baseline system is evaluated on non-native students. For native speakers, a positive improvement is obtained, with a gain of about 12.7 points (86.2→98.9%) in Hit and a reduction of about 12.7 points (13.8→1.1%) in Miss.
RESULTS
The acoustic model for the systems was originally built with a low rejection rate in order to give more encouragement to non-native students. However, this approach results in a relatively large proportion of false rejections (Miss) and false acceptances (FA). Experiments with the baseline system adapted with SA, with AMA, and with the combination of AMA and SA, conducted on 500 non-native utterances, yielded fairly good correct acceptance rates (Hit = 66.9, 64 and 70.2%, respectively) for beginner-level students. These results imply that more than half of the non-native utterances are correctly accepted. Moreover, the overall accuracy, that is, the percentage of correct acceptances and correct rejections (Hit + Rejection), is slightly higher (72.2, 70 and 74.2%, respectively).
DISCUSSION
Perceptual-based evaluation by the human raters is used as a standard against the results of the system. The evaluation is based on the same test set, and the results obtained are as follows:
• Non-native: Hit = 77%, Rejection = 23% and Hit + Rejection = 100%
• Native: Hit = 100%, Rejection = 0 and Hit + Rejection = 100%
In general, the native and non-native results show that the baseline system adapted with SA techniques, the baseline system adapted with AMA, and the baseline system adapted with the combination of AMA and SA techniques are comparable with each other relative to the baseline system in terms of Hit, FA, Miss, Rejection and Hit + Rejection. This can be explained by the fact that when the baseline system adapted with AMA, which already covers pronunciation variants, is combined with SA techniques, the pronunciation mismatch between native and non-native utterances has already been masked by the model parameters. As an individual system, the baseline system adapted with the combination of AMA and MLLR slightly outperforms the other systems, as shown in bold in Table 3. Comparing the baseline system adapted with the combination of AMA and MLLR against the perceptual-based evaluation for overall performance (Hit + Rejection) yields a difference of 25.1 points (comprising FA and Miss) for non-native students and no difference for native speakers.
A slight gain in Hit and FA, with the corresponding reduction of Miss and Rejection, can be explained by the fact that in a speech recognition system FA and Miss fluctuate and overlap in between Hit and Rejection. The acceptable level of FA depends largely on the application of the system. As the system is trained on data shared between native and non-native utterances, the native utterances can be used to define the acceptance criteria, such that utterances from non-native subjects exceeding the acceptance criteria are accepted while utterances not exceeding the criteria are rejected. In practice, however, the utterances from native and non-native subjects will overlap with each other to a certain degree, which means that the choice of a given criterion results in a combination of Hit and Rejection. In this case, the perceptual-based evaluation is the gold standard for determining the validity of the system evaluation, which can be expressed as follows:
• Hit of the system = acceptance by the perceptual-based evaluation
• Rejection of the system = non-acceptance by the perceptual-based evaluation
• Miss of the system = 0 and FA of the system = 0
An accurate procedure for recovering the Miss into the Hit and the FA into the Rejection still needs to be experimentally set up and investigated in more detail.
CONCLUSION
This study presents work on the proposed acoustic model adaptation for an Indonesian-language Utterance Training System (UTS) based on non-native utterances. The study achieved two objectives: (1) to provide a list of typical mispronounced phones, together with their pronunciation variants, made by non-native subjects in general, which can be used as corrective feedback to improve UTS performance; and (2) to propose an acoustic model adaptation based on objective (1) and to use it in combination with speaker adaptation techniques. The proposed adaptation demonstrates its potential by showing a positive improvement in the correct acceptance and correct rejection rate (Hit + Rejection) when evaluated on native and non-native utterances. The performance of the proposed acoustic model adaptation depends strongly on the effectiveness of the state-clustering procedure in recovering only in-vocabulary words. In a future study, a confidence measure to discriminate between in-vocabulary and out-of-vocabulary words will be investigated. It is also found that the alignment analysis between the recognition results of the system and the perceptual-based evaluation of the human raters has the potential to provide a reliable confidence assessment for both native and non-native utterances. | 5,119.8 | 2010-10-23T00:00:00.000 | [
"Computer Science"
] |
“Deficits Don’t Matter”: Abundance, Indebtedness and American Culture
If neither deficits nor defaults really matter anymore, what sign of our times is it? What has changed since the days when Franklin Delano Roosevelt risked the fragile economic recovery from the Great Depression by returning, in 1937, to the standard of his economic orthodoxy, a belief in fiscal rectitude and an aversion to debts and deficits? If that was a sign of a certain American character, what has happened to it? A massive shift in public culture must have occurred, affecting people's views on public probity and political rectitude. The following is an attempt to trace some of the main shifts on the way to our present quandary.
In a conversation between vice-president Dick Cheney and Secretary of the Treasury Paul O'Neill, Cheney is quoted as saying: "Reagan proved deficits don't matter." It is one telling quotation among many that show the prevailing hubris in government circles at the time, a belief of having cast off the shackles of economic reality. The hubris is still there, unabated. In a review of two books tracing the lines of development of this hubris back to the days of Nixon and Reagan, Robert G. Kaiser reminds his readers of the 2013 House of Representatives vote to raise the national debt ceiling. Failing to do so would effectively force the United States to default on its obligations to creditors. The ceiling was duly raised, but 144 Republican members said no. A number among them expressed confidence that default wouldn't really matter (my italics). Kaiser goes on to say: "…that 144 members of the House were willing to cast a vote to default on the full faith and credit of the United States is a sign of our times." If neither deficits nor defaults really matter anymore, what sign of our times is it? What has changed from the days that Franklin Delano Roosevelt risked the fragile economic recovery from the Great Depression by returning, in 1937, to the standard of his economic orthodoxy, a belief in fiscal rectitude and an aversion to debts and deficits? If that was a sign of a certain American character, what has happened to it? A massive shift in public culture must have occurred, affecting people's views on public probity and political rectitude. The following is an attempt to trace some of the main shifts on the way to our present quandary. 1
Debts in Abundance
In the early days of what its guiding lights and eager followers called the American Studies Movement, in the United States in the 1930s and '40s, and spreading abroad into the early Cold War years under U.S. cultural diplomacy auspices, the quest was on for establishing and defining what was variously called the American identity, the American character, the American mind, or even the American Self. Agreement was never reached, which only added to the appeal of the quest. Literary studies, the study of history, and the newly reputable social sciences were all yoked together in the hot pursuit of this elusive, if not chimeric, target. For good measure, rival stories of origin were thrown into the mix. Puritan origins were a strong contender, from Perry Miller's Errand into the Wilderness to Sacvan Bercovitch's The Puritan Origins of the American Self. But so were stories of America's given natural resources, of America as cornucopia, as in David Potter's People of Plenty, or stories of America as an ideological blank sheet, open to be inscribed with European liberalism, to the exclusion of its European rivals, as in Louis Hartz's The Liberal Tradition in America. Others were working parallel veins, such as Richard Hofstadter and Daniel Boorstin. They were all in their own way varying on the theme of American exceptionalism, exploring themes of "divine election" or "chosenness," or of manifest destiny and the fore-ordained westward course of empire, or of geographical determinism, following in the footsteps of Frederick Jackson Turner's frontier thesis. As for the historians among these 1950s' writers, reference is sometimes made to them as constituting a "school," the school of the consensus historians. The word is felicitous, highlighting as it does a crucial pre-condition for the existence of something like an American Mind, or an American Self. It takes a shared history of like-mindedness, a national (if not notional) consensus, for there to be such a thing as one national identity, or one national character.
Yet, given this fevered quest for one shared national character, recognizably there in each of its individual carriers, it is nothing but utterly ironic that much of the intellectual debate in the United States in the 1950s was set by a book that was out to explore "the changing American character." I am referring of course to The Lonely Crowd, a book commonly linked to the name of Harvard sociologist David Riesman, but really the result of team work. 2 Rather than bringing historical data together to buttress the case for one national identity, Riesman and his co-authors suggested that historically America may have known two or three modal characters, following each other in time, and each with its typical modes of behavior, cultural tastes and appetites, and individual character structure. I may remind you here of the two main character structures that Riesman recognized. Historically, he sees "inner-directed man" give way to "other-directed man." Inner-directed man is the self-reliant and self-sufficient character, redolent of the Puritan individual guided by an inner sense of righteousness and direction, an inner compass, as Riesman metaphorically called it. Other-directed man is the successor personality type, entering the stage in the wake of radical social transformations.
In a new era of greater social interdependence and much more rapid social and cultural change, parents are no longer able to equip their children for life with their own inner compass. They now need to be trained to become social animals, taking their cues on a daily basis from their peers, adapting their behavior and tastes accordingly, adopting the hue and color of the settings they find themselves in. They now orient themselves by using, not an inner compass, but what Riesman calls their inner radar. Other classic texts from the 1950s further fleshed out this type, such as William H. Whyte's The Organization Man, highlighting the structural setting in which increasing numbers of people spent their working lives, the bureaucratic setting of the large-scale business corporation or government organization.
Riesman, as I shall argue, is only one among many authors who set out to recognize tidal changes in dominant character types in American history as they relate to underlying social and economic changes. As people's characteristic patterns of dependence (financial dependence through indebtedness critically among them) change, and as they lose such measures of autonomy as they might have grown used to seeing as rightfully theirs, character structures and larger patterns of culture are assumed to reflect these changes and to turn into their symbolic representations.
Of course, as the world became increasingly bureaucratic in its patterns of organizing society and as people became enmeshed in large-scale structures controlling their lives, they no longer have the option, as Polonius, in Shakespeare's Hamlet (Act I, Scene III), puts it to his son Laertes: "This above all, to thine own self be true." They have to play by the ever-changing rules of social games that Riesman, for one, took sardonic pleasure in analyzing. But my point is, Riesman was not the only one to do this. He stands in a long line of social critics who read the signs of the times in the changing behavior patterns of their contemporaries. I will take you on a tour d'horizon of such critical writing. Can they teach us anything about the ways in which patterns of dependence, financial dependence included, have been reflected in the modes and tones of larger cultural eras? This will then lead to my ultimate question concerning the current state of affairs in America. What possible cultural reflections can we see of a current situation where all of America, at every level, internationally as a sovereign state, nationally as a government, and down from there to the level of individual businesses and families, is in deficit, on a scale of indebtedness unprecedented in its national history? Are there any clear signs of cultural characters emerging to reflect this state of affairs?
Changes in Cultural Character
It would be tempting to see Riesman's The Lonely Crowd as a nodal point, a conceptual hub, where several lines of intellectual gestation came together before it would inspire later portraits of American culture in broadly the same vein. Undoubtedly later work, like Christopher Lasch's 1979 study of The Culture of Narcissism: American Life in an Age of Diminishing Expectations, can be seen to echo some of Riesman's central characters, yet between the 1950s and the 1970s dramatic shifts had occurred in America's structural setting. A postwar era of explosive growth and all its unsettling impact on the population's rising expectations had, by the early 1970s, turned into its opposite, of economic stagflation and diminishing expectations of individual life chances.
As Lasch puts it, every age develops its own peculiar forms of pathology, which express in exaggerated form its underlying character structure. The pathology that Lasch chose to use as the metaphor for the prevailing character structure of the "Me-decade" is narcissism. It is an age that had seen the eclipse of individual achievement and of the satisfactions of its pursuit. "Today men seek the kind of approval that applauds not their actions but their personal attributes. They wish to be not so much esteemed as admired." (p. 59) Lasch stands in a long line of critics of mass society. He located the pivot of modern psychic development in the rise of mass production, with its concomitant deskilling of workers, destruction of economic independence, change in relations of authority from personal to abstract, and the professionalization of education, management, mental health, social welfare, and the like. The result of those epochal changes was a drastic change in the socialization of children. Individuation, the process of the formation of individual selves, largely consists of the gradual reduction in scale of infantile fantasies of omnipotence and helplessness, accompanied by the child's modest but growing sense of mastery, continually measured against its human and material surroundings. Formerly, the presence of potent but fallible individuals, economically self-sufficient, with final legal and moral authority over their children's upbringing, provided one kind of template for the growing child's psychic development. As fathers (and increasingly mothers) become employees, with the family's economic survival dependent on remote, abstract corporate authorities, and as caretaking parents were increasingly supervised or replaced by educational, medical, and social-welfare bureaucracies, the template changed. The child now has no human-size authority figures in the immediate environment against which to measure itself and so reduce its fantasies to human scale. As a result, it continues to alternate between fantasies of omnipotence and helplessness. This makes acceptance of limits, finitude, and death more difficult, which in turn makes commitment and perseverance of any kind (civic, artistic, sexual, parental) more difficult.
The result is narcissism, which Lasch, in the opening pages of Culture of Narcissism, described thus: Having surrendered most of his technical skills to the corporation, [the contemporary American] can no longer provide for his material needs. As the family loses not only its productive functions but many of its reproductive functions as well, men and women no longer manage even to raise their children without the help of certified experts. The atrophy of older traditions of self-help has eroded everyday competence, in one area after another, and has made the individual dependent on the state, the corporation, and other bureaucracies. Narcissism represents the psychological dimension of this dependence. Notwithstanding his occasional illusions of omnipotence, the narcissist depends on others to validate his self-esteem. He cannot live without an admiring audience. His apparent freedom from family ties and institutional constraints does not free him to stand alone or to glory in his individuality. On the contrary, it contributes to his insecurity, which he can overcome only by seeing his "grandiose self" reflected in the attentions of others, or by attaching himself to those who radiate celebrity, power, and charisma. For the narcissist, the world is a mirror, whereas the rugged individualist saw it as an empty wilderness to be shaped to his own design. Narcissism refers to a weak, ungrounded, defensive, insecure, manipulative self, what Lasch's next book, eponymously titled, labeled "the minimal self." Yet readers may be forgiven if they recognize in Lasch's narcissistic personality the traits of Riesman's other-directed man. Lasch vehemently denies the similarity, the family likeness. As he argues, "Americans have not really become more sociable and cooperative, as the theorists of other-direction and conformity would like us to believe, they have merely become more adept at exploiting the conventions of interpersonal relations for their own benefit." (p. 66) This could only be argued by someone totally missing out on the sardonic pleasure Riesman takes in analyzing precisely the one-upmanship involved in the interactions of other-directed persons, with their eye on the main chance to upstage others. Riesman's other-directed man is more than just the incarnation of Dale Carnegie's smooth social operator, the central character of his immensely successful 1936 "How to…" book, held up as a model for all to follow on their way to success, "winning friends and influencing people." Carnegie unfailingly caught a cultural shift underway ever since the 1920s, a demotion of certain long-respected virtues, where character gave way to personality, self-control to self-fulfillment, industry and thrift to skill at handling people. Carnegie's engineering of the self constructed a model of modern individualism composed entirely of serial images, disjointed, lacking any logic of inner cohesion, with no sturdy commitments or beliefs, no firm moral standards, no authentic and rooted core of self (words that might have been Lasch's, but are not). 3 In Carnegie's view, it consisted only of a pliable personality eager to please others and advance socially and economically.
All this we may recognize in Riesman's type of the other-directed man, or for that matter (think of "no authentic and rooted core of self") in Lasch's narcissist. But there is so much more that feeds into Riesman's perspective, and into his tongue-in-cheek, picaresque pantheon of tricksters and confidence men. After all, who can forget the unforgettable personae that Riesman conjured up, like the inside-dopester (a word it took me years to probe in its depths of American colloquial resonance)? There are echoes here of the Chicago School in sociology, and central figures like George Herbert Mead and Herbert George Blumer and their ideas on symbolic interactionism, echoes also of seminal insights into the social construction of the self, as a process of ongoing social negotiations and interactions like so many feedback loops informing people's trajectory towards self-definition. One is also reminded of Erving Goffman, another Chicago School name and author of the classic The Presentation of Self in Everyday Life. They are all examples of a special intellectual sensibility and an alertness to concepts like personality and culture seen as essentially open and in flux. Goffman in particular had an ear and an eye for the trickster element in all this, for the histrionics and theatricality in people's social strategies.
Yet another resonance that we may pick up reading Riesman is the unmistakable voice of Thorstein Veblen, odd man out in the history of American sociology and economics, yet a one-man fount of insight, critique and sardonic wit. He wrote at a time, in the late 19th and early 20th century, of rapid transformation across a wide swathe of life in America. Relative latecomer to industrialization and urbanization that America was, much like Germany in Europe in the fevered catch-up of its so-called "Gründerjahre" (the years of industrial take-off), students of society in both countries invented new concepts for analytically capturing the advent of modernization. These were the years that Alan Trachtenberg would call the age of incorporation, the years in which a business paradigm of large-scale rational organization began to dictate most people's workaday lives. Not only had the systems of production dramatically increased in scale, so had the attending systems of control and governance. Increasing numbers of people had become enmeshed in a web of bureaucracy, putting them at an ever growing remove from the actual line of production. A parallel world arose, of staff workers alongside line workers, a world of growing abstractness, losing point and purpose for those involved. This new world was explored and analysed in Germany by leading early sociologists like Max Weber and Ferdinand Tönnies. Weber came up with the metaphor of the "iron cage" to capture the social experience of life in a bureaucratic setting. Tönnies introduced the pair of opposed concepts of Gemeinschaft versus Gesellschaft, words that in their English translation lose the evocative force they have in German. Early American sociology came up with a felicitous parallel, though, opposing primary to secondary social relations.
In this view the rich affective resonance of primary groups, like the family, neighborhood and local community, stood opposed to the cold and formal qualities of secondary relations, connecting people merely through formalized social roles. The latter evoke the world of the office window, bank tellers, secretaries and desk workers, a world that was increasingly liquid, losing form and meaning for the self-definition of all those involved, eating away at the many-stranded bonds of civil society, eroding its social capital. In this "Great Transformation," as Karl Polanyi memorably called it, a self-regulating market was to emerge, turning human beings and the natural environment into commodities. 4 Yet, as many observers at the time noted, human beings did not take this lying down. New social stages for public self-definition evolved which allowed people to explore early forms of a consumption culture with a view to setting themselves apart from others and distinguishing themselves in the public eye. This is the stage that Veblen exposed in his first published book, The Theory of the Leisure Class. In it he lets his eyes roam across the wide array of strategies of social distinction through the ostentation of spending behavior. His sardonic wit coined phrases for the description of this behavior that survive until the present day, words such as "conspicuous consumption," "invidious distinction," or "marginal differentiation." 5 The latter term in particular survived through Freud's reflections on the narcissism of minor differences for the exalted display of individuality. In current post-modern analyses, the strategic point in using this form of narcissism is to achieve a superficial sense of one's own uniqueness, an ersatz sense of individual distinctness which is only a mask for an underlying uniformity and sameness. If Veblen is to rank as a social and cultural critic, here is the reason why: he exposed the underlying vacuity of an era whose cultural parameters were set by the robber baron and the alienated office worker. If there is a dialectic at work here, it is that between the alienated many and the extortionist few who manage to get something for nothing. It is the group who, not unlike Karl Marx's expropriating capitalists, have kept their eyes on the main chance and the main prize. With characteristic sarcasm Veblen calls them the impropriators, reviving an old word from the world of canon law to highlight the impropriety of expropriation. "So there has been incorporated in American commonsense and has grown into American practice the presumption that all the natural resources of the country must of right be held in private ownership, by those persons who have been lucky enough or shrewd enough to take them over according to the rules in such cases made and provided, or by those who have acquired title from these original impropriators." 6 As one further interpretative revisit of the era reminds us, the telling metaphor for the period may be its fashionable middle-class affliction which went by the name of neurasthenia, best described as the physical symptoms of French poet Paul Verlaine's "langueur monotone." Neurasthenia, as author T.J. Jackson Lears suggests in his No Place of Grace, 7 was the medicalized expression, if not representation, of a more general feeling that, in view of modern life having grown dry and passionless, one must somehow try to regenerate a lost intensity of feeling. But not only that.
As Jackson Lears points out: "Late Victorians felt hemmed in by busyness, clutter, propriety; they were beset by religious anxieties, and by debilitating worries about financial insecurity." There was a financial dimension to the way Americans responded to the transformation of their collective life in the late 19th century. It is what drove the new games played with the commodities produced by America's industrial machine, transforming them into signs and symbols of material success in a social arena shot through with status anxieties and feelings of economic insecurity. Whether or not individual Americans came out on top, they were all equally drawn into a new social game that before long would form an integral part of America's nascent culture of consumption.
That cultural transformation came with its own key word, abundance. At long last the American Dream could appear to have come into its own, unlocking a veritable cornucopia, fulfilling what had in fact been age-old European fairytale dreams of a land of plenty, "un pays de Cocaigne" (which today does not sound right if translated back as a land of cocaine), or for that matter a Marxian dream of a realm of scarcity being replaced by a realm of affluence. Entering the 1920s, America seemed to have led the way into this realm, even in the eyes of assorted European socialists, syndicalists and communists.
Jackson Lears made abundance the topic of a separate study, published as Fables of Abundance: A Cultural History of Advertising in America. 8 Similarly, one of the seminal authors in this field, father of contemporary cultural history and cultural studies as we know them today, Warren Susman, suggested as the backdrop for his explorations of 20th-century cultural trends in America the single word abundance. 9 "…struggling to articulate for myself and my students some definition of what our culture is like and how it got this way, I find that I was developing almost unconsciously a way of understanding American culture: I was coming to see America through the notion of 'the culture of abundance.'" (p. xx) As he came to see it, one of the fundamental conflicts of twentieth-century America is between two cultures: an older culture, often loosely labeled the Puritan-republican, producer-capitalist culture, and a newly emerging culture of abundance. As those familiar with his work will remember, Susman really made his mark developing approaches to the problem of capturing signs of this cultural transformation taking place. Working in a pre-digital age, he truly morphed into a one-man algorithm, pioneering work that would later be known as data-mining, producing word clouds as if he were a cutting-edge digital historian. Word clouds? Yes, word clouds. With characteristic inquisitiveness and sensitivity to the uses of language he struck upon submerged shifts in the frequency with which words were used, unearthing words that were becoming the shibboleths of their age. Words came in packages, cohering through their contextual uses; some were on the way out, falling into disuse, others pushed forward. And Susman presented them as word clouds. Here is Susman at work: "Initial investigations to answer such questions yielded suggestions of significant transformation. Key words began to show themselves: plenty, play, leisure, recreation, self-fulfillment, dreams, pleasure, immediate gratification, personality, public relations, publicity, celebrity. Everywhere there was a new emphasis on buying, spending, and consuming." (p. xxiv) In a brilliant chapter he shows how the older culture, Puritan-republican, producer-capitalist, demanded something it called "character," which stressed moral qualities, deeply ingrained, whereas the newer culture insisted on "personality," which emphasized being liked and admired. It is not hard to see these two key words as foreshadowing Riesman's later social types of the inner-directed man and the other-directed man, only taken forward in time to the turn of the 19th century.
Susman and Jackson Lears both mention advertising as a critical new use of new technologies of mass communication for the new world of abundance and mass consumption to function smoothly. Susman even mentions one of advertising's central functions lying in its actively creating wants, inducing consumer demand for novel products entering the market. Advertising in that sense plays a critical role in balancing supply and demand, in channeling production to meet consumption. And in fact, one of the standard accounts of the causes of the Great Depression is precisely in terms of over-production, of a failure of market mechanisms. But there is such a thing as under-consumption, of lagging demand due to stagnant purchasing power among the mass of consumers. And the remarkable thing, going over Susman's word clouds as they hang about the capitalized word ABUNDANCE, is the total absence of words connected to debt, insolvency and poverty.
There is one student of the American Dream of Abundance who does have his eye out for this different set of words, words which, if they form a cloud at all, form a storm cloud. Roland Marchand, in his Advertising the American Dream: Making Way for Modernity, 1920-1940, in fact makes this central point that the advent of consumer culture brought with it a radical break with older virtues such as frugality, financial prudence and a general aversion to debt. All this went overboard in the 1920s. A general buy-now, pay-later attitude was advertised in its own right as the thoroughly modern way to go. As Lizabeth Cohen reminds us, all expenditure for private consumption came to be seen in the later 1930s and '40s as good citizenship, keeping the national economy going and growing. 10 But much of the spending critically hinged on financing mechanisms, through installment plans, charge cards, and other forms of deficit financing, letting individual consumers blithely run up private debts. Yet never did the debts collectively amassed in this 1920s' trial run of consumerism reach the heights they would a half century later. Nor did they set a tone of cultural life or produce a new social type as they may have much later. Christopher Lasch may have been on to something when he set out to explore a novel social character structure in his Age of Narcissism, or for that matter in his Haven in a Heartless World, against the background of what he termed an "age of diminishing expectations."
America's Cultural Character at the End of Empire
Given the immense debt overhang at every aggregate level of American society, how is this situation reflected in the writing of social commentators, historians and cultural critics? What forms of representation, what symbolic reflections, can we recognize? What sort of Colossus is America today, sole remaining superpower, a hegemon by any measure, yet deeply indebted to the main rival to its power, China? Are we witnessing the signs of the end of empire, of its unstoppable decline?
In one analysis of America's status as an empire, Charles Maier makes the following interesting distinction. Asking himself the question of whether America can rank as an empire among empires, and if so on what grounds compared to earlier historical cases, he distinguishes two historical stages in the American case: America as an empire of production followed by America as an empire of consumption. By the latter term Maier does not, as one might briefly expect, refer to America's era of consumerism and the cultural forms attending it. What he evokes is not America as an empire with the full panoply of the soft power of its culture of consumerism. No, he wishes to bring out the stark contrast between America as the marvel of productive prowess that it was in the mid-20th century and the America that can no longer produce all it wishes to consume. So from being a net exporter of goods it produced, it turned into a net importer, with its trade balance duly reflecting this shift. From being a creditor nation it had turned into a debtor nation, losing independence and freedom of action in the process. Now if there are signs of empire declining to be read in these secular trends, of an empire depending not only on borrowed money, but on borrowed time, are they beginning to dawn on the broader American population? And if so, what effect do they have on America's overall state of mind?
It doesn't take the documentary eye of a Michael Moore to conjure up a visual America replete with the signs of decay, decadence and defeat. Forgotten veterans of America's faraway wars, far away geographically, but more dramatically far away from the public consciousness, repressed and pushed out of the public sphere, bring to mind George Grosz's depictions of World War I veterans limping through Berlin streets. There is a seething anger among Americans, aimed at the impotence of presidents, of politics, aimed at the one percent of the obscenely rich, an anger thrashing about wildly, yet unable to find meaningful expression, other than in a politics of resentment, Tea Party politics, gun-toting and empty patriotic gestures. It is the anger of the self-styled militia, vindictive and utterly nihilistic. If there is a changing American character to be recognized here, we need a Richard Hofstadter to do it for us. After all, he has done it before, magisterially describing for us the paranoid style in American politics. 11 And yet, paranoia as a metaphor seems to cover only part of what I wish to capture. Paranoia does not stretch any farther than the lunatic fringe whose conspiratorial fantasies see the federal government in Washington, DC, as one big plot against the freedoms of individual Americans. As a metaphor it does not begin to account for statistics showing that the proportion of Americans who still trust a government institution like Congress is a meager 7%, or that the proportion of Americans who expect their children to be worse off than they are is a staggering two thirds. These are signs of collective disaffection in the face of a dysfunctional political system and of a collective sense of loss of control and direction. Nor is it only a matter of politics and a lack of citizen empowerment. It doesn't take the conspiratorial view of Hofstadter's paranoid style to see the economic system as producing ever-growing income and property gaps. You don't have to be a Riesman-like inside-dopester to take seriously a view of the world of finance as driven by self-interest, geared against standards of decency and public service, a view presented in an award-winning, muck-raking documentary film like Inside Job. 12 It is a world of sharks, sharpers, and conmen, where banks are betting against their own customers, and where suckers are born every minute. If all this has led to a massive breakdown of social trust, it is not so much a sign of paranoia as it is of rational people who duly feel duped.
There are a number of best-selling books that have all tried to diagnose this mounting distrust, this erosion of America's social capital or of its habits of the heart, all noticing secular trends away from golden ages of civic enthusiasm and levels of engaged public debate and of trust worthy of a republic. 13 They all notice a secular slippage away from Tocquevillean standards of a multi-stranded associative life, of an erosion of civil religion and civic participation, of a loss of social capital. They all see the downward slope of democratic vigor, yet tend to miss the aspect of a rational assessment of reality behind it. Rather than people bowling alone because they no longer join social clubs, people have chosen to withdraw from politics, have withdrawn their trust from economic institutions, and no longer believe what they are told by talking heads on their TVs. They have done this because they have knowledge of Wall Street inside jobs and related fraud, not because they have let themselves be passively "framed" by the relentless distortion of public debate that now passes for TV journalism. Outside the dysfunctional media landscape, where enlightened public debate has been bought out by private capital and the nihilistic ideology of corporate interests, many are now exploring ways to restore "social capital," finding ways of discussing a political agenda that will no longer get a fair hearing in the traditional halls of the republic. If the American character is morphing once again, it is not in the direction of people bowling alone, but toward the "agora," the online marketplace of ideas and organized action, of life in cyberspace. It may not be the only crowd roaming America's public space, but a lonely crowd it certainly isn't.
This ironically takes us back to the theme of "primary groups" as the mainstay of Tocqueville's civil society. Ever since Polanyi's "Great Transformation," or Trachtenberg's "incorporation" of America, there has been an ongoing quest for signs of primary groups surviving and kicking. If the advent of modernity meant the demise of communitarian settings and primary relationships, students of society kept spotting primary groups in the most unlikely settings. In urban life, where the early Chicago School had explored "urbanism as a way of life," and celebrated its modernity, individualism, and cosmopolitanism, integrated community structures were found to have survived, even thrived, as Herbert Gans showed in his Urban Villagers. If the advent of new media, such as radio, spawned big national broadcasting corporations, this need not have been the only, pre-ordained outcome. As Lizabeth Cohen showed in her Making a New Deal, working-class communities in a metropolis like Chicago for a brief period managed to harness the medium to give voice to the local community rather than the impersonal corporatism that characterizes the current media landscape. If, in the world of industrial work, Taylorism and the rationalization of production meant the reduction of individual workers to mere cogs in a machine, early industrial relations research in, e.g., Elton Mayo's classic Hawthorne studies pointed up the power of informal groups on the work floor to bend the rigidity of imposed production norms. If in politics the individual voter was seen as increasingly alienated and atomized, studies at the local level once again showed the role played by informal groups, inspiring an interpretive paradigm, popular in the 1950s, known as pluralist elitism. Robert Dahl's Who Governs? is the classic reference here, although David Riesman's Lonely Crowd memorably contributed to the new paradigm with its view of what he called "veto groups," informal groups strong enough to block political decisions they do not like, yet insufficiently strong to have things their own way. It is basically a return to classic Tocquevillean intimations about American politics as the interplay of a multiplicity of groups. 14 Yet, undeniably, all these examples can be seen as so many exercises in nostalgia, as studies of lost causes. If processes of incorporation, under the auspices of an impersonal neo-liberalism, have now gone global, can we possibly conceive of a response along "primary group" lines to get us out of the "iron cage" of globalization? For an answer we might look at the ways in which an international commonwealth, literally a republic, a res publica, organizes itself around issues of human rights, the environment, and economic inequality, through the network possibilities of the World Wide Web. In areas like these, on a global scale, the social capital is being formed of a civil society that is truly trans-national.
| 8,218 | 2015-03-03T00:00:00.000 | [ "Economics" ] |
In silico analyses of Wnt1 nsSNPs reveal structurally destabilizing variants, altered interactions with Frizzled receptors and its deregulation in tumorigenesis
Wnt1 was the first mammalian Wnt gene to be discovered; it was identified as a proto-oncogene, and in humans the gene is located on chromosome 12q13. Mutations in Wnt1 are reported to be associated with various cancers and other human diseases. The structural and functional consequences of most of the non-synonymous SNPs (nsSNPs) present in the human Wnt1 gene are not known. In the present work, extensive bioinformatics analyses were used to screen 292 nsSNPs of Wnt1 to distinguish pathogenic from harmless polymorphisms. We have identified 10 highly deleterious nsSNPs, of which 7 are located within highly conserved regions. These 10 nsSNPs are also predicted to affect the post-translational modifications of Wnt1. Further, structure-based stability analyses of these 10 highly deleterious nsSNPs revealed 8 variants as highly destabilizing. These 8 highly destabilizing variants also showed high BC scores and high RMSIP scores in normal mode analyses. Based on the deformation energies obtained from the normal mode analyses, the variants G169A, G169S, G331R and G331S were found to be unstable. Molecular dynamics (MD) simulations revealed the structural stability and fluctuation of WT Wnt1 and its prioritized variants: the RMSD remained fluctuating mostly between 4 and 5 Å and occasionally between 3.5 and 5.5 Å, and the RMSF in the CTD region (residues 330-360) of the binding pocket was lower in the variants than in WT. Studying the impact of nsSNPs on the binding interfaces between Wnt1 and seven Frizzled receptors identified substitutions that can stabilize or destabilize the binding interface. We have found that Wnt1-FZD8-CRD is the best docked complex in our study. MD simulation based analyses of the wild type Wnt1-FZD8-CRD complex and the 8 prioritized variants revealed that RMSF was higher in the unstructured regions and RMSD remained fluctuating in the region of 5 Å ± 1 Å. We have also observed differential Wnt1 gene expression patterns across normal, tumor and metastatic conditions in different tissues. Wnt1 gene expression was significantly higher in metastatic tissues of lung, colon and skin, and was significantly lower in metastatic tissues of breast, esophagus and kidney. We have also found that Wnt1 deregulation is associated with survival outcome in patients with gastric and breast cancer. Furthermore, these computationally screened, highly deleterious nsSNPs of Wnt1 can be analyzed in population-based genetic studies and may help understand Wnt1-associated diseases.
Estimation of structural deviation in Wnt1 variants. The most deleterious and destabilized variants
of Wnt1 were modelled using I-TASSER. The best model was selected on the basis of C-score. Typically, the C-score value is in the range of − 5 to 2, where a higher C-score signifies a model with higher confidence and vice versa 34 . All the protein structures were then visualized and analyzed in Chimera 1.11 51 .
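The model-selection step described here amounts to ranking the candidate I-TASSER models by C-score. The short sketch below illustrates that ranking; the file names and scores are hypothetical placeholders, not actual I-TASSER output.

```python
# Illustrative only: the model names and C-scores below are hypothetical placeholders.
candidate_models = {
    "model1.pdb": -1.8,
    "model2.pdb": 0.4,   # C-score typically ranges from about -5 to 2; higher = more confident
    "model3.pdb": -3.2,
}

best_model, best_cscore = max(candidate_models.items(), key=lambda kv: kv[1])
print(f"Selected {best_model} (C-score = {best_cscore})")
```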
Impact of nsSNPs on the stability of Wnt1-FZD-CRD binding complexes. The effects of the 8 most destabilizing and deleterious nsSNPs, together with the other nsSNPs present on the binding interfaces of the Wnt1-FZD-CRD complexes, were analyzed with mCSM. mCSM predicts the impact of mutations on the binding affinity of protein-protein complexes in both regression and classification tasks (i.e. prediction of the numerical change or its direction) 44 .
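mCSM reports a predicted change in binding affinity (ΔΔG) for each substitution. A minimal sketch of the sign-based reading of such output is given below; the variants and values shown are hypothetical, not the study's actual predictions.

```python
# Hypothetical mCSM-style output: predicted change in binding affinity (kcal/mol).
predicted_ddg = {"A129T": 0.35, "A253S": 0.22, "G331R": -1.75}

for variant, ddg in predicted_ddg.items():
    effect = "increased affinity (stabilizing)" if ddg > 0 else "decreased affinity (destabilizing)"
    print(f"{variant}: dG = {ddg:+.2f} kcal/mol -> {effect}")
```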
Molecular dynamics (MD) simulation of WT and mutated Wnt1 in apo and Wnt1-FZD-CRD complexed conditions. To evaluate the stability of the modelled structures, MD simulations were performed; the list of simulated systems is provided in Supplementary Table S1. Docking with HADDOCK 2.4 and ClusPro 2.0 gave the best scores with FZD-8, and the resulting structures were almost identical ( Supplementary Fig. S1). The Wnt1-FZD-8 complex obtained from HADDOCK 2.4 was chosen for simulation. Each system was solvated using the TIP3P model 57 , ensuring a water layer of at least 8 Å in a cubic solvation box, with 0.15 M KCl after appropriate neutralization of the overall charge. Following a brief energy minimization, all-atom molecular dynamics simulations were run on each system using the CHARMM36 force field 58 implemented through NAMD 2.12 59 , under periodic boundary conditions. PME 60 was used to compute long-range electrostatic interactions, whereas short-range non-bonded interactions were truncated at 14 Å with a switching function. Langevin dynamics and the Langevin piston method 61 were used to maintain NPT conditions at 300 K and 1 atm pressure. The time integration step was set to 2 fs after freezing the vibrations of bonds involving hydrogen using the SHAKE algorithm 62 . Heating and equilibration were followed by a 20 ns production MD run for each system. The trajectories were clustered in Chimera 1.11 51 with default settings. The central frame of the top cluster was selected from each trajectory to model the mutants, i.e. separate models for the A129T, G169A, G169D, G169S, A253S, G312A, G331R and G331S systems, using the CHARMM-GUI webserver 63 . Each mutant was simulated by MD for 5 ns using the same protocol. In total, 120 ns of simulation were run.
Statistical analyses. We have used multiple web-servers in the present study, and each web-server implements its own statistical methods. The statistical significance of the differential expression of Wnt1 between normal and tumor samples across 21 different tissues was assessed with the Mann-Whitney p-value, whereas for the 12 tissues with normal, tumor and metastatic samples, the Dunn's multiple comparisons test p-value was used. For survival analyses, the p-value of the Kaplan-Meier (log-rank) test was used to measure statistical significance.
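As an illustration of the normal-versus-tumor comparison named above, the snippet below runs a Mann-Whitney U test with SciPy on simulated expression values; the numbers are random placeholders, not data from TNM-plot.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical expression values (arbitrary units) for one tissue type.
normal_expr = rng.normal(loc=5.0, scale=1.0, size=40)
tumor_expr = rng.normal(loc=5.6, scale=1.2, size=40)

u_stat, p_value = stats.mannwhitneyu(normal_expr, tumor_expr, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```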
Secondary structure, evolutionary conservation profile and protein-protein interactions of Wnt1 in relation to its nsSNP distribution. Physico-chemical analysis of Wnt1 by ProtParam revealed a theoretical pI of 9.28, an aliphatic index of 71.46, a theoretical extinction coefficient (at 280 nm) of 62,335 M −1 cm −1 and a GRAVY score of − 0.347. The average flexibility of Wnt1 was predicted from its sequence using ExPASy, which indicated several highly flexible regions. GlobPlot analysis (Supplementary Fig. S2) showed that the protein has six disordered regions (1-5, 9-39, 142-156, 192-217, 317-333 and 339-368) and two potential globular domains (40-141 and 218-316). A total of 106 nsSNP sites were present in the predicted globular domains and 65 nsSNP sites in the disordered regions of Wnt1. It was evident from the GlobPlot analyses that the 10 highly deleterious variants had no major impact on the disordered regions and globular domains of Wnt1. Wnt1 has a signal sequence in the N-terminal region spanning residues 1 to 27, and 23 nsSNPs are located in this region. The secondary structural details of Wnt1 showed that random coil (42%) contributes the major portion of the protein, followed by alpha helix (35%), extended strand (16%) and beta turn (7%) regions. The secondary structures of Wnt1 and its 10 highly deleterious variants are shown in Supplementary Fig. S3. Of the 10 highly deleterious nsSNPs, only the A129T variant is located in an alpha-helix region, 7 variants (G169A, G169D, G169S, A253S, G312A, G331R and G331S) lie in loop regions, while the remaining 2 variants (C218Y and W351G) are located within extended strands in beta-ladder regions. It was also evident from Supplementary Fig. S3 that these 10 highly damaging nsSNPs had no impact on the secondary structure of Wnt1. The multiple sequence alignment of Wnt1 and its 10 highly deleterious variants is shown in Supplementary Fig. S4. The sequence-based evolutionary conservation of human Wnt1 was assessed from the phylogenetic relations between homologous sequences using ConSurf. The rate of evolution at each residue was calculated by ConSurf using the empirical Bayesian method with the best-fit substitution model for proteins. In ConSurf, the multiple sequence alignment was built using MAFFT, and the homologues were collected from the UNIREF90 database with the HMMER homology search algorithm (E-value: 0.0001). The ConSurf analyses identified the conserved residues in the human Wnt1 protein and predicted whether residues are exposed or buried in the protein structure (Fig. 1). The colour-based conservation score (grade 1-9) indicates the evolutionary conservation of a particular position, where 1 indicates rapidly evolving sites and 9 indicates slowly evolving (i.e. evolutionarily conserved) sites. Position-wise conservation scores of the human Wnt1 protein are given in Supplementary Table S5. Residues 92-106, 120-144, 198-210 and 217-240 of the human Wnt1 protein were found to be highly conserved, whereas the N-terminal and C-terminal regions were highly variable. The 292 variants occur within 195 residues of the Wnt1 protein, of which 61 positions are highly conserved (conservation score: 7 to 9). Of the 10 highly deleterious nsSNPs (obtained after the initial sequence-based screening), 7 nsSNPs (viz. A129T, G169S, G169D, G169A, C218Y, A253S and W351G) are located in highly conserved areas of the protein with a conservation score of 9, and the remaining 3 nsSNPs (G312A, G331S, G331R) occur at residue positions with a conservation score of 8.
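ConSurf's conservation grades come from an empirical Bayesian rate estimate, which is not reproduced here. As a rough, assumption-laden stand-in, a per-column conservation score can be computed from a multiple sequence alignment with normalized Shannon entropy, as sketched below.

```python
import math
from collections import Counter

def column_conservation(column: str) -> float:
    """Return 1.0 for a fully conserved MSA column and values near 0 for highly variable ones."""
    residues = column.replace("-", "")          # ignore gaps
    counts = Counter(residues)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return 1.0 - entropy / math.log2(20)        # normalize by the 20-letter amino-acid alphabet

# Hypothetical alignment column for one Wnt1 position (one letter per homologue).
print(round(column_conservation("GGGGGGGGGA"), 2))
```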
Protein-protein interaction prediction by STRING. Determination of protein-protein interaction
(PPI) networks is important for understanding the functional impact of a protein and its interacting partners. Wnt1 was found to have high confidence scores for interaction with the CTNNB1, WLS, DVL-1, PORCN, FZD1, FZD8, LRP5, LRP6, RYK and SFRP1 proteins ( Supplementary Fig. S5). Decoding the PPI network at the cellular level helps to further understand the mechanism of the target protein and the possible changes in interaction affinity upon amino acid alteration in Wnt1.
Predicting the impact of nsSNPs on post-translational modifications (PTM) in Wnt1.
Further studies were carried out to assess the impact of the 10 highly deleterious nsSNPs on post-translational modifications of Wnt1 using MutPred2 (Supplementary Table S6). Probability scores above the threshold of 0.5 are considered 'harmful', whereas scores greater than 0.75 signify a 'harmful' prediction with high confidence 33 . All 10 highly deleterious nsSNPs were predicted to have 'harmful' effects (scores > 0.75) with altered post-translational modifications and structural features. The substitutions G169A, G169D and G169S cause loss of a catalytic site at position W167 and gain of a disulfide linkage at residue C170. There is a gain of a new allosteric site at W167 due to G169A and G169S, whereas the variant G169D causes loss of that allosteric site. A catalytic site is also lost at H221 due to the substitution C218Y, and at residue C330 due to the variants G331S and G331R. Some PTMs are gained: residue Y258 acquires a new phosphorylation site and residue N346 gains a new N-linked glycosylation site due to the substitutions A253S and W351G, respectively. In contrast, the N-linked glycosylation site at residue N316 is lost due to the variation G312A, which also introduces a catalytic site at R313 and a disulfide linkage at residue C315. Variant G331S alters protein stability through acquisition of a new beta strand and formation of a new disulfide linkage at residue C330, while the same disulfide linkage is lost due to the substitution G331R. Variant W351G alters the stability of Wnt1 through loss of a beta strand and acquisition of intrinsic disorder.
The impact of nsSNPs on the Wnt1 structure and stability. The function of a protein and its ability to interact with ligands depend upon its tertiary structure 68 . As there was no crystal structure of Wnt1 available in the Protein Data Bank, the structure of Wnt1 was modelled by a comparative homology-based approach using I-TASSER, Phyre2, RaptorX and Swiss-Model ( Supplementary Fig. S6). The best structural model was selected based on structural quality assessment and validation of these predicted models. The Ramachandran plots of all four predicted models of Wnt1 are also given in Supplementary Fig. S6. On the basis of this comparative homology modeling and assessment, we used the Wnt1 structure modelled by I-TASSER for further studies. The sequence-based screening approach had revealed 10 nsSNPs as highly deleterious. These 10 nsSNPs of Wnt1 were then analyzed using mCSM, SDM, DUET, INPS-3D and MUpro to assess the structural stability of the protein upon amino acid substitution ( Table 2). The mCSM server predicted all 10 variants as destabilizing (negative ΔΔG); the mutant structures were modelled (Supplementary Fig. S7) and the best model for each variant was selected on the basis of its corresponding C-score value. Further analyses of these 10 mutant models along with the wild type Wnt1 in Chimera 1.11 revealed that the mutant residues had different non-covalent bonding interactions than the wild type residues. The structural alterations of these 10 Wnt1 variants were measured by their corresponding RMSD values (Table 3). The variant G169S had the highest RMSD value of 1.182 Å, while the G331R variant had the lowest RMSD value of 1.008 Å. The coarse-grained elastic network model (ENM)-based normal mode analysis (NMA) revealed some important dynamic features of the wild type and mutant Wnt1 ( Supplementary Fig. S8).
It was found that the BC values of wild type Wnt1 and its 8 high-risk variants were all very high, at 0.97. A similar picture was presented by the high Root Mean Square Inner Product (RMSIP) value of 0.96. Based on the deformation energies obtained from the normal modes, the G169A, G169S, G331R and G331S variants were found to be largely unstable. Molecular dynamics (MD) simulations allowed the structural models to relax and optimize themselves by adjusting internal interactions. The Root Mean Square Deviation (RMSD) of the Cα atoms of apo Wnt1 was computed and plotted as a function of time, taking the initial model as reference. For WT Wnt1 the RMSD increased up to about 5 ns, a regime that reflected the optimization of the conformation; it stabilized thereafter, and the RMSD then remained fluctuating mostly between 4 and 5 Å and occasionally between 3.5 and 5.5 Å (Fig. 2A). The initial rise of the RMSD up to ~ 5 Å was due to the closure of the binding pocket (Fig. 2E,F), an observation in agreement with earlier reports in the absence of ligands 52 . The MD simulations thus reproduced behaviour reported in the literature, supporting their reliability. The Root Mean Square Fluctuation (RMSF) of the Cα atoms, averaged over the MD trajectories, showed ~ 2 Å fluctuation in the folded region, while slightly higher values were noted in the unstructured loop regions (Fig. 2B). For the mutated systems, the RMSD values were limited to within 4.5 Å (Fig. 2C) and the RMSF values followed the same trend of < 2 Å for folded regions and higher values for unstructured regions (Fig. 2D). Interestingly, the RMSF of residues 330-360 (CTD region) was lower than that of WT (Fig. 2D). This was because the binding pocket of all the mutants was already closed at the starting point of the MD, as they were modelled on the last conformation obtained after the 20 ns MD run of WT; as mentioned earlier, the WT experienced the closure of the binding pocket during MD. The orientations of the side chains of the mutated residues were visually inspected using VMD 69 before and after the MD simulations; the observed deviations are summarized in Fig. 3. The interaction interfaces between wild type Wnt1 and the FZD-CRDs were then evaluated on the basis of the docking score (DS) from HDOCK, the Z-score from HADDOCK and the balanced weighted score from ClusPro (Table 4). The binding interaction patterns of wild type Wnt1 docked against the seven different frizzled receptors were obtained from PDBsum, and the details are given in Supplementary Table S7 and Supplementary Fig. S9. Among the seven Wnt1-FZD-CRD complexes, the best docking was observed between Wnt1 and FZD8-CRD, on the basis of the lowest balanced weighted score from ClusPro (− 1068.5) and the lowest Z-score from HADDOCK (− 2.7). Analyses of the interacting interfaces of the Wnt1-FZD-CRD complexes revealed that the residue positions of the 8 high-risk variants are not involved in the binding interfaces. Therefore, we studied the distal effects of these 8 high-risk Wnt1 variants on their interaction with the seven FZD-CRDs using HDOCK, HADDOCK and ClusPro; the results are also summarized in Table 4. The docking results were then evaluated for interactions between the Wnt1 variants and the FZD-CRDs, and it was observed that most of these variants interfered with Wnt1-FZD-CRD complex formation.
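The ranking logic behind choosing FZD8-CRD can be expressed compactly. In the sketch below only the FZD8-CRD scores (− 1068.5 and − 2.7) are taken from the text; the remaining rows are invented placeholders standing in for Table 4.

```python
import pandas as pd

# Lower scores are better for both the ClusPro balanced weighted score and the HADDOCK Z-score.
docking = pd.DataFrame(
    {
        "receptor": ["FZD8-CRD", "FZD1-CRD", "FZD5-CRD"],   # FZD1/FZD5 rows are hypothetical
        "cluspro_balanced": [-1068.5, -950.3, -1010.7],
        "haddock_z": [-2.7, -1.4, -2.1],
    }
)

best = docking.sort_values(["cluspro_balanced", "haddock_z"]).iloc[0]
print(f"Best-docked complex: Wnt1-{best['receptor']}")
```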
The 2D interaction plots between wild type Wnt1 and the seven FZD-CRDs ( Supplementary Fig. S9) also revealed the interaction patterns between them. Further stability-based analyses of the binding interfaces between the Wnt1 variants and the FZD-CRDs in mCSM revealed that almost all the highly deleterious variants had destabilizing effects on the respective Wnt1-FZD-CRD binding interfaces (Table 5). For this study, we used the Wnt1-FZD-CRD complexes obtained from the three docking servers, viz. HDOCK, HADDOCK and ClusPro. It was found from Table 5 that the amino acid substitutions A129T and A253S in Wnt1 increased its binding affinity for all 7 FZD-CRDs, whereas the other five substitutions were predicted to decrease the binding affinity for the FZD-CRDs. As the 8 highly destabilizing variants of Wnt1 are not present in the binding interfaces with the FZD-CRDs, we also included all interacting residues of Wnt1 at which nsSNPs have been reported (Supplementary Fig. S10), to investigate their effects on the stability of the Wnt1-FZD-CRD complexes (Supplementary Table S8). Most of the amino acid substitutions present in the interaction interface of Wnt1 showed destabilizing or highly destabilizing effects. Since Wnt1-FZD8-CRD was the best docked complex in our study, we performed MD simulation based analyses of the wild type Wnt1-FZD8-CRD complex. For complexed WT Wnt1, the RMSD increased to 5.5 Å up to ~ 7.5 ns of the MD run, and thereafter fluctuated in a band of 5 Å ± 1 Å, after which there was not much deviation (Fig. 4A). The RMSF of WT Wnt1 in complex with FZD8-CRD (Fig. 4B) revealed the residue-wise fluctuation of the protein. The RMSF was higher in the unstructured regions, as expected, especially for residues 225-245 (Fig. 4E,F). The folded region did not show much deviation from the average structural position, as indicated by its lower RMSF values. Among the mutated systems, A129T showed the highest deviation (Fig. 4C), which might be due to the reorientation of the helix comprising residues 260 to 280 (Supplementary Fig. S1); the same was reflected in the RMSF result (Fig. 4D). The G169S mutant system could be considered the most stable because its RMSD and RMSF values were the lowest among all the mutants. The orientations of the mutated residues were somewhat different before and after simulation, except for the A129T mutant system (Fig. 3).
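A typical way to obtain RMSD and RMSF profiles like those summarized in Fig. 4 from NAMD output is with MDAnalysis. The sketch below assumes hypothetical PSF/DCD file names and, for brevity, omits the trajectory-alignment step that would normally precede an RMSF calculation.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Hypothetical file names for one system (topology + production trajectory).
u = mda.Universe("wnt1_fzd8_wt.psf", "wnt1_fzd8_wt_production.dcd")

# C-alpha RMSD versus the first frame (result columns: frame, time, RMSD in Angstrom).
rmsd_run = rms.RMSD(u, u, select="name CA", ref_frame=0).run()
rmsd = rmsd_run.results.rmsd      # use .rmsd directly in older MDAnalysis releases

# Per-residue C-alpha RMSF averaged over the trajectory (alignment to an average
# structure, e.g. via MDAnalysis.analysis.align, is omitted here for brevity).
calphas = u.select_atoms("name CA")
rmsf_run = rms.RMSF(calphas).run()
rmsf = rmsf_run.results.rmsf      # one value (Angstrom) per C-alpha atom

print(len(rmsd), "frames,", len(rmsf), "residues")
```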
Clinical correlation of Wnt1 expression and deregulation with different cancer types. Microarray-based differential expression of the Wnt1 gene under tumor and normal conditions was analysed in TNM-plot 64 . Wnt1 mutations have been found in different types of cancers such as colorectal cancer, lung cancer, breast cancer and gastric cancer, as well as in adenomatous polyposis coli 71 . The Wnt1 gene expression pattern in different cancer types underlines its importance in tumor progression and cancer formation (Supplementary Fig. S11). We compared the expression of the Wnt1 gene in normal and tumor samples (Supplementary Table S9) across 21 different tissues (as available in the TNM-plot dropdown list). Wnt1 was upregulated during tumor formation in endometrium, vulva, ovary and skin, although this upregulation was not statistically significant. The Wnt1 gene expression pattern was then compared among normal, tumor and metastatic tissue samples for all twelve tissues provided in the TNM-plot dropdown list, using gene chip data. As shown in Fig. 5 and Supplementary Table S10, Wnt1 expression was significantly higher (Dunn's test p value < 0.01) in metastatic tissues than in tumor tissues of lung, colon and skin, and was significantly lower (Dunn's test p value < 0.01) in metastatic tissues than in tumor tissues of breast, esophagus and kidney. As deregulation of Wnt1 activity is associated with various cancers and other human diseases 8,9 , a meta-analysis was also carried out to assess the overall survival of breast, gastric, lung and ovarian cancer patients in relation to Wnt1 expression, using the Kaplan-Meier plotter (Fig. 6); we found a strong relation between Wnt1 deregulation and overall survival 72 . In breast cancer patients, Kaplan-Meier curves and log-rank test analyses showed that high expression of Wnt1 (HR = 0.83; P = 0.0004) was associated with a smaller number of patients at risk. This observation also correlates with the data obtained from TNM-plot, which show that Wnt1 has significantly lower expression in metastatic breast tissue. In gastric cancer (HR = 1.8; P < 0.00001) and lung cancer (HR = 1.13; P = 0.051) patients, higher expression of Wnt1 was associated with a lower survival rate (i.e. more patients at risk) 73 . From TNM-plot we also observed significantly higher expression of Wnt1 in metastatic lung tissue. In ovarian cancer patients, on the other hand, Wnt1 expression (HR = 0.9; P = 0.15) appeared to have no such effect.
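The survival comparison reported from the Kaplan-Meier plotter can be mimicked locally with the lifelines package. The sketch below uses randomly generated follow-up times and event indicators purely to show the workflow, not the actual patient data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Hypothetical follow-up times (months) and event indicators for high vs. low Wnt1 expression.
t_high, e_high = rng.exponential(40, size=100), rng.integers(0, 2, size=100)
t_low, e_low = rng.exponential(60, size=100), rng.integers(0, 2, size=100)

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="Wnt1 high")  # kmf.plot_survival_function() would draw the curve

result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p-value: {result.p_value:.4f}")
```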
Discussion
In our present study, we have addressed the structural and functional implications of a large number of nsSNPs of the human Wnt1 gene using multiple sequence- and structure-based approaches. The use of multiple software tools and algorithms increases the confidence of predictions of the effect of missense mutations on a protein 67 . Initial screening of 292 nsSNPs of Wnt1 using multiple sequence-based algorithms prioritized 10 highly deleterious nsSNPs (A129T, G169S, G169D, G169A, C218Y, A253S, G312A, G331S, G331R, W351G). As random coil is the major contributor to the Wnt1 structure, and a considerable number of deleterious nsSNPs are present within this random coil region of the protein, these nsSNPs may affect the flexibility of the Wnt1 protein 67 . Amino acid substitutions at highly conserved regions, which are often involved in important biological functions, tend to be more deleterious than nsSNPs located at less conserved sites 74,75 . ConSurf analyses revealed that 46.23% of all nsSNPs occur in conserved areas of the protein, and that the positions of all 10 highly deleterious nsSNPs are highly conserved. ConSurf predicted that the variants A129T and C218Y occur at important structural residues (highly conserved and buried), whereas G169A, G169D, G169S, G312A, G331R and G331S occur at important functional residues (highly conserved and exposed).
Being a multi-functional protein, Wnt1 interacts with numerous partners at the membrane as well as in the cytosol 76 . Protein-protein interaction (PPI) networks are important for understanding the functional impact of a protein and its interacting partners 77 . Amino acid substitutions in Wnt1 due to nsSNPs may therefore alter the binding interfaces in the PPI network, which may be of crucial importance for Wnt1 secretion, transport, and canonical and non-canonical Wnt signaling. Residues 214-234 of Wnt1 are involved in porcupine (PORCN)-dependent palmitoylation 52 , which is required for its secretion through binding of palmitoylated Wnt1 to Wntless (WLS) 78 . Therefore, nsSNPs occurring in this region of Wnt1 (such as M214I, R215G, R215L, E217K, C218G, C218Y, H221Y, G225S, S226L, C227G, C227Y, V229A and R230H) may potentially alter the interaction interface between Wnt1 and porcupine and thereby hinder its secretion. SIFT predicted all these variants as deleterious and REVEL predicted them as likely disease causing. Interestingly, the variant C218Y, which occurs within this PTM sequence region of Wnt1, is also among the 10 highly deleterious nsSNPs. Double acylation (O-acylation at position C93 and S-acylation at position S224) of Wnt1 is essential for its secretion as well as for its activity; studies showed that the Wnt1 mutants C93A and S224A were trapped in the ER, impairing their secretion 75 . The local structure around the palmitoylation site is important for recognition of Wnt1 substrates by PORCN; therefore, structural alterations in these regions due to nsSNPs might affect the palmitoylation of Wnt1 by PORCN 79 . Variants A129T and G169D are located within the N-terminal domain and form hydrogen bonds with the amino acid at position 120, thus making the structure unstable 5 . Variant A129T also changes the binding cavity around the mutant residue 9 and increases the relative solvent accessibility. Upon amino acid substitution in Wnt1, the major structural and post-translational modifications were gain of relative solvent accessibility, altered disordered interface, altered ordered interface, altered transmembrane protein, loss and gain of catalytic sites, loss and gain of disulphide linkages, gain and loss of allosteric sites, altered metal binding, gain of intrinsic disorder, gain and loss of strands, altered stability, loss of phosphorylation, and loss and gain of N-linked glycosylation.
It is well known that the function of a protein directly depends on its tertiary structure 80 . Therefore, the substitution of amino acids due to nsSNPs may impact the structural conformation of Wnt1 and thereby alter its physiological functions. The majority of disease-associated nsSNPs affect the stability of the protein 81 , and 8 out of the 10 highly deleterious variants of Wnt1 showed strong destabilizing effects according to all five prediction methods. Furthermore, the variants G169S, G169D, G169A, A253S, G312A, G331S and G331R are located within loop regions of the Wnt1 structure, which may affect the flexibility of the protein.
Further investigation of the structural deviation of the 8 high-risk destabilizing variants of Wnt1 revealed that G169S and G331S showed the largest RMSD values from the wild type, whereas G331R showed the lowest RMSD value 74 . Structural variation of Wnt1 at the local or global scale may therefore impair its interaction with partner proteins in the network 79 . As protein dynamics play an important role in molecular recognition as well as in catalytic activity, and as the mobility of a protein is an intrinsic property encoded in its primary structure, we performed an NMA study to examine whether Wnt1 and its 8 highly destabilizing variants display any unique patterns of intrinsic mobility 82 . Based on the deformation energies obtained from the normal modes, it was observed that some variants became unstable. Further insights from MD simulations of WT Wnt1 revealed the closed state of the binding pocket of Wnt1 for the CRD regions of frizzled receptors in the absence of ligands. It has been reported that Klotho-derived peptides facilitate the closed-to-open state transition of the FZD-CRD binding pocket of Wnt1 52 ; this structural transition might therefore be necessary for the Wnt1-FZD-CRD interaction. It has also been reported that Wnt4 shows a higher fluctuation rate in residues near the edges of the two domains, the thumb (NTD) and index finger (CTD), involved in FZD-CRD interactions 83 . The unstructured loop regions of Wnt1 showed fluctuations, revealing the flexibility of the protein 84 . Interestingly, the RMSF values in the binding pocket for the FZD-CRDs (residues 330-360) were lower compared to other regions of Wnt1. RMSD and RMSF also account for the atomic fluctuations due to changes in the molecular orientation of a protein around its average conformation and serve as important indicators of the dynamic behavior of several biological processes 83 . The superimposition of WT Wnt1 and its 8 prioritized variants before and after simulation clearly showed positional shifts of side-chain structures. The exact mechanism and role of the predicted nsSNPs that influence Wnt1 stability should be validated experimentally. Wnt1 binds at the CRD regions of the frizzled receptors FZD-1, FZD-2, FZD-4, FZD-5, FZD-7, FZD-8 and FZD-10 52 . Previous studies have reported that protein-protein interactions are mediated through interfaces, through which proteins generally accomplish their functions 85 ; in addition, the properties of the residues involved in these interfaces determine binding specificity and affinity 86 . In this context, we further investigated the effects of the highly destabilizing variants of Wnt1 on the binding interface and explored the receptor selectivity towards Wnt1 and the downstream signaling cascade. Among 8 86 . This could affect some of the interactions of the Wnt1 protein or "edges" in canonical Wnt signaling, which could have functional consequences 87 . Another important variant of Wnt1, R235W, disrupts the structure of the Wnt1 thumb region by destabilizing the β-strand that supports the FZD-CRD binding loop. The Trp side chain of the R235W variant would clash with the side chains of Trp233, Arg141 and Asp172 and may thus alter the interaction with FZD receptors 88 . Furthermore, residues Cys227, Leu239, Arg240 and Glu343 are involved in binding with the FZD-CRDs 52 ; therefore, substitutions at these positions (C227G, C227Y, L239P, R240C and E343Q) may disrupt the Wnt1-FZD-CRD binding interface.
Further experimental work is needed to validate the destabilizing effects of the Wnt1 variants at the binding interface. As Wnt1-FZD8-CRD was the best docked complex, analyses of the Wnt1-FZD8-CRD interaction through MD simulation revealed that RMSF values decreased in the index (CTD) and thumb (NTD) regions of Wnt1, indicating strong interactions between the two proteins 84 . There is a hint that mutations in Wnt1 perturb the intra-molecular interactions of the protein compared to WT, and it therefore seems that the mutations might have functional consequences for the protein.
Genes that exhibit differential expression profiles between cancerous and normal tissues can serve as either targets of treatment or molecular biomarkers of cancer progression 8 . Comparing the expression of the Wnt1 gene in normal and tumor samples across 21 different tissues revealed no statistically significant up- or downregulation of this gene. However, when the Wnt1 gene expression pattern was compared among normal, tumor and metastatic tissue samples, Wnt1 expression was found to be significantly higher in metastatic tissues of lung, colon and skin 71,89,90 . KM plot analyses revealed that Wnt1 deregulation affected overall survival in gastric and breast cancer patients, whereas it had no impact on the survival of lung and ovarian cancer patients. This observation may be due to functional redundancy of Wnt1 and the FZD receptors, and thereby suggests differential regulation of the subsequent downstream signaling cascade 76 . In breast cancer, high expression of Wnt1 was associated with a smaller number of patients at risk. This inverse relationship between the Wnt1 expression pattern and the overall survival of breast cancer patients may be due to oncogenic regulation of FZD1, one of the Wnt1-binding receptors 91 . In breast cancer, nestin, an extracellular matrix (ECM) intermediate filament protein, inhibits FZD1 expression and thereby β-catenin signaling, halting the proliferation and invasion of breast tissue by decreasing the expression of matrix metallopeptidase 92 . Therefore, the differential expression pattern of the Wnt1 gene may be used as a prognostic biomarker to predict overall survival in gastric and breast cancer patients. As nsSNPs may deregulate the encoded protein 80 , the structurally destabilizing variants identified in our study may have similar functional consequences in Wnt1 deregulation.
In conclusion, our results demonstrate that several nsSNPs in the Wnt1 gene may be deleterious to its structure and functions. We have identified 10 highly deleterious and destabilizing nsSNPs of Wnt1. We found that several nsSNPs affect Wnt1 post-translational modifications that are important for its protein-protein interactions and signaling. Thirteen nsSNPs of Wnt1 may potentially alter the interaction interface between Wnt1 and porcupine and thereby impair its secretion. We have identified highly destabilizing variants of Wnt1 at the binding interface with the FZD-CRDs, which may alter the downstream Wnt1 signaling cascade. MD simulation of apo Wnt1 and of the Wnt1-FZD8-CRD complex revealed the open and closed state transition of Wnt1 upon interaction with the FZD-CRDs. Furthermore, MD simulation also showed that amino acid substitutions in Wnt1 might perturb the intra-molecular interactions of the protein compared to the wild type. Wnt1 showed differential expression profiles between normal, tumor and metastatic tissues, and its deregulation affected survival outcome in patients with gastric and breast cancer. Although bioinformatics tools have their own limitations, our present computational study may be useful for further population-based research and towards the development of precision medicines for the treatment of diseases caused by the most deleterious nsSNPs of Wnt1. Further experimental work is required to elucidate the deleterious nature of the prioritized nsSNPs of Wnt1.
| 7,386.4 | 2022-09-02T00:00:00.000 | [ "Biology" ] |
Influence of hunger on attentional engagement with and disengagement from pictorial food cues in women with a healthy weight
Because of inconsistencies in the field of attentional bias to food cues in eating behavior, this study aimed to re-examine the assumption that hungry healthy weight individuals have an attentional bias to food cues, but satiated healthy weight individuals do not. Since attentional engagement and attentional disengagement have been proposed to play a distinct role in behavior, we used a performance measure that is specifically designed to differentiate between these two attentional processes. Participants were healthy weight women who normally eat breakfast. In the satiated condition (n = 54), participants were instructed to have breakfast just before coming to the lab. In the fasted condition (n = 50), participants fasted on average 14 h before coming into the lab. Satiated women showed no stronger attentional engagement or attentional disengagement bias to food cues than to neutral cues. Fasted women did show stronger attentional engagement to food cues than to neutral cues that were shown briefly (100 ms). They showed no bias in attentional engagement to food cues that were shown longer (500 ms) or in attentional disengagement from food cues. These findings are in line with the assumption that healthy weight individuals show an attentional bias to food cues when food stimuli are motivationally salient. Furthermore, the findings point to the importance of differentiating between attentional engagement and attentional disengagement.
Introduction
Overweight and obesity are major problems in today's society. Currently, 39% of adults are overweight and 13% obese (World Health Organisation, 2018). The World Health Organization (WHO) states that overweight and obesity are preventable, and recommends that people restrict their caloric intake, increase their fruit and vegetable consumption, and engage in frequent physical activity (World Health Organisation, 2018). However, people have a hard time adhering to self-set rules on food restriction (Knäuper, Cheema, Rabiau, & Borten, 2005), and only about 20% of overweight individuals seem to be successful in achieving long term weight loss (Wing & Phelan, 2005). It thus seems important to enhance our understanding of factors that control food intake (e.g., Loeber, Grosshans, Herpertz, Kiefer, & Herpertz, 2013). One characteristic that has been of interest in this regard is attentional bias (AB) to food cues (e.g., Castellanos et al., 2009). As a first step in understanding how AB to food cues might play a role in obesity, it seems important to understand the role of AB to food cues in healthy eating behavior.
Individuals' attention has been proposed to be biased towards stimuli in the environment that have a positive value (e.g., rewarding stimuli), and attention to rewarding stimuli is associated with increased response activation and approach behavior to these stimuli (Anderson, 2017;Higgs et al., 2017;Pool, Brosch, Delplanque, & Sander, 2016). Since food is thought to have a high intrinsic reward value (Robinson & Berridge, 2001), an attentional bias to food stimuli might therefore contribute to individuals' food intake (cf. Berridge, 2009;Franken, 2003). Moreover, it has been suggested that heightened AB to food cues lowers the threshold for overeating and may thus set individuals at risk for the development of overweight and obesity (Berridge, Ho, Richard, & Difeliceantonio, 2010;Polivy, Herman, & Coelho, 2008).
However, results regarding differences between obese and healthy-weight individuals in their attention to food cues have been inconsistent (Field, Werthmann, & Franken, 2016). Some studies showed stronger AB for food in obese compared to healthy weight individuals (Kemps, Tiggemann, & Hollitt, 2014; Nijs, Muris, Euser, & Franken, 2010), whereas several others found no evidence for a difference (Loeber et al., 2012; Werthmann et al., 2011). On top of that, attempts to modify AB for food in obese or unsuccessful dieters have been largely ineffective in influencing eating behavior (Boutelle, Kuckertz, Carlson, & Amir, 2014; Jonker, Heitmann, et al., 2019; Stice, Lawrence, Kemps, & Veling, 2016; Verbeken et al., 2018). The inconsistencies in the field have driven us to revisit the studies that lie at the foundation of the work on the role of attentional bias in obesity. These earlier studies integrated the influence of hunger on attentional bias in the comparison between healthy weight and obese individuals. That is, these studies examined whether attentional bias to food might differ between healthy weight and obese individuals when taking the potential influence of hunger into account. For healthy weight individuals the reward value of food is higher when individuals are deprived of food than when they are satiated (Higgs et al., 2017), and the value might even approach neutral when satiated (Berridge et al., 2010; Small, Zatorre, Dagher, Evans, & Jones-Gotman, 2001). If indeed the attentional bias to food cues is related to the reward value of food, the bias should be strong following deprivation and relatively weak or even absent when satiated. In line with this assumption, one important previous study found that healthy weight individuals only showed an attentional bias to food cues when they were deprived of food for more than 8 h, whereas obese individuals still showed an attentional bias to food when they were satiated (Castellanos et al., 2009). However, two later studies using fasting periods of 10-12 h (Stamataki, Elliott, McKie, & McLaughlin, 2019) or more than 17 h (Nijs et al., 2010) did not show this same interaction between weight status and hunger condition.
Going even a step further back, there is also no consistent empirical evidence for the notion that food deprived healthy weight individuals would, whereas satiated healthy weight individuals would not, have an attentional bias for food cues. Healthy weight individuals with high self-reported hunger have been found to show a greater attentional bias to food cues than individuals low in self-reported hunger as indexed by differential reaction times within the context of a 500 ms visual probe task (VPT). However, a similar pattern was absent in the context of a 14 ms VPT (Mogg, Bradley, Hyare, & Lee, 1998). In addition, a more recent study could not replicate this earlier finding using a 50 ms or 500 ms VPT (Loeber et al., 2013). Furthermore, studies that experimentally manipulated hunger status provided only limited and somewhat inconsistent evidence for the influence of hunger on attentional bias. In one study, healthy weight individuals who were food deprived for more than 8 h showed a higher attentional bias to food cues than satiated healthy weight individuals, but only as measured with eyetracking and not on reaction times in the context of a 2000 ms VPT (Castellanos et al., 2009). In contrast, in another study healthy weight individuals who were food deprived for 10-12 h showed a higher attentional bias to food cues than satiated healthy weight individuals, but only as measured with the reaction time measure of a 100 ms VPT, and not on a 500 ms VPT, or on an eye-tracking measure (Nijs et al., 2010). Thus, the available findings provide no consistent pattern and no robust basis for drawing a final conclusion with regard to the question whether hunger plays a role in healthy weight individuals' attention for food. Hence, this study was designed to arrive at a more final conclusion regarding this role. In order to do so we made several important changes compared to the designs of previous studies.
First, a serious weakness of the previous studies is that they included a limited number of participants. For example, the study of Mogg et al. (1998) only included 16 participants in the fasting and 16 participants in the non-fasting group, and the study of Castellanos et al. (2009) relied on only 18 healthy weight individuals who participated once in a fasted and once in a satiated state. As a consequence, these studies had very limited power (< 30%), which might have led to an under- or overestimation of the true effect and therefore also low reproducibility (Button et al., 2013). In the current study, our sample size will reach sufficient power (i.e., 95% to find a medium effect size) to examine the influence of hunger on attention to food cues.
Second, the statistical approach of previous studies did not make it possible to directly examine whether there is an AB in food-deprived individuals and whether such a bias is absent in satiated individuals. That is, such a question cannot be answered by the commonly used frequentist analyses, since these analyses can only provide evidence to reject null hypotheses, not to accept them. In line with these limitations of the frequentist approach, previous studies only examined differences between groups. A limitation of this approach is that finding a group difference does not necessarily mean that the AB is absent in one group and present in the other. Therefore, Bayesian analyses will be included in the current study, allowing us to examine the evidence in favor of the null hypothesis.
Third, previous studies were unable to examine the potentially distinct role of attentional engagement and disengagement in eating behavior. Attentional engagement (i.e., automatic orientation towards food cues) and disengagement (i.e., redirection of attention away from food cues) have been proposed to play a distinct role in behavior (Mogg & Bradley, 2016;Posner, Inhoff, & Friedrich, 1987). Furthermore, these two processes might be differentially related to eating behavior. For example, attentional engagement might be specifically implicated in healthy eating behavior (Jonker, Glashouwer, Hoekzema, Ostafin, & De Jong, 2019), whereas attentional disengagement might be specifically implicated in compromising dieting success (Franken, 2003;Jonker, Heitmann, et al., 2019). However, previous studies on the role of food deprivation on attention to food cues used AB tasks unable to differentiate between attentional engagement and attentional disengagement (Grafton & MacLeod, 2014). If these processes are differentially influenced by hunger, using such combined measures might have unwanted effects on the results. For example, if engagement would increase as a result of hunger and disengagement would decrease, the results would show no change in attentional bias as measured as the combination of these two processes. Therefore, in this study, AB to food cues will be assessed with a performance measure that differentiates between attentional engagement and attentional disengagement (Grafton & MacLeod, 2014;Jonker, Glashouwer, et al., 2019). The task will include trials in which cues are shown for 100 ms and 500 ms providing the opportunity to examine whether it is a faster (i.e., more automatic) or slower (i.e., controlled) bias in attention that is relevant.
In summary, the following hypotheses were tested in the current experimental study: (1) satiated healthy weight individuals do not show more attention to food cues than to neutral cues, (2) fasted healthy weight individuals do show more attention to food cues than to neutral cues, and (3) fasted healthy weight individuals show a stronger AB to food cues than satiated healthy weight individuals. As a subsidiary issue, we will explore whether hunger has a similar or differential impact on food-related attentional engagement and attentional disengagement.
Participants
Women with a healthy weight (i.e., BMI between 18.5 and 25) who normally eat breakfast were eligible for participation in this study. To obtain a power of 95% to find a difference of medium effect size on the one-sample t-tests, while controlling for multiple testing (α = .05/4 = .0125) because of our four attentional bias measures, a sample size of 63 per group is needed. Only women were included because gender might play a key role in the relation between attention to food cues and eating behavior (Hummel, Ehret, Zerweck, Winter, & Stroebele-Benschop, 2018), and dieting behavior is more common in women (De Ridder, Adriaanse, Evers, & Verhoeven, 2014; Wardle, Haase, & Steptoe, 2006). Participants were recruited via flyers that specifically stated these inclusion criteria, which were spread via social media and placed at university faculties. Additionally, first-year psychology students were recruited via the online platform for psychology research. If interested, women were asked to complete a screening questionnaire. From the self-reported height and weight on this screening questionnaire we calculated BMI (i.e., self-reported BMI). In total, 163 individuals completed the screening, of whom 1 was male and 8 had a self-reported BMI outside the healthy range. Of the remaining 154 eligible women, 129 women aged 18-35 (Mean = 22.43, SD = 3.16) with a self-reported BMI within the healthy weight range (Mean = 21.43, SD = 1.82) participated in the study. Of these 129 women, 14 were recruited via the university's online platform and 115 via flyers.
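The sample-size figure quoted above can be checked with statsmodels; since the text does not state whether the original calculation was one- or two-sided, the sketch below simply reports both.

```python
from statsmodels.stats.power import TTestPower

# One-sample t-test power analysis: medium effect (d = 0.5), alpha = .05/4 = .0125, power = .95.
analysis = TTestPower()
for alternative in ("two-sided", "larger"):
    n = analysis.solve_power(effect_size=0.5, alpha=0.0125, power=0.95, alternative=alternative)
    print(f"{alternative}: n \u2248 {n:.1f}")
```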
Hunger scale
The item "How long has it been since you last ate?" from the Hunger Scale (Grand, 1968) was used to examine participants' compliance to the study instructions. Scores reflect the number of hours that have passed since the participants last ate, rounded off to quarters of an hour. Furthermore, the item "How hungry do you feel right now" which was answered on a 7-point scale from not hungry at all (1) to extremely hungry (7), was used to assess whether food deprivation also led to higher subjective hunger ratings.
Eating disorder symptoms
For descriptive purposes eating disorder symptoms were assessed with the Eating Disorder Examination Questionnaire (EDE-Q; Fairburn & Beglin, 2008). Questions are answered on a scale from 0 to 6. An average score of the 22 items was used as general index of eating disorder pathology (cf., Aardoom, Dingemans, Slof Op't Landt, & Van Furth, 2012). Internal consistency of this global EDE-Q score was excellent in both the non-fasting and fasting condition (Cronbach's alpha = .94 and .93 respectively).
Attention to food
Attentional engagement with food cues and difficulty disengaging from food cues were assessed with the Attentional Response to Distal vs. Proximal Emotional Information (ARDPEI) task, originally designed by Grafton and MacLeod (Grafton & MacLeod, 2014; Jonker, Glashouwer, et al., 2019). The ARDPEI was programmed in E-Prime 2.0 (Schneider, Eschman, & Zuccolotto, 2002) and performed on a desktop computer with a 27-inch screen.
Task procedure. See Fig. 1 for a screen-by-screen overview of an ARDPEI trial. Each trial started with one white square to the left and one to the right of the middle of the screen, against a middle-gray background. Participants were instructed to focus their attention on a red outline that appeared in one of the two white squares. This red outline appeared with equal probability in the left or the right white square. After 1000 ms a red line (i.e., the anchor) appeared within this red outline. This anchor was either a horizontal or a vertical line, and it appeared for 150 ms. Thereafter, two images, a representational image that was either a food or a neutral image (i.e., Image Category) and an abstract art image, replaced the two white squares. The images appeared with equal probability in the left or the right square, and thus appeared distal or proximal to the anchor (Image Position). Following Grafton and MacLeod (2014), images were shown for 100 or 500 ms (Image Time) to examine differences between short and long exposure durations. Following the images, another red line (the probe) appeared on the left or right side of the screen. This red line was, with equal probability, a horizontal or a vertical line. After the probe appeared on the screen, participants had to identify as quickly as possible whether the anchor and the probe had the same orientation (i.e., both horizontal or both vertical) or a different orientation (i.e., one horizontal and one vertical) by pressing the corresponding button on the USB response box. Whether the probe appeared at the same location as the representational image or at the other location constituted the factor Probe Position. The task thus contains 16 different trial types (Image Category (2) x Image Position (2) x Image Time (2) x Probe Position (2)).
The original task consisted of 128 trials presented in randomized order (Grafton & MacLeod, 2014; Jonker, Glashouwer, et al., 2019). This means that there were 8 trials per trial type (e.g., a 100 ms food trial where the food image appears distal to the anchor and the probe has a different position than the food image). In the current study, participants performed this short version of the task twice in direct succession, to provide a more accurate reflection of the true bias.
Stimuli. Food stimuli were 64 high-caloric food items used previously in this task (Jonker, Glashouwer, et al., 2019). Of these, 32 were sweet high-caloric food items (e.g., pancakes, cheesecake, and chocolate) selected from the food-pics database (Blechert, Meule, Busch, & Ohla, 2014), and 32 were savory high-caloric food items (e.g., chips, fries, and pizza), of which 22 were selected from the food-pics database. The additional 10 were selected from our own database.
Fig. 1. Example of (A) a food trial where the food image appears distal to the anchor, the probe has a different position than the food image, and the orientation of the anchor and the probe is different; and (B) a neutral trial where the neutral image appears proximal to the anchor, the probe has a similar position as the neutral image, and the orientation of the anchor and the probe is different.
Data reduction. Outliers were removed following Grafton and MacLeod (2014), separately for the fasting and non-fasting groups. Two participants from the non-fasting condition and four from the fasting condition were removed because they fell more than 2.58 SD below the mean number of correct responses. After removing these participants, the mean accuracy of the non-fasting and fasting conditions was comparable (Mean = 94.5%, SD = 5%; Mean = 95.3%, SD = 4%, respectively). Incorrect trials were deleted. Of the correct trials, 2.4% in the non-fasting condition and 2.3% in the fasting condition fell more than 2.58 SD from the mean reaction time for that trial type and were therefore deleted. Lastly, reaction times faster than 200 ms, which most likely reflect anticipation errors, were deleted; this concerned 5 trials in the non-fasting condition and 9 in the fasting condition. After data reduction of the ARDPEI there were 62 participants in the non-fasting condition and 61 in the fasting condition.
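For concreteness, the data-reduction steps described above might look as follows in Python/pandas; the column names (subject, trial_type, correct, rt) are hypothetical, and the sketch would be applied per condition:

```python
import pandas as pd

def reduce_trials(df: pd.DataFrame) -> pd.DataFrame:
    # 1. Drop participants whose number of correct responses falls more
    #    than 2.58 SD below the group mean.
    acc = df.groupby('subject')['correct'].sum()
    keep = acc[acc >= acc.mean() - 2.58 * acc.std()].index
    df = df[df['subject'].isin(keep)]
    # 2. Keep correct trials only.
    df = df[df['correct']]
    # 3. Drop RTs more than 2.58 SD from the mean of their trial type.
    z = df.groupby('trial_type')['rt'].transform(
        lambda s: (s - s.mean()) / s.std())
    df = df[z.abs() <= 2.58]
    # 4. Drop likely anticipation errors (< 200 ms).
    return df[df['rt'] >= 200]
```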
In total, four AB scores were calculated: an engagement bias for the short (100 ms) and long (500 ms) Image Time trials, and a disengagement bias for the short and long Image Time trials. Engagement biases were calculated from trials on which participants had to shift attention away from their initial focus point to see the representational image, that is, trials on which the representational image appeared distal to the anchor position. The engagement bias was represented by the difference in reaction times between trials where the probe appeared at the same position as the image and trials where the probe appeared at the opposite position, and was calculated as follows: Engagement bias (higher scores reflect facilitated attentional engagement with food compared to neutral cues) = (RT for probes in different location as food image - RT for probes in same location as food image) - (RT for probes in different location as neutral image - RT for probes in same location as neutral image).
The disengagement biases were calculated from trials on which participants' initial focus was at the position where the representational image appeared, that is, trials on which the image appeared proximal to the anchor position. The difficulty to disengage was likewise represented by the difference in reaction times between trials where the probe appeared at the same versus the opposite position. The disengagement bias was calculated as follows: Disengagement bias (higher scores reflect more difficulty disengaging from food compared to neutral cues) = (RT for probes in different location as food image - RT for probes in same location as food image) - (RT for probes in different location as neutral image - RT for probes in same location as neutral image).
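Since the two formulas above differ only in the trial subset they are computed from (distal vs. proximal trials), they can be expressed compactly in code. A sketch follows, assuming hypothetical columns rt, category ('food'/'neutral'), position ('distal'/'proximal'), and probe_same (True if the probe appeared at the image location); it would be applied separately to the 100 ms and 500 ms trials:

```python
import pandas as pd

def bias_scores(trials: pd.DataFrame) -> dict:
    def cue_validity(sub: pd.DataFrame) -> float:
        # RT(probe at the other location) - RT(probe at the image location)
        return (sub.loc[~sub['probe_same'], 'rt'].mean()
                - sub.loc[sub['probe_same'], 'rt'].mean())

    scores = {}
    for name, pos in (('engagement', 'distal'), ('disengagement', 'proximal')):
        sel = trials[trials['position'] == pos]
        food = cue_validity(sel[sel['category'] == 'food'])
        neutral = cue_validity(sel[sel['category'] == 'neutral'])
        scores[name] = food - neutral  # positive = bias towards food cues
    return scores
```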
Internal consistency. Internal consistency of the ARDPEI was assessed with split-half reliability analyses. The relationships between the first and second half of the task were weak for attentional engagement on the short Image Time trials (Spearman-Brown = .23), attentional engagement on the long Image Time trials (Spearman-Brown = .04), attentional disengagement on the short Image Time trials (Spearman-Brown = .16), and attentional disengagement on the long Image Time trials (Spearman-Brown = .27).
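For reference, the Spearman-Brown coefficients above follow from the raw split-half correlation r via the standard prophecy formula (so, for example, the reported .23 corresponds to a raw half-test correlation of roughly .13):

```latex
r_{\mathrm{SB}} = \frac{2\,r}{1 + r}
```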
Procedure
The study protocol was approved by the ethical committee of the psychology department of the University of Groningen (17374). Participants signed up for the study through the screening during which they also provided informed consent. Women who reported to have a healthy BMI on the screening were randomly assigned to either the fasting or the non-fasting condition. To inform participants about their assigned condition and the corresponding instructions, and to increase compliance, participants received instructions via telephone from the researcher 1-5 days preceding their scheduled session. They additionally received an email with the instructions of the relevant condition. Participants who were assigned to the fasting condition were instructed to abstain from food, including drinks containing sugar, for at least 14 h prior to their appointment. Participants who were assigned to the non-fasting condition were instructed to have breakfast just before, but no later than half an hour prior to, their appointment. Participants in both conditions were instructed to not drink alcohol for at least 14 h prior to their appointment. Sessions were scheduled at 9 a.m. or 10.30 a.m.
The lab session followed the same fixed order in both conditions. First, participants answered the hunger scale to assess compliance. Next, they completed the ARDPEI. They then completed the EDE-Q and an explicit question about study compliance, and lastly their weight and height were measured. This study is part of a larger project additionally consisting of the Affective Simon Task (De Houwer, Crombez, Baeyens, & Hermans, 2001), completed after the ARDPEI, and the Profile of Mood States (POMS-40; Grove & Prapavessis, 1992) and Restraint Scale (Herman & Polivy, 1975), administered before the EDE-Q. The study took about 50 min to complete, and participants received study credits (n = 14) or financial compensation (n = 115).
Analyses
To test the hypothesis that individuals who had just eaten do not show more attention to food cues than to neutral cues, the attentional engagement and attentional disengagement scores of the satiated individuals were tested against zero with one-sample t-tests. To test the hypothesis that individuals who had fasted do show more attention to food cues than to neutral cues, the attentional engagement and attentional disengagement scores of the fasted individuals were tested against zero with one-sample t-tests. Further, on top of testing the presence and absence of attentional biases in the two groups, independent-samples t-tests were performed to examine whether the attentional bias in the fasted group was larger than in the satiated group. To control the family-wise error rate for testing our hypotheses with four tests (engagement short, engagement long, disengagement short, and disengagement long), we applied a Bonferroni-Holm correction. This means that the smallest p-value was tested against an alpha of .0125, the following p-values against .016 and .025, respectively, and the largest against .05. To complement the results of the statistical analyses following the common frequentist approach, results were also reported following the Bayesian approach. Whereas the frequentist approach tests the probability of the data given one's hypothesis, the Bayesian approach tests the probability of the hypothesis given the data. Although the analyses following the frequentist approach had 95% power to detect a medium effect size, the power for a small effect size was only 25%. Complementing these analyses with the Bayesian approach increases confidence in our results. Most importantly, the Bayesian approach can test the evidence in favor of the null hypothesis, which is what we are interested in when examining the attentional bias in the non-fasting group. Bayesian analyses were conducted with JASP (JASP Team, 2018). The prior was set at the recommended default r = .707 (Wagenmakers et al., 2017).
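A minimal sketch of the Bonferroni-Holm step-down procedure described above; the p-values in the usage line are illustrative, not the study's results:

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.05):
    """Smallest p is tested at alpha/m, the next at alpha/(m-1), and so on;
    once one test fails, all remaining (larger) p-values fail as well."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    reject = np.zeros(len(p), dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (len(p) - rank):
            reject[idx] = True
        else:
            break
    return reject

# Four tests: thresholds .05/4 = .0125, .05/3 ~ .0167, .05/2 = .025, .05
print(holm_bonferroni([0.014, 0.30, 0.45, 0.60]))  # [False False False False]
```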
In line with our hypotheses, we reported BF01, which quantifies the evidence for the null hypothesis over the alternative hypothesis, when examining the one-sample t-tests in the non-fasting group, and BF10, which quantifies the evidence for the alternative hypothesis over the null hypothesis, when examining the one-sample t-tests in the fasting group. We also reported BF10 when examining differences between the groups. A Bayes factor of 1 is considered no evidence; between 1 and 3 anecdotal; between 3 and 10 moderate; between 10 and 30 strong; between 30 and 100 very strong; and more than 100 extreme evidence that the data are more likely under the alternative hypothesis. A Bayes factor between 1/3 and 1 is considered anecdotal; between 1/10 and 1/3 moderate; between 1/30 and 1/10 strong; between 1/100 and 1/30 very strong; and less than 1/100 extreme evidence that the data are more likely under the null hypothesis (Wagenmakers et al., 2017).
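The verbal labels above can be mechanized; the following small helper is purely illustrative:

```python
def bf_label(bf10: float) -> str:
    """Map a Bayes factor BF10 to the evidence categories of
    Wagenmakers et al. (2017). Note BF01 = 1 / BF10."""
    if bf10 == 1:
        return "no evidence"
    strength = bf10 if bf10 > 1 else 1.0 / bf10
    side = "alternative" if bf10 > 1 else "null"
    for cut, name in [(100, "extreme"), (30, "very strong"),
                      (10, "strong"), (3, "moderate"), (1, "anecdotal")]:
        if strength > cut:
            return f"{name} evidence for the {side} hypothesis"
    return "no evidence"

print(bf_label(4.2))  # moderate evidence for the alternative hypothesis
print(bf_label(0.2))  # moderate evidence for the null hypothesis
```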
Compliance
Six participants in the fasted condition reported having eaten shortly before their appointment in the lab (0.25-1.5 h before) and were therefore excluded from the analyses. Some additional participants in the fasting condition had a fasting duration between 8 and 13 h (n = 8). However, since this is still a substantial fast, and longer than the minimum fasting period in previous studies, we decided to retain these participants (Castellanos et al., 2009; Nijs & Franken, 2012; Stamataki et al., 2019). Additionally, one participant in the satiated condition reported not having eaten on the morning before the experiment (12.25 h before) and was therefore also excluded from the analyses. Further, when height and weight were measured in the lab, three participants were underweight (BMI < 18.5; 1 in the satiated and 2 in the fasted condition) and nine were overweight (BMI > 25; 6 in the satiated and 3 in the fasted condition); these participants were therefore excluded from the analyses. This leaves 54 participants in the satiated and 50 in the fasted condition, resulting in 88.5% power to detect medium effects in the fasted condition and 91% in the satiated condition.
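The post-exclusion power figures can be checked the same way as the original sample-size calculation; again this assumes a one-sided test and d = 0.5, which is our assumption rather than something stated in the text:

```python
from statsmodels.stats.power import TTestPower

for group, n in (("fasted", 50), ("satiated", 54)):
    p = TTestPower().power(effect_size=0.5, nobs=n,
                           alpha=0.05 / 4, alternative='larger')
    print(group, round(p, 3))  # ~0.885 and ~0.91, matching the text
```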
Descriptive statistics
Group characteristics can be found in Table 1. As an indication that the manipulation was successful, there was a large difference between fasted and satiated individuals with regard to the number of hours that passed since they last ate (t(102) = −45.45, p < .001). Furthermore, fasted individuals also reported higher subjective hunger than satiated individuals (t(102) = −11.28, p < .001).
No attentional bias to food cues in satiated healthy weight individuals?
There is indeed evidence that the satiated individuals did not show stronger attentional engagement to food cues than to neutral cues, or more difficulty disengaging from food cues than from neutral cues (Table 2). This was the case for both the short and the long image time trials. The frequentist approach showed that the null hypotheses (i.e., the indices are not larger than zero) could not be rejected. The Bayesian approach showed strong evidence that satiated individuals did not show stronger attentional engagement to food cues or attentional disengagement from food cues compared to neutral cues on the short image time trials, and moderate evidence for the same on the long image time trials.
Attentional bias to food cues in fasted healthy weight individuals?
There is indeed some indication of an attentional bias to food cues in fasted individuals. Following the frequentist approach, fasted individuals seemed to show stronger attentional engagement to food cues than to neutral cues on the short image time trials, although this finding did not reach statistical significance (p = .014, α = .0125). The Bayes factor showed moderate evidence for more attentional engagement with food cues than with neutral cues in fasted individuals. Fasted individuals did not seem to have an attentional bias for food compared to neutral stimuli as indexed by attentional engagement on the long image time trials or by attentional disengagement on either the short or the long image time trials. The frequentist approach showed no evidence for the alternative hypotheses that there is an attentional bias, and the Bayesian approach showed moderate to strong evidence that fasted individuals did not have an attentional disengagement bias on the short and long image time trials, or an attentional engagement bias on the long image time trials.
Do fasted healthy weight individuals have a stronger attentional bias to food cues than satiated healthy weight individuals?
Independent-samples t-tests showed the same pattern as the one-sample t-tests. That is, the frequentist approach showed a marginally significant difference between fasted and satiated individuals, but only on the attentional engagement measure with the short image time (Table 3). Bayesian analyses showed anecdotal evidence that fasted individuals had more attentional engagement to food cues than satiated individuals.
Discussion
This study set out to examine the assumption that healthy weight individuals who are deprived of food have an AB to food cues, whereas healthy weight individuals who are satiated do not. In contrast to previous studies, we included a sample size sufficient to detect medium effects, performed Bayesian analyses allowing us to examine the evidence in favor of the null hypothesis, and used an AB measure designed to differentiate between engagement and disengagement (Grafton & MacLeod, 2014). Our findings can be summarized as follows: 1) there was moderate to strong evidence that satiated healthy weight women do not show more attention to food cues than to neutral cues; 2) there was moderate evidence that fasted healthy weight women show stronger attentional engagement to food cues than to neutral cues when these cues are shown briefly (100 ms), but no attentional engagement bias when cues are shown longer (500 ms) and no attentional disengagement bias to food cues; 3) there was anecdotal evidence for a difference in attentional engagement to food cues between fasted and satiated healthy weight women when cues were shown briefly (100 ms).
In line with expectations, the findings indicate that satiated healthy weight women do not have an AB to food cues. The Bayes factors showed moderate to strong evidence that healthy weight women who are satiated do not have a stronger attentional engagement or attentional disengagement bias to food cues than to neutral cues. Further, the current study showed moderate evidence that women who had fasted for an average of 14 h showed stronger attentional engagement to food cues than to neutral cues when these cues were shown briefly (100 ms). These findings seem consistent with the expectation that food stimuli grab attention because of their rewarding value in food-deprived situations, and that they have less rewarding value when individuals are satiated (Berridge et al., 2010; Higgs et al., 2017; Pool et al., 2016). It should, however, be acknowledged that the presence of an AB to food cues per se does not imply that food cues are considered rewarding, as stimuli with negative saliency are also expected to result in heightened attention (Field et al., 2016; Pool et al., 2016).
Interestingly, fasted healthy weight individuals only showed attentional engagement to food cues that were shown briefly (100 ms), but not to food cues that were shown longer (500 ms). Until now, there was only inconsistent empirical evidence for a role of hunger in the attentional bias to food cues in healthy weight individuals. Further, previous studies used overt (i.e., eye-tracking) and covert (i.e., RT measures) attention as indices of attentional bias, making comparison across studies difficult, since it is not uncommon that different results are found with overt and covert outcome indices of the same task (e.g., Motoki, Saito, Nouchi, Kawashima, & Sugiura, 2018). Nevertheless, when comparing the current findings with other findings from covert attention measures, there seems to be a relatively consistent pattern. That is, a previous study using the VPT with stimuli shown for 100 or 500 ms also found an AB to food cues in individuals who were food deprived (Nijs, Franken, & Muris, 2010), yet a study using the VPT with stimuli shown for 2000 ms did not (Castellanos et al., 2009). Together with our current findings, this seems to indicate that an AB to food cues in food-deprived individuals might only be apparent when the food images are shown briefly. This might indicate that participants are able to ignore food cues and focus on the task at hand when there is sufficient opportunity for cognitive control.
The current study showed moderate to strong evidence that fasted individuals did not have an attentional disengagement bias to food cues. Thus, it seems that only attentional engagement, and not attentional disengagement, might be influenced by hunger in healthy weight individuals. This seems in line with the findings from a previous study using the ARDPEI in the context of food (Jonker, Glashouwer, et al., 2019). In that study, healthy weight adolescents also showed an AB to food cues only on the attentional engagement trials where cues were shown briefly (100 ms), and not on attentional disengagement trials. Healthy weight individuals in this previous study were food deprived for about 3 h before performing the attentional bias task.
Although this is substantially shorter than for individuals in the current study, it has been shown that fasting for 3 h or more already results in substantial differences in hunger among healthy weight individuals (Sawada, Sato, Minemoto, & Fushiki, 2019). Given these findings, it thus seems that specifically attentional engagement might be involved in the eating behavior of healthy weight individuals.
The current study provides some important implications for future research on AB to food cues in the context of, for example, obesity. First, taking satiation into account seems important. For example, when the expectation is that individuals with obesity attend more to food cues than healthy weight individuals, the outcomes of such a comparison will likely differ depending on whether obese individuals are compared to satiated or to fasted healthy weight individuals. Further, the influence of satiation on AB in obese individuals should also be examined in more depth. Although it has been suggested that obese individuals might have an AB to food cues both when hungry and when satiated, studies thus far have not shown consistent findings (Castellanos et al., 2009; Nijs et al., 2010; Stamataki et al., 2019). Second, it seems important to differentiate between attentional engagement and disengagement when examining attentional bias to food cues in the context of eating behavior. Thus far, studies on AB to food cues in healthy weight individuals seem to show that specifically attentional engagement might be important (Jonker, Glashouwer, et al., 2019). Nevertheless, it might be premature to conclude that attentional disengagement is not relevant in the context of eating behavior. For example, an attentional disengagement bias might be related to overeating and obesity.
The current study has some limitations that should be taken into account when interpreting the results. First, although we initially included 64 women in the non-fasting and 65 individuals in the fasting condition, and substantial measures were taken to counteract noncompliance, after exclusion based on the ARDPEI outlier procedure, non-compliance to the manipulation, and BMI as measured in the lab, a sample of 54 individuals in the non-fasting and 50 in the fasting condition remained. Although this still resulted in a substantial power to find medium effects, it was lower than intended. Furthermore, the evidence for stronger attentional engagement to food cues than to neutral cues in fasted individuals was moderate. It thus seems important to replicate the findings of the current study.
Second, the ARDPEI showed low internal consistency, which might negatively influence the interpretability of the results (Parsons, Kruijt, & Fox, 2018). Task characteristics, such as using a range of stimuli that might not all be relevant to individuals and the randomized order of trials, might account for this low internal consistency (Ataya et al., 2012; Christiansen, Mansfield, Duckworth, Field, & Jones, 2015). Further, AB indices calculated from the ARDPEI are difference scores of four different trial types that each contain true and noise variance. Lastly, a potential issue with the internal consistency of AB measures in general is that they are based on difference scores. The components of these difference scores are often correlated, for example because individuals' average speed of responding is included in all components. When the components of such a difference score are highly correlated, the reliability of the difference score will be low (Thomas & Zumbo, 2011). Nevertheless, since the AB measures are not used as individual-difference variables in the current design, the most crucial aspect of the task is that it is sensitive enough to pick up a difference in attention for food and neutral stimuli at the group level. Furthermore, the ARDPEI does seem to provide consistent results when comparing the current findings to a previous study using the ARDPEI (Jonker, Glashouwer, et al., 2019).
Taken together, the current study provides evidence for an AB for food cues in fasted healthy weight women and an absence of such a bias in satiated healthy weight women. Fasted healthy weight women showed attentional engagement to food cues that were shown briefly (100 ms), but not to food cues that were shown longer (500 ms). Furthermore, fasted healthy weight women did not show more difficulty to disengage attention from food cues than from neutral cues. Satiated healthy weight individuals did not show an attentional engagement or disengagement bias. These findings are in line with the assumption that an AB for food cues might play a role in eating behavior of healthy weight women. Further, they seem to show that attentional engagement to food cues is only stronger in hungry individuals when there is little room for cognitive control. Lastly, our findings point to the relevance of differentiating between attentional engagement and attentional disengagement.
Ethical statement
The study protocol was approved by the ethical committee of the psychology department of the University of Groningen (17374). Participants signed up for the study through the screening during which they also provided informed consent.
Funding
The first author was supported by a NWO research talent grant [406-14-091].
Analysis of infrared radiation emitted by moxibustion devices made of different materials using Fourier transform infrared spectroscopy
Moxibustion has a long history of use as a traditional Chinese medicine therapy. Infrared radiation is an important and effective factor in moxibustion. Instead of the time-consuming and laborious process of holding moxa sticks in the hand, moxibustion devices are commonly used as moxibustion methods and tools in modern times. Since the publication of the international standard for moxibustion devices (ISO18666:2021, Traditional Chinese Medicine - General requirements of moxibustion devices), moxibustion devices of various materials have been sold in pharmacies and online stores. However, the influence of the moxibustion device on the therapeutic effect of moxibustion has not been studied. Therefore, this research aimed to evaluate the infrared radiation of moxibustion devices, in order to select the moxibustion device that delivers infrared radiation closest to that of moxa stick combustion. The combination of combustion stability and infrared radiation intensity showed that cardboard tubes and silicone were better materials for moxibustion devices. In the mid-far infrared waveband, moxibustion devices made from cardboard tubes and silica gel can better maintain the thermal effect generated by moxibustion and enable it to be more easily absorbed by the human body. The infrared radiation intensity of the cardboard moxibustion devices increased rapidly and steadily and could be maintained for the longest time. In conclusion, cardboard tubes are the better material for moxibustion devices with respect to infrared radiation.
Introduction
Moxibustion is an important external therapy in traditional Chinese medicine (TCM) [1,2]. The infrared light, heat, and smoke generated by igniting moxibustion materials have a therapeutic effect on the human body, preventing diseases or modifying disease status [3,4]. It has been reported that moxibustion has the effects of reducing inflammation, preventing mild cognitive impairment, improving reproductive function, etc. [5-7]. From ancient times to the present day, moxibustion has been applied by igniting moxa sticks rolled from mugwort at acupoints. The effective elements of moxibustion include heat, infrared radiation, and moxa smoke [8]. In addition to the medicinal and thermal features of moxibustion, its infrared radiation is also an important factor [9-11]. The infrared radiation generated by the burning of moxa sticks can provide energy for the body's metabolic and immune functions [12]. In the past, traditional Chinese doctors usually treated patients using moxa cones or by holding moxa sticks [13]. Nowadays, moxibustion devices are widely used: the doctor puts the ignited moxa stick into the moxibustion device and places it on the acupoint, which makes the clinical use of moxibustion easy and fast.
With the more widespread use of moxibustion devices, the International Organization for Standardization published the international standard for moxibustion devices (ISO18666:2021, Traditional Chinese medicine - General requirements of moxibustion devices). Moxibustion devices can be used to hold the moxa sticks firmly and to adjust temperature. Their size and shape should be suitable to cover a specific acupoint or an area of the human body surface. A safety arrangement is necessary to prevent ash or embers from falling onto the surface of the human body. However, the international standard places no restrictions on the materials used for making moxibustion devices, although the different materials used influence the infrared radiation spectrum from the combustion of moxa sticks [14].
The body of a moxibustion device is usually made from wood, ceramic, cardboard tubes, silica gel, etc. The strength of the infrared radiation is closely related to differences in the production materials. Any object with a temperature above absolute zero releases energy. The invisible infrared light generated during the combustion of moxa sticks accounts for the majority of their radiation spectrum [15]; its intensity varies from red light to the far infrared, and it lies mainly in the near-infrared waveband. Current research indicates that near-infrared and far-infrared radiation have positive effects on the human body [16-19]. The different materials used for moxibustion devices can affect the radiation intensity [20].
The aim of our study was to observe the Fourier transform infrared (FTIR) spectra of four types of moxibustion devices, made from cardboard tubes, silica gel, wood, and ceramic. We compared the advantages and disadvantages of the four types with respect to combustion stability and infrared radiation at different times and in different wavebands. Our results can provide support and a basis for the international standardization of moxibustion devices.
Measurement of FTIR spectra
In this experiment, we chose the Nicolet iS50 Fourier transform infrared spectrometer (Thermo Fisher, USA) to detect the spectra of the moxibustion devices (see Fig. 1). Each spectrum was compiled from the average of 32 scans in the range of 1.28 μm-25 μm. The ambient temperature of the experiment was 22 °C ± 3 °C, the relative humidity was 55% ± 10%, and vibration sources around the instrument were avoided while the spectra were collected. Before the experiment, all the moxibustion devices were placed in a constant environment (the same temperature and humidity conditions) for 72 h to ensure the uniformity of the moisture content of the samples.
The specific spectrum collection process was as follows. First, the background radiation spectrum was collected after a 20-min stabilization of the spectrometer. Because water and carbon dioxide in the air absorb the infrared light emitted by moxibustion devices, eliminating these influences provides greater accuracy.
Secondly, moxa sticks were ignited and put into each type of moxibustion device. We wetted the cowhide with water at around 37 °C to simulate the temperature of human skin and put it under each device. The moxibustion devices were placed horizontally about 8 cm in front of the detection window, and we recorded this moment as minute 0 of the combustion process.
Finally, the spectrum test was carried out at the corresponding time points. We wetted the cowhide at the end of each spectrum collection and then saved the spectrum for follow-up analysis. Each moxibustion device was tested in turn, to prevent its temperature from rising continuously because of repeated testing.
Materials of moxibustion devices
We used moxa sticks made by Chongqing Baixiao Co., LTD for all the moxibustion devices, to eliminate any impact on the spectrum from moxa sticks produced by different manufacturers. Cardboard moxibustion devices (MD) were purchased from Chongqing Baixiao Co., LTD. Wooden moxibustion devices were purchased from Qichun Chutian Yangshengtang Qidai products Co., LTD. Ceramic moxibustion devices were purchased from Changsha Yaofei Network Technology Co., LTD. Silicone moxibustion devices were purchased from the Qi moxibustion factory outlet store. The four types of moxibustion devices are shown in Fig. 2.
We collected the infrared radiation spectrum of the human body after moxibustion treatment. The bottom of a moxibustion device is usually designed to be poriferous, and the light detector of the FTIR spectrometer was able to see through the holes and measure the spectra of the moxa sticks inside the moxibustion devices. We decided to use a cushion of isolation material as simulated skin beneath the bottom of the moxibustion devices, and we selected cowhide, filter paper, and biological semi-permeable membrane (SPM) as candidate isolation materials. By comparison with the infrared radiation spectrum of the human body, we chose the best isolation material to simulate human skin.
Measurement and analysis of FTIR spectra at different time points
Generally speaking, a therapeutic session of moxibustion takes 20 min in a TCM clinic. In order to measure the variation of the infrared radiation of the moxibustion devices, the infrared spectra were detected at the 1st, 3rd, 5th, 7th, 9th, 11th, 13th, 15th, 17th, and 19th minutes after the moxa sticks were ignited in the moxibustion devices. We then used these repeated measurements to explore the intensity of the devices at different time points. The intensity over 1.28 μm-25 μm at every time point was measured and analyzed.
Measurement of combustion stability
In this experiment, the stability of the infrared radiation of the moxibustion devices was measured by the relative standard deviation. We detected 10 infrared spectra of each moxibustion device, at the 1st, 3rd, 5th, 7th, 9th, 11th, 13th, 15th, 17th, and 19th minutes, and calculated the standard deviation using OMNIC software. Finally, we saved the standard deviation spectra, and the average value of the standard deviation spectrum was recorded as the relative standard deviation value of the corresponding moxibustion device: the greater the standard deviation, the lower the stability of the infrared radiation of the moxibustion device.
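A sketch of the relative-standard-deviation computation described above, outside OMNIC, assuming the 10 spectra are available as rows of a NumPy array:

```python
import numpy as np

def stability_index(spectra: np.ndarray) -> float:
    """spectra: shape (10, n_points), absorbance at the 1st..19th minutes.
    Returns the mean of the pointwise standard deviation spectrum;
    a larger value means less stable infrared radiation."""
    sd_spectrum = spectra.std(axis=0, ddof=1)  # SD at each wavelength point
    return float(sd_spectrum.mean())
```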
Analysis methods for different wavebands
Four wavebands were analyzed: the detection band of the FTIR spectrometer (1.28 μm-25 μm), near-infrared radiation (1.28 μm-2.5 μm), mid/far-infrared radiation (2.5 μm-25 μm), and the peak band of human infrared radiation (7.5 μm-10 μm). Infrared radiation intensities in the different wavebands were calculated with the 'mean value measurement' algorithm of the OMNIC software. The average infrared spectrum of each moxibustion device was calculated from the 10 spectra within 20 min by the 'statistical spectrum and mean spectral calculation' algorithm. We then calculated and recorded the mean infrared radiation intensity of each of the four wavebands with the 'mean value measurement' algorithm.
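The waveband averaging amounts to masking the wavelength axis and taking means; here is a minimal NumPy sketch, where the band edges are those given above:

```python
import numpy as np

BANDS = {"full": (1.28, 25.0), "near-IR": (1.28, 2.5),
         "mid/far-IR": (2.5, 25.0), "human peak": (7.5, 10.0)}

def band_means(wavelength_um: np.ndarray, spectrum: np.ndarray) -> dict:
    """Mean intensity of one spectrum within each analysis waveband."""
    return {name: float(spectrum[(wavelength_um >= lo)
                                 & (wavelength_um <= hi)].mean())
            for name, (lo, hi) in BANDS.items()}
```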
Principal component analysis
Principal component analysis (PCA) of the infrared radiation spectra was performed with SPSS 19.0 and OriginPro 2021 (OriginLab, USA). First, the intensities of every moxibustion device at each time point were used as the input variables of the principal component analysis. The two components with eigenvalues greater than 1 were chosen as the principal components, and each moxibustion device's score was calculated from the eigenvalues. We also analyzed and compared the radiation intensity of the cardboard moxibustion devices with the other moxibustion devices over the four different bands, in order to judge whether the intensity at different time points shows the same effect in different moxibustion devices.
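The component-selection rule described above (retain eigenvalues greater than 1, i.e., the Kaiser criterion) could be reproduced as follows; this is a generic scikit-learn sketch, not the SPSS/OriginPro workflow used in the study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_scores(X: np.ndarray):
    """X: rows = samples (device x time point), columns = intensity variables.
    Returns the scores on components with eigenvalue > 1."""
    Z = StandardScaler().fit_transform(X)
    pca = PCA().fit(Z)
    k = int(np.sum(pca.explained_variance_ > 1))  # two components here
    return pca.transform(Z)[:, :k], pca.explained_variance_[:k]
```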
Statistical analysis
The spectrometer was controlled by the OMNIC software (Thermo Fisher, USA), which could optimize the spectrum. OriginPro 2021 (OriginLab, USA) and GraphPad Prism 9.0 (GraphPad Software, USA) were used to perform PCA and to make the statistical graphs. One-way ANOVA with the LSD test was used to test differences in each indicator between groups in SPSS 19.0.
FTIR spectral features of moxibustion devices
Since infrared radiation passing through human skin is attenuated to a certain degree [21], we put three different materials under the moxibustion device separately to simulate moxibustion on human skin. We chose paper, semi-permeable membrane, and cowhide as the barrier media. Fig. 3 shows the FTIR spectra of the moxibustion device for paper, semi-permeable membrane, and cowhide, respectively, and the spectrum of moxibustion applied to human skin through a moxibustion device. The spectral shapes for paper and semi-permeable membrane differ from that of human skin, so these two materials are not suitable for imitating moxibustion on human skin. As the spectral shape for cowhide is more similar to that of human skin, we selected cowhide for the subsequent experiments.
Fig. 4A-D shows the FTIR spectra of the four types of moxibustion device, with and without cowhide: the higher the absorbance, the lower the infrared radiation intensity of the moxibustion device. The shape of the combustion infrared spectrum is regular and easy to analyze with absorbance as the vertical coordinate. Ceramic moxibustion devices without cowhide emit strong infrared radiation directly. The infrared radiation intensity of a moxibustion device used with wet cowhide is evidently reduced. The area of the blue region represents the loss of infrared radiation passing through the cowhide. The ceramic moxibustion devices have the highest loss of infrared radiation when passing through cowhide, whereas the loss for the cardboard moxibustion devices is minimal. The cowhide absorbs a portion of the infrared radiation, which is similar to moxibustion on the human body. The absorbance measured through cowhide for the cardboard and silica gel moxibustion devices is significantly lower, which indicates that these two devices release more energy; the wooden and ceramic moxibustion devices emit only weak infrared energy.
Fig. 4E-H shows the transmittance of the four types of moxibustion device, with and without cowhide. Transmittance (T) decreases as absorbance (A) increases; the two are related by A = -log10(T). The four types of moxibustion devices have high transmittance around 20 μm in wavelength. Ceramic and silicone moxibustion devices without cowhide show higher transmittance than the cardboard moxibustion devices. The cowhide greatly reduces the transmittance of the infrared radiation of the moxibustion device. The absorbance of the cardboard moxibustion devices with cowhide is clearly lower than that of the other three types of moxibustion devices, while the transmittance of the cardboard moxibustion devices with cowhide is higher than that of the other moxibustion devices with cowhide (Fig. 4E-H, red line). In terms of transmittance, the cardboard moxibustion devices have good infrared radiation penetration power.
Fig. 3. The FTIR spectra of the moxibustion device with paper, semi-permeable membrane (SPM), cowhide, and human skin.
Combustion stability of moxibustion devices
In the combustion stability measurement, we used the standard deviation of absorbance as the criterion for evaluating stability. The moxibustion devices made from the four different materials were each measured 10 times. The standard deviation of absorbance is inversely related to combustion stability: the smaller the standard deviation, the more stable the combustion of the moxa sticks in that device.
Fig. 5 shows that the cardboard and silica gel moxibustion devices provide higher combustion stability than bare moxa sticks. The combustion stability of moxa sticks in the wooden and ceramic moxibustion devices is the worst among the four types: the larger the relative standard deviation of the infrared radiation intensity over the 20-min working period, the lower the stability, and the more unstable the infrared radiation intensity delivered to the human body during operation.
* vs control moxa sticks without device; # vs ceramic moxibustion device; Δ vs wooden moxibustion device; MD = moxibustion device.
The infrared radiation of moxibustion devices at different times
In a TCM clinic, the usage time of moxibustion devices is about 20 min. We collected the infrared radiation spectrum every 2 min, for 20 min in total, and each detection took 49 s. The average intensities of the infrared radiation spectra at the different times were recorded and analyzed. Fig. 6 shows the average absorbance and the trend of the infrared radiation intensity. In the control group, the infrared radiation intensity of the moxa stick without a moxibustion device reached its maximum after ignition and then gradually decreased. Moxa sticks of the same length burned out at the 15th minute, so no infrared radiation was collected after the 15th minute.
In the 1st minute, the infrared radiation intensity of the wooden moxibustion devices is the lowest among the four types. It then increases rapidly, reaching its maximum at the 5th minute, decreases at the 9th minute, and becomes unstable from the 9th to the 20th minute. The infrared radiation intensity of the ceramic moxibustion devices fluctuates repeatedly in the first 9 min and stabilizes gradually after the 11th minute, but the overall intensity remains low. The infrared radiation intensity of the silicone moxibustion devices increases and stabilizes rapidly in the initial stage, but weakens in the second half of the combustion process. The cardboard moxibustion devices appear to perform best, with a stable, increasing infrared radiation intensity that is maintained for a longer time. Fig. 7 shows the comparison of infrared radiation intensity from the 1st to the 19th minute. In the 1st minute after ignition, the absorbance of the moxa stick without a moxibustion device is significantly lower than with the moxibustion devices, which demonstrates that the devices reduce the infrared intensity just after ignition; the wooden moxibustion device, however, exhibits a faster rate of infrared radiation enhancement. By the 3rd minute, the cardboard and silica gel moxibustion devices have heated up gradually and their infrared radiation has strengthened. From the 5th to the 7th minute, the moxibustion devices made from cardboard tubes, wood, and silica gel tend to have similar infrared radiation intensities; however, the ceramic moxibustion device heats up slowly and produces lower infrared radiation. Moxibustion devices can steadily increase the burning temperature of moxa sticks, and the cardboard and silica gel moxibustion devices perform well from the 9th minute onward. Owing to its fast burning rate, the moxa stick without a moxibustion device is extinguished by the 15th minute, which also indicates that moxibustion devices prolong the burning time of moxa sticks. Within the 20 min of moxibustion, the infrared radiation of the cardboard and silica gel moxibustion devices is strong and stable overall; of the two, the cardboard moxibustion device is superior.
Fig. 8A shows the infrared radiation intensity in the 1.28-25 μm waveband. The absorbances of the cardboard and silica gel moxibustion devices are the closest to that of burning moxa sticks, and significantly lower than those of the other two types of moxibustion devices. In the near-infrared waveband (1.28-2.5 μm, Fig. 8B), the intensity of the moxa sticks is clearly stronger than that of the moxibustion devices, which suggests that the moxibustion devices absorb part of the infrared radiation from the burning moxa sticks. In the 2.5-25 μm and 7.5-10 μm wavebands (Fig. 8C and D), there is not much difference between moxa sticks burning alone and in the cardboard and silica gel moxibustion devices.
Fig. 6. Infrared radiation intensity of the moxa sticks burned in moxibustion devices for different combustion times.
Principal component analysis (PCA) scores of moxibustion devices
In the PCA score plot, samples with similar infrared radiation intensity cluster together, showing the clustering effect of each group of samples. Fig. 9A shows the infrared radiation intensity scores of the four different moxibustion devices at different time points in the 1.28-25 μm wavelength range; Fig. 9A and B show that the four groups exhibit a certain degree of clustering. There are differences in the infrared radiation intensity of the four types of moxibustion devices at different time points.
Comparing the cardboard moxibustion devices with the other three types, we obtained PCA scores for the infrared radiation intensity in four bands: 1.28-25 μm, 1.28-2.5 μm, 2.5-25 μm, and 7.5-10 μm. Fig. 10A shows a high similarity between the infrared radiation intensity of the cardboard moxibustion devices in the first minute, the lowest over their whole burning period, and that of the ceramic moxibustion devices from 1 to 20 min. Fig. 10B shows that the infrared radiation from the cardboard moxibustion devices at the 1st, 3rd, and 11th minutes has effects similar to those of the ceramic moxibustion device over the 20 min of combustion. Fig. 10C illustrates the PCA scores and clustering degree of the cardboard and silica gel moxibustion devices. Based on the PCA, the two factors with the highest contribution are the 1.28-2.5 μm and 2.5-25 μm bands. The differentiation between the cardboard and silica gel moxibustion devices is not significant in the 1.28-2.5 μm and 2.5-25 μm wavebands.
Discussion
In ancient times, moxibustion was one of the earliest physical therapy methods [22,23]. The infrared radiation of moxibustion is one of its important effective factors, acting on the acupoints of the human body [24]. The invisible infrared light emitted by the burning of moxa in the moxibustion device accounts for the majority of its radiation spectrum, with radiation of varying intensities from red light to the far infrared. As a direct and rapid collection technique, FTIR spectroscopy can profile the radiation intensity of moxibustion devices made of different materials. The sites influenced by the infrared radiation of moxibustion include epidermal and subcutaneous tissue. In this study, the transdermal ability of the infrared radiation of moxibustion devices was measured by comparing the infrared radiation of moxibustion devices with and without simulated skin. The key issue was whether the moxibustion device can help the infrared radiation of moxibustion reach the human body and thus exert a warming effect. The results of this study show that the infrared radiation of the cardboard moxibustion device passes through the simulated skin best; we speculate that, in clinical treatment, it can best assist the infrared radiation of the moxa stick to penetrate the superficial part of the human body, reducing the loss of infrared radiation outside the body and in the superficial tissues.
The results of this study showed that, compared with bare moxa sticks, all four kinds of moxibustion devices had stronger stability within 20 min. We speculate that the outer structure of the moxibustion device promotes the accumulation of the infrared radiation emitted during combustion, accompanied by the conversion between light and heat energy, reducing energy loss and ensuring a stable infrared radiation intensity during operation. Therefore, moxibustion devices can help improve the stability of the infrared radiation intensity during moxibustion, and the cardboard and silicone devices have the best stability. Because the thermal conductivity and heating rate differ between materials, the stability of the infrared radiation intensity during the burning of moxa sticks differs between devices of different materials. Stable combustion can ensure the quality of moxibustion and thereby its therapeutic effect.
According to the trends of the infrared radiation intensity of the moxa sticks and each moxibustion device within 20 min, the infrared radiation intensity of bare moxa sticks showed a downward trend from the first minute, while the infrared radiation of the devices increased as the internal burning time extended. The reason for this phenomenon may be that part of the visible light emitted by the combustion of bare moxa sticks cannot be converted into heat energy, or that the infrared radiation is absorbed by substances in the air, so that the light energy generated by the combustion cannot be maximally accumulated at the human skin. Conversely, owing to its external structure, part of the light energy generated by the combustion of moxa sticks inside a moxibustion device can be converted into heat energy through absorption by the external structure, slowing the loss of light energy so that it can be used by the human body through heat transfer. The structure of the moxibustion tube slows the loss of heat energy generated by burning moxa sticks to a certain extent, so moxibustion devices can reduce the loss of infrared radiation from burning moxa sticks in several ways. In the field of medicine, the therapeutic effects may vary with the infrared wavelength range: near-infrared radiation has strong penetration ability, while mid-to-far-infrared radiation can generate thermal effects [25-28]. The absorbance in the 2.5-25 μm waveband reflects the intensity of mid-to-far-infrared radiation, which is related to thermal effects, and the 7.5-10 μm waveband reveals the capacity of the human body to absorb infrared radiation [29,30]. These results indicate that the cardboard and silica gel moxibustion devices perform well in generating thermal effects and in absorption and utilization by the human body. In this experimental study, among the four kinds of moxibustion devices, the external structure of the ceramic device reached the highest temperature during application. Combined with the results of this experiment, we speculate that its weak infrared radiation is due to excessive absorption of the energy emitted by moxa stick combustion by its external structure, so that heat or light energy is preferentially absorbed there; as a result, the energy loss of the ceramic moxibustion devices was greater than that of the other devices. In clinical treatment, there are many kinds of moxibustion devices to choose from, and choosing a device made from a better material is very important. In this study, the infrared radiation from the cardboard moxibustion devices at the 1st, 3rd, and 11th minutes had effects similar to those of the ceramic moxibustion device over the 20 min of combustion. We observed that, whatever the material of the moxibustion device, the overall trend of its radiation intensity is upward. This indicates that, although the ceramic moxibustion device accumulates considerable infrared radiation over 20 min, it is still only comparable to the cardboard moxibustion devices' infrared radiation at minutes 1, 3, and 11. Therefore, in clinical treatment, the therapeutic effect of cardboard moxibustion devices may be better than that of ceramic ones.
Conclusion
This study used a Fourier transform infrared spectrometer to explore the infrared radiation spectral properties of moxibustion devices. Starting from the spectral shape, the infrared radiation intensity, and multiple infrared spectral wavebands, the infrared spectral characteristics of moxibustion devices were comprehensively studied.
Overall, moxibustion devices can prolong the burning time of moxa sticks, allowing the warm stimulation of moxa sticks to continue to affect the human body. The cardboard and silica gel moxibustion devices perform better with respect to combustion stability and infrared radiation intensity. PCA indicates a difference between the moxibustion devices made from cardboard tubes and those made from silica gel. The cardboard moxibustion devices can stabilize and increase the intensity of the emitted infrared radiation within 20 min of moxibustion. Therefore, cardboard tubes and silicone are the best materials for making moxibustion devices, with cardboard tubes being superior.
Fig. 2. Appearance of four types of moxibustion devices: A) Cardboard moxibustion devices, B) Silicone moxibustion devices, C) Ceramic moxibustion devices, D) Wooden moxibustion devices.
Fig. 4. The FTIR spectra of the four types of moxibustion device with and without cowhide.
Fig. 7. Repetitive analysis of infrared radiation intensity of moxibustion devices at different combustion times. * vs wooden moxibustion device; # vs ceramic moxibustion device; & vs control moxa sticks without device; MD = moxibustion device.
Fig. 9. PCA score plot of infrared radiation intensity of moxibustion devices at different time points. A) four moxibustion devices, B) moxibustion devices made from cardboard tubes vs silica gels.
Fig. 10. PCA score plot of infrared radiation intensity of moxibustion devices at different wavebands. A) cardboard moxibustion devices vs ceramics; B) cardboard moxibustion devices vs wood; C) cardboard moxibustion devices vs silica gels.
"Materials Science"
] |
Improving Sediment Transport Prediction by Assimilating Satellite Images in a Tidal Bay Model of Hong Kong
Numerical models, one of the major tools for sediment dynamics studies in complex coastal waters, are now benefitting from remote sensing images that are easily available as model inputs. The present study explored various methods of integrating remote sensing ocean color data into a numerical model to improve sediment transport prediction in a tide-dominated bay in Hong Kong, Deep Bay. Two sea surface sediment datasets delineated from satellite images from the Moderate Resolution Imaging Spectroradiometer (MODIS) were assimilated into a coastal ocean model of the bay for one tidal cycle. Validation against in situ measurements showed that the remote sensing sediment information enhanced the ability of the sediment transport model. Model results showed that the root mean square errors of the forecast sediment, both at the surface layer and in the vertical layers, from the model with satellite sediment assimilation are reduced by at least 36% relative to the model without assimilation.
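The skill metric quoted above is the root mean square error (RMSE); a minimal sketch of how the reported reduction would be computed, with purely illustrative numbers rather than the study's data:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Illustrative values only (not the study's data): suspended sediment, mg/L
obs   = np.array([10.0, 14.0, 18.0, 12.0])  # in situ measurements
free  = np.array([ 6.0, 10.0, 24.0, 17.0])  # run without assimilation
assim = np.array([ 9.0, 13.0, 20.0, 13.0])  # run with assimilation
reduction = 100.0 * (1.0 - rmse(assim, obs) / rmse(free, obs))
print(f"RMSE reduction: {reduction:.0f}%")
```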
Introduction
Suspended sediment particles are an integral part of ecosystem health in many coastal environments, because they are related to the total production and fluxes of heavy metals and micro-pollutants. Knowledge of suspended sediment transport is critical to water quality management in coastal ocean areas [1]. Numerical models have long been the primary tools for understanding and assessing sediment movement in coastal and ocean environmental systems; in particular, recent state-of-the-art fine-resolution models have led to fruitful outcomes in these research areas. However, predicting the transport and fate of coastal sediment remains a challenge because of the highly complex nonlinear sediment dynamics and the limited understanding of the key sediment processes underlying the behavior of a real-world dynamical system [2]. State-of-the-art regional-scale sediment transport models, which are based on semi-empirical relationships, suffer from significant prediction uncertainty unless constrained with observations [3]. Traditionally, observations used for model initialization, calibration, and validation have been collected by ship-based surveys or fixed moorings; such methods usually acquire data with sparse spatial and temporal density and can be very costly.
Satellite ocean color data can provide sea surface sediment information that is highly resolved in both space and time. This is a promising data source that matches the models' spatial scales well and has proven valuable for model evaluation and development [4,5]. However, satellite observations generally have long revisit intervals and are limited to the surface. Numerical models, by contrast, have no such time or space limitations. Because they are built on fundamental principles of ocean physics, numerical models can potentially supply the missing information on the causes of the distributions and changes seen between two successive remote sensing observations [6]. Therefore, combining the advantages of remote sensing data and numerical models can enhance our knowledge of sediment movement. In fact, integrating remote sensing data and numerical simulation has been widely applied in studying ocean and inland water environments. Pleskachevsky et al. [7] presented a three-dimensional SPM transport model used in synergy with two-dimensional suspended sediment distributions from ocean color (CZCS) images to analyze the resuspension and deposition characteristics of vertical sediment. Miller et al. [8] used sediment concentrations derived from cloud-free Moderate Resolution Imaging Spectroradiometer (MODIS) images to calibrate and validate a sediment transport model by comparison with the model-predicted SPM concentrations. Kouts et al. [9] analyzed the effect of sand dredging on the sediment dynamics of Pakri Bay (Gulf of Finland) by comparing distributions from remote sensing images and numerical model results. Chen et al. [10] proposed an application using sediment concentration distributions from MERIS images to initialize and calibrate a three-dimensional sediment transport model of Bohai Bay, China.
However, apart from the numerical uncertainties in models, satellite data may also suffer from problems arising in on-board acquisition and image interpretation [4]. Integrating a model with satellite images without considering their errors may aggravate the uncertainties in the model prediction. The accuracy of model results can be improved through data assimilation, by which models are regulated so that the system's dynamics are respected and errors in both the models and the satellite observations are acknowledged. Many researchers have shown interest in assimilating remote sensing sediment data into coastal and ocean models; the corresponding literature is still scarce, but the number of publications is growing [4,11-13].
In the present study, a three-dimensional hydrodynamic and sediment transport model of Deep Bay, a tide-dominated bay in Hong Kong, was established and used to assimilate sea surface sediment data derived from MODIS images. Deep Bay suffers from many environmental problems related to suspended sediment, including high turbidity and serious wetland contamination by heavy metals and nutrient-laden sediment, because of intense human activities [14]. The purpose of this work is to explore the combination of a sediment transport model and satellite images through data assimilation schemes to improve understanding of the complicated sediment dynamics of Deep Bay. Such an investigation is important for better water quality management and wetland ecology restoration of the bay.
The remainder of this paper is organized as follows. Section 2 introduces the case study region and observational datasets and outlines the hydrodynamic and sediment transport models. Section 3 describes the satellite data processing method and the data assimilation scheme, which integrates the sediment transport model with the remote sensing data. Section 4 presents the model calibration and validation processes and discusses the data assimilation results. Conclusions are given in the last section.
Study Area and In Situ Data
Deep Bay (22.41° to 22.53° N, 113.88° to 114.00° E) is a semi-enclosed shallow bay on the eastern side of the Pearl Estuary, between Shenzhen to the north and the New Territories of Hong Kong to the south (Figure 1A). The width of the bay varies from 4 km, at the narrowest section near the mouth, to 7.6 km; its length is 13.9 km, and the total sea surface area is about 80 km². It is influenced by the irregular mixed semi-diurnal tide of the South China Sea. Four major rivers (the Shenzhen, Dasha, Yuen Long, and Tin Shui Wai Rivers) discharge into the bay with relatively small flow rates. Because of its unique geographic location, the embayment exhibits complicated tidal and sediment movement, subject to tidal flushing and river outflows as well as human activities such as reclamation [15]. Deep Bay is important for the conservation of the Futian National Nature Reserve (FNNR) and the Mai Po Nature Reserve (MPNR). These wetlands are located near the mouth of the Shenzhen River at the upstream end of Deep Bay and provide a habitat for numerous rare and endangered species [16]. The wetlands have been suffering from increasing contamination in the region, for example the adsorption and release of heavy metals and organic pollutants from sediments in the water and on the sea bottom [17].
Measurements used in this study are part of a synchronous survey project in the Shenzhen River Basin conducted by the Hydrology Bureau of the Yangtze River Water Resources Commission of China. The project commenced in October 2004; measurements were taken hourly from 11:00 on 17 October to 15:00 on 18 October (29 h in total). Water levels were recorded by tidal gauges at Dong Jiao Tou (DJT), Tsim Bui Tsui (TBT), Chi Wan, and Lan Kok Tsui (see Figure 1). Vertical profiles of flow velocity, sediment concentration, and salinity were collected at sites along three transects inside the bay. Transect A is in the northwest of Deep Bay near the mouth of the Shenzhen River and has three observation sites: S4, S5, and S6. Transect B is in the middle of the bay and includes two sites: S7 and S8. Transect C (not shown in Figure 1A) is at the mouth of Deep Bay, almost parallel to Transect B, and includes five sites. The data measured on Transect C were used not for calibration or validation but as boundary condition data. According to the water depth H (m) at the local measurement time, velocity measurements were taken at six vertical levels: 0.0 H (surface layer), 0.2 H, 0.4 H, 0.6 H, 0.8 H, and 1.0 H (bottom layer). At each site, current velocities were measured hourly with a ZSX-3 direct-reading flow instrument at each vertical layer. Water samples were collected sequentially from the surface layer to the bottom layer at each site. Five hundred milliliters of each water sample were filtered immediately on a pre-weighed Whatman cellulose acetate membrane filter with a diameter of 47 mm and a nominal pore size of 0.45 μm. The filter was stored in a desiccator, then combusted in an oven at 500 °C for 3 h and weighed in the laboratory on an analytical balance with a precision of 0.01 mg. Sediment concentration was determined as the weight difference normalized by the filtered water volume. Salinities were measured from these water samples with a digital salinity meter. Note that when the water depth was less than 2 m, no measurements at the surface layer were taken and measurements were made near the surface instead; in this study, such near-surface sediment data were used in place of surface sediment data. Note also that very few near-surface sediment samples were taken at S4 and S6, because the water depths at these two sites were very shallow most of the time and measurements were taken only near the bottom layer (0.8 H).
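The gravimetric determination described above reduces to a simple weight-difference calculation. The following minimal sketch illustrates it; the function name and the numbers in the example are illustrative placeholders, not values from the survey.

```python
# Gravimetric suspended sediment concentration (SSC): the filter weight gain
# divided by the filtered water volume, as described in the survey protocol.

def ssc_mg_per_l(filter_before_mg: float, filter_after_mg: float,
                 volume_ml: float) -> float:
    """SSC in mg/L from pre-/post-filtration filter weights (mg) and sample volume (mL)."""
    sediment_mg = filter_after_mg - filter_before_mg
    return sediment_mg / (volume_ml / 1000.0)  # convert mL to L

# Example: a 500 mL sample whose filter gained 12.40 mg of sediment.
print(ssc_mg_per_l(91.20, 103.60, 500.0))  # -> 24.8 mg/L
```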
Model Description and Configuration
The model applied for the calculation of the hydrodynamics and sediment transport in Deep Bay is an unstructured grid, finite volume, free-surface, 3-D primitive equation coastal ocean model
(FVCOM) developed by [18]. The unstructured triangular grids used in FVCOM accurately fit the geometry of irregular coastlines, and a sigma-coordinate transformation is used to represent bottom slope irregularities in the vertical direction. The model simulates water surface elevation, 3D velocity, flooding and drying processes, temperature, salinity, water quality, and sediment transport. The FVCOM sediment transport module is based on the Community Sediment Transport Model (CSTM) developed by the U.S. Geological Survey (USGS) and is implemented by solving the three-dimensional advection-dispersion equations. It has been tested in many coastal environment studies for the calculation of current-induced erosion, transport, and deposition of multiple sediment classes, and has also been implemented in the well-known Regional Ocean Modeling System (ROMS) [19]. From an analysis of the size distribution of suspended sediment in field survey data for Deep Bay by Wong and Li [20], the median size of suspended sediment in the dry season is 10 μm, with no significant difference in size distribution in the vertical direction. In this study, cohesive sediment with this dominant median size was considered in simulating the three-dimensional sediment transport of Deep Bay with FVCOM.
The model grid was generated based on the land boundary, with relatively high resolution (80 m) in the inner bay near the Shenzhen River and coarser resolution (250 m) at the open boundary. Because the in situ measurements were conducted at 0.0 H, 0.2 H, 0.4 H, 0.6 H, 0.8 H, and 1.0 H (H is the water depth), six vertical sigma layers were set in the model to facilitate validation. Hourly wind meteorological data at S7 were used as a spatially uniform surface driving force. The open boundary was driven by tidal elevations measured at Chi Wan and Lan Kok Tsui; measured sediment concentrations and salinities at the Transect C sites were also prescribed at the open boundary. Mean flow rates and sediment concentrations of the Shenzhen, Yuen Long, Tin Shui Wai, and Da Sha Rivers in the dry season (Table 1), obtained from the Drainage Services Department of Hong Kong, were prescribed in the model. The model run extended from 0:00 on 10 October to 24:00 on 20 October 2004, with external and internal time steps of 1 s and 10 s. The model was cold-started and initialized with zero current velocity. Since the model initial time falls in the neap tide period, the sediment concentration and salinity were initialized with horizontally uniform values equal to the mean observed profiles measured from 11:00 on 17 October to 15:00 on 18 October. The tidal amplitude was initialized with the in situ water level at TBT obtained from the Hong Kong Observatory. The model reached a steady state after several tidal cycles of spin-up by 11:00 on 17 October, when model calibration and validation against the in situ measurements began.
Satellite Sediment Information Retrieval
In this study, we used images from the National Aeronautics and Space Administration (NASA) MODIS data archive website [21]. Two cloud-free MODIS Aqua satellite images were obtained during the in situ measurement period, one at 13:30 on 17 October and the other at 13:30 on 18 October. We selected Level 1 data and corrected the atmospheric effect to obtain water reflectance using the Quick Atmospheric Correction (QUAC) method proposed by Bernstein et al. [22]. QUAC is a semi-empirical method that requires no prior information on atmospheric conditions or illumination/viewing geometry at the time of image acquisition, and it is based on several simple assumptions. Previous results over turbid waters have shown that QUAC yields accuracies comparable to other methods [23], with significantly faster computation and robust atmospheric correction results.
To derive sea surface sediment concentration from satellite image data, a relationship between suspended sediment concentration (SSC) and water reflectance must be determined. Such relationships have been proposed through semi-analytical algorithms based on radiative transfer theory [24,25] and, very often, through empirical regression. Many empirical regression relationships documented by other researchers have been tested to establish remote sensing retrieval models of suspended sediment; these empirical models include linear, exponential, and logarithmic statistical relationships, among others [8,26,27]. In this study, we sought to establish an empirical regression algorithm for retrieving the sediment concentration from MODIS images.
For MODIS data, the traditional bands with a spatial resolution of 1000 m were specifically designed for ocean color observation of open ocean waters. Such bands are not suitable over highly turbid coastal and inland waters, because they saturate and the true signals are unknown [28]. However, a number of investigators have exploited land/cloud bands with spatial resolutions of 250 m and 500 m for application to inland and coastal turbid waters [29,30]. These bands include the 645 nm and 555 nm wavelengths, which have proven useful for establishing sediment retrieval algorithms in the eastern part of the Pearl River Estuary, where Deep Bay is located [31,32]. In this study, the water reflectance in these two bands obtained from the MODIS images was used. Because the acquisition time of the MODIS data was 13:30 on 17 and 18 October, the average of the in situ sediment concentrations measured at 13:00 and 14:00 was used as the measured concentration at 13:30. Based on the water reflectance and the in situ suspended sediment concentrations at all sites on Transects A, B, and C at the two image acquisition times, the best-fitting relationship, with a squared correlation coefficient of 0.835, was found; it is shown as follows.
Here SSC denotes the suspended sediment concentration (mg/L), and Rss(645) and Rss(555) denote the water reflectance in the 645 nm and 555 nm bands, respectively. Figure 2 shows the scatter plot of remote sensing reflectance against suspended sediment concentration. The surface sediment concentration was retrieved from the two available images using this function. The retrieved image reflectance may have unusually high values in pixels adjacent to land because of the land reflection effect, resulting in abnormal sediment concentrations in image pixels near the coastline; such bad pixels were deleted manually to ensure a reliable assimilation experiment. Figure 5 shows the retrieved sediment distributions of the two images at 13:30 on 17 October and 13:30 on 18 October.
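Since the exact functional form of the fitted relationship is not reproduced here, the following sketch illustrates the general fit-and-apply workflow only. Purely as an assumption for illustration, it uses a linear model on the 645/555 nm reflectance ratio fitted to matched in situ SSC; the function names and arrays are placeholders, not the paper's actual regression.

```python
import numpy as np

def fit_ssc_model(rss645, rss555, ssc_insitu):
    """Least-squares fit of SSC = a * (Rss645 / Rss555) + b; returns (a, b, R^2)."""
    ratio = np.asarray(rss645, float) / np.asarray(rss555, float)
    y = np.asarray(ssc_insitu, float)
    A = np.column_stack([ratio, np.ones_like(ratio)])  # design matrix
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = a * ratio + b
    r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    return a, b, r2

def retrieve_ssc(rss645, rss555, a, b):
    """Apply the fitted model pixel-wise to reflectance arrays (e.g., 2-D images)."""
    return a * (np.asarray(rss645, float) / np.asarray(rss555, float)) + b
```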
Assimilation Scheme
In this study, the widely used ocean data assimilation method, the optimal interpolation (OI) algorithm, was employed as the assimilation approach [33,34]. The OI method obtains the statistically optimal state from model forecasts and observations through least squares estimation. It has the advantages of simple implementation and relatively small computational cost, especially for highly nonlinear, high-dimensional ocean model systems. Based on OI, the sediment observations from remote sensing images were interpolated onto the model grid using the model forecast field as a first guess. First, the remote sensing sediment concentrations were mapped onto the n model grid points. The updated sediment concentration field is then obtained from

C^a_k = C^f_k + W_k (C^rs_k - C^f_k),    (2)

where C is the n-dimensional vector of sediment concentration, with superscripts a, f, and rs denoting the updated state, the model forecast, and the remote sensing observation, respectively, and subscript k denoting the assimilation time at which a remote sensing image is available. The gain W_k is an n × n matrix of weights that determines each observation's influence on the final updated state. From the principle of minimizing the error variance of the updated sediment concentration [35], W_k should be

W_k = P^f (P^f + R)^(-1),    (3)

where P^f is the n × n error covariance matrix of the sediment concentration field from the model forecast and R is the n × n error covariance matrix of the sediment concentration field from the remote sensing images. After determining W_k, the updated sediment concentration field is obtained from Equation (2). The model is then integrated to the next forecast time, with the updated field as the initial condition, until the next assimilation time.
To perform OI, the model forecast error covariance matrix, P^f, and the remote sensing error covariance, R, in Equation (3) must be determined. In this study, the error variances of the model forecast and of the remote sensing sediment concentration were obtained through comparison with the in situ measured sediment concentrations at all sites on the three transects. They are computed using the following formula.
σ²_{f,k} = (1/N) Σ_{i=1..N} (C^f_{k,i} - C^obs_{k,i})²,  σ²_{rs,k} = (1/N) Σ_{i=1..N} (C^rs_{k,i} - C^obs_{k,i})²,

where σ²_{f,k} and σ²_{rs,k} are the error variances of the forecast and remote sensing sediment concentrations, with subscript k denoting the assimilation time; C^obs_{k,i} denotes the in situ measured sediment concentration at the i-th site; and N is the number of in situ measurement sites. The error variances of the model forecast and the remote sensing sediment concentration were calculated as 739.84 and 28.09, respectively.
In optimal interpolation, it is often assumed that both model and measurement errors follow a Gaussian distribution and that there is no correlation between measurement error and model error [36]. It is also usually assumed that there is no correlation among measurement errors, so the error covariance matrix R is diagonal, with the error variance of 28.09 on the diagonal and 0 elsewhere.
For the determination of the model forecast error covariance, P^f, many schemes for calculating forecast error correlations have been proposed and put to practical assimilation use [36,37]. In this study, an exponential correlation model was chosen to define the error correlation: the forecast errors are assumed to follow a Gaussian distribution, with the error correlation decreasing exponentially with the square of the distance [38],

ρ = exp[-(Δx² + Δy²) / R²],

where ρ is the forecast error correlation, Δx and Δy are the distances between two forecast grid points in the x and y directions, and R here is the correlation length (not to be confused with the observation error covariance matrix R), which limits the influence of interpolated data to a fixed region in optimal interpolation [39]. In this study, the hydrodynamic and sediment transport model was first run "cold" with the provided input data and parameters and validated against in situ measurements. Based on the established model, runs were conducted with the two surface sediment concentration images obtained in Section 3.1 sequentially assimilated into the model. To achieve good data assimilation performance, the optimal forecast error correlation must be determined: by repeatedly assimilating the remote sensing sediment with different correlation lengths, the correlation length producing the best OI result was selected. The root mean square error (RMSE) was calculated for each correlation length at the two assimilation times to evaluate the assimilation performance,

RMSE = sqrt[ (1/N) Σ_{i=1..N} (C^OI_i - C^obs_i)² ],

where C^OI_i and C^obs_i denote the sediment concentrations from the OI results and from the in situ measurement sites, and N is the number of in situ measurements. The RMSE is also used to evaluate the model's performance in forecasting water level, salinity, current velocity, and sediment concentration.
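A minimal numerical sketch of this OI update is given below, under the assumptions stated above: a diagonal R with the estimated remote sensing error variance, and a forecast error covariance built from the Gaussian correlation function scaled by the forecast error variance. The grid coordinates and concentration values in the example are illustrative placeholders, not data from the study.

```python
import numpy as np

def forecast_error_cov(x, y, var_f, corr_len):
    """P_f[i, j] = var_f * exp(-((dx^2 + dy^2) / corr_len^2)) over grid points."""
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    return var_f * np.exp(-(dx ** 2 + dy ** 2) / corr_len ** 2)

def oi_update(c_f, c_rs, P_f, var_rs):
    """C_a = C_f + W (C_rs - C_f), with gain W = P_f (P_f + R)^-1 and diagonal R."""
    R = var_rs * np.eye(len(c_f))
    W = P_f @ np.linalg.inv(P_f + R)
    return c_f + W @ (c_rs - c_f)

# Toy 1-D transect of model grid points (coordinates in metres).
x = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])
y = np.zeros_like(x)
P_f = forecast_error_cov(x, y, var_f=739.84, corr_len=1500.0)
c_f = np.array([40.0, 55.0, 70.0, 65.0, 50.0])   # model forecast SSC (mg/L)
c_rs = np.array([35.0, 48.0, 60.0, 58.0, 45.0])  # remote sensing SSC (mg/L)
print(oi_update(c_f, c_rs, P_f, var_rs=28.09))   # updated (analysis) SSC field
```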
Model Calibration and Validation
A well calibrated and validated hydrodynamic and sediment transport model based on FVCOM that simulates the tide and sediment movement of Deep Bay was built by Zhang et al. [40]. However, that model was established for the wet season of the bay; for application in the dry season in the present study, all parameters needed to be recalibrated. Starting from the hydrodynamic and sediment transport parameters in Zhang et al. [40], we used trial and error to adjust the parameters until the computed results were consistent with the measurements. After repeated adjustments, an optimal set of parameters was determined for the Deep Bay model in the dry season; the final parameters are listed in Table 2. Figure 3 shows comparisons of the simulated hydrodynamic and sediment transport results with the corresponding measurements. The simulated water levels agree well with the in situ measurements at DJT (Figure 3a) and TBT (Figure 3b); the RMSEs of the computed water level at the two stations are 0.026 m and 0.075 m, respectively. The simulated results at two sites, S5 in Transect A and S7 in Transect B, were selected for comparison with measurements (Figure 3c-j) and for the later analysis. Figure 3c,d shows that the simulated salinities are in good agreement with the measurements at S5 and S7; the RMSEs of simulated salinity at all sites are less than 1.8 ppt. Comparing the salinity dynamics at S5 and S7 with the water level variation at the nearby tidal stations, TBT and DJT, shows that salinity transport in the bay is highly correlated with the ebb and flood tidal cycles: the salinity varies with almost the same period as the tidal levels. This indicates that salinity in the dry season of Deep Bay is largely dominated by brackish water intrusion from the outlet of the bay. In the wet season, by contrast, as shown in Zhang et al. [40], salinity transport in Deep Bay showed no obvious correlation with tidal level variation but was largely affected by the salinity of the water discharged from the rivers.
Figure 3e-h shows that the simulated depth-averaged current velocity and direction are reasonably consistent with the corresponding measurements at S5 (Figure 3e,g) and S7 (Figure 3f,h). The RMSEs of simulated current velocity and direction over all measurement sites are less than 0.118 m s⁻¹ and 19.7°, respectively. Figure 3i,j compares the simulated depth-averaged sediment concentration with measurements at S5 and S7. Owing to the complex sediment dynamics, the sediment is not precisely forecast, especially when the concentration is relatively low: the simulated sediment time series shows a smoother trend than the in situ measurements, which exhibit a more irregular cycle. However, both the measured and simulated sediment concentrations exhibit dynamics similar to the complicated dynamics of the current velocities. This indicates that sediment transport is dominated by the current dynamics of the bay, in line with the previous study of Zhang et al. [40]. Because deposited sediments need time to be resuspended into the water column after the maximum ebbing or flooding period, the maximum sediment concentration lags the maximum current velocity by about one hour. Figure 4 compares the profiles of measured and modeled current velocity, sediment concentration, and salinity at the maximum flood (11:00, 17 October) and maximum ebb (4:00, 18 October) periods. The simulated results reasonably reproduce the measured profiles. The vertical velocities at the maximum ebb are relatively higher than at the maximum flood, but the vertical sediment concentrations are higher at the maximum flood. This is probably because tidal transport dominates the sediment dynamics at the maximum flood period, when a large amount of sediment is carried into the bay from the outer bay. Note that at the maximum ebb the highest concentrations occurred in the near-bottom layers, likely caused by bottom resuspension as the tidal flow reached its maximum velocity. The vertical salinity profiles show no obvious stratification. The salinities at the maximum flood are relatively higher than at the maximum ebb, because the tidal level at the maximum flood (0.95 m) is higher than at the maximum ebb (0.64 m), so more salt water is brought into the bay. To choose an optimal forecast error correlation length, a series of data assimilation experiments was conducted with correlation lengths of 250, 500, 750, …, 2500 m. A correlation length of 1500 m gave the minimum RMSE of 13.8 mg/L against the in situ measurements at the assimilation time, so 1500 m was used in the assimilation.
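The correlation-length selection just described amounts to a one-dimensional sweep minimizing the RMSE. A short sketch follows; run_assimilation and insitu are placeholders for the actual model interface, not names from the paper.

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between two equal-length sequences."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.sqrt(np.mean((pred - obs) ** 2))

def select_corr_length(run_assimilation, insitu, lengths=range(250, 2501, 250)):
    """Return the correlation length (m) minimizing the RMSE at the
    assimilation time, together with the full score table."""
    scores = {L: rmse(run_assimilation(corr_len=L), insitu) for L in lengths}
    return min(scores, key=scores.get), scores
```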
Figure 5 compares the sediment concentrations from the in situ data, the model, and the OI results at the two assimilation times. The model overestimated the surface sediment concentration at most sites, and this is clearly improved by OI. The improvement is also pronounced for the underestimated sediment concentrations at S4, S6, and S8 at the second assimilation time. Over all validation sites, the relative errors of the model range from 4.3% to 68.1%, while those of the OI results range from 0.8% to 31.9%; the OI results reduce the errors by 36%-91%.
Optimal interpolation is expected to improve the prediction of the spatial distribution of sediment concentration by correcting the model results with remote sensing data while accounting for their error information. Figure 6 shows the spatial distribution of sediment concentration from the model, the remote sensing images, and the OI results at the two assimilation times. The OI results display better spatial consistency with the remote sensing sediment distribution; more detailed surface sediment structure is captured in the OI results, particularly in the shallow inner region of Deep Bay. The comparisons reveal that information from both the model results and the remote sensing sediment concentrations is retained in the OI results. This is obvious in coastal areas, where the sediment concentration field is filled in by interpolating the remote sensing sediment and model results. Table 3 displays the statistics of the sediment concentration differences between the OI and model results and between the OI and remote sensing results. The mean value and variance of the difference between OI and remote sensing are smaller than those between OI and the model, indicating that more of the information in the OI results comes from the remote sensing image, possibly because the remote sensing sediment concentration is more accurate than the model results.

Table 3. The mean value and variance of the spatial difference between the OI results and the model results and between the OI results and the remote sensing sediment at the two assimilation times.

Figure 7 shows time series of the surface sediment concentration from the in situ measurements and from the model with and without data assimilation, together with the remote sensing sediment at the assimilation times, at sites S5, S7, and S8. The effect of the remote sensing sediment assimilation on the model result was limited, because each remote sensing image is only a snapshot of the surface sediment concentration at one instant. After assimilating the first sediment image, the temporal effect lasted about four and a half hours, until the water level reached its lowest; the temporal effect of assimilating the second sediment image lasted about two hours, until the water level reached its highest. This suggests that the temporal effect ends when the tidal currents reverse direction, at which time the direction of horizontal sediment transport also tends to change. Thus, the disappearance of the temporal effect brought by the remote sensing data assimilation may be due to the rapid dynamics of sediment transport within a tidal cycle of the bay. More powerful data assimilation schemes, such as the four-dimensional variational method, which can handle observations distributed within a time interval, could be used to achieve a global optimization or correction of the whole forecast state. Despite the limited temporal effect on the model forecast, the results were improved to a large extent during the affected period. Figure 7 shows that the model with data assimilation produced forecasts agreeing better with the in situ measurements; in particular, at S7 and S8 after the first assimilation time (between 14 h and 18 h), the model forecast with data assimilation caught the oscillation of sediment concentration,
which was not seen in the model results without data assimilation. Although the assimilated remote sensing sediment showed a larger deviation from the in situ measurement at S7 at the first assimilation time, the improvement from assimilation is still obvious. This demonstrates the validity of data assimilation in providing a more accurate state by accounting for the errors of both the remote sensing data and the model results. Table 4 tabulates the RMSE statistics of the simulated surface sediment concentration from the model with and without remote sensing data assimilation at S5, S7, and S8. It shows that the model with data assimilation reduces the RMSE of the forecast surface sediment concentration by at least 47.8% relative to the model without data assimilation. In short, the data assimilation scheme improves the model forecasting on the whole.

Table 4. Statistics of the RMSE (mg/L) of the surface sediment concentration and the depth-averaged sediment concentration from the model results with and without assimilation during the limited effect time (four and a half hours after the first assimilation time and one and a half hours after the second assimilation time), along with the relative RMSE reduction. Note that the RMSEs of the simulated surface sediment at S4 and S6 were not calculated, because no in situ surface sediment concentrations were taken at these two sites.

The temporal effect produced by assimilating the surface sediment concentration is not limited to the upper surface layer. As an integral part of the sediment distribution in the water column, the surface sediment is involved in settling and vertical mixing in the sediment transport model; updating the surface sediment concentration by assimilation therefore has the potential to change the sediment movement trend and hence to affect the entire simulated vertical sediment profile. In this study, such an effect in the vertical column was observed. However, the temporal effect of data assimilation did not persist long in the water column, lasting at most one hour. Considering the small effect of the very small amount of sediment discharged from the rivers (see Table 1), this may be due to the rapid sediment exchange between the sea bed and the water column. Nevertheless, the RMSE statistics of the simulated depth-averaged sediment concentration during the limited effect time show that the model with data assimilation reduced the RMSE by 36% against the model without data assimilation, revealing the potential of surface sediment data assimilation to improve three-dimensional sediment concentration forecasting. Figure 8 compares the vertical sediment profiles from the measurements and from the model with and without data assimilation at S7, half an hour after the first and second assimilation times; a positive change in the sediment distribution from the model with data assimilation can be seen. Effective schemes for improving the sediment modeling in the lower layers over a longer forecast time by assimilating surface sediment should be further explored.
Conclusions and Outlook
The present study explored the integration of remote sensing data with a three-dimensional sediment transport model to improve model prediction. Two scenes of surface sediment data derived from MODIS satellite images were sequentially assimilated into the sediment transport model of Deep Bay within one tidal cycle. The results showed that the data assimilation can improve sediment transport modeling: the model with remote sensing sediment data assimilation produced more accurate sediment dynamics than the model without it. A temporal effect of the data assimilation on both the surface transport and the vertical mixing was observed. Owing to the rapid sediment transport and resuspension induced by the current flows in this tidally dominated bay, the temporal effect of data assimilation was limited to a maximum of four and a half hours. In a future study, multi-platform remote sensing data could be employed to narrow the gap between assimilation times and achieve a longer-term effect on the model forecast and prediction. Further work could also explore assimilation schemes that improve the sediment prediction over the entire vertical water column from assimilated surface sediment concentrations; the error correlation of the computed sediment concentration between the surface layer and the lower column could be considered in such improved schemes. Other sophisticated assimilation methods, such as the ensemble Kalman filter and variational schemes, could also be alternatives. To conclude, this work provides insight into improving sediment transport prediction in highly dynamic coastal zones using remote sensing data assimilation. The present method can be applied to other coastal and inland water areas for monitoring and modeling of suspended sediment concentration, as well as other ocean color parameters such as chlorophyll. Moreover, assimilating both remote sensing sediment concentration and field-measured current velocity into a three-dimensional model may also be promising: because current dynamics largely govern sediment movement in coastal waters, improved current circulation is likely to yield better prediction of the suspended sediment concentration.
Figure 1. Model area and in situ measurement locations (A) and the model grid with bathymetry (B). FNNR, Futian National Nature Reserve; MPNR, Mai Po Nature Reserve.
Figure 2. Matched in situ suspended sediment and remote sensing reflectance retrieved from Moderate Resolution Imaging Spectroradiometer (MODIS) images.
Figure 3. Validation of model results: (a) the water level at Tsim Bui Tsui (TBT); (b) the water level at Dong Jiao Tou (DJT); (c) the depth-averaged salinity at S5; (d) the depth-averaged salinity at S7; (e) the depth-averaged current velocity at S5; (f) the depth-averaged current velocity at S7; (g) the current direction at S5; (h) the current direction at S7; (i) the sediment concentration at S5; (j) the sediment concentration at S7.
Figure 4. Vertical profile validation of model results at S7: (A) the current velocity, (B) the sediment concentration, and (C) the salinity at the maximum flood period at 11:00 on 17 October (a,b) and the maximum ebb period at 4:00 on 18 October (c,d). Notes: a and c show the measured profiles; b and d show the simulated profiles.
Figure 5. Surface sediment concentration from the in situ, model, and optimal interpolation (OI) results at all sites at the first assimilation time (13:30, 17 October) and the second assimilation time (13:30, 18 October). Note that because the in situ measurements were collected hourly and there are no in situ sediment measurements at the satellite image acquisition times, the in situ values used in this figure are the averages of the in situ sediment concentrations half an hour before and half an hour after the assimilation time.
Figure 6. Comparison of the sediment spatial distribution retrieved from (A) model results (Model), (B) remote sensing images (RS), and (C) OI results at the first assimilation time (13:30, 17 October), and from (D) model results, (E) remote sensing images, and (F) OI results at the second assimilation time (13:30, 18 October).
Figure 7. Time series of the in situ measured surface sediment (half-hourly) and the forecast sediment from the model with and without assimilation (half-hourly) at sites S5 (A), S7 (B), and S8 (C). The remote sensing sediment at the assimilation times is also shown.
Figure 8. Vertical distribution comparison at site S7 half an hour after the first assimilation time (A) and half an hour after the second assimilation time (B).
Table 1. The mean flow rates (V) and sediment concentrations (C) of the four rivers.
Table 2. Parameters used in the model of Deep Bay in the dry season.
Tuta absoluta (Meyrick) (Lepidoptera: Gelechiidae): An Invasive Insect Pest Threatening the World Tomato Production
The South American tomato pinworm or tomato leaf miner (TLM), Tuta absoluta (Meyrick) (Lepidoptera: Gelechiidae), is a serious invasive and destructive insect pest of tomato (Solanum lycopersicum L.) worldwide. The moth can cause 100% crop damage in both greenhouses and open fields if control measures are not carried out. Owing to its high reproductive potential, dispersal ability, and tolerance of environmental conditions, the TLM has invaded most tomato-producing countries in Europe, Africa, and Asia. The tomato leaf miner originated in South America and was first detected in Spain in 2006, from where it spread to other parts of the world. This chapter consolidates the rich literature on the pest, with emphasis on invasion history, economic significance, and possible management options adopted worldwide.
Introduction
Biological invasion has occurred for millennia, but increased globalization in recent decades has accelerated it [1]. Invasive insect species reduce crop yield, increase production costs (especially pest control costs), increase reliance on pesticides, and disrupt preexisting integrated pest management (IPM) programs. They cause considerable damage to the agriculture, horticulture, and forest industries worldwide [2,3], with an estimated annual economic loss of about 70 billion US$ [4]. Transportation and international trade are increasing rapidly, facilitating the spread and dispersal of invasive species [5]. The tomato (Solanum lycopersicum L.) is an important horticultural vegetable crop, second only to potato. Total world production of tomato is about 180 million tons, grown on approximately 4 M ha. The top 10 tomato-producing countries are China, India, USA, Turkey, Egypt, Italy, Iran, Spain, Brazil, and Mexico; China, India, and Turkey account for almost half of the land area under tomato worldwide, that is, 31, 11, and 7%, respectively [6]. Tomato is the sixth most valuable cultivated crop in the world, worth US$ 87.9 billion in 2016, and the tomato leaf miner Tuta absoluta (Lepidoptera: Gelechiidae) is threatening about 87% of this production [3,6,7]. T. absoluta has several common names, including the South American tomato pinworm and the tomato leaf miner.

Origin, morphology, and taxonomic position

T. absoluta originated in the Peruvian Central highlands, from where it spread to other areas of Peru and then to the rest of the Latin American countries during the 1960s [3]. TLM is a small moth with a body length of 5-7 mm and a wingspan of 10-14 mm [8]. The moth has silvery-gray scales and black spots on the forewings; the antennae are long and filiform, with black and brown scales (Figure 1). Shashank et al. [9] described the male and female genitalia, as well as the pupal genital aperture, as useful distinguishing characters for sexing the moth. The egg is small (0.36 mm long and 0.22 mm wide), elliptical, and creamy white to bright yellow. The larva is whitish in the first instar (0.9 mm long) and becomes greenish or light pink from the second to the fourth instar (7.5 mm). The pupa is obtect, greenish at first and turning chestnut brown to dark brown near adult emergence [8] (Figure 2). Tabuloc et al. [10] studied the genome of T. absoluta to design a panel of 21 SNP markers for species identification, rather than depending only on morphological identification and symptoms of damage on the host plants.
Biology and bionomics
The TLM undergoes complete metamorphosis, passing through four developmental stages: egg, larva, pupa, and adult (Figure 3). Adults are nocturnal and hide between host leaves during the day. The female starts to release a sex pheromone 1-2 days after emergence to lure males for mating. The female sex pheromone is a mixture of tetradecatrienyl acetate and tetradecadienyl acetate in a ratio of 10:1 [15,16]. TLM is known to mate multiple times, with an average of about 10.4 matings per female; both sexes are polygamous, with no refractory period. The female can sometimes exhibit deuterotokous parthenogenesis, producing both females and males from unfertilized eggs [17]. Males use the female sex pheromone to locate females, and mating can last from a few minutes to 6 hours. The female uses plant volatiles (kairomones) and leaf contact for oviposition. A single female can lay as many as 260 eggs during its life cycle, which may extend to 3 months [18]; about 92% of the eggs are laid within 1-3 days of mating [8]. Eggs are laid singly on the upper part of the plant (young leaves, stems, and sepals) and hatch in 5-7 days, depending on temperature and relative humidity. After hatching, the larvae pass through four instars, completed in about 20 days. The mature larva then voids its gut contents, constructs a silken cocoon, and becomes a pre-pupa and then a pupa. Pupation may last 10-11 days before adult emergence for females and males, respectively. Mature larvae leave the mines and build a silken cocoon on the leaflet or in the soil; when pupation occurs in the mines or in a tomato fruit, the pre-pupa does not build a cocoon. Adult longevity may extend to 30-40 days [8]. The whole life cycle of the moth is completed in 29-38 days, depending on the environmental conditions (Figure 3), and about 10-12 generations may be produced annually. The thermal constant from egg to adult has been estimated at 453.6 degree days (DD) [19]. TLM larvae do not enter diapause as long as food is available; however, the species may overwinter as eggs, pupae, or adults [8,18].
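As a rough illustration of how the 453.6 DD thermal constant can be used, the sketch below accumulates daily degree days. Note that the lower developmental threshold is not given in this text, so the base temperature and the daily temperatures in the example are assumed placeholders, not published values.

```python
def days_to_adult(daily_mean_temps, t_base, thermal_constant=453.6):
    """First day on which accumulated degree days (above t_base) reach the
    egg-to-adult thermal constant; None if development is not completed."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(0.0, temp - t_base)  # daily degree-day contribution
        if accumulated >= thermal_constant:
            return day
    return None

# Example: constant 25 C days with an assumed 10 C base give 15 DD/day,
# so egg-to-adult development completes on day 31 (453.6 / 15 = 30.24).
print(days_to_adult([25.0] * 60, t_base=10.0))  # -> 31
```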
Host range
TLM is an oligophagous species feeding on many related species of the family Solanaceae, including tomato (Solanum lycopersicum L.), potato (Solanum tuberosum L.), eggplant (Solanum melongena L.), pepper (Capsicum annuum L.), sweet pepino (Solanum muricatum L.), tobacco (Nicotiana tabacum L.), the jimson weed (Datura stramonium L.), the African eggplant (Solanum aethiopicum L.), and the European black nightshade (Solanum nigrum L.) [1,19,20]. Sylla et al. [21] reported 12 host plants in the family Solanaceae, 2 in the Amaranthaceae, 2 in the Convolvulaceae, and 1 in the Fabaceae across native South America and invaded Europe and Africa. The two hosts in the Amaranthaceae are Chenopodium album L. and sugar beet, Beta vulgaris L., while the Fabaceae is represented by the common bean, Phaseolus vulgaris.
TLM prefers tomato, on which it is considered a major pest, while it is a minor pest on the alternative hosts. Knowledge of the host plants is essential for developing integrated pest management (IPM) against T. absoluta [21]. Sylla et al. [21] studied the oviposition acceptance, oviposition preference, and performance of two populations of TLM, from France and Senegal, on six solanaceous plants: tomato, eggplant, Ethiopian eggplant, potato, sweet pepper, and pepper. Their findings suggest that there is differentiation in the host range of TLM across invaded areas. In this respect, it has been reported that the relation between female preference (maternal care) and larval performance should be very close, as the larvae can survive on only a small number of host plants [21].
Damage and economic significance
TLM usually attacks the apical buds, flowers, and new fruits of tomato. Larvae make conspicuous mines and galleries on leaves and stems, and damage can occur at any stage of tomato growth, from seedling to mature plant [8]. The larvae feed on the mesophyll tissue, leaving the epidermis intact, thus creating irregular mines and galleries on the leaves (Figure 4); the mines and galleries may become necrotic with time. This mining activity reduces the photosynthetic potential of infested leaves [1]. Tomato plants infested with TLM show burnt-like symptoms [9]. The galleries made by the larvae are wider than those caused by the dipteran leaf miner Liriomyza trifolii [9,22]. At high density, larvae can penetrate the axillary buds of young stems, leading to plant withering and a check of vegetative growth [8].
After fruit setting, the larvae excavate tunnels in the fruits, which may facilitate invasion by pathogenic agents, resulting in fruit rot (Figure 5). The larvae of TLM have a cryptic behavior and an endophagous habit, which makes detection of infestation at an early stage difficult [1]. Damage to stems causes necrosis that reduces tomato plant growth and development, and feeding tunnels and holes in the fruits lower their quality and market value [1]. The serious damage to tomato caused by T. absoluta is due to the leaf-mining activities and, to a lesser extent, to tunneling in the fruits [5]. Damage can reach 100% if no action is taken against the moth. Estimating economic losses is difficult because of the interaction of many factors, including climate, production pattern (greenhouse versus open field), and production costs covering seeds, insecticides, fertilization, and other resources. Most of the damage occurs in the early years of invasion, owing to farmers' lack of experience in managing the pest [1]. Han et al. [23] and Biondi and Desneux [5] summarized the damage caused by TLM as follows:
1. Production reduction due to injuries to leaves, stems, and fruits
2. Increased cost of management practices (IPM) against the pest, particularly the purchase and application of insecticides
3. Bans or restrictions on fresh tomato imposed by non-invaded countries, which affect the economies of countries where TLM is endemic
4. Other costs, including the disruption of preexisting integrated pest management (IPM) programs and the disturbance of natural ecosystems [24].
Invasive potential and global distribution
According to Begon et al. [25], any species' distribution is limited and governed by three basic components:
1. The ability of the species to reach a potential site (introduction pathway)
2. The capacity to develop under the specific environmental conditions (establishment)
3. The ability to compete with other species occupying the same habitat
TLM is a highly invasive insect pest of the tomato crop [1,6]. The moth was first reported in Europe (eastern Spain) in 2006 [19]; the introduction into Spain is believed to stem from a single population in Chile [26]. Three years later, in 2009, it was reported in Turkey, the fourth largest producer of tomato in the world [27]. It then spread across Europe, the North African countries [6,28], and Asian countries [23]. According to Seebens et al. [29], most invasions occurred during the last 40 years owing to increased globalization and trade among continents. Possible introduction pathways for T. absoluta include tomato fruits; packing materials for tomato, eggplant, and pepper; and planting material [30]. Santana et al. [31] studied the global geographic distribution of TLM using a combination of spatial distribution models and the current distribution of the pest. They showed that the areas suitable for T. absoluta include North and Central America, Africa, Europe, Asia, and Oceania, both at present and in the future; additionally, their model showed that large tomato-producing countries where the moth is not yet present, such as China, USA, and Mexico, stand a high risk of invasion. Damme et al. [32] and Han et al. [23] listed important reasons explaining the vast and wide spread of T. absoluta around the globe:
1. The strong intrinsic invasiveness and high reproductive potential of the moth
2. The dispersal capacity and ability of TLM to adapt to newly invaded areas
The adults can fly actively for several kilometers, which allows short-distance spread [33]. The abovementioned reasons pertain to the biological traits of the pest. However, several further reasons, connected with human activities and with the measures adopted by countries to curb the introduction, establishment, and spread of the pest, have also contributed to the vast spread of T. absoluta [1,4,23,34]. These reasons are as follows:
Management
Because of its high invasiveness and economic significance, management of T. absoluta should be carried out at local, regional, and international levels. Management can be divided into pre-invasion and post-invasion measures. The former are mainly preventive, including strict quarantine measures, inspection of tomato consignments, and treatment with proper fumigants before shipping when necessary. Endorsement by countries of T. absoluta as a high-risk pest on quarantine lists is essential, and managing T. absoluta populations in invaded countries can significantly lower the invasion risk for neighboring non-invaded ones [23,35]. Also important is the establishment of regional networks connecting research entomologists, policy-makers, and major stakeholders from invaded as well as threatened countries [23]; such networks and platforms are expected to coordinate joint research activities and the validation of newly developed management technologies before they are applied in the field.
The post-invasion management of T. absoluta is to try to eradicate the pest at an early stage of invasion if possible; otherwise, a sustainable containment strategy based on integrated pest management is recommended.
In native and invaded areas in the world, current IPM components against TLM include the following:
Semiochemically based control
The female sex pheromone can be used in several ways for the management of TLM. These include the following:
1. Monitoring and surveillance. Pheromone-baited sticky traps can be used to monitor all stages of tomato production across the production chain, in nurseries, farms, greenhouses, and packaging and processing facilities [36]. Monitoring of TLM is performed by trapping males and/or by sampling eggs and larvae on infested tomato plants; the latter is, however, tedious and difficult to perform over large areas. On the other hand, an economic threshold based on male capture is not reliable, because the trapping process may be affected by
3. Mating disruption, by saturating the atmosphere with sex pheromone, which impairs the ability of males to locate and find females. This technique can be applied effectively in confined environments such as protected tomato in greenhouses; however, its performance has been poor [17].
Biological control
Salas Gervassio et al. [38] critically reviewed the natural enemy complex in the tomato agroecosystem. They identified the natural enemies suitable for augmentative and conservation strategies in South America and for classical biocontrol elsewhere in the world where T. absoluta has arrived. The authors reported that more than 50 species and morphospecies of Hymenoptera were associated with T. absoluta; however, only about 23 of them could be confirmed as parasitizing the moth. Augmentative biocontrol of T. absoluta using the parasitoid Trichogramma pretiosum is commercially available in South America, particularly in Brazil, Chile, Colombia, Ecuador, and Peru [38]. The use of endogenous natural enemies for biocontrol of TLM is one of the key points of conservation strategies [39]. Macrolophus basicornis (Stal) and M. pygmaeus (Hemiptera: Miridae) are potential biocontrol agents (egg predators) against TLM: the nymphal stage of the former can consume an average of 331 eggs per day, while the adult can feed on as many as 100 eggs per day [40,41].
The predator Nesidiocoris tenuis (Hemiptera: Miridae) can also be used for the management of other tomato pests, including whiteflies, thrips, leafminers, and aphids [43]. This predator has shown great potential in controlling TLM in Asia [23], Turkey [44], and India [45], and it is commercially produced and released against the pest.
Omnivorous mirids have been used against TLM after its arrival in Europe, through augmentative and inoculative releases in the field and in plant nurseries; these are sometimes complemented by conservation strategies using banker plants [1]. The mirid predators Dicyphus bolivari Lindberg and D. errans (Wolff) (Hemiptera: Miridae) are also potential biocontrol agents against TLM [46].
The generalist egg parasitoid Trichogramma achaeae is a potential agent for biological control of T. absoluta. This worldwide-distributed parasitoid is attracted by volatiles produced by tomato plants, whether uninfested or infested, as well as by the sex pheromone of the moth [47]. Trichogramma evanescens (Westwood) has also been used against TLM in Turkey [44], and the egg parasitoid Trichogramma brassicae is a further potential biocontrol agent [48]. Hemipteran predators such as anthocorids, geocorids, mirids, nabids, and pentatomids have also been identified as biological agents against T. absoluta [49]. Since the larvae of T. absoluta are endophagous, living and feeding cryptically inside mines in leaves or tunnels in fruit, their predation and parasitism by natural enemies is difficult; nevertheless, numerous natural enemies can still be used in the management of this notorious pest. The eggs appear more vulnerable to predation and parasitism because they are exposed on the surface of tomato growing points. However, the efficacy of natural enemies in suppressing T. absoluta populations may be altered by abiotic environmental factors through bottom-up effects triggered by agronomic practices such as irrigation and fertilization. Moreover, constitutive and/or induced plant resistance traits against T. absoluta are another source of bottom-up effects, which may interact with irrigation and fertilization and jointly affect the performance and population density of T. absoluta and its natural enemies, as well as their interactions [37]. In addition to arthropod biocontrol agents, microbial agents such as entomopathogenic nematodes (EPNs) of the genera Steinernema and Heterorhabditis have the potential to kill TLM larvae when they are outside their mines.
Biotechnological control
RNA interference (RNAi) is a biological mechanism that leads to post-transcriptional gene silencing directed by the presence of double-stranded RNA (dsRNA) molecules [53]. Recent transcriptome data showed that most of the core genes of the RNAi pathway, such as Dicer-like and Argonaute, and putative orthologues of Sid-1 are present in T. absoluta, suggesting the feasibility of RNAi for controlling this pest [50]. Full plant protection and high larval mortality of T. absoluta have not yet been achieved, however, probably owing to low expression of dsRNA in transgenic plants [51]. Novel management technologies for TPW also include genetically modified (GM) crops, for example, GM Bacillus thuringiensis (Bt) tomato [52]. The sterile insect technique (SIT) may likewise be used for the management of TLM [37]. However, this technique may be compromised if field populations of T. absoluta can reproduce by deuterotokous parthenogenesis [17]. It is worth mentioning that these authors reported tychoparthenogenetic reproduction of T. absoluta under laboratory conditions. They stated that the origin of this type of reproduction could be classical automictic tychoparthenogenesis or microbial manipulation by a bacterial endosymbiont such as Wolbachia, which has recently been identified in T. absoluta [54].
Chemical control
Chemical control of the invasive TLM is difficult; nevertheless, its arrival in newly invaded areas has been linked to excessive application of broad-spectrum insecticides [1,6,55] in attempts to curb outbreaks of the pest and to reduce yield losses in the tomato crop. Currently, insecticide application seems to be the most commonly used strategy against T. absoluta in open-field tomato worldwide [1, 56-58]. The cryptic behavior and endophagous habit of the larvae make it extremely difficult to control TLM with insecticides [1,19]. According to Biondi et al. [1] and Guedes et al. [58], the reasons for this difficulty include the following:
1. Infestation of tomato by the moth occurs at an early stage of plant growth.
2. The pest attacks multiple plant parts (stems, leaves, buds, young fruit, and ripe fruit).
3. The morphology and architecture of the tomato plant protect feeding larvae from insecticides.
Insecticides from different chemical classes have been used against TLM in South America, Europe, and other parts of the world. These classes include, but are not limited to, organophosphates, pyrethroids, pyrroles, spinosyns, diamides, benzoylureas, and avermectins [54,56,59]. Spinosad, azadirachtin, and Bacillus thuringiensis (Bt) toxins have been used to control TLM in organic tomato production systems [1,56].
The excessive application of insecticides to prevent and control outbreaks of T. absoluta, particularly in open fields, leads to increased selection pressure, which eventually reduces the effectiveness of such insecticides [58,60]. For example, when the moth was introduced into Brazil, farmers initially applied insecticides 10-12 times per cropping season, a frequency that later increased to 30 applications [60]. In Turkey, the annual cost of chemical insecticides used against T. absoluta in 2014 was about 160 million Euros [27]. The frequent use of insecticides has accelerated the development of resistance in tomato leaf miner populations, which can migrate beyond their geographical range into newly invaded areas [1,6, 56-59].
Guedes et al. [58] reported that enhanced levels of detoxification enzymes and altered target sites are the main resistance mechanisms commonly found in T. absoluta. In addition to driving resistance in TLM populations, the excessive use of insecticides inevitably compromises biological control in tomato agroecosystems. In this respect, Soares et al. [43] studied the lethal and sublethal effects of five insecticides (spinetoram, chlorantraniliprole + abamectin, triflumuron, tebufenozide, and abamectin) on adults and third-instar nymphs of the predator Macrolophus basicornis. They concluded that abamectin caused high mortality in both adults and nymphs, and all tested insecticides had negative effects on the predator.
To overcome insecticide resistance and the other harmful effects on the tomato ecosystem caused by excessive insecticide use, insecticide resistance management (IRM) strategies are needed to sustain tomato production [1,58]. Such strategies include the adoption of alternative control options such as cultural control, semiochemically based control, biological control, and host plant resistance. All of these alternative strategies and tactics would reduce reliance on insecticides and, accordingly, the selection pressure on TLM populations [1,58].
Conclusions
Recently, the tomato leaf miner has emerged as a highly invasive key pest threatening global tomato production. The global commercialization and trade of fresh tomato fruit and transplanting material have accelerated the spread of the pest. The impact of T. absoluta on global tomato production industries and on the livelihoods of small tomato farming communities in Africa and Asia may become more severe in the coming years unless great efforts are made to contain its spread. Chemical control of T. absoluta with insecticides appears ineffective and unsustainable; therefore, alternative management options such as biological control and semiochemically based control should be encouraged. The socioeconomic impact of this moth on subsistence agriculture needs to be addressed in future studies.
Author details
Hamadttu Abdel Farag El-Shafie
Date Palm Research Center of Excellence, King Faisal University, Al-Ahsa, Saudi Arabia
*Address all correspondence to:<EMAIL_ADDRESS>
© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
] |
Indonesian News Text Summarization Using MBART Algorithm
Purpose: Advances in technology have led to the production of large amounts of textual data. Textual information sources can be found in numerous locations, including blogs, news portals, and websites; Kompas, BBC, Liputan 6, CNN, and other news portals are a few of the websites that offer news in Indonesian. The purpose of this study was to explore the effectiveness of using mBART for text summarization in Bahasa Indonesia. Methods: This study fine-tunes mBART, a transformer architecture, to generate summaries of news articles in Bahasa Indonesia. Evaluation was conducted using the ROUGE method to assess the quality of the summaries produced. Results: Evaluation with the ROUGE metric gave ROUGE-1 of 35.94, ROUGE-2 of 16.43, and ROUGE-L of 29.91. However, the performance of the model is still not optimal compared with existing text summarization models for other languages. Novelty: The novelty of this research lies in the use of mBART for text summarization specifically adapted to Bahasa Indonesia. The findings also contribute to understanding the challenges and opportunities of improving text summarization techniques in the Indonesian context.
INTRODUCTION
Current technological changes have resulted in online users facing information overload due to the rapidly growing amount of textual information on websites, making it difficult to read through it all [1]. Web-derived sources on the internet, such as blogs, social media networks, and news sites, are a huge source of textual data [2]. Many websites provide news in Indonesian, such as Kompas, BBC, Liputan 6, and CNN, and these media produce news and articles every day [3]. More and more online documents require summarization to help online users understand information. Text summarization of online documents is done so that users do not spend time searching for the information they need [4].
Text summarization is the process of condensing a long text into a short one while maintaining the main idea. In natural language processing (NLP) and information retrieval, automatic text summarization is one of the fundamental tasks [5]. Text summarization can be applied in various industrial fields, such as news aggregators, blogs, and product descriptions [6]. It can make it easier for search engines to find content than searching the full text. Digital businesses such as e-commerce can also benefit from text summarization to display brief product descriptions, and it can help journalists write news headlines [7].
Text summarization can be classified into three categories: extractive, abstractive, and hybrid. Extractive summarization finds the important parts of the content and forms a subset of the sentences contained in the original document [8]. It does not add words to the existing content and cannot combine two or more sentences into one; it works by combining words or phrases taken from the corpus into a summary [6], [9], [10]. Hybrid summarization combines extractive and abstractive methods; it has the drawback of producing lower-quality abstractive summaries compared with the pure abstractive approach [2], [11], [12].
Abstractive summarization works by understanding the given text and generating relevant summary sentences of its own, and it is more flexible in generating summaries [13], [14], [15]. Unlike extractive summarization, which can produce poorly formed sentences, abstractive summarization can produce grammatically correct sentences [6], [16], [17]. The abstractive method paraphrases and rearranges sentences into a summary [18], [19]. In this research, we use the abstractive summarization method because of these advantages.
Various algorithms can be used for abstractive summarization, one of which is the multilingual version of the BART algorithm. BART is a pre-trained system based on the transformer architecture [20], [21]. A multilingual version of BART, called mBART, is now available, and Indonesian is one of the languages it can process. Using mBART for text summarization can produce a good model. Research on the use of mBART has been conducted for several languages, such as Russian [22], [23], Vietnamese [24], [25], [26], and various others. These studies report good evaluation results; for example, research on Vietnamese [24], [26], [27] obtained a ROUGE-1 of 55.21, ROUGE-2 of 25.69, and ROUGE-L of 37.33 on the WikiLingua dataset, and a ROUGE-1 of 59.81, ROUGE-2 of 28.28, and ROUGE-L of 38.71 on the Vietnews dataset. In this study, research was conducted on the use of mBART for text summarization of Indonesian news.
The remainder of this paper is organized as follows: Section 2 describes the research method for applying the mBART algorithm to the text summarization of Indonesian news. Section 3 contains the results and discussion, followed by the conclusion in Section 4.
Proposed method
The proposed research method for text summarization using mBART is described in Figure 1, covering the steps of pre-processing, fine-tuning, training, evaluation, and summary prediction. It starts with pre-processing the XL-SUM dataset from Hugging Face (https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/indonesian) [28], [3] to prepare the data for the next steps. Google Colab with Python is used to retrieve and split the dataset. The next step, fine-tuning the mBART model, is followed by training to optimize model performance. Evaluation, using the ROUGE metric, assesses the quality of the summaries against the reference or original text. Before the summary prediction stage, the ability of the model to produce short and precise summaries is tested.
Literature study
The first step in this research is to collect a number of literature studies related to similar problems or topics. Scientific articles, journals, and books are among the literature sources that can be used. The selection of literature is based on the same problem, namely text summarization; the type of text summarization chosen is the abstractive method using mBART.
Dataset text summarization
A text summarization dataset can be collected in several ways. The first is to manually scrape websites and then write summaries by hand; this is very inefficient because building the dataset takes a long time. The second is to ask permission to reuse the dataset of similar text summarization research. The third is to use a public dataset, obtained from open dataset websites such as Kaggle or the UCI Machine Learning Repository. In this study, we use the third method: the public dataset XL-SUM, which covers 44 languages, one of which is Indonesian [3]. XL-SUM is a large and diverse dataset comprising 1 million professionally annotated article-summary pairs taken from the BBC using a set of carefully designed heuristics. Only the Indonesian portion of XL-SUM is used. XL-SUM was selected because of the ease of using and accessing the dataset through the library on the Hugging Face website.
Evaluation of results
In this study, automatic evaluation of the results was carried out using ROUGE, in line with previous research on the same topic. ROUGE (Recall-Oriented Understudy for Gisting Evaluation) is a package comprising several automatic evaluation methods that calculate the similarity between summaries. There are four types of ROUGE calculations: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S [29]. In this research, the types used are ROUGE-N (ROUGE-1, ROUGE-2) and ROUGE-L. The ROUGE results are used to evaluate the model and to compare it with previous research.
RESULTS AND DISCUSSION
Dataset preparation
The dataset used in this research consists of Indonesian news articles from XL-SUM [3]. XL-SUM covers 44 languages, one of them Indonesian, and is already divided into train, test, and validation splits. There are 47,800 Indonesian-language examples in XL-SUM: 38,200 for training and 4,780 each for testing and validation. The articles come from BBC News. Each record contains an ID, a URL, a title, a summary, and the text, as shown in Table 1.
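As a minimal sketch, the Indonesian portion of XL-SUM can be loaded through the Hugging Face `datasets` library; the repository and configuration names follow the dataset card cited above, and the printed counts are expectations from the text rather than verified output:

```python
from datasets import load_dataset

# Load only the Indonesian configuration of XL-SUM, as described above.
xlsum_id = load_dataset("csebuetnlp/xlsum", "indonesian")

# The dataset ships pre-split; the text reports ~38,200 train examples
# and 4,780 each for test and validation.
print({split: len(xlsum_id[split]) for split in xlsum_id})

# Each record carries the fields listed in Table 1:
# id, url, title, summary, and text.
print(xlsum_id["train"][0]["title"])
```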
Pre-processing
Before the data can be given to the model for training, it must be pre-processed so that the model can learn from it. The pre-processing consists of tokenization, followed by loading the model and the data collator.
Tokenizer
The tokenizer breaks text down into tokens according to the desired rules. Common examples of tokenization are word tokenization and sentence tokenization [27]. Because this study uses a pre-trained model, the tokenizer associated with that model must be used in pre-processing. The pre-trained model used is mBART50, so the tokenizer used is the mBART50 tokenizer [30]. This ensures that text is split in the same way as in the corpus of the pre-trained model and that the same vocabulary is used as at pre-training time. Tokenization in mBART50 is based on SentencePiece [31], which can train sub-word models directly from raw sentences [32]. In the tokenization process, the input data is limited to a maximum of 1024 tokens. Table 2 shows the results of the tokenization process: after the text has been tokenized, there are three outputs, namely 'input_ids', 'attention_mask', and 'labels'. 'Input_ids' comes from the 'text' field in the dataset, while 'labels' comes from the 'summary' field. 'Attention_mask' is an optional argument used when merging sequences; it takes one of two values, 0 or 1, where 1 indicates tokens that need attention and 0 indicates tokens that do not. The numeric values in 'input_ids' and 'labels' are token IDs obtained by breaking the input sentence into tokens so that they can be processed at a later stage.
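The tokenization step could look like the following sketch, assuming the public facebook/mbart-large-50 checkpoint and the 1024-token input cap described above; the 128-token cap on summaries is our illustrative assumption, as the text does not specify it:

```python
from transformers import MBart50TokenizerFast

# Tokenizer tied to the mBART-50 checkpoint; "id_ID" is the mBART-50
# language code for Indonesian.
tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="id_ID", tgt_lang="id_ID"
)

def preprocess(batch):
    # 'text' becomes input_ids/attention_mask, capped at 1024 tokens.
    model_inputs = tokenizer(batch["text"], max_length=1024, truncation=True)
    # 'summary' becomes the labels; the 128-token cap is an assumption.
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# xlsum_id is the dataset from the loading sketch above.
tokenized = xlsum_id.map(preprocess, batched=True)
```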
Loading the model and data collator
Before fine-tuning, the model is downloaded from the Hugging Face website. mBART has several available versions, such as mbart-large-cc25 and mbart-large-50. In this research, we used mbart-large-50, which supports Indonesian; the downloaded model is 2.44 GB. In addition, padding is carried out using DataCollatorForSeq2Seq. This performs padding efficiently because it is done dynamically, padding to the longest sentence in a batch at collation time. Padding is needed so that the previously processed tokens have the same length before they are fed into the model.
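A sketch of loading the model and setting up dynamic padding follows; the checkpoint name is the public Hugging Face one, and `tokenizer` is assumed from the tokenization sketch above:

```python
from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq

# Public checkpoint for mbart-large-50 (~2.4 GB download, matching the text).
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50")

# Pads each batch dynamically to its longest sequence, and pads labels
# with -100 so padded positions are ignored by the loss.
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)
```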
Fine-tuning the model
The model used is pre-trained on the Transformer architecture. A pre-trained model is one that has already been trained on other datasets. Fine-tuning is a technique used to adapt such a model to a new dataset; since mBART50 is a pre-trained model, fine-tuning is performed [33], [34].
In the fine-tuning process, the model architecture is not changed; the model is only adjusted to the dataset used, namely the Indonesian portion of the XL-SUM dataset. In this process, batch_size = 4, learning_rate = 2e-5, optimization = "adamw_torch", weight_decay = 0.01, save_total_limit = 3, and num_train_epochs = 1. These hyperparameters are used in the subsequent model training.
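Expressed as Hugging Face `Seq2SeqTrainingArguments`, the hyperparameters above would look roughly as follows (the output directory name is illustrative, and the evaluation batch size is assumed equal to the training batch size):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart50-xlsum-id",   # illustrative path
    per_device_train_batch_size=4,   # batch_size = 4
    per_device_eval_batch_size=4,    # assumed equal to the train batch size
    learning_rate=2e-5,
    optim="adamw_torch",
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=1,
)
```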
Training model
After pre-processing the dataset and selecting the model hyperparameters, the next step is the training process. The method used in this study is a pre-trained model based on the transformer architecture, namely mBART50. mBART50 is an extension of mBART25, so the two share the same model architecture [35]. The mBART25 architecture is a sequence-to-sequence transformer with 12 layers in both the encoder and the decoder, a model dimension of 1024, and 16 attention heads (~680 million parameters). In addition, mBART25 has an extra normalization layer on top of both the encoder and the decoder [36], [37]. The difference in mBART50 is that an embedding layer with randomly initialized vectors is added for an extra set of 25 new language tokens [33].
For fine-tuning on summarization, mBART50, the multilingual version of BART, learns to copy information from the input while manipulating it, which is closely related to the denoising pre-training objective. The encoder receives the input sequence, and the decoder produces the output autoregressively [20]. The training process uses PyTorch. The data used comprises about 38,200 articles for training and 4,780 articles for validation. Training takes 1 hour and 28 minutes for one epoch using an A100 GPU with 40 GB of GPU RAM on Google Colab Pro. The training losses produced are shown in Table 3; model training ran well. The trained model is then stored in Google Drive so that it can be reused when generating text summaries.
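A minimal `Seq2SeqTrainer` sketch tying the earlier pieces together is shown below; the model, tokenized dataset, collator and training arguments are assumed from the preceding sketches, and this is an illustration, not the authors' exact training script:

```python
from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)
trainer.train()

# The text notes the trained model is stored for later reuse.
trainer.save_model("mbart50-xlsum-id")
```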
Evaluation and comparison
The method used to evaluate the model is ROUGE. ROUGE, or Recall-Oriented Understudy for Gisting Evaluation, is a set of metrics used in automatic text summarization evaluation [38]; it comprises several automatic evaluation methods for determining the similarity of summaries [29]. In this study, the variants used are ROUGE-N (ROUGE-1 and ROUGE-2) and ROUGE-L. ROUGE-N counts the matching n-grams between the text generated by the model and the reference. N-grams are sequences of tokens or words [39]: a unigram is an n-gram of one word, and a bigram is an n-gram of two consecutive words. ROUGE-L, by contrast, is computed from the longest common subsequence (LCS) between the model output and the reference [40], [41]. The ROUGE results obtained with mBART50 are given in Table 4 (fine-tuning results of mBART50). Beforehand, one of the hyperparameters, learning_rate, was also tuned to obtain the best results; three learning rates were tested, namely 1e-4, 1e-5, and 2e-5. The differences between them were small, so the best result, obtained with a learning rate of 2e-5, was used for comparison with other models and benchmarks.
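For illustration, ROUGE-1/2/L can be computed with the `evaluate` package as in the following sketch; the example strings are invented and the scores are not the study's results:

```python
import evaluate

rouge = evaluate.load("rouge")

# Toy strings; real evaluation would compare model predictions
# against the XL-SUM reference summaries.
predictions = ["presiden meresmikan jalan tol baru di jakarta"]
references = ["presiden resmikan jalan tol baru di jakarta"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge1"], scores["rouge2"], scores["rougeL"])
```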
In this research, experiments were also conducted using another, similar algorithm, mT5. mT5 is a multilingual encoder-decoder model based on T5. There are five versions of mT5: small (≈300 million parameters), base (580 million), large (1.2 billion), XL (3.7 billion), and XXL (13 billion) [42]. In this comparison, the mT5 base version was used [3], [43], because its size before fine-tuning (2.33 GB) is almost the same as that of mBART50. From the results, it can be concluded that mBART50 outperforms mT5 base, with differences of 11.4272 for ROUGE-1, 7.5777 for ROUGE-2, and 9.319 for ROUGE-L, even though the model sizes are nearly the same. In addition, a comparison was made with benchmark results from studies using similar algorithms. In Table 7, the codes AR, IT, VI, RU, and ID refer to Arabic, Italian, Vietnamese, Russian, and Indonesian, respectively. Table 6 also shows that the ROUGE-1, ROUGE-2, and ROUGE-L scores we obtained are still relatively low and do not exceed those of some other studies. For example, AR XL-S was trained using the mBART25 model and achieved ROUGE scores of R1 = 32.1, R2 = 12.5, and RL = 27.6; when trained on the XL-T dataset, the scores remained around R1 = 29.8, R2 = 11.7, and RL = 26.9 [43]. Likewise, IT MLSum-It achieved R1 = 19.3, R2 = 6.4, and RL = 16.3 using the mBART model [44]. Another study testing mBART on the RU Gazeta dataset (Russian) obtained R1 = 32.1, R2 = 14.2, and RL = 27.9 [22]. Our evaluation against the AR models on the XL-S and XL-T datasets, the IT model on MLSum-It, and the RU model on Gazeta shows that each trained model faces problems caused by several factors. These include a pre-processing stage that uses only a tokenizer and a data collator for padding, and training for only one epoch. These constraints stem from resource limitations, such as the restricted GPU usage on Google Colab Pro and the dataset size, which affect the final ROUGE score of the model.
We evaluated and compared the trained prediction models for summarizing news articles in Bahasa Indonesia using Google Colab Pro. Our proposed model, mBART50, was trained on the XL-Sum ID dataset, a news dataset collected from BBC News Bahasa Indonesia. During training, we found that the evaluation of the mBART50 model depends heavily on the data used, especially at the pre-processing stage, where noise and irrelevant words in the documents are cleaned up. After fine-tuning, the mBART50 model can summarize documents well without losing their original meaning, as shown by the ROUGE evaluation scores of R1 = 35.9, R2 = 16.4, and RL = 29.9. We also recognize, however, that the quality of the summaries produced by the mBART model depends on the characteristics of the dataset and the language used, which may affect overall summary quality. The model proposed in this study, mBART50, shows significant results in summarizing Indonesian text, with better ROUGE scores (R1 = 35.9, R2 = 16.4, RL = 29.9) than the comparison model evaluated in this study. The mBART model was trained on XL-Sum ID, which consists of filtered Indonesian news from BBC News, and takes a specific approach to handling text in the Indonesian context. Pre-processing that filters out noise and irrelevant words adds to the quality of the summaries produced. The use of mBART as a multilingual transformer model constitutes the novelty of this research, while the ROUGE-based evaluation lends confidence to the validity of the results. Thus, the mBART50 model not only contributes significantly to improving the quality of Indonesian text summarization but also brings novelty by adopting a language-specific approach in the text summarization domain.
CONCLUSION
Based on this research, it can be concluded that the use of mBART for Indonesian text summarization has been explored, yielding progress in the development of text summarization models. Evaluation using ROUGE gives values of ROUGE-1 = 35.94, ROUGE-2 = 16.43, and ROUGE-L = 29.91. Nevertheless, the performance of the model is still not optimal and does not outperform some existing models. Challenges such as performance improvement and resource efficiency remain an important focus for future research. This research thus makes a contribution to the development of Indonesian text summarization and highlights the need for improvements in the quality and effectiveness of future text summarization methods.
Figure 1. Flowchart of the research flow.

Data collection method
Data type
The type of data required in this research is qualitative, that is, non-numeric data. Text, sentences, and words are examples of qualitative data and are needed as the dataset for model building. The model created is a text summarization model for Indonesian news; therefore, the data needed is in the form of news articles in Indonesian.
Table 3. Results in loss.
Table 5. Comparison of results with learning rate.
Table 6. Comparison between mBART50 and mT5 base.
Table 7. Comparison of results with benchmarks.
Table 8. Text summary experiment.
"Computer Science"
] |
Designing gamification for geometry in elementary schools: insights from the designers
Popularly used in marketing and business, gamification has been gaining interest in educational contexts for its potential to invigorate otherwise mundane or difficult processes. A gamified environment transfers the motivational elements of games to learning activities, engaging learners in the learning task and transforming dull classroom environments into smart ones. This paper presents the design process of a gamification intervention in geometry at the elementary level, based upon the Huang and Soman model (Gamification of education. Research report series: behavioural economics in action, 29. Rotman School of Management, University of Toronto, Toronto, 2013). We describe how insights from various sources helped us to refine an intervention previously used in one school. The design focuses on gamifying the tangram, an unplugged resource, by incorporating the game-based elements of leader boards, points/stars and challenge levels to motivate young learners individually and in teams. Cognitive and motivational scaffolding undergird five challenge levels to bring affordances to self and social elements for learner participation in increasingly complex geometry tasks. There are limited theoretical models to guide educational researchers, especially ones that do not require digital resources. This paper presents our insights and recommendations to support scaffolded learning in student-centred gamified learning environments.
Understanding gamification's use in educational contexts is important if it is to become a recognized instructional approach (Dichev & Dicheva, 2017; Smiderle et al., 2020). However, knowledge of how to gamify an intervention for educational contexts is limited, yet it is important for researchers and practitioners alike. Gamification interventions offer opportunities to engage the learner during the learning process (Pesare et al., 2016), thus creating smart learning environments.
We define gamification as "using game-based mechanics, aesthetics and game thinking to engage people, motivate action, promote learning, and solve problems" (Kapp, 2012, p. 125) in "non-game contexts" (Deterding et al., 2011, p. 1). Gamification is the practice of using "game elements or functionalities such as points, rewards, tasks, challenges, goals or immediate feedback for learning purposes" (Lämsä et al., 2018, p. 598). Gamification fosters a playful or gameful attitude, which may benefit learning, but differs from play. When learners play and play games, these activities aid in developing cognition, social processes, motivation and foundational mathematics skills (D'Angour, 2013; Wiersum, 2012). Neither play nor playing games, however, contains all the elements and functionalities exemplified in gamification. Research provides evidence that gamification can have positive outcomes in mathematics (e.g., Baldeón et al., 2015; Stoyanova et al., 2017) and supports learner engagement and achievement in geometry specifically (see Aridi & Saad, 2020; [Authors]).
Over time, there have been numerous challenges in teaching and learning mathematics (Clements, 2004; Duru, 2010; Hodara, 2011; Jones, 2002; Rouadi, 2014; Siew et al., 2013). Traditional approaches have not been able to address the growing challenge of learner disengagement in mathematics, and teachers struggle to find appropriate instructional strategies in geometry (Sunzuma & Maharaj, 2019). Geometry is a strand that focuses on geometric thinking and visualization, which are key transferable skills. Strong links are found between teachers' instructional strategies and students' mathematics proficiency (Hodara, 2011), and between active approaches that develop students' competence and enhanced performance (Duru, 2010; Patadia, 2016; Takele, 2020). In geometry, recommendations are made for an early start at the elementary level and the use of active pedagogies (see Salifu et al., 2018). These challenge educational researchers to find novel pedagogies and approaches to transform dull mathematics classrooms into smart learning environments.
As an alternative to traditional pedagogies, active learning pedagogies such as games can be beneficial to students (Nand et al., 2019; Ting et al., 2019). Salen and Zimmerman (2004) succinctly defined games as systems where players engage in an artificial conflict, their actions are defined by rules, and the consequence of playing the game is a quantifiable outcome. The player has an emotional attachment to the outcome, and the results of the activity are negotiable (Juul, 2005). Gamified learning incorporates game elements to enact game-like experiences that accomplish predefined goals and impact positively on learners' motivation (Dichev et al., 2020). Two approaches to introducing games into mathematics classrooms are game-based learning and gamification, both engaging students in playful explorations of mathematical concepts (Aridi & Saad, 2020; Baldeón et al., 2015).
We focused on designing a gamification intervention to transform the learning environment in elementary geometry. First, we conducted a pilot study at one school using tangrams and origami at Grade 6. Following analysis of the pilot data from teachers and students, we reflected on the initial design and refined it using Huang and Soman's (2013) five-step model. The research team used group ideation and discussion to produce a list of key game elements and functionalities to engage elementary students cognitively and motivate them to succeed (Nand et al., 2019). One key decision was to continue using unplugged approaches, particularly to appeal to diverse learners (Huang & Looi, 2021) and to provide young learners with more concrete experiences (Saxena et al., 2020). Unplugged approaches, which have proliferated in recent times in computational thinking, do not use computers; in other words, they are not technology-based (Huang & Looi, 2021). As such, this study focuses on gamification of a tangible resource, the tangram, and is supported by Saxena et al.'s (2020) work suggesting that unplugged resources can aid mastery of cognitive concepts in young children and provide cost-effective interventions.
In this non-empirical paper, we present the key decisions and insights that guided the design of the intervention for the main study. We begin by examining the relevant literature on geometric thinking and gamification, presenting a conceptual framework of social constructivism and scaffolding, followed by a brief outline of methods. We then present in detail the application of Huang and Soman's five steps to designing the intervention, ending with conclusions and recommendations.
Geometric thinking at the elementary level
Geometry is an important area of mathematics because it develops students' spatial sense and geometric thinking. The challenges experienced in teaching geometry at the elementary level can be related to students' individual cognitive and affective development, and to teachers' selection and application of instructional strategies, learning activities and resources aligned with students' levels of thinking (Clements, 2004; Jones, 2002). van Hiele's (1999) levels of thinking continue to be valuable in developing geometric thinking; the model proposes that individuals progress through five levels of geometric thinking, from being able to recognise shapes to abstraction of visual images and real models (see Fig. 1). The use of manipulatives during early learning of geometry allows students to touch, move, rotate, flip, combine and rearrange shapes and can support their progress through van Hiele's levels (Fuys et al., 1988). Elementary school students' success with geometry requires their teachers to identify the level at which they are thinking and to design instruction that scaffolds their progress from one level to the next, reducing students' reliance on the teacher within the zone of proximal development (ZPD) (Vygotsky, 1978). Progressing through each level requires mastery of previous levels.
van Hiele's (1999) Level 0 (visualisation) is foundational to all levels (Jones, 2002). Van Hiele suggested that at this level students' thinking relies on visual observation for identifying, naming, and recognising shapes without explaining their properties; students pay attention to how shapes look. For example, students may recognise a rectangle because it resembles a door. They may recognise a square but differentiate it from a rhombus, because they have trouble recognising the shape when it is rotated, as it does not resonate with their definition of a square. This occurs because of poor conceptual development. At Level 1 (analysis), students can identify, describe, and explain parts and properties of shapes using appropriate language, but cannot yet identify important relationships between properties of different shapes. For example, they can recognise that parallelograms and rectangles have equal and parallel opposite sides, but do not recognise the rectangle as a parallelogram because relationships between concepts are not reinforced. At Level 2 (informal deduction), students can recognise the relationships between shapes and their properties and classify shapes in this way. For example, logical reasoning can be used to explain why a square is a rectangle but a rectangle is not a square. Students, however, have trouble with this due to underdeveloped reasoning skills (Fuys et al., 1988; Salifu et al., 2018; Siew et al., 2013).
The use of manipulatives to support student learning is well documented in the literature (e.g., Boggan et al., 2010; Ojose & Sexton, 2009; Wiersum, 2012), as is their use to help elementary students transition from concrete to abstract mathematical thinking (e.g., Moyer, 2001; Sarama & Clements, 2009). Tangrams are useful for developing students' geometric skills (Russell & Bologna, 1982; Siew et al., 2013; Tchoshanov, 2011); their constituent pieces, tans, have both visual and tactile appeal, and learners can manipulate them through hands-on experiences. Additionally, tans, as unplugged resources, can add to teachers' basket of strategies to promote mental modeling, thus engaging learners in cognitive tasks (Looi et al., 2020). Sarama and Clements (2009) recommend their use in well-planned instructional settings and not just for play. The active manipulation of tans therefore allows for visualization, analysis and deductive reasoning, and develops young learners' spatial sense and geometric thinking.
Gamification for learning mathematics
Gamification offers the use of game design elements in contexts not usually considered fun and can transform "dry, discouraging school subjects" such as mathematics (Aridi & Saad, 2020, p. 1) into interesting ones. These authors suggest that gamification of geometry with elementary students improved achievement levels and sustained engagement in learning. Several gamification initiatives are found in the literature related to elementary mathematics (such as Aridi & Saad, 2020; Baldeón et al., 2015; Kimble, 2020) and at other education levels (see Faghihi et al., 2014; Lämsä et al., 2018; Rincon-Flores et al., 2018; Wiersum, 2012). However, the literature on gamification in education often alludes to the inconclusive relationship between student engagement and achievement. What is known is that gamification invokes the same physiological experiences as games, such as pleasure and fun (Deterding, 2012). Further, a gamified environment transfers social elements of games to learning activities that develop skills such as communication, sharing and goal setting (Landers & Callan, 2011), as well as engagement, interactivity and problem solving (Kapp, 2012). Pedagogical components of gamification include motivation, engagement, learning, thinking and problem solving, situating this pedagogical innovation within educational contexts. Like educational games, gamification allows for combining "concentration demanded by challenging activities and the enjoyment experienced when maximally utilizing one's skills" (Hamari et al., 2016, p. 171). If designers build out concepts "in steps" to aid cognitive development (Aridi & Saad, 2020; Faghihi et al., 2014), then challenges within a gamification framework can increase the attractiveness of the learning environment and promote learner engagement (Rincon-Flores et al., 2018). Vygotsky's (1978) ZPD describes a process that scaffolds learning from being fully supported by an instructor to independent problem solving; temporary external supports are removed once learning has been achieved (van Geert & Steenbeek, 2005). Cognitive scaffolds guide students in using tools and resources to engage in learning activities, organize their ideas and thoughts, and determine strategies for monitoring and evaluating their learning (Hannafin et al., 1999; Stavredes & Herder, 2014). Scaffolding is effective when teachers understand how much of it to apply, when to begin scaffolding and when to disengage, so learning becomes more learner directed (Dabbagh, 2003; Xiang et al., 2014). Motivational scaffolds provide learners with tasks connected to intrinsic motivation (Belland et al., 2013) and are supported by students reflecting on their learning and making adjustments to enhance their expectancy of success. Social constructivist theory (Vygotsky, 1978) emphasises the role of social interactions in learning through the joint construction of knowledge and goal accomplishment. Scaffolds deployed during collaborative tasks allow students to interact with each other and their teachers and to share skills and knowledge that expand learning boundaries (van Geert & Steenbeek, 2005). Collaborative learning activities, such as team games, support the learning process (Sumtsova et al., 2018); they can foster positive feelings because there is a social responsibility to other members of the group, common goals, and belief in group achievement (Deci & Ryan, 2000).
Scaffolding has been applied in gaming environments to teach and reinforce mathematical concepts and skills (e.g., Faghihi et al., 2014; Plass et al., 2015). Hence, gamification that employs scaffolds can increase motivation to learn and promote desired learning behaviours and outcomes.
Gamification and scaffolding
Teachers must move away from traditional teaching towards new approaches, such as gamification, that develop students' own thinking. This approach, as catered for in our design, can be supported by flexible and dynamic scaffolding that responds to each student in the social group (Cobb et al., 2012). Teachers can scaffold learning in the ZPD by using appropriate hints and clues as students progress through a lesson (Rouadi, 2014). To help students perform different tasks, teachers can deploy cognitive scaffolds to expand students' ZPD through questioning, modeling, explaining, instructing, giving hints, and demonstrations, for example. Motivational scaffolding targets engaging students and maintaining their interest in learning; teachers should provide tasks that interest students and increase task value (Belland et al., 2013). This occurs by creating an enriching environment through group work and feedback (Miller & Brickman, 2004), student reflection (Moos & Azevedo, 2009), and the promotion of positive emotions (Gross & Thompson, 2006).
Conceptual framework
Scaffolding is useful when learners are exposed to increasingly challenging levels that require mastery to progress (Alsawaier, 2018) and receive prompts and hints that direct them towards a solution path rather than presenting the solution (Fisch, 2005, as cited by Obikwelu et al., 2012). Skill attainment involves a sequence of tasks, feedback, and a way of measuring learner achievement (Patadia, 2016). Scaffolded gamified activities help to actualize the ZPD. Figure 2 illustrates how scaffolds were gradually removed from graduated tasks in this gamification intervention in geometry involving Grade 4 elementary students: scaffolds initially integrated into game tasks were gradually removed as learners progressed through the challenge levels, moving from full teacher and peer support to independent practice over time.
Methods
We now present the design considerations and rationales for key decisions made at each stage of the gamification intervention. We used an informed decision-making (IDM) process (Mullen et al., 2006) to gather information from key sources to inform these decisions. These sources were the relevant research literature, our collective experiences with mathematics teacher preparation for diverse learning contexts, recommendations from the pilot study, and collaboration with teachers from the schools selected for the intended gamification intervention.
Fig. 2 Conceptual Framework for Gamification Intervention
IDM is particularly well suited to education (Huffman, 1974; Schildkamp, 2012); as such, we applied the concept to the design of an instructional intervention by considering the key benefits, challenges and limitations, and the learning contexts. The research team met regularly over one year to expand on our initial understanding of gamification and to arrive at a common understanding of gamification in the Trinidad and Tobago context. Our research team comprised university teacher educators with expertise in mathematics instruction, curriculum design and development, project planning, research methods, instructional design and classroom assessment.
The pilot study was an instrumental case study involving 11-year-old students in Grade 6 at a purposefully selected private elementary school in Trinidad and Tobago. We collected and analyzed data from students and teachers in a mixed methods design. We intentionally targeted the geometry strand in the Trinidad and Tobago National Primary Mathematics Curriculum (TTNPMC, 2013) and focused on reinforcing previously taught geometry concepts and skills through gamified Tangram and Origami puzzles as unplugged resources. We applied the five steps in Huang and Soman's (2013) gamification model for the pilot study and used insights from its findings to gain new understandings for a larger study across multiple schools and diverse contexts. As gamification was new to all of us, we used multiple sources to inform the revised design presented in this paper.
Designing the gamification intervention
In this section, we describe each of the five steps of Huang and Soman's model in designing the gamification intervention and articulate how our learnings informed our decisions.
STEP 1: Understanding the target audience and the context
It is critical to completely understand the characteristics of the learners and the context of the curriculum, such as "pain points" (Huang & Soman, 2013, p. 8), which are factors that hinder students' progress through the curriculum and achievement of its objectives. Such pain points include learners' age and level of cognitive development, the learning environment, and the sequencing of content, among others. They assist with selecting the specific gamification elements to be implemented during the planning process. The pilot study revealed the need to attend more carefully to the first level of Huang and Soman's (2013) model when designing the games to be a meaningful learning tool, particularly with respect to closer collaboration with the class teacher, who brings intimate knowledge about students and the learning environment. Consideration of the target group of students and their prerequisite knowledge and skills in the given learning context is critical to designing tasks that cover the curriculum content and are sufficiently challenging to keep students interested in playing without frustrating them ([Authors], p. 90).
Hence, in the revised gamification design, we were more attentive to the pain points associated with learner characteristics (age and level of geometric thinking) and to transforming the school environment into a more engaging one. The pilot study revealed that students responded positively to co-operative activities [Authors]. We wanted to ascertain students' prior knowledge, the current learning environment, and opportunities for student collaboration and group work. Thus, we engaged teachers in selecting level-appropriate curriculum content and discussed the challenges we had observed in Grade 6 students' experiences of working at Level 2 of van Hiele's geometric thinking during the pilot study. We chose to focus on a younger age group, Grade 4 students, who would have attained competence at Level 1 of van Hiele's (1999) geometric thinking and would now be transitioning to developing Level 2 skills, based on the national curriculum for mathematics (TTNPMC, 2013). We continued to use unplugged resources at this earlier age, based on recommendations by Huang and Looi (2021) and Saxena et al. (2020), and selected tangrams as the focus of our gamification.
To increase opportunities for student success, we visited the selected schools, which had diverse learning contexts in terms of size, funding status (government-funded or not) and low to high SES, to glean in-depth understandings of their environments and gather information to inform the planning and design of the gamification intervention. The schools were not at the same level of technology readiness, which confirmed the decision to use non-digital or unplugged resources. As such, we could potentially reach more students "without the distractions of getting the resources and infrastructure for computers" (Looi et al., 2020, p. 4). We also obtained the support of the schools' principals, participating teachers, parents and students.
Hence, by the end of Step 1, we had gleaned a good understanding of the schools' environments and physical spaces and of the age and prerequisite knowledge of the students to be engaged, had established working relationships with school personnel, and had decided on the curriculum content to reinforce Level 2 skills. We thus leveraged learnings from the pilot on context and learner characteristics, which allowed us to move on to the next steps in designing the intervention, as all other aspects hinged on this foundation.
STEP 2: Defining learning objectives
The gamification process must be guided by clear learning objectives (Huang & Soman, 2013). Though the TTNPMC (2013) suggests using student-centred pedagogical approaches to improve student motivation, mathematics self-efficacy and performance in the medium to long term, specific strategies to achieve these are not given. Games are suggested in the Number strand (see TTNPMC, 2013, p. 249) but not in the geometry strand, and neither gamification nor game-based learning is mentioned in the document.
In the pilot study ([Authors]), students' responses on the pre-tests and post-tests revealed that while they recognised and classified shapes from their properties, they struggled with: (a) working with shapes whose orientation had changed, (b) applying the concept of symmetry when working with plane shapes, and (c) applying the principle of conservation of area in problem solving. This suggests that while the students were competent with the concepts and skills expected at Level 1 of van Hiele's (1999) levels of thinking, they struggled with those expected at Level 2. Hence, there was a need to support students as they transition from Level 1 (analysis) to Level 2 (informal deduction). Our refined design therefore focuses on activities that reinforce concepts and scaffold thinking for the transition to Level 2. We gamified Tangram activities only, because they can develop concepts and skills related to shape identification and spatial relationships expected at Level 2, and because students in the pilot study reported greater challenges with the complexity and difficulty levels associated with Tangrams. Our intervention objectives included:
1. Scaffolding to reinforce geometric concepts related to plane shapes to improve spatial sense and geometric thinking.
2. Promoting manipulation of 'tans' to explore the properties and relationships of plane shapes and solve Tangram puzzles.
3. Encouraging the use of imagination to create Tangram puzzles.
4. Facilitating cooperation among peers to solve Tangram puzzles.
5. Facilitating interest, motivation, enjoyment and creativity in applying geometric thinking.
These general objectives of the design were guided by specific curriculum objectives in the TTNPMC (2013) for Grades K-4 (called Standards 1-5), which focus on developing appropriate concepts, skills and dispositions that "would facilitate life-long learning and higher order thinking skills" (p. 22). The specific content, skill and disposition objectives and learning outcomes in geometry from the TTNPMC (2013) are listed in Table 1.
Our intervention objectives were also guided by observations of students' challenges with the content, and by student and teacher reports from the pilot study indicating that enjoyment and cooperation were motivational dimensions that enhanced engagement with the gamified activities ([Authors]).
By the end of Step 2, we had clearly identified the specific learning objectives and outcomes in geometry for the selected group of students in the main study. This allowed us to focus on structuring the learning experiences for students to achieve these stated objectives.
Table 1 (excerpt). Elaborations: Investigate the effects of changing the orientation of a shape by first measuring the shape, then changing its orientation and measuring again (p. 182).
STEP 3: Structuring the experience
It is important to break up educational programmes into stages, with each stage having a desirable milestone (Huang & Soman, 2013). Stages help chunk programme objectives into manageable deliverables. Milestones help with evaluating what students have learnt and mastered and motivate students to move on to the next phase. Milestones should be easily achievable initially and then increase in demand and complexity as students progress through the programme stages. Educators must reflect on each stage and milestone during planning for effective gamification.
In the pilot study, students reported that some of the tangram activities were too complex and challenging, that some shapes were difficult to construct, and that they needed instructions that were easier to understand ([Authors]). To address these observations, we chunked the learning material in the new design into graduated stages, which we refer to as challenge levels. This approach scaffolds students' transition from Level 1 to Level 2 of van Hiele's (1999) geometric thinking. We noted that we had not oriented the pilot students to the set of tans comprising the tangram or allowed play with the tans ahead of using them for learning tasks, which may have contributed to the challenges students reported in completing Tangram puzzles. Hence, we developed six stages: a pre-activity followed by five challenge levels of increasing difficulty and complexity. Each stage was designed to scaffold students' transition through the levels of thinking, and each level targeted specific objectives aligned with those in the TTNPMC (2013). Tangram manipulation allowed teachers to use visualization and manipulation for exploring practical problems across all five challenge levels, to reinforce the properties of plane shapes and their relationships with each other and so develop deductive reasoning (Siew et al., 2013). Tangram activities were sourced from Tangram Blackline Masters (1994), which grants free reproduction rights to educators (p. ii).
Stage 1: Pre-activity
The pre-activity was a non-competitive level that introduced students to the Tangram (see Fig. 3). Students would investigate the properties of the Tangram pieces and make decisions about how to manipulate specific pairs of tans to cover the surface area of different shapes and form compound shapes without overlapping tans. These activities were scaffolded, offering students explicit instructions about which tans to use to complete them. Students would use a pencil to trace the perimeter of the tans to preserve their shapes once the tans were removed; this instruction would hold for all subsequent levels. This pre-activity was critical to establishing the foundation for the challenge levels that followed and reinforced TTNPMC (2013) content objectives 2.1.4, 2.1.5 and 2.1.6, skill 2.2.7, disposition 2.3.2, and learning outcome 3. The milestone at this stage was students' ability to successfully manipulate pairs of tans to create shapes.
Stage 2: Challenge level 1
Students would initially manipulate four specific tans to cover the surface of given compound shapes, and then identify any other combination of tans that would cover the same shape (see Fig. 4). This activity provided students with explicit instructions about which tans to use to cover the shape. It would probe their understanding of the properties of and relationships among these tans and the decisions they made about manipulating them. It also reinforces the principle of conservation of area by allowing them to cover the surface area of a compound shape with different combinations of plane shapes in different orientations. In the pilot study, students reported difficulty with creating Tangram puzzles using all seven tans ([Authors]), which suggests that students needed to first manipulate fewer tans and progress to more complex Tangram puzzles with more tans. This level attended to reinforcing TTNPMC (2013) content objectives. The milestone in this level is that students would move from manipulating two tans to four tans to cover a compound shape.
Stage 3: Challenge level 2
Students would select and manipulate any three tans to cover the surface area of compound shapes (see Fig. 5). The activity further probed their deductive reasoning and decision-making skills while manipulating tans. Lifting the restriction of using specific tans removes scaffolds (specific instructions) and gradually increases the cognitive demand on students. This would allow them more flexibility in problem solving and reinforce the same TTNPMC (2013) content as Challenge Level 1. The milestone in this level is that students can successfully complete the challenge using any three tans to cover compound shapes.
Stage 4: Challenge level 3
Students would manipulate combinations of three, four or five tans to cover a given compound shape (see Fig. 6). This activity increased the number of tans to be used from three to five. We increased the cognitive demand by removing more scaffolds and allowing students to be creative in their decision making as they progressively used more tans in solving Tangram puzzles. This level reinforced the same TTNPMC (2013) content as Challenge Level 2 and introduced disposition objective 2.3.3 relating to confidence in exploring plane shapes. The milestone in this challenge is that students could use informal deduction to make decisions about covering a more complex compound shape than in previous levels, with no restriction on which tans could be used, only on the number of tans.
Stage 5: Challenge level 4
Students would be required to use all seven tans to cover the surface of a well-known Tangram puzzle (see Fig. 7). We designed this challenge to allow students to demonstrate their understanding of the properties of plane shapes and their relationships to each other, and to use their developing spatial sense and geometric thinking to manipulate tans to cover the shape. This level reinforced the same TTNPMC (2013) content as Challenge Level 3. The milestone in this challenge is that students would rely on deductive reasoning to complete the task without any of the scaffolds that were previously provided.
Stage 6: Challenge level 5
Students would use all seven tans to create their own Tangram puzzle for their peers to solve (see Fig. 8). We decided to provide students with the opportunity to be creative in designing and naming a Tangram puzzle. We thought this approach would contribute to students' enjoyment if they could see their creations on display and challenge other students to complete a puzzle that they created themselves. This level reinforced the same TTNPMC (2013) content as Challenge Level 4. The milestone here is that students would demonstrate their understanding of the properties of the tans to design a unique shape.
By the end of Step 3, informed by learnings from the pilot, we included a pre-activity to orient students and chunked the curriculum content into smaller tasks to give students the experience of gradually moving from working with fewer tans to a full set of seven tans. They would move from simple to more complex Tangram puzzles as scaffolds were removed. This step allowed us to identify the resources required for each Challenge Level.
STEP 4: Identifying resources
Identifying resources is a critical stage in the gamification process for successful implementation (Huang & Soman, 2013). First, we developed the implementation procedure for teachers, which is elaborated below. We then sourced and produced the required resources for the gamification design in the five schools. This was an important step following the pilot study, which involved only one school. It facilitates standardisation and coordination of processes across the schools and minimises possible variations in implementation procedures that may contribute to differences between expected and observed outcomes (McBryde et al., 2004).
Resources for the gamification
We produced a resource kit with seven items. Each of the five participating schools would receive one kit per classroom. Kits comprised:
1. One Tangram set (7 pieces) per student, made from Bristol board and stored in small Ziploc bags.
2. A leader board made from Bristol board to record groups' scores as they progress through the Challenge Levels (see Fig. 9).
3. A discussion board made from Bristol board and 3 cm x 3 cm Post-it notes to record hints for other groups that need assistance (see Fig. 10).
4. Colour-coded folders for each stage in Step 3 that contained:
   a. Individual activity tan sheets (colour-coded to match folders) for students to work with in group challenges.
   b. Teacher checklist for the group challenge activity, to allow the teacher to monitor and track student engagement and participation in group activities (see Fig. 11).
   c. Student self-evaluation sheet per challenge level, to allow students to reflect on their learning and progress as they moved through each Challenge Level (see Fig. 12).
5. Pencils, markers, glue, tape, and sheets of glitter stars.
6. Implementation procedure to guide the teachers implementing the gamification design.
7. Envelopes containing the pre-tests and post-tests for teachers to administer to students.
The implementation procedure for teachers comprised the following steps:
1. The teacher administers the pre-test to all students.
2. The teacher organises students into groups of three or four and distributes to each group member a set of tans, challenge sheets, and self-evaluation sheets. Groups receive Post-it notes, pencils, markers, and glue.
3. The teacher places the leader board and discussion board on a wall for all students to access freely and to provide transparency to the process. The teacher records groups' names on the leader board.
4. The teacher introduces students to a gamified vocabulary (e.g., hints, leader board, badges, challenge, and stars) and discusses gamification instructions with students.
5. The teacher guides students through the pre-activity.
6. The teacher begins the competition, monitors groups, and records their scores on the leader board.
7. A challenge level is completed when the teacher verifies the solution through visual inspection of students' work. Each Challenge Level earns a maximum score upon completion (see Table 2). The overall maximum possible score is 30. Penalties are not applied, to limit interfering with student motivation.
8. When a group completes a challenge, members can post a hint, comment, or guiding question on the discussion board to assist other groups. Groups' contributions are assessed by the teacher using a rubric (see Table 3) and recorded on the leader board. Points are awarded based on the quality of the contribution and range from 0 (low quality) to 5 (high quality). A group's total score is the sum of these points plus points from Challenge Levels.
9. At the end of each Challenge Level the teacher completes the checklist and students fill in the self-evaluation sheets.
10. The game ends when all groups have completed all challenge levels. Teams are awarded stars based on the order of correct completion of each challenge level. The teacher totals the scores and announces the winning group.
11. The teacher administers the post-test.
By the end of Step 4, we had identified, sourced and created the resources for each stage and Challenge Level described in Step 3, including detailed guidelines and instructions for teachers, resources for students, and resources for documenting students' participation and progress. We built on and refined the wording of the guidelines, instructions and resources used in the pilot study, based on observations of improvements needed. This step would minimise discrepancies in subsequent gamification implementation across schools. It also allowed us to apply appropriate gamification elements in the next step.
Huang and Soman (2013) suggest that it is the education program that guides the decisions and refinement of gamification elements. It is important that two classes of elements, self-elements and social elements, are addressed. Cognitive and motivational scaffolding can promote self-elements and social elements in gamification challenges. Huang and Soman gave examples of self-elements as "points, achievement badges, levels, or simply time restrictions" (p. 13). This type of gamification element places emphasis on individual progress and self-achievement. Self-elements are closely related to the social elements. Social elements relate to how groups compete, interact, and cooperate within the community. An example of this type of gamification element is the use of leader boards. This tool assists with charting and publicizing groups' progress and achievement.
Self and social elements each have a unique purpose; hence, it is important for the success of the gamified activities to carefully plan when each type of element is used. Self-elements are used for more challenging tasks, so as not to demotivate students but rather to amplify students' personal achievement. Social elements are used to motivate students as a group or community to move on together to the next or higher levels. Consequently, our design incorporated both types of gamification elements.
STEP 5: Applying gamification elements
Our learning from the pilot study highlighted how social elements enabled a sense of cooperation through social interaction within the competitive environment of a gamified classroom ([Authors]). Cooperation and competition are aspects of gamification that facilitate students working together towards goal achievement, which embraces "the winning state of cooperative gamification" (Kapp, 2014, para. 2). Thus, in designing this study, we decided to amplify social elements through cooperation by including opportunities for meaningful dialogue and student interaction within and among teams. Within teams, students can assist each other through scaffolded activities to complete Challenge Levels successfully (Obikwelu et al., 2012). Collaborative student interactions can produce gains in knowledge, practical, and social skills as students engage in the learning process (Parga, 2011). The social elements are emphasized by displaying team points on the leader board after successive Challenge Levels, and through the discussion board where hints can be placed to assist other teams. In Step 1 of the gamification process, we collaborated with teachers to facilitate creating supportive learning environments in their respective schools. They would monitor teams' progress using the observation checklist provided, to ensure teams completed each level before progressing to the next.
We attended to the self-elements by planning Challenge Levels that were graded by difficulty and scaffolded through peer collaboration, and by using points, rewards, ranks and levels for motivation and engagement, creating a sense of accomplishment (Pesare et al., 2016). There has been continuing debate over whether extrinsic rewards lower intrinsic motivation (Alsawaier, 2018; Deci et al., 2001), but gamification designers can mitigate this risk by including motivational scaffolding for student emotions. The self-element can be developed by identifying challenges, providing opportunities for improvement, celebrating successful outcomes, and peer support in solving given tasks. Self-evaluation is a key component of social constructivism and motivation (Shepard, 2001). Students can use self-evaluation to positively influence the self-element. Self-evaluation aims to develop learners' skills, competencies, and responsibility for their own learning. Students can identify gaps in their knowledge, and skills that require more practice. In our gamification design, we planned to provide the students with tools for self-monitoring by including self-evaluation sheets. This design is supported by Hamari et al. (2016), who suggest that offering activities within learners' ZPD challenges them at an appropriate skill level, thus maximizing engagement. In this design, the focus is on peers collaborating at different levels of difficulty, rather than the teacher aiding individual students, through skilful cognitive scaffolding to execute the learning task more effectively. Thus, by the end of this step, we drew on the findings of the pilot study that supported teamwork and collaboration, which was also a desired disposition of the TTNPMC (2013).
Conclusions and recommendations
This paper presents our insights on designing a gamification intervention for enhancing the mathematics learning environment for elementary students to attain mastery of geometry. Our gamification design seeks to transform a dull learning environment into a smart one by engaging learners actively (Pesare et al., 2016). We incorporated cognitive and motivational scaffolding strategies to build students' confidence in geometry, through collaborative interactions that facilitate student satisfaction and enjoyment (Baldeón et al., 2015) as they work towards goal achievement. The increasing complexity of challenge levels used a fading technique from team collaboration to independent practice to support students' success (Obikwelu et al., 2012). Designing the intervention required using our insights from a pilot study, and flexible thinking, to implement gamification, an emerging pedagogical approach, using Huang and Soman's (2013) gamification model in our educational context. This paper bridges theory and practice on gamification by providing insights into how social constructivism, specifically Vygotsky's (1978) ZPD, and van Hiele's levels of geometric thinking support the design of an intervention. Our gamification design clearly targeted national mathematics curriculum objectives even though gamification is not specified as a learning strategy in the geometry strand of the TTNPMC (2013). Hence, curriculum developers should intentionally include gamification as a strategy in geometry and other strands of the curriculum.
This paper highlights the use of unplugged activities so that teachers across any school context can use gamification to improve cognitive and motivational engagement in learning geometry. We considered the scalability of the intervention and used an unplugged geometrical resource, the tangram, to broaden access to a wide range of learners (Huang & Looi, 2021), including learners of a young age (Saxena et al., 2020). Teachers should not be afraid to enhance learning environments with gamification (Erenli, 2013), and this paper illustrates one approach that teachers can leverage. The accessibility of our design can assist even a reluctant teacher to gamify learning environments. It takes time and effort to design these activities, and many teachers may feel burdened to do this on their own (Looi et al., 2020). Our work can reduce the disconnect between classroom instruction and educational research to improve teaching and learning (Clements et al., 1997), while paying attention to learner characteristics (Smiderle et al., 2020).
In moving forward, we invite educators across diverse learning contexts to implement this intervention and contribute rich and varied experiences to further enhance the design. We recommend that educators be open-minded about embracing gamification, which can bring joy and learning. Gamification is not interchangeable with play or even games, though both have utility in children's learning. We have learnt that designing a gamification intervention requires a great deal of time in planning, and game elements need to be incorporated purposefully. Balancing internal and external motivation factors for young children remains an area for future research, and gamification continues to hold the attention of educational researchers who seek better models for enhanced learning environments.
Abbreviations: TTNPMC: Trinidad and Tobago National Primary Mathematics Curriculum; SEA: Secondary Entrance Assessment examination; ZPD: Zone of proximal development; IDM: Informed decision making.
"Education",
"Mathematics",
"Computer Science"
] |
Stabilization of Multi Fractional Order Differential Equation with Delay Time and Feedback Control
The purpose of this article is to present original results on nonlinear control problems involving nonlinear differential equations of fractional orders. The system is described by a mixture of ordinary derivatives of the first and second order and is unstable before a feedback gain is applied. More precisely, we investigate and analyse the nonlinear control system in relation to a feedback gain matrix. In addition, we prove that the considered system is locally asymptotically stabilizable under certain conditions. The work is reinforced with programmed application examples that illustrate the stabilizability of the considered systems with high efficiency and accuracy.
Introduction
The field of control and systems is currently one of the most important topics, playing a significant role in simplifying many systems. Control theory is involved in nonlinear systems and in the interpretation of complex phenomena, which is of great benefit in modernizing human civilization [1].
Fractional calculus contributes to many important areas of science, engineering, and physics. Among its applications are fractional optimal control problems (FOCPs) subject to dynamic constraints with objective functions, for example in bioscience [2] and economics [3].
The stability of a nonlinear Langevin system with a Mittag-Leffler (ML)-type fractional derivative, affected by time-varying delays and differential feedback control, was studied by Zhao in ref. [4]. Li et al. studied the global stability problem for feedback control systems of impulsive fractional differential equations on networks [5]. Another direction was pursued by Qasim et al. for some classes of composition FOCPs, as in [6]. The stabilization and destabilization of fractional oscillators via a delayed feedback control was considered by Čermák et al. in [7].
The Mittag-Leffler stabilization of fractional-order nonlinear systems with unknown control coefficients was verified and examined by Wang in [8]; see also [19], [20]. The main objective of this work is to study nonlinear control systems with multiple fractional orders between zero and one, combined with an ordinary derivative. Systems that are unstable are examined, and a feedback gain matrix describes the presence of control. We then investigate and demonstrate the local stabilizability of the nonlinear systems with complete accuracy.
The paper is organized as follows. Section 2 presents some basic preliminary concepts and auxiliary definitions. In Section 3, we obtain rigorous new results for the mixed (fractional-order and first-order ordinary) differential nonlinear feedback control system, together with some applications. Finally, we summarize the results obtained on the stabilizability problem of nonlinear feedback control systems.
Preliminaries
In this section, we present some important definitions and characterizations that play a key role in establishing the stabilizability of the considered system.
The Mittag-Leffler function with two parameters is defined by $E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + \beta)}$, where $z \in \mathbb{C}$ and $\alpha, \beta > 0$.
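As a quick numerical illustration of this definition, the following minimal Python sketch evaluates the two-parameter Mittag-Leffler function by truncated series summation; the truncation length N is our own choice for illustration and is not taken from the paper.

```python
from math import gamma

def mittag_leffler(z: float, alpha: float, beta: float, N: int = 100) -> float:
    """Approximate E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta).

    Note: for large |z| or very small alpha, many more terms (and care with
    floating-point overflow in gamma) would be needed; this is only a sketch.
    """
    return sum(z**k / gamma(alpha * k + beta) for k in range(N))

# Sanity check: E_{1,1}(z) reduces to exp(z), so this should print ~2.71828
print(mittag_leffler(1.0, 1.0, 1.0))
```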
Definition 2.2 [10]:
The Gamma function is defined by the integral formula $\Gamma(z) = \int_{0}^{\infty} t^{z-1} e^{-t}\, dt$, $\operatorname{Re}(z) > 0$, with the property $\Gamma(z + 1) = z\,\Gamma(z)$. In particular, when $n \in \mathbb{N}$, $\Gamma(n + 1) = n!$.
Definition 2.8 [13]:
The Caputo derivative of the power function and of the constant function is given by: (i) ${}^{C}D^{\alpha} t^{p} = \frac{\Gamma(p+1)}{\Gamma(p-\alpha+1)}\, t^{p-\alpha}$ for $0 < \alpha < 1$ and $p > 0$; (ii) ${}^{C}D^{\alpha} c = 0$ for any constant $c$.
When $t \to \infty$, $\|x(t)\| \to 0$. Consequently, the system (1) is asymptotically locally stable. ∎
Example 3.2: Consider the following nonlinear differential system without feedback control. This nonlinear control system consists of nonlinear differential equations of fractional orders mixed with ordinary derivatives of the first order, and it is unstable before the feedback gain matrix is applied. Thus, we examine this nonlinear control system after applying the feedback gain matrix and prove the asymptotic local stabilizability of the system by using the conditions of Theorem 3.1. The nonlinear control system (25) with feedback takes the following form:
Conclusions
New outcomes have been explored in this paper for nonlinear dynamical systems combining double fractional orders with an ordinary order, under some necessary conditions. The stabilizability of the considered class of nonlinear systems with control was then obtained via feedback gain. Precise results were obtained in the local case according to the conditions of Theorem 3.1, whose accuracy has been demonstrated in several applications. The techniques were also programmed and reinforced with illustrative examples that show the efficiency of the stabilizability of the considered systems. Finally, it may be of interest to extend the results obtained in this work to the regional observer problem in distributed parameter systems, as in [17][18].
"Mathematics",
"Engineering"
] |
A Raspberry Pi-Based Traumatic Brain Injury Detection System for Single-Channel Electroencephalogram
Traumatic Brain Injury (TBI) is a common cause of death and disability. However, existing tools for TBI diagnosis are either subjective or require extensive clinical setup and expertise. The increasing affordability and reduction in the size of relatively high-performance computing systems combined with promising results from TBI related machine learning research make it possible to create compact and portable systems for early detection of TBI. This work describes a Raspberry Pi based portable, real-time data acquisition, and automated processing system that uses machine learning to efficiently identify TBI and automatically score sleep stages from a single-channel Electroencephalogram (EEG) signal. We discuss the design, implementation, and verification of the system that can digitize the EEG signal using an Analog to Digital Converter (ADC) and perform real-time signal classification to detect the presence of mild TBI (mTBI). We utilize Convolutional Neural Networks (CNN) and XGBoost based predictive models to evaluate the performance and demonstrate the versatility of the system to operate with multiple types of predictive models. We achieve a peak classification accuracy of more than 90% with a classification time of less than 1 s across 16–64 s epochs for TBI vs. control conditions. This work can enable the development of systems suitable for field use without requiring specialized medical equipment for early TBI detection applications and TBI research. Further, this work opens avenues to implement connected, real-time TBI related health and wellness monitoring systems.
Background
Traumatic Brain Injury (TBI) is a form of acquired brain injury caused by external impact to the head that results in damage to the brain [1]. It is a common cause of death and disability in the United States (U.S.) and can be caused by a variety of factors including falls, motor vehicle crashes, sports, or combat injuries. TBI affects an estimated 2 million people in the U.S. across all age groups, according to data from the U.S. Centers for Disease Control and Prevention (CDC) [2], and likely a larger number globally. TBI often leads to neurological problems in individuals, including cognitive, motor, and sleep-wake dysfunction [3].
Currently available medical tools for TBI diagnosis are largely subjective [4], and a lack of consensus regarding what constitutes mild TBI (mTBI) adds to the complication of the under-diagnosis of mTBI [5]. TBI is categorized into mild, moderate, or severe based on the Glasgow Coma Scale (GCS), Loss of Consciousness (LOC), and Post-Traumatic Amnesia (PTA) [6], which are qualitative tests rather than quantitative measures. The World Health Organization's (WHO) definition of mTBI allows for a GCS score of 13-15 to be assessed after the typical 30-min timeframe, which accounts for the expected time of arrival of a qualified healthcare provider [7]. However, the GCS has its drawbacks. Being highly inter-observer dependent makes it necessary to report exact findings rather than just the score. In addition, one of the key parameters in GCS is the eye score, which might be unattainable in case of an eye injury.
Considering the requirements from a medical resource standpoint, existing clinical tools used to diagnose mTBI such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) [4] require an extensive, high-cost clinical setup and a specialized operator skill set which are not always available at the time and place of an incident, and these neuroimaging tests may still be negative in many cases of mTBI. As a result of the limitations of the present-day methods used to detect TBI, there is a need for new technology capable of rapid, accurate, non-invasive, and most importantly, field-capable detection of mTBI to bridge the technological gap that exists today. Early, objective, and reliable mTBI detection can help affected individuals undergo timely monitoring and therapy and can prevent death in severe cases.
Machine learning techniques provide a way to study mTBI and create systems to help objectively diagnose and monitor mTBI presence and stages in individuals. Recently, machine learning techniques have been investigated for the purpose of classifying mTBI from electroencephalogram (EEG) data in mice based on models created using the lateral Fluid Percussion Injury (FPI) method [3]. FPI induced mice demonstrate very similar behavioral deficits and pathology to those found in humans afflicted with mTBI, including sleep disturbances [8,3]. In this work, we use EEG data acquired from the compelling FPI mouse model of mTBI. Previous investigations have studied a variety of classification techniques, including classical machine learning such as SVM [9], and deep learning such as Convolutional Neural Networks (CNN) [9,10]. These techniques have been shown to perform TBI classification with more than 80% accuracy [9]. However, in most investigations we reviewed (as described in the subsequent Related Works subsection) that implement machine learning for TBI detection, the primary focus was the study of classification techniques and performance of classification models [9] rather than portable deployment. In cases where portable deployment was involved, the focus was application-specific implementation, for example, closed-loop robotic control systems [11].
This work focuses on creating a machine learning-based, fast, portable, and ready-to-use EEG classification system for mTBI detection using the Raspberry Pi 4 (RPi). This system works with a variety of machine learning models and can be used along with live EEG recording systems to detect mTBI. This capability can enable field use and make early mTBI detection possible without the requirement of an extensive medical setup or specialized medical domain knowledge. Further, this work has the potential to create avenues for implementing mTBI related real-time connected health monitoring systems [12] and allow further research on real-time mTBI detection using EEG.
The deployment system created in this work incorporates an Analog to Digital Converter (ADC) front-end and utilizes Convolutional Neural Network (CNN) and XGBoost [13] predictive models to perform sleep staging and detect the presence of mTBI using a single-channel EEG signal. The system captures and classifies EEG epochs into four target classes - Sham Wake, Sham Sleep, mTBI Wake, and mTBI Sleep. We demonstrate that our system can capture physical EEG signals and perform feature extraction and prediction using the XGBoost model in the order of 0.02 s per epoch, which makes it possible to quickly detect the presence of mTBI. We also verify that the cross-validation metrics obtained on the RPi based system are identical to those obtained on a High-Performance Computer (HPC), such as a 64-bit workstation computer running macOS or Windows.
The Related Works subsection covers previous relevant investigations in this area. We describe the deployment system design, operation, classification model configuration, and validation techniques in the Methods section. The Results section covers the performance comparison of XGBoost with CNN and the deployment system performance evaluation. Finally, we conclude our work and discuss possible future directions in the Conclusions section.
Related Works
The study of brain activity using electroencephalogram (EEG) typically involves extracting information from signals associated with certain activities. In recent years, machine learning techniques have been applied to the classification of mTBI because they enable the extraction of complex and typically non-linear patterns from EEG data [14][15][16][17]. Most of the work surveyed used rule-based techniques, such as k-Nearest Neighbors (k-NN).
A few systems used a small, portable computer for deployment in some form. The Neuroberry platform [18] used a Raspberry Pi 2 device to capture EEG signals, but the focus was on enabling EEG signal availability in the Internet of Things (IoT) domain. The Acute Ischemic Stroke Identification System [19] utilized an Analog to Digital Converter (ADC) front end with a Raspberry Pi 3 to capture physical EEG signals. However, this system transferred the captured data to an HPC running MATLAB for signal analysis and processing and did not focus on signal classification. Zgallai et al. [11] described a Raspberry Pi-based system that used deep learning to perform EEG signal classification. It was designed to identify a subject's intended movement direction from a multi-channel EEG signal to control wheelchair movement in a closed-loop robotic system, rather than as a general system for identification, analysis, and monitoring of a physiological condition such as mTBI. Bruno et al. [20] highlighted challenges with existing medical diagnosis techniques and described a classification system from the perspective of real-time TBI diagnosis, but their work was focused on the algorithm to perform TBI diagnosis and not on the implementation of a deployment system.
In our previous work [10], we developed and described a CNN based model to perform automated sleep stage scoring and mTBI classification. In addition, we did a limited deployment of the CNN model on a Raspberry Pi 4 system. In that work, the focus was on describing the CNN model configuration, evaluating its performance, and showcasing that deployment to RPi was feasible, rather than designing a complete, portable classification system. We have reused the previously developed CNN model in the current work to provide a baseline performance comparison with a new XGBoost model developed for this work. Further, the two models enable us to demonstrate the versatility of the current system to operate with multiple types of predictive models.
To the best of our knowledge, no standalone, portable system has yet been created using Raspberry Pi that can capture real-time EEG signals, detect the presence of mTBI, and classify mTBI sleep/wake epoch states.
Materials and Methods
In this section, we describe in detail the dataset used, software and hardware design, and operation of the RPi based classification system. We also describe the classification model configuration and criteria used for system verification.
Dataset Details
A previously published dataset as described in [3] was used to train and evaluate deployed models. This dataset was collected as part of a study involving 11 adult male mice subjects divided into two groups - mTBI and Sham. The FPI [8] procedure was used to induce mTBI in 5 subjects and the remaining 6 mice were used as Sham (control) subjects. Single-channel EEG data were captured from each subject at a 256 Hz sampling rate over 24 hours. Sleep stages were scored manually by experienced scorers into 4 s epochs of wake, non-rapid-eye-movement (NREM), and rapid-eye-movement (REM) stages. In this work, we combined the NREM and REM sleep stages in the dataset into a single sleep stage, resulting in 4 target classification classes - Sham Wake, Sham Sleep, mTBI Wake, and mTBI Sleep [10].
System Design
The hardware design of the system is illustrated in Figure 1. The MCP3008 [21] Analog to Digital Converter (ADC) is used as the hardware front-end to capture and digitize input EEG signals. It is a 10-bit ADC with a Serial Peripheral Interface (SPI) that connects to the RPi's SPI interface. While the MCP3008 is suitable for single-channel EEG applications like our work, having 8 channels per Integrated Circuit (IC) makes it possible to expand the current system to a multi-channel configuration, if needed. It is a low power device with 5 nA standby current and 500 µA operating current, making it suitable for standalone, portable embedded system applications. It supports a data rate of 200 kilo-samples per second (ksps), which provides sufficient data capture and transfer speed for typical EEG sampling rates, 256 Hz in this case. Also, a 200 ksps peak sampling rate exceeds the basic Nyquist criterion (1) and allows us to sample at more than 10 times the peak signal frequency component (0.5 Hz - 60 Hz for the current signal) to fully capture the frequency, amplitude, and shape of EEG signals:
$f_s \geq 2 f_{max}$, (1)
where $f_s$ is the sampling frequency and $f_{max}$ is the highest frequency component in the signal. The MCP4725 [22] is used to generate the EEG time series signal voltage levels from an EEG data file. The MCP4725 is a 12-bit Digital to Analog Converter (DAC) with an Inter-Integrated Circuit (I2C) interface that is suitable for connectivity with the RPi's I2C host interface. Having the ability to generate a physical EEG signal from a raw EEG data file makes it possible to validate the system operation without requiring a live signal from a subject.
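To make the ADC/DAC data path concrete, here is a minimal, hedged sketch of how the RPi can talk to both chips; the spidev/smbus2 libraries, the channel numbers, the I2C address, and the voltage scaling are our assumptions for illustration, not configuration details taken from the paper.

```python
import spidev   # SPI access to the MCP3008 ADC
import smbus2   # I2C access to the MCP4725 DAC

spi = spidev.SpiDev()
spi.open(0, 0)                      # SPI bus 0, chip-select 0 (assumed wiring)
spi.max_speed_hz = 1_350_000
bus = smbus2.SMBus(1)               # I2C bus 1
DAC_ADDR = 0x60                     # default MCP4725 address (assumption)

def adc_read(channel: int = 0) -> int:
    """Read one raw 10-bit sample (0-1023) from an MCP3008 channel."""
    # MCP3008 framing: start bit, single-ended mode + channel, then clock out data
    r = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((r[1] & 3) << 8) | r[2]

def dac_write(value: int) -> None:
    """Write one 12-bit code (0-4095) to the MCP4725 in fast mode."""
    bus.write_byte_data(DAC_ADDR, (value >> 8) & 0x0F, value & 0xFF)

# Loop one data point back through the DAC and read it with the ADC
dac_write(2048)                     # mid-scale output (~VCC/2)
print(adc_read(0) * 3.3 / 1023)     # assuming a 3.3 V reference
```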
To view live classification results and for initiating and controlling system operation, either a dedicated touch display or an Ethernet Secure Shell (SSH) connection can be used. For this work, we used an SSH connection.
Classification System
Figure 2 shows the architecture of the classification system. The input was a single-channel EEG signal sampled at 256 Hz. Two different machine learning model implementations were used for deployment on RPi - CNN and XGBoost. The first implementation used a CNN model [10] to automatically extract features suitable for classification from an EEG signal with an epoch duration of 16 s to 64 s. The second implementation involved XGBoost, which also used a 16 s to 64 s epoch duration. In this case, decibel-normalized sub-band powers and the ratio of theta to alpha sub-band power were extracted from the EEG signal as described in [23]. The extracted features were fed to an XGBoost classifier to obtain the predicted classes. An epoch size of 16 s to 64 s is optimal as it allows sufficient distinguishing patterns to be captured from the signal for accurate and reliable classification, and at the same time is small enough for fast prediction and deployment to an embedded device with limited memory and processing resources such as RPi.
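The hand-crafted feature pipeline for the XGBoost path can be sketched as follows; the Welch PSD estimator, the exact band edges, and the classifier settings are our assumptions, since the paper defers the precise feature definitions to [23].

```python
import numpy as np
from scipy.signal import welch
from xgboost import XGBClassifier

FS = 256  # sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 60)}

def extract_features(epoch: np.ndarray) -> np.ndarray:
    """dB-normalized sub-band powers plus the theta/alpha power ratio."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS * 2)
    power = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        power[name] = np.trapz(psd[mask], freqs[mask])  # band power by integration
    total = sum(power.values())
    feats = [10 * np.log10(p / total) for p in power.values()]
    feats.append(power["theta"] / power["alpha"])
    return np.asarray(feats)

# X_raw: array of shape (n_epochs, FS * epoch_seconds); y: the four class labels
# X = np.vstack([extract_features(e) for e in X_raw])
# clf = XGBClassifier().fit(X, y)
```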
System Operation
EEG time series data points were loaded into RPi memory from the EEG data file and were then transferred to the MCP4725 DAC for generation, thus recreating the time series EEG signal. The generated EEG signal was captured by the MCP3008 ADC sampling at 256 samples per second. The captured EEG data points were transferred to the RPi system and accumulated into epoch buckets. The EEG epochs were then placed on the processing queue of the software processing system to undergo preprocessing, feature extraction, and classification stages.
To enable the system to operate in a real-time capacity without losing EEG signal data points, we used a queue-based software design with separate threads for EEG signal capture and processing. This ensured incoming EEG data points could be collected while previously collected EEG data points were being processed. Figure 2 illustrates the queue-based processing system operation.
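A minimal sketch of this producer/consumer split follows; adc_read() and classify() here are simple stand-ins (our assumptions) for the ADC read and the deployed model, not the paper's actual implementation.

```python
import queue
import threading
import time

FS, EPOCH_SECONDS = 256, 64
epochs: queue.Queue = queue.Queue()

def adc_read(channel: int = 0) -> int:
    """Stand-in for the SPI ADC read sketched earlier (returns a dummy sample)."""
    return 512

def classify(epoch) -> str:
    """Stand-in for the deployed CNN/XGBoost predictive model."""
    return "Sham Wake"

def capture_loop() -> None:
    """Producer thread: accumulate ADC samples into epoch buckets and enqueue them."""
    while True:
        bucket = [adc_read(0) for _ in range(FS * EPOCH_SECONDS)]
        epochs.put(bucket)   # in practice the ADC paces this loop at FS Hz

def process_loop() -> None:
    """Consumer thread: preprocess, extract features, and classify each epoch."""
    while True:
        epoch = epochs.get()
        t0 = time.time()
        print(classify(epoch), f"({time.time() - t0:.3f} s)")

threading.Thread(target=capture_loop, daemon=True).start()
process_loop()
```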
The implemented system was designed to display the inferred label count, the epoch processing time, and the distribution of epoch count per classified label via a histogram (Figure 3). The capability of the system to provide a live view of the inferred EEG epoch label distribution enabled early inference of whether a subject is afflicted with mTBI or not.
System Verification
This section describes the validation techniques and metrics applied to the deployment system to consider it successful for its intended purpose. The actual verification results are covered in the Results section.
The EEG data used in this work comprised 11 mice divided into two groups - mTBI and Sham. Labeled non-overlapping epochs were created from the 24-hour recording to train and evaluate the classification models across 10 folds, and mean accuracy values across all folds were calculated. The current dataset offers limited generality because of the small number of subjects, and hence we used random sampling to split non-overlapping epochs across all subjects into training and testing datasets. The current work focuses on describing the live classification system design and operation, and from a generality perspective, the accuracy results in the current work should be validated with a separate, larger dataset [10].
To evaluate the RPi based deployment system functionality, classification metrics and inference time were calculated and compared to those of an HPC. The operation of the queue-based system architecture was evaluated by calculating the inference time and processing time per epoch batch with different batch sizes. We also verified that the generated EEG epochs were captured and processed by the queue-based system without data loss by tracking the generated and captured epoch counts. The classification performance of the deployment system was evaluated using accuracy, precision, and recall as metrics. The definitions of these metrics are as follows:
$\mathrm{accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$ (2)
where TP: True Positives, TN: True Negatives, FP: False Positives, and FN: False Negatives; analogously, $\mathrm{precision} = TP/(TP + FP)$ and $\mathrm{recall} = TP/(TP + FN)$. CNN and XGBoost models were trained on an HPC and deployed on the RPi system to calculate metrics and to perform live classification. We evaluated the timing performance of the live operation of the system and verified that the classification model provided valid output labels for the input EEG epochs. The actual timing and classification metrics are provided in the Results section.
We verified that the signal generated on the RPi system using the DAC and captured by the ADC was consistent with that stored in the EEG signal data file. To compare the stored and generated signals, and as a measure of generated signal quality, we used the Mean Squared Error (MSE) value as a metric, which is calculated as $MSE = \frac{1}{N}\sum_{i=1}^{N}(y_i - \hat{y}_i)^2$, where $N$ is the number of data points, $y_i$ are the observed magnitude values generated by the DAC, and $\hat{y}_i$ are the expected magnitude values in the EEG data file.
Performance Comparison of RPi with HPC
The CNN and XGBoost predictive models trained on the HPC were deployed on RPi to verify system functionality, retrieve classification metrics, and calculate inference time. Accuracy, precision, and recall results on RPi were identical to those achieved with an HPC across epoch lengths varying from 16 s to 64 s, as shown in Table 1.
Epoch Processing Time
The variation of epoch processing time (which includes preprocessing, feature extraction, and classification operations) with the number of epochs is shown in Figure 4. We observed that the processing time on RPi increased with an increase in the number of epochs processed in a batch. This was expected as the epochs are processed sequentially and the processing time per epoch is accumulated. We discuss its significance in the subsequent Discussion section.
Generated EEG signal quality
MSE was used as a metric to compare the stored and generated EEG waveforms, as a measure of generated signal quality. We observed a typical MSE value of 0.26, which indicated high fidelity between the DAC generated signal and the stored signal data points.
Performance Comparison of CNN and XGBoost
We compared the classification metrics and performance of the XGBoost and CNN models on the deployment system as well as on an HPC. Figure 5 shows the variation of accuracy with change in epoch size and Figure 6 shows the variation of inference time per epoch for the XGBoost (labeled as XGB) and CNN models. Accuracy across XGBoost and CNN was found to decrease slightly (0.01%) with each increase in epoch size. Compared to CNN, the overall accuracy for XGBoost was better by 12 to 15 percentage points across various epoch sizes. The inference time for XGBoost was found to be significantly faster than CNN, especially for execution on RPi. For a 64 s epoch size, the inference time of XGBoost was about 0.004% of the classification time of CNN on RPi. For XGBoost, we found that the inference time per epoch for RPi was within 1 µs of that on an HPC. Further, the variation of inference time remained roughly within 2 µs with each increase or decrease of epoch length for both HPC and RPi. Overall, the timing performance and accuracy were found to be better in the case of XGBoost compared to CNN, which would render XGBoost more suitable for use in a deployment configuration on RPi.
Queue-based Processing System Validation
We verified the functionality of the queue-based processing system on RPi by generating 100 epochs of 64 s duration and capturing those epochs in the processing loop. We found that all generated epochs were consistently captured and processed, resulting in a 0% epoch loss across 5 test runs. Details on timing related to collection and processing are shown in Table 2. The processing time (which includes preprocessing, feature extraction, and classification operations) ranged from 0.01% to 0.03% of the epoch collection time. This meant the system could process a given number of epochs significantly faster than the time it took for the system to collect the epochs.
Discussion
In this work, we proposed and demonstrated an RPi based EEG acquisition, processing, and classification system for early mTBI detection and sleep stage classification. This system was demonstrated to operate in a portable, real-time, and standalone configuration and to perform classification of real-time EEG epochs into four target classes (sham wake, sham sleep, mTBI wake, mTBI sleep).
As shown in Table 1, the accuracy, precision, and recall results were identical across RPi and HPC. This confirmed that the predictive model behavior did not change when the training and deployment systems involved different system architectures, i.e., an x64 based macOS/Windows HPC for training vs. an ARM-based RPi for deployment and prediction. Hence, it is possible to train a predictive model on a more powerful computer (HPC) and deploy it to an embedded device such as RPi that has limited memory and processing resources. This is especially applicable to multi-layered neural networks like CNN that typically have long training times on an HPC; the training times would be prohibitively long on an embedded device like RPi.
We calculated the epoch processing time (which included preprocessing, feature extraction, and classification operations) on RPi by varying the number of epochs, as shown in Figure 4 and described in Table 2. While it was expected that the processing time would increase as the number of processed epochs increased, the key inference was that the processing time was significantly smaller than the time required to collect the EEG epochs. At a 256 Hz sampling rate and 64 s epoch size, the processing time ranged from 0.01% to 0.03% of the epoch collection time (for instance, roughly 0.02 s of processing against 64 s of collection is about 0.03%). Hence, we concluded that the system had ample time to process previously captured EEG epochs while new epochs were captured at practical EEG signal sampling rates.
We employed two different approaches for the supervised learning models used in this system: the CNN model developed in our previous work, and an XGBoost predictive model created in the current work. We compared the classification metrics and performance of the XGBoost and CNN models on the deployment system as well as on an HPC. We observed that the XGBoost model exhibited significant performance improvement in terms of accuracy (as shown in Figure 5) and inference time (as shown in Figure 6) compared to the CNN based predictive model. In the case of XGBoost, the variation of inference time remained roughly within 2 µs between HPC and RPi. A low inference time was critical for the real-time operation of the classification system. One possible reason for the better accuracy performance in the case of XGBoost compared to CNN was that the classification model for XGBoost was created using hand-crafted features, which enabled learning differentiating patterns for the four target classes better than the CNN model that automatically extracted the differentiating features. These results, however, were data-dependent, so they should be validated on different datasets to verify the generality of the model. We found that overall, XGBoost was better suited for deployment on RPi because of its faster inference time and better performance than CNN. By using two different predictive models for classification, we demonstrated the flexibility of the system to deploy improved classification models in the future.
In this system, we used a DAC to generate the EEG signal waveform from European Data Format (EDF) files. This provided a reliable way to generate an EEG signal waveform without requiring an actual subject to capture the EEG signal from. We verified that the EEG waveform generated using the DAC on RPi was consistent with the EEG data stored in the EDF file. The verification was done by calculating the MSE across the stored and generated signal, which was found to be 0.26, a small value indicating that the generated signal represented the stored signal accurately. Synthesizing EEG signals to replicate their complex and typically non-linear signal patterns is challenging, and the ability to generate EEG signals from an actual recording data file using a DAC simplifies the setup that is required to test an EEG classification deployment system hardware and software chain. It enables the use of several available open-access EEG data files to train classification models and test the deployment system. For future use, the signal generation capability of this system can be simplified for ease of use and expanded to work with a variety of EEG data file types. This can help accelerate future mTBI research pertaining to portable classification systems, which are often constrained by the lack of readily available live EEG signals to test a hardware classification system.
In addition to early mTBI detection, the capability of the system to perform live classification on input EEG signals can be extended to cover mTBI related health and sleep monitoring applications in the future. Typically, after the initial diagnosis, TBI patients undergo EEG sleep monitoring in a hospital setup. A portable EEG sleep monitoring system, such as the one described in this work, can enable a subject to self-monitor in home settings, greatly enhancing accuracy, efficiency, and efficacy.
The classification system developed in the current work can also provide a replacement for the labor-intensive manual sleep-stage scoring of EEG signals by human experts, with an online and automated system capable of performing fast sleep staging. Further, our technical approaches can be extended to several other EEG applications, including detection of the onset of epileptic seizures, strokes, and other neurological conditions.
In this work, we used a relatively simple hardware system to capture and digitize EEG signals, which could be improved. Because we generated EEG signals from a data file containing clean EEG data, this hardware did not include amplification and filtering stages. A practical system designed for field use would require additional hardware and software capabilities to capture and process EEG signals in real-time. In terms of hardware, such a system would require amplification, preprocessing, and filtering stages. In software, decimation, normalization, Independent Components Analysis (ICA), physiological artifact removal (e.g., eye and muscle movement artifacts), and filtering stages can be implemented. Further, we used a 10-bit ADC for this proof-of-concept system, but for devices designed for practical use, ADCs typically vary from 16-bit to 24-bit resolution. For example, the OpenBCI Cyton Biosensing system [24] for sampling EEG and other physiological signals uses a 24-bit ADC. We note that higher resolution ADCs also involve a relatively higher cost and have lower sampling rates as the number of resolution bits increases. Also, the system in this work was designed for single-channel EEG generation and capture, which limits its use for multichannel EEG applications. The current system also assumes a direct single-channel EEG electrode connection to the ADC input. It does not directly provide connectivity to wireless (Bluetooth and Radio Frequency) EEG headsets. However, several 'hardware attached on top' (HAT) devices are available for RPi, for example, the brainHAT [25], that make it possible to connect wireless headsets seamlessly, and we anticipate the system in this work would function as intended with actual streaming EEG data outside the particulars of EEG headset interfacing.
Conclusion
To the best of our knowledge, this is the first system capable of performing mTBI related EEG signal classification as described previously, deployed on a portable, low-cost device like RPi, and designed to operate in a live configuration. The techniques developed in this work are general and can potentially be extended to create and deploy predictive models from EEG as well as other physiological signals acquired from human subjects, enabling e-care, self-care, and telemedicine.
Figure 4. Variation of epoch processing time with the number of epochs on RPi (64 s epoch).
Figure 5. Variation of accuracy with epoch size.
Figure 6. Variation of classification time with epoch length.
Table 1. System performance comparison of RPi with HPC with 4 classes using XGBoost.
Table 2. Epoch collection and processing time (64 s epoch).
"Computer Science"
] |
On monotone circuits with local oracles and clique lower bounds
We investigate monotone circuits with local oracles [K., 2016], i.e., circuits containing additional inputs $y_i = y_i(\vec{x})$ that can perform unstructured computations on the input string $\vec{x}$. Let $\mu \in [0,1]$ be the locality of the circuit, a parameter that bounds the combined strength of the oracle functions $y_i(\vec{x})$, and $U_{n,k}, V_{n,k} \subseteq \{0,1\}^m$ be the set of $k$-cliques and the set of complete $(k-1)$-partite graphs, respectively (similarly to [Razborov, 1985]). Our results can be informally stated as follows. 1. For an appropriate extension of depth-$2$ monotone circuits with local oracles, we show that the size of the smallest circuits separating $U_{n,3}$ (triangles) and $V_{n,3}$ (complete bipartite graphs) undergoes two phase transitions according to $\mu$. 2. For $5 \leq k(n) \leq n^{1/4}$, arbitrary depth, and $\mu \leq 1/50$, we prove that the monotone circuit size complexity of separating the sets $U_{n,k}$ and $V_{n,k}$ is $n^{\Theta(\sqrt{k})}$, under a certain restrictive assumption on the local oracle gates. The second result, which concerns monotone circuits with restricted oracles, extends and provides a matching upper bound for the exponential lower bounds on the monotone circuit size complexity of $k$-clique obtained by Alon and Boppana (1987).
Introduction and motivation
We establish initial lower bounds on the power of monotone circuits with local oracles (monotone CLOs), an extension of monotone circuits introduced in [Kra16] motivated by problems in proof complexity. Interestingly, while the model has been conceived as part of an approach to establish new length-of-proofs lower bounds, our results indicate that investigating such circuits can benefit our understanding of classical results obtained in the usual setting of monotone circuit complexity, where no oracle gates are present (see the discussion on the Alon-Boppana exponential lower bounds for k-clique [AB87] presented later in this section).
Before describing the circuit model and our contributions in more detail, which require no background in proof complexity, we explain the main motivation that triggered our investigations.
Relation to proof complexity. A major open problem in proof complexity is to obtain lower bounds on proof length in $F_d[\oplus]$, depth-$d$ Frege systems extended with parity connectives (cf. [Kra95]). It is known that strong enough lower bounds for $F_3[\oplus]$, the depth-3 version of this system, imply related lower bounds for each system $F_d[\oplus]$, where $d \in \mathbb{N}$ is arbitrary [BKZ15]. A natural restriction of $F_3[\oplus]$ for which proving general lower bounds is still open is the proof system $R(\mathrm{Lin}/\mathbb{F}_2)$ (cf. [IS14], [Kra16]). It corresponds to an extension of Resolution where clauses involve linear functions over $\mathbb{F}_2$.$^1$ In order to attack this and other related problems, [Kra16] proposed a generalization of the feasible interpolation method to randomized feasible interpolation. Among other results, [Kra16] established that lower bounds on the size of monotone circuits with local oracles separating the sets $U_{n,k}$ and $V_{n,k}$ (defined below) imply lower bounds on the size of general (dag-like) $R(\mathrm{Lin}/\mathbb{F}_2)$ proofs. In addition, it was shown that strong lower bounds in the new circuit model would provide a unifying approach to important length-of-proofs lower bounds established via feasible interpolation (cf. [Kra16, Section 6], [Pud97]).
Motivated by these connections and by the important role of feasible interpolation in proof complexity, we start in this work a more in-depth investigation of the power and limitations of monotone circuits with local oracles. We focus on the complexity of the k-clique problem over the classical sets of negative and positive instances considered in monotone circuit complexity [Raz85,AB87]. While the monotone complexity of k-clique has been investigated over other input distributions of interest (cf. [Ros14]), we remark that the structure of these instances is particularly useful in proof complexity (cf. [Kra97,Pud97,BPR97]). The corresponding tautologies have appeared in several other works.
We provide next a brief introduction to the circuit model and to the set of instances of k-clique that are relevant to our results.
An extension of monotone circuits. A monotone circuit with local oracles $C(\vec{x}, \vec{y})$ is a monotone boolean circuit containing extra inputs $y_i$ (local oracles) that compute an arbitrary monotone function of $\vec{x}$. In order to limit the power of these oracles, there is a locality parameter $\mu \in [0, 1]$ that controls the sets of positive and negative inputs on which the inputs $y_i$ can be helpful. In more detail, we consider circuits computing a monotone function $f \colon \{0,1\}^m \to \{0,1\}$, and associate to each input $y_i$ a rectangle $U_i \times V_i$, with $U_i \subseteq f^{-1}(1)$ and $V_i \subseteq f^{-1}(0)$. We restrict attention to sets of rectangles whose union has measure at most $\mu$ according to an appropriate distribution $\mathcal{D}$ that depends on $f$. We are guaranteed that $y_i(U_i) = 1$ and $y_i(V_i) = 0$ but, crucially, the computation of $C(\vec{x}, \vec{y})$ must be correct no matter the interpretation of each $y_i$ outside its designated sets $U_i$ and $V_i$.
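In symbols (our paraphrase of the constraints above, writing $t$ for the number of oracle inputs), the locality requirement reads
$$\mathcal{D}\Big(\bigcup_{i=1}^{t} U_i \times V_i\Big) \;\leq\; \mu,$$
and soundness demands that $C(\vec{x}, \vec{y}(\vec{x}))$ computes $f$ for every choice of monotone functions $y_1, \dots, y_t$ satisfying $y_i(U_i) = 1$ and $y_i(V_i) = 0$.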
The k-clique function and the sets U_{n,k} and V_{n,k}. We focus on the monotone boolean function f : {0, 1}^m → {0, 1} that outputs 1 on an n-vertex graph G ∈ {0, 1}^m if and only if it contains a clique of size k, where m = \binom{n}{2}. More specifically, we investigate its complexity as a partial boolean function over U_{n,k} ∪ V_{n,k}, where U_{n,k} is the set of inputs corresponding to k-cliques over the set [n] of vertices, and V_{n,k} is the set of complete ζ-partite graphs over [n], where ζ = k − 1. Roughly speaking, for this choice of f, we measure the size of a subset B ⊆ U_{n,k} × V_{n,k} using the product distribution obtained from the uniform distribution over the k-cliques in U_{n,k}, and the distribution supported over V_{n,k} obtained by sampling a random coloring χ : [n] → [k − 1] of [n] using exactly ζ = k − 1 colors, and considering the associated complete ζ-partite graph G(χ). 2 A more rigorous treatment of the circuit model and of the problem investigated in our work appears in Section 2.
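To make the instance sets concrete, the following minimal Python sketch (our own illustration; all function names are ours) builds a k-clique instance, builds the complete (k − 1)-partite graph G(χ) associated with a coloring, and draws one sample from the product distribution D_{n,k} by rejection sampling colorings that use exactly k − 1 colors:

```python
import itertools, random

def clique_instance(n, vertices):
    """Edge set (the graph viewed as a set of edges) of a clique on `vertices` in [n]."""
    return frozenset(itertools.combinations(sorted(vertices), 2))

def coloring_graph(n, chi):
    """Complete multipartite graph G(chi): {u, v} is an edge iff chi[u] != chi[v]."""
    return frozenset((u, v) for u, v in itertools.combinations(range(n), 2)
                     if chi[u] != chi[v])

def sample_D(n, k, rng=random):
    """One sample (positive, negative) from D_{n,k}: a uniform k-clique, and G(chi)
    for a uniform coloring chi: [n] -> [k-1] conditioned on using exactly k-1 colors."""
    pos = clique_instance(n, rng.sample(range(n), k))
    while True:
        chi = [rng.randrange(k - 1) for _ in range(n)]
        if len(set(chi)) == k - 1:  # insist on exactly k-1 non-empty color classes
            return pos, coloring_graph(n, chi)
```

The rejection step mirrors footnote 2: for bounded k(n), a uniformly random coloring misses a color class only with exponentially small probability, so the loop terminates almost immediately.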
Our Results
We observe a phase transition for an extension of depth-2 monotone circuits with local oracles that separate triangles from complete bipartite graphs.
Theorem 1 (Phase transitions in depth-2). Let s = s(n, µ) be the minimum size of a depth-2 monotone circuit (DNF) on inputs x, y_i(x), and g_j(y) that separates U_{n,3} and V_{n,3}, where the y-inputs have locality ≤ µ, and each g_j is an arbitrary monotone function on y. Then, for every ε > 0,

$$s(n,\mu) \;=\; \begin{cases} \Theta_\varepsilon(n^{3}) & \text{if } 0 \le \mu \le 1/2 - \varepsilon,\\ \Theta_\varepsilon(n^{2}) & \text{if } 1/2 \le \mu \le 1 - \varepsilon,\\ \Theta(1) & \text{if } \mu = 1.\end{cases}$$

Furthermore, the upper bounds on s(n, µ) do not require the extra inputs g_j(y).
Observe that the lower bounds remain valid in the presence of the functions g j ( y). In other words, in the restricted setting of depth-2 circuits, a small locality parameter does not help, even if arbitrary monotone computations that depend on the output of the local oracle gates are allowed in the circuit. (As explained in Section 3, the monotone functions g j ( y) can be handled in a generic way, and add no power to the model.) The proof of Theorem 1 is presented in Section 3. The argument considers different bottlenecks in the computation based on the value of µ. In our opinion, the main conceptual message of Theorem 1 is that an interesting complexity-theoretic behavior appears already at depth two. Indeed, the oracle gates can interact with the standard input variables in unexpected ways, and the main difficulty when analyzing general monotone CLOs is the arbitrary nature of these gates, which are limited only by the locality parameter. 3 We obtain stronger results for larger k = k(n) and with respect to unrestricted monotone circuits (i.e., arbitrary depth), but our approach requires an extra condition on the set of rectangles that appear in the definition of the oracle gates. Our assumption, denoted by A d , says that if each oracle variable y i is associated to the rectangle U i × V i , then the intersection of every collection of d + 1 sets U i is empty.
Theorem 2 (Upper and lower bounds for monotone circuits with restricted oracles). For every k = k(n) satisfying 5 ≤ k ≤ n^{1/4}, the following holds.

1. If D(x, y) is a monotone circuit with local oracles that separates U_{n,k} and V_{n,k} and its y-variables have locality µ ≤ 1/16 and satisfy condition A_d, then size(D) = $n^{\Omega(\sqrt{k}/d)}$.
2 Some authors consider as negative instances the larger set of complete ζ-partite graphs where ζ ranges from 1 to k − 1. For technical reasons, we work with exactly (k − 1)-partite graphs (cf. Claim 1). In most lower bound contexts this is inessential: for bounded k(n), a random coloring χ : [n] → [k − 1] has no empty color class except with exponentially small probability.
3 It is plausible that the analysis behind the proof of Theorem 1 extends to larger k, but we have not pursued this direction in the context of depth-2 circuits. See also the related discussion in Section 5.
2. For every ε > 0, there exists a monotone circuit with local oracles C(x, y) of size $n^{O_\varepsilon(\sqrt{k})}$ separating U_{n,k} and V_{n,k} whose y-variables have locality µ ≤ ε and satisfy condition A_1.

The proof of Theorem 2 appears in Section 4. The lower bound extends results on the monotone circuit size complexity of k-clique for large k = k(n) obtained in [AB87]. 4 Indeed, our argument relies on their analysis of Razborov's approximation method [Raz85], with extra work required to handle the oracle gates. The upper bound is achieved by an explicit description of a monotone CLO generalizing the construction from Theorem 1. The following corollary, stated for reference, is immediate from Theorem 2.
Corollary 1. Let 5 ≤ k(n) ≤ n^{1/4}, µ = 1/50, and assume rectangles are mapped to local oracle gates in a way that no k-clique is associated to more than a constant number of rectangles. Then the monotone circuit size complexity of separating the sets U_{n,k} and V_{n,k} is $n^{\Theta(\sqrt{k})}$.

(We note that the constant 1/50 appearing in this statement is not particularly important, and that any small enough constant locality parameter µ suffices.) To our knowledge, Corollary 1 provides the first explanation for the tightness of the Alon-Boppana [AB87] exponential lower bounds for k-clique. In particular, in order to prove monotone circuit lower bounds for this problem stronger than $n^{\sqrt{k}}$ in the regime where k(n) ≫ poly(log n), one has to consider either a different set of instances, or employ a technique that does not apply to circuits with local oracles of constant locality. 5 We discuss some directions for future investigations in Section 5, where we also say a few more words on the connection to proof complexity.

Monotone CLOs. A monotone boolean circuit C(x_1, . . . , x_n, y_1, . . . , y_e) on n variables and e local oracles (monotone CLO for short) is a (non-empty) directed acyclic graph containing ≤ n + e + 2 sources and one sink (the output node). The non-source nodes have in-degree 2. Source nodes are labeled by elements in {x_1, . . . , x_n} ∪ {y_1, . . . , y_e} ∪ {0, 1}, and each non-source node is labeled by a gate symbol in {∧, ∨}. We say that C has size s if the total number of nodes in the underlying graph is s, including source nodes. The computation of C on an input string (a, b) ∈ {0, 1}^n × {0, 1}^e is defined in the natural way.
The formulation above is consistent with the statement of Theorem 2. In Theorem 1, which concerns bounded-depth circuits, we allow the internal {∧, ∨}-nodes to have unbounded fan-in.
We consider the computation of C( x, y) on input pairs where each bit in the second input y is a function of x. Furthermore, we will restrict our analysis to monotone computations over a set A ⊆ {0, 1} n of interest. For this reason, to specify the computation of C on a string x ∈ A, we will associate to each local oracle variable y i a corresponding monotone function f i : A → {0, 1}.
In order to obtain a non-trivial notion of circuit complexity in this model, we use a real-valued parameter µ ∈ [0, 1] to control the family of admissible functions f_i. Each function f_i separates a particular pair of sets U_i ⊆ f^{-1}(1) ⊆ A and V_i ⊆ f^{-1}(0) ⊆ A, but C must be correct no matter the choice of the functions f_i separating these sets. The parameter µ captures the measure of $\bigcup_i U_i \times V_i$. This is formalized by the definitions introduced next.
Correctness and locality.
We say that a pair W_i = (U_i, V_i) is included in the pair W = (U, V) if U_i ⊆ U and V_i ⊆ V, and that a sequence W = (W_i)_{i∈[e]} of such pairs has locality ≤ µ with respect to a distribution D over U × V if D(⋃_{i∈[e]} U_i × V_i) ≤ µ. Given W = (W_i)_{i∈[e]}, we say that a sequence F = (f_i)_{i∈[e]} of monotone functions f_i : A → {0, 1} separates W if f_i(U_i) = 1 and f_i(V_i) = 0 for every i ∈ [e]. For convenience, we also say in this case that F is a W-separating sequence of functions.
Given a monotone CLO pair (C, W) as above, and a W-separating sequence F of monotone functions, let C(x, F) denote the function in A → {0, 1} that agrees with the output of C when each oracle input y_i is set to f_i(x). Observe that C(x, F) is a monotone function over A = U ∪ V, since C is a monotone circuit and each f_i is a monotone function over A. We will sometimes abuse notation and view C(x, F) as a circuit. We say that the pair (C, W) computes the function f : A → {0, 1} if for every W-separating sequence F of monotone functions, we have C(a, F) = f(a) for all a ∈ A. (We stress that the monotone CLO pair must be correct on every input string, and on every W-separating sequence.) We say that f can be computed over A ⊆ {0, 1}^n by a monotone circuit with local oracles of size s and locality µ (with respect to a distribution D) if there exists a monotone circuit C(x, y) of size ≤ s and a sequence W = (U_i, V_i)_{i∈[e]} of length e ≤ s that is included in W = (f^{-1}(1), f^{-1}(0)) and has locality ≤ µ such that the monotone CLO pair (C, W) computes f over A.
For convenience of notation, we will sometimes write y i = y[U i , V i ] to indicate a local oracle over the pair W = (U i , V i ).
Defining U_{n,k}, V_{n,k}, and D_{n,k}. Let m = \binom{n}{2}, where n ≥ 4, and let k ∈ N be an integer satisfying 3 ≤ k < n. We view [n] as a set of vertices, and [m] as its associated set of (undirected) edges. For B ⊆ [n], we use K_B ∈ {0, 1}^m to denote the graph (also viewed as a string) corresponding to a clique over B. Let

$$U_{n,k} \;\stackrel{\mathrm{def}}{=}\; \{\,K_B \mid B \subseteq [n],\ |B| = k\,\},$$

and let V_{n,k} be the set of complete (k − 1)-partite graphs over [n], defined as follows. Clearly, U_{n,k} ∩ V_{n,k} = ∅. It is convenient to associate to each coloring χ : [n] → [k − 1] the graph G(χ) ∈ {0, 1}^m in which {u, v} is an edge if and only if χ(u) ≠ χ(v), and to let V^χ_{n,k} be the family of all possible colorings of [n] using at most k − 1 colors. Under our definitions, for a given coloring χ ∈ V^χ_{n,k} we have G(χ) ∈ V_{n,k} if and only if |χ([n])| = k − 1. We measure the locality of monotone CLO pairs (C, W) separating U_{n,k} and V_{n,k} with respect to a product distribution D_{n,k} def= D^U_{n,k} × D^V_{n,k}, whose components are defined as follows. D^U_{n,k} is simply the uniform distribution over the k-cliques in U_{n,k}, while D^V_{n,k} assigns to each fixed graph H ∈ V_{n,k} the probability mass Pr_χ[G(χ) = H], where χ is a uniformly random coloring in V^χ_{n,k} conditioned on using exactly k − 1 colors. 7 (This is simply the uniform distribution over V_{n,k}, but this is not the most convenient point of view in some estimates.)

The sequence F⋆. The definition introduced above agrees with the formulation of monotone circuits with oracles from [Kra16]. We stress that a source of difficulty when computing a function f : A → {0, 1} using a monotone circuit C(x, y) and a sequence W = (W_i) of pairs included in W = (f^{-1}(1), f^{-1}(0)) is that C(x, F) must be correct for every W-separating sequence F = (f_i) of monotone functions. In order to prove lower bounds against a monotone CLO pair (C, W), we will consider a particular instantiation of the monotone functions f_i : A → {0, 1}, discussed next. Let W_i = (U_i, V_i), and define f⋆_i = f⋆_{(U_i, V_i)} : A → {0, 1} as follows: for a ∈ f^{-1}(1), f⋆_i(a) = 1 if and only if a ∈ U_i, while for a ∈ f^{-1}(0), f⋆_i(a) = 1 if and only if a ∉ V_i. We write F⋆ = (f⋆_i) to denote the corresponding sequence of functions for a given choice of W = (W_i). For an arbitrary monotone function f : A → {0, 1}, the functions f⋆_i need not be monotone over A. However, for the problem investigated in our work f⋆_i is always monotone, as stated next.

Claim 1. Let A_{n,k} def= U_{n,k} ∪ V_{n,k}, and let W_i = (U_i, V_i) be included in (U_{n,k}, V_{n,k}). Then f⋆_i is a monotone function over A_{n,k} that separates W_i.
Proof. It is enough to observe that, under these assumptions, there are no distinct strings a_1, a_2 ∈ A_{n,k} satisfying a_1 ⪯ a_2; in other words, A_{n,k} is an antichain, and consequently every boolean function over A_{n,k} is monotone over A_{n,k}. Here we crucially used that the (k − 1)-partite graphs in V_{n,k} have exactly k − 1 non-empty parts.
The use of F ⋆ to prove lower bounds against monotone CLO pairs (C, W) computing a monotone function f : A → {0, 1} is justified by the following observation, which describes an extremal property of F ⋆ .
Claim 2. Let F = (f_i) be an arbitrary W-separating sequence of monotone functions, and suppose that C(a, F) ≠ f(a) for some a ∈ A. Then C(a, F⋆) ≠ f(a) as well.

Proof. Assume that a ∈ U. Consequently, f(a) = 1, and the assumption gives C(a, F) = 0. Since F separates W, we have f⋆_i(a) ≤ f_i(a) for every i. By the monotonicity of the circuit C, it follows that C(a, F⋆) ≤ C(a, F). Thus C(a, F⋆) is incorrect on input a as well. The case where a ∈ V is analogous, using that f⋆_i(a) ≥ f_i(a) for every a ∈ V.
Therefore, F⋆ is the hardest separating sequence: any monotone CLO pair that computes f under F⋆ computes f under every W-separating sequence.
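The extremal sequence F⋆ is easy to experiment with on small instances. The toy evaluator below (our own sketch, reusing the edge-set encoding from the earlier code block) implements f⋆_{(U_i, V_i)} as the function that is 1 exactly on U_i over positive inputs and 0 exactly on V_i over negative inputs, and evaluates a depth-2 CLO term by term; on tiny n, enumerating all separating choices for the oracles gives a direct check of Claim 2:

```python
def f_star(U_i, V_i):
    """f*_{(U_i, V_i)} over A = U ∪ V: on positive inputs it is 1 exactly on U_i;
    on negative inputs it is 0 exactly on V_i (the extremal choice above)."""
    def f(a, positive):
        return (a in U_i) if positive else (a not in V_i)
    return f

def eval_dnf(terms, a, positive, fs):
    """Evaluate a depth-2 CLO on input `a` (a frozenset of edges): `terms` is a
    list of (edge_set, oracle_indices); a term fires iff all its edges appear in
    `a` and all its oracle functions from `fs` output 1 on `a`."""
    return any(edges <= a and all(fs[i](a, positive) for i in idxs)
               for edges, idxs in terms)
```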
Remark 1 (Simulating negated inputs). It is possible to simulate negated input variables in this model: over A_{n,k}, a local oracle y_i = y[U_i, V_i] with U_i = {a ∈ U_{n,k} : a_e = 0} and V_i = {a ∈ V_{n,k} : a_e = 1} is forced to agree with the negated variable ¬x_e on U_i ∪ V_i, at the cost of a rectangle U_i × V_i of constant measure.
It is well-known that U n,k and V n,k can be separated by counting input edges and using a single negation gate. However, it is easy to see that, by combining the latter construction with the trick above, we get monotone circuits with oracles of huge locality.
Indeed, for the problem investigated here, monotone circuits with local oracles can be seen as an intermediary model between monotone and non-monotone circuits, where the locality parameter µ restricts the computation of the extra input variables y i .
In order to be precise, we rephrase the hypothesis A d employed in Theorem 2 using the notation introduced in this section.
The assumption A_d. Let d ∈ N, and (C, W) be a monotone CLO pair with W = (W_i)_{i∈I} and W_i = (U_i, V_i). We say that (C, W) satisfies assumption A_d if for every set S ⊆ I with |S| = d + 1 we have ⋂_{i∈S} U_i = ∅.

Phase transitions in depth-2: Proof of Theorem 1

Our argument relies on Claims 1 and 2 described in Section 2. We start with a straightforward adaptation of a lemma from [Kra16].
Lemma 1. Over inputs a ∈ A, for every i, j ∈ [e], the following holds:

1. f⋆_{(U_i, V_i)}(a) ∧ f⋆_{(U_j, V_j)}(a) = f⋆_{(U_i ∩ U_j, V_i ∪ V_j)}(a);
2. f⋆_{(U_i, V_i)}(a) ∨ f⋆_{(U_j, V_j)}(a) = f⋆_{(U_i ∪ U_j, V_i ∩ V_j)}(a).

Proof. Immediate from the definitions.
First, we prove a weaker version of Theorem 1 that forbids the extra inputs g j ( y). Then we use Lemma 1 to observe that our argument extends to the more general class of circuits.
Let ε > 0 be a fixed constant, and n be sufficiently large.
Case 1: µ = 1. A single local oracle y[U_{n,3}, V_{n,3}] separates the two sets by itself, so s(n, 1) = Θ(1).

Case 2: 1/2 ≤ µ ≤ 1 − ε. The O(n^2) upper bound is witnessed by a DNF of the form ⋁_e (x_e ∧ y_e) over all edges e of the complete graph on [n], with U_e the triangles whose lexicographically first edge is e and V_e the negative instances containing the edge e (cf. the generalization in Lemma 2, which yields locality 1/2 + o(1) when ℓ = 2 and k = 3; handling a o(1)-fraction of the triangles by explicit terms brings the locality down to 1/2 while keeping the size O(n^2)). We argue next the lower bound on circuit size for this range of µ. In other words, we prove that if µ ≤ 1 − ε then the circuit size is Ω_ε(n^2). Let (C, W) be a monotone CLO pair, where C(x, y) is a monotone DNF with t ≤ s terms, W = (W_i)_{i∈[e]}, e ≤ s, W_i = (U_i, V_i), and each W_i is included in the pair (U_{n,3}, V_{n,3}). Further, let B = ⋃_i U_i × V_i. Assume the pair (C, W) computes 3-clique over A_{n,3}. In order to establish a lower bound, we consider the sequence F⋆, as defined in Section 2. Then, using Lemma 1, we can write this circuit in an equivalent way as follows:

$$C(x, F⋆) \;\equiv\; \bigvee_{j \in [t]} \Bigl(\bigwedge_{e \in S_j} x_e\Bigr) \wedge f⋆_{(U_j, V_j)}, \qquad (1)$$

where S_j is the set of x-variables of the j-th term and (U_j, V_j) is the pair obtained by intersecting the U-sets and unioning the V-sets of the oracle variables appearing in that term (Lemma 1). This is without loss of generality, since terms that did not originally include a y-variable can be represented using f⋆_{(U_{n,3}, ∅)}, which is equivalent to the constant 1 function over inputs in A_{n,3}.
Next, observe that if |S j | > 3 for some j ∈ [t] then the corresponding term cannot accept an input from U n,3 . Thus we can assume without loss of generality that 0 ≤ |S j | ≤ 3. Partition the terms of C( x, F ⋆ ) into sets T ℓ , 0 ≤ ℓ ≤ 3, with T ℓ containing all terms for which |S j | = ℓ.
Every triangle K_B accepted by a term from T_0 forces a measure ≥ 1/\binom{n}{3} in B, since the corresponding functions f⋆ must reject all of V_{n,3} in order for the term not to accept a complete bipartite graph H ∈ V_{n,3}; that is, the union of the V-sets of the oracle variables in such a term equals V_{n,3}, so {K_B} × V_{n,3} ⊆ B. Consequently, using that µ ≤ 1 − ε, a total number of at most r = (1 − ε)\binom{n}{3} triangles can be accepted by terms in T_0. Now each term in T_2 or in T_3 accepts at most one triangle, and each term in T_1 accepts at most n triangles. Therefore, using the preceding paragraph, in order for the circuit to accept all \binom{n}{3} triangles in U_{n,3}, we must have:

$$r + n\,|T_1| + |T_2| + |T_3| \;\ge\; \binom{n}{3}.$$

This implies that at least one of |T_1|, |T_2|, and |T_3| must be Ω(n^2). In particular, the original circuit must have size at least Ω(n^2).
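In more detail, the displayed inequality rearranges into the claimed size bound:

$$n\,|T_1| + |T_2| + |T_3| \;\ge\; \varepsilon\binom{n}{3}, \qquad\text{hence}\qquad |T_1| \;\ge\; \frac{\varepsilon}{3n}\binom{n}{3} = \Omega_\varepsilon(n^2) \quad\text{or}\quad \max\bigl(|T_2|, |T_3|\bigr) \;\ge\; \frac{\varepsilon}{3}\binom{n}{3} = \Omega_\varepsilon(n^3),$$

since otherwise all three summands would be smaller than (ε/3)\binom{n}{3} and their sum could not reach ε\binom{n}{3}.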
Case 3: 0 ≤ µ ≤ 1/2 − ε. The O(n^3) size upper bound at µ = 0 is achieved by the trivial monotone circuit for 3-clique. For the lower bound, we adapt the argument presented above. Using the same notation, we assume there is a correct circuit as described in (1). By the same reasoning, |S_j| ≤ 3 for each j ∈ [t]. Furthermore, we can assume that the edges corresponding to each S_j are contained in some triangle from U_{n,3}. Rewrite C(x, F⋆) as an equivalent circuit C′:

$$C'(x, F⋆) \;=\; \bigvee_{\ell \in I_{\le 2}} \Bigl(\bigwedge_{e \in S_\ell} x_e\Bigr) \wedge f⋆_{(U_\ell, V_\ell)} \;\;\vee\;\; \bigvee_{i \in I_3} \Bigl(\bigwedge_{e \in S_i} x_e\Bigr) \wedge f⋆_{(U_i, V_i)}, \qquad (2)$$

where I_{≤2} contains the indexes of the original sets S_j such that the edges obtained from S_j touch at most 2 vertices, and I_3 contains the indexes corresponding to sets S_j whose edges span exactly 3 vertices. First, suppose there exists ℓ ∈ I_{≤2} such that D^V_{n,3}(V_ℓ) ≤ 1/2 − ε/4. This implies that f⋆_ℓ rejects a subset of V_{n,3} of measure at most 1/2 − ε/4. Moreover, using that ℓ ∈ I_{≤2}, ⋀_{e∈S_ℓ} x_e rejects a subset of V_{n,3} of measure at most 1/2 + ε/8. Consequently, the ℓ-th term of the original circuit C(x, F⋆) must accept some negative input from V_{n,3}. This violates the assumption that the initial monotone CLO pair computes 3-clique over A_{n,3}. We get from the previous argument that for every ℓ ∈ I_{≤2}, D^V_{n,3}(V_ℓ) ≥ 1/2 − ε/4. Consider now the quantity η = |⋃_{ℓ∈I_{≤2}} U_ℓ| / |U_{n,3}|, and observe that µ ≥ η · (1/2 − ε/4) by the previous density lower bound. Since we are in the case where µ ≤ 1/2 − ε, we obtain η ≤ 1 − Ω_ε(1).
In turn, using the definition of η and of F⋆, it follows that the left-hand side of C′(x, F⋆) in (2) accepts at most an η-fraction of U_{n,3}. By the correctness of C(x, F⋆), the right-hand side of the equivalent circuit C′(x, F⋆) must accept at least an Ω_ε(1)-fraction of the triangles in U_{n,3}. Now observe that for each i ∈ I_3, the corresponding term ⋀_{e∈S_i} x_e accepts exactly one triangle. Therefore, we must have |I_3| ≥ Ω_ε(\binom{n}{3}). This completes the proof that t = Ω_ε(n^3).
In order to prove lower bounds in the presence of g_j(y) input variables, observe that the following holds. First, all lower bounds were obtained using F⋆. Due to Lemma 1, each g_j(y) is equivalent over A_{n,3} to f⋆_{(U′, V′)} for appropriate sets U′ and V′ with U′ × V′ ⊆ B. Finally, in addition to the locality bound, the inclusion in B is the only information about the y-variables that was employed in the proofs. In other words, each g_j(y) can be treated as a new y-variable in the arguments above, without affecting the locality bounds.
This extends the lower bound to the desired class of circuits, and completes the proof of Theorem 1.
Circuits with restricted oracles: Proof of Theorem 2
We start with the upper bound.

Lemma 2. Let 5 ≤ k = k(n) ≤ n^{1/4} and 2 ≤ ℓ < k. There exists a monotone circuit with local oracles E(x, y) of size O(n^ℓ · ℓ^2) that separates U_{n,k} and V_{n,k}, whose y-variables satisfy condition A_1 and have locality µ ≤ exp(−Ω(ℓ^2/k)).
Proof. We generalize a construction in the proof of Theorem 1. For every set B ∈ \binom{[n]}{k}, let F(B) ∈ \binom{B}{ℓ} be the lexicographically first ℓ-sized subset of B. Consider the following monotone circuit with local oracles:

$$E(x, y) \;=\; \bigvee_{D \in \binom{[n]}{\ell}} \Bigl(\bigwedge_{\{u,v\} \in \binom{D}{2}} x_{\{u,v\}}\Bigr) \wedge y_D,$$

where to each y_D we associate a pair (U_D, V_D) with U_D × V_D ⊆ U_{n,k} × V_{n,k}, defined as follows: U_D = {K_B : B ∈ \binom{[n]}{k} and F(B) = D}, and V_D = {G(χ) ∈ V_{n,k} : χ assigns pairwise distinct colors to the vertices of D}. Distinct sets U_D are pairwise disjoint, since each clique K_B belongs to U_D only for D = F(B). In other words, assumption A_1 is satisfied. Further, the size of E is O(\binom{n}{ℓ} · ℓ^2) = O(n^ℓ · ℓ^2). The correctness of this monotone CLO can be established by a straightforward generalization of the argument from Section 3. It remains to estimate its locality parameter µ.
Fix a set D ∈ \binom{[n]}{ℓ}, and let γ_D def= D^V_{n,k}(V_D). By symmetry, γ_D = γ_{D′} for every D′ ∈ \binom{[n]}{ℓ}. Since distinct sets U_D are pairwise disjoint and locality is measured with respect to the product distribution D_{n,k} = D^U_{n,k} × D^V_{n,k}, the locality of the oracle rectangles associated with E is at most γ_D. This value can be upper bounded as follows:

$$\gamma_D \;=\; \Pr_{\chi}\bigl[\text{the vertices of } D \text{ receive pairwise distinct colors}\bigr] \;\le\; (1+o(1)) \prod_{j=1}^{\ell-1} \Bigl(1 - \frac{j}{k-1}\Bigr) \;\le\; e^{-\Omega(\ell^2/k)}.$$

This completes the proof of Lemma 2.
The upper bound in Theorem 2 follows immediately from Lemma 2, by taking a large enough ℓ = O( √ k). Observe that, more generally, one can get a trade-off between circuit size and locality.
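For instance, ignoring the conditioning on colorings that use exactly k − 1 colors (a negligible correction for k ≤ n^{1/4}), the trade-off can be spelled out as:

$$\gamma_D \;\le\; \exp\Bigl(-\frac{\ell(\ell-1)}{2(k-1)}\Bigr), \qquad \text{so choosing } \ell = \Bigl\lceil \sqrt{2(k-1)\ln(1/\varepsilon)} \Bigr\rceil + 1 \text{ gives locality } \mu \le \varepsilon \text{ and size } O(n^{\ell}\ell^{2}) = n^{O_\varepsilon(\sqrt{k})},$$

while smaller values of ℓ give smaller circuits at the cost of a larger locality parameter.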
We move on now to the lower bound part, which relies on a sequence of lemmas. For a set X ⊆ [n], we let $\lceil X \rceil \stackrel{\mathrm{def}}{=} \bigwedge_{\{i,j\} \in \binom{X}{2}} x_{\{i,j\}}$ be the corresponding clique indicator circuit. For convenience, we define ⌈X⌉ def= 1 if X is a singleton or the empty set. Also, note that ⌈X⌉ = ⌈X⌉ ∧ f⋆_{(U_{n,k}, ∅)} over A_{n,k}. Under this notation, we don't need to consider standalone terms in the lemma below, which adapts to our setting a result from [Kra16].
Lemma 3. Let W = (W_i) with W_i = (U_i, V_i) be a sequence of pairs included in (U_{n,k}, V_{n,k}). Let C(x, y) be a monotone circuit with local oracles of the form

$$C(x, y) \;=\; \bigvee_{i \in [t]} \lceil X_i \rceil \wedge y_i,$$

where t is arbitrary, |X_i| ≤ ⌊√k⌋, k(n) ≥ 5, and all rectangles U_i × V_i ⊆ B, for some set B ⊆ U_{n,k} × V_{n,k} of locality µ ≤ 1/16. Then, for large enough n, the following holds.
1. Either C(x, F ⋆ ) accepts a subset of V n,k of measure at least 1/10, or 2. C(x, F ⋆ ) rejects a subset of U n,k of measure at least 1/10.
Proof. If t = 0 the circuit computes a constant function, and consequently one of the items above must hold. Otherwise, for each i ∈ [t], since U_i × V_i ⊆ B and D_{n,k} = D^U_{n,k} × D^V_{n,k}, we have that either D^U_{n,k}(U_i) ≤ µ^{1/2} or D^V_{n,k}(V_i) ≤ µ^{1/2}. We consider two cases. First, assume there is i ∈ [t] such that D^V_{n,k}(V_i) ≤ µ^{1/2} ≤ 1/4. Then,

$$\Pr_{H \sim D^V_{n,k}}\bigl[C(H, F⋆) = 1\bigr] \;\ge\; \Pr_H\bigl[\lceil X_i \rceil(H) = 1\bigr] - D^V_{n,k}(V_i) \;\ge\; 1 - \Pr_H\bigl[\text{some pair in } X_i \text{ is monochromatic}\bigr] - 1/4.$$

The latter probability is 0 if |X_i| ≤ 1. Otherwise, it can be upper bounded by

$$(1+o(1)) \binom{|X_i|}{2} \frac{1}{k-1} \;\le\; (1+o(1)) \, \frac{k - \sqrt{k}}{2(k-1)}.$$

This shows that item 1 above holds, using k ≥ 5 and the previous estimate.
If there is no i ∈ [t] satisfying D^V_{n,k}(V_i) ≤ µ^{1/2}, by the observation in the first paragraph of this proof we get that D^U_{n,k}(U_i) ≤ µ^{1/2} and D^V_{n,k}(V_i) > µ^{1/2} for all i ∈ [t]. Recall that the measure of B is at most µ ≤ 1/16. Therefore, it must be the case that |⋃_i U_i| / |U_{n,k}| ≤ µ^{1/2}, as each K_B in this union contributes at least µ^{1/2}/|U_{n,k}| to the measure of B. Due to our choice of F⋆ and the structure of C, C(x, F⋆) accepts at most a (1/4)-fraction of U_{n,k}, and item 2 holds.
Crucially, Lemma 3 requires no upper bound on the number of terms appearing in C, and this will play a fundamental role in the argument below.
For the rest of the proof, let D(x, y) be a monotone CLO of size s that computes k-clique over A_{n,k}, and W_i = (U_i, V_i) for i ≤ e be its associated pairs, where e ≤ s. As usual, we set B = ⋃_{i∈[e]} U_i × V_i, and we assume that B has locality µ ≤ 1/16 and that the pair satisfies assumption A_d: every d + 1 of the sets U_i have empty intersection.
We can assume without loss of generality that different oracle variables appearing in the description of the circuit are associated to distinct subsets of U n,k . Indeed, due to monotonicity (cf. Claim 2), we can always take a larger subset of V n,k if different oracle variables are associated to the same subset of U n,k . A bit more precisely, if y i = y i [U ′ , V i ] and y j = y j [U ′ , V j ], we can redefine these local oracles to use the pair (U ′ , V i ∪V j ). This does not increase the overall locality, and does not change the correctness of the computation. Note that this transformation produces oracle variables associated to the same pair of subsets, but since we use boolean circuits instead of boolean formulas, oracle variables don't need to be repeated in the description of the circuit.
For J ⊆ [e], we use D J ( x) to denote the circuit with y j substituted by 1 if j ∈ J, and by 0 otherwise. In particular, each D J is a monotone circuit in the usual sense, i.e., it does not contain local oracle gates. Moreover, size(D J ) ≤ size(D).
Lemma 4. Under assumption A_d, the monotone circuit

$$D'(x) \;\stackrel{\mathrm{def}}{=}\; \bigvee_{J \subseteq [e],\ |J| \le d} D_J(x) \wedge f⋆_{(U_J, V_J)}(x)$$

accepts every input in U_{n,k} and rejects every input in V_{n,k}, where U_J def= ⋂_{j∈J} U_j and V_J def= ⋃_{j∈J} V_j (here an empty intersection is U_{n,k} and an empty union is ∅, corresponding to the case where J = ∅).
Proof. First, observe that for inputs in A_{n,k},

$$D(x, F⋆) \;\equiv\; \bigvee_{J \subseteq [e]} D_J(x) \wedge \bigwedge_{j \in J} f⋆_j(x) \wedge \bigwedge_{j \notin J} \neg f⋆_j(x),$$

using our definition of D_J(x). As we explain below, this circuit is further equivalent to a circuit where we drop the negated part:

$$D(x, F⋆) \;\equiv\; \bigvee_{J \subseteq [e]} D_J(x) \wedge \bigwedge_{j \in J} f⋆_j(x).$$

Clearly, by eliminating some "literals" we can only accept more inputs. However, by monotonicity the latter is not going to happen. Indeed, suppose a term indexed by J accepts a negative input H ∈ V_{n,k} in the simplified circuit, and let J′ ⊇ J be the set of all indexes j with f⋆_j(H) = 1. Then the term indexed by J′ in the original circuit also accepts H, where we have used the monotonicity of D(x, y) in order to claim that D_{J′}(H) ≥ D_J(H). This is impossible, since by assumption D(x, F⋆) separates U_{n,k} and V_{n,k}.
Using Lemma 1, we know that ⋀_{j∈J} f⋆_j = f⋆_{(U_J, V_J)}, for U_J and V_J as in the statement of the lemma. Under assumption A_d, whenever |J| > d we get U_J = ∅. Therefore,

$$D(x, F⋆) \;\equiv\; \bigvee_{|J| \le d} D_J(x) \wedge f⋆_{(U_J, V_J)}(x) \;\;\vee\;\; \bigvee_{|J| > d} D_J(x) \wedge f⋆_{(\emptyset, V_J)}(x). \qquad (3)$$

Using the equivalences established above and the correctness of the original circuit, the circuit in (3) accepts every input in U_{n,k}, and rejects every input in V_{n,k}. Now observe that the right-hand terms of the circuit cannot accept an input in U_{n,k}, due to the presence of the functions f⋆_{(∅, V_J)}. Thus such terms can be discarded, and the circuit obtained after this simplification still accepts U_{n,k} and rejects V_{n,k}. This completes the proof of the lemma.
Observe that U J ×V J ⊆ B for every J ⊆ [e], due to Lemma 1. In particular, the simplification above is well-behaved with respect to the new oracle rectangles introduced in the transformation.
The next steps of our argument rely on results from Alon and Boppana [AB87] related to the approximation method [Raz85]. We follow the terminology of the exposition in Boppana and Sipser [BS90]. Approximate each individual circuit D_J(x) as in Boppana-Sipser, obtaining a corresponding depth-2 approximator D̃_J(x): a disjunction of clique indicators ⌈X⌉ with |X| ≤ ⌊√k⌋. Since each D_J(x) is a monotone circuit of size at most s, our choice of U_{n,k} and V_{n,k} and the argument in [BS90] provide the following bounds: D̃_J disagrees with D_J on at most an s · n^{−Ω(√k)} fraction of U_{n,k}, and on at most an s · n^{−Ω(√k)} fraction of V_{n,k}.

Now define using D and the individual approximators D̃_J a corresponding monotone circuit D̃(x, y) with access to the functions f⋆_{(U_J, V_J)}:

$$\widetilde{D}(x, F⋆) \;\stackrel{\mathrm{def}}{=}\; \bigvee_{J \subseteq [e],\ |J| \le d} \widetilde{D}_J(x) \wedge f⋆_{(U_J, V_J)}(x).$$

After merging terms, D̃(x, F⋆) is a circuit of the form covered by Lemma 3, with all oracle rectangles U_J × V_J contained in B; consequently, it either accepts at least a 1/10-fraction of V_{n,k} or rejects at least a 1/10-fraction of U_{n,k}. On the other hand, by Lemma 4 the circuit D′(x) is correct on A_{n,k}, and by the approximation bounds above D̃(x, F⋆) disagrees with D′(x) on at most an s^{O(d)} · n^{−Ω(√k)} fraction of each of U_{n,k} and V_{n,k}. These facts are compatible only if s^{O(d)} · n^{−Ω(√k)} ≥ 1/10, which gives size(D) = s = n^{Ω(√k/d)} and completes the proof of the lower bound in Theorem 2.
Concluding remarks
We discuss below some questions and directions motivated by our results, and elaborate a bit more on the connection to proof complexity.
Monotone circuit complexity. The main open problem in the context of circuit complexity is to understand the size of monotone circuits of small locality separating the sets U_{n,k} and V_{n,k}, under no further assumption on the y-variables. It is not clear if the hypothesis A_d in Theorem 2 is an artifact of our proof. As far as we know, it is conceivable that smaller circuits can be designed by increasing the overlap between the sets U_i. 10 However, if one is more inclined to lower bounds, we mention that the fusion approach described in [Kar93] can be easily adapted to monotone circuits with local oracles, and that this point of view might be helpful in future investigations of unrestricted monotone CLOs.
Another question of combinatorial interest is whether the phase transitions observed in Theorem 1 extend to more expressive classes of monotone circuits beyond depth two. More broadly, are the phase transitions observed here particular to k-clique, or an instance of a more general phenomenon connected to computations using monotone circuits extended with oracle gates?
Corollary 1 suggests the following problem. Is it possible to refine the approach from [AB87], and to prove that the monotone circuit size complexity of k-clique is n^{Ω(k)} for a larger range of k? In a related direction, it would be interesting to understand if monotone CLOs can shed light on the difficulties in proving stronger monotone circuit size lower bounds for other boolean functions of interest, such as the matching problem on graphs (see e.g. [AB87, Section 5] and [Juk12, Section 9.11]).
Proof complexity. Back to the original motivation from proof complexity, we have been unable so far to transform proofs in R(Lin/F_2) into monotone CLOs satisfying A_d, for d ≤ k^{1/2−ε}, or certain variations of A_d under which Theorem 2 still holds. Observe that, using the connections established in [Kra16], this would be sufficient for exponential lower bounds on proof size.
The reduction from randomized feasible interpolation actually provides a distribution on monotone CLOs C_r with a common bound on their sizes, such that each C_r is correct and Pr_r[(u, v) ∈ B_r] ≤ µ for every fixed pair (u, v) ∈ U × V, where B_r is the union of the oracle rectangles in C_r. An averaging argument then yields a fixed monotone CLO whose locality is bounded by µ. Depending on the choice of the distribution D supported over U × V, one might lose information useful for a lower bound in this last step. Even though our initial attempts at establishing new length-of-proofs lower bounds have been unsuccessful, we feel that in order to prove limitations for R(Lin/F_2) and for other proof systems via randomized feasible interpolation it should be sufficient to establish lower bounds against monotone CLOs under an appropriate assumption on the oracle gates. (In particular, the existence of monotone CLOs of small size and small locality separating U_{n,k} and V_{n,k} does not imply that the approach presented in [Kra16] is fruitless.) For instance, while A_d is a semantic condition on the (unstructured) sets U_i and V_i, one can try to explore the syntactic information obtained on these sets from a given proof, such as upper bounds on the circuit complexity of separating each pair U_i and V_i, or other related structural information.
"Mathematics",
"Computer Science"
] |
Fabrication of hollow flower-like magnetic Fe3O4/C/MnO2/C3N4 composite with enhanced photocatalytic activity
The serious problems of environmental pollution and energy shortage have pushed the green photocatalysis technology to the forefront of research. Therefore, the development of an efficient and environmentally friendly photocatalyst has become a hotspot. In this work, a magnetic Fe3O4/C/MnO2/C3N4 composite photocatalyst was synthesized by combining in situ coating with low-temperature reassembling of CN precursors. Morphology and structure characterization showed that the composite photocatalyst has a hollow core-shell flower-like structure. In the composite, the magnetic Fe3O4 core was convenient for magnetic separation and recovery. The introduction of the conductive C layer could effectively suppress the recombination of photo-generated electrons and holes. The ultra-thin g-C3N4 layer could fully contact the coupled semiconductor. A Z-type heterojunction between g-C3N4 and flower-like MnO2 was constructed to improve photocatalytic performance. Under simulated visible light, the composite with 15 wt% loading exhibited a degradation efficiency of 94.11% for methyl orange within 140 min, and good recyclability in the cycle experiment.
Preparation of Fe3O4/C/MnO2/C3N4 composite. 6.0 g of dicyandiamide was calcined at 550 °C for 4.0 h to produce g-C3N4. Thereafter, 2.0 g of g-C3N4 powder was dispersed in 80 mL deionized water, and then heated at 210 °C for 6 h to form a transparent CN precursor. Fe3O4/C/MnO2 microspheres were added to the precursor (5.0, 10, 15, 20, or 30 wt%). The solvent was slowly removed through a lyophilization process. Finally, the Fe3O4/C/MnO2/C3N4 flower-like photocatalyst was obtained via annealing at 200 °C for 4.0 h in a tube furnace under N2 protection.
Characterization. A scanning electron microscope (SEM, JSM-6700F, JEOL Ltd., Japan) was employed to obtain surface topography images of the samples. Transmission images were obtained using a high-resolution transmission electron microscope (TEM, JEM-3010, Hitachi Co., Japan). X-ray diffraction patterns of the samples were obtained with an X-ray diffractometer (XRD, Shimadzu XRD-7000, Shimadzu Co., Japan). An X-ray photoelectron spectrometer (XPS, JPS-9010 MC, JEOL Ltd., Japan) was utilized to determine the surface elemental composition of the samples. The Brunauer-Emmett-Teller (BET, ASAP 2020, Quantachrome, US) method was used to measure the pore size and specific surface area of the catalyst. The saturation magnetization of the samples was obtained with a vibrating sample magnetometer (VSM, Lake Shore 7307, Lake Shore Ltd., USA). A photochemical reactor (BL-GHX-V, Shanghai Bilang Instruments Co., Ltd., China) was used to simulate the illumination. Ultraviolet-visible absorption spectra were measured on an ultraviolet-visible spectrophotometer (UV-vis, UV-5200PC, YuanXi, China).
Photocatalytic experiment. Firstly, 20 mg of Fe3O4/C/MnO2/C3N4 photocatalyst was added to 65 mL of 10 mg/L MO solution. Under a dark environment, the mixture was agitated to reach adsorption-desorption equilibrium. Secondly, the photocatalytic reaction was carried out with simulated light stemming from a 400 W metal halide lamp. The absorbance of the solution was monitored at intervals with the help of a UV-visible spectrophotometer. Ultimately, the degradation curves of the MO solution were recorded, followed by the calculation of the photocatalytic degradation rate.

Figure 2b shows the Fe3O4/P(MMA-DVB) microspheres prepared by the distillation precipitation process. Compared with the former, the surface of the latter becomes much smoother, which proves the successful formation of the polymer coating. These polymer core-shell microspheres have a diameter of 225 nm. To obtain the conductive carbon layer, the polymer microspheres were calcined and carbonized. The SEM image of the Fe3O4/C microspheres is displayed in Fig. 2c. One can see that the original core-shell structure of the material is not destroyed by the calcination treatment, and the agglomeration that originally occurred in the Fe3O4/P(MMA-DVB) polymer microspheres has been slightly weakened by the carbonization treatment. From Fig. 2d,e, it can be seen that the flower-like morphology of the composite microspheres produced by the hydrothermal method is composed of intersecting MnO2 sheets, and the overall particle size is about 480 nm. As shown in Fig. 2f, the overall flower-like morphology has not changed, but the thickness of the MnO2 flower sheets has increased significantly. This indicates that an ultra-thin C3N4 layer is successfully formed on the surface of MnO2 to form the flower-like Fe3O4/C/MnO2/C3N4 composite photocatalyst. It can be seen from Fig. 2g that the synthesized magnetic microspheres have a clear hollow structure with a particle size of about 200 nm. Figure 2h shows the TEM image of the Fe3O4/C microspheres, which have a core-shell structure with a 13 nm thick C shell. Figure 2i is the TEM image of the flower-like Fe3O4/C/MnO2/C3N4 microspheres. The composite photocatalyst, with a complete magnetic core and flower-like shell, exhibits a diameter of around 480 nm. According to these results, the composite photocatalyst with a magnetic core and flower-like shell was successfully prepared.

The crystal phase composition of the composite was demonstrated by XRD characterization, as shown in Fig. 3I. Figure 3I-a is the diffraction curve of the bulk g-C3N4 obtained by pyrolysis of dicyandiamide. The strong peak near 27.4° belongs to the (002) plane, corresponding to the crystal plane stacking of the CN aromatic system 48. The broad peak at 13.0° belongs to the (100) plane ascribed to the triazine repeat unit 44. The XRD patterns cannot verify the existence of the C layer. For further confirming the formation of the C layer, a Raman test was used to characterize the Fe3O4/C sample. The spectrum in Fig. 3II indicates two different peaks at 1344 cm−1 and 1596 cm−1, corresponding to the D-band and G-band of carbon material, respectively. These results confirm the carbonization of the Fe3O4/P(MMA-DVB) material, and that Fe3O4/C microspheres are successfully obtained.
These two bands are related to the A1g phonon of sp3 carbon atoms in disordered graphite and the in-plane vibration of sp2 carbon atoms in crystalline graphite, respectively 51. The peak intensity ratio (I_D/I_G) can evaluate the carbon material's crystallinity: the smaller the value, the higher the degree of atomic order 52. Herein, the value is 0.79, meaning that the carbon material is partially graphitized. Therefore, the presence of the carbon matrix can improve the electronic conductivity and help avoid the recombination of photo-generated electron-hole pairs.
The surface chemical composition and the chemical state of the products were demonstrated by XPS characterization. Figure 4a is the full-scan spectrum of the photocatalyst, presenting the peaks of Mn, O, N, and C elements. From Fig. 4b, as for the Mn 2p spectrum, two peaks at 653.9 eV and 642.3 eV correspond to Mn 2p1/2 and Mn 2p3/2. With respect to the O1s spectrum, as illustrated in Fig. 4c, three peaks at 529.7 eV, 531.3 eV, and 533.2 eV are fitted, which are separately attributed to Mn-O-Mn lattice oxygen, surface hydroxyl, and surface adsorbed oxygen. The C1s spectrum in Fig. 4d shows sub-bands centered at 284.8 eV and 288.5 eV, which are ascribed to the C-C coordination of surface-unstable carbon and the N=C-N2 of g-C3N4. In addition, there is another peak centered at 286.3 eV, which is assigned to the C-O bond formed between the C of C3N4 and the O of MnO2. This result indicates that MnO2 and g-C3N4 are closely connected and form a solid MnO2/g-C3N4 interface, promoting the transfer and separation of photo-generated carriers. The N1s spectrum is shown in Fig. 4e. The specific surface areas and pore volumes of the samples are listed in Table 1: the surface areas of the Fe3O4/C/MnO2 and Fe3O4/C/MnO2/C3N4 products are 119.56 m²/g and 120.25 m²/g, and their pore volumes are 0.35 cm³/g and 0.31 cm³/g, respectively. Since C3N4 does not significantly affect the morphology of the composite structure, these parameters of the two samples are almost identical. The high values are owing to the flower-like structure of the composite photocatalyst. The increase in specific surface area is conducive to exposing more active sites and increasing surface adsorption, thereby improving catalytic performance.
To evaluate the saturation magnetization of Fe3O4, Fe3O4/C, Fe3O4/C/MnO2 and Fe3O4/C/MnO2/C3N4, VSM measurements were conducted. It can be seen from Fig. 5a that the magnetization value of the Fe3O4 microspheres is 70.58 emu/g. After the carbon layer is added, the value for the Fe3O4/C microspheres decreases to 56.97 emu/g (Fig. 5b). After the flower-like MnO2 is fabricated, the relative content of the Fe3O4 component decreases, which obviously lowers the value for the Fe3O4/C/MnO2 microspheres, to 37.62 emu/g (Fig. 5c). With the further formation of g-C3N4, the value is 30.02 emu/g (Fig. 5d). This value still meets the needs of magnetic separation. As shown in the illustration, when a magnet is placed next to the Fe3O4/C/MnO2/C3N4 photocatalyst suspension, the photocatalyst is quickly attracted to the side of the cuvette in a short time. The results show that the photocatalyst has a good magnetic response to the magnetic field, favoring its magnetic separation from the mixed solution.
Determining the adsorption capacity of the photocatalyst in the dark reaction and then degrading MO under simulated light was used to investigate the photocatalytic activity of the prepared photocatalyst; the results are shown in Fig. 6. Figure 6a reveals that the mixture reached adsorption-desorption equilibrium within 60 min, and that Fe3O4/C/MnO2/C3N4 can adsorb about 22% of MO within 60 min, which is related to its higher specific surface area (120.25 m²/g). Figure 6b displays the change in the absorbance of the solution during the photocatalytic reaction, monitored by UV-Vis. One can clearly see that MO was almost completely degraded after 140 min with the Fe3O4/C/MnO2/C3N4 composite photocatalyst added; the degradation proceeds through reactive species generated by the photo-excited carriers (see the mechanism discussion below). Figure 6c indicates the change of the MO concentration ratio C_t/C_0 with light time, in which C_0 and C_t are the initial concentration of MO and the concentration of MO during the reaction, respectively. The degradation rate of the MO solution with the Fe3O4/C/MnO2/C3N4 photocatalyst reaches 94.11%. From Fig. 6d, this reaction can be described as a pseudo-first-order reaction, which belongs to the Langmuir-Hinshelwood model with ln(C_t/C_0) = −kt, where k is the apparent first-order rate constant. The calculated rate constant k of the Fe3O4/C/MnO2/C3N4 photocatalyst is 0.022 min−1. The excellent photocatalytic performance of the Fe3O4/C/MnO2/C3N4 composite material benefits from the synergistic effect between the various components.
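The degradation-rate and rate-constant calculations described above are straightforward to reproduce. The short Python sketch below uses hypothetical absorbance readings (the actual time series is in Fig. 6) and assumes that absorbance at the MO absorption maximum (~464 nm) is proportional to concentration:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical absorbance readings of the MO solution, taken every 20 min
# after the dark adsorption step; absorbance A is proportional to C.
t = np.array([0, 20, 40, 60, 80, 100, 120, 140], dtype=float)   # min
A = np.array([1.00, 0.70, 0.48, 0.33, 0.23, 0.16, 0.11, 0.059])  # a.u.

Ct_over_C0 = A / A[0]
efficiency = 100.0 * (1.0 - Ct_over_C0[-1])               # ~94% at 140 min
k, intercept, r, *_ = linregress(t, -np.log(Ct_over_C0))  # ln(Ct/C0) = -k t

print(f"degradation efficiency = {efficiency:.2f} %")
print(f"pseudo-first-order k = {k:.4f} min^-1 (R^2 = {r**2:.3f})")
```

With the stand-in values shown, the fit returns k ≈ 0.02 min−1, of the same order as the reported 0.022 min−1.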
In order to find the optimal ratio, the effect of the amount of g-C3N4 on the photocatalytic efficiency was investigated. Meanwhile, determining the minimum optimal amount of photocatalyst in practical applications is important to reduce costs. Composite photocatalysts containing different amounts of g-C3N4 (5%, 10%, 15%, 20%, 30%) were used to degrade MO dyes under the same conditions. From Fig. 7a,b, when the amount of g-C3N4 is 15%, the Fe3O4/C/MnO2/C3N4 composite photocatalyst has the highest degradation efficiency. In Fig. 7c, the effect of the amount of photocatalyst on the degradation efficiency is examined. The results show that the photocatalytic efficiency gradually increases as the amount of photocatalyst increases in the range of 0-20 mg, because the effective reaction area and the number of reactive sites increase. When the amount of photocatalyst continues to increase, the photocatalytic efficiency does not change significantly, which may be caused by particle agglomeration limiting the increase of active sites. Therefore, the optimal dosage of the Fe3O4/C/MnO2/C3N4 photocatalyst is 20 mg. Considering the industrial application of Fe3O4/C/MnO2/C3N4 nanoparticles, it is essential to investigate the recyclability and stability of the photocatalyst. The Fe3O4/C/MnO2/C3N4 was reused four times to examine its performance, and Fig. 7d reveals that the degradation rates for the four cycles are 94.11%, 90.42%, 88.37% and 79.69%, respectively. There is no doubt that after the photocatalyst is recycled the conversion rate decreases, which might result from the loss of sample during the cycling. However, even after four cycles the value is still 79.69%, which might be related to the structural stability of the used photocatalyst, strongly demonstrating that the designed photocatalyst has excellent recyclability. In this study, the Fe3O4/C/MnO2/C3N4 photocatalyst was synthesized by compounding g-C3N4 on the surface of MnO2. In terms of the enhanced photocatalytic activity, it is assumed that the charge transfer in the photocatalyst follows the Z-type mechanism, as shown in Fig. 8. For the individual g-C3N4 or MnO2 component, due to thermodynamic effects, photo-generated holes in g-C3N4 cannot oxidize OH− to form ·OH radicals, while photo-generated electrons in MnO2 cannot generate ·O2− radicals effectively. Therefore, individual g-C3N4 or MnO2 cannot deliver good photocatalytic performance. However, after a heterojunction is fabricated between these two components, the photo-generated electrons in the conduction band of MnO2 can be transferred to the valence band of g-C3N4 and combined with the photo-generated holes there. This configuration of the Z-type scheme makes the utilization of holes from MnO2 and electrons from g-C3N4 remarkably enhanced. In addition, the conductive C layer can also increase the separation of photo-generated electron-hole pairs in MnO2, which effectively prevents the recombination of photo-generated carriers. In the meantime, the higher specific surface area supplies many more active sites for photocatalytic activity. The prepared flower-like Fe3O4/C/MnO2/C3N4 photocatalyst forms a Z-type photocatalytic system, which effectively enhances carrier separation, so that the composite material has excellent photocatalytic degradation efficiency.
Conclusions
In summary, a magnetically recyclable flower-like Fe3O4/C/MnO2/C3N4 heterojunction photocatalyst was prepared for degrading organic dyes. The Fe3O4 core was used to facilitate magnetic separation and recovery. The C layer could conduct photo-generated electrons in MnO2 and protect the core. The thin g-C3N4 layer was compounded on the surface of MnO2, which greatly improved the specific surface area and the number of reactive sites of the material. The obtained Fe3O4/C/MnO2/C3N4 composites exhibited enhanced photocatalytic performance for the degradation of MO solution (65 mL, 10 mg/L) under simulated light irradiation. The maximum photocatalytic degradation efficiency was 94.11% within 140 min. It was assumed that a Z-type heterojunction was fabricated between MnO2 and g-C3N4, which stimulated the transfer of photo-generated electrons from the conduction band of MnO2 to the valence band of g-C3N4. This structure promoted the separation of photo-generated electron-hole pairs, inhibited the recombination of free charges, and improved the effective use of visible light. Here, an effective method to construct heterostructured nanomaterials for efficient photocatalytic degradation was provided.
"Materials Science",
"Environmental Science",
"Chemistry"
] |
Klf9 is a key feedforward regulator of the transcriptomic response to glucocorticoid receptor activity
The zebrafish has recently emerged as a model system for investigating the developmental roles of glucocorticoid signaling and the mechanisms underlying glucocorticoid-induced developmental programming. To assess the role of the Glucocorticoid Receptor (GR) in such programming, we used CRISPR-Cas9 to produce a new frameshift mutation, GR369-, which eliminates all potential in-frame initiation codons upstream of the DNA binding domain. Using RNA-seq to ask how this mutation affects the larval transcriptome under both normal conditions and with chronic cortisol treatment, we find that GR mediates most of the effects of the treatment, and paradoxically, that the transcriptome of cortisol-treated larvae is more like that of larvae lacking a GR than that of larvae with a GR, suggesting that the cortisol-treated larvae develop GR resistance. The one transcriptional regulator that was both underexpressed in GR369- larvae and consistently overexpressed in cortisol-treated larvae was klf9. We therefore used CRISPR-Cas9-mediated mutation of klf9 and RNA-seq to assess Klf9-dependent gene expression in both normal and cortisol-treated larvae. Our results indicate that Klf9 contributes significantly to the transcriptomic response to chronic cortisol exposure, mediating the upregulation of proinflammatory genes that we reported previously.
A principal component analysis (PCA) of the RNA-seq data indicated that 60% of the variance in gene expression among samples is captured in the first two PCs, which respectively correlate with the two treatments (chronic cortisol and absence of a GR, respectively accounting for 46% and 14% of the variance; Fig. 2B). The VBA− samples again underexpressed nr3c1, indicating that the VBA screen was effective in identifying homozygous mutants (Fig. S2B). The eight VBA+ (i.e. mixed wild type and heterozygous mutant) samples cluster according to whether they were treated with cortisol, with the two clusters (cortisol-treated and vehicle-treated controls) segregating toward opposite poles of PC1. This is not the case in the VBA− (i.e. homozygous GR369−) fish, the four cortisol-treated replicates of which are widely dispersed along PC1. This indicates that the global effect of the cortisol treatment captured in PC1 is GR-dependent and suggests as well that gene expression is less constrained overall in larvae lacking a GR. PC2 correlates with the presence and absence of a GR (VBA+ vs. VBA−, respectively).

[Figure 2 legend, in part: (C) Differential-expression comparisons: VBA+ vs. VBA− (vehicle-treated controls); cortisol-treated vs. vehicle-treated VBA+; and cortisol-treated vs. vehicle-treated VBA−; upregulated genes in common between the first two and the last two comparisons are listed on the right. (D) Box plots of Z-transformed expression levels of klf9 and npas4a obtained from the RNA-seq data. Sample numbers correspond to those in (B). Asterisks indicate significant differential expression (adjusted p values): *** < .0001; * < .05.]
Interestingly, along PC2 chronic cortisol treatment displaces the VBA+ transcriptome toward the VBA− pole, suggesting that the cortisol-treated fish adapt to the exposure by developing GR resistance. The regulatory roles of the GR were further assessed by analyzing differential gene expression (DGE) between pairs of treatments, using an adjusted p value (false-discovery) threshold of 0.05 as the criterion for differential expression. Comparison of VBA+ and VBA− larvae identified 405 genes affected by loss of the GR in 5-dpf larvae, 103 of which are underexpressed (Fig. 2C) and 302 of which are overexpressed in VBA− larvae compared to their VBA+ counterparts (Fig. S3A). Gene ontology (GO) analysis shows that the underexpressed genes (i.e. genes normally upregulated by the GR) are involved in sugar metabolism and response to heat, whereas the overexpressed genes (i.e. genes that are normally downregulated via the GR) are involved in basement membrane organization, epidermis development, cell adhesion, locomotion, and growth (Figs. S3 and S4; Table S1).
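A sketch of a PCA of this kind is shown below. The count matrix and its preprocessing are hypothetical stand-ins (the study's actual pipeline is described in its Materials and Methods), but the transform/center/project steps illustrate how sample coordinates on PC1/PC2 such as those in Fig. 2B are obtained:

```python
import numpy as np
from sklearn.decomposition import PCA

# `counts` stands in for a (samples x genes) matrix of library-size-normalized
# RNA-seq counts; here we fabricate one so the sketch runs end to end.
rng = np.random.default_rng(0)
counts = rng.poisson(50, size=(16, 2000)).astype(float)

logx = np.log2(counts + 1.0)      # simple variance-stabilizing transform
logx -= logx.mean(axis=0)         # center each gene before projecting

pca = PCA(n_components=2).fit(logx)
pcs = pca.transform(logx)         # per-sample coordinates on PC1/PC2
print("variance explained:", pca.explained_variance_ratio_)
```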
A DGE analysis comparing cortisol-treated VBA+ fish to their vehicle-treated VBA+ counterparts showed that in cortisol-treated larvae with a functional GR, 4,298 genes were differentially expressed (Fig. S3B), 2,177 of which were upregulated (Fig. 2C) and 2,121 of which were downregulated. GO enrichment analysis of the upregulated genes identified biological processes related to nervous system development and function as well as cell adhesion, locomotion, and growth, whereas the downregulated genes were largely involved in protein synthesis and metabolism (Figs. S5, S6; Table S2). Interestingly, the transcriptome of cortisol-treated VBA+ larvae overlapped more with that of vehicle-treated VBA− larvae than with that of vehicle-treated VBA+ larvae (Fig. S3C, Table S3), and accordingly, many of the biological processes affected by the absence of GR function in VBA− larvae were similarly affected by the chronic cortisol treatment in VBA+ larvae (Fig. S8). This again suggests that the latter larvae develop a GR-resistant phenotype. This resistance is probably not associated with any effect on levels of the GR or MR in the cortisol-treated larvae, as neither nr3c1 nor nr3c2 displays significant differential expression in response to the cortisol treatment.
We reasoned that some of the transcriptomic effects of the cortisol treatment might stem from GR-induced upregulation of a GR target gene that functions as a feedforward transcriptional regulator of GR signaling. To the extent that basal expression of such a gene depends on the presence of a GR, it might be expected to be underexpressed in VBA− larvae (which lack GR function) but upregulated in VBA+ larvae in response to chronic cortisol (i.e., opposite of the predominant trend noted above). Of the 2,177 genes upregulated in cortisol-treated VBA+ larvae only four were basally underexpressed in VBA− larvae (Fig. 2C), two of which encode transcription factors: klf9 and per1a. Of these, only klf9 was also found to be significantly upregulated in our previous RNA-seq analysis of the effects of chronic cortisol treatment in wild-type fish (Fig. S3C), being one of the most highly upregulated transcription factors 9. A plot comparing klf9 expression in each of the conditions reveals that the GR contributes to both its normal developmental expression and its upregulation in response to the cortisol treatment (Fig. 2D), which was confirmed by qRT-PCR in another experiment (Fig. S2C). However, the plot also suggests that cortisol affects klf9 in a GR-independent fashion, albeit more variably, as indicated by the range of expression levels in the cortisol-treated VBA− samples shown in Fig. 2D, which correlate with the spread of the cortisol-treated VBA− samples along PC1 shown in Fig. 2B.
In contrast to the situation in VBA+ larvae, only 8 genes were differentially expressed in cortisol-treated VBA− larvae compared to their vehicle-treated VBA− siblings, all of them upregulated (Fig. 2C, Fig. S3D). The genes included the immediate early genes (IEGs) npas4a, egr1, egr4, fosab, and ier2b (Fig. 2C, Fig. S9). The GR-independence of their cortisol-induced upregulation is clearly seen in a plot of the expression levels of the most highly upregulated gene of this set, npas4a (Fig. 2D), a neuronal IEG that along with the other IEGs was also found to be upregulated in our previous RNA-seq analysis of cortisol-treated larvae (Fig. S3D) 9. This indicates that the GR mediates nearly all the transcriptomic effects of chronically elevated cortisol, except for a small subset that appears to relate to neuronal activity.
A frameshift deletion introduced into exon 1 of zebrafish klf9 eliminates the DNA binding domain and significantly reduces expression of the mature transcript. The fact that klf9 was the transcriptional regulatory gene most consistently found to be upregulated by chronic cortisol exposure in a GR-dependent way suggested that it may contribute to the transcriptomic effects of the exposure. To test this, we mutated klf9 using CRISPR-Cas9 with a gRNA that targets exon 1 (Fig. 3A,B). This resulted in a frameshift mutation upstream of the DNA binding domain (Fig. 3A,B), producing a transcript encoding a truncated protein predicted to lack function as a transcription factor. Klf9 loss-of-function mutations are viable in mice 22 and similarly, the klf9 mutant fish were viable and fertile when bred to homozygosity, although mutant embryos survive at a lower rate than wild type (data not shown).
To ask how the mutation affects klf9 expression we used qRT-PCR to compare klf9 transcript levels in wild type and klf9 homozygous mutant (hereafter referred to as klf9−/−) larvae under both normal conditions and in response to chronic cortisol treatment. This provided further confirmation that the cortisol treatment leads to upregulation of klf9 and revealed that klf9 mRNA levels are significantly reduced in the klf9−/− larvae, probably due to nonsense-mediated decay triggered by the premature stop codon (Fig. 3C). In support of this possibility, there was no significant effect on klf9 pre-mRNA levels, measured by qRT-PCR of the intron (Fig. 3C). We conclude from these experiments that the frameshift mutation introduced into klf9 exon 1 abrogates Klf9 function.
RNA-seq shows that klf9 is required for the pro-inflammatory transcriptomic effects of chronic cortisol treatment. To identify Klf9 target genes and ask whether Klf9 contributes to the transcriptomic response to cortisol treatment we used RNA-seq to query gene expression in 5-dpf wild type and klf9−/− mutant larvae from sibling parents, developed both normally and in the presence of 1 µM cortisol. Samples were collected at the same time on day 5 post-fertilization as in the previous GR knockout experiment and processed similarly. However, PCA revealed that the largest source of variance in this experiment was not due to genotype or treatment; the genes loading the first principal component were instead associated with stress-responsive expression (p < .0005, Table S4), suggestive of a physiological stress response (e.g. to the stress of capture). However, a further confound is that PC1 also correlates with preparation of the RNA in two batches on separate days, suggesting that it may also reflect technical variance in sample preparation (see Materials and Methods). Reassuringly, after normalizing for this variation the samples segregate along two principal components representing genotype and treatment (Fig. S10B). For subsequent DGE analysis we therefore treated the batch effect as a categorical covariate. DGE analysis using a false-discovery rate of 0.05 identified 239 genes affected by loss of klf9 function in vehicle-treated larvae, 100 of which were upregulated and 139 of which were downregulated. Gene ontology term enrichment analysis shows that the upregulated genes are largely involved in complement activation (e.g. c3b, c3c, c4, cfb), glucose and carbohydrate metabolism (e.g. pgm1, tpi1b, and pfkmb), and nucleosome positioning (e.g. h1fx, h1f0, and smarca5; Fig. S11, Table S5). Genes downregulated in response to loss of klf9 function are largely involved in sterol metabolism (e.g. faxdc2, sqlea, tm7sf2, sc5d, cyp51, lss, and msmo1; Fig. S12, Table S5). Interestingly, several of the processes associated with carbohydrate metabolism that we identified as being positively regulated by the GR are negatively regulated by klf9 (Fig. S13).
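As an illustration of treating the batch as a categorical covariate, the sketch below fits a per-gene negative-binomial GLM with genotype, treatment, and batch terms. All counts and covariate values are hypothetical stand-ins, and the actual DGE analysis was presumably run with a dedicated RNA-seq package; the design matrix is the point:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical counts for one gene across eight samples.
y = np.array([310, 290, 150, 170, 420, 390, 200, 230])
genotype = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # 0 = wild type, 1 = klf9-/-
treated  = np.array([0, 0, 1, 1, 0, 0, 1, 1])   # cortisol vs. vehicle
batch    = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # RNA prep day as covariate

# Negative-binomial GLM with a log link: coefficients are log fold-changes
# for genotype and treatment, adjusted for the batch term.
X = sm.add_constant(np.column_stack([genotype, treated, batch]))
fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(fit.params)
```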
To ask how loss of klf9 function affects the transcriptomic response to cortisol treatment, we compared the response in wild-type embryos to that in klf9−/− larvae (Fig. 4A). Strikingly, with a false discovery threshold of 0.05, the major difference between the two responses was that ~70% (408) of the genes upregulated by cortisol treatment in wild type embryos were not upregulated by the treatment in klf9−/− embryos, which instead upregulated a mostly different set of genes, albeit less strongly (Fig. 4A,B). This indicates that Klf9 is a key regulator of the transcriptomic response to cortisol. Of the 408 genes upregulated by the cortisol treatment in a klf9-dependent way (Fig. 4A,B), about a quarter (91) were also identified in our previous study 9 as being significantly upregulated by chronic cortisol exposure (Table S6). Examples of the latter include irg1l and marco (Fig. 4C and Fig. S14), as well as irg1, irf1a, ifi35, mpeg1.1, mpeg1.2, mxc, socs1a, socs3b, stat1b, and stat4 (Table S6). Gene ontology term enrichment analysis of the 408 genes upregulated by chronic cortisol treatment in wild-type but not klf9−/− embryos revealed significant enrichment for genes involved in defense and immunity (Fig. 4D, Table S7), the same biological processes that we previously found to be the most strongly affected by the chronic cortisol treatment 9, whereas these processes were not associated with either the 176 genes upregulated in both wild-type and klf9−/− embryos, or the 903 genes upregulated in klf9−/− embryos but not wild-type (Table S7). Notably, the set of genes upregulated by cortisol in wild-type but not klf9−/− embryos included four interferon regulatory factors (irf1a, irf8, irf9, and irf10), two interleukins (il1b and il34), and four interleukin receptors (il4r.1, il6r, il13ra1, and il20ra). These results indicate that Klf9 contributes in a significant way to the proinflammatory gene expression induced by chronic cortisol exposure.
Genes found to be consistently upregulated by chronic cortisol treatment in multiple RNA-seq experiments depend on klf9 for that upregulation. As a final analysis we assessed the overlap between the transcriptomic effects of chronic cortisol treatment across all our RNA-seq experiments with wild-type and VBA+ (mixed wild-type and heterozygous GR369−) larvae, including the experiment published previously 9, in order to identify a set of high-confidence genes that consistently respond to the cortisol treatment, and then asked how loss of klf9 function affects that response. To eliminate any technical artifacts emanating from the use of different parameters in the different analyses, the sequence reads from all the experiments were reanalyzed from scratch using a common pipeline (see Materials and Methods). A PCA of the variance across all experiments produced several interesting observations. First, it showed that the effect of chronic cortisol treatment across all experiments was subtle, found only in PCs 4, 5, and 6, which together account for 14.3% of the total variance. PC5 captures a cortisol-treatment effect common to all three experiments (accounting for 2.77% of the total variance) and, when plotted against PC4, clearly shows segregation of the cortisol-treated and control (vehicle-treated) samples (Fig. 5A). Gratifyingly, GO analysis of a single gene list ranked by upregulation along PC5 showed the same biological response to chronic cortisol as that which we reported previously 9, i.e. upregulation of processes related to defense, inflammation and immunity (Fig. 5B, Table S8). The fourth principal component, accounting for 8.95% of the total variance, shows a cortisol-treatment effect in the two experiments reported here but not in that which we reported previously 9 (Fig. 5A and Fig. S15). This suggests that PC4 represents cortisol-treatment effects that are dependent on the circadian light-dark cycle under which embryos in this study developed, which was absent in our previous study, in which embryos developed in the dark 23 (see below, Discussion). The sixth PC (2.57%) is the complement of PC4, suggesting cortisol-induced effects that are only apparent in the larvae that developed in the dark (Fig. S15).
The spread of the cortisol-treatment effects across three principal components likely reflects differing biological responses to the treatment among the different experiments. That the effects differ somewhat under total darkness compared with light-dark cycles is not surprising given what is known about the interplay of GC signaling and the circadian clock [23][24][25][26]. Indeed, GO analysis of PC2, which segregates our previously published experiment 9 from the two reported here (accounting for 18.5% of the variance), clearly shows effects on circadian and light-responsive gene expression (Table S9). Additional sources of variance are less clear. The first PC, which segregates the wild-type samples in the last (klf9−/− vs. wild-type) experiment from the wild-type samples in our previous experiment as well as the VBA+ samples described here (accounting for 37.6% of the variance, Fig. S15), is heavily loaded with genes involved in synaptic signaling and neurogenesis (Table S10), suggestive of differences in neurodevelopment and/or responsiveness to stress in those samples. The third PC (14.2% of the variance) segregates replicates 1 and 2 of the Klf9 experiment from all other samples, possibly reflecting the batch effect in sample preparation noted above.
Unsurprisingly, given the large amount of gene expression variance across experimental samples unrelated to the cortisol treatment (i.e. noise), only 12 genes were found to be consistently upregulated by the cortisol treatment (Fig. 6A). The consistently upregulated set included klf9, the only gene in that set that encodes a transcription factor. We reasoned that many more genes are affected by the treatment, albeit not consistently with statistical significance owing to the abovementioned noise (see Discussion). This was borne out by plotting the estimated fold change of the 149 genes that were upregulated by cortisol treatment in at least two of the three experiments (Fig. 6B, Table S11), which also revealed that the upregulation of those genes is klf9-dependent. Furthermore, this plot shows that the cortisol-treatment effect is stronger in the wild-type larvae than in the VBA+ larvae (a 1:2 mixture of wild-type and heterozygous GR+/369−), suggesting haploinsufficiency of the GR for some of the effect. Query of this set for enrichment of GO biological process terms indicated that chronic cortisol exposure upregulates genes associated with response to organic substance (including response to stress, defense response, response to external biotic stimulus, and inflammatory response to wounding), similar to what we reported previously 9, and gluconeogenesis (Fig. 6C, Table S12). Finally, we used HOMER motif enrichment analysis 27 to ask which transcription factor binding sites are enriched in the flanking regions of the set of 149 genes upregulated by cortisol treatment in at least two of the three experiments. The resulting set of motifs included sites for the various Krüppel-like factors as well as the GR (Table S13). The most significantly enriched motif, the Klf14 binding motif RGKGGGCGKGGC, matches the Klf9 consensus motif in the JASPAR database 28,29 and would be expected to bind Klf9, which is in the same KLF subfamily as Klf14 30,31. In contrast, HOMER analysis of the 408 genes identified as klf9-dependent in Fig. 4 recovered a somewhat different list of motifs that included binding sites for several immunoregulatory transcription factors (Table S13; note that there are 65 genes in common between the 408 identified in Fig. 4 and the 149 identified in Fig. 6), suggesting that the larger set includes more indirect targets of feedforward regulation downstream of Klf9. Consistent with this, the latter set of motifs includes sites for two immunoregulatory genes, irf1 and stat4, that are both consistently upregulated by chronic cortisol treatment (i.e. in the common list of 149 genes identified in Fig. 6) and dependent on klf9 for that upregulation (i.e. in the list of 408 genes identified in Fig. 4). Altogether these results underscore the conclusion that klf9 is a feedforward regulator of GR signaling that mediates the pro-inflammatory transcriptomic response to chronic cortisol exposure, likely involving both direct engagement of Klf9 with its transcriptional regulatory targets and downstream effects that those targets in turn have on the genes that they regulate.
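The "upregulated in at least two of three experiments" criterion used above is simple set arithmetic; the sketch below illustrates it with small hypothetical gene lists (the real lists come from the DGE results summarized in Tables S6 and S11).

```python
# Minimal sketch of the "upregulated in at least 2 of 3 experiments" set.
# The three sets below are illustrative stand-ins, not the actual lists.
from collections import Counter

exp1 = {"klf9", "irg1l", "marco", "fkbp5"}
exp2 = {"klf9", "irg1l", "pck1"}
exp3 = {"klf9", "marco", "pck1", "socs3b"}

counts = Counter(g for s in (exp1, exp2, exp3) for g in s)
at_least_two = {g for g, n in counts.items() if n >= 2}
consistent = exp1 & exp2 & exp3  # analogous to the 12-gene core set
print(sorted(at_least_two), sorted(consistent))
```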
Discussion
Several previously published studies have characterized loss-of-function mutations of the zebrafish GR, including four frameshifting indels that introduce a premature stop codon in exon 2 [13][14][15] (Fig. 1C). Here we report a new frameshift deletion (GR369−) within exon 3 (Fig. 1A,B), which unlike the previous mutations removes all possible in-frame initiation codons upstream of the DNA binding domain (Fig. 1C), eliminating the transcriptional function of the mutant gene (Fig. 1E). We used RNA-seq to determine how this loss of GR function affects the larval transcriptome, both in larvae developed under normal conditions and in larvae treated chronically with cortisol. To overcome the low survival of embryos from homozygous mutant females and avoid nonspecific maternal effects that might be associated with poor egg quality, we took advantage of a visual background adaptation (VBA) screen to identify homozygous GR369− progeny of a heterozygous cross, as those larvae lack a VBA response (VBA− larvae). Based on the observation that VBA− larvae comprised ~¼ of the population, as expected for a recessive Mendelian trait, larvae that successfully mount a VBA response (VBA+ larvae) are predicted to consist of a 1:2 mixture of homozygous wild-type and heterozygous GR369− mutants, and hence to contain at least one intact nr3c1 allele encoding a functional GR. A principal component analysis of the RNA-seq data revealed that absence of a functional GR has a profound effect on gene expression in both normal and cortisol-treated larvae, such that transcriptomes from larvae lacking a GR are clearly distinguished from those that have one, and that cortisol treatment produces a coherent effect only in larvae with a functional GR (Fig. 2A). A pair-wise comparison of gene expression between VBA+ and VBA− larvae (accounting for PC2) identified, with statistical significance, about four hundred genes that are regulated by the GR in 5-dpf larvae at the time they were collected (midmorning). GO term enrichment analysis indicated that genes upregulated by the GR at that time are involved in metabolism and stress response, as would be expected, while genes downregulated by the GR are involved in epidermis development, cell adhesion, growth, and basement membrane formation, suggesting that the GR may function as a switch to downregulate those morphogenetic processes in late development, or to temporally segregate them to a certain time of day given the circadian dynamic of glucocorticoid signaling. Interestingly, numerous biological processes and individual genes affected by loss of GR function were similarly affected by chronic cortisol treatment in larvae with a GR. This suggests that one effect of the chronic treatment is to promote development of GR resistance, and moreover, that it does so via the GR. Such resistance might be construed as an adaptive response to the chronic exposure. Comparing gene expression in VBA+ larvae treated with cortisol versus vehicle (accounting for PC1 in that experiment) identified over four thousand genes that are differentially expressed in response to chronic cortisol treatment. This latter number is substantially larger than the 555 differentially expressed genes identified in our previous analysis 9 of the effects of chronic cortisol treatment and yielded a somewhat different result when subjected to GO term enrichment analysis (Figs. S5, S6).
One major difference between the analyses reported here and that reported previously 9 is that in the latter the embryos and larvae were cultured in the dark, whereas in the present study we cultured them from fertilization in a diurnal light-dark cycle. In zebrafish larvae the circadian clock is not synchronized until the fish are exposed to a light-dark cycle 32,33, so our previous results may have had circadian asynchrony as a confounding variable. Indeed, the impact of this difference on the transcriptome is clearly seen in the PCA of the combined analysis of all three RNA-seq experiments (Fig. S15), accounting for nearly 20% of the variance. Despite this, the RNA-seq experiments reported here assessing the effects of chronic cortisol exposure in wild-type and klf9−/− larvae (Fig. 4) used embryos developed under light-dark cycles, and in the wild-type larvae the treatment produced an effect on pro-inflammatory gene expression similar to that of our earlier study, demonstrating that the effect was not an artifact of circadian asynchrony. Another difference between the two studies using wild-type larvae and the experiment depicted in Fig. 2 is that the latter measured transcriptomic effects of chronic cortisol treatment in a 1:2 mixture of wild-type and heterozygous mutant larvae (VBA+); thus the results in VBA+ larvae would be expected to be less sensitive to any effects of the chronic exposure for which the GR is haploinsufficient, a possibility supported by the comparison shown in Fig. 6B. Further work is required to more fully assess the effects of GR gene dosage on the transcriptome under both normal conditions and in response to chronic cortisol exposure.
The meta-analysis comparing all our RNA-seq experiments examining the transcriptomic effects of chronic cortisol treatment in wild-type or VBA+ larvae (Figs. 5 and 6 and Fig. S15) provides some important insights that are broadly relevant to RNA-seq data interpretation. One is that different experiments that examine the effects of a single variable under somewhat different conditions and in a limited number of biological replicates will often produce different lists of differentially expressed genes passing an arbitrary threshold of statistical significance (e.g. adjusted p < 0.05). The reason is clear enough: biological systems are highly responsive to genetic and environmental factors that vary between experiments and affect gene expression, sometimes stochastically and/or in ways that are difficult to control and measure, especially with a limited number of biological replicates. Nevertheless, GO analyses of the lists obtained from different experiments can detect consistent biological effects even if the gene lists differ in the individual genes that they include, particularly if methods such as GOrilla 34 are used to test for statistically significant enrichment of GO terms toward one end or the other of a single list of genes ranked by some measurable criterion (e.g. a principal component that accounts for a specific condition, as in Fig. 5). This underscores the important but often unappreciated point that statistical significance does not equate to biological significance, and generally is not a good sole criterion for assessing the effects of a given condition on the expression of a given gene using high-throughput methods such as RNA-seq. By comparing across the three experiments, we were able to identify both a small "very high confidence" set of 12 genes and a larger "high confidence" set of 149 genes consistently affected by the cortisol treatment that would not have been discernible without our integrated meta-analysis (Fig. 6). Moreover, use of unbiased approaches such as PCA to parse the variance in the data can help identify robust condition-specific effects and provide insight into the biology underlying those effects when combined with GO term enrichment analysis. For the experiments reported here this approach validated our earlier finding that chronic cortisol exposure leads to upregulation of pro-inflammatory gene expression and extended that result by showing that the upregulation depends on the GR target gene klf9. The set of "very high confidence" genes identified in the meta-analysis included klf9, one of only four genes showing GR-dependent expression in normal 5-dpf larvae that were also upregulated in VBA+ larvae in response to chronic cortisol, and one of only two that encode transcription factors, the other being per1a (Fig. 2B). Both klf9 and per1a are involved in circadian regulation, and both have been shown to be GR targets in other vertebrate models 16,[35][36][37]. Interestingly, klf9 was also found to be the most commonly upregulated transcription factor gene in a recent meta-analysis of glucocorticoid-induced gene expression in the mammalian brain 38. We have found by ATAC-seq that the promoter region of klf9 is one of the most differentially open regions of chromatin in blood cells of adults derived from cortisol-treated embryos (Hartig et al., submitted). In mice klf9 was recently shown to mediate glucocorticoid-induced metabolic dysregulation in liver 39. Among other things Klf9 functions as a transcriptional repressor 40,41, and in mouse macrophages as an incoherent feedforward regulator of the GR target klf2 42, which functions to control inflammation 43. Further work is needed to determine how klf9 contributes to pro-inflammatory gene expression in response to chronic cortisol exposure, which could be either directly as a feedforward activator (possibly via effects on metabolism), indirectly as a feedforward repressor of an anti-inflammatory regulator like Klf2, or both. GO analysis also showed that genes involved in sterol biosynthesis are downregulated by loss of klf9, and more experiments are required to determine whether klf9 regulates the metabolism of cortisol and other steroid hormones.
Fig. 6 (A) Overlap of the genes upregulated by chronic cortisol treatment, following reanalysis of the data from all three experiments. Only 8 of the 12 genes in common between all experiments were annotated with gene names (listed). (B) Violin and box plots of estimated fold change of the 149 genes that are upregulated in at least 2 of the three experiments shown in (A); the differences between each experiment are statistically significant (Table S14). (C) Treemap generated by REVIGO 57 (https://revigo.irb.hr/) of GO biological process terms found by GOrilla 34 to be associated with the 149 genes upregulated by chronic cortisol treatment in at least two out of the three experiments.
Our motif enrichment analysis of flanking sequences from the set of 149 genes upregulated by chronic cortisol in at least 2 of our 3 RNA-seq experiments indicated enrichment for KLF binding sites (Table S13); further work involving chromatin immunoprecipitation is needed to determine whether any of those sites are bound by Klf9 or Klf2. Additional studies are also required to determine whether loss of Klf9 alters the function of immune cells or the inflammatory response to injury or infection.
Perhaps unsurprisingly, our results (Fig. 2 and Fig. S3) indicate that the GR is required for nearly all the transcriptomic effects of chronically elevated cortisol. However, eight genes upregulated by the treatment were found in the RNA-seq analysis to be upregulated in both VBA+ and VBA− larvae (Fig. 2B), indicating that the GR is not required for their upregulation. Interestingly, most of these genes are known immediate early genes (IEGs), and include the neuronal activity-dependent gene npas4a, the mammalian homologue of which is directly repressed by the GR 44, as well as egr1, which has been shown to differentially regulate GR in rat hippocampus depending on the level of maternal care during development 45. One possible explanation is that these genes are upregulated by increased mineralocorticoid receptor (MR) activity, which was recently shown to contribute to stress axis regulation in zebrafish larvae 14. Further work in MR mutant fish 14 will be needed to test this.
Finally, gene ontology analysis of genes upregulated by chronic cortisol treatment in VBA+ progeny of the GR+/369− cross indicated a strong effect on biological processes associated with nervous system development and function. This is consistent with a recent report that injection of cortisol into eggs leads to increased neurogenesis in the larval brain 10. In this regard it is interesting that klf9 is a stress-responsive gene that regulates neural differentiation and plasticity 35,46. The long-term dysregulation of the HPA/I axis caused by early-life exposure to chronic stress and/or chronically elevated cortisol suggests that the exposure perturbs brain development and activity. Given its role in regulating plasticity in brain regions relevant to neuroendocrine function, it will be interesting to determine whether klf9 contributes to those effects.
Materials and methods
Zebrafish strains, husbandry, and embryo treatments. The AB wild-type strain was used for all genetic modifications. Husbandry and procedures were as described previously 9. All animal procedures were approved by the Institutional Animal Care and Use Committee (IACUC) of the MDI Biological Laboratory, and all methods were performed in accordance with the relevant guidelines and regulations. Embryo culture and cortisol treatments were performed as previously described 9, with one difference: embryos were cultured in a diurnal light-dark cycle (14 h light-10 h dark). Briefly, fertilized eggs were collected in the morning, disinfected, and at ~4 h post-fertilization placed in dishes with either 1 µM cortisol or vehicle (DMSO) added to embryo medium. Embryos developed in a 28.5 °C incubator with a 14/10 light/dark cycle synchronized with the core fish room. Medium was changed daily.
Construction of nr3c1 and klf9 mutant lines. To mutate nr3c1 and klf9 we used CRISPR-Cas9 47, injecting zygotes with multiple guide RNAs for each gene and mRNA encoding Cas9. Guide RNAs were designed using the CHOPCHOP algorithm 48,49.
To generate the GR369− mutant line, fertilized wild-type AB embryos were injected at the one-cell stage with 1-2 nL of a gRNA cocktail targeting nr3c1 exons 2 and 3 (final concentration 40 ng/µL for each gRNA, 230 ng/µL Cas9 mRNA, 0.1 M KCl, and 0.01% phenol red indicator dye). Individual whole injected larvae were screened for mutations of the targeted regions by high resolution melt analysis (HRMA) of PCR amplicons containing those regions 50. Detected mutations were then verified by Sanger sequencing of a PCR amplicon containing the targeted region. F0 adults bearing mutations were identified by HRMA of DNA extracted from tailfin clips, and germline mutations were then identified by PCR and HRMA/sequencing of sperm. F0 males with germline mutations were outcrossed to AB females, and heterozygous progeny were screened via sequencing from tailfin clips. The GR369− mutation was identified and selected for by breeding over two additional generations to yield F3 homozygous progeny. Subsequent generations were maintained as heterozygotes for health and breeding purposes.
To generate the klf9−/− mutant line, fertilized wild-type AB embryos were injected at the one-cell stage with <20% cell volume of an injection mix consisting of 200 ng/µL Cas9 mRNA, 100 ng/µL guide RNA, 0.05% phenol red dye, and 0.2 M KCl. Individual F0 larvae were screened with HRMA to confirm CRISPR efficacy. Larvae were placed into the husbandry system and raised. Young adult fish were genotyped via HRMA using DNA extracted from fin clips, and mutations were confirmed by Sanger sequencing. F0 fish positive for mutation were outcrossed to wild-type (AB) fish. F1 offspring of this cross were screened by HRMA to confirm germline transmission, and Sanger sequencing identified the 2 bp frame-shift mutation in one female founder. This female was outcrossed with wild-type AB males. The resulting F2 fish were screened as young adults via fin clip and HRMA, and males positive for the mutation were backcrossed to the F1 founder female. The resulting F3 generation fish were screened via fin clip HRMA and sequenced to identify homozygous mutants as well as homozygous wild-type siblings. The F3 generation showed the Mendelian 1:2:1 wild-type:heterozygote:homozygous-mutant ratio.
In vitro mRNA transcription and injection. Total RNA was extracted from 5-dpf wild-type larvae using Trizol. RNA was treated with DNase I (NEB), and full-length nr3c1 cDNA was reverse transcribed using a specific primer (ggtcaaggttagtttaatgaattagtctgac) and the Primescript RT kit (TaKaRa). Template DNA for transcripts was amplified from this cDNA using Q5 high-fidelity polymerase (NEB), full-length (agtaatgcaaaatggatcaaggagg), truncated 310− (ctctttgggaacagctcgcc) or 369− (gggccagtttatgcttttcca) forward primers with upstream T7 promoters, and a common reverse primer (catcgtgtcctgctgttggg) downstream of the stop codon. Template DNA was run through an agarose gel to verify size, then extracted and purified using an E.Z.N.A. kit from Omega Biotek. Transcription reactions were run using the mMessage mMachine Ultra T7 kit from Invitrogen, and yield was quantified by spectrophotometry. The Xenopus elongation factor 1a transcript from pTRI-Xef included in the mMessage mMachine kit was used as a control. Homozygous GR369− mutant embryos were injected at the one-cell stage with a mix containing 200 ng/µL mRNA and 0.05% phenol red dye, with an injection volume of ~20% cell volume. Injected embryos were snap-frozen at 6 h post-fertilization for RNA extraction and qRT-PCR.
Visual background adaptation (VBA) screen. To identify larvae lacking a functional GR, a VBA screen was performed on 4-day-old larvae as described 12. Briefly, larvae were incubated for 20 min in a dark incubator, then transferred to a white background and immediately examined under a stereomicroscope with brightfield optics. Larvae that failed to mount a VBA response were identified by the failure of melanophores to disperse, remaining clustered in a dark patch on the dorsal surface 12. Larvae were segregated as VBA+ and VBA− cohorts and returned to culture for an additional day before they were collected for RNA-seq.
RNA-seq and data analysis. At 3 h zeitgeber time (post lights-on) on day 5 post-fertilization, four biological replicates of cortisol-treated and control embryos were collected as follows. For the first RNA-seq experiment, one by one, a dish of larvae corresponding to a single condition was removed from the incubator, and 8 larvae per replicate (4 replicates) were collected in a 1.5 mL tube with minimal water and immediately snap-frozen in liquid nitrogen. All replicates from one condition were collected sequentially before moving on to the next condition. Total RNA was extracted using the RNeasy Plus Mini kit (Qiagen). For the second RNA-seq experiment, four replicates of n = 10 larvae were collected from a single dish for each condition and immediately snap-frozen in liquid nitrogen. Collection of the 16 samples occurred over 22 min. RNA was prepared as described above on two different days. On the first day (experimental replicates 1 and 2) the lysis buffer was added to all 8 frozen samples before homogenization, while on the second day the lysis buffer was added to each sample, which was then immediately homogenized. This difference in sample preparation likely accounts for the large variance between samples prepared on day 1 and day 2. RNA was sent to the Oklahoma State Genomics Facility for Illumina library preparation and single-end sequencing.
RNA-seq libraries were generated with Illumina-compatible KAPA library kits and sequenced on an Illumina NextSeq 500 High Output sequencer. klf9−/− and matched control samples were sequenced as single-end 75-bp reads. VBA+ and VBA− samples were sequenced as paired-end 75-bp reads.
Fastq-formatted read files were preprocessed with Trimmomatic 51 version 0.38 with default options, and then aligned to the zebrafish genome version 11 (GRCz11) as presented in Ensembl 52 version 93, using the STAR aligner 53 version 2.6.1b. The Ensembl transcriptome was preprocessed with a splice-junction overhang of 100 nt. Following alignment, the resulting BAM files were processed with RSEM 54 version 1.3.0 for isoform- and gene-level expression estimates. The resulting gene-level expression values were merged into a single expression matrix with an in-house python script. EDASeq 55, run in R version 3.6.1, was used to further normalize the data for systematic effects, using gene-level length and GC content as downloaded from Ensembl version 98 using EDASeq's included scripts. "WithinLane" normalization with GC content was judged to be superior to that based on length. Final normalized gene-level counts (which = "full") were generated using GC-based WithinLaneNormalization followed by BetweenLaneNormalization. Subsequent differential expression analysis was carried out in R version 3.6.1 with DESeq2 56 version 1.24.0, using either treatment (DMSO/cortisol) or genotype (VBA+/VBA− or klf9−/−/WT) as the comparison. The klf9−/− experiment analysis also included a two-level categorical batch covariate. DESeq2 was also used to generate an rlog matrix, which was subsequently Z-transformed to normalize each gene across all samples. The Z-transformed rlog matrix was then loaded into JMP version 15 and used for PCA (using the "Wide Method") after thresholding at an average rlog expression value of 7.5. Both RNA-seq datasets have been deposited in the NCBI Gene Expression Omnibus database, under accession numbers GSE144884 (GR+/− experiment) and GSE144885 (Klf9+/− experiment).
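The merging step above is described only as an in-house python script; a minimal stand-in might look like the following, assuming RSEM's standard tab-delimited *.genes.results files (which include gene_id and expected_count columns) and a hypothetical rsem/ output directory.

```python
# Hedged stand-in for the in-house merging script: collect RSEM
# *.genes.results files into one genes-x-samples expected-count matrix.
# Assumes standard RSEM output columns (gene_id, expected_count).
import glob
import os
import pandas as pd

frames = {}
for path in glob.glob("rsem/*.genes.results"):
    sample = os.path.basename(path).replace(".genes.results", "")
    tab = pd.read_csv(path, sep="\t", index_col="gene_id")
    frames[sample] = tab["expected_count"]

matrix = pd.DataFrame(frames)  # rows: genes, columns: samples
matrix.to_csv("expression_matrix.tsv", sep="\t")
```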
The previously published RNA-seq data set was reprocessed as just described, and an overall matrix of only wild-type or VBA+ samples was generated and jointly normalized with EDASeq and then subjected to PCA as described in the previous paragraph.
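The thresholding, Z-transformation, and PCA steps were performed in JMP; an equivalent open-source sketch, with synthetic stand-in data and numpy/sklearn in place of JMP's "Wide Method", could be:

```python
# Sketch of the PCA step: keep genes with average rlog >= 7.5,
# Z-transform each gene across samples, then decompose.
# `rlog` is a hypothetical (genes x samples) DataFrame of rlog values.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
rlog = pd.DataFrame(rng.normal(8.0, 1.0, size=(1000, 24)))

kept = rlog[rlog.mean(axis=1) >= 7.5]                 # expression filter
z = kept.sub(kept.mean(axis=1), axis=0).div(kept.std(axis=1), axis=0)

pca = PCA(n_components=6).fit(z.T.values)             # samples as rows
print(pca.explained_variance_ratio_)                  # variance per PC
```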
The GOrilla algorithm 34 (https://cbl-gorilla.cs.technion.ac.il/) was used for Gene Ontology term enrichment analysis, and the data were visualized using REVIGO 57 (https://revigo.irb.hr/). Venn diagrams were generated using Venny 2.1 (https://bioinfogp.cnb.csic.es/tools/venny/). HOMER motif enrichment analysis 27 was used to compare incidence of known vertebrate motifs in a list of promoters of interest with incidence in a background list of all zebrafish promoters by running the findMotifs program (https://homer.ucsd.edu/homer/microarray/index.html) using default settings, except that sequence from −1500 to +500 bp relative to the transcription start site was searched for motifs from 10 to 18 bp in length.
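The known-motif enrichment computed by HOMER boils down to asking whether a motif occurs in the target promoter set more often than expected from the background set. A conceptual hypergeometric version of that test, with entirely illustrative counts, is sketched below; HOMER's actual scoring (cumulative binomial or hypergeometric, depending on settings) differs in detail.

```python
# Conceptual sketch of a known-motif enrichment test: given counts of
# promoters containing a motif in the target set vs. the background,
# compute a hypergeometric tail probability (illustrative numbers only).
from scipy.stats import hypergeom

background_total = 25000      # all zebrafish promoters (assumed)
background_with_motif = 2000  # background promoters with the motif
target_total = 149            # genes upregulated in >=2 of 3 experiments
target_with_motif = 35        # hypothetical count

# P(X >= target_with_motif) under sampling without replacement
p = hypergeom.sf(target_with_motif - 1, background_total,
                 background_with_motif, target_total)
print(f"enrichment p = {p:.3g}")
```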
Quantitative reverse transcription and polymerase chain reaction (qRT-PCR). Total RNA was purified from snap-frozen larvae using the Trizol method and used as a template to synthesize random-primed cDNA using the Primescript cDNA synthesis kit (TaKaRa). Relative gene expression levels were measured by qRT-PCR, using the delta-delta Ct method as described previously 9, with eif5a as a reference gene. Examination of the results of multiple RNA-seq data sets indicated that eif5a expression was highly stable across treatments and genotypes. In many experiments beta-actin was also used as a reference gene, and this did not substantially change the results.
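The delta-delta Ct method reduces to a closed-form calculation; the sketch below shows it with hypothetical Ct values, using eif5a as the reference gene as in the text.

```python
# Delta-delta Ct: fold change = 2^-((Ct_target - Ct_ref)_treated
#                                  - (Ct_target - Ct_ref)_control)
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_treated = ct_target_treated - ct_ref_treated
    d_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_treated - d_control)

# Hypothetical Ct values for klf9 vs. the eif5a reference:
print(fold_change(22.1, 18.0, 24.6, 18.1))  # ~5-fold upregulation
```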
Graphics. RNA-seq results (PCA plots, box plots, scatter plots, and violin plots) were graphed using JMP version 15 from SAS. qRT-PCR results were graphed using Microsoft Excel. All figures were drafted using Adobe Illustrator CS4. Some of the graphs (Figs. 2B,D, 4C, and 5A) were redrawn exactly in Illustrator, without modifying the depiction of the data. | 9,238.6 | 2020-07-10T00:00:00.000 | [
"Biology",
"Medicine"
] |
Inhibition of acetyl-CoA carboxylase impaired tubulin palmitoylation and induced spindle abnormalities
Tubulin S-palmitoylation involves the thioesterification of a cysteine residue in tubulin with palmitate. The palmitate moiety is produced by the fatty acid synthesis pathway, which is rate-limited by acetyl-CoA carboxylase (ACC). While it is known that ACC is phosphorylated at serine 79 (pSer79) by AMPK and accumulates at the spindle pole (SP) during mitosis, a functional role for tubulin palmitoylation during mitosis has not been identified. In this study, we found that modulating the pSer79-ACC level at the SP using an AMPK agonist and an AMPK inhibitor induced spindle defects. Loss of ACC function induced spindle abnormalities in cell lines and in germ cells of the Drosophila germarium, and palmitic acid (PA) rescued the spindle defects in the cell line treated transiently with the ACC inhibitor, TOFA. Furthermore, inhibition of protein palmitoylating or depalmitoylating enzymes also induced spindle defects. Together, these data suggested that a precisely regulated cellular palmitate level and protein palmitoylation may be required for accurate spindle assembly. We then showed that tubulin was largely palmitoylated in interphase cells but less palmitoylated in mitotic cells. TOFA treatment diminished tubulin palmitoylation at doses that disrupt microtubule (MT) dynamic instability and cause spindle defects. Moreover, spindle MTs comprised of α-tubulins mutated at the reported palmitoylation site exhibited disrupted dynamic instability. We also found that TOFA enhanced MT-targeting drug-induced spindle abnormalities and cytotoxicity. Thus, our study reveals that precise regulation of ACC during mitosis impacts tubulin palmitoylation to delicately control MT dynamic instability and spindle assembly, thereby safeguarding nuclear and cell division.
S-palmitoylation (hereafter palmitoylation) is a tubulin post-translational modification (PTM) that has been studied in various cells and in vitro systems [7][8][9][10][11]. However, it remains largely unclear how tubulin palmitoylation affects MT properties and functions. Palmitoylation is a reversible modification that can adjust the hydrophobicity of a protein to regulate its membrane association and its activity [12,13]. Several oncogenic proteins (e.g., RAS, EGFR, and Wnt) and tumor suppressors (e.g., SCRIB, MC1R, and Bax) also require palmitoylation to facilitate their membrane localization and the execution of their cellular functions [14,15]. In addition, tubulin palmitoylation was found to be increased after androgen treatment and is required for cell proliferation in prostate cancer cells [16]. However, the roles of tubulin palmitoylation and how it is regulated during cell proliferation remain unknown.
Palmitate is the substrate used for palmitoylation and the product of successive reactions catalyzed by acetyl-CoA carboxylase (ACC) and fatty acid synthase (FASN) in the fatty acid (FA) synthesis pathway [17][18][19]. The two ACC isoforms in humans, cytosolic ACC1 and mitochondria-associated ACC2, catalyze the carboxylation of acetyl-CoA to produce malonyl-CoA. FASN then utilizes the malonyl-CoA produced by ACC1 together with acetyl-CoA to synthesize palmitate. Malonyl-CoA produced by ACC2 serves to allosterically inhibit carnitine palmitoyl transferase 1 (CPT1), a key enzyme for import of FAs into the mitochondria for β-oxidation. Together, the two isoforms increase the total cellular lipid level by stimulating de novo lipogenesis and inhibiting CPT1-mediated lipid oxidation. Notably, perturbed activities of ACC and FASN in yeast were shown to induce defective cytokinesis and unequal chromosome segregation during cell division [20][21][22]. Inhibition of ACC and FASN has also been shown to modulate cancer cell responses to MT-targeting drugs, including taxol and nocodazole [23]. In addition, ACC, the rate-limiting enzyme in FA synthesis, undergoes inhibitory phosphorylation at Ser79 by AMPK and localizes to the spindle pole (SP) during mitosis (from prophase through anaphase) [24,25]. These studies imply that the FA synthesis pathway may be regulated to ensure successful mitosis progression. ACC inhibition has been demonstrated to induce FA β-oxidation [26,27] and to elevate cellular acetyl-CoA level, thereby increasing protein acetylation [28,29]. Yet, it remains unclear how the regulation of FA synthesis might contribute to mitotic fidelity.
In this study, we investigated whether and how FA synthesis regulates MT functions during mitosis. We showed that ACC and FA synthesis may be regulated during mitosis to control tubulin palmitoylation, MT dynamic instability, and spindle assembly. Our findings also reveal potential coordination between lipogenesis, tubulin PTMs, and MT functions that is important for mitotic cell division.
RESULTS
The pSer79-ACC accumulates at the SP of mitotic cells
To characterize the mitotic regulation of the FA synthesis pathway, we examined the protein levels and subcellular localization of ACC and FASN in CGL2 cells. The AMPK-mediated inhibitory phosphorylation of ACC at Ser79 (pSer79-ACC) [30] was also examined. Increased ACC and FASN were found in mitotic cells expressing the G2/M markers Cyclin B1 and phospho-histone-H3 (pHH3) (Fig. 1a, 9 h from thy-). Significantly increased pSer79-ACC was also found in mitotic cells, concurrent with AMPK phosphorylation at Thr172 (pThr172-AMPK); both pSer79-ACC and pThr172-AMPK returned to baseline expression after mitotic exit, when pHH3 and Cyclin B1 were reduced (Fig. 1a, 12 h from thy-). These observations are consistent with previous findings.
Fig. 1 The expression and cellular distribution of ACC and FASN. ACC is phosphorylated during mitosis and localizes to the spindle pole (SP). a Level of the indicated proteins at each cell cycle stage. The cell cycle of CGL2 cells was enriched at each stage by double-thymidine block and release. The stages of attached G2 interphase cells (A) and floating mitotic cells (F) collected at 9 h after thymidine release (thy-) were confirmed by the expression of the G2/M markers Cyclin B1 and pHH3. b, c Subcellular localization of ACC (b) and FASN (c) in interphase and mitotic CGL2 cells. Cells were fixed and immunostained for ACC or FASN (green) as described and co-stained for α-tubulin (red) and DAPI (blue). d Subcellular localization of pSer79-ACC in interphase cells, untreated mitotic cells and mitotic cells treated with Ro-3306 as indicated. Cells were stained for pSer79-ACC (green), α-tubulin (red), and DAPI (blue). e Subcellular localization of pThr172-AMPK in interphase and mitotic cells. Cells were stained for pThr172-AMPK (green), α-tubulin (red), and DAPI (blue). f, top: Western blots show the efficiency of ACC depletion by shRNAs. Cells were transduced with shRNA targeting either ACC1 (shACC1) or ACC2 (shACC2), and the lysates were probed with an ACC antibody recognizing both forms. Cells transduced with the empty vector pLKO.1 were used as the control. The volumes of the shRNA-containing virions are indicated. f, bottom: Subcellular localization of pSer79-ACC in mitotic cells after transduction with control vector (pLKO.1), shACC1 or shACC2. Cells were stained for pSer79-ACC (green), α-tubulin (red), and DAPI (blue).
While ACC and FASN exhibited uniform cytoplasmic localization in interphase and mitotic cells (Fig. 1b, c), both pSer79-ACC and pThr172-AMPK accumulated at the SP of mitotic cells from prophase to metaphase, a pattern not seen in interphase cells (Fig. 1d, e). The SP-localization of pSer79-ACC was abrogated by acute treatment with the CDK1 inhibitor Ro-3306 (Fig. 1d, Ro-3306), confirming the mitosis-specific ACC phosphorylation. The role of ACC was therefore further examined with shRNA-mediated depletion. shRNAs targeting each of the two ACC forms efficiently depleted ACC proteins (Fig. 1f, top), while SP accumulation of pSer79-ACC was abrogated by depletion of ACC1 but not ACC2 (Fig. 1f, bottom), suggesting that ACC1 is the major form that is phosphorylated and localized to the SP. This mitosis-specific, SP-localized, AMPK-mediated inhibitory phosphorylation of ACC1 implied that ACC function might be stringently regulated during mitosis to control spindle assembly.
Disrupting ACC activity induces defects in mitotic spindles
To infer the roles of ACC in mitotic spindle assembly, we then modulated the activity of its upstream kinase, AMPK. As shown in Fig. 2a-c, SP-localization of pSer79-ACC was significantly enhanced by AICAR (an AMPK agonist [31]) but decreased by compound C (an AMPK inhibitor [32]). Importantly, both AICAR- and compound C-treated samples had significantly increased percentages of mitotic cells with abnormal spindles (Fig. 2d, e), suggesting that both increased and decreased levels of ACC phosphorylation correlate with mitotic spindle defects. In addition, inhibition of ACC with TOFA [33] or depletion of ACC1 significantly increased the percentage of mitotic cells with spindle abnormalities (Fig. 2f-h). These data suggested that phosphorylation of ACC may be stringently and dynamically regulated at a certain level during mitosis to ensure proper mitotic spindle assembly.
To test whether ACC regulates mitosis at the tissue level, we examined the mitotic spindle in the Drosophila female germline with ACC knockdown using nos-GAL4. ACC was found enriched in germ cells located within the germarium and egg chambers in the ovariole of control nos > mCherry RNAi flies but was dramatically reduced in those of nos > ACC RNAi flies (Fig. 2i, top scheme and bottom images). We found that approximately 55% of mitotic germ cells in nos > ACC RNAi germaria exhibited abnormal spindles, while none were observed in control (nos > mCherry RNAi) germ cells (Fig. 2j, k, left). Further examination of each germ cell type (Fig. 2k, right) revealed that ACC knockdown caused a fourfold higher rate of abnormal spindles in cyst cells (more than 60%) than in germline stem cells (GSCs) and cystoblasts (CBs) (15%). Since CBs undergo four rounds of mitotic division to generate cyst cells, the fourfold increase in spindle defects in ACC-lacking cyst cells indicates that ACC is required for proper bipolar spindle assembly to support cell division during oogenesis in the Drosophila germarium. Taken together, these data suggest that precise regulation of ACC1 function is required for accurate spindle assembly during mitosis.
Precise regulation of protein palmitoylation is required for spindle assembly
Since palmitate is the product of FA synthesis [19], we tested whether palmitate acts downstream of ACC in spindle assembly using combined treatment with TOFA and PA, an exogenous source of palmitate. TOFA was reported to reduce FA synthesis within 2 h [33], and we therefore adopted treatments of less than 4 h to monitor the immediate mitotic effects. We found that a 1-h treatment with TOFA significantly increased spindle abnormalities (Fig. 3a, b). While PA alone did not increase spindle defects, its treatment in combination with TOFA significantly reduced the fraction of cells containing spindle defects to the level of controls (Fig. 3b). These results implied that palmitate synthesis may play a role in ACC-dependent spindle assembly.
Malonyl-CoA, the immediate product of ACC, is an inhibitor of CPT1-mediated FA β-oxidation [17]. We found that etomoxir, a chemical inhibitor of CPT1-mediated FA β-oxidation [34], neither induced spindle defects when applied alone nor rescued TOFA-induced spindle defects when applied in combination with TOFA (Fig. 3b), suggesting that the mitotic effects of ACC may be independent of CPT1-mediated β-oxidation.
Elevated acetyl-CoA level and increased protein acetylation are potential downstream effects of ACC deficiency [28,29,35]. C646, a p300 acetyltransferase inhibitor [36], and SB204990, an inhibitor of ATP-citrate lyase (ACLY), which synthesizes acetyl-CoA [37], were thus examined for their effects on TOFA-induced spindle defects. In CGL2 cells (Fig. 3c, CGL2), C646 or SB204990 in combination with TOFA additively induced spindle abnormalities compared to single treatment with each drug. In MDA-MB-231 cells (Fig. 3c, MDA-MB-231), neither C646 nor SB204990 in combination with TOFA further enhanced spindle abnormalities compared to single treatment with each drug. Thus, C646 and SB204990 exhibited no antagonizing effects on TOFA-induced spindle defects in either cell line. Since palmitate, but not C646 or SB204990, rescued cells from TOFA-induced spindle defects, we concluded that lack of palmitate may play a role in spindle defects induced by disruptions in FA synthesis, independent of coincident effects on protein acetylation and cellular acetyl-CoA level.
A fraction of palmitoylated proteins colocalize with α-tubulin
To infer the role of protein palmitoylation in mitotic spindle assembly, we labeled palmitoylated proteins as described [42] and examined their subcellular distribution. Since tubulin plays essential roles in spindle assembly and is known to be palmitoylated [9,11], α-tubulin was counterstained. We found that the palmitate in interphase cells exhibited both granular and fibrillar structures that partially colocalized with α-tubulin (Fig. 4a, interphase); however, the palmitate in mitotic cells exhibited only granular structures without obvious α-tubulin colocalization (Fig. 4a, mitosis). 2D histograms of the above-threshold intensities of palmitate and tubulin revealed a clear positive correlation in interphase cells but not in mitotic cells (Fig. 4b). We found a significantly lower Pearson's correlation coefficient in mitotic cells than in interphase cells (Fig. 4c), suggesting reduced palmitate-α-tubulin colocalization during mitosis. These data implied that a fraction of palmitoylated proteins may be associated with MTs, but the level of association is greatly reduced after mitosis entry.
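The thresholded pixel-intensity correlation used to quantify colocalization (Fig. 4b, c) can be illustrated generically: mask the pixels above an intensity threshold in either channel, then compute Pearson's r between the two channels over the masked pixels. The sketch below uses synthetic images and an arbitrary 75th-percentile threshold; it is not the authors' image-analysis pipeline.

```python
# Sketch of thresholded pixel-wise colocalization: keep pixels above
# threshold in either channel, then compute Pearson's r between the
# palmitate and alpha-tubulin intensities (synthetic data for illustration).
import numpy as np

rng = np.random.default_rng(2)
palmitate = rng.gamma(2.0, 50.0, size=(512, 512))
tubulin = 0.6 * palmitate + rng.gamma(2.0, 30.0, size=(512, 512))

mask = ((palmitate > np.percentile(palmitate, 75))
        | (tubulin > np.percentile(tubulin, 75)))
r = np.corrcoef(palmitate[mask], tubulin[mask])[0, 1]
print(f"Pearson r over above-threshold pixels: {r:.2f}")
```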
α-tubulin is less palmitoylated in mitosis and ACC inhibition further decreases α-tubulin palmitoylation
To see whether FA synthesis affects the palmitate-MT association, we quantitatively measured the level of palmitate-α-tubulin colocalization with or without TOFA treatment by proximity ligation assay (PLA) [42,43], using the "No PA" group as the control (Fig. 5a). Notably, the PLA signals colocalized with the palmitate (Fig. 5a, bottom). Four-hour TOFA treatments significantly reduced the PLA signal in interphase cells (Fig. 5a, b, interphase), suggesting a disrupted palmitate-α-tubulin association. In addition, the PLA intensity in mitotic cells was significantly lower than that in interphase cells (Fig. 5a, b, mitosis) and was further decreased by 4-h TOFA treatment (Fig. 5a, b, TOFA). These data suggest that the palmitate-α-tubulin association decreases after mitosis entry and that ACC inhibition may disrupt the palmitate-α-tubulin association in both interphase and mitotic cells.
To confirm that tubulin is palmitoylated in cells, we performed the acyl-biotin exchange assay (Fig. 5c) [44] to purify palmitoylated proteins. As shown in Fig. 5d, α-tubulin was readily purified from lysates of cycling MDA-MB-231 cells, indicating that a significant fraction of α-tubulin is palmitoylated. Consistently, 4-h TOFA treatment greatly reduced the amount of purified palmitoylated α-tubulin. In addition, the level of α-tubulin purified from mitotic cell lysates was greatly decreased compared to untreated cycling cell lysates and was further decreased by 4-h TOFA treatment. Quantification of the protein bands revealed the relative levels of palmitoylated tubulin in lysates of untreated cycling (set at 1), TOFA-treated cycling (0.3 ± 0.02), untreated mitotic (0.2 ± 0.01) and TOFA-treated mitotic cells (0.1 ± 0.01), consistent with the relative levels of the palmitate-α-tubulin PLA intensity in these cells (Fig. 5a, b). These data suggest that the level of tubulin palmitoylation is reduced after mitosis entry and can be further reduced by TOFA treatment. Since MTs reorganize at the G2/M transition and become more dynamically unstable, reduced tubulin palmitoylation may be correlated with the dynamically unstable property of MTs during mitosis. We thus hypothesized that TOFA-induced perturbations in tubulin palmitoylation may alter MT dynamic instability to induce spindle defects.
Altering tubulin palmitoylation disrupts MT dynamic instability
To validate our hypothesis, we subjected untreated or TOFA-pretreated CGL2 cells to the cold exposure assay, which disassembles MTs at the SP. As shown in Fig. 6a, b, 5-min cold exposure disassembled MTs less efficiently after 1-h TOFA pretreatment. This suggested that TOFA, at a concentration that decreases tubulin palmitoylation, reduced cold-induced MT disassembly at the SP. We also examined the effects of cold exposure on MT fibers comprised of EYFP-fused tubulin-C376A (in which Cys376, the proposed site of palmitoylation [45], is substituted with a palmitoylation-deficient alanine residue [46]). EYFP-tubulin-wild-type (WT) and -C376A were expressed in HeLaS3 cells (Fig. 6c) and formed polymerized MT fibers in 80% of the mitotic cells (Fig. 6d, e, no cold). After 10-min cold exposure, the fraction of mitotic cells exhibiting MT fibers was approximately 20% among EYFP-tubulin-WT-expressing cells but significantly higher, at 60%, among EYFP-tubulin-C376A-expressing cells (Fig. 6d, e, cold exposure), suggesting that palmitoylation-deficient tubulin rendered MT fibers resistant to cold-induced disassembly. We concluded that proper regulation of tubulin palmitoylation is required for accurate MT dynamic instability during mitosis; ACC inhibition by TOFA may perturb tubulin palmitoylation and thus disrupt the control of MT dynamic instability.
Since TOFA disrupted tubulin palmitoylation and MT dynamic instability, we tested whether it can affect the mitotic and cytotoxic effects of MT-targeting drugs. We found that the spindle defects induced by taxol or nocodazole alone were further increased by pretreatment with TOFA (Fig. 6f, g, TOFA). Pretreatment with 2-BrPA (2-bromopalmitate) also further increased taxol- and nocodazole-induced spindle defects (Fig. 6g, 2-BrPA). These data suggest that disruption of FA synthesis and protein palmitoylation may enhance the effects of taxol and nocodazole on spindle assembly. TOFA also enhanced the cytotoxicity of taxol in MDA-MB-231 cells (Fig. 6h). Together, our data indicate that TOFA disrupts the dynamic regulation of tubulin palmitoylation, perturbs the control of MT dynamic instability, and enhances MT-targeting drug-induced spindle abnormalities and cytotoxicity.
DISCUSSION
The tubulin PTMs play essential roles in cell shaping, movement, division and intracellular transport; thus, they may participate in various MT-related human pathologies [1][2][3][5]. In addition, regulation of FA metabolism can influence the development of MT-based structures [47,48]. In this study, we found that appropriate regulation of ACC is critical during mitosis to ensure faithful mitotic spindle assembly, potentially by controlling the level of tubulin palmitoylation.
Previously, fission yeasts mutated at cut6 and lsd1/fas2 (the respective homologs of ACC and FASN) were shown to exhibit the defective "cut" phenotype during cytokinesis, in which the nucleus is intersected by the medial septum, potentially due to reduced phospholipid production and failed nuclear envelope expansion [20,22]. These findings implied that delicate regulation of FA synthesis is required for proper membrane dynamics/organization during cytokinesis to ensure faithful mitosis progression. In mammalian cells, pSer79-ACC was found to localize at the SP from prophase through early anaphase [24,25], and acute inhibition of PLK1, a master mitotic kinase, abrogates SP-localization of pSer79-ACC [25]. These findings suggest a mitosis-specific phosphorylation of ACC and imply that FA synthesis in mammalian cells may also be regulated in early mitosis, when spindle assembly is initiated. However, the mechanism through which ACC contributes to mitotic progression in mammalian cells has not been previously elucidated. Our observations that the increased pSer79-ACC level at the SP of mitotic cells is acutely altered by Ro-3306, AICAR and compound C confirmed previous findings regarding mitosis-specific ACC phosphorylation. Furthermore, altering the pSer79-ACC level with AICAR and compound C, disrupting ACC activity with TOFA, and depleting ACC protein with shRNAs all increased mitotic spindle abnormalities. These data suggested that precise ACC activity, and thus tightly regulated FA synthesis, is required for mitotic spindle assembly and may partially explain how FA synthesis affects mitosis progression. Notably, Drosophila germline cells lacking ACC also exhibited mitotic spindle abnormalities, indicating that the tight regulation of FA synthesis (and ACC) during spindle assembly may be conserved across organisms and may be critical for physiological processes that involve mitotic cell division, such as germ cell division and differentiation.
Fig. 2 a Protocol for Ro-3306 block and release and treatments with AICAR and compound C. The cell cycle of CGL2 cells was arrested and synchronized before mitosis entry by 16-h Ro-3306 treatment. For the last 2 h of Ro-3306 incubation, AICAR or compound C was added to the culture medium. After the 16-h incubation, Ro-3306 was washed away and cells were kept in medium containing AICAR or compound C for another 45 min. b Representative images of the Ro-3306-block-and-release-enriched mitotic cells; cells were untreated (−) or treated as indicated and stained for pSer79-ACC (green), α-tubulin (red), and with DAPI (blue). c AICAR increased, while compound C decreased, the level of pSer79-ACC at the SP. The relative intensity of pSer79-ACC at the SP of the mitotic cells (as in b) was measured and presented in the scatter plot with the interquartile distribution from two independent experiments. The numbers above indicate the number of SPs measured. *P < 0.05 compared to untreated by Mann-Whitney Rank Sum test. d Representative images of mitotic cells with normal or abnormal spindles stained for CEP152 (green), α-tubulin (red) and DAPI (blue). e Percentages of abnormal-spindle-containing mitotic cells collected after the protocol described in (a) are shown as mean ± SD of at least 600 cells from two independent experiments. *P < 0.05 compared to untreated by Student's t test. f, g ACC inhibition by TOFA induced mitotic spindle abnormalities. Percentages of abnormal-spindle-containing mitotic cells, untreated or treated with TOFA as indicated, are shown as mean ± SD of at least 400 cells from two independent experiments for CGL2 (f) and MDA-MB-231 (g). *P < 0.05 compared to untreated by Student's t test. h Percentages of control (pLKO.1) and ACC1-depleted (shACC1) mitotic MDA-MB-231 cells with abnormal spindles are shown as mean ± SD of at least 400 cells from two independent experiments. *P < 0.05 compared to pLKO.1 by Student's t test. i-k Knockdown of ACC induced spindle abnormalities in Drosophila germaria. i, top: Schematic illustration of the Drosophila germarium and egg chambers (upper scheme) and a magnified view of the germarium (lower scheme). Each Drosophila germarium houses 2-3 germline stem cells (GSCs) that asymmetrically divide to generate cystoblasts (CBs); each CB then undergoes four rounds of incomplete division to generate a 16-cell cyst. i, bottom: One-week-old control (nos > mCherry RNAi) and ACC knockdown (nos > ACC RNAi) ovarioles were stained for ACC (gray) and co-stained for 1B1 (blue, somatic cell membranes) to examine knockdown efficiency. j Representative images of the indicated types of mitotic spindles in Drosophila germ cells stained for γ-tubulin (green), α-tubulin (gray), and DAPI (blue). k, left: Percentage of mitotic cells in nos > mCherry RNAi and nos > ACC RNAi germ cells within germaria with abnormal spindles (including abnormal bipolar spindle, disorganized spindle, and multipolar spindle; as shown in j). Mean ± SD of at least 100 cells from two independent experiments is shown. k, right: The mitotic germ cells in nos > ACC RNAi germaria were identified as GSCs, CBs, and cyst cells, and the percentages of mitotic cells with abnormal spindles for each cell type are shown as mean ± SD. *P < 0.05 compared to pLKO.1 by Student's t test.
The downstream effects of disrupted FA synthesis include decreased palmitate production [49,50], upregulated FA β-oxidation [26,27], and increased protein acetylation [28,29,35]. Since the TOFA-induced spindle defects were rescued by exogenous PA but not by the β-oxidation inhibitor (etomoxir) or the protein acetylation-disrupting agents (C646 and SB204990), our data reveal a plausible role for palmitate in mediating the effects of ACC on spindle assembly. Furthermore, 2-BrPA, ML348, ML349, and palmostatin B also acutely induced spindle defects, indicating that protein palmitoylation, one of the critical functions of palmitate [38], may be tightly regulated during mitosis to ensure spindle assembly. Accordingly, the human palmitoyl-proteome (available from the SwissPalm database) includes several mitotic proteins, such as centrosome components and MT motors [51]. Collectively, we reason that precise control of ACC activity and tightly regulated FA synthesis ensure accurate spindle assembly by controlling protein palmitoylation during mitosis.
The various PTMs of tubulin play critical roles in the properties, functions, and dynamics of MTs [1,2]. Accurate spindle assembly relies on delicate control of MT dynamics and thus also requires precise spatiotemporal regulation of tubulin PTMs [4]. Moreover, previous studies have shown that tubulin is subject to palmitoylation in yeast [10], rodent cells [7,8], human platelets [9], and proliferating cancer cells [50]. Except for mediating MT-membrane association [8,9], the cellular and physiological impacts of tubulin palmitoylation remain elusive. We found in this study that tubulin palmitoylation is reduced during mitosis, coincident with the dramatic changes in cell shape and MT dynamic instability after mitosis entry. Similarly, tubulin palmitoylation was shown to be reduced after platelet activation [9], a process that also involves dramatic changes in cell shape and MT dynamics/organization [52]. These findings imply a correlation between tubulin palmitoylation level and MT dynamic instability. We therefore speculate that during mitosis, tubulin may undergo a dynamic palmitoylation/depalmitoylation cycle to allow for intricate control of MT dynamic instability, which is an essential MT property for accurate spindle assembly. Notably, our data showed that a TOFA treatment regimen that reduces tubulin palmitoylation is also sufficient to suppress cold-induced MT disassembly, induce spindle abnormalities and enhance nocodazole- or taxol-induced effects. Consistently, MT fibers comprised of palmitoylation-deficient mutant tubulins are more resistant to cold-induced disassembly than those comprised of WT tubulin. These findings support the idea that regulated tubulin palmitoylation is an essential factor in MT dynamic instability. Furthermore, TOFA may act to limit tubulin (re)palmitoylation during mitosis, thereby disrupting the tubulin palmitoylation/depalmitoylation cycle and causing defects in MT dynamic instability and spindle assembly.
Fig. 3 Precise regulation of protein palmitoylation is required for spindle assembly. Exogenous palmitate, but not a β-oxidation inhibitor or protein acetylation-disrupting agents, rescued TOFA-induced spindle defects. a Representative images of normal and abnormal mitotic spindles. Cells were stained for CEP152 (green), α-tubulin (red) and DAPI (blue). b CGL2 cells were treated for 1 h with TOFA, PA and etomoxir, either alone or in combination as indicated. Treated cells were then fixed and stained for spindle analysis. Percentages of mitotic cells with abnormal spindles (as in a) are shown as mean ± SD of at least 450 cells from three independent experiments. *P < 0.05 by Student's t test; n.s. not significant.
A previous report showed that pharmacological AMPK activation and the subsequent ACC inhibition can rescue defective MT-dependent cholesterol transport and MT reformation after recovery from cold treatment in cystic fibrosis cell lines [53], suggesting that the AMPK-ACC axis may regulate MT functions and dynamics. Another study demonstrated that AMPK is activated during FA starvation to reorganize detyrosinated MTs through an unknown mechanism; this reorganization promotes lipid droplet (LD) dispersion on the detyrosinated MTs, thereby increasing LD-mitochondria contacts to upregulate FA β-oxidation [54]. These findings imply that pathophysiological alterations in lipogenesis pathways may control MT functions by modulating tubulin PTMs, thus reprogramming cellular functions to meet metabolic demands. In this study, we found that the phosphorylation of AMPK and ACC at mitotic entry occurs concomitantly with a physiological reduction of tubulin palmitoylation. In addition, ACC inhibition by TOFA disrupts MT dynamic instability and induces spindle defects. Based on these findings, we hypothesize that the Ser79 phosphorylation of ACC at mitotic entry may contribute to the stringent regulation of lipogenesis, reducing palmitate production to limit tubulin palmitoylation from mitotic entry until metaphase. This regulatory mechanism may serve to precisely manage MT dynamic instability and control spindle assembly. Our data thus suggest a model in which coordination between the lipogenesis pathway and MT functions supports spindle assembly and mitotic progression, in particular through the modulation of tubulin modifications.
Our results showed that TOFA may induce spindle defects by disrupting tubulin palmitoylation and MT dynamic instability, and that it can enhance taxol-induced spindle abnormalities and cytotoxicity. These results are consistent with a previous study showing that FASN inhibitors enhance the antiproliferative efficacy of taxanes [50] and imply that disruption of FA synthesis may enhance the anticancer effect of taxol. Since upregulated ACC and FASN and deregulated protein palmitoylation are associated with many types of cancer, modulating the FA synthesis pathway and protein palmitoylation has been considered a plausible anticancer strategy [14,15,17,18]. Our findings thus expand the understanding of how interfering with FA synthesis could benefit the anticancer effects of anti-MT drugs. Future studies on how overexpression of ACC and FASN affects tubulin palmitoylation and MT functions would provide further insights into such anticancer strategies.
MATERIALS AND METHODS

Cell culture and drug treatments

Drugs used included palmostatin B (#508738, Merck) and the tubulin-targeting drugs taxol (paclitaxel, 580555, Merck) and nocodazole (V1377, Sigma). Cells were subjected to double-thymidine block and release as previously described [57] to enrich populations at each cell cycle stage; attached cells at 0, 3, 6, and 9 h after thymidine release were collected as G1, early S, late S, and G2 cells, respectively, and the floating cells at 9 h were collected as the mitotic cells. Alternatively, the cell cycle was blocked by 14-16 h treatment with 2-5 μM Ro-3306 (optimal duration and concentration were empirically determined for each cell line), followed by a PBS wash and incubation in drug-free medium for 30-45 min to release the block and allow mitotic entry for collecting mitotic cells [58].
The mitotic cells were then fixed for staining, or shaken off the plates and collected. The cytotoxicity of each drug was tested for each cell line with the trypan blue exclusion assay or colony-formation assay as previously described [56], in order to empirically determine the appropriate treatment concentration.
Depletion of cellular ACC
The shRNAs targeting ACC1 (gene symbol ACACA; TRCN-232456 and -232458) and ACC2 (gene symbol ACACB; TRCN-3093 and -10759) were purchased from the National RNAi Core Facility (Genomic Research Center, Academia Sinica) and were used to deplete ACC proteins in cell culture. The shRNA-containing virions were prepared, and transient depletion of endogenous ACC was performed as described previously [55]. Empty vector pLKO.1-containing virions were also prepared as the control.

Fig. 5 Tubulin palmitoylation was reduced during mitosis and further decreased by acute TOFA treatment. a The PLA between α-tubulin and click-labeled palmitate (palmitate-488) was performed using CGL2 cells. The representative images show the control cell (No PA) and the PLA signal between α-tubulin and palmitate (palmitate-α-tubulin PLA) in interphase and mitotic cells that were untreated (−) or treated with TOFA as indicated. The insets below show magnified views of regions with colocalization between PLA signals and palmitate-488. b Relative intensity of palmitate-α-tubulin PLA in a single cell was measured. The scatter plot shows the interquartile distribution of the PLA intensity in the group of cells described in (a). The numbers above indicate the numbers of cells assayed from two independent experiments. *P < 0.05 by Mann-Whitney rank-sum test. c Illustration of the acyl-biotin exchange assay. d Palmitoylated proteins in MDA-MB-231 cells were purified using the acyl-biotin exchange assay and then subjected to western blotting for α-tubulin detection. HAM addition (+) is required for palmitate cleavage, biotinylation, and subsequent purification of the palmitoylated proteins; the omission of HAM (−) served as the negative control. Input lanes show that equal amounts of total tubulin in the cell lysates were applied in each reaction. The western blots show the levels of total cellular tubulin (Input) and the palmitoylated fractions of tubulin purified by streptavidin agarose (SA-IP) from the total cell lysates of untreated (−) and TOFA-treated cycling and enriched mitotic cells. The input tubulin was first normalized to PCNA to adjust for loading variation and then served to normalize the purified tubulin. The values below each band indicate the level of purified α-tubulin after normalizing to input tubulin, relative to untreated cycling cells; mean ± SD from three independent experiments is shown.

Figure legend: a Representative images of cells subjected to cold exposure. Cells were untreated (−) or pretreated for 1 h with 100 μM TOFA, followed by 5 min cold exposure before fixation and staining for centrin2 (green), α-tubulin (red), and DAPI (blue). Cells fixed without cold exposure (No cold) served as the control. The white arrow indicates the site of MT disassembly at the SP, and the yellow arrow indicates the site where MT remnants were observed at the SP after cold exposure. b Percentages of untreated or TOFA-pretreated CGL2 cells with MT remnants at the SP after 5 min cold exposure. Mean ± SD from three independent experiments is shown. *P < 0.05 by Student's t test. c-e The C376A mutation of α-tubulin renders spindle MTs resistant to cold-induced disassembly. c Western blots show the expression efficiency of EYFP-tubulin-WT and -C376A in HeLaS3 cells. Cells transduced with the empty vector pLAS5w.Pneo served as the control. d Representative images of EYFP-tubulin-WT- or -C376A-expressing cells subjected to 10 min cold exposure, fixation, and staining for CEP152 (red) and DAPI (blue). MT fibers of EYFP-tubulin are displayed in green. Cells fixed without cold exposure (No cold) served as the control. e Percentages of the cells described in (d) that exhibit MT fibers with EYFP-tubulin-WT and -C376A. Mean ± SD from two independent experiments is shown. *P < 0.05 by Student's t test. f Representative images of normal and abnormal mitotic spindles. Cells were stained for CEP152 (green), α-tubulin (red) and DAPI (blue). g TOFA and 2-BrPA enhanced taxol- and nocodazole-induced spindle abnormalities. CGL2 cells were treated for 4 h with TOFA or 2-BrPA, alone or in combination with 0.5-h treatments of taxol or nocodazole as indicated. Treated cells were fixed and stained for spindle analysis. Percentages of cells with abnormal spindles (as in f) are shown as mean ± SD of at least 600 cells from two independent experiments. *P < 0.05 by Student's t test. h TOFA enhanced taxol-induced cytotoxicity. MDA-MB-231 cells were treated with TOFA and taxol for 24 h, alone or in combination as indicated. Treated cells were subjected to colony-formation assay. Percentages of colonies formed compared to the vehicle control are shown as mean ± SD from six independent experiments. *P < 0.05 by Student's t test.
Examination of mitotic spindles in ACC RNAi Drosophila germarium
The following fly lines were obtained from the Bloomington stock center. The nanos (nos)-Vp16-GAL4 line (#4937) and the ACC RNAi line (#34885) were crossed to generate flies (nos > ACC RNAi) with ACC knockdown in the germline. The crosses were cultured at 18°C to reduce GAL4 activity and avoid developmental influence. nos > mCherry RNAi flies, generated from the cross of the nos-Vp16-GAL4 line and the mCherry RNAi line (#35785), were used as the control. The experiment was performed with two biological replicates. Ten newly eclosed females with the indicated genotypes were randomly picked from a standard cross and shifted to 29°C for 7 days; ovaries were then dissected and subjected to immunostaining. At least ten pairs of ovaries were separated, mixed, and mounted for observation; 400 germaria were examined. Immunostaining of the Drosophila germaria was performed as previously described [59]. The antibodies included: anti-ACC, a gift from Dr. Jacques Montagne (1:100; French National Centre for Scientific Research, France), described previously [60]; anti-1B1 (1:25; Developmental Studies Hybridoma Bank, DSHB); anti-α-tubulin (1:100; GTX112141, GeneTex, Hsinchu, Taiwan, or T5168, Sigma); and anti-γ-tubulin (1:250; T3559 or T6557, Sigma).
For analysis of mitotic spindle abnormalities, the mitotic spindle was revealed by immunostaining for α-tubulin and centrin2 or CEP152, as described above. The numbers of cells with mitotic spindle abnormalities were counted under a Zeiss Axioplan 2 Imaging MOT fluorescence microscope. Mitotic spindles at prophase, prometaphase, and metaphase were evaluated as described previously [55,57]. Multipolar spindles, disorganized spindles, and bipolar spindles with misaligned chromosomes or with abnormal MT arrays were regarded as mitotic spindle abnormalities. The level of spindle abnormalities is expressed as the percentage of mitotic cells containing abnormal spindles.
Metabolic labeling and visualization of palmitoylated proteins
Metabolic labeling of palmitoylated proteins was performed using palmitic acid-azide (PA-azide, 15-azidopentadecanoic acid, C10265, Invitrogen) as described [42]. PA-azide was solubilized in DMSO at 50 mM and kept as a stock solution. The stock was diluted in medium to 100 μM, sonicated at room temperature for 15 min, allowed to settle for 15 min, and then used to treat cells seeded on coverslips for 16 h. Cells incubated in PA-azide-free medium (No PA) were used as the control. The cells were then fixed in PTEMF buffer [61] for 15 min, washed with PBS, and subjected to click chemistry using the Click-iT Cell Reaction Buffer Kit (C10269, Invitrogen) to conjugate a fluorophore to PA and visualize the palmitoylated proteins. Briefly, the click reaction was performed by immersing cells in the click cocktail (88% reaction buffer, 2% CuSO4, and 10% buffer additive) containing 2.5 μM AlexaFluor 488-alkyne (A10267, Invitrogen) at a volume of 50 μL/coverslip for 30 min at room temperature. Cells were then washed with PBS and subjected to α-tubulin immunostaining, after which the cells were mounted in Fluoromount-G containing DAPI for confocal imaging.
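Because the cocktail is mixed at fixed proportions, scaling it for a batch of coverslips is simple arithmetic. A hypothetical helper, assuming the stated 88/2/10 proportions and 50 μL per coverslip (the function and its interface are illustrative, not part of any kit protocol):

```python
def click_cocktail_volumes(n_coverslips: int, ul_per_coverslip: float = 50.0) -> dict:
    """Volumes (uL) for the click cocktail: 88% reaction buffer,
    2% CuSO4, 10% buffer additive, per the proportions stated above."""
    total = n_coverslips * ul_per_coverslip
    return {
        "reaction_buffer_ul": 0.88 * total,
        "cuso4_ul": 0.02 * total,
        "buffer_additive_ul": 0.10 * total,
        "total_ul": total,
    }

# e.g. click_cocktail_volumes(6) -> 264.0 buffer, 6.0 CuSO4, 30.0 additive in 300 uL total
```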
Analysis of colocalization between palmitoylated proteins and MTs
Potential tubulin palmitoylation was assessed by examining the colocalization between α-tubulin immunostaining and PA-azide labeled by click chemistry in confocal images. A 24 × 24-μm region of interest (ROI) covering a single mitotic cell and a free-hand drawn ROI covering a single interphase cell were created in ImageJ and were sampled for colocalization analysis in Imaris. The level of colocalization between α-tubulin and palmitoylated proteins in a single cell was assessed by Pearson's correlation coefficient, the value of which ranges from −1 to 1 and indicates the correlation between two colors above the threshold intensities [62]. The correlations were assessed for 39 interphase cells and 39 mitotic cells collected from two independent experiments.
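The thresholded Pearson coefficient that Imaris reports can be reproduced outside the software. A minimal sketch, assuming two single-channel images of the same ROI supplied as NumPy arrays; the function name and threshold arguments are illustrative, not Imaris API calls:

```python
import numpy as np

def thresholded_pearson(ch1, ch2, t1=0.0, t2=0.0):
    """Pearson's correlation between two channels, restricted to pixels
    above the intensity thresholds in both channels."""
    ch1 = np.asarray(ch1, dtype=float).ravel()
    ch2 = np.asarray(ch2, dtype=float).ravel()
    mask = (ch1 > t1) & (ch2 > t2)  # analyze above-threshold pixels only
    x = ch1[mask] - ch1[mask].mean()
    y = ch2[mask] - ch2[mask].mean()
    return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))
```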
The colocalization between α-tubulin and click chemistry-labeled palmitoylated proteins was also examined by the proximity ligation assay (PLA), as described previously [42]. After click chemistry-mediated conjugation of AlexaFluor-488-alkyne to the PA-azide, the samples were reacted with anti-α-tubulin (T5168, Sigma) and anti-AlexaFluor-488 (#A-11094, Invitrogen) at 4°C overnight and then subjected to the PLA procedure using Duolink ® In Situ Detection Reagents Red (DUO92008, Sigma) as previously described [43,57]. Samples were then mounted in Fluoromount-G containing DAPI for observation and imaging. The PLA signals indicate tubulins in a complex with PA at a distance of less than 40 nm. The PLA signal intensity in a single interphase or mitotic cell was measured using Imaris and indicated the level of α-tubulin-PA-azide colocalization.
Acyl-biotin exchange assay
A previously described procedure [44] for the acyl-biotin exchange assay was adapted in this study to purify palmitoylated proteins, as illustrated in Fig. 5c. Briefly, cycling cells and Ro-3306-blocked-and-released mitotic cells, with or without TOFA (100 μM, 4 h) treatment, were collected and suspended in lysis buffer (150 mM NaCl, 50 mM Tris-HCl, 5 mM EDTA, 0.2% SDS) supplemented with 5 mg/mL protease and phosphatase inhibitors (Roche), followed by homogenization for 10 min at 4°C. The homogenates were then incubated for 2 h at room temperature in the presence of 10 mM N-ethylmaleimide (NEM) to block free thiols. Next, the palmitate moieties linked to cysteine residues through thioester linkages in the homogenate were cleaved by the addition of 0.5 M hydroxylamine (HAM) and replaced with biotin by the addition of EZ-link HPDP-biotin (#21341, ThermoFisher, Waltham, MA, USA) (Fig. 5c), followed by overnight incubation of the homogenates at room temperature. Homogenates incubated without HAM were used as the control. The biotinylated proteins were then purified with streptavidin agarose and used for western blotting.
Analysis of MT dynamic instability and MT nucleation from the SP
A cold exposure assay, in which MT disassembly is induced at the SP, was used to probe the dynamic instability of MTs as previously described [55,57]. Cells seeded on coverslips were treated with or without TOFA as indicated and then subjected to cold exposure for the indicated time, followed by immunostaining. The percentage of mitotic cells with MT remnants at the SP was calculated for each sample, and differences in the percentages indicated the level of altered MT dynamic instability.
To probe the effects of tubulin palmitoylation on MT dynamic instability, the EYFP-tubulin open reading frame in the vector pEYFP-tubulin (#6118-1, BD) was subcloned into the pLAS5w.Pneo vector. Cys376 in tubulin is the proposed site for palmitoylation [45]. Using site-directed mutagenesis, this residue was substituted with alanine (C376A), which cannot undergo palmitoylation [46]. Virions containing EYFP-tubulin-WT and -C376A were collected and used to transduce HeLaS3 cells as previously described [55]. After transduction, the cells were incubated with 1 mg/mL G418 to establish stable cell lines. HeLaS3 cells stably expressing the EYFP-tubulin fusion protein were then seeded on coverslips, subjected to 10 min cold exposure, and analyzed by immunostaining the centrosomes. Percentages of mitotic cells exhibiting fluorescent MT fibers were calculated. Decreases in the percentages following cold exposure indicated cold-induced disassembly.
"Biology"
] |
Measuring Economic Freedom: Better Without Size of Government
The Heritage Foundation and the Fraser Institute measure economic freedom in nations using indices with ten and five indicators respectively. Eight of the Heritage indicators and four of the Fraser indicators are about specific types of institutional quality, like the rule of law, the protection of property, and the provision of sound money. Higher scores on these are considered to denote more economic freedom. Both indices also involve indicators of 'big government', or levels of government activities; higher levels are seen to denote less economic freedom. Yet levels of government spending, consumption, and transfers and subsidies appear to correlate positively with the other indicators related to institutional quality, while this correlation is close to zero for the level of taxation as a percentage of GDP. Using government spending, consumption, and transfers and subsidies as positive indicators is no alternative, because these levels stand for very different government activities, liberal or less liberal. This means that levels of government activities can better be left out as negative or positive indicators. Shortened variants of the indices thus create better convergent validity in the measurement of economic freedom, and produce higher correlations between economic freedom and alternative types of freedom, and between economic freedom and happiness. The higher correlations indicate better predictive validity, since they are predictable in view of the findings of previous research and theoretical considerations about the relations between types of freedom, and between freedom and happiness.
Introduction
The role of governments, in relation to security and freedom, has been a subject of vivid discussions since Thomas Hobbes published his 'Leviathan' in 1651. Utilitarians like Jeremy Bentham, James Mill and John Stuart Mill added happiness as an additional value to be considered. We cannot decide, in any scientific way, what priorities security, freedom, and happiness deserve as values, but we can try to get a better understanding of their mutual relations as actual phenomena.
Previous Research by Veenhoven
In his article 'Freedom and Happiness: a comparison of 126 nations in 2006', with data for the years 2000-2006, Veenhoven (2008) defines freedom as the possibility to choose. This possibility requires an opportunity and a capability to choose. The capability is an individual characteristic, but the opportunity depends on the environment.1 This opportunity, offered by the environment, involves two requirements: first, that there is something to choose. This requirement depends on the societal supply of lifestyle alternatives and conditions like information and physical infrastructure. The second requirement is that free choice is not frustrated by formal or informal restrictions created by other people or institutions. It is not unusual to discern positive and negative freedom, parallel to this distinction. Positive freedom refers to the actual availability of options; negative freedom refers to the absence of restrictions or interference by others.2 The focus of Veenhoven is on negative freedom in nations, that is, the absence of formal or informal restrictions. He discerns three kinds: economic freedom as measured by the Heritage Foundation, global freedom3 as measured by Freedom House, and private freedom. Veenhoven measures private freedom with an index for the absence of restrictions to travel, religion, marriage, divorce, euthanasia, suicide, homosexuality, and prostitution. He uses data from the World Values Surveys to apply this index. One of Veenhoven's conclusions is that, in nations, these types of freedom tend to go together. There is a substantial mutual correlation: +.69 between economic freedom and global freedom; +.66 between global and private freedom; and +.58 between economic and private freedom. Another conclusion is that there is a positive correlation between freedom and average happiness: for economic freedom +.63;4 for private freedom +.58; and for global freedom +.54. Together these kinds of freedom explain 44% of the variation in average happiness in nations. Veenhoven observes that the positive correlation is universal, even though the correlation is somewhat higher for rich nations and nations with higher levels of education.

1 Christian Bay makes a similar distinction: between 'psychological freedom' on the one hand and 'social' and 'potential freedom' on the other. Psychological freedom is the degree of harmony between basic motives and overt behaviour. Social freedom is the relative absence of perceived external restraints on individual behaviour. Potential freedom is the absence of unperceived external restraints on individual behaviour (Bay 1958).
2 The distinction between 'negative' and 'positive' freedom is made, among others, by Fromm (1941), Berlin (1969) and Okulicz-Kozaryn (2014). There can be a tension between positive and negative freedom, as pointed out by Berlin. For example: if governments construct roads and bridges this will contribute to positive freedom, but it also has a negative impact on negative freedom if it leads to a higher level of taxation. In Berlin's view negative freedom is the only 'real' freedom.
3 Veenhoven uses the phrase 'political freedom', but I will stick to the original phrase 'global freedom' as used by Freedom House. This is the average score for political rights and civil liberties. Just like Veenhoven I reverse scores to make sure that higher scores always indicate more freedom.
4 Spruk and Kešeljević (2015) also found a positive correlation between economic freedom and happiness.
This correlation is in his view at least partly the result of causal effects of freedom on happiness. He suggests that global freedom, or political freedom in his terminology, contributes to economic freedom, and economic freedom contributes to happiness. Both types of freedom make way for private freedom, which in its turn adds to happiness by allowing a better fit between lifestyles and individual preferences. This conclusion is consistent with further findings that freedom and individual autonomy are important for happiness and development (Sen 1999). There are no signs that the impact of freedom on happiness is lower at higher levels of freedom; so there are, so far, no signs of 'diminishing returns' (Veenhoven 2008).5
Research Questions
In this research paper I repeat Veenhoven's research with data for the years 2010-2012, but I also want to answer some additional questions. Veenhoven did not assess how economic freedom is measured. The Heritage Foundation and the Fraser Institute assume that government activities have, on balance, a negative impact on economic freedom. These institutes use general levels of government activities, as reflected in expenditures, consumption and taxation, as negative indicators. This assumption is debatable. I will therefore answer the following research questions. 1. Do different types of freedom, and freedom and happiness, still go together in nations? 2. Are there possibilities to improve the measurement of economic freedom by the Heritage Foundation and the Fraser Institute? 3. What is the impact of such improvements on the correlation between economic freedom and alternative types of freedom, and on the correlation between economic freedom and happiness? In view of previous research, and the related theoretical considerations, we may expect higher correlations if economic freedom is measured in a better way.
To answer the research questions I use a sample of 127 nations. Nations with missing values for more than one key variable are left out. There are almost 200 countries in the world and these 127 countries are not a random sample, because there are more missing values among 'failed states' without any effective government. This has, however, no substantial negative impact on the representativeness of the sample.6 The representativeness of the 127 countries is good enough to justify general conclusions. Average scores are used for the years 2010-2012 (around 375 country-year observations).

5 There is discussion at this point. In his book 'The Paradox of Choice: Why More is Less' (2004) Barry Schwartz argues that eliminating consumer choices can greatly reduce anxiety for shoppers. Paolo Verme (2009) argues that freedom can turn into a disutility if people expect they will be unable to handle freedom. Brulé and Veenhoven (2014) explain the difference in average happiness of the Finns and the French by a difference in the capability to choose, or the 'psychological freedom'. But some incapability to choose is a specific problem, and Veenhoven can still be right that there are no 'diminishing returns' of negative or positive freedom, or the opportunity to choose as such, in the social or physical environment.
6 The World Bank measures six dimensions of the quality of government for all nations every year. The results are expressed in standardized scores, and the average of such scores is by definition 0. The averages in the sample of 127 nations are close to 0: -.05 for the democratic quality as the average of two dimensions, and +.09 for the technical quality as the average of four dimensions (on a range of -2.5 to +2.5 in standard deviations). This is an indication that the representativeness of this sample is very acceptable. See "Appendix" for more information.
In the Appendices 1, 2, 3 and 4 some information about relevant variables and data is summarized. More information is available at the sites mentioned. Descriptive statistics and correlations are presented in Appendices 5 and 6. The scores for global freedom and press-freedom (Freedom House) are reversed to make sure that higher scores always indicate more freedom. This makes it easier to understand the correlations in the Tables.
The research questions 1, 2 and 3 will be discussed in Sects. 3, 4 and 5, followed by conclusions and a discussion in Sect. 6. In Table 1 the corresponding correlations for 2000-2006, as reported by Veenhoven, are shown in brackets.
In the cells of Table 1 we see similar correlations for 2000-2006 and 2010-2012. The correlation between global freedom and personal autonomy (+.92) is higher than between global freedom and private freedom (+.66). This is understandable since personal autonomy is a sub-indicator of civil liberties, one of the two components of global freedom. The correlations are otherwise comparable and we may conclude, following Veenhoven, that different types of freedom go together and go together with more happiness. We will see the same pattern with more types of freedom in Table 5a, b.
The empirical observation that different types of freedom go together, and go together with more happiness, deserves some attention because this is not 'self-evident'. Specific types of freedom can be more important for specific groups. More negative freedom will be more important for people with money and power (no interference please!); more positive freedom will be more important for poor people (more public goods and services please!); more economic freedom will be more important for employers, the self-employed and investors (just rule of law, no regulations please!); more private freedom and personal autonomy will be more important for cultural, religious, and sexual minorities (more equality and tolerance please!). Such differences in the appreciation of freedom are apparently not inconsistent with a high mutual correlation and a high correlation with average happiness. One reason is perhaps that there is a substantial overlap in (sub-) indicators in the measurement of different freedoms. There is also a more theoretical explanation: it is eventually always about the individual freedom to make decisions, and we may expect that individuals will claim similar levels of freedom, whatever the decisions at stake. A certain level of individual freedom or autonomy can easily become a general cultural standard.
4 The Measurement of Economic Freedom and Options for Improvement

4.1 Using Convergent Validity to Evaluate Measurements

I use convergent validity to evaluate measurements. This validity refers to the consistency of a measurement if a variable is measured by several indicators. The Cronbach alpha, as a scale reliability coefficient, is a good statistical measure for this consistency.7 The minimum value of this measure is 0 and the maximum value is 1. The value will increase when correlations between the indicators increase. As a general rule a value of 0.7 is acceptable and a value of 0.8 is good.
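The Cronbach alpha is simple to compute from an indicator matrix. A minimal sketch in Python, assuming a pandas DataFrame with one column per indicator and one row per nation (data and names illustrative):

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)
```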
Measurement by the Heritage Foundation (Table 2)

Columns 1, 2 and 3 in Table 2 are about the convergent validity of the measurement of economic freedom by the Heritage Foundation. In the first column are the correlations between the aggregated or summary scores for economic freedom in nations, as presented by the Heritage Foundation in the original index, and the scores of nations on the 10 indicators, mentioned at the beginning of the rows, used by this Foundation to construct these summary scores. Eight indicators are related to specific types of institutional quality; two indicators (3 and 4) are related to levels of government activities. Nations with higher levels of institutional quality or higher levels of government activities always get higher scores for these indicators.
There are, however, different ways to construct the summary scores. In the first column we look at the construction by the Heritage Foundation. The summary scores are the average of all indicators with equal weights. The levels of government activities, however, are used as negative indicators for economic freedom; nations get lower scores if these levels are higher. If the assumption of the Heritage Foundation, that government activities have a negative impact on economic freedom, is correct, we may expect substantial negative correlations between these actual levels of government activities and the summary scores. In the first column we see, however, that the actual correlation is indeed negative, but close to zero, for 'Fiscal Freedom' or the level of taxation (%GDP), and positive for 'Government Spending' (%GDP). Neither correlation is significant, while all other correlations are positive and significant (at the .01 level). The validity of this measurement with 10 indicators is nevertheless acceptable; the Cronbach alpha is .75. There are two options for improvement.

Note to Table 2. In the first column (1) the correlations are between:
(a) The scores for the actual levels of the 10 indicators mentioned in the rows. Nations get higher scores if they have higher levels of taxation and government spending (3 and 4) and higher scores for the institutional qualities (1, 2 and 5, 6, 7, 8, 9, 10).
(b) The aggregated or summary scores for economic freedom in nations as presented by the Heritage Foundation (Original Index). These scores are the average of the scores for the 10 indicators mentioned in the rows. Nations with higher levels of taxation or government spending get lower scores for indicators 3 and 4 and, as a consequence, lower scores for economic freedom. Nations with higher levels of institutional qualities get higher scores for indicators 1, 2, 5, 6, 7, 8, 9 and 10.
In the second column (2) the correlations are between (a) same as in column (1), but not for 3 and 4, and (b) summary scores computed as the average of the 8 remaining indicators. In the third column (3) the correlations are between (a) same as in column (1), and (b) summary scores that are again the average of the 10 indicators mentioned in the rows, but now nations get higher scores for higher levels of taxation and government spending, and as a consequence higher scores for economic freedom. Nations with higher levels of institutional qualities (1, 2 and 5, 6, 7, 8, 9, 10) again get higher scores.
Options for Improvement
a. Leaving indicators 3 and 4 out. In the second column indicators 3 and 4 are left out in the construction of the summary scores. The summary scores are now the averages of the 8 remaining indicators related to institutional qualities. We see a substantial improvement in the convergent validity. The Alpha goes up from .75 to .90. b. Reversing the sign of 3 and 4. In the third column we reverse the scores for the indicators 3 and 4 in the construction of the summary scores. We have again 10 correlations between summary scores and 10 indicators, but now nations also get higher summary scores if they have higher levels of government activities, instead of lower scores. Indicators 3 and 4 always remain the same: higher scores are related to higher levels of actual government activities. We see a similar improvement in the convergent validity. The correlation between the summary scores and actual levels of taxation and government spending is now positive and significant. The Alpha goes up from .75 to .88.
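The two options, and their effect on the alpha, can be checked directly with the helper above. A minimal sketch, assuming a hypothetical pandas DataFrame heritage with indicator columns ind1..ind10 on the index's 0-100 scale, where ind3 and ind4 already score lower government activity higher, as in the original index (all names illustrative):

```python
cols = [f"ind{i}" for i in range(1, 11)]
institutional = [c for c in cols if c not in ("ind3", "ind4")]

# Column 1: original index, average of all 10 indicators.
original = heritage[cols].mean(axis=1)
# Option a (column 2): leave indicators 3 and 4 out.
shortened = heritage[institutional].mean(axis=1)
# Option b (column 3): reverse 3 and 4 so that bigger government scores higher.
reversed_variant = heritage.assign(
    ind3=100 - heritage["ind3"], ind4=100 - heritage["ind4"]
)[cols].mean(axis=1)

print(cronbach_alpha(heritage[cols]))           # ~.75 reported in the text
print(cronbach_alpha(heritage[institutional]))  # ~.90 reported in the text
```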
We may conclude that both options improve the validity.8 It is clear that the assumption that actual levels of taxation and government spending have a negative impact on economic freedom, as measured by the other indicators, is not correct. The correlation between the level of taxation and such qualities is negative but close to zero. The correlation for government spending (expenditures), with government consumption and transfers and subsidies as important components, is positive but not significant. It is however debatable to use taxation and government spending as positive indicators for economic freedom. There is no theoretical justification; government spending, consumption and transfers and subsidies included, can be directed at very different policies, liberal or less liberal!
Measurement by the Fraser Institute (Tables 3, 4)

Columns 1, 2 and 3 in Table 3 are about the convergent validity of the measurement of economic freedom by the Fraser Institute. In the first column are the correlations between the summary scores for economic freedom in nations, as presented by the Fraser Institute (Original Index), and the scores of nations on the 5 indicators this Institute uses to construct these summary scores. Four indicators are related to specific types of institutional quality; one indicator is related to actual levels of government activities ('Size of Government'). Nations with higher levels of institutional quality and higher levels for 'Size of Government' always get higher scores for these indicators. There are, however, different ways to construct the summary scores. In the first column we look at the construction by the Fraser Institute. The summary scores are the average of all 5 indicators with equal weights. 'Size of Government', however, is used as a negative indicator for economic freedom; nations get lower scores if they have more government. If the assumption of the Fraser Institute is correct, that government activities have a negative impact on economic freedom, we may expect a substantial negative correlation between the actual 'Size of Government' and the summary scores.
In the first column we see indeed that the correlation is negative but not significant. The negativity is consistent with the assumption of the Fraser Institute that 'Big Government' has a negative impact on economic freedom. The negativity is strange, however, because 'Size of Government' contains 'Government Consumption' and 'Transfers and Subsidies' as important components. Taken together these two sub-indicators are comparable with the actual level of government spending as used by the Heritage Foundation. This actual level of spending has a positive correlation with the summary scores for economic freedom as measured by the Heritage Foundation.

Note to Table 3. In the third column (3) the correlations are between (a) the scores for the actual levels of the 5 indicators mentioned in the rows, and (b) summary scores computed as the average of the 5 indicators, but now with nations getting higher scores for higher levels of government consumption, transfers and subsidies, government enterprises, and top tax-rate, and as a consequence higher scores for economic freedom. Nations with higher levels of institutional qualities (2, 3, 4, 5) again get higher scores.
In Table 4, about the sub-indicators of 'Size', we see the explanation. Actual levels of government consumption and transfers and subsidies have indeed a positive correlation with the summary scores of the Fraser Index. This 'positivity' is overruled, however, by 'Government enterprises and Investment' (%GDP) and 'Top Tax-rate' (average, equal weights). The last two indicators have substantial negative correlations with the summary scores of the Fraser Index.
The validity of the measurement in the first column in Table 3 is reasonable; the Cronbach alpha is .66. But there are options for improvement.
Options for Improvement
a. Leaving indicator 1 out. In the second column indicator 'Size' is left out in the construction of the summary scores. The summary scores in the Index are now the averages of the 4 remaining indicators related to institutional quality. The Alpha goes up from .66 to .85, indicating a better convergent validity. b. Reversing the sign of indicator 1, 'Size of Government'. In the third column we reverse the scores for 'Size of Government' in the construction of the summary-scores. We have again 5 correlations between summary scores and 5 indicators, but now nations get higher summary scores if they have higher levels of actual government activities, as expressed in a higher score for 'Size of Government'. The correlation between the summary scores and scores for 'Size' is now positive and significant (at .01 level), and the Alpha goes up, but not as much as in option a, from .66 to .76.
Both options improve the convergent validity, but the second option, reversing the sign of 'Size', is less effective, obviously because of its mixed composition as shown in Table 4. We may conclude that 'Size' is not an appropriate indicator to measure economic freedom. Only the actual levels of 'Government Enterprises' (%GDP) and 'Top Tax-rate' can be used as negative indicators, because they have a significant negative correlation with economic freedom as measured by the other indicators. Both indicators are however rather specific, and not representative for government activities in general. 'Top Tax-rate' is rather specific because it is a technical characteristic of a tax system. The level of taxation, as measured by the Heritage Foundation, is more comprehensive because it refers to the total tax burden as a % of GDP. The negative correlation between this tax burden, as measured by the Heritage Foundation, and economic freedom is, however, close to zero and not significant. The inconsistency of 'Size of Government' is also visible in the correlation of the sub-indicators with happiness. This correlation is positive for government consumption, transfers and subsidies9 and taxation, but negative for government enterprises and investments.10

Notes to Table 4: ** significant at the .01 level. a Actual level of government consumption as a % of national consumption. b Average of the top marginal income tax rate and the top marginal income and payroll tax rate, and the income threshold at which these rates begin to apply. The measurement of the Heritage Foundation is more comprehensive, because it refers to the total tax burden as a % of GDP.
The Impact of a Better Measurement of Economic Freedom on the Correlation with Alternative Freedoms and Happiness
Reversing indicators related to levels of government activities, and using them as positive instead of negative indicators, is not a good option. I therefore only assess the impact of leaving them out since this option is more effective and transparent.
In Table 5a, b, for the Heritage Foundation and the Fraser Institute respectively, we see that leaving levels of government activities out as indicators in measuring economic freedom leads indeed to higher correlations with alternative types of freedom and happiness. The higher correlations indicate a better predictive validity, since they are consistent with findings of previous research and theoretical considerations.
Only 'Freedom satisfaction' (satisfaction with the freedom to make life choices) has a relatively low correlation with the different types of actual freedom. The correlation between this satisfaction and economic freedom is also rather insensitive to the improvements in the measurement of economic freedom. This variable is, however, not really about actual freedom but about satisfaction with freedom. It is a subjective reality and as such it has a substantial correlation with (subjective) happiness. High correlations between individual subjective realities are quite common, but it is interesting to see a high correlation between average subjective realities in nations.
Conclusions
Now we can answer the research questions about the mutual correlations of types of freedom, the possibilities to improve the measurement of economic freedom by the Heritage Foundation and the Fraser Institute, and the impact of such improvements on the correlation between economic freedom and alternative freedoms and between economic freedom and happiness. 1. Different types of freedom in nations go together, and go together with more happiness. 2. The measurement of economic freedom by both institutes can be improved by leaving out levels of government activities as indicators. Only the actual levels of 'Government Enterprises' (%GDP) and 'Top Tax-rate' have significant negative correlations with economic freedom as measured with institutional indicators; they are, however, rather specific and not representative for government activities in general. Taxation, as measured by the Heritage Foundation as a % of GDP, is more representative, but the correlation with economic freedom, as measured with institutional indicators, is close to zero. Levels of government spending, transfers and subsidies and government consumption are also representative for government activities in general, but have a positive correlation with economic freedom as measured by the institutional indicators. They should certainly not be used as negative indicators. There is, however, no substantial theoretical justification to use them as positive indicators instead. As a general rule it is apparently better to use types of institutional quality to measure economic freedom, and to ignore the size of governments. 3. If economic freedom is measured in a better way we see, as expected, higher correlations between economic freedom and alternative types of freedom and between economic freedom and happiness.

9 The positive correlation between average happiness and 'government consumption' (+.50) and 'transfers and subsidies' (+.49) is remarkable, but government consumption and 'transfers and subsidies' refer to different government activities. More research is needed to explain the correlation with happiness. One hypothesis is that some types of 'transfers and subsidies' contribute to 'decommodification', as defined by Esping-Andersen (1990). See also Ott (2015).
10 The negative correlation between average happiness and 'government enterprises and investments' (-.41) is remarkable. One hypothesis for further research is that ex-communist nations still have high levels of government enterprises and investments and low levels of happiness. Another hypothesis might be that government involvement in the economy can easily become problematic, through unpredictable interventions in commercial management and ambiguities in responsibilities.
Discussion: the Importance of the Quality of Governments
In previous research I found that the correlation between government activities and happiness depends heavily on the quality of governments, and in particular on the technical or delivery quality. The conclusions of this research indicate that, in the sample of 127 nations, this quality is 'good enough' to create a positive correlation, or at least a neutral correlation, between general levels of government activities on the one hand and economic freedom and average happiness on the other. Since this sample is representative for all of the roughly 200 current nations, we may assume that nowadays government activities in general contribute to freedom and happiness, even if there are some regrettable exceptions. The quality of governments is a strong predictor of average happiness. Only GDP per capita has a comparable importance for happiness, but GDP and economic growth depend heavily on the quality of governments. Kaufmann of the World Bank (2005) once estimated that a nation improving the quality of its governance from 'low' to 'average' can almost triple its income per capita in the long term. Up to a point GDP per capita is an intermediate between the quality of governments and average happiness. Such statements are obviously generalizations; there are exceptions. Bad governments can contribute to happiness if they have enough money. Governments with money can also improve their quality first and contribute to happiness later. We must also realize that small governments, with low levels of spending and transfers, can be very effective, e.g. if they can solve problems with intelligent legislation. Size and power are different subjects.
The quality of governments is important for happiness in two ways (Ott 2010). The quality is important in a direct way in direct contacts between citizens and government agencies: citizens want to be treated carefully and respectfully. 11 The indirect impact depends on the creation of conditions and resources that contribute to happiness, like safety, physical infrastructure, employment, education and information. These conditions and resources also contribute to freedom. Freedom, economic freedom included, is apparently an important intermediate, just like GDP, between the quality of governments and happiness.
Some people are optimistic about the impact of governments on freedom and happiness. Other people are more pessimistic. 12 Contacts between citizens are based on equality and consensus, but contacts between citizens and government agencies are based on hierarchy. Most of us prefer contacts based on consensus. And citizens can become inactive, and even apathetic, if their governments are too big or powerful.
It is absolutely wise to be critical about governments, but we have to be open minded. The observation that there is in general a positive or neutral relation between government activities and economic freedom, suggests that there is no natural contradiction between governments and free markets. Free markets need institutional qualities and some qualified and authoritative supervision. Government agencies can provide for such facilities. Free markets and governments are therefore not antagonistic, but need each other to produce the best outcomes in terms of freedom and happiness.
The measurements of the Heritage Foundation and the Fraser Institute are very similar and primarily directed at negative freedom. Economic freedom is defined as the freedom of individuals to engage in economic transactions without interference. Most indicators value specific types of institutional quality, e.g. the rule of law, the protection of property and sound money. A few indicators value 'small government' by using levels of government activities as negative indicators. Many data come from the same sources. The scores for countries are based on available statistics and on the standardized assessments of experts. It is no surprise that the correlation in the outcomes is high: +.88 for averages in the years 2010-2012. If the measurements are improved this correlation goes up to +.91.
Below, in the sections on the Heritage Foundation and the Fraser Institute, information is summarized about their measurement of economic freedom.
Concept
Economic freedom is defined as the fundamental right of every human to control his or her own labor and property. Individuals are free to work, produce, consume, and invest in any way they please. Governments allow labor, capital, and goods to move freely, and refrain from coercion of liberty beyond the extent necessary to protect and maintain liberty itself.
Measurement
Economic freedom is measured with 10 indicators with equal weights, related to four broad categories.
A. Rule of Law (property rights, freedom from corruption); B. Limited Government (fiscal freedom, government spending); C. Regulatory Efficiency (business freedom, labor freedom, monetary freedom); D. Open Markets (trade freedom, investment freedom, financial freedom). For each indicator countries can get 0-100 points, and a summary score for economic freedom in general is the average score of the 10 indicators. More points indicate more freedom. Countries get higher scores for 'Fiscal Freedom' (indicator 3, the level of taxation as %GDP) and 'Government Spending' (indicator 4, %GDP) if they have lower levels of taxation and government spending. These levels are used as negative indicators.
Data Source
The Heritage Foundation collects information from many specific sources, like the World Bank, IMF and Economist Intelligence Unit. Data are available in the Index of Economic Freedom of the Heritage Foundation. See: http://www.heritage.org/index.
Concept
Economic freedom implies that individuals are permitted to choose for themselves and engage in voluntary transactions, as long as they do not harm the person or property of others. The primary role of government is to protect individuals and their property from aggression. The index of economic freedom of the Fraser Institute is designed to measure the extent to which the institutions and policies correspond with a limited government ideal, where the government protects property rights and arranges for the provision of a limited set of 'public goods' such as national defence and access to money of sound value.
A country must provide secure protection of privately owned property, even-handed enforcement of contracts and a stable monetary environment. It also must keep taxes low, refrain from creating barriers to both domestic and international trade, and rely more fully on markets rather than government spending and regulation to allocate goods and resources. A country's summary rating in the index is a measure of how closely its institutions and policies compare with the idealized structure implied by standard textbook analysis of microeconomics.
Measurement
Each year the Fraser Institute presents a report about economic freedom in nations: the annual 'Economic Freedom of the World Report' (EFWR). Economic freedom is measured in five major areas: 1. Size of Government, with four sub-indicators with equal weights: government consumption as a % of national consumption, transfers and subsidies, government enterprises and investments, and top tax-rate. 2. Legal structure and security of property rights. 3. Access to sound money. 4. Freedom to trade internationally. 5. Regulation of credit, labor, and business.
Within these five areas there are 23 components and many of them are made up of subcomponents. Each component and sub-component is placed on a 0-10-scale. The subcomponent ratings are averaged to determine each component and the component ratings are averaged to derive ratings for each major area. The final summary rating is the average of the five area ratings on a 0-10 scale; lower scores indicate lower levels of economic freedom. In area 1, Size of Government, countries get lower scores if they have higher levels of government consumption, transfers and subsidies, government enterprises and investments, and top tax-rates (='bigger government'). These levels are used as negative indicators.
Data Source
The data-set of the EFWR index is updated each year with new data for the most recent year. The Fraser Institute collects information from more or less the same sources as the Heritage Foundation, e.g. the Doing Business dataset of the World Bank. See: http://www.freetheworld.com.
Concept
Freedom House defines global freedom as the opportunity to act spontaneously in a variety of fields outside the control of the government and/or other centers of potential domination. It measures freedom according to two broad categories: political rights and civil liberties. The sum of the scores for political rights and civil liberties indicate the state of global freedom in nations as experienced by individuals.
Political rights enable people to participate freely in the political process through the right to vote, compete for public office and elect representatives who have a decisive impact on public policies and are accountable to the electorate. Civil liberties allow for the freedom of expression and belief, associational and organizational rights, rule of law, and personal autonomy without interference from the state. Personal autonomy is one of the indicators of civil liberties and is more specifically defined by the following aspects: • Do citizens enjoy the freedom of travel or choice of residence, employment, or institution of higher education? • Do citizens have the right to own property and establish private business? Is private business activity unduly influenced by government officials, the security forces, political parties/organizations, or organized crime? • Are there personal social freedoms, including gender equality, choice of marriage partners, and size of family? • Is there equality of opportunity and absence of economic exploitation?
This personal autonomy is comparable to Veenhoven's 'private freedom'; measured with an index for the absence of restrictions to travel, religion, marriage, divorce, euthanasia, suicide, homosexuality, and prostitution (using data of the World Values Surveys).
Measurement
Each country or territory is assigned two numerical ratings from 1 to 7, for political rights and civil liberties. Global freedom is the average of these two ratings, also ranging from 1 to 7. Higher scores indicate less freedom. Countries with 1 or 2 points are 'free'; with 3, 4 or 5 points 'partly free'; and with 6 or 7 points 'not free'. In the tables the scores are reversed to make them more consistent with the other scores. Higher scores for personal autonomy, on a scale ranging from 0 to 16 points, indicate more autonomy.
Data Source
The Freedom in the World survey provides an annual evaluation of the state of global freedom as experienced by individuals. Legal rights are considered, but more emphasis is placed on whether these rights are implemented in practice. Rights and liberties can be affected by both state and non-state actors. In this analysis findings are used for the years 2010 and 2012 and are retrieved from the report 'Freedom in the World 2015'. The data are stable over the years and there is always a high correlation between the scores for political rights and civil liberties: +.92 in 2012. See https://freedomhouse.org.
Concept, Measurement and Data
The Freedom of the Press report measures the level of media independence in 197 countries and territories. Each country receives a numerical score from 0 (the most free) to 100 (the least free) on the basis of combined scores from three subcategories: A. The legal environment. B. The political environment. C. The economic environment.
For each category, a lower number of points is allotted for a more free situation, while a higher number of points is allotted for a less free environment. Here again the scores are reversed in the tables to make them more consistent and understandable. Data are available in: https://freedomhouse.org/report/freedom-press/freedom-press-2011#.VcN4j3kw_AU.
"Economics"
] |
Accounting for Context in Randomized Trials after Assignment
Many preventive trials randomize individuals to an intervention condition which is then delivered in a group setting. Other trials randomize higher levels, say organizations, and then use learning collaboratives comprised of multiple organizations to support improved implementation or sustainment. Still other trials randomize or expand existing social networks and use key opinion leaders to deliver interventions through these networks. We use the term contextually driven to refer generally to such trials (traditionally referred to as clustering, where groups are formed either pre-randomization or post-randomization, i.e., a cluster-randomized trial), as these groupings or networks provide fixed or time-varying contexts that matter both theoretically and practically in the delivery of interventions. While such contextually driven trials can provide efficient and effective ways to deliver and evaluate prevention programs, they all require analytical procedures that take appropriate account of non-independence, something not always appreciated. Published analyses of many prevention trials have failed to take this into account. We discuss different types of contextually driven designs and then show that even small amounts of non-independence can inflate actual Type I error rates. This inflation leads to rejecting the null hypothesis too often, erroneously leading us to conclude that there are significant differences between interventions when none exist. We describe a procedure to account for non-independence in the important case of a two-arm trial that randomizes units of individuals or organizations in both arms and then provides the active treatment in one arm through groups formed after assignment. We provide sample code in multiple programming languages to guide the analyst, distinguish diverse contextually driven designs, and summarize implications for multiple audiences. Supplementary Information The online version contains supplementary material available at 10.1007/s11121-022-01426-9.
Technical Details Regarding Individually Randomized Group Treated Designs
The following sections provide technical descriptions that fully specify statements in the text but are not essential for general understanding.
Appendix 1a. Estimation of Variance Due to Grouping After Randomization Occurring in a Single Arm of a Two-Arm Trial

Here we discuss some subtleties regarding the estimation of variation in intervention impact across groups formed after assignment. If there are multiple groups in the one arm, and these are comprised of different individuals in each group, then one can treat each group as independent of one another. First define the sample average in each group g as Ȳ_g, g = 1, …, G (G > 1), the total average as Ȳ, and the within-group sample variance for group g as s_g². If we assume that individual variation within each group is the same across individuals and that the group size N does not vary, then the pooled within-group variance W = Σ_g s_g²/G and the between-group variance B = Σ_g (Ȳ_g − Ȳ)²/(G − 1) can be combined so that W is an unbiased estimate of σ_W², the population within-group variance, and B − W/N is an unbiased estimate of σ_B², the between-group variance. All this leads to appropriate testing of the mean in the one arm with multiple groups against the overall mean in the other arm that has no groups. However, if the group intervention is delivered to everyone in the one arm at the same time (G = 1), B is undefined because of the denominator G − 1.
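A small numerical sketch of these estimators (a hypothetical helper; it assumes equal group sizes N, as in the text):

import numpy as np

def variance_components(groups):
    """groups: list of 1-D arrays, one per post-randomization group (equal size N)."""
    G = len(groups)
    N = len(groups[0])
    means = np.array([g.mean() for g in groups])
    W = np.mean([g.var(ddof=1) for g in groups])  # unbiased for sigma_W^2
    B = means.var(ddof=1)                         # sum (Ybar_g - Ybar)^2 / (G - 1)
    sigma2_between = B - W / N                    # unbiased for sigma_B^2
    return W, sigma2_between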
Even though direct estimation of intervention variation by group cannot be done with only one group, it is possible to estimate the variance of the intervention effect across groups indirectly, provided one is willing to make the strong assumption that the variance for each person is the same in both arms of the trial. With this assumption, the ICC = σ_B²/(σ_B² + σ_W²) can be estimated from V̂_g and V̂_c, the standard formulae for the squared standard errors of the means in the grouped and non-grouped arms, where N is the total number of subjects in the grouped arm and n_c the number of controls: N·V̂_g estimates σ_W², n_c·V̂_c estimates the total variance σ_B² + σ_W², and the ICC is estimated by (n_c·V̂_c − N·V̂_g)/(n_c·V̂_c). Being based on a single degree of freedom, this estimate is not precise.
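A minimal sketch of this indirect estimator, assuming the reconstruction above (the exact published expression may differ):

import numpy as np

def icc_one_group(y_grouped, y_control):
    """Indirect ICC estimate when the treated arm is a single group."""
    N, n_c = len(y_grouped), len(y_control)
    v_g = np.var(y_grouped, ddof=1) / N    # squared SE of grouped-arm mean
    v_c = np.var(y_control, ddof=1) / n_c  # squared SE of control-arm mean
    sigma2_total = n_c * v_c               # assumes equal per-person variance in both arms
    sigma2_between = sigma2_total - N * v_g
    return sigma2_between / sigma2_total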
Appendix 1b. Specification of Model for IRGT Simulation Studies
Formally, the model generating the data we used for examining the consequences of incorrectly specifying the random effects in an IRGT model with a large number of groups and subjects can be written as follows.
Let i = 0, 1, …, 200 index the group (0 represents control and 1 or larger the intervention groups), and let j = 1, …, 8,000 for control and j = 1, …, 40 for i larger than 0 (group treatment). Distributional assumptions are provided below, with all error terms independent of one another.
(1) Y_ij = α + β Tx_i + ε_i + δ_ij,
with j = 1, …, 8,000 for i = 0 and j = 1, …, 40 for i = 1, …, 200;
Tx_0 = 0, Tx_i = 1 for i = 1, …, 200;
ε_i ~ N(0, σ_B²) for i = 1, …, 200 (and ε_0 = 0); δ_ij ~ N(0, σ_W²).

The six analyses in Table 1 in the main text are ordered by increasing number of terms in the model. The simplest analysis, displayed in Row 1 of Table 1, is to erroneously ignore grouping effects completely; that is, a model with only fixed and no random effects. This is often the way that IRGT analyses are performed, but we will see that it produces erroneous findings. The second analysis is a mixed-effects model -- one with both fixed and random terms -- that represents the IRGT model correctly (i.e., with Equation 1), with a random effect only for the intervention condition delivered in groups (Row 2). The third analysis involves a mixed-effects model that naïvely accounts for a common random effect across both intervention conditions without directly accounting for the IRGT structure (Row 3). The fourth analysis provides for distinct variances by treatment group but ignores grouping entirely (Row 4). The fifth incorporates a common intercept random effect for everyone plus a random effect for the treatment delivered in a group setting (Row 5). The sixth analysis includes two distinct random effects, one for each intervention condition (Row 6).
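A minimal simulation sketch of this data-generating model (hypothetical variance values; the design of one control arm of 8,000 subjects and 200 treated groups of 40 follows Equation (1)):

import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.0, 0.3          # hypothetical fixed effects
sigma_B, sigma_W = 0.5, 1.0     # hypothetical between/within SDs

# Control arm (i = 0): no group effect.
y_control = alpha + rng.normal(0, sigma_W, size=8000)

# Intervention arm: 200 groups (i = 1..200) of 40 subjects each.
eps = rng.normal(0, sigma_B, size=200)  # group effects
y_treated = alpha + beta + eps[:, None] + rng.normal(0, sigma_W, size=(200, 40))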
Appendix 2. Computer Code and Output for IRGT Modeling of Univariate and Growth Modeling Outcomes in 6 Computer Programming Languages
This online resource provides code for a range of statistical packages for Individually Randomized Single Group Treated (IRSGT or IRGT) modeling with a single normally distributed outcome and a linear growth model. The underlying models and four test files for univariate and growth modeling are available from the first author. In all these datasets we label the treatment variable Tx, which takes the value 1 for active intervention and 0 for control. The variable Group takes the same value for everyone in the control group (i.e., 0), to signify that there is no grouping (i.e., individually randomized, individually treated), and, for those in the group-delivered intervention condition, the value of each unit's group identity (i.e., 1, …, G).
Univariate Modeling of IRSGT
In the univariate case we generate data from Equation (1) in the main text, copied below.
(2) Y_ij = β_0 + β_1 Tx_i + ε_i + δ_ij,
with Tx_i = 0 for i = 0 (controls) and Tx_i = 1 for i = 1, …, G;
j = 1, …, N_i; i = 0, 1, …, G;
δ_ij ~ N(0, σ_W²); ε_i ~ N(0, σ_B²) for i = 1, …, G; ε_i = 0 for i = 0.

Linear Growth Modeling of IRSGT

For linear growth modeling, we describe how the data would be organized when we use the "long form," in which each row corresponds to a unique subject-by-time combination. This format can be used for all statistical packages described below (R, SAS, SPSS, SuperMix, STATA); for Mplus we use the "wide format." We introduce two additional variables: Time is a non-negative variable that starts at 0 at baseline, and Subject indexes the individual. We let Y represent the outcome measure; for the growth model the repeated measures of Y on the same subject are distinguished by their Time.
Note that the error term for the group slope, e_g^Slope, is only present for the intervention group, making this an IRSGT. In our simulated data, none of the groups differ at baseline. If there happen to be variations at baseline -- which might occur if enrollment varies over time -- it would be necessary to include an equivalent random effect for those randomized at the same time to the control group. In our dataset we have added variables called Cohort, which distinguishes those in the controls who correspond to each Group, and CohTx, an indicator of each cohort-by-treatment combination. CohTx is used as an artificial, zero-variance component in the control condition.
Coding for Univariate Model
For each of the statistical programs below, we provide a minimal statement to run both the Univariate and the Growth models; if needed, a preface identifies minimal code needed to set up the model and a postface identifies minimal code needed to output the results. Two test datasets, one called Univariate.csv and one called Growth.csv, are available for downloading and testing.
The following provide minimal code to specify the fixed and random effects, type of model, and output.
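As an illustration of the model these statements specify, here is a minimal Python sketch using statsmodels (an assumption for illustration, since statsmodels is not among the packages listed; the variable names Y, Tx, and Group follow the dataset description above):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: Tx = 1 for the group-treated arm, 0 for controls;
# Group = 0 for all controls, 1..G for treated groups, as described above.
df = pd.read_csv("Univariate.csv")

# Random slope on Tx by Group: contributes a group effect only where Tx = 1,
# so the single control "group" (Tx = 0) adds no random variation.
model = smf.mixedlm("Y ~ Tx", data=df, groups="Group", re_formula="0 + Tx")
result = model.fit(reml=True)
print(result.summary())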
Code to read and organize the data is not provided. All of the programs account for random group effects limited to the active intervention group by specifying them as random slopes, so that the controls, having Tx = 0, are ignored at this level. Specifying the optimization function (e.g., marginal maximum likelihood or REML), the nonlinear optimization algorithm, and the convergence criteria is package-specific and is not covered here.

Appendix 3. Classification and Illustrations of Contextually Driven Intervention Trials

Table 3 provides a representative list of contextually driven trial designs. For each trial design considered, we describe where and when the random assignment occurs, how and when groups or networks are formed or changed, and give a verbal description of the random effects. Examples are provided for each class. To focus on the core design issues, this table ignores common design strategies such as blocking (e.g., school is considered a blocking factor when two interventions are assigned to different classes within each school), cross-over designs (e.g., in a stepped-wedge or more general rollout design where each unit is randomized to the timing of when its intervention condition changes (Wyman, Henry, Knoblauch, & Brown, 2015)), and multiple levels of randomization (e.g., a split-plot design where different interventions are randomized at classroom and school levels). Given the popularity of mixed-effects modeling (i.e., inclusion of both fixed effects for intervention and covariates and random effects to account for clustering), in Table 3 we provide brief notes on common specifications of random effects. Virtually all these designs can also incorporate heterogeneous variance components across the two arms and, for many, other analysis methods may also be appropriate (e.g., Generalized Estimating Equations, Bayesian methods). Some common design names have been modified to make them more precise (e.g., using Single or Both to distinguish what occurs in one or both arms of the trial), and we also use the word treatment to include prevention.
The first (Row 1), a Group Randomized Trial (GRT) (Murray, 1998), involves a head-to-head comparison of two interventions, where groups of individuals already exist (e.g., schools), and these groups are randomized to receive one or the other of these conditions. A recent example is the Wingman Connect Trial (Wyman et al., 2020), which tested a novel intervention focused on preventing suicide and depressive symptoms in new US Air Force Airmen-in-Training against a stress management active control condition. For both intervention conditions, all components of the respective interventions were delivered in existing Airmen technical training classes (these are the pre-existing groups). The trial involved randomizing 215 training classes of average size 7 to receive Wingman Connect or stress management. As the responses of individuals within the same group (i.e., class) may depend on each other, a correct mixed-effects analysis of a GRT is one which includes at least one random effect that accounts for group, or a group random effect for each arm if their variances are different. If we were to ignore how the responses of individuals within groups (i.e., classes) correlate, we would reject the null hypothesis more often than appropriate.
In the second row of Table 3 is an Individually Randomized Both Groups Treated (IRBGT) trial. As the name indicates, the only difference from a GRT is that here assignment to intervention condition is at the individual level, followed by forming groups that receive the same intervention. An example is the comparison of group-based mindfulness versus group-based present-centered therapy in the Veterans Administration (Polusny et al., 2015). In this trial, a total of 116 veterans with post-traumatic stress disorder were randomly assigned to one of these interventions; both interventions were delivered in a group format. This type of trial is very similar to a GRT; one difference, however, is that testing for baseline equivalence on individual characteristics in a GRT should include a random effect for grouping, while this baseline equivalence test for an IRBGT may ignore grouping (unless subject enrollment varies over time).
In the third row we present an Individually Randomized Single Group Treated (IRSGT) trial. All individuals are randomized to one of two intervention conditions; one condition is delivered in a group setting, and the other is delivered individually. In the literature this design is most often referred to as an IRGT; we include the word "Single" to specify that only one arm is delivered in a group setting, making it different from the IRBGT trial described above. An IRSGT is also known as a Partially Clustered Design (H. Li & Hedeker, 2017). Specific guidance on completing a CONSORT statement for this design is available (Boutron, Moher, Altman, Schulz, & Ravaud, 2008). In these designs, eligible individuals are continually randomized to condition, and once sufficient numbers are available to form a group in the relevant single arm, that intervention, as well as the comparison condition -- which is administered individually -- begins. An example of a trial using this design is the Prevention of Depression Study (PODS), which randomly assigned 316 youth from four locations to a cognitive behavioral group-based prevention program consisting of 8 weekly sessions followed by 6 monthly sessions administered in a group, or to individualized usual care (Garber et al., 2009).
Thus, individuals in one arm experienced the entire intervention through their respective groups. For IRSGT trials, traditional intent-to-treat analysis is used even if some individuals drop out before the group or comparison condition begins. Many of these trials include random effects to account for repeated measures (e.g., growth models) and clustering of family members (Brent et al., 2015), but surprisingly few of these trials account for the non-independence of individuals within the same group (Pals et al., 2008).
One ongoing IRSGT study that has transitioned from delivery to a group at one location to the use of virtual groups, in order to decrease COVID-19 exposure, is the M-BODY trial (Burnett-Zeigler et al., Submitted for Publication), which compares a group-based mindfulness intervention to reduce stress and depression for African American adults with usual care. In this case, the transition from a traditional group to a virtual group setting for the mindfulness arm does not change the planned analysis, which would include a random effect for group. However, in the analysis we could investigate whether the face-to-face versus virtual groups have different means and variances.
One variation on the traditional IRSGT trial is the Place Randomized Single Group Treated (PRSGT) trial shown in Row 4 of Table 3. Instead of randomizing individuals and grouping them, we randomize places, sites, or settings to one of two interventions or implementation strategies, one of which combines them into larger groups. This type of design has been used to test two head-to-head implementation strategies in 51 counties, one involving an implementation strategy within each county's service systems and the other a team-based approach that combines 6-8 counties together into a learning collaborative (Chamberlain et al., 2008; Chamberlain & Reid, 1998). As the same underlying evidence-based intervention, Multidimensional Treatment Foster Care (MTFC) (Chamberlain, Leve, & DeGarmo, 2007), was implemented in both arms of this trial, this implementation trial tested whether the learning collaborative improved the quality, delivery, and speed of implementation compared to one that facilitated MTFC's delivery within a single county. Like the IRSGT, an appropriate analysis of this trial required the inclusion of a random effect to account for non-independence due to the learning collaboratives.
Another variation on the traditional IRSGT trial occurs when only part of the intervention is delivered in a group setting, which we call an Individually Randomized Single Group and Individual Treatment (IRSGIT) trial. An example of this is Familias Unidas, an intervention aimed at preventing the target Hispanic adolescent's substance misuse and HIV sexual risk behavior. Familias Unidas uses both parent groups and individual family intervention sessions with a parent and the target youth. In the Familias Unidas trials (Prado et al., 2016; Prado & Pantin, 2011), there have been 8 parent sessions delivered in a group setting that uses participatory learning through dialogue rather than instruction, followed by 4 family sessions, which involve a facilitator supporting an individual parent and adolescent. In these interventions with both group and single-family components, it is still appropriate to include a single random effect in that arm to account for the parent component delivered in a group setting.
We note that a new version of Familias Unidas, e-Familias Unidas, is now being tested. This new version is conducted fully virtually. It uses a telenovela format to simulate a group rather than involving groups of parents meeting together. This new version allows parents and youth to view material (Prado et al., 2019) on their own schedule, and it also retains the individualized sessions with the parent and youth, now delivered remotely rather than in the home. Because all the components are based on the individual family, there is no need to account for non-independence with a group random effect.
An Individually Randomized Single Rolling Group Treatment (IRSRGT) trial (Row 5) differs from IRSGTs in the way that individuals enter and exit groups, which exist only in the active intervention arm.
Instead of having new enrollees wait until enough eligible individuals assigned to the active arm are available to form a new group, they immediately enter an existing group. The curriculum is adapted to address entrances and exits in a rolling fashion. Thus, the composition of the group, and consequently each person's exposure, varies over time. An example of this rolling group design is the BRIGHT trial of a cognitive behavioral intervention to address drug abuse (Watkins et al., 2011). In this quasi-experiment involving 299 residential clients, individuals could enter the group at the beginning of each of the four modules (thoughts, activities, people, and substance abuse), each of which lasted 2 weeks with 2 sessions a week. One way to capture non-independence is to account for cross-classified random effects for each session and attribute each pair's covariance to the sum of the random effects that are shared. A simpler analytical approach is to posit a variance-covariance matrix that accounts for the proportion of sessions that are shared in common between each pair of individuals. Here we would model the mean structure to depend on intervention condition and individual-level covariates, while the variance-covariance matrix across all subjects in the control condition is a variance times an identity matrix (i.e., forcing independence); in the rolling group condition the variance-covariance matrix has two parts: the same covariance matrix as that for controls, plus a second covariance matrix with a new variance times a correlation matrix where the off-diagonal value for subjects i and j is their proportion of shared sessions. Methodologic details on analyzing IRSRGTs using these ideas are less developed than for the other models in Table 3. An alternative is to use Bayesian methods to account for these multiple membership multiple classification models (Browne, Goldstein, & Rasbash, 2001).
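A small sketch of the simpler approach (hypothetical inputs; S[i, j] is the proportion of sessions that subjects i and j attended together, so diag(S) = 1):

import numpy as np

def rolling_group_cov(shared_prop, sigma2, sigma2_g):
    """Covariance for the rolling-group arm: the control-arm structure
    (sigma2 * identity) plus a group variance scaled by the matrix of
    shared-session proportions."""
    n = shared_prop.shape[0]
    return sigma2 * np.eye(n) + sigma2_g * shared_prop

# Example: 3 subjects; subjects 0 and 1 shared half of their sessions.
S = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 1.0]])
print(rolling_group_cov(S, sigma2=1.0, sigma2_g=0.3))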
Individually Randomized Network Treated (IRNT) trials (Row 6) represent an intervention in which a participant's exposure may vary by their location in a network. An example is the HOPE Trial, which uses trained peer leaders to deliver HIV prevention messages in newly formed social media networks (Young et al., 2015). Both peer leaders and community members who are men who have sex with men (MSM) are randomized to these new networks; peer leaders deliver HIV prevention messages in the intervention arm or serve a similar role delivering general health messages in a comparison arm. Peer leaders are recruited, randomly assigned to treatment condition, then randomly assigned within clusters so that peer leaders are different across clusters.
Participants are recruited in waves, such that only after a sufficient number of participants are enrolled and complete their baseline survey are they assigned to clusters; then a new wave of recruitment begins (Young et al., 2013). The hypothesized mediator of the target HIV prevention behaviors (e.g., self-testing) is the number of new ties among community members, so one's position in the network can impact exposure to these messages. Inclusion of a random effect to account for the different independent networks is one important component of the statistical analysis. One consideration in the use of social networks as a way to conduct randomized experiments is that changes in a platform's functionality over time can greatly affect the experience of an eHealth intervention, in which case the analytic approach should address such changes (D. H. Li et al., 2019).
A network design similar to the IRNT is the Place Randomized Network Treated trial (Row 7); here already-formed places are randomized to condition, and the intervention is delivered with potentially different exposure based on one's position in that network. An example is a peer-led Sources of Strength youth suicide prevention program tested in randomly assigned high schools against a standard setting in comparable schools.
Student peer leaders are nominated in schools assigned to Sources of Strength to cover as much of the friendship network as possible in a school, but exposure to messages from these peer leaders is often higher among youth who are in the center rather than periphery of the network because they are more likely to have friendship ties to one or more peer leaders (Pickering et al., 2018).
Spillover Trials (Row 8) deliberately test whether an intervention that targets one individual within a group has additional effects on others within the same group. In contrast to trials where spillover from one intervention arm to another is considered a source of contamination that threatens the trial's integrity, spillover trials are designed to have effects beyond those directly touched by the intervention. As an example, the Philadelphia School Absenteeism Trial randomized parents of youth who were absent from school to either receive or not receive a letter. The effects of this brief intervention were evaluated not only on the target student but also on their siblings, using the family as a group. Thus, each arm of the trial included youth nested in families, and the analyses that evaluated impact included random effects at the family level and fixed effects for the focal youth and siblings.
"Economics"
] |
Credit Card Fraud Detection with Automated Machine Learning Systems
ABSTRACT The steady increase in the turnover of online trading during the last decade and the increasing use of credit cards have made credit card fraud more prevalent. Machine Learning (ML) models are among the most prominent techniques for detecting illicit transactions. In this paper, we apply Just-Add-Data (JAD), a system that automates the selection of machine learning algorithms, the tuning of their hyper-parameter values, and the estimation of performance, to detecting fraudulent transactions in a highly unbalanced dataset, swiftly providing a prediction model for credit card fraud detection. Training the model does not require the user to set up any of the methods' (hyper)parameters. In addition, it is trivial to retrain the model with the arrival of new data; to visualize, interpret, and share the results at all management levels within a credit card organization; and to apply the model. The model selected by JAD identifies 39 out of a total of 46 fraudulent transactions in the test sample, with most missed fraudulent transactions being small transactions below 50€. The comparison with other methods on the same dataset reveals that all the above come with a high forecasting performance that matches the existing literature.
Introduction
The creation of international agreements that promote transactions with credit cards, such as the Single Euro Payments Area (SEPA), has significantly eased the use of card payments by consumers and businesses. The total value of card transactions using cards issued in the SEPA area amounted to €4.38 trillion in 2016 (ECB, 2018) and is expected to double by 2025. Nevertheless, along with credit card transactions, we have seen a significant rise in credit card fraud. In 2016, credit card fraud in the SEPA area amounted to €1.8 billion (European Central Bank 2018), while worldwide losses rose from $7.6 billion in 2010 to $21.81 billion in 2015 and are expected to reach $31.67 billion in 2020 (Robertson 2016). Despite the increasing effort to alleviate such fraudulent transactions and the substantial resources allocated by credit card issuers toward this end, the rising cost of fraudulent transactions suggests that there is much room for improvement in this research area.
The detection of a fraudulent transaction is a challenging task. First, fraudulent cases are rare (in our dataset roughly 1 in every 579 records), rendering the outcome distribution severely skewed. The distribution of fraud cases also exhibits seasonality effects and structural breaks, as attack strategies evolve over time (Dorronsoro et al. 1997). Another important issue is the accurate definition of the cost function, given that the cost of a false positive differs from the cost of a false negative (Dal Pozzolo et al. 2014). When the system characterizes a genuine transaction as fraudulent and erroneously freezes that transaction (false positive), the financial institution pays an administrative cost and suffers a decrease in customer satisfaction. In the case of frequent false-positive alarms, the financial institution faces the risk of losing customers and gathering adverse publicity. Conversely, when the system fails to detect a fraudulent transaction (false negative), the amount of that transaction is a loss for the financial institution or the merchant. Thus, it is very hard to define the asymmetric loss of each occurrence.
Another significant difficulty in fraud detection is that electronic defrauders perform mainly legitimate transactions and only occasionally fraudulent ones, making it difficult to profile them into universal standard patterns. Each transaction must be examined separately, delaying the reaction, especially during the non-working hours of electronic transactions. For a system to be useful in practice, the response to a fraudulent transaction should be almost instantaneous, which is difficult given that most systems end up forwarding automatically flagged transactions to (human) fraud examiners for manual inspection. Finally, security and privacy laws limit the public availability of data and/or censor the performed analyses, making them difficult to assess.
Credit card fraud can be broadly separated into two categories: identity fraud with the physical presence of the card, and electronic fraud without the physical presence of the card. In the first case, the fraud demands the acquisition of the credit card and the identity of its actual owner; to perform a transaction, the imposter must be physically present. The second category does not require the physical presence of the card or its owner/imposter and targets online transactions, where only identity and safety details are required. The latter category accounts for more than 70% of worldwide credit card fraud (Robertson 2016), given that no face-to-face contact between seller and buyer is required. Despite several technological improvements such as the Address Verification System (AVS), Chip and PIN verification and the Card Verification Code (CVV), new credit card fraud strategies are continuously being developed. This makes the automated, timely detection of fraudulent transactions a very significant defense mechanism in combating fraud and reducing the associated losses to financial institutions.
Modern data-driven statistical and machine-learning (ML) methods can provide predictive models that output the probability of a transaction being fraudulent and address the above challenge. Indeed, ML applications have been shown to be promising in fraud detection (see Related Work). However, each such application requires coding its own script, experimentation with several algorithms, and significant experience with statistical and machine learning methods, as some ML algorithms do not converge on big-sample data, some return sub-optimal predictive models, some are inappropriate for imbalanced outcomes, others require fine-tuning of their hyper-parameter values, and others are hard to explain or interpret, or challenging to combine with feature selection (see Related Work Section). Manual scripting is also time-consuming and prone to methodological errors. Thus, the challenges that arise are: "Can credit card fraud predictive modeling be automated? Do the resulting models compete with the ones developed by human experts? Does automation obfuscate the interpretation of the model, or can it actually also help in obtaining intuition into the data patterns and task?"

To respond to such challenges, systems and services that automate a large part of the machine learning pipeline have recently appeared under the name of Automated Machine Learning (AutoML) systems. Such systems automate the selection of ML algorithms, the tuning of their hyper-parameter values, the estimation of performance, and the visualization and interpretation of results. In this paper, we demonstrate how AutoML tools could potentially increase the productivity of detecting fraudulent credit card transactions without a reduction in prediction performance compared to a manual analysis. Specifically, we describe and use the Just Add Data Bio 1 (hereafter JAD) AutoML tool on the fraud detection problem described above and achieve results on par with state-of-the-art previous analyses that are manually coded. Secondly, in addition to modeling, JAD performs automated feature selection to identify the variables most significant to fraud detection, providing valuable intuition to fraud inspectors. We note that JAD's feature selection considers features jointly (multivariate) and not simply one by one. Features that are informative by themselves may become redundant given other features; similarly, features that are uninformative by themselves may be necessary for optimal prediction and become informative given other features. Hence, optimal feature selection is a combinatorial problem that returns the minimal-size feature subset that in combination leads to the optimally predictive model. After examining numerous combinations of algorithms for feature selection and modeling, as well as their tuning hyper-parameter values, JAD selects the best one to create a final model for prediction. It estimates the model's predictive performance along several common metrics (e.g., AUC, accuracy, balanced accuracy, F1 score), the confidence intervals of performance, the Receiver Operating Characteristic (ROC) curve, and the contribution to performance of each selected feature.
Post-analysis, JAD provides an easy way to access the trained model and apply it to new data to get predictions, without the need for computer coding. This means that any employee of a financial institution or a credit card firm can get predictions and try different scenarios of credit card transactions to gauge how predictions change with the feature values. JAD also supports collaborative analyses by sharing projects, data, and analysis results; the latter can also be shared with anybody via unique links to the specific results' page. 2 The present work provides evidence that AutoML systems can indeed address to a large extent the challenges of automated credit card fraud detection modeling, at least within the limited scope of the computational experiments performed. JAD automatically outputs predictive models that can compete with prior work, selects the important features for prediction while removing irrelevant and redundant ones, and helps explain and interpret results. Several limitations, of course, remain (see Discussion). Nevertheless, AutoML can open a new path of research and provide supervision tools to the industry that overcome some of the limitations and obstacles of academic research. Based on this work, we argue that AutoML tools and services should be considered when analyzing credit card transaction data and, potentially, other similar types of financial data. The simplicity, accuracy and speed of such systems make them an excellent fit in such financial transaction situations. The model can filter and flag a transaction as probably fraudulent in real time out of thousands of other transactions, keeping human intervention to a minimum.
The remainder of the paper is organized as follows. Section 2 describes the Related Work in more detail. Section 3 describes the data and the methodology, while the empirical findings are presented in Section 4. Section 5 discusses the limitations of the study, and Section 6 concludes the paper.
Related Work
The obvious financial benefits of detecting fraudulent transactions have sparked a voluminous literature in the field. The first attempts to create automated detection systems that examine an often large number of transactions and classify them as fraudulent or legitimate were expert systems based on a set of classification rules (Hanagandi, Dhar, and Buescher 1996). Nevertheless, given that the distribution of credit card transaction datasets changes due to seasonality patterns, new market trends and the evolution of new fraud strategies, the applied rules must be constantly updated, making rule-induction systems infeasible and ineffective.
Following an econometric approach, Ng and Jordan (2002) compare logistic regression with Naïve Bayes classification models, showing that logistic regression models have a lower asymptotic error than Bayes classifiers but fail to converge in very large datasets, such as the ones used in credit card transaction problems. The Bayes classifier converges quickly, but its classification accuracy is lower than that of the logistic regression models. On a similar path, Maes et al. (2002) compare Bayesian and neural networks, concluding that the Bayesian network converges faster and exhibits a lower classification error than neural networks. In an extended benchmark study, Lessmann et al. (2015) compare 41 methodologies on various evaluation criteria and several credit scoring datasets. They confirm that the random forest method, i.e., the randomized version of bagged decision trees, outperforms logistic regression; random forests have progressively become one of the standard models in the credit scoring industry (Grennepois, Alvirescu, and Bombail 2018).
Over the last decades, the rapid advances in the field of ML have provided additional tools to fraud investigators. In a thorough survey of the relevant literature, Ngai et al. (2011) conclude that the most commonly used ML methods in fraud detection are decision trees, Artificial Neural Networks (ANN), Support Vector Machines (SVM) and genetic algorithms. These techniques can be used alone or in combination, using ensemble or meta-learning techniques to build classifiers. Most of the applications are based on supervised training algorithms such as ANNs (Dorronsoro et al. 1997; Prodromidis, Chan, and Stolfo 2000; Syeda, Zhang, and Pan 2002; Schindeler 2006; Juszczak et al. 2008; Quah and Sriganesh 2008), decision tree techniques like ID3, C4.5 and CART (Chen et al. 2005; Mena 2003; Wheeler and Aitken 2000) and SVMs (Bhattacharyya, 2011).
A synopsis of the relevant literature suggests that the classification performance of ML methodologies is heavily dependent on the dataset under study, with Bayesian networks and logistic regression exhibiting higher classification performance in smaller samples and ANNs and C4.5 decision trees outperforming all competing methodologies in larger samples. An obvious contrast between the previous works and the direction proposed here is that a large part of the effort goes into identifying the best algorithms for the given task and optimizing their hyper-parameters. Moreover, as the number of candidate features increases, selecting the most informative features becomes computationally intractable. Thus, many researchers select a number of variables (often arbitrarily), conditioning the performance of their model on subjective feature selection processes. In contrast, the AutoML approach completely automates feature selection and model tuning.
The Data
For our analysis we use a large cross-sectional dataset on credit card fraud detection that is frequently used in the literature, available in Dal Pozzolo et al. (2014). 3 The dataset includes online credit card transactions made in September 2013 by European cardholders. It consists of 492 fraudulent transactions out of a total of 284,807 transactions over a two-day period. Thus, the fraud rate is approximately 0.172% of all transactions, or approximately 1 in every 579 transactions. The data contain 28 anonymized variables, plus two named variables, "Time" and "Amount." The anonymized variables are the result of a Principal Component Analysis (PCA) transformation of the original data for confidentiality reasons. The "Time" feature contains the seconds elapsed between each transaction and the first transaction in the dataset. The "Amount" feature is the transaction amount. Regarding the anonymized nature of the features, as stated in Carneiro, Figueira, and Costa (2017), the variables typically collected by financial institutions regarding credit card transactions are similar, since they are regulated by monetary authorities.
Variable "Amount" ranges from €0.1 to €25,691.16, with an average of � x ¼ 88:35 and a standard deviation of s ¼ 250:12. Table 1 provides an overview of the descriptive statistics of this variable. As we observe from Panel A, the data are severely skewed toward the left tail, while this finding is also highlighted in Panel B, since the majority of transactions are under €200. According to the Augmented Dickey-Fuller and the Kwiatkowski-Phillips-Schmidt-Shin tests, the variable is stationary.
Just-add-Data
JAD is a Software-as-a-Service platform that runs on AWS, available at jadbio.com. JAD employs some simple feature transformations and imputation of missing values. For feature selection, it employs the Statistically Equivalent Signatures (Lagani et al. 2017) algorithm (SES for short). A feature selection algorithm ideally returns a subset of the features that is minimal in size and optimally predictive in a multivariate fashion, i.e., when all features are considered jointly. The predictors selected by SES are the neighbors of the outcome in any faithful Bayesian network representing the data distribution, which is a subset of the full Markov Blanket. The latter has been shown to be the optimal solution to the feature selection problem under certain broad conditions (Tsamardinos and Aliferis 2003). A feature of SES is that it heuristically and efficiently attempts to identify statistically equivalent solutions, i.e., minimal-sized feature subsets with the same optimal predictive performance. Identifying all equivalent solutions is important when feature selection is employed for knowledge discovery and for getting insight into the domain under study. Returning an arbitrarily chosen single solution S may mislead the domain expert into thinking that all other variables are either redundant or irrelevant, when they could just be substituting for a selected feature without loss of predictive power.
(Table 1: descriptive statistics of the "Amount" variable; the table itself is not reproduced here.)

Note: * denotes rejection of the null hypothesis at the 5% level of significance. The null hypothesis of the Jarque-Bera test is that the data originate from a normal distribution. The null of the ADF test is that the data are nonstationary, while the null of the KPSS test is that the data are stationary.
For classification, JAD considers Decision Trees (DT), Random Forests (RF), Support Vector Machines (SVM) with full polynomial and Gaussian kernels, and Ridge Logistic Regression. All the algorithms above require the user to set the values of hyper-parameters. Hyper-parameters determine the behavior of an algorithm, typically regulating how sensitive the algorithm is in detecting patterns. The optimal values of the hyper-parameters must be found by trial and error, and results can vary greatly depending on their tuning. Using an Artificial Intelligence (AI) system, JAD automatically decides which algorithms to try and which hyper-parameter values to use, depending on the size of the data, the type of the data, and the user preferences. The AI system is based on a set of rules that guide the fine-tuning process. JAD then generates all combinations of choices, called configurations. A configuration is a pipeline of algorithms with specific hyper-parameters that takes the data and leads to a forecasting model.
To determine which configuration leads to the best model, JAD estimates the performance of the average model produced by each configuration using a (stratified) N-repeated, K-fold cross-validation protocol. The (standard) K-fold cross-validation (CV) protocol splits the data into K non-overlapping, approximately equal-sized sets (called folds) of samples. The value of K to use is determined by the AI system. The procedure progresses by holding each fold out once, training models using all configurations on the remaining K-1 folds and estimating their performance on the held-out fold. The held-out test sets are used to simulate the application of the models on new, never-seen-before samples and to estimate the predictive performance obtained by training a specific configuration. In the end, the K performance estimates computed on each fold, as well as their average, are available for each configuration. The configuration with the best average performance is selected as the winning configuration. For details on the repetition and stratification of CV, see Tsamardinos, Greasidou, and Borboudakis (2018). To produce the final model, JAD applies the winning configuration on the full dataset, the reasoning being that we expect the model learnt on all the data to be best on average.

Unfortunately, the cross-validated performance estimate of the winning configuration is optimistically biased and should not be reported as the final estimate, because numerous configurations have been tried. This is a statistical phenomenon conceptually equivalent to the adjustment of p-values in multiple hypothesis testing and related to the "winner's curse" in biostatistics (Zollner and Pritchard 2007); in computer science it is called the Multiple Comparisons in Induction problem (Jensen and Cohen, 2000). JAD estimates the bias of the performance and the confidence intervals using a bootstrap-based method called Bootstrap Bias Corrected CV (BBC-CV) and removes the bias to return the final performance estimate, adjusting the estimate for the multiple tries of algorithms/configurations. The selection of the optimal forecasting model is performed based on the Area Under the Receiver Operating Characteristic (AUC-ROC) curve, which explores the trade-offs between sensitivity and specificity of the model and selects the most cost-effective operational point.
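As an illustration of this selection protocol, here is a minimal sketch with hypothetical configurations (JAD's actual algorithm pool, AI rules, and BBC-CV bias correction are not reproduced here):

from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Hypothetical configurations: algorithm plus hyper-parameter choices.
configurations = {
    "ridge_logistic_C0.01": LogisticRegression(penalty="l2", C=0.01, max_iter=1000),
    "ridge_logistic_C1": LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
    "random_forest_100": RandomForestClassifier(n_estimators=100, random_state=0),
}

# X_train, y_train: a training split of the data (e.g., the stratified 90/10
# split described in the Empirical Findings section below).
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=0)
scores = {
    name: cross_val_score(model, X_train, y_train, cv=cv, scoring="roc_auc").mean()
    for name, model in configurations.items()
}
winner = max(scores, key=scores.get)
final_model = configurations[winner].fit(X_train, y_train)  # refit on all training data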
In addition to performance estimates, JAD provides several plots to help the user understand and interpret the results. The first is a Supervised 2D PCA plot, i.e., a 2D PCA plot based on the selected features (hence the characterization "supervised"). The goal is to visually understand the data and detect anomalies (outliers) in the dataset. The Individual Conditional Expectation (ICE) plot displays how each instance's prediction changes when a feature changes, in an effort to explain the role of each feature in the prediction output of the model. The Cumulative Variable Importance plot aims to explain the added value of each feature to the final forecast. Nevertheless, we do not provide extensive analysis of the feature selection abilities of JAD, given that the 28 anonymized variables of the credit-card transaction dataset come from a PCA compression of the original financial variables. Moreover, there is no information regarding the order of the variables; we do not know that the first variable is actually the first component of the PCA analysis, the second variable the second component, etc. Thus, we do not present other post-analysis information, given that we cannot actually support evidence of the importance of an actual financial variable in forecasting.
Empirical Findings
In order to assess the ability of the JAD application to train and forecast credit card fraud on unknown data, we split our sample into two parts using stratified sampling: 90% of the data are used to train the models and 10% is kept aside and used only to test the forecasting ability of the trained model on unknown data. Thus, we use 256,552 observations for training, of which 446 correspond to credit card fraud, and we leave 28,255 observations for testing (46 of which are credit card fraud cases). Fraudulent transactions are labeled Class 1 and the rest are labeled Class 0.
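A sketch of this stratified split (assuming the dataset is loaded as a pandas DataFrame with a "Class" column; the file name is an assumption):

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("creditcard.csv")           # Dal Pozzolo et al. (2014) dataset
X, y = df.drop(columns=["Class"]), df["Class"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=0
)
print(y_train.sum(), y_test.sum())           # fraud counts in each split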
Overall, it took 8 hours and 40 minutes for JAD to train 415 models and test alternative configurations on different subsets of the training data. The best overall configuration in terms of maximizing the Area Under the Curve on the training dataset is: a) selecting features using the SES algorithm with hyper-parameters maxK-conditioning-set = 2 and significance level alpha = 0.1, and b) fitting (learning) a ridge logistic regression model with penalty hyper-parameter lambda = 100. In step a), JAD selected 7 out of the total 30 explanatory variables (features) in our sample as the ones required for optimal credit card fraud detection.
The model with the highest predictive performance is the ridge logistic regression

(1) P(y_i = 1 | X) = 1 / (1 + exp(−(β_0 + Σ_j β_j x_i,j))), j = 1, …, 7,

where P(y_i = 1 | X) is the probability of transaction i belonging to Class 1 (fraudulent) and x_i,j is the ith observation of selected feature j. The predictive performance is measured using several metrics reported in Table 2.
The simplest of these metrics is classification accuracy, which equals the probability of the model making a correct classification of a new transaction. As we observe from Table 2, the overall classification accuracy of the best performing model is 99.9%. Nevertheless, this metric is not suitable for measuring predictive performance in heavily unbalanced datasets: one can achieve over 99.8% accuracy simply by classifying all transactions as Class 0, since Class 1 (fraudulent cases) accounts for only 0.172% of all observations. Thus, classification accuracy is a metric that is affected by the class distribution.

A better metric, typically used for binary classification, is the area under the ROC curve (AUC). The AUC is independent of the class distribution. It is also invariant to a change in the class distribution between the train and test sets; in other words, it will not be affected if the percentage of fraudulent transactions increases in the test data (provided this is the only change in the data distribution). Nonetheless, as mentioned above, we used stratified sampling so that our test and training distributions remain consistent. The AUC also has another, statistically intuitive interpretation: it is the probability that the model will correctly assign a higher probability of being fraudulent to the fraudulent member of a pair of transactions, given that one is fraudulent and the other legitimate. In our case, the AUC is 0.973, suggesting a high ability to separate legitimate from fraudulent transactions.

The model estimates the probability that a new transaction is fraudulent, i.e., P(y = 1|x), given the values x of the seven features selected in the training step. To classify a new observation, one uses a threshold t, such that, if the probability is higher than t, the transaction is classified as fraudulent. Depending on t, one can become more or less conservative in classifying any transaction as fraudulent and achieve various values of sensitivity (percentage of fraudulent transactions correctly classified), specificity (percentage of non-fraudulent transactions correctly classified), true positive rate (which equals sensitivity), false-positive rate (which equals 1 - specificity), precision, and recall. The ROC curve depicts all the potential trade-offs between the true positive rate and the false positive rate (false alarms). Typically, to increase the true positive rate we must accept an increase in the false positive rate as well; the rate of this trade-off is described by the slope of the ROC. The ROC created by JAD for this problem is shown in Figure 1.
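A short sketch of this threshold trade-off, reusing final_model and the held-out split from the earlier sketches (picking the threshold by Youden's J statistic, one common choice; JAD's own threshold-selection rule may differ):

from sklearn.metrics import roc_auc_score, roc_curve

probs = final_model.predict_proba(X_test)[:, 1]   # P(fraud) for each test transaction
print("AUC:", roc_auc_score(y_test, probs))

fpr, tpr, thresholds = roc_curve(y_test, probs)
best = (tpr - fpr).argmax()                       # maximize TPR - FPR (Youden's J)
t = thresholds[best]
predicted_fraud = probs >= t
print(f"threshold={t:.4f}, sensitivity={tpr[best]:.3f}, specificity={1 - fpr[best]:.3f}")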
The evaluation of a fraud detection model is more complex than simply identifying the model with the top predictive performance; the model should also aim at the most cost-effective classification, as defined by a) the cost of misclassifying a fraudulent transaction as legitimate (a false negative), b) the cost of false positives, and c) the ratio of prevalence between positives and negatives. JAD can produce models that operate on any threshold and achieve several sensitivity-specificity trade-offs.
The metrics shown in Table 2 are calculated with a threshold of 0.0481, selected from the ROC curve during the training phase as the threshold that maximizes the true positive rate while minimizing the false-positive rate for Class 1. Balanced accuracy refers to the average of the proportions correct of each class individually, to account for the seriously imbalanced nature of the dataset.
As we observe from Table 2, in terms of detecting fraudulent transactions (sensitivity for Class 1) our classifier achieves 78% on the training sample and 85% on the test sample, while the identification of legitimate transactions (specificity) reaches 100% in both cases. To visualize our results, Table 3 reports the confusion matrices of the train and test samples.
The best model identified by JAD correctly identified 39 out of the 46 fraudulent transactions (84.78%), missing only 7 transactions (15.22%) and producing 10 false positives. Thus, out of the 49 credit card transactions that would be flagged for manual inspection, only 10 cases would be false alarms. Given that we are provided with the exact amount of each transaction, we can study the behavior of the model on each of the observed instances in Table 3. The descriptive statistics are reported in Table 4.
The economic valuation of credit card fraud detection by JAD is very interesting. The ridge regression model correctly identified 39 fraudulent transactions, saving 7,535.24€ for the financial institution, while it missed 7 transactions with a total cost of 477.64€. Most of the missed instances are small transactions below 50€ (39.90€, 11.39€, 3.39€, and the rest below 1€), while only two transactions (311.91€ and 108.51€) exceed 100€. The false-alarm transactions are all below 1€ except one transaction of 89.90€. Thus, JAD exhibited the ability to efficiently detect all financially significant fraudulent transactions (above 500€) and to minimize both the financial fraud cost and the administrative cost of manual inspection.

Comparing our findings with previous studies on the same dataset, we observe that our AutoML JAD setup exhibits similar or higher fraud detection ability, while its AI interface simplifies the variable selection and fine-tuning procedures required by other applications. More specifically, Dal Pozzolo et al. (2015) were the first to use the dataset of our study; the authors train Logit Boost, Random Forest and Support Vector Machine (SVM) classifiers to forecast credit card fraud based on an undersampling scheme. Awoyemi et al. (2017) train a Logistic Regression, a Naïve Bayes and a K-Nearest Neighbors classifier to forecast credit card fraud using the same dataset, but without feature selection. Fiore et al. (2019) use the same dataset to produce artificial fraudulent transactions using a Deep Learning Artificial Neural Network (DLANN) in order to balance the dataset; the artificial data are merged with the original dataset and a new DLANN is trained on the balanced dataset, keeping the last 30% of observations for model evaluation (out-of-sample forecasting). Their application requires tuning two DLANN models, which is a computationally intense and time-consuming procedure, requires expert knowledge, and is prone to handling errors. The comparative results pertaining to fraudulent transactions (Class 1) in out-of-sample forecasting are reported in Table 5. Overall, our AutoML approach simplifies training and testing even in such an imbalanced sample, produces a battery of useful forecasting performance metrics, and achieves a similar or superior detection rate to those reported in the literature.
Limitations
In terms of limitations, the current version of JAD does not automatically detect data distribution drift, perform automated data cleaning, or raise alarms when the model seems to be invalidated on new samples, and in general it lacks functionality for automatic model maintenance. In addition, it requires formatting the data as a 2-dimensional matrix. In practice, however, credit card data are originally stored in relational databases and require extensive data engineering for feature extraction and construction, a step that is not automated. A limitation of the specific study stems from the fact that the features have been linearly transformed using PCA on the original measured quantities, which precludes the economic and financial interpretation of the selected features. Further experimentation with more financial datasets is necessary to further generalize the conclusions of the study.
Conclusion
In this paper we applied an AutoML SaaS platform, namely JAD, to credit card fraud detection on a dataset of 284,807 online transactions. JAD automatically performs imputation, feature selection, modeling and fine-tuning of the hyper-parameters of a significantly large number of models, and estimates predictive performance and confidence intervals. The automated nature of the application provides model training and model selection in a manner that shields against methodological errors and is accessible to expert and non-expert users alike. Moreover, the user-friendly interface makes retraining the model effortless and the model update straightforward. The gains in generality and applicability do not come at the expense of forecasting performance, given that our approach has matched or surpassed existing applications on the same dataset.
Notes
1. JAD Bio has been developed specifically for low-sample, high-dimensional, molecular biology data; however, its algorithms are general enough to provide high-quality results in this application without any further customization for enterprise data.
"Computer Science",
"Business"
] |
Medical Image Fusion : A Brief Introduction
Digital images are an extremely powerful and widely used medium of communication. They are able to represent very intricate details about the world that surrounds us in an easy, compact and readily available manner. Due to advances in acquisition devices such as bio-sensors and remote sensors, a huge amount of data is accessible for further processing and information extraction. The need to efficiently process this immense amount of information has given rise to the emergence of popular disciplines like image processing, image analysis, computer vision and image fusion. This article gives a brief insight into the basic understanding and significance of image fusion.
Image processing can simply be described as performing mathematical operations on image pixels to obtain an enhanced image with better visual quality and to extract useful information. Image fusion is an efficient way of retrieving information from multiple sources into one image. The combined information enables the visual perception of a more comprehensive image.
The complementary images obtained from multiple sensors, multiple foci or multiple views are fused together to generate a new image which cannot be given by an individual image or data set. The concept of image fusion finds applications in various fields 1. For instance, in remote sensing, varied types of data are acquired via different sensors to obtain a fused image, for example one with both higher spatial and higher spectral resolution. Other areas of application of image fusion are surveillance, biometrics, defence applications and medical imaging 2. Numerous applications of image fusion have been found in the field of medical imaging. The medical images obtained from different sensors are fused together to enhance the diagnostic quality of the image modality. With the advancements in the field of medical science and technology, medical imaging is able to provide various modes of imagery information 3.
Different medical images have specific characteristics that require simultaneous monitoring for clinical diagnosis. Hence, multimodality image fusion is performed to combine the attributes of various image sensors into a single image.
Image Fusion
In the process of image fusion, the discrete information carried by each pixel of an image at a specific instant of time, state, position, or circumstance is mapped onto the corresponding information from each pixel of another image of the same object at a different instant or state 4. The varied design and construction of each type of optical sensor place limitations on the type of information it can acquire. The process of image fusion therefore produces a composite image that supplies the data not made available by any individual optical sensor. The information to be fused may come from a single source at different intervals of time or from multiple sensors over a common time slot.
The revolutionary advancement in the design of innovative image fusion tools has been sustained by various signal processing techniques and analysis methods, which include spatial filters, artificial intelligence and machine learning techniques and, most importantly, multiscale transforms 5,6. After the image features or coefficients are decomposed using a suitable transform method, they are combined using appropriate fusion rules, e.g. the pixel-level averaging rule, the weighted averaging rule and the min-max rule. With the help of these image integration tools the image pixels are combined into a highly representational format. The block diagram of a pixel-level image fusion process employing the wavelet transform and the pixel-level averaging fusion rule is illustrated in Fig. 1.1 to give a basic insight into an image fusion system. From Fig. 1.1 it can be understood that the basic methodology of image fusion in the transform domain consists of:
• mapping of the image pixel intensities of the source images into coefficients in the transform domain;
• combining the transformed coefficients with some fusion rule, for instance the pixel-level averaging method;
• inverse transformation of the fused coefficients to generate the final fused image.
A sketch of these three steps in code is given below.
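The three steps above can be sketched in a few lines of Python. This is a minimal illustration, assuming the NumPy and PyWavelets packages and two co-registered, equally sized grayscale images; the function and variable names are ours, not taken from any specific fusion toolbox.

```python
# A minimal sketch of transform-domain pixel-level fusion, assuming two
# co-registered, equally sized grayscale images img_a and img_b (NumPy arrays).
import numpy as np
import pywt

def fuse_wavelet_average(img_a, img_b, wavelet="db2"):
    # 1. Map pixel intensities into transform-domain coefficients.
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    # 2. Combine coefficients with the pixel-level averaging rule.
    fused = (
        (cA_a + cA_b) / 2.0,
        ((cH_a + cH_b) / 2.0, (cV_a + cV_b) / 2.0, (cD_a + cD_b) / 2.0),
    )
    # 3. Inverse transform to generate the final fused image.
    return pywt.idwt2(fused, wavelet)
```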
Types of Image Fusion
Image fusion can be categorised according to the type of source data to be fused, the type of image sensors employed, and the purpose of the fusion.
a) Multi-view Fusion: the fusion of a single type of modality taken at the same time but under different conditions and from different angles, aimed at providing information from supplementary views. For instance, foreground and background views of a scene can be fused to obtain a single multi-view image 7.
b) Multi-modal Fusion: images captured via different sensors are fused together to integrate the information present in complementary images 8.
c) Multi-temporal Fusion: images of the same view or modality are taken at different times and fused together in order to detect significant changes with respect to time 9.
d) Multi-focus Fusion: images of the same scene taken at the same time with different areas or objects in focus are fused to gather all the information in a single image. For instance, an image of an object with one part in focus is fused with an image containing other parts of the object in focus 10.
Classification of Image Fusion
In the literature, image fusion has been carried out at different levels: pixel level, feature level, and decision level.
Pixel level fusion: In pixel-level image fusion, each pixel in the fused image acquires a value based on the pixel values of the source images. It is the most basic type of image fusion, performed at the signal level.
Numerous methods for pixel-level fusion have been reported in the literature. This type of image fusion can be carried out in the spatial domain as well as using various transforms. In the spatial domain it is performed by linearly or non-linearly merging the pixel intensity values of the respective images; one of the simplest examples is averaging the pixel intensity values. The physical parameter, i.e. the pixel intensity value in each of the source images, is combined using an appropriate fusion rule to generate a new fused image with a different set of pixel intensity values 11. Pixel-level image fusion can be achieved with various combinations of multiscale decomposition methods, as discussed in a later section of this article. A simple illustration of pixel-level image fusion can be seen in Fig. 1.2.
The figure shows the step-wise methodology of pixel-level image fusion, depicting the fusion of co-registered source images.
The quality of the fused image is assessed using various evaluation parameters, e.g. the Q^{AB/F} factor, mutual information, and entropy. A detailed discussion of these objective evaluation metrics is given in a later section of this article.
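As a small illustration of one of these metrics, the sketch below computes the Shannon entropy of an 8-bit image. It is a minimal NumPy-only version for illustration, not the exact evaluation code used in the literature cited above.

```python
# Shannon entropy of an 8-bit image; higher entropy loosely indicates more
# information content. The image is assumed to be a uint8 NumPy array.
import numpy as np

def image_entropy(img):
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()              # empirical grey-level probabilities
    p = p[p > 0]                       # ignore empty bins (0 * log 0 = 0)
    return -np.sum(p * np.log2(p))     # entropy in bits per pixel
```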
Feature level fusion: This is performed at the feature level and involves the extraction of detailed information, or features, from the images. Feature-level fusion is a higher-level processing of the information that generates data with a higher amount of information. In this type of fusion the extracted information is integrated using more complex techniques. One of the simplest examples of feature-level image fusion is region-based fusion: regions of interest are extracted and then fused to obtain the fusion of specific features of the images 12. The process of feature-level image fusion can be understood from Fig. 1.3.
Decision level fusion: This merges the information at a higher level of generalization, combining the results from a set of algorithms to reach a final fused decision. Input images are processed one by one to extract information, as shown in Fig. 1.4, which is then fused by applying decision rules to reinforce a common interpretation. In decision-level fusion, locally classified data from several source images are combined to determine the final decision. Initially, each of the source images is pre-processed for information extraction. Then decision rules with varied degrees of confidence are employed to combine the extracted information and grasp a superior perception of the object under observation. Decision-level image fusion finds applications in varied fields such as fingerprint verification, biometrics, and several other remote sensing applications 13.
Medical Image Fusion
Image fusion is an important branch of image processing that is being extensively worked upon by researchers. However, an image is a special form of signal with its own complexity, diversity, and unique behaviour in the following respects: (i) image registration is a prerequisite of image fusion, since co-registered images greatly improve the quality of the fused image; (ii) in multi-sensor image fusion the sensor resolutions differ, so pre-processing of the images is a necessary step in pixel-level fusion; (iii) the process of image fusion depends largely on the correlation among the pixels of the source images.
The fusion of images with a low degree of spatial correlation, or of dissimilar images (as in multi-sensor image fusion), is quite challenging. Moreover, different fusion rules yield different results on different data sources. Devising an application-specific image fusion algorithm for a particular data source is therefore a demanding task.
In the context of medical imaging, the combined analysis of various medical modalities has made image fusion one of the readily employed medical diagnostic tools. In the parallel analysis of medical imaging modalities, medical practitioners have to switch glances between images; image fusion facilitates the joint analysis of the modalities, shortening the time between diagnosis and patient treatment. Hence image integration is an efficient method of collecting information from various types of sensors, which helps in reaching an informed decision. The integration of different medical image modalities acts as an efficient diagnostic tool by providing the complementary information in a single image [14-20].
For integrated visualization of abdomen- and stomach-related problems, CT and ECT (Emission Computed Tomography) images are fused together. CT and MRI images are combined for skull-base tumor surgery. To assess arterial and vascular blood flow, MRI and ultrasound images are often fused together. Besides this, the fusion of various types of imaging modalities helps in the precise and accurate determination of cancer stages and of the response of tumors to ionizing radiation. These are the most extensively studied multi-modality imaging systems 21.
Another major field of application of image fusion is bone-vessel image fusion, carried out with the help of Digital Subtraction Angiography (DSA). DSA images are obtained through fluoroscopic X-ray imaging: a mask (pre-contrast) image and a contrast (post-injection of contrast material) image are acquired, and digital subtraction separates the osseous and vascular information. The DSA image, dominant in vascular information, is then fused with the mask image carrying the osseous, i.e. bone, information. In this respect, the domain of image fusion is still emerging as an important field of research and development.
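The subtraction step can be sketched schematically as follows. This is an idealized illustration that assumes co-registered mask and contrast frames on a common intensity scale; a real DSA pipeline also applies logarithmic subtraction and noise filtering.

```python
# A schematic sketch of digital subtraction angiography (DSA), assuming a
# pre-contrast "mask" frame and a post-injection "contrast" frame that are
# already co-registered arrays on a common intensity scale.
import numpy as np

def dsa_image(mask, contrast):
    # Subtracting the mask suppresses the static osseous (bone) background,
    # leaving the vessel signal introduced by the contrast agent.
    diff = contrast.astype(float) - mask.astype(float)
    # Rescale to [0, 1] for display.
    return (diff - diff.min()) / (np.ptp(diff) + 1e-12)
```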
DSA finds an extended range of applications in determining the presence and extent of vascular abnormalities, aneurysms, occlusions, ulcerated plaques, and stenoses. It helps a great deal in planning various surgical procedures and has thus served as an important paradigm for speedy patient care and treatment. Hence it is an excellent potential field for research conducted at a higher level of precision and accuracy.
Conclusion
Highly reputed medical institutes in India, namely AIIMS Delhi, PGIMER Chandigarh, and various other private and government medical colleges, use highly expensive medical image fusion tools that are imported from outside India. These tools are increasingly unaffordable for small-scale clinics and hospitals, specifically in rural areas. There is therefore an immense need for a low-cost indigenous tool that can substitute for this highly expensive equipment; the design of such computer-aided tools could make this technology accessible to remote areas of the nation at affordable prices. The fusion of osseous and vascular imagery data is difficult owing to the dissimilar nature of the images: most of the algorithms available in the literature are designed for nearly similar images, whereas the DSA and mask images possess a high degree of dissimilarity. Designing an efficient fusion rule for this type of source data is therefore a strong motivation, and at the same time such fusion rules should also give comparable results for the fusion of similar source images. This manuscript aimed at equipping readers with a general understanding of image fusion and its relevance in the present-day scenario.
Published by Oriental Scientific Publishing Company © 2018. This is an Open Access article licensed under a Creative Commons license: Attribution 4.0 International (CC-BY).
| 2,900.8 | 2018-09-15T00:00:00.000 | ["Medicine", "Computer Science"] |
Elongator Protein 3 (Elp3) Lysine Acetyltransferase Is a Tail-anchored Mitochondrial Protein in Toxoplasma gondii *
Background: Protein acetylation is prevalent in mitochondria, yet acetyltransferases mediating this activity are unknown. Results: Toxoplasma Elongator protein 3 (Elp3) possesses a unique C-terminal transmembrane domain necessary and sufficient to target it to the mitochondria. Conclusion: Elp3 is an essential tail-anchored mitochondrial acetyltransferase in Toxoplasma. Significance: Elp3 has conserved functions involving mitochondria that may predate its established role in transcription. Lysine acetylation has recently emerged as an important, widespread post-translational modification occurring on proteins that reside in multiple cellular compartments, including the mitochondria. However, no lysine acetyltransferase (KAT) has been definitively localized to this organelle to date. Here we describe the identification of an unusual homologue of Elp3 in early-branching protozoa in the phylum Apicomplexa. Elp3 is the catalytic subunit of the well-conserved transcription Elongator complex; however, Apicomplexa lack all other Elongator subunits, suggesting that the Elp3 in these organisms plays a role independent of transcription. Surprisingly, Elp3 in the parasites of this phylum, including Toxoplasma gondii (TgElp3), possesses a unique C-terminal transmembrane domain (TMD) that localizes the protein to the mitochondrion. As TgElp3 is devoid of known mitochondrial targeting signals, we used selective permeabilization studies to reveal that this KAT is oriented with its catalytic components facing the cytosol and its C-terminal TMD inserted into the outer mitochondrial membrane, consistent with a tail-anchored membrane protein. Elp3 trafficking to mitochondria is not exclusive to Toxoplasma as we also present evidence that a form of Elp3 localizes to these organelles in mammalian cells, supporting the idea that Elp3 performs novel functions across eukaryotes that are independent of transcriptional elongation. Importantly, we also present genetic studies that suggest TgElp3 is essential in Toxoplasma and must be positioned at the mitochondrial surface for parasite viability.
The functional diversity of a cell's proteome is greatly expanded through post-translational modification (PTM). Covalent addition of specific chemical moieties to a protein can have dramatic influences on its regulation and function, providing the cell with tremendous flexibility in responding to stimuli. Lysine acetylation has emerged as a prominent regulator of multiple cellular processes including transcription, metabolism, cell cycle, and apoptosis (1-4). Addition or removal of an acetyl group on lysine can affect protein localization, interactions, enzymatic activity, and stability. So-called "acetylomes" have revealed thousands of acetylated proteins in bacterial, plant, protozoan, and metazoan species, suggesting that acetylation may rival phosphorylation with regard to its universality and regulatory power (5). The current challenge is to identify the enzymes responsible for acetylating proteins involved in various pathways and to determine the role of this PTM in regulating those pathways.
Lysine acetyltransferases (KATs) catalyze the transfer of an acetyl group from acetyl-CoA to the epsilon-amino group of a lysine residue. KATs have been extensively studied in the context of histone acetylation; however, the discovery of proteome-wide acetylation highlights the importance of investigating KATs beyond their roles in gene regulation. Extensive protein acetylation is found on non-histone proteins in the nucleus as well as the cytoplasm, and a number of KATs are present in these compartments (3, 4, 6). Abundant acetylation is also detected on mitochondrial proteins. While lysine deacetylases, such as SIRT3, have been found to reside in the mitochondria, the KAT mediating acetylation of mitochondrial proteins has remained elusive (7, 8).
We recently performed acetylomic analyses of the human and veterinary pathogen Toxoplasma gondii (9,10). Toxoplasma is an obligate, intracellular protozoan in the phylum Apicomplexa and causes life-threatening opportunistic infection in immunocompromised individuals (11). The parasite can also be transmitted vertically to the fetus of a woman infected for the first time during pregnancy, resulting in spontaneous abortion or birth defects. As seen for higher eukaryotic cells, the Toxoplasma acetylome contains proteins residing in all cellular compartments, including the single parasite mitochondrion. Lysine acetylation is critical for parasite replication and survival, and enzymes modulating lysine acetylation have been validated as drug targets (12)(13)(14). We have characterized a number of KATs in the parasite, noting that TgGCN5 family KATs are exclusively nuclear and TgMYST family KATs are predominantly cytosolic (15)(16)(17).
Here we report the identification of the first KAT found on the mitochondrial surface in a eukaryotic cell. Surprisingly, this KAT is the Toxoplasma homologue of Elp3 (TgElp3), which is the catalytic component of the transcription Elongator complex in other eukaryotes. We validated this unexpected subcellular localization pattern for TgElp3 using multiple independent approaches and determined that the mechanism of trafficking to the outer mitochondrial membrane (OMM) involves a unique C-terminal transmembrane domain (TMD) present only on Elp3 homologues of apicomplexan protozoa and other select chromalveolates. We also show that Elp3 is targeted to mitochondria in mammalian cells, suggesting that Elp3 performs novel functions across eukaryotes. Additionally, we present evidence suggesting that TgElp3 is essential for parasite viability and must be positioned at the mitochondrial surface.
EXPERIMENTAL PROCEDURES
Parasite Culture and Transfection-Toxoplasma parasites were maintained by passage in human foreskin fibroblasts (HFFs) with DMEM supplemented with 1% heat-inactivated FBS at 37°C and 5% CO2 in a humidified incubator. For transfection of plasmids, 2 × 10^7 freshly lysed, purified tachyzoites of either the RHΔhx or RHΔku80Δhx strain were electroporated as previously described (18) with 50 µg of linearized plasmid. Twenty-four hours after electroporation, drug selection was applied and continued for three passages. Parasites transfected with a construct containing mutated dihydrofolate reductase-thymidylate synthase (DHFR-TS) were treated with 1.0 µM pyrimethamine, and those transfected with a construct containing hypoxanthine-xanthine-guanine phosphoribosyl transferase (HXGPRT) were treated with 25 µg/ml mycophenolic acid and 50 µg/ml xanthine. Parasites were cloned by limiting dilution in 96-well plates under continued drug selection (18).
Cloning and Tagging TgElp3-The TgElp3 (TGGT1_041990) coding sequence was amplified from RH strain cDNA using primers F1 and R1 (Table S1) and cloned into the ZeroBlunt TOPO vector (Invitrogen) for sequencing. The GeneRacer™ kit (Invitrogen) was used to amplify the 5′- and 3′-UTRs, with primer R2 and nested primer R3 for the 5′-UTR and primer F2 and nested primer F3 for the 3′-UTR. ClustalW (BioEdit software) was used to align amino acid sequences.
The In-Fusion HD cloning kit (Clontech) was used to clone fragments into Toxoplasma expression vectors for all constructs; primer design and cloning were conducted according to the manufacturer's protocol. Phusion High Fidelity DNA Polymerase (Thermo Scientific) was used for all PCRs, unless stated otherwise. To stably express ectopic HA TgElp3 in parasites, the open reading frame was amplified from RHΔhx cDNA with primers F4 and R4 and inserted into a Toxoplasma expression vector at the BglII restriction site. This expression vector uses the Toxoplasma tubulin promoter and contains an HXGPRT selection marker (pHXGPRT:tub) (19). TgElp3 HA was made in the same manner with primers F5 and R5. HA TgElp3ΔTMD was amplified with primers F4 and R6 and inserted downstream of the tubulin promoter in the pHXGPRT:tub expression vector at restriction sites BglII and EcoRV. All three constructs were transfected into RHΔhx parasites.
YFP TgElp3 Fusion Proteins-YFP was amplified from pYFP-LIC-HXG (kindly provided by Dr. Vern Carruthers) with primers F6 and R7 and inserted downstream of the tubulin promoter in pHXGPRT:tub using the BglII and AvrII restriction sites. The resulting pHXGPRT:tub-YFP construct contained an NdeI site just upstream and an EcoRV site just downstream of YFP to allow for installment of designated TgElp3 fragments. DNA encoding the N-terminal region of TgElp3 (amino acids 1-273) was amplified with primers F7 and R8 and fused to the N terminus of YFP using the NdeI restriction site. The C-terminal region (amino acids 726-984), amplified by primers F8 and R9, was fused to the C terminus of YFP using the EcoRV restriction site. Removal of the TgElp3 TMD was achieved by amplifying the C-terminal region with primers F8 and R6 and inserting the product downstream of YFP using the EcoRV site. YFP fusion constructs were transiently transfected into RHΔhx parasites.
ΔTgElp3 and ddHA TgElp3 Constructs-A TgElp3 knock-out construct was designed to facilitate double homologous recombination to replace the endogenous locus with the pyrimethamine-resistant DHFR-TS selectable marker. The 5′ genomic fragment encompassing a portion of the TgElp3 5′-UTR and first exon was amplified from RHΔku80Δhx DNA with primers F9 and R10, and inserted into the pDHFR-TS cassette (20) using restriction sites NotI and SpeI. The 3′ genomic fragment, consisting of a portion of the last exon of TgElp3 and 595 bp downstream of the stop codon, was amplified with primers F10 and R11 and inserted into this plasmid using restriction sites HindIII and ApaI. The construct was linearized with NotI and transfected into RHΔku80Δhx parasites followed by selection with pyrimethamine as described above. Screening primers are shown in Fig. 7A and listed in supplemental Table S1.
Plasmid Construction to Test Requirement of TgElp3 TMD-A plasmid was generated to facilitate double homologous recombination to remove the DNA encoding the TMD from the genomic locus of TgElp3 (ΔTMD). In parallel, we made a similar construct that contained the wild-type sequence (WT TMD) to use as a control for recombination efficiency. The ~1,700 bp 5′ genomic fragment of each construct contained a portion of the last intron, the entire last exon with or without the TMD (encoding amino acids 958-980), and the 3′-UTR (Fig. 8A). The 3′ genomic fragment consisted of 1,460 bp downstream of the 3′-UTR. The WT TMD 5′ fragment was amplified from RHΔku80Δhx genomic DNA with primers F15 and R16. The ΔTMD 5′ fragment was amplified using a series of PCRs to piece together sequences upstream and downstream of the TMD. PCR-1 and PCR-2 used RHΔku80Δhx genomic DNA with primers F15+R17 and F16+R16, respectively. PCR-3 used template from PCR-1 and primers F15 and R18. PCR-4 combined the pieces by using template from PCR-2 and PCR-3 with primers F15 and R16. The 3′ fragment used for both the WT TMD and ΔTMD constructs was amplified from RHΔku80Δhx genomic DNA using primers F17 and R19. The 5′ and 3′ fragments were inserted into the pDHFR-TS pyrimethamine-resistance cassette using the restriction sites XbaI and HindIII, respectively. WT TMD and ΔTMD constructs were transfected into RHΔku80Δhx parasites and 24 clones from each parasite line were screened by PCR as described above using the primers shown in Fig. 8A.
Western Blotting-Toxoplasma lysates were prepared from freshly lysed, filter-purified parasites. Parasites were lysed in RIPA buffer followed by sonication, and the DC Protein Assay (Bio-Rad) was used to quantify protein. The Novex NuPAGE SDS-PAGE gel system (Invitrogen) was used for protein separation and transfer to nitrocellulose membranes. Membranes were blocked in 4% nonfat milk followed by incubation with primary and secondary antibodies diluted in blocking buffer. Primary and secondary antibodies included 1:1,000 rat anti-HA (Roche cat. 11867423001) and 1:2,000 anti-rat conjugated to HRP (GE Healthcare cat. NA935), respectively. Proteins were detected by chemiluminescence using the FluorChem E Imager and AlphaView software (ProteinSimple). Mouse brain mitochondrial and cytosolic lysates were prepared from FVB/NJ mice and kindly provided by Dr. Nickolay Brustovetsky (IUSM). The mitochondrial fraction was prepared as previously described (21), and the cytosolic fraction was prepared by centrifuging brain homogenate at 12,000 × g for 10 min followed by centrifugation of the supernatant at 100,000 × g for 30 min. The resulting supernatant was used as the cytosolic fraction. Western blotting was done following SDS-PAGE separation of 50 µg of each lysate. Primary antibodies included 1:1,000 rabbit anti-GAPDH (Cell Signaling cat. 2118), 1:1,000 rabbit anti-COX IV (Cell Signaling cat. 4850), and 1:1,000 rabbit anti-human Elp3 (Active Motif cat. 39949).
The secondary antibody was anti-rabbit conjugated to HRP (GE Healthcare cat. NA934).
Lysine Acetyltransferase Assay-Parasites expressing HA TgElp3 and the parental line RHΔhx were harvested and lysed by the same method described above, resulting in a total protein concentration of 1-2 mg each. Lysates were immunoprecipitated with 50 µl of anti-HA conjugated beads (Roche cat. 11815016001). Beads were used directly in a 30 µl reaction including KAT assay buffer (250 mM Tris-HCl pH 8.0, 25% glycerol, 0.5 mM EDTA, 250 mM KCl, 5 mM DTT, 5 mM PMSF, and 50 mM sodium butyrate), 2 µg histone H3 (Millipore cat. 14-494), and 2 mM acetyl-CoA (Sigma cat. A2056). The negative control included all reagents except beads. Reactions were incubated at 30°C for 1 h followed by the addition of 1× LDS NuPAGE loading buffer supplemented with 5% β-mercaptoethanol and heated at 95°C for 10 min. Western blotting was performed and membranes were probed with 1:1,000 anti-AcH3 (Active Motif cat. 39139) to detect acetylated H3 followed by 1:2,000 anti-rabbit conjugated to HRP (GE Healthcare cat. NA934). The KAT assay was repeated for a total of three independent experiments.
Immunofluorescence and Immunoelectron Microscopy-Immunofluorescence assays (IFAs) were performed by inoculating parasites onto confluent HFF cell monolayers grown on coverslips. Infected HFF monolayers were fixed 16-24 h postinfection with 3% paraformaldehyde/PBS for 15 min, quenched with 0.1 M glycine/PBS for 5 min, and permeabilized with 0.2% Triton X-100/3% BSA for 10 min. Samples were blocked with 3% BSA and incubated with one or more of the following primary antibodies in 3% BSA overnight at 4°C: 1:1,000 rat anti-HA (Roche cat. 11867423001) and 1:2,000 mouse anti-TgF1β ATPase (22). Coverslips were then incubated with secondary antibodies conjugated to Alexa Fluor 488 and Alexa Fluor 594 (Invitrogen) and mounted on slides with VectaShield Mounting Medium containing DAPI (Vector Laboratories). To examine the orientation of TgElp3 in the mitochondrial outer membrane, IFAs were conducted using digitonin to selectively permeabilize membranes. IFAs were performed in the same manner as described above with the exception that samples were incubated for 5 min in 0, 0.004, or 0.1% digitonin to permeabilize cell membranes. In addition to the aforementioned antibodies, 1:10,000 rabbit anti-TgIF2α (23) was used as a cytoplasmic marker.
Immunoelectron microscopy processing and analysis were conducted by Wandy Beatty at Washington University, St. Louis. Clonal parasites stably expressing ectopic HA TgElp3 or TgElp3 HA, along with the parental line (RHΔhx), were grown overnight in a T-25 cm² flask containing confluent HFF monolayers. The infected monolayer was washed with PBS, scraped, and centrifuged for 10 min at 3,000 × g. Cells were resuspended in fixative containing 4% paraformaldehyde and 0.05% glutaraldehyde in PBS and incubated on ice for 1 h. Samples were then cryoprocessed and sectioned; 50-nm thick sections were used for immunolabeling. Sections were blocked with 5% FCS/5% goat serum in PIPES buffer for 20 min at room temperature. Primary antibody, rat anti-HA (Roche cat. 11867423001), was used at a 1:25 dilution in blocking buffer and incubated for 1 h at room temperature. Samples were then incubated for 1 h in a 1:30 dilution of goat anti-rat secondary antibody conjugated to 18 nm colloidal gold (Jackson ImmunoResearch Laboratories, Inc., West Grove, PA) and stained with 5% uranyl acetate/2% methyl cellulose. Samples were analyzed on a JEOL 1200 EX transmission electron microscope (JEOL USA Inc., Peabody, MA) with an AMT 8 megapixel digital camera and AMT version 602 software (Advanced Microscopy Techniques, Woburn, MA).
Toxoplasma Possesses a Unique Elp3 Homologue-We have previously characterized several KATs in Toxoplasma (TgGCN5-A, TgGCN5-B, TgMYST-A, and TgMYST-B) and found them to be important for parasite survival and differentiation (24). Bioinformatic surveys for additional members of the GCN5-related N-acetyltransferases (GNATs) in the Toxoplasma genome identified TGGT1_041990 as having unequivocal homology to Elp3, the catalytic subunit of the Elongator complex. The gene, which we have designated TgElp3, is located on chromosome IX and contains 9 exons and 8 introns. Cloning and sequencing the open reading frame confirmed the Toxoplasma database annotation of 2,955 bp to be correct. Rapid amplification of cDNA ends (RACE) identified the 5′- and 3′-UTRs to be 568 bp and 70 bp in length, respectively. TgElp3 is predicted to contain the hallmark Radical S-adenosylmethionine (SAM) and KAT domains found on all Elp3 proteins from archaebacteria to humans (Fig. 1, A and B). The critical residues for KAT activity are conserved (Fig. 1B) (25-28). Intriguingly, the Radical SAM domain in the Toxoplasma and Plasmodium Elp3 homologues contains the canonical CXXXCXXC motif important for iron-sulfur cluster formation, while most other Elp3 homologues contain a non-canonical CX4CX9CX2C motif (29). Strikingly, TgElp3 also harbors a predicted transmembrane domain (TMD) at its extreme C-terminal (Ct) end followed by four amino acids (Fig. 1A). Further analysis revealed that other members of the phylum Apicomplexa, including Plasmodium spp., Cryptosporidium, Neospora, Theileria, and Eimeria, have this unusual Ct TMD on their Elp3 homologues. Another parasitic alveolate, Perkinsus marinus, has an Elp3 with a Ct TMD, but free-living alveolates like Tetrahymena thermophila do not. The Ct TMD is also found on a limited number of other chromalveolates, including brown algal and water mold species. We could not find any eukaryote outside of Chromalveolata in possession of an Elp3 with a Ct TMD, suggesting this feature is restricted to select members of this supergroup.
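As an illustration of how such a C-terminal hydrophobic segment can be flagged from sequence alone (this is not the analysis performed in the study, which relied on published prediction tools), the sketch below runs a Kyte-Doolittle hydropathy scan; the 19-residue window and the ~1.6 threshold are common heuristics for transmembrane helices.

```python
# Kyte-Doolittle hydropathy scan. The scale values are the published
# Kyte-Doolittle constants; window size and threshold are common heuristics.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def hydropathy_windows(seq, window=19, threshold=1.6):
    """Return (start_index, mean_score) for hydrophobic windows in seq
    (uppercase one-letter amino acid codes assumed)."""
    hits = []
    for i in range(len(seq) - window + 1):
        avg = sum(KD[aa] for aa in seq[i:i + window]) / window
        # Windows above threshold are candidate membrane-spanning segments;
        # for a tail-anchored protein they cluster at the extreme C terminus.
        if avg >= threshold:
            hits.append((i, avg))
    return hits
```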
Elp3 is the catalytic subunit of the Elongator complex, which consists of six proteins (Elp1-6) that are well conserved from yeast to humans (30-32). Strikingly, bioinformatic surveys of apicomplexan sequence databases reveal no homologues of the other five Elongator subunits, suggesting that Elp3 may have functions pre-dating the evolution of the Elongator complex.
FIGURE 1. In the Radical SAM domain alignment, the "#" denotes conserved cysteine residues critical for iron-sulfur cluster formation in human and yeast homologues (29, 52, 53), while the asterisks indicate those in the Toxoplasma and Plasmodium homologues. In the KAT domain, asterisks denote residues previously shown to be important for KAT activity (25-28). Amino acid number is indicated on the right.
To address whether TgElp3 is a bona fide KAT, we expressed ectopic recombinant TgElp3 tagged with an HA epitope at the N terminus (HA TgElp3) and performed a standard in vitro enzymatic assay using recombinant histone H3 as substrate. As shown in Fig. 2A, HA TgElp3 migrates at the expected size of 100 kDa and does not appear to be subject to proteolytic cleavage. TgElp3 is able to acetylate H3 in vitro (Fig. 2B), although this does not demonstrate that H3 is a substrate of TgElp3 in vivo.
TgElp3 Localizes to the Mitochondrion-Given the unusual Ct TMD and lack of Elongator partners, we determined the subcellular localization of TgElp3 by performing IFAs with parasites expressing HA-tagged forms of TgElp3. IFA analysis of intracellular tachyzoites revealed an unexpected staining pattern that resembled the parasite's single mitochondrion. HA-tagged TgElp3 co-localized with the mitochondrial marker TgF1β ATPase (Fig. 3A) as well as with MitoTracker. The same mitochondrial distribution was observed for both HA TgElp3 and TgElp3 HA, indicating that full-length TgElp3 is present at the mitochondrion. For better resolution, immunoelectron microscopy (IEM) was also performed on these parasites expressing HA TgElp3 and TgElp3 HA using an HA antibody. Virtually all gold particles were found in the mitochondrion, with the vast majority located at the outer mitochondrial membrane (Fig. 3B). No gold particles were detected in the parental parasites probed with this HA antibody (not shown). Consistent with the idea that TgElp3 has a role independent of transcription, no TgElp3 was detected in the parasite nucleus by IFA or IEM.
Mitochondrial Targeting Is Mediated by the C-terminal TMD of TgElp3-TgElp3 localization to the mitochondrion was particularly surprising as online predictive algorithms including TargetP and PSORT failed to identify a canonical N-terminal or internal mitochondrial signal sequence in TgElp3. To illuminate the mechanism by which TgElp3 is targeted to the mitochondrion, we generated a series of YFP fusion proteins (Fig. 4A). As shown in Fig. 4B, YFP alone is located in the parasite cytosol, but its small size allows diffusion into the parasite nucleus as well. N-YFP consisted of the first 273 amino acids of TgElp3 (from the start codon to the Radical SAM domain) fused to the N terminus of YFP to test if the N-terminal region contained a novel type of mitochondrial targeting sequence, which does not appear to be the case (Fig. 4B). YFP-C consisted of the region downstream of the KAT domain (259 amino acids) fused to the C terminus of YFP and demonstrated that the C-terminal region and TMD are involved in mitochondrial localization. A third construct, N-YFP-C, included both N-and C-terminal TgElp3 fragments flanking YFP (lacking the Radical SAM and KAT domains) and also localized to the parasite mitochondrion. Together, these data show that the C-terminal portion of TgElp3 is necessary and sufficient for mitochondrial localization, with the N-terminal sequence having no role in protein targeting to this organelle (Fig. 4B). Since this C-terminal fragment contains the TMD, we made an additional YFP fusion protein with the TMD deleted. Additionally, we stably expressed an ectopic form of HA TgElp3 lacking the TMD. The results consistently demonstrate that TgElp3 is unable to localize to the mitochondrion without the TMD (Fig. 4B).
TgElp3 Is a Tail-anchored Outer Mitochondrial Membrane Protein-Immediately downstream of the TgElp3 TMD is a string of four arginine residues. In other eukaryotic species, a C-terminal TMD followed by several positively charged residues is sufficient for targeting a protein and inserting it into the outer mitochondrial membrane (OMM). These proteins are referred to as tail-anchored (TA) and, while the precise mechanism of mitochondrial membrane insertion is still unclear, mutational analyses have shown that the TMD sequence serves as a mitochondrial targeting signal (33, 34). TA proteins are often inserted in the OMM by the C-terminal TMD such that a short C-terminal tail faces the intermembrane space and everything upstream of the TMD resides in the cytosol (35). To determine the orientation of TgElp3 in the mitochondrial membrane, we selectively permeabilized membranes of parasites expressing N- or C-terminally HA-tagged TgElp3 with digitonin prior to IFA analyses. Treatment with 0.004% digitonin permeabilized only the plasma membrane of Toxoplasma; while a cytoplasmic marker (TgIF2α) was detectable, the mitochondrial matrix protein (TgF1β ATPase) was not (Fig. 5). Exposure to 0.1% digitonin permeabilized the plasma membrane as well as the outer mitochondrial membrane, but the inner mitochondrial membrane remained largely intact. As a result, the mitochondrial matrix marker TgF1β ATPase was only partially detectable with 0.1% digitonin but was fully visible when 0.2% Triton X-100 was used to permeabilize all membranes. When only the plasma membrane was permeabilized, HA TgElp3 was recognized but TgElp3 HA was not. However, TgElp3 HA was visible once the mitochondrial outer membrane had been permeabilized. These results indicate that TgElp3 is anchored in the outer mitochondrial membrane with the N terminus facing the cytoplasm and the short C terminus located within the intermembrane space. Such an orientation leaves TgElp3 capable of enzymatically acting on cytosolic proteins, proteins associated with the mitochondrial surface, or proteins targeted for translocation into the mitochondrion.
Elp3 Localization to the Mitochondria of Mammalian Cells-The localization of TgElp3 to the parasite mitochondrion is contingent upon the Ct TMD. Elp3 homologues in the vast majority of other species do not contain a TMD, but there has been a report suggesting human Elp3 can traffic to the mitochondria in HeLa cells (36). We examined whether mitochondrial Elp3 was unique to Toxoplasma by performing Western blots of fractionations from mouse brain. As shown in Fig. 6, a shortened form of Elp3 (~49 kDa) is present in the mitochondrial fraction from mouse brain, while only the full-length form of Elp3 (62 kDa) is in the cytosolic fraction. These results suggest that localization of Elp3 to mitochondria has been conserved throughout evolution.
TgElp3 Is Essential and Must Localize to Mitochondria for Parasite Viability-To provide insight into the role of TgElp3 in parasite physiology, we designed experiments to knock out the genomic locus. Despite several attempts to generate a TgElp3 knock-out by replacing the genomic locus with a selectable marker in RHΔku80Δhx parasites (Fig. 7A), we were unable to obtain viable clones, suggesting that TgElp3 is essential for parasite survival (Fig. 7B). We were, however, able to knock out the locus in transgenic parasites expressing ectopic TgElp3 fused to a destabilization domain (ddHA TgElp3), which targets the fusion protein for degradation unless stabilized by adding Shield-1 to the culture (37, 38). We were unable to obtain a TgElp3 knock-out in RHΔku80Δhx parasites after screening 60 clones from 11 independent populations. However, we were able to knock out the TgElp3 genomic locus at a high frequency (58%) in parasites expressing ectopic ddHA TgElp3. Both PCR of genomic DNA and reverse-transcription PCR (RT-PCR) verified the absence of endogenous TgElp3 in three independent knock-out clones (Fig. 7, B and C). Removal of Shield-1 from the ΔTgElp3::ddHA TgElp3 clones did not fully deplete ectopic TgElp3, complicating phenotypic analysis. We conclude that the TgElp3 genomic locus is amenable to homologous recombination, but cannot be displaced unless a second copy is present, supporting the idea that TgElp3 is essential in tachyzoites.
FIGURE 3. TgElp3 localizes to the parasite mitochondrion. A, IFAs of RHΔhx, HA TgElp3, and TgElp3 HA parasites with rat anti-HA (green) and mouse anti-TgF1β ATPase, a mitochondrial marker (red). Images were merged with the DNA stain DAPI (blue), and the white scale bar represents 5 µm. B, representative IEM images of HA TgElp3 parasites probed with anti-HA and anti-rat conjugated to 18-nm gold particles. Gold particles were found almost exclusively at the parasite mitochondrial (M) membrane. IEM of TgElp3 HA parasites showed identical results (not shown).
As an alternative approach to address the importance of TgElp3, we investigated whether parasites could survive when TgElp3 is expressed but not able to localize to the OMM, by deleting the TMD (ΔTMD) from the endogenous locus (Fig. 8A). As a control, we replaced the endogenous TMD with a wild-type TMD sequence (WT TMD) to confirm that integration of our construct did not produce an artifact. The recombination frequency was ~70% (17/24) when the WT TMD construct was used, but we were not able to isolate any viable parasites when the ΔTMD construct was used. Fig. 8B shows five representative clones from each parasite line: five positive WT TMD clones and five negative ΔTMD clones. These results suggest that localization of TgElp3 to the OMM is essential for parasite viability.
DISCUSSION
In this report, we describe the first Elp3 homologue from an apicomplexan parasite, which exhibits unusual features suggestive of new functions across species. Most striking is the presence of a unique C-terminal TMD that targets TgElp3 to the mitochondrial surface with an orientation consistent with that of a tail-anchored membrane protein. Such an arrangement provides TgElp3 with great flexibility to acetylate a wide variety of substrates, both cytosolic proteins and proteins associated with the mitochondrion. Acetylome analyses conducted by our laboratory have shown that ~500 Toxoplasma proteins of diverse function and localization are acetylated, including mitochondrial proteins (9, 10). Interestingly, the OMM porin was among the most heavily acetylated proteins detected. It is tempting to speculate that TgElp3 may acetylate such proteins, or others within proximity of the parasite mitochondrion. These unexpected findings prompted us to test whether Elp3 localizes to the mitochondria in other species. To date, one report has been published showing that Elp3 localizes to mitochondria in HeLa cells (36). Our results suggest that a 49 kDa form of mouse Elp3 localizes to the mitochondria in neural cells. While its presence at this organelle is conserved, the lack of a C-terminal TMD on mammalian Elp3 suggests it may be targeted to the mitochondria in a different manner than TgElp3. The identification of a mitochondrial KAT potentially addresses the long-lingering question of which KAT is responsible for acetylating mitochondrial proteins.
TgElp3 lacks conventional mitochondrial targeting sequences, but several parallel lines of investigation clearly demonstrate that the C-terminal TMD is necessary and sufficient to traffic the protein to the mitochondrion. All other apicomplexan parasites for which complete genome sequence data are available, as well as the marine parasite Perkinsus marinus, have an Elp3 with a C-terminal TMD. However, free-living alveolates like Tetrahymena do not have a predicted TMD on their Elp3. Complicating the matter is the fact that other members of the Chromalveolata supergroup, namely brown algae and water molds, have an Elp3 with the Ct TMD. Further analysis of this potential dichotomy among the chromalveolates requires more genome sequencing of other species.
The six-subunit Elongator complex, of which Elp3 is the catalytic component, is highly conserved in yeast, plants, and animals, where it facilitates transcriptional elongation through association with RNA polymerase II (39). In contrast to higher eukaryotes, Apicomplexa lack all Elongator components except Elp3, suggesting that histone acetylation is not the original function of this KAT. In further support of this idea, we were unable to detect TgElp3 in the parasite nucleus. Moreover, numerous reports in other species have ascribed additional functions to Elp3 including tRNA modification, DNA demethylase activity, and acetylation of α-tubulin (40, 41). Some of these additional functions may be due to the presence of the Radical SAM domain, which is unique to the Elp3 family of KATs. In general, Radical SAM domains do not have a specific function; rather, they can participate in a number of catalytic activities, such as oxidation-reduction, isomerization, methylation, and protein radical formation, which seem to be specific to the protein or its targets (42, 43). The function of this domain in Elp3 orthologues is not clear, but it has been implicated in DNA demethylation in zygotes (44). The first step in all reactions involving the Radical SAM domain is reduction of the iron-sulfur cluster formed by three or four key cysteine residues in the domain. Yeast and human Elp3 homologues contain a non-canonical CX4CX9CX2C motif, and mutation of these cysteines to alanines recapitulates an Elp3 knock-out phenotype in yeast (29). Interestingly, the apicomplexan and several other TMD-containing Elp3 homologues possess the more traditional Radical SAM motif (CXXXCXXC), while Elp3 homologues lacking the C-terminal TMD have a non-canonical motif. Further investigation is necessary to determine whether a connection exists between mitochondrial localization via the TMD and the function of the Radical SAM domain based on its cysteine motif. The function of the Radical SAM domain of TgElp3 at the mitochondrion may be independent of or co-dependent with that of the KAT domain. Understanding the functions of Elp3 and other members of the Elongator complex remains an important task as they have been associated with several human diseases, including familial dysautonomia and amyotrophic lateral sclerosis, a motor neuron degeneration disorder (45-48).
FIGURE 5. Digitonin selective permeabilization determines TgElp3 orientation. HA TgElp3 and TgElp3 HA expressing parasites were used to visualize the N- and C termini of TgElp3 (green), respectively, while parental RHΔhx parasites were used to establish the degree of membrane permeabilization. Detection of cytoplasmic TgIF2α (green) and mitochondrial matrix protein TgF1β ATPase (red) confirmed permeabilization of the Toxoplasma plasma membrane and both mitochondrial membranes, respectively. IFAs were performed using the indicated concentrations of digitonin or Triton X-100 for permeabilization as described in "Results."
FIGURE 6. Mouse Elp3 present in brain mitochondria. Western blot of Elp3 in mitochondrial (M) and cytoplasmic (C) fractions purified from mouse brain. Mouse Elp3 is detected at its full-length form (62 kDa) in the cytoplasmic fraction and a shorter form (~49 kDa) in the mitochondrial fraction. GAPDH and COX IV were used as cytoplasmic and mitochondrial markers, respectively.
FIGURE 7. Knock-out of TgElp3 genomic locus is only possible if ectopic TgElp3 is present. A, diagrams of the TgElp3 genomic locus, constructs, and mRNA, including primers for screening clones. In the top panel black, gray, and striped bars represent exons, introns, and UTRs, respectively. The TgElp3 knock-out construct uses double homologous recombination to replace the genomic locus with a mutated form of dihydrofolate reductase-thymidylate synthase (DHFR-TS*) to confer pyrimethamine resistance for selection. The middle panel depicts the construct used to express an ectopic copy of TgElp3 tagged at the N terminus with the dd and HA epitope, containing tubulin and DHFR 5′- and 3′-UTRs, respectively. HXGPRT was used for selection. The bottom panel shows the TgElp3 mRNA transcripts with white lines representing removed introns. B, genomic PCR and RT-PCR of TgElp3 knock-out attempts before and after introduction of ectopic ddHA TgElp3. The top panel shows a PCR of genomic DNA from 8 representative clones (out of 60 total) confirming that the TgElp3 locus was not replaced when a TgElp3 knock-out was attempted. The middle panel shows that several knock-out clones were obtained when ectopic ddHA TgElp3 was present, which is shown as the ~3.0 kb band in the bottom panel. The "+" denotes TgElp3 knock-out clones, three of which (1E4, 2D5, and 3A12) were selected for further analysis by RT-PCR. C, RT-PCR was used to confirm the absence of endogenous TgElp3 mRNA in the ΔTgElp3::ddHA TgElp3 clones and the presence of ectopic ddHA TgElp3 mRNA. RHΔku80Δhx served as the parental line for ddHA TgElp3, and ddHA TgElp3 served as the parental line for ΔTgElp3::ddHA TgElp3. Primer pairs used for PCR are located to the right of the gel images.
Underscoring the importance of Elp3 in cellular physiology, disruption of this KAT in several species has been shown to cause significant defects. Deletion of Elp3 in Arabidopsis impairs the mitotic cell cycle as well as leaf polarity (49). Migration and differentiation of mouse cortical neurons was significantly altered when Elp3 was decreased (50). In Drosophila, deletion of Elp3 results in larval lethality (51). In Toxoplasma, we found TgElp3 to be indispensable for parasite viability as the genomic locus could only be disrupted when an ectopic copy of the KAT was present. Additionally, parasites require that TgElp3 be positioned at the mitochondrion, indicating an essential function at this organelle. While TgElp3 is clearly important, the parasite evidently requires very little of the protein. Multiple lines of data provided on the Toxoplasma online database (toxodb.org) indicate that TgElp3 mRNA and protein are expressed at very low levels. This has complicated the use of knockdown technologies to further study the phenotypic consequences of TgElp3 depletion as minute amounts of residual TgElp3 are all that seem necessary for the parasites to survive.
In summary, the identification and characterization of Elp3 in Toxoplasma, in conjunction with fractionation studies performed on mouse brain, have revealed the strongest evidence to date that Elp3 plays important roles beyond transcription at the mitochondria. Curiously, no other subunits of the Elongator complex could be identified in early-branching eukaryotes. It is therefore possible that Elp3 may have a function involving the mitochondria that predates its well-established role in transcriptional elongation in higher eukaryotes. Alternatively, the other Elongator subunits have been lost in these organisms or are too divergent for in silico detection. Our studies also bolster the model that C-terminal TMDs can operate as a membrane targeting mechanism and reveal that this mechanism appeared very early in the course of eukaryotic cell evolution.
| 8,016.4 | 2013-07-22T00:00:00.000 | ["Biology"] |
Analysis of Dynamic Characteristics of 6-PSS Parallel Mechanism Considering Spherical Hinge Clearance
This paper studies a new type of 6-PSS parallel mechanism. First, considering the clearance in the spherical joints connecting the upper platform and the links, a kinematic model that accounts for the spherical hinge clearance is established. Then, based on the Lankarani-Nikravesh (L-N) contact model and a modified Coulomb friction model, the contact force between the spherical elements in contact is calculated and applied as an equivalent generalized external force at the centers of the corresponding members. The Newton-Euler method with Lagrange multipliers is applied to establish the dynamics model of the parallel mechanism with clearance. Finally, the Runge-Kutta (R-K) method is used to solve the dynamic equations, and the influence of different spherical hinge clearance sizes on the dynamic characteristics of the mechanism is analyzed.
O-XYZ is the coordinate system fixed to the center of the lower platform. The Z axis is perpendicular to the plane of the lower platform, the X axis points along one of the horizontal guide rails, and the Y axis is determined by the right-hand rule. D-XYZ is a coordinate system fixed to the center of the upper platform; its axes coincide with those of the fixed coordinate system when the entire mechanism is in the neutral position. Six guide rails are arranged on the lower platform, forming two (inner and outer) equilateral triangles whose circumscribed circle radii are $R_1$ and $R_2$, respectively. The six spherical hinge points on the upper platform are distributed on a circle of radius $r$. The lengths of the inner and outer links connecting the upper and lower spherical hinges are $l_1$ and $l_2$, respectively.
Inverse solution of mechanism kinematics
According to the coordinate rotation formula, the rotation matrix $\mathbf{R}$ of the upper-platform coordinate system relative to the fixed coordinate system is obtained (Eq. (1)). The coordinates of the upper spherical hinge point $A_i$ in the fixed frame can then be expressed as $\mathbf{A}_i = \mathbf{P} + \mathbf{R}\,{}^{D}\mathbf{A}_i$, where ${}^{D}\mathbf{A}_i$ is the coordinate of the upper spherical hinge point $A_i$ in the coordinate system of the upper platform and $\mathbf{P}$ is the position of the upper platform center. The angle between the direction of the $i$-th guide rail and the X axis of the fixed coordinate system is $\theta_i$, and with the coordinates of the guide rail vertices given, the coordinates of the lower spherical hinge points $B_i$ follow from the slider displacements along the rails. Since the distance between the upper and lower spherical hinges is equal to the length of the link, $\lVert \mathbf{A}_i - \mathbf{B}_i \rVert = l_i$, the inverse kinematic solution of the mechanism is obtained.
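A numerical sketch of this inverse solution is given below. The geometry arguments are illustrative placeholders rather than the actual mechanism parameters, and a Z-Y-X Euler convention is assumed for the platform orientation (the paper's own convention is not recoverable from the text).

```python
# Inverse kinematics sketch for one leg of a 6-PSS mechanism.
import numpy as np

def rot_zyx(alpha, beta, gamma):
    """Rotation matrix of the upper platform, assuming Z-Y-X Euler angles."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    return Rz @ Ry @ Rx

def slider_position(P, R, a_i, v_i, u_i, l_i):
    """Slider travel s_i along its rail such that |A_i - B_i| = l_i.

    P   : position of the upper platform center (3-vector)
    a_i : upper hinge point in the upper-platform frame
    v_i : rail vertex (start point of the rail) on the lower platform
    u_i : unit vector along the rail
    l_i : link length
    """
    A = P + R @ a_i                  # upper hinge in fixed coordinates
    d = A - v_i
    proj = d @ u_i                   # component of d along the rail
    # Solve |d - s*u|^2 = l^2 for s, a quadratic in s.
    disc = proj**2 - (d @ d - l_i**2)
    if disc < 0:
        raise ValueError("pose unreachable for this leg")
    return proj - np.sqrt(disc)
```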
3. Kinematics model considering spherical hinge clearance
For an ideal spherical hinge, the geometric centers of the sphere and the sphere socket that form the kinematic pair coincide exactly, and the socket rotates in three directions about the center of the sphere. In reality, there is a clearance between the elements that make up the spherical hinge, which allows the sphere to translate in three directions within the socket, so the motion constraint is transformed into a force constraint. Let the centers of the sphere socket and the sphere be $O_1$ and $O_2$, respectively. The eccentric vector $\mathbf{e}$ points from the center of the sphere socket to the center of the sphere. When the sphere socket and the sphere are in contact, let the contact points be $P_1$ and $P_2$, respectively, with contact deformation $\delta$. The normal and tangential unit vectors of the contact surface are $\mathbf{n}$ and $\mathbf{t}$, respectively.
The eccentric unit vector between the sphere and the sphere socket is $\mathbf{n} = \mathbf{e}/e$, where $e$ is the modulus of the vector $\mathbf{e}$. Assuming that the radii of the sphere socket and the sphere are $r_1$ and $r_2$, the spherical hinge clearance is $c_{gap} = r_1 - r_2$ and the contact deformation is $\delta = e - c_{gap}$. This paper uses a "contact-separation" state model, assuming that only two states, contact and separation, exist between the sphere socket and the sphere; whether they are in contact can therefore be judged from the contact deformation at two adjacent instants ($\delta < 0$: separation, $\delta \ge 0$: contact). When contact occurs, the positions of the contact points $P_1$ and $P_2$ on the sphere socket and the sphere follow from the centers, the radii, and the normal direction $\mathbf{n}$. Differentiating these positions gives the relative contact velocity; projecting it onto the normal of the contact surface and onto the contact surface itself yields the normal velocity $\mathbf{v}_n = \dot{\delta}\,\mathbf{n}$ and the tangential velocity $\mathbf{v}_t$, and the tangential unit vector of the contact surface is $\mathbf{t} = \mathbf{v}_t/|\mathbf{v}_t|$.
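The sketch below computes these clearance-joint quantities. It is a simplified illustration that evaluates the relative velocity at the joint centers and omits the angular-velocity contribution at the contact points.

```python
# Clearance-joint contact kinematics: eccentricity, penetration depth, and
# normal/tangential velocity split, given centre positions/velocities of the
# socket (index 1) and the ball (index 2) in the fixed frame.
import numpy as np

def contact_state(x_socket, x_ball, v_socket, v_ball, r1, r2):
    e = x_ball - x_socket                # eccentric vector (socket -> ball)
    e_mag = np.linalg.norm(e)
    c_gap = r1 - r2                      # radial clearance
    delta = e_mag - c_gap                # penetration depth (>= 0 => contact)
    n = e / e_mag if e_mag > 0 else np.zeros(3)
    v_rel = v_ball - v_socket            # relative velocity of the centres
    v_n = v_rel @ n                      # normal approach speed (delta_dot)
    v_t = v_rel - v_n * n                # tangential slip velocity
    return delta, n, v_n, v_t
```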
Normal contact force
The classic Hertz contact model treats the contact problem as perfectly elastic, ignores damping, and does not account for the energy loss during the contact process. The Lankarani-Nikravesh contact model (L-N model) adds a nonlinear damping term that incorporates the coefficient of restitution, the initial contact velocity, and the material properties [7], taking into account the energy change during the contact process, and is therefore closer to the real situation. This paper accordingly uses the L-N contact model to describe the normal contact force between the spherical elements of the parallel mechanism:

$$F_n = K\,\delta^{n}\left[1 + \frac{3(1-c_r^{2})}{4}\,\frac{\dot{\delta}}{\dot{\delta}_0}\right]$$

where $n$ is the force exponent, $c_r$ is the restitution coefficient of the spherical hinge, $\dot{\delta}$ and $\dot{\delta}_0$ are the contact deformation velocity and the initial contact deformation velocity, respectively, and $K$ is the stiffness coefficient

$$K = \frac{4}{3(\sigma_c + \sigma_b)}\left(\frac{r_1 r_2}{r_1 - r_2}\right)^{1/2}, \qquad \sigma_k = \frac{1-\nu_k^{2}}{E_k}\ (k = b, c)$$

where $E_c$ and $E_b$ are the Young's moduli of the sphere socket and the sphere, and $\nu_c$ and $\nu_b$ are the Poisson's ratios of the sphere socket and the sphere, respectively.
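A direct transcription of the L-N force law, under the assumptions above; all symbols follow the text, and the force exponent defaults to the Hertzian value n = 1.5.

```python
# Lankarani-Nikravesh normal contact force for an internal (sphere-in-socket)
# contact. delta0_dot is the approach speed captured at the instant contact
# begins; c_r is the restitution coefficient.
import numpy as np

def ln_stiffness(r1, r2, E1, nu1, E2, nu2):
    s1 = (1 - nu1**2) / E1               # material parameter of the socket
    s2 = (1 - nu2**2) / E2               # material parameter of the sphere
    return 4.0 / (3.0 * (s1 + s2)) * np.sqrt(r1 * r2 / (r1 - r2))

def ln_normal_force(delta, delta_dot, delta0_dot, K, c_r, n_exp=1.5):
    if delta <= 0:
        return 0.0                       # separated: no contact force
    damping = 3.0 * (1 - c_r**2) / 4.0 * delta_dot / delta0_dot
    return K * delta**n_exp * (1.0 + damping)
```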
Tangential contact force
Since the surfaces of the sphere socket and the sphere are not smooth, friction is unavoidable. The most classic friction model is the Coulomb model, which expresses the friction force as the product of the normal pressure and a friction coefficient. In order to prevent the friction force from becoming discontinuous when the tangential velocity changes direction near zero, a Coulomb friction model with a dynamic correction coefficient is used to describe the friction of the spherical hinge during contact:

$$\mathbf{F}_t = -c_f\, c_d\, F_n \frac{\mathbf{v}_t}{|\mathbf{v}_t|}, \qquad c_d = \begin{cases} 0, & |\mathbf{v}_t| \le v_0 \\ \dfrac{|\mathbf{v}_t| - v_0}{v_1 - v_0}, & v_0 < |\mathbf{v}_t| < v_1 \\ 1, & |\mathbf{v}_t| \ge v_1 \end{cases}$$

where $c_f$ is the friction coefficient and $v_0$ and $v_1$ are specified limit velocities within the error range.
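A sketch of this friction law; `mu` plays the role of the friction coefficient c_f, and the ramp between v0 and v1 implements the dynamic correction coefficient c_d.

```python
# Modified Coulomb friction: the correction factor c_d ramps the force on
# between the limit velocities v0 and v1 so that it stays continuous
# through zero slip speed.
import numpy as np

def tangential_force(F_n, v_t, mu, v0, v1):
    speed = np.linalg.norm(v_t)
    if speed == 0.0:
        return np.zeros_like(v_t)        # no slip, no friction force
    if speed <= v0:
        c_d = 0.0
    elif speed >= v1:
        c_d = 1.0
    else:
        c_d = (speed - v0) / (v1 - v0)
    return -mu * c_d * F_n * v_t / speed  # opposes the slip direction
```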
Equivalence of contact force
According to the normal and tangential contact force models established above, the total contact force is $\mathbf{F}_c = F_n\mathbf{n} + F_t\mathbf{t}$. Using an equivalent force and torque, this contact force is transferred to the centers of the upper platform and of the link. Since rotation of a link about its own axis does not affect the motion of the entire mechanism, the angle of the link about its own axis is assumed to be zero in order to simplify the kinematic constraint equations, which reduces the generalized coordinates of the link accordingly. The topology diagram identifies each kinematic pair and the components connected to it, which helps to quickly and accurately establish the kinematic constraint equations of the mechanism. The topology diagram of the 6-PSS parallel mechanism is shown in Fig. 5: the numbers 0-13 represent the lower platform, the six sliders, the six links, and the upper platform, and the labels H1-H18 represent the kinematic pairs connecting these components. The points in parentheses indicate the centers of the coordinate systems. The center points of two connected components can be regarded as an associative array of the kinematic pair, and the associative arrays are in one-to-one correspondence with the topology diagram of the 6-PSS parallel mechanism. The associative array of the 6-PSS parallel mechanism is shown in Tab. 1. Since this paper does not need to establish a coordinate system at the center of the slider, the center of the slider is represented by "-".
Tab 1. Associative array of the 6-PSS parallel mechanism

The constraint equations of the spherical hinges connecting the sliders and the links of the 6-PSS parallel mechanism, and of the spherical hinges connecting the upper platform and the links, equate the position of each hinge centre as computed from the two connected bodies. In order for the mechanism to have a definite motion, a set of driving constraints must also be imposed. The driving constraint equation prescribes the slider displacements obtained from the inverse kinematic solution for the known platform pose. Since a clearance is considered at the sixth spherical hinge connecting the upper platform and the link, the corresponding motion constraint is transformed into a force constraint, and the associated kinematic constraint equation must be removed from the set of ideal kinematic constraint equations.
Establishing the clearance dynamic equation
According to the Newton-Euler formulation with Lagrange multipliers, a dynamic equation including the kinematic constraints of the mechanism can be established without considering the clearance:

$\begin{bmatrix} \mathbf{M} & \boldsymbol{\Phi}_q^{\mathrm{T}} \\ \boldsymbol{\Phi}_q & \mathbf{0} \end{bmatrix} \begin{bmatrix} \ddot{\mathbf{q}} \\ \boldsymbol{\lambda} \end{bmatrix} = \begin{bmatrix} \mathbf{F} \\ \boldsymbol{\gamma} \end{bmatrix}$

where $\boldsymbol{\lambda}$ is the vector of Lagrange multipliers, $\boldsymbol{\Phi}_q$ is the Jacobian of the constraint equations, $\boldsymbol{\gamma}$ is the right-hand side of the acceleration-level constraint equations, and $\mathbf{M}$ and $\mathbf{F}$ are the generalized mass matrix and the generalized external force, respectively.
When the sphere hinge clearance is considered, the kinematic constraint equation of the spherical hinge connecting the upper platform and the link is removed, and the contact force at the spherical hinge is made equivalent to the centres of the corresponding members; this contact force then enters the generalized forces acting on the upper platform and the sixth link. The position and velocity constraints are only satisfied exactly at discrete instants, while integrating the acceleration-level equation alone allows constraint violations to accumulate, so stable numerical solutions cannot be obtained directly. The Baumgarte stabilization method [9] is the most commonly used remedy: the velocity and position constraint violations are fed back into the acceleration-level constraint, $\ddot{\boldsymbol{\Phi}} + 2a\dot{\boldsymbol{\Phi}} + b^2\boldsymbol{\Phi} = \mathbf{0}$, and the dynamic equation (15) is modified accordingly, where $a$ and $b$ are the violation-correction coefficients. When $a$ and $b$ are positive values, the system can generally reach stability, and when $a = b$ the system quickly reaches a stable state.
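A minimal sketch of how the Baumgarte feedback enters the solution of the constrained equations of motion; the array names and coefficient values are illustrative assumptions, not taken from the paper:

    import numpy as np

    def constrained_accelerations(M, Phi_q, F, gamma, Phi, Phi_dot, a=5.0, b=5.0):
        """Solve the constrained dynamics with Baumgarte stabilization.

        gamma is the acceleration-level RHS of the constraints; the feedback
        terms -2*a*Phi_dot - b**2*Phi damp out position/velocity drift.
        """
        n, m = M.shape[0], Phi_q.shape[0]
        gamma_star = gamma - 2.0 * a * Phi_dot - b**2 * Phi
        A = np.block([[M, Phi_q.T],
                      [Phi_q, np.zeros((m, m))]])
        rhs = np.concatenate([F, gamma_star])
        sol = np.linalg.solve(A, rhs)
        return sol[:n], sol[n:]   # generalized accelerations, Lagrange multipliers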
6. Numerical simulation
This paper uses the Runge-Kutta method to analyse the dynamic characteristics of the mechanism; the complete solution flow chart is shown in Fig 7.

Fig 7. Flow chart of the clearance calculation

In order to quantitatively evaluate the influence of different gap sizes on the dynamic characteristics of the mechanism, the root-mean-square (RMS) error of the acceleration of the upper platform is selected as the quantitative evaluation index of the simulation results:

$\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(a_i - a_i^{\mathrm{ideal}}\right)^2}$

where $\sigma$ is the RMS error of the acceleration, $n$ is the sample size, and $a_i$ and $a_i^{\mathrm{ideal}}$ are the acceleration with clearance and the ideal acceleration, respectively. This chapter mainly analyses the effect of the gap size on the dynamic performance of the mechanism through numerical simulation; the clearance parameters used are listed in the clearance parameter table. The paper mainly analyses the impact on the dynamic performance of the mechanism when there is a clearance at the upper spherical hinge H18, for gap sizes gap = 0.01 mm, 0.05 mm, 0.1 mm and 0.2 mm. The simulation parameters of the mechanism are given in Tab 3. Because the mechanism is under the action of a single kinematic-pair clearance and the Y-direction and Z-direction motion laws of the upper platform are similar, the Z-direction motion parameters of the upper platform are used to illustrate the influence of the kinematic-pair clearance on the dynamic characteristics of the entire mechanism. From the displacement curves for different gap sizes in Fig 8(a), the Z-direction displacement curve with clearance is relatively smooth and essentially coincides with the ideal displacement curve, indicating that the clearance has little effect on the displacement accuracy of the upper platform. The displacement error in Fig 8(b) further shows that the absolute value of the maximum Z-direction displacement error increases with the gap size: when the gap size changes from 0.01 mm to 0.2 mm, the Z-direction displacement error rises from about 0.005 mm to 0.09 mm, and the maximum displacement error remains smaller than the gap size. From Fig 9, the velocity with clearance becomes unsmooth, with many burrs, and fluctuates about the ideal curve, most visibly near the velocity extrema. The partial enlarged view of the velocity shows that the amplitude of the fluctuation increases with the gap size.
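The RMS index is trivial to compute; a short sketch with illustrative arrays:

    import numpy as np

    def rms_acceleration_error(a_clearance, a_ideal):
        """Root-mean-square error between clearance and ideal accelerations."""
        a_clearance, a_ideal = np.asarray(a_clearance), np.asarray(a_ideal)
        return np.sqrt(np.mean((a_clearance - a_ideal)**2))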
From Fig 10, there are many spikes in the acceleration curve for all gap sizes, and the value of the maximum spike increases with the gap size. When the clearance ranges from 0.01 mm to 0.2 mm, the maximum Z-direction acceleration spike ranges from 27.6 m/s^2 to 60.2 m/s^2, because the maximum contact force grows as the gap size increases and the acceleration grows correspondingly. Finally, additional gap sizes were simulated and the RMS error of the acceleration proposed above was used to quantify the degree of influence of different gaps on the dynamic characteristics of the upper platform. Tab 4 shows that the RMS error of the acceleration increases with the gap size, indicating degraded dynamic performance and stability. When the gap is 0.01 mm and 0.02 mm, the RMS error changes only slightly, which means that once the gap size reaches about 0.02 mm, further reducing it brings little improvement in dynamic performance while increasing the manufacturing cost.
7. Conclusion
Based on a new type of 6-PSS spatial parallel mechanism, this paper uses the Newton-Euler formulation with Lagrange multipliers to establish a dynamic model with spherical hinge clearance, and analyses the influence of different gap sizes on the dynamic characteristics of the entire mechanism. The following conclusions are drawn: different gap sizes have little effect on the displacement and velocity, but have a greater influence on the acceleration and the contact force; increasing the gap size deteriorates the dynamic characteristics of the mechanism and decreases its stability. | 3,228.4 | 2021-01-01T00:00:00.000 | [
"Engineering"
] |
Wind Turbine Radar Cross Section
The radar cross section (RCS) of a wind turbine is a figure of merit for assessing its effect on the performance of electronic systems. In this paper, the fundamental equations for estimating the wind turbine clutter signal in radar and communication systems are presented. Methods of RCS prediction are summarized, citing their advantages and disadvantages. Bistatic and monostatic RCS patterns for two wind turbine configurations, a horizontal axis three-blade design and a vertical axis helical design, are shown. The unique electromagnetic scattering features, the effect of materials, and methods of mitigating wind turbine clutter are also discussed.
Introduction
Wind power installations (wind farms) are increasing globally at a rate of about 20 percent annually [1,2]. The increasing number and density of wind farms is putting them into closer proximity of microwave transmission and reception facilities such as radar, radio, television, GPS, cellular, and wireless networks. The receivers in these systems rely on detecting and processing very weak signals. Wind farms, and even individual wind turbines, can significantly affect the received signals in many cases. The large scattering cross sections of the towers and blades result in strong signals that can saturate the receiver or mask the desired signals. Furthermore, the motion of the blades introduces a Doppler shift that can degrade the processing gain. To assure that a high-performance sensor or communication system can operate in the vicinity of wind farms, detailed analysis, measurements, or simulations may have to be conducted.
The radar cross section (RCS) is a figure of merit that can serve to estimate the effect of a wind turbine on a system's performance. Numerous studies have been performed evaluating the RCS of wind farms and their effect on radar and communication systems. Studies on wind turbine impact on radar performance appear in [3][4][5][6][7][8].
References [3,6,8] have used measurements, either in the field or a measurement facility, to estimate the wind turbine scattering and its impact on radar performance. The radar return from a wind farm can be much larger than that of most targets, making the detection and tracking of objects traversing the wind farm difficult [3]. Even if the target is outside of the wind farm area, strong returns from the tower and blades can mask weak target returns. The rotor-induced Doppler spread can mask moving targets or be mistaken for weather echoes. The wind turbine portion of the return, due to its characteristics, may not be recognized and processed as clutter by the radar.
The systems under consideration are sufficiently narrowband so that phasor notation can be used ($e^{j\omega t}$ time dependence assumed and suppressed) and the analysis performed at a single frequency, typically the carrier frequency.
The RCS of a point target (i.e., a target whose extent is much less than the size of the radar resolution cell) is defined for an incident plane wave as [14,15]

$\sigma_{pq} = \lim_{R\to\infty} 4\pi R^2\,\frac{|E^s_p|^2}{|E^i_q|^2} \qquad (1)$

where $R$ is the distance from the target, $f$ is the frequency, the subscript/superscript $i$ denotes incident, $s$ scattered, and $p, q = \theta$ or $\phi$ are the components in the spherical polar coordinate system shown in Figure 2. The limiting process in (1) assures that the scattered field is proportional to $1/R$. The copolarized RCS refers to the case where $p = q$, whereas the cross-polarized RCS is $p \neq q$. Generally, $\sigma$ is written as a scalar and the functional dependencies on frequency and angle are suppressed. The unit is typically m$^2$ or the decibel unit dBsm defined by

$\sigma,\ \mathrm{dBsm} = 10\log_{10}\,\sigma,\ \mathrm{m}^2. \qquad (2)$

The received power from a target at range $R$ for a monostatic radar is given by the conventional radar range equation (RRE) [16]

$P_r = \frac{P_t G_t^2 \lambda^2 \sigma_t F_t^4}{(4\pi)^3 R^4 L} \qquad (3)$

where $P_t$ is the transmitter power, $G_t$ the antenna gain in the direction of the target, $\lambda$ the wavelength, $\sigma_t$ the target RCS (m$^2$), and $L$ a miscellaneous system loss factor.
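For quick checks, the conversion in (2) and the RRE in (3) can be evaluated directly; a minimal Python sketch with illustrative parameter values:

    import numpy as np

    def dbsm(sigma_m2):
        """Convert RCS from square metres to dBsm, per (2)."""
        return 10.0 * np.log10(sigma_m2)

    def received_power(P_t, G_t, lam, sigma_t, R, L=1.0, F_t=1.0):
        """Monostatic radar range equation, per (3)."""
        return P_t * G_t**2 * lam**2 * sigma_t * F_t**4 / ((4 * np.pi)**3 * R**4 * L)

    # e.g., a 1 m^2 target at 10 km with a 30 dB gain antenna at 3 GHz
    P_r = received_power(P_t=1e3, G_t=1e3, lam=0.1, sigma_t=1.0, R=10e3)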
The factor F t is the one-way voltage (or field) path gain (propagation) factor. It is squared to obtain power and squared again for round trip, resulting in a fourth power. The path gain factor is a complex quantity that accounts for the relevant propagation modes between the radar and target. Generally, since most of the systems under consideration operate near the ground, it would include multipath (or "ground bounce"). It would also include losses due to precipitation and foliage.
Normally, radar and communication system performance measures such as probability of detection and probability of bit error are based on the signal-to-noise ratio (SNR). In our case, we neglect the effects of noise in comparison to the wind turbine clutter and use the signal-to-clutter ratio (SCR) as the basis for performance evaluation. The clutter power return from a wind turbine point target with RCS $\sigma_w$ at range $R_w$ is

$C = \frac{P_t G_w^2 \lambda^2 \sigma_w F_w^4}{(4\pi)^3 R_w^4 L} \qquad (4)$

Note that both the target and wind turbine RCSs are changing with time. For the wind turbine, this is due to rotor motion and the associated change in multipath; for the target, it is due to changing aspect angle, velocity, and multipath. From (3) and (4), we obtain the SCR for the monostatic case:

$\mathrm{SCR} = \frac{P_r}{C} = \frac{G_t^2\,\sigma_t\,F_t^4\,R_w^4}{G_w^2\,\sigma_w\,F_w^4\,R^4} \qquad (5)$

It is seen that the SCR cannot be increased by increasing the transmitter power, because the clutter power increases along with the target power. Figure 3 illustrates the tradeoffs between the factors in (5). Due to the complex propagation environment and geometry, the relative phases between the target and clutter components can be regarded as random. The total power in the receiver is determined using the sum of the complex voltages $V_t$ and $V_w$ due to the target and clutter returns, respectively. In this case, since we are interested in the average power, it is possible to approximate the total power received by the sum of the target and clutter powers. If the receiver impedance is $Z$ (real), then the total average received power is the noncoherent sum

$P_{\mathrm{total}} \approx \frac{\langle |V_t|^2\rangle + \langle |V_w|^2\rangle}{2Z} \qquad (6)$

where $\langle\,\cdot\,\rangle$ denotes expected value. This approximation allows us to treat each component individually. Bistatic geometries occur when the transmitter and receiver are sufficiently separated in angle. Bistatic radar is not as common as monostatic radar; however, the general bistatic case encompasses broadcast systems, cellular radio, and GPS. Referring to Figure 4, the direct signal from the transmitter to the receiver is [17]

$P_d = \frac{P_t G_t G_r \lambda^2 F_d^2}{(4\pi R)^2 L} \qquad (7)$

where $R$ is the direct path distance between the transmitter and receiver, $G_t$ the transmit antenna gain in the direction of the receiver, $G_r$ the receive antenna gain in the direction of the transmitter, and $F_d$ the one-way voltage (or field) direct path propagation factor. Again referring to the bistatic geometry depicted in Figure 4, the clutter power from the wind turbine arriving at the receiver is [18]

$C = \frac{P_t G_{tw} G_{rw} \lambda^2 \sigma_{bw} F_{tw}^2 F_{rw}^2}{(4\pi)^3 R_{tw}^2 R_{rw}^2 L_t L_r} \qquad (8)$

where the subscript $w$ is used to denote wind turbine parameters, the subscript $t$ refers to transmit and $r$ to receive, and $\sigma_{bw}$ is the wind turbine's bistatic RCS when the incidence direction is from the transmitter and the observation direction is toward the receiver, as defined in Figure 4. For the special case of a line-of-sight propagation path and no losses ($L_t$, $L_r$, $L$, $|F_t|$, $|F_r| \approx 1$), the SCR becomes

$\mathrm{SCR} = \frac{P_d}{C} = \frac{4\pi\,G_t G_r\,R_{tw}^2 R_{rw}^2}{G_{tw} G_{rw}\,\sigma_{bw}\,R^2} \qquad (9)$

In order to increase the SCR, aside from reducing the wind turbine RCS, the sidelobe levels of the two antennas should be as low as possible.
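A sketch of the monostatic SCR in (5), useful for the kind of tradeoff study illustrated in Figure 3; all numbers below are illustrative assumptions:

    def monostatic_scr(sigma_t, sigma_w, R_t, R_w, G_t=1.0, G_w=1.0,
                       F_t=1.0, F_w=1.0):
        """Signal-to-clutter ratio per (5); gains are toward target/turbine."""
        return (G_t**2 * sigma_t * F_t**4 * R_w**4) / \
               (G_w**2 * sigma_w * F_w**4 * R_t**4)

    # 10 m^2 target at 20 km vs a 10^4 m^2 turbine return at 40 km,
    # with 30 dB of two-way sidelobe discrimination toward the turbine
    scr = monostatic_scr(sigma_t=10.0, sigma_w=1e4, R_t=20e3, R_w=40e3,
                         G_t=1.0, G_w=10**(-1.5))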
RCS Prediction Methods
The determination of the RCS of complex objects, such as wind turbines, can be computationally demanding. It requires the numerical solution of some variation of Maxwell's equations, or high frequency approximations thereof, in either integral or differential form. Maxwell's equations are solved subject to the pertinent boundary conditions of the problem. Rigorous methods include the method of moments (MoM) solution of integral equations in the frequency domain [19], or the finite difference time domain (FDTD) solution of the differential equations in the time domain [20]. The finite element method (FEM) is also used in both the time and frequency domains [21]. Fourier transform relationships exist between the time and frequency domain solutions. The MoM is appealing because it is a rigorous solution that includes all the interactions between currents on the structure, and thus all scattering mechanisms (multiple reflections, diffraction, surface waves, etc.). MoM requires surface meshing of the object into facets with edge lengths small compared to the wavelength. A matrix equation, of an order approximately equal to the number of internal edges, must be solved for the current. Then the current is used in the radiation integrals to obtain $E^s_p$ for use in (1). At high frequencies (e.g., 10 GHz), millions of unknowns may be required for a converged current series. This has prevented the widespread use of MoM for electrically large scattering objects to date. The MoM solution of systems of equations numbering in the tens of millions has been reported [22]. This size problem is potentially within the reach of multicore PCs with several hundred GB of memory. However, there are approximate yet accurate prediction methods available that do not have the memory requirements.
The FDTD method can also be formulated rigorously, and does not require the solution of a large system of equations. It does require discretization of the scattering object and surrounding volume. The incident field is introduced into the computational domain and a marching in time process used to solve Maxwell's equations at each grid location at each instant of time. Equivalence principles are employed to find the far scattered fields from the equivalent currents on the computational boundaries. The FDTD method can require long observation times for accurate results. In the process, a Fourier transform is generally required to obtain the frequency domain fields for use in (1).
The approximate high frequency (HF) methods are primarily based on geometrical optics (GO) or physical optics (PO), and their edge diffraction extensions: the geometrical theory of diffraction (GTD) in the case of GO and the physical theory of diffraction (PTD) for PO [14]. Hybrid solution methods can include a mix of the two (e.g., the shooting and bouncing ray (SBR) method [23]). The HF approach also requires surface meshing; however, the primary mesh criterion is that it adequately conforms to the actual surface. A bundle of incident rays is "shot" and traced throughout the model, including transmission through any electrically transparent material, to find reflection points, diffraction points, and shadows (due to blockage). This process can be very time consuming for models that have hundreds of thousands of facets.
There are numerous commercial software packages that can handle the RCS calculations. Several have multiple solvers that can be selected based on the frequency range and object size. High Frequency Structures Simulator (HFSS) by Ansys has transient, frequency domain, and integral equation solvers [24]. The same is true for CST's Microwave Studio [25]. FEKO is also capable of mixed solutions, for example, MoM, PO, GO, and edge diffraction, that can be applied to different portions of the object.
RCS Characteristics of Wind Turbines
4.1. Introduction. The RCS features of the two generic wind turbine designs shown in Figure 5 are presented in this section. The first is a classic three-blade horizontal axis configuration with approximately a 60 m tower height and 80 m blade diameter. The second is a vertical axis helical blade design with a helix diameter of approximately 3 m and helix height of 3 m. The CAD models were obtained from [26] and scaled to give dimensions in meters. The dimensions are summarized in Table 1.
The software package Lucernhammer (Lucernhammer and ACAD have distribution limitations; they are available only to U.S. Government agencies and contractors.) [27] was used to perform the RCS calculations. The four frequencies considered are representative of wireless, cellular, and radar bands: 400 MHz, 900 MHz, 2400 MHz, and 5 GHz. No edge diffraction was considered in computing the RCS. Both bistatic and monostatic results are shown, using the coordinate system defined in Figure 2. The z-axis points up and the x-y plane is the ground. The elevation angle EL = θ − 90° is measured from the ground.
The azimuth angle is a compass angle that is opposite to φ (AZ = −φ). In all cases only the horizontal plane patterns are shown (EL = 0°, θ = 90°), which would be the case when the transmitter, receiver, and wind turbine are at ground level.
RCS data was obtained for models with all surfaces perfect electrically conducting (PEC) and compared to models that had fiberglass blades. The maximum number of ray bounces was set to 5. The non-PEC case is only approximate because Lucernhammer does not trace rays transmitted through the fiberglass blades. This contribution should be negligible, though, because the reflection loss at the air-fiberglass boundary is approximately 8 dB for a fiberglass relative permittivity of ε_r ≈ 5 [29]. In addition, there is attenuation of the transmitted wave as it propagates through the blade (fiberglass has an electric loss tangent tan δ ≈ 0.002 [29]) and an additional reflection loss at the exit surface.
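The 8 dB figure can be checked from the Fresnel reflection coefficient at normal incidence; a quick sketch:

    import numpy as np

    eps_r = 5.0                                           # fiberglass relative permittivity
    gamma = (1 - np.sqrt(eps_r)) / (1 + np.sqrt(eps_r))   # Fresnel reflection coefficient
    reflected_dB = 10 * np.log10(gamma**2)                # ~ -8.4 dB relative to incident power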
Horizontal Axis Wind Turbine RCS.
Bistatic patterns for the horizontal axis wind turbine are shown in Figures 6 to 9. The blade orientation is such that one blade is vertical and down, as shown in Figure 5 (this is referred to as the 0° rotation state). Front incidence is φ_i = 0°; side incidence is φ_i = 90°. There is a clear sidelobe structure at the lower two frequencies that arises from the cylindrical tower shape. What is evident in all of the bistatic plots is the large forward scattering lobe at the observation angle φ = φ_i + 180°. This occurs at 180° when incidence is from the front, and at 270° when incidence is from the side. The forward scatter lobe increases with frequency and is orders of magnitude larger than the backscatter (φ = φ_i). This feature is one of the appealing advantages that bistatic radar has over monostatic radar with regard to detecting low RCS (stealthy) targets [18].
The large lobe at 90° for side incidence is due to specular backscatter from the side of the nacelle. Figures 6 and 7 are for all metal surfaces, whereas Figures 8 and 9 are for fiberglass blades. Generally, wire strands or a wire mesh might be embedded in the structure for grounding and lightning protection, so the conductor approximation is good at low frequencies. The individual RCS contributions are shown in Figure 10; the bistatic RCS for the nacelle, blades, and tower were computed as if each were isolated in free space. The effect of different blade materials is not all that noticeable. The dominant contribution to the forward scattering is from the tower, which is PEC in all cases.
The copolarized components of the azimuth monostatic RCS for the four frequencies are shown in Figure 11. Lobes occur at all frequencies at 90° and 270° due to the large flat sides of the nacelle. Fluctuations occur with angle as the scattering from the tower, blades, and nacelle adds and cancels. As the frequency is increased, the phase differences change more rapidly because the distances are longer in terms of wavelength; hence the RCS fluctuates faster with angle. The relatively high monostatic RCS in the range of 100 to 200 degrees at 400 MHz is due to the tower. The tower sides have a slight tilt back (0.6 degrees) because the diameter at the base is larger than at the top. A wave incident at 0 degrees elevation is reflected upward such that an observer at 0 degrees elevation is in the peak of the first sidelobe of the bistatic pattern. At 900 MHz the sidelobes are narrower and an observer at 0 degrees elevation is in a null between two bistatic sidelobes. This is the reason for the large change in RCS between 400 MHz and the higher frequencies.

(Figure 9: Azimuth bistatic copolarized RCS of the horizontal axis wind turbine with fiberglass blades for an incident wave from the side.)
The periodic oscillations in the RCS in this same region at 400 MHz are due to Bragg scattering from the vertical blade and tower. Figure 12 shows a top view of the relationship between the tower and the vertical blade. The round trip phase difference between the blade and tower returns is a multiple of 2π when the condition

$2d\cos\phi = n\lambda \qquad (10)$

is satisfied [14], where the blade-tower spacing $d$ is approximately 4 m. Equation (10) gives a spacing of about 4 degrees between lobes near broadside (90 degrees), which agrees with the plot in Figure 11. Since the Bragg effect is a "point scatterer" phenomenon, it is not as pronounced at higher frequencies, where the surfaces have a larger radius of curvature in terms of wavelength. The bistatic RCS as a function of rotor position is summarized in Figures 13 and 14, where the RCS for rotor angles from 0 to 120 degrees is plotted in 10 degree steps for a frequency of 400 MHz. For the purpose of RCS calculation, the blade rotation is clockwise as viewed from the front. The collected curves are shown on a single figure to illustrate the range of RCS values and highlight the angular regions with the greatest variation. There is no significant change in the regions of the patterns at the higher RCS levels (>40 dB), but regions with lower RCS can fluctuate 10 to 20 dB.
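As a quick numeric check of the lobe spacing implied by (10) near broadside, assuming a simple two-scatterer model with the stated spacing:

    import numpy as np

    lam = 3e8 / 400e6                    # wavelength at 400 MHz, ~0.75 m
    d = 4.0                              # approximate blade-tower spacing in metres
    dphi = np.degrees(lam / (2 * d))     # lobe spacing near broadside: of order a few
                                         # degrees, consistent with Figure 11; the exact
                                         # value depends on the effective spacing d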
Vertical Axis Wind Turbine RCS.
Vertical axis wind turbines have been around since the early 1900s. Most have a helical blade geometry, and due to their compact size, they have been proposed as urban rooftop energy solutions [30]. In this section the monostatic and bistatic patterns are presented for the same four frequencies as for the horizontal axis design. The model is shown in Figure 5 with the blades in the 0 degree position (one blade's bottom centered on the x axis). Front incidence is φ_i = 0°; side incidence is φ_i = 90°. The forward scattered peaks in the bistatic patterns in Figures 15 and 16 are evident at all frequencies. At the lower frequencies, 400 MHz in particular, the sidelobe structure of the tower is visible. The tower diameter is such that it is in the resonance scattering region at 400 MHz [15]. At 5 GHz the multiple narrow lobes are due to Bragg scattering from the blades.

(Figure 13: Azimuth bistatic copolarized RCS for a collection of blade angles from 0 to 120 degrees in 10 degree increments, front incidence (400 MHz, fiberglass blades).)
Computational Issues and Convergence.
With a triangular mesh, a large flat rectangular surface can be modeled accurately by as few as two triangles. However, if a large surface is part of a complex target with other scattering elements, and many rays are "shot" for the ray tracing (typically 10 per wavelength), then the larger surface needs to be segmented into smaller surfaces so that blockage and multiple reflections can be accurately determined. More segmentation increases the accuracy but also increases the run time. In the case of a flat surface there is a relationship between segmentation and mesh size. If a large plate is meshed with small triangles, then less segmentation is required. The RCS plots in Figure 17 give an example of how the segmentation value (i.e., segments per triangle edge) affects the RCS. With adequate segmentation the perturbation of RCS is relatively small, and thus tends to be an issue for only cross-polarized RCS components.
For curved surfaces, the mesh size must be fine enough (i.e., sufficiently small triangle edges) so that a "tight" fitting mesh can be generated. Even so, when a curved surface is approximated by a triangular mesh, facet noise will occur (it is also called facetization error) [31]. In a general sense, facet noise can be categorized as a quantization error that arises from representing the smooth continuous surface with discrete facets.

(Figure 15: Bistatic copolarized RCS of the helical wind turbine, all metal surfaces, front incidence.)
To observe the effects of facetization error, the monostatic RCS of the tower is computed. Ideally the azimuth monostatic RCS should be constant with angle because the tower is a body of revolution. In Figure 18 are shown three mesh models of the 60 m high horizontal axis wind turbine's tower. Two are triangular meshes with different densities, and the third is a quadrilateral triangular mesh. A quadrilateral triangular mesh is obtained by first meshing the surface into quadrilaterals and then making triangles by adding the diagonals. Long thin vertical quadrilaterals can be used on the tower because it is essentially a singly curved surface; small segments only need to be used around the circumference of the tower. This results in significantly fewer facets for the tower. Figure 19 has a comparison of the monostatic RCS for a 90-degree sector at 900 MHz for the three meshes in Figure 18. The coarse mesh results vary by 20 dB, and would generally be considered unacceptable. However, in the forward scattered direction, the peak RCS level of the bistatic RCS at this frequency is about 70 dB, so this level of facet noise may be acceptable. Even the RCS for the fine mesh has variations of approximately 2 dB. The quadrilateral/triangular mesh has the same accuracy as the fine triangular mesh using only 3.7% of the fine mesh's number of facets. The reduction yields a significant computational savings when calculating the RCS for the entire wind turbine.
In Figure 20 are shown the bistatic patterns of the horizontal axis wind turbine for the three meshes at a frequency of 900 MHz. It is apparent that the cross-polarized components are more sensitive to facet noise because of the lower values of RCS.
RCS Reduction and Control
For both the monostatic and bistatic cases, the SCR, given by (5) and (9), can be increased by reducing the wind turbine RCS. Traditionally there are three approaches to reducing RCS: (1) shaping, (2) application of radar absorbing materials, and (3) cancellation techniques [14,15]. Shaping applied to the tower and nacelle could be somewhat effective, but it would have to be done with knowledge of the transmitter and receiver directions. Although it could reduce the RCS in some desired monostatic or bistatic directions, it would likely increase it in others. Shaping of wind turbine structures to reduce RCS has been investigated in [13].
Cancellation techniques involve the introduction of secondary scatterers to cancel the wind turbine's RCS (i.e., so as to induce destructive interference). It requires phase coherence between the primary (wind turbine) and secondary scattering components. This would be very difficult to achieve, especially at the higher frequencies, and it is only effective at limited frequencies and angles. Furthermore, the secondary scatterer would have to be very large in order to cancel the large wind turbine RCS. The most promising approach is the application of radar absorbing material (RAM). The material would have to be lightweight, thin, durable, inexpensive, and provide sufficient RCS reduction to make it economically viable. Most commercial RAM materials give a specular RCS reduction in the range of 15 to 20 dB (see, e.g., Emerson and Cuming Eccosorb FGM [32]); however, it varies widely with frequency and angle of incidence. A RAM coating might make sense if the wind turbine was at a fixed location from a facility, such as an airport radar. However, the bistatic RCS is so large that the aerodynamic degradation of the blades and cost of adding RAM would not generally be merited.
Summary and Conclusions
The equations for the signal-to-clutter ratio were presented for the monostatic and bistatic cases. The equations show that the SCR can be increased by (i) reducing the wind turbine RCS, or operating in a direction where the RCS is low, (ii) reducing the antenna sidelobe levels so that the gain is lower in the direction of the wind turbine, or (iii) in the case of radar, operating in a condition where the target range is much shorter than the wind turbine range.
The high-frequency computational method physical optics was used to predict the RCS. It has the advantage of not requiring the solution of a large number of simultaneous equations. The computational convergence issues related to surface meshing, number of bounces, and segmentation of the edges were discussed. The quadrilateral/triangular mesh on the tower provides the same accuracy as a fine triangular mesh with only 3.7% of the fine triangular mesh's number of facets. This reduction yields a significant computational time savings when calculating the RCS for the entire wind turbine. RCS patterns were presented for two wind turbine configurations: a three-blade horizontal axis design and a helical vertical axis design. The behavior of the RCS is a complicated function of angle, frequency, and rotor position. Both wind turbine configurations have relatively high forward scatter RCS that increases with frequency. For the horizontal axis models with tall PEC towers, the RCS is dominated by the tower scattering. The finer structure of the patterns varies with rotor position, and Bragg scattering can be observed in some situations. The reduction in RCS by using nonconducting blade materials was not significant.
Lastly, the possibility of reducing the radar cross section by the application of RAM was discussed. In most situations, the aerodynamic degradation and cost would not merit the use of RAM given the relatively small reduction in RCS that it would provide. An exception might be when wind farms operate in the vicinity of radar systems, in which case shaping and RAM can be used effectively. | 5,999 | 2012-12-30T00:00:00.000 | [
"Engineering"
] |
Control of stripe, skyrmion and skyrmionium formation in the 2D magnet Fe3−xGeTe2 by varying composition
Two-dimensional (2D) van der Waals (vdW) magnets have recently emerged as novel skyrmion hosts. This discovery has opened a new material platform for tuning the properties of topological spin textures, such as by exploiting proximity effects induced by stacking of 2D materials into heterostructures, or by directly manipulating the structural composition of the host material. Previous works have considered the effect of varied composition in bulk crystals of the vdW magnet Fe3−xGeTe2, but so far the effects on the hosted spin textures have not been thoroughly investigated. In this work, real-space x-ray microscopy is utilized to image magnetic stripe domain, skyrmion and composite skyrmion states in exfoliated flakes of Fe3−xGeTe2 with varying Fe deficiency x. In combination with supporting mean-field and micromagnetic simulations, the significant alterations in the magnetic phase diagrams of the flakes, and thus the stability of the observed spin textures, are revealed. These arise as a result of the varying temperature dependence of the fundamental magnetic properties, which are greater than can be explained by the removal of spins, and are consistent with previously reported changes in the electronic band structure via the Fe deficiency.
Introduction
Magnetic skyrmions and related spin textures have received significant interest as potential data storage elements in future spintronic devices due to their topological and transport properties [1,2]. The formation of monochiral skyrmions is induced by the Dzyaloshinskii-Moriya interaction (DMI), which arises due to the broken inversion symmetry of the host system [3]. The particular underlying symmetry class may stabilize different forms of skyrmions: Néel-type skyrmions in ferromagnet/heavy metal multilayers with interfacial DMI [4][5][6], or in bulk polar magnets [7] (symmetry C_nv); Bloch-type skyrmions in bulk chiral magnets such as the B20 materials (symmetries T and O) [8,9]; or antiskyrmions in bulk systems such as the Heusler alloys [10] or the recently discovered Fe1.9Ni0.9Pd0.2P [11] (symmetries D_2d and S_4, respectively).
The properties of these skyrmions are further defined by the interplay of the DMI with other magnetic parameters, including the exchange and dipolar interactions and the magnetocrystalline anisotropy, as well as extrinsic factors such as the shape of the sample and the presence of structural defects and disorder. The balance of these contributions determines the size of the skyrmions [12], the extent of their stability in temperature and applied magnetic field [13], the exhibited type of skyrmion lattice ordering [14], and their transformation into other non-trivial spin textures [15]. Moreover, skyrmion dynamics such as the lifetime of metastable states [16], the response to microwave resonance excitations [17], and the current-induced mobility [18] may all be affected. Methods of controlling these fundamental material parameters are therefore highly valuable for engineering desired skyrmion properties. In bulk magnet skyrmion hosts, such methods have included modifying the stoichiometry of the basic material, which, for example in the Co10−xZn10−yMnx+y alloys [19], resulted in the alteration of both T_C and the skyrmion size, as well as the discovery of different skyrmion lattice orderings (triangular and square) [20][21][22]. Alternatively, control has been achieved by partial or complete substitution of particular elements in the material composition. Well-studied examples include Mn1−xFexGe [12,23] and Fe1−xCoxSi [24,25], where modulation of both T_C and the skyrmion size was achieved by adjustment of the DMI and exchange interactions, or (Cu1−xZnx)2OSeO3 [26], where the increased disorder from the substitution resulted in a large enhancement of the metastable skyrmion lifetime via pinning effects [27]. In thin-film systems, refinement of the skyrmion properties has largely been realized by varying the composition, thickness, and number of repetitions of each material layer in the heterostructure, enabling control of skyrmion density, size, and stability [28][29][30].
Control of the material parameters of FGT, and thus manipulation of the skyrmion properties, has now begun to be investigated. In particular, exploitation of the proximity effects when stacking FGT with other 2D materials has resulted in initial prototypes such as spin-valve and spin-orbit-torque devices [55][56][57]. Moreover, the stability of skyrmions was found to increase when FGT flakes were combined with additional layers such as WTe2 or Co/Pd multilayers [58,59]. More fundamentally, compositional effects in FGT, such as altered stoichiometry [60][61][62] or chemical substitution [63][64][65], have previously been studied by bulk measurements. In particular, a previous work has highlighted the possibility to greatly modulate the magnetocrystalline anisotropy of FGT via hole doping, achieved by changing the Fe composition of the underlying crystal [62]. Similar studies have been carried out with the related compound Fe5GeTe2 [66][67][68]. However, the direct effects of these alterations on the topological spin textures in FGT flake samples have yet to be observed. Such investigations may further clarify the origin of skyrmion formation in FGT, as well as provide a pathway for tailoring the functionality of proposed FGT-based devices [69,70].
In this work, we study both bulk and exfoliated flake samples of Fe3−xGeTe2, with Fe deficiency x between 0.03 and 0.37. Using a combination of magnetometry and real-space x-ray microscopy, we investigate the effect that this altered composition has on the formation and stability of all observed spin textures, including stripe domain, skyrmion and skyrmionium states. Supporting mean-field and micromagnetic simulations confirm the vital role that the temperature-varying uniaxial anisotropy plays in altering the magnetic phase diagrams of the material. The results provide a foundation for future works seeking to achieve fine control over topological spin textures in Fe3−xGeTe2 and other 2D magnets.
Fe3−xGeTe2 flake samples
We selected four bulk single crystals of Fe3−xGeTe2 to investigate [61], with measured Fe deficiencies of x = 0.03, 0.10, 0.27 and 0.37 (see methods). Results for magnetization M versus temperature T measurements performed with a magnetic field of 30 mT applied along the c axis of each bulk crystal are shown in figure 1(a) (see methods). In agreement with previous studies [60,61], the Curie temperature T_C is reduced with higher Fe deficiency. We acquired values of 215, 202, 187 and 161 K for the x = 0.03, 0.10, 0.27 and 0.37 compositions, respectively, by finding the temperature point exhibiting the largest dM/dT. Further M versus T measurements are plotted in figure 1(b), this time with a magnetic field of 3 mT applied perpendicular to the c axis of each crystal. In addition to the typical sharp increase of M at T_C, there is a further step in each curve at lower temperature, labelled as T*. Previously, features observed at similar temperature ranges in FGT have been argued to be evidence for both antiferromagnetic ordering [71,72] and the onset of heavy fermion behaviour [73]. Given that we observed that this additional step feature is most prominent at very low magnetic fields applied in the ab plane, and that it is not observed for fields above 100 mT (see further M versus T data in supplementary data figure S1), we suggest that this feature may be related to the interlayer ordering between the individual vdW layers occurring at a lower temperature than the intralayer ordering at T_C.
The measured values of T_C and T* for each sample are plotted as a function of x in figure 1(c), alongside a measurement of the saturation magnetization M_S extracted from M versus H measurements performed at 5 K. Such a reduction in M_S can be expected from the removal of magnetic Fe atoms from the crystal lattice. As shown in figure 1(d), the crystal structure of FGT is composed of layers separated by a vdW gap, and exhibits two distinct Fe atomic sites: the Fe I site, located closer to the outer Te ions; and the Fe II site, located in the central layer of the slab along with the Ge ions. Previous studies of Fe deficient samples have found that there is typically a lower occupancy of the Fe II site, while the occupancy of the Fe I site remains complete. In addition, higher Fe deficiency typically increases the c lattice parameter and decreases the a lattice parameter [60,61].
From these bulk crystals, we prepared three exfoliated flakes of FGT with x = 0.03, 0.27 and 0.37 compositions on Si3N4 membranes for scanning transmission x-ray microscopy (STXM) measurements. The FGT flakes were each capped with flakes of protective hexagonal boron nitride (hBN) to prevent further oxidation under ambient conditions (methods). Optical micrographs of the prepared samples are shown in figures 1(e)-(g), highlighting the regions of interest (ROI) where subsequent STXM measurements were performed. From the optical contrast, we selected flakes with thicknesses as similar as possible (around 80 nm), to allow for a meaningful comparison of the spin texture formation in each composition (atomic force microscopy data shown in supplementary data figure S2). By tuning the incoming x-rays to the Fe L3 absorption edge and exploiting the x-ray magnetic circular dichroism (XMCD) effect, STXM imaging yields a magnetic contrast signal proportional to the out-of-plane magnetization averaged through the thickness of the flake sample, m_z (schematic illustration and x-ray absorption spectra shown in supplementary data figure S3). The images in figures 1(h)-(j) are x-ray micrographs of the ROI, revealing the formation of a dense array of skyrmions after field-cooling (FC) each flake sample from above T_C to the indicated temperatures under an applied out-of-plane magnetic field of 20 mT. Such a method of skyrmion formation has been seen for the x = 0 FGT composition before [37,38,42], and here we demonstrate that similar skyrmion states can be readily realized across the full range of investigated FGT compositions. Previous observations have demonstrated that FGT flakes within this thickness range exhibit Néel-type domain walls, indicating a source of interfacial-like DMI, although the origin of this effect remains under debate [38][39][40][42].
Magnetic phase diagrams
To investigate the formation of skyrmion and stripe domains across the composition series, we performed extensive STXM measurements on all three flake samples following a field-sweep (FS) procedure: at a range of temperatures, each sample was initialized in the saturated, monodomain state at −250 mT, and imaging was performed as the applied field was increased stepwise up to +250 mT. A selection of x-ray micrographs acquired following this procedure at different temperatures is shown in figures 2(a)-(o) (further images of each sample presented in supplementary data figures S4-S6). Alongside each row, we have labelled the fraction of T_C of each sample at which the measurement was performed, to facilitate comparison between the compositions. The T_C of each flake sample was determined as the temperature at which real-space magnetic contrast could no longer be observed, and the values were found to be slightly lower than in the bulk: 207, 180 and 146 K for the x = 0.03, 0.27 and 0.37 compositions, respectively. The reductions of T_C are consistent with the decreases seen in thinner flakes of 2D magnets [47], but we also cannot discount a temperature offset of a few kelvin in the thermocouple measurement. The results for the x = 0.03 flake acquired at 203 K, or 0.98 T_C, are shown in figure 2(a). The images reveal the formation of a dense disordered array of skyrmions (labelled Sk) for both positive and negative applied OOP fields, as well as stripe domains (labelled SD) at 0 mT. At decreasing temperatures, shown in figures 2(b) and (c), the characteristic size of the stripe domains increases, and skyrmion formation is no longer observed. Finally, at 150 K (0.72 T_C) and below, only uniform switching between the positive and negative monodomain states (labelled MD±) is observed.
The high temperature results for the two flakes with greater Fe deficiency, x = 0.27 and x = 0.37, shown in figures 2(f), (g) and (k), (l), reveal a similar behaviour, with dense skyrmion formation only observed close to T_C, and the characteristic stripe domain size increasing with decreasing temperature. However, at lower temperatures, there is a significant difference in the crossover to monodomain switching behaviour. In comparison to the x = 0.03 flake, this occurs at a comparatively lower temperature in the x = 0.27 sample (0.61 T_C), as revealed in figures 2(h)-(j). Furthermore, in the x = 0.37 sample, monodomain switching behaviour was not observed down to the base temperature of the STXM instrument at 30 K; instead, we observed the formation of stripe domains across the full investigated temperature range, as shown in figures 2(m)-(o).
The overall behaviour is visible from the magnetic phase diagrams of each flake sample presented in figures 3(a)-(c), which plot the observation of each magnetic state as a function of the applied magnetic field at each temperature when following the FS procedure. In cases where both stripe and skyrmion states coexisted, these were included in the skyrmion regions for clarity. The temperature is plotted both on an absolute scale and as a fraction of T_C. The phase diagram of the x = 0.03 sample, with the crossover to the monodomain switching behaviour, is similar to those previously reported for FGT flakes with x close to 0 [38]. However, the result for the x = 0.37 flake, where real-space spin textures were observed across the full temperature range, is more reminiscent of typical dipolar-stabilized skyrmion bubble systems [74], as well as the multilayer skyrmion hosts [75].
To explore the stability range of the skyrmion state in each flake sample, we performed further x-ray imaging measurements following the FC procedure: the sample was initialized above T_C, cooled down to the target temperature under an applied field of 20 mT, and images were acquired as a function of both increasing and decreasing applied field (with two separate cooling procedures). In all three samples, the FC procedure resulted in the formation of disordered arrays of skyrmions at low temperature, which were effectively quenched from the high temperature skyrmion pockets close to T_C, as shown in figures 1(h)-(j). The summary phase diagrams in figures 3(d)-(f) show the field and temperature extent of the resulting metastable skyrmion state in each sample, which is greatly increased in comparison to the equilibrium skyrmion phase revealed in the FS measurements. This large extent of the FC skyrmion phase was observed for all three compositions, with the main difference being a slight change in the field range of the skyrmion stability: for the x = 0.37 sample, the stability of the skyrmions was both higher in the positive field direction and lower in the negative field direction, in comparison to the x = 0.03 sample.
Note that these results show the phase diagram for a FS procedure from negative to positive magnetic field, and for a FC procedure with a positive applied field. In both cases, with a change in sign of the applied field, each phase diagram would be mirrored in the x axis. In all three samples, for increasing applied field, the skyrmions reduced in size and number, while for decreasing field their size dramatically increased below 0 mT (corresponding microscopy images shown in supplementary data figure S7). As discussed in previous works, this highly field-dependent skyrmion size is a strong indication of the predominantly dipolar-stabilized character of the observed skyrmion states, rather than DMI-stabilized [38,76]. These results demonstrate that large differences in the low temperature behaviour of the sample are accessible through a modulation of the Fe content of FGT samples, while the formation of skyrmions is maintained across the full investigated compositional range. Interestingly, we did not notice any distinct behaviour in the flake samples associated with T*, suggesting this may only be relevant for the bulk crystal samples.
Interplay of magnetic energy terms
We sought to perform simulations in order to reproduce the experimental results and investigate the origin of the observed differences in the magnetic phase diagrams. In order to acquire reasonable model parameters, we performed extensive magnetometry measurements of the bulk single crystals. Figures 4(a)-(c) plot M of each FGT composition measured at 5 K as a function of H, with the field applied both parallel and perpendicular to the c axis. Immediately, the plots reveal a significant change in the saturation field between the three compositions when the field is applied in the ab plane. This indicates a decrease in the uniaxial anisotropy K_U, responsible for the easy axis along the c axis, as a function of increasing Fe deficiency, as has been observed previously [60].
By measuring further M versus H loops at a range of temperatures, we extracted both M_S and, using an integration of the hysteresis loops, estimated values of K_U for each sample (further data shown in supplementary data figure S8, see methods). The calculated parameters for each composition are plotted as a function of temperature in figures 4(d) and (e). Note that in plotting K_U, we estimated and subtracted the shape anisotropy contribution of each plate-like bulk sample from the measured effective anisotropy K_eff (details shown in supplementary data figure S9). The results show both the expected decrease in M_S and the strong change in K_U with increasing x. The inset of figure 4(e) plots the extracted value of the anisotropy field, H_K = 2K_U/M_S, as a function of T for each sample, demonstrating that the change in K_U between samples cannot be accounted for by the change in M_S.
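The anisotropy-field comparison is a one-line computation from the extracted quantities; a minimal sketch (the unit conventions are an assumption for illustration):

    import numpy as np

    def anisotropy_field(K_U, M_S):
        """H_K = 2*K_U/M_S, returned in tesla.

        Assumes K_U in J/m^3 and M_S in A/m; mu0 converts the field
        from A/m to T for comparison with experiment.
        """
        mu0 = 4e-7 * np.pi
        return mu0 * 2.0 * K_U / M_S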
This implies that the magnetocrystalline anisotropy contribution is tuned by a factor of 4 for a change in x of ~0.35, in line with previous estimates [60]. Notably, this change in anisotropy with x is larger than the change in M_S, indicating a significant change in the magnetic behaviour of the system. Based on angle-resolved photoelectron spectroscopy and density functional theory calculations, this has been argued to be linked to fundamental changes in the electronic band structure of FGT [62]. Due to the resulting hole doping at higher values of x, the majority of bands that are strongly affected by spin-orbit coupling are shifted away from the Fermi energy with increasing Fe deficiency, thus reducing the resulting out-of-plane magnetocrystalline anisotropy [62].
Considering this strong change in the uniaxial anisotropy with the composition of the samples, we modelled the experimental systems using a temperature-dependent computational model based on the mean-field approximation of a classical spin Hamiltonian with exchange, DMI, uniaxial anisotropy and dipolar interaction terms (see methods), similar to our previous work [38,77]. We noticed that the M_S and K_U curves could be collapsed upon one another via a linear scaling of both axes, indicating a homogeneity of the M_S(T) and K_U(T) functions across the composition series (see supplementary data figure S10). Based on a dimensional analysis of the model Hamiltonian (see supporting notes A and B), it is possible to reduce the complexity of the simulation to rescaling only a few model parameters while keeping the temperature dependence of all energetic terms the same. For example, the exchange parameter J_E of a system with Fe deficiency x_2 can be obtained from an initial set of parameters for the system with x_1 by rescaling according to the ratios of the measured M_S and K_U values.
Thus, starting from a set of model parameters for the x = 0.03 system, we calculated parameters for each composition, scaled by the experimentally measured M_S(T) and K_U(T) values shown in figure 4. A more detailed and thorough description of these scaling arguments is included in the methods and the supporting material. We performed simulations of each system, following a field-sweep procedure starting at negative applied fields, with a summary of the results shown in figure 5 (further data shown in supplementary data figures S11-S13). Simulations 1, 2 and 3 correspond to parameters selected to model the x = 0.03, 0.27 and 0.37 experimental systems, respectively. The visualizations of the simulations in figures 5(a)-(c) show a good agreement with the experimental behaviour in figure 2: the formation of skyrmions at high temperatures is reproduced, while the observed crossover from stripe domain formation to uniform magnetization switching is seen in simulations 1 and 2. The results are more clearly visible in the simulated magnetic phase diagrams in figures 5(d)-(f). Comparison to figures 3(a)-(c) shows a reasonable agreement with the experimentally determined phase diagrams. In particular, specific features such as the applied field asymmetry of both the high temperature skyrmion pockets and the stripe domain formation (due to the directional field-sweep procedure) are reproduced. The main features that are poorly replicated are the switching fields at low temperatures, which might be explained by the presence of thermal fluctuations or defects allowing easier switching in the experimental system. Note that we did not attempt to reproduce the experimental phase diagram following the FC procedure, but expect that similar agreement would be achieved. The results demonstrate that the balance of the dipolar energy, from M_S, and the anisotropy, from K_U, is responsible for the magnetic phase diagrams of these FGT flakes.
Composite skyrmion formation
In a previous work, we reported the observation of composite skyrmions in exfoliated FGT flakes [51], with topological charge |Q| ≠ 1 (the skyrmions shown so far have Q = −1). Their formation was realized by the seeding of loop-like states within the stripe domain state when following a zero-field cooling (ZFC) procedure, from which skyrmionium (Q = 0) [78][79][80][81], skyrmion bag (Q > 1) [52,53,[82][83][84] and skyrmion sack (Q < −1) [53,54,85] states emerged upon increasing an out-of-plane magnetic field. To investigate the formation of these magnetic states in the Fe3−xGeTe2 flakes, we performed imaging following the ZFC procedure to a range of temperatures. As exemplified in figures 6(a)-(c), in all three flakes, loop-like seed states were observed within the stripe domains upon ZFC (further data shown in supplementary data figures S14-S16). In many cases, isolated skyrmioniums emerged upon increasing the applied out-of-plane magnetic field, and occasionally more complex composite skyrmion states with |Q| > 1 were created.
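The topological charge used to classify these textures can be estimated numerically from a discretized unit magnetization field via Q = (1/4π) ∫ m · (∂_x m × ∂_y m) dx dy. A minimal finite-difference sketch follows (a lattice solid-angle method would be more accurate for coarse grids; array shapes and names are illustrative):

    import numpy as np

    def topological_charge(m):
        """Estimate Q for a unit-vector field m of shape (Nx, Ny, 3)."""
        dmdx = np.gradient(m, axis=0)   # finite-difference spatial derivatives
        dmdy = np.gradient(m, axis=1)
        density = np.einsum('xyi,xyi->xy', m, np.cross(dmdx, dmdy))
        return density.sum() / (4 * np.pi)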
The number of skyrmionium states (N_SkM) observed in each FGT flake is plotted as a function of temperature in figure 6(d). For all three samples, there appears to be a higher probability of forming skyrmionium states at higher temperatures, close to T_C. For the x = 0.03 and 0.27 compositions, the probability to form skyrmionium states decreases significantly at lower temperatures. On the other hand, in the x = 0.37 FGT flake, the number of observed skyrmionium states increases at lower temperatures. Figures 6(e)-(g) plot the observation of skyrmionium states as a function of both applied field and temperature, for the x = 0.03, 0.27 and 0.37 samples, respectively. The spot sizes indicate the average size of skyrmionium states observed at each temperature and field point. These phase diagrams highlight a dramatic change in skyrmionium formation characteristics across the composition series. In general, once formed, an individual skyrmionium state will decrease in size with increasing applied magnetic field [51]. However, as revealed by figures 6(e)-(g), most often the skyrmionium states with larger size emerge at, and survive to, higher applied magnetic fields (see supplementary data figure S17 for more details on skyrmionium sizes). Note that due to the limited time for central facility-based measurements, we have limited statistics for these measurements. Due to the random nature of the seeded formation mechanism, a more thorough analysis would require repeated imaging runs to improve the averaging process.
Nevertheless, it is interesting to speculate on the observed differences between the FGT compositions. For example, the increased chance to form composite skyrmions at lower temperatures in the x = 0.37 flake may be related to the lack of low temperature monodomain switching behaviour observed in the FS measurements presented in the previous figures. One might expect that the observed increased stability of stripe domains, likely due to the decreased uniaxial anisotropy lowering the cost of forming magnetic domain walls, would be extended to the skyrmionium states. Alternatively, we can expect that nonstoichiometric samples exhibit increased disorder and defect density within the crystal structure, which may act as pinning sites and in turn stabilize the observed magnetic textures. As further evidence for this possible pinning effect, we note that the composite skyrmion states observed in the x = 0.37 flake often exhibited irregular shapes, while in the x = 0.03 sample they were often closer to circular, as one would expect for an unpinned domain structure.
Micromagnetic simulations
To investigate the stability of composite skyrmion states for varying magnetic properties, we performed multiple micromagnetic simulations over a range of saturation magnetization M_S (100 to 400 kA m−1) and out-of-plane anisotropy field H_K (100 to 400 mT), including terms for the demagnetizing field and a small interfacial-like DMI, with D = 0.14 mJ m−2, which reproduces the Néel domain walls observed experimentally in similar FGT flakes. We initialized the simulation while including temperature fluctuations, and then gradually reduced these while minimizing the energy of the system in an attempt to model the experimental ZFC procedure (see methods, supplementary data figure S18). Once at zero fluctuations, we increased the applied field in a stepwise fashion. A summary of the results is shown in figures 7(a)-(p), where we plot the applied-field step at which the system exhibited the most composite skyrmion spin textures. It is evident that a variety of loop-like composite skyrmion states are realized over a wide range of micromagnetic parameters, as shown by figures 7(q) and (r).
First we shall consider the results for M_S ⩾ 200 kA m−1 in more detail, shown in figures 7(e)-(p). The domain sizes in these sets of results exhibit typical behaviour for a magnetic film with out-of-plane anisotropy, where the domain formation is dominated by the dipolar interaction [86]. With increasing M_S, the typical domain sizes become smaller, while with increasing H_K the domain wall energy cost increases, leading to larger domains [87]. For smaller H_K values, we also observed the possibility of stabilising Bloch point-like structures within the magnetic domain walls, which have been termed chiral kinks [54], and which further modify the topological charge of the
object by introducing additional winding, as shown in figures 7(s) and (t). We note that in simulations performed with a slower cooling rate the number of such Bloch points was reduced (see supplementary data figure S19), tentatively indicating that fast cooling rates (exceeding 10 K min−1) may be required to realize them in experimental samples. The trend in domain size is reversed in the results of the M_S = 100 kA m−1 simulations, where the domain sizes are considerably smaller. We argue that this is caused by the onset of negative domain wall energy owing to the dominance of the DMI contribution for sufficiently small M_S [76] (see supplementary data figure S19 for a simulation with zero DMI). In this DMI-dominated regime [88], the diversity and density of composite skyrmion states is increased. Given the large M_S of the experimentally investigated FGT flakes, it is likely that the investigated samples are in the dipolar-dominated regime.
Because our x-ray microscopy method is insensitive to the in-plane magnetisation, we cannot determine the nature of the magnetic domain walls in this work. However, in previous Lorentz transmission electron microscopy measurements of flakes from the x = 0.03 and x = 0.37 crystals, we observed magnetic contrast consistent with Néel-type domain walls [38, 51]. One could approach this from the simulation side, utilizing micromagnetics to estimate the size of the DMI. However, one significant challenge is that typical micromagnetic simulation methods utilize an isotropic exchange interaction. In layered magnets such as FGT, the intralayer and interlayer exchange couplings can be expected to differ. Therefore, further work is needed to develop truly quantitative micromagnetic simulations of the spin textures in 2D magnets. Nevertheless, our simulations show that in both dipolar- and DMI-dominated systems the general seeding of composite skyrmions is similar, demonstrating that the zero-field-cooling seeding mechanism may be applicable to a wide range of materials; this should be a fruitful avenue for future research.
Conclusion
We have presented comprehensive x-ray microscopy results on the magnetic spin textures hosted in exfoliated flakes of Fe3−xGeTe2 with Fe deficiency between x = 0.03 and 0.37, observing the formation of stripe domain, skyrmion and composite skyrmion states. By mapping the magnetic phase diagrams, we identified significant differences between the compositions. Specifically, close to x = 0.00, FGT exhibits monodomain switching behaviour at low temperatures, while in the x = 0.37 sample the formation of stripe domains was maintained down to the base temperature of our instrument (30 K). Supporting mean-field simulations indicate that this behaviour is due to the decrease of the uniaxial anisotropy, which could be realized by an alteration of the underlying band structure with higher Fe deficiency.
Moreover, we observed that skyrmionium and other composite skyrmion states were realizable across all compositions, emerging from seeded states in the stripe domain phase after a zero-field cooling procedure. The density and temperature-field extent of the formation was much greater in the x = 0.37 sample, although it is unclear whether this is due to the change in uniaxial anisotropy or due to pinning effects. Accompanying micromagnetic simulations demonstrated that skyrmionium states should be realizable across a wide range of anisotropy and saturation magnetization, in both the DMI- and dipolar-dominated regimes. The results provide a foundation for future compositional studies of spin textures in FGT flakes, and emphasize that composite skyrmion states may exist in a wide range of magnetic systems. The relationship between changes in the electronic structure and the properties of the hosted spin textures suggests that interesting results can be expected from other methods capable of manipulating the band structure, such as electrostatic or ionic gating. In the future, the combination of FGT flakes with additional 2D materials will allow tuning of the relevant magnetic energy terms by proximity effects, enabling greater control of the magnetic spin textures.
Sample preparation
Bulk crystals of Fe3−xGeTe2 were grown via the chemical vapour transport technique. Stoichiometric quantities of Fe (STREM Chemicals, Inc., 99.99%), Ge (Acros Organics, 99.999%), and Te (Alfa Aesar, 99.99%) powders, along with 5 mg cm−3 of iodine as the transport agent, were sealed within evacuated quartz ampules. The single-crystal growth was performed using a two-zone furnace by holding the source and sink parts of the tube at 750 °C and 675 °C, respectively, for 2 weeks, before cooling to room temperature. The growth procedure resulted in the formation of silvery metallic platelet single crystals with areas of ~2 × 2 mm2. Energy-dispersive x-ray spectroscopy measurements in a scanning electron microscope determined the compositions of the four chosen single-crystal samples to be x = 0.03, 0.10, 0.27, and 0.37, as reported previously [61].
To prepare the flake samples on the Si3N4 membranes for the x-ray transmission microscopy measurements, we utilized an all-dry viscoelastic transfer method. Flake samples were prepared from crystals with compositions x = 0.03, 0.27, and 0.37. The bulk FGT flakes were mechanically cleaved and exfoliated onto a PDMS stamp using Nitto scotch tape. By inspecting their contrast under an optical microscope, flakes with a thickness of around 80 nm were selected and stamped onto a Si3N4 membrane. Subsequently, a similar exfoliation and stamping transfer procedure was performed to cap the FGT flakes with larger hBN flakes of thickness ~15 nm. All fabrication steps were performed under ambient conditions, such that each side of the FGT flakes was exposed to the atmosphere for ~20 min. Our previous measurements indicate that this results in an oxidized FGT layer with an approximate thickness of 6 nm on both sides of the FGT flake [38]. The thicknesses of the resulting FGT/hBN heterostructures were measured by atomic force microscopy with a Bruker Dimension Icon, as shown in supplementary data figure S2.
Magnetometry measurements
Magnetometry measurements were carried out using a Quantum Design MPMS3 vibrating sample magnetometer. Bulk crystals of each FGT composition were fixed to a quartz rod sample holder using GE varnish. Measurements were performed for each sample with the field aligned both along the c crystal axis (H ∥ c) and in the ab plane (H ⊥ c). The sample temperature and applied magnetic field were controlled by the built-in helium cryostat. The raw data, which yield values of the magnetic moment, were converted to SI units of magnetization using the measured mass of each crystal and assuming a density of 7.3 g cm−3 for all samples. Measurements of the sample magnetization were acquired as a function of both sample temperature and applied magnetic field.
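As an illustration of the unit conversion described above, the sketch below converts a measured moment to an SI magnetization using the crystal mass and the assumed density; the moment and mass values are hypothetical placeholders, and only the 7.3 g cm−3 density is taken from the text.

```python
# Hypothetical numbers for illustration.
moment_emu = 0.55          # measured magnetic moment (1 emu = 1e-3 A m^2)
mass_g     = 0.012         # measured crystal mass (grams)
density    = 7.3e3         # assumed FGT density, kg/m^3 (= 7.3 g/cm^3)

volume_m3 = (mass_g * 1e-3) / density      # sample volume from mass / density
M_SI = (moment_emu * 1e-3) / volume_m3     # magnetization in A/m
print(f"M = {M_SI / 1e3:.0f} kA/m")
```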
An approximate value of the uniaxial anisotropy at each temperature was estimated from the magnetization versus field data measured along both crystal orientations using the following process. A linear trend was fitted to the highest-field (saturated-state) data points of the measured hysteresis loop at each temperature, and the resulting fit was used as a diamagnetic and paramagnetic background subtraction. This allowed a determination of the saturation magnetization, M_S, at each temperature. From this data, the effective anisotropy K_eff of the bulk sample was estimated from the work done, W, to reach M_S along each crystal orientation Ĥ,

W_Ĥ = μ₀ ∫₀^{M_S} H dM.   (2)

Thus, by integrating over the magnetization versus field curves for each orientation, the measured effective anisotropy could be calculated as K_eff = W_{H∥c} − W_{H⊥c}. Part of this measured anisotropy is the result of the shape anisotropy, which is particularly strong in the plate-like FGT bulk samples. This shape anisotropy K_sh, or demagnetization effect, can be calculated from M_S and the demagnetization factor N_d = 0.93, estimated from the dimensions of the plate-like samples, by

K_sh = −(μ₀/2) N_d M_S².   (3)

The negative sign highlights the fact that the shape anisotropy for a plate-like sample seeks to align spins in the plane. Thus, the magnetocrystalline uniaxial anisotropy K_U can be approximated by K_U = K_eff − K_sh. Finally, the anisotropy field H_K can be extracted as

H_K = 2K_U / (μ₀ M_S).   (4)

These calculations were performed for all measured bulk samples, as shown in figure 4 of the main text.
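The procedure of equations (2)-(4) can be summarised in a short numerical sketch. The snippet below assumes background-subtracted M(H) branches (arrays in A/m) for both orientations and mirrors the sign conventions used in the text; it is a sketch, not the analysis code used for figure 4.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m/A)

def work_to_saturate(H, M):
    """W = mu0 * integral of H dM from 0 to Ms, for one background-subtracted
    M(H) branch ordered from zero field up to saturation (H, M in A/m)."""
    return MU0 * np.trapz(H, x=M)

def uniaxial_anisotropy(H_c, M_c, H_ab, M_ab, Ms, Nd=0.93):
    """Eqs. (2)-(4): K_eff = W(H||c) - W(H perp c), K_sh = -(mu0/2)*Nd*Ms^2,
    K_U = K_eff - K_sh, H_K = 2*K_U/(mu0*Ms). Sign conventions as in text."""
    K_eff = work_to_saturate(H_c, M_c) - work_to_saturate(H_ab, M_ab)
    K_sh = -0.5 * MU0 * Nd * Ms**2        # plate-like shape anisotropy (< 0)
    K_U = K_eff - K_sh
    H_K = 2.0 * K_U / (MU0 * Ms)          # anisotropy field in A/m
    return K_U, H_K
```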
Scanning transmission x-ray microscopy
STXM measurements were performed with the MAXYMUS endstation at the UE46 beamline of the BESSY II electron storage ring operated by the Helmholtz-Zentrum Berlin für Materialien und Energie. Each Si3N4 membrane holding the FGT/hBN stacked flake samples was fixed to the sample holder using GE varnish. The sample holder was placed into the vacuum chamber of the instrument, with cooling achieved by a liquid He cryostat. Control of the applied out-of-plane magnetic field was achieved by varying the arrangement of four permanent magnets within the sample chamber. The incoming x-ray beam was focused to a small spot using a Fresnel zone plate with a central stop, and the remaining zeroth-order light was isolated by positioning the order selection aperture. To acquire an x-ray micrograph, the sample was rastered through the focused beam using a piezoelectric motor stage, with the x-ray transmission measured pixel by pixel with an avalanche photodiode. Magnetic contrast was achieved by exploiting the XMCD effect, whereby the magnetization along the x-ray propagation direction, m_z, alters the absorption of circularly polarized x-rays (see supplementary data figure S3). Thus, an image of the m_z component of the magnetic spin texture can be acquired. An XMCD signal can be found at the resonant L3 and L2 absorption edges of the magnetic element within the sample; in this case we selected the Fe L3 edge, with a nominal x-ray energy of 707.5 eV. All presented images of the magnetic domain structure were recorded using a single (positive) circular x-ray polarization. In some cases, such as in figure 4 of the main text, the magnetic contrast was isolated from the structural contrast by subtracting an image of the sample saturated at an applied field of +250 mT, leaving only the contrast associated with the magnetic domains.
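The image processing used to isolate the magnetic contrast amounts to a simple subtraction, sketched below for 2D transmission images; the function name and array handling are our own.

```python
import numpy as np

def magnetic_contrast(image, saturated):
    """Remove structural absorption contrast from an XMCD image.

    Both inputs are 2D transmission images recorded with the same (positive)
    circular polarization; `saturated` is recorded at +250 mT, where the
    flake is uniformly magnetized, so the difference leaves only the
    contrast associated with the out-of-plane magnetization m_z."""
    return image.astype(float) - saturated.astype(float)
```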
Mean-field simulations
To perform temperature-dependent simulations we developed a mean-field model based on the standard classical spin Hamiltonian [77], with spins distributed on a 2D hexagonal lattice of 30 × 30 spins (approx. 150 nm × 150 nm) with periodic boundary conditions in the plane. The mean-field energy reads

E = −Σ_⟨ij⟩ J_E (μm_i)·(μm_j) − Σ_⟨ij⟩ J_D d_ij·[(μm_i) × (μm_j)] − J_K Σ_i (m̂_i·n)² − Σ_i (μm_i)·B + E_DP,   (5)

where μm represents the vector moment of the mean-field spins, with μ being the zero-temperature moment magnitude and m_i the temperature-dependent magnetic moment vector, of length normalized to take values 0 ⩽ |m_i| ⩽ 1. The individual energy terms, starting from the left, correspond to exchange, DMI, uniaxial anisotropy, Zeeman energy, and finally the dipolar energy E_DP with coupling constant J_DP. The symbol ⟨·⟩ appearing in the first two sums of equation (5) denotes nearest-neighbour coupling, and m̂_i in the anisotropy term is a vector of unit length. The interaction constants J_E, J_D, and J_DP are in units of Joule × μ−2. The exchange and anisotropy energy constants J_E and J_K are temperature dependent (see supplementary data for details). The temperature dependence of J_D is weak and is ignored; thus the DMI energy term depends on temperature only through the moments m_i. The DMI vector d_ij is oriented in the lattice plane, perpendicular to the line connecting two neighbouring spins i and j. The anisotropy unit vector n is oriented along the ẑ-axis, perpendicular to the lattice plane. The spin moment m_i has a temperature-dependent magnitude following the expression

m_i = L(μ|B_i^e| / (k_B T)) B_i^e / |B_i^e|,   (6)

where L(x) = coth x − x−1 is the Langevin function, k_B is the Boltzmann constant, and T the temperature. The vector B_i^e represents the effective field acting on the spin m_i and |B_i^e| is its magnitude. The expression for the effective field can be obtained from equation (5) by calculating the variational derivative

B_i^e = −(1/μ) ∂E/∂m_i.   (7)

Thus, according to equations (6) and (7), the mean-field spin m_i depends on the exchange and DMI energy couplings, the material anisotropy, the external and dipolar fields, and also on the temperature T. The magnetization structures for a given set of parameters are evaluated by minimizing equation (5) during the field or temperature evolution, starting from uniquely defined initial states: a fully saturated uniform state in the case of the field-sweep computations, or a random vector distribution in the temperature-cooling computations starting from the paramagnetic state [77]. The parameters required to generate the results in this work and the temperature dependencies of J_E and J_K are discussed in detail in the supplementary data. The supplementary material identifies the relationship between the model constants J_E, J_D, J_K and their micromagnetic equivalents A (J m−1), D (J m−2) and K (J m−3).
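The self-consistency expressed by equations (6) and (7) reduces, for a uniform state in zero DMI and anisotropy, to the classical Weiss construction. The following sketch iterates a single-sublattice version, with z nearest neighbours replacing the lattice sums; this reduction to one sublattice is our simplification of the full model.

```python
import math

KB = 1.380649e-23  # Boltzmann constant (J/K)

def langevin(x):
    """L(x) = coth(x) - 1/x, with the x -> 0 limit x/3 to avoid 0/0."""
    if abs(x) < 1e-6:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def weiss_moment(T, J_E, mu, z=6, B=0.0, m0=1.0, tol=1e-12, max_iter=100000):
    """Fixed point of m = L(mu*|B_eff|/(kB*T)) with B_eff = z*J_E*mu*m + B.

    J_E is the exchange constant in J/mu^2 as in equation (5), mu the
    zero-temperature moment (J/T), z the number of nearest neighbours on
    the hexagonal lattice, and B an external field in tesla."""
    m = m0
    for _ in range(max_iter):
        b_eff = z * J_E * mu * m + B             # uniform-state form of eq. (7)
        m_new = langevin(mu * b_eff / (KB * T))  # eq. (6), magnitude only
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m
```

In this approximation the ordering temperature is T_C = z J_E μ² / (3 k_B); below it the iteration converges to a finite |m|, above it to zero.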
Micromagnetic simulations
Micromagnetic simulations were performed using the MicroMagnum framework, solving the Landau-Lifshitz-Gilbert (LLG) equation. We added custom extensions to include a DMI tensor and temperature fluctuations via the effective field approach [89]. The simulated system had dimensions of 1 × 1 × 0.02 µm3, with cell volumes of 1 × 1 × 20 nm3. The simulations were carried out for a range of M_S and H_K values, with the exchange and DMI constants set to A = 0.7 pJ m−1 and D = 0.14 mJ m−2, respectively. The thermal fluctuations utilized an Euler-type solver with a spatially and temporally decorrelated thermal field and a fixed 10 fs time step [90]. To model the experimentally realized ZFC procedure, the system was initialized with random magnetization at the micromagnetic T_C (the temperature setting calibrated to the point where the magnetization becomes fully random). The temperature was subsequently reduced to 0.3 T_C within the next 3.5 ns and then held at that temperature for another 2.5 ns; the entire ZFC process thus took 7 ns in total. In addition to the thermal fluctuations, the saturation magnetization was scaled with the decreasing temperature following the experimentally observed behaviour (and thus also the anisotropy, following K_U = (μ₀/2) M_S H_K + (μ₀/2) M_S², with the effective anisotropy field H_K; in this definition H_K = 0 mT corresponds to an out-of-plane to in-plane transition). Once at low temperatures, the thermal fluctuations were switched off, and the applied magnetic field was subsequently increased in steps of 10 mT, with the system relaxed at each step (the initial state after cooling is shown in supplementary data figure S18). No initial magnetization pattern or manipulation was used, such that the results present a close representation of a zero-field cooling process through T_C.
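To make the cooling protocol concrete, the sketch below reproduces the temperature schedule and the anisotropy relation quoted above. The 1 ns initial hold at T_C is our assumption for how the 7 ns total splits, since only the ramp and final hold durations are stated.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m/A)

def zfc_temperature(t_ns, Tc):
    """Simulated ZFC schedule: hold at Tc (assumed 1 ns), ramp to 0.3*Tc
    over 3.5 ns, then hold at 0.3*Tc for the remaining 2.5 ns (7 ns total)."""
    if t_ns <= 1.0:
        return Tc
    if t_ns <= 4.5:
        return Tc * (1.0 - 0.7 * (t_ns - 1.0) / 3.5)
    return 0.3 * Tc

def K_U(Ms, BK):
    """K_U = (1/2)*Ms*B_K + (mu0/2)*Ms^2, with B_K = mu0*H_K in tesla.

    For B_K = 0 the anisotropy exactly balances the thin-film shape term,
    i.e. the out-of-plane to in-plane transition mentioned in the text."""
    return 0.5 * Ms * BK + 0.5 * MU0 * Ms**2

# Example: Ms = 200 kA/m and an anisotropy field of 200 mT
print(f"K_U = {K_U(200e3, 0.2):.3g} J/m^3")
for t in (0.0, 1.0, 2.75, 4.5, 7.0):
    print(f"t = {t:.2f} ns -> T = {zfc_temperature(t, Tc=1.0):.2f} Tc")
```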
In these simulations, we utilized temperature fluctuations to model the introduction of randomness observed in the stripe domain state following the experimental ZFC process. However, the included thermal fluctuations are only an approximate model, due to the inability of the LLG equation to vary the magnitude of the magnetic moment, and because of the high-temperature cutoff of spin waves imposed by the finite cell size. Furthermore, although we present results obtained with a range of micromagnetic parameters, we note that the utilized parameters are not equal to the experimentally measured ones for each FGT flake. They do, however, correspond to effective values that reproduce the relative sizes and morphology of the spin textures in the available simulation area; simulating the full size of an FGT flake of several tens of µm would not be feasible.
Figure 1. Fe3−xGeTe2 bulk and flake sample characterization. (a) Magnetization M measured as a function of temperature for four FGT bulk crystals with differing composition (denoted by the Fe deficiency value x), with a field of 30 mT applied along the c crystal axis. The Curie temperature TC of each sample is indicated by the vertical dashed lines. (b) Magnetization measured as a function of temperature for each composition, with a field of 3 mT applied perpendicular to the c crystal axis. The TC and second transition temperature T* of each sample are labelled. (c) The extracted values of TC (purple), T* (orange) and the saturation magnetization MS (determined at 5 K) of each composition, plotted as a function of x. Error bars on the T values indicate the uncertainty due to broadening of the M curve. (d) Crystal structure of Fe3−xGeTe2 (FGT) from the side and top viewpoints. The extent of the unit cell and the identity of the Fe I, Fe II, Ge and Te atom sites are labelled. The orientation of the crystal axes is indicated. (e)-(g) Optical microscopy images of the hBN-capped exfoliated FGT flakes on the Si3N4 membranes, for the x = 0.03, 0.27 and 0.37 compositions, respectively. The region of interest (ROI) indicates the location of the scanning transmission x-ray microscopy (STXM) measurements in subsequent figures. (h)-(j) STXM images of skyrmions at the ROI created by field-cooling (FC) each flake from above TC to the indicated temperatures, under an applied field of 20 mT. The colour map indicates the out-of-plane component of the magnetization, mz (arbitrary units).
Figure 2. Scanning transmission x-ray microscopy measurements following the field-sweep procedure. (a)-(o) X-ray micrographs of the Fe3−xGeTe2 (FGT) flake samples measured as a function of temperature and applied magnetic field for the x = 0.03 ((a)-(e)), 0.27 ((f)-(j)) and 0.37 ((k)-(o)) flakes, respectively. The images were taken as a function of increasing out-of-plane applied magnetic field, starting in the saturated state at −250 mT, as indicated by the orange arrows. The temperature as a fraction of the Curie temperature TC is labelled. The images were taken at the regions of interest shown in figure 1. The colour map indicates the out-of-plane component of the magnetization, mz (arbitrary units).
Figure 3. Composition-dependent magnetic phase diagrams. (a)-(f) Magnetic phase diagrams determined by x-ray microscopy of the three Fe3−xGeTe2 (FGT) flakes as a function of temperature and applied field. Results following the field-sweep (FS) procedure (a)-(c) and the field-cooling (FC) procedure (d)-(f) are shown for each composition x = 0.03, 0.27 and 0.37, as labelled. Red arrows in (a) and (d) indicate the measurement path of each field-temperature protocol. The extent of the skyrmion (Sk), stripe domain (SD) and uniformly magnetized monodomain (MD±) states is shown by the red, blue and grey regions, respectively. Markers indicate the measured phase boundary points of the Sk (red squares) and SD (blue circles) states. The vertical dashed lines indicate the measured values of TC. For (a)-(c), additional data points exist at higher fields; here we have focused on the details in the low-field regions.
Figure 4. Composition dependence of magnetization reversal and uniaxial anisotropy. (a)-(c) Measurements of the magnetization M versus applied field µ0H, at 5 K, of each Fe3−xGeTe2 bulk crystal, with compositions x = 0.03, 0.27 and 0.37, respectively. Measurements were acquired with the magnetic field H applied both parallel (purple triangles) and perpendicular (orange circles) to the c crystalline axis. (d) Extracted values of the saturation magnetization MS as a function of temperature T for each FGT composition, x = 0.03, 0.10, 0.27 and 0.37. (e) Extracted values of the uniaxial anisotropy KU of each FGT sample as a function of T, taken from the H ∥ c data. The shape anisotropy contribution was subtracted using an estimate of the sample dimensions and the measured MS values (see methods). The inset plots the calculated value of the anisotropy field HK as a function of T for each composition.
Figure 5. Mean-field simulations modelling Fe3−xGeTe2 flakes. Simulations 1-3 correspond to systems based on parameters for the x = 0.03, 0.27 and 0.37 experimental compositions, respectively. (a)-(c) Selected visualizations of simulated states acquired following a field-sweep procedure starting from negative applied fields. The colour map indicates the out-of-plane component of the magnetization, mz (arbitrary units). (d)-(f) Simulated magnetic phase diagrams of the three systems. The extent of the skyrmion (Sk), stripe domain (SD) and uniformly magnetized monodomain (MD±) states is shown by the red, blue and grey regions, respectively. Markers indicate the sampled positions in phase space. The simulated values of TC are indicated by the vertical lines.
Figure 6. Observation of skyrmionium states when zero-field cooling Fe3−xGeTe2 flakes. (a)-(c) Exemplary x-ray micrographs of the x = 0.03, 0.27 and 0.37 FGT flakes, respectively. Images were recorded as a function of increasing applied magnetic field, after zero-field cooling (ZFC) to the target temperature from above TC. The formation of skyrmionium states is highlighted with yellow dashed rings, and the edges of the flakes with green dashed lines. The colour map indicates the out-of-plane component of the magnetization, mz (arbitrary units). In (b), the inset shows a zoomed view of one skyrmionium. (d) The total number of skyrmionium states NSkM observed in each FGT flake during the ZFC field sweep performed at each temperature. The estimated TC of each FGT flake is indicated by the vertical dashed lines. (e)-(g) The observation of skyrmionium states as a function of temperature and applied magnetic field for the x = 0.03, 0.27 and 0.37 FGT flakes, respectively. The average size of the skyrmionium states present at each temperature-field point is indicated by the size of the corresponding marker.
Figure 7. Micromagnetic simulations of composite skyrmion formation. (a)-(p) Selected field points of micromagnetic simulations modelling the zero-field cooling (ZFC) seeding procedure for a range of saturation magnetization MS and anisotropy field HK. After ZFC, the temperature is effectively 0 K. (q)-(t) Zooms into regions containing a skyrmion and skyrmionium (q), a skyrmion bag (r), and objects with Bloch points (or chiral kinks) embedded within their domain walls (s), (t). The colour map indicates both the out-of-plane magnetization component, mz (black and white), and the in-plane components, mx and my (rainbow colour wheel).
"Physics",
"Materials Science"
] |
Analysis of Traffic Characteristics in Sylhet City and Development of Utility Function
Sylhet, a metropolitan city in the northeastern part of Bangladesh, faces severe traffic problems due to rapid and unrestrained development. This is caused by an intolerable imbalance between transportation demand and supply. To assess the severity of the existing traffic system, beset by enormous traffic problems in Sylhet city, a study was conducted by the Department of Civil and Environmental Engineering. The purpose of this study is to characterize traffic in Sylhet city and to develop utility functions for several modes of vehicles. The floating car (moving observer) method was used to determine the traffic characteristics, and multiple regression was used to develop the utility functions. Results show that the city centre (Bondor to Amborkhana) has the highest traffic flow and congestion. It is expected that these outcomes will assist in the development of future traffic models and help prevent traffic congestion in Sylhet city.
Introduction
Cities are the powerhouses of economic growth for any country, and the transportation system provides a convenient means of movement as well as the medium to reach destinations. An inappropriate transportation system affects economic activities and creates hindrances to development. The volume of traffic has increased rapidly over the past several years, and it has become increasingly necessary to understand the dynamics of traffic flow and to obtain a mathematical description of the process. Lack of management in a developing country like Bangladesh often fails to cope with the pressure of increasing population growth and economic activity in the cities, causing uncontrolled expansion of urban areas.
Sylhet city is located in the northeastern hilly region of Bangladesh and has been growing since its establishment. A high migration rate is observed in Sylhet city, with a population growth rate of 4% per annum compared to 2.01% per annum for Bangladesh as a whole (Rahman, 2000; Ahmed, 1994). The population of Sylhet city was about 0.2 million in 1991, but by 2005 it was about 0.7 million, making it the fourth most populous city in Bangladesh (BBS, 1991; SCC, 2005). Banik (2009) stated that traffic congestion is terrible in Sylhet city and suggested that future studies with better results could bring better solutions. The floating car method, also known as the moving observer method, can be applied to investigate this (Arai & Sentinuwo, 2013; Banik, 2009). This paper applies the moving observer (floating car) method to analyze present traffic characteristics, anticipate future conditions from the results, and evaluate selected alternative strategies for traffic flow problems in Sylhet city; utility functions for rickshaw, auto rickshaw, and city bus were developed using multiple regression analysis. The paper is divided into six sections: the first is the introduction, where the general study and its motivation are presented; the second is the literature review, which covers previous studies; the third is the methodology, explaining the methods followed for the analysis; the fourth is the data analysis, presenting the details of the results; the fifth is the results and discussion; and the last is the conclusion and recommendations.
Literature Review
In transportation engineering and planning, transportation demand analysis plays several important roles. It helps in understanding the long-range social and environmental implications of decisions about transportation systems. Short-range predictions of passenger or vehicular flows are used by designers to develop operations, size facilities, devise control strategies, and assess the impact of land development and transportation projects. The main goals of transportation demand analysis are to describe travel conditions in meaningful terms, explain travel behavior, and predict demand for various types of transportation services. The transportation system in Sylhet city is predominantly road based. Sylhet city has several types of vehicles, and different transport modes are used by different income groups: the higher income group (car, taxi service, microbus and other private vehicles), the middle income group (rickshaw, bicycle, motorcycle, carriage, car, bus and minibus), the lower income group (tempo, bus), and goods delivery (truck, pickup, van, human-driven van). The increasing population raises vehicle flow, which causes congestion in traffic flow. Some researchers focus on theoretical aspects that incorporate driver behavior into the transportation model (Arai & Sentinuwo, 2013). A study was conducted in India to investigate the effect of variation of traffic composition, magnitude of upgrade, and road width and length on highway capacity (Arasan & Arkatkar, 2011).
There are two approaches to the moving observer method: the floating car procedure, and the approach for urban traffic measurements developed by Wardrop and Charlesworth (1954). The moving observer method is a way of estimating the average flow and travel time of traffic travelling in either direction over a road link entirely from measurements made from a vehicle moving with and against the stream under non-congested conditions. Methods based on vehicle location (Floating Car Data) are a promising cost-effective solution to overcome some limitations of fixed detectors. The appeal of the method is its capability to estimate average traffic parameters over a highway section and over long measurement periods, rather than obtaining measurements at a point. Many researchers have studied traffic flow to address various problems, such as the high cost of installing detector infrastructure, the sparse presence of detectors, and the limited accuracy of speed determination. The harmonic mean speed calculated may diverge considerably from the space mean speed because it is unknown how many vehicles do not cross the entire section (Gartner, 1997). Most traffic design manuals assume that the average capacity of each highway lane is equal, but empirical research has shown that the average capacity per lane decreases with an increase in the number of lanes (Yang & Zhang, 2005). A study of the relationship between free-flow speed, posted speed limit and geometric design variables on 35 four-lane urban streets in Virginia has been conducted (Ali et al., 2007). Kerner et al. (2006) explained that a qualitative study of data from a series of point detectors could reveal the dominant traffic characteristics of long highway sections.
Methodology
Transportation demand analysis plays various important roles in transportation engineering and planning. Short-range predictions of passenger or vehicular flow are used by transportation designers to size facilities, devise control strategies, develop operations, and assess the impact of land development and transportation projects. The goal of transportation demand analysis is to describe travel in meaningful terms, explain travel behavior, and, on the basis of an understanding of travel behavior, predict demand for various types of transportation. The traffic engineer's main target is to analyze the behavior of traffic and to design for smooth, safe and economical traffic operation. Understanding traffic behavior requires exhaustive knowledge of traffic stream parameters and their mutual relationships, which have been explained in detail by Tom (2006). Speed is one of the basic parameters of traffic flow, and there are two representations of speed (time mean speed and space mean speed). The floating car observer was proposed to detect opposing vehicles on a two-way road using a travelling public transport vehicle (Hoyer et al., 2006); similar steps were followed for data collection in this paper. The representations of speed and the relationships between them, the relationships between the fundamental parameters of traffic flow, and their graphical form resulting in the fundamental diagrams of traffic flow have been detailed by Tom (2006).
One new traffic approach, called DYNAMIC, was developed by DLR to combine the advantages of the Floating Car Observer with wireless radio-based technologies (Ruppe et al., 2012; Gurczik et al., 2012). Bluetooth-based Floating Car Observers, also known as detectors, are used for traffic monitoring. Later, an Interlaced Scan Mode was implemented with a new model, a modified version of the detection probability distribution function (Gurczik & Behrisch, 2015).
Figure 1. Flow chart showing the steps followed for the research.

Data were collected by the researchers throughout the year; this study was conducted by the Department of Civil and Environmental Engineering to assess the severity of the existing traffic system, beset by enormous traffic problems in Sylhet city. The planning of the survey is a combination of technical and organizational decisions. Surveys can be classified into two types: quantitative and qualitative. The field survey can be classified by field supervision, type of data collection, type of sampling, and method of tabulation. The selected study area comprises 26.5 km2 of the central urban portion of Sylhet city. The administrative authority of this portion is the Sylhet City Corporation (SCC), and it contains all major government and private commercial activities. Due to improper planning and control over land use activities, people from various districts rush to this place, making it a dense mix of residential, commercial and business centres. This paper considers the major links of Sylhet city that are highly congested, as also suggested by the SCC, i.e. Modina market, Ambarkhana, Bondor, Uposhor and Kodomtoli. Those links also contain major market areas as well as offices, bus stops, etc. According to the Bangladesh Bureau of Statistics (BBS, 1991), the total population of this area was about 0.2 million, and by 2009 the population had risen to about 0.64 million (SCC, 2005). The resulting traffic congestion causes air and noise pollution problems much greater than in other peripheral portions of greater Sylhet. For the analysis of the Sylhet city transportation activities, the study area under the authority of the SCC is divided into five broad sub-regions or Specified Zones (SPZ) (Banik, 2009). Figure 1 shows the steps of the research work.
Traffic Volume and Composition
Traffic volume data were collected manually at selected key locations along the main links in Sylhet city. Hourly counts were generally made in the peak period of traffic flow (8 am to 11 am). From the field survey, we collected the travel times for different types of vehicles moving through the main links, took the weighted average of the travel times of the different vehicles for each link, recorded the link distances, and calculated the average speeds.
Household Interview Data
In order to obtain relevant socio-economic data and trip information, a household survey was carried out in the SCC area. These interview data reveal the purpose of the trips made, the travel characteristics, and the mode of each trip.
The survey also reveals the status of traffic characteristics and the duration of trips made by people in Sylhet city. 100 households across the five zones, 20 from each zone, were selected randomly for this survey work.
The population of these 100 households was 973. Distributions of trips by purpose, by trip time and by trip mode were assessed from the household interviews. From the household interviews, the total number of trips was reported as 527 for the 100 households comprising 973 people. Therefore, the gross trip rate was calculated as 5.27 trips per household and 0.54 trips per person. It is evident from Figure 2 that the greatest percentage of trips was made for educational purposes (42%), followed by business trips (29%), job trips (21%) and other trips (8%). The predominant mode of trips was rickshaw (47%), followed by walking (23%), motorcycle (14%), car (13%), etc. On the other hand, Figure 2 shows that most trips were short; the largest share of trips (49%) took only 10 to 15 minutes.
Determination of Traffic Flow Using Floating Car Method
There are two approaches to the moving observer method: the floating car procedure, and the approach developed by Wardrop and Charlesworth (1954) for urban traffic measurements, which is meant to obtain both speed and volume measurements simultaneously. This method has been widely used by researchers to determine traffic characteristics, has been described as well suited to this purpose, and its steps have been explained in detail (Kontaratos, 2007; Wright, 1973; Morton & Jackson, 1992). In the floating car method, a test vehicle is driven a number of times over a selected stretch of road at approximately the average speed of the traffic stream. To find the traffic flow of Sylhet city, we took four links (Modina market to Ambarkhana, Ambarkhana to Bondor, Bondor to Uposhor, Uposhor to Kodomtoli), and data were collected four times on each link.
The method was developed by Wardrop and Charlesworth (1954) based on a survey vehicle that travels in both directions on the road while data are collected by an observer. The theory behind this method was revisited by Wright (1973), whose paper also serves as a review of the literature on the method in the two decades between the original work and his own. The formulas for estimating the flow and mean journey time for one direction of travel are

q = (x + y) / (t_a + t_w),    t̄ = t_w − y/q,

where q is the flow, x is the number of vehicles met while travelling against the stream, y is the number of vehicles overtaking the observer minus the number overtaken while travelling with the stream, t_a and t_w are the journey times against and with the stream, and t̄ is the mean journey time of the stream, from which the mean speed follows as the link length divided by t̄.
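A direct transcription of these relations into code, with illustrative (hypothetical) counts, might look as follows.

```python
def moving_observer(x, y, t_a, t_w, link_km):
    """Wardrop-Charlesworth moving-observer estimates for one direction.

    x: vehicles met while driving against the stream
    y: (vehicles overtaking the observer) - (vehicles overtaken) with the stream
    t_a, t_w: journey times against and with the stream, in hours
    link_km: link length in km
    """
    q = (x + y) / (t_a + t_w)   # flow (veh/h)
    t_bar = t_w - y / q         # mean journey time of the stream (h)
    u = link_km / t_bar         # space mean speed (km/h)
    return q, t_bar, u

# Hypothetical counts for a single run over a 2.1 km link:
q, t_bar, u = moving_observer(x=107, y=3, t_a=0.12, t_w=0.15, link_km=2.1)
print(f"q = {q:.0f} veh/h, mean journey time = {t_bar*60:.1f} min, u = {u:.1f} km/h")
```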
Development of a Utility Model
A utility function expresses the traveller's trade-off between various alternative choices or attributes of these choices (Sathish, 2013), and the generalized utility equation takes the linear form

U = a₀ + a₁x₁ + a₂x₂ + … + aₙxₙ,   (1)

where the x_i are the attributes of an alternative and the a_i their weights. The overall structure of the utility function (UF) is traditional in terms of the sequence of structural forms, i.e. the trip distribution, trip generation and attraction, utility model and trip assignment stages (Chiu, 2007). We followed multiple regression analysis to find the utility model, and for this we divided Sylhet city into five zones (i.e. Modina market, Ambarkhana, Bondor, Kadamtoli, Uposhar). Rickshaw, auto-rickshaw and city bus were taken for our analysis because these are the most used vehicles in this city. For those vehicles, we took three independent variables, i.e. speed, travel cost and comfort; for speed and travel cost, five observations were taken on each link, and comfort was taken from the household interviews. With the help of a stopwatch we found the time taken by each vehicle on each link; the length was measured manually, and the speed followed from speed = distance/time. This step was repeated five times, and the results were averaged to find the actual average speed; for each vehicle, the cost was also recorded five times and converted to a per-kilometre figure.
Multiple Regression
Multiple regression is a statistical technique that allows us to predict a score on one variable on the basis of scores on several other variables. If we collected data on all of these variables by surveying traffic flow over one month (at intervals of a few days), we could see how many, and which, of these variables give rise to the most accurate prediction of the utility function. We can also find that the utility function is most accurately predicted by speed, cost and comfort in full-time traffic flow, with other variables not improving the prediction. When using multiple regression, the term independent variables identifies those variables thought to influence some other dependent variable. It is applied to the linear prediction of one outcome from several predictors, and the general form of linear regression is

Y' = b₀ + b₁x₁ + b₂x₂ + … + b_k x_k,

where Y' is the predicted outcome value for the linear model with regression coefficients b₁ to b_k and intercept b₀ when the values of the predictor variables are x₁ to x_k; the regression coefficients are analogous to the slope of a simple linear regression.
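A lightweight alternative to SPSS for the same fit is ordinary least squares in numpy, sketched below with entirely hypothetical observations of speed, cost and comfort for one mode; only the structure of the fit mirrors the procedure described here.

```python
import numpy as np

# Hypothetical observations for one mode (e.g. rickshaw): columns are
# speed (km/h), travel cost (Tk/km) and a comfort score from the interviews.
X = np.array([[8.2, 10.0, 3],
              [7.5, 12.0, 3],
              [9.1,  9.5, 4],
              [6.8, 13.0, 2],
              [8.8, 10.5, 4]], dtype=float)
U = np.array([0.42, 0.35, 0.55, 0.28, 0.52])   # observed utility scores

A = np.column_stack([np.ones(len(X)), X])       # prepend intercept column
coef, *_ = np.linalg.lstsq(A, U, rcond=None)    # least-squares fit
b0, b_speed, b_cost, b_comfort = coef
print(f"U' = {b0:.3f} + {b_speed:.3f}*speed + {b_cost:.3f}*cost + {b_comfort:.3f}*comfort")
```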
Data Analysis
The data analysis was done in consultation with the SCC, and it is believed that these results, as well as the developed utility models, will be helpful for minimizing traffic congestion. Surveys were undertaken to identify the major roads in Sylhet city for traffic flow; manual counts of vehicle movements through the main links were made (hourly, for three days); a travel time and cost survey was conducted along the major links for different vehicles; and a questionnaire survey of some 100 households within the city area was carried out to determine travel characteristics. The journey time, the number of vehicles overtaking, the number of vehicles overtaken, and the number of vehicles from the opposite direction were recorded to determine the traffic flow in Sylhet city. Traffic volume data were collected manually at selected key locations along the main links, and hourly counts were generally made in the peak period of traffic flow (8 am to 11 am). The travel times for different types of vehicles moving through the main links were collected, the weighted average of the travel times was taken, each link distance was measured, and the average speeds were calculated. The average running speed is calculated as [(number of vehicles × average speed)/total number of vehicles], where during the peak period the number of vehicles of each type passing each link was counted and the average speed of each vehicle type was calculated. This process was followed for all the major links studied in this paper (table 2).
Traffic flow data were collected on the different links of Sylhet city. The main links of Sylhet city are Modinamarket to Amborkhana (M-A), Amborkhana to Modinamarket (A-M), Amborkhana to Bondor (A-B), Bondor to Amborkhana (B-A), Bondor to Upashahar (B-U), Upashahar to Bondor (U-B), Upashahar to Kodomtole (U-K) and Kodomtole to Uposhor (K-U). A map of Sylhet city showing the different links is given in appendix A. The vehicles considered for data collection are rickshaw, car, auto rickshaw, motorbike, city bus and tempo (a three-wheeled vehicle bigger than an auto rickshaw). The moving observer method is one in which both speed and traffic flow data are obtained in a single experiment (O'Flaherty & Simons, 1970). We found the maximum flow, density, and average running speed, and drew flow-density-speed (Q-K-U) curves. We also counted the number of vehicles moving along the main links and calculated the average running speed (table 2), as well as manually calculating the average running speed of the different modes of vehicles mentioned above; finally, the moving observer method results and the manual calculations were compared. The figures below (figures 3-10) present the fundamental diagrams of traffic flow (the relationships between speed and flow, speed and density, and flow and density); these are vital tools that enable analysis of the fundamental relationships. Flow and density vary with time and location. Similarly, the speed-density curve varies between two limits: speed is maximum at zero density (the free-flow speed), and speed is zero when density is maximum (jam density). In the speed-flow relationship, flow is zero either when there are no vehicles or when there are so many vehicles that they cannot move. Table 2 shows the comparison between the manual calculations and the moving observer method. The results will help the SCC design the traffic system in Sylhet city so that congestion can be minimized and the city can have smooth traffic flow.
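The fundamental diagrams discussed above follow from the identity q = k·u together with an assumed speed-density law. The sketch below uses the linear Greenshields model, which is one common choice and not necessarily the form fitted to the survey data; the free-flow speed and jam density values are assumed for illustration.

```python
import numpy as np

def greenshields(k, u_f, k_j):
    """Greenshields linear speed-density model and the resulting flow.

    u = u_f * (1 - k/k_j);  q = k * u.  The maximum flow q_max = u_f*k_j/4
    occurs at k = k_j/2, u = u_f/2."""
    u = u_f * (1.0 - k / k_j)
    return u, k * u

# Illustrative values of the same order as the measured links:
u_f, k_j = 28.0, 200.0          # free-flow speed (km/h), jam density (veh/km)
k = np.linspace(0.0, k_j, 9)
u, q = greenshields(k, u_f, k_j)
print(f"q_max = {u_f * k_j / 4:.0f} veh/h at k = {k_j / 2:.0f} veh/km")
```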
Development of Utility Model
Household interviews were used to develop the utility functions of the different modes of vehicles (rickshaw, auto rickshaw and city bus). With the help of SPSS software, we found the regression parameters from the collected data.
From the household interviews used to obtain trip information, the total number of trips was reported as 527 for 100 households comprising 973 people, as explained in detail in the methodology section of this paper.
From each origin zone to each destination zone, six trips were made and the average travel time was recorded; this procedure was followed for every destination zone. For the analysis of regional transportation activities, the study area under the authority of the Sylhet City Corporation (SCC) is divided into five broad sub-regions or Specified Zones (SPZ) (Banik, 2009). To estimate the comfort level, public interviews were conducted according to table 3, and most people described their preferences. On the basis of these preferences, the comfort level was determined, although several parameters (cost, time, security, road condition) were influential in this respect. Using SPSS software, we found the regression parameters.
Results & Discussion
The lack of capacity at the majority of intersections is the main factor limiting the capacity of the main road system and creating traffic congestion as well as obstruction of traffic flow. Narrow intersections, encroachment by hawkers, roadside parked vehicles and poor management of intersections are the factors contributing to this problem. From our research, we also found that the Ambarkhana intersection, with four major junctions, is the busiest and most critical intersection of Sylhet; Bondor is similarly congested, and vehicles from all over Sylhet city stop and start from that point. Heavy vehicle (truck, bus, etc.) movements in the peak hour create further congestion in the city. In the entrance part of Sylhet, almost all roads are occupied by floating shops, mobile hawkers, artisans and temporary traders of different goods and commodities, and by unauthorized parking, which causes major traffic congestion. Most of the divisional head offices, businesses, shopping complexes and industries are at Zindabazar, Bondor and Ambarkhana. These areas are overcrowded, and only a few links have recently been managed by introducing one-way traffic flow. Even so, the problem is not solved, and it can only be solved by increasing road width and managing traffic conditions. Motorized and non-motorized vehicles occupy the same lane at the same time, and rickshaw pullers as well as pedestrians lack knowledge of traffic rules.
From our data analysis, we found that the maximum average traffic flow is on the Bondor-Amborkhana lane (1323.605 veh/hr), where the average running speed (U) is 14.48 km/hr, the average density (K) is 93.30 veh/km, the optimum speed (Uo) is 13.8 km/hr, and the optimum density (Ko) is 99.80 veh/km. This is because of improper traffic management, inadequate road width, and improper installation of traffic signals at the intersections, which lead to traffic congestion and frequent accidents on those sections. Similarly, we found that the minimum average traffic flow is on the Upashahar-Kodomtole lane (1086.15 veh/hr), where the average running speed (U) is 10.93 km/hr, the average density (K) is 93.4 veh/km, the optimum speed (Uo) is 49 km/hr, and the optimum density (Ko) is 70 veh/km. This is because this section lies outside the main city area, the market is not dense, the road width is larger than in the central city, and traffic is well managed.
Conclusion & Recommendation
The study was motivated by the traffic flow conditions prevailing in Sylhet city and by previous research, although there is a plethora of opinions regarding improvement of the situation. Most of the measures undertaken to improve the situation lacked a sufficient scientific and engineering basis and failed to produce the desired results. The main aim of the study was to determine the traffic flow conditions of Sylhet city, offering the SCC guidance on traffic improvement, congestion reduction and future traffic design. The development of utility functions for the major vehicles (rickshaw, auto-rickshaw, city bus), which most people use, helps to identify the most suitable vehicle for use inside the city. The overall analysis covered five major links of Sylhet city with a total length of 9.4 km. The results show that the most congested link is Kodomtole to Upashahar, because on this link the average speed is 3.036 m/s, the lowest of all the links. Congestion on the K-U link is high, and traffic flow needs to be improved by expanding lanes, introducing one-way traffic flow, or making other changes to the traffic rules.
Due to time limitations, the developed function is not computer based; it is suggested that if the model were computer based, the consequences of alternative planning options could be evaluated easily. As quick measures against traffic congestion, the study suggests widening roads where possible; the SCC can enforce rules on high-rise development; blockages on roads should be removed; traffic management should be improved; more traffic police should be posted at road junctions and busy places; lanes that cannot be widened should be made one-way; and non-motorized vehicles should have separate lanes. The study data were taken throughout the year, and it is believed that the population has since increased considerably. Using the same methodology, or more recent techniques, the present traffic condition of those areas can be determined. Due to time limitations it was not possible to count vehicles at intersection points; if those data were included in the analysis, more accurate results could be obtained.
In view of constraints such as time, information resources and computational capacity, this study is dedicated only to the central urban portion of Sylhet city (the area under the jurisdiction of the SCC) and to the evaluation of some selected alternative planning options. It is also acknowledged that changes in a transportation system always have long-term effects, with corresponding changes in land use patterns; such long-term effects on land use are not within the scope of this study. It is believed that this study establishes the traffic flow conditions in Sylhet city, the factors affecting them, the precautions that need to be taken, and a better plan for the future. With the development of utility models for rickshaw, auto-rickshaw and city bus, these vehicles can be compared on the basis of their utility function values and the best one chosen for transportation. It is believed that these findings will help to overcome the traffic flow problem and provide short- and long-term solutions for efficient traffic management in Sylhet city.
Figure 2. Distributions of trips with respect to time.
Table 1. Traffic flow at the main links of Sylhet city.
"Computer Science"
] |
Structural and Magnetic Properties of Mo-Zn Substituted (BaFe12-4xMoxZn3xO19) M-type Hexaferrites
Molybdenum-zinc substituted hexaferrites were synthesized by high-energy ball milling and subsequent sintering at different temperatures (1100, 1200, and 1300 °C). The samples sintered at 1100 °C exhibited good hard magnetic properties, although a decrease in saturation magnetization from 70.2 emu/g for the unsubstituted sample down to 57 emu/g for the sample with x = 0.3 was observed. The drop in saturation magnetization results mainly from the presence of secondary nonmagnetic oxides. The samples sintered at temperatures >1200 °C showed an improvement in saturation magnetization and a sharp drop in coercivity. This behavior was associated with the development of the W-type hexaferrite, particle growth, and possibly the spin reorientation transition from easy-axis to easy-plane.
of 160-255 kA m−1. BaM hexaferrite is the most important hexaferrite in terms of production (more than 50% of the total globally manufactured magnetic materials [4]).
The unit cell of M-type barium hexaferrites is built by stacking R (BaFe6O11) and S (Fe6O8) blocks in the sequence RSR*S*, where the star denotes a block rotated by 180° about the c-axis of the hexagonal lattice [4]. The unit cell therefore contains two (BaFe12O19) formula units. The S block contains two hexagonal layers of four oxygen ions each, while the R block consists of three hexagonal oxygen layers, with one oxygen ion in the middle layer replaced substitutionally by a Ba ion. The metal ions occupy five different interstitial sites: two sites in the S block (the octahedral 2a site and the tetrahedral 4f1 site), two sites in the R block (the octahedral 4f2 site and the bipyramidal 2b site) and one site at the R-S interfaces (the 12k octahedral site). These sublattices, their coordinations, the number of metal ions in each, and their spin directions are listed in Table 1.
The magnetic and electrical properties of barium hexaferrites were found to depend critically on the substitution of barium or iron ions by other cations and cation combinations. Trivalent metal ions, or combinations of divalent and tetravalent ions, have been used to substitute Fe3+ ions in the hexaferrite lattice. These include Al [15,19,26], Ga [19,27], (MoxZn0.4−x) [14], Mn [12], Cr [19], Ti-Ru [16], Zn-Ti [13], and Sn-Ru [18].
In the present work, we synthesized barium hexaferrites with Fe ions partially substituted by Mo6+-Zn2+ combinations. In order to maintain the cationic valence balance, the Mo:Zn ratio was fixed at 1:3. The effects of the type and level of substitution and of the heat treatment on the structural, magnetic and physical properties of the prepared samples were investigated using x-ray diffraction (XRD), scanning electron microscopy (SEM), and vibrating sample magnetometry (VSM).
The structural refinement for the prepared samples was achieved using FULLPROF software based on Rietveld refinement techniques.
EXPERIMENTAL
High-purity (~99%) powders of BaCO3, Fe2O3, ZnO and MoO2 were used as starting materials to prepare the powder precursors of the Mo-Zn substituted (BaFe12−4xMoxZn3xO19) M-type hexaferrites. The barium to metal molar ratio was 1:11, and the molybdenum to zinc ratio was 1:3. Since zinc is divalent, the substitution of this combination for Fe3+ ions in these hexaferrites ensures that molybdenum has the Mo6+ valence state, which preserves the chemical neutrality of the hexaferrite.
The required amounts of the starting powders were weighed accurately, mixed, and transferred to zirconia milling vessels. The mixture was then milled in an acetone bath (8 ml for each 5 grams of powder) for 16 hours. The milling time was sectioned into 10 min grinding periods interrupted by 5 min pauses to allow the sample to cool and avoid overheating. The ball-to-powder mass ratio was 14:1 and the rotation speed was 250 rpm. The resulting wet powder mixture was left in the container overnight to dry at room temperature, and the dry powder was then collected in clean glass vials. An adhesive agent consisting of an aqueous solution of 2 wt.% polyvinyl alcohol (PVA) was prepared, transferred to the vial containing the milled powder, and thoroughly mixed with the powder. After drying at room temperature, portions of the powder (about 1 gram each) were pressed into discs (1.5 cm in diameter) in a stainless steel die under a 4 ton force. The discs were subsequently sintered at temperatures in the range 1100-1300 °C.
The density of each disc was calculated by dividing the mass of the disc by its volume, and was found to be independent of the level of substitution (2.7 ± 0.1 g/cm3 for all samples sintered at 1100 °C). However, we noticed a tendency of the density to increase with increasing sintering temperature: the average density of the discs sintered at 1200 °C increased to about 3.0 ± 0.1 g/cm3, and that of the discs sintered at 1300 °C increased to about 3.3 ± 0.05 g/cm3. This behavior could be attributed to the growth in particle size with increasing sintering temperature, which in turn reduces the porosity of the samples.
A scanning electron microscope (SEM) system (FEI-Inspect F50/FEG) was used to investigate the microstructural characteristics of the samples. The chemical compositions of the samples were determined using the energy-dispersive x-ray spectroscopy (EDX) facility available in the SEM system.
The structural characteristics of the samples were investigated using x-ray diffraction (XRD). XRD patterns of the samples were obtained in the angular range 20°-70° using an XRD 7000-Shimadzu machine with Cu-Kα radiation. The scanning step was 0.02° and the scanning rate was 1 deg/min. The patterns were then analyzed using dedicated software routines (FULLPROF and X'Pert HighScore) to determine the structural parameters of the samples.
The coercivity (Hc), remanent magnetization (Mr) and saturation magnetization (Ms) of the samples were determined from hysteresis loops measured using a standard vibrating sample magnetometer (VSM Micro Mag 3900, Princeton Measurements Corporation). All magnetic measurements were performed at room temperature in applied fields up to 10 kOe.
Scanning electron microscopy (SEM)
Fig. 1 shows SEM images of the samples sintered at T = 1100 °C. The images indicate that the particle size and morphology do not seem to be influenced by the substitution, showing hexagonal platelets with diameters ranging from 200 nm to about 500 nm for all samples under investigation. However, lighter particles with generally smaller size and different morphology were also observed, suggesting that different phases may be present in the samples. This was examined by EDX analysis of spectra collected at darker (D) and lighter (L) particles of the sample with x = 0.2 (BaFe11.2Mo0.2Zn0.6O19) as an example (Fig. 2). The atomic Ba:Fe ratio in the darker particles was found to be 1:12.2, which is close to the stoichiometric ratio of BaFe12O19. On the other hand, the Ba:Mo ratio in the lighter particles was 1:1.07, which is consistent with the stoichiometric ratio of BaMoO4. These results indicate that the Mo-Zn substituted samples with BaM stoichiometry may contain a secondary barium molybdenum oxide phase coexisting with the major hexaferrite phase under these experimental conditions. Although this method does not provide an accurate quantitative analysis of the sample compositions, because the concentrations of some elements are low compared with the experimental uncertainty of the technique, it is useful for checking for the presence of secondary phases. Therefore, a detailed structural analysis is required to identify the existing phases and their structural properties. A large number of parameters can be obtained directly from the refinement routine, such as the lattice constants (a and c), the cell volume V, the Miller indices of the diffraction peaks (hkl), and the goodness-of-fit parameters. Some refined parameters resulting from fitting the experimental diffraction data are shown in Table 2. The refinement results indicate slight fluctuations of the lattice constants around a = 5.89 Å and c = 23.21 Å. The slight increase in cell volume upon substitution of iron by zinc and molybdenum could be due to partial substitution of the larger Zn2+ ions (radius = 0.74 Å) for the smaller Fe3+ ions (radius = 0.49 Å) at the 4f1 tetrahedral sites 28.
X-ray diffraction (XRD) measurements
In order to investigate the effect of compaction on the development of phases and crystallinity, samples in powder form were sintered at 1100 °C and investigated by XRD. The refinement results indicated that these patterns are almost identical to those of the disc-sintered samples, which indicates that compacting the powder under the 4 ton force did not influence the phase evolution in the samples.
The average crystallite size for each sample was calculated from the Scherrer equation [29]:

D = kλ / (β cos θ)
where k is a constant equal to 0.9, λ = 1.542 Å, β is the peak width at half maximum, and θ is the angular position of the peak. The average crystallite sizes for the investigated samples with x ranging from 0.0 to 0.2 are listed in Table 3. The data indicate that the crystallite size tends to decrease from about 15 nm down to about 9 nm with increasing x. This leads to the conclusion that the Mo-Zn substitution for Fe results in poorer crystallinity of the samples, probably arising from crystal defects. To examine the effect of sintering temperature on the structure of the ferrites, all samples were re-sintered at 1200 °C for 2 h, and their structure investigated by XRD. Fig. 4 shows the x-ray patterns of the re-sintered samples. The patterns clearly indicate that the ZnFe2O4 phase disappeared from the samples, whereas the BaMoO4 phase persisted throughout the whole concentration range and a new high-temperature phase evolved, identified as the W-type BaZn2Fe16O27 (Zn2W) phase. Bearing in mind that the unit cell of the W-type phase is a combination of the unit cell of BaM and a spinel (Zn2Fe4O8) block, this phase apparently evolved according to the reaction: BaFe12O19 + 2 ZnFe2O4 → BaZn2Fe16O27. The structural parameters of the M-type phase in these samples were derived from the FULLPROF analysis and are listed in Table 4. The data indicate a slight decrease in cell volume toward that of the un-substituted sample with increasing sintering temperature. This is associated with the incorporation of Zn2+ ions into the evolving Zn2W hexaferrite rather than into the BaM hexaferrite lattice. To further investigate the effect of sintering temperature, the sample with x = 0.15 (BaFe11.4Mo0.15Zn0.45O19) was re-sintered at 1300 °C, and the patterns of this sample at the different temperatures are shown in Fig. 5. It is evident that the BaMoO4 phase existed at all temperatures, while ZnFe2O4 disappeared completely at temperatures higher than 1200 °C. At such high sintering temperatures, the reaction between the (intermediate) M-type and spinel phases is completed to form the Zn2W-type phase. Thus, we conclude that the reaction between the M-type phase and the zinc spinel Zn2Fe4O8 (S) phase is favored at high temperatures (~1300 °C). Our results are consistent with previously reported results 30, which indicated the presence of the M-phase at low sintering temperatures and the development of the W-phase at temperatures higher than 1200 °C.
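A short helper makes the Scherrer estimate concrete; the FWHM and peak position below are assumed example values, chosen only to reproduce the ~9 nm scale reported in Table 3:

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, k=0.9, wavelength_nm=0.1542):
    """Crystallite size D = k * lambda / (beta * cos(theta)).

    fwhm_deg is the peak width at half maximum on the 2-theta scale (degrees);
    it is converted to radians before use.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * math.cos(theta))

# Assumed example: FWHM of 0.9 deg for a reflection near 34 deg 2-theta
print(f"D = {scherrer_size_nm(0.9, 34.0):.1f} nm")  # ~9 nm, cf. Table 3
```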
Magnetic measurements
Fig. 6 shows the hysteresis curves of the BaFe12-4xMoxZn3xO19 samples sintered at 1100 °C. The curves indicate that the magnetizations do not saturate in the magnetic field range of the study. According to the law of approach to saturation, the magnetization in the high-field region is dominated by magnetic domain rotation 1. Therefore, this law was used to determine the saturation magnetization of each sample from the high-field region (H > 0.8 kOe), while the coercivity and remnant magnetization were determined directly from the hysteresis loops; the results are listed in Table 6.
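A minimal sketch of how Ms can be extracted by least-squares fitting of the high-field branch, assuming a common two-parameter form of the law of approach to saturation; the synthetic data below stand in for a measured loop:

```python
import numpy as np
from scipy.optimize import curve_fit

def law_of_approach(H, Ms, B):
    # One common two-parameter form: M = Ms * (1 - B / H^2)
    return Ms * (1.0 - B / H**2)

# Placeholder high-field data (H > 0.8 kOe) standing in for measured points
H = np.linspace(1.0, 10.0, 50)                       # field, kOe
M = law_of_approach(H, 70.2, 0.15) + np.random.normal(0.0, 0.1, H.size)

(Ms_fit, B_fit), _ = curve_fit(law_of_approach, H, M, p0=(60.0, 0.1))
print(f"Ms = {Ms_fit:.1f} emu/g")                    # ~70.2 emu/g for x = 0.0
```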
As shown in Table 6, the saturation magnetization of the un-substituted sample is 70.2 emu/g and the remnant magnetization is 40.6 emu/g. The squareness ratio of about 0.58 for this sample is close to the value (0.5) expected for a system of randomly oriented single-domain particles 3. The saturation magnetization (Fig. 7) decreased gradually with increasing Mo concentration, showing a 19% drop for the sample with x = 0.3. However, the magnetic properties of the products are still suitable for permanent magnet or data storage applications.
The initial drop in coercivity (Fig. 8) of about 25% for the sample with x = 0.1 is consistent with previously reported results 14. This behavior was associated with the substitution of Fe3+ ions by Zn2+ ions at the 4f1 sites and Mo6+ ions at the 2b sites (which have the highest contribution to the magnetic anisotropy). The progressive increase of Zn2+ ions at the 4f1 sites and Mo6+ ions at the spin-up 2b sites would be expected to result in an increase in saturation magnetization and a further drop in coercivity, contrary to the observed results. Therefore, we associate the noticeable decrease in saturation magnetization and the almost constant coercivity with increasing x to the limited solubility of Zn-Mo ions in the BaM lattice and to the development of the nonmagnetic (ZnFe2O4 and BaMoO4) oxide phases, which influence the saturation magnetization but not the coercivity. In view of the reduction of the saturation magnetization for the sample with x = 0.3, the nonmagnetic oxides apparently account for 19% of the sample weight.
It is well known that the sintering temperature has an important effect on the magnetic properties, especially on the coercivity, which is sensitive to the grain morphology 3,31,32. The hysteresis loops of the samples with x = 0.2 sintered at different temperatures are shown in Fig. 9, and the magnetic parameters derived from these loops are listed in Table 7. The increase in saturation magnetization with increasing sintering temperature is associated with the disappearance of the nonmagnetic ZnFe2O4 phase and the development of the W-phase. Such structural developments were not observed in Mo-Zn substituted hexaferrites prepared by wet chemical mixing with a different Mo to Zn ratio 14,33. The sharp drop of the coercivity is in agreement with the results of Pasko et al. 30, who attributed this drop to the reaction of the BaM and spinel phases to form the W-phase at temperatures higher than 1200 °C. Further, the transition from a hard magnet (HC ~ 2800 Oe at 1100 °C) to a soft magnet (HC ~ 300 Oe at 1300 °C) could be associated with the reported spin reorientation transition from easy axis to easy plane in W-type hexaferrites. However, such a transition in our sample should be confirmed by other techniques in future work. In addition, the squareness ratio of 0.53 for the sample sintered at 1100 °C is consistent with randomly oriented single-domain particles. However, the decrease in squareness ratio at higher sintering temperatures is indicative of multi-domain particles resulting from particle growth at such high temperatures 30. This multi-domain structure typically results in a reduction of the coercivity 3.
CONCLUSIONS
Barium hexaferrite phases were synthesized by high energy ball milling and subsequent sintering at temperatures of 1100, 1200, and 1300 °C. Mo-Zn substitution for Fe3+ ions was found to result in a decrease in saturation magnetization due to the presence of secondary impurity oxide phases in samples sintered at 1100 °C. Samples sintered at higher temperatures showed an increase in saturation magnetization and a drastic drop in coercivity. The proposed spin reorientation transition at room temperature in the sample with x = 0.2 sintered at 1300 °C suggests that this sample has a potential application in magnetic refrigeration 30.
Figure 3 shows the XRD patterns of all samples sintered at 1100 °C. The figure indicates the development of new phases with increasing x, as evidenced by the peaks around 26.6°, 30.0°, and 35.03°. The diffraction patterns were analyzed using FULLPROF software. The pattern of the un-doped sample shows a major phase with reflections consistent with BaFe12O19 M-type hexaferrite (JCPDS: 00-043-0002), without other impurity phases. On the other hand, the XRD patterns indicate that all doped samples consist of a major BaM hexaferrite phase and small amounts of other intermediate phases (ZnFe2O4 and BaMoO4). The formation of the BaMoO4 phase, which is consistent with the results of the EDX analysis, causes a deficiency in the amount of Ba (or excessive amounts of Fe and Zn) available for the BaM phase, resulting in the formation of the ZnFe2O4 phase. | 3,824.8 | 2014-08-30T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Applicability of an Active Back-Support Exoskeleton to Carrying Activities
Occupational back-support exoskeletons are becoming a more and more common solution to mitigate work-related lower-back pain associated with lifting activities. In addition to lifting, there are many other tasks performed by workers, such as carrying, pushing, and pulling, that might benefit from the use of an exoskeleton. In this work, the impact that carrying has on lower-back loading compared to lifting and the need to select different assistive strategies based on the performed task are presented. This latter need is studied by using a control strategy that commands constant torques. The results of the experimental campaign conducted on 9 subjects suggest that such a control strategy is beneficial for the back muscles (up to 12% reduction in overall lumbar activity), but constrains the legs (around 10% reduction in hip and knee ranges of motion). Task recognition and the design of specific controllers can be exploited by active and, partially, passive exoskeletons to enhance their versatility, i.e., the ability to adapt to different requirements.
INTRODUCTION
In the 1970s, the scientific community began addressing the relationship between musculoskeletal disorders (MSDs) and work ergonomics. Since then, many studies have been published regarding this topic (Bernard and Putz-Anderson, 1997; Cohen, 1997; Fujishiro et al., 2005; Hamberg-van Reenen et al., 2008). Yet, in the most recent EU-OSHA report (de Kok et al., 2019), MSDs are still cited as the most common work-related health problem in the EU. Indeed, 60% of workers still experience such disorders, in the majority of cases due to back pain. MSDs affect not only the workers, but also the enterprises that, in turn, have to cope with absenteeism and productivity losses. To give an idea of the economic impact: in 2012, the total annual cost of MSDs to the European Community represented 2% of GDP (Bevan, 2012).
Workers performing manual material handling (MMH) activities (e.g., package loading and unloading in a warehouse or luggage handling in airports) are among the most exposed to risks and injuries. To try to reduce MSDs associated with MMH, NIOSH has developed a method for the ergonomic assessment of a task, defining whether or not it is classified as risky (Waters et al., 1993). Potentially harmful tasks should be mitigated through different solutions, such as the introduction of limits on handled masses, frequencies, and task duration. Additionally, companies have tried to mitigate MSDs by re-designing the workplace according to newer ergonomic guidelines or by resorting to plant automation and to the introduction of industrial manipulators. However, the cost associated with these solutions and the lack of adoption of external tools by the users prevent the problem of MSDs from being completely solved.
Back-Support Exoskeletons and Lifting
The ability of back-support exoskeletons to reduce the physical loading on the lumbar spine while performing lifting tasks suggests that they may offer a novel solution to back pain-related MSDs. Indeed, a 2016 review on occupational exoskeletons reported that usage of back-support exoskeletons yielded a 10-40% reduction in back muscle activity during repetitive lifting and static holding tasks (de Looze et al., 2016). The primary consequence of muscle activity reduction is the de-compression of the lumbar spine. Such results are confirmed by a more recent review (Theurel and Desbrosses, 2019) that stresses the clear potential of exoskeletons in limiting muscular demand. However, this report also warns that there is insufficient current knowledge to justify an unreserved adoption of this technology. Fox et al. (2019) elaborate on the potential of these devices to improve manufacturing processes. Moreover, focusing on three aspects, namely (a) actuators, (b) structures and physical attachments, and (c) control strategies employed, Toxiri et al. (2019) report on the technical development of back-support exoskeletons meant for occupational use. Depending on the actuator choice, an exoskeleton can be defined as active or passive. A passive exoskeleton exploits its wearer's movements to store and then release energy. Energy storage is achieved by means of passive elements such as gas/coil springs, flexible beams, or elastic bands (Abdoli-e et al., 2006; Lamers et al., 2017; Näf et al., 2018). In contrast to passive exoskeletons, active devices have the ability to deliver additional energy to the user, exploiting electrical motors or pneumatic actuators. Such active elements, rather than relying on the user's movements, are powered by batteries or external supplies. Properly controlling the active actuators allows designers to tune the assistance being provided based on different control strategies. As an example, in Toxiri et al. (2018) and Tan et al. (2019) sEMG signals are used to modulate the assistive torque, while in Lazzaroni et al. (2020), Chen et al. (2018), Ko et al. (2018), Zhang and Huang (2018), and Yu et al. (2015) the control relies on kinematics.
Manual Material Handling: Is There Only Lifting?
As reported in Grazi et al. (2019), a consensus on the methods and metrics for the evaluation of back-support exoskeletons is still lacking. Indeed, the analyzed signals, the testing conditions, and the performance metrics vary across the many available studies. However, all these studies have in common that the exoskeleton evaluation only considers static bending and symmetric lifting tasks. Yet, the risk of overload for workers arises not only from lifting: workers may find themselves performing many different activities in the same workplace. As an example, in logistics, it is possible to imagine a quite simple task where a worker walks to the shelf, picks the required object, carries it back to the cargo area, and, eventually, lowers it into the appropriate container. A similar scenario can be pictured in other contexts where MMH is involved. In such conditions, the International Standard ISO 11228 establishes ergonomic guidelines not only for lifting but also for carrying, pushing, and pulling. Therefore, the analysis of the effects of exoskeleton usage should not be limited to lifting tasks, but should also tackle other activities such as carrying, pushing, pulling, and walking. This extension can capture the complexity of out-of-the-lab environments more reliably. As an example, an interesting study presented in Baltrusch et al. (2019) focuses on the versatility of a passive exoskeleton, studying its performance not only during lifting but also during walking. As might be expected, it emerges that passive exoskeletons provide benefits during lifting but restrict movement during walking. From this point of view, active exoskeletons, even if more complex and heavier, are expected to perform better, because of the possibility of tuning and customizing the assistance according to the task.
Contribution of This Study
Recent works on exoskeletons have discussed the opportunity of exploiting human activity recognition to discriminate between different tasks such as lifting, walking, carrying, or sitting (Chen et al., 2018; Poliero et al., 2019a; Jamšek et al., 2020). For passive exoskeletons, this implies that, by using clutches for the engagement and disengagement of passive elements, as in Endo et al. (2006), Walsh et al. (2007), Ortiz et al. (2018), Jamšek et al. (2020), and Di Natali et al. (2020a), it is possible to assist only when needed, i.e., deactivate the passive elements when they create a restriction, such as in the walking case. Active exoskeletons, on the other hand, thanks to the versatility of their actuators, could implement specific controllers for any of the previous tasks.
In the study presented hereafter, the investigation focuses on carrying activities, given their relevance to MMH and to the ISO 11228-1 standard. In particular, the authors want to elaborate more on (i) the impact that a non-lifting activity might have on lower-back loading and on (ii) the need to select different controllers based on the performed task.
First, a comparison is made between the spinal loading during lifting and carrying activities to investigate the impact of the task on this latter parameter. In particular, spinal loading, which is closely associated with the risk of injuries, is caused by the activation of the deep back muscles (related to back extension), which generate compression on the lumbar discs. When a worker is carrying a load, the back extensors activate to keep the trunk stable and straight; thus, this situation also presents risks to the user.
Second, to better understand the need for different controllers according to the task, a preliminary consideration is useful. To date, the vast majority of available occupational back-support exoskeletons are designed and programmed to provide assistive torques that contribute simultaneously to the extension of the back and of both hips, regardless of their actuation principles and control strategies. This assistance principle is derived from and replicates the typical movements observed during symmetric lifting activities. Indeed, in this situation, every time there is back flexion or back extension, there is also a corresponding flexion or extension of the hips, respectively. Therefore, the presented assistance principle seems appropriate. In this study, the soundness of applying this strategy to carrying activities is investigated. Indeed, the inclusion of gait produces a different situation with respect to symmetric lifting. In particular, during carrying, contributing to back extension is appropriate, but simultaneously pushing both hips toward extension might interfere with their natural movement. More specifically, the support could be beneficial during hip extension (associated with the leg in stance phase), but may result in a restriction of hip flexion (forward swing, characteristic of the leg not in contact with the ground). Hence, to understand the need for different controllers according to the task, the effects that assisting carrying, with an assistance principle derived from observation of symmetric lifting activities, has on the users are studied. The effects will be assessed in terms of muscle activity, gait kinematics, and subjective perceptions.
In the following, details on how the experimental testing was devised are reported in Section 2. Section 3 presents the results that are then discussed in Section 4. Finally, Section 5 summarizes and concludes this work.
MATERIALS AND METHODS
We devised an experiment, approved by the Ethics Committee of Liguria 1, which is detailed after the description of XoTrunk, the active back-support exoskeleton used in this study. Information on data processing and outcome measures is then reported.
XoTrunk: An Active Back-Support Exoskeleton
XoTrunk (see Figure 1) is a 6 kg improved version of the Robo-Mate prototype presented in Toxiri et al. (2018). Its aluminum frame houses the control and electronics box, the actuation units, and the anchoring points. These points are situated close to the thighs and the shoulders, allowing the device to transmit the torques, produced by its two brushless DC motors, to the wearer. These torques are used to help the user perform lifting, by partially contributing to hip and back extension. Additional anchoring at the waist provides more stability and comfort. More details on the actuators and low-level control can be found in Di Natali et al. (2020b), whereas kinematics and physical attachments are reported in Sposito et al. (2020).
The versatility provided by its two electrical motors allows different control strategies to be tested and studied. In particular, in Toxiri et al. (2018), three control strategies are presented to modulate the torque proportionally to (a) the torso inclination angle, (b) the forearm muscle activity, and (c) a combination of torso inclination and forearm muscle activity. Regardless of the selected control strategy, the motors always provided assistive torques that contribute simultaneously to the extension of the back and of both hips. The backwards push on the back is the combination of the assistance provided by the left and right sides. As introduced in Section 1.3, this assistance principle is inspired by observation of symmetric lifting movements. This study investigates whether this assistance principle can also be beneficial for assisting carrying. The control strategy selected here was based on a constant extension torque provision. Indeed, for the sake of simplicity, during carrying the torso inclination can be neglected, whereas the forearm muscle activity can be assumed to be constant during load handling. Such simplifications were introduced to facilitate the analysis of the effects that assistance during carrying has on the users. In the following, this control mode is referred to as the Exoskeleton On (Exo-on) condition. Each motor generates a constant torque of 10 Nm, resulting in an overall assistance of 20 Nm.
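A minimal sketch of this Exo-on mode is given below; the Motor interface and the 1 kHz command rate are hypothetical stand-ins for the device's real low-level torque loop:

```python
import time

class Motor:
    """Hypothetical stand-in for the low-level torque interface of one actuator."""
    def set_torque(self, torque_nm):
        pass  # placeholder: forward the reference to the motor torque loop

ASSIST_TORQUE_NM = 10.0  # per side; 20 Nm overall assistance

left, right = Motor(), Motor()
for _ in range(1000):                    # assumed 1 kHz command rate, 1 s shown
    left.set_torque(ASSIST_TORQUE_NM)    # constant push toward hip/back extension
    right.set_torque(ASSIST_TORQUE_NM)
    time.sleep(0.001)
```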
Experimental Set-Up and Protocol
Nine healthy male subjects (N = 9, 1.78 ± 0.04 m, 76.55 ± 8.22 kg, 31 ± 3.46 years old) were asked to wear sporting clothes and informed they would have to perform the following tasks:
• Lifting: the sequence of standing upright, reaching for a box lying 0.30 m from the ground, grasping and lifting it, reaching the upright posture again, then putting the box back down on the ground and returning to the upright posture. Each sequence was repeated three times at a self-selected speed and with a freestyle lifting technique, meaning that no specific instructions on the lifting motion were given (Burgess-Limerick, 2003).
• Carrying: straight level walking for 7.5 m while holding a box close to the trunk, at a self-selected speed.
Each test subject performed lifting with the box (1.2 kg) housing three different payloads, namely 0, 7, and 15 kg. In the following, the different weights are referred to as light (L), medium (M), and heavy (H). All the conditions were repeated three times, for a total of 9 tests per subject. Carrying tasks were performed varying not only the loads (light, medium, and heavy, as for lifting), but also the supplied assistance. In particular, two conditions were tested: (a) No Exoskeleton (No-exo): carrying without the exoskeleton; (b) Exoskeleton On (Exo-on): carrying while wearing the exoskeleton in the on-mode. In the latter, the exoskeleton provides an angle-independent constant torque of 20 Nm to support the extension of the back and of both hips (see Section 2.1).
Each load and assistance condition was repeated three times for a total of 18 tests per subject. The task execution order, the handled weights, and the supplied assistance were randomized between subjects. For each metric x, the corresponding ρ x, α, iqr, and γ (defined in Section 2.4) were analyzed. At the end of the experimental protocol, the subjects were asked to fill in a simplified version of an RPE (Rate of Perceived Exertion) questionnaire to rate the differences between carrying in the No-exo and in the Exo-on condition (Huysamen et al., 2018). Table 1 summarizes the protocol and its independent variables.
Measurements and Data Processing
To collect muscular activity data, the subjects were asked to wear surface EMG (sEMG) electrodes (BTS FREEEMG, BTS Bioengineering, Italy). The electrodes were placed, according to SENIAM guidelines, to measure the bilateral activation of the muscles responsible for trunk extension, namely the Erector Spinae Longissimus Lumborum (LL) and the Erector Spinae Iliocostalis (IL). Additionally, due to the symmetry of the task, only the subjects' right leg was instrumented to measure the activation of two muscles responsible for hip flexion and extension, i.e., the rectus femoris (RF) and the semitendinosus (ST). Back and leg muscles were chosen based not only on their relevance when performing lifting activities but also on the number of studies that analyze them, in order to allow comparisons of findings across different protocols. Figure 2 illustrates the locations of the chosen muscles. Prior to attaching the electrodes onto the skin, the site was cleaned with alcohol, as suggested in Stegeman and Hermens (2007). Muscular activity information was acquired at a sampling frequency of 1 kHz. Extraction of metrics from the sEMG signals requires data post-processing. The common approach reported in Pons (2008) consists of filtering the amplified raw sEMG signals (BTS FREEEMG output), rectifying the output, and, eventually, computing the linear envelope (low-pass frequency filter at 2.5 Hz, Potvin et al., 1996). EMG data were normalized to maximum voluntary contractions (MVC) (McGill, 1991). Overall lumbar extensor activity (averaged IL and LL muscle activity, right and left sides) was computed prior to performing the deep-back muscle analysis, as in Koopman et al. (2019).
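The processing chain just described might look as follows; the 2.5 Hz envelope cut-off and 1 kHz sampling rate are from the text, while the 20-450 Hz band-pass pre-filter is a typical choice assumed here:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # sEMG sampling frequency, Hz (as in the protocol)

def linear_envelope(raw_emg, mvc_peak):
    """Band-pass, rectify, low-pass at 2.5 Hz, then normalize to MVC."""
    b, a = butter(4, [20.0 / (FS / 2), 450.0 / (FS / 2)], btype="band")
    rectified = np.abs(filtfilt(b, a, raw_emg))
    b, a = butter(4, 2.5 / (FS / 2), btype="low")
    envelope = filtfilt(b, a, rectified)
    return envelope / mvc_peak  # fraction of maximum voluntary contraction
```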
To collect motion data, the subjects were also equipped with a 3D motion tracking system (MTw Awinda, Xsens, The Netherlands). Seven Xsens IMUs were attached to the feet, shanks, thighs, and pelvis in order to reconstruct lower limb kinematics and gait phase events. The Xsens software can reconstruct motion data at a 60 Hz sampling frequency. Using IMU trackers and biomechanical models, the software also provides gait phase information that can be used to perform data segmentation (Di Natali et al., 2020a). Two consecutive heel strike events generated by the same foot are used to identify the start and finish of the stride.
FIGURE 2 | Schematic representation of electrode placement. For greater clarity, only the right side electrodes are displayed for the back. It is possible to identify the Erector Spinae Longissimus Lumborum (LL), the Erector Spinae Iliocostalis (IL), the rectus femoris, and the semitendinosus.
Before data recording, Xsens calibration and MVC acquisition routines were performed for each subject (Vera-Garcia et al., 2010;Halaki and Ginn, 2012).
Outcome Metrics and Analysis
In the following, the metrics used for the assessment of the effects of assisting with carrying are reported, along with the metrics used for comparing carrying and lifting tasks. This section also introduces how the statistical analysis was performed. Table 1 summarizes what is presented hereafter.
The Effects of Assistance During Carrying
As previously introduced in Section 1.3, it is hypothesized that the exoskeleton will positively influence back and hip extension, whereas hip flexion will be hindered. To explore the effects of assisting with carrying, this task was analyzed in the No-exo condition (control group) and in the Exo-on state (test group). To be consistent with studies focusing on lifting, the effect of the exoskeleton on the back is analyzed in terms of muscle activation. For the lower limbs, on the other hand, the inclusion of gait suggests adding gait kinematics analysis to the muscle activation. In the following, the muscle analysis metrics are presented first, and then the gait kinematics are considered.
Muscle fatigue may be experienced as symptoms or signs of reduced motor control, such as localized discomfort or decreased strength. Generally, physical exertions can cause fatigue that lasts for just a few hours. If fatigue persists, it may cause tissue damage and yield MSDs (ACGIH, 2008). In Jonsson (1982), the 50th percentile/median of the muscle activity distribution (M) is selected to reflect how the muscle has been working during the whole recording period. Based on this reasoning, in this work, M was chosen to monitor the risk associated with repetitive/cumulative fatigue, both for the back and the lower limb muscles. Additionally, ergonomic guidelines for industry define the maximum allowed spinal compression. If this threshold is exceeded, traumatic damage to the inter-vertebral discs may result (Moore and Garg, 1995). Biomechanical models can be used to show how this compressive force is directly linked to muscle activity (Chaffin, 1969; Toxiri et al., 2015). In Jonsson (1982), the 90th percentile of the muscle activity distribution (P) is indicated as being more informative than the maximum muscle activity. For such reasons, in this work, P was chosen to monitor the risk associated with traumatic damage to the inter-vertebral discs. P was analyzed also for the lower limb muscles, even though there is no clear traumatic damage associated with those sites.
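A minimal sketch of these two metrics, together with the overall lumbar activity introduced above; the channel arrays are placeholders standing in for the envelopes computed earlier:

```python
import numpy as np

def activity_metrics(envelope):
    """M (50th percentile) and P (90th percentile) of the activity distribution."""
    return np.percentile(envelope, 50), np.percentile(envelope, 90)

# Placeholder channel envelopes (LL and IL, right and left sides)
env_ll_r = env_ll_l = env_il_r = env_il_l = np.abs(np.random.randn(5000))
overall_lumbar = np.mean([env_ll_r, env_ll_l, env_il_r, env_il_l], axis=0)

M, P = activity_metrics(overall_lumbar)
print(f"M = {M:.3f}, P = {P:.3f} (normalized units)")
```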
The gait kinematics analysis is focused on the hip and knee ranges of motion (RoM h and RoM k, respectively), defined as the difference between the 90th and the 10th percentile of the lower limb trajectory distribution during carrying. Since users were instructed to walk at a self-selected speed, an analysis of the average stride time (δ) per condition was conducted. δ is defined as in Equation (1):

δ = (1/S) Σ (H s+1 − H s), for s = 1, ..., S    (1)

where S represents the number of strides in a test and H is a vector collecting all the right heel strike time events.
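Both kinematic metrics are straightforward to compute; a sketch consistent with the definitions above (heel-strike times below are example values):

```python
import numpy as np

def range_of_motion(joint_angle_deg):
    """RoM: 90th minus 10th percentile of the joint trajectory distribution."""
    return np.percentile(joint_angle_deg, 90) - np.percentile(joint_angle_deg, 10)

def average_stride_time(heel_strike_times_s):
    """Equation (1): mean interval between consecutive right heel strikes."""
    H = np.asarray(heel_strike_times_s)
    return np.mean(np.diff(H))  # S intervals from S + 1 events

print(average_stride_time([0.0, 1.12, 2.21, 3.33, 4.40]))  # ~1.1 s
```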
Comparing Carrying and Lifting Tasks
To report on the impact that carrying has on spinal loading compared to lifting, a simple comparison between lifting in the No-exo condition (control group) and carrying in the No-exo condition (test group) is presented. This analysis was focused on the overall lumbar extensor activity and on the same metrics presented in Section 2.4.1.
Statistical Analysis
Kinematic data were analyzed applying a standard one-way analysis of variance (ANOVA) with the significance level set at p < 0.05. Such analysis was performed for both hip and knee angles. Initially, the same approach was meant to be adopted also for the stride duration and muscle activity examinations. However, due to the large variability in inter-subject walking speed and muscle activation signals (even after normalization with respect to the MVC), the choice was made to center the analysis on intra-subject variability. Indeed, such large variability implies that standard statistical analysis would not be very informative. For this reason, ratios between the test and control conditions (specified in Sections 2.4.1, 2.4.2) were adopted as an alternative form of intra-subject normalization, prior to comparison with the results obtained by other subjects. In the following, we define ρ x i as the ratio computed considering metric x (either M, P, RoM h, RoM k, or δ) in the test and control conditions for a subject i.
The vector collecting ρ x i for all nine subjects is referred to as ρ x. To deepen the analysis of ρ x, and to better highlight trends in the data, the following values are taken into account for each ρ x distribution:
• the median value (α);
• the inter-quartile range (iqr), defined as the difference between the 75th and the 25th percentile of the ρ x distribution;
• the number of subjects for which ρ x i < 1 (γ).
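A compact sketch of this summary, assuming ρ is computed as the per-subject test-to-control ratio:

```python
import numpy as np

def ratio_summary(test_vals, control_vals):
    """Per-subject test/control ratios rho_x summarized by (alpha, iqr, gamma).

    test_vals[i] and control_vals[i] hold metric x for subject i in the
    test (e.g., Exo-on) and control (e.g., No-exo) conditions, respectively.
    """
    rho = np.asarray(test_vals, float) / np.asarray(control_vals, float)
    alpha = np.median(rho)                                 # median value
    iqr = np.percentile(rho, 75) - np.percentile(rho, 25)  # inter-quartile range
    gamma = int(np.sum(rho < 1.0))                         # subjects with rho < 1
    return alpha, iqr, gamma
```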
Subjective Evaluation
The subjective evaluation forms, filled in by each subject at the end of the experimental protocol, allow a comparison to be made on whether or not the perceived effect is consistent with the objective data. Based on their relevance in this study, only the answers referring to back, waist, and legs are analyzed.
RESULTS
In the following, results referring to spinal loading during carrying are presented, followed by those associated with the effects of the assistance during carrying. In particular, these latter results are split into muscle analysis and gait kinematics.
Spinal Loading During Carrying
Figures 3A,B present boxplots of the distributions of ρ M and ρ P when comparing the overall lumbar muscle activity during carrying and lifting activities. According to the reported measurements, lower-back muscle activation is of the same order of magnitude in the two tasks, but generally lower during carrying than during lifting. This is true in all cases for ρ P, whereas for ρ M a few subjects show ρ M i > 1, meaning that the median (50th percentile) lumbar muscle activation was higher in carrying than in lifting. In the heavy load test, one of the subjects is considered an outlier (represented by a red cross). For light loads, considering both ρ M and ρ P, the median (α) is around 0.40, while it increases to around 0.60 for heavier loads, showing an overall trend. It is worth highlighting that γ is always quite close to N = 9, indicating a trend shared among all the subjects.
Effects of Assistance During Carrying
The results are reported focusing first on the muscle activation and then on the gait kinematics. Figure 4 reports the boxplots of ρ M and ρ P for the overall activation of the lumbar muscles when comparing carrying activities with and without the exoskeleton. Overall, the population distributions are around the unit value and the iqr is quite large (up to 0.62). However, the iqr tends to decrease as the payload increases, both for M and P. Indeed, the variability in the heavy load condition is about one-third of that recorded for the lighter loads.
Muscle Activation
Light and heavy load tests display opposite behaviors, with the first (light load) lying almost entirely in the ρ x > 1 region (i.e., the exoskeleton produced an increase of the metric) and the second (heavy load) in the ρ x < 1 region (i.e., the exoskeleton produced a reduction of the metric). This is more evident for ρ M than for ρ P. An additional interesting observation is that, for both metrics, the lowest value occurs for the intermediate weight. For both ρ M and ρ P, γ indicates that the majority of the subjects experienced a reduction of muscle activation in the Exo-on condition with respect to the control case. Moreover, as the payload increases, the value of γ increases as well. Figure 5 refers to the lower limb muscle activation analysis. Similarly to the above, the distributions are centered around the unit value. The iqr still displays large variability (up to 0.71), and there is no longer a clear narrowing trend as the payload increases. Indeed, in the case of the RF, the iqr is smaller for the intermediate load than for the heavier loading condition. Red crosses identify outliers in the ST ρ M and in the RF and ST ρ P. Also in this case, it is possible to identify an increasing trend for γ as the payload increases.
Gait Kinematics
How the RoM changed between the two conditions is reported in Figure 6, revealing a clear trend for both the hip and the knee joints. Indeed, for both joints it almost always holds that ρ RoM h < 1 and ρ RoM k < 1 (see also the γ values). On average, the median values (α) are around 0.90, indicating a reduction in the RoMs of about 10% due to the exoskeleton action. The iqr values are much lower than in the muscle analysis (maximum iqr of 0.12 with respect to 0.71).
Significance levels obtained comparing the Exo-on and the No-exo condition are reported in Table 2. Bold values indicate where significance was reached (p < 0.05). In each condition, at least one joint had a significant RoM reduction between the test and control condition.
Moreover, by inspection of Figure 7, it is possible to see how the Exo-on condition yielded an increase in stride duration (δ), as all the distributions lie in the ρ δ > 1 region. The trend indicates a median increase in cycle time of about 6%. Outliers can be identified in the light and medium load conditions. The values of γ indicate a clear effect for all the subjects.
Subjective Perception
Finally, Figure 8 summarizes, for each body region under analysis, how many users reported a benefit or hindrance/discomfort when comparing the Exo-on and No-exo conditions. The majority of the subjects (8 out of 9) experienced a positive exoskeleton effect on the back/trunk region, whereas 7 out of 9 subjects felt hindered in the lower limbs. Interestingly, 3 users reported a benefit also on the waist, where the exoskeleton is anchored. As the users were instructed to report benefit or hindrance only if actually perceived, for a given body region the sum of hindrance and benefit reports need not equal N = 9.
DISCUSSION
In the following, the discussion starts from the analysis of the impact of carrying on spinal loading compared to the lifting case. The authors' assumption was that such loading would be comparable between the two activities. This supports and validates the assertion that an occupational back-support exoskeleton is valuable in providing assistance during carrying. The first assessment is therefore followed by an evaluation of the effects of an exoskeleton assisting with carrying while applying a constant extension torque. A control strategy of this type is a simplification of what happens if an exoskeleton programmed to assist lifting is also used during carrying. Here, the authors' hypothesis was that for carrying this strategy would turn out to be sub-optimal, namely beneficial for the back but hindering for the lower limbs. As a consequence, Section 4.3 focuses on the need to implement back-support exoskeleton versatility. Finally, the limitations of this study are discussed in Section 4.4.
The Impact on Spinal Loading
The results summarized in Figure 3 confirm that, from an ergonomic viewpoint, carrying activities can be associated with risk. Indeed, muscle activity, although lower during carrying, is of the same order of magnitude as during lifting. In particular, as the handled payload increases, the differences between lifting and carrying become less pronounced. This trend is particularly evident if the 50th percentile of the muscle activity distribution is considered. Generally, this value can be associated with repetitive/cumulative fatigue, whereas the 90th percentile is related to traumatic damage of the inter-vertebral discs. Although traumatic damage is seen as more concerning, it is clear that damage can occur in both lifting and carrying and, thus, should be prevented/limited. The results found in Section 3.1 are consistent with the ISO 11228-1 standard, which establishes ergonomic guidelines for performing both lifting and carrying, identifying the latter activity as equally worthy of attention. Therefore, it makes sense to try to assist also carrying activities by means of an active occupational exoskeleton.
The Effects of Assistance During Carrying
The analyzed assistance principle implies that the delivered torques simultaneously support the extension of the back and of both hips. It was assumed that such assistance would be beneficial for the back, whereas it might hinder the natural movement of the hips, particularly in the swing phase.
The following discussion is separated according to the two body regions under analysis.
The Lower Back
The experimental results do not indicate a clear polarity in the data and, thus, it is not possible to confidently conclude that, with respect to the conditions of this study, the exoskeleton provides a reduction in the activation and work intensity of the lower-back muscles. Nevertheless, it is worth highlighting that the data variability shows a general tendency to decrease as the payload increases and that the heavy load condition has a much clearer trend toward the ρ M, ρ P < 1 region, i.e., where the exoskeleton has a benefit on the muscle activation. This suggests that conclusions drawn for this condition are more reliable than those drawn for lighter loads. In particular, for the heavier load condition, the α values for ρ M and ρ P suggest that the exoskeleton effect is beneficial, reducing the overall lumbar activity by 12.08 and 7.99%, respectively. It is also important and encouraging that the exoskeleton is seen to have the greatest effect with the heaviest loading, as this is the circumstance that is most in need of assistance. A comparison of the objective and subjective data also confirms the beneficial effect of the exoskeleton. Indeed, as outlined in Figure 8, 8 out of 9 users reported a benefit on the back and only 1 out of 9 reported discomfort or hindrance on the same body segment. On the other hand, apart from the light load condition, the lower-back muscle analysis showed that 5-6 subjects out of 9 (according to the analyzed metric) had a reduction of muscle activation (see the γ values in Figure 4). These values are not far from those reported in the subjective evaluation forms. Therefore, the consistency between objective and subjective data suggests that, considering spinal loading, there is some evidence that the exoskeleton effect is beneficial for most of the population.
FIGURE 8 | Subjective perceptions of the 9 users. For each of the considered body regions, it is reported how many users felt a benefit and how many experienced hindrance.
Contrary to the authors' expectations, for the medium and high payload handling, the overall lumbar activity reduction is not in line with the potential of the device used in the assessment. In particular, the experiment in Toxiri et al. (2018) showed a significant back muscle activation reduction (around 30%), whereas a comparably clear reduction was not obtained in this study, even though sound bio-mechanical models supported the authors' hypothesis. This, along with the negative exoskeleton effect in the light load condition, suggests that there is room for improving the constant torque strategy used in this study.
One upgrade is to modulate the delivered torque according both to the handled payload and to the user's body mass. In particular, the analysis of both α and γ supports the need to modulate the assistance according to the handled payload. Indeed, considering the lightest loading condition, the exoskeleton does not clearly reduce spinal loading, as highlighted by α. On the other hand, as the handled weight increases, the exoskeleton assistance results in reductions of the α values both for ρ M and ρ P (α < 1). To this extent, it is interesting to note that, even if very slightly, the intermediate condition seems to be a minimum and might indicate that the amount of assistance provided is best suited to that payload range. Moreover, the number of subjects that benefited from the additional torque provided by the exoskeleton increased as the weight of the carried load increased (see Figure 4). For the light payload test, the muscle activity increase may be explained by subjects adopting abdominal and back-extensor co-contraction, stiffening the upper body to counteract the backwards push of the exoskeleton and to regain stability. Further experimentation could help clarify this phenomenon and whether modulating the torque according to the payload would, as expected, reduce it. Finally, the large variability in the results further suggests the possibility of modulating the delivered torque not only according to the payload, but also to the subject's body mass. Indeed, despite the body mass variability (76.55 ± 8.22 kg), the delivered torque was kept constant, and so it is possible that subjects with different body masses experience and react differently to the same amount of assistance.
The Lower Limbs
The exoskeleton assistance on the lower limbs resulted in hindrance, clearly affecting the gait kinematics of all the users. This hindrance is evident both in the subjective perception of the users (7 out of 9 users felt hindered on the legs) and in the kinematic analysis. Indeed, hip and knee RoMs were reduced by up to 12%. Stride speed was also reduced, due to a corresponding increase in stride duration (between 6 and 8%). In addition, a study conducted on the effects of load carriage on the energy cost of walking (Abe et al., 2004) showed no significant difference in the energy cost of walking between the control condition (empty backpack) and the test one (backpack with a 6 kg load). This suggests that the differences noted in this study are related more to the exoskeleton torque provision than to the exoskeleton weight itself (6 kg). These elements suggest that simultaneously pushing both hips toward extension is not the best assistive strategy for carrying.
Furthermore, although the kinematic analysis and the subjective perceptions are clearly polarized, this is not the case for the muscle analysis. There may be two main reasons that explain this lack of a clear trend.
The first reason is the non-ideal choice of the muscles. Indeed, partially due to the exoskeleton fitting and partially due to the difficulty of assessing hip flexor activity via sEMG, in the proposed protocol it was not possible to measure the muscle activity of the Iliopsoas (hip flexor) and of the Gluteus Maximus (hip extensor) (Byrne et al., 2010). For these reasons, the activities of the RF and ST were chosen as representative of hip flexion/extension muscle activation. Problems in assessing the proper flexors and extensors are reported also in Baltrusch et al. (2019), where muscle activity did not show any significant differences between conditions.
The second reason why no trend emerges in the selected muscles might be related to the changes in the gait trajectory, as reported above. Analyzing both hip and knee joints, for each load condition, the exoskeleton assistance resulted in a reduced RoM. Indeed, almost all of the population lies in the ρ RoM h, ρ RoM k < 1 region. It is interesting to note the small data variability (iqr), which suggests that this result is reliable.
Moreover, the one-way ANOVA test (significance level = 0.05) conducted to compare the Exo-on and No-exo conditions found statistically significant differences for at least one joint in all of the conditions (see Table 2). In the case of the hip joint, the RoM reduction is due both to smaller flexion angles, hindered by the constant torque, and to smaller extension angles, possibly due to a compensation for the unwanted/unexpected backwards push of the exoskeleton. Differences in the knee trajectory can be explained as a consequence of the hip changes. Delving further into the kinematic analysis, Figure 7 shows that the Exo-on condition caused a reduction in the users' walking speed: all the population, apart from one outlier, lies within the ρ δ > 1 region. Therefore, reduced RoMs and slower strides show an evident hindrance, confirming the authors' expectations.
On (the Need of) Back-Support Exoskeleton Versatility
To fully exploit back-support exoskeleton versatility, the standard control strategies can be expanded by including task awareness. This implies that, first, the activity being performed by the user is recognized (high level); then, in accordance with the task, the appropriate assistive strategy is selected (mid level); and, finally, the actuators are controlled to ensure that the provided torque is properly delivered (low level). Such a distinction of control levels was presented in Tucker et al. (2015). Now that the data have been presented and discussed, there are more elements with which to debate the need to recognize different tasks and the opportunity of selecting the controller according to the one being performed. Passive exoskeletons, generally lighter, simpler, and cheaper than active ones, can avoid the lower limb hindrance found in walking activities (Baltrusch et al., 2019). This is achieved by resorting to manual clutches, spring offsets, and automatic engagement or disengagement of passive elements, as in the commercial products by Laevo 2 and Ottobock 3, or in research prototypes (Jamšek et al., 2020). On the other hand, due to mechanical design limitations, passive devices cannot provide support in carrying activities, so for them there is no need to discriminate between walking and carrying.
Unlike passive devices, active exoskeletons are more versatile and are thus able to exploit the functionality and flexibility of their actuators to create assistance profiles that can be tailored to the demands of the assisted task, like carrying in this case. Not all active exoskeletons, however, have the same "degree of versatility." As an example, the H-WEX exoskeleton presented in Ko et al. (2018) cannot provide support differently from the approach presented in this study. This is due to the choice of a single motor for the actuation, resulting in a more compact, efficient, and lightweight exoskeleton. However, the single motor can only modulate the delivered amount of torque and cannot assist the legs independently, according to the gait phase. Instead, as an example, the APO exoskeleton (Chen et al., 2018) and the XoTrunk exoskeleton used in this study have two motors, one on each side. This design choice can be exploited to develop new assistance strategies more appropriate for carrying, as sketched below. Indeed, in the previous sections, it has been discussed how a better strategy could improve the effectiveness of the exoskeleton for the back region and reduce the hindrance to the lower limbs (as seen in the data analysis). Hence, considering active exoskeletons, distinguishing among walking, carrying, and lifting is supported both by the relevance of carrying activities and by the need to switch between different controllers.
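As an illustration of what two independent motors enable, the sketch below modulates each side's torque with its gait phase, assisting the stance-side hip and releasing the swing-side one; the gait-phase input and torque values are assumptions, not the authors' implemented controller:

```python
ASSIST_TORQUE_NM = 10.0  # per side, as in the constant-torque mode

def carrying_torque(is_stance):
    """Torque reference for one side given its current gait phase."""
    return ASSIST_TORQUE_NM if is_stance else 0.0

# e.g., from IMU-based gait segmentation: left leg in stance, right in swing
left_cmd = carrying_torque(is_stance=True)    # supports hip extension
right_cmd = carrying_torque(is_stance=False)  # releases the swinging hip
print(left_cmd, right_cmd)
```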
As a final comment, it is useful to note that in Poliero et al. (2019b) the distinction between lifting and walking only takes into account kinematic variables, whereas specific sources of information (like forearm muscle activity, sensorized gloves/insoles, or vision) are used to discriminate between walking and carrying. This final consideration highlights that not only mechanical choices but also control choices can affect the versatility of back-support exoskeletons.
Limitations
In the designed testing protocol, MVC calibration was performed adopting a single posture. This procedure is more prone to variability in the MVC normalization, as subjects might require different postures to reach maximum muscle activity (McGill, 1991). The large inter-subject variability did not allow us to always apply standard statistical analyses such as the analysis of variance. For this reason, the authors decided to perform intra-subject normalization between the control and test conditions. As a consequence, the results are discussed in terms of trends. The proposed testing protocol was carried out in a lab setting. This might differ substantially from the conditions found in a workplace, where users may be required to walk on undulating or sloped surfaces in addition to level ground. Therefore, our findings cannot be directly generalized to out-of-the-lab scenarios. Additionally, the instruction to perform the carrying task at a self-selected speed might be a further simplification of actual working conditions. Indeed, for given tasks, workers could be required to walk as fast as possible so as not to limit productivity. Also, the relatively short duration of the activities performed during the testing protocol does not allow us to observe fatigue effects or the effects of prolonged exoskeleton usage.
CONCLUSION
In the context of manual material handling and, more specifically, of the ISO 11228-1 standard, carrying can have an impact on spinal loading comparable to that of lifting. Back-support exoskeletons are generally used to assist lifting and, thus, mitigate the ergonomic risks associated with this activity. The applicability of these devices to other activities, such as carrying, is still an open issue.
This paper investigates first the effects of carrying on spinal loading and, then, the effects of assisting carrying with an exoskeleton designed to support lifting tasks. An experimental campaign involving 9 users and three different payloads (1.2, 8.2, and 16.2 kg) was designed to assess the relevance of carrying and the benefits arising from providing assistance for this task in the same way as for symmetric lifting, i.e., synchronously supporting back and hip extension on both sides.
The findings indicate that carrying, from an ergonomic viewpoint, is a relevant activity because the corresponding spinal loading is comparable to lifting.
Contrary to the expected outcome, the experimental results do not provide clear evidence of the effectiveness of the analyzed strategy in supporting the lower back. However, the overall lumbar activity shows a promising trend when carrying heavy objects, as muscle activation is reduced by up to 12%; the large data variability invites caution when interpreting this result. In agreement with expectations, the strategy hindered the lower limbs. This is supported by the reduction in hip and knee RoMs (around 10%) and by the increase in stride duration (between 6 and 8%). Due to the changes in gait kinematics and the difficulties in assessing the proper hip flexors and extensors, the muscular analysis of the lower limbs did not provide significant findings.
Finally, there has been a discussion of how a better control strategy could improve the effectiveness of the exoskeleton. As control strategies for back-support exoskeletons start addressing tasks other than lifting, the capability of recognizing which activity is being performed and, thus, of triggering the appropriate controller becomes a relevant feature, promoting the versatility of active exoskeletons.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Comitato Etico Regione Liguria. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
TP, ML, ST, CD, and JO devised the experiment. TP led the | 10,112.2 | 2020-12-09T00:00:00.000 | [
"Engineering",
"Medicine"
] |
The Software to the Soft Target Assessment
Soft targets are places where groups of people are exposed to the risk of attack, with potentially fatal consequences for the population. The current world situation reflects the fear of attacks on soft targets. This fear of losing one's life is present in public places and in all types of freely accessible buildings. Each of us spends time in shopping centers or parks every day, and our children spend time in schools, where they can be threatened. Soft targets are characterized by a considerable number of persons present at the same time in the same area, while the current state of security measures is not yet adequate to the threats. The main aim of the software for the assessment of soft targets is to protect the people in them, minimize the impact on the people (visitors), and help to solve the problem at the moment it occurs. The methodology is based on the assessment of the object according to its features (according to the criteria).
Introduction
Soft targets, crowded places, and critical infrastructure objects are very vulnerable to the risk of attack. These objects are characterized by a large number of visitors per day, while their security does not match what the object requires [1]. These visitors are the aim of a potential attacker: a person who believes that an attack can solve a problem (with the political situation, with the state system, or with the world). The attacker believes that the fear of death can solve the problem or can spread fear among other people. An attack on a soft target can disrupt the functionality of the state, with fatal consequences [2].
The proposed methodology and software tool can support the evaluation of soft targets and help protect people's lives. The main problem is that these places are openly accessible all day. Open access makes early detection of an attacker difficult and puts many people at risk. If we can analyze the features of the object in advance, we can address the problem more effectively [3].
The current state of the research is described in this chapter. We have developed the static part of the assessment tool, and this static part has been verified in practice (through analyses of soft targets in the Czech Republic). Section 2 describes attacks in recent years. The mathematical definition of the soft target analysis is given in Section 3. Section 4 presents a case study of shopping centers in the Czech and Slovak Republics, and Section 5 a case study of train and bus stations. Section 6 compares the shopping center analyses with the train and bus station analyses, and the last section summarizes the conclusions.
Attacks in recent years
Figure 1 shows a timeline of attacks since 2019. The timeline focuses on attacks against civilians, carried out at soft targets and in crowded places.
As Figure 1 shows, attacks on soft targets killed at least 139 people and injured 301. Attacks on soft targets are evidently frequent. On the other hand, many attacks on soft targets were uncovered before the attacker could carry them out. We can therefore say that a system such as ours has become increasingly needed in recent years.
The mathematical definition of the analysis of the soft targets
The whole analysis of the features of a soft target is based on the relation between questions and answers. Each question probes the status of one feature of the object, and each answer defines a level of security, a level of the question, and a level of the feature. This structure is illustrated in Figure 2.
The level of security can be influenced by security measures: after a repeated assessment, a higher level of security can be observed.
The overall coefficient of the object is defined in Eq. (1) as a weighted combination of the partial coefficients,

K_S = W_1 * L + W_2 * C_EK + W_3 * C_PK + W_4 * C_IK, (1)

where K_S is the final security coefficient; W_n the weight of each coefficient; L the coefficient of the locality; C_EK the final coefficient of the exterior; C_PK the final coefficient of the processes; and C_IK the final coefficient of the interior.
The weights W_n are set by the administrator of the software tool; we expect them to be refined after more case studies, and in the current research all weights were set equal. The locality coefficient is defined by a map tool, in which the administrator defines the risk of each locality.
The coefficient of the interior is defined in Eq. (2). Each security attribute can be used in the object several times, and the number of uses contributes to the overall interior security: PB is the number of security attributes; B_i the individual security attributes; and I_K the interior criterion.
Eq. (3) defines the final interior criterion within one category of the interior as the sum of the individual interior criteria in that category: I_KCi are the interior criteria and PK is their number. The categories of interior criteria are then combined into the final coefficient of the interior in Eq. (4): C_IK is the final coefficient of the interior and P the number of categories, and the final coefficient of the interior is the sum over all categories. The exterior criterion is defined in Eq. (5): E_K is the value of the exterior criterion, k_u the coefficient of the security level, and N the number of attributes.
Eqs. (6) and (7) aggregate the exterior criteria: E_KC denotes all criteria of the exterior and N the total number of attributes.
The final coefficient of the exterior is defined in Eq. (8).
C_EK is the whole coefficient of the exterior. Each of these equations is used later in the paper (in the case study). The coefficient of one process is defined in Eq. (9),

P_K = n_k * k_u, (9)

where P_K is the coefficient of one process, n_k the number of its criteria, and k_u the level of the criteria.
Eq. (10) gives P_KCj, the complete process coefficient of all processes in one category (processes are divided into categories): N is the number of processes, n the number at the upper level of the process, and P_Ki the coefficient of each individual process.
The complete process coefficient is defined in Eq. (11). Each category has a weight defined according to the threats, or the weights can be set equal: C_PK is the whole coefficient of the processes, k indexes the criteria, and W is the weight of the process.
This part of the chapter defined the mathematics of the whole analysis process. We have defined three types of concrete analysis (processes, interior, and exterior) and one type of outside analysis (the locality coefficient). The locality coefficient is defined according to the situation in the nearest area of the object, and it can change over time without any change in the object itself. For example, a public event, such as a Christmas market, can influence the security situation of the object.
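To make the aggregation concrete, the following minimal sketch implements the weighted combination of Eq. (1), assuming the weighted-sum form reconstructed above; the function name, weights, and sub-coefficient values are illustrative and not taken from the case study.

```python
# Minimal sketch of the final security coefficient aggregation (Eq. 1),
# assuming a weighted sum of the four sub-coefficients. Weights and the
# sub-coefficient values below are illustrative placeholders, not data
# from the case study.

def final_security_coefficient(L, C_EK, C_PK, C_IK, weights=(0.25, 0.25, 0.25, 0.25)):
    """K_S as a weighted sum of locality, exterior, process, and interior coefficients."""
    w1, w2, w3, w4 = weights
    return w1 * L + w2 * C_EK + w3 * C_PK + w4 * C_IK

# Example with evenly set weights, as in the current research.
K_S = final_security_coefficient(L=5.0, C_EK=4.5, C_PK=6.0, C_IK=8.5)
print(f"K_S = {K_S:.2f}")  # scale 0-10, higher means more secure
```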
The case study of the analysis of the shopping center
The case study covered 35 objects situated in the Czech Republic and the Slovak Republic. The analyzed objects belong to the following categories: shopping centers, schools and universities, authority offices, train and bus stations, multifunctional buildings, sports stadiums, hospitals, and theaters and cinemas. In this part of the chapter, we discuss the analysis of the shopping centers.
Figure 3 shows the analysis of nine shopping centers. The x-axis gives the object number and the y-axis the security coefficient. A value of 10 represents the best security situation in an object; conversely, a value of 0 or 1 represents the worst.
As Figure 3 shows, object 2 has the best security situation: it has the highest final security coefficient (K_S = 6.5). Its final coefficient of the interior (C_IK = 8.54) is also the best interior coefficient in this case study, and its exterior and process coefficients are likewise the best in the case study. The following paragraphs characterize object 2. Figure 4 shows its locality: object 2 is a relatively new shopping center, located not in the city center but in a locality that is important for visitors, with a river around the object. The locality and the city do not receive many visitors per day, and this fact can have a significant impact on the assessment of the locality and exterior. Note that this can vary in time; for example, a visit by the president can make the locality less safe, giving the object a lower security coefficient.
Figure 5 shows the anti-terrorism pillars installed in front of the entrance to the object. Object 2 has many integrated security measures because it is a relatively new building.
As Figure 5 shows, these pillars can protect the entrance against vehicle-ramming attacks (driving a vehicle into people).
On the other hand, the worst security situation is in object 9, shown in Figure 6. Object 9 is in another city in the Czech Republic. It is not in the city center but very close to it, next to a main road that runs across parts of the city. Object 9 is also located near the Rock café, where rock concerts and similar events are organized. This raises the risk of a violent attack on the building, though less so on people, because these events take place in the evening, when the shopping center is closed. The object has a barrier gate at the entrance to its parking area.
As Figure 3 shows, object 9 has a lower exterior coefficient. This can be attributed to its proximity to the main road, to the city center, and to the Rock café.
This part of the chapter described the results of the case study for the shopping center category. The next part describes the results for the train and bus stations. These objects are open, and we can observe the differences between the two types of security coefficients caused by the different categories.
The case study of the analysis of the train and bus station
This chapter describes the analysis of objects in the train and bus station category. We expect these objects to differ from the shopping centers in the exterior coefficient; if the case study confirms this, we can say that the proposed analysis methodology corresponds to reality. Figure 7 shows the train and bus stations analyzed in this case study; the numbers in the figure mark the object numbers.
Figure 8 shows the data of the case study. The best security situation is object 14, a train and bus station located in Frýdek-Místek. This result, however, is an artifact of the analysis: only the building of the bus station was assessed, and the analyst did not analyze the train station, which would otherwise lower the final coefficient. On the other hand, the worst situation according to the final security coefficient is object 3, located in Zlín, a regional capital. We can say that this object was analyzed correctly.
Object 3 is situated in the middle of the city center, and the rail tracks run across the middle of the town. On the other hand, rail transport is not heavily used in Zlín. Figure 9 shows the location of the train and bus station.
Finally, the analyses in the train and bus station category show some differences between assessments. This is caused by the focus of the analysis and the software tool: the proposed software tool was developed for analyzing buildings, not open spaces. A second reason can be that the case study was carried out by several analysts.
The final comparisons
We can state that each of the train and bus stations has a different exterior security coefficient in contrast with the shopping centers. This contrast can be seen in Figure 10.
As Figure 10 shows, the difference between the average values is significant: the average exterior coefficient of the shopping centers is 4.49, while that of the train and bus stations is 1.75. As the right side of the figure shows, the train and bus stations exhibit significant deviations in two cases: objects 21 and 14 were not analyzed correctly, as the analyst assessed only one part of the station. These two cases are therefore excluded from the average value.
As the right side of Figure 11 shows, object 14 (the train and bus station in Frýdek-Místek) was not analyzed correctly, so its results cannot be used as valid data. On the other hand, object 8, which has the second-best security coefficient, was analyzed correctly. The average value of this analysis is 3.73, whereas the average value for the shopping centers is 5.04. We can say that the analysis corresponds to reality, because open-space objects show significantly different security situations.
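The exclusion logic used in this comparison can be sketched as follows; the coefficient values are hypothetical placeholders, and only the exclusion of objects 14 and 21 follows the text.

```python
# Sketch of the category comparison: average exterior coefficients per
# category, excluding objects flagged as incorrectly analyzed (here
# objects 14 and 21, per the case study). Coefficient values are
# illustrative placeholders, not measured data.

def category_average(coeffs, excluded=()):
    values = [v for obj, v in coeffs.items() if obj not in excluded]
    return sum(values) / len(values)

station_ext = {3: 0.9, 8: 2.1, 14: 6.0, 21: 5.5}   # hypothetical values
shopping_ext = {1: 4.0, 2: 6.5, 9: 3.0}            # hypothetical values

print(category_average(station_ext, excluded=(14, 21)))
print(category_average(shopping_ext))
```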
Conclusion
This chapter described the proposed software tool, available at www.softtargets.eu. The methodology was developed as a doctoral thesis. The first part of the chapter presented the mathematical definitions of the proposed methodology; the second part described the case study, which compared the shopping center analyses with the train and bus station analyses. The results for the two categories differ significantly, and these differences are caused by the exterior: the shopping center analyses cover a building and its close surroundings, while the train and bus station analyses cover open spaces. We can conclude that the proposed software can also be used for the analysis of open spaces; however, all buildings and all aspects of the open space need to be analyzed, and more criteria must be defined for open spaces. These criteria will differ from the criteria for buildings, because they have to address other features of the soft target.
Seismic evaluation of existing building structure using United States (ASCE 41-17) and Japanese (JBDPA) standard: Case study office building in Indonesia
In this study, the evaluated building is a reinforced concrete structure with 4 stories in Surabaya City, Indonesia, which is over 35 years old. The methods used are the U.S. standard ASCE 41-17 and the Japanese standard JBDPA. The ASCE 41-17 Tier 1 checklist, with a 975-year earthquake return period (BSE-2E) and a collapse prevention performance target, yielded several evaluation items with noncompliant (NC) status: out of the 21 checklist items, 5 items were compliant (C), 7 noncompliant (NC), 2 not applicable (N/A), and 7 unknown (U). The nonlinear static procedure (NSP) likewise shows that the building's performance is still below the target performance level. The evaluation using JBDPA shows that the seismic index of structure (I_S) at the first- and second-level screening procedures is less than the seismic demand index of structure (I_SO). The two methods give the same result: the building has a deficiency of strength and ductility and needs to be retrofitted to improve its performance under earthquake loads.
Introduction
Indonesia, as a country with a high risk of earthquakes, has had seismic building codes for earthquake-resistant structural design since 1970 (PMI 1970), updated several times up to the latest in 2019 (SNI 1726:2019) [1][2]. However, during this period Indonesia did not yet have a standard for the seismic evaluation of existing buildings; evaluations of existing buildings carried out so far still refer to seismic design codes, which are not appropriate for this purpose. In 2021, the National Centre for Earthquake Studies under the Ministry of Public Works and Housing of Indonesia formed a team to compile seismic evaluation standards adapted from ASCE 41-17; however, these standards had not been officially issued at the time of writing. In the history of the development of seismic design codes in Indonesia, two standards have been the primary references: at PMI 1970, Indonesia's seismic design codes adopted Japanese standards, but from 2012 until now Indonesia has consistently referred to U.S. standards, namely ASCE 7 [3].
In this paper, a seismic evaluation is carried out on a reinforced concrete structure built in 1987 (over 35 years old) in Surabaya City, Indonesia. The seismic evaluation standards used are the U.S. standard, the American Society of Civil Engineers ASCE 41-17 [4], and the Japanese standard, the Japan Building Disaster Prevention Association JBDPA English Version 2001 [5]. The objective of this paper is to examine the differences in analysis method and evaluation results between these two standards in a case study of an office building in Indonesia.
Object building
The object building of this paper has a reinforced concrete moment frame structural system. Based on its construction year, it was designed using the Earthquake Resistant Design Provisions Code for Buildings 1983 (PPTGIUG 1983) [6]. Compared with the current Indonesian seismic standard, there is a significant difference in base shear demand: Surabaya City falls in Zone 4 of the 1983 standard, and relative to this zone the 2019 standard shows a 24% increase in base shear [1]. However, this alone does not imply that the evaluated building is structurally unreliable; many variables need to be considered in the evaluation.
The object building does not have complete technical data, so several tests were conducted to obtain them. Testing included geometric measurements, rebar scanning, rebound hammer tests, and compressive core drill tests. Drawings of the building plans and the measurement results are shown in Figs. 1-5.
The dimensional measurements yielded the plan and geometric dimensions of each structural member. There are two types of columns, three types of beams, and one slab. The reinforcement was inspected not only by rebar scanning but also by random chipping to verify the scanning results. The floor system uses a slab with a thickness of 12 mm, and the slab system has been retrofitted with steel beams of section 350x175x7x11, indicated by the dashed line in Fig. 4. Thirteen core drill samples were taken, assuming uniform concrete quality in all structural members; this number satisfies the minimum sample counts in both seismic evaluation standards (six in ASCE 41-17, with no minimum specified in JBDPA). In addition, a non-destructive rebound hammer test was carried out at each core drill location. This serves to build a correlation curve between the rebound number and the compressive strength of the cores, because rebound numbers can only be used to determine concrete quality if they are correlated with core compressive strengths [7][8]. Thirty-two rebound number sample points were taken; there is no specific provision for this amount, but the principle is to spread the collected data over all building areas. The results of the rebound number test and compressive core drill test are given in Table 1. For the properties of the steel rebar, the reference standards [9] for the construction year were used: U-39 reinforcing steel with a yield strength (fy) of 390 MPa and an ultimate strength (fu) of 450 MPa. The concrete strength is then calculated according to the provisions of the evaluation standard used.
ASCE 41-17 evaluation
3.1 Evaluation procedure
The ASCE 41-17 standard provides three tiers of procedures for the seismic evaluation, and two tiers for the seismic retrofit, of an existing building, adapted for use in areas of varying levels of seismicity. In this paper, the study objective is evaluation only; the evaluation process is shown in Fig. 6.
Tier 1 evaluation
Tier 1 screening is the first step in the seismic evaluation of a building. At the beginning of the Tier 1 evaluation, the performance level, seismic hazard level, and level of seismicity of the evaluated building must be established. The performance level is determined from the risk category of the building. The evaluation then proceeds with a more detailed inspection procedure in the form of a checklist appropriate to the type and earthquake risk of the building being evaluated. The Tier 1 screening process is shown in Fig. 7. *It may be beneficial for the engineer to perform a Tier 1 screening evaluation prior to a Tier 3 systematic evaluation even though it is not required.
**The evaluation process may proceed directly to the tier 3 systematic evaluation as an option.
Level of seismicity and structural performance
The object building functions as an office building, so it is assigned risk category II. ASCE 41-17's scope of assessment requires (Table 2) that the object building be evaluated for collapse prevention performance under BSE-2E (basic safety earthquake-2, taken as a seismic hazard with a 5% probability of exceedance in 50 years); it does not need to be evaluated for BSE-1E (basic safety earthquake-1, taken as a seismic hazard with a 20% probability of exceedance in 50 years).
Based on Indonesia's seismic hazard maps with a 2% probability of exceedance in 50 years [10], Surabaya City has SDS = 0.64 and SD1 = 0.57 (BSE-2N), so it has a high level of seismicity (Table 3).
Basic configuration checklist
This checklist covers general building matters such as the structural system and building configuration, and evaluates the risk of failure in geological and geotechnical aspects. The results of the basic configuration check are shown in Table 4.
Table 4. Basic configuration checklist result (C = Compliant, NC = Noncompliant, N/A = Not Applicable, U = Unknown).

Low seismicity, building system (general):
- LOAD PATH: The structure contains a complete, well-defined load path, including structural elements and connections, that serves to transfer the inertial forces associated with the mass of all elements of the building to the foundation. (C)
- ADJACENT BUILDINGS: The clear distance between the building being evaluated and any adjacent building is greater than 0.25% of the height of the shorter building in low seismicity, 0.5% in moderate seismicity, and 1.5% in high seismicity. (C)
- MEZZANINES: Interior mezzanine levels are braced independently from the main structure or are anchored to the seismic-force-resisting elements of the main structure. (N/A)

Low seismicity, building system (building configuration):
- WEAK STORY: The sum of the shear strengths of the seismic-force-resisting system in any story in each direction is not less than 80% of the strength in the adjacent story above. (C)
- SOFT STORY: The stiffness of the seismic-force-resisting system in any story is not less than 70% of the seismic-force-resisting system stiffness in an adjacent story above or less than 80% of the average seismic-force-resisting system stiffness of the three stories above. (C)
- VERTICAL IRREGULARITIES: All vertical elements in the seismic-force-resisting system are continuous to the foundation. (N/A)
- GEOMETRY: There are no changes in the net horizontal dimension of the seismic-force-resisting system of more than 30% in a story relative to adjacent stories, excluding one-story penthouses and mezzanines. (C)
- MASS: There is no change in effective mass of more than 50% from one story to the next. Light roofs, penthouses, and mezzanines need not be considered. (C)
- TORSION: The estimated distance between the story center of mass and the story center of rigidity is less than 20% of the building width in either plan dimension. (C)

Moderate seismicity, geologic site hazard:
- LIQUEFACTION: Liquefaction-susceptible, saturated, loose granular soils that could jeopardize the building's seismic performance do not exist in the foundation soils at depths within 50 ft (15.2 m) under the building. (U)
- SLOPE FAILURE: The building site is located away from potential earthquake-induced slope failures or rockfalls so that it is unaffected by such failures or is capable of accommodating any predicted movements without failure. (U)
- SURFACE FAULT RUPTURE: Surface fault rupture and surface displacement at the building site are not anticipated. (U)

High seismicity, foundation configuration:
- OVERTURNING: The ratio of the least horizontal dimension of the seismic-force-resisting system at the foundation level to the building height (base/height) is greater than 0.6 Sa. (U)
- TIES BETWEEN FOUNDATION ELEMENTS: The foundation has ties adequate to resist seismic forces where footings, piles, and piers are not restrained by beams, slabs, or soils classified as Site Class A, B, or C. (U)

The basic configuration screening results show that almost all evaluation items are compliant, except those requiring geotechnical conditions and foundation details. This is because no geological surveys and no detailed inspections of the foundation structure were carried out in this study.
Collapse prevention structural checklist
The structural checklist comprises a more detailed examination of the reliability of the building superstructure. The object building is a reinforced concrete frame structure of type C1 with a collapse prevention performance target checklist. This evaluation relied only on data from inspections and field tests conducted using stratified random sampling. Some data were lacking because of the limitations of the examination, so several items were evaluated by engineering judgment with the principle of evaluating conservatively. The results of the collapse prevention structural checklist are shown in Table 5.
Table 5. Collapse prevention structural checklist result, part 1 (C = Compliant, NC = Noncompliant, N/A = Not Applicable, U = Unknown); the table is continued after the conclusions.

Low seismicity, seismic-force-resisting system:
- REDUNDANCY: The number of lines of moment frames in each principal direction is greater than or equal to 2. (C)
- COLUMN AXIAL STRESS CHECK: The axial stress caused by unfactored gravity loads in columns subjected to overturning forces because of seismic demands is less than 0.20 f'c; alternatively, the axial stress caused by overturning forces alone, calculated using the quick check procedure of Equation 2, is less than 0.30 f'c. (NC)

Several items in the structural checklist results, such as the pseudo seismic force, column axial stress check, column shear stress check, shear failure, and strong column-weak beam, are explained in detail below in terms of their concepts, calculations, and evaluation results.
Concrete strength
Non-destructive testing complements the cored drill data. The analytical approach is to correlate the non-destructive test results at particular sample points with the core compressive strength values at those points, for which a strength correlation curve is needed. The correlation results, following ACI 228.1R-19 [9], are shown in Fig. 8. The concrete quality test results in Table 6 are interpreted by calculating the design equivalent concrete compressive strength, f'c,eq. In this report, f'c,eq is calculated using the Alternate Method (Bartlett and MacGregor, 1995) [11] described in ACI 214.4R-10 [12]. In this method, f'c,eq is based on a lower-bound value of the average core sample strength in the building structure, i.e., a lower-bound estimate of the average in-place compressive strength at the desired confidence level, CL.
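A minimal sketch of this workflow is given below, assuming a simple linear correlation and a one-sided Student-t lower bound on the mean; the measured values are placeholders, and the correction factors of the ACI 214.4R Alternate Method are omitted.

```python
# Sketch of the rebound-number / core-strength correlation and a
# lower-bound estimate of the in-place mean strength. A linear
# correlation is assumed, the sample values are illustrative, and the
# correction factors of the ACI 214.4R Alternate Method are omitted.
import numpy as np
from scipy import stats

rebound = np.array([22, 25, 28, 30, 33, 35])               # rebound numbers at core locations
core_fc = np.array([9.5, 10.2, 11.0, 11.8, 12.5, 13.1])    # core strengths, MPa

slope, intercept = np.polyfit(rebound, core_fc, 1)          # least-squares correlation line
fc_pred = slope * np.array([24, 31]) + intercept            # predictions at new rebound points

# One-sided lower bound on the mean in-place strength at confidence CL.
n = core_fc.size
mean, sd = core_fc.mean(), core_fc.std(ddof=1)
t = stats.t.ppf(0.90, df=n - 1)                             # CL = 90%, illustrative
fc_lower = mean - t * sd / np.sqrt(n)
print(f"fc = {slope:.2f}*RN + {intercept:.2f}; lower-bound mean = {fc_lower:.2f} MPa")
```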
Pseudo-Seismic force
The pseudo shear force V is calculated from Equation 1:

V = C * Sa * W. (1)
The pseudo shear force formula for evaluation is the same as the base shear equation for design, but it does not include ductility modification and importance factors. C is the modification factor relating expected maximum inelastic displacements to displacements calculated for linear elastic response; here C = 1.0 is used. Sa is the response spectral acceleration at the fundamental period of the building in the direction under consideration, taken at the BSE-2E hazard level according to Table 2. Based on Indonesia's earthquake hazard deaggregation map with a 5% probability of exceedance in 50 years [13], Surabaya City has SS = 0.60 and S1 = 0.25. W is the effective seismic weight of the building, calculated from the self-weight of the structure and the superimposed dead load; in this case W = 19,699.2 kN. The results of the pseudo shear force calculation are given in Table 7.
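A minimal sketch of the Tier 1 pseudo force computation follows; C and W follow the text, while the Sa value here is a placeholder to be read from the BSE-2E spectrum.

```python
# Pseudo seismic force per the Tier 1 quick check (Eq. 1): V = C * Sa * W.
# C = 1.0 and W = 19,699.2 kN follow the text; the spectral acceleration
# Sa at the fundamental period is a placeholder here (it must be read
# from the BSE-2E response spectrum).

def pseudo_seismic_force(C, Sa, W):
    """Pseudo lateral force, in the same units as W."""
    return C * Sa * W

V = pseudo_seismic_force(C=1.0, Sa=0.60, W=19699.2)  # Sa illustrative
print(f"V = {V:.1f} kN")
```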
In Equation 2, nf is the number of frames in the direction of loading, hn the total height of the building, L the span length of the frame, Acol the area of the outside corner column, and Ms the system modification factor, taken as 2.5 in this equation for collapse prevention (Table 8).
Column Shear Stress
Story shear forces were calculated by distributing the pseudo seismic force V vertically. The average shear stress in the columns, where nc is the number of columns and nf is the number of frames in the direction of loading, is then calculated using Equation 3, with the system modification factor Ms taken as 2.0 for collapse prevention. The value on each floor must be less than the greater of 0.69 MPa or (1/6)*sqrt(f'c) (Table 9). ASCE 41-17 requires that the shear capacity of moment-resisting frame members be sufficient to guarantee that the bending moment capacities at the member ends can be developed. If the shear capacity of a component is reached before its bending moment capacity, the component may fail in a brittle manner, which can result in total collapse. Components whose shear resistance is lower than their bending resistance must therefore have their shear resistance checked against the shear force demand from the governing load combination.
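A sketch of the quick check is given below, assuming the average-shear form (1/Ms)(nc/(nc - nf))(V/Ac) commonly associated with the ASCE 41-17 quick-check procedure; the exact coefficients should be verified against the standard, and the inputs are placeholders.

```python
# Sketch of the column shear stress quick check. The average shear
# stress form (1/Ms) * (nc/(nc - nf)) * (V_story / Ac) is my reading of
# the ASCE 41-17 quick-check equation; verify against the standard
# before use. Inputs are illustrative placeholders.
import math

def column_shear_check(V_story, Ac, nc, nf, Ms, fc):
    """Return (v_avg, limit, passes); stresses in MPa, loads in N, areas in mm^2."""
    v_avg = (1.0 / Ms) * (nc / (nc - nf)) * (V_story / Ac)
    limit = max(0.69, math.sqrt(fc) / 6.0)   # greater of 0.69 MPa or (1/6)*sqrt(f'c)
    return v_avg, limit, v_avg <= limit

v, lim, ok = column_shear_check(V_story=2.0e6, Ac=1.2e6, nc=16, nf=4, Ms=2.0, fc=10.97)
print(f"v_avg = {v:.2f} MPa, limit = {lim:.2f} MPa, compliant = {ok}")
```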
In this case, the shear resistance of components is evaluated by sampling the members estimated to experience the greatest demand. The sample beams are those on grids F, G-2 story 1, and F-2.3 story 1. The calculation results are shown in Table 10.
Strong column-weak beam
The strong column-weak beam requirement is met if the total moment capacity of the columns at a joint is at least 20% greater than the total moment capacity of the beams. In this case, the sample evaluation was made at the 2-F beam-column joint of story 1 for the X and Y directions.
The moment capacity Mn of a column depends on the magnitude of the axial load. In this calculation, the column Mn is taken using the column axial load from the gravity load combination, which yields a conservative Mn. The calculation results are shown in Table 11.
Summary of tier 1 structural checklist
Of the 21 structural checklist evaluation statements, 5 items are compliant (C), 7 are noncompliant (NC), 2 are not applicable (N/A), and 7 are unknown (U). The noncompliant items are column axial stress, column shear stress, captive columns, shear failure, strong column-weak beam, column-tie spacing, and stirrup spacing. The unknown items are those requiring detailed reinforcement data, especially at foundation joints, which could not be obtained because of the limitations of the rebar scanning equipment.
Tier 3 evaluation
Referring to the ASCE 41-17 evaluation process, if noncompliant items are found in the Tier 1 checklist, there is an option to proceed directly to a Tier 3 evaluation; this step was taken in this study. In Tier 3, a structural system analysis is carried out with acceptance criteria based on the basic performance objectives for existing buildings (BPOE), namely, for risk category II, life safety for BSE-1E and collapse prevention for BSE-2E. Section 6.2.3 of ASCE 41-17, on data collection, requires that the chosen procedure be consistent with the level of knowledge of the data collection; in this study the data level is set to the usual level, because drawing surveys and material testing were carried out. The building plan is symmetrical and has no horizontal or vertical irregularities, so the nonlinear static procedure (NSP) was chosen as the analytical procedure.
The NSP concept is to calculate the target displacement, which is then evaluated against the component acceptance criteria. The target displacement can be calculated using Equation 4 or the FEMA 440 demand spectrum method:

dt = C0 * C1 * C2 * Sa * (Te^2 / 4*pi^2) * g. (4)
In Equation 4, Sa is the response spectrum acceleration at the effective fundamental period, g the acceleration of gravity, C0 the modification factor relating spectral displacement to roof displacement, C1 the modification factor relating expected maximum inelastic displacements to elastic displacements, C2 the modification factor representing the effect of pinched hysteresis shape, cyclic stiffness degradation, and strength deterioration on the maximum displacement, and Te the effective fundamental period.
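A minimal sketch of the target displacement calculation of Equation 4 follows; the coefficient values are placeholders, since in practice C0 to C2 come from the ASCE 41-17 tables and Sa from the site spectrum at Te.

```python
# Target displacement for the NSP (Eq. 4): dt = C0*C1*C2*Sa*(Te^2/4pi^2)*g.
# Coefficient values below are illustrative; in practice C0-C2 come from
# the ASCE 41-17 tables and Sa from the hazard spectrum at Te.
import math

def target_displacement(C0, C1, C2, Sa, Te, g=9810.0):
    """Target displacement in mm (g in mm/s^2, Sa dimensionless, Te in s)."""
    return C0 * C1 * C2 * Sa * (Te**2 / (4 * math.pi**2)) * g

dt = target_displacement(C0=1.3, C1=1.1, C2=1.0, Sa=0.45, Te=1.2)  # placeholders
print(f"target displacement = {dt:.1f} mm")
```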
The NSP is then carried out with the foundation modeled as fixed at the structure's base, and with nonlinear material properties, modeling parameters, and component acceptance criteria defined as illustrated in Fig. 9, referring to Table 10.7 of ASCE 41-17 for beams and Table 10.8 for columns.
Fig. 9. Acceptance criteria illustration.
Nonlinear static procedure (NSP) BSE-1E
The calculation using Equation 4 with a response spectrum acceleration with a 20% probability of exceedance in 50 years (SS = 0.30 and S1 = 0.15) gives target displacements of 254.497 mm in the X direction and 256.267 mm in the Y direction. Based on the target displacement values, the structure was checked against the acceptance criteria at the deformation corresponding to the target displacement.
The NSP BSE-1E results in the X direction show that the maximum deformation of the structure is 170.352 mm. Three columns fall into the CP category (Fig. 10), so the structure could not continue to deform; that is, the structure is already at the CP level in the X direction before reaching the target displacement. In the Y direction, the structure can reach the target displacement and the structural elements remain within the IO range. Based on these results, the BSE-1E seismic hazard target with the life safety performance level was not met, because in the X direction a column falls into the CP category.
Nonlinear static procedure (NSP) BSE-2E
The calculation using Equation 4 with a response spectrum acceleration with a 5% probability of exceedance in 50 years (SS = 0.60 and S1 = 0.25) gives target displacements of 358.815 mm in the X direction and 368.117 mm in the Y direction.
The NSP BSE-2E results in the X direction are the same as for BSE-1E: the maximum deformation of the structure is 170.352 mm, and the structure cannot deform up to the target displacement (Fig. 11). In the Y direction, the structure can reach the target displacement, with the structural elements falling into the CP category. Based on these results, the BSE-2E seismic hazard target with the collapse prevention performance level was not met, because the structure is assumed to have experienced total collapse in the X direction.
JBDPA evaluation
This standard provides three levels of screening procedures. The first-level screening procedure is intended to determine the lateral strength of the building, roughly evaluated by calculating the shear strength of vertical structural components, such as columns and walls, from their cross-sectional areas. The second-level screening procedure is applied to buildings where deficiencies are found by the first-level method; here the beams are assumed to be rigid and not to collapse before the vertical members. The third-level screening is a more detailed evaluation that also considers beam capacity. The JBDPA evaluation process is shown in Fig. 12.
To judge whether the building structure requires retrofitting, all screening levels use the criterion that the seismic index of structure (Is) must be greater than the seismic demand index of structure (Iso), Equation 5:

Is >= Iso. (5)

In this paper, the analysis was carried out using the first- and second-level screening procedures. The seismic index of structure is calculated from the basic seismic index of structure (E0), the irregularity index (SD), and the time index (T), as in Equation 6:

Is = E0 * SD * T. (6)

In JBDPA, the seismic demand index of structure (Iso) is calculated from Es, which is 0.8 for first-level screening and 0.6 for second- and third-level screening, the zone index Z, the ground index G, and the usage index U, as in Equation 7:

Iso = Es * Z * G * U. (7)
Indonesian seismic codes do not use the concept of a zone index. To determine the zone index, the base shear coefficient of the Indonesian seismic code, Equation 8, can be equated with that of the Japanese seismic code, Equation 9:

Cs = SDS / (R / I), (8)
Ci = Z * Rt * Ai * C0. (9)
SDS is the design spectral acceleration at short periods (2/3 of MCER); R is the response modification factor, which for the object building (ordinary reinforced concrete moment frames) is R = 3; and I is the importance factor, 1.0 for office buildings. For the Japanese seismic code, Z is the zoning factor or zone index, Rt the design spectral factor (1.0 for this building), Ai the lateral shear distribution factor (1.0 at the base), and C0 the standard shear factor, taken as 0.2 for a moderate earthquake. Setting Cs equal to Ci gives Equation 10 for Z.
Z = (SDS * I) / (R * Rt * Ai * C0). (10)

In other references, the seismic demand index is the value of the seismic acceleration acting on the building structure, based on the concept that the seismic demand is the earthquake acceleration received by the structure [13]. In JBDPA (the Japanese Building Law), the seismic hazard return period is 500 years, unlike Indonesia, which uses 2475 years for design (BSE-2N) and 225 years (BSE-1E) or 975 years (BSE-2E) for evaluation. Comparing the calculated base shear coefficients for the maximum earthquake, the spectral acceleration at 2/3 MCER (design) equals the spectral acceleration for a 500-year return period [14], so Iso equals SDS for this object building, a four-story building whose natural period is below Ts (SD1/SDS).
The two methods of calculating the seismic demand index give the following values: calculating the zone index first using Equation 10 yields Iso = 0.85 for the first screening and Iso = 0.64 for the second screening, whereas the method that takes Iso equal to the design spectral acceleration gives Iso = 0.64 for both the first and second screenings. These results are based on the BSE-2N spectral acceleration for Surabaya City given in Section 3.2.1, SDS = 0.64 and SD1 = 0.57. For a conservative evaluation, Iso = 0.85 is used for the first screening and Iso = 0.64 for the second screening.
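The screening criterion can be sketched as follows; Es, SDS, R, I, Rt, Ai, and C0 follow the text, while E0, SD, T, G, and U are illustrative placeholders.

```python
# JBDPA screening sketch: Is = E0 * SD * T (Eq. 6) versus
# Iso = Es * Z * G * U (Eq. 7), with the zone index from Eq. 10,
# Z = (SDS * I) / (R * Rt * Ai * C0). Es, SDS, R, I, Rt, Ai, and C0
# follow the text; E0, SD, T, G, U below are illustrative placeholders.

def zone_index(SDS, I, R, Rt, Ai, C0):
    return (SDS * I) / (R * Rt * Ai * C0)

def needs_retrofit(E0, SD, T, Es, Z, G=1.0, U=1.0):
    Is = E0 * SD * T
    Iso = Es * Z * G * U
    return Is < Iso, Is, Iso

Z = zone_index(SDS=0.64, I=1.0, R=3.0, Rt=1.0, Ai=1.0, C0=0.2)  # about 1.07
retrofit, Is, Iso = needs_retrofit(E0=0.35, SD=1.0, T=0.9, Es=0.8, Z=Z)
print(f"Z = {Z:.2f}, Is = {Is:.2f}, Iso = {Iso:.2f}, retrofit needed = {retrofit}")
```

With these inputs the first-screening demand evaluates to Iso = 0.85, matching the value reported in the text.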
The basic seismic index of structure E0 is, in concept, obtained by multiplying the ductility index F and the strength index C. For the first screening, E0 is taken as the larger of Equations 11 and 12; for the second screening, as the larger of Equations 13 and 14.
To calculate the strength index, the average compressive strength of the core drill concrete from Table 1 is used, without the rebound number data and its correlation, giving a concrete quality of 10.97 MPa. The evaluation results are shown in Tables 12-13. The first- and second-level screening results show that the building structure requires retrofitting, because the seismic index of structure is lower than the seismic demand index.
Conclusion
The ASCE 41-17 Tier 1 checklist, with a 975-year earthquake return period (BSE-2E) and a collapse prevention performance target, yielded several noncompliant (NC) evaluation items: out of the 21 checklist items, 5 were compliant (C), 7 noncompliant (NC), 2 not applicable (N/A), and 7 unknown (U). The noncompliant items are the column axial stress check, column shear stress, captive columns, shear failure, strong column-weak beam, column-tie spacing, and stirrup spacing. The nonlinear static procedure (NSP) likewise shows that the building's performance is below the target performance level, which means the building needs to be retrofitted to meet it. The evaluation using the Japanese standard JBDPA shows that the seismic index of structure (Is) at the first- and second-level screening procedures is less than the seismic demand index of structure (Iso), meaning the building must be retrofitted.
The two methods therefore give the same result: the building has a deficiency of strength and ductility, and it needs to be retrofitted to improve its performance under earthquake loads. The difference is that the ASCE 41-17 evaluation is based on a detailed checklist and on building performance levels, whereas the JBDPA concept is simpler and based on comparing structural capacity with seismic demand. For a quick seismic evaluation with simple calculations, JBDPA is an option, and its results also tend to be conservative; for a detailed evaluation, down to deficiencies at the member scale, ASCE 41-17 is the better option.
The first screening in JBDPA is similar in concept to the column shear stress check in ASCE 41-17 Tier 1: both compare the lateral capacity of the building structure with the seismic demand. This check item is very important in a structural evaluation; when the lateral capacity is found to be insufficient, the strength of the building structure needs to be improved.
Table 5 (continued). Collapse prevention structural checklist result (C = Compliant, NC = Noncompliant, N/A = Not Applicable, U = Unknown).

Connections:
- CONCRETE COLUMNS: All concrete columns are doweled into the foundation with a minimum of four bars. (U)

Moderate seismicity, seismic-force-resisting system:
- REDUNDANCY: The number of bays of moment frames in each line is greater than or equal to 2. (C)
- INTERFERING WALLS: All concrete and masonry infill walls placed in moment frames are isolated from structural elements. (U)
- COLUMN SHEAR STRESS CHECK: The shear stress in the concrete columns, calculated using the quick check procedure of Equation 3, is less than the greater of 0.69 MPa or (1/6)*sqrt(f'c). (C)
- FLAT SLAB FRAMES: The seismic-force-resisting system is not a frame consisting of columns and a flat slab or plate without beams. (C)

High seismicity, seismic-force-resisting system:
- PRESTRESSED FRAME ELEMENTS: The seismic-force-resisting frames do not include any prestressed or post-tensioned elements where the average prestress exceeds the lesser of 4.83 MPa or f'c/6 at potential hinge locations. (N/A)
- CAPTIVE COLUMNS: There are no columns at a level with height/depth ratios less than 50% of the nominal height/depth ratio of the typical columns at that level. (NC)
- NO SHEAR FAILURES: The shear capacity of frame members is able to develop the moment capacity at the ends of the members. (NC)
- STRONG COLUMN-WEAK BEAM: The sum of the moment capacity of the columns is 20% greater than that of the beams at frame joints. (NC)
- BEAM BARS: At least two longitudinal top and two longitudinal bottom bars extend continuously throughout the length of each frame beam. At least 25% of the longitudinal bars provided at the joints for either positive or negative moment are continuous throughout the length of the members. (C)
- COLUMN-BAR SPLICES: All column-bar lap splice lengths are greater than 35db and are enclosed by ties spaced at or less than 8db. Alternatively, column bars are spliced with mechanical couplers with a capacity of at least 1.25 times the nominal yield strength of the spliced bar. (U)
- BEAM-BAR SPLICES: The lap splices or mechanical couplers for longitudinal beam reinforcing are not located within lb/4 of the joints and are not located in the vicinity of potential plastic hinge locations. (U)
- COLUMN-TIE SPACING: Frame columns have ties spaced at or less than d/4 throughout their length and at or less than 8db at all potential plastic hinge locations. (NC)
- STIRRUP SPACING: All beams have stirrups spaced at or less than d/2 throughout their length. At potential plastic hinge locations, stirrups are spaced at or less than the minimum of 8db or d/4. (NC)
- JOINT TRANSVERSE REINFORCING: Beam-column joints have ties spaced at or less than 8db. (U)
- DEFLECTION COMPATIBILITY: Secondary components have the shear capacity to develop the flexural strength of the components. (U)
- FLAT SLABS: Flat slabs or plates not part of the seismic-force-resisting system have continuous bottom steel through the column joints. (N/A)
- DIAPHRAGM CONTINUITY: The diaphragms are not composed of split-level floors and do not have expansion joints. (C)
- UPLIFT AT PILE CAPS: Pile caps have top reinforcement, and piles are anchored to the pile caps. (U)
Table 1. Result of rebound and compressive core drill test.
Table 4. Basic configuration checklist result.
Table 7. The pseudo shear force calculation.
Table 8. Column axial stress calculation result.
Table 9. Column shear stress calculation result.
Table 10. Column shear stress calculation result.
Table 11. Strong column-weak beam calculation result.
"Engineering"
] |
Proximity induced band gap opening in topological-magnetic heterostructure (Ni80Fe20/p-TlBiSe2/p-Si) under ambient condition
The broken time reversal symmetry states may result in the opening of a band gap in TlBiSe2, leading to several interesting phenomena that are potentially relevant for spintronic applications. In this work, the quantum interference and magnetic proximity effects have been studied in a Ni80Fe20/p-TlBiSe2/p-Si (magnetic/TI) heterostructure fabricated using the physical vapor deposition technique. Raman analysis shows symmetry breaking through the appearance of the A^2_1u mode. The electrical characteristics are investigated under dark and illumination conditions, both in the absence and in the presence of a magnetic field. The examined device shows an excellent photoresponse in both the forward and reverse bias regions. Interestingly, under a magnetic field, the device shows a reduction in electrical conductivity at ambient conditions due to the crossover to weak localization and the suppression of weak antilocalization, which are experimentally confirmed by magnetoresistance measurements. Further, the photoresponse has also been assessed by transient absorption spectroscopy through analysis of the charge transfer and carrier relaxation mechanisms. Our results can be beneficial for quantum computation and for further study of topological insulator/ferromagnet heterostructures and topological-material-based spintronic devices, owing to the high spin-orbit coupling along with dissipationless conduction channels at the surface states.
Three-dimensional (3D) topological insulators (TIs) are gapped bulk insulators hosting two-dimensional (2D) gapless Dirac cones on the surface. These 2D surface states (SS) conduct owing to the Z2 topological invariants protected by time reversal symmetry (TRS)1-3. Some peculiar phenomena, such as the existence of Majorana fermions, the quantum anomalous Hall effect (QAHE), and the topological magnetoelectric effect, are featured by the surface states of topological materials4,5. Under the time-reversal operation, the electron wave vector k and the spin both change sign. The 2D surface states of a TI material remain unaffected because the opposing spin channels are locked to their corresponding momenta; however, this symmetry can be destroyed in the presence of a magnetic field (PMF) or magnetic impurities6,7. The TRS-breaking-induced SS of TI materials lead to a variety of novel quantum phenomena, such as the topological magnetoelectric effect and the QAHE. These unique properties of TI materials open a new dimension in condensed matter physics and in developing low-power spintronic devices8,9.
It is possible to break TRS via magnetic ordering, which can open a gap owing to magnetic exchange coupling10. There are two well-known methods to achieve this: (1) doping with a magnetic element and (2) fabricating a heterostructure with a magnetic material (a magnetic proximity structure)11,12. However, the former method generates contaminated phases, heterogeneity, and a small exchange gap, which lead to more scattering in the specimen; consequently, experimental realization of the QAHE has only been possible at extremely low temperature13,14. It is therefore interesting to fabricate a 3D TI/ferromagnet (FM) heterostructure in which there are no antisite defects in the bulk states, as reported by Liu et al.15. In such a TI/FM heterostructure, the TI is magnetized by the FM through the proximity effect, as proposed by Huang et al.16. The magnetic proximity effect has several benefits over magnetic doping, i.e., the realization of the half-integer QAHE, switching of the gap between surface states, and preservation of the intrinsic crystalline phase of the topological material17,18. Moreover, as suggested by Huang et al.19 and Li et al.20, the magnetic proximity effect allows the magnetization to survive to higher temperature, which could be useful for spintronic applications. Hence, the magnetic proximity effect is the better method to create a gap between the surface states of TI materials. The gap between the surface states can be observed through angle-resolved photoemission spectroscopy (ARPES). Although ARPES can measure the electronic structure of materials down to a few nm below the surface, the magnetic proximity effect occurs at the interface of the TI/FM heterostructure, making this technique unsuitable. Therefore, researchers have focused on other methods to observe the gap opening in the surface states, such as magnetoresistance (MR) measurements, in which the analysis of conductivity versus magnetic field can reveal the gap opening21,22. MR describes how the resistance changes with an external magnetic field and is defined as MR = [(R_B - R_0)/R_0] x 100 (%)23.
Owing to interference between distinct scattering loops in weakly perturbed electronic systems, different transport characteristics take place, which can be readily observed experimentally in thin films24. The surface state of a topological material always exhibits weak anti-localization (WAL) behavior due to destructive interference of time-reversed scattering loops caused by spin-orbit coupling25. In contrast, electrons exhibiting constructive quantum interference between time-reversed scattering loops give a negative quantum correction to the conductance, known as the weak localization (WL) effect26,27. In PMF, a positive MR is an interesting phenomenon indicating gap opening at the Dirac point of the topological surface states and dominance of the WL effect27.
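In practice, WAL/WL magnetoconductance data of TI films are commonly fitted with the Hikami-Larkin-Nagaoka (HLN) expression; the paper does not state its fitting model, so the following is a generic sketch with synthetic data, not the authors' analysis.

```python
# Common-practice sketch (not necessarily the authors' method): the
# Hikami-Larkin-Nagaoka (HLN) expression often used to fit the WAL
# magnetoconductance of TI films,
#   d_sigma(B) = alpha * (e^2 / (2 pi^2 hbar)) * [psi(1/2 + B_phi/B) - ln(B_phi/B)],
# with B_phi = hbar / (4 e l_phi^2). Fitted alpha near -0.5 signals WAL;
# a sign change toward WL accompanies gap opening. Data are synthetic.
import numpy as np
from scipy.special import digamma
from scipy.optimize import curve_fit

E, HBAR = 1.602176634e-19, 1.054571817e-34

def hln(B, alpha, l_phi):
    B_phi = HBAR / (4 * E * l_phi**2)
    return alpha * (E**2 / (2 * np.pi**2 * HBAR)) * (digamma(0.5 + B_phi / B) - np.log(B_phi / B))

B = np.linspace(0.01, 1.0, 50)                                     # field, tesla
sigma = hln(B, -0.5, 100e-9) + np.random.normal(0, 1e-7, B.size)   # synthetic data
(alpha, l_phi), _ = curve_fit(hln, B, sigma, p0=(-0.5, 80e-9))
print(f"alpha = {alpha:.2f}, phase coherence length = {l_phi*1e9:.0f} nm")
```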
In this work, we have successfully fabricated good-quality p-TlBiSe2/Si and p-TlBiSe2/Ni80Fe20/Si heterostructures. Raman and transient absorption spectroscopy (TAS) studies were carried out to investigate the phonon vibrations and charge carrier dynamics, respectively. Under PMF, ground-state splitting due to symmetry breaking is observed in the TAS studies. Electrical analysis was performed under dark and illumination conditions in both AMF (absence of magnetic field) and PMF. The impact of the magnetic field on the TI material was also investigated in detail to explore quantum interference phenomena (WAL and WL) using magnetoresistance measurements. Furthermore, we investigated the magnetic proximity effect in the p-TlBiSe2/Ni80Fe20/p-Si heterostructure via MOKE measurements. All these measurements indicate gap opening in PMF.
XRD analysis
To investigate the structure and physical properties of the p-TlBiSe2 and p-TlBiSe2/Ni80Fe20 films, both deposited on Si substrates, x-ray diffraction (XRD) analysis was carried out, confirming the polycrystalline nature of the grown films. The XRD pattern of the p-TlBiSe2/p-Si film shows three high-intensity peaks with crystallographic phases (001), (002), and (112), while the TlBiSe2/Ni80Fe20/p-Si film shows one additional peak at (111) (Figure S1). The additional peak belongs to the Ni80Fe20 film, as confirmed by previously reported results28. Figure S1 also shows that, in the TlBiSe2/Ni80Fe20/p-Si spectra, the p-TlBiSe2/p-Si peaks are shifted towards smaller angles. This shift arises from the strain effect at the interface between TlBiSe2 and Ni80Fe20.
Raman spectroscopy study
Raman spectroscopy of the p-TlBiSe2/p-Si and p-TlBiSe2/Ni80Fe20/p-Si heterostructures was carried out to investigate the electron-phonon interaction and crystalline phases, as shown in Fig. 1. TlBiSe2 has a rhombohedral crystal structure (shown in Figure S1) of space group R3m, with 15 phonon branches, of which 12 are optical modes and 3 are acoustic modes29; Ni80Fe20 has a face-centred cubic (FCC) structure. In the TlBiSe2/p-Si heterostructure, the Raman active modes (A^1_1g, E^2_g, A^2_1g) along with a surface phonon mode (SPM) at 74.91 cm-1 were observed, in good agreement with previously reported results29. The A^1_1g (~61.99 cm-1) and A^2_1g (~135.3 cm-1) modes represent out-of-plane vibrations, while the E^2_g (~97.85 cm-1) mode represents in-plane vibrations with respect to the plane of the covalently bonded quintuple layers of TlBiSe2. The emergence of the SPM is due to the high surface-to-bulk ratio30. In the p-TlBiSe2/Ni80Fe20/p-Si heterojunction, along with the Raman active modes and SPM, an additional mode, A^2_1u, is present, which is otherwise forbidden. The emergence of this mode indicates the symmetry breaking of TlBiSe2, caused by its conjugation with Ni80Fe2031. In p-TlBiSe2/Ni80Fe20/p-Si, Ni80Fe20 magnetizes the TI material, disturbing the spin-momentum locking and thereby breaking the symmetry32.
From Table 1, it is clear that in p-TlBiSe2/Ni80Fe20/p-Si the out-of-plane vibration intensities (A¹1g ~63.09 cm⁻¹, A²1g ~138.32 cm⁻¹) have increased while the in-plane vibration intensity (E²g ~85.52 cm⁻¹) has decreased, indicating that in the former the out-of-plane vibrations became less restrained (more active) and the in-plane vibrations became more restrained, owing to symmetry breaking and weaker interaction between the layers. Therefore, the intensity ratio I(A²1g)/I(E²g) is enhanced in the p-TlBiSe2/Ni80Fe20/p-Si heterostructure in comparison with that of p-TlBiSe2/p-Si.
Magnetic field induced transient absorption spectroscopy (TAS) study
The charge-carrier dynamics of the p-TlBiSe2/Ni80Fe20/p-Si heterostructure were recorded using ultrafast transient absorption spectroscopy in both AMF and PMF (~800 Oe). The result gives a complete picture of the charge-carrier and phonon dynamics. The samples were excited using a 410 nm pump wavelength with a power of 0.2 mW. Figure 2a and b show the ultrafast TA surfaces in the visible range for AMF and PMF, respectively. From Fig. 2b, it is clear that in PMF ground-state energy-level splitting takes place due to the Zeeman effect, caused by the interaction between the electron magnetic moments and the external magnetic field. Consequently, the gap at the Dirac point of TlBiSe2 opens (inset of Fig. 2b), leading to TRS breaking of TlBiSe2 33, which is also confirmed by the electrical analysis 34.
Figure 2c and d show the ultrafast spectra in AMF and PMF, respectively. The AMF spectra show broad ground-state bleaching (GSB) from 600 to 800 nm, with maximum bleaching at 770 nm persisting for a very long time (3.6 ns), together with an anti-Stokes-shifted spectrum representing stimulated emission. The appearance of GSB is due to the promotion of charge carriers from the valence band to the conduction band 35. The GSB transforms into a photo-induced absorption spectrum in the 400 to 600 nm wavelength range, exhibiting maximum absorption at 505 nm with a lifetime of 1.99 ns. A positive transient absorption spectrum also appears after a 1.96 ps probe delay, with a maximum at 775 nm.
From Fig. 2d, it is clear that in PMF the GSB appears from 575 to 800 nm, with a maximum at 650 nm persisting for a very short time (6.2 ps), and a stimulated emission spectrum also appears with a maximum at 760 nm. This GSB transforms into a photo-induced absorption spectrum in the 400 to 575 nm range, with maximum absorption at 535 nm and a short lifetime (of picosecond order). A transient spectrum reappears at 757 nm after 10 ps of probe delay, with a lifetime of a few picoseconds.
To determine the lifetime of each spectral feature, the kinetic profile of each signal was fitted using the phonon-fitting model in the Surface Explorer software. A kinetic trace at a chosen wavelength is fitted with a sum of exponentials convolved with the instrument response 36:

ΔA(t) = Σᵢ Aᵢ exp(−(t − t₀)/tᵢ) ⊗ IRF(t_p),    (1)

where Aᵢ is the amplitude, tᵢ the decay time, t₀ the zero time, and t_p the instrument response time. Here, we fitted the kinetic spectra using the triexponential decay function (three terms in Eq. (1)) for various wavelengths and probe delays, as shown in Fig. 2e and f. The corresponding decay times are listed in Table 2.
From Table 2, it is clear that in PMF the decay times become shorter, indicating fast decay of the charge carriers: the carriers relax within a very short time to lower energy states through phonon-mediated inter- and intra-band scattering. In AMF, the charge carriers are stabilized by interband transitions in higher energy states before relaxing back to lower energy states 37.
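For readers who want to reproduce this kind of analysis, the sketch below fits a synthetic kinetic trace with a triexponential decay convolved with a Gaussian instrument response, in the spirit of Eq. (1). It is a minimal Python illustration, not the Surface Explorer workflow used here; all amplitudes, time constants and the IRF width are assumed values.

```python
# Minimal sketch: triexponential kinetics convolved with a Gaussian IRF.
# The analytic convolution of exp(-(t - t0)/tau) with a Gaussian of width tp
# is used; all parameter values below are illustrative, not measured data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def exp_gauss(t, A, tau, t0, tp):
    """One exponential decay (amplitude A, time constant tau, onset t0)
    convolved analytically with a Gaussian IRF of width tp."""
    arg = (tp / tau - (t - t0) / tp) / np.sqrt(2.0)
    return 0.5 * A * np.exp((tp / tau) ** 2 / 2.0 - (t - t0) / tau) * erfc(arg)

def triexp(t, A1, tau1, A2, tau2, A3, tau3, t0, tp):
    return (exp_gauss(t, A1, tau1, t0, tp)
            + exp_gauss(t, A2, tau2, t0, tp)
            + exp_gauss(t, A3, tau3, t0, tp))

# Synthetic kinetic trace at one probe wavelength (hypothetical numbers).
t = np.linspace(-2, 50, 400)                      # probe delay, ps
true = triexp(t, 0.6, 1.5, 0.3, 8.0, 0.1, 40.0, 0.0, 0.3)
noisy = true + np.random.default_rng(0).normal(0, 0.005, t.size)

p0 = [0.5, 1.0, 0.3, 5.0, 0.1, 30.0, 0.0, 0.3]    # initial guesses
popt, _ = curve_fit(triexp, t, noisy, p0=p0, maxfev=20000)
print("fitted decay times (ps):", popt[1], popt[3], popt[5])
```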
Magnetic field induced electrical analysis under dark condition
Various essential junction parameters of a heterojunction diode, such as the rectification ratio, ideality factor, and built-in potential, can be obtained from I-V measurements 38. Room-temperature electrical measurements of the p-TlBiSe2/p-Si heterojunction were carried out in both AMF and PMF. Figure 3a and b show the I-V characteristics and reveal the nonlinear behavior of the electric current with forward applied voltage in AMF and PMF, respectively.
The examined devices exhibit an excellent rectification ratio (listed in Table 3), which can be attributed to a low leakage current (nA order) and a high forward current (μA order). This demonstrates that the heterojunction device acts as a high-quality diode. The two primary factors impacting diode functionality are the series resistance (R_S) and the shunt resistance (R_SH). The greater the current flow through the heterojunction diode, the lower its series resistance; conversely, a higher shunt resistance reduces the leakage current and optimizes the diode efficiency of the device. The values of R_S and R_SH were calculated from the I-V outcomes of the examined heterojunction device using the relation R = ∂V/∂I. The insets of Fig. 3a and b show the resistance vs. voltage plots; the values of R_S and R_SH are tabulated in Table 3. The diode current of the p-TlBiSe2/p-Si heterojunction can be described by the conventional diode equation 39,

I = I₀ [exp(qV/(η k_B T)) − 1],    (2)

where q is the electronic charge, V is the applied voltage, k_B is the Boltzmann constant, T is the temperature, I₀ is the reverse saturation current, and η is the diode ideality factor, which quantifies how close the experimental I-V data are to those of an ideal diode. For sufficient bias voltage, V ≫ k_B T/q, the ideality factor follows from Eq. (2) as

η = (q/k_B T) · dV/d(ln I).    (3)

The value of the ideality factor was calculated with the help of Eq. (3) by inserting the slope of the ln I vs. V plot displayed in Fig. 3c and d. The calculated ideality factor of the examined p-TlBiSe2/p-Si device was found to be greater than one, indicating deviation from ideal diode behavior. The rise in the ideality factor is due to interface layers, interface states, and series resistance 40. Thus, the impact of the series resistance cannot be ignored for the examined device. In such a scenario, many models have been established for extracting the diode parameters in the presence of series resistance, among which Cheung's approach is one of the best methods. Cheung's function can be written as 40

dV/d(ln I) = I R_S + η k_B T/q,    (4)

from which R_S and η are determined from the slope and the intercept of the dV/d(ln I) vs. I plot, respectively. Figure 3e and f show the dV/d(ln I) vs. current plots of the p-TlBiSe2/p-Si heterojunction; the calculated values of R_S and η are given in Table 3. The values of R_S and η derived from Cheung's method are nearly equal to those obtained by fitting the diode with Eq. (2) of the p-n junction diode. The obtained value of the ideality factor is much higher than its ideal value (1) owing to the rise in the diffusion current with increasing applied voltage, or to electron-hole recombination in the depletion zone 41. Using the same expressions, the diode parameters were also calculated for the Ni80Fe20/p-TlBiSe2/p-Si heterojunction in AMF and PMF (Figure S5); the obtained results are tabulated in Table 3. In addition, the effect of the magnetic thin film on the electrical properties of the p-TlBiSe2/p-Si heterojunction (with the magnetic film grown on the top surface of the TI film) was investigated at ambient conditions. The results reveal that the magnetic film has a significant effect on the charge-transport characteristics of the device; the outcomes for both scenarios are given in Table 3.

Table 3. Diode parameters calculated under dark conditions at room temperature for the p-TlBiSe2/p-Si, Ni80Fe20/p-TlBiSe2/p-Si, p-TlBiSe2/p-Si (magnetic film on the top surface) and p-TlBiSe2/p-Si (no magnetic film on the p-TlBiSe2 surface) heterojunctions, in AMF (absence of magnetic field) and PMF (presence of magnetic field).
The obtained results (Table 3) reveal that the magnetic field significantly affects the performance of the heterojunction devices: even a small magnetic field changes the electrical response of the examined device. Arakushan et al. experimentally verified that, as the magnetic field increases, the charge-carrier diffusion length reduces, resulting in a reduction in the forward current 42. A detailed explanation is given in the next section.
From Table 3, it can also be clearly seen that the coating of the magnetic film causes a reduction in the current across the junction. Because of the magnetic proximity effect, the ferromagnetic material (Ni80Fe20) magnetizes the TI material. The non-zero magnetization at the topological magnetic interface leads to the generation of a spin-orbit-coupling (SOC) induced field owing to spin accumulation at the interface. The spin Hall effect (SHE) and the Rashba-Edelstein effect (REE) are the two mechanisms responsible for this phenomenon. The SHE describes the mechanism by which a charge current in the TI layer develops into a spin current; this spin current is generated by the asymmetric spin deflection caused by SOC 43. The REE, on the other hand, typically develops at the interface due to spatial-inversion-symmetry breaking, which leads to an internal electric field at the Ni80Fe20/p-TlBiSe2 interface directed perpendicular to the film plane 44. Whenever an in-plane charge current propagates across the TI/FM heterojunction, the conduction electrons close to the interface move in this electric field and are subjected to an effective magnetic field perpendicular to the direction of the current. Such an interfacially induced effective magnetic field is known as the Rashba field.
When a spin current flows across the TI/FM heterojunction, the spins experience a spin-orbit torque at the interface, which comprises two main components, the longitudinal torque (τ_DL) and the transverse torque (τ_FL), both realized through the SHE and REE simultaneously. As the spins approach the interface, they experience this torque, resulting in the localization and randomization of the spins and causing a reduction in the current (Fig. 4) 45.
Magnetic field induced charge transport study under illumination
The photodetection capabilities of the p-TlBiSe2/p-Si heterojunction were also investigated under illumination. The wavelength of the incident laser light was varied from 500 to 1300 nm using a TLS-300XU Xenon light source, and the optical power was kept at 2.96 μW throughout the investigation. Figure S6a and S6b show the I-V characteristics under illumination in AMF and PMF, respectively (the performance parameters are defined later in this section). From Figure S6a and S6b, it is clear that the illumination current is significantly higher than the dark current, owing to the generation of electron-hole pairs. As the incident photon energy increases, the device current also increases up to 1.38 eV, corresponding to 900 nm; with a further increase in the incident photon energy, the photoresponse decreases because of the growing number of electron-hole pairs. The excess electron-hole pairs cause enhanced scattering and hence local heat generation, which decreases the photoresponse 47. Figure S6c and S6d show E-V-R contour plots that reveal efficient absorption of the incident photons; the maximum responsivity is obtained at 1.38 eV. As TlBiSe2 is a narrow-band-gap (0.33 eV) material, the enhanced photoresponse is contributed by the light absorption of the TI material. Figure S6e and S6f show the photoconductive gain and detectivity as a function of wavelength, with a maximum response at 900 nm, while the inset shows the sensitivity variation as a function of voltage for varying wavelengths, which also peaks at 900 nm. All the calculated performance parameters of the examined p-TlBiSe2/p-Si device at +1 V in AMF and PMF are tabulated in Table 4. Similarly, all the photodetection parameters of the Ni80Fe20/p-TlBiSe2 heterojunction were calculated using the expressions given below; here, too, a significant reduction in photocurrent was observed in PMF. The photocurrent characteristic of the Ni80Fe20/p-TlBiSe2 heterojunction with varying incident photon energy is shown in Figure S7. The examined heterostructures demonstrated maximum photoresponse at 1.37 eV and 2.08 eV, and the calculated parameters are listed in Table 4. In PMF, the photodetection capability of the device is significantly reduced; the rationale for this observation is discussed in the section below.
Magnetic-field-induced effects responsible for the current reduction
A 3D topological material hosts an odd number of massless 2D Dirac cones with helical spin structures. When electrons move around the Fermi surface, they follow time-reversed scattering loops and acquire a π Berry phase. This Berry phase results in the absence of backscattering through an additional phase factor 48. Therefore, the surface charge carriers of the TI material delocalize, which is experimentally confirmed by the weak anti-localization effect. This peculiar phenomenon is realized through the destructive interference of time-reversed loops, which nullifies the backscattering probability of the carriers 49. Under a magnetic field, on the other hand, the time-reversed scattering loops interfere constructively, enhancing the backscattering probability of the charge carriers and increasing the resistivity 50. The following section elucidates the fundamental quantum interference effects (WAL and WL) in TI materials that significantly affect the conductivity; the quantum diffusive charge-transport mechanism is the foundation for grasping these effects.
Charge transport in solids depends on several factors, such as the mean free path (l), the phase coherence length (l_φ), and the size of the sample (L). If l ≫ L, the specimen permits charge carriers to traverse it without scattering, termed the ballistic transport mechanism. In contrast, the diffusive transport mechanism exists when l ≪ L; in this scenario, the charge carriers are scattered and dispersed across the specimen. The diffusive regime with l_φ ≫ l is called the quantum diffusive regime, in which the charge carriers maintain their phase despite multiple scatterings. This regime exists especially in the surface states of topological insulators, where quantum interference between time-reversed scattering loops introduces the WAL and WL effects 51. Such effects significantly change the conductivity of the TI material, with a correction given by σ ∝ ±(e²/πh) ln(l_φ/l) 51, where e²/h is the quantum of conductance, the (+) and (−) signs correspond to WAL and WL, respectively, and l_φ is determined by inelastic electron-phonon or electron-electron scattering 52.
The external magnetic field causes TRS breaking of the TI material. Consequently, l_φ is reduced while the mean free path remains constant, resulting in a reduction of the conductivity according to the above expression 51. This is further confirmed experimentally by the MR and MOKE measurements.
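A short numerical illustration of the quoted logarithmic correction is given below: with the mean free path fixed, shrinking l_φ under the field reduces the magnitude of the quantum correction, consistent with the conductivity reduction described above. The lengths used are assumed, order-of-magnitude values.

```python
# Illustrative sketch of delta_sigma = +(e^2 / (pi*h)) * ln(l_phi / l) for
# the WAL case: a smaller phase coherence length l_phi at fixed mean free
# path l gives a smaller correction. All lengths are assumed values.
import numpy as np

e, h = 1.602e-19, 6.626e-34
prefactor = e**2 / (np.pi * h)               # in siemens

l = 10e-9                                    # mean free path, 10 nm (assumed)
for l_phi in (500e-9, 200e-9, 50e-9):        # l_phi shrinking under field
    wal = prefactor * np.log(l_phi / l)      # (+) sign: weak anti-localization
    print(f"l_phi = {l_phi*1e9:4.0f} nm -> delta_sigma = {wal:.2e} S")
```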
Experimental realization of WL predominance in the presence of magnetic field
The magnetoresistance behavior of the TlBiSe2/Ni80Fe20/p-Si film is influenced by the properties of both materials (TlBiSe2 and Ni80Fe20) and their interface. When TlBiSe2 is deposited over Ni80Fe20, the ferromagnetic properties of Ni80Fe20 and the unique electronic states of TlBiSe2 can interact: the magnetization dynamics in Ni80Fe20 can modify the magnetic field experienced by the TlBiSe2 layer and thereby the cyclotron motion of the charge carriers within the topological surface states. The observed magnetoresistance of the TlBiSe2/Ni80Fe20/p-Si heterostructure can thus be attributed to the combined effect of the quantized cyclotron orbits in TlBiSe2 and the magnetic properties of Ni80Fe20; variations in the magnetic field can induce changes in the cyclotron motion of the carriers within TlBiSe2 and hence in the overall resistance of the heterojunction.
Furthermore, in TI materials the WAL effect originates from the strong SOC and spin-momentum locking in the surface states, while in PMF the WL comes to dominate as the WAL is suppressed. The crossover to WL is clear evidence of TRS breaking and the appearance of a topological gap in the surface states. Figure 5a shows the percentage MR of the Ni80Fe20/p-TlBiSe2/p-Si heterostructure measured at different constant currents.
The examined heterostructure exhibits a positive MR with a maximum value of 95% at a current of 10 μA, corresponding to 1.5 T. Interestingly, near zero magnetic field the MR displays a pronounced sharp cusp, which is the hallmark of the WAL effect. As the magnetic field increases, the cusp disappears, indicating the dominance of WL. The cusp also loses its sharpness as the current is increased, almost disappearing for currents above 40 μA, most probably owing to a reduction in the phase coherence length caused by excess electron-phonon interaction 17. As the magnetic field is increased, the MR also increases owing to the transition from WAL to WL. This transition suppresses the topological protection of the surface states and leads to TRS breaking in the TI material, which in turn results in the gap opening at the Dirac point in the quantum diffusive regime 53, as shown in Fig. 5b.
To verify the broken TRS in the TI material and the magnetic characteristics of the topological magnetic heterostructure, the magnetization reversal process was measured using the magneto-optical Kerr effect (Durham Magneto Optics NanoMOKE3), with the magnetic field varied over a range of ±3500 Oe. From Fig. 5c and d it is clear that the MOKE signals of p-Si and Ni80Fe20 are opposite in direction (sign) and different in magnitude; therefore, the net magnetization of p-Si/Ni80Fe20 is negative. On the other hand, the MOKE signal of Ni80Fe20/p-TlBiSe2/p-Si is positive (Fig. 5e). The results reveal that the TlBiSe2 layer contributes to the MOKE signal via the proximity effect at the topological magnetic interface. This can be understood through the exchange interaction, which produces a spin-polarized state on the TlBiSe2 side, oriented opposite to the Ni80Fe20 magnetization, as also confirmed theoretically by Eremeev et al. 54.
To break the TRS of the TI material, the Ni80Fe20 must have an out-of-plane component of magnetization, which is already present here owing to the canting of the magnetization 55. Hence, the gap in the SS of the TI material is induced by proximity-induced magnetic ordering arising from the exchange coupling between the TlBiSe2 top surface and the bottom surface of Ni80Fe20. In addition, at applied fields below the starting field there is a coupling between TlBiSe2 and the perpendicular component of the Ni80Fe20 magnetization. Under illumination and forward bias in AMF (Fig. 6a), the incident light produces excess charge carriers, resulting in an increase in the current, which was also confirmed by the electrical analysis. However, under an applied magnetic field, the spin-momentum locking in the surface states of the TI material is disturbed, resulting in TRS breaking and gap opening at the Dirac point, as shown in Fig. 6b. The magnetic field also shifts the Fermi level of the TI close to the Dirac point (in real topological materials it lies in the valence or conduction band). This shift of the Fermi level alters the Berry phase of the TI material, which otherwise protects the topological surface states from backscattering, and can be expressed by the following equation 56:
φ_b = π(1 − Δ/2E_F),

where Δ is the gap between the Dirac cones and E_F is the Fermi energy. In AMF, Δ = 0 and hence, from the above equation, the Berry phase is φ_b = π 57, which results in the absence of backscattering and the delocalization of electrons. In PMF, the Fermi level lies very close to the Dirac point and the Berry phase approaches φ_b = 0, resulting in an enhanced backscattering probability and the localization of electrons 58,59.
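A quick numerical check of the reconstructed Berry-phase expression is sketched below; the energies are purely illustrative.

```python
# Hedged numeric check of phi_b = pi * (1 - Delta / (2 * E_F)):
# phi_b -> pi for a gapless cone, phi_b -> 0 as E_F approaches the gap edge.
# Energies in meV are illustrative, not measured values.
import numpy as np

def berry_phase(gap_meV, ef_meV):
    return np.pi * (1.0 - gap_meV / (2.0 * ef_meV))

print(berry_phase(0.0, 100.0))   # AMF: gap closed  -> pi (no backscattering)
print(berry_phase(40.0, 20.0))   # PMF: E_F ~ gap/2 -> 0  (backscattering on)
```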
Conclusion
In conclusion, we have successfully grown a magnetic-topological Ni80Fe20/p-TlBiSe2/p-Si heterostructure using the PVD technique. XRD analysis shows results consistent with previous reports. To investigate the phonon vibrations, Raman analysis was carried out, which indicates TRS breaking through the appearance of a Raman-forbidden mode. The electrical characteristics were investigated under dark and illuminated conditions in AMF and PMF. The examined device shows an excellent photoresponse in both the forward and reverse bias regions. Interestingly, under a magnetic field the device shows a reduction in electrical conductivity at ambient conditions. This decrease in conductivity is due to the transition from WAL to WL: the electrons in the surface states of the TI material carry a π Berry phase, leading to destructive interference of the time-reversed scattering loops and hence WAL, whereas in PMF the exchange interaction at the interface changes the Berry phase, resulting in suppression of the WAL. The crossover from WAL to WL was also confirmed experimentally by the MR measurement, performed by varying the field from −1.5 to 1.5 T: at zero magnetic field a WAL cusp is found, which disappears with increasing field, indicating the dominance of WL and hence an increase in the resistivity of the device. Our results can be beneficial for quantum computation and for further studies of topological insulator/ferromagnet heterostructures and topological-material-based spintronic devices, owing to the high spin-orbit coupling together with dissipationless conduction channels in the surface states.
Experimental section
The Ni80Fe20 material was deposited on a Si (100) substrate of dimensions 1 × 1 cm² via a confocal DC magnetron sputtering set-up at room temperature, with the target guns oriented at an oblique angle to the substrate holder. Before the deposition process, the substrates were thoroughly cleaned with acetone and ethanol, followed by rinsing with deionized water. The substrates were mounted at the centre of the water-cooled sample holder, which was rotated at 10 rpm to ensure uniform film thickness. Prior to deposition, the Ni80Fe20 target (99.99% purity) was sputter-cleaned for 2 min to remove any surface contamination. The thickness of the Ni80Fe20 (Permalloy) layer was kept constant at 30 nm for all samples. The base pressure of the chamber was maintained at better than 5 × 10⁻⁵ Pa; during growth, the deposition pressure and sputtering power were maintained at 0.5 Pa and 80 W, respectively. On the obtained Ni80Fe20/p-Si film, TlBiSe2 was deposited on the top surface of the Ni80Fe20. The bulk form of 99.99% pure TlBiSe2, bought from Sigma Aldrich, was used as the precursor to fabricate the TlBiSe2/Ni80Fe20/p-Si thin film by thermal evaporation (12A40D model, manually operated) at room temperature under high vacuum (1.3 × 10⁻⁴ Pa). A high vacuum was created within the chamber using a diffusion pump, after which an electric current of 60-70 A was passed for 10 min through the nickel boat (melting point of Ni = 1728 K) containing the solid TlBiSe2 precursor. The fabricated TlBiSe2/Ni80Fe20/p-Si film was then ready for the XRD, Raman, transient absorption, magnetoresistance, and MOKE measurements; the corresponding schematic is shown below (Fig. 8). For the device characterization, a Ni80Fe20/p-TlBiSe2/p-Si heterostructure was fabricated. To obtain this heterostructure, p-TlBiSe2 was first deposited on the p-Si substrate using the thermal evaporator; before deposition, the p-Si substrate was masked on all four sides with aluminium foil, with all deposition parameters and processes kept the same as described above. Then, on the obtained p-TlBiSe2/p-Si heterostructure, Ni80Fe20 was deposited on the top half of the p-TlBiSe2 surface using the confocal DC magnetron sputtering set-up at room temperature, again with the same deposition process and parameters as before. Since the Ni80Fe20 film is deposited on only half of the p-TlBiSe2 surface, the remaining half of the p-TlBiSe2/p-Si surface was masked with aluminium foil before deposition. To complete the Ni80Fe20/p-TlBiSe2/p-Si heterojunction, the aluminium foil used as a mask was detached after the deposition, and the fabricated heterostructure was then ready for electrical analysis.
Two schematics accompany the experimental section: one for the device fabrication (Fig. 7) and one for the characterization measurements (XRD, Raman, transient absorption, magnetoresistance, and MOKE) (Fig. 8).
Contact growth
After the fabrication of the Ni80Fe20/p-TlBiSe2/p-Si heterojunction, metallic contacts were grown by the same thermal evaporation method as used for the film deposition. To transfer the metallic contacts onto the Ni80Fe20/p-TlBiSe2/p-Si film, a special hard mask with circular holes 300 µm in diameter and 700 µm apart was precisely positioned. Pure Al wire (99.99% purity) was evaporated from a spiral tungsten (W) boat under high vacuum (1.3 × 10⁻⁴ Pa) to create the aluminium (Al) contacts. The thickness of the deposited contacts, measured using an ellipsometer, was found to be 150 nm.
To determine the crystallite orientation and structural characteristics of the thin films, XRD was performed (Rigaku Miniflex, Model No. BD68000014-01). The working conditions were: diffraction angle ranging from 10° to 70° with a step of 0.2°, voltage of 40 kV, and tube current of 15 mA, with a copper (Cu) source (wavelength 1.54 Å). The I-V experiments were carried out using a Keithley 4200 analyzer, and the optoelectrical characteristics were investigated by stimulating the examined heterojunction with the TLS-300XU Xenon source.
Figure 2.
Figure 2. Ultrafast transient absorption spectra of the p-TlBiSe2/Ni80Fe20/p-Si film from 400 to 800 nm in AMF and PMF. (a, b) Ultrafast TA surfaces of the examined heterostructure; the corresponding insets show the TlBiSe2 band structure in AMF and PMF, respectively. (c, d) TA spectra with varying probe delay in AMF and PMF, respectively. (e, f) Kinetic profiles in AMF and PMF, respectively. (The representation of the heterostructure for this characterization is shown as Fig. 8 in the experimental section.)
Figure 3.
Figure 3. Dark characteristics of the p-TlBiSe2/p-Si heterostructure at room temperature in AMF and PMF. (a, b) Current density vs. voltage (J-V) plots and magnified semi-log characteristics, with insets showing the resistance-voltage (R-V) plots in AMF and PMF, respectively. (c, d) Semi-log I vs. V characteristics in AMF and PMF, respectively. (e, f) Forward-biased linearly fitted plots of dV/d(ln I) vs. I, with the corresponding insets showing the dV/dI vs. V characteristics and the exponential relation of the slope with applied voltage, in AMF and PMF, respectively. (The representation of the heterostructure for this measurement is shown as Fig. 7 in the experimental section.)
Figure S6a and S6b show the I-V characteristics of the examined heterojunction under illumination in AMF and PMF, respectively. The results demonstrate excellent photoelectric and photovoltaic effects in both the forward and reverse bias regions. The photodetection efficiency of the device was assessed through performance parameters such as the photo-to-dark current ratio PDCR = I_ph/I_dark, the photoresponsivity R = I_ph/P_i, the detectivity D = R·A^(1/2)/√(2qI_dark), the sensitivity S = R(d/V_d), and the photoconductive gain G = Rhν/(qη) 46, where I_ph = (I_light − I_dark) is the photocurrent, I_dark is the dark current, P_i represents the power density of the incident laser light, A is the effective area of the device for absorbing incoming light (0.0049 cm²), ν is the frequency of the incoming laser light, h is Planck's constant, η is the external quantum efficiency, V_d is the bias voltage, and d is the thickness of the diode (~150 nm).
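For completeness, a hedged sketch computing these figures of merit at a single bias point follows. The light and dark currents are placeholders; only the device area (0.0049 cm²) and the 2.96 μW optical power follow the text, and the incident power is treated here as a total power rather than a density.

```python
# Hedged sketch of the photodetector figures of merit defined above
# (PDCR, responsivity R, detectivity D, photoconductive gain G).
# Currents are placeholder values, not the measured data of the paper.
import numpy as np

q, h, c = 1.602e-19, 6.626e-34, 3.0e8
A = 0.0049                                  # effective device area, cm^2

def figures_of_merit(I_light, I_dark, P_inc, wavelength_nm, eta=1.0):
    I_ph = I_light - I_dark                 # photocurrent, A
    pdcr = I_ph / I_dark                    # photo-to-dark current ratio
    R = I_ph / P_inc                        # responsivity, A/W
    D = R * np.sqrt(A) / np.sqrt(2 * q * I_dark)   # detectivity, Jones
    nu = c / (wavelength_nm * 1e-9)         # photon frequency, Hz
    G = R * h * nu / (q * eta)              # photoconductive gain
    return pdcr, R, D, G

# Assumed example: 5 uA under light, 50 nA dark, 2.96 uW at 900 nm.
pdcr, R, D, G = figures_of_merit(5e-6, 5e-8, 2.96e-6, 900)
print(f"PDCR={pdcr:.0f}, R={R:.2f} A/W, D={D:.2e} Jones, G={G:.2f}")
```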
The domain walls in Ni80Fe20 result in gapless regions in the SS, as shown schematically in the inset of Fig. 5e. These results corroborate well with the MR study.
Figure 5.
Figure 5. (a) MR vs. magnetic field for various currents. (b) Schematic of the band-gap opening in the TI due to TRS breaking. (c, d, e) Room-temperature MOKE hysteresis loops of p-Si, Ni80Fe20, and Ni80Fe20/p-TlBiSe2/p-Si, respectively; the inset of (e) shows the cross-sectional view of Ni80Fe20/p-TlBiSe2.
Figure 6.
Figure 6. Schematic energy band diagram of the Ni80Fe20/p-TlBiSe2/p-Si heterostructure. (a) In AMF, forward bias with light illumination results in an increase in the forward current. (b) In PMF, forward bias with light illumination results in a decline in the current due to TRS breaking in the TI material.
Figure 7.
Figure 7. Schematic of the Ni80Fe20/p-TlBiSe2/p-Si heterostructure in the presence and absence of a magnetic field, respectively.
Figure 8.
Figure 8. Schematic of the TlBiSe2/Ni80Fe20/p-Si heterostructure in the presence and absence of a magnetic field, respectively.
Table 1.
All the observed Raman peaks are Lorentzian-fitted, and the corresponding data are listed.
Table 2.
List of parameters derived for the p-TlBiSe2/Ni80Fe20/p-Si heterostructure from the kinetic fittings of the TA spectra.
Table 4.
List of photodetection parameters at the maximum operating wavelength for the p-TlBiSe2/p-Si and Ni80Fe20/p-TlBiSe2 heterojunctions in AMF and PMF.
Ni80Fe20/p-TlBiSe2/p-Si heterojunction band diagram
An energy band diagram is useful to elucidate the charge-transport mechanism and the generation of photocurrent under light in the examined heterojunctions. Figure 6 depicts the band diagram of the Ni80Fe20/p-TlBiSe2/p-Si heterojunction at forward bias under light illumination in AMF and PMF. The abbreviations E_C and E_V represent the conduction and valence bands, with suffixes 1 and 2 denoting p-TlBiSe2 and p-Si, respectively. E_g represents the band gap of each material (E_g ~ 0 eV for Ni80Fe20, E_g ~ 0.33 eV for TlBiSe2, and E_g ~ 1.12 eV for Si). ΔE_C and ΔE_V are the conduction and valence band offsets, which act as barriers to the current flow. For the Ni80Fe20/TlBiSe2 heterojunction, ΔE_C and ΔE_V are 0.17 eV and 0.20 eV, respectively, while for the p-TlBiSe2/p-Si heterojunction the corresponding values are 0.65 eV and 0.14 eV. The smaller band offsets in the Ni80Fe20/p-TlBiSe2 heterojunction arise from the nearly equal electron affinities of the two materials, giving the junction an ohmic character. In thermal equilibrium, the Fermi levels of all three materials align owing to the diffusion of excess carriers from regions of higher to lower concentration. Under forward bias, the Fermi levels shift with the applied bias: in the Ni80Fe20/p-TlBiSe2 heterojunction, the Fermi level of Ni80Fe20 shifts downwards while that of p-TlBiSe2 shifts upwards; similarly, in the p-TlBiSe2/p-Si heterojunction, the Fermi level of p-Si shifts downwards while that of p-TlBiSe2 shifts upwards. Under illumination and in AMF, the forward-biased Ni80Fe20/p-TlBiSe2/p-Si heterojunction is depicted in Fig. 6a.
"Physics",
"Materials Science"
] |
The idea of fuzzy logic usage in a sheet-based FMEA analysis of mechanical systems
The main purpose of the work presented in this paper was to examine the possibility of using fuzzy logic inference for conducting a risk analysis with the help of the sheet-based Failure Mode and Effects Analysis method (FMEA). First, the main features of the analysed method are presented, with particular emphasis on the Risk Priority Number parameter. Then, a proposal is made to use the Matlab Fuzzy Logic Toolbox package to convert the factors into the form of fuzzy sets and to define rules for the fuzzy inference process. Finally, the created fuzzy logic model is used to present an example analysis of a turbocharger failure in fuzzified form.
Introduction
Today, a lot of emphasis in engineering design is put on the reliability of the product. One of the methods used for this purpose is Failure Mode and Effects Analysis (FMEA). It consists of an analysis of the causes and effects of defects, and has become one of the fundamental methods used to improve the quality of products, processes and services. It was developed in the United States as the military procedure MIL-P 1629 [1]. Then, in the 1960s, it was used by NASA in the Apollo Saturn space flight project. The flight to the moon succeeded, and the method was subsequently adopted in other industries, especially the nuclear and aerospace industries. Since the 1970s, FMEA has become more and more popular in the automotive industry; e.g., Ford Motor Company used FMEA to analyse cars in terms of their safety for the user and the fulfilment of legal requirements. Suppliers of parts and subassemblies for the production of Ford cars were obliged to use the method in accordance with the ISO/TS 16949 standard, previously known as QS-9000. Currently, FMEA has very wide usage and can be applied in many fields, including medicine, metallurgy, food services, engineering, and even public administration. The analysis can be carried out in accordance with the EN 60812:2006 standard [2], and it has been included in the ISO standards [3]. The main goal is to consistently and systematically identify potential defects in products or processes and then eliminate or minimize the risks associated with them. The greatest benefits arise when the method is used at the initial concept and design stage, because the costs of defect removal are then the lowest.
Several variants of the FMEA method can be distinguished. However, based on the preparation process and the way of conducting and presenting the obtained results, it can be divided into sheet-based analysis (classical) and matrix-based analysis (FFDM) [4]. A state-of-the-art summary of the classical FMEA method was published in the form of a review article by Spreafico et al. in 2017 [5]. The review analyses a representative pool of over 200 scientific papers and over 100 patents to give an overview of the evolution of the method, and shows the large number of problems that have already been solved using FMEA. Interest in the method is nevertheless still growing. One direction of development is combining the classic FMEA analysis with fuzzy set theory. A novel modelling approach based on the intuitionistic fuzzy approach was proposed by Tooranlo and Ayatollah in [6]. The approach offers advantages over earlier models since it accounts for degrees of uncertainty in relationships among various criteria or options, specifically when relations cannot be expressed as definite numbers, and provides a tool to evaluate failure modes in the case of insufficient data. Another methodology, which integrates the concepts of the similarity value measure of fuzzy numbers and possibility theory, was developed by Mandal and Maiti [7]. New proposals for the application of fuzzy logic were met with great interest in scientific centres, which resulted in the creation of many hybrid FMEA methods and models in the fuzzy environment, such as extended MULTIMOORA or AHP, applied e.g. by Fattahi [8] and Hu-Chen [9].
The developed methods have already been used in a number of practical applications. Considering the fields related to mechanical engineering, most of the articles are devoted to the analysis of drives and control systems. Zaifang [10] proposed a fuzzy RPN-based method that integrates the weighted least squares method, the method of imprecision and a partial ranking method to generate more accurate fuzzy RPNs and ensure robustness against uncertainty; the method was demonstrated on the design example of a new horizontal directional drilling machine. Xu [11] presented a fuzzy-logic-based method for the FMEA analysis of a diesel engine turbocharger system, while Renjith [12] carried out a fuzzy FMECA (failure mode, effects and criticality analysis) of a liquefied natural gas storage facility, and Yazdi [13] used a similar FDFMEA method (Fuzzy Developed FMEA) for an aircraft landing system. With regard to the presented state of the art, this article concerns the creation of a sheet-based FMEA model using fuzzy logic in the Matlab environment. The model is used to demonstrate the analysis of an example turbocharger failure.
Fuzzy model of a sheet-based FMEA method
In general, risk assessment in the sheet-based FMEA method is carried out using the risk priority number (RPN), which is estimated on the basis of three factors: Occurrence (O), Severity (S) and Detection (D). In the conventional approach, the value of the RPN is the simple product of all three factors: RPN = O·S·D. The value of each factor is an integer in the range 1 to 10. The analysis is usually carried out in three stages [14]: preparation of the analysis, conducting the analysis, and introduction and supervision of improvement activities. The diagram of the complete procedure is shown in Fig. 1.
The application of fuzzy logic requires the transformation of integer numbers (crisp values) into degrees of fuzzy set membership (fuzzy values). For this purpose, fuzzy sets of all input parameters (O, S, D) as well as of the output (RPN) must first be defined. The detailed process of fuzzy logic model creation is presented e.g. by Piegat [15], while its practical application, illustrated on the example of a sterilization unit risk analysis, is shown by Dağsuyu [16]. Crisp values of the factors, with verbal descriptions and the names adopted for defining the fuzzy sets, are given in Table 1. According to Table 1, the following fuzzy sets were defined for each input factor: VL (very low), L (low), M (medium), H (high) and VH (very high). Such unification of the fuzzy set names significantly increases the consistency of the model. Because the cores (subsets with the maximum value of the membership function) have the form of intervals in most fuzzy sets (e.g. the core of fuzzy set L of the Occurrence factor is the range from 2 to 3), Gauss-type membership functions were applied in this model, similarly to [11], instead of the commonly used linear-type functions [6,7,10,14]. The scheme of the model created in the Anfis editor is presented in Fig. 2. The diagram of fuzzy sets defined for the Occurrence O input factor is shown in Fig. 3, while the diagram for Detection D is presented in Fig. 4; the Severity diagram is omitted, since it has a form similar to that of Occurrence. The RPN failure classification factor has a crisp value in the range [1, 1000]. It was divided into seven fuzzy sets, using a membership function set similar to that introduced in [12]: VL (very low, core less than 10), ML (medium low, core between 20 and 30), LM (low medium, core 90-110), MM (medium medium, core 180-220), HM (high medium, core 330-370), MH (medium high, core 460-540) and VH (very high, core 750-1000). The diagram of fuzzy sets defined for the RPN factor is shown in Fig. 5. In the next step, the fuzzy mathematical operators as well as the defuzzification method required by the inference process were determined. Based on the analysis of previous research, the following parameters were selected: fuzzy product operator: prod; fuzzy sum operator: probor; defuzzification method: centroid. Finally, a database of rules representing the connections between the fuzzy sets of the input factors and the RPN was created; it consists of more than 100 rules in the form of if-then statements.
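To make the inference pipeline concrete, the sketch below reimplements a reduced, two-rule Mamdani-style system with Gaussian membership functions, the prod/probor operators and centroid defuzzification in Python/NumPy, for readers without access to the Matlab Fuzzy Logic Toolbox. The set centres, widths and the two rules are illustrative stand-ins for the full model with its 100+ rules, so the printed outputs only approximate the reported RPN values.

```python
# Minimal Mamdani-style sketch: Gaussian membership functions, prod t-norm,
# probor aggregation, centroid defuzzification. Two illustrative rules only.
import numpy as np

def gauss(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

rpn = np.linspace(1, 1000, 2000)         # RPN universe of discourse

# Output fuzzy sets (centres/widths chosen to mimic the MH and MM cores).
MH = gauss(rpn, 500, 60)                 # medium high, core near 460-540
MM = gauss(rpn, 200, 40)                 # medium medium, core near 180-220

def infer(S, O, D):
    # Rule 1: if S is H and O is H and D is M then RPN is MH (prod t-norm)
    w1 = gauss(S, 7, 1) * gauss(O, 8, 1) * gauss(D, 4, 1)
    # Rule 2: if S is M and O is M and D is M then RPN is MM
    w2 = gauss(S, 5, 1) * gauss(O, 6, 1) * gauss(D, 4, 1)
    a, b = w1 * MH, w2 * MM              # scaled rule consequents
    agg = a + b - a * b                  # probor (probabilistic sum)
    return np.sum(rpn * agg) / np.sum(agg)   # centroid defuzzification

print(infer(7, 8, 4))   # before repair (the paper's example inputs)
print(infer(5, 6, 4))   # after repair
```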
Practical application of created fuzzy FMEA model
As a practical example, the proposed fuzzy model was applied to the analysis of a selected turbocharger defect resulting in a decrease in engine power [17]. The visible symptoms of the fault were noise and vibrations, hence the following factor values were assigned: S = 7, O = 8, D = 4. The fuzzy system gives in this case an RPN value of 500 (Fig. 6), which means the problem is considered serious (the most activated fuzzy set of the RPN is MH).
Conclusion
This paper has presented a proposal to use the Matlab Fuzzy Logic Toolbox to carry out a sheet-based FMEA analysis. A fuzzy model was created in the Anfis editor and then used to analyse a selected turbocharger defect. The obtained RPN values are higher than those from the classical FMEA method [17] (RPN = 224 and RPN = 120, before and after the repair, respectively); however, in both methods the RPN value was approximately halved by the repair.
Fig. 3.
Fig. 3. Fuzzy sets defined for the Occurrence factor using Gauss-type membership functions.
Fig. 4.
Fig. 4. Fuzzy sets defined for the Detection factor using Gauss-type membership functions.
Fig. 6.
Fig. 6. Fuzzy FMEA system output for S = 7, O = 8, D = 4. After the repair, which consisted of replacing the air filter, the values of the Severity and Occurrence factors changed to S = 5 and O = 6, respectively. The RPN generated by the fuzzy model in this case was reduced to RPN = 190, which corresponds to the highest activation of the LM and MM fuzzy sets.
Table 1.
Crisp values and names of fuzzy sets defined for the O, S, D factors.
"Computer Science",
"Engineering"
] |
Modeling the Specific Surface Area of Doped Spinel Ferrite Nanomaterials Using Hybrid Intelligent Computational Method
Spinel ferrite nanomaterials are magnetic semiconductors with excellent chemical, magnetic, electrical, and optical properties, which have rendered these materials useful in many technology-driven applications such as solar hydrogen production, data storage, magnetic sensing, converters, inductors, spintronics, and catalysis. The surface area of these nanomaterials contributes significantly to their targeted applications as well as to their observed physical and chemical features. Experimental doping has shown great potential in enhancing and tuning the specific surface area of spinel ferrite nanomaterials, while the attendant experimental challenges call for a viable theoretical model that can estimate the surface area of doped spinel ferrite nanomaterials with a high degree of precision. This work develops stepwise regression (STWR) and hybrid genetic-algorithm-based support vector regression (GBSVR) intelligent models for estimating the specific surface area of doped spinel ferrite nanomaterials, using the lattice parameter and the size of the nanoparticle as descriptors. The developed hybrid GBSVR model performs better than the STWR model, with performance improvements of 7.51% and 22.68% using the correlation coefficient and root mean square error, respectively, as performance metrics, when validated against experimentally measured specific surface areas of doped spinel ferrite nanomaterials. The developed GBSVR model is used to investigate the influence of nickel, yttrium, and lanthanum nanoparticles on the specific surface area of different classes of spinel ferrite nanomaterials, and the obtained results agree excellently with the measured values. The accuracy and precision characterizing the developed model would be of immense importance in enhancing the prediction of the specific surface area of doped spinel ferrite nanomaterials, with circumvention of experimental stress coupled with reduced cost.
Introduction
Spinel ferrite nanomaterials have lately gained significant attention due to their unique chemical, physical, magnetic, electrical, and optical features, which are of great interest in many technological advancements and applications such as gas sensors, drug delivery, photocatalysts, water splitting, spintronics, and supercapacitors [1-4]. Other important characteristics of spinel ferrite nanomaterials that promote their wide applicability include stability, low cost, and ease of preparation [5]. The specific surface area of spinel ferrite nanomaterials contributes significantly to their technological applications, especially during organic pollutant treatment [6-8]. The significance of the specific surface area and other physical properties of catalysts for pollutant treatment has been treated elsewhere [9-11]. Tuning of the specific surface area of spinel ferrite nanomaterials is often carried out experimentally through doping, whereby foreign and external materials are incorporated into the parent spinel ferrite ceramic compounds, leading to alterations in the magnetic, electrical, and optical properties coupled with changes in the specific surface area of the nanomaterials [12-15]. This work models the specific surface area of spinel ferrite nanomaterials doped with foreign materials through a stepwise-regression-based model and a hybrid genetic-algorithm-based support vector regression (GBSVR) intelligent computational method, using the lattice parameter and the size of the nanomaterial as descriptors to the models.
Using crystal structure as a yardstick of classification, the ferrite family can be hexagonal, garnet, or spinel ferrite [6]. The unique properties of spinel ferrites distinguish them from the other members of the ferrite family and have given them a place in several technological applications. The interstitial sites in spinel ferrite over which cations are distributed include octahedral and tetrahedral sites [16-18]. Variation of the charge distribution over these sites alters the specific surface area as well as other physical and chemical properties of the nanomaterials, while the introduction of external materials into the crystal structure redistributes charges within the available sites [19,20]. The doping that accompanies enhancement of the specific surface area and other physical properties distorts the lattice structure of the parent spinel, which makes lattice distortion significant when modeling the influence of dopant incorporation on the specific surface area of the nanomaterials. Inclusion of the size of the nanomaterial among the descriptors is important since the interesting physical and chemical properties exhibited by spinel ferrite nanomaterials are constrained to the nanoscale size of the materials and display distinct features when investigated outside the nanoscale [7]. The hybrid of genetic and support vector regression algorithms presented in this work exclusively models the nonlinear relationship between the specific surface area and the descriptors, which include the distorted lattice parameter and the size of the nanomaterial, while the developed stepwise regression offers an empirical relationship with significant deviation, owing to its insufficient strength in addressing the nonlinear relationship existing between the descriptors and the specific surface area. Support vector regression (SVR) belongs to the class of intelligent algorithms with fast computing potential and excellent efficiency in addressing complex regression problems [21,22]. The algorithm is formulated using the structural risk minimization principle and minimizes the empirical error through epsilon-insensitive loss function control [23-25]. The statistical learning theory upon which the algorithm is based helps in error-margin customization through hyperplane maximization. These features have made practical application of the SVR algorithm inevitable in addressing real-life challenges and problems in various fields of study [26-29]. The hyperparameters contained in the SVR algorithm control the precision and accuracy of SVR-based models and can be selected through various means, including the grid search approach, manual search, or a meta-heuristic approach [30,31]. Apart from robustness, precision, and avoidance of local minima, time conservation is also a plus of meta-heuristic hyperparameter selection. The meta-heuristic algorithm implemented in this work is the genetic algorithm, characterized by fast convergence and avoidance of premature convergence [32].
Stepwise regression (STWR) is a regression algorithm through which functions and equations that directly link the descriptors with the desired target are generated through either a forward-selection addition process or backward-deletion procedures [33,34]. It weighs the significance of every descriptor based on defined criteria before inclusion in the regression function. The algorithm also allows the implementation of high-order polynomials as well as descriptor interactions in the regression function. The uniqueness of this algorithm has led to a wide range of real-life applications in the science and engineering fields [35-39]. This work also develops and implements an STWR algorithm for modeling the specific surface area of doped spinel ferrite nanomaterials using the lattice parameter and the size of the nanomaterials as descriptors.
The road map for the remaining part of the manuscript is as follows: Section 2 presents the mathematical formulation of the support vector regression algorithm, the stepwise regression algorithm, and the physical principles governing the operation of the population-based genetic optimization algorithm. Section 3 presents the computational hybridization of support vector regression and the genetic algorithm, as well as the description and details of the data acquisition. The results are presented and discussed in Section 4 of the manuscript, including the results of the algorithm comparison, while Section 5 concludes and summarizes the main findings of the research work.
Mathematical Background of the Developed Algorithms
The mathematical formulation of the intelligent support vector regression algorithm is presented in this section, together with the description of the genetic algorithm. The section also contains the description of the implemented stepwise regression algorithm.
2.1. Support Vector Regression Intelligent Algorithm. Consider κ samples of a dataset defined as (ι_j, S*_j), where j = 1, 2, ⋯, κ and ι_j ⊂ ℝ^d. The specific surface area of doped spinel ferrite nanomaterials estimated by the proposed support vector regression (SVR) model is represented by S, while the measured values of the specific surface area from which patterns are to be acquired and drawn are represented by S*_j. The descriptors to the proposed model, which include the size of the doped spinel ferrite nanomaterial and the lattice distortion as measured by the lattice parameter, are represented in the regression equation as ι_j. The support vector regression function is presented in Equation (1) [40,41]:

S(ι) = δ^T φ(ι) + ρ,    (1)

where φ(·) denotes the (implicit) mapping to the feature space.
Here S_j ∈ ℝ, ι_j ∈ ℝ^d, ρ is the bias of the regression function, and δ is the weight vector to be determined and optimized within the support vector regression formulation. Minimization of the Euclidean norm ‖δ‖ in actualizing the SVR objectives requires minimization of Equation (2), subject to the conditions and constraints itemized in Equation (3):

minimize (1/2)‖δ‖² + C Σ_j (ψ_j + ψ*_j),    (2)

subject to S*_j − δ^T φ(ι_j) − ρ ≤ ε + ψ_j,  δ^T φ(ι_j) + ρ − S*_j ≤ ε + ψ*_j,  ψ_j, ψ*_j ≥ 0,    (3)

where C is the penalty factor that strongly influences the performance of the model.
The modification of the complex optimization problem contained in Equation (2) includes the non-negative slack variables (ψ_j, ψ*_j), which control possible constraint violations beyond the epsilon (ε) threshold [42,43]. The introduction of Lagrange multipliers (χ, χ*) enables the dual-problem transformation, which aims at minimizing Equation (4) subject to the conditions presented in Equation (5):

W(χ, χ*) = (1/2) Σ_j Σ_i (χ_j − χ*_j)(χ_i − χ*_i) γ(ι_j, ι_i) + ε Σ_j (χ_j + χ*_j) − Σ_j S*_j (χ_j − χ*_j),    (4)

Σ_j (χ_j − χ*_j) = 0,  0 ≤ χ_j, χ*_j ≤ C.    (5)
Here χ and χ* are the Lagrange multipliers. Note that γ(ι_j, ι_i) in Equation (4) represents the kernel function, which can be polynomial, sigmoid, Gaussian, or another viable function with distinct features. The function that best mapped the lattice distortion and the size of the nanoparticles to the high-dimensional feature space is the Gaussian function presented in Equation (6) [44]:

γ(ι_j, ι_i) = exp(−‖ι_j − ι_i‖²/(2ω²)),    (6)
where ω is the kernel parameter.
The final regression function after the transformation is presented in Equation (7):

S(ι) = Σ_j (χ_j − χ*_j) γ(ι_j, ι) + ρ.    (7)
The kernel parameter (ω) of the chosen kernel function, the maximum allowable deviation (ε) of the estimated specific surface area from the measured value, and the penalty factor (C) are optimized using the genetic algorithm in this research work.
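As an illustration of how such a model can be assembled with off-the-shelf tools, the sketch below uses scikit-learn's RBF-kernel SVR, whose gamma plays the role of the kernel parameter ω. The four training rows and the hyperparameter values are placeholders, not the paper's dataset or optimized settings.

```python
# Hedged sketch of an RBF-kernel SVR for specific-surface-area estimation.
# Descriptors: [lattice parameter (angstrom), particle size (nm)];
# target: specific surface area (m^2/g). All numbers are placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.array([[8.35, 12.0], [8.39, 18.0], [8.42, 25.0], [8.37, 9.0]])
y = np.array([95.0, 60.0, 40.0, 120.0])

# C = penalty factor, epsilon = allowed deviation, gamma stands in for the
# kernel parameter omega; values here are illustrative, not optimized.
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=100.0, epsilon=0.2, gamma=0.3))
model.fit(X, y)
print(model.predict([[8.38, 15.0]]))
```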
Genetic Algorithm.
The genetic algorithm belongs to the class of well-explored evolutionary optimization algorithms that mimic and replicate natural evolution [45,46]. Its development and implementation may be in binary or continuous form depending on the nature of the optimization problem to be handled, since the binary type caters for discrete as well as continuous search spaces. Three operators that invariably feature in genetic algorithm principles are selection, crossover, and mutation. Before the best candidates are selected through fitness-function evaluation, a population matrix is generated randomly to house probable solutions within the search space [47,48]. The crossover operation produces offspring from the selected probable solutions, called parents, after the implementation of the selection operator, while the mutation operation aids genetic-diversity maintenance and prevents convergence to local minima. The processes are repeated over a defined number of generations until the stopping conditions are satisfied. Elitism inscribes robustness and effectiveness into the genetic algorithm and enhances its optimality [49]; although the canonical genetic algorithm does not include elitism as an operator, among its functions is the prevention of the best solution from undergoing the mutation process, which helps to transfer and preserve the best solutions from one generation to subsequent generations.
Mathematical Background of Stepwise Regression.
Stepwise regression is a class of linear regression methods that utilizes iterative, step-by-step construction of the regression function [33,34]. The stepwise regression equation is presented in matrix form in Equation (8), in which X and β, respectively, represent the descriptor matrix and the coefficient vector of X:

y = Xβ,    (8)

where y = [y₁, y₂, ⋯, y_κ]^T. It should be noted that the length of β equals the number of descriptors, which is two (the lattice parameter of the doped spinel ferrite nanomaterials and the size of the nanomaterials) in this problem. The stepwise regression algorithm aims at iteratively and recursively determining the link between the descriptors and the target by weighing the contribution of each descriptor [50]. The metrics adopted for adding and deleting descriptors include the adjusted R², the Bayesian Information Criterion, the Akaike Information Criterion, and the Sum of Squared Errors, while the estimation strength of each criterion is judged using the root mean square error (RMSE) of the tested samples. The procedures for adding and deleting descriptors are termed forward selection and backward elimination, respectively [33]; the choice between these methods does not usually affect the accuracy of the resulting regression function, although one may save computational time over the other. In the forward selection process, one descriptor is selected and added to the regression function in the first step, and the second descriptor is added after ensuring that its addition provides the best fit; the process continues until all qualifying descriptors are added according to requirements such as the adjusted R², the Bayesian Information Criterion, the Akaike Information Criterion, or the Sum of Squared Errors. In the backward elimination process, the development of the regression function begins with the inclusion of all descriptors, and descriptors failing to meet the defined requirements and criteria are eliminated one by one. The choice of stepwise regression is significant in the addressed problem in order to ascertain whether a nonlinear model is needed to establish the relationship between the lattice parameter, the nanomaterial size, and the specific surface area of doped spinel ferrite nanomaterials.
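A compact illustration of the forward-selection variant with adjusted R² as the inclusion criterion is sketched below; the data are randomly generated placeholders, and with only two candidate descriptors the loop is deliberately short, though the same pattern scales to larger descriptor pools.

```python
# Hedged sketch of forward stepwise selection with adjusted R^2 as the
# inclusion criterion. Data are random placeholders, not the paper's dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))            # [lattice parameter, particle size]
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.3, 40)

def adjusted_r2(model, Xs, ys):
    n, p = Xs.shape
    r2 = model.score(Xs, ys)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

selected, remaining = [], [0, 1]
best_score = -np.inf
while remaining:
    scores = {}
    for j in remaining:                 # try adding each leftover descriptor
        cols = selected + [j]
        m = LinearRegression().fit(X[:, cols], y)
        scores[j] = adjusted_r2(m, X[:, cols], y)
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score:    # stop if no improvement
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected descriptor indices:", selected, "adj. R^2:", round(best_score, 3))
```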
Computational Methodology of the Developed Model
The details of the computational hybridization of support vector regression and genetic algorithm are presented here. This section also contains the details of the employed dataset and the results of the initial statistical analysis performed on the dataset.
Acquisition and Description of the Spinel Ferrite Nanomaterial Dataset. The lattice parameter of the doped spinel ferrite nanomaterials, obtained from X-ray diffraction analysis, serves as a descriptor to the developed hybrid GBSVR model. The model also factors in the size of the nanomaterial as a descriptor for estimating the specific surface area of the spinel ferrite nanomaterial. The lattice distortion and the size of the nanomaterials, with their corresponding specific surface areas, for forty spinel ferrite nanomaterials doped with different nanoparticles were extracted from the literature for the model development [5,7,8,18,51-55]. The lattice strain, as well as the structural distortion of the lattice of the parent spinel ferrite nanomaterial resulting from the introduced dopants, definitely alters the specific surface area of the material. Table 1 presents the outcomes of the statistical analysis performed on the dataset: statistical parameters were evaluated on the lattice parameter, the size of the nanomaterial, and the measured specific surface area of the doped spinel ferrite nanomaterials. The presented average (mean) values, together with the maximum and minimum values, collectively reflect the overall content of the employed dataset. The significance of the standard deviation analysis presented in the table is to give useful insight into the dispersion of the values of the specific surface area and the descriptors for the different doped spinel ferrite nanomaterials. The correlation coefficients between the measured specific surface area and each of the descriptors reveal the potential of a nonlinear model in establishing the relationship between the specific surface area and the descriptors; this gives an insight that the proposed hybrid genetic-algorithm-based support vector regression (GBSVR) has the potential to outperform the proposed stepwise regression (STWR), which fails to incorporate nonlinearity in its operational modalities.
Computational Development of the Proposed Hybrid Genetic Algorithm-Based Support Vector Regression
The strong dependence of the performance of a support vector regression-based model on the combined choice of hyperparameters necessitates proper tuning and optimization of the hyperparameters, which include the maximum allowable deviation (epsilon), the penalty factor, and the kernel parameter of the best kernel function. The computational development of the proposed hybrid GBSVR model was conducted within the MATLAB computing environment. The dataset was partitioned into training and testing sets in an 8 : 2 ratio, with randomization of the dataset preceding the separation. Randomization promotes an even and uniform distribution of data points, preventing the model from being validated and tested on tasks it failed to learn owing to insufficient data dispersion. Of the forty doped spinel ferrite nanomaterials investigated, thirty-two doped compounds were employed for pattern acquisition during the training stage, while the remaining eight doped compounds were utilized for evaluating the future performance of the model. The following steps summarize the hybridization of support vector regression with the genetic algorithm for performance enhancement.
Step 1. Population matrix: every chromosome in the population matrix carries information regarding the epsilon, the penalty factor, and the kernel parameter in a known and defined order. The kernel parameter is contained in the chosen kernel function and helps in smoothly transforming the data to the feature space where the construction of the regression function takes place. The search space for epsilon was bounded between 0.1 (lower bound) and 0.4 (upper bound); setting it outside this range resulted in perpetual running of the code. The kernel parameter likewise ranges from 0.1 to 0.4, while the penalty factor was maintained between 1 and 1000. The right choice of hyperparameter ranges strengthens the exploitation and exploration capacities of the model and leads to appreciable savings in computation time.
Step 2. Chromosome fitness calculation: determination of chromosome fitness involves running the SVR algorithm with the training and testing sets of data. The chromosome with the lowest RMSE during the testing stage is the fittest, while the other chromosomes are ranked in accordance with their fitness.
Step 3. Reproduction operation: reproduction involves the selection of the best chromosomes with distinct qualities (usually characterized by the lowest RMSE values) for transition to the next generation. In order to enhance fruitful and desired transitions, a tournament selection operation with a probability of 0.8 was implemented. Tournament selection involves randomly selecting a few chromosomes from the whole population for a tournament, with the winners of the tournament proceeding to the crossover stage.
Step 4. Crossover stage: in the crossover stage, new offspring with desired qualities and potentials are formed from portions and subsequences of two parents. The crossover probability was set at 0.65 in order to achieve effective transfer of subsequences from parents to offspring.
Step 5. Mutation stage: the mutation operation alters randomly chosen positions within a chromosome; the mutation probability was set at 0.009 in the present work.
Step 6. Stopping condition: the algorithm repeats the cycle from Step 1 to Step 5 until the same fitness value is attained for sixty consecutive iterations.
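A compact Python sketch of this loop is given below. It mirrors the steps above (tournament selection with probability 0.8, crossover at 0.65, mutation at 0.009, stopping after sixty stalled iterations) but uses scikit-learn's SVR in place of the authors' MATLAB code; the bounds come from Step 1, and all helper names are our own.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
BOUNDS = np.array([[1.0, 1000.0],   # penalty factor C
                   [0.1, 0.4],      # epsilon
                   [0.1, 0.4]])     # Gaussian kernel parameter gamma

def fitness(chrom, Xtr, ytr, Xte, yte):
    """Testing-stage RMSE of an SVR built from one chromosome (lower is fitter)."""
    C, eps, gamma = chrom
    model = SVR(kernel="rbf", C=C, epsilon=eps, gamma=gamma).fit(Xtr, ytr)
    return mean_squared_error(yte, model.predict(Xte)) ** 0.5

def ga_svr(Xtr, ytr, Xte, yte, pop_size=50, stall_limit=60):
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, 3))
    best, best_rmse, stall = None, np.inf, 0
    while stall < stall_limit:
        rmse = np.array([fitness(c, Xtr, ytr, Xte, yte) for c in pop])
        if rmse.min() < best_rmse - 1e-12:
            best, best_rmse, stall = pop[rmse.argmin()].copy(), rmse.min(), 0
        else:
            stall += 1
        # Step 3: tournament selection, keeping the winner with probability 0.8
        parents = []
        for _ in range(pop_size):
            i, j = rng.choice(pop_size, 2, replace=False)
            w, l = (i, j) if rmse[i] < rmse[j] else (j, i)
            parents.append(pop[w] if rng.random() < 0.8 else pop[l])
        parents = np.array(parents)
        # Step 4: single-point crossover with probability 0.65
        for i in range(0, pop_size - 1, 2):
            if rng.random() < 0.65:
                cut = rng.integers(1, 3)
                parents[[i, i + 1], cut:] = parents[[i + 1, i], cut:]
        # Step 5: mutation, resampling a gene with probability 0.009
        mask = rng.random(parents.shape) < 0.009
        parents[mask] = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1],
                                    size=parents.shape)[mask]
        pop = parents
    return best, best_rmse
```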
Results and Discussion
The outcomes of the developed hybrid GBSVR model are discussed in this section. The performance of the developed GBSVR model is compared with that of STWR using different evaluation parameters. The sensitivity and convergence of the developed hybrid GBSVR model with respect to the hyperparameters and the content of the population matrix are also presented, together with a comparison of the estimated specific surface areas of spinel ferrite nanomaterials with the measured values for various dopant incorporations.
Sensitivity and Convergence of the Developed Hybrid GBSVR Model
The convergence of the developed hybrid GBSVR model over the iterations, for different numbers of chromosomes in the population matrix, is presented in Figure 1. The exploitation and exploration strength of the developed hybrid model is optimized as convergence is attained. It was observed from the convergence graph presented in Figure 1 that the developed model is robust and shows the same point of convergence for different numbers of chromosomes in the population matrix. The optimum population matrix was set at fifty on the basis of the obtained global convergence. The sensitivity of the developed GBSVR model to the penalty factor hyperparameter is presented in Figure 2 for various numbers of chromosomes in the population matrix. When ten chromosomes are navigating the search space, the penalty factor initializes at a value of 750 and increases steadily over ten iterations before beginning to converge. With fifty chromosomes, the penalty factor initializes at a value higher than in the ten-chromosome case, and a similar convergence is attained. The convergence presented in Figure 2 shows that the global solution is well explored when there are many chromosomes within the search space. The sensitivity of the developed model with respect to the kernel parameter is presented in Figure 3. A similar trend is obtained, and the model converges to the global solution at a kernel parameter value of 0.4 for the Gaussian mapping function. The results of the genetic optimization algorithm for each of the optimized hyperparameters are presented in Table 2. The performance of the developed GBSVR and STWR models is compared during the training and testing phases of model development using the correlation coefficient (CC) between the measured and estimated specific surface areas, the mean absolute error (MAE), and the root mean square error (RMSE). The outcomes of the comparison are presented in Figure 4.
The comparison on the basis of the correlation coefficient on the training dataset, presented in Figure 4(a), shows that the developed hybrid GBSVR model displays enhanced performance over the developed STWR-based model with a performance improvement of 56.25%, while the testing stage presented in Figure 4(b) shows a performance superiority of 7.508%. Comparing the performance using RMSE gives similar outcomes, as shown in Figures 4(e) and 4(f). Table 3 contains the results of the performance-measuring metrics for the training and testing datasets. Correlation cross-plots between the measured specific surface areas of the doped spinel ferrite nanomaterials and the predicted values are presented in Figure 5 for the training (Figure 5(a)) and testing (Figure 5(b)) sets of data. The precision of the model can be judged from the alignment of the data points in the correlation cross-plots. The lanthanum-doped spinel ferrite nanomaterial has been modeled using the developed GBSVR, and the comparison between the measured and estimated specific surface areas is presented in Figure 6. The effect of grain-boundary deposition of lanthanum is initially observed as an increase in specific surface area up to a lanthanum nanoparticle concentration of 0.06. Beyond this concentration, the lanthanum nanoparticles diffuse into the parent crystal structure, which leads to a decrease in the specific surface area [5]. A few iron (Fe3+) ions in tetrahedral sites were replaced with lanthanum (La3+) ions in octahedral sites, and this charge redistribution is responsible for the observed variation in the specific surface area. The specific surface areas predicted using the developed GBSVR model agree excellently with the measured values [5].
Tailoring the Specific Surface Area of Cd1-xYxFe2O4 Spinel Ferrite Nanomaterial through Yttrium Dopants Using the Developed GBSVR Model
Figure 7 presents the effect of yttrium nanoparticle incorporation on the specific surface area of doped Cd1-xYxFe2O4 spinel ferrite nanomaterial, as captured by the developed GBSVR model. The comparison of the measured specific surface areas with the values estimated by the developed GBSVR model, presented in Figure 7, shows excellent agreement [52]. The observed variation of the specific surface area due to yttrium incorporation can be inferred from the fact that substitution with an ion of higher ionic radius creates spaces in the configuration of the spinel ferrite material. The cationic radius characterizing the sites substituted by yttrium suffers no alteration, since the octahedral site of iron remains unchanged.
Importance of Nickel Nanoparticles in Enhancing the Specific Surface Area of Doped Co1-xNixFe2O4 Spinel Ferrite Nanomaterial Using the Developed Hybrid GBSVR Model
The inclusion of nickel nanoparticles in the crystal structure of Co1-xNixFe2O4 spinel ferrite nanomaterial further reveals the significance of cation distribution for the physical properties of spinel ferrite nanomaterials, especially the specific surface area. While the Fe3+ ions are distributed among the available tetrahedral and octahedral sites, the incorporated nickel (Ni2+) ions occupy tetrahedral sites and the cobalt (Co2+) ions reside in octahedral sites [8]. The comparison presented in Figure 8 shows that the experimental trend of nickel incorporation in the lattice structure of Co1-xNixFe2O4 spinel ferrite nanomaterial is well captured by the developed hybrid GBSVR model [8].
Conclusion
The specific surface area of doped spinel ferrite nanomaterials is modeled in this work using a stepwise regression (STWR) algorithm and a hybrid genetic-algorithm-based support vector regression (GBSVR) algorithm. The descriptors of the developed models are the lattice distortion due to dopant inclusion in the lattice structure of the spinel ferrite nanomaterials and the size of the nanomaterials. The models were developed and validated using forty different spinel ferrite nanomaterials subjected to the incorporation of different dopants under various experimental conditions. The developed GBSVR model outperforms the STWR model when evaluated using the mean absolute error, correlation coefficient, and root mean square error metrics. The developed GBSVR model was used to investigate the significance of lanthanum, yttrium, and nickel nanoparticle inclusions in different spinel ferrite matrices, and the outcomes of the model agree excellently with the measured specific surface areas. The outstanding performance demonstrated by the developed model is of enormous significance for tailoring the specific surface area of spinel ferrite nanomaterials to the desired values needed for specific applications, with conservation of resources and circumvention of experimental stress.
Data Availability
The raw data required to reproduce these findings are available in the cited references in Section 3.1 of the manuscript.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 5,649 | 2021-08-18T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering",
"Computer Science"
] |
Group Action on Fuzzy Modules
In this article, we introduce the notion of a fuzzy G-module by defining the group action of G on a fuzzy set of a Z-module M. We establish the cases in which fuzzy submodules also become fuzzy G-submodules. The notions of a fuzzy prime submodule, fuzzy prime G-submodule, fuzzy semi prime submodule, fuzzy G-semi prime submodule, G-invariant fuzzy submodule and G-invariant fuzzy prime submodule of M are introduced and their properties are described. The homomorphic images and pre-images of fuzzy G-submodules, G-invariant fuzzy submodules and G-invariant fuzzy prime submodules of M are also established.
Introduction and Main Results
The concept of a fuzzy subset of a non-empty set was introduced by Zadeh [1] as a method of representing uncertainty in the real physical world. Following this landmark discovery, a number of studies of fuzzy modules and their applications have emerged. In particular, Negoita and Ralescu [2] introduced and examined the notion of a fuzzy submodule of a module. Since then, different types of fuzzy submodules have been investigated over the last three decades. Juncheol Han [3] has studied group actions in regular rings. The notion of a group action on a fuzzy subset of a ring was defined and studied by Sharma in [4] [5].
Let X be a non-empty set. A mapping µ: X → [0, 1] is called a fuzzy subset of X. Rosenfeld [6] applied the concept of fuzzy sets to the theory of groups and defined the concept of fuzzy subgroups of a group. Since then, many papers concerning various fuzzy algebraic structures have appeared in the literature. As a generalization of a fuzzy set, the concept of an intuitionistic fuzzy set was introduced by Atanassov [7]. Further results on these and other aspects of fuzzy modules can be found in [8]-[17].
In this paper, we define the group action on a fuzzy subset of a module over the ring of integers and introduce the notion of fuzzy G-modules. Many properties of fuzzy G-modules will be established. The concept of fuzzy G-prime submodules will be introduced and studied. Following the definition of a G-invariant submodule of a module M, we define and study the G-invariant fuzzy submodule and the G-invariant fuzzy prime submodule of a module M. The homomorphic images and pre-images of fuzzy G-modules will be established. A number of associated results will be obtained.
Preliminary Knowledge and Results
We recall some definitions and results for the smooth flow of our assertions and results. Throughout the paper, unless otherwise stated, R will denote a commutative ring with unity and M an R-module.
Definition (2.1) [18] Let µ and ν be any two fuzzy sets of an R-module M. Then µ ⊆ ν if and only if µ(x) ≤ ν(x) for all x ∈ M, and the intersection and union are given by (µ ∩ ν)(x) = min{µ(x), ν(x)} and (µ ∪ ν)(x) = max{µ(x), ν(x)} for all x ∈ M.
Definition (2.2) [19] Let M be an R-module. Then a fuzzy set µ of M is called a fuzzy submodule (FSM) of M if (i) µ(0) = 1, (ii) µ(x + y) ≥ min{µ(x), µ(y)} for all x, y ∈ M, and (iii) µ(rx) ≥ µ(x) for all r ∈ R and x ∈ M.
Definition (2.3) [19] Let µ and ν be two fuzzy submodules of an R-module M. Then their sum is defined by (µ + ν)(x) = sup{min{µ(y), ν(z)} : x = y + z, y, z ∈ M}, and their product µν is defined as in [19].
Theorem (2.4) [19] Let µ and ν be two fuzzy submodules of an R-module M. Then the sum µ + ν and the product µν of µ and ν are also fuzzy submodules of M.
Theorem (2.5) [20] Let µ and ν be two fuzzy submodules of an R-module M. Then µ ∩ ν is also a fuzzy submodule of M. In particular, if {µ_i : i ∈ I} is a family of fuzzy submodules of an R-module M, then ∩_{i∈I} µ_i is also a fuzzy submodule of M.
Theorem (2.6) [20] Let µ and ν be two fuzzy submodules of an R-module M. Then µ × ν is also a fuzzy submodule of M × M.
Group Action on Fuzzy Modules
Most of the results below can be extended to an arbitrary commutative ring. We have not been able to remove the restriction to the ring of integers in Lemma (3.3)(iii), and so some results cannot be extended.
Let M be a module over the ring of integers Z and G a finite group which acts on M (i.e., for all g ∈ G and x ∈ M, x^g = gxg^(-1) ∈ M). The identity element of G is denoted by e.
Definition (3.1)
The group action of G on a fuzzy set µ of a Z-module M is denoted by µ^g and is defined by µ^g(x) = µ(x^g) = µ(gxg^(-1)) for all x ∈ M. From the definition of the group action of G on a fuzzy set, the following results are easy to verify.
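For a concrete feel for this action, here is a toy example of our own (not from the paper), taking the action to be by Z-module automorphisms: let M = Z × Z and G = {e, s}, where s swaps coordinates, so x^s = (b, a) for x = (a, b). Define a fuzzy set µ by

```latex
\mu(a,b) \;=\;
\begin{cases}
1 & \text{if } (a,b) = (0,0),\\[2pt]
\tfrac12 & \text{if } a \in 2\mathbb{Z} \text{ and } b \notin 2\mathbb{Z},\\[2pt]
\tfrac14 & \text{otherwise.}
\end{cases}
\qquad\text{Then }
\mu^{s}(2,1) = \mu\bigl((2,1)^{s}\bigr) = \mu(1,2) = \tfrac14,
\quad \mu(2,1) = \tfrac12,
```

so µ^s ≠ µ and this µ is not G-invariant in the sense of Definition (3.23) below.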
Lemma (3.2)
Let µ and ν be two fuzzy sets of a Z-module M and G a finite group which acts on M. Then, for every g ∈ G: (i) (µ ∪ ν)^g = µ^g ∪ ν^g; (ii) (µ ∩ ν)^g = µ^g ∩ ν^g; (iii) if µ ⊆ ν then µ^g ⊆ ν^g; (iv) (µ × ν)^g = µ^g × ν^g.
Let us now prove
Lemma (3.3) Let G be a finite group which acts on a Z-module M. Then for every x, y ∈ M, g ∈ G and r ∈ Z, we have (i) 0^g = 0, (ii) (x + y)^g = x^g + y^g, and (iii) (rx)^g = r(x^g). For instance, (x + y)^g = g(x + y)g^(-1) = gxg^(-1) + gyg^(-1) = x^g + y^g. Hence µ^g is a fuzzy submodule of M whenever µ is (see Proposition (3.4)).
Definition (3.7)
Let µ be a fuzzy set of a Z-module M and G be a finite group which acts on M. Then µ is called a fuzzy G-submodule of M if µ^g is a fuzzy submodule of M for all g ∈ G.
Remark (3.8) (i)
From definition (3.7) we see that every fuzzy G-submodule is also a fuzzy submodule, since µ^e = µ.
(ii) Note that the fuzzy set µ in example (3.6) is not a fuzzy G-submodule of M, for µ^e = µ is not a fuzzy submodule of M.
Theorem (3.9) Let µ be a fuzzy submodule of a Z-module M and G be a finite group which acts on M. Then µ is a fuzzy G-submodule of M if and only if for every g ∈ G, µ^g satisfies the following conditions: (i) µ^g(0) = 1, and (ii) µ^g(rx + sy) ≥ min{µ^g(x), µ^g(y)} for all x, y ∈ M and r, s ∈ Z.
Proof: Firstly, let µ be an FSM of a Z-module M and g ∈ G such that µ^g satisfies the given conditions. Substituting r = s = 1 in condition (ii), we get that µ^g is a fuzzy submodule of M, and hence µ is a fuzzy G-submodule of M. Conversely, let µ be a fuzzy G-submodule of M and g ∈ G. To establish (i) and (ii), we only need to prove (ii).
Let r, s ∈ Z and x, y ∈ M. Then µ^g(rx + sy) = µ((rx + sy)^g) = µ(r(x^g) + s(y^g)) ≥ min{µ(x^g), µ(y^g)} = min{µ^g(x), µ^g(y)}, using Lemma (3.3) and the fact that µ is a fuzzy submodule of M. We now define the fuzzy prime submodule (FPSM) and the fuzzy G-prime submodule of M.
Definition (3.12)
A fuzzy submodule γ of a Z-module M is called a fuzzy prime submodule if for any two fuzzy submodules µ, ν of M, µν ⊆ γ implies that either µ ⊆ γ or ν ⊆ γ.
Lemma (3.13) Let µ and ν be two fuzzy prime submodules of a Z-module M. Then µ ∩ ν is a fuzzy prime submodule of M if and only if either µ ⊆ ν or ν ⊆ µ.
Remark (3.14)
From Lemma (3.13) we infer that, in general, the intersection of two fuzzy prime submodules need not be a fuzzy prime submodule.
Theorem (3.15)
Let γ be a fuzzy prime submodule of a Z-module M and G be a finite group which acts on M. Then γ^g is also a fuzzy prime submodule of M.
Proof: Let µ and ν be fuzzy submodules of M such that µν ⊆ γ^g. Now, we claim that µ^(g⁻¹) ν^(g⁻¹) ⊆ γ. Indeed, for every x ∈ M, (µ^(g⁻¹) ν^(g⁻¹))(x) = (µν)^(g⁻¹)(x) = (µν)(x^(g⁻¹)) ≤ γ^g(x^(g⁻¹)) = γ(x). Since γ is a fuzzy prime submodule of M, either µ^(g⁻¹) ⊆ γ or ν^(g⁻¹) ⊆ γ, that is, either µ ⊆ γ^g or ν ⊆ γ^g. Hence γ^g is a fuzzy prime submodule of M.
Definition (3.16)
Let γ be a fuzzy prime submodule of a Z-module M and G be a finite group which acts on M. Then γ is called a fuzzy G-prime submodule of M if γ^g is a fuzzy prime submodule of M for all g ∈ G.
Remark (3.17) If we denote γ_G = ∩_{g∈G} γ^g, where γ is a fuzzy prime submodule of M, then γ_G need not be a fuzzy prime submodule of M, because the intersection of fuzzy prime submodules of M is, in general, not a fuzzy prime submodule of M (see Remark (3.14)).
Definition (3.18)
A fuzzy submodule γ of a Z-module M is called a fuzzy semi prime submodule (FSPSM) if for every fuzzy submodule µ of M, µ² ⊆ γ implies that µ ⊆ γ.
Theorem (3.20) Let γ be a fuzzy semi prime submodule of a Z-module M and G be a finite group which acts on M. Then γ^g is also a fuzzy semi prime submodule of M.
Proof. Let µ be any fuzzy submodule of M and g ∈ G be any element such that µ² ⊆ γ^g. Then (µ^(g⁻¹))² = (µ²)^(g⁻¹) ⊆ γ, where γ is a fuzzy semi prime submodule of M, so µ^(g⁻¹) ⊆ γ and hence µ ⊆ γ^g. Then γ_G = ∩_{g∈G} γ^g is a fuzzy G-semi prime submodule of M (Theorem (3.22)).
Following the definition of a G-invariant submodule of a module M, we define the G-invariant fuzzy submodule and the G-invariant fuzzy prime submodule of a Z-module M.
Definition (3.23)
Let µ be a fuzzy submodule of a Z-module M and G be a finite group. Then µ is said to be a G-invariant fuzzy submodule of M if and only if µ(x^g) = µ(x) for all x ∈ M and g ∈ G.
Proof (of Proposition (3.24)): For x ∈ M and g ∈ G, we have µ^g(x) = µ(x^g) = µ(x), and hence µ^g = µ; the converse is immediate.
Proof (of Proposition (3.25)): Since µ is a fuzzy submodule of the Z-module M, µ^g is a fuzzy submodule of M for all g ∈ G. Also, the intersection of fuzzy submodules of M is a fuzzy submodule of M. Now µ_G = ∩_{g∈G} µ^g is a fuzzy submodule of M. Further, let ν be any G-invariant fuzzy submodule of M such that ν ⊆ µ. Then for every g ∈ G, we get ν = ν^g ⊆ µ^g, and hence ν ⊆ µ_G.
(µ + ν)^g(x) = (µ + ν)(x^g) = sup{min{µ(y), ν(z)} : x^g = y + z} = sup{min{µ(u^g), ν(v^g)} : x = u + v} = (µ^g + ν^g)(x) = (µ + ν)(x), using Lemma (3.2)(ii) and the G-invariance of µ and ν.
Proof (of Theorem (3.31)): Let γ be an FPSM of M and let µ and ν be two G-invariant FSMs of M such that µν ⊆ γ_G. Then µν ⊆ γ because γ_G ⊆ γ. Since γ is a fuzzy prime submodule of M, either µ ⊆ γ or ν ⊆ γ; thus either µ ⊆ γ_G or ν ⊆ γ_G, since γ_G is the largest G-invariant fuzzy submodule of M contained in γ. Hence γ_G is a G-invariant fuzzy G-prime submodule of M.
Homomorphism of Fuzzy G-Submodules
In this section, we study the image and pre-image of fuzzy G-submodules under module homomorphisms.
and so f⁻¹(ν) is a fuzzy G-submodule of M. Hence f(µ) is a fuzzy G-submodule of M′. Therefore, f(µ) is a G-invariant fuzzy submodule of M′.
Conclusion
In this paper, the notion of a fuzzy G-submodule of a Z-module is defined and discussed. It has been proved that every fuzzy G-module is a fuzzy module but the converse is not true in general. It has also been proved that the intersection and Cartesian product of two fuzzy G-submodules are fuzzy G-submodules. The notions of fuzzy prime submodule, fuzzy G-prime submodule, fuzzy semi prime submodule and fuzzy G-semi prime submodule are introduced and discussed. We have observed that the intersection of two fuzzy prime submodules need not be a fuzzy prime submodule; however, the intersection of two fuzzy prime submodules is always a fuzzy semi prime submodule. The notions of G-invariant fuzzy subset (submodule) and G-invariant fuzzy prime (G-prime) submodule of a Z-module are also introduced and discussed. We have proved that the sum and product of two G-invariant fuzzy submodules are G-invariant fuzzy submodules.
Proposition (3.4)
Let µ be a fuzzy submodule of a Z-module M and G a finite group which acts on M; then µ^g is also a fuzzy submodule of M. Proof: Clearly µ^g(0) = µ(0^g) = µ(0) = 1, and the remaining submodule conditions for µ^g follow from Lemma (3.3).
Propositions (3.10) and (3.11)
Let µ and ν be two fuzzy G-submodules of a Z-module M and G be a finite group which acts on M. Then µ ∩ ν is also a fuzzy G-submodule of M. Proof: Since µ and ν are fuzzy G-submodules of M, µ^g and ν^g are fuzzy submodules of M for all g ∈ G, which implies that µ^g ∩ ν^g = (µ ∩ ν)^g is a fuzzy submodule of M for all g ∈ G [by Lemma (3.2)(ii)]. Hence µ ∩ ν is also a fuzzy G-submodule of M. Let µ and ν be two fuzzy G-submodules of a Z-module M and G be a finite group which acts on M. Then µ × ν is also a fuzzy G-submodule of M × M. Proof: Since µ and ν are fuzzy G-submodules of M, µ^g and ν^g are fuzzy submodules of M for all g ∈ G, which implies that µ^g × ν^g is a fuzzy submodule of M × M [by Theorem (2.6)]. Thus (µ × ν)^g = µ^g × ν^g is a fuzzy submodule of M × M for all g ∈ G [by Lemma (3.2)(iv)]. Hence µ × ν is also a fuzzy G-submodule of M × M.
from Theorem (3.15)], but γ is a fuzzy semi prime submodule. Therefore, γ^g is a fuzzy semi prime submodule of M. Definition (3.21): Let γ be a fuzzy semi prime submodule of a Z-module M and G be a finite group which acts on M. Then γ is called a fuzzy G-semi prime submodule (FGSPSM) of M if γ^g is a fuzzy semi prime submodule of M for all g ∈ G. Theorem (3.22): If we denote γ_G = ∩_{g∈G} γ^g, then γ_G is a fuzzy G-semi prime submodule of M. Proposition (3.24): Let µ be a fuzzy submodule of a Z-module M and G be a finite group which acts on M. Then µ is a G-invariant fuzzy submodule of M if and only if µ^g = µ for all g ∈ G.
Let µ be a fuzzy submodule of a Z-module M and G be a finite group which acts on M. Then µ_G = ∩_{g∈G} µ^g is the largest G-invariant fuzzy submodule of M.
an FSM of M. Proposition (3.26): A fuzzy submodule µ of a Z-module is a G-invariant fuzzy submodule of M if and only if µ_G = µ. Proof: This follows from Proposition (3.4) and Proposition (3.24). Proposition (3.27): Let µ be a fuzzy submodule of a Z-module M. Then µ_G is the largest G-invariant fuzzy submodule of M contained in µ. Proof: This follows immediately from Proposition (3.4) and Proposition (3.25). Theorem (3.28): If µ and ν are G-invariant fuzzy submodules of M, then µ + ν is also a G-invariant fuzzy submodule of M. Proof: Let x, y ∈ M and g ∈ G be any elements; then (µ + ν)^g = µ^g + ν^g = µ + ν. Hence µ + ν is a G-invariant fuzzy submodule of M. Theorem (3.29): If µ and ν are G-invariant fuzzy submodules of M, then µν is also a G-invariant fuzzy submodule of M. Proof: Let x, y ∈ M and g ∈ G be any elements; then
(µν)^g = µ^g ν^g = µν. Hence µν is a G-invariant fuzzy submodule of M. Definition (3.30): A non-constant fuzzy prime submodule γ of a Z-module M is called a G-invariant fuzzy G-prime submodule of M if for any two G-invariant fuzzy submodules µ and ν of M, µν ⊆ γ implies that either µ ⊆ γ or ν ⊆ γ. Theorem (3.31): If γ is a fuzzy prime submodule of a Z-module M, then γ_G is a G-invariant fuzzy G-prime submodule of M.
Theorem (4.5)
Let M and M′ be Z-modules on which G acts and let f be a G-module homomorphism from M into M′. If ν is a G-invariant fuzzy submodule of M′, then f⁻¹(ν) is a G-invariant fuzzy submodule of M. Proof. Since ν is a G-invariant fuzzy submodule of M′, we have ν^g = ν for all g ∈ G.
Theorem (4.6)
Let M and M′ be Z-modules on which G acts and let f be a bijective G-module homomorphism from M into M′. If µ is a G-invariant fuzzy submodule of M, then f(µ) is a G-invariant fuzzy submodule of M′. Proof. Since µ is a G-invariant fuzzy submodule of M, we have µ^g = µ for all g ∈ G.
Hence f(µ) is a G-invariant fuzzy submodule of M′. | 3,532.2 | 2016-03-18T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
On the Parameterized Complexity of the Expected Coverage Problem
The Maximum Covering Location Problem (MCLP) is a well-studied problem in the field of operations research. Given a network with positive or negative demands on the nodes and a positive integer k, the MCLP seeks to find k potential facility centers in the network such that the neighborhood coverage is maximized. We study the variant of MCLP where edges of the network are subject to random failures due to some disruptive events. One of the popular models capturing the unreliable nature of the facility location is the linear reliability ordering (LRO) model. In this model, with every edge e of the network, we associate its survival probability 0 ≤ p_e ≤ 1, or equivalently, its failure probability 1 − p_e. The failure correlation in LRO is the following: if an edge e fails then every edge e′ with p_{e′} ≤ p_e surely fails. The task is to identify the positions of k facilities that maximize the expected coverage. We refer to this problem as the Expected Coverage problem. We study the Expected Coverage problem from the parameterized complexity perspective and obtain the following results. 1. For the parameter pathwidth, we show that the Expected Coverage problem is W[1]-hard. We find this result a bit surprising, because the variant of the problem with non-negative demands is fixed-parameter tractable (FPT) parameterized by the treewidth of the input graph. 2. We complement the lower bound by the proof that Expected Coverage is FPT parameterized by the treewidth and the maximum vertex degree. We give an algorithm that solves the problem in time 2^{O(tw log Δ)} n^{O(1)}, where tw is the treewidth, Δ is the maximum vertex degree, and n the number of vertices of the input graph. In particular, since Δ ≤ n, the problem is solvable in time n^{O(tw)}, that is, it is in XP parameterized by treewidth.
Introduction
The MAXIMUM COVERING LOCATION PROBLEM (MCLP) is a well-studied problem in the field of operations research [8]. Given a network with demands on the nodes and a positive integer budget k, the MCLP asks to find k potential facility centers in the network such that the neighborhood coverage is maximized. We are interested in investigating the unreliable nature of the MCLP. Unreliability is introduced by associating survival probabilities with the edges of the input network. The notion of unreliability is used in disaster management, survivable network design and influence maximization. Assume that the network is subjected to a disaster event. During the course of the disaster, some links may become non-functional. This yields a structural change in the underlying graph of the network: the resulting graph is an edge-induced subgraph of the original graph. In certain cases, the resulting graph can have multiple connected components. The real challenge is to place a limited number of potential facility centers a priori such that the expected coverage after an event of disaster is maximized. See [9, 13-15, 19, 32] for further references on unreliable MCLP.
In this paper, we study the following model of the MCLP with edge failures. Let G = (V, E, w) be a vertex-weighted underlying graph of the MCLP. With each edge e ∈ E, we associate a survival probability p_e > 0, so that the edge e survives in the network with probability p_e. Under the assumption that edges fail independently, the input graph can be rendered into one of 2^m edge subgraphs called realizations, where m is the number of edges in the graph. Each realization has a non-zero probability of occurrence. Since the number of realizations is exponential and many of them occur with probability close to zero, Hassin et al. [26, 27] formulated a dependency model for edge failure in unreliable facility networks called linear reliability ordering (LRO). In the LRO model, for each pair of edges e ≠ e′ ∈ E, p(e) ≠ p(e′), and for any pair of edges e_i and e_j with p_{e_i} > p_{e_j}, Pr[e_j fails | e_i fails] = 1. More precisely, if an edge e fails then every edge e′ with p_{e′} < p_e surely fails. The LRO model is defined on graphs with distinct edge probabilities. It is clear that, in this model, we have exactly m + 1 edge subgraphs. We consider the LRO model with the relaxation that edges can have the same probability. If the probabilities of two edges are the same, then either both or neither will survive. In this case, the number of subgraph realizations is at most m + 1.
While in most articles dealing with maximum coverage problems the weights are assumed to be positive, there are situations when the weights can be negative. Such mixed-weight coverage problems are useful for modeling situations when some of the demand nodes are obnoxious and their inclusion in the coverage area may be detrimental [4,5]. Nodes with a negative demand are nodes we do not wish to cover; if a node has negative demand, then we wish to cover it as little as possible. For example, opening a new facility (grocery store) close to many positively weighted nodes (customers) seems an excellent opportunity, but the proximity of a big supermarket (a neighbor with negative weight) could decrease the expected profit.
Problem Statement. Let G = (V, E) be a vertex-weighted undirected graph with a weight function w : V → R and a probability function p : E → Q ∩ [0, 1], and let k be a positive integer. Assume that the edges are ordered by p in descending order, that is, p_1 > p_2 > ... > p_m. In the LRO model, let G_0, G_1, ..., G_m be the linear ordering of the realizations of G, where G_i is the realization in which exactly the i most reliable edges e_1, ..., e_i survive, occurring with probability P(G_i) for 0 ≤ i ≤ m. The value of P(G_i) can be written as P(G_i) = p_i − p_{i+1}, with the conventions p_0 = 1 and p_{m+1} = 0, so that the probabilities sum to one. The EXPECTED COVERAGE problem asks to find a k-sized vertex set S such that the expected coverage by S on the distribution {G_i | 0 ≤ i ≤ m} is maximized. We use the expected coverage function C defined by Narayanaswamy et al. [33]. Given a pair of sets S, T ⊆ V, the expected coverage of T by S is C(T, S) = Σ_{i=0}^{m} P(G_i) · w(N_{G_i}[S] ∩ T). Further, if S or T is a singleton set, we just write the element of the set instead of the set notation. An instance of the optimization version of the EXPECTED COVERAGE problem is denoted by the tuple (G, w, p, k). The decision version of the problem is defined as follows.
An instance of the decision version of the EXPECTED COVERAGE problem is denoted by the tuple (G, w, p, k, t).
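To make the distribution concrete, here is a small Python sketch that enumerates the m + 1 LRO realizations and evaluates the expected coverage of a candidate set S. It follows our reading of the reconstructed formulas above (P(G_i) = p_i − p_{i+1}); the helper names are our own.

```python
import networkx as nx

def expected_coverage(G, weights, probs, S, T):
    """Expected weight of vertices of T covered by N[S] under the LRO model.

    probs: dict edge -> survival probability (assumed pairwise distinct).
    """
    edges = sorted(probs, key=probs.get, reverse=True)  # p_1 > p_2 > ... > p_m
    p = [1.0] + [probs[e] for e in edges] + [0.0]       # p_0 = 1, p_{m+1} = 0
    total = 0.0
    for i in range(len(edges) + 1):
        P_i = p[i] - p[i + 1]          # probability of realization G_i
        if P_i == 0.0:
            continue
        Gi = nx.Graph()
        Gi.add_nodes_from(G.nodes)
        Gi.add_edges_from(edges[:i])   # exactly the i most reliable edges survive
        covered = set(S)
        for v in S:
            covered.update(Gi.neighbors(v))
        total += P_i * sum(weights[u] for u in covered & set(T))
    return total
```

Note that only m + 1 subgraphs are ever built, in contrast with the 2^m realizations of the independent-failure model.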
Related Works
The facility location problem can take many forms, depending on the objective function. In most facility location problems, the objective function focuses on comforting the clients. For example, in the k-center problem, the goal is to minimize the maximum distance of each client from its nearest facility center [7]. The facility location problem has received a good deal of attention from the parameterized perspective [1,6,20,21].
The MCLP with edge failure has been studied under various constraints. Eiselt et al. [19] considered the problem with a single edge failure. In this case, exactly one edge fails after a disaster, and the objective is to place k facility centers such that the expected weight of non-covered vertices is minimized. If the number of facility centers is k = 1 and the facility center can cover all the vertices in its connected component, then the problem is studied as the MOST RELIABLE SOURCE (MRS) problem. In this problem, the edges fail independently. The MRS problem has received a good deal of attention in the literature [9, 13, 15, 32]. Hassin et al. [26] studied the problem where the edge failure follows the LRO failure model; the problem is referred to as the MAX-EXP-COVER-R problem. An additional input, the radius of coverage R, is also given, such that any facility center can cover a vertex at distance at most R. The MAX-EXP-COVER-R problem is shown to be NP-hard even when R = 1. When R = ∞ (it is sufficient to say R > n), the problem is polynomial-time solvable [26].
In the BUDGETED DOMINATING SET problem, we are given a graph G and a positive integer k, and are asked to find a set S of at most k vertices maximizing the value w(N[S]) in G. The set-theoretic version of the BDS problem is studied as budgeted maximum coverage in [28,29]. The EXPECTED COVERAGE problem can be viewed as a generalization of the BDS problem: when the probability on all edges is 1, the two problems coincide. The BDS problem generalizes the PARTIAL DOMINATING SET (PDS) problem, where one seeks a set of at most k vertices dominating at least t vertices [31]. Of course, all these problems also generalize the fundamental DOMINATING SET problem, where the task is to find a set of at most k vertices dominating all remaining vertices of the graph.
The DOMINATING SET problem parameterized by k (solution size) on general graphs is W[2]-hard [16]. However, on planar graphs it is FPT [25]. Moreover, on planar, and more generally on H-minor-free graphs, it is solvable in sub-exponential time [3,11]. It also admits a linear kernel on planar graphs, H-minor-free graphs and graphs of bounded expansion [2,18,23,24,34]. A sub-exponential parameterized algorithm for the PDS problem on planar graphs, and more generally, apex-minor-free graphs, was given in [22].
On graphs of bounded treewidth, the classical dynamic programming, see e.g. [10], shows that the DOMINATING SET problem is FPT parameterized by the treewidth of the input graph. The FPT algorithm for the DOMINATING SET problem can be adapted to solve the BDS problem in FPT time; further, this modified algorithm also works when the BDS problem has mixed vertex weights. Narayanaswamy et al. [33] gave an FPT algorithm parameterized by the treewidth of the input graph to solve the EXPECTED COVERAGE problem with non-negative weights.
Our Results
Since the EXPECTED COVERAGE problem (with mixed-weights) generalizes both the BDS problem and the EXPECTED COVERAGE problem with non-negative weights, it is also natural to ask what algorithmic results for these problems can be extended to the EXPECTED COVERAGE problem. We obtain the following results.
1. For the parameter pathwidth, we show that the EXPECTED COVERAGE problem is W[1]-hard. Moreover, the problem remains W[1]-hard for any combination of the parameters pathwidth pw, solution size k and value of coverage t. This is interesting because, as was shown by Narayanaswamy et al. [33], the variant of the problem with only non-negative weights is FPT parameterized by the treewidth. Thus the results for non-negative weights cannot be extended to the mixed-weight model (unless FPT = W[1]).
2.
We complement the lower bound by the proof that EXPECTED COVERAGE is FPT parameterized by the treewidth and the maximum vertex degree. We give an algorithm that solves the problem in time 2^{O(tw log Δ)} n^{O(1)}, where tw is the treewidth, Δ is the maximum vertex degree, and n the number of vertices of the input graph. In particular, since Δ ≤ n, this means the problem is solvable in time n^{O(tw)}, that is, it is in XP parameterized by treewidth.
Preliminaries
We recall in this section some notation and definitions used throughout this article. Other than this, we follow standard graph-theoretic notation based on Diestel [12].
A tree decomposition of an undirected graph G = (V , E) is a pair (T, X) where T is a tree whose vertices are called nodes and X = {X i ⊆ V | i ∈ V (T)} such that 1. for each vertex u ∈ V , there is a node i ∈ V (T) such that u ∈ X i , 2. for each edge uv ∈ E, there is a node i ∈ V (T) such that u, v ∈ X i , and 3. for each vertex v ∈ V , the set {i ∈ V (T) | v ∈ X i } forms a subtree of T .
The width of a tree decomposition (T, X) equals max_{i∈V(T)} |X_i| − 1. The treewidth of a graph G is the minimum width over all tree decompositions of G. For a node i ∈ V(T), let T_i be the subtree rooted at i and X_i^+ = ∪_{j∈V(T_i)} X_j. The graph induced by the vertices X_i^+ is G[X_i^+], and it is denoted by G_i. A tree decomposition (T, X) is said to be a path decomposition if T is a path. The pathwidth of a graph G is the minimum width over all possible path decompositions of G. Let pw(G) and tw(G) denote the pathwidth and treewidth of the graph G, respectively.
We give a dynamic programming algorithm working on a so-called nice tree decomposition of the input graph G. A tree decomposition (T, X) is a nice tree decomposition if T is rooted at a node r with X_r = ∅ and every node in T is either an insert node, forget node, join node or leaf node. Thereby, a node i ∈ V(T) is an insert node if i has exactly one child j such that X_i = X_j ∪ {v} for some v ∉ X_j; it is a forget node if i has exactly one child j such that X_i = X_j \ {v} for some v ∈ X_j; it is a join node if i has exactly two children j and h such that X_i = X_j = X_h; and it is a leaf node if X_i = ∅. Given a tree decomposition of width tw, a nice tree decomposition of width tw can be obtained in linear time [30]. We will also use the parameters vertex cover, feedback vertex set and distance to star forest of a graph G. For a graph G, by vc(G) we mean the size of a minimum vertex set whose deletion leaves the graph edgeless, and by fvs(G) we mean the size of a minimum vertex set whose deletion leaves the graph acyclic. In this article, we consider the trivial graph structure to be a forest that consists of only star trees. Then, for a graph G, by dsf(G) we mean the size of a minimum vertex set whose deletion leaves the graph a disjoint union of star trees (a forest of stars). For all these structural parameters, we will omit G if it is clear from the context.
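For experimentation, a (possibly non-optimal) tree decomposition can be obtained with standard tooling. A possible sketch using NetworkX's approximation routine (our choice of library, not one used by the paper) is:

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

G = nx.petersen_graph()
# decomp is a tree whose nodes are bags (frozensets of vertices of G)
width, decomp = treewidth_min_degree(G)
print("width of the computed decomposition:", width)
for bag in decomp.nodes:
    print(sorted(bag))
# A nice tree decomposition of the same width can then be derived in linear time [30].
```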
We refer to the recent books of Cygan et al. [10] and Downey and Fellows [17] for detailed introductions to parameterized complexity.
Parameterized Intractability: The EXPECTED COVERAGE problem is W[1]-hard for the Parameter Pathwidth
In this section, we show that the EXPECTED COVERAGE problem is W[1]-hard parameterized by the pathwidth. We reduce from the MULTI-COLORED CLIQUE problem, which is defined as follows. Given a graph G whose vertex set is partitioned into k parts V_1, ..., V_k, and a positive integer k, the MULTI-COLORED CLIQUE problem seeks to decide whether there exists a k-clique with exactly one vertex from each part.
Construction
Given an instance (G, k) of the MULTI-COLORED CLIQUE problem, we construct an instance (H, w, p, k′, t′) of the EXPECTED COVERAGE problem, where k′ = k + (k choose 2) and t′ = k^4 + k^3 − k^2 + k. Now we describe the construction of the graph H, the weight function w : V(H) → Q and the probability function p : E(H) → Q ∩ [0, 1]. For each i ∈ [k], we construct a vertex-partition gadget H_i corresponding to the vertex partition V_i as follows. For each vertex u ∈ V_i, add a vertex a_u with w(a_u) = 0 to the gadget H_i. Add two more vertices t_i with w(t_i) = k^2, and q_i with w(q_i) = k^2, to the gadget H_i. For each vertex u ∈ V_i, the vertex a_u is made adjacent to the vertices t_i and q_i. For each edge e ∈ E(H_i), we define the survival probability p(e) = 1. Thus, the gadget H_i has |V_i| + 2 vertices and 2|V_i| edges.
For each 1 ≤ i < j ≤ k, we construct an edge-partition gadget H_{i,j} corresponding to the edge partition E_{i,j} as follows. For each edge e ∈ E_{i,j}, add a vertex a_e with w(a_e) = 0 to the gadget H_{i,j}. Add two more vertices t_{i,j} and q_{i,j} with w(t_{i,j}) = w(q_{i,j}) = k^2 to the gadget H_{i,j}. For each edge e ∈ E_{i,j}, the vertex a_e is made adjacent to the vertices t_{i,j} and q_{i,j}. For each edge e ∈ E(H_{i,j}), we define the survival probability p(e) = 1. Thus, the gadget H_{i,j} has |E_{i,j}| + 2 vertices and 2|E_{i,j}| edges.
Next, we introduce connector vertices to connect the edge-partition gadgets and vertex-partition gadgets.
To establish the edges between the gadgets and the connector vertices, we define a function z : V → Q ∩ [0, 1] such that for any two vertices x, y ∈ V with x ≠ y, z(x) ≠ z(y). For each i ∈ [k], the gadget H_i is connected to the set R as follows. For each vertex u ∈ V_i and for each j ∈ [k] with j ≠ i, - if i < j then the vertex a_u ∈ H_i is made adjacent to the vertices s^i_{i,j} and r^i_{i,j} with survival probabilities z(u) and 1 − z(u), respectively, and - if j < i then the vertex a_u ∈ H_i is made adjacent to the vertices s^i_{j,i} and r^i_{j,i} with survival probabilities z(u) and 1 − z(u), respectively.
Analogously, for each 1 ≤ i < j ≤ k and each edge e = xy ∈ E_{i,j} with x ∈ V_i and y ∈ V_j, the vertex a_e ∈ H_{i,j} is made adjacent to the vertices s^i_{i,j}, r^i_{i,j}, s^j_{i,j} and r^j_{i,j} with survival probabilities z(x), 1 − z(x), z(y) and 1 − z(y), respectively. An illustration of a vertex-partition gadget and an edge-partition gadget connected to the connector vertices is given in Fig. 1. For clarity, we refer to the vertices a_u and a_e in V(H), for u ∈ V and e ∈ E, as selection vertices. Thus, the graph H is constructed with N = n + m + 3k^2 − k vertices and M = 2kn + 6m edges.
Lemma 1 For each i ∈ [k], the pathwidth of the gadget H_i is two.
Proof We observe that the removal of the vertex t_i from H_i results in a star tree. It is known that the pathwidth of a star tree is one. Let (T, X) be a path decomposition of H_i − t_i of width one; adding the vertex t_i to every bag of (T, X) gives a path decomposition of H_i of width two. Similarly, for each 1 ≤ i < j ≤ k, pw(H_{i,j}) = 2. We bound some structural properties of the graph H in the following lemma.
Lemma 2 Some structural properties of the graph H are as follows: (a) pw(H) ≤ 4(k choose 2) + 2; (b) vc(H) ≤ 3k^2 − k; (c) fvs(H) ≤ 5(k choose 2) + k; (d) dsf(H) ≤ 5(k choose 2) + k. We prove each of the structural bounds as follows.
(a) If we remove R from the graph H, then the resulting graph is a collection of disjoint vertex-partition gadgets and edge-partition gadgets. From Lemma 1, the pathwidth of a gadget is two. Let (T, X) be a path decomposition of the graph H − R with pathwidth two. Then adding the vertex set R to all bags of (T, X) gives a path decomposition of the graph H of width at most |R| + 2 = 4(k choose 2) + 2. (b) If we remove the set T ∪ Q ∪ R from the graph H, then the resulting graph is edgeless. Thus, the set T ∪ Q ∪ R is a vertex cover of H. Therefore, vc(H) ≤ |T| + |Q| + |R| = 3k^2 − k. (c) If we remove the set T ∪ R from the graph H, then the resulting graph is a forest. Thus, the set T ∪ R is a feedback vertex set of H. Therefore, fvs(H) ≤ |T| + |R| = 5(k choose 2) + k. (d) If we remove the set T ∪ R from the graph H, then the resulting graph is a forest that consists of only star trees. Therefore, dsf(H) ≤ |T| + |R| = 5(k choose 2) + k.
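The counting behind these bounds is quick to verify. Using the sizes |T| = |Q| = k + (k choose 2) and |R| = 4(k choose 2) that the construction yields, a sketch of the arithmetic is:

```latex
|T| + |Q| + |R| \;=\; 2\Bigl(k + \binom{k}{2}\Bigr) + 4\binom{k}{2}
\;=\; 2k + 6\cdot\frac{k(k-1)}{2} \;=\; 3k^{2} - k,
\qquad
|T| + |R| \;=\; k + 5\binom{k}{2}.
```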
Properties of a feasible solution for the instance (H, w, p, k′, t′) of the EXPECTED COVERAGE problem
Observe that any vertex in H can achieve an expected coverage of value at most 2k^2. In particular, for each u ∈ V (or e ∈ E), the selection vertex a_u (or a_e) can achieve an expected coverage of value at most 2k^2. If a vertex of V(H) is not a selection vertex, then it can achieve an expected coverage of value at most k^2. In the following lemma, we show that a feasible solution S consists of only selection vertices.
Lemma 3 Every vertex in the set S is a selection vertex.
Proof We prove this by contradiction. Assume that there exists a vertex u ∈ S that is not a selection vertex. We know that C(V(H), u) ≤ k^2. Then the expected coverage by the set S is bounded as C(V(H), S) ≤ (k′ − 1) · 2k^2 + k^2 = k^4 + k^3 − k^2 < t′. This contradicts the feasibility of the set S. Therefore, every vertex in the set S is a selection vertex.
Then, we show that the set S has a non-empty intersection with each gadget in the graph H .
Lemma 4 For each gadget of H, the set S contains exactly one vertex; that is, for each i ∈ [k] there is exactly one vertex of S in H_i, and for each 1 ≤ i < j ≤ k there is exactly one vertex of S in H_{i,j}.
Proof By construction of the graph H, the vertex-partition gadgets and edge-partition gadgets are disjoint and connected through connector vertices. By contradiction, assume that there exists a gadget containing no vertex of S. Since there are (k choose 2) + k gadgets and |S| = k′, at least one gadget then contains two vertices of S. For any gadget, the expected coverage contributed by the vertices in the gadget is at most 2k^2, even if the gadget contains more than one vertex of S. Then we have C(V(H), S) ≤ (k′ − 1) · 2k^2 < t′. This contradicts the feasibility of the set S. Therefore, the set S has exactly one selection vertex in each gadget.
Lemmas 3 and 4 together state that for each i ∈ [k], there exists a vertex v ∈ V_i such that a_v ∈ S, and for each 1 ≤ i < j ≤ k, there exists an edge e ∈ E_{i,j} such that a_e ∈ S.
Lemma 5 For each 1 ≤ i < j ≤ k, let a_u and a_xy be the selection vertices in the set S for some u, x ∈ V_i and y ∈ V_j. Then the expected coverage of the vertices {s^i_{i,j}, r^i_{i,j}} by {a_u, a_xy} equals −1 if u = x, and is strictly smaller than −1 otherwise.
To maximize the expected coverage, we need the coverage of r^i_{i,j} and s^i_{i,j} by the pair of vertices a_u and a_xy to be as large as possible. This implies the following corollary of Lemma 5.
Corollary 1 For each 1 ≤ i < j ≤ k, the pair {s^i_{i,j}, r^i_{i,j}} contributes exactly −1 to the expected coverage if and only if a_u ∈ S and a_{uy} ∈ S for some u ∈ V_i and y ∈ V_j.
Equivalence
Now we show the equivalence of the two problems. More precisely, the graph G has a k-clique if and only if H has a k′-sized vertex set that achieves an expected coverage of value at least t′.
Lemma 6 If (G, k) is a YES-instance of the MULTI-COLORED CLIQUE problem, then (H, w, p, k′, t′) is a YES-instance of the EXPECTED COVERAGE problem.
Proof Let K be a k-clique of G with exactly one vertex from each part, and let S consist of the selection vertices a_u for u ∈ K together with the selection vertices a_e for the edges e of the clique. Each gadget then contributes an expected coverage of 2k^2, and we apply Corollary 1 in the second step to replace the exact value of the expected coverage of the connector pairs. Thus, we obtain C(V(H), S) = k′(2k^2) − 2(k choose 2) = k^4 + k^3 − k^2 + k = t′. Therefore, the set S is a feasible solution to the instance (H, w, p, k′, t′) of the EXPECTED COVERAGE problem.
Now we prove the other direction of equivalence.
Lemma 7 If (H, w, p, k′, t′) is a YES-instance of the EXPECTED COVERAGE problem then (G, k) is a YES-instance of the MULTI-COLORED CLIQUE problem.
Proof Let S be a feasible solution to the instance (H, w, p, k′, t′) of the EXPECTED COVERAGE problem. The feasibility of S ensures that every gadget has a selection vertex from the set S. More specifically, each gadget contributes an expected coverage of 2k^2. Then C(V(H) \ R, S) = k′(2k^2) = k^4 + k^3, since S is a feasible solution.
There are 2(k choose 2) pairs of s^i_{i,j} and r^i_{i,j} connector vertices in H. By Lemma 5, each pair can contribute at most −1. Then the value k − k^2 can be achieved only when each pair contributes exactly −1. From Corollary 1, for each 1 ≤ i < j ≤ k, the pair r^i_{i,j} and s^i_{i,j} together contribute exactly −1 when a_u ∈ S and a_{uy} ∈ S for some u ∈ V_i and y ∈ V_j. By the construction of H, there is then an edge between the vertices u and y in G. Let K = {u ∈ V | a_u ∈ S} be the k-sized vertex set selected from V. Every pair of distinct vertices in K has an edge between them in G, and thus K forms a k-clique in G.
Thus, we state the following theorem using Lemmas 2, 6 and 7.
Theorem 2 The EXPECTED COVERAGE problem is W[1]-hard for the parameter pathwidth.
Proof Given an instance (G, k) of the MULTI-COLORED CLIQUE problem, the instance (H, w, p, k′, t′) is constructed in polynomial time, where k′ = k + (k choose 2) and t′ = k^4 + k^3 − k^2 + k. From Lemma 2, we know that pw(H) is a quadratic function of k. Finally, from Lemmas 6 and 7 it follows that the instance (H, w, p, k′, t′) of the EXPECTED COVERAGE problem output by the reduction is equivalent to the instance (G, k) of the MULTI-COLORED CLIQUE problem that was input to the reduction. Since the MULTI-COLORED CLIQUE problem is W[1]-hard for the parameter k, it follows that the EXPECTED COVERAGE problem is W[1]-hard for the parameter pathwidth of the input graph.
Moreover, the parameterized reduction preserves the parameters k′, t′ and the pathwidth of the constructed graph as functions of k: k′ = k + (k choose 2), t′ = k^4 + k^3 − k^2 + k, and the pathwidth of the graph H is O(k^2). Further, observe that the number of negative-demand vertices in the reduced graph is 4(k choose 2) = O(k^2). Thus, we conclude the section with the following corollary.
Corollary 2 The EXPECTED COVERAGE problem is W[1]-hard for any combination of the parameters pathwidth, solution size k′ and coverage value t′, even on instances with O(k^2) negative-demand vertices.
FPT Algorithm for the EXPECTED COVERAGE problem Parameterized by Treewidth on Bounded Degree Graphs
While, as we have seen, the EXPECTED COVERAGE problem is W[1]-hard for the parameter pathwidth of the input graph, we complement the lower bound with an FPT algorithm for the combined treewidth and maximum vertex degree parameters.
Let (G, w, p, k) be an input to the optimization version of the EXPECTED COVERAGE problem. Let (T, X) be a nice tree decomposition of G with treewidth tw. Narayanaswamy et al. [33] introduced the notion of a best neighbor to solve the EXPECTED COVERAGE problem with non-negative weights on bounded-treewidth graphs. Consider a set S ⊆ V of size k. In the LRO model, for each vertex u with u ∈ N[S], there exists a unique vertex in S called the best neighbor of u in S, denoted by bn(u, S), such that C(u, S) = C(u, bn(u, S)).
Definition 1 (Narayanaswamy et al. [33]) Given a vertex u and a set S with u ∈ N[S], by bn(u, S) we denote the best neighbor of u in S, defined as bn(u, S) = arg max_{v ∈ N[u] ∩ S} C(u, v). If u ∉ N[S] then bn(u, S) is undefined.
We use the fact that the graph G has bounded maximum degree and define a structural ordering called neighborhood indexing on the neighborhood of each vertex. These LRO-specific notions, "best neighbor" and "neighborhood indexing", help us to solve the problem efficiently over the tree decomposition.
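Continuing the earlier sketch from the Problem Statement, the best neighbor of a vertex can be computed directly from the definition above (again with a hypothetical helper of our own, reusing expected_coverage from the earlier code block):

```python
def best_neighbor(G, weights, probs, u, S):
    """Return bn(u, S): the vertex of N[u] ∩ S maximizing C(u, v), or None."""
    candidates = (set(G.neighbors(u)) | {u}) & set(S)
    if not candidates:
        return None  # u is not in N[S]; bn(u, S) is undefined
    return max(candidates,
               key=lambda v: expected_coverage(G, weights, probs, [v], [u]))
```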
Solution structure
For each node i in T, we compute two tables Sol_i and Val_i. The rows of both tables are indexed by 3-tuples which we refer to as states. Let S_i denote the set of all states associated with node i. For a state s ∈ S_i, the DP formulation gives a recursive definition of the values Sol_i[s] and Val_i[s]. Sol_i[s] is a set S ⊆ X_i^+, an optimal solution for the instance specified by the state s.
Val_i[s] = C(X_i^+ \ α^{-1}(0), Sol_i[s]), where α is an element of the state s defined below. A state s ∈ S_i is a tuple (ℓ, α, β), where - 0 ≤ ℓ ≤ k is an integer specifying the size of Sol_i[s], - α : X_i → {0, 1} is an indicator function for the vertices in X_i, specifying whether a vertex in X_i is considered for the coverage, and - β : X_i → {−1, 0, 1, ..., Δ} is a function specifying the best-neighbor constraint on each vertex u ∈ X_i: β(u) = 0 requires u itself to be in the solution, β(u) = j ≥ 1 requires the j-th neighbor of u (in the neighborhood indexing) to be the best neighbor of u, and β(u) = −1 requires that u has no best neighbor in the partial solution.
An instance of the EXPECTED COVERAGE problem at the state s is (G_i, w_i, p_i, ℓ), where the functions w_i and p_i are obtained by restricting w and p to X_i^+ and E(G_i), respectively. Additionally, the solution should satisfy the constraints specified by the state s. A state s is said to be invalid if there is no feasible solution that satisfies the constraints specified by s. We define one more notion of validity of states, called "locally valid". A state s is said to be locally valid if the following properties are satisfied.
If a state s is not locally valid, then s is an invalid state.
State induced at a node in T: For any set S ⊆ X_i^+ of size at most k and a function α : X_i → {0, 1}, the pair (S, α) induces a state s = (|S|, α, β) at the node i, where β records for each u ∈ X_i the best-neighbor constraint realized by S (β(u) = 0 if u ∈ S, β(u) = −1 if u has no best neighbor in S, and β(u) equal to the neighborhood index of bn(u, S) otherwise).
Leaf Node (Lemma 8): Let i be a leaf node in T. Then X_i = ∅, and we set Sol_i[s] = ∅ and Val_i[s] = 0 for the unique valid state s = (0, ∅, ∅).
Proof The correctness follows from the fact that the graph G_i is a null graph. Thus, for a null graph and a valid state s, the empty set with coverage value zero is the only optimal solution.
Introduce Node Let i be an introduce node with child j such that X_i = X_j ∪ {v} for some v ∉ X_j. We define the state s_j, and define Sol_i[s] in terms of Sol_j[s_j]. To define the state s_j, we consider two cases depending on the value of β(v).
Case β(v) ≠ 0. In this case, the desired solution for the state s must not contain the vertex v. We define the functions α_j : X_j → {0, 1} and β_j : X_j → {−1, 0, 1, ..., Δ} from α and β by excluding the vertex v from the domains of both functions. We define s_j = (ℓ, α_j, β_j). If the state s_j at the node j is invalid, then the state s at the node i is invalid. Therefore, we assume that the state s_j is valid. Then the solution for the state s is defined by Sol_i[s] = Sol_j[s_j], with Val_i[s] obtained from Val_j[s_j].
Case β(v) = 0. In this case, the desired solution for the state s must contain the vertex v, and we have α(v) = 1 since s is locally valid. The key idea is to figure out which vertices will have v as their best neighbor in the desired solution; we then find a suitable optimal state at node j and complete the bottom-up computation. Let D_v be the set of vertices that may have v as their best neighbor in the desired solution. We enumerate all possible subsets of D_v to find such a set: for each D ⊆ D_v, we define a family F_D of admissible best-neighbor assignments, and for a set D ⊆ D_v and a function f ∈ F_D, we define a corresponding candidate state at the node j.
An optimal set D ⊆ D_v and f ∈ F_D can be computed by iterating over all choices and taking one that maximizes the value at the node j. If no such valid state is found at the node j, then we mark s at the node i as invalid as well.
Then the solution for the state s is defined by Sol_i[s] = Sol_j[s_j] ∪ {v}, with Val_i[s] obtained from Val_j[s_j]. The time to compute the state s_j depends on the size of F_D. Therefore, s_j can be computed in O*(Δ^tw) time.
Lemma 9
Let i be an introduce node in T, and let s and s_j be as defined above. If S is an optimal solution for the state s, then there exists a state ŝ at the node j such that the set S \ {v} is an optimal solution for the state ŝ, and Val_j[s_j] = Val_j[ŝ].
Proof Since S is an optimal solution to the state s, the pair (S, α) induces the state s at the node i. We consider two cases based on whether v ∈ S or not. First consider the case v ∈ S. In this case, observe that β(v) = 0. Consider the set D = {u ∈ X_i | bn(u, S) = v} and the function α^D_j as defined above for the case β(v) = 0.
Then the expected coverage C(X_i^+ \ α^{-1}(0), S) can be written as a sum in which the term C(D, v) + w(v) appears; note that this term is independent of the set S. Thus, the set S \ {v} is an optimal solution for the state ŝ at the node j.
Next we consider the case v ∉ S. In this case, observe that β(v) ≠ 0 and S \ {v} = S. Consider α_j as defined above for the case β(v) ≠ 0. Let ŝ = (ℓ, α_j, β̂_j) ∈ S_j be the state induced by the set S at the node j. Observe that the functions β_j and β̂_j are the same; thus ŝ and s_j are the same. Then the expected coverage C(X_i^+ \ α^{-1}(0), S) can be written accordingly. Thus, the set S is an optimal solution for the state ŝ at the node j.
Forget Node Let i be a forget node with child j such that X_i = X_j \ {v} for some v ∈ X_j. Since i is a forget node, N(v) ⊆ X_i^+. We define the state s_j = (ℓ, α_j, β_j) and define Sol_i[s] in terms of Sol_j[s_j]. The state s does not impose any constraint on v, since v ∉ X_i. So we try all possible values of α and β for v and find an optimal one. We define α_j : X_j → {0, 1} by extending α to v, and we consider all possible values of β_j(v) to define the state s_j. For each z ∈ {0, 1, ..., deg(v)}, we define s^z_j = (ℓ, α_j, β^z_j). If for each z ∈ {0, 1, ..., deg(v)} the state s^z_j at the node j is invalid, then the state s at the node i is invalid. Therefore, we assume that there exists a z ∈ {0, 1, ..., deg(v)} such that the state s^z_j at the node j is valid. Let z′ be such a value maximizing Val_j[s^z_j], and define s_j = s^{z′}_j = (ℓ, α_j, β^{z′}_j). Then the solution for the state s is defined by Sol_i[s] = Sol_j[s_j], with Val_i[s] obtained from Val_j[s_j]. The state s_j can be computed in O(Δ) time.
Lemma 10
Let i be a forget node in T, and let s and s_j be as defined above. If S is an optimal solution for the state s, then there exists a state ŝ at the node j such that the set S is an optimal solution for the state ŝ, and Val_j[s_j] = Val_j[ŝ].
Proof Since S is an optimal solution for the state s, the pair (S, α) induces the state s at the node i. Consider the function α_j as defined above. Let ŝ = (ℓ, α_j, β̂_j) be the state induced by the pair (S, α_j) at the node j. Let z = β̂_j(v). Note that the functions β^z_j and β̂_j are the same, and thus the states ŝ and s^z_j are the same. From (6) and the optimality of the set S for the state s, Val_j[ŝ] = Val_j[s_j]. Then the expected coverage C(X_i^+ \ α^{-1}(0), S) can be written accordingly. Thus, the set S is an optimal solution for the state ŝ at the node j.
Join Node Let i be a join node with children j and h such that X_i = X_j = X_h. We define two states s_j = (ℓ_j, α_j, β_j) and s_h = (ℓ_h, α_h, β_h) at the nodes j and h, respectively, and define Sol_i[s] in terms of Sol_j[s_j] and Sol_h[s_h]. Among the vertices in the solution to be computed, |β^{-1}(0)| vertices are taken from X_i, and ℓ − |β^{-1}(0)| vertices are chosen from X_i^+ \ X_i. Since X_i^+ \ X_i is the disjoint union of the sets X_j^+ \ X_j and X_h^+ \ X_h, we consider a parameter z to partition the value ℓ − |β^{-1}(0)|. For each 0 ≤ z ≤ ℓ − |β^{-1}(0)|, let ℓ^z_j = |β^{-1}(0)| + z and ℓ^z_h = ℓ − z; we consider the states at the nodes j and h with budgets ℓ^z_j and ℓ^z_h, respectively. Let D be the set of bag vertices whose best neighbor lies outside the bag; the set D is partitioned into D_j and D_h. We define α_j : X_j → {0, 1} such that for each u ∈ X_j, α_j(u) = 0 if u ∈ D_h and α(u) = 1, and α_j(u) = α(u) otherwise. Similarly, we define α_h : X_h → {0, 1} such that for each u ∈ X_h, α_h(u) = 0 if u ∈ D_j and α(u) = 1, and α_h(u) = α(u) otherwise.
For each 0 ≤ z ≤ ℓ − |β^{-1}(0)|, a ∈ A and b ∈ B, let s^{z,a}_j = (ℓ^z_j, α_j, β^a_j) and s^{z,b}_h = (ℓ^z_h, α_h, β^b_h). Define s_j = s^{z′,a′}_j and s_h = s^{z′,b′}_h for a choice (z′, a′, b′) maximizing the combined value. Then the solution for the state s is defined as Sol_i[s] = Sol_j[s_j] ∪ Sol_h[s_h], with Val_i[s] obtained from Val_j[s_j] + Val_h[s_h] minus a correction term. The subtracted term in (11) is the over-counting term of the combined solution that arises from the states s_j and s_h. The time to compute the states s_j and s_h depends on the sizes of the sets A and B. Therefore, the states s_j and s_h can be computed in |A| · |B| = O(Δ^tw) time.
Lemma 11
Let i be a join node in T, and let s, s_j and s_h be as defined above. Let S be an optimal solution for the state s, and let S_j = S ∩ X_j^+ and S_h = S ∩ X_h^+. Then there exist two states ŝ and s̃ at the nodes j and h, respectively, such that the sets S_j and S_h are optimal solutions for the states ŝ and s̃, respectively. Further, Val_j[ŝ] = Val_j[s_j] and Val_h[s̃] = Val_h[s_h]. Proof Since S is an optimal solution for the state s, the pair (S, α) induces the state s at the node i. Consider the set D = D_j ∪ D_h, and the functions α_j and α_h as defined above. Let ŝ = (ℓ̂_j, α_j, β̂_j) be the state at the node j induced by the pair (S_j, α_j), and let s̃ = (ℓ̃_h, α_h, β̃_h) be the state at the node h induced by the pair (S_h, α_h). Let z = ℓ̂_j − |β^{-1}(0)|. Define a : D_h → {−1, 0, 1, . . . , Δ} such that for each u ∈ D_h, a(u) = β̂_j(u). Then, define b : D_j → {−1, 0, 1, . . . , Δ} such that for each u ∈ D_j, b(u) = β̃_h(u). Note that the functions β̂_j and β̃_h are the same as β_j^a and β_h^b, respectively. Thus, the states ŝ and s̃ are the same as s_j^{z,a} and s_h^{z,b}, respectively. From (9), the expected coverage C(X_i^+ \ α^{-1}(0), S) can be written in terms of the coverages achieved by S_j and S_h. If either S_j or S_h were not optimal for the state ŝ or s̃, this would contradict the optimality of the solution S for the state s. Thus, the sets S_j and S_h are optimal solutions for the states ŝ and s̃, respectively.
Bottom-up evaluation: Correctness of the DP formulation
Correctness Invariant For a node i and a valid state s = (ℓ, α, β) at i, the recursive definition in Section 4.2 ensures that Sol_i[s] is an optimal solution for the state s and Val_i[s] = C(X_i^+ \ α^{-1}(0), Sol_i[s]).
Lemma 12
For each i ∈ V(T) and each valid state s ∈ S_i, the correctness invariant is maintained for Sol_i[s].
Proof The proof is by induction on the height of a node in T; the height of a node i in a rooted tree T is the distance to the farthest leaf in the subtree rooted at i. The base case is a leaf node i, of height zero, for which the claim follows from Lemma 8. Assume the claim is true for all nodes in T of height at most h − 1 ≥ 0; we prove that it then holds for a node of height h. Let i be a node of height h ≥ 1. Since i is not a leaf node, its children have height at most h − 1, so by the induction hypothesis the correctness invariant is maintained at all children of i. We now prove that the correctness invariant is maintained at the node i. Let S be an optimal solution for the state s; we show that Val_i[s] = C(X_i^+ \ α^{-1}(0), S). If i is an introduce node, then by Lemma 9 the set S \ {v} is an optimal solution for the state ŝ ∈ S_j; since j has height at most h − 1, the induction hypothesis gives that the correctness invariant is maintained for the state s at the node i. If i is a forget node, then by Lemma 10 the set S is an optimal solution for the state ŝ ∈ S_j, and the invariant at i follows in the same way. If i is a join node, then by Lemma 11 the sets S_j and S_h (as defined in Lemma 11) are optimal solutions for the states ŝ ∈ S_j and s̃ ∈ S_h, respectively; since j and h have height at most h − 1, the induction hypothesis again gives the invariant at the node i. This completes the proof.
We thus conclude the section with the following theorem.
Theorem 3
The EXPECTED COVERAGE problem can be solved optimally in time 2^{O(tw log Δ)} · n^{O(1)}.
Proof An optimal solution can be obtained from the state s = (k, ∅ → {0, 1}, ∅ → {−1, 0, 1, . . . , Δ}) at the root node r. That is, the set Sol_r[s] is an optimal solution to the input instance of the EXPECTED COVERAGE problem. The correctness of the table computation is proved in Lemma 12. Note that every node has a table of size (k + 1)(2Δ + 4)^{tw} and each entry can be updated in time O(Δ^{tw}). Given a tree decomposition of width tw, a nice tree decomposition of width tw with O(n · tw) nodes can be computed in polynomial time [30]. Therefore, given a graph G and a tree decomposition of G of width tw, the EXPECTED COVERAGE problem can be solved in time 2^{O(tw log Δ)} · n^{O(1)}.
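As a rough illustration of the table-size bound used in this proof (a sketch, not the authors' implementation), the states at a single bag can be enumerated directly; with a bag of tw vertices the count matches (k + 1)(2Δ + 4)^{tw}:

```python
from itertools import product

def enumerate_states(bag, k, max_deg):
    """Yield every DP state (l, alpha, beta) at a bag: l is a budget in
    {0,...,k}, alpha maps each bag vertex to {0,1}, and beta maps each
    bag vertex to {-1,0,...,max_deg}."""
    for l in range(k + 1):
        for alpha_vals in product((0, 1), repeat=len(bag)):
            for beta_vals in product(range(-1, max_deg + 1), repeat=len(bag)):
                yield l, dict(zip(bag, alpha_vals)), dict(zip(bag, beta_vals))

bag = ("u", "v", "w")
k, max_deg = 2, 3
states = list(enumerate_states(bag, k, max_deg))
# (k+1) * (2*(max_deg+2))^|bag| = (k+1) * (2*max_deg+4)^|bag| entries
assert len(states) == (k + 1) * (2 * max_deg + 4) ** len(bag)
```

Each bag vertex contributes 2 choices for α and Δ + 2 choices for β, i.e. 2Δ + 4 combined choices, which is where the base of the bound comes from.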
Conclusion
In this article, we considered the EXPECTED COVERAGE problem. We focused on structural parameterizations, since the EXPECTED COVERAGE problem is W[2]-hard when parameterized by the solution size k. In particular, we determined the parameterized complexity of the EXPECTED COVERAGE problem with respect to the well-known graph parameters treewidth tw, pathwidth pw, vertex cover vc, feedback vertex set fvs, and distance to star forest dsf. Further, we observed from our reduction that the EXPECTED COVERAGE problem is W[1]-hard when parameterized by the number of negative demand vertices. Finally, for the combined parameter treewidth plus maximum degree (tw + Δ), we gave an FPT algorithm for the EXPECTED COVERAGE problem. The remaining open questions on the structurally parameterized complexity of the EXPECTED COVERAGE problem concern tight fine-grained bounds. An FPT approximation scheme for any of the above structural parameters would also be a good way to complement the parameterized hardness results.
Funding Open access funding provided by University of Bergen (incl Haukeland University Hospital).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
A method for the experimental characterisation of novel drag-reducing materials for very low Earth orbits using the Satellite for Orbital Aerodynamics Research (SOAR) mission
The Satellite for Orbital Aerodynamics Research (SOAR) is a 3U CubeSat mission that aims to investigate the gas–surface interactions (GSIs) of different materials in the very low Earth orbit (VLEO) environment, i.e. below 450 km. Improving the understanding of these interactions is critical for the development of satellites that can operate sustainably at these lower orbital altitudes, with particular application to future Earth observation and communications missions. SOAR has been designed to characterise the aerodynamic coefficients of four different materials at different angles of incidence with respect to the flow and at different altitudes in the VLEO range. Two conventional, erosion-resistant materials (borosilicate glass and sputter-coated gold) have been selected to support the validation of the ground-based Rarefied Orbital Aerodynamics Research (ROAR) facility. Two further, novel materials have been selected for their potential to reduce the drag experienced in orbit whilst remaining resistant to the detrimental effects of atomic oxygen erosion in VLEO. In this paper, the uncertainty associated with the experimental method for determining the aerodynamic coefficients of the satellite, with different configurations of the test materials, is estimated from on-orbit data for different assumed gas–surface interaction properties. The presented results indicate that for decreasing surface accommodation coefficients the experimental uncertainty of the drag coefficient determination generally increases, a result of increased aerodynamic attitude perturbations. This effect is exacerbated by the high atmospheric density at low orbital altitude (i.e. 200 km), resulting in high experimental uncertainty. Co-rotated steerable-fin configurations are shown to provide generally lower experimental uncertainty than counter-rotated configurations, with the lowest uncertainties expected at mid-VLEO altitudes (∼300 km). For drag coefficient experiments, configurations with two fins oriented at 90° were found to allow the best differentiation between surfaces with different GSI performance. In comparison, the determination of the lift coefficient is found to improve as the altitude is reduced from 400 to 200 km; these experiments also show the best expected performance in determining the GSI properties of different materials. SOAR was deployed into an orbit of 421 km × 415 km at 51.6° inclination on 14 June 2021.
This orbit will naturally decay, allowing access to different altitudes over the lifetime of the mission. The results presented in this paper will be used to plan the experimental schedule for the mission and to maximise its scientific output.
Introduction
A reduction in the operational orbit altitude of spacecraft has been linked to a number of benefits to their design, particularly for Earth observation (EO) [1] and communications missions [2]. Interest in the use of very low Earth orbits (VLEO), those below 450 km in altitude, has correspondingly grown for such applications. However, critical challenges to the sustained operation of spacecraft in VLEO exist. Most importantly, the atmosphere is significantly denser in VLEO, and its density increases with reducing altitude. The aerodynamic drag experienced by spacecraft in VLEO is therefore increased, and the natural lifetime before de-orbit and demise is much shorter than at higher, traditional LEO altitudes. To increase the orbital lifetime, the experienced drag can be counteracted directly by thrust forces generated by a propulsion system, i.e. drag compensation. Drag mitigation can also be performed with the aim of reducing the magnitude of the drag experienced, principally by considering the design of the geometry of the spacecraft and the interaction of the external surfaces with the oncoming atmospheric flow [3].
At VLEO altitudes the atmospheric density is many orders of magnitude lower than at the ground, as shown in Fig. 1, and the flow is therefore highly rarefied. For such conditions, the Knudsen number Kn, defined as the ratio between the mean free path of the atmospheric particles and a characteristic length L, can be used to define the flow regime. For an orbiting spacecraft of typical size, the Knudsen number is large (Kn > 10) and free-molecular flow (FMF) conditions apply, effectively meaning that the interactions between gas particles and the spacecraft surfaces are more frequent and of greater significance than interactions between the gas particles themselves, including those reflected or re-emitted from the surfaces.
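As an illustrative calculation (with representative assumed values, not mission data), the Knudsen number and a coarse flow-regime classification can be computed as follows:

```python
def knudsen(mean_free_path_m: float, char_length_m: float) -> float:
    """Knudsen number Kn = lambda / L."""
    return mean_free_path_m / char_length_m

def flow_regime(kn: float) -> str:
    """Coarse classification; the Kn > 10 threshold for free-molecular
    flow follows the convention used in the text."""
    if kn > 10.0:
        return "free-molecular"
    if kn > 0.1:
        return "transitional"
    return "continuum"

# Representative (assumed) values: a mean free path of order tens of
# kilometres in VLEO, and a CubeSat characteristic length of ~0.3 m.
kn = knudsen(mean_free_path_m=50e3, char_length_m=0.3)
print(kn, flow_regime(kn))  # Kn >> 10 -> free-molecular
```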
Under such conditions, the aerodynamic forces experienced are determined by the exchange of momentum between the oncoming atmospheric flow and the external spacecraft surfaces. The nature of the interaction of the oncoming atmospheric particles with an incident surface, and their subsequent velocity and distribution following scattering, principally controls the momentum exchange. As a result, these gas–surface interactions (GSIs) are predominantly responsible for the aerodynamic forces experienced by surfaces in VLEO [6,7].
When considering the momentum exchange occurring on a given surface, two extreme modes of GSI are typically defined: specular reflection and diffuse re-emission. In the case of specular reflection, no thermal energy is transferred to the surface and the particle is elastically reflected from it; the momentum exchange therefore only occurs along the surface normal vector. Comparatively, for diffuse re-emission the incident particle thermally equilibrates with the surface and the subsequent re-emission has a probabilistic velocity and angular distribution based on the surface temperature. A broad range of different scattering dynamics can exist between these two extremes and may apply under different conditions. GSI behaviour in LEO is considered to be affected by a number of factors, including the composition, structure, roughness and contamination of the surface, and the temperature, composition, velocity, and incidence angle of the oncoming flow [8][9][10]. However, despite this range of parameters, historical analysis of spacecraft orbiting in LEO has suggested that diffuse re-emission characteristics are dominant, particularly for spacecraft surfaces below 200 km in altitude [11,12]. These effects have been attributed to the high concentration of atomic oxygen (AO; see Fig. 1) that adsorbs to and erodes the external spacecraft surfaces. The interactions with the roughened and contaminated surfaces mean that significant energy is transferred from the flow to the spacecraft, and largely diffuse and thermal particle re-emission is observed. As a result of this GSI behaviour, the magnitude of the drag force is large and the lift force production is comparatively small (an order of magnitude less than the drag) [13]. This GSI behaviour has been measured experimentally at an altitude of 225 km, showing almost completely diffuse re-emission and high accommodation [14].

Fig. 1 Representative variation of atmospheric density and composition with altitude in LEO, calculated using the NRLMSISE-00 model [4]. Solar activity and geomagnetic index defined as per ISO 14222:2013 [5]

The speed ratio s is also an important factor when considering spacecraft aerodynamics and can be defined as the ratio between the bulk velocity of the flow v_∞ and the most probable thermal velocity v_th of the gas particles assuming a Maxwellian velocity distribution, s = v_∞ / v_th with v_th = √(2 k_B T / m_m), where k_B is the Boltzmann constant, T is the atmospheric temperature, and m_m is the (mean) molecular mass.
When the speed ratio is high, the flow behaviour is dominated by the bulk velocity and is described as hyperthermal. Contrastingly, when the speed ratio is small, the flow is dominated by the random thermal motion of the constituent particles and is described as hypothermal. The speed ratio determines the extent to which atmospheric particles can interact with surfaces parallel to or facing away from the oncoming flow direction and therefore their possible impact on the experienced aerodynamic forces. This can be particularly important when considering slender bodies that may have significant surface area parallel to the nominal flight direction. The speed ratio is generally observed to decrease with increasing altitude due to rising atmospheric temperature and as lighter atmospheric species become more abundant. The assumption of hyperthermal flow is generally made for s > 5.
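As an illustrative calculation of the speed ratio (with representative assumed values rather than mission data):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def speed_ratio(v_bulk_ms: float, temp_K: float, mean_molec_mass_kg: float) -> float:
    """s = v_inf / v_th, with v_th = sqrt(2 k_B T / m_m) the most
    probable thermal speed of a Maxwellian velocity distribution."""
    v_th = math.sqrt(2.0 * K_B * temp_K / mean_molec_mass_kg)
    return v_bulk_ms / v_th

# Representative (assumed) VLEO values: orbital velocity ~7.8 km/s,
# T ~ 900 K, atomic-oxygen-dominated flow (mean molecular mass ~16 u).
s = speed_ratio(7800.0, 900.0, 16 * 1.66054e-27)
print(s, "hyperthermal" if s > 5 else "hypothermal/transition")
```

With these assumed values s is roughly 8, above the s > 5 hyperthermal threshold stated in the text.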
As indicated in Fig. 1, the atmospheric composition also varies significantly with altitude, and the transition between different regimes of atmospheric composition can critically affect GSI behaviour and the associated aerodynamic coefficients. However, despite the availability of numerous time-variant atmospheric models (e.g. NRLMSISE-00, JB2008, DTM2009 [15]), significant uncertainty remains regarding the true atmospheric conditions (principally density, composition, and temperature) due to the complexity of the thermospheric/ionospheric system and the lack of both spatially and temporally distributed measurements by in situ and remote sensing methods [16,17].
Above VLEO altitudes, the general reduction in density and AO concentration has been linked to a reduction in surface adsorption and therefore accommodation [18]. For satellites at or above 500 km altitude, the transition from AO predominance to helium has been linked to drag modelling errors, attributed to assumptions in typical aerodynamic coefficient modelling and the reduction in molecular mass and surface adsorption of AO with altitude [19]. Analysis of satellites at 800–1000 km by Harrison and Swinerd [20,21] has also indicated a reduction in surface accommodation and the presence of quasi-specular reflection behaviour. The use of novel materials that are resistant to the effects of AO erosion and can reduce surface accommodation has been proposed as a means to reduce the drag experienced in orbit, even at lower altitudes, for example in VLEO. The Satellite for Orbital Aerodynamics Research (SOAR) is a scientific 3U CubeSat that has been developed to provide validation data for extensive ground-based characterisation of GSI behaviour in VLEO and to perform in situ tests of candidate novel materials. SOAR was deployed on 14 June 2021 from the ISS into an initial orbit of 421 km × 415 km with an inclination of 51.6°. This orbit will naturally decay to allow experiments to be performed at different altitudes in VLEO during the mission lifetime.
Modelling of gas-surface interactions and aerodynamic coefficients
A number of different models have been developed to mathematically express the exchange of energy and momentum between the oncoming particles in the atmospheric flow and the spacecraft surfaces with which they interact, under different assumptions of the underlying physical processes and parameters of dependence. Such models can be used to calculate the dimensionless aerodynamic coefficients that relate the incident flow conditions to the forces experienced by a given surface. In these models, the level of surface accommodation is typically expressed by accommodation coefficients that quantify the change in energy between the incident and re-emitted particles. The most basic of these, the thermal or energy accommodation coefficient α_T, can be defined [8] as α_T = (E_i − E_r) / (E_i − E_s) = (T_i − T_r) / (T_i − T_w), where E_i and E_r are the kinetic energies of the particles incident on and re-emitted from the surface, respectively, and E_s is the energy that would be emitted from the surface (wall) if the particles had reached thermal equilibrium with it. Similarly, T_i, T_r, and T_w are the temperatures associated with the kinetic energies previously defined.
Alternatively, two separate accommodation coefficients can be used to describe the normal (σ_n) and tangential (σ_t) momentum exchange occurring at the surface. These accommodation coefficients can be defined analogously to Equation 2 using the normal (p) and tangential (τ) momentum fluxes, respectively: σ_n = (p_i − p_r) / (p_i − p_w) and σ_t = (τ_i − τ_r) / τ_i, noting that the tangential momentum of a fully accommodated re-emission is zero. A number of GSI models have been developed that are commonly applied to the analysis of spacecraft aerodynamics:

1. Maxwell's model assumes a linear combination of the two extreme scattering behaviours. A fraction of the incoming particles (defined by an accommodation coefficient) is reflected specularly with no accommodation, whilst the remaining fraction is re-emitted diffusely with complete thermal accommodation (a minimal sketch of this combination is given after this list).
2. Schamberg's model [22] allows consideration of quasi-specular re-emission using a conical pattern with a defined beam-width to describe the reflected particle distribution; however, only hyperthermal flow conditions are assumed. This model is often used with Goodman's model for the energy accommodation coefficient [23].
3. Sentman's model [24] is based on complete diffuse re-emission and accounts for the random thermal motion of the incoming particles. Moe et al. [25] introduced a thermal accommodation coefficient into this method, replacing the use of the reflected particle temperature to describe incomplete accommodation. This modification is also known as the Diffuse Reflection with Incomplete Accommodation (DRIA) model [10].
4. The Schaaf and Chambre model [26] utilises normal and tangential momentum accommodation coefficients to allow consideration of quasi-specular re-emission. Closed-form solutions for this model are given by Storch [27].
5. The Cercignani–Lampis–Lord (CLL) model [28] can reproduce diffuse and quasi-specular re-emission patterns using a normal energy accommodation coefficient and a tangential momentum accommodation coefficient. An analytical implementation of the CLL model can be based on the Schaaf and Chambre solutions [28].

Further in-depth review and comparison of popular models for use in modelling of spacecraft aerodynamics are provided by Mostaza-Prieto et al. [6] and Livadiotti et al. [7]. Various GSI models have been used to study the aerodynamics of past and existing satellite missions and to infer suitable values of the accommodation coefficients that match the observed behaviour. Such analyses are critical for the development and improvement of models for atmospheric density and thermospheric winds, which are often dependent on the aerodynamic characterisation of such satellites themselves.
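As context for these models, a minimal sketch of Maxwell's linear combination only is given below (the other models require fuller closed-form treatments). The hyperthermal limiting values used here, C_D = 4 for a specularly reflecting plate normal to the flow and C_D = 2 when the re-emission momentum of a diffuse surface is neglected, are textbook idealisations rather than values taken from this paper:

```python
def maxwell_cd_normal_plate(sigma: float,
                            cd_specular: float = 4.0,
                            cd_diffuse: float = 2.0) -> float:
    """Maxwell-model drag coefficient for a flat plate normal to a
    hyperthermal flow: a fraction sigma of incident particles is
    re-emitted diffusely, the remainder reflected specularly.

    cd_specular = 4 corresponds to the incident normal momentum being
    reversed; cd_diffuse = 2 corresponds to all incident momentum being
    absorbed with the re-emission momentum neglected.
    """
    return sigma * cd_diffuse + (1.0 - sigma) * cd_specular

for sigma in (0.0, 0.5, 1.0):
    print(sigma, maxwell_cd_normal_plate(sigma))  # 4.0, 3.0, 2.0
```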
Investigation of the Proton 2, Ariel 2, Explorer 6 (paddlewheel-type) and S3-1 satellites by Moe et al. [8,11] showed that below 200 km accommodation was almost complete and associated with diffuse re-emission behaviour. However, at higher altitudes, and in elliptical orbits associated with higher perigee velocity (and therefore kinetic energy), the accommodation was shown to decrease. Further analysis of a number of spherical satellites produced similar results, whilst also indicating that accommodation falls more quickly with increasing altitude at solar minimum conditions due to the shrinking thermosphere and reduced AO abundance at higher altitudes during these periods [12,29].
Pilinski et al. [30] subsequently developed a relationship between AO adsorption and energy accommodation for different altitudes based on the Langmuir isotherm, a function that relates the surface adsorption of a given gas species to its partial pressure. This method was later modified [31] to incorporate a lower bound on the energy accommodation in the absence of AO, based on Goodman's physical interaction model [32]. A similar approach developed by Walker et al. [28] uses the Langmuir isotherm to determine the fractional surface coverage by AO, which is then applied as a weighting factor to combine drag coefficients calculated for clean and fully adsorbed surfaces. This fractional surface coverage approach also enables application to a wider range of GSI models (i.e. DRIA and CLL). Alternative adsorption models for fractional surface coverage have also been explored [33].
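A minimal sketch of this fractional-coverage weighting, assuming the standard Langmuir form θ = K·P / (1 + K·P); the constant K, the partial pressure, and the two drag coefficients below are illustrative placeholders, not fitted values from the cited works:

```python
def langmuir_coverage(partial_pressure_Pa: float, K: float) -> float:
    """Langmuir isotherm: fractional surface coverage
    theta = K*P / (1 + K*P)."""
    kp = K * partial_pressure_Pa
    return kp / (1.0 + kp)

def weighted_drag_coefficient(cd_clean: float, cd_adsorbed: float,
                              theta: float) -> float:
    """Combine drag coefficients for clean and fully AO-adsorbed
    surfaces, using the fractional coverage as the weighting factor."""
    return theta * cd_adsorbed + (1.0 - theta) * cd_clean

theta = langmuir_coverage(partial_pressure_Pa=1e-6, K=2e6)  # assumed values
print(theta, weighted_drag_coefficient(cd_clean=2.3, cd_adsorbed=3.1, theta=theta))
```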
For very simple convex geometries (e.g. flat plates or spheres), closed-form analytical solutions of the GSI models can be used to calculate the overall aerodynamic forces and torques experienced by the given body exposed to the flow. However, for more complex geometries either panel methods or numerical approaches must be adopted. Flat-plate or panel methods operate by discretising a geometric model into a number of smaller elements (often triangular flat plates) for which the individual aerodynamic contributions can be calculated using closed-form analytical GSI methods and summed together to generate the total aerodynamic coefficients for the body. However, these methods are unable to account for particle–particle interactions and multiple particle reflections from surfaces. Shadowing or shielding effects are also difficult to account for accurately due to the independent treatment of the surface elements, though ray-tracing approaches can be implemented, for example in ADBSat [34]. Panel methods are therefore only suitable for the analysis of generally convex geometries with simple external features, but due to their analytical basis they can be executed very quickly for different orientations and environmental conditions [6]. Numerical approaches such as test-particle Monte Carlo (TPMC) or direct-simulation Monte Carlo (DSMC), on the other hand, use direct simulation of particle interactions with the spacecraft surfaces combined with statistical approaches to determine the aerodynamic coefficients for a given body. These methods can, therefore, account for more complex shadowing behaviour and the effect of multiple surface reflections, but at the expense of computational effort [35].
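To illustrate the basic structure of a panel method, the sketch below sums per-panel contributions under deliberately crude assumptions (hyperthermal flow, all incident momentum absorbed, no re-emission term, no shadowing); it is far simpler than tools such as ADBSat and is intended only to show the discretise-and-sum pattern:

```python
import numpy as np

def panel_drag_coefficient(normals, areas, flow_dir, a_ref):
    """Sum per-panel drag contributions over a discretised body.

    Assumes hyperthermal free-molecular flow with all incident momentum
    absorbed (diffuse re-emission momentum neglected), so a panel whose
    outward normal faces the flow contributes 2*A*cos(theta)/A_ref to
    C_D, with theta the local incidence angle. Shadowing is ignored.
    """
    flow_dir = np.asarray(flow_dir, dtype=float)
    flow_dir /= np.linalg.norm(flow_dir)
    cd = 0.0
    for n, a in zip(normals, areas):
        cos_theta = np.dot(np.asarray(n, dtype=float), -flow_dir)
        if cos_theta > 0.0:  # panel exposed to the oncoming flow
            cd += 2.0 * a * cos_theta
    return cd / a_ref

# A 1 m^2 plate normal to the flow recovers C_D = 2 in this limit.
print(panel_drag_coefficient([[1, 0, 0]], [1.0], flow_dir=[-1, 0, 0], a_ref=1.0))
```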
Hybridised or parametrised approaches can be used to reduce the computational burden of numerical approaches. Mehta et al. [35] implemented a Gaussian-process-based approach using DSMC results for the GRACE spacecraft, whilst also incorporating the fractional coverage method for surface accommodation. This response surface method has subsequently been implemented using TPMC to improve computational efficiency, and for a number of further satellite geometries [36,37]. An alternative interpolation-based approach to drag coefficient modelling, using DSMC simulations with high-fidelity geometric satellite models, has been implemented by March et al. [38]. Using results generated from this interpolation method in an analysis of neutral winds, updated energy accommodation coefficient values of 0.85 and 0.82 were proposed for the CHAMP and GOCE satellites, respectively, using the DRIA GSI model [39,40].
For conventional materials, the literature reviewed above, associated with on-orbit experiments and analysis of existing satellites, indicates that GSI behaviour in the AO-dominated environment of VLEO is characterised by significant accommodation and diffuse re-emission. More specifically, whilst σ_n is considered a variable parameter, σ_t is commonly assumed to be unity, leading to diffuse re-emission behaviour [37,41].
However, gas beam experiments suggest that characteristics such as the surface topology and orientation at the atomic scale may allow quasi-specular reflection properties in rarefied flows [42][43][44]. Recent experiments on the scattering properties of AO from different surfaces have demonstrated more specular reflection behaviour for silicon dioxide and highly oriented pyrolytic graphite (HOPG) [45], leading to investigation of further materials with AO erosion resistance for use in VLEO [46]. Study of nanofluidic channel flows, with Knudsen numbers similar to those in VLEO, has also highlighted the existence of 2D materials exhibiting non-diffuse reflection behaviour [47]. Analysis of satellites at altitudes at and above the oxygen–helium transition regime has also suggested that tangential momentum accommodation may be significantly incomplete [19], indicating that some conventional materials may demonstrate quasi-specular behaviour in orbit, albeit at altitudes above VLEO.
For consideration of surfaces or materials that may have AO erosion resistance, and therefore the potential for a significant reduction in surface accommodation, representation of a wide range of different re-emission qualities is critical. The CLL model based on the closed-form Schaaf and Chambre solutions will therefore be adopted in this work, allowing a range of quasi-specular behaviour to be modelled. As there is no simple relationship between the normal energy accommodation coefficient α_n and the normal momentum accommodation coefficient σ_n, the species-specific fitted-parameter approach developed by Walker et al. [28] is used. For the CLL model, when α_n = 1 the incoming particles are completely accommodated to the surface and re-emitted in thermal equilibrium with it. Similarly, when σ_t = 1 the information on the tangential velocity of the incoming particles is lost and the re-emission is diffuse. Contrastingly, when α_n = σ_t = 0 the incoming particles undergo elastic, specular reflection. For other values of α_n and σ_t, varying degrees of accommodation and quasi-specular behaviour can be modelled.
The drag coefficient (C_D) and lift coefficient (C_L) for a single-sided flat plate with varying angle of incidence (θ) and varying values of the associated surface accommodation coefficient(s) are shown in Fig. 2 for the DRIA and CLL models. It should be noted that α_n and σ_t have been varied together to reduce the parameter space for visualisation in Fig. 2; however, as noted above, for a given material or surface these two parameters may not necessarily be equal. The CLL model is able to express a much larger range of aerodynamic coefficients than presented here.
In Fig. 2a, the results for the DRIA model show that a reduction in α_T results in an increase in the drag coefficient even when the surface is oriented at shallow incidence angles. However, when quasi-specular reflection behaviour is considered using the CLL model, a reduction in drag coefficient for reducing α_n and σ_t is observed at shallow angles of incidence, whereas when the surface is oriented towards the oncoming flow the reduction in α_n and σ_t results in an increase in the drag coefficient. Unequal variations of α_n and σ_t will result in varying behaviour of the aerodynamic coefficients and of the surface angle at which a reduction in drag coefficient is observed.
For both the DRIA and CLL models, an increase in the available lift coefficient is generally observed for reducing accommodation coefficient(s). The quasi-specular reflection behaviour of the CLL model provides a significantly greater magnitude of lift coefficient for most angles (except close to parallel to the flow) in comparison to the DRIA model. For reduced surface accommodation, the peak in the lift coefficient distribution is also slightly biased towards orientations closer to the surface normal for the CLL model (θ ≈ 30°). The variation in the aerodynamic coefficients is also shown to diminish with reducing accommodation and appears greatest close to the fully accommodated case. This is due to the expressions in the GSI models that relate the reflected particle energy or momentum to the respective accommodation coefficient(s), and physically to the high incident particle energy in comparison to that with which a completely thermally accommodated particle would leave the surface. For the CLL model, this effect is also more notable due to the assumption of quasi-specular scattering and therefore the introduction of directionality in the reflected particle flux. As represented in Fig. 2a, this is principally due to the reduction in tangential momentum accommodation and is demonstrated most clearly in the lift coefficient.

Fig. 2 Variation of the aerodynamic coefficients with incidence angle and input accommodation coefficient parameters for the DRIA and CLL models. Atmospheric parameters (composition, temperature) have been calculated using the NRLMSISE-00 model with medium solar activity conditions defined as per ISO 14222:2013 [5] at h = 300 km, T_w = 300 K

These results have been generated for medium solar activity conditions. At different solar activity conditions, the output force coefficients will change due to the variation in atmospheric composition and in the incident particle temperature at a given altitude (principally via the speed ratio). At increased solar activity, the expanding atmosphere and therefore the expectation of increased accommodation will generally result in a reduction in the force coefficients [13,29]. Solar maximum conditions are associated with greater variations and uncertainty in the solar activity conditions and will therefore result in greater uncertainty in the aerodynamic coefficients during these periods [15]. A similar effect will also be present due to atmospheric variations with latitude, shorter-term seasonal and diurnal cycles, and local solar time [15,17]. However, for VLEO altitudes, where surface accommodation is typically assumed to be high, the sensitivity to these parameters is lower. The corresponding sensitivity of the aerodynamic coefficients to the wall temperature (here assumed static) is also low due to its relative magnitude compared to the incident particle temperature, i.e. T_w ≪ T_i [28,48].
In Fig. 2b the presented drag coefficient and lift coefficient are referenced to the cross-sectional (or projected) area with respect to the flow. This shows the effective variation of the aerodynamic coefficients independent of the change in cross-sectional area due to the orientation of the finite surface with respect to the flow. The rapid increase in the coefficients at incidence angles approaching 90° (i.e. parallel to the flow) should be recognised as a singularity resulting from the cosine term in the projected area, since cos 90° = 0.
The results in Fig. 2b demonstrate clearly that for diffuse reflection characteristics a reduction in α_T cannot enable a reduction in drag compared to a fully accommodated surface. However, under the assumption of quasi-specular reflection behaviour, variation of surface incidence can produce a reduction in drag, the magnitude of which is controlled by the specific combination of α_n and σ_t.
Reducing the drag experienced in orbit is a critical challenge for satellites that operate in the VLEO environment. Conversely, the development of surfaces that can provide an increase in drag has application to the development of enhanced deorbit devices. An increase in the production of lift is also desirable for purposes of enhanced aerodynamic control authority. To provide these desired aerodynamic characteristics, materials that can produce specular or quasi-specular reflection behaviour with reduced levels of energy accommodation are therefore needed. The development of such materials also has significance for the development of atmospheric intakes with greater efficiency [49]; these intakes are an important component of atmosphere-breathing electric propulsion (ABEP) systems that may be used to enable sustained drag compensation in VLEO [50].
The Satellite for Orbital Aerodynamics Research
The identification and characterisation of materials with enhanced aerodynamic properties has been a key focus of the DISCOVERER project that is working towards enabling satellites that can operate sustainably in VLEO [51,52]. Critically, materials that are resistant to the adverse effects of AO adsorption and erosion may hold the promise of improvements to GSI performance in VLEO.
A ground-based experimental facility to perform comprehensive testing and characterisation of different candidate materials is currently being commissioned at The University of Manchester. The Rarefied Orbital Aerodynamics Research (ROAR) facility has been designed to reproduce the characteristic flux and energy of AO found in VLEO within an ultra-high vacuum chamber that reflects the rarefied nature of the flow in the orbital environment. The scattering pattern and energies of the incident and reflected AO beam from the different samples will be measured by a suite of sensors to enable the characterisation of the GSI properties [53,54].
The Satellite for Orbital Aerodynamics Research (SOAR) has been designed to perform in situ characterisation of the aerodynamic performance of different materials at varying altitude and incidence angle in the VLEO environment [55], taking heritage from the previous ΔDsat mission concept [56]. The satellite will test novel 2D materials with promising orbital aerodynamic characteristics in-orbit and provide data to validate the ongoing experiments that will be conducted in the ROAR facility. SOAR will also demonstrate novel aerodynamic control manoeuvres [57,58], provide measurements of the atmospheric flow in VLEO (density, composition, and velocity), and perform characterisation of the thermospheric wind vector [59].
SOAR, shown in Fig. 3, is a 3U CubeSat with principal physical parameters reported in Table 1. The satellite features two payloads that work together to perform the envisaged experiments: an ion and neutral mass spectrometer (INMS) and a set of steerable fins carrying the test materials. The satellite also features a capable attitude determination and control system (ADCS) consisting of coarse and fine sun sensors (FSS), magnetometers, a state-of-the-art MEMS inertial measurement unit (IMU), magnetorquers, and a 3-axis reaction wheel assembly (RWA). A GPS receiver is also included to provide more precise orbital position/velocity information than is available from ground-based tracking and TLE information. A breakdown of the contributing components is provided in Fig. 4.
When the surfaces of the steerable fins are all aligned either parallel or perpendicular to the satellite body axis (the z-body axis in Fig. 3), known respectively as the minimum and maximum drag configurations, naturally restoring aerodynamic torques will be generated if the satellite begins to point away from the direction of the oncoming flow. These configurations are therefore nominally aerostable and are the default configurations for the spacecraft. When other configurations of the steerable fins are adopted, additional aerodynamic forces and torques will be generated and will be used advantageously to perform both the desired material characterisation experiments and the aerodynamic control demonstrations.
On SOAR, four different materials have been selected to cover the steerable fin surfaces: sputter-coated gold, borosilicate glass, and two novel surface coatings (not identified herein for IP protection purposes). The gold and borosilicate glass surfaces are both resistant to AO erosion and are chemically stable in the VLEO environment. However, they exhibit significant differences in their AO recombination [60,61] and have therefore been selected with the intention to provide breadth in GSI performance for previously characterised and well-known materials, i.e. from fully diffuse to more reflective respectively. The two "beyond graphene" 2D surface coating materials have been selected for their potential of improved GSI properties whilst also maintaining the necessary AO resistance. The expected experimental uncertainty achievable by SOAR for materials demonstrating complete thermal accommodation and diffuse re-emission was previously reported in [55]. In this analysis, the lowest uncertainties for the drag coefficient determination were found at 300 km, whilst the determination of the lift coefficient was found to improve as the altitude was reduced to 200 km. However, these analyses may not be representative of the novel materials selected and present on the satellite. If these materials do provide lower energy accommodation or a greater degree of specular or quasi-specular re-emission, the forces and torques generated by the interaction of the coated surfaces would change, affecting the orbital and attitude response of the spacecraft.
Until the GSI characteristics of these materials are established from on-orbit investigation and laboratory-based experimentation their potential performance can only be speculated on. However, in order to explore the effect that such materials could have on the experimental performance of the SOAR mission, variation in the GSI models and associated accommodation coefficients can be applied in simulations and used to perform experimental uncertainty analyses, forming the focus of this paper.
Experimental method for determining aerodynamic coefficients
Characterisation of the GSI performance of the different materials on SOAR will be performed by indirect estimation of the aerodynamic coefficients of the satellite with different configurations of the steerable fins. Measurements of the satellite's position, velocity, attitude, and rotation rates will be performed in orbit, and methods of orbit and attitude determination will subsequently be applied to obtain best-fit estimates of the aerodynamic coefficients when the satellite is operated in different configurations. Environmental data measured by the INMS on board the satellite will also be used to inform the orbit or attitude determination processes and reduce modelling uncertainties [55].
To ensure that only one material is nominally exposed to the flow at a time, the steerable fins are operated in opposing pairs; as a result, a maximum of four materials can be tested on SOAR. During the experiments, the opposing pairs of steerable fins can either be co-rotated or counter-rotated with respect to each other whilst exposing the same material to the flow. In these two configurations, the satellite will respond differently to the oncoming flow as a result of the variation in the aerodynamic torques that are generated. The consequences of this for the experimental performance are explored in Sects. 5 and 6.
The INMS payload enables direct measurement of the in-situ density and composition during these experiments, reducing the uncertainty that is associated with the use of atmospheric models. However, to maintain the accuracy of the measurements, the aperture of the INMS instrument should nominally point closely towards the direction of the oncoming flow.
For counter-rotated configurations of two opposing steerable fins (Fig. 5a), when the spacecraft is nominally pointed towards the oncoming flow, no net torques are generated in pitch or yaw. A rolling moment is, however, produced due to the opposing lift forces generated on the two exposed surfaces, which, if left uncontrolled, will result in roll of the spacecraft. If the spacecraft is disturbed from the flow-pointing configuration, pitch and/or yaw torques will be produced. Coupling between motion in the pitch and yaw axes when the fins are not in either the minimum or maximum drag mode may act to further disturb the attitude of the spacecraft from the flow-pointing direction.
For co-rotated configurations of two opposing steerable fins (Fig. 5b), the drag will similarly increase and a net pitch or yaw torque will be generated by the common incidence angle of rotated panels. This will cause the spacecraft to rotate and fly at an angle to the oncoming flow. The satellite will also nominally oscillate about the new offset equilibrium attitude due to aerostability in this configuration.
In principle the counter-rotated configuration should ensure that the INMS instrument is pointed most precisely towards the oncoming flow direction and will be operated within its angular acceptance range. However, in either configuration, the effects of thermospheric winds and presence of other environmental torques (gravity gradient, solar radiation pressure and residual magnetic dipole interactions) will disturb the attitude of the spacecraft. Thus, during experiments for both counter-rotated and co-rotated steerable fin configurations the RWA will be used to selectively damp and control the motion of the spacecraft in different axes to provide the desired stability and pointing performance.
Practically, a maximum duration of operations with the fins in either a counter-rotated or co-rotated configuration will be imposed by the build-up of angular momentum and saturation of the reaction wheels. This duration is a function of the atmospheric density, the configuration and incidence angle of the steerable fins, and the material GSI performance. The thermospheric wind, solar activity, and other external disturbance torques also contribute to the attitude stability and control performance. At lower altitudes, the time period over which the spacecraft can be operated successfully may be significantly limited for some configurations, the impact of which is investigated in Sects. 5 and 6.
This method, however, only strictly allows for the direct calculation of the composite aerodynamic force coefficients of the entire spacecraft in a given configuration rather than for an individual material. This is principally due to the aerodynamic contribution of the spacecraft body and interference between the body and the panels (i.e. shadowing and secondary particle interactions) that cannot be directly eliminated in the experimental analysis. As described by Virgili Llop & Roberts [56], a differential approach can be applied to calculate the ratio between aerodynamic coefficients for different combinations of material, incidence angle, and altitude. The relative measure of the aerodynamic performance between different materials or configurations can therefore be calculated whilst also avoiding biases and systematic errors, for example in the measurement of the atmospheric density.
Orbit and attitude determination
Orbit and attitude determination methods will be used in ground-based post-processing to determine the aerodynamic coefficients from the experimentally gathered data. These methods utilise models to generate estimates for the contributing perturbing forces and torques. The in situ atmospheric density measured by the INMS can be used to reduce uncertainty and improve the aerodynamic force and torque model outputs. The resulting trajectory or attitude of the spacecraft generated by propagation of these models can be compared to the orbital parameters or attitude measured by the GPS or ADCS, respectively.
Initial values of the drag or lift coefficients, possibly informed by ground-based experimental results if available, will be used to establish the aerodynamic coefficients corresponding to a given configuration of the steerable fins (and therefore exposed material) and to initiate the orbit or attitude determination process. The iterative process of least-squares differential correction will then provide the best-fit output aerodynamic coefficients matching the measured on-orbit data and the propagated trajectory or attitude.
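As a toy illustration of the free-parameter fitting step (not the mission's actual determination pipeline), the sketch below fits a drag coefficient so that a simple circular-orbit decay model, da/dt = −C_D (A/m) ρ √(μa), reproduces noisy simulated semi-major-axis measurements; the density, area-to-mass ratio, noise level, and "true" coefficient are assumed placeholder values:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
RHO = 1e-11           # assumed constant density near 300 km, kg/m^3
A_OVER_M = 0.01       # assumed area-to-mass ratio, m^2/kg

def decay_model(a0, t, cd):
    """Semi-major axis history under the circular-orbit decay rate
    da/dt = -cd * (A/m) * rho * sqrt(mu * a)."""
    dadt = lambda a, _t: -cd * A_OVER_M * RHO * np.sqrt(MU * a)
    return odeint(dadt, a0, t)[:, 0]

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5 * 86400.0, 200)           # five days of tracking
a_true = decay_model(6.678e6, t, cd=2.6)         # assumed "true" C_D
a_meas = a_true + rng.normal(0.0, 10.0, t.size)  # GPS-like 10 m noise

# Least-squares differential correction: fit C_D to the measurements.
residuals = lambda p: decay_model(6.678e6, t, p[0]) - a_meas
fit = least_squares(residuals, x0=[2.0])
print("fitted C_D:", fit.x[0])
```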
The accuracy of this method, and therefore of the returned aerodynamic coefficients, is dependent on both the accuracy and fidelity of the models used to perform the propagation and the uncertainty of the parameters measured on-orbit that serve as inputs to these models and to the orbit and attitude determination processes. Values for the expected performance of the sensors on SOAR are reported in Table 2. For the INMS, the basic instrument sensitivity can be determined by considering the uncertainty on the number density combined with the expected reduction in acceptance given the pointing error with respect to the oncoming flow.
Drag coefficient
Experiments to investigate the drag coefficient of the different materials can be performed using both counter-rotated and co-rotated configurations of the steerable fins and determining the overall drag coefficient of the spacecraft. In either configuration, the RWA will be used to stabilise the spacecraft attitude and maintain an approximately flow-pointing condition.
As the effect of the drag force accumulates over time to produce a variation to the orbital parameters, the drag coefficient of the spacecraft in a fixed experimental configuration can be determined using the least-squares orbit determination and free-parameter fitting method. With a fixed configuration of the steerable fins and a stabilised attitude, the position and velocity vectors of the satellite can be measured over time using the GPS device and the in-situ atmospheric conditions using the INMS. As the drag force results in a secular variation in the orbital parameters, these experiments can be performed even at the higher altitudes after deployment ( ∼ 400 km) where the atmospheric density is lower, and the relative magnitude of the aerodynamic forces are small. However, at these higher altitudes, the experimental duration required to provide a discernible variation in the measured orbital parameters may be long and eventually limited by the power constraints of the platform when the different payloads, attitude actuators, and sensors are simultaneously operating.
As the satellite descends in altitude, the magnitude of the drag force will increase in comparison to the other external forces and the effect on the orbital parameters in a given period will become more significant. The experimental uncertainty would, therefore, be expected to generally decrease with altitude. However, as a result of the increasing magnitude of the disturbing aerodynamic torques, the maximum experimental duration will likely be constrained by the rate of saturation of the RWA in one or more axes.
The possible experimental duration at both the higher and lower altitudes may, therefore, not be sufficient to provide a good fit for the drag coefficient given the variation in the orbital parameters (i.e. the obtained signal to noise ratio). A trade between the ability for the spacecraft to operate in the experimental mode and the associated uncertainty on the output drag coefficient, therefore, exists. The optimal results are expected to be obtained between altitudes of approximately 350 km and 250 km [55].
Lift coefficient
The lift coefficients of the different experimental materials can be investigated through measurement of the rolling moment coefficient of the satellite. Counter-rotated configurations of the steerable fins will be adopted and the satellite will be allowed to rotate in the roll axis, whilst stabilisation is provided in the pitch and yaw axes by the RWA.
In addition to the intentionally generated aerodynamic torques, the satellite attitude will also be affected considerably by the other external perturbations. At higher altitudes, these other external torques are expected to be of a similar or greater magnitude than those generated by the aerodynamic interactions. As altitude reduces, the aerodynamic torques will increase in both relative and absolute magnitude and their effect on the satellite attitude will become dominant.
Using the on-board attitude sensors (combined using an unscented Kalman filter), the motion of the satellite during the lift coefficient experiments can be recorded alongside the orbit parameters and the properties of the oncoming flow as measured by the INMS. The method of least-squares attitude determination with free-parameter fitting can be used to determine the rolling moment coefficient (C_l) for the experimental configuration. The lift coefficient of the counter-rotated surfaces can subsequently be determined under the assumption that the two fin surfaces contribute equally to the rolling motion and that the remainder of the spacecraft produces no torques contributing to this motion.
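A sketch of this final conversion step under the stated assumptions (two fins contributing equally and no other roll torques): balancing the fitted rolling-moment coefficient against the couple formed by the two fin lift forces gives a per-fin lift coefficient. The reference area, reference length, fin area, and moment arm below are illustrative placeholders, not SOAR's actual dimensions:

```python
def fin_lift_coefficient(c_l: float, a_ref: float, l_ref: float,
                         a_fin: float, arm: float) -> float:
    """Per-fin lift coefficient from a fitted rolling-moment coefficient.

    Balances q*A_ref*L_ref*c_l = 2 * (q*A_fin*C_L) * arm, i.e. the roll
    torque equals the couple produced by equal lift forces on the two
    counter-rotated fins acting at moment arm `arm` from the roll axis;
    the dynamic pressure q cancels.
    """
    return c_l * a_ref * l_ref / (2.0 * a_fin * arm)

# Illustrative placeholder dimensions only.
print(fin_lift_coefficient(c_l=0.02, a_ref=0.03, l_ref=0.3,
                           a_fin=0.01, arm=0.15))
```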
In contrast to the drag coefficient experiments, the experimental uncertainty for the lift coefficient is expected to decrease as the altitude reduces. This is due to the increasing magnitude of the generated aerodynamic torques that will be clearly identifiable in the attitude measured by the ADCS, despite the reducing experimental period.
Attitude simulations
During the experimental operations, co-rotated or counter-rotated configurations of the steerable fins will be utilised to expose the different materials to the oncoming flow, modifying the natural stability and attitude dynamics of the spacecraft. Stabilisation and control of the spacecraft against secular external disturbances will cause the build-up of angular momentum in the RWA towards saturation and ultimately a loss of control authority. For aerodynamic torques these effects are strongly linked to the dynamic pressure and will therefore increase with the density as the mission progresses and the satellite descends in altitude. The GSI performance of the surfaces of the steerable fins may also result in variations in the attitude dynamics of the spacecraft and therefore the experimental performance, particularly if the selected materials demonstrate substantially different energy or momentum accommodation characteristics.
In the following simulations, the attitude behaviour of SOAR with different steerable fin and RWA control modes is considered for two GSI models and different input assumptions:

• The DRIA model with complete thermal accommodation (α_T = 1.0), representing conventional material performance.
• The CLL model with incomplete normal energy accommodation (α_n = 0.9) and tangential momentum accommodation (σ_t = 0.9), representing a material with some quasi-specular reflection properties.
An altitude of 250 km has been selected as this was previously identified as a challenging case for the attitude control system whilst the steerable fins are configured to perform different experiments [55]. At higher altitudes, the disturbing aerodynamic torques are smaller in magnitude and the attitude control system is expected to be able to provide a stable attitude over a longer period before saturation occurs. Commensurately, at lower altitudes, the duration for which a stable attitude can be maintained in non-aerostable conditions is expected to decrease. The response of SOAR under three-axis attitude control with one set of steerable fins counter-rotated at an angle of 30° is shown in Fig. 6. In both cases, a small offset of the attitude in yaw (with respect to the flow) initially appears, as the Local-Vertical Local-Horizontal (LVLH) reference for the attitude control is not aligned directly with the oncoming flow due to atmospheric co-rotation. As a result of this misalignment, the aerodynamic torques, and the additional external disturbances, one or more of the wheels in the RWA are shown to reach saturation. This results in a loss of control actuation, and the attitude error eventually grows beyond the acceptance range of the INMS.
The accumulation of angular momentum is much quicker, and saturation and loss of control occur much sooner, for the case where the CLL model with incomplete accommodation is assumed. This is attributed to the increase in the lift coefficient associated with the GSI performance, resulting in greater aerodynamic roll torques generated by the counter-rotated configuration, and to additional disturbing aerodynamic torques generated by misalignments of the fins with the oncoming flow direction.
Corresponding results for a co-rotated steerable fin configuration are shown in Fig. 7. For both GSI models and associated assumptions, the period prior to RWA saturation and loss of attitude control is notably longer than in the counter-rotated configuration. This is principally a result of the symmetry of the co-rotated configuration that yields some aerostable behaviour rather than the diverging roll torques and pitch-yaw coupling effects of the counter-rotated configuration. The difference in duration between the stabilised attitude behaviour of the two GSI models and associated inputs is however still notable.
In Figs. 6 and 7 it can be seen that prior to saturation of the RWA the attitude in pitch and yaw of the spacecraft is maintained within a small angular range (< 5°) and therefore within the acceptance angle range of the INMS, enabling effective measurement of the in-situ density. However, if the duration over which the desired configuration of the steerable fins can be sustained prior to RWA saturation is shorter, an increase in the uncertainty of the drag coefficient determined using the orbit determination process is likely. This is investigated using statistical methods in Sect. 6.
Despite the expected disadvantage of the co-rotated configuration introducing an offset in the body attitude with respect to the flow (in yaw due to the rotation of the vertical fins in this case), the angle of rotation is small and comparable to that also seen for the counter-rotated configuration. This offset arises primarily due to the difference between the oncoming flow direction and the LVLH reference that is used by the ADCS. This is due to lack of knowledge of the true oncoming flow direction as a result of the thermospheric winds and to a lesser extent atmospheric co-rotation relative to the inclined orbit.
The location of the satellite with respect to the Sun (i.e. eclipsed or not eclipsed) is also indicated in Figs. 6 and 7 using the shaded background and represents whether the satellite is also under the influence of effects due to solar radiation pressure (SRP). The variation of the solar vector and its interaction with the spacecraft surfaces means that the contribution of torques due to SRP may assist or disturb the spacecraft attitude control and stability at different times. However, at an altitude of 250 km, the torque due to SRP on SOAR is an order of magnitude smaller than the aerodynamic disturbances and is therefore not the dominant factor in the spacecraft attitude. Moreover, the attitude is further influenced by other perturbing torques such as magnetic dipole interactions. Within the experimental analysis, the effect of these perturbations is accounted for using models within the propagation and least-squares orbit/attitude determination method such that the impact of the aerodynamic forces and torques can be isolated.

Fig. 6 Attitude response of SOAR at 250 km altitude with vertical steerable fins counter-rotated at 30° and three-axis reaction wheel control
The behaviour for a 30° counter-rotated configuration under two-axis control (pitch and yaw) is shown in Fig. 8. With the roll axis left uncontrolled, the torques generated by the two counter-rotated fins produce a couple (i.e. a moment with no net force) about the roll axis and the satellite begins to rotate.
The rate at which the spacecraft angular velocity increases about the roll axis is faster for the case in which incomplete accommodation is assumed (using the CLL model). This is the expected result of the significant increase in the surface lift coefficient (see Fig. 2) and is clearly demonstrated by the time taken for the satellite to exceed a cumulative roll angle of 90° (note the different time-scale of the x-axis for the two cases in Fig. 8). In both cases, the pitch and yaw axes are maintained close to the flow-pointing direction, thus enabling characterisation of the oncoming flow by the INMS instrument.
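Under a constant couple and starting from rest, the cumulative roll angle grows as θ(t) = ½(L/Iₓₓ)t², so the time to sweep 90° scales as 1/√L. A minimal sketch, with assumed inertia and torque values rather than SOAR's, follows.

```python
# Sketch of the uncontrolled roll build-up under a constant couple, starting
# from rest: theta(t) = 0.5 * (L / I_xx) * t**2, so t_90 scales as 1/sqrt(L).
# The inertia and torque values below are assumptions, not SOAR's.
import math

I_xx = 0.02      # roll-axis moment of inertia [kg m^2], typical 3U value
L_roll = 5.0e-7  # aerodynamic roll couple [N m], assumed

t_90 = math.sqrt(2 * math.radians(90.0) * I_xx / L_roll)
print(f"Time to accumulate 90 deg of roll: {t_90:.0f} s")

# Incomplete accommodation raises the surface lift coefficient, scaling the
# couple up and shortening the manoeuvre time accordingly.
t_90_cll = math.sqrt(2 * math.radians(90.0) * I_xx / (2 * L_roll))
print(f"With a doubled roll couple: {t_90_cll:.0f} s")
```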
Experimental uncertainty with varying material performance
The experimental performance of SOAR for different assumed GSI properties of the test materials can be explored for different steerable fin configurations, incidence angles, and altitudes. Orbit and attitude data generated by numerical propagation are used to simulate on-orbit data and then provided as input to the least-squares orbit/attitude determination method with free-parameter fitting that is used to estimate the aerodynamic coefficients. Simulated sensor bias and noise are applied to the initial simulations to resemble the on-orbit sensor performance and therefore provide a measure of the expected experimental uncertainty [55]. For the initial orbital and attitude simulations, higher-fidelity perturbation models are utilised: Earth gravity (using EGM), third-body gravity, gravity-gradient torques, SRP force and torques, aerodynamic force and torques, and residual magnetic dipole torques. For the SRP and aerodynamic models, an interpolated database generated beforehand using the ADBSat panel method is used to provide attitude-dependent force and torque coefficient values for the SOAR geometry (with the selected configuration of the steerable fins). The least-squares orbit and attitude determination processes are subsequently applied using the same propagation method. However, within this iterative process the aerodynamic force and torque models are reduced to simpler attitude-invariant forms to allow the free-parameter fitting of the aerodynamic coefficients.
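The sketch below illustrates the free-parameter fitting idea in a deliberately reduced form: a one-dimensional, constant-density orbit-decay model fitted for a single attitude-invariant drag coefficient using scipy.optimize.least_squares. It is not the flight pipeline described above, and all constants are assumptions.

```python
# Toy free-parameter fit: recover a single attitude-invariant drag coefficient
# from simulated, noisy orbit-decay data. Constant density and circular-orbit
# secular decay are simplifying assumptions, not the flight models.
import numpy as np
from scipy.optimize import least_squares

mu = 3.986e14    # Earth gravitational parameter [m^3/s^2]
A_m = 0.01       # area-to-mass ratio [m^2/kg], assumed for a 3U CubeSat
rho = 6.0e-11    # constant density near 250 km [kg/m^3], simplification

def decay(cd, a0, t):
    # Secular circular-orbit decay: da/dt = -sqrt(mu * a) * rho * cd * (A/m)
    a = np.empty_like(t)
    a[0] = a0
    for i in range(1, len(t)):
        a[i] = a[i-1] - np.sqrt(mu * a[i-1]) * rho * cd * A_m * (t[i] - t[i-1])
    return a

t = np.linspace(0.0, 86400.0, 200)    # one day of simulated tracking
a0 = 6371e3 + 250e3                   # initial semi-major axis [m]
obs = decay(2.9, a0, t)               # "true" Cd = 2.9
obs = obs + np.random.default_rng(1).normal(0.0, 5.0, t.size)  # 5 m noise

fit = least_squares(lambda p: decay(p[0], a0, t) - obs, x0=[2.2])
print(f"Fitted Cd = {fit.x[0]:.3f}")
```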
A Monte Carlo approach (using 50 runs) has been used for each case to consider the effect of variations in the initial orbit and attitude parameters in the absence of in-flight data. The standard deviation of the returned aerodynamic coefficients therefore provides a measure of the expected uncertainty in the experiment and associated data-analysis method. An associated 95% confidence interval on the standard deviation is also included, based on the number of Monte Carlo runs performed. Figure 9 shows the mean fitted drag coefficient and associated standard deviation generated using the least-squares orbit determination method and Monte Carlo approach for different assumed accommodation coefficient values (using the CLL model) with varying steerable fin configuration, incidence angle, and altitude. These figures express the expected experimental performance of the drag coefficient determination process.
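A minimal sketch of this uncertainty summary, using stand-in data in place of the 50 fitted coefficients, is given below; the 95% confidence interval on the standard deviation follows from the chi-squared distribution of (n − 1)s²/σ².

```python
# Sketch of the Monte Carlo uncertainty summary with stand-in data: the 95%
# confidence interval on the standard deviation follows from the chi-squared
# distribution of (n - 1) * s^2 / sigma^2 for n runs.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)
cd_fits = rng.normal(2.9, 0.05, 50)   # stand-in for 50 Monte Carlo fitted values

n = cd_fits.size
s = cd_fits.std(ddof=1)               # sample standard deviation
alpha = 0.05
s_lo = np.sqrt((n - 1) * s**2 / chi2.ppf(1 - alpha / 2, n - 1))
s_hi = np.sqrt((n - 1) * s**2 / chi2.ppf(alpha / 2, n - 1))
print(f"mean Cd = {cd_fits.mean():.3f}, sigma = {s:.4f}, "
      f"95% CI on sigma = [{s_lo:.4f}, {s_hi:.4f}]")
```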
Drag coefficient
Noting the varying scale on the y-axes, these figures show that for inclined steerable fin configurations (i.e. not at 90°), decreasing the surface accommodation results in an increase in the experimental uncertainty. This is attributed to the greater lift force associated with a lower surface accommodation coefficient and therefore the increased production of disturbing aerodynamic torques that must be compensated for by the ADCS. This effect is especially marked at low orbital altitude, where the duration for which the experimental configuration can be effectively maintained by the onboard attitude control actuators is very short. When the selected pair of steerable fins is deflected at 90° (nominally normal to the oncoming flow), the experimental uncertainty is much smaller for all accommodation coefficients. This is due to the aerostable nature of this configuration, which reduces the attitude control requirements and allows a greater experimental duration.
For counter-rotated configurations (Fig. 9a) the experimental uncertainty is found to be generally smallest at 400 km altitude. Comparatively, for co-rotated cases (Fig. 9b) the experimental uncertainty is minimised at 300 km for each combination of accommodation coefficient and incidence angle. Experiments performed with pairs of steerable fins oriented normal to the oncoming flow do not show the same trend, with higher experimental uncertainty at 400 km altitude than at 300 km and 200 km.
For cases with inclined steerable fins, both the counter-rotated and co-rotated configurations demonstrate sensitivity to the reduction in accommodation coefficient, resulting in significantly increased uncertainty. For the counter-rotated configuration, the standard deviation of cases with incomplete accommodation at 200 km altitude can exceed the magnitude of the drag coefficient, suggesting that a good estimate cannot be obtained. The co-rotated configuration shows a similar but smaller increase in experimental uncertainty for inclined surfaces at 200 km. In general, this suggests that the proposed experiments with inclined surfaces will be unable to provide clearly differentiable estimates of the drag coefficient, particularly at low orbital altitudes. However, at the higher orbital altitudes the experimental uncertainty is expected to decrease considerably, and the variation in the drag coefficient as a result of material properties may be observed. Comparatively, when two opposing fins are rotated normal to the flow, the variation in drag coefficient due to different material properties should be more clearly visible.
Rolling moment coefficient
The corresponding experimental performance for varying surface accommodation in the rolling moment coefficient experiments is shown in Fig. 10. The experimental uncertainty is shown to be greater at higher altitudes and is attributed to the longer time taken for the satellite to roll through the requested 90° angle, due to the lower density and, therefore, lower magnitude of the aerodynamic roll torque. When this manoeuvre takes longer to perform, the other perturbing torques have a greater relative impact on the satellite attitude, increasing the uncertainty in the returned coefficient.
As with the drag coefficient results, the experimental uncertainty also increases with reducing accommodation coefficient. However, this uncertainty remains very small in comparison to the magnitude of the fitted rolling moment coefficient. As a result of the significant effect of incomplete accommodation on the lift coefficient (see Fig. 2), the standard deviation relative to the rolling moment coefficient magnitude is in fact improved for the cases with accommodation coefficients of 0.8 and 0.9 (αn = σt = 0.8 and 0.9), remaining below 5% and 7% respectively. In contrast, the standard deviation as a percentage of the absolute rolling moment coefficient for the case of complete accommodation is much higher, particularly at 400 km altitude (10-25%).
These results suggest that if materials with some incomplete accommodation characteristics are indeed present on the steerable fin surfaces, the experimental uncertainty associated with the determination of the rolling moment coefficient will remain relatively low even at higher altitudes. Furthermore, given the magnitude of this uncertainty in comparison to the rolling moment coefficient, the variation between materials with different accommodation coefficients is likely to be clearly distinguishable from the experimental results using this method.

Conclusions

The Satellite for Orbital Aerodynamics (SOAR) has been developed to perform novel testing of the aerodynamic coefficients of different materials in the rarefied VLEO environment and to provide valuable in-situ data to support further systematic ground-based experimentation in the forthcoming Rarefied Orbital Aerodynamics Research (ROAR) facility at The University of Manchester. The aim of this work is to support the development of spacecraft that can operate sustainably at lower orbital altitudes, with particular application to future commercial Earth observation and communications missions. SOAR will also test two novel materials that have been identified for their potential to improve the aerodynamic performance of spacecraft in the rarefied flow environment of VLEO. Given the novelty and presently uncharacterised nature of the materials that will be tested on the satellite, uncertainty in their aerodynamic performance exists. In this paper, the experimental performance of SOAR has been analysed for different assumed GSI model characteristics and accommodation coefficient values. The performance of the experiments to determine the drag coefficient of the satellite was found to generally deteriorate with reducing surface accommodation and at very low altitudes (i.e. from 300 to 200 km). Co-rotated configurations of the steerable fins were comparatively shown to provide improved experimental performance. However, the configuration most likely to allow clear differentiation of material performance from the drag coefficient is that with pairs of opposing steerable fins rotated normal to the oncoming flow.
In contrast, the performance of the experiments to determine the satellite rolling moment coefficient (associated with the material lift coefficient) was found to be robust against variations in the surface accommodation whilst also improving as the orbital altitude reduces. As a result of this low uncertainty, these experiments are also expected to allow for differentiation of materials with different GSI properties or accommodation coefficients.
Future work will seek to connect the spacecraft aerodynamic coefficients, determined using the method presented in this paper, with the specific material properties of the steerable fin surfaces. The application of different GSI models and associated sets of accommodation coefficients will be considered to provide the best fit to the observed on-orbit behaviour. Subsequent direct characterisation of corresponding material samples from the ROAR facility will be used to provide supporting evidence of the material performance.
As a result of the performed analyses, a number of recommendations can be proposed to improve the aerodynamic coefficient estimation method for SOAR. Development and implementation of improved perturbation models and characterisation of the satellite (e.g. SRP force and torque coefficients and the magnetic dipole) would contribute to reducing the uncertainties associated with the least-squares orbit determination method employed and will be considered in future work. Improvements to the attitude control capabilities of the spacecraft, particularly with regard to the reaction wheel saturation limits, would also help to improve the experimental performance at lower altitudes by increasing the possible experimental duration. However, such improvements to the platform are limited by the constrained nature of the 3U CubeSat form factor.
A number of further improvements may also be considered for future missions. Reduction in the uncertainty of the parameters measured in orbit, namely the spacecraft position, velocity, and attitude, the atmospheric density, and the oncoming flow vector, would help to increase the experimental sensitivity. This would principally require the use of additional or alternative sensors (e.g. a star tracker for attitude determination) and improvements to the flow characterisation instrumentation. Additional on-orbit measurements, for example direct measurement of accelerations and surface temperatures, could also help to improve the experimental performance. Finally, the ability to actively determine the oncoming flow direction and use it as the reference direction for attitude control would be advantageous in reducing the errors due to misalignment with the flow. However, such improvements are likely out of scope for a CubeSat-class mission.
SOAR was deployed into a 421 km × 415 km orbit with an inclination of 51.6° in June 2021. This orbit will naturally decay as the spacecraft has no on-board propulsion, allowing access to different altitudes over the satellite lifetime. The presented results will be used to plan and schedule the experiments that will be performed by the spacecraft as it descends in orbital altitude, to make best use of the limited lifetime and maximise the scientific return of the mission.

Author Contributions NHC and PCER developed the experimental method and analysed the results. NHC performed the simulations, produced the results, and wrote the manuscript. PCER, VH, VS-L, GHH, DG-A, DK, and SS reviewed, provided feedback and approved the manuscript.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Physics"
] |