Receptor Interactions of Angiotensin II and Angiotensin Receptor Blockers—Relevance to COVID-19
Angiotensin II (Ang II) may contain a charge relay system (CRS) involving Tyr/His/carboxylate, which creates a tyrosinate anion for receptor activation. Energy calculations were carried out to determine the preferred geometry for the CRS in the presence and absence of the Arg guanidino group occupying position 2 of Ang II. These findings suggest that Tyr is preferred over His for bearing the negative charge and that the CRS is stabilized by the guanidino group. Recent crystallography studies provided details of the binding of nonpeptide angiotensin receptor blockers (ARBs) to the Ang II type 1 (AT1) receptor, and these insights were applied to Ang II. A model of binding and receptor activation that explains the surmountable and insurmountable effects of the Ang II analogues sarmesin and sarilesin, respectively, was developed and enabled the discovery of a new generation of ARBs called bisartans. Finally, we determined the ability of the bisartan BV6(TFA) to act as a potential ARB, demonstrating effects similar to candesartan by reducing vasoconstriction of rabbit iliac arteries in response to cumulative doses of Ang II. Recent clinical studies have shown that Ang II receptor blockers have protective effects in hypertensive patients infected with SARS-CoV-2. Therefore, the use of ARBs to block the AT1 receptor, thereby preventing the binding of excess Ang II implicated in the cytokine storm of SARS-CoV-2 infection, is a targeted treatment strategy that opens new avenues for disease therapy.
Introduction
The octapeptide angiotensin II (Ang II) (DRVYIHPF) acts on the Ang II type 1 (AT1) receptors in a variety of vascular smooth muscle tissues, eliciting a contractile response. This results in an increase in blood pressure. Several lines of evidence suggest that the interaction of Ang II with its receptors involves a charge relay mechanism (CRS) [1]. Accordingly, folding of the peptide in the hydrophobic membrane receptor environment brings together the Tyr4, His6, and Phe8 side chains of the peptide in a concerted interaction. This results in the transfer of the negative charge at the C-terminal carboxylate to the Tyr4 hydroxyl group via the His6 imidazole (Figure 1), which is analogous to serine proteases. The resulting tyrosinate species, which can be chemically and spectroscopically detected [2], are thought to have a pivotal role not only in activating the receptor but also in the mechanism of receptor desensitization. Thus, (Sar1 Tyr(Me)4)Ang II (sarmesin) is a surmountable competitive antagonist, illustrating the role of the Tyr hydroxyl in agonist activity. In the present report, we applied molecular modeling calculations to gain further insight into details of the CRS. In particular, the architecture of the triad of interacting groups is such that more than one mechanism for generating the tyrosinate species could exist. Furthermore, the Arg2 guanidino group of Ang II appears to have a central role in chaperoning the CRS [3][4][5][6]. At the receptor, the role of the Arg2 guanidino group (which chaperones the CRS in Ang II) appears to be substituted by R167 of the receptor (whereupon Arg2 of Ang II presumably interacts with a negatively charged group(s) on the receptor). Likewise, the Tyr4 hydroxyl of the CRS in Ang II may also exchange with the Y35 hydroxyl of the receptor, thereby eliciting the response mechanism, as elaborated upon in Figure 6.
In Silico Molecular Experiments
Molecular mechanics was used for molecular dynamics simulations with heating and cooling phases to obtain a low energy starting conformation. Thereafter, semiempirical AM1 energy calculations were conducted to refine energy minima values. Calculations were carried out on isolated side chains from Ang II, as well as the whole molecule. The uncharged and charged forms of the amino acid side chains of Tyr and His were represented by phenol/phenolate and imidazole/imidazolate, and the C-terminal carboxylate was represented by acetic acid/acetate. At a physiological pH, the amino acid side chains of Tyr and His are normally uncharged. In contrast, the carboxylate group is negatively charged, and the Arg guanidinium group carries a positive charge. For the purposes of the present calculations, the carboxylate was considered to be a weak acid (pKa = 3-4) that is able to be protonated by a local donor. However, the guanidinium group is considered to be too strongly basic (pKa = 12-13) to surrender a proton.
Animal Model and Ethics Approval
Male New Zealand White rabbits (n = 4) at 7 weeks of age were purchased from Flinders City University (SA, Australia). The animals were individually housed at the Victoria University Werribee Campus Animal Facilities until 16 weeks of age. Upon arrival, animals were given a 7-day acclimatization period. Animals were kept on a 12-h day/night circadian rhythm cycle, and they were maintained at a constant temperature of 21 °C and a relative humidity between 40 and 70%. Food and water were supplied ad libitum. All experimental procedures were conducted in accordance with the National Health and Medical Research Council 'Australian Code of Practice for the Care and Use of Animals for Scientific Purposes' (8th edition, 2013), and they were approved by the Victoria University Animal Ethics Committee (VUAEC#17/013).
Sedation and Anesthesia Protocol
Prior to the administration of inhalant anesthesia, animals were sedated using a 0.25 mg/kg subcutaneous injection of medetomidine at the 'scruff' or base of the neck. Once sedated, animals were transferred into an induction chamber and anaesthetized using 4% isoflurane. Once anesthetized, an incision was made at the lower abdomen and the subcutaneous tissue and lower abdominal muscles were dissected to expose the inferior vena cava. The inferior vena cava was perforated, and exsanguination was allowed for 3 min or until loss of color and dilation of pupils was observed. A T-tube was introduced distal to the aortic arch and flushed with a cold, oxygenated Krebs-Henseleit solution (Krebs). Both iliac arteries were retrieved from each animal, and, under a light microscope, they were cleaned of fat and connective tissue and dissected into 2-3 mm rings in preparation for isometric tension analysis.
Drug Incubations and Isometric Tension Analysis
Rings were immediately and sequentially transferred into adjacent organ baths (Zultek Engineering, VIC, Australia) filled with 5 mL of Krebs, maintained at 37 °C, and continuously bubbled with 95% carbogen. Rings were allowed to acclimatize for 15 min, and they were then mounted between two metal organ hooks attached to force displacement transducers, stretched to 0.5 g, and allowed to equilibrate for 15 min. Rings were re-stretched, refreshed, and equilibrated for a further 15 min. At this time, iliac artery rings were (a) left to rest for 10 min (control; n = 4), (b) incubated with candesartan (10−5 M) for 10 min to serve as an internal control (candesartan; n = 3), or (c) incubated with the novel bisartan (10−5 M) for 10 min (BV6(TFA); n = 3). To determine the ability of the newly formulated bisartan to behave as an ARB, an Ang II dose response (from 10−12 to 10−5 M) was performed. To determine standardized vasoconstriction abilities, rings were washed, allowed to return to baseline, and constricted with KPSS (125 mM).
Statistical Analysis
GraphPad Prism (version 8.4.2, GraphPad Software Incorporated, San Diego, CA, USA) was used to analyze the isometric tension data. Significance was set at p < 0.05, and a two-way ANOVA followed by Sidak's multiple comparisons post hoc test was performed to determine significance in the isometric tension analysis data. All data are presented as mean ± SEM.
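For readers who prefer open tooling, the same type of analysis can be approximated in Python. The snippet below is a hedged sketch only: the long-format column names, the file name, and the pairwise-comparison strategy are assumptions, not the authors' actual GraphPad workflow.

# Sketch: two-way ANOVA with Sidak-adjusted pairwise comparisons in Python.
# Assumes a long-format table with columns 'tension' (response, % KPSS),
# 'treatment' (control / candesartan / BV6) and 'dose' (log10 Ang II dose).
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("isometric_tension.csv")  # hypothetical file name

# Two-way ANOVA with treatment, dose and their interaction.
model = ols("tension ~ C(treatment) * C(dose)", data=df).fit()
print(anova_lm(model, typ=2))

# Sidak-corrected pairwise t-tests of each treatment vs. control at every dose.
pvals, labels = [], []
for dose, sub in df.groupby("dose"):
    ctrl = sub.loc[sub.treatment == "control", "tension"]
    for drug in ("candesartan", "BV6"):
        grp = sub.loc[sub.treatment == drug, "tension"]
        pvals.append(stats.ttest_ind(ctrl, grp).pvalue)
        labels.append((dose, drug))
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="sidak")
for lab, p, r in zip(labels, p_adj, reject):
    print(lab, f"adjusted p = {p:.4f}", "significant" if r else "n.s.")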
Results and Discussion
Semiempirical energy calculations for the isolated triads were first carried out in the absence of Arg, and the calculated energies are given in Table 1. The sum of the heats of formation for the individual components of the triad was compared with the heat of formation for the complex of interacting triads. The difference in the heats of formation between the non-interacting and interacting triads represented the net stabilization energy for complex formation. The computed interaction energies in Table 1 illustrate that the acetate group preferred to bear the negative charge, that the phenol and imidazole groups were similarly less inclined to do so, and that the energy barrier to charge transfer among the three groups was relatively low (~5 kcal/mol). This suggests that another influence, such as a receptor-based group in the vicinity, could readily influence the outcome of charge transfer and determine the resulting location of the negative charge. In accordance with this general concept, fluorescence lifetime studies on Ang II in receptor-simulating environments have demonstrated the presence of tyrosinate anions that become increasingly stabilized as the dielectric constant of the environment decreases [2].
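As an illustration of how the quantities in Table 1 are combined, the net stabilization energy is simply the heat of formation of the interacting complex minus the sum of the heats of formation of the isolated components. The values below are hypothetical placeholders, not numbers taken from Table 1.

# Net stabilization energy of a charge-relay triad (kcal/mol).
# Placeholder heats of formation; substitute the AM1 values from Table 1.
h_isolated = {"acetate": -120.0, "phenol": -25.0, "imidazole": 35.0}  # hypothetical
h_complex = -125.0                                                    # hypothetical

stabilization = h_complex - sum(h_isolated.values())
print(f"Net stabilization energy: {stabilization:.1f} kcal/mol")
# A more negative value indicates a more strongly stabilized complex.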
The Stabilizing Role of Arg in Angiotensin II Conformation
Nuclear Overhauser effect (NOE) connectivities from NMR studies have suggested that the N-terminal part of Ang II is located near the proposed CRS [3]. Since the proximity of the Arg2 guanidino group to the CRS could influence the outcome of charge transfer within the triad of interacting groups, it was of interest to calculate the energetics of the quaternary complex comprising the triad plus the guanidino group. Accordingly, the four individual groups were placed in proximity and allowed to optimize until an energy minimum was reached (Table 2). As expected, the introduction of the positively charged guanidino group to the negatively charged triad increased the stabilization energy of the overall complex (Table 2) compared to the triad alone (Table 1). In addition, the guanidino group disrupted the geometry of the charge relay triad, as shown schematically in Figure 1, through its insertion (together with the carboxylate) between the phenol and imidazole groups (Figure 2). The energy barrier for phenolate formation increased from ~5 kcal/mol for the triad (Table 1) to ~15 kcal/mol for the quaternary complex (Table 2), making charge relay more difficult in the presence of the guanidino group. However, the geometry of the functional groups was such (Figure 2) that it appears possible to generate phenolate anions through the direct interaction of the carboxylate with the phenol, without invoking the imidazole group as an intermediate. On the other hand, the energy calculations shown in Table 2 illustrate that the carboxylate would prefer to abstract the imidazole proton (−66.2 kcal/mol) rather than the phenol proton (−61.1 kcal/mol), leaving open the possibility of a charge relay mechanism as originally proposed (Figure 1), though with the Arg2 guanidino group acting as a chaperone.
Backbone and Mobility of Side Chains in Angiotensin II
These calculations (Tables 1 and 2) assumed unrestricted mobility of the functional groups and may therefore not be representative of the situation in Ang II, where the side chains are tethered to the peptide backbone and may not be able to access such conformational space. However, NMR studies on the superagonist (Sar1)Ang II in receptor-simulating environments [3] have shown the proximity of the three aromatic rings together with the N-terminus, and when the NOE constraints obtained from these NMR studies were included in the modeling process, the conformation shown in Figure 2 emerged. In this conformation, there was an electrostatic interaction of the functional groups parallel to that found for the untethered groups (Figure 2). Surprisingly, these findings deemphasize the role played by the peptide backbone in creating steric constraints and show that the backbone does not restrict the mobility of the side chains or prevent the formation of the optimal geometric arrangement of functional groups. In fact, energy calculations carried out on the intact (Sar1)Ang II molecule indicated that the difference in the heats of formation for the carboxylate (−199 kcal/mol) and tyrosinate (−206 kcal/mol) forms of the peptide was only 7 kcal/mol, suggesting that the energy barrier to charge transfer for the whole molecule was lower than for the untethered side chains. This would seem to indicate that there may be another contributing factor in the intact peptide that facilitates charge transfer, possibly the N-terminal amino group.
Angiotensin II Receptor Blockers
ARBs have provided important drugs for treating cardiovascular diseases such as hypertension. The first nonpeptide ARB reported was the surmountable antagonist losartan, which is metabolized in vivo to the insurmountable inverse agonist EXP3174 (Figure 3). Most therapeutically useful ARBs contain an imidazole-based carboxylate group like EXP3174 (e.g., valsartan, olmesartan, and candesartan), which imparts inverse agonist effects (biased agonism). Inverse agonism occurs when the nature of the ligand, as well as how it interacts with the receptor, prevents the receptor from binding the G protein and dimerizing (which would result in smooth muscle contraction), instead causing the binding of an alternative second messenger (resulting in relaxation).
Crystallography of Angiotensin Receptor Blockers/Angiotensin II Type 1 Receptors Complex
Crystallographic studies of ARBs bound to the AT1 receptor [7,8] have revealed critical interactions between the receptor and the drug molecule. In particular, it has been found that the two anions present in all insurmountable ARBs, namely the imidazole carboxylate and the biphenyl tetrazole (Figures 3 and 4), form salt bridges with the cationic guanidino sidechain of R167 of the receptor. In addition, the Y35 hydroxyl group of the receptor H-bonds to the imidazole N of the ARB [7]. These interactions (Figure 4) reveal a unique network of charge interactions between ARBs and receptors that is characteristically similar to the CRS elaborated for Ang II (except that the carboxylate in ARBs is tethered to the imidazole ring, creating an inductive effect on the imidazole N that accepts the phenolic proton of Y35, rather than a relay of charge per se). The similarity is so striking that it is tempting to speculate that tyrosinate is generated in Ang II at the receptor, not by direct interaction with the C-terminal carboxylate but via relay through the His6 imidazole.
Effects of Tyrosine Methylation on Activity and Conformation
As outlined above, the guanidino group of Arg2 in Ang II appears to be important for chaperoning and maintaining the CRS, and this same interaction may be mimicked (replaced) by R167 of the receptor upon binding. A similar interaction was reproduced here for ARBs in the form of two salt bridges (carboxylate and tetrazole) with R167 of the receptor (Figure 5A) (the tetrazole of ARBs and the carboxylate of Ang II may also bind to K199 (Figure 4)). When the ARB and Ang II structures are overlaid, the tyrosinate of Ang II corresponds to the carboxylate of the ARB and the carboxylate of Ang II corresponds to the tetrazolate of the ARB [1]. This orientation has been confirmed by structure-activity studies, which have revealed that removal of the negative charge by methylation of the TyrOH in sarilesin has the same effect as the neutralization of the carboxylate in ARBs (Figure 3) (i.e., changing both molecules from insurmountable into surmountable antagonists). Apparently, the existence of a salt bridge with R167, which increases the strength of binding of ARBs to the receptor, is what differentiates an insurmountable antagonist from a surmountable one. Sarilesin, which is an insurmountable analogue that demonstrates negative cooperativity/inverse agonism identical to ARBs in many tissues [1], presumably affords the same salt bridge interaction with R167 as a direct consequence of the tyrosinate anion provided by the CRS. Accordingly, when the TyrOH of sarilesin is methylated, this salt bridge is converted to a weaker ion-dipole bond, and the result is a surmountable antagonist [1].
Critical Interaction of AT1R 35Y with Angiotensin II and Angiotensin Receptor Blockers
Interestingly, the methylation of the Tyr hydroxyl in Ang II results in a competitive surmountable antagonist (sarmesin), implying that tyrosinate is also required for agonist activity (in addition to its role in the insurmountable blockade by sarilesin outlined above). Again, there is a repeating pattern when connecting receptor-binding interactions with bioactivity. What makes Ang II itself different from sarilesin is the Phe ring at the C-terminus, a structural difference that endows agonist activity. One possible explanation may be related to the critically important Y35, which is known to be essential for the binding of ARBs and Ang II [7]. In ARBs, the Y35 phenolic group bonds to the imidazole N (Figures 5A and 6A), and it follows that Y35 should also be in the right place to potentially interact with the imidazole N of His in Ang II. For sarilesin, Y35 may be unable to access the imidazole of His (without the assistance of other receptor-based groups) because of the complexity of the CRS interactions. However, in Ang II itself (Figure 6B), the presence of the Phe8 ring offers the possibility of a ring:ring interaction with Y35, which, in turn, could draw the Y35 ring closer to the CRS (probably reinforced by the preexisting Phe:His ring interaction in Ang II [3]). Note that an aromatic ring has a quadrupole moment, which allows it to form a slipped-parallel or perpendicular electrostatic interaction with another ring; consequently, aromatic rings do not interact with hydrophobic sidechains, such as the Ile8 in sarilesin, which is why such analogues are not agonists. Indeed, it is entirely possible that the Tyr4 of Ang II can swap roles with the Y35 of the receptor, and that this interchange is the basis for the agonist activation of the receptor (Figure 6B). Thus, the CRS may alternate from Ang II Tyr4 to receptor Y35 (on-off mass action), the latter option being reinforced by the concerted action of intracellular G-protein binding and receptor dimerization, leading to the positive cooperativity (amplification) of the contractile response [9,10]. When the supply of G protein is exhausted (e.g., at supramaximal doses of Ang II), this concerted mechanism for receptor activation can no longer occur, and Ang II may then bind like sarilesin and become an insurmountable blocker, thereby causing tachyphylaxis effects.
Figure 6. Binding of (A) ARB and (B) Ang II to the angiotensin AT1 receptor. (A) ARBs characteristically contain a carboxylate and a tetrazole group that form two salt bridges with Arg167 of the receptor, resulting in insurmountable blockade of Ang II. When the carboxylate anion is neutralized, as in losartan (CH2OH) or olmesartan (CONH2 in R239470), the salt bridge is changed to a weaker ion-dipole bond and the molecule becomes a surmountable antagonist. (B) The charge transfer and separation created by the CRS allow Ang II to bind in a manner similar to ARBs (Figure 5A), though with tyrosinate replacing the carboxylate of ARBs and the C-terminal carboxylate standing in for the tetrazole of ARBs. Like ARBs, the peptide analogue sarilesin can form a salt bridge via its tyrosinate with R167 and is consequently an insurmountable blocker. In parallel with ARBs, methylation of the TyrOH of sarilesin eradicates this salt bridge and converts it into a surmountable antagonist. In contrast, the presence of the Phe8 ring in Ang II provides agonist activity by attracting the receptor Y35 ring towards the CRS, eventually allowing the Y35 OH group to H-bond with the His6 imidazole N of Ang II (exactly equivalent to ARB binding in Figure 5A) and displacing the TyrOH of Ang II so that it no longer carries a charge and cannot form a salt bridge with R167 of the receptor. This exchange is reversible and requires a cooperative interaction involving the binding of the G protein intracellular messenger and receptor dimerization. When no G protein is available (at supramaximal doses), Ang II can bind just like sarilesin and become an insurmountable blocker, invoking tachyphylaxis.
Surmountable and Insurmountable Blockers
In this model (Figure 5), the bioactivities of agonists, surmountable antagonists, and insurmountable blockers for both peptides and nonpeptides could be accounted for by an interaction with a single residue on the receptor. Thus, the quality of the bond between the ligand and the receptor R167 guanidino group determines the outcome, with (1) a strong salt bridge providing for insurmountable block/inverse agonism (sarilesin or an ARB with a carboxylate, like EXP3174), (2) a weaker ion-dipole bond providing for surmountable antagonism (sarmesin, O-methyl-sarilesin, or an ARB without a carboxylate, like losartan), and (3) disrupted (exchange) bonding (together with other cooperative factors) leading to agonist action (Ang II) [11].
Bisartans: A New Class of Sartans
This model of receptor interaction (Figure 5) has enabled the development of more potent nonpeptide Ang II mimetics as potential drugs for treating hypertension and other cardiovascular diseases [12,13]. These new-generation drugs, called bisartans, contain two tetrazole groups (the carboxylate present in all insurmountable ARBs was replaced by its functional mimetic, tetrazole) that are mounted on an imidazole template as biphenyl tetrazole groups. Accordingly, both tetrazole groups are available to form salt bridges with R167 on the receptor (as per Figure 5), creating an insurmountable blocker. Additionally, the imidazole cation is at the right distance to mimic the role of the Arg2 sidechain of Ang II and therefore provide an additional salt bridge to the receptor, which may explain the increased potency of bisartans (Figure 7).
The Novel Bisartan BV6(TFA) Potently Blunts Angiotensin II-Mediated Vasoconstriction in Rabbit Iliac Arteries
To evaluate the newly synthesized bisartan as an ARB mimetic, iliac artery rings collected from rabbits were incubated with BV6(TFA). An Ang II dose-response assessment was performed to determine the ability of BV6(TFA) to inhibit Ang II-mediated vasoconstriction (Figure 8). Vasoconstriction responses were then compared to control rings (no incubation) and internal control rings incubated with candesartan [14]. As expected, candesartan potently inhibited vasoconstriction in response to cumulative doses of Ang II, from Ang II [10−9.5 M] to [10−7.5 M] (Table 3). Interestingly, similar results were observed in rings incubated with BV6(TFA), as vasoconstriction in response to cumulative doses of Ang II was significantly inhibited when compared to control rings: from Ang II [10−9.5 M] (BV6(TFA): 0.49 ± 0.73% vs. control, * p < 0.05) to Ang II [10−7.5 M] (BV6(TFA): 2.08 ± 1.12% vs. control: 28.10 ± 5.78%, *** p < 0.001) (Table 3). Vasoconstriction was observed at Ang II [10−6.0 M] to Ang II [10−5.0 M], but no significant difference was detected. Furthermore, no significant difference was observed between candesartan, a known AT1 receptor antagonist [14], and BV6(TFA). This suggests that BV6(TFA) may act on the AT1 receptor, potentially eliciting anti-hypertensive effects as a treatment for cardiovascular diseases. However, further studies are required to determine whether the vasoconstriction observed at the higher doses of Ang II could be reduced or blocked by adjusting the dose of BV6(TFA).
Figure 8. To determine the ability of BV6(TFA) to behave as an ARB, like candesartan, rabbit iliac arteries were incubated and then constricted using cumulative doses of Ang II. Candesartan and the novel bisartan BV6(TFA) potently inhibited vasoconstriction responses to Ang II at doses [10−9.5 M] to [10−7.5 M] (mean ± SEM is shown; significance is presented in Table 3).
Table 3. Significance of vasoconstriction in response to cumulative doses of angiotensin II between control, candesartan, and BV6(TFA) incubations, obtained from Figure 8.
Angiotensin-converting enzyme 2 (ACE2) and the renin-angiotensin system (RAS) inhibitors reduce excess Ang II and increase the antagonist heptapeptides alamandine and aspamandine, which counterbalance Ang II and maintain homeostasis and vasodilation [13]. In particular, the CRS of Ang II described in this study explains tyrosine-based ligand-receptor interactions well and can be applied to the new, aggressive SARS-CoV-2 mutations, which are a pressing issue. Tyrosine appears to be a major player here; the N501Y mutation of the UK variant B.1.1.7 is an example showing that tyrosine binds ACE2 much more strongly than asparagine. The RAS, and in particular ACE2, is the entry point of the virus, and this study contributes significantly to the understanding of the molecular mechanisms of Ang II and, subsequently, of the driving forces that lead to the infectivity and transmissibility of the new mutations. We have already reported the first evidence for the benefit of ARBs as promising repurposed drugs to treat the infection in recent publications [13,15-19].
The protective effect of ARBs against SARS-CoV-2 infection was further validated and confirmed in a recent open multicenter randomized clinical trial using the ARB telmisartan, which has been postulated to treat coronavirus disease 2019 (COVID-19)-induced lung inflammation [20]. Telmisartan is the strongest binder among all ARBs, and it appears to disrupt the binding between the receptor-binding domain of the spike protein and ACE2 [21]. The mutations in SARS-CoV-2 have led to stronger binding between the receptor-binding domain of the spike protein and ACE2, resulting in increased infectivity [22]. Telmisartan, which is large and rich in pi electrons, may disrupt this binding, leading to protection from infection. Overall, the elevation of Ang II in the RAS seems to play a pivotal role in promoting inflammation and tissue injury. The hypothesis that the RAS is involved in the inflammatory process triggered by the entry of SARS-CoV-2 into tissues (the primary site being the lungs) holds that the downregulation of ACE2 causes an imbalance in the RAS, resulting in elevated Ang II concentrations (pro-inflammatory) and the cytokine storm seen in COVID-19 patients. ARBs, by upregulating ACE2 and decreasing Ang II, may therefore offer an answer to COVID-19.
Conclusions
The present study supports the occurrence of a charge transfer system in angiotensin and elaborates on the geometry of the interaction of the functional groups. The introduction of the Arg side chain into the network alters the geometry of the charge relay interaction and has a stabilizing influence on the folded, compact charge transfer conformation. If this conformation approximates the one present when Ang II binds to its receptor, then the Arg guanidino group can be visualized as acting as a chaperone for the angiotensin CRS. Mutation-bioactivity studies on AT1 receptors and crystallographic data for ARB binding to the AT1 receptor have implicated R167 (necessary for insurmountable effects) and Y35 (essential for the binding of Ang II and ARBs) as anchor residues on the receptor [7]. By forming a salt bridge with R167, the insurmountable Ang II analogue sarilesin, as well as the insurmountable nonpeptide ARBs, apparently lock the receptor into a conformation that cannot bind G protein but can bind an alternative messenger, leading to inverse agonism [7,8]. For the receptor binding of Ang II itself, we propose a model in which the Arg2 of Ang II, which chaperones the CRS, is replaced by the R167 of the receptor upon binding. This interaction sets up a situation in which the Tyr4 of the CRS can be replaced by the Y35 of the receptor, creating an intermolecular exchange mechanism for activating the receptor response. The mutation of Y35 to A35 abolishes binding [7]. The Phe8 ring of Ang II, which is essential for agonist activity, may have a functional role in guiding the Y35 ring, through quadrupolar ring:ring interactions, into the correct alignment for receptor activation. Such considerations have led to the development of a new generation of nonpeptide Ang II mimetics, including the bisartans, as potential drugs for treating hypertension and other cardiovascular diseases [12,13,19]. Recent clinical findings from hospitalized hypertensive patients infected by SARS-CoV-2 have shown a protective effect of ARBs, with a reduction of morbidity and mortality. The crystal structure of the RBD spike protein/ACE2 complex revealed critical interactions that link the two chains, and this binding is strengthened by mutations that stabilize the complex. Disrupting this binding is a key to therapeutics. Thus far, researchers have reported a number of repurposable drugs that interfere at the interface, disrupting binding and consequently decreasing infectivity and transmissibility. One of them is telmisartan, as postulated in a recent clinical trial. ARBs in general appear to be promising repurposed therapeutics for SARS-CoV-2 infection, as indicated by clinical and in silico studies. Further studies are required to confirm these early findings.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
"Medicine",
"Chemistry"
] |
NeuTomPy toolbox, a Python package for tomographic data processing and reconstruction
In this article we present the NeuTomPy Toolbox, a new Python package for tomographic data processing and reconstruction. The toolbox includes pre-processing algorithms, artifact removal and a wide range of iterative reconstruction methods, as well as the Filtered Back Projection algorithm. The NeuTomPy Toolbox was conceived primarily for neutron tomography datasets and developed to support the need of users and researchers to compare state-of-the-art reconstruction methods and choose the optimal data processing workflow for their data. In several cases, sparse-view datasets are acquired to reduce scan time during a neutron tomography experiment; hence, there is great interest in improving the quality of the reconstructed images by means of iterative methods and advanced image-processing algorithms. The toolbox has a modular design, multi-threading capabilities, and supports the Windows, Linux and Mac OS operating systems. The NeuTomPy Toolbox is open source and released under the GNU General Public License v3, encouraging researchers and developers to contribute. In this paper we present an overview of the main toolbox functionalities and show a typical usage example.
Motivation and significance
Neutron Tomography (NT) has become a routine method at many neutron sources to non-destructively investigate the inner structure of a wide range of objects. The commercial software Octopus [1] by Inside Matters is a well-established tool for the reconstruction of tomographic data at neutron imaging beamlines. However, this software requires a significant investment, and users can generally perform only preliminary data processing with Octopus at the imaging facility. Data analysis is a crucial step for the output of an experiment, so users usually spend time optimizing the data processing mainly at home. This creates a strong demand for free and powerful tools to perform data processing of neutron data.
Image acquisition in NT is very time-consuming compared with X-ray Computed Tomography (CT) and, in several cases, undersampled datasets are acquired to reduce the scan time and optimize beamtime usage during an experiment. The widely used Filtered Back Projection (FBP) algorithm generates reconstructed images affected by aliasing artifacts when the number of projections does not satisfy the Nyquist-Shannon condition [2]. Iterative reconstruction methods generally outperform analytical methods, such as FBP, in handling under-sampled datasets [3]. The Octopus software provides only two reconstruction methods, FBP and the Simultaneous Algebraic Reconstruction Technique (SART); modern reconstruction methods are not implemented. On the other hand, several open-source tools for tomographic reconstruction are available nowadays, but they are mainly developed for X-ray CT and are not ready to handle neutron data. Some image pre-processing algorithms are mandatory in NT to obtain an accurate reconstruction, i.e. the estimation of the rotation axis tilt and the related registration of the projections, the suppression of gamma spots, and the data normalization with respect to the radiation dose. Reconstruction tools for X-ray CT generally include some, but not all, of these correction algorithms. For example, the ASTRA toolbox [4] is a Matlab and Python package that provides highly efficient implementations of iterative methods for CPUs and GPUs. The ASTRA toolbox is focused only on the reconstruction step and does not include any pre-processing or post-processing algorithms, or functions to read and write data. On the other hand, the Python package TomoPy [5] includes several pre-processing and post-processing algorithms and provides CPU implementations of a wide range of iterative reconstruction methods. However, TomoPy is not ready to handle neutron data, since it does not include functions to estimate the rotation axis tilt and to compute the related correction on projection data. Furthermore, TomoPy is available only for the Linux and Mac OS operating systems. MuhRec [6] is the only free software that was conceived for NT. It includes several filters and pre-processing algorithms, and it is currently the main free alternative to Octopus for data processing of neutron data. However, at the time of writing, MuhRec does not provide any iterative reconstruction method.
In this paper we present the NeuTomPy Toolbox, a new Python package for tomographic data processing that is specifically designed to compensate for the shortcomings of the aforementioned software tools. The NeuTomPy Toolbox was conceived primarily for NT and developed to support the need of users and researchers to compare state-of-the-art reconstruction methods and choose the optimal data processing workflow for their data. The toolbox has a modular design, multi-threading capabilities, and supports the Windows, Linux and Mac OS operating systems. The NeuTomPy Toolbox is open source and released under the GNU General Public License v3, allowing users to freely use it and encouraging researchers and developers to contribute. Previously, this package has been used for comparative studies [3,7] of reconstruction methods in NT, and it is now freely distributed to the neutron imaging community.
Software description
Here we describe the architecture of NeuTomPy Toolbox and present its main functionalities.
Software architecture
The NeuTomPy Toolbox is written in Python. We chose this programming language because it is open source, cross-platform, and human-readable, and it allows researchers to use and contribute to the toolbox easily. The toolbox is divided into several sub-modules, each of which represents a particular phase of a typical CT reconstruction pipeline. The entire chain is represented in Fig. 1. The NeuTomPy Toolbox exploits several Python libraries for scientific computing and image processing, namely NumPy [8], NumExpr [9], SciPy [10], scikit-image [11], OpenCV [12] and SimpleITK [13]. In particular, the CT reconstruction step is powered by the ASTRA Toolbox. NeuTomPy combined with ITK-SNAP [14] or 3D Slicer [15] turns out to be a complete open-source software suite for CT.
Software functionalities and sample code snippets
The NeuTomPy Toolbox allows the user to perform the steps of a typical CT reconstruction workflow (Fig. 1). The first task is the reading of a raw dataset. The implemented reader handles TIFF and FITS files and converts a stack of images into a NumPy array. A dataset containing raw projections, dark-field and flat-field images, and the projection at 180° can be read by:

import neutompy as ntp
proj, dark, flat, proj_180 = ntp.read_dataset(proj_180=True)

where the user selects the data to read from a dialog box. Subsequently, the projection data must be normalized with respect to the dark-field and flat-field images to compute the transmission images. If the source intensity is not stable, the images can also be normalized with respect to the radiation dose [3]. In this case, the user must specify a region of interest (ROI) corresponding to a background area not covered by the specimen in any of the projections (the dose ROI). It can be specified in three different ways: drawing a rectangular selection interactively, specifying the ROI's coordinates, or reading an ImageJ .roi file. Normalization with interactive selection of the dose ROI is performed with the function normalize_proj, which returns a 3D array containing the stack of normalized projections (norm) and a 2D array representing the normalized radiograph at 180° (norm_180). A common experimental issue in NT is the misalignment of the rotation axis with respect to the vertical axis of the detector.
The function correction_COR evaluates the horizontal offset and the tilt angle by minimizing the squared error between two opposite radiographs computed at different vertical positions, as described in [6], and finally it registers all the projections. The Python instruction for this task is:

norm = ntp.correction_COR(norm, proj_0, proj_180)

where proj_0 and proj_180 are the projections (raw or normalized) at 0° and 180°, respectively. The user interactively selects different ROIs where the sample is visible. Subsequently, the results and some information about the evaluation of the rotation axis are shown. We report in Fig. 2 an example of the rotation axis correction: the difference between the projection at 0° (P0) and the mirrored projection at 180° (Pπ, flipped) is shown before and after the correction on the left and right side, respectively.
The NeuTomPy Toolbox includes an outlier removal filter, which replaces a pixel value with the median of the neighbouring pixels if it deviates from the median by more than a certain value. This threshold can be specified by the user as a global value or proportional to the local standard deviation. A destriping filter, based on combined wavelet and Fourier analysis, is also provided to suppress ring artifacts [16].
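The outlier (gamma-spot) removal rule described above can be illustrated with a few lines of NumPy/SciPy. This is a conceptual sketch of the algorithm, not the toolbox's own implementation.

# Sketch of the outlier-removal rule: a pixel is replaced by the median of its
# neighbourhood if it deviates from that median by more than a fixed threshold.
import numpy as np
from scipy.ndimage import median_filter

def remove_outliers(img, size=3, threshold=0.2):
    """Replace pixels deviating from the local median by more than `threshold`."""
    med = median_filter(img, size=size)
    mask = np.abs(img - med) > threshold
    out = img.copy()
    out[mask] = med[mask]
    return out

# Minimal demonstration on a synthetic projection with a few bright gamma spots.
rng = np.random.default_rng(0)
img = rng.normal(1.0, 0.01, size=(256, 256))
img[rng.integers(0, 256, 20), rng.integers(0, 256, 20)] = 5.0  # simulated gamma spots
cleaned = remove_outliers(img, size=3, threshold=0.2)
print("outliers remaining:", int((np.abs(cleaned - 1.0) > 1.0).sum()))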
The reconstruction module includes all CPU- and GPU-based algorithms for 2D parallel-beam geometry implemented in the ASTRA toolbox, plus some additional reconstruction methods distributed as ASTRA plugins. The available algorithms are summarized in Table 1. The instruction to perform a CT reconstruction is the following:

rec = ntp.reconstruct(norm, angles, method, parameters)

where rec is the reconstructed volume, angles is a one-dimensional array containing the view angles in radians, method is a string which indicates the algorithm to use, and parameters is a Python dictionary that contains specific settings of the reconstruction algorithm.

Fig. 1. Diagram representing the typical CT data processing steps that can be performed by the NeuTomPy toolbox. The package has a modular structure that follows the data processing chain.

The allowed values for method and parameters follow the convention of the ASTRA toolbox, as reported in the documentation [17]. For example, the following instruction is used to compute, with GPU support, an FBP reconstruction with the Hamming filter:

rec = ntp.reconstruct(norm, angles, method="FBP_CUDA", parameters={"FilterType": "hamming"})

while a SIRT reconstruction with 100 iterations and pixel values limited to the range [0, 2] can be performed by:

rec = ntp.reconstruct(norm, angles, method="SIRT_CUDA", parameters={"iterations": 100, "MinConstraint": 0.0, "MaxConstraint": 2.0})
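Putting the calls described in this section together, a minimal end-to-end script might look like the following. This is a hedged sketch: the exact keyword arguments of normalize_proj and the name of the output-writing function are assumptions rather than verbatim toolbox API, so the official documentation should be consulted.

# Hedged sketch of a complete NeuTomPy workflow: read, normalize, correct the
# rotation axis, reconstruct with GPU-based SIRT and save the result.
import numpy as np
import neutompy as ntp

proj, dark, flat, proj_180 = ntp.read_dataset(proj_180=True)

# Normalization with interactive dose-ROI selection (keyword name assumed).
norm, norm_180 = ntp.normalize_proj(proj, dark, flat, proj_180=proj_180)

# Rotation-axis offset/tilt estimation and projection registration.
norm = ntp.correction_COR(norm, norm[0], norm_180)

angles = np.linspace(0, np.pi, norm.shape[0], endpoint=False)
rec = ntp.reconstruct(norm, angles, "SIRT_CUDA",
                      parameters={"iterations": 100, "MinConstraint": 0.0})

ntp.write_tiff_stack("rec/slice", rec)  # assumed name of the TIFF writer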
The NeuTomPy Toolbox allows the user to compare and evaluate the performance of different reconstruction algorithms in terms of several image quality indexes. The implemented metrics are the Contrast-to-Noise Ratio (CNR) [3], the Normalized Root Mean Square Error (NRMSE) [3], an edge quality metric [3] and the Structural Similarity Index (SSIM) [18].
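The image-quality indexes mentioned above are easy to reproduce outside the toolbox as well. The snippet below sketches NRMSE, SSIM and CNR with NumPy and scikit-image; these are generic implementations and definitions, not the NeuTomPy function names.

# Generic implementations of the quality metrics discussed in the text.
import numpy as np
from skimage.metrics import structural_similarity

def nrmse(rec, ref):
    """Normalized root mean square error with respect to a reference image."""
    return np.sqrt(np.mean((rec - ref) ** 2)) / (ref.max() - ref.min())

def cnr(img, roi_signal, roi_background):
    """Contrast-to-noise ratio between a signal ROI and a background ROI (slice tuples)."""
    s, b = img[roi_signal], img[roi_background]
    return abs(s.mean() - b.mean()) / np.sqrt(s.var() + b.var())

ref = np.random.default_rng(1).random((128, 128))
rec = ref + np.random.default_rng(2).normal(0, 0.05, ref.shape)
ssim = structural_similarity(rec, ref, data_range=float(ref.max() - ref.min()))
print(f"NRMSE = {nrmse(rec, ref):.3f}, SSIM = {ssim:.3f}, "
      f"CNR = {cnr(rec, np.s_[20:40, 20:40], np.s_[80:100, 80:100]):.2f}")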
Illustrative examples
Here we demonstrate the possibility of performing several reconstruction algorithms and comparing them quantitatively using the NeuTomPy toolbox. We used neutron images of a phantom sample acquired at the IMAT beamline [23,24], ISIS neutron spallation source, UK.

Table 1. List of the CT reconstruction methods included in the NeuTomPy Toolbox for two-dimensional parallel-beam geometries.

Method        CPU  GPU
BP [2]        x    x
FBP [2]       x    x
ART [2]       x
SART [2]      x    x
CGLS [19]     x    x
SIRT [20]     x    x
NN-FBP [21]   x    x
MR-FBP [22]   x    x

The phantom, already analyzed in a previous work [3], is an aluminium cylinder containing four holes of different diameters and filled with iron powder. We used for CT reconstruction an under-sampled dataset with 1/3 of the number of projections required by the Nyquist-Shannon condition. We performed FBP, SIRT and CGLS reconstructions and compared them in terms of the image quality indexes NRMSE, SSIM and CNR. We consider the SIRT reconstruction (200 iterations) of a full-view dataset, which is sampled to fulfill the Nyquist-Shannon condition, as the reference image for the computation of the NRMSE and SSIM. The CNR was computed considering a ROI that includes one iron rod, with the second ROI outside the sample. The results are shown in Fig. 3. It is clear that the two iterative algorithms outperform the FBP method.
In fact, the CGLS and SIRT reconstructions have higher CNR and SSIM and lower NRMSE than FBP, indicating better image quality. In general, the under-sampling and the noise in the projection data cause a broadening of the attenuation-coefficient distribution in the reconstructed images. However, unlike the FBP reconstruction, the CGLS and SIRT images are characterized by a bimodal distribution of the gray values, which reflects the composition of the sample. The source code of this analysis is omitted here for brevity; the source code for this and other examples can be found in the GitHub repository.
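Although the analysis code is omitted, a condensed sketch of how such an FBP/SIRT/CGLS comparison could be scripted with the toolbox's reconstruct function is shown below. The input arrays are random placeholders standing in for the normalized full-view and sparse-view projection stacks, the array layout is assumed, and the metric calls are generic rather than the toolbox API.

# Hedged sketch of the algorithm comparison on an under-sampled dataset.
import numpy as np
import neutompy as ntp
from skimage.metrics import structural_similarity

# Placeholder inputs; in the real analysis these come from the pre-processing steps above.
rng = np.random.default_rng(0)
norm_full = rng.random((300, 1, 256)).astype(np.float32)
norm_sparse = norm_full[::3]                       # keep 1/3 of the projections
angles_full = np.linspace(0, np.pi, 300, endpoint=False)
angles_sparse = angles_full[::3]

# Reference image: SIRT (200 iterations) of the fully sampled dataset.
reference = ntp.reconstruct(norm_full, angles_full, "SIRT_CUDA",
                            parameters={"iterations": 200})

methods = {"FBP": ("FBP_CUDA", {"FilterType": "ram-lak"}),
           "SIRT": ("SIRT_CUDA", {"iterations": 200}),
           "CGLS": ("CGLS_CUDA", {"iterations": 100})}

data_range = float(reference.max() - reference.min())
for name, (method, params) in methods.items():
    rec = ntp.reconstruct(norm_sparse, angles_sparse, method, parameters=params)
    nrmse = np.sqrt(np.mean((rec - reference) ** 2)) / data_range
    ssim = structural_similarity(rec.squeeze(), reference.squeeze(),
                                 data_range=data_range)
    print(f"{name}: NRMSE = {nrmse:.3f}, SSIM = {ssim:.3f}")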
Impact
Data processing is the last step of an NT experiment, but it is crucial for the interpretation of the results. Advanced image-processing algorithms can extract hidden information from the data and reduce the tomographic scan time. Hence, new software tools, specifically designed for neutron data, are required to compare state-of-the-art image-processing algorithms. Working on robust methods and tools to improve image quality means getting better output from NT experiments. However, state-of-the-art iterative reconstruction methods are not implemented in Octopus and MuhRec, which are the leading software packages for NT reconstruction. The NeuTomPy Toolbox solves this shortcoming because it is ready to work with neutron data and allows users to perform and compare several iterative reconstruction methods. Researchers can define the optimal data processing workflow for their specific problem using the NeuTomPy Toolbox. The code is open source, hence developers and researchers are invited to contribute.
Conclusions
In this paper we presented the NeuTomPy Toolbox, a new Python package for tomographic data processing. We demonstrated that the toolbox is ready to work with neutron data and allows researchers to establish the optimal data processing workflow for their specific investigation. The first release includes pre-processing algorithms, artifact removal and a wide range of classical and state-of-the-art reconstruction methods. The NeuTomPy Toolbox supports the Windows, Linux and Mac OS operating systems and is released as open source. Researchers can freely use it and contribute to the project.
The future development will involve improvement of pre-processing algorithms (e.g. scattering correction), addition of new reconstruction methods and finally the implementation of a Graphical User Interface (GUI).
"Computer Science",
"Physics"
] |
Analysis of the Design and Engineering-Process towards a First Prototype in the Field of Sports and Vitality
The scope of technology has expanded towards areas such as sports and vitality, offering significant challenges for engineering designers. However, little is known about the underlying design and engineering processes used within these fields. Therefore, this paper aims to gain an in-depth understanding of these types of processes. During a three-day design competition (hackathon), three groups of engineers were challenged to develop experience-able prototypes in the field of sports and vitality. Their process was monitored based on the Reflective Transformative Design process (RTD-process) framework, which describes the various activities that are part of the design process. Groups had to keep track of their activities, and six group reflection sessions were held. Results show that all groups used an open and explorative approach and frequently swapped between activities, enabling them to reflect on their actions, while spending more time on envisioning and creating a clear vision seems to relate to the quality of the design concept.
Introduction
The scope of engineering design has expanded towards areas like sports, physical activity and vitality. There are several arguments for this. First, there is a growing awareness of the need to tackle physical inactivity and sedentary behaviour, which is a major public health concern [1]. Second, there is increasing attention for healthy lifestyles and vitality. Nowadays, people can choose their own way of being involved in sports, compatible with their own individual lifestyle and consistent with their own interests [2]. When incorporating these characteristics, sports can play a determining role towards vitality and contribute to a healthier lifestyle. Third, recent developments in low-cost sensor technologies have opened new markets and possibilities [3]. Fourth, the sports participation sector has become a significant economic sector [4]. For example, in recent years there has been an exponential increase in the availability and use of sports and physical activity-related monitoring devices [5,6].
It is obvious that technology creates new opportunities for the field of sports and vitality, but it also offers significant challenges for engineering designers. For instance, Wilson and colleagues [7] found that, whereas in general product design multiple iterations were used within and between different design phases, in sports product design iterations were mostly used within each design phase and rarely between design phases. Moreover, the design space in this field is enormous and requires a distinctive approach and envisioning of societal and personal needs. Among other things, the target group is extremely heterogeneous in terms of physical abilities, training load responses, motivational drivers and attitudes [3,5], and designs aim to create behaviour changes in patterns that are deeply rooted in daily life [8]. Therefore, this paper aims to unravel the processes used by future engineering designers towards a first prototype in the field of sports and vitality.
Reflective Transformative Design Process
The Reflective Transformative Design process framework (RTDP), introduced by Hummels and Frens [9], is an open framework for designing, but it can also be used to describe and analyse design processes. Its structure, open and flexible by nature and based on activities and the links between them, provides an open yet structured way to analyse any design process. The RTDP is "a design process, particularly aimed to support the design of disruptive innovative and/or intelligent systems, products, and services" [9] (p. 147). The model consists of five circles (Figure 1). The middle circle, 'decisions', can be seen as a process of making decisions based on information from the other four circles. The remaining four circles can be seen as strategies to generate or gather information. 'Envisioning' is information gathering to create a designer's vision; it is used to give direction to the design process. Like every circle, this vision is small in the beginning, based on little information, and must develop during the process. 'Exploring & validating' is used to gather information by validating design decisions through experience-able prototypes, for example by having experts test a concept or by validating a simple prototype in real life. The circle of 'thinking' consists of analysing and abstracting to create a framework or model. 'Making' is the last strategy: creating experience-able prototypes and producing experiential information. Hummels and Frens [9] stated that "Design making enables the designer to use her intuition and through making the designer can open up new solution spaces that go beyond imagination" [9] (p. 161). Given the connections and relatedness between all circles and activities, it is recommended to swap frequently from one circle to another. Through swapping, engineers are forced to incorporate different kinds of information to feed the design decisions. This enables the engineers to reflect on their activities in and during action. In this paper, we analyse the design and research processes of future design engineers towards vitality- and sports-focused prototypes through the RTDP model.
Hackathon Design Challenge
During a three-day Hackathon, three groups of future engineering designers (n = 14) were challenged to rapidly prototype practical ideas. The focus was to design for sports and vitality, with specific attention to health-related aspects such as increasing (sports) active behaviour, reducing sedentary behaviour, and reducing stress. Participants joined a topic that interested them. The outcome of the hackathon had to be a pitch of the concept to the audience and jury, including a working prototype. The research conducted was in line with the ethical principles of the Declaration of Helsinki and the Departmental Research Board. The privacy of all participants was guaranteed, and all data were anonymized before analysis.
Procedure
An interactive, qualitative study design was chosen for this study. A protocol based on the RTDP framework [9] was used to map the engineering design process. Each group was responsible for keeping track of all activities conducted. Sticky notes in different colours (representing different members) were used to write down information on each activity and were placed on an overview cardboard. For each activity, the following questions were answered: (i) what activity was done; (ii) how was the activity performed; (iii) did group members work alone or with others; and (iv) at what time did they start and what was the duration of the activity. In addition, a minimum of six short sessions (10 min) at fixed moments (11 am and 6 pm each day) were conducted. These moments stimulated reflection on the groups' activities, but also gave the moderator the opportunity to validate the information on the sticky notes with the participants.
Measurements and Analysis
The following measurements of the design process were calculated based on the information on the sticky notes: total number of activities, total time spent on the activities, average time per activity, percentage of the number of activities per strategy and percentage of the time spent per strategy. Next to the process, the outcome of the hackathon was also measured. Seven experts formed a jury and scored the pitches and the prototypes via a multi-item list. Each jury member was asked to rank the groups, and the rankings of the jury members were summed: a group ranked first by a jury member received 1 point, second place corresponded to 2 points, and last place to 3 points. The group with the fewest points won the competition. Spearman's Rho was used to correlate the design process measurements to the jury scores.
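For concreteness, the ranked-correlation analysis described above can be reproduced with a few lines of Python. The sketch below is not part of the original study: it applies SciPy's spearmanr to the envisioning times and summed jury points reported later in the Results, and with only three groups the coefficient is purely illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

# Per-group values taken from the Results section (groups G1, G2, G3):
# minutes spent on envisioning, and summed jury points (fewer points = better rank).
envisioning_minutes = np.array([150, 150, 315])
jury_points = np.array([18, 15, 9])

# Spearman's Rho between one process measurement and the jury outcome.
# A negative coefficient means that more envisioning time went together with
# fewer jury points, i.e. a better ranking.
rho, p_value = spearmanr(envisioning_minutes, jury_points)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```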
Design Process
First, some general results are described; next, we focus on differences between the groups (for an overview see Table 1). The results reveal differences, between groups and between group members, in the total time spent on the concepts. For instance, some participants spent around 2000 min, while others spent only 1155 min. Also, the contribution of the four information gathering/generating strategies was not equally distributed. The future engineering designers spent only between 3.6% and 7.6% of their time on envisioning. Moreover, this strategy was mostly used on the first day, and rarely during the second and final day. Information generation by making was by far the most used strategy: every group spent most of their time (between 54.5% and 61.2%) on activities related to making. The analyses also show differences between the groups: groups 1 and 3 spent about the same total time (4125 min and 4155 min), whereas group 2 spent only 3210 min. Groups 1 and 3 followed a similar pattern, spending the least time on envisioning, followed by thinking and exploring & validating, and the most time on making. Group 2 also spent the least time on envisioning but did considerably more thinking than the other groups, and less exploring & validating. Group 3 spent 7.6% of their total time on envisioning (315 min). Groups 1 and 2 spent half that time (150 min), corresponding to only 3.6% and 4.7% of their total time. All groups swapped between strategies, but the strategy of envisioning (and to some extent thinking) was rarely incorporated, resulting in an alternating use of activities related only to exploring & validating and making, instead of using all four strategies frequently and in alternation.
Table 1. Overview of all measurements of the design process for the different groups (G1, G2, G3).
Outcome Hackathon: Concepts
The first group designed 'Ambi', a system in the form of a Tamagotchi that warns you when you have been inactive for too long or when the air quality decreases. The second group developed 'Freshlook', a system with a stress ball that stimulates you to go for a walk when you have been sitting too long. The third group chose to design a system that detected positive and negative changes in an office environment; these changes were made visible by ripples in water. This concept was combined with 'AMP', a workshop that should make participants aware of the risks of stress via an interactive puppet.
Outcome Hackathon: Jury Scores
Based on the rankings of the seven jury members, Group 3 won this hackathon with their concept 'AMP' (9 points). The jury praised this concept because it provides an actual solution to a societal problem and was realistic in terms of practical feasibility. Group 2 (15 points) and Group 1 (18 points) completed the ranking.
Relation: Design Process and Outcome
To relate the measurements of the design process to the jury ranking, correlations (Spearman's Rho) were computed. The number of different activities, as well as the total time spent on the concepts, did not seem related to the jury ranking. The average time spent per activity, however, did correlate with the jury ranking: the longer the time spent per activity, the higher the jury ranking. Group 3 spent almost twice as much time on envisioning as groups 1 and 2, using a higher percentage of their total time to create a vision and scope for the concept. They compensated for this time in the making-related activities and came up with a relatively simple, working 3D model. Ranked correlations showed that spending more time on envisioning was related to a better ranking, while spending more or less time on the other strategies was not.
Discussion
This paper focused on unravelling the design process used by future engineering designers towards a first prototype in the field of sports and vitality. It seems that the winning group not only spent more time on envisioning, but also envisioned more thoroughly, resulting in a concept that provides an actual solution to a societal problem and is realistic in terms of practical feasibility. In the field of sports and vitality the design space is enormous, requiring a distinctive approach; hence, the envisioning of societal and personal needs is key. Therefore, spending more time on envisioning and understanding societal and personal needs more thoroughly may have resulted in a better concept. A possible reason why group 3 (master students only) envisioned more thoroughly could be their prior experience with the RTDP and user involvement. A limitation of this study is that we included both bachelor and master students. The concepts of the groups were mainly focused on vitality-related topics. This reflects a general trend in the Netherlands, where recreational sports are increasingly connected to being active and living healthy, including issues like sedentary behaviour, stress and burn-out.
Vos et al. [3] stated that understanding societal and personal needs, and the associated crossovers between different professions, requires a multidisciplinary approach. This is key for the design and provision of products and services targeting mass sports participation. Since the groups were unidisciplinary, these crossovers did not happen, and there was therefore possibly even more to gain in terms of envisioning.
The analyses showed that the groups did swap between strategies, but the strategies of envisioning and thinking were rarely incorporated. In line with Wilson and colleagues [7], we also found that groups rarely iterated between different phases of the design process; for example, none of the groups went back to envisioning (including their design brief or design rationale) after the first full day of designing (e.g., Figure 2). In future research, it would be interesting to monitor the actual methodology the groups used to gather or generate information within the four circles of the RTDP. This would provide insight not only into the quantity of the activities but also into their quality. In addition, the decisions could be monitored to gain more insight into which information is used and is decisive. Finally, in future research the design and engineering processes should be monitored over longer periods of time, taking away the time pressure of the hackathon, to see whether the quantity and quality of envisioning change. Forming multidisciplinary teams with different expertise could also be interesting to facilitate crossovers during envisioning.
Conclusions
Technology has created new opportunities for the field of sports and vitality, but it also presents significant challenges for engineering designers, such as the enormous design space in this field and the need for a distinctive approach to envisioning societal and personal needs [3,5]. This study functioned as a first exploration and has given insight into how engineering designers use design methods within the field of sports and vitality. It seems that the time spent on envisioning, as well as envisioning more thoroughly, affected the outcome. This finding provides an interesting starting point for further investigation of engineering design in the field of sports and vitality.
Figure 2. Visualisation of the design process, including the different strategies, activities and time per activity of group 3 during the three-day Hackathon. | 3,208.8 | 2018-02-22T00:00:00.000 | ["Computer Science"] |
Decoherence effects in the quantum qubit flip game using Markovian approximation
We consider a quantum version of the penny flip game whose implementation is influenced by the environment, causing decoherence of the system. In order to model the decoherence we assume a Markovian approximation of open quantum system dynamics. We focus our attention on the phase damping, amplitude damping and amplitude raising channels. Our results show that the Pauli strategy is no longer a Nash equilibrium under decoherence. We attempt to optimize the players' control pulses in this setup to allow them to achieve a higher probability of winning the game compared to the Pauli strategy.
Introduction
Quantum information experiments can be described as a sequence of three operations: state preparation, evolution and measurement [5]. In most cases one cannot assume that experiments are conducted perfectly, therefore imperfections have to be taken into account when modelling them. In this work we are interested in how knowledge about the imperfect evolution of a quantum system can be exploited by players engaged in a quantum game. We assume that one of the players possesses knowledge about the imperfections in the system, while the other is ignorant of their existence. We ask how much the informed player can exploit this knowledge to his/her advantage.
We consider implementation of the quantum version of the penny flip game, which is influenced by the environment that causes decoherence of the system. In order to model the decoherence we assume Markovian approximation of open quantum system dynamics.
The paper is organised as follows: in the two following subsections we discuss related work and present our motivation for undertaking this task. In Section 2 we recall the penny flip game and its quantum version, in Section 3 we present the noise model, in Section 4 we discuss the strategies applied in the presence of noise, and finally in Section 5 we summarize the obtained results.
Related work
Imperfect realizations of quantum games have been discussed in the literature since the beginning of the century. Ref. [7] discusses a three-player quantum game played with a corrupted source of entangled qubits. The author implicitly assumes that the initial state of the game had passed through a bit-flip noisy channel before the game began. The corruption of quantum states in schemes implementing quantum games has been studied by various authors: in [1] the authors analyse the two-player prisoner's dilemma game, in [2] the multiplayer quantum minority game with decoherence is studied, in [4,13] the authors analyse the influence of local noisy channels on quantum Magic Squares games, while the quantum Monty Hall problem under decoherence is studied first in [3] and subsequently in [8]. In [9] the authors study the influence of the interaction of qubits forming a spin chain on the qubit flip game. An analysis of trembling hand perfect equilibria in quantum games was carried out in [12]. The prisoner's dilemma in the presence of collective dephasing, modelled using the Markovian approximation of open quantum system dynamics, is studied in [10]. Unfortunately the model applied in that work assumes that decoherence acts only after the initial state has been prepared and ceases to act before the unitary strategies are applied.
Motivation
In the quantum game theoretic literature, decoherence is typically applied to a quantum game in the following way: 1. the initial state of the game is prepared (typically an entangled state), 2. it is transferred through a local noisy channel, 3. the players' strategies are applied, 4. the resulting state is transferred once again through a local noisy channel, 5. the state is disentangled, 6. local quantum measurements are performed and the outcomes of the game are calculated.
In some cases, where it is appropriate, steps 4 and 5 are omitted. The problem with the above procedure is that it separates unitary evolution from the decoherent evolution. In [9] it was proposed to observe the behaviour of the quantum version of the penny flip game under more physically realistic assumptions where decoherence due to coupling with the environment and unitary evolution happen simultaneously.
Game as a quantum experiment
In this work our goal is to follow the work done in [9] and to discuss the quantum penny flip game as a physical experiment consisting of preparation, evolution and measurement of the system. For the purpose of this paper we assume that preparation and measurement, contrary to the noisy evolution of the system, are perfect. We investigate the influence of the noise on the players' odds and how the noisiness of the system can be exploited by them. The noise model we use is described by the Lindblad master equation and the dynamics of the system is expressed in the language of quantum systems control.
Penny flip game
In order to provide the classical background for our problem, let us consider a classical two-player game consisting in flipping a coin over three consecutive rounds. As usual, the players are called Alice and Bob. In each round, a player performs one of two operations on the coin: flipping it over or leaving it unchanged. At the beginning of the game, the coin is turned heads up. During the course of the game the coin is hidden and the players do not know their opponent's actions. If after the last round the coin is tails up, then Alice wins; otherwise the winner is Bob.
The game consists of three rounds: Alice performs her action in the first and the third round, while Bob performs his in the second round. Therefore the set of allowed strategies consists of eight sequences (N, N, N), (N, N, F), . . . , (F, F, F), where N corresponds to the non-flipping action and F to the flipping action. Bob's pay-off table for this game is presented in Table 1. Looking at the pay-off tables, it can be seen that the utility functions of the players are balanced, and thus the penny flip game is a zero-sum game. A detailed analysis of this game and its asymmetrical quantization can be found in [14]. In that work it was shown that there is no winning strategy for either player in the penny flip game. It was also shown that, if Alice were allowed to extend her set of strategies to quantum strategies, she could always win. In [9] it was shown that when both players have access to quantum strategies the game becomes fair and has a Nash equilibrium.
Qubit flip game
Following the work done in the aforementioned paper [9], we consider a quantum version of the penny flip game. In this case, we treat a qubit as a quantum coin. As in the classical case, the game is divided into three rounds. Starting with Alice, in each round one player performs a unitary operation on the quantum coin. The rules of the game are constrained by its physical implementation. We assume that in each round the acting player can choose three control parameters α1, α2, α3 in order to realise his/her strategy. The resulting unitary gate applied in a given round is U = exp(−iH∆t), where ∆t is an arbitrarily chosen constant time interval and H is the Hamiltonian determined by the chosen control parameters. Therefore, the system defined above is a single-qubit system driven by a time-dependent Hamiltonian H(t), which is piecewise constant. The control parameters in the Hamiltonian H(t) are collected in a vector α, whose components α^A_i are determined by Alice and α^B_i are selected by Bob. Suppose that the players are allowed to play the game by manipulating the control parameters in the Hamiltonian H(t), representing the coherent part of the dynamics, but they are not aware of the action of the environment on the system. Hence the time evolution of the system is non-unitary and is described by a master equation, which can be written generally in the Lindblad form as
dρ(t)/dt = −i[H(t), ρ(t)] + Σ_j γ_j ( L_j ρ(t) L_j† − ½ {L_j† L_j, ρ(t)} ),
where H(t) is the system Hamiltonian, the L_j are the Lindblad operators representing the influence of the environment on the system [11], the γ_j are the corresponding decoherence rates, and ρ is the state of the system. For the purpose of this paper we chose three classes of decoherence: amplitude damping, amplitude raising and phase damping, which correspond to the noise operators σ− = |0⟩⟨1|, σ+ = |1⟩⟨0| and σz, respectively.
Let us suppose that initially the quantum coin is in the state |0⟩⟨0|. Next, in each round, Alice or Bob performs his/her sequence of controls on the qubit, where each control pulse is applied for the constant time interval ∆t as described above. After applying all nine pulses, we measure the expected value of the σz operator. If tr(σz ρ(T)) = −1 Alice wins; if tr(σz ρ(T)) = 1 Bob wins. Here, ρ(T) denotes the state of the system at time T = 9∆t.
Alternatively, we can say that the final step of the procedure consists in performing the orthogonal measurement {O_heads → |0⟩⟨0|, O_tails → |1⟩⟨1|} on the state ρ(T). The probabilities of measuring O_tails and O_heads determine the pay-off functions for Alice and Bob, respectively. These probabilities are given by p(heads) = ⟨0|ρ(T)|0⟩ and p(tails) = ⟨1|ρ(T)|1⟩.
Nash equilibrium
In this game, pure strategies cannot be in Nash equilibrium. Hence, the players choose mixed strategies, which are better than the pure ones. We assume that Alice and Bob use the Pauli strategy, which is mixed and gives a Nash equilibrium [9]; therefore this strategy is a reasonable choice for the players. According to the Pauli strategy, each player chooses one of the four unitary operations {1, iσx, iσy, iσz} with equal probability. Thus, to obtain the Pauli strategy, each player chooses a sequence of control parameters (ξ1, ξ1, ξ2) listed in Tab. 2. This means that in each round, one player performs a unitary operation chosen randomly with a uniform probability distribution from the set {1, iσx, iσy, iσz}.
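The following sketch (plain NumPy/SciPy, not the authors' code) illustrates the game procedure under the Pauli strategy and amplitude damping. For brevity it applies a single constant pulse per round instead of three control parameters, realizes each Pauli operation (up to a global phase) as a π rotation generated by the corresponding Pauli matrix, and uses an arbitrary decay rate; the noisy evolution is integrated by exponentiating the vectorized Lindblad generator.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the amplitude damping operator sigma_- = |0><1|
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)

def lindblad_generator(H, L, gamma):
    """Vectorized generator G such that d vec(rho)/dt = G vec(rho) (column stacking)."""
    LdL = L.conj().T @ L
    G = -1j * (np.kron(I2, H) - np.kron(H.T, I2))
    G += gamma * (np.kron(L.conj(), L)
                  - 0.5 * (np.kron(I2, LdL) + np.kron(LdL.T, I2)))
    return G

def apply_pulse(rho, H, L, gamma, dt):
    """Evolve rho for time dt under the Lindblad equation with a constant Hamiltonian."""
    G = lindblad_generator(H, L, gamma)
    vec = expm(G * dt) @ rho.reshape(-1, 1, order='F')
    return vec.reshape(2, 2, order='F')

def pauli_pulse_hamiltonian(dt, rng):
    """Pick one of {1, i*sx, i*sy, i*sz} uniformly and realize it, up to a global
    phase, as exp(-i*H*dt) with H = 0 or H = (pi/dt) * sigma / 2."""
    choice = rng.integers(4)
    if choice == 0:
        return np.zeros((2, 2), dtype=complex)
    sigma = [sx, sy, sz][choice - 1]
    return (np.pi / dt) * sigma / 2

def play_game(gamma, dt=1.0, seed=None):
    rng = np.random.default_rng(seed)
    rho = np.array([[1, 0], [0, 0]], dtype=complex)   # coin starts heads up: |0><0|
    for _round in range(3):                            # Alice, Bob, Alice
        H = pauli_pulse_hamiltonian(dt, rng)
        rho = apply_pulse(rho, H, sm, gamma, dt)
    return np.real(np.trace(sz @ rho))                 # <sigma_z>: +1 Bob wins, -1 Alice wins

# Monte Carlo estimate of the average <sigma_z> under the Pauli strategy
samples = [play_game(gamma=0.2, seed=k) for k in range(2000)]
print("mean <sigma_z> under amplitude damping:", np.mean(samples))
```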
Influence of decoherence on the game
In this section, we perform an analytical investigation which shows the influence of decoherence on the game result. In accordance with the Lindblad master equation, the influence of the environment on the system is represented by the Lindblad operators L_j, while the rate of decoherence is described by the parameters γ_j. To simplify the discussion, we consider Hamiltonians H(t) represented by diagonal matrices.
For the amplitude damping channel (Lindblad operator σ−), coming back to the original variables we obtain the solution of the master equation. In order to study the asymptotic effect of decoherence on the result of the game, we consider the following limit: lim_{γ→∞} [ e^{At} ρ(0) e^{A†t} − e^{−γt} σ+ ρ(0) σ− ] = |0⟩⟨0| ρ(0) |0⟩⟨0|.
Let ρ(0) = |0⟩⟨0|; then the above limit is equal to |0⟩⟨0|. This result shows that for high values of γ, Bob's chances of winning the game approach 1 as γ increases. Figure 1 shows an example of the evolution of a quantum system with amplitude damping decoherence. The noise operator σ+ corresponds to amplitude raising decoherence, and the solution of the master equation takes an analogous form with A = −iH(t) − ½ γ σ−σ+. It is easy to check that as γ → ∞ the state |1⟩⟨1| becomes the stationary solution, in which case Alice wins.
Phase damping
Now we consider the impact of phase damping decoherence on the outcome of the game. In this case the Lindblad operator is given by σz, and the Lindblad equation takes the form
dρ(t)/dt = −i[H, ρ(t)] + γ ( σz ρ(t) σz − ρ(t) ).
Next, we make the change of variables ρ̃(t) = e^{iHt} ρ(t) e^{−iHt}, which is helpful for solving the equation: in these variables the coherent part drops out and only the dephasing term remains, so the diagonal elements of ρ̃(t) are constant while the off-diagonal elements decay exponentially in γt. Coming back to the original variables and considering the limit of strong dephasing, we obtain lim_{γ→∞} ρ(t) = |0⟩⟨0| ρ(0) |0⟩⟨0| + |1⟩⟨1| ρ(0) |1⟩⟨1|.
The above result is a diagonal matrix dependent on the initial state. For high values of γ, the initial state ρ(0) has a significant impact on the game. If ρ(0) = |0⟩⟨0| then lim_{γ→∞} ρ(t) = |0⟩⟨0|. This kind of decoherence is conducive to Bob. Similarly, if ρ(0) = |1⟩⟨1|, then Alice wins. The evolution of a quantum system with phase damping decoherence and a fixed Hamiltonian is shown in Figure 2.
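As a quick numerical check of this dephasing limit (again an illustration, not taken from the paper), the snippet below integrates the phase-damping master equation for an arbitrary diagonal Hamiltonian and an initially coherent state, and shows how the off-diagonal element of ρ(t) is suppressed as γ grows.

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0]).astype(complex)
H = np.diag([0.7, -0.7]).astype(complex)           # arbitrary diagonal Hamiltonian
rho0 = 0.5 * np.array([[1, 1], [1, 1]], complex)   # |+><+|: maximal coherence

def rho_t(gamma, t):
    """Integrate drho/dt = -i[H, rho] + gamma*(sz rho sz - rho) via vectorization."""
    I2 = np.eye(2, dtype=complex)
    G = -1j * (np.kron(I2, H) - np.kron(H.T, I2)) \
        + gamma * (np.kron(sz.conj(), sz) - np.kron(I2, I2))
    return (expm(G * t) @ rho0.reshape(-1, 1, order='F')).reshape(2, 2, order='F')

for gamma in (0.1, 1.0, 10.0):
    print(gamma, np.round(np.abs(rho_t(gamma, t=3.0)[0, 1]), 6))
# The coherence |rho_01(t)| decays like exp(-2*gamma*t), so the state approaches
# the diagonal projection onto |0><0| and |1><1| as gamma grows.
```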
Optimal strategy for the players
Due to the noisy evolution of the underlying qubit, the strategy given by Table 2 is no longer a Nash equilibrium. We study the possibility of optimizing one player's strategy, while the other one uses the Pauli strategy. It turns out that this optimization is not always possible. If the rate of decoherence is high enough, then the players' strategies have little impact on the game outcome. In the low noise scenario, it is possible to optimize the strategy of both players.
In each round, one player performs a series of unitary operations, which are chosen randomly from a uniform distribution. Therefore, the strategy of a player can be seen as a random unitary channel. In this section Φ^A_1 and Φ^A_2 denote the mixed unitary channels used by Alice, who implements the Pauli strategy. Similarly, Φ^B denotes the channels used by Bob.
Optimization method
In order to find optimal strategies for the players, we parameterize the control Hamiltonian by control pulses ε(t). As the optimization target, we introduce the cost functional J(ε) = tr{F₀(ρ(T))}, where F₀(ρ(T)) is a functional that is bounded from below and differentiable with respect to ρ(T). A sequence of control pulses that minimizes this functional is said to be optimal. In our case the functional penalizes the distance between ρ(T) and a target density matrix ρ_T. In order to solve this optimization problem, we need an analytical formula for the derivative of the cost functional with respect to the control pulses ε(t). Using the Pontryagin principle [15], this derivative is obtained by solving the system dynamics forward in time together with an adjoint equation backward in time [6], where ρ_s denotes the initial density matrix, λ(t) is called the adjoint state, and the terminal condition is determined by the gradient ∇F₀(ρ(T)) = ρ(T) − ρ_T.
In order to optimize the control pulses using a gradient method, we convert the problem from an infinite-dimensional (continuous-time) one to a finite-dimensional (discrete-time) one. For this purpose, we discretize the time interval [0, T] into M equal-sized subintervals ∆t_k, so that the problem becomes that of finding the vector of pulse amplitudes ε = [ε_1, . . . , ε_M]^T minimizing the cost functional J(ε). It can be shown [6] that the elements of the gradient of the cost functional with respect to ε can be computed from ρ_k and λ_k, the solutions of the Lindblad equation and of the adjoint system on the corresponding time subinterval ∆t_k. To minimize the cost functional using this gradient we use the BFGS algorithm [16].
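A minimal sketch of this discretized optimization is given below. It is not the authors' implementation: the controls are restricted to a single σx drive per subinterval, the decay rate, pulse count and target state are arbitrary choices, and SciPy's BFGS is run with a finite-difference gradient instead of the adjoint-state formula derived in the text.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Single-qubit operators
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)      # amplitude damping operator

M, dt, gamma = 9, 1.0, 0.3                          # 9 pulses of length dt, decay rate
rho0 = np.diag([1.0, 0.0]).astype(complex)          # start in |0><0| (heads)
rho_target = np.diag([0.0, 1.0]).astype(complex)    # Alice's target |1><1| (tails)

def propagate(eps):
    """Piecewise-constant controls eps[k] on sigma_x, evolved under the Lindblad equation."""
    rho = rho0.copy()
    for e in eps:
        H = e * sx
        LdL = sm.conj().T @ sm
        G = -1j * (np.kron(I2, H) - np.kron(H.T, I2)) \
            + gamma * (np.kron(sm.conj(), sm)
                       - 0.5 * (np.kron(I2, LdL) + np.kron(LdL.T, I2)))
        rho = (expm(G * dt) @ rho.reshape(-1, 1, order='F')).reshape(2, 2, order='F')
    return rho

def cost(eps):
    """J(eps) = 0.5 * || rho(T) - rho_target ||_F^2 (to be minimized)."""
    diff = propagate(eps) - rho_target
    return 0.5 * np.real(np.trace(diff.conj().T @ diff))

# BFGS with a numerically estimated gradient (the paper derives it analytically
# via the adjoint state).
result = minimize(cost, x0=0.1 * np.ones(M), method='BFGS')
print("optimized cost:", result.fun)
print("Alice's winning probability p(tails):", np.real(propagate(result.x)[1, 1]))
```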
Optimization setup
Our goal is to find control strategies for the players that maximize their respective chances of winning the game. We study three noise channels: the amplitude damping, the phase damping and the amplitude raising channel. They are given by the Lindblad operators σ−, σz and σ+ = σ−†, respectively. In all cases, we assume that one of the players uses the Pauli strategy, while for the other player we try to optimize a control strategy that maximizes that player's probability of winning. However, in our setup it is convenient to use the value of the observable σz rather than probabilities. A value of 0 means that each player has a probability of 1/2 of winning the game. Values closer to 1 mean a higher probability of winning for Bob, while values closer to −1 mean a higher probability of winning for Alice.
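As a short clarification of this convention (it follows directly from the measurement defined earlier, with Bob winning on heads |0⟩ and Alice on tails |1⟩): the reported value of the observable maps to winning probabilities via p(Bob) = p(heads) = (1 + tr(σz ρ(T)))/2 and p(Alice) = p(tails) = (1 − tr(σz ρ(T)))/2, so a value of 0 indeed corresponds to a fair game.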
Phase damping
The results for the phase damping channel are shown in Figure 3. As can be seen, in this case both players are able to optimize their strategies: Alice can optimize her strategy for low values of γ to obtain a probability of winning greater than 1/2. The region where this occurs is shown in the inset. For high noise values she is able to achieve a probability of winning equal to 1/2. On the other hand, optimization of Bob's strategy shows that he is able to achieve high probabilities of winning for relatively low values of γ. Figure 4 presents optimal game strategies for both players. For Alice we chose γ = 1.172, which corresponds to her maximal probability of winning the game. In the case of Bob's strategies we arbitrarily chose the value γ = 1.610.
Amplitude damping
Next, we present the results obtained for the amplitude damping channel. They are shown in Figure 5. Unfortunately for Alice, for high values of γ Bob always wins. This is due to the fact that in this case the state quickly decays to the state |0⟩⟨0|. Additionally, Bob is also able to optimize his strategies: he is able to achieve a probability of winning equal to 1 for relatively low values of γ. For low values of γ the interaction allows Alice to achieve a probability of winning higher than 1/2. The region where this happens is magnified in the inset. Interestingly, for very low values of γ Alice can increase her probability of winning. This is due to the fact that low noise values are sufficient to distort Bob's attempts to perform the Pauli strategy, while they are not high enough to drive the system towards the state |0⟩⟨0|. Optimal game results for both players are shown in Figure 6.
Amplitude raising
Finally, we present optimization results for the amplitude raising channel.
The optimization results, shown in Figure 7, indicate that Alice can achieve a probability of winning equal to 1 for lower values of γ compared to the unoptimized case. In this case Bob cannot do any better than in the unoptimized case due to the limited number of available control pulses.
Conclusions
We studied the quantum version of the coin flip game under decoherence.
To model the interaction with the external environment we used the Markovian approximation in the form of the Lindblad equation. Because the Pauli strategy is a known Nash equilibrium of the game, it was natural to investigate this strategy in the presence of noise. Our results show that in the presence of noise the Pauli strategy is no longer a Nash equilibrium. One of the players, Bob in our case, is always favoured by amplitude damping and phase damping noise. Our next step was to check whether the players were able to do better than the Pauli strategy. For this, we used the BFGS gradient method to optimize the players' strategies. Our results show that both Alice and Bob are able to increase their respective winning probabilities. Alice can achieve this for all three studied channels, while Bob can only do so for the phase damping and amplitude damping channels. | 4,509.4 | 2013-06-25T00:00:00.000 | ["Physics"] |
ABO gene editing for the conversion of blood type A to universal type O in Rhnull donor‐derived human‐induced pluripotent stem cells
Abstract The limited availability of red cells with extremely rare blood group phenotypes is one of the global challenges in transfusion medicine that has prompted the search for alternative self‐renewable pluripotent cell sources for the in vitro generation of red cells with rare blood group types. One such phenotype is the Rhnull, which lacks all the Rh antigens on the red cell membrane and represents one of the rarest blood types in the world with only a few active blood donors available worldwide. Rhnull red cells are critical for the transfusion of immunized patients carrying the same phenotype, besides its utility in the diagnosis of Rh alloimmunization when a high‐prevalence Rh specificity is suspected in a patient or a pregnant woman. In both scenarios, the potential use of human‐induced pluripotent stem cell (hiPSC)‐derived Rhnull red cells is also dependent on ABO compatibility. Here, we present a CRISPR/Cas9‐mediated ABO gene edition strategy for the conversion of blood type A to universal type O, which we have applied to an Rhnull donor‐derived hiPSC line, originally carrying blood group A. This work provides a paradigmatic example of an approach potentially applicable to other hiPSC lines derived from rare blood donors not carrying blood type O.
The transfusion of red blood cells (RBCs), currently obtained from volunteer blood donations, is an essential therapy for patients with chronic or acute anaemia. This form of cell-based therapy is an indispensable part of modern healthcare systems. However, the prospect of an insufficient blood supply due to population aging and the potential risk of transfusion-transmitted infections remain major concerns. 1,2 In addition, the scarcity of donors with rare blood types represents a global challenge when compatible red cells with a rare blood phenotype are required for transfusion. 3,4 For these reasons, the in vitro generation of RBCs, to supplement the donation system, is nowadays a major focus of research in transfusion medicine.
Beyond transfusion requirements, red cells with infrequent phenotypes are also necessary for diagnostic purposes in clinical laboratories. The identification of unusual red cell antibody specificities in patient sera depends on the availability of reagent red cells with rare phenotypes or infrequent antigen combinations for serological crossmatching, which is crucial to allow the accurate selection and effective search of compatible units for transfusion.
During the past decade, enormous progress has been made in the in vitro manufacture of human RBCs from different cell sources. [5][6][7][8][9] Among these, human-induced pluripotent stem cells (hiPSCs) provide an unlimited source of hematopoietic progenitor cells, which can subsequently be differentiated into erythroid cells. hiPSC lines can also be derived from easily accessible peripheral blood mononuclear cells (PBMCs) from selected donors 10,11 and be amenable to gene editing. 12,13 Overall, these features make them a promising source for sustainable production of customized red cells.
Different hiPSC lines have already been obtained from existing donors with rare blood types 14,15 or have been modified using CRISPR/Cas9 gene editing approaches to reproduce uncommon null phenotypes by knocking-out specific blood group genes. 16 However, the potential use of hiPSC-derived red cells is also dependent on the ABO type. Except for the rare Bombay phenotype, extremely infrequent or null blood group types are not necessarily encountered in blood type O donors. Here, we present a CRISPR/Cas9-mediated ABO gene edition strategy for the conversion of blood type A to universal type O, which we have applied to an Rh null donor-derived hiPSC line, originally carrying blood group A. This approach is potentially applicable to other hiPSC lines derived from rare blood donors, not carrying blood type O.
Generation and culture of hiPSCs
To generate Rh null donor-derived hiPSCs, PBMCs were isolated from a 20 ml whole blood sample of the selected blood donor, previously identified and characterized at the Immunohematology Reference Laboratory of the Banc de Sang i Teixits (Barcelona). Mononuclear cells were isolated using standard density gradient centrifugation with SepMate™ tubes (StemCell Technologies, Canada). The PBMCs were carefully recovered from the interface and washed in PBS 1×. PBMCs were reprogrammed using the integration-free CytoTune®-iPS 2.0 Sendai Reprogramming Kit (ThermoFisher, USA), which contains Sendai virus particles for the expression of the four Yamanaka factors. 17 Undifferentiated hiPSCs were maintained in mTeSR™1 (StemCell Technologies, Canada) on 3 μg/ml of Laminin-521 (StemCell Technologies)-coated plates and expanded using EDTA passaging solution (ThermoFisher). Media samples were routinely tested for the absence of mycoplasma contamination using the selective biochemical test MycoAlert™ PLUS (Lonza, Switzerland).
CRISPR/Cas9 gene editing
For ABO gene targeting, two strategies were pursued: (1) the generation of a gene knock-out (KO) and (2) the insertion of the naturally occurring c.261delG single nucleotide deletion through a short sequence knock-in (KI). For both strategies, we designed RNA guides (gRNAs) using the CRISPR-direct tool (https://portals.broadinstitute.org/gpp/public/analysis-tools/sgrna-design). In both cases we selected the target sites with the lowest number of predicted off-targets. The gRNA sequences are depicted in Table S1. For the KI strategy, we designed a single-stranded donor DNA carrying the c.261delG mutation and a mutated PAM to avoid re-cutting of the target sequence (Table S1). Each guide was first transfected with the Alt-R® S.p. Cas9 Nuclease V3 (IDT #1081058) and the Alt-R® CRISPR-Cas9 tracrRNA, ATTO™ 550 (IDT #1075927) into HEK-293T cells according to the IDT protocol and tested for cutting efficiency by the T7 endonuclease assay (as described below). The most efficient guide was nucleofected with the Cas9 protein into the parental hiPSC line hiPSC#1 using the Neon Electroporation transfection System (ThermoFisher), and cells were plated onto Geltrex-coated plates. Nucleofection efficiency was assessed after 24 h. Forty-eight hours post nucleofection, cells were plated as single cells on Geltrex-coated 96-well plates for clonal selection in mTeSR™1 (StemCell Technologies, Canada) supplemented with 10 μM Y-27632 and CloneR (StemCell Technologies). Single-cell clones were expanded for 2 to 3 weeks and subsequently analysed by Sanger sequencing for modifications at the target sites.
T7 endonuclease assay
HEK-293T cultures were dissociated with TrypLE Express (ThermoFisher) and 2 × 10^5 cells were transfected by CRISPRMAX Transfection Reagent (ThermoFisher) with 12 pmol of each of the RNAs (gRNA and tracrRNA) and the Cas9 protein. Genomic DNA was extracted 2 days after transfection. Genomic regions flanking the CRISPR target sites were PCR amplified (Table S1). PCR products were denatured, re-annealed and subsequently treated with 5 U of T7EI at 37 °C for 15 min.
RHAG and ABO gene sequencing
For RHAG gene sequence analysis, DNA was extracted from donor PBMCs or hiPSCs by an automated method using the QIAsymphony instrument (Qiagen, Germany). Primers to amplify RHAG gene exon 6 are listed in Table S2. For CRISPR edit validation, genomic DNA was isolated from each expanded clone using DNeasy Blood and Tissue Kit (Promega). DNA regions encompassing guide sites were amplified using specific primers (Table S2). Amplification was performed with SequalPrep™ Long PCR Kit with dNTPs (Applied Biosystems, USA). The PCR products were Sanger sequenced using the Big Dye Terminator v1.1 kit (Applied Biosystems). Sequencing primers are also listed in Table S2. DNA sequences were aligned with the reference genomic sequences: NG_011704.1 for the RHAG gene and NG_006669.2 for ABO gene, using the CLC GenomicWorkbench 21.0.3 software (Qiagen).
Cell line identity
To confirm the cell line identity, genomic DNA was extracted from iPSC clones as well as donor PBMCs and used for short-tandem repeat (STR) marker analysis using the Mentype R Chimera R system (Biotype R Diagnostic GmbH, Germany).
Karyotype analysis
Genomic integrity of the generated hiPSCs was evaluated by G-banded metaphase karyotype analysis (Molecular Citogenetics Laboratory, Hospital del Mar, Barcelona). Briefly, cultures of hiPSCs (70% confluent) were treated with KaryoMaxcolcemid (Invitrogen), dissociated, incubated in hypotonic solution and fixed in Carnoy solution (75% methanol, 25% acetic acid). Karyotyping was performed following standard procedures. A minimum of 15 metaphases were examined.
RNA isolation and quantitative reverse-transcription polymerase chain reaction
Total RNA was isolated from hiPSCs and developing embryoid body (EB) cells using the RNeasy Micro kit (Qiagen) and treated with RNase-free DNase (Qiagen). Total RNA (1 μg) was reverse transcribed using a high-capacity reverse transcription kit (Applied Biosystems). All quantitative PCR analyses were performed using the Fast SYBR Green Master Mix (Applied Biosystems) following the manufacturer's protocols on the LightCycler 480 Real-Time PCR System (Roche). Gene-specific primers used for this study are listed in Table S3.
Hematopoietic and erythroid differentiation
hiPSCs were differentiated into hematopoietic progenitor cells (HPCs) using the STEMdiff™ Hematopoietic kit (StemCell Technologies) following the manufacturer's recommendations. The 12-day differentiation protocol was performed in two stages. First, to induce hiPSC commitment towards mesoderm, hiPSC aggregates were plated on laminin-521-coated plates and cultured during the first 3 days with STEMdiff™ Hematopoietic Supplement A added to the basal medium. Second, for the subsequent 9 days, mesodermal cells were further differentiated into HPCs using basal medium supplemented with STEMdiff™ Hematopoietic Supplement B, performing half-medium changes at days 5, 7 and 10 according to the manufacturer's instructions. At day 12, HPCs were harvested from the culture supernatant and re-cultured in erythroid differentiation media.
Flow cytometry analysis
Cell surface marker staining was performed by direct immunofluorescence with conjugated monoclonal antibodies listed in Table S4. Briefly, a sample of 1 ×
Preparation of cytospins and May-Grünwald Giemsa staining
Cytospins were prepared at the Hematological Cytology Service (Hospital del Mar, Barcelona). Briefly, a sample of 1 × 10 4 cells was prepared by centrifuging onto glass slides at 500 rpm for 10 min in a Thermo Scientific Cytospin 4 cytocentrifuge. The slides were stained with May-Grünwald Giemsa stain (Merck) according to the Hematology Cytology Service's protocol. Cytospins were imaged at 400× using an optical microscope.
Helix pomatia agglutinin (HPA) lectin fluorescence staining
Expression of the A antigen was analysed by direct fluorescence staining with 50 μg/ml AF488-conjugated Helix pomatia agglutinin (HPA) lectin (ThermoFisher) in living cultured erythroid cells. As positive and negative controls, RBCs from individuals of A and O blood group types were fixed with 4% paraformaldehyde prior to direct staining. Nuclei were stained with DAPI. Images were taken using a Zeiss Axio Observer Z1 Apotome inverted fluorescence microscope and analysed using the ImageJ software. 19
Serological detection of Rh blood group antigens
Bio-Rad DiaClon Rh-Subgroups+K ID-Cards with monoclonal typing reagents for C (RH2), c (RH4), E (RH3), e (RH5) were used for serological detection of RhCE antigens. Bio-Rad DiaClon ABD-Confirmation for Donors ID-Cards with monoclonal typing reagents for RhD: ESD-1 M and 175-2, were used for serological detection of the RhD antigen. Briefly, cell suspensions prepared from 1-2 × 10 6 hiPSC-derived reticulocytes were pelleted and resuspended in 50 μl of ID-Diluent 2 (Bio-Rad Laboratories, Switzerland). Cards were centrifuged as per the manufacturer's instructions.
A monoclonal anti-k (Cellano) reagent (Pelikloon IgM monoclonal Lk1) was also used to detect Cellano antigen expression using Bio-Rad NaCl, Enzyme test and Cold Agglutinins ID-Cards (Bio-Rad Laboratories). Fifty microliters of prepared cell suspension were added to a column followed by 25 μl of the anti-Cellano antibody. Cards were centrifuged as per the manufacturer's instructions.
Establishment of an Rh null donor-derived hiPSCs
To obtain an integration-free Rh null hiPSC line, PBMCs from an Rh null female blood donor were reprogrammed. The donor subject had been previously identified as a homozygous carrier of a single-base mutation (c.836G > A) in the RHAG gene, leading to the rare Rh null blood type (ISBT RHAG Blood Group Alleles Table: https://www.isbtweb.org/static/5d593bb0-02e1-47a2-9a8fe3e34df68a5e/ISBT030RHAGbloodgroupallelesv6230-NOV-2021.pdf). PBMCs were reprogrammed using integration-free Sendai virus vectors expressing OCT4, SOX2, KLF4 and cMYC under serum-free and feeder-free conditions. Two hiPSC lines were generated from this donor and representative clones, named BST PBiPS6-SV4F-9 (abbreviated as hiPSC#1) and BST PBiPS6a-SV4F-6 (abbreviated as hiPSC#2), were fully characterized. STR analysis confirmed the identity of both lines when compared to the original PBMCs (Figure 1A). Both hiPSC lines robustly proliferated for more than 20 passages, showing a normal diploid female karyotype (46,XX), without any detectable numerical or structural chromosomal abnormalities (Figure 1B). The established Rh null hiPSC lines displayed hallmarks of pluripotency, being positive for alkaline phosphatase staining (Figure 1C) and showing enhanced endogenous gene expression of common pluripotency markers (Figure 1D). The stemness of the hiPSCs was also verified by immunofluorescence of pluripotency markers in hiPSC colonies from passages 8 to 15 (Figure S1A). Definitive proof of a pluripotent phenotype was shown in in vitro directed differentiation assays towards the three germinal layers (Figure S1B) and in in vivo teratoma formation assays (Figure S1C,D). Additionally, the presence of the RHAG gene c.836G > A homozygous mutation was also confirmed by gene sequencing (Figure 1E). These results demonstrate that we have successfully obtained an integration-free and feeder-free hiPSC line carrying the genotype that leads to the Rh null phenotype in derived red cells.
Conversion of blood type A hiPSC line to type O by CRISPR/Cas9-mediated gene edition
The original A blood type of the Rh null donor was associated with a heterozygous A2/O1 ABO genotype, which was also confirmed in the resultant hiPSC line by ABO gene sequencing ( Figure S2). In order to convert the Rh null hiPSC line from blood type A to the universal type O, we designed two strategies based on CRISPR/Cas9 technology. The first approach was based on the generation of a KI, mimicking the natural (c.261delG) polymorphism, present in the most common inactive ABO*O.01 (O1) allele. This deletion of guanine in exon 6 causes a frameshift (p.Thr88Profs*31) in a sequence that otherwise is identical to the consensus A sequence (Figure 2A). The second approach relied on the generation of a KO targeting the third exon of the ABO gene, also generating a frameshift.
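To illustrate why a single-base deletion such as c.261delG abolishes enzyme activity, the toy snippet below (using a made-up sequence, not the real ABO exon 6) shows how removing one nucleotide shifts every downstream codon, which typically introduces a premature stop codon, as in p.Thr88Profs*31.

```python
def codons(seq):
    """Split a sequence into complete codons, ignoring any trailing partial codon."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

wt = "ATGGCTACCGGTTGGACCCTGAAAGGG"   # made-up coding sequence, not the ABO gene
mut = wt[:10] + wt[11:]              # single-base deletion, cf. c.261delG

print("WT codons :", codons(wt))
print("MUT codons:", codons(mut))
# Every codon downstream of the deletion is shifted; in this toy example the
# shifted frame happens to contain the stop codon TGA, truncating the protein.
```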
In both approaches, three gRNA sequences per CRISPR site were designed, transfected and first tested in HEK293T cells to evaluate their cutting efficiency with the T7 endonuclease assay. In the KI strategy, the best-performing gRNA, which specifically targeted exon 6 (Figure 2B), was co-transfected with the donor DNA sequence containing the 261G deletion, which is used by the DNA repair machinery as the new template after the cut. For the KO generation, the Rh null hiPSC line was transfected by electroporation with the RNP complex whose gRNA targeted exon 3 of the ABO gene (Figure 2C). Different clones were isolated and screened by Sanger sequencing to confirm the CRISPR editing of the ABO gene. For the KO strategy we checked 26 clones and found five with indels producing a truncated protein (19% efficiency). For the KI strategy, we identified five clones carrying the 261G deletion out of 16 clones screened (31% efficiency) (see data at https://github.com/anasevilla/ABO-gene-editing). We then selected two KO (KO-C31 and KO-C52) and two KI (KI-C4 and KI-C5) clones for further characterization (Figure 2B,C). Using the IDT design checker (https://eu.idtdna.com/site/order/designtool/index/CRISPR_SEQUENCE) software, we analysed the top five in silico-predicted off-targets of the ABO E3 sgRNA (CCDC78, HTR5A, PRRG2, RHBDL2 and UCKL1-AS1) and the ABO E6.1 sgRNA (C16orf89, LINC02794, LZTS1, NLCN and SLC8A1) by Sanger sequencing and found them all consistently unaltered in the four selected clones, demonstrating the specificity of our gene editing strategy (sequences are available at https://github.com/anasevilla/ABO-gene-editing).
Importantly, the cell line identity of all four clones was confirmed by STR analysis (Figure S3A), and all clones showed normal diploid karyotypes (46,XX) (Figure S3B). Furthermore, the gene-edited hiPSC clones remained pluripotent after CRISPR/Cas9 gene editing, retained hESC-like morphology and expressed RNA (Figure S3C) and protein (Figure S3D,E) levels of the pluripotency markers SSEA3, SSEA4, TRA-1-60 and TRA-1-81 similar (p > 0.05) to those of the parental Rh null hiPSC line (Figure S3F). In addition, their pluripotent capacity was tested in vitro through directed differentiation into cell lineages representing all three germ layers using the embryoid body assay (Figure S4A,B). Although expected differences across the iPSC lines were observed for the early lineage differentiation markers (SOX17, T, TUJ1) in the differentiating embryoid bodies, no statistical differences were observed for pluripotency markers, including NANOG, at the pluripotent state between the gene-edited iPSC lines and the parental lines (Figure S4C). Thus, we have established two CRISPR/Cas9-mediated gene editing strategies to convert blood type A hiPSC lines to type O with no impact on their stemness potential.
Morphological changes and immunophenotype confirm consistent erythroid differentiation in ABO-edited hiPSC lines
The potential of the ABO-edited hiPSC lines to differentiate towards the erythroid lineage was evaluated in parallel with the parental Rh null hiPSCs in three independent experiments. The parental Rh null hiPSCs and the KI-C5 and KO-C52-edited clones were first differentiated towards HPCs with the STEMDiff™ Hematopoietic Kit ( Figure S5A). At day 12 of the differentiation protocol, HPCs released from hematopoietic clusters were harvested from the culture supernatant ( Figure S5B). This population contained around 90% CD34 + cells, and around 60% of these cells were CD45 low/+ ( Figure S5C). To further characterize the CD34 + CD45 low/+ fraction, we also analysed the expression of the erythroid lineage surface markers CD71, CD235a, CD49d and CD233. We confirmed the presence of a variable range (36-75%) of early erythroblasts CD71 + CD235a + CD49d + CD233 − ( Figure S5C). The collected cells, containing erythroid-committed HPCs, were further cultured in erythroid differentiation medium according to the three-step protocol described in Materials and Methods ( Figure 3A). The follow-up of cell viability and expansion throughout the culture showed no statistically significant differences between the edited and the parental cell lines regarding the survival/proliferative capacity of hiPSC-derived erythroid progenitors ( Figure S6A,B). Distinct stages of erythroid maturation were assessed morphologically at four time points (d0, d7, d14 and d21) ( Figure 3B). The erythroid cells progressed through distinct erythroid stages with orthochromatic erythroblasts already appearing at day 7, showing no statistically significant differences between the parental Rh null hiPSC and the edited clones, KI-C5 and KO-C52, across the 21-day differentiation period (Figure S6C). At day 21, orthochromatic erythroblasts were the predominant cells (approximately 80%), with a very low proportion of enucleated cells (6-8%), in both the parental Rh null hiPSC line and the edited clones ( Figure 3C). Moreover, erythroid differentiation was also assessed by flow cytometry immunophenotyping of erythroid surface markers: CD44, CD49d (α4-integrin), CD71, CD235a (glycophorin A, GPA), CD233 (Band3) and CD238 (KEL). The observed dynamic changes of expression revealed an analogous progression through erythroid differentiation in parental Rh null hiPSC line and the edited clones. In brief, we observed two distinct patterns of expression ( Figure 3D). The adhesion molecules CD44 and CD49d, as well as the transferrin receptor CD71 presented a pattern with high levels of expression in early-stage erythroblasts and a progressive decrease in late-stage erythroblasts. In contrast, a different pattern was observed for the CD235a, CD233 and CD238 markers, which displayed low levels of expression in early-erythroblasts with a progressive increase in latestage erythroblasts. Our results concur with the expected progression of erythroid cell differentiation cultures from hiPSC-derived CD34 + HPCs, [21][22][23] with no significant differences between the parental Rh null hiPSCs and the edited clones ( Figure S6D).
The Rh null phenotype is retained in erythroid cells differentiated from parental and ABO-edited Rh null hiPSCs
To confirm the Rh null phenotype, we first assessed RhAG expression by flow cytometry, in cells differentiated from both the parental and ABO-edited hiPSCs, using the LA1818 anti-RhAG monoclonal antibody. No RhAG expression was observed on the membrane of erythroid cells derived either from the parental Rh null hiPSCs or from the ABO-edited clones KI-C5 and KO-C52 (Figure 4A). As the RhAG glycoprotein is essential for Rh complex formation, we next assessed the expression of the Rh antigens (D, C, c, E and e) by agglutination tests using gel card technology, and no agglutination was observed with any of the anti-Rh typing reagents (Figure 4B), further confirming the Rh null phenotype. These data confirm the successful generation of in vitro differentiated erythroid cells reproducing the Rh null phenotype of the original donor subject, which was affected neither by the reprogramming of the donor's PBMCs nor by the CRISPR editing of the ABO gene.
ABO blood group conversion in erythroid cells differentiated from ABO-edited iPSC lines
To analyse A antigen expression in differentiated erythroid cells from parental Rh null hiPSCs and edited clones, fluorescence labelling was performed using HPA, a lectin that has anti-A human blood group specificity. The erythroid cells derived from ABO-edited clones were negative for A antigen expression in flow cytometry studies ( Figure 4C). Similarly, no differences in H antigen expression were detected between the parental Rh null hiPSCs, expressing blood type A 2 , and the edited lines (blood type O) ( Figure 4D). This result indicates that H antigen, which is the precursor for A and B antigen synthesis, is likewise expressed in erythroid cells differentiated from both the Rh null and edited hiPSCs.
In the parental Rh null hiPSC line, we observed early A antigen expression at day 7 of erythroid differentiation and its maintenance until day 21 ( Figure 4E). As expected, though, cultured red cells showed weak A antigen expression, in agreement with the original A2 subgroup of the Rh null donor ( Figure S7). 24 In contrast, we could not detect A antigen-labelled cells in those cultures differentiated from any of the edited clones. These results confirm the successful conversion of blood type A to blood type O using CRISPR-Cas9 editing strategies in hiPSCs carrying the rare Rh null blood group.
DISCUSSION
Red cells with rare blood types are currently in limited supply due to the scarce representation of these phenotypes in the global population. The provision of compatible red cells for the transfusion of immunized patients carrying rare blood types is one of the first potential target applications of human red cells manufactured in vitro. On the other hand, the diagnosis of red cell alloimmunization, crucial to ensure the safe transfusion of immunized patients, relies on using carefully selected reagent red cells with well characterized phenotypes. These red cells are also obtained from blood donors, so the availability of infrequent phenotypes needed to properly identify rare antibody specificities is likewise very limited. In this sense, having an alternative (unlimited) cell source from which to derive red cells with rare blood types in vitro could potentially overcome the current rare blood limitations in both transfusion and diagnostics. One of the approaches that have been considered to address this issue is the use of hiPSCs obtained from existing donors or patients with rare blood types. 25,26 Of course, such donors are not easily available in practice, as rare phenotypes are usually found in less than 1 per 1,000 in the general population, and donors with exceptional 'null' phenotypes are even less represented. One such example is the H deficiency, also known as the Bombay (Oh) phenotype, which is found in 1 in 10,000 individuals in India. The generation of hiPSCs from the dermal fibroblasts of a Bombay blood-type individual 14 provided the first proof of concept for this approach. More recently, hiPSC lines have been obtained by reprogramming erythroid progenitors from peripheral blood of individuals with the Jr(a−) and D− rare blood types, 15 demonstrating the feasibility of producing autologous hiPSC-derived red cells for the transfusion of patients with rare blood groups. However, the potential utility of hiPSC-derived red cells with null phenotypes or infrequent antigen combinations extends beyond autologous use. Such hiPSC lines could provide cultured red cells with difficult-to-supply blood types for the transfusion of certain groups of immunized patients (e.g., sickle cell disease patients). Likewise, hiPSC lines could solve the limited availability of reagent red cells with rare phenotypes, which are also necessary for the identification of rare RBC antibody specificities.
In this study, we present the generation of a hiPSC line derived from an Rh null donor with blood type A. The donor subject had been previously identified as a homozygous carrier of the c.836G > A single-base mutation in the RHAG gene, leading to the rare Rh null blood type. [27][28][29] The Rh blood group deficiency, or Rh null, is an extremely rare phenotype which lacks all the Rh antigens on the red cell membrane. Such valuable blood is necessary for the transfusion support of Rh immunized patients, not only those with the Rh null phenotype but also patients with antibodies against any high-prevalence Rh specificity, for whom compatible blood is always difficult to procure. 3 Nonetheless, the potential use of hiPSC-derived red cells for both transfusion and diagnostics is also dependent on ABO compatibility. Extremely infrequent or null blood group types are not necessarily encountered in blood type O donors, as is the case for this blood type A Rh null donor. This circumstance limits the potential use of the hiPSC-derived red cells due to the naturally occurring ABO hemagglutinins. To overcome this limitation, we considered the conversion of blood type A to universal type O.
The conversion of blood group types A and B to universal type O has been pursued for a long time through approaches based on enzymatic treatment. [30][31][32] Blood types A and B differ from type O in the presence of an additional sugar residue (GalNAc or Gal, respectively) on the precursor H-antigen found on type O RBCs. The concept of removing these immunogenic sugars from blood type A or B red cells with specific enzymes (glycosidases) was first proposed and demonstrated by Goldstein (1982). 33 The first attempts required massive amounts of enzyme, but novel α-galactosidases and α-N-acetylgalactosaminidases have been shown to improve the conversion efficiency. 34 However, this technology has not yet moved into clinical practice, as several hurdles remain to be solved. 32 Alternatively, we addressed the conversion of the blood type A Rh null hiPSC line into universal type O using CRISPR/Cas9-mediated gene editing technology, which allows the precise, robust and efficient editing of genes of interest. 13 With the aim of abrogating the expression of the α-1,3-N-acetylgalactosamine transferase (A-transferase), we designed two different ABO gene editing approaches. The first approach was based on the generation of a knock-in (KI) mimicking the c.261delG single-nucleotide deletion present in the most common inactive ABO*O.01 (O1) allele. 35 The specific and precise incorporation of this c.261delG polymorphism within the ABO gene was attempted in the present work with the aim of reproducing the genetic basis naturally associated with blood type O. Exploiting CRISPR/Cas9-targeted integration to correct genetic defects has led to a number of proof-of-principle works in patient-derived hiPSCs, in which the mutations responsible for cystic fibrosis, haemophilia A and β-thalassemia were successfully corrected, although with limited efficiency. [36][37][38] Thus, a second approach by gene KO was undertaken in parallel to maximize the chances of achieving our final goal, which was to obtain an Rh null hiPSC line converted to universal blood type O. Indeed, both the KI and the KO strategies successfully rendered hiPSC-edited clones with the intended ABO gene modifications, as demonstrated by ABO sequencing analysis. Moreover, the established ABO-edited hiPSC lines maintained the Rh null -related RHAG gene mutation as well.
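To illustrate why a c.261delG-like knock-in abolishes A-transferase expression, the sketch below translates a short, purely hypothetical coding fragment before and after a single-base deletion; the sequence and the truncated codon table are illustrative only and do not correspond to the real ABO exon sequence.

```python
# Illustrative only: a single-base deletion (analogous to c.261delG) shifts the
# reading frame and typically introduces a premature stop codon, truncating the protein.
CODON_TABLE = {
    "ATG": "M", "GCT": "A", "GGA": "G", "CTG": "L",
    "ACT": "T", "GAC": "D", "TGA": "*",
}

def translate(seq):
    """Translate a DNA string codon by codon, stopping at a stop codon ('*')."""
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = CODON_TABLE.get(seq[i:i + 3], "?")
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

wild_type = "ATGGCTGGACTGACT"          # hypothetical A-allele-like fragment
mutant = wild_type[:6] + wild_type[7:]  # delete one base, mimicking c.261delG

print("wild type :", translate(wild_type))   # MAGLT (full-length in this toy example)
print("frameshift:", translate(mutant))      # MAD   (premature stop after the shift)
```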
The results obtained in the characterization of the KI- and KO-edited hiPSC lines showed no changes in their stemness potential. Likewise, these lines were successfully differentiated into HPCs and, subsequently, to the erythroid lineage. No remarkable differences were observed between the parental Rh null hiPSC line and the edited clones in erythroid differentiation experiments, with overall results consistent with the expected progression of erythroid cell differentiation cultures from hiPSC-derived CD34 + progenitor cells. [21][22][23] It is worth noting that we were able to produce differentiated erythroid cells reproducing the Rh null phenotype, proving that the original donor's rare phenotype was not altered by the PBMC reprogramming or by the subsequent CRISPR-mediated ABO gene editing of the hiPSCs. Remarkably, the results obtained from both the KI and KO gene editing strategies provide the first demonstration of blood type A conversion to the universal type O using CRISPR/Cas9 technology. The knock-out of specific blood group genes, other than ABO, in pre-existing hiPSC lines has recently been reported as a strategy to reproduce uncommon null phenotypes. 16 Here, we demonstrate the feasibility of robust and sustainable ABO blood type conversion using these newly designed CRISPR/Cas9 gene editing approaches, allowing the production of cultured red cells with improved ABO compatibility. The potential application of these approaches is not restricted to hiPSC lines, since they can also be applied to other cell lines of interest for cultured red cell production, such as immortalized human erythroblast cell lines, 5,39 derived from individuals not carrying blood type O.
During the past decade, significant advances have been made in the production of manufactured red cells from different cell sources. 7,9,40,41 Despite the known limitations that still need to be overcome (e.g., low enucleation rates and cost-efficient scaling), the deeper knowledge of the regulatory pathways involved in terminal erythroid differentiation, together with the continuous progress in scaled-up protocols and technological achievements, makes it reasonable to anticipate that the in vitro production of RBCs will be possible in the near future. In this context, CRISPR/Cas9-mediated blood group gene editing will certainly play an important role as a tool to improve blood group compatibility, as this work demonstrates.
CONFLICTS OF INTEREST
The authors declare no competing interests.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are openly available at https://github.com/anasevilla/ABOgene-editing and in the Supplementary Files. ORCID Ana Sevilla https://orcid.org/0000-0002-9251-4759 | 6,841.8 | 2022-10-01T00:00:00.000 | [
"Biology",
"Medicine",
"Engineering"
] |
Functional Interactions of Alcohol-sensitive Sites in the N-Methyl-d-aspartate Receptor M3 and M4 Domains*
The N-methyl-d-aspartate receptor is an important mediator of the behavioral effects of ethanol in the central nervous system. Previous studies have demonstrated sites in the third and fourth membrane-associated (M) domains of the N-methyl-d-aspartate receptor NR2A subunit that influence alcohol sensitivity and ion channel gating. We investigated whether two of these sites, Phe-637 in M3 and Met-823 in M4, interactively regulate the ethanol sensitivity of the receptor by testing dual substitution mutants at these positions. A majority of the mutations decreased steady-state glutamate EC50 values and maximal steady-state to peak current ratios (Iss/Ip), whereas only two mutations altered peak glutamate EC50 values. Steady-state glutamate EC50 values were correlated with maximal glutamate Iss/Ip values, suggesting that changes in glutamate potency were attributable to changes in desensitization. In addition, there was a significant interaction between the substituents at positions 637 and 823 with respect to glutamate potency and desensitization. IC50 values for ethanol among the mutants varied over the approximate range 100–325 mM. The sites in M3 and M4 significantly interacted in regulating ethanol sensitivity, although this was apparently dependent upon the presence of methionine in position 823. Molecular dynamics simulations of the NR2A subunit revealed possible binding sites for ethanol near both positions in the M domains. Consistent with this finding, the sum of the molecular volumes of the substituents at the two positions was not correlated with ethanol IC50 values. Thus, there is a functional interaction between Phe-637 and Met-823 with respect to glutamate potency, desensitization, and ethanol sensitivity, but the two positions do not appear to form a unitary site of alcohol action.
Ethanol is unusual among the major drugs of abuse in that it acts only at high concentrations (in the millimolar range) and that it acts on multiple targets in the central nervous system. For the greater part of the last century, ethanol was generally believed to produce its effects on central nervous system function via nonspecific actions on neuronal lipids, but it is now well accepted that the biologically important actions of ethanol are due to its interactions with proteins (1,2). Of these proteins, the N-methyl-D-aspartate (NMDA) 2 receptor is among the most important target sites of ethanol in the central nervous system. At relevant concentrations, ethanol inhibits ionic current (3), synaptic potentials (4), Ca2+ influx (5,6), and neurotransmitter release (7) mediated by NMDA receptors. Studies of the mechanism of this inhibition have shown that it does not involve competitive inhibition at the glutamate or glycine binding sites (7)(8)(9)(10)(11)(12) or interaction with sites for other allosteric modulators (8,12) or open channel block (13,14) but that it involves changes in NMDA receptor gating, notably mean open time and opening frequency (13,14). Thus, ethanol appears to inhibit NMDA receptors via low affinity interactions with sites that regulate ion channel gating. Although sites in the intracellular C-terminal domain may modulate both ethanol sensitivity of the NMDA receptor (15) and ion channel gating (16-19), this domain does not contain the site of ethanol action, since removal of this region of the protein does not decrease ethanol inhibition of the receptor (20).
In a previous study, Ronald et al. (21) demonstrated that a phenylalanine residue (Phe-639) in the third membrane-associated (M) domain of the NMDA receptor NR1 subunit influences alcohol sensitivity and shows some characteristics of a site of alcohol action. A previous study from this laboratory (22) identified a methionine residue (Met-823) in the M4 domain of the NMDA receptor NR2A subunit that also influences alcohol sensitivity and that fulfills some of the criteria for a site of alcohol action. The methionine in M4, however, also profoundly affects the gating behavior of the ion channel (23). We have recently shown (24) that the cognate position of NR1(Phe-639) in the NR2A subunit, Phe-637, also regulates alcohol sensitivity as well as desensitization and agonist potency. Studies in γ-aminobutyric acid A and glycine receptors have demonstrated residues in transmembrane domains 2 and 3 that form sites of alcohol and anesthetic action (25,26). These residues appear to line opposite sides of a binding cavity for alcohol and various anesthetics (27), which modulate γ-aminobutyric acid A and glycine receptor ion channel gating (28) by occupying a critical volume (29-33). In the present study, we investigated whether Phe-637 and Met-823 in the NR2A subunit could form a unitary binding site analogous to that found in γ-aminobutyric acid A and glycine receptors. We report here that these sites appear to functionally interact in regulation of NMDA receptor function and ethanol sensitivity, but they do not appear to form a common site of ethanol action.
EXPERIMENTAL PROCEDURES
Materials-Ethanol (95%, prepared from grain) was obtained from Aaper Alcohol & Chemical Co. (Shelbyville, KY), and all other drugs were obtained from Sigma.
Site-directed Mutagenesis and Transfection-Site-directed mutagenesis in plasmids containing NR2A subunit cDNA was performed using the QuikChange kit (Stratagene), and all mutants were verified by double strand DNA sequencing. Human embryonic kidney 293 (HEK293) cells were transfected with NR1-1a, NR2A, and green fluorescent protein at a ratio of 2:2:1 using the calcium phosphate transfection kit (Invitrogen). In electrophysiological experiments, 100 μM ketamine and 200 μM DL-2-amino-5-phosphonovaleric acid were added to the culture medium. Cells were used in whole cell patch clamp experiments 15-48 h after transfection.
Electrophysiological Recording-Whole cell patch clamp recording was performed at room temperature using an Axopatch 1D or Axopatch 200B (Axon Instruments) amplifier. In ethanol concentration-response experiments, electrodes with open tip resistances of 3-8 megaohms were used. After establishing whole cell mode, series resistances of 5-15 megaohms were obtained. In glutamate concentration-response experiments, thin wall glass capillaries were used to pull electrodes with open tip resistances of 1-5 megaohms and series resistances of 2-7 megaohms. In all experiments, series resistance was compensated by 80%. Cells were voltage-clamped at −50 mV and superfused in an external recording solution containing 150 mM NaCl, 5 mM KCl, 0.2 mM CaCl2, 10 mM HEPES, 10 mM glucose, and 20 mM sucrose. In glutamate concentration-response experiments, the external solution contained EDTA (10 μM) to eliminate the fast component of apparent desensitization due to high affinity Zn2+ inhibition (34). The external solution pH was adjusted to 7.4 with NaOH. The intracellular recording solution contained 140 mM CsCl, 2 mM Mg 4 ATP, 10 mM 1,2-bis(2-aminophenoxy)ethane-N,N,N′,N′-tetraacetic acid, and 10 mM HEPES. The intracellular solution pH was adjusted to 7.2 with CsOH. Solutions of agonists and ethanol were applied to cells using a stepper motor-driven solution exchange apparatus (Warner Instruments, Inc.) and 600-μm inner diameter square glass tubing. Ethanol concentrations greater than 500 mM tended to disrupt the gigaohm seal; thus, 500 mM was the maximum concentration used in ethanol concentration-response experiments. In glutamate concentration-response experiments, cells were lifted off the surface of the dish to increase the speed of the solution exchange. We have shown previously that under these conditions, 10-90% rise times for solution exchange are ~1.5 ms (23). Concentration-response data were filtered at 2 kHz (8-pole Bessel) and acquired at 5 kHz on a computer by using a DigiData interface and pClamp software (Axon Instruments).
Molecular Modeling and Molecular Dynamics (MD)
Simulations-A model of the transmembrane region of the NR2A subunit of the NMDA receptor was built by homology modeling with InsightII software from Biosym (now Accelrys; San Diego, CA). Aquaporin (Protein Data Bank code 1fqy) served as the template for modeling, since the arrangement of its helices with respect to the M2 segment was the best of several candidates that were considered. The amino-terminal domain and the S2 ligand-binding domain of NR2A were omitted from the model. A segment between the end of M1 and the start of M2 had no counterpart in aquaporin and was created as a helix and loop. The initial model was subjected to energy minimization with a total of 300 iterations of the steepest descent and conjugate gradient algorithms. The AMBER force field was used for all energy calculations. The minimized structure then served as a template for further refinement of the model. The tilt of the M3 helix was adjusted manually to optimize packing with the other transmembrane segments. In addition, residues along M1 and M3 were shifted one or two positions to account for accessibility data (35,36). Steric clashes were corrected, and the structure was further minimized.

[Displaced figure legend fragment: ...NR2A subunits containing various mutations at Phe-637 and Met-823. *, EC50 values that are significantly different from that for wild type NR1/NR2A subunits (*, p < 0.05; **, p < 0.01; ANOVA followed by Dunnett's test). Results are means ± S.E. of 5-8 cells. C, average values of maximal steady-state to peak current ratio (Iss/Ip) in lifted cells coexpressing NR1 and wild type NR2A subunits (F/M) or NR2A subunits containing various mutations at Phe-637 and Met-823. Currents were activated by 300 μM glutamate in the presence of 50 μM glycine and 10 μM EDTA. *, values that differed significantly from that for wild type NR1a/NR2A subunits (**, p < 0.01; ANOVA followed by Dunnett's test). Results are means ± S.E. of 5-8 cells. D, graph plots values of maximal Iss/Ip versus glutamate log EC50 for steady-state current in the series of mutants. Maximal Iss/Ip was significantly correlated with glutamate log EC50 for steady-state (R² = 0.349, p < 0.05) but not peak (R² = 0.0515, p > 0.05; results not shown) current. The line shown is the least-squares fit to the data. Values for wild type, NR2A(F637A), and NR2A(F637W) are from Ref. 24.]
Binding of ethanol to the model NR2A subunit was evaluated by adapting the general Monte Carlo simulation method of Clark et al. (37). For the MD simulation, the NR2A subunit was solvated with a 3.5-Å layer of ethanol, which included 122 solvent molecules. The NR2A subunit was fixed during the calculations (AMBER force field): 2,000 steps (1 fs each) of equilibration and 134,000 steps in the trajectory at 323 K. Snapshots were saved every 1,000 steps. Most of the ethanol solvent (nearly 90%) dispersed from the protein over the course of the simulation; 15 molecules remained bound at the end of the run.
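A minimal sketch of the kind of per-snapshot bookkeeping described above (how many ethanol molecules remain in contact with the protein as the trajectory proceeds) is given below; it assumes snapshot coordinates have already been exported as NumPy arrays, and it is not the analysis pipeline actually used here.

```python
import numpy as np

CUTOFF = 4.0  # Angstroms; heavy-atom contact distance (illustrative choice)

def ethanol_contacts(protein_xyz, ethanol_xyz, n_atoms_per_ethanol=9):
    """Count ethanol molecules with at least one atom within CUTOFF of the protein.

    protein_xyz: (N_protein, 3) array of protein atom coordinates for one snapshot.
    ethanol_xyz: (N_ethanol_atoms, 3) array of ethanol atom coordinates, grouped by molecule.
    """
    mols = ethanol_xyz.reshape(-1, n_atoms_per_ethanol, 3)
    bound = 0
    for mol in mols:
        # pairwise distances between this ethanol molecule and all protein atoms
        d = np.linalg.norm(mol[:, None, :] - protein_xyz[None, :, :], axis=-1)
        if d.min() < CUTOFF:
            bound += 1
    return bound

# Example with random coordinates standing in for saved snapshots:
rng = np.random.default_rng(0)
protein = rng.uniform(0, 50, size=(3000, 3))
ethanol = rng.uniform(0, 60, size=(122 * 9, 3))  # 122 ethanol molecules, 9 atoms each
print(ethanol_contacts(protein, ethanol))
```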
Data Analysis-In concentration-response experiments, IC50 or EC50 and n (slope factor) were calculated using the equation y = Emax/[1 + (IC50 or EC50/x)^n], where y represents the measured current amplitude, x is concentration, n is the slope factor, and Emax is the maximal current amplitude. Statistical differences among concentration-response curves were determined by comparing log-transformed EC50 or IC50 values from fits to data obtained from individual cells using ANOVA followed by Dunnett's and Tukey-Kramer tests. Comparisons among mean values of log EC50, IC50, or Iss/Ip for the various mutants were made using correlation analysis, and testing for linear relations of these values to amino acid molecular volume (determined as described previously (23)) was performed using linear regression analysis.

RESULTS

Effects of Dual Mutations at Phe-637 and Met-823 on Receptor Function-To determine whether the sites at Phe-637 in the M3 domain and Met-823 in the M4 domain of the NR2A subunit might interactively regulate the function of the receptor, we constructed and tested a series of mutants incorporating dual substitutions at these positions. Because only hydrophobic amino acids were tolerated at the Met-823 position, we chose to test combinations of alanine, tryptophan, and the wild type amino acids methionine and phenylalanine at these sites. Fig. 1 shows that all combinations of these amino acids that were tested at Phe-637 and Met-823 yielded functional receptors and that both glutamate potency and desensitization were altered in some of the mutants. Concentration-response curves for glutamate activation of peak and steady-state current in the series of mutants were parallel, indicating that none of the mutations significantly altered the Hill coefficient (p > 0.05; ANOVA). We have previously reported that individual mutations at either Phe-637 or Met-823 can alter glutamate EC50 values (23,24). Of six mutant subunits containing dual substitutions at Phe-637 and Met-823, none exhibited altered EC50 values for glutamate activation of peak current, whereas four had decreased EC50 values for glutamate activation of steady-state current (Fig. 2; p < 0.0001; ANOVA). In addition, macroscopic desensitization was increased in the majority of these mutants. Values of maximal Iss/Ip for currents activated by 300 μM glutamate in lifted cells were in the range 0.04-0.4 for most of the mutants, compared with a value of 0.66 in the wild type receptor (p < 0.0001; ANOVA). Previous results from our laboratory have demonstrated that for single substitution mutants at NR2A(Met-823), observed changes in steady-state glutamate EC50 were attributable to changes in desensitization (23). In the present study, we found that there was a significant correlation between steady-state glutamate log EC50 and maximal Iss/Ip for the series of mutants (R² = 0.349, p < 0.05).
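As an illustration of the concentration-response fitting described under "Data Analysis", the snippet below fits the same logistic (Hill-type) equation with SciPy; the concentration and response values are invented for demonstration, whereas the actual analysis fitted data from individual cells.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, emax, ec50, n):
    """Logistic concentration-response equation: y = Emax / (1 + (EC50/x)**n)."""
    return emax / (1.0 + (ec50 / x) ** n)

# Hypothetical glutamate concentrations (uM) and normalized peak currents
conc = np.array([0.3, 1, 3, 10, 30, 100, 300])
resp = np.array([0.04, 0.12, 0.35, 0.62, 0.85, 0.96, 1.00])

params, _ = curve_fit(hill, conc, resp, p0=[1.0, 5.0, 1.2])
emax, ec50, n = params
print(f"Emax = {emax:.2f}, EC50 = {ec50:.1f} uM, slope factor n = {n:.2f}")
```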
Effects of Dual Mutations at Phe-637 and Met-823 on Ethanol Sensitivity-We next evaluated the ethanol sensitivity of receptors containing various mutations at Phe-637 and Met-823 of the NR2A subunit by performing concentration-response analysis. All of the mutant receptors tested were inhibited by ethanol (Fig. 3). Concentration-response curves for the mutants were essentially parallel to each other, but ethanol IC50 values among the mutants varied considerably (p < 0.0001; ANOVA). The largest change in ethanol sensitivity was observed in the NR2A(F637W) mutant; significant differences from the wild type value were also observed for the M823W mutant and for all mutants involving dual substitution of alanine and tryptophan. It is possible that changes in receptor desensitization could differentially alter ethanol sensitivity of peak and steady-state current. Previous results from our laboratory (22,24), however, have shown that ethanol inhibition of peak and steady-state current does not differ in single substitution mutants at Phe-637 or Met-823 of the NR2A subunit. In the present study, ethanol inhibition of peak and steady-state current did not differ in F637A/M823W or F637W/M823W subunits, the most highly desensitizing of the dual site mutants (repeated measures ANOVA, p > 0.05; results not shown).
Relation of Ethanol Sensitivity to Receptor Function in Dual Mutations at Phe-637 and Met-823-It is possible that the changes in ethanol sensitivity we observed in the series of mutants tested might be linked to changes in agonist potency and desensitization. Correlation analysis revealed that ethanol IC50 values were significantly negatively correlated with values for glutamate steady-state EC50 (Fig. 4; R² = 0.501, p < 0.05), but not glutamate peak EC50 (R² = 0.327; p > 0.05) or maximal steady-state to peak current ratio (R² = 0.111; p > 0.05). Upon inspection of the graph for the latter, it appeared that the value for the NR2A(F637W) mutant subunit was an outlier due to the dissociation between steady-state glutamate EC50 and maximal Iss/Ip value in this mutant. When the value for the NR2A(F637W) mutant was excluded from the analysis, a highly significant negative correlation was obtained between maximal Iss/Ip and ethanol IC50 (R² = 0.691, p < 0.001).
Interactions between Substituents at Phe-637 and Met-823-The changes in glutamate EC50 and maximal Iss/Ip values we observed could indicate that these sites interact with each other in some manner to regulate receptor function. To investigate this possibility, we constructed interaction plots and analyzed the data using two-way ANOVA (Fig. 5). Although only 2 of the 11 mutations significantly altered the glutamate peak EC50 value, we observed highly significant effects on glutamate peak EC50 of the residues at each position individually (p < 0.0001; ANOVA) as well as a highly significant interactive effect (p < 0.0001; ANOVA). In addition, we observed significant effects on glutamate steady-state EC50 of the residue at each individual position (p < 0.0001; ANOVA), and a significant interaction between the two positions (p < 0.0001; ANOVA). In contrast, maximal Iss/Ip values were altered by mutations at position 823 (p < 0.0001; ANOVA) but not 637 (p > 0.05; ANOVA); nevertheless, mutations at both sites interactively regulated maximal Iss/Ip (p < 0.0001; ANOVA).
We also tested whether the substituents at Phe-637 and Met-823 could interactively regulate ethanol sensitivity. Ethanol IC50 was highly dependent upon the residue at each position individually (p < 0.0001; ANOVA), and the analysis also revealed a highly significant interaction between the sites in M3 and M4 in regulation of ethanol IC50 (p < 0.0001; ANOVA). This interaction was dependent upon the presence of the wild type residue methionine at position 823 in M4, since it was no longer significant if the values for subunits with methionine at 823 were excluded from the analysis (p > 0.05; ANOVA).
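The factorial analysis described above can be reproduced in outline with a standard two-way ANOVA. The sketch below uses statsmodels on an invented per-cell table of ethanol IC50 values; the column names and numbers are hypothetical and serve only to show the layout of the interaction test.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical per-cell data: residue at position 637, residue at 823, ethanol IC50 (mM)
df = pd.DataFrame({
    "pos637": ["F", "F", "W", "W", "A", "A", "F", "W", "A", "W", "A", "F"],
    "pos823": ["M", "W", "M", "W", "M", "W", "A", "A", "A", "M", "W", "M"],
    "ic50":   [129, 224, 321, 188, 145, 260, 170, 205, 230, 300, 250, 135],
})

# Two-way ANOVA with an interaction term between the two positions
model = ols("ic50 ~ C(pos637) * C(pos823)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```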
Relation of Molecular Volume of Substituents at Phe-637 and Met-823 to NMDA Receptor Ethanol Sensitivity-In the substitution mutants at sites Phe-637 and Met-823 in the NR2A subunit, the observation that the substituent at one position could alter the effects of substituents at the other position on ethanol sensitivity of the receptor could indicate that these residues physically interact with each other. If these sites lined opposite sides of an ethanol binding cavity and if ethanol acted via volume occupation of this pocket, then one would predict that there should be a significant linear relation between ethanol IC50 and the combined molecular volume of the amino acid side chains at the two sites. However, no such relation was observed (Fig. 6; R² = 0.0229; p > 0.05). The homology model of the NR2A subunit described under "Experimental Procedures" was used to identify possible alcohol binding sites. MD simulations were then run in which ethanol was allowed to dissociate from these binding sites over the course of the trajectory. Solvent molecules that are the last to leave represent the strongest interactions. In this model (Fig. 7), ethanol bound to a number of sites throughout the M domains, including Phe-637 and Met-823. Although the MD simulation did not permit precise quantitation of binding affinity, the affinity of ethanol for Phe-637 and Met-823 was in the top 10% of all sites.
DISCUSSION
We have previously shown that Phe-637 in M3 and Met-823 in M4 of the NR2A subunit influence both receptor function and alcohol sensitivity (22)(23)(24). In the present study, we observed that the substituent at either of these positions can alter the influence of the substituent at the other position on both receptor function (agonist potency and desensitization) and ethanol sensitivity of the receptor. A direct interaction, such as hydrogen or hydrophobic bonding, between the amino acid side chains at positions 637 and 823, however, appears unlikely. The functional interaction depended upon the inclusion of the mutant containing the wild type residue, methionine, at position 823 and tryptophan at position 637, because when the value for this mutant was removed, the interaction plots were essentially parallel. We have previously reported that this mutant, NR2A(F637W), showed a marked decrease in the glutamate EC50 values for both peak and steady-state current, apparently due to changes in gating (24). Substitution of alanine, phenylalanine, or tryptophan at position 823 reduced, or in some cases eliminated, this effect. This observation is difficult to reconcile with a direct interaction between the side chains at these positions. A more probable explanation is that when the wild type methionine is present at position 823, it may interact with other groups in its environment in a manner that places a conformational constraint on the M4 domain, which in turn restricts the conformation of the other M domains. This conformational restriction could then allow tryptophan substitution at position 637 to produce a substantial change in gating. Substitution of methionine with a hydrophobic amino acid at 823 may release this constraint, diminishing the effect of tryptophan at 637. Recent evidence from a study using cysteine substitution in NR1/NR2C NMDA receptors (41) is also not consistent with an interaction between positions 637 and 823. In this study, the location of the amino acids corresponding to Phe-637 and Met-823 in the NR1 and NR2C subunits differed by 6-10 positions within the plane of the membrane and thus are not likely to be in close proximity to each other. The topology of the M domains derived from homology modeling in the present study is also consistent with the view that Phe-637 and Met-823 do not directly interact. In this model, Phe-637 and Met-823 are ~14 Å apart with the M2 loop sandwiched in between.
In γ-aminobutyric acid A and glycine receptors, amino acid substitutions at critical positions in transmembrane domains 2 and 3 can alter alcohol and anesthetic potency (25,26). These residues appear to form opposite sides of a cavity that binds alcohol and various anesthetics (27). The accessibility and dimensions of this cavity change during ion channel gating (38,39), and occupation of the cavity by an alcohol or anesthetic molecule alters gating of the ion channel (28,38). The potency of various alcohols and anesthetics for enhancing agonist-activated currents in these receptors is positively correlated with the molecular volume of the alcohol or anesthetic and negatively correlated with the molecular volume of the amino acid side chain (29-33). The occupation of a critical volume in this cavity by the alcohol or anesthetic molecules is thus thought to be responsible for their modulation of receptor function. If there were a similar cavity between the M3 and M4 domains of the NR2A subunit incorporating positions 637 and 823 and if ethanol acted by occupying a critical volume to destabilize a conformational change in this cavity associated with ion channel gating, one would also predict that the molecular volume of the substituents at positions 637 and 823 would be correlated with ethanol potency. Although this is the case for substitutions at position 823 (22), there is a negative correlation with ethanol potency for substitutions at position 637 (24). In the present study, there was no relation between the sum of the molecular volumes of the substituents and ethanol potency, which may indicate that the opposing effects of molecular volume at positions 637 and 823 negated each other. The results obtained in MD simulations were also not consistent with a single site of ethanol action formed by positions 637 and 823. Although both Phe-637 and Met-823 contribute to ethanol binding sites in this model, these residues are separated by an estimated distance of 13-14 Å, which means they are not sufficiently close to form a common binding site for a single ethanol molecule. Thus, it is more likely that Phe-637 and Met-823 form separate sites of alcohol action.
In previous studies from this laboratory, ethanol potency was not related to measures of desensitization for single-site substitution mutants at NR2A(Phe-637) (24) or NR2A(Met-823) (22). Ethanol potency and maximal glutamate I ss /I p were not correlated in the present study for the entire panel of substitution mutants at both positions but were significantly negatively correlated when the value for the F637W mutant was excluded. Thus, in mutants other than NR2A(F637W), ethanol potency decreased with increases in desensitization. The observation that the correlation was vitiated by the inclusion of the NR2A(F637W) mutant, which exhibited low desensitization and low ethanol potency, argues against modulation of desensitization as the critical factor in the action of ethanol. Nevertheless, in light of the observations of an apparent trend toward a correlation of ethanol sensitivity and desensitization in our previous study (22) and the stabilization of desensitization by ethanol in a non-NMDA glutamate receptor (40), the results of the present study raise the possibility that desensitization may contribute to the action of ethanol on NMDA receptors. | 5,457.2 | 2008-03-28T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Clustering of primordial black holes with non-Gaussian initial fluctuations
We formulate the two-point correlation function of primordial black holes (PBHs) at their formation time, based on the functional integration approach which has often been used in the context of halo clustering. We find that PBH clustering on super-Hubble scales could never be induced in the case where the initial primordial fluctuations are Gaussian, while it can be enhanced by the so-called local-type trispectrum (four-point correlation function) of the primordial curvature perturbations.
Introduction
Thanks to the recent detections of gravitational waves from binary black holes by the LIGO/Virgo collaboration, primordial black holes (PBHs) are attracting attention as a candidate for such binary black hole systems. In Ref. [1] (see also Ref. [2]), we estimated the merger rate of PBH binary systems and found that if primordial black holes account for 0.1%-1.0% of the dark matter in the Universe, the PBH scenario can be consistent with the first LIGO gravitational wave (GW) event, GW150914. In the analysis in Ref. [1], we assumed that the distribution of PBHs is spatially uniform. As discussed in Ref. [3] (see also [4][5][6]), #1 if PBHs are spatially clustered at formation, which can be characterized by the two-point correlation function, this would affect the probability of PBH binary formation and the estimation of the merger rate of PBH binaries.
There are several works about the initial clustering of PBHs. #2 As a pioneering work, Ref. [11] (and also Ref. [12]) discussed galaxy formation due to the spatial fluctuations in PBH number density. A more detailed discussion was given in Ref. [13]. The PBH two-point correlation function and the power spectrum were estimated in Ref. [13] by making use of the peak formalism developed in the context of the clustering of galaxies/halos. As a result, the power spectrum of the PBH distribution on large scales should be dominated by the Poisson noise, and it behaves as a matter isocurvature perturbation. Based on such a PBH isocurvature due to the Poisson noise, Ref. [14] put a constraint on the abundance of PBHs from Ly-α forest observations, and Refs. [15,16] estimated an expected constraint on it by making use of future 21cm observations. Recently, Ref. [17] studied the initial clustering of PBHs in more detail and found that even on much smaller scales PBHs should not be clustered beyond Poisson. Furthermore, Ref. [18] investigated the dependence of the clustering feature on the shape of the primordial curvature power spectrum. While Refs. [?, 13] assumed Gaussian initial fluctuations, Refs. [19,20] investigated the impact of primordial local-type non-Gaussianity on the super-Hubble density fluctuations of PBHs, based on the peak-background split picture. Although they found that local-type non-Gaussianity could induce the super-Hubble correlations of the PBH density fluctuations, they focused on a specific type of non-Gaussianity and did not obtain a formula for a PBH two-point correlation function which is applicable for more general types of non-Gaussianity. Recently, Ref. [21] also investigated the effect of primordial non-Gaussianities not only on the abundance but also on the clustering property of PBHs. In Ref. [21], the threshold of PBH formation is supposed to be given in terms of primordial curvature perturbations. Their result indicates that PBHs are clustered beyond Poisson on super-Hubble scales even for Gaussian primordial curvature perturbations, which appears to be inconsistent with the results obtained in previous works. #1 Clustering of primordial black holes has also been discussed in Refs. [7,8]. These papers have investigated the observational effects of PBH clustering, not only on the binary formation but also on the formation of the supermassive BHs. #2 There have been several works about late-time clustering of PBHs, e.g. in dark matter halos (see, e.g., Refs. [9,10]).
In this paper we estimate the two-point correlation function of PBHs by making use of a functional integration approach. This method is powerful in studying correlation functions of biased objects since it allows us to systematically include the effect of non-Gaussian properties of the underlying density fluctuations [22][23][24]. Actually, this approach has also been used in Ref. [21]. In the radiation-dominated era, PBHs are actually formed soon after horizon reentry if the amplitude of primordial fluctuations is greater than a certain threshold. The primordial fluctuation often used to study a criterion for PBH formation is the density contrast in a comoving slice. This quantity represents a local three-curvature and is in good accordance with the physical argument that PBH formation should be determined by local dynamics (see, e.g., Ref. [25]). Thus, contrary to Ref. [21], we employ the density contrast in a comoving slice as a critical quantity for PBH formation.
This paper is organized as follows. In the next section, based on the functional integration approach (or path integral method: see, e.g., Ref. [22]), we formulate a two-point correlation function for PBHs which is applicable to non-Gaussian primordial perturbations. In Sect. 3 we investigate the possibility that PBHs are clustered on large scales due to primordial non-Gaussianities. In order to discuss how PBHs are clustered, we simply assume scale-independent local-type non-Gaussianity of the primordial curvature perturbations and show the relation between the PBH two-point correlation on large scales and the non-linearity parameter. Section 4 is devoted to the conclusion.
Formulation
Since during the radiation-dominated era PBHs are formed in the overdense (or positive spatial curvature) region with the Hubble horizon size, the criterion for PBH formation is supposed to be locally determined independently of super-Hubble scale fluctuations. Thus we introduce local smoothed primordial fluctuations θ local (x) as where the window function W local (x) is a smoothing function with scale R that also removes wavelength modes longer than the scale R. For PBH formation, the scale R is roughly matched with the Hubble horizon size at the formation. In this section, to maintain generality we do not specify a particular gauge for defining the primordial fluctuations θ(x), and assume the criterion for PBH formation is given by #3 In the next section we will choose the density contrast on the comoving slice as θ local (x).
#3 Strictly speaking, the threshold depends on the perturbation profile (see, e.g., Ref. [26]). In this paper we ignore this dependence and assume that the threshold is the same for all profiles. This assumption does not affect the main result of this paper.
Probability of PBH formation
The probability that a point x becomes a PBH can be given by a functional integral over P[θ], the probability distribution functional for the primordial fluctuations θ(x), restricted to configurations above the threshold; also, the probability that two points x_1 and x_2 are both PBHs can be given by the analogous integral with the threshold condition imposed at both points. By using the expression for the one-dimensional Dirac delta function and the expression for the local smoothed fluctuations given by Eq. (1), we can rewrite these probabilities in a form convenient for what follows.

2.2 P_1 and P_2 in terms of the n-point correlators of the primordial fluctuations

Let us introduce a generating function Z[J], with Z[0] = 1. The "connected" n-point correlation functions of θ(x) can be given in terms of the generating function. Inversely, we can obtain the expression for log Z[J] in terms of the "connected" n-point correlation functions as log Z[J] = Σ_n (i^n/n!) ∫ d³y_1 d³y_2 ··· d³y_n ξ^(n)_{θ(c)}(y_1, y_2, ···, y_n) J(y_1)J(y_2) ··· J(y_n). (10) Choosing J(y) := φ W_local(x − y), the one-point probability of PBH formation, P_1, can be written in terms of ξ^(n)_{local(c)}, the moments of θ_local. In the same way, the two-point probability of PBH formation, P_2, can be expressed in terms of ξ^(n)_{local(c),m}, which can be regarded as a cross correlation between the m-th and (n − m)-th moments of the local smoothed primordial fluctuations. In the above expression for P_1, we can perform the integration with respect to φ, and then the expression for P_1 can be reduced to a form written in terms of w := α/σ_local, ν := θ_th/σ_local, and σ_local := [ξ^(2)_{local(c)}]^{1/2}. We can also obtain a reduced form for P_2 in the same way.
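For orientation, the structure of the quantities introduced above can be sketched in standard notation as follows (a sketch under the definitions quoted in the text; normalizations may differ from the original equations):

$$\theta_{\rm local}(\mathbf{x}) = \int d^3y\, W_{\rm local}(\mathbf{x}-\mathbf{y})\,\theta(\mathbf{y}),$$

$$P_1 = \int \mathcal{D}\theta\, P[\theta]\;\Theta\big(\theta_{\rm local}(\mathbf{x}) - \theta_{\rm th}\big), \qquad
P_2 = \int \mathcal{D}\theta\, P[\theta]\;\Theta\big(\theta_{\rm local}(\mathbf{x}_1) - \theta_{\rm th}\big)\,\Theta\big(\theta_{\rm local}(\mathbf{x}_2) - \theta_{\rm th}\big),$$

$$Z[J] = \int \mathcal{D}\theta\, P[\theta]\,\exp\!\Big[i\!\int d^3y\, J(\mathbf{y})\,\theta(\mathbf{y})\Big],$$

where $\Theta$ is the Heaviside step function, which can be written through the Fourier representation of the Dirac delta as $\Theta(y-\theta_{\rm th}) = \int_{\theta_{\rm th}}^{\infty} d\alpha \int \frac{d\phi}{2\pi}\, e^{i\phi(y-\alpha)}$.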
PBH two-point correlation function
Let us obtain an approximate formula for the PBH two-point correlation function, by performing the integration with respect to w in the above expressions. To do so, we employ two approximations: (a) a weak non-Gaussian limit and (b) a high peak limit.
In the weak non-Gaussian limit, we assume that the connected moments of the local fluctuations are small compared with the corresponding powers of σ_local. With the assumptions ξ^(n)_{local(c),m}/σ^n_local ≪ 1 for n ≥ 3 and ξ^(2)_{local(c)}(x_1, x_2)/σ²_local ≪ 1, we can also obtain a correspondingly simplified expression. Furthermore, by making use of the Hermite polynomials H_n(x), given as H_n(x) := (−1)^n e^{x²} (d/dx)^n e^{−x²}, we can replace the derivatives with respect to w with the Hermite polynomials. Then, finally, by employing the high peak approximation, ν ≫ 1, which might naturally be valid for PBH formation, we can perform the integration over w approximately and obtain approximate expressions for P_1 and P_2. Then, by using these expressions for P_1 and P_2, the two-point correlation function of the PBHs can be evaluated. Note that, strictly speaking, in order for the resulting expression to be valid, we should require the stronger assumptions given in Eqs. (23) and (24). Up to the four-point correlation function of the local smoothed primordial fluctuations, by using an approximate form of the Hermite polynomials, H_n(x) ∼ 2^n x^n for x ≫ 1, we obtain an expansion that corresponds to the result obtained in Refs. [23,24] in the context of the halo bias. Under the assumptions given by Eqs. (23) and (24), this equation is valid for ξ_PBH(x_1, x_2) ≪ 1.
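Schematically, and with combinatorial coefficients suppressed, the expansion referred to here has the standard high-peak halo-bias form (a sketch, not necessarily the paper's exact Eq. (25)):

$$1+\xi_{\rm PBH}(\mathbf{x}_1,\mathbf{x}_2) \simeq
\exp\!\Bigg[\sum_{n\ge 2}\ \sum_{m=1}^{n-1} c_{n,m}\,
\frac{\nu^{\,n}}{\sigma_{\rm local}^{\,n}}\;\xi^{(n)}_{{\rm local}(c),m}(\mathbf{x}_1,\mathbf{x}_2)\Bigg],$$

so that, truncating at the four-point function,

$$\xi_{\rm PBH} \sim \frac{\nu^{2}}{\sigma_{\rm local}^{2}}\,\xi^{(2)}_{{\rm local}(c)}
+ \frac{\nu^{3}}{\sigma_{\rm local}^{3}}\,\xi^{(3)}_{{\rm local}(c),1}
+ \frac{\nu^{4}}{\sigma_{\rm local}^{4}}\,\xi^{(4)}_{{\rm local}(c),2} + \ldots,$$

where the $c_{n,m}$ are order-unity combinatorial factors. Because the Gaussian piece $\xi^{(2)}$ is suppressed on super-Hubble separations, any super-Hubble PBH correlation must come from the connected higher-order terms, in particular the 2-2 split of the four-point function.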
PBH clustering with local-type non-Gaussianity
Before a quantitative discussion of the amplitude of the PBH two-point correlation function, let us first provide an intuitive idea of why we take into account terms up to the four-point correlation function of the primordial fluctuations in the above formulation.
Hereafter, as the primordial fluctuations θ(x) in the above formulation, we use the primordial curvature perturbations on the comoving slice, often denoted by R_c(x). In the long-wavelength approximation, the comoving density fluctuation δ can be given in terms of R_c(x) [27], where w, a and H are respectively the equation of state of the Universe, the scale factor and the Hubble parameter. As can be seen from that expression, if we use the primordial curvature perturbations on the comoving slice as θ(x) in the previous section, a natural variable for the local primordial fluctuations θ_local would be the comoving density fluctuation. In fact, this quantity represents a local three-curvature and is in good accordance with the physical argument that the criterion for PBH formation should be determined by local dynamics (i.e. within the Hubble horizon) and be free from the addition of super-Hubble modes. By absorbing e^{R_c(x)} into a local scale factor [25], at linear order the expression reduces to a Laplacian of R_c; here we take w = 1/3 in the radiation-dominated era. The two-point correlation function of δ therefore involves two Laplacians acting on the correlation function of R_c. Because of the two Laplacians, ⟨δ(x)δ(y)⟩ rapidly approaches zero for |x − y| ≫ (aH)^{-1} unless R_c is extremely red-tilted. Thus, in general, ⟨δ(x)δ(y)⟩ is suppressed on super-Hubble scales. Due to this locality, at leading order in δ the PBH abundance at point x would be determined by the local variance ⟨δ²(x)⟩, and then the PBH two-point correlation function is given by its correlation. If δ is Gaussian, the correlation of the local variance reduces to products of ⟨δ(x)δ(y)⟩ and should therefore be suppressed on super-Hubble scales, as can be seen from Eq. (28). Thus, PBHs are produced in the same amount in every super-Hubble-size region; in other words, PBHs are not clustered on super-Hubble scales. If, on the other hand, δ is non-Gaussian, the correlation of the local variance may remain on super-Hubble scales. In order to see this explicitly, let us focus on the simple case where R_c consists of two uncorrelated Gaussian fields φ and χ, where it is assumed that χ has super-Hubble-scale correlation and φ gives the dominant contribution to PBH formation. For such a case, the density contrast on the comoving slice follows from the same relation, and for super-Hubble distances |x − y| ≫ (aH)^{-1} we obtain the result referred to below as Eq. (32). There are two remarks regarding this result. First, on super-Hubble scales the correlation of the local variance is directly proportional to the correlation function of χ. This result simply reflects our naive intuition that a local quantity can possess correlation over super-Hubble distances only when the quantity is sourced by another quantity having correlation over super-Hubble distances. Secondly, the correlation of the local variance is proportional to a part of the connected four-point function of R_c. More explicitly, the right-hand side of Eq. (32) is obtained from the connected four-point function of R_c by ignoring the terms proportional to the two-point correlation function ⟨R_c(x)R_c(y)⟩ (i.e., keeping only the first term).
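A minimal sketch of the long-wavelength relation being invoked (standard in the PBH literature; sign and normalization conventions vary, and this is not necessarily the paper's exact equation):

$$\delta(\mathbf{x},t) \simeq -\frac{2(1+w)}{5+3w}\,\frac{1}{a^{2}H^{2}}\,\nabla^{2}R_c(\mathbf{x})
\;\;\xrightarrow{\;w=1/3\;}\;\; -\frac{4}{9}\,\frac{1}{a^{2}H^{2}}\,\nabla^{2}R_c(\mathbf{x}),$$

so that in Fourier space $\delta(k) \propto (k/aH)^{2}R_c(k)$ and $P_\delta(k) \propto (kR)^{4}P_{R_c}(k)$ with $R=(aH)^{-1}$, which is why $\langle\delta(\mathbf{x})\delta(\mathbf{y})\rangle$ decays on super-Hubble separations.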
In particular, if the power spectrum of χ is the same as that of φ, the parameter α is related to the local-type trispectrum parameter τ_NL as τ_NL = α² [28], and Eq. (32) takes a correspondingly simple form. Thus, the correlation of the local variance is proportional to τ_NL and the correlation function of the curvature perturbation. Notice that the bispectrum parameter is f_NL = 0 in the present case, and it is actually the trispectrum (not the bispectrum) that determines the clustering of PBHs over the super-Hubble distance.
On the other hand, if f_NL ≠ 0, τ_NL is also non-zero, with the lower bound τ_NL ≥ (36/25) f²_NL [29]. Thus PBHs are necessarily clustered on super-Hubble scales in this case, which is consistent with Refs. [19,20], which showed that the clustering is characterized by f_NL. In the following discussion, we actually employ the local-type ansatz for the non-Gaussianities of primordial curvature perturbations. Based on our result in Eq. (25), we explicitly show that the PBH two-point correlation function is proportional to τ_NL and the two-point correlation function of the primordial curvature perturbations.
Primordial local-type non-Gaussianity
The local-type primordial non-Gaussianity in Fourier space #4 has conventionally been characterized by introducing three constant parameters, the so-called non-linearity parameters f_NL, g_NL, and τ_NL, which respectively represent the amplitudes of the bispectrum and trispectrum of the primordial curvature perturbations [28]; here P_Rc(k) denotes the power spectrum of the primordial curvature perturbations.
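For reference, the conventional local-type parametrization reads, in standard notation (a sketch of the usual definitions rather than the paper's exact equations):

$$\langle R_c(\mathbf{k}_1)R_c(\mathbf{k}_2)\rangle = (2\pi)^3\,\delta^{(3)}(\mathbf{k}_1+\mathbf{k}_2)\,P_{R_c}(k_1),$$

$$B_{R_c}(k_1,k_2,k_3) = \tfrac{6}{5}f_{\rm NL}\big[P_{R_c}(k_1)P_{R_c}(k_2) + 2\ {\rm perms}\big],$$

$$T_{R_c}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,\mathbf{k}_4) = \tau_{\rm NL}\big[P_{R_c}(k_1)P_{R_c}(k_2)P_{R_c}(|\mathbf{k}_1+\mathbf{k}_3|) + 11\ {\rm perms}\big] + \tfrac{54}{25}g_{\rm NL}\big[P_{R_c}(k_1)P_{R_c}(k_2)P_{R_c}(k_3) + 3\ {\rm perms}\big],$$

which in real space corresponds to the expansion $R_c = R_g + \tfrac{3}{5}f_{\rm NL}\,(R_g^2 - \langle R_g^2\rangle) + \tfrac{9}{25}g_{\rm NL}\,R_g^3$ for a single Gaussian field $R_g$ (in which case $\tau_{\rm NL} = \tfrac{36}{25}f_{\rm NL}^2$).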
PBH power spectrum with non-Gaussian primordial fluctuations
In the previous section we discussed the two-point correlation function of the spatial distribution of PBHs. The two-point correlation function can be expressed through the PBH power spectrum, where δ_PBH(x) is the number density field of PBHs and P_PBH(k) is the PBH power spectrum, with δ_PBH(k) being the Fourier transform of the PBH number density field. Assuming statistical isotropy, the PBH power spectrum can inversely be given by the Fourier transform of the correlation function (see footnote #4 for the Fourier transform convention), where r := x_1 − x_2. Substituting Eq. (25) into this equation and noting that we apply the comoving density fluctuations δ(x) to θ_local(x) used in the previous section, we obtain an expression in which P_δ, B_δ, and T_δ are respectively the power, bi-, and trispectra of the comoving density fluctuations, and W_R(k) is a window function in Fourier space, smoothed with the comoving scale R = (aH)^{-1} at PBH formation. For primordial local-type non-Gaussianity as defined by Eq. (36), P_δ, B_δ, and T_δ are respectively given in terms of the non-linearity parameters and the power spectrum of the primordial curvature perturbations, e.g. P_δ(k) = (4/9)² (kR)⁴ P_Rc(k). Substituting Eq. (42) into Eq. (41), we obtain convolution integrals over internal momenta with integrands proportional to P_Rc(p_1)P_Rc(p_2)P_Rc(|k + p_1|), where W_local(k) := (kR)² W_R(k). As can be seen from this equation, the contributions from non-zero g_NL and τ_NL are of the order of P³_Rc. Note that this equation is derived based on the approximate expression (25). With respect to the order of P_Rc, the (ξ^(2))², (ξ^(2))³, and ξ^(2)ξ^(3) terms should also be of the order of P³_Rc and hence should be taken into account in the expression in Eq. (25) (or Eqs. (20) and (21)). However, one can show that these quadratic and cubic terms are included in the k-independent contribution in the following expression. Thus, for the super-Hubble correlation of PBHs, these terms can be neglected. #4 We use the Fourier transform convention under which these quantities are defined.
For the kR ≪ 1 limit, which we focus on in this paper, noting that we can take W_local(k) → 0 and |k + p| → p, the above expression simplifies. Thus, in the case where the primordial curvature perturbations have local-type non-Gaussianity parameterized by τ_NL, the PBH power spectrum does not decay even on super-Hubble scales and is proportional to the power spectrum of the primordial curvature perturbations. Inversely, the two-point correlation function of the PBHs is obtained as an enhanced term proportional to the curvature correlation function plus a constant C corresponding to the k-independent terms in Eq. (44). A typical value of the enhancement factor is τ_NL ν⁴ ∼ O(10⁶) × (τ_NL/10³). For super-Hubble scales, only the first term on the right-hand side is relevant, and thus, if τ_NL-type non-Gaussianity exists, it would have a large effect on the clustering behavior of PBHs even on super-Hubble scales (r ≫ R) at formation. As discussed in Refs. [19,20], if PBHs contribute to the dark matter component of the Universe, their spatial distribution behaves as dark matter isocurvature perturbations. Recent cosmic microwave background (CMB) observations give a tight constraint on the fraction of dark matter isocurvature perturbations [30]. Introducing a parameter representing the mass fraction of PBHs in the total dark matter, f_PBH, the observational constraint roughly means f²_PBH τ_NL ν⁴ ≲ O(10⁻²). If PBHs comprise a dominant component of dark matter, adopting ν = 10 as an approximate value necessary for PBH formation gives an upper limit on τ_NL of τ_NL ≲ 10⁻⁶.
Note that this constraint for τ_NL is obtained by assuming that the simple expression for the primordial trispectrum given by Eq. (36) is valid on CMB scales, which are super-Hubble scales at the time of PBH formation. Although the slow-roll condition would be violated when the PBH formation scale exits the horizon, in general the CMB scales are considered to exit the horizon in the slow-roll phase. As discussed in e.g. Ref. [31], it might be difficult to realize the simple expressions for the primordial non-Gaussianity given by Eq. (36) in single-field slow-roll inflation. Thus, the above constraint for τ_NL cannot be simply applied in single-field inflation, and it does not tightly constrain the PBH formation scenario in such a single-field class.
Conclusion
We have investigated the clustering behavior of PBHs at formation during the radiation-dominated era. Since during the radiation-dominated era PBHs would be formed from the direct collapse of overdense regions of Hubble scale, super-Hubble fluctuations might be important for PBH clustering. We formulated the two-point correlation function of PBHs by making use of a functional integration approach which takes into account the non-Gaussian property of the primordial fluctuations. Our result shows that PBHs are never clustered at formation as long as the primordial fluctuations obey Gaussian statistics, and that the super-Hubble two-point correlation function is induced by the connected part of the four-point correlation function of the primordial fluctuations. In order to evaluate the super-horizon two-point correlation function of PBHs quantitatively, we considered non-Gaussian primordial perturbations of the local type up to the four-point function (trispectrum), parametrized by two constant parameters, g_NL and τ_NL. We found that the τ_NL-type non-Gaussianity determines the super-Hubble two-point correlation function. Thus, to estimate the clustering behavior of PBHs we should carefully investigate the non-Gaussian property of the primordial fluctuations, which strongly depends on the generation mechanism, such as the inflation model.
"Physics"
] |
A New Traffic Distribution Routing Algorithm for Low Level VPNs
Virtual Private Networks (VPN) constitute a particular class of shared networks. In such networks, the resources are shared among several customers. The management of these resources requires a high level of automation to obtain the dynamics necessary for the well-functioning of a VPN. In this paper, we consider the problem of a network operator who owns the physical infrastructure and who wishes to deliver VPN service to his customers. These customers may be Internet Service providers, large corporations and enterprises. We propose a new routing approach referred to as Traffic Split Routing (TSR) which splits the traffic as fairly as possible between the network links. We show that TSR outperforms Shortest Path Routing (SPR) in terms of the number of admitted VPN and in terms of Quality of Service. Keywords—Virtual Private Networks (VPN); Quality of Service (QoS); NS-2; Simulations; Shortest Path Routing (SPR); Traffic Split Routing (TSR); Routing algorithm
I. INTRODUCTION
With the exponential growth of the Internet, which increasingly supports various types of applications, especially multimedia applications involving several users simultaneously, Internet service providers as well as network operators are called upon to guarantee quality-of-service commitments to their subscribers. The simplicity and low cost of IP networks are some of the reasons why users are deploying new types of applications on these networks. However, for some types of real-time applications, including video conferencing and VoIP, which are very sensitive to variations in delay (jitter) and to throughput, the required performance is not guaranteed with IP.
Reservation of resources for multimedia applications is necessary to ensure end-to-end performance. However, this reservation is not supported by IP. Real-time applications also require guarantees on resources such as storage space, CPU time, etc. Thus, packets must be routed based on the required QoS, which is not possible with the Internet today: the Internet is by nature "best effort" and lacks any control over the quality of service. Traditional Layer 3 routing methods like Routing Information Protocol (RIP), Open Shortest Path First (OSPF) and Border Gateway Protocol (BGP) would become inadequate if we want to support QoS in the Internet.
As more viable alternatives to traditional IP routing, relying on technologies and network infrastructures that can guarantee QoS, we can cite IP/MPLS, IP over Metro Ethernet and IP over ATM. However, even if these technologies support QoS, the topology of the network as well as the routes which will carry the traffic must be correctly chosen; otherwise the cost of QoS would be prohibitive. Perhaps the best-known analogy is the problem of the taxi driver who has to find the quickest and easiest way to get from one place to another. Instead of letting each driver individually decide which route to choose, we need to inform them in advance which route they should take. Likewise, the optimal choice of routes must be calculated in advance; that is, the approach to adopt to guarantee QoS is "proactive" rather than "reactive".
In order to find the optimal routes, we have to define the optimality criteria, i.e. the objectives and the constraints. When a network is given, which is the case with an operator who owns the physical infrastructure, the goal is to increase the number of "satisfied" customers and therefore income. When we begin to build a network, as in the case of a service provider that does not own the infrastructure, the goal is to minimize the use of the leased resources and therefore the cost of the network. In both cases, QoS is a constraint. Moreover, Virtual Private Networks (VPN) constitute a particular class of shared networks. In such networks, the resources are shared among several customers. The management of these resources requires a high level of automation to obtain the dynamics necessary for the proper functioning of a VPN.
It is in this context that the present research work, "Distributed traffic routing for low-level VPNs", is framed: we design, in an optimized way, VPNs based on logical topologies or multipoint virtual circuits such as VPLS (Virtual Private LAN Service), E-LAN (Ethernet LAN Services), etc. We propose to study the case of an operator who owns the physical infrastructure and who wants to offer this kind of VPN service to its customers. The operator then seeks to maximize the number of customers while providing the QoS required by each of them.
The rest of the paper is organized as follows: Section II provides an overview of random graph generation using the Waxman model and the Brite tool. Section III describes the proposed solution. Section IV reports the performance evaluation environment and the simulation methodology, and presents and explains the simulation results used to assess the validity of our proposed traffic distribution algorithm. Finally, the last section draws conclusions.
II. RELATED WORKS
The design of virtual private networks brings together a whole set of optimization problems that differ by the constraints imposed, and sometimes by the data considered. Studying these issues is very important for network operators as well as service providers. The methods for solving these problems often call on graph theory, performance analysis and optimization, and are as diverse as they are complex.
For a provider, the goal is to set up a network that guarantees the delivery of all its customers' requests with the required quality of service, while minimizing network operating costs. Operators seek customer satisfaction by trying to make the most of all the resources available in their networks.
A. Random Generation of Graphs
The study of large networks is becoming increasingly important, especially thanks to the evolution of telecommunications networks and the Internet. These networks can be of very different natures. To model them, we often use the formal structure of graphs. A graph is modeled by a set of vertices connected by edges. We can enrich the structure of the graph by assigning a cost to each of the edges. It is often difficult to represent a network accurately, so we often prefer to capture the local properties of a network and then generate a graph while respecting these properties as much as possible. Among other things, the generation of graphs allows us to run simulations.
During this research work, we used a tool that generates random graphs known as "Brite" [1]. Brite allows a random generation of several graphs following the "Waxman" model [2]. The latter offers us the possibility of obtaining large networks whose characteristics resemble those of an Internet network. Once a network is generated, the "Waxman" model assigns two parameters to each link: cost and time.
The principle of the "Waxman" method is as follows [2]: 1) Enter the number of nodes to generate. 2) Calculate the probability P (u, v) of adding a link between each pair of nodes u and v. The increase in β results an increase in the density of links in the graph.
The decrease in α results in an increase in the density of the short links between the nodes.
For each ( , ), draw a random number T between (0,1]. If T <P, then add a link between u and v.
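To make this procedure concrete, the following minimal Python sketch implements a Waxman-style generator. The exact link probability P(u, v) = β·exp(−d(u, v)/(α·L)), the plane size, and the default parameter values are assumptions chosen to be consistent with the behavior described above (β controls the overall link density, α the balance between short and long links); they are not taken from Brite itself.

```python
import math
import random

def waxman_graph(n, alpha=0.15, beta=0.4, size=1000.0, seed=None):
    """Generate a random Waxman-style graph on n nodes (illustrative sketch)."""
    rng = random.Random(seed)
    # Place the nodes uniformly at random in a size x size plane.
    nodes = [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n)]
    max_dist = math.hypot(size, size)  # largest possible distance in the plane
    links = []
    for u in range(n):
        for v in range(u + 1, n):
            d = math.hypot(nodes[u][0] - nodes[v][0], nodes[u][1] - nodes[v][1])
            # Assumed Waxman-style probability: higher beta -> denser graph,
            # lower alpha -> long links become much less likely than short ones.
            p = beta * math.exp(-d / (alpha * max_dist))
            t = rng.random()  # draw T; add the link when T < P(u, v)
            if t < p:
                links.append((u, v))
    return nodes, links

nodes, links = waxman_graph(50, alpha=0.15, beta=0.4, seed=1)
print(len(nodes), "nodes,", len(links), "links")
```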
B. Routing Algorithms
New VPN technologies have greatly expanded the range of possibilities for users. On the one hand, they offer users very great flexibility; on the other hand, they lead to increasing complexity for operators and service providers. Several considerations must be examined in order to ensure a satisfactory quality of service (QoS) in response to customer requests. Among these considerations, we can cite: 1) Optimal allocation of communication resources according to user needs and the resources available in the network.
2) The establishment of reliability control mechanisms.
On the other hand, the quality of service offered to a connection is directly related to the choice of the path between a source and a destination. The route calculation must take into account the various constraints imposed by a connection (rate, delay variation, loss rate, etc.). To this end, it is necessary to set up a routing algorithm whose role is to find the best path between a source and its destination while respecting the various constraints imposed; we speak of routing with constraints. Because these constraints vary from customer to customer, and from one type of network to another, it is almost impossible to find a routing algorithm that meets all needs. Indeed, it was proved in [3] that the problem of finding a path with multiple constraints is NP-complete.
Several heuristic proposals were then presented to solve this problem. These proposals can be classified into five categories: 1) The first approach is to minimize a single QoS parameter. The algorithms of Dijkstra and Bellman-Ford are examples of this approach. They find the shortest route between a source and its destination.
2) The second approach is presented by [4]. An algorithm based on the minimization of a QoS parameter subject to a second constraint is proposed. It uses the cost and the delay calculated by a "distance vector" protocol maintained at each node.
3) The third approach is to build a path under two constraints simultaneously (usually time and cost). Chen et al. [5] and Jaff et al. [6] have proposed algorithms to solve the problem with two constraints. The major problem with this proposal is that it is more complex than the other heuristics and it does not guarantee scalability.
4) The fourth approach is based on minimizing the different parameters in a specific order. The Widest-Shortest and Shortest-Widest [7] algorithms are examples of this approach.
5) The fifth approach to the routing problem with QoS is to construct paths using a combined metric that is calculated based on two (or more) constraints. Verma et al. [8] combined cost and bandwidth into a single metric.
The routing approaches with QoS presented above range from the simplest, such as Dijkstra and Bellman-Ford, which are based on a single constraint, to more complex ones exploiting two or more constraints. However, these approaches share a common weakness: they do not guarantee a balanced system when distributing the load. They use an order of priority in the choice of constraints, which leads to the construction of unbalanced paths.
In this research work, we consider the problem of a network operator that owns the physical infrastructure and wishes to deliver a VPN service to its customers. These customers may be Internet service providers, large corporations, and enterprises. We propose a new routing approach referred to as Traffic Split Routing (TSR), which splits the traffic as fairly as possible between the network links. We show that TSR outperforms Shortest Path Routing (SPR) in terms of the number of admitted VPNs and in terms of quality of service.
III. PROPOSED ALGORITHM
In what follows, we present a simple algorithm, Traffic Split Routing (TSR) [9], whose main objective is load sharing in a network. With TSR, we try to distribute the traffic in the network as homogeneously as possible. Our approach is to use the network for a balanced sharing of traffic [10], [11]; the main goal is to avoid overloading some links while others remain unused. This is often achieved by creating disjoint trees and/or paths of small size.
We present in what follows the heuristic of traffic distribution used:
Traffic distribution heuristic
Let ls denote the number of times a link s appears in a VPN tree. We define a variable called the Link Usage Count (LUC), corresponding to ls, which gives the number of times a link appears in a VPN tree. This variable is used as the metric for tree generation: every time a link is used in a tree, its LUC [12] is incremented by one, and when generating new trees, the algorithm tries to avoid the links with the highest LUC.
Obviously, when a VPN connection ends or one of its sites disconnects, the ls value of each link belonging to that connection is decremented by one. The generic tree generated in step 3 can be obtained with any algorithm or protocol; for example, a minimum-weight spanning tree (MST) [13] can be used. Since this tree is determined according to the value of the variable ls, we must verify that the number of hops in the tree does not become arbitrarily large [11].
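The full listing of the heuristic is not reproduced in the text above, so the following Python sketch is only a plausible reconstruction of the LUC mechanism as described: each link is weighted by one plus its current usage count, a tree connecting the source to its destinations is built from LUC-aware shortest paths, and the counters are incremented when the tree is installed and decremented when the VPN ends. The class name, the "1 + LUC" weight, and the path-union tree construction are illustrative assumptions, not taken from [9].

```python
import heapq
from collections import defaultdict

class TSR:
    """Sketch of LUC-based VPN tree generation (hypothetical reconstruction)."""

    def __init__(self, links):
        # links: iterable of undirected links (u, v)
        self.adj = defaultdict(list)
        self.luc = defaultdict(int)          # ls: usage count per link
        for u, v in links:
            self.adj[u].append(v)
            self.adj[v].append(u)

    def _key(self, u, v):
        return (min(u, v), max(u, v))

    def _shortest_path(self, src, dst):
        # Dijkstra where the weight of a link is 1 + its current LUC,
        # so heavily used links are avoided when generating new trees.
        # Assumes dst is reachable from src (connected graph).
        dist, prev = {src: 0}, {}
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v in self.adj[u]:
                nd = d + 1 + self.luc[self._key(u, v)]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [], dst
        while node != src:
            path.append(self._key(prev[node], node))
            node = prev[node]
        return path

    def build_vpn_tree(self, source, destinations):
        # Union of LUC-aware paths from the source to every destination,
        # then increment ls for each link used by the resulting tree.
        tree = set()
        for dst in destinations:
            tree.update(self._shortest_path(source, dst))
        for link in tree:
            self.luc[link] += 1
        return tree

    def release_vpn_tree(self, tree):
        # When the VPN connection ends, decrement ls for its links.
        for link in tree:
            self.luc[link] -= 1
```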
IV. SIMULATIONS AND PERFORMANCE ANALYSIS
This section presents the simulation input parameters for the different simulated VPN networks. All the simulation parameters are given in Table I. For accuracy and consistency, we ran each simulation scenario six times and averaged the measurements; each of the six runs conforms to the simulation parameters already described. To study the behavior of the two routing algorithms SPR and TSR according to the traffic intensity in the network, we varied the number of VPNs to be simulated. Fig. 1 gives an example of a simulated VPN network. Each VPN is made up of a source and a set of destinations: hexagonal nodes represent the sources, circular nodes represent the destinations, and rectangular nodes represent the transit (Steiner) nodes used to reach stations belonging to the same VPN.
We assume that data streams are sent from a source to a destination within the same VPN [14]-[21]. By applying the two heuristics SPR and TSR, remarkable differences in traffic distribution appear. Fig. 2 and Fig. 3 show an example of a simulated scenario where the source and the destination are fixed while the two routing algorithms already described are applied. Fig. 2 depicts a scenario using the shortest path algorithm: the traffic going from source node 2 to destination node 4 is always concentrated on the same path, the shortest path (2-6 and 6-4). On the other hand, using the TSR heuristic in Fig. 3, we notice that the traffic is shared more equitably in the network; from source node 2 we can reach destination 4 by taking different paths (2-6 and 6-4, or 2-5 and 5-4, etc.). This approach allows maximum use of the network links. In order to compare the two heuristics SPR and TSR more rigorously, we calculate the following quality of service parameters: the average reception rate, the delay, the loss rate, and the amount of data sent.
Indeed, the routing technique has a great influence on these parameters. This influence is presented and highlighted by the simulation results, which cover the cases of 4 to 24 VPN sources, representing low and high traffic intensities, respectively. Fig. 4 details the throughput variation for the two heuristics TSR and SPR; it clearly illustrates how the throughput changes with the number of VPN sources. With 4 VPN sources, the SPR heuristic offers a throughput of 6.56 Mbps while with TSR the throughput is 6.16 Mbps. We can thus conclude that under low traffic intensity, applying the shortest path algorithm for traffic routing is more efficient than applying traffic distribution. Subsequently, with a high number of VPNs, the throughput with the SPR heuristic drops significantly, to 4.74 Mbps with 10 VPN traffic sources and 3 Mbps with 24 VPN traffic sources. This decrease in throughput is due to the amount of traffic that is concentrated precisely on the shortest path.
A. Average Reception Data Rate
On the other hand, with the TSR heuristic, we can notice that for 10 VPN sources the throughput is 5Mbps and for 24 VPN sources it can reach a value of 3.95Mbps. By comparing these results with the previous ones we deduce that the TSR algorithm offers a higher throughput especially for a large volume of traffic.
The analysis of the average reception rate allowed us to deduce that the TSR algorithm tends to use the maximum number of links, unlike the shortest path algorithm, where the traffic always takes the shortest path, which, in steady state, causes some links to become overloaded while others remain unused. This has a direct influence on the throughput.
Moreover, for a given throughput, the number of VPN sources admissible with the TSR method is significantly higher than that obtained with SPR. For example, for a target throughput of 4 Mbps, TSR can admit up to 24 VPN sources, whereas with SPR this number is 14 sources.
B. Average End-To-End Delay
In this part, we measure the average time taken for a 1000-byte packet to be transferred from a source to a destination. Fig. 5 shows the average delay for the two routing techniques used, given as a function of the number of VPN sources. The network reacts differently to the increased number of VPN sources under the two routing techniques. At low traffic intensity, the average delay calculated with the shortest path algorithm differs only slightly from that determined by the TSR algorithm: for 4 VPN sources, the average delay with SPR is 24 ms while with TSR it is 25 ms.
By increasing the number of VPNs we can notice changes in the shape of the two curves. In fact, the average delay increases as a function of the number of VPN sources in an almost logarithmic fashion. However, these two curves look almost the same except that the average delay calculated by the TSR algorithm remains lower than that calculated by the SPR approach. Take for example the case of 16 VPNs where the average delay determined by the TSR heuristic is 27ms compared to that of SPR which is 37ms.
We define the gap as the difference between the average delays calculated for the same number of VPN sources with the two heuristics TSR and SPR. We notice that this gap grows with the number of VPN sources (Fig. 5). From these results, we deduce that the difference between the delays obtained with the two approaches SPR and TSR is quite remarkable. The delay is reduced by applying the TSR heuristic because the traffic is distributed over a large number of links, which offers more chances of going through small queues.
Going through small queues also means that the delay variation is smaller with TSR, as illustrated in Fig. 6, which gives the delay variation for the two heuristics SPR and TSR. Indeed, as already mentioned during the throughput evaluation, applying the SPR heuristic under low traffic intensity is more efficient than applying traffic distribution; the curve presented in Fig. 6 confirms this result.
C. Loss Rate
We propose in the following to estimate the packet loss rate for each routing technique. As shown in Fig. 7, a large gap between the loss rates obtained with the two heuristics TSR and SPR is observed. With low traffic intensity, the rate of packets lost by applying the shortest path algorithm is negligible. This rate increases with the number of VPN sources, reaching 17×10^-4 lost packets with 14 sources and 46×10^-4 lost packets with 24 sources.
However, at low load, the traffic distribution algorithm has a higher loss rate than the shortest path, namely 4×10^-4 lost packets for 4 sources; adding 10 more sources increases this loss rate to 16×10^-4 lost packets. From this number of VPN sources (14 sources) onwards, the loss rate resulting from the use of the TSR heuristic becomes significantly lower than that obtained with the SPR algorithm.
From the packet loss results, we can conclude that the TSR algorithm gives lower loss rates for highly loaded networks. Likewise, from Fig. 8, we see that the number of packets sent over the network using the distributed traffic routing technique is larger than that sent by applying the shortest path technique. In fact, with the shortest path algorithm, packets have a high chance of passing through overloaded queues, so excess packets are dropped and the rejected packets must be retransmitted. With the traffic distribution heuristic, there is a high probability of going through lightly loaded paths, which results in a shorter routing time, a lower loss rate, and more packets sent.
V. CONCLUSION
This paper has presented a new routing approach, TSR (Traffic Split Routing), and compared it to the classic shortest path routing, SPR (Shortest Path Routing). In the first part, we presented different performance evaluation techniques. We then presented different simulation tools and justified our choice of the NS-2 tool, and defined the various quality of service parameters that we evaluated. Finally, we detailed and analyzed the different simulation results obtained with different scenarios for the two approaches SPR and TSR.
The simulation results presented have demonstrated the effectiveness of TSR in the case of high traffic intensity. We were thus able to show that our approach, TSR, is more satisfactory for ensuring a better quality of service for certain types of applications, such as real-time multimedia applications and VoIP, which are very sensitive to variations in throughput and delay.
On the other hand, from the simulations, we noticed that applying the traffic distribution algorithm allowed us to use a maximum of network resources. Indeed, for a scenario with 14 VPNs, we observed that the TSR heuristic used 69% of the network links, whereas with the shortest path heuristic only 41% of all the links in the network were used. In addition, the TSR approach makes it possible to accommodate a larger number of VPNs for a given objective (given loss rate, given throughput, etc.).
| 5,164.6 | 2020-01-01T00:00:00.000 | ["Computer Science"] |
Spatio-Temporal changes of Land use/Land cover of Pindrangi Village Using High Resolution Satellite Imagery
Interpretation of high-resolution satellite imagery revealed various land use/land cover features in Pindrangi village. High-resolution satellite imagery was acquired from Google Earth through the SASPlanet software for the years 1984, 1994, 2004, and 2014 and was processed in ArcMap 10.4.1. An analysis of the decadal sequence of imagery aimed at detecting land use/land cover change indicates that plantation increased phenomenally by 235.20 ha during the study period; over the same period, crop land (paddy), which occupied about 26.875 ha in 1984, was reduced to 17.29 ha by 2014, mainly due to encroachment by plantations such as Casuarina, Eucalyptus, and mango, and the scrub area decreased by 35.52 ha. The present study, with the help of GIS and remote sensing (RS), is a similar attempt to record and quantify change in land use and land cover at the village level in its spatial and temporal extents. The conversion of fallow land and crop land into plantation amounts to around 12.91% of the study area.
I. Introduction
A rational assessment of land and its scientific utilization has become important. It is possible only if the whole complex of land use is studied at the district, tahsil, or even village level, taking into account the local physical and socio-economic conditions (Ali Mohmmed, 1978). The present study is mainly a micro-level study of Pindrangi village; it identifies a plantation boom and a decreasing trend of paddy, which have transformed the physical and socio-economic conditions of the study area within the four decades. Micro-level land use mapping is important for the evaluation, management, and conservation of the natural resources of an area. Hence, it is desirable to monitor the trends in land use/land cover. The modern techniques of remote sensing and geographic information systems (GIS) are very useful tools for analysing the trends in land use/land cover through time (Obi Reddy et al., 2001), for example in remote sensing and GIS applications for the identification of aquaculture hotspots at the village level (K. Nageswararao et al., 2003). It is also useful for planners to evaluate the possibilities and limitations of further spatial development, to avoid or restrict undesirable trends of land exploitation, to adjust the forms of land use to the land capability, and to direct the expansion of intensive land utilization into suitable areas (Nageswar Rao and Vaidyanathan, 1990); see also the impact of human land use practices on the occurrence of droughts, a case study of the Godavari delta region (B. Hema Malini et al.). Land use/land cover inventories form an essential component of land resources evaluation and environmental studies (NRSA). Land use is any kind of permanent or cyclic human intervention in the environment to satisfy human needs, and land use capability or land suitability is the potential capability of a given tract of land to support different types of land utilization under given cultural and socio-economic conditions (Vink, 1975). The study of land use patterns is of prime concern to geographers, who seek to know the relationship between man and the natural environment (Tripathi and Vishwakarma, 1988). In this paper, an attempt is made to study the extent of changing land use practices due to the plantation (Casuarina, mango) boom in the study area.
Study area
The study area has witnessed large-scale plantation (Casuarina, Eucalyptus, and mango) development in recent years, as evident from the satellite imagery of the area. While the plantation has encroached on the paddy area in the north-western parts of the study area, its spread is mainly into the fallow lands in the southern, western, and eastern parts. In order to achieve the stated objective, this village, where plantation growth is encroaching predominantly on crop land as well as fallow land, was selected for the study. The study area is situated around latitude 17°56′ N. Most of the people of this area work in mining, such as the quarry situated near the village, which is why most of the study area was fallow land during the period 1984 to 2004; later, as mining activity decreased, the fallow land was simultaneously converted into plantation. The major crops in this area are plantations such as Casuarina, Eucalyptus, and mango, followed by paddy and vegetable crops in the monsoon season. The general climate of the area is tropical; most of the rainfall occurs during the southwest monsoon season (June-September), while the retreating monsoon season (October-November) accounts for the rest of the rainfall, with October being the rainiest month. The hottest month in the study area is May, mean monthly temperatures range from 25 °C to 39 °C, and the coldest month is December.
II. Methodology
In the present study, we used four high-resolution datasets acquired from Google Earth at decadal intervals, for the years 1984, 1994, 2004, and 2014. We rectified these datasets against the base map in ArcGIS 10.4.1; the elevation and eye altitude vary among them, being around 90 m and 11.90 km for 1984 and 1994, and around 126 m and 4.53 km for 2004 and 2014, respectively. The datasets were then digitally processed, and the various geomorphological and land use/land cover features were interpreted (Fig. 2), supplemented by field observations, in order to understand the trends in land use/land cover changes in the area. On-screen digitisation was used to map land use/land cover based on their geometric boundaries. The land use/land cover datasets of 1984, 1994, 2004, and 2014 were 'unioned' in the GIS to extract (by 'querying') the data on the conversion of each land use/land cover category into other types in the study area as a whole. Change matrices were prepared separately for the study area. Further, the magnitude of conversion of fallow land and agricultural land into plantation in each dataset shows the drastic change in the study area during 1984, 1994, 2004, and 2014.
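As a minimal illustration of the change-matrix step, the following Python sketch cross-tabulates the area of each land use/land cover transition between two dates, assuming the unioned layer has already been exported to a table. The file name and the columns class_1984, class_2014, and area_ha are hypothetical placeholders, not taken from the study.

```python
import pandas as pd

# Attribute table of the unioned 1984/2014 land use layers (hypothetical export)
union = pd.read_csv("pindrangi_union_1984_2014.csv")

# Change matrix: rows = 1984 classes, columns = 2014 classes,
# cell values = total area (ha) converted from one class to the other.
change_matrix = pd.crosstab(
    index=union["class_1984"],
    columns=union["class_2014"],
    values=union["area_ha"],
    aggfunc="sum",
).fillna(0.0)

# Net change per class (assumes the same class labels appear at both dates),
# as used to report positive/negative trends.
net_change = change_matrix.sum(axis=0) - change_matrix.sum(axis=1)
print(change_matrix.round(2))
print(net_change.round(2))
```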
III. Results And Discussion
The term land use more commonly refers to human activity on the Earth's surface. The main reasons behind the LU/LC changes include the lack of water availability and low-fertility soils. The study area experienced many significant changes in land use pattern during the study period, and socio-economic development plays a very important role in bringing about changes in the land use pattern; over the study period, scrubs, paddy, and fallow land show a negative trend (Table 1) out of the total geographical area. We used modern technologies such as remote sensing and GIS to enumerate LU/LC. On the basis of the interpretation of remote sensing imagery, field surveys, and existing study area conditions, we classified the study area into seven categories: paddy, built-up area, scrubs, plantation (Casuarina, Eucalyptus, and mango), fallow land, and water body (Table 1). We used multiple datasets (1984, 1994, 2004, and 2014) to enumerate the land use/land cover changes over the study period (Fig. 3).
Conversion of land use/land cover and change detection between 1984 and 1994
The categories of land use/land cover of the study area showed both positive and negative growth within the total geographical area (473.48 ha). As shown in Table 1, the built-up area occupied 1.6% in 1984 and showed a positive trend, reaching around 2.0% in 1994. Likewise, plantation (31.0%-33.6%) and tanks (3.0%-3.5%) show a positive trend, while some categories show a negative trend, such as scrubs (14.9%-14.6%), paddy (13.5%-10.7%), and uncultivated land (36.1%-35.6%), during the period 1984-1994. In the change detection of land use/land cover, plantation shows a positive trend (+2.59%), followed by tanks (+0.55%) and built-up area (+0.38%), during the period 1984-1994. Meanwhile, 13.3 hectares of paddy, 2.29 hectares of uncultivated land, and 1.18 hectares of scrub were converted into plantation (Fig. 3).
Conversion of land use/land cover and change detection between 1994 and 2004
During this period, all the categories again showed both positive and negative growth within the total geographical area (473.48 ha). As shown in Table 1, the built-up area occupied 2.0% in 1994 and increased to around 2.51% in 2004, because of gradual constructional development and a gradual increase in isolated settlements. The remaining categories such as plantation (33.6%-36.67%) and tanks (3.5%-3.95%) show a positive trend, while some categories show a negative trend, such as scrubs (14.5%-10.08%), paddy (10.07%-9.59%), and uncultivated land (35.6%-25.38%), during the period 1994-2004. In the change detection of land use/land cover, plantation shows a positive trend (+3.16%), followed by built-up area (+0.54%) and tanks (+0.46%) (Fig. 3) (Table 1).
Conversion of land use/land cover and change detection between 2004 and 2014
The categories of land use/land cover of the study area showed both positive and negative growth within the total geographical area (473.48 ha). As shown in Table 2, the built-up area continued its positive trend during this period (Fig. 4). The remaining categories such as plantation (36.76%-49.67%) and tanks (3.96%-5.26%) show a positive trend, while some categories show a negative trend, such as scrubs (10.08%-7.50%), paddy (9.59%-3.81%), and uncultivated land (25.38%-5.22%), during the period 2004-2014. In the change detection of land use/land cover, plantation shows a positive trend (+12.91%), followed by built-up area (+2.99%) and tanks (+1.30%). In 2014, around 11.58% of the paddy area was converted into Casuarina, while in the north-eastern parts the plantation and paddy areas were converted into Eucalyptus (23.00%) because of water scarcity in the study area; in addition, most of the people are moving to Visakhapatnam for employment.
IV. Conclusion
The present study was undertaken for the detection, monitoring, and evaluation of possible land use and land cover changes in Pindrangi village using high-resolution images for 1984, 1994, 2004, and 2014, downloaded through the SASPlanet software. The results of the present work indicate that there were important land use/land cover changes between 1984-1994 and 2004-2014 in the study area. The statistical analysis shows that the major changes occurred in uncultivated land, paddy, and plantation; we found both positive and negative variations in land use/land cover. Positive change detection occurred in plantation, built-up area, and tanks, while the remaining categories such as scrubs, paddy, and uncultivated land show a negative trend (Fig. 3 and Fig. 4). Owing to the scarcity of water for cultivation, paddy declined and plantation increased; uncultivated land also declined as it was converted into plantation (Casuarina, Eucalyptus, and mango). Especially between 2004 and 2014, change also occurred within the plantation area itself: Casuarina was converted into Eucalyptus due to insufficient water in the study area.
| 2,431.8 | 2017-06-24T00:00:00.000 | ["Environmental Science", "Mathematics"] |
Ensemble Machine Learning Approach for IoT Intrusion Detection Systems
- The rapid growth and development of the Internet of Things (IoT) have had an important impact on various industries, including smart cities, the medical profession, autos, and logistics tracking. However, with the benefits of the IoT come security concerns that are becoming increasingly prevalent. This issue is being addressed by developing intelligent network intrusion detection systems (NIDS) using machine learning (ML) techniques to detect constantly changing network threats and patterns. Ensemble ML represents the recent direction in the ML field. This research proposes a new anomaly-based solution for IoT networks utilizing ensemble ML algorithms, including logistic regression, naive Bayes, decision trees, extra trees, random forests, and gradient boosting. The algorithms were tested on three different intrusion detection datasets. The ensemble ML method achieved an accuracy of 98.52% when applied to the UNSW-NB15 dataset, 88.41% on the IoTID20 dataset, and 91.03% on the BoTNeTIoT-L01-v2 dataset.
I. INTRODUCTION
Discovering emerging and unknown attacks requires an approach that can detect Internet of Things (IoT) intrusions; machine learning (ML) possesses this ability [1]. The rapid growth of cyberattacks has created the need for intrusion detection within the IoT security architecture. The security field faces serious challenges in the development of technology and the IoT; current security methods do not provide adequate protection, and hence cyberattacks are increasing [2]. Using an ML-based approach, an intrusion detection system (IDS) was proposed for use in the IoT. The proposed model can be trained on different sources from large and classified datasets; it can work effectively after being trained on smaller-sized data and classify samples in the target domain [3]. Another IoT IDS has been proposed using ML and enhanced transient search optimization; the proposed system uses an enhanced transient search optimization algorithm to optimize the hyperparameters of the ML model, and the outcomes of that paper show that the recommended system outperforms other IDSs in terms of accuracy and false alarm rate [4]. This work uses ensemble ML methods to detect intrusions in IoT networks. This article is organized as follows: Section 2 presents the related work, Section 3 presents the IoT intrusion detection system, Section 4 introduces ensemble ML, Section 5 describes the classifiers, Section 6 presents the proposed method, Sections 7 and 8 detail the experimental results, and Section 9 concludes this paper.
II. RELATED WORK
In this section, some previous works in the field of IoT IDS are reviewed. In [5], feature sets and ML methods using multiple clustering approaches (artificial neural networks (NN), backing machines, and random forests (RF)) were applied to MQTT (message queue telemetry transport) traffic and to the UNSW-NB15 dataset, whose features are TCP-based. The best features in the two groups were obtained, with high accuracy and less computation time for the ML algorithms; RF on binary, flow, and MQTT data achieved accuracies of 97.37%, 98.67%, and 97.54%, respectively.
In [6], four algorithms, namely naive Bayes (NB), RF, J48, and ZeroR, were utilized to categorize cyberattacks on the UNSW-NB15 dataset. Two groups were created from the UNSW-NB15 dataset using K-means and expectation-maximization clustering techniques, depending on whether the traffic corresponds to a target attack or to regular network traffic only. Following this classification, correlation-based feature selection was used to create a subset of features. The techniques are useful for research on intrusion detection in widespread networks. The results demonstrate that the RF and J48 algorithms achieved accuracies of 97.59% and 93.78%, respectively.
In [7], NN, logistic regression (LR), NB, decision tree (DT), SGD, and RF classifiers were evaluated empirically and tested using the UNSW-NB15 dataset, and their accuracies were compared. The RF classifier outperformed the other methods, with an accuracy of 95.43%.
In [8], the proposed system, called MidSiot, is used in the IoT. It consists of several stages, including identifying and classifying attacks and real network traffic, and achieved an average accuracy of 99.68%.
In [9], an IDS called Pearson correlation coefficient-convolutional neural network (PCC-CNN) was established as a deep learning model. Intrusion detection was performed by collecting features, detecting changes, and extracting linear operations. Attacks are detected using a binary classifier on three datasets, achieving accuracies of 98%, 99%, and 98% on the three datasets.
In [10], a modified IDS was proposed based on ML, and the RF algorithm was used to select the input features. The IoTID20 dataset, after removing the nominal features, contains 79 features. The accuracy of the proposed model was 96.5%. The categorical values were converted into numeric values because the inputs of all algorithms must be numeric. Most researchers used binary classification; in this paper, multi-class classification with 9 or 10 categories is used.
III. IOT INTRUSION DETECTION SYSTEM
The intrusion detection process involves monitoring and analyzing the events in a computer system or network for indicators of intrusions (attempts to undermine the confidentiality, integrity, or availability of a computer system or network). Attackers who access systems over the Internet, authorized users who try to gain unauthorized access rights, and authorized users who abuse their powers are all sources of intrusion. This monitoring and analysis process is automated by software or hardware solutions.
Intrusion detection enables organizations to defend their systems against risks brought on by growing network connections and dependence on information systems. Security professionals should decide whether to utilize intrusion detection rather than decide which intrusion detection features and capabilities to deploy, given the severity and type of contemporary network security threats. IDSs are now widely recognized as crucial to any organization's security architecture. Even though IDSs have been shown to improve system security, many organizations still need justification to purchase an IDS [11].
A security system for an IoT environment needs to be created while considering security precautions. Data-oriented security mechanisms must be prioritized to stop hostile users from gaining unauthorized access to data sources. Focusing on data integrity and confidentiality is crucial because doing so significantly lowers the major security dangers in an IoT context. Conventional security procedures, which are designed using cryptographic techniques, are not often used in IoT environments because of the huge amount of data. Network problems will be lessened if threats are discovered quickly. Conventional security models take more time to evaluate such a large volume of data to identify the risks. A bad user just needs brief unauthorized access to data to obtain sensitive information, and changing that information might significantly negatively affect the user. By blocking access from unauthorized users, an IDS identifies intruders and safeguards the network and data. A central IDS that monitors the network and distant nodes and detects intrusions might be employed to decrease this complexity. As a result, the network administrator receives a notification to take action on the security vulnerabilities [12].
Three steps make up the IDS's functionality. The first, monitoring, phase is based on network or host sensors. The second phase is analysis, which involves feature extraction and pattern recognition. The last stage is detection, which involves finding network anomalies or intrusions. An IDS aids in quickly detecting vulnerabilities and in monitoring and analyzing data, services, and networks, as well as in traffic analysis via efficient network management. It enhances data and network secrecy and integrity while defending the network against threats. An IDS compiles and examines the system's data stream to find any malicious or dangerous activity. Traditional IDS design lacks real-time security for huge-volume data streams and primarily focuses on providing security for Internet management features.
The IDS operates primarily in the network layer of the IoT system [11]. The network layer of an IoT NIDS monitors Internet data transferred between the network's devices. It also serves as a second line of defense to detect and protect the network from threats from unauthorized users [12].
Typically, an IDS consists of sensors, which collect the data to be analyzed by IDS tools. These tools report abnormal activities such as attacks or unauthorized access. An intrusion can be defined as any assault that compromises the availability, confidentiality, or integrity of information. An IoT system's IDS should be able to analyze data packets and respond in real time at different IoT network levels utilizing different protocol stacks, and adjust to different threats [13].
IV. ENSEMBLE MACHINE LEARNING
Ensemble approaches may combine many algorithms instead of just one ML classification algorithm. The model's accuracy is enhanced by using this method. Ensemble approaches are supervised learning algorithms. Different training algorithms benefit from ensemble approaches, which increase the training accuracy in order to raise the testing accuracy. The ensemble approach may use different training algorithms to provide flexible training [14].
V. CLASSIFIERS
ML is a subtype of artificial intelligence that allows a computer to make decisions independently without human input, enabling computers to learn without being explicitly programmed. The fundamental objective of ML is to create computer software that can access data and use it in learning procedures.
Several kinds of ML exist [15]. Six ML methods (both linear and nonlinear) were extensively utilized for IDS data classification. Therefore, the background of ensemble ML and of the six methods (including DT, GB, and extra trees) should be understood so that they can be utilized for intrusion detection.
A. Decision Tree
The DT is a supervised learning technique that is used to handle classification and regression problems and is most often selected to do both. It is a tree-structured classifier in which each leaf node represents a classification outcome, and the interior nodes reflect the dataset's characteristics. A DT comprises two kinds of nodes: decision nodes and leaf nodes. In contrast to leaf nodes, which indicate the results of choices and have no further branches, decision nodes are used to make decisions and contain multiple branches. Each question in a DT has two possible answers, "yes" or "no," which enables the creation of branches. The tree can thus be split up into smaller trees (Figure 1) [16].
B. Random Forest
Many DT classifiers, each built using a random vector sampled independently from the input vector, make up the RF classifier. Each tree casts a unit vote for the dominant class to classify an input vector. Most DTs simulate scenarios that do not operate well on their own but may provide the foundation for other trees to work better. The Gini index, which measures an attribute's impurity with respect to the classes, is used as the attribute selection metric. Every time a tree is developed to its maximum depth, a combination of features and fresh training data is utilized, and these mature trees are not pruned. This ability is one of the RF classifier's main benefits over other DT approaches (Figure 2) [17], [18].
C. Naive Bayes
The Bayes theorem is the foundation of NB classifiers. It is based on conditional probability, which refers to the chance that an event (A) will occur given that another event (B) has already occurred. Essentially, the theorem permits a hypothesis to be revised whenever new data are presented. It is a simple and effective predictive modeling technique. The model may directly extract two types of probabilities from the training data: the probability of each class and the conditional probability for each class given each x value. The Bayes theorem may then be used to predict new data using this probability model [19].
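The equation referred to in the original text is not reproduced there; in its standard textbook form (not copied from the paper), Bayes' theorem for a class C and observed features reads:

```latex
P(C \mid x) = \frac{P(x \mid C)\,P(C)}{P(x)},
\qquad
P(C \mid x_1,\dots,x_n) \;\propto\; P(C)\prod_{i=1}^{n} P(x_i \mid C),
```

where the second expression is the naive Bayes form obtained under the usual assumption that the features are conditionally independent given the class.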
D. Logistic Regression
LR is used to predict a binary outcome (1 or 0, yes or no, true or false) from a collection of independent variables, in order to represent binary or categorical outcomes. When the outcome variable is categorical and the log of the odds is used as the dependent variable, LR can be seen as a particular instance of linear regression (Figure 3) [20], [18], [21].
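Written out, the log-odds formulation described above corresponds to the standard logistic model (standard form, with illustrative symbols: β for the coefficients and x for the feature vector):

```latex
\log\frac{p}{1-p} = \beta_0 + \beta^{\mathsf{T}} x
\quad\Longleftrightarrow\quad
p = \frac{1}{1 + e^{-(\beta_0 + \beta^{\mathsf{T}} x)}} .
```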
E. Gradient Boosting
Gradient-boosted machines (GBMs) are popular ML algorithms that are widely used in many different sectors and are one of the most effective ways to win Kaggle tournaments. While RF constructs an ensemble of deep, autonomous trees, GBMs construct an ensemble of shallow, weak, consecutive trees, with each tree learning from and improving upon the previous ones. These numerous weak consecutive trees come together to form a potent "committee," frequently challenging other algorithms [22].
F. Extra Tree
Extra trees and RF differ primarily in two ways. First, unlike RF, extra trees do not use the tree-bagging step to create a training subset for each tree; all DTs in the ensemble are trained using the whole training set. Second, extra trees randomly choose the splitting attribute and its corresponding cut value during the node-splitting stage. As a result of these two differences, the trees are less prone to overfitting and have improved performance [23].
VI. PROPOSED METHOD
This research used three datasets: UNSW-NB15, IoTID-20, and BotNetIoT. Six types of ML architectures were tested to determine their effectiveness on these datasets. Before the models were trained, the data underwent preprocessing. Subsequently, two of the datasets, namely UNSW-NB15 and BotNetIoT, were split into training and testing sets in a 70:30 ratio, while the IoTID-20 dataset was split into training and testing sets in an 80:20 ratio. The training data were then fed into the ML algorithms, which included LR, NB, DT, extra trees, RF, and gradient boosting. Finally, the individual predictions were combined by voting in the ensemble method. The effectiveness of the trained models was evaluated using the test data, as presented in Figure 4.
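As a minimal illustration of this pipeline, the following scikit-learn sketch combines the six listed classifiers with majority (hard) voting. The dataset file name, the label column, and the hyperparameters are placeholders, not the study's actual configuration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

# Placeholder file and label column; the real datasets are UNSW-NB15,
# IoTID-20 and BoTNeTIoT, preprocessed as described in the text.
df = pd.read_csv("ids_dataset.csv")
X, y = df.drop(columns=["label"]), df["label"]

# 70:30 split (an 80:20 split was used for IoTID-20).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

scaler = MinMaxScaler().fit(X_tr)            # scale features to [0, 1]
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("dt", DecisionTreeClassifier()),
        ("et", ExtraTreesClassifier(n_estimators=100)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("gb", GradientBoostingClassifier()),
    ],
    voting="hard",                           # majority voting, as described
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))
```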
A. Datasets
This paper used three IoT intrusion detection datasets. First, the UNSW-NB15 [24] dataset is a labeled network traffic dataset that contains more than two million records of network traffic captured from a realistic network environment, including benign and malicious traffic. The dataset includes 49 network features extracted from each network flow and labels that indicate whether the traffic is malicious or benign, making it a useful resource for evaluating the effectiveness of intrusion detection methods for IoT networks. Second, the IoTID-20 [25], [26] dataset is a publicly available labeled dataset that was specifically designed for IoT intrusion detection research. It contains network traffic data collected from a real-world IoT environment with 20 different types of IoT devices. The dataset includes benign and malicious traffic, with a total of 15 attack scenarios generated by using various network attacks, such as brute-force attacks, DoS attacks, and malware infections. The IoTID-20 dataset is useful for evaluating the effectiveness of various IDSs and ML algorithms in detecting IoT-specific attacks. Table I shows the attack types in each dataset. Third, this study uses the Malicious BotNet dataset (BotNetIoT), which consists of data files collected during the detection of IoT botnet attacks on a cybersecurity system. This dataset is publicly available on Kaggle [27].
To create this dataset, researchers used Wireshark software to capture network traffic data from nine IoT devices in a local network. The data were collected in packet capture (PCAP) file format, which is commonly used for network analysis. The PCAP file contains data packets from the network, including 23 statistical features for the central switch in the network.
The data in the BotNetIoT dataset include benign and malicious traffic, with the malicious traffic generated by various IoT-specific attacks, such as botnets and infiltration attacks. The dataset is useful for evaluating the effectiveness of IDSs in detecting IoT-specific attacks and assessing network health. It is also useful for training and testing ML algorithms for IoT intrusion detection. Table II shows the specifications of the three datasets.
B. Data Preprocessing
1) Data Cleaning: In this preprocessing step, the features that were not useful in the prediction process and had only one value were deleted. Moreover, rows that contain duplicate data were identified and deleted.
2) Handling Missing Values: The dataset has some missing values, which were substituted with the value of 0.
3) Normalization: Feature normalization is an essential step in data preprocessing, and data normalization is a practical approach to improving ML accuracy. The scaler transforms the data of the three datasets to a range between 0 and 1; it was applied before the data were fed into the proposed classification model.
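A short sketch of the preprocessing steps described above is given below. Since the text describes scaling to the range between 0 and 1, a min-max scaler is used here; whether the study in fact used min-max or standard (z-score) scaling is not fully clear from the text, so treat this as an assumption. Column and file names are placeholders.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("raw_ids_dataset.csv")            # placeholder file name

# 1) Data cleaning: drop constant (single-value) columns and duplicate rows.
constant_cols = [c for c in df.columns if df[c].nunique(dropna=False) <= 1]
df = df.drop(columns=constant_cols).drop_duplicates()

# 2) Handling missing values: substitute missing entries with 0.
df = df.fillna(0)

# Encode categorical columns as numeric codes (all inputs must be numeric).
for col in df.select_dtypes(include="object").columns:
    df[col] = df[col].astype("category").cat.codes

# 3) Normalization: scale every feature (except the assumed label column) to [0, 1].
features = df.columns.drop("label")
df[features] = MinMaxScaler().fit_transform(df[features])
```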
C. Ensemble Machine Learning Approach to Detecting IoT Intrusion
A voting-based ensemble classification technique is used. Several voting procedures exist, such as hard voting (voting based on a majority) and soft voting. Soft voting may be performed by using the average of probabilities, the product of probabilities, the lowest or maximum of probabilities, or none of them.
In this work, hard voting (voting based on a majority) was used to assess the voting mechanisms.
VII. EXPERIMENTAL RESULTS
In this part, the confusion matrix-based findings for multi-class classification are provided. The model's performance was assessed based on accuracy, precision, recall, and F1 score. Precision is calculated by dividing the number of true positive predictions by the total number of positive class values predicted, whereas recall is obtained by dividing the number of true positive predictions by the total number of positive class values in the test data. The F1 score is the weighted average of recall and precision. Accuracy is determined by dividing the number of correct predictions (true positive and true negative predictions) by the total number of predictions. Poor recall is reflected by a large number of false negative predictions, and low precision is indicated by a high proportion of false positive predictions. A high F1 score indicates that precision and recall are in balance, with few false negatives and false positives. These measures were calculated using the standard equations given in [28]-[31], where TP is the true positive, TN is the true negative, FP is the false positive, and FN is the false negative count.
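The equations themselves did not survive extraction; in their standard form, consistent with the description above, they read:

```latex
\begin{aligned}
\mathrm{Accuracy} &= \frac{TP + TN}{TP + TN + FP + FN}, &
\mathrm{Precision} &= \frac{TP}{TP + FP},\\[4pt]
\mathrm{Recall} &= \frac{TP}{TP + FN}, &
F_1 &= \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} .
\end{aligned}
```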
VIII. CONCLUSION
Ensemble techniques mix several learning algorithms to achieve prediction performance that is better than that of any one of the component learning algorithms alone. Empirically, ensemble ML provides more accurate findings when the models exhibit considerable variation; as a result, many ensemble approaches encourage variation among the models they combine. In this research, three intrusion detection datasets for the IoT (IoTID20, UNSW-NB15, and BoTNeTIoT-L01-v2) were employed to evaluate the performance of the ensemble classification method. The results indicate a preference for the ensemble classification method over the other algorithms, with accuracy rates of 88.41% on the IoTID20 dataset, 98.52% on the UNSW-NB15 dataset, and 91.03% on the BoTNeTIoT-L01-v2 dataset. In conclusion, ML approaches show great potential for IoT IDSs. They can provide important solutions with their anomaly-based approach and ability to detect unknown attacks. As a future research direction, applying several feature selection methods can be recommended; hybrid feature selection methods can also be used.
Fig. 4. Applying ensemble ML algorithms to different datasets.
TABLE I. Types of attacks in each dataset.
TABLE IV. Performance metrics on the UNSW-NB15 dataset.
TABLE V. Performance metrics on the BoTNeTIoT dataset.
Table V shows the preference for the extra trees algorithm over the other algorithms; its accuracy was 91.03%. In this study, the proposed method was also compared with methods from several recent studies. Table VI provides a comparison of the overall performance for multi-class classification on the UNSW-NB15 dataset in terms of accuracy, and Table VII compares studies conducted on the IoTID20 dataset at the subcategory level in terms of accuracy. The proposed approach outperformed the other methods in terms of accuracy.
TABLE VI. General comparison of multi-class classification accuracy measures for the UNSW-NB15 dataset.
TABLE VII. General comparison of subcategories with precision measures on the IoTID20 dataset.
| 4,342.8 | 2023-12-30T00:00:00.000 | ["Computer Science", "Engineering"] |
Update (1.2) to ANDURIL and ANDURYL: Performance improvements and a graphical user interface
This is an update to PII: S2352711018300608 and PII: S2352711019302419. In this paper, we present three main improvements of ANDURIL and its Python version ANDURYL. First, the MATLAB version ANDURIL is brought to the Python version standard by implementing (i) user-defined quantiles and (ii) the possibility to deal with missing values. Second, the computational engines of both ANDURIL and ANDURYL were significantly improved, lowering calculation time and further improving accuracy. Finally, a standalone graphical user interface is presented, which we believe will make the software more accessible to practitioners of Cooke's method.
Fig. 1. Illustration of decision maker interpolation.
1. ANDURIL is brought to the Python version standard by implementing (i) user-defined quantiles and (ii) the possibility to deal with missing values. These features will not be discussed further; the reader is referred to [3] for an explanation of the main features now also available in AI v1.2.
2. The new code also led to improved accuracy of both AI and AY; that is, both solutions are closer to EXCALIBUR (CC). The differences between CC and AI/AY for the 7 studies where differences were observed are shown in Table 2. This will be elaborated further below.
3. A standalone graphical user interface of ANDURYL is presented. A screenshot of the GUI is shown in Fig. 2.
ANDURYL and ANDURIL code improvement
The main improvement in speed and accuracy is the result of a different implementation for calculating the decision maker's (DM) cumulative distribution function (CDF). In versions 1.0 and 1.1, the DM's CDF was calculated by integrating the probability density function (PDF) of the weighted DM numerically (quadrature method) through an anonymous function. Solving this integral is numerically expensive, and when the probability density of one or more experts is very concentrated in a range relative to that of other experts, parts of the PDF were skipped in the discretization used in the numerical integration.
In the new versions (AY v1.2 and AI v1.2), the old implementation of the integral is replaced by an interpolation of the CDF. As long as the PDF between the given quantiles is uniform (or log-uniform), this gives the same results as solving the integral, but much more quickly and without inaccuracies due to the discretization of the integral. Fig. 1 illustrates the interpolation process for the decision maker.
Note that the DM quantiles (''DM full'' in the figure) are determined by interpolating each of the (two in this case) experts' answers (following the dashed lines). This results in the full detailed CDF of the decision maker. This can subsequently be interpolated at the percentiles of interest (which is EXCALIBUR's output). Note that the interpolation is not carried out over the quantile direction.
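The following NumPy sketch illustrates the interpolation idea described above; it is not the actual ANDURYL code. The expert values, weights, quantile levels, and the piecewise-linear (uniform-between-quantiles) treatment are illustrative assumptions.

```python
import numpy as np

def dm_quantiles(expert_values, weights, quantile_levels, dm_levels):
    """Weighted decision-maker quantiles via CDF interpolation (sketch).

    expert_values   : list of arrays, each expert's assessed values at the
                      elicited quantile levels (padded with intrinsic-range bounds).
    weights         : normalized expert weights.
    quantile_levels : cumulative probabilities matching expert_values,
                      e.g. [0.0, 0.05, 0.5, 0.95, 1.0].
    dm_levels       : percentiles of interest for the DM, e.g. [0.05, 0.5, 0.95].
    """
    # Evaluate every expert's piecewise-linear CDF on the union of all
    # assessed values, then combine the CDFs with the weights.
    grid = np.unique(np.concatenate(expert_values))
    dm_cdf = np.zeros_like(grid, dtype=float)
    for w, vals in zip(weights, expert_values):
        dm_cdf += w * np.interp(grid, vals, quantile_levels)
    # Interpolate the resulting full DM CDF at the percentiles of interest.
    return np.interp(dm_levels, dm_cdf, grid)

# Two experts, 5%-50%-95% assessments padded with intrinsic-range bounds.
levels = [0.0, 0.05, 0.5, 0.95, 1.0]
experts = [np.array([0.0, 1.0, 3.0, 6.0, 10.0]),
           np.array([0.0, 2.0, 5.0, 8.0, 10.0])]
print(dm_quantiles(experts, [0.6, 0.4], levels, [0.05, 0.5, 0.95]))
```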
ANDURYL GUI
The main improvement for the Python version is the graphical user interface. This interface, programmed with the Python module PyQt5, is compiled with PyInstaller (for Windows), such that it is a stand-alone executable. This makes ANDURYL accessible to non-Python users. The layout of the user interface consists of 4 overviews, for the experts, items, assessments and results, as shown in Fig. 2.
The following list gives an overview of the functionalities that the stand-alone GUI offers: • Assessments per expert or item can be plotted as a PDF, CDF, survival function or range. The CDF option is shown in Fig. 2 on the foreground.
• Because of the improvements in computational performance, it is now less demanding to do a robustness analysis for excluding multiple experts or items. The results of the robustness analysis can be shown in box plots.
• The program has options for saving the project in EXCALIBUR format or a more common JSON format.
• Separate DM's results, such as the full CDFs, can be exported or copied to clipboard.
• The AY code is separated between calculation and user interface functionalities so that the Python-module can also be used from a script or Jupyter notebook. For research purposes this is a useful functionality.
• The fact that AY is still significantly faster than AI, as shown in Table 1, is due to differences in implementation. In AI several expensive operations are re-calculated for different iterations. In AY the amount of data that is re-calculated is minimized.
Comparing with previous studies
In [4], 33 post-2006 studies using Cooke's classical method are presented using CC. We use these data to compare the output of AY and AI with CC, with the MATLAB implementation AI of the v1.0 paper [2], and with the Python implementation of [3].
The differences are smaller compared to the results from the last code version. For two studies, ''Hemophilia'' and ''Ice sheets'' the differences are still significant. For four other studies the results seem to be due to rounding errors. Of the remaining 26 studies, the majority have equal results. Table 2 shows the differences for the studies where differences are still observed.
Conclusions
The Python module named ANDURYL (AY) has been extended with a graphical user interface and is available as stand-alone executable. The MATLAB toolbox named ANDURIL (AI) for combining expert judgments applying Cooke's method has been further extended by adding functionalities for user defined quantiles and handling missing values. The stand-alone GUI enables practitioners and researchers that have no Python or MATLAB experience to apply Cooke's method with ANDURYL. For users that are more familiar with programming, the MATLAB toolbox and Python GUI are a means to perform or analyze expert elicitations in a reproducible way. The improved speed and accuracy contribute to this cause. Both codes are open source to encourage usage and further development.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
| 1,362.2 | 2020-07-01T00:00:00.000 | ["Computer Science"] |
Automatic Parking Path Planning and Tracking Control Research for Intelligent Vehicles
: As a key technology for intelligent vehicles, automatic parking is becoming an increasingly popular area of research. Automatic parking technology enables safe and quick parking operations without a driver, improving driving comfort while greatly reducing the probability of parking accidents. An automatic parking path planning and tracking control method is proposed in this paper to resolve the following issues present in existing automatic parking systems: a low degree of automation in vehicle control, a lack of conformity between segmented path planning and real vehicle motion models, and low parking success rates due to poor path tracking. To this end, this paper innovatively proposes a preview correction that can be applied to parking path planning, detecting curvature outliers in the parking path through the preview algorithm and correcting them in advance to obtain a reasonable, optimized parking path. Meanwhile, a dual sliding mode variable structure control algorithm is used to formulate path tracking control strategies to improve the path tracking control effect and the degree of vehicle control automation. Based on the above algorithms, an automatic parking system was developed and a real vehicle test was completed, thus exploring a highly intelligent automatic parking technology roadmap. This paper provides two key aspects of system solutions for an automatic parking system, i
Introduction
The increase of car ownership in urban areas and the challenges of traffic congestion and insufficient parking space are great concerns for urban planners and managers. Consequently, parking spaces in most urban centers are becoming smaller to compensate for the shortage of parking space, and this is usually associated with parking difficulties and accidents such as scuffing and collisions [1][2][3]. Facing these challenges, automatic parking technology for intelligent vehicles has received extensive attention from both the automotive industry and research institutions.
Automatic parking technology refers to the parking process that completes parking operations safely and quickly without a driver; it can effectively improve driving comfort while greatly reducing the probability of accidents during parking. Moreover, the adoption of automatic parking technology can promote the development and deployment of autonomous driving and intelligent vehicles [4][5][6][7][8][9][10].
A major component of automatic parking technology is parking path planning and tracking control, which significantly affects the required parking space size and the parking success rate. Parking path planning is widely investigated in previous research, with most studies based on the three-segment path, which is composed of two segments of arcs with constant curvature. In order to ensure the effectiveness of the research method introduced in this article on vehicle parking control, some assumptions are made on the parking problem in combination with actual parking restriction requirements.
Co-Simulation Platform
(1) Weather conditions are not heavy rain or heavy snow; (2) Parking slot is flat and the road slope does not exceed 10%; (3) Parking slot length ≥ vehicle length + 0.8 m, parking slot width ≥ vehicle width + 0.3 m; (4) Automatic parking speed does not exceed 3 km/h; (5) The distance between the obstacle on the opposite side of the parking slot and the vehicle is not less than 1.0 m. In order to ensure the effectiveness of the research method introduced in this article on vehicle parking control, some assumptions are made on the parking research in combination with the with actual parking restriction requirements.
Modeling and Parking Path Planning
(1) Weather conditions are not heavy rain or heavy snow; (2) Parking slot is flat and the road slope does not exceed 10%; (3) Parking slot length ≥ vehicle length + 0.8 m, parking slot width ≥ vehicle width + 0.3 m; (4) Automatic parking speed does not exceed 3 km/h; (5) The distance between the obstacle on the opposite side of the parking slot and the vehicle is not less than 1.0 m.
Establishment of the Kinematic Model
Vehicle parking is a low-speed movement (usually below 5 km/h), and when the wheels roll at low speed they do not slide laterally. Thus, the lateral force can be neglected and there is no wheel side slip angle. In this application, the limitations considered were therefore the response speed and the control accuracy of the associated actuators. The vehicle kinematic model can accordingly be simplified, that is, the parking kinematics model is established from the vehicle kinematics model for the parking movement. This paper establishes the following simplified model for low-speed parking.
In this chapter, the kinematics model of the vehicle is established, and the path planning method for the automatic parking system is studied based on this model. As shown in Figure 2, (x_r, y_r) denote the midpoint coordinates of the rear axle of the vehicle, (x_f, y_f) the midpoint coordinates of the front axle, W the width of the vehicle, H the width of the road, L_1 and L_2 the width and length of the target parking space, respectively, h the distance between the midpoint of the rear axle and a lateral obstacle, S the distance between the midpoint of the rear axle and the end of the obstacle in front of the target parking space, θ the driving direction angle of the vehicle, and φ the Ackerman angle; the clockwise direction is taken as positive.
Assuming that the lateral velocity of the vehicle rear wheel (perpendicular to the wheel direction) is zero, the vehicle movement equation in the lateral direction can be obtained as follows:

$\dot{x}_r \sin\theta - \dot{y}_r \cos\theta = 0$    (1)

According to the Ackerman steering geometrical principle, the Ackerman angle φ in the process of steering is approximately equal to the steering angle of the midpoint of the vehicle front axle, and this angle is approximately in linear proportion to the steering wheel angle γ, where K refers to the ratio constant.

The midpoint of the rear axle is taken as the origin, and the coordinate system is established as shown in Figure 2. The coordinates of the front axle midpoint can be expressed in terms of the rear axle midpoint; by integration and from the positional relationship between the midpoints of the front and rear axles, the speed relationship between the two midpoints is obtained, and the vehicle movement equations follow. Substituting Equation (2) into Equation (6), and then Equations (2) and (7) into Equation (5), the coordinates of the rear axle midpoint and the vehicle driving direction angle at time t can be expressed in discrete form, where ∆t refers to the sampling time.
Based on the geometric relation between the vehicle parameters and the coordinate positions, the trajectory equations of the four vehicle wheels and envelope points can be obtained. Thus, the actual trajectory of the vehicle during the whole parking process, from the starting point to the terminal point, can be calculated.
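For concreteness, the discrete update of the rear axle midpoint and heading described above can be sketched in MATLAB as follows. This is a minimal illustration assuming the standard rear-axle kinematics implied by Equation (1) and the Ackerman relation; the wheelbase, speed, and steering values below are illustrative, not the paper's parameters.

L  = 2.6;          % wheelbase [m] (illustrative value)
dt = 0.05;         % sampling time [s]
N  = 200;
x_r = 0; y_r = 0; theta = 0;          % rear-axle midpoint pose
traj = zeros(N, 3);
for k = 1:N
    v   = -0.8;                       % reversing speed [m/s] (illustrative)
    phi = 0.4;                        % Ackerman steering angle [rad] (illustrative)
    x_r   = x_r   + v*cos(theta)*dt;  % discrete update of the rear-axle midpoint
    y_r   = y_r   + v*sin(theta)*dt;
    theta = theta + v*tan(phi)/L*dt;  % heading update from the Ackerman relation
    traj(k, :) = [x_r, y_r, theta];
end
plot(traj(:,1), traj(:,2)); axis equal; xlabel('x [m]'); ylabel('y [m]');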
Analysis of Parking Space Constraints
The path curve should not only satisfy the requirements of the geometric characteristics of the movement of the vehicle but also ensure that the process does not result in an accident. Therefore, it is necessary to establish the corresponding constraints, and plan the appropriate parking path curve so that the parking process is safe and accurate. This section analyzes the possible collision points in the parking process.
According to the planned parking path, namely the rear parking trajectory function, the theoretical curvature ρ of the vehicle at an arbitrary point in the parking process can be given, together with the relation for the Ackerman angle, where L refers to the wheel base, R represents the radius of the turning circle, and R = 1/ρ. From these equations, the Ackerman angle of the vehicle at an arbitrary point is obtained. Based on the coordinates of the rear axle midpoint and its geometric relationship to the vehicle body, the trajectories of the corner points A, B, C and D can be obtained, where L_f refers to the length of the front overhang and L_r represents the length of the rear overhang. According to the established parking kinematics model, there are four positions at which danger exists in the process of parallel parking, as shown in Figure 3.
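The equations referred to in the preceding paragraph did not survive extraction; one standard reconstruction, offered here as an assumption and assuming the planned path is given as a function y = f(x), is

\[
  \rho(x) = \frac{\lvert f''(x)\rvert}{\bigl(1 + f'(x)^{2}\bigr)^{3/2}}, \qquad
  R = \frac{1}{\rho}, \qquad
  \phi = \arctan\frac{L}{R} = \arctan\bigl(L\,\rho\bigr),
\]

where L is the wheel base. The corner-point trajectories of A, B, C and D then follow from the rear axle midpoint pose by planar offsets using the vehicle width W and the overhang lengths L_f and L_r.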
To ensure that no collision occurs during the parking process, the trajectory function has to satisfy the corresponding non-collision conditions at these four positions. The analysis of the kinematics constraints in the process of parking ensures the safety of the vehicle during the parking operation, and lays a foundation for the trajectory planning and path tracking in the parking process.
Parallel Parking Path Planning
According to the analysis of space constraints, this section uses M language to simulate the parallel parking path of vehicles in MATLAB, as shown in Figure 4. Refer to Table 1 for vehicle parameters.
The content of the program is as follows: (1) set parameters such as road color and indicator signs, and set the parameters of the parking scene: the slot length is set to 6.5 m, the slot width to 2.5 m, and the side distance to 1.2 m; (2) define the initial position of the vehicle and the location of the target parking space; (3) for the circular-motion segments, the turning radius is defined as the minimum turning radius of the vehicle, i.e., 5.8 m; (4) the driving distance of the vehicle in a straight line is about 4.5 m. A minimal sketch of such a segmented path construction is given below.
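The sketch below, written in M language as in the paper's simulation but with illustrative geometry, serves only to show the constant-curvature segments and the curvature jumps at their junctions; R = 5.8 m and the roughly 4.5 m straight segment follow the program settings above, while the arc angles are assumptions.

R  = 5.8;  ds = 0.01;               % minimum turning radius [m], arc-length step [m]
s1 = 0:ds:(R*pi/6);                 % first arc (30 deg heading change, assumed)
s2 = 0:ds:4.5;                      % straight segment of about 4.5 m
s3 = 0:ds:(R*pi/6);                 % second arc with opposite steering
kappa = [ -ones(size(s1))/R, zeros(size(s2)), ones(size(s3))/R ];   % piecewise-constant curvature
theta = cumsum(kappa)*ds;           % heading by integrating curvature along arc length
x = cumsum(cos(theta))*ds;          % position by integrating heading
y = cumsum(sin(theta))*ds;
subplot(2,1,1); plot(x, y); axis equal; title('segmented reference path (illustrative)');
subplot(2,1,2); plot((1:numel(kappa))*ds, kappa);
xlabel('arc length [m]'); ylabel('curvature [1/m]');   % note the jumps at the junctions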
The relevant parameters of the vehicle are shown in Table 1. As shown in Figure 5, the parking path meets the requirements of the parallel parking space constraints, that is, there is no collision with surrounding obstacles and a certain safe distance is maintained, which verifies the feasibility of the planned path. However, the path is composed of arcs with constant curvature and a straight line segment between the two arcs, so the curvature at the connecting points of arc and line is discontinuous. In the next section, the preview theory is introduced to modify the curvature outliers of the path and optimize the parking path.
Optimization of Parking Path Curvature
Based on the preview theory, the curvature outliers in the path, i.e., the abrupt change points of the steering wheel angle, can be found. Within the advanced and delayed time τ around the steering wheel angle step input, the curves before and after the curvature outlier are connected with a sinusoidal curve that conforms to the changing law of the steering wheel angle. In this way, the curvature at the outlier is corrected from a step signal to a gradient signal [28][29][30].
In order to correct the curvature of the path, it is necessary to obtain the coordinates of the curvature outliers in the path in advance and to keep the driving speed low and stable during parking. Therefore, the preview distance can be set to v_0 τ. If there is no curvature outlier within the preview distance, no path correction is carried out. If there is a curvature outlier within the preview distance, the path curvature is corrected with the correction algorithm [31]. As shown in Figure 1, the curvature ρ_0 refers to the input data at time t_0, which is predicted before time τ. It is assumed that there is a step signal of curvature from ρ_0 to ρ within the preview distance. The curvature of the planned parking path is then corrected immediately, that is, the steering wheel angle input is corrected in timing and in rate of change. The effect after correction is shown in Figure 3 by the dotted line.
The correction curve is part of a sinusoid. Based on the diagram in Figure 6, it is assumed that the expression of the correction curve is a sine function of time with angular frequency ω. The period of the sine function is T = 4τ, hence ω = 2π/T = π/(2τ); substituting ω into Equation (1), the corrected curvature expression is obtained. Substituting the points (t_0, ρ_0) and (t_0 + 2τ, ρ) on the curve into Equation (2), the parameters A and B can be obtained by solving the resulting equations. The expression of the curvature correction curve is then derived by substituting A and B into Equation (9). As shown in Figure 7, when τ = 1.0 s, the steering wheel angle curves before and after the preview correction are compared. The blue dotted line indicates the case before the preview correction, and the red solid line indicates the case after the preview correction. As the results show, the corrected steering wheel angle changes uniformly, without step changes.
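A hedged reconstruction of the correction curve follows (the equations themselves did not survive extraction; the phase convention is an assumption):

\[
  \rho(t) = A\,\sin\!\Bigl(\omega\,(t-t_0) - \tfrac{\pi}{2}\Bigr) + B,
  \qquad \omega = \frac{\pi}{2\tau}.
\]

Imposing $\rho(t_0)=\rho_0$ and $\rho(t_0+2\tau)=\rho$ gives

\[
  A = \frac{\rho - \rho_0}{2}, \qquad B = \frac{\rho + \rho_0}{2},
\]

so the curvature rises smoothly from ρ_0 to ρ over the interval 2τ instead of jumping as a step.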
Establishment of the Path Tracking Model
In this section, the simplified parking path tracking model of the vehicle is established, and the path planning and tracking for the automatic parking system are studied based on this model. As shown in Figure 8, the simplified path tracking model for parking is defined in the Cartesian coordinate system: (x_0, y_0) and θ_0 are the current midpoint coordinates of the rear axle and the driving direction angle of the vehicle, respectively, while (x_d, y_d) and θ_d are the ideal midpoint coordinates of the rear axle and the ideal driving direction angle. The simplified parking path tracking model can be described by the following nonlinear differential equations:

$\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = w$    (11)
In Equation (11), v refers to the speed during parking, w represents the vehicle angular velocity during parking, and θ is the angle between the driving direction of the vehicle and the x-axis; v and w are the input variables controlled in the kinematic model.
Based on this analysis, the kinematic equation has two degrees of freedom, while the vehicle kinematic model has three output variables. The number of inputs is smaller than the number of outputs, so the kinematic model is a typical underactuated system. In the path tracking process, the control law can be designed to track the target coordinates [x, y] during parking and, at the same time, to converge quickly and realize tracking of the vehicle's driving direction angle θ [32,33].
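The error tracking equations used later in the position control design (referred to below as Equations (13) and (14)) did not survive extraction; a plausible standard form, stated here only as an assumption, is

\[
  x_e = x_d - x,\qquad y_e = y_d - y,
\]
\[
  \dot{x}_e = \dot{x}_d - v\cos\theta,\qquad \dot{y}_e = \dot{y}_d - v\sin\theta ,
\]

where the inputs v and θ (through w) are used to drive the position errors to zero.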
Design of Closed-Loop System of Tracking Controller
In engineering applications, the more complex the control algorithm used in the controller, the larger the amount of calculation and the worse the real-time performance. Automatic parking needs to accurately track the ideal path in a restricted area, so a high level of real-time performance is required; usually, the planning of an upper-level decision has to be completed within a 60 ms system cycle. In order to reduce the complexity of the control algorithm and meet the requirements of engineering applications, the tracking controller of the kinematic model is transformed into a cascade system including vehicle position control and vehicle body attitude control, as shown in Figure 9, where the inner loop is the vehicle body attitude subsystem and the outer loop is the vehicle position subsystem. When the inner loop receives the command signal θ_d generated by the outer loop, tracking of θ_d is achieved through the sliding mode control law (θ_d refers to the ideal driving direction angle).
Design of Vehicle Position Control Law
In the parking process, the tracking of the vehicle position relative to the target position is realized through the vehicle position control law by controlling the speed v. The error tracking equation is expressed by Equations (13) and (14). For Equations (13) and (14), the corresponding sliding mode function is taken and, citing the global asymptotic stability theorem of dynamic systems [34], the following control law is designed.
The θ obtained from Equation (15) is the driving direction angle required by the position control law. Tracking control of the ideal path can be achieved only when θ and θ_d are equal. In fact, θ and θ_d cannot be equal in the initial stage of control, which can easily cause instability of the closed-loop control system. Therefore, the θ obtained from Equation (15) is taken as an ideal value: taking θ_d = arctan(u_2/u_1), the actual vehicle position control law is expressed accordingly.
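The explicit expression of the actual position control law did not survive extraction. One consistent reading of θ_d = arctan(u_2/u_1), stated as an assumption, is that u_1 and u_2 are the virtual velocity components produced by the outer-loop sliding mode law along the x and y directions, in which case the speed command and ideal heading are

\[
  v = \sqrt{u_1^{2} + u_2^{2}}, \qquad \theta_d = \arctan\frac{u_2}{u_1},
\]

so that $v\cos\theta_d = u_1$ and $v\sin\theta_d = u_2$.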
Design of Vehicle Attitude Control Law
It can be seen from the above analysis that the deviation of the actual driving direction angle θ from the ideal driving direction angle θ d will cause the instability of the closed-loop system in the initial stage of control. Hence, it is necessary to design the vehicle body attitude control law to make θ track θ d as soon as possible.
Let θ_e = θ − θ_d, where θ_e refers to the angle deviation, and take the corresponding sliding mode function. Compared with the traditional exponential approach control law (Equation (21)), an approach law combining exponential and power terms (Equation (22)) is designed in this paper.
In Equation (22), ε, a, b and k are positive design parameters satisfying 0 < ε < 1, 0 < a < 1, b > 1, and k > 0. Compared with the control law (21), the ε|s|^a term in the control law (22) improves the smoothness of the approach, while the k|s|^b term ensures that the approach speed is faster far away from the sliding mode surface and smaller near the sliding mode surface, thereby reducing chattering. When the trajectory of the vehicle body attitude system is outside the sliding mode surface, taking the Lyapunov function V = s²/2, differentiating it, and substituting Formula (22) shows that the derivative is negative. Therefore, the actual vehicle body attitude control law can be obtained.
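The numbered reaching laws themselves did not survive extraction; the following reconstruction, consistent with the ε|s|^a and k|s|^b terms described above, is offered as an assumption:

\[
  \text{(21)}\quad \dot{s} = -\varepsilon\,\mathrm{sgn}(s) - k\,s
  \qquad\text{(traditional exponential reaching law)},
\]
\[
  \text{(22)}\quad \dot{s} = -\varepsilon\,\lvert s\rvert^{a}\,\mathrm{sgn}(s) - k\,\lvert s\rvert^{b}\,\mathrm{sgn}(s)
  \qquad\text{(combined exponential/power reaching law)}.
\]

With $V = s^{2}/2$, $\dot{V} = s\dot{s} = -\varepsilon\lvert s\rvert^{a+1} - k\lvert s\rvert^{b+1} < 0$ for $s \neq 0$, which is the negativity used in the stability argument.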
Global Stability Analysis of Closed Loop Controller
Due to the deviation between the actual driving direction angle θ and the ideal driving direction angle θ_d in the initial stage of control, the kinematic model (12) can be rewritten with θ in place of θ_d, and Equation (18) can be rewritten accordingly.
The proof that x_e → 0 as t → ∞ proceeds as follows: select a Lyapunov function, differentiate it along the system trajectories, and write the derivative in the form of Equation (26). From the trigonometric identity cos θ − cos θ_d = −2 sin((θ + θ_d)/2) sin((θ − θ_d)/2), when θ − θ_d converges, |cos θ − cos θ_d| converges as well, which is followed by the convergence of n_2.
To sum up, the double closed-loop sliding mode variable structure system designed in this paper is globally asymptotically stable. The path tracking control law designed in this paper, that is, the input control of the vehicle kinematics model, is thereby fully specified. Based on the above control law, the path tracking control model is built in MATLAB/Simulink, as shown in Figure 10. The control parameters of the control law are selected as p = 2.9, g_1 = 10, q = 2.9, g_2 = 10, k_3 = 4, η_3 = 0.5, a = 0.5, b = 5.
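As an illustration of the chattering-reduction argument, the s-dynamics of the two reaching laws reconstructed above can be integrated numerically in MATLAB. This is only a sketch under the assumed law forms, using the quoted a = 0.5 and b = 5 together with illustrative ε and k values; it is not the paper's simulation.

eps_ = 0.5; k = 4; a = 0.5; b = 5;   % a, b from the text; eps_ and k illustrative
dt = 1e-3; t = 0:dt:2;
s1 = zeros(size(t)); s2 = zeros(size(t));
s1(1) = 2; s2(1) = 2;                % same initial distance from the sliding surface
for i = 1:numel(t)-1
    % classical exponential reaching law: s_dot = -eps*sign(s) - k*s
    s1(i+1) = s1(i) + dt*(-eps_*sign(s1(i)) - k*s1(i));
    % combined law: s_dot = -eps*|s|^a*sign(s) - k*|s|^b*sign(s)
    s2(i+1) = s2(i) + dt*(-eps_*abs(s2(i))^a*sign(s2(i)) - k*abs(s2(i))^b*sign(s2(i)));
end
plot(t, s1, '--', t, s2); legend('exponential', 'exponential + power');
xlabel('t [s]'); ylabel('s');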
Co-Simulation Platform
Build Test Vehicle Model in CarSim
We set the parameters in CarSim according to Table 1 and use the default values for the others. Figure 11 shows the basic parameter setting interface of the CarSim test vehicle model.
Path Planning and Tracking Control Model Design
We set the input and output parameters of the CarSim module: the input is the steering wheel angle, and the output is the coordinate of the vehicle front axle midpoint. In the FCN module, the vehicle coordinate system in CarSim is converted to the global coordinate system. The real-time coordinates of the planned path are recorded and the ideal coordinates are output in real time; these are compared with the actual coordinates of the model vehicle in CarSim. Finally, the steering wheel angle is controlled by the sliding mode algorithm so that the model vehicle travels along the planned path. Figure 12 shows the parking path planning and tracking control model built on the CarSim and MATLAB/Simulink co-simulation platform.
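The conversion from the CarSim vehicle frame to the global frame performed in the FCN module is, in essence, a planar rotation plus a translation. A minimal MATLAB sketch follows; the function name, argument names, and the example values are assumptions, not CarSim signal names.

function p_glob = veh2glob(p_veh, yaw, origin)
    % p_veh  : [x; y] point expressed in the vehicle coordinate system
    % yaw    : vehicle yaw angle [rad] measured in the global frame
    % origin : [X0; Y0] position of the vehicle-frame origin in the global frame
    Rot = [cos(yaw), -sin(yaw);
           sin(yaw),  cos(yaw)];
    p_glob = Rot * p_veh + origin;   % rotate into the global frame, then translate
end
% Example: a point 1.3 m ahead on the vehicle x-axis, vehicle at (10, 4) with yaw 0.2 rad
% veh2glob([1.3; 0], 0.2, [10; 4])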
As shown in Figure 13a, path tracking is evaluated with the speed set to 1 m/s and a planned-path turning radius of 37 m. The analysis of the data shows that there is almost no deviation from the planned path, proving that the model can track the parking path ideally at a low speed and a large turning radius. As shown in Figure 13b, path tracking is evaluated with the speed set to 1 m/s and a planned-path turning radius of 12 m. Although the tracking trajectory slightly deviates from the planned path, it can still effectively track the planned parking path, demonstrating the effectiveness of the model at a low speed and a medium turning radius.
As shown in Figure 13c, path tracking is evaluated with the speed set to 1 m/s and a planned-path turning radius of 6 m. The trajectory of the model vehicle clearly deviates from the planned parking path, showing that the model fails to track the parking path at a small turning radius.
In the process of parking, the steering wheel in the turning segments is mostly close to the limit position, that is, the vehicle drives at the minimum turning radius. Therefore, the control model has to be optimized to address the two main issues encountered in the model: (1) the coordinate output of CarSim is the coordinate of the vehicle front axle midpoint, so the front axle midpoint was selected as the control target for tracking the planned parking path; however, compared with the front axle midpoint, the speed deviation at the rear axle midpoint is smaller and better reflects the real trajectory of the vehicle, so selecting the front axle midpoint produces a certain error. (2) In actual parking operation, the steering wheel angle, speed, deceleration and other factors affect each other; in the CarSim model, the speed is set to a constant value and only the steering wheel angle is controlled in the path tracking process, which lacks authenticity and rationality.
Optimization of Path Planning and Tracking Control Model
In view of the problems existing in the path planning and tracking control model, the following optimizations are carried out [35]: (i) by adopting coordinate conversion, the rear axle midpoint of the model vehicle is set as the reference point for tracking the planned parking path, thereby reducing the tracking error; (ii) the Stateflow module is used to optimize the model and is embedded into the built co-simulation platform, and the sliding mode variable structure control algorithm is used to control the three input variables of CarSim: steering wheel angle, speed, and deceleration. The optimized path planning and tracking control model consists of three main functional modules, detailed below.
The first module outputs the rear axle midpoint coordinates, as shown in Figure 14. Using three output variables of CarSim, including left rear wheel speed, right rear wheel speed, and vehicle yaw rate, the coordinates of rear axle midpoint are derived based on unit time of ∆t. The memory module is used to output the coordinates of the rear axle midpoint. In the module, the input signal includes steering wheel angle, throttle depth, and brake pressure. The output signal includes vehicle speed, driving direction angle, rear-wheel speed, steering wheel angle, front axle midpoint coordinates, and vehicle yaw angle signals.
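A minimal MATLAB sketch of the computation performed by this first module is given below, assuming simple Euler integration over the sampling time; the signal names and the constant test signals are illustrative assumptions, not CarSim I/O names.

dt = 0.01; N = 500;                         % sampling time [s], number of samples
vL = 0.9*ones(1, N); vR = 1.1*ones(1, N);   % left/right rear wheel speeds [m/s]
r  = 0.05*ones(1, N);                       % vehicle yaw rate [rad/s]
x = 0; y = 0; psi = 0;                      % memory values of the rear-axle midpoint pose
xy = zeros(N, 2);
for k = 1:N
    v   = (vL(k) + vR(k)) / 2;              % rear-axle midpoint speed from wheel speeds
    psi = psi + r(k)*dt;                    % heading from the yaw rate
    x   = x + v*cos(psi)*dt;                % integrate the midpoint coordinates
    y   = y + v*sin(psi)*dt;
    xy(k, :) = [x, y];
end
plot(xy(:,1), xy(:,2)); axis equal; xlabel('X [m]'); ylabel('Y [m]');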
The second functional module transforms the vehicle coordinate system into the global coordinate system and designs the ideal parking path based on the preview curvature correction algorithm, further optimizing the parking path. The coordinate error is obtained by subtracting the continuously output ideal path coordinate from the real-time coordinate of the model vehicle. The steering wheel angle is controlled from the feedback of this coordinate error by the dual closed-loop sliding mode variable structure control algorithm, as shown in Figure 15. In this module, the input signal includes the coordinate [y] of the planned path and the coordinate [Y_lat] of the vehicle rear axle midpoint, the output signal is the steering wheel angle, and [yd] is a constant. The third functional module makes use of the Stateflow module to output the target speed of the vehicle: based on the planned path, the target speed of the different path segments is designed and output continuously.
The speed error is obtained by subtracting the target speed from the real-time speed of the model vehicle. The dual closed-loop sliding mode variable structure control algorithm is used to control the output speed of the vehicle from the feedback of this speed error, as shown in Figure 16. In this module, the input signal includes the coordinates of the vehicle, the planned path and the control parameters, while the output signal includes the coordinates of the vehicle rear axle midpoint, the vehicle speed, and the steering wheel angle. The flag is used to determine which parking stage the vehicle is in and to give the next path coordinate, the target steering wheel angle, and the target speed. The throttle and brake in the first module are controlled by feedback of the vehicle speed difference.
Co-Simulation Experiment
The co-simulation experiment with CarSim and MATLAB is used to verify the rationality of the path planning and the tracking control effect; the experimental results are shown in Figure 17. In the figure, the red curve refers to the planned parking path based on the preview correction, and the blue curve represents the actual tracking path of the vehicle. Under the path planning and tracking control model designed in this paper, the model vehicle tracks the optimized parking path well during the parking process, and the maximum coordinate deviation does not exceed 15 cm.
Figure 18 shows the data curves of the co-simulation experiment. It can be seen from (a) that the steering wheel quickly tracks the requested angle. The stable slope during tracking indicates that the steering wheel turns smoothly. The horizontal segments represent the periods during which the steering wheel holds an angle (such as 540°, 540°, and 0°), indicating good angle retention. In summary, the data in (a) show that the steering wheel angle changes smoothly and evenly and is held well. The data in (b) and (c) show that, with the control model, the speed-tracking effect is good and the variation range of the vehicle yaw angle basically conforms to the driving habits of skilled drivers.
Figure 19 shows the path tracking results of the method designed in Reference [22]. Compared with the simulation results in Figure 17, the path planning and tracking control model designed in this paper is more reasonable and effective. In terms of path planning, the black line in Figure 19 is a parallel parking path planned with the method in Reference [22]; obvious curvature changes appear near 3 m and 5 m on the x-axis. The red line in Figure 17 is the path planned with the method of this paper, and its curvature is smooth at every point on the x-axis, so the path planning method designed in this paper yields a smoother parking path and is more reasonable and effective. In terms of tracking control, the same initial tracking error is set and the path tracking effects are compared. Under the tracking control of the method in Reference [22], the initial error is eliminated at the 1.3 m position; there is overshoot, the convergence speed is slow, and the overshoot is eliminated at the 2.8 m position. During path tracking, the maximum tracking error on the x-axis exceeds 20 cm and that on the y-axis exceeds 15 cm. The method in this paper tracks the target position at 0.8 m and eliminates the overshoot at 1.5 m; the overshoot is small and the convergence speed is fast. In the whole path tracking process, the tracking error on the x-axis does not exceed 10 cm and that on the y-axis does not exceed 5 cm. In contrast, the control method designed in this paper shows a better path tracking effect.
Automatic Parking Test of Real Vehicle
To verify the effectiveness of the research in this paper, an automatic parking system was developed based on the control model and tested on a real vehicle. Figure 20 shows the architecture of the automatic parking system developed in this paper. The system uses 12 ultrasonic radar sensors to reduce the detection blind area around the vehicle. The eight radar sensors installed at the front and the rear of the vehicle are short-range ultrasonic radars with a detection range of more than 2.5 m, mainly used to detect obstacles in the parking path, while the four radar sensors installed on the sides of the vehicle are long-range ultrasonic radars with a detection distance of more than 4.5 m, mainly used for parking space detection. The four side radar sensors can also detect obstacles around the vehicle during the parking operation.
Note: In the above framework diagram, radar refers to ultrasonic radar sensor, EMS/VCU represents the power control unit, GSM denotes the shift control unit, EPS is steering control unit, EPB refers to the parking control unit, HMI represents the human-machine interaction unit, and ABS/ESU is brake control unit. Besides, Controller means the automatic parking controller designed in this paper, and CAN refers to the vehicle controller area network.
The equipment required for the real vehicle test included the following: test vehicle, automatic parking system, CANoe, laptop (equipped with the software CodeWarrior 10.6.4, FreeMaster and MATLAB), PE downloader, oscilloscope, etc., as shown in Figure 21. The automatic parking system is installed on the test vehicle for the parking test, CANoe is used to collect the vehicle operating data, and the laptop is used for data recording and parameter debugging (CodeWarrior 10.6.4 provides the editing environment for the single-chip microcomputer used in this system, while FreeMaster and MATLAB are used for data recording and analysis). The PE downloader is used to download the program to the controller ECU, and the oscilloscope is used for signal acquisition and monitoring.
To verify the effectiveness of the path planning and path tracking control model designed in this paper, a real-vehicle automatic parking test was carried out. First, an open space was used to build a parking slot with sufficient length, so that the parking slot size requirements can be satisfied and the parking completed with only one reverse gear operation. The parameters of the parking slot used in the test are: length × width = 7 m × 2.5 m. FreeMaster software is then used to record the driving trajectory data during the whole parking process. Finally, the coordinates of the rear axle midpoint of the vehicle are selected and imported into MATLAB for conversion and processing, and the trajectory of the rear axle midpoint of the test vehicle during parking is plotted, as shown in Figure 22.
The optimized parking path is compared with the actual trajectory of the test vehicle in MATLAB. As shown in Figure 23, the deviation between the actual trajectory and the planned path is small, and the test vehicle is able to track the planned path and complete the parking well.
Since the above parking slot is large and relatively rare in parking lots, in order to verify the effectiveness of the design control model in the real parking scenario, parking slots were built based on a real parking scenario, and several automatic parking tests were conducted. There are two kinds of parking slots for the automatic parking test. One is the standard parking slot on the right side with a length of 5.9 m (i.e., vehicle length + 1.2 m), which is composed of the front and rear obstacles with the curbstone. The other is the medium parking slot on the right side with a length of 5.6 m (i.e., vehicle length + 0.9 m), which is composed of the front and rear obstacle with the curbstone. In addition, Figure 24 shows the parking scenario for the automatic parking test.
parking slot with sufficient length. The requirements in terms of parking slot size can be satisfied by only one reverse gear, thus completing the parking. The parameters of the parking slot used in the test are: length × width = 7 m × 2.5 m. After that, FreeMaster software is used to record the driving trajectory data during the whole parking process. Finally, the coordinates of the rear axle midpoint of the vehicle are selected and imported into MATLAB for conversion and processing. The trajectory of the rear axle midpoint of the test vehicle during the parking is plotted, as shown in Figure 22. The optimized parking path is compared with the actual trajectory of the test vehicle in Matlab. As per Figure 23, the results show that the deviation between the actual trajectory and the planned path is small, and the test vehicle is able to track the planned path to complete the parking well. Since the above parking slot is large and relatively rare in parking lots, in order to verify the effectiveness of the design control model in the real parking scenario, parking slots were built based on a real parking scenario, and several automatic parking tests were conducted. There are two kinds of parking slots for the automatic parking test. One is the standard parking slot on the right side with a length of 5.9 m (i.e., vehicle length + 1.2 m), which is composed of the front and rear obstacles with the curbstone. The other is the medium parking slot on the right side with a length of 5.6 m (i.e., vehicle length + 0.9 m), which is composed of the front and rear obstacle with the curbstone. In addition, Figure 24 shows the parking scenario for the automatic parking test. We carried out the automatic parking tests with the side distance of 0.5 m~1.5 m. We chose five side distances of 0.5 m, 0.8 m, 1.0 m, 1.3 m and 1.5 m for the parking test, and five parking tests were carried out with each side distance continuously. We used the automatic parking system performance The optimized parking path is compared with the actual trajectory of the test vehicle in Matlab. As per Figure 23, the results show that the deviation between the actual trajectory and the planned path is small, and the test vehicle is able to track the planned path to complete the parking well. Since the above parking slot is large and relatively rare in parking lots, in order to verify the effectiveness of the design control model in the real parking scenario, parking slots were built based on a real parking scenario, and several automatic parking tests were conducted. There are two kinds of parking slots for the automatic parking test. One is the standard parking slot on the right side with a length of 5.9 m (i.e., vehicle length + 1.2 m), which is composed of the front and rear obstacles with the curbstone. The other is the medium parking slot on the right side with a length of 5.6 m (i.e., vehicle length + 0.9 m), which is composed of the front and rear obstacle with the curbstone. In addition, Figure 24 shows the parking scenario for the automatic parking test. We carried out the automatic parking tests with the side distance of 0.5 m~1.5 m. We chose five side distances of 0.5 m, 0.8 m, 1.0 m, 1.3 m and 1.5 m for the parking test, and five parking tests were carried out with each side distance continuously. We used the automatic parking system performance evaluation method to record all test data [36][37][38]. 
Tables 2 and 3 show the automatic parking evaluation indices, evaluation criteria, and test data for the standard parking slot and the medium parking slot, respectively (items with a gray background in the tables denote failed items). Based on the data from the 50 parking experiments and the evaluation criteria, the numbers of successful and failed parking operations, as well as the reasons for parking failure, were obtained. The overall parking success rate is 90% (45 of 50 attempts). Two of the failures were caused by failure to search for the parking slot; excluding these two attempts, the parking success rate reaches 93.75% (45 of 48). Since this paper does not study the parking slot search algorithm, only the causes of parking failure related to path planning and tracking control are analyzed, rather than the search failures. By analyzing the real-time bus data collected during the parking process, two reasons for parking failure were identified. First, the deviation between the curbstone distance detected by the long-range ultrasonic radar and the real curbstone distance exceeded ±8 cm. This can cause deviations in the planned parking path, leaving the wheel too close to or too far from the curbstone when parking is completed, i.e., outside the 10 cm to 25 cm wheel-to-curbstone range required by the success criterion. Second, because of the long driving distance in the parallel parking process, wheel speed pulse errors, steering wheel angle errors, and wheel slip during steering accumulate, producing positioning and attitude errors of the vehicle. Since the system cannot detect these errors, a large deviation arises in path tracking control, which in turn leads to parking failure.
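A one-line check of the reported success rates, using the counts stated above:

```python
trials, successes, search_failures = 50, 45, 2
print(successes / trials)                      # 0.90   -> overall success rate
print(successes / (trials - search_failures))  # 0.9375 -> excluding slot-search failures
```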
The test results show that the parking path planning and tracking control model designed in this paper is of great significance for improving the parking success rate and the parking effect in real parking scenarios. The repeated real-vehicle automatic parking tests verify that the designed model has high reliability and stability. Figure 25 shows some photos taken during the automatic parking tests.
Conclusions and Prospects
A new method is proposed in this paper for path planning and tracking control of an automatic parking system for intelligent vehicles. The method involves optimizing the parking path planning, verifying the path tracking control algorithm, and validating the proposed parking path planning and tracking control model through both simulation experiments and real-vehicle testing. The results obtained from the automatic parking system developed on the basis of the proposed model indicate that the system is not only highly intelligent, but also achieves a higher parking success rate, better parking efficiency, and higher parking reliability and stability for drivers.
The main objectives and conclusions of this paper are summarized as follows: (1) The vehicle kinematic model for parking was established and the parking movement constraints were analyzed. A reasonable and feasible parallel parking path planning scheme was proposed and analyzed in simulation. In addition, an optimization method for curvature outliers in the path was studied based on preview theory, and the parking path was thereby optimized. (2) To reflect the vehicle motion correctly and accurately, a simplified path tracking model for parking was developed. To improve the path tracking accuracy, an automatic parking path tracking controller was designed based on a dual closed-loop sliding mode variable structure control algorithm. (3) The test vehicle model was built in CarSim, and the input and output variables of the controller were predefined. A co-simulation platform was built with CarSim and MATLAB/Simulink, in which the parking path planning and tracking control model was designed and optimized, and the effectiveness of the control model was verified by co-simulation experiments. (4) An automatic parking system was developed based on the designed control model, and real-vehicle parking tests were carried out. The effectiveness of the control model was further verified, and its high reliability and stability were demonstrated.
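Item (1) refers to a vehicle kinematic model for low-speed parking. The sketch below shows the standard rear-axle bicycle kinematics commonly used for this purpose; it is only an illustration of the kind of model meant, not the paper's exact formulation, and the wheelbase and speed values are assumptions.

```python
import math

def kinematic_step(x, y, yaw, v, steer, dt, wheelbase=2.7):
    """One Euler step of rear-axle bicycle kinematics often used for low-speed
    parking: (x, y) is the rear-axle midpoint, yaw the heading, v the
    longitudinal speed, steer the front-wheel steering angle (wheelbase assumed)."""
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += v / wheelbase * math.tan(steer) * dt
    return x, y, yaw

# Hypothetical usage: reverse at 0.5 m/s with a constant steering angle for 5 s.
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = kinematic_step(*state, v=-0.5, steer=math.radians(25), dt=0.05)
print(state)
```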
In follow-up research, we will further optimize the driver and threshold configuration of the ultrasonic radar and improve the parking slot searching algorithm, thereby improving the accuracy of boundary detection and laying the foundation for further optimization of the parking path. In addition, we will improve the vehicle control model to eliminate the vehicle positioning and attitude errors caused by accumulated errors, thereby improving tracking control accuracy and further increasing the parking success rate.
"Engineering",
"Computer Science"
] |
Corporate Social Responsibility Disclosure and Firm Performance of Malaysian Public Listed Firms
Corporate Social Responsibility (CSR) disclosure has become a rising concern for public listed firms worldwide owing to its ability to enhance a firm's market performance and financial performance. The main objective of this study is to investigate the relationship between CSR disclosure and the firm performance of Bursa Malaysia's listed companies based on their market value added (MVA), return on equity (ROE), and return on assets (ROA). A sample of 324 public listed companies' annual reports for the period 2014 to 2016 was obtained from Bursa Malaysia and examined, and the extent of their CSR disclosure was measured and analyzed. After accounting for control variables such as firm size, firm age, firm leverage, and firm liquidity, the results show that there is a positive and significant relationship between CSR disclosure and firm performance in terms of ROA and ROE. This reveals that a high level of CSR disclosure helps firms achieve optimum performance through increased competitiveness, an improved image in society, and new opportunities in the marketplace. The findings also show mixed results among the control variables with respect to firm performance. For future research, this paper recommends extending the study by using different CSR disclosure measurements, different firm performance measures such as return on investment (ROI) and Tobin's Q, and different samples.
Introduction
For many years, there has been significant growth in the awareness and practice of Corporate Social Responsibility (CSR) disclosure worldwide. CSR disclosure has been a rising concern for public listed companies, as Malaysia is one of the emerging capital markets. On 14 December 2006, Bursa Malaysia required all listed companies to report their CSR actions in their annual reports (Bursa Malaysia Securities Berhad, 2015). Bursa Malaysia prepared a CSR framework in 2006 that acts as a guideline for listed companies to report their CSR activities and practices. Although the practice of CSR in Malaysia is voluntary rather than mandatory, the disclosure of CSR activities is known to provide better transparency. As a result, many companies in Malaysia actively engage in CSR activities not only to meet the requirement, but also to gain corporate image and competitive advantage. For example, DiGi Telecommunications Sdn. Bhd., one of the mobile communication firms in Malaysia, has a well-established CSR disclosure practice. DiGi was awarded the "Best Overall CSR Programme" in the Prime Minister's CSR Awards in 2007 for its efforts and contributions to society through CSR activities (Wahari, 2007).
The inclusion of CSR activities in the annual report is believed to encourage more investors to invest in a firm, because the notion of responsible business conduct can influence customers' purchasing behavior, leading to increased product sales and brand loyalty, and thus enabling firms to survive and perform better in a competitive market. Although all listed companies in Malaysia disclose their CSR activities and practices in their annual reports, a general observation is that these organizations focus only on certain aspects of CSR, especially those that provide the highest visibility, such as philanthropy and the public relations side of CSR that concerns the community and society. It is therefore crucial to examine the relationship between CSR disclosure and firm performance in terms of market and accounting performance, in order to encourage more firms in Malaysia to practice CSR disclosure, because efforts toward better CSR practices are vital for the development of the economy and capital market in Malaysia.
Most overseas researchers have claimed that CSR has a positive impact on firm performance. Accounting-based financial indicators such as return on equity (ROE) and return on assets (ROA) are widely used to evaluate firm performance. In Malaysia, Yusoff and Adamu (2016) found that the relationships between CSR disclosure and financial performance in terms of ROE and ROA were mostly positive. Stakeholders have more confidence in firms that practice CSR disclosure well, and this confidence increases when firms show concern about issues in society. Stakeholders will therefore fully support the ethical actions of such firms, improving firm performance. This implies that firm performance can be enhanced by practicing good CSR activities.
On the other hand, market-based financial indicators such as market value added (MVA) have been used less in earlier studies. MVA acts as an indicator of how well a firm is able to create returns for investors, as well as a signal of whether the firm has strong leadership and sound governance. CSR disclosure and MVA have been found to have a mixed relationship. Fooladi and Kolaie (2015) claimed that CSR disclosure had a positive impact on MVA, because CSR activities can increase a firm's stock price as investors gain confidence in the firm. Meanwhile, Dewi, Sudarma, Djumahir, and Ganis (2014) argued that there was no connection between CSR disclosure and MVA, claiming that MVA is derived from other aspects, especially those related to return on equity. This indicates that MVA is not wholly influenced by a company's CSR activities, but is also affected by the yield on shareholders' capital; thus, the influence of CSR on improving MVA was small.
There is ample literature explaining the effect of CSR disclosure on firm performance in Western and European countries, but little work has been done in Malaysia in terms of market value added. As a result, the extent to which CSR disclosure affects the performance of companies listed on Bursa Malaysia remains unclear. This study is therefore an initiative to further examine the relationship between the extent of CSR disclosure and firm performance in terms of MVA, ROE, and ROA, since prior results were inconsistent. The study is designed to fill the gap left by earlier studies and to offer improved evidence in this field.
Literature Review
Stakeholder theory has been extensively used by empirical researchers to describe the link between CSR disclosure and firm performance. According to stakeholder theory, CSR activities affect both revenues and costs. CSR activities can create extra revenue directly or indirectly, and customers' purchasing behavior has a direct effect on a company's revenues. With the rising consciousness of social and environmental concerns, customers demand CSR-related products and remain loyal to such brands (Servaes & Tamayo, 2013). Consumer-oriented CSR activities also include intangible elements such as a reputation for quality and trustworthiness, which can differentiate products and generate more revenue (Lev, Petrovits, & Radhakrishnan, 2010).
The positive relationship between CSR disclosure and firm performance has remained strong across most prior empirical studies. Most overseas researchers have reported that CSR disclosure and financial performance indicators such as return on assets (ROA) and return on equity (ROE) are positively related. Researchers such as Mittal, Sinha and Singh (2008), Fooladi and Kolaie (2015), Mujahid and Abdullah (2014), Kabir and Hanh (2017), Uadiale and Fagbemi (2012), Kanwal, Khanam, Nasreen, and Hameed (2013), Yusoff and Adamu (2016), and Dkhili and Ansi (2012) found a positive effect of CSR on firm performance. Mittal, Sinha and Singh (2008) reported slight evidence that firms with a code of ethics generated more market value added (MVA) than those without such a code. A firm with a code of ethics creates significantly more MVA than a firm without one, because such firms enjoy a more desirable reputation and image in the marketplace, which increases investors' confidence and thus the firm's MVA. In addition, Fooladi and Kolaie (2015) found a relationship between the state of CSR disclosure and the MVA of listed companies: CSR disclosure can affect the operating performance of firms by increasing investors' interest in investing in them, because CSR actions raise the firm's stock price and thereby lead to better firm performance.
In addition, CSR has a positive association with return on equity (ROE), return on assets (ROA), earnings per share, and stock price (Mujahid and Abdullah, 2014). That study compared the financial performance and shareholder wealth of CSR firms with those of non-CSR firms and found that the CSR concept helped firms achieve optimum financial performance in a competitive environment by increasing their competitiveness. Firms actively involved in CSR activities were also found to experience higher financial performance than other firms. Kabir and Hanh (2017) explained that CSR activities help enhance the reputation of firms in the eyes of the public; Vietnamese firms actively involved in CSR activities experienced higher financial performance than other firms, because such involvement increased investors' intention to participate in Vietnamese firms and increased customers' loyalty to their brands. Uadiale and Fagbemi (2012) revealed that an organization's earnings increased when it increased its CSR activities; spending on CSR activities was found to improve a firm's ROE and ROA, because CSR activities create new opportunities in the marketplace and improve the firm's image in society. Kanwal, Khanam, Nasreen, and Hameed (2013) likewise reported that CSR is positively associated with a firm's ROA and ROE, adding that Pakistani firms spent on employees' wellbeing to retain existing employees and attract potential employees, building high confidence among investors and employees toward the firms. As a result, firms' expenditure on CSR helped them achieve long-term sustainable development and boosted their financial performance. Dkhili and Ansi (2012) also argued that CSR and ROE are positively related, attributing this to the fact that stakeholders can improve economic performance: a company whose stakeholders behave more favorably than those of its competitors will have a higher level of financial performance. In Malaysia, Yusoff and Adamu (2016) found that a positive relationship exists between CSR and both ROE and ROA. Comprehensive financial management can be achieved effectively through proper CSR practices, and company performance can be improved by enhancing good CSR practice. Yusoff and Adamu (2016) also added that firms' adequate implementation of workplace activities was associated with their financial performance, pointing out that workplace activities are directly connected with human capital, portfolio value, and operating expenses.
On the other hand, a few researchers, such as Dewi et al. (2014) and Kamatra and Kartikaningdyah (2015), claimed that CSR disclosure has no relationship with MVA, ROA, or ROE. Dewi et al. (2014) pointed out that CSR has no effect on MVA, arguing that CSR has very little influence on increasing MVA because MVA is also determined by economic aspects such as inflation and GDP; an increase in MVA does not directly produce gains and cannot increase the firm's social actions. Kamatra and Kartikaningdyah (2015) expressed the view that CSR has no effect on ROE, because some investors do not care about the CSR activities performed by firms, viewing them merely as image-building.
Based on the above literature review, most studies have found a positive relationship between CSR disclosure and financial performance (e.g., Mittal, Sinha & Singh, 2008; Fooladi & Kolaie, 2015; Mujahid & Abdullah, 2014; Kabir & Hanh, 2017). Hence, we hypothesize our study as follows: H1: There is a relationship between CSR disclosure and firm performance.
Sample
The sample in this study was drawn from public listed firms in different sectors of Bursa Malaysia, such as Consumer Products, Construction, Properties, and Trading/Services. The final sample includes 324 firms after removing firms without complete annual reports. The study period is three years, from 2014 to 2016. In this research, secondary sources were used to collect Corporate Social Responsibility (CSR) disclosure data from firms' annual reports. The financial data for the dependent variables, namely market value added (MVA), return on equity (ROE), and return on assets (ROA), and for the control variables, namely firm size, age, leverage, and liquidity, were collected from Thomson Reuters DataStream.
Corporate Social Responsibility (CSR) Disclosure
CSR disclosure is an instrument with which organizations voluntarily incorporate social and environmental concerns into their operations and their relations with stakeholders, going beyond their regulatory responsibilities (Kusumadilaga, 2006). To measure the level of CSR disclosure in the annual reports, a content analysis approach was adopted. Bowman (1978) described content analysis as an inquiry process that relies not on casual reading but on explicit counting and coding of particular lines of prose, word usage, and disclosure. Disclosure was measured as an unweighted count of the number of words on CSR themes in the annual report; using the word-count method helps guard against irregularities in quantifying disclosure (Zeghal and Ahmed, 1990). In this study, the four main CSR themes or areas recommended by Bursa Malaysia were examined: Environment, Community, Workplace, and Marketplace. The words "Environment", "Community", "Workplace", and "Marketplace" appearing in the annual reports were scanned and counted by computer, as sketched below.
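The following is a minimal sketch of the word-count content analysis described above, assuming the annual report has already been converted to plain text; the file name is a placeholder.

```python
import re

THEMES = ["environment", "community", "workplace", "marketplace"]

def csr_word_count(text):
    """Unweighted count of occurrences of the four Bursa Malaysia CSR theme words."""
    words = re.findall(r"[a-z]+", text.lower())
    return {theme: words.count(theme) for theme in THEMES}

# Hypothetical usage on an annual report already extracted to plain text.
with open("annual_report_2016.txt", encoding="utf-8") as f:
    counts = csr_word_count(f.read())
print(counts, "total CSR disclosure score:", sum(counts.values()))
```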
Market Value Added (MVA)
MVA is a performance measurement tool that captures the increase in the value of the firm's stock. Investors often use MVA to assess how efficiently a firm's management team uses its capital to increase shareholder value. A positive MVA indicates that the firm has increased in value, while a negative MVA indicates that firm value has been destroyed. Berceanu, Siminica, and Circiumaru (2010) measured MVA as the ratio of the change in market value added for the year to total equity for the year.
The MVA measure is: MVA = Change in market value added of the year / Total equity of the year
Return on Equity (ROE)
ROE measures a firm's ability to generate profits from its equity and shows how effectively the firm uses its capital to generate profit. A high ROE indicates that the firm's management is operating efficiently. Kamatra and Kartikaningdyah (2015) measured ROE as net income divided by total equity.
The ROE measure is: ROE = Net Income / Total Equity
Return on Assets (ROA)
ROA examines a firm's ability to use its assets effectively to generate profit. A high ROA indicates that the firm achieves higher earnings from its assets and that its assets are used efficiently to generate profits. Dkhili and Ansi (2012) measured ROA as net income over total assets. In this study, ROA is measured as follows:
ROA = Net Income / Total Assets
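A minimal sketch of the three performance measures as defined above; the input figures are placeholders for illustration only, not data from the study.

```python
def mva(change_in_market_value_added, total_equity):
    """MVA = change in market value added of the year / total equity of the year."""
    return change_in_market_value_added / total_equity

def roe(net_income, total_equity):
    """ROE = net income / total equity."""
    return net_income / total_equity

def roa(net_income, total_assets):
    """ROA = net income / total assets."""
    return net_income / total_assets

# Hypothetical firm-year figures (in RM million), for illustration only.
print(mva(120.0, 950.0), roe(85.0, 950.0), roa(85.0, 1600.0))
```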
Regression Model
Multiple regression analysis was used to measure the relationship between Corporate Social Responsibility (CSR) disclosure and firm performance in terms of market value added (MVA), return on equity (ROE), and return on assets (ROA), while control variables such as firm age, firm leverage, firm liquidity, and firm size were included to test their effect on firm performance. The multiple regression model allows greater flexibility because the variables that influence the dependent variables can be controlled for explicitly.
Functional form: Firm Performance = f(CSRD, firm age, firm leverage, firm liquidity, firm size). The full model (with MVA, ROA, or ROE as the dependent variable) takes the form Performance_it = β0 + β1 CSR_it + β2 LSIZE_it + β3 AGE_it + β4 LEV_it + β5 LIQ_it + ε_it.

Table 2 shows that MVA has a positive correlation with ROA and ROE at the 1% significance level. MVA also has a negative correlation with CSR at the 5% level and a negative correlation with LSIZE at the 1% level. ROA, the second performance measure, has a positive correlation with ROE and LIQ at the 0.01 significance level. ROE, the third dependent variable, is positively correlated with CSR and LSIZE at 1%. CSR, the independent variable, has a positive relationship with LEV at the 1% significance level. Among the control variables, LEV has a positive correlation with LSIZE and a negative correlation with LIQ at 1%, and LIQ has a negative correlation with LSIZE at the 1% level. No correlation exists between AGE and either LIQ or LSIZE.

CSR disclosure was expected to have a positive relationship with firm performance in terms of MVA, ROE, and ROA. The results summarized in Table 3 show that the coefficient of CSR was 0.0004, 0.0004, and 0.0002, respectively. Based on Models 2 and 3, there is a positive and significant relationship between CSR and firm performance in terms of ROA and ROE, with p-values of 0.0535 and 0.0916, respectively, below the 0.10 significance level. The higher the level of CSR disclosure, the better the firm's performance: firms' investment in CSR helps them achieve long-term sustainable development and boosts their financial performance. The results are consistent with Mujahid and Abdullah (2014), who concluded that CSR has a positive association with ROA and ROE, claiming that the CSR concept helps firms achieve optimum financial performance in a competitive environment by increasing their competitiveness. Firms' spending on CSR activities was found to improve their ROE and ROA. In addition, CSR activities can create new opportunities in the marketplace and improve the firm's image in society (Uadiale and Fagbemi, 2012). The results of Models 2 and 3 thus reinforce that CSR has a significant positive relationship with firm performance in terms of ROA and ROE. However, the findings in Model 1 indicate that the relationship between CSR and MVA is positive but insignificant. This overall outcome is consistent with the study by Dewi et al. (2014), which found a positive and significant relationship between CSR and both ROA and ROE, but a positive yet insignificant relationship between CSR and MVA. Dewi et al. (2014) explained that CSR indices help firms with image building, reputation maintenance, and legitimacy with investors, which increases firm capacity and can influence competitiveness. A CSR index also reflects the level of a firm's CSR disclosure: a high level of CSR disclosure builds trust and increases investors' desire to play their roles in the firm, and when firms obtain more capital they can make more investments, hence the higher ROE. As a result, CSR can deliver good financial performance in the form of good ROA and ROE. However, Dewi et al. (2014) maintained that CSR has no effect on increasing MVA: MVA measures managerial performance and is influenced by many other economic factors such as inflation and gross domestic product (GDP), so an increase in CSR activities has no effect on MVA, leading to an insignificant relationship between CSR and MVA.
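The estimation described above can be illustrated as an ordinary least squares regression of each performance measure on CSR disclosure and the controls. Below is a minimal sketch using statsmodels; the CSV file and column names are assumptions for illustration, not the study's actual data files, and the study's exact panel estimator is not specified here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel of firm-year observations with the variables used in the study.
df = pd.read_csv("csr_panel_2014_2016.csv")  # assumed columns: MVA, ROA, ROE, CSR, LSIZE, AGE, LEV, LIQ

# One model per performance measure, mirroring Models 1-3.
for dep in ["MVA", "ROA", "ROE"]:
    model = smf.ols(f"{dep} ~ CSR + LSIZE + AGE + LEV + LIQ", data=df).fit()
    print(dep, "CSR coefficient:", model.params["CSR"], "p-value:", model.pvalues["CSR"])
```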
Based on Models 2 and 3 in Table 4.5, there is a positive and significant relationship between firm size and both ROA and ROE. The p-values for ROA and ROE were 0.0000, below the 0.01 significance level: the larger the firm, the better its performance in terms of ROA and ROE. This outcome is consistent with Kabir and Thai (2017), Babalola (2013), and Ofuan and Izien (2016), who claimed that in emerging countries, larger firms have more resources than smaller firms to obtain better equipment and professional experts to improve profitability, because they have more bargaining power over the market. However, in Model 1, there was a negative and significant relationship between firm size and MVA at the 1% significance level, meaning that the smaller the firm, the better its performance in terms of MVA. This result is consistent with Hannan and Freeman (1989), who found that smaller firms are more creative and innovative and can transform more easily to enhance their value than larger firms.
Next, there is a negative and significant relationship between firm age and both ROA and ROE. The p-values for ROA and ROE were both 0.0000, below the 0.01 significance level: the younger the firm, the better its performance in terms of ROA and ROE. In Model 1, the relationship between firm age and MVA was negative but insignificant, suggesting that younger firms also perform better in terms of MVA. The negative and significant relationship between firm age and ROA and ROE is supported by Loderer and Waelchli (2010), who attributed the negative relation between firm age and firm performance to organizational inflexibility and inertia in identifying and adopting new innovations. In contrast, the outcome is inconsistent with prior studies by Fooladi and Kolaie (2015), Kabir and Thai (2017), and Ofuan and Izien (2016), which claimed a positive and significant relationship between firm age and firm performance; these researchers explained that knowledge of effective production techniques increases over time and thus improves firm performance, in line with the theory of learning by doing, so older firms would perform better than younger firms.
Furthermore, firm leverage has a negative and significant relationship with ROA and ROE. The p-values for ROA and ROE are 0.0000, below the 0.01 significance level: the lower the firm's leverage, the better its performance in terms of ROA and ROE. Similar results were found by Fooladi and Kolaie (2015) and Kabir and Thai (2017). Fooladi and Kolaie (2015) observed that growing firms are more conservative when using debt financing in order to maximize their performance. Growing firms need a low level of leverage because they have little intention of paying high interest; they do not depend entirely on debt to finance their operating activities, since they have more opportunities to engage in flexible investments, and their retained earnings are sufficient to finance the firm's activities. However, in Model 1, there is a positive and significant relationship between firm leverage and MVA at the 5% significance level, with a p-value of 0.0170. This positive relationship is consistent with past research such as Modigliani and Miller (1963) and Jensen (1986). Using debt to finance operating activities increases the value of firms (Modigliani and Miller, 1963), and Jensen (1986) concluded that increases in debt levels raise a firm's market value while keeping bankruptcy costs low, because profitable firms use financial leverage to resolve the agency problem between shareholders and managers that arises from inconsistent interests. To maximize the market value of firms, shareholders use financial leverage to redirect managers away from maximizing only their own interests and to decrease the agency costs associated with equity (Jensen, 1986). Hence, the higher the firm's leverage, the better its performance in terms of MVA.
The final variable, firm liquidity, has a positive and significant relationship with ROA and ROE. The p-value for ROA is 0.0002, below the 0.01 significance level, while the p-value for ROE is 0.0509, below the 0.10 significance level: the higher the firm's liquidity, the better its performance in terms of ROA and ROE. This result is consistent with prior studies by Bibi and Amjad (2017) and Janjua, Asghar, Munir, Raza, Akhtar, and Shahzad (2016), and implies that firms have enough resources to pay off their debts with their available current assets; in other words, firms do not have to sell their profit-generating assets to pay off their obligations. However, in Model 1, the relationship between firm liquidity and MVA is negative but insignificant, suggesting that lower liquidity accompanies better performance in terms of MVA. This result is inconsistent with Demirgüneş (2016), who claimed that firms with high liquidity can earn more profits because they can make additional profitable investments with their extra resources.
Conclusion and Implication of the Study
The results show that there is a positive and significant relationship between CSR and firm performance in terms of ROE and ROA, which is in line with stakeholder theory; therefore, the hypothesis of this study is supported. These findings suggest that high CSR disclosure can enhance a firm's corporate image, maintain a good reputation, and attract investors to increase the firm's capacity, leading to better firm performance. However, not all the predictions regarding CSR and firm performance were significant: the results show that CSR has a positive but insignificant relationship with MVA.
Firm size, one of the control variables, had a positive and significant relationship with firm performance in terms of ROA and ROE. Firm age was found to have a negative and significant relationship with firm performance in terms of ROA and ROE. Consistent with the hypothesis, firm leverage was found to have a negative and significant relationship with firm performance in terms of ROA and ROE, while a positive and significant relationship was found between firm leverage and MVA. Lastly, firm liquidity has a positive and significant relationship with ROA and ROE.
This study can serve as a guide for investors when making investment choices. The findings show that CSR disclosure has a positive and significant relationship with firm performance based on ROA and ROE, giving investors a better understanding of how CSR disclosure helps create value and increase firm profitability. This research also aims to help academics better understand the factors that determine firm performance and to provide substantial information on CSR in the Malaysian context.
On the other hand, this study can also help managers gain a better perception of how to increase firm performance. Managers and their management teams must realize that it is crucial to incorporate CSR activities into business operations, because doing so creates a positive corporate image and portrays the firm as responsible and caring about societal wellbeing and environmental concerns, rather than as a mere profit-maximizing entity. By disclosing good CSR practice, a firm's efforts at image building and reputation maintenance can attract more investors who wish to participate in the firm, and hence improve the firm's performance.
This study can also act as a point of reference for regulators such as Bursa Malaysia. Bursa Malaysia should prescribe the content of the CSR activities or practices that listed issuers disclose in their annual reports by providing clear guidelines, because most Malaysian firms tend to focus on community-based and environment-based CSR activities. Bursa Malaysia also needs to provide educational and engaging courses for listed issuers to familiarize them with sound practices for reporting CSR activities in annual reports.
Limitations of the Study and Recommendations for Future Research
There are several limitations to this research. First, the CSR disclosure in this study focuses only on the four themes or areas of CSR outlined by Bursa Malaysia; various other themes or areas could be used to reflect CSR, such as human rights, ethics and governance, environmental impact, and customer health. The sample is also limited to non-financial listed firms; CSR disclosure may have a different effect on the performance of financial listed firms and non-listed firms. In addition, this study uses only three performance indicators, namely Market Value Added (MVA), Return on Assets (ROA), and Return on Equity (ROE). Other performance indicators, such as return on investment (ROI) and Tobin's Q, as well as non-financial indicators, could be used to measure firm performance and may yield different results. For future research, this paper recommends extending the study by using different CSR disclosure measurements, different firm performance measures such as ROI and Tobin's Q, and different samples.
Firm performance is expressed in terms of MVA (market value added of the firm), ROE (return on equity of the firm), and ROA (return on assets of the firm). Table 1 reveals that the mean market value added (MVA) for the sample firms is 0.0426 times, ranging from -2.9878 times (minimum) to 5.2713 times (maximum); the gap between the minimum and maximum MVA is quite large, and the standard deviation is 0.6310 times. MVA measures market-based performance and is calculated as the ratio of the change in market value added for the year to total equity for the year. ROE is the second dependent variable, measured as the ratio of net income to total equity. The average ROE for the sample companies is 0.0765 (7.65%), ranging from -0.6817 to 0.9507 (-68.17% to 95.07%), with a standard deviation of 15.10%. ROA is the third dependent variable, measured as the ratio of net income to total assets. The mean ROA for the sample firms is 5.12%, with a minimum of -21.67%, a maximum of 42.55%, and a standard deviation of 7.32%.
Table 1 also displays the descriptive statistics for corporate social responsibility (CSR) disclosure. CSR disclosure was measured using content analysis based on word counts: the occurrences of the words "environment", "community", "marketplace", and "workplace" in the annual reports were counted. CSR disclosure has a mean of 29.6337 words (ranging from 3 to 300 words) with a standard deviation of 31.0155 words. Table 1 also shows descriptive statistics for the control variables. Firm age (AGE), calculated as the difference between the current year and the incorporation year, has an average of 29.1543 years, with a minimum of 3 years, a maximum of 82 years, and a standard deviation of 14.8648 years. Firm leverage (LEV), calculated as the ratio of total liabilities to total assets, has a mean of 0.2505 times and ranges from a low of 0.0004 times to a high of 0.7981 times, with a standard deviation of 0.1689 times. Firm liquidity (LIQ), calculated as the ratio of total assets to total liabilities, has a mean of 2.7821 times and ranges from a minimum of 0.1206 times to a maximum of 45.8595 times, with a standard deviation of 3.4604 times. Firm size, represented by total assets before taking natural logarithms, ranges from a low of RM23.209 million to a high of RM133,000 million, with a mean of RM3,200 million and a standard deviation of RM11,100 million.
Table 2 reports the correlations among the study variables. The full regression model is: Performance_it = β0 + β1 CSR_it + β2 LSIZE_it + β3 AGE_it + β4 LEV_it + β5 LIQ_it + ε_it (1), where Performance is MVA, ROA, or ROE. To examine the relationship between Corporate Social Responsibility (CSR) disclosure and these variables, the equation was re-estimated by substituting the dependent variable, Market Value Added (MVA), with Return on Assets (ROA) and Return on Equity (ROE). Table 4.5 summarizes the panel data results. Accordingly, equation (1) covers a wider set of dependent variables, with MVA, ROA, and ROE representing Models 1, 2, and 3, respectively, as different measurements of firm performance.
"Business",
"Economics"
] |
A Smart-hand Movement-based System to Control a Wheelchair Wirelessly
The number of elderly and disabled people worldwide has increased, and their day-to-day activities depend on others' help. Improving the quality of life of these people has become an important responsibility of society, and it is the role of technology specialists to make their lives as normal and easy as possible so that they can carry out their day-to-day activities at the right time without others' help. Many researchers have proposed solutions, but these have limitations such as poor performance and usability. In this paper, we propose a smart wireless-based wheelchair system that controls the motion of a wheelchair entirely wirelessly through hand movements, to help partially quadriplegic people perform their daily activities easily. In the proposed approach, sensors with relevant materials and technologies are integrated with microcontrollers to capture hand movement signals and process them to fully control the wheelchair wirelessly. An accumulator sensor placed on the user's hand acquires the directions of hand movements, which are translated into movement commands by an Arduino microcontroller that is directly connected to the wheelchair and moves it. The proposed system has been simulated, and the obtained results show the effectiveness of the proposed system and its applicability for use by most physically disabled persons.
Introduction
Wheelchair users are the most visible among the physically disabled community. Elderly and partially quadriplegic people are the group with the highest rates of manual and electric wheelchair use. (1) Wheelchair users face difficulties in their daily activities. It is difficult for the elderly and partially quadriplegic people to maneuver electrical and mechanical wheelchairs.
Nowadays, a wide range of technologies is available to help the disabled and physically challenged, and many things around us depend heavily on technologies that make life easier and more flexible. (1) These technologies have provided solutions to problems that are difficult for humans to solve. (2,3) Control systems are among the most recent technological advancements designed for various purposes, specifically to help the disabled and physically challenged and to replace conventional manual support systems. (4,5) According to the results of a recently completed clinical trial, assistive technology that enables individuals to maneuver a powered wheelchair with a variety of guided systems, such as those using a mouse cursor or a joystick and a tactile screen, and systems based on voice recognition, can be operated by individuals with a certain amount of upper-body mobility. (2,10) Moreover, technologies provide solutions in the medical field, which is our concern in this paper. Therefore, many attempts have been made to develop solutions that help those with disabilities to move and that minimize their dependence on others for movement. (6) Elderly and partially quadriplegic people suffering from severe paralysis may not be able to use these technologies, since they require accurate control. To improve the lifestyle of the physically challenged, in this work we aim to develop a wheelchair system that moves in accordance with the signals obtained from hand movements through an accumulator sensor. Since hand movements are limited and identified by processing data, we also aim to explore the signals collected by the accumulator for better maneuverability of the wheelchair.
The accumulator sensor is used to acquire specific activities and convert them into analog signals, which are encoded and decoded using encoder and decoder protocols and sent to other connected devices through an RF transmitter and an RF receiver. A microcontroller (AT89C51) is used to analyze the received signals and translate them into useful commands. An Arduino microcontroller (ATMEGA328P-PU) is used to connect the devices directly in order to perform fully wired or wireless control.
In this study, using the above-mentioned techniques, we developed a smart wireless wheelchair control using hand gestures (SWWCHG) system to address physical disability challenges and assist the elderly and fully or partially quadriplegic people in performing their daily life activities without others' help. The SWWCHG system consists of a model in which an accumulator sensor, together with relevant materials and technologies, works with an Arduino microcontroller unit that is directly connected to the wheelchair and moves it. The user's hand movements are used to achieve full wireless control of the wheelchair.
The major objective of the SWWCHG system is to solve physical disability challenges and improve the quality of life of the elderly and fully or partially quadriplegic people, and help them perform their daily life activities without othersʼ help.
The main contributions of our proposed system are as follows. 1) This new SWWCHG system will improve the day-to-day life quality of the elderly and partially or fully quadriplegic people.
2) It provides a solution familiar to users with full wireless control of a wheelchair using hand movements. 3) Hand gestures are identified and analyzed using a microcontroller without the need for a computer to control the wheelchair.
To the best of our knowledge, the SWWCHG system, which adopts hybrid techniques, is efficient. Baseline approaches make use of traditional wired techniques that are not familiar to elderly and partially or fully quadriplegic people. However, the SWWCHG system is wireless and a hybrid because it uses both the above-mentioned mechanisms and addresses the problems encountered in using and controlling a wheelchair by elderly and disabled users. The rest of the paper is organized as follows. In Sect. 2 we explain the existing work done so far. In Sect. 3, we present the proposed system. In Sect. 4, we describe the simulation and implementation procedures. In Sect. 5, we provide the results and discussion. Finally, in Sect. 6, we conclude this paper.
Literature Review
Over the years, several approaches, such as joystick-, gesture-, and chain-based control mechanisms, have been proposed to help physically disabled people reduce their dependence on others for movement. An image-processing-based approach was proposed in Ref. 3 to control a wheelchair; it relies on image processing to recognize gestures. The approach is easy for users to handle and operate, but it requires a wired web camera to perform the operation, which makes it unreliable and difficult to use. A gesture-based wheelchair approach was proposed in Ref. 4 to control a wheelchair through hand movements for disabled people. It uses a MEMS sensor attached to the hand and a three-axis accelerometer with digital output (I2C), which detects hand gestures and converts them into 6-bit digital values. The main drawback of this approach, which uses a remotely controlled system, is that the manner of holding the remote control may inconvenience users.
A smart wheelchair prototype based on hand gesture control was proposed in Ref. 5 to help disabled people control a wheelchair, but it requires many wired hardware components, which makes it inconvenient for users. A model for a hand-gesture-controlled user interface was presented in Ref. 6 to control a wheelchair through hand movements using an accelerometer sensor. The main drawback of this model is its low accuracy, which makes it unsuitable for users with low confidence. In addition, wired devices such as a global system for mobile communications (GSM) modem are required, making the model more difficult for the elderly and physically disabled people to use.
A hand-gesture-based approach using a touch sensor to let users control a wheelchair effectively was proposed in Ref. 7; however, it uses a wired hand-gesture hardware component, which makes it difficult for the elderly and physically disabled people to use. In Ref. 8, the authors proposed a hand-gesture-based approach to control a wheelchair that relies on the global positioning system (GPS) and GSM to identify locations, which makes it unreliable in noisy environments.
A hand-gesture-based approach that uses a Raspberry Pi and image processing techniques to control a wheelchair was proposed in Ref. 9. It relies on an image processing technique and a USB web camera to recognize gestures; its main drawbacks are that it is difficult for users to handle and operate and that it requires a wired web camera to perform the operations. An accelerometer-based gesture approach was proposed in Ref. 11 to control a wheelchair using GPS and GSM navigation; in this approach, data from the wheelchair are transmitted to a control room, and the location of the user is determined by a navigation application. In Ref. 12, the authors presented a framework to help people who cannot walk owing to physiological or physical illness; it involves computer-controlled wheelchairs, which will be unfamiliar to the elderly and physically disabled people.
A model for a hand-gesture wired-controlled user interface was presented in Ref. 13; this model uses an accelerometer sensor to control the direction of a wheelchair through hand movements. Its main drawback is that the transmission control is wired, making it difficult for users to handle and operate. In Refs. 14-16, 18, and 25, the authors developed hand-gesture-based methods to control wheelchair movement using MEMS and acceleration technologies; their main drawbacks are their lack of cost-effectiveness and their difficulty of use and operation, because they are unfamiliar to the elderly and severely physically disabled people. A gesture-recognition-based approach was proposed in Ref. 17 to control a wheelchair using an Android application; a smartphone and a connection are needed, and the user has to choose a direction within the four quadrants of the smartphone's touch screen, which is difficult for users. In Ref. 19, the authors present a hand-gesture-recognition approach using a real-time tracking method and a hidden Markov model to recognize continuous gestures against a stationary background. In this approach, the motion of the object provides important and useful information for object localization and extraction; however, the complexity of recognizing gestures is high and the accuracy is low, so it is neither beneficial nor convenient for users.
A vision-based human-machine interface (HMI) solution was proposed in Ref. 20 to control a wheelchair with head gestures, which are recognized by detecting the position of the nose on the user's face. The approach is uncomfortable and not applicable to all users, because it requires constant shoulder or neck movements, and the user's head must always remain within the range of the sensor; otherwise, the user cannot control the movement of the cursor. In Ref. 21, the authors presented an approach using closeness matching to detect hand motion, whereas in Ref. 22, the authors presented a method that uses a signal-coordination strategy based on an ARM11 Raspberry Pi and a Zigbee module. Another approach, proposed in Ref. 23, uses wearable hand gloves to capture hand movements to control a wheelchair. In Ref. 24, the authors described their work on gesture recognition applied to wheelchair control, in which gestures are recognized through a three-axis accelerometer sensor.
Proposed System
In this paper, we propose the SWWCHG system to overcome physical disability challenges and assist the elderly and fully or partially quadriplegic people to perform their daily life activities without othersʼ help.
We assume the following for our system: 1) An accumulator is placed on the user's hand to sense the movements of the hand and change them into analog signals; then, the signals will be directed to an encoder to be sent by an RF transmitter.
2) The hand movements of the user will be translated into movement commands by an Arduino microcontroller unit that is directly connected to the wheelchair to move it.
3) The hand movements of the user will be converted into numeric data using transmission and reception circuits. 4) There are five commands that provide adequate and proper control, namely, forward, backward, turn right, turn left, and stop commands.
Core components of SWWCHG
In the SWWCHG system, an accumulator with an RF transmitter, an RF receiver, and microcontrollers (AT89C51 and ATMEGA328P-PU) are used as its core components. These components are integrated with a wheelchair and programmed to fully control the movement and directions of the wheelchair. Figure 1 shows the core components and work mechanism of the SWWCHG system.
Hand gestures
For people who are partially paralyzed and can move their hands only in straight directions, we provide a solution that relies on hand gestures to represent the directions of the wheelchair, as shown in Fig. 2.
Accumulator
The accumulator in the proposed system is used to sense the movements of the hand and change them into analog signals, which will then be directed to an encoder sent by an RF transmitter, as shown in Fig. 3.
AT89C51 microcontroller
After the signals are received by the RF receiver and passed to the decoder (to be decoded), another microcontroller (AT89C51) is used to analyze the decoded signals and send them to the ATMEGA328P-PU microcontroller, which converts them into movement commands, as shown in Fig. 4.
Servo motor, IS sensor, and wheelchair
A wheelchair is directly connected to the ATMEGA328P-PU microcontroller. Small controlled servo motors are also connected to the wheelchair for speed control. An IS sensor is connected to the wheelchair to measure distance and stop the servo motors from moving if there is a barrier in front of the wheelchair.
Conversion of hand movements into numeric data
The proposed system uses two circuits for the transmission and reception of data. The work mechanism of each will be explained separately as follows.
Transmission circuit
This circuit is installed on the user's hand to identify the hand movements: forward, backward, rightward, and leftward. After identifying the hand movements that can be read, each movement was assigned to one of the five commands as follows:
• The command to move forward corresponds to the forward movement of the hand.
• The command to move backward corresponds to the backward hand movement.
• A right turn corresponds to the hand movement to the right.
• A left turn corresponds to the hand movement to the left.
• The stop command corresponds to the hand remaining straight without movement in any direction.
The direction of the hand movement depends on the output of the motion sensor, which produces different outputs as the hand moves. The X- and Y-axes are used only to determine the direction of the hand movement: hand movement to the front and back is represented by the X-axis, and movement to the right and left is represented by the Y-axis, as shown in Fig. 2.
The following mechanism is used to determine the directions of the hand movements and send them through the RF transmitter: 1) The direction of the hand movement is determined using the accelerometer. Since the accelerometer gives an analog output, it is converted into a digital output using the LM324 comparator IC. The accelerometer output is compared with a reference voltage, which is approximately half the supply voltage. An experiment showed that an appropriate reference voltage for the circuit is 2.55 V, because the accelerometer output rises above or falls below this value when a movement occurs. The comparator gives either a high- or low-voltage output through its output pins (1, 7, 8, 14), providing a digital output of four bits. Table 1 shows the relationship between the directions of hand movements and the comparator output.
2) The HT12E encoder converts the four parallel bits into serial data so that they can be transmitted via the RF transmitter.
3) The outputs of the encoder will be the inputs to the RF transmitter to send them to the receiving circuit. Figure 5 shows a diagram of the transmission circuit.
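As a purely illustrative sketch (not part of the published circuit), the same thresholding that the LM324 performs in hardware can be written in C. The 2.55 V reference and the direction of the X/Y changes follow the description above; the dead-band margin, the bit ordering, and the idea of doing this comparison in software rather than in the comparator are our own assumptions.

    #include <stdint.h>

    /* Hypothetical software analogue of the LM324 comparator stage:
     * compare the accelerometer X/Y voltages against the 2.55 V
     * reference and pack the result into the four bits
     * (front, back, right, left) that feed the HT12E encoder.   */

    #define V_REF    2.55f   /* reference voltage from the experiment   */
    #define V_MARGIN 0.10f   /* assumed dead band around the rest level */

    uint8_t direction_bits(float vx, float vy)
    {
        uint8_t bits = 0;

        if (vx > V_REF + V_MARGIN) bits |= 0x8;  /* forward:  X increases */
        if (vx < V_REF - V_MARGIN) bits |= 0x4;  /* backward: X decreases */
        if (vy < V_REF - V_MARGIN) bits |= 0x2;  /* right:    Y decreases */
        if (vy > V_REF + V_MARGIN) bits |= 0x1;  /* left:     Y increases */

        return bits;  /* 0x0 corresponds to the stop command (hand level) */
    }

A hardware comparator produces the same four bits directly, which is why no microcontroller is needed on the transmitting side; the sketch only makes the threshold logic behind Table 1 explicit.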
Reception circuit
This circuit receives the data transmitted by the transmission circuit, processes them using the AT89C51 controller, and then sends them to the ATMega328P-PU controller to control the wheelchair. The mechanism used to perform these tasks is as follows: a. The transmitted data are received by the RF receiver (433 MHz). b. The data arrive in serial form and are therefore passed through the HT12D decoder, which outputs them in parallel as four bits on ports (10, 11, 12, and 13). c. The data are sent to the AT89C51 controller to be processed and represented, according to the hand movement, in three-bit form, and are then sent to the three remaining ports of the ATMega328P-PU controller, which converts them into the control commands of the wheelchair. Figure 6 shows the diagram of the reception circuit.
Convert hand movements to control commands of servo motors
Here, we explain how to connect the ATMega328P-PU controller to the AT89C51 controller and how to convert hand movements into commands that control the servo motors of the wheelchair.
There are three ports available in the ATMega328P-PU controller. Since there are five hand movements, these movements can be expressed using only three bits; thus, the four-bit data should be converted to three-bit data. The following steps have been taken to convert the data from four to three bits as well as to convert hand movements into commands: 1) The AT89C51 controller has been programmed in the C programming language using Keil uVision4; it reads the data from the HT12D decoder and converts them from four to three bits. 2) The three ports (P2.2, P2.1, and P2.0) of the AT89C51 controller are connected to the ports (18, 19, and 27) of the ATMega328P-PU controller, and these ports are used to read the hand-movement data and convert them into commands. In the Arduino, these ports are called digital pin 13, digital pin 12, and analog pin 4. Figure 7 shows the flowchart of the receiving circuit that converts the data into three bits, according to Table 2.
3) An appropriate algorithm has been developed and implemented in Arduino C for the ATMega328P-PU controller to convert the hand-movement data received from the AT89C51 controller into the control commands of the wheelchair. Table 2 shows how hand movements are converted into 3-bit data and into the commands that control the wheelchair. Figure 8 shows the flowchart of the algorithm that translates hand movements into wheelchair control commands.
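A minimal Keil C sketch of the AT89C51 side could look as follows. The use of P1 for the HT12D output and of P2.0-P2.2 towards the ATMega328P-PU follows the wiring described above, but the one-hot gesture patterns and the 3-bit command codes are placeholders rather than the exact values of Tables 1 and 2.

    #include <reg51.h>  /* Keil C51 register definitions for the AT89C51 */

    /* Hypothetical 4-to-3-bit conversion: read the HT12D word on P1.0-P1.3
     * and write a 3-bit command on P2.0-P2.2 for the ATMega328P-PU.
     * The codes below are placeholders, not the values of Table 2.       */
    #define CMD_STOP     0x0
    #define CMD_FORWARD  0x1
    #define CMD_BACKWARD 0x2
    #define CMD_RIGHT    0x3
    #define CMD_LEFT     0x4

    void main(void)
    {
        unsigned char gesture, cmd;

        P1 = 0xFF;                      /* configure P1 as input            */
        while (1) {
            gesture = P1 & 0x0F;        /* lower four bits from the HT12D   */

            switch (gesture) {          /* assumed one-hot gesture coding   */
                case 0x8: cmd = CMD_FORWARD;  break;
                case 0x4: cmd = CMD_BACKWARD; break;
                case 0x2: cmd = CMD_RIGHT;    break;
                case 0x1: cmd = CMD_LEFT;     break;
                default:  cmd = CMD_STOP;     break;
            }

            P2 = (P2 & 0xF8) | cmd;     /* drive only P2.0-P2.2             */
        }
    }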
Simulation, Implementation, and Discussion
To evaluate the performance and efficiency of our proposed system, we carried out simulation and implementation using the C programming language with the Arduino IDE and the Proteus simulation software. The Arduino IDE has several built-in functions and is based on AVR embedded C and the Processing environment.
Accelerometer simulation
The accelerometer was tested in four cases in a practical experiment on the sensitivity of the sensor at a tilt angle of 45°. With a supply voltage of 5 V, the values shown in Table 3 were obtained.
From Table 3, we make the following observations:
• The X, Y, and Z values in the level (untilted) position lie in the range of 2.48-2.68 V.
• If the sensor is tilted forward, the X value increases; if it is tilted backward, the X value decreases.
• If the sensor is tilted to the left, the Y value increases; if it is tilted to the right, the Y value decreases.
• The Z value is almost constant in all of the above cases.
Simulation of connecting the accelerometer to the voltage comparator
The X and Y outputs from the sensor are connected to the LM324 voltage comparator, where the obtained values are compared with a reference in order to read the change. There are four voltage comparisons, linked as follows:
• 1st comparison: the increase in the X value above the rest level is detected by comparing X with the reference voltage.
• 2nd comparison: the decrease in the X value below the rest level is detected by comparing X with the reference voltage.
• 3rd comparison: the increase in the Y value above the rest level is detected by comparing Y with the reference voltage.
• 4th comparison: the decrease in the Y value below the rest level is detected by comparing Y with the reference voltage.
In each comparison, the sensor output is applied to one comparator input and the reference voltage to the other (inverting or non-inverting, depending on whether an increase or a decrease is being detected). A variable resistor is used to adjust the reference voltage for the comparison. Figure 9 shows the simulated connection of the accelerometer to the voltage comparator.
Encoder and decoder connection
The output data from the LM324 voltage comparator are connected to this unit in parallel. The encoder output is serial, so that it can easily be sent wirelessly; the input ports are arranged as front, back, right, and left, and the output leaves through a single serial port. On the decoder side, the data enter this unit through the wireless receiver as serial data and are output in parallel.
Simulation of connecting wireless RF transmitter and receiver module
The transmitter receives the data from the HT12E encoder unit in the form of serial data in the first circuit and sends them to a similar electronic receiver in the other circuit as illustrated in Fig. 10.
Simulation of AT89C51 controller
This controller receives the data from the decoding unit through its input ports, processes them, and sends them to the ATMega328P-PU controller, as illustrated in Fig. 11.
Simulation of connecting distance sensor to microcontroller
In the proposed approach, we connect the sensor's output (pin 3) to one of the analog input ports of the microcontroller (ATMega328P-PU), since the sensor provides an analog output. Analog input 1 is used to connect the sensor. Figure 12 shows the simulation of connecting the Sharp 2Y0A21 sensor to the microcontroller.
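On the Arduino side, a minimal sketch could combine the 3-bit command read on digital pins 13 and 12 and analog pin 4 with the Sharp 2Y0A21 reading on analog input 1, stopping the drive servos whenever an obstacle is detected at roughly 20 cm. Only the pin numbers for the command lines and the distance sensor are taken from the description above; the servo pins, the speed values, the command codes, and the raw ADC threshold standing in for 20 cm are assumptions.

    #include <Servo.h>

    // Hypothetical Arduino-side sketch: read the 3-bit command from the
    // AT89C51 and the Sharp 2Y0A21 output, then drive two continuous-
    // rotation servos. Servo pins, speeds, command codes and the obstacle
    // threshold are assumptions, not values from the paper.

    Servo leftWheel, rightWheel;

    const int CMD_BIT2 = 13;       // digital pin 13 (from AT89C51 P2.2)
    const int CMD_BIT1 = 12;       // digital pin 12 (from AT89C51 P2.1)
    const int CMD_BIT0 = A4;       // analog pin 4 used as digital input (P2.0)
    const int DIST_PIN = A1;       // Sharp 2Y0A21 output on analog input 1
    const int OBSTACLE_RAW = 330;  // assumed ADC reading for roughly 20 cm

    void drive(int left, int right) {  // 90 = stop for continuous servos
      leftWheel.write(left);
      rightWheel.write(right);
    }

    void setup() {
      pinMode(CMD_BIT2, INPUT);
      pinMode(CMD_BIT1, INPUT);
      pinMode(CMD_BIT0, INPUT);
      leftWheel.attach(9);             // assumed servo pins
      rightWheel.attach(10);
    }

    void loop() {
      int cmd = (digitalRead(CMD_BIT2) << 2) |
                (digitalRead(CMD_BIT1) << 1) |
                 digitalRead(CMD_BIT0);

      // The 2Y0A21 output voltage rises as the obstacle gets closer,
      // so a larger raw reading means a nearer obstacle.
      bool blocked = analogRead(DIST_PIN) > OBSTACLE_RAW;
      if (blocked && cmd == 1) cmd = 0;  // suppress forward motion near an obstacle

      switch (cmd) {                     // placeholder codes, not Table 2
        case 1: drive(120, 60); break;   // forward
        case 2: drive(60, 120); break;   // backward
        case 3: drive(120, 90); break;   // turn right
        case 4: drive(90, 120); break;   // turn left
        default: drive(90, 90); break;   // stop
      }
      delay(50);
    }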
Results and Discussion
The effectiveness of the SWWCHG system is validated through extensive simulations. In this section, we show the performance metrics and simulation results and analysis.
Performance metrics
The performance of the SWWCHG system is evaluated using the following metrics: 1) hand movement change of wheelchair direction, 2) response accuracy of control commands, 3) response time of control commands, 4) issuance of distress alert if needed, 5) edge detection and avoidance, and 6) obstacle detection.
Baseline approach
The performance of the SWWCHG system is compared with that of the wireless smart mind-controlled wheelchair (WSMWC) system. The WSMWC system controls a wheelchair using mind signals and eye blinks. The comparison is carried out under all performance metrics explained in Sect. 5.1. The baseline approach and its objectives are stated in Table 4.
Analysis of results
All the components of the proposed system, after integration, simulation, implementation, and experiments, show that the wheelchair model works as intended, with good performance and efficiency according to the hand gestures. Table 5 shows the reaction time obtained from several experiments and trial runs of the wheelchair. The results are calculated using Eq. (1):
SR = (ST / ToT) × 100%, (1)
where SR is the success rate, ST is the number of successful trials of wheelchair runs, and ToT is the total number of trials of a wheelchair run.
The results in Table 5 show high response accuracy and rapid response. These features give high user confidence and demonstrate the wireless control efficiency of the proposed system. The results show the capability of the proposed system to detect edges and obstacles and to avoid them at a distance of 20 cm from the wheelchair. The results also show that approximately 1% of the simulated wheelchair turns go in a wrong direction in the case of backward turns, because this gesture has no accurate movement representation. Figure 13 shows a comparison of the SWWCHG and WSMWC systems under all scenarios of the performance metrics. As shown in Fig. 13, the SWWCHG system outperforms the WSMWC system in terms of movement change, response time, and obstacle detection. However, the WSMWC system outperforms the SWWCHG system in terms of response accuracy, because the hand gestures of people with quadriplegia are not accurate. Overall, the SWWCHG system is highly reliable; thus, it is recommended and applicable for all types of disability.
Conclusions
The proposed system can be successfully applied on a large scale to elderly and partially disabled people. In our proposed system, wireless technologies and sensors, together with the relevant materials, have been integrated with microcontrollers to enhance the confidence, function, and willpower of elderly and physically challenged people, as the system helps them to be self-reliant without the need for any extra devices. The results of the proposed system show its cost-effectiveness and prove its very competitive performance, accuracy, and efficiency.
The SWWCHG system was simulated and implemented using the C programming language, the Arduino IDE, and the Proteus simulation software. The simulation results of the SWWCHG system and the comparison with the WSMWC system show that the SWWCHG system is easy for elderly and disabled people to use and is cost-effective. Moreover, it provides very competitive performance, accuracy, and efficiency. Although the SWWCHG system is efficient, its response accuracy needs to be improved. In future work, we will consider response accuracy and response time for all types of disability.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Variance and Covariance of Several Simultaneous Outputs of a Markov Chain
The partial sum of the states of a Markov chain or more generally a Markov source is asymptotically normally distributed under suitable conditions. One of these conditions is that the variance is unbounded. A simple combinatorial characterization of Markov sources which satisfy this condition is given in terms of cycles of the underlying graph of the Markov chain. Also Markov sources with higher dimensional alphabets are considered. Furthermore, the case of an unbounded covariance between two coordinates of the Markov source is combinatorically characterized. If the covariance is bounded, then the two coordinates are asymptotically independent. The results are illustrated by several examples, like the number of specific blocks in $0$-$1$-sequences and the Hamming weight of the width-$w$ non-adjacent form.
Introduction
We investigate the random vector defined as the n-th partial sum of a Markov source over a higher dimensional alphabet. Under suitable conditions, this random variable is asymptotically jointly normally distributed. Its mean and variance-covariance matrix is linear in the number of summands (cf. [6, Theorem 2.22]). On the one hand, these conditions include irreducibility and aperiodicity of the underlying graph of the Markov chain, which can be checked easily for a given Markov chain. On the other hand, we also have to check that the variance-covariance matrix is regular, which requires technical computations. In this article, we give a simple combinatorial characterization of Markov sources whose corresponding variance-covariance matrix is singular.
The covariance between two coordinates of this random vector is also of interest: If it is bounded, then these two coordinates are asymptotically independent because of the joint normal distribution. We give a combinatorial characterization of this case.
These characterizations are given in terms of subgraphs of the underlying graph of the Markov chain: for the regularity of the variance-covariance matrix, we only have to consider all cycles, whereas for the characterization of an unbounded covariance, we have to consider functional digraphs. This result is proven using an extension of the Matrix-Tree Theorem in [5,20]. As Markov sources are closely related to automata and transducers, our results can also be used for the asymptotic analysis of sequences which can be computed by transducers. This includes the Hamming weight of many syntactically defined digit expansions as performed in [11,16,15,13,14]. Furthermore, occurrences of digits or subwords can also be computed by transducers. Their variance (and covariance) is analyzed in [12,2,19,3,22,8,10].
In [18], the variance of the output of a transducer as well as the covariance between the input and the output were analyzed. In this article, we consider the more general setting of Markov chains. The proofs are similar as those in [18], but the results are valid in a broader context and can be formulated more clearly. In contrast to [18], we allow the input sequence of the transducer to be generated by a Markov source. This allows us to model an input sequence for a transducer whose letters do not occur with equal probabilities and/or have dependencies between the letters. The precise relation between the setting of this article and that of [18] is given in Section 3.
As an example, we prove that the Hamming weights of the so-called width-$w$ non-adjacent form are asymptotically jointly normally distributed for two different values of $w \ge 2$. The width-$w$ non-adjacent form is a binary digit expansion with digits in $\{0, \pm1, \pm3, \dots, \pm(2^{w-1}-1)\}$ and the syntactical rule that at most one of any $w$ adjacent digits is non-zero. This digit expansion exists and is unique for every integer (cf. [21,1]). Furthermore, it has minimal Hamming weight among all digit expansions with this base and digit set.
The outline of this article is as follows: In Section 2, we define our setting and the types of graphs we use to state the combinatorial characterization of independent output sums and singular variance-covariance matrices. These characterizations are given in Section 3 and examples are given in Section 4. In Section 5, we finally prove the results of Section 3.
Preliminaries
In this article, a finite Markov chain consists of a finite state space $\{1, \dots, M\}$, a finite set of transitions $E$ between the states, each with a positive transition probability, and a unique(i) initial state 1. We denote the transition probability of a transition $e$ by $p_e$. Then we have $\sum_{e \in E,\, e \text{ starts in } i} p_e = 1$ for all states $i$. Note that for all transitions $e \in E$, we require $p_e > 0$. Further note that there may be multiple transitions between two states, but always only a finite number of them. This may be useful for different outputs later on.
The transition probabilities induce a probability distribution on the paths of length n starting in the initial state 1. Let X n be a random path of length n according to this model.
(i) This is no restriction as we can always add an additional state and the transitions starting in this state with probabilities corresponding to the non-degenerate initial distribution. The output functions are then extended by mapping these transitions to 0.
All states of the underlying digraph of the Markov chain are assumed to be accessible from the initial state. Contracting each strongly connected component of the underlying digraph gives an acyclic digraph, the so-called condensation. We assume that this condensation has only one leaf (i.e., one vertex with out-degree 0). The strongly connected component corresponding to this leaf is called final component. We assume that the period (i.e., the greatest common divisor of the lengths of all cycles) of this final component is 1. We call such Markov chains finally connected and finally aperiodic.
Additionally, we use output functions $k\colon E \to \mathbb{R}$. The corresponding random variable $K_n$ is the sum of all values of $k$ along a random path $X_n$. We call $K_n$ the output sum of the Markov chain with respect to $k$. We use several output functions $k_1, \dots, k_m$ and the corresponding random variables $K_n^{(1)}, \dots, K_n^{(m)}$. Thus, our setting can be seen as a Markov source with a finite set of $m$-dimensional vectors as alphabet.
We are interested in the joint distribution of the random variables $K_n^{(j)}$. We will prove that the expected value of each coordinate is linear in $n$ up to a bounded error, and that the variance-covariance matrix turns out to be $\Sigma n + O(1)$ for a matrix $\Sigma$. We call $\Sigma$ the asymptotic variance-covariance matrix and its entries the asymptotic variances and covariances.
We will combinatorially characterize Markov chains with output functions such that the variance-covariance matrix is regular. Furthermore, we give a combinatorial characterization of the case that the asymptotic covariance is zero. As this is only influenced by two output functions, we restrict ourselves to $K_n^{(1)}$ and $K_n^{(2)}$ in this case. Remark 2.2. Markov chains with output functions are closely related to transducers with a probability distribution for the input: A transducer is defined to consist of a finite set of states, an initial state, a set of final states, an input alphabet, an output alphabet and a finite set of transitions, where a transition starts in one state, leads to another state and has an input and an output label from the corresponding alphabets. See [4, Chapter 1] for a more formal definition. An example of a transducer is given in Figure 1. We label the transitions with "input label | output label". The initial state is marked by an ingoing arrow starting at no other state and the final states are marked by outgoing arrows leading to no other state.
A Markov chain with one output function can be obtained by a transducer with additional probability distributions for the outgoing transitions of each state and by deleting the input labels of the transducer.
If we have two transducers where only the outputs of the transitions are different, we can choose probability distributions for the outgoing transitions of each state. Then we obtain a Markov chain with two output functions. Thus, we can use our results for two output functions (see Examples 4.2 and 4.3). Remark 2.3. We can additionally have final output functions f : {1, . . . , M } → R for each output function k and redefine the random variable K n as the sum of the values of the output function k along a random path X n plus the final output f of the final state of this path. We will see that this does not change the main terms of the asymptotic behavior. Thus, the results in Section 3 are still valid (see also Remark 5.5).
Remark 2.4. The Parry measure consists of probabilities $p_e$ such that every path of length $n$ has the same weight up to a constant factor (cf. [24,23]). If we are interested in probabilities such that every path of length $n$ starting in the initial state 1 has exactly the same weight, we have to use the Parry measure with additional exit weights: each path is additionally weighted by these exit weights according to the final state of the path (cf. [17, Lemma 4.1]).
However, the sum of the weights of all paths of length n is no longer normalized: It differs from 1 by an exponentially small error term for n → ∞. This gives an approximate equidistribution of all paths of length n. As we are interested in the asymptotic behavior for n → ∞, the expected value and the variance of the corresponding measurable function K n can still be defined as usual.
If we use these exit weights w s in our setting, the main terms of the asymptotic behavior are not changed. Thus, the theorems in Section 3 are still valid (see also Remark 5.5).
These exit weights can also be used to simulate final and non-final states of a transducer by setting the weights of non-final states to 0. However, not all exit weights of the final component are allowed to be zero.
Next, we define some subgraphs of the underlying graph of the final component and extend the probabilities and the output functions to these subgraphs. Definition 2.5. We define the following types of directed graphs as subgraphs of the final component of the Markov chain.
• A rooted tree is a weakly connected digraph with one vertex which has out-degree 0, while all other vertices have out-degree 1. The vertex with out-degree 0 is called the root of the tree.
• A functional digraph is a digraph whose vertices have out-degree 1. Each component of a functional digraph consists of a directed cycle and some trees rooted at vertices of the cycle. For a functional digraph D, let C D be the set of all cycles of D.
The probabilities $p_e$ can be multiplicatively extended to a weight function for arbitrary subgraphs of the Markov chain: for a subgraph $D$ of the underlying graph of the Markov chain, we define the weight of $D$ as the product $\prod_{e \in D} p_e$ of the probabilities of its transitions. For a path $P$ of length $n$, this is exactly the probability $P(X_n = P)$. The output function $k$ is additively extended to cycles $C$ of the underlying graph of the Markov chain by $k(C) = \sum_{e \in C} k(e)$; for instance, a cycle consisting of transitions $e_1$ and $e_2$ has weight $p_{e_1} p_{e_2}$ and output $k(e_1) + k(e_2)$. This can further be extended to functional digraphs. Definition 2.6. Let $\mathcal{D}_1$ and $\mathcal{D}_2$ be the sets of all spanning subgraphs of the final component of the Markov chain $\mathcal{M}$ which are functional digraphs and have one and two components, respectively. For functions $g$ and $h\colon E \to \mathbb{R}$, we define corresponding weighted sums over these functional digraphs. As functions $g$ and $h$, we use the output functions $k_1, \dots, k_m$ and the constant function $1(e) = 1$.
Main Results
In this section, we present the combinatorial characterization of output functions of Markov chains which are asymptotically independent and of Markov chains with output functions with a singular variancecovariance matrix. The proofs can be found in Section 5.
If the underlying directed graph of the Markov chain is $j$-regular, every transition has probability $1/j$, we only have two output functions, and the first output function $k_1\colon E \to \{0, 1, \dots, j-1\}$ is such that the restriction of $k_1$ to the outgoing transitions of each state is bijective, then these results are stated in [18] (see also Remark 2.2).
The next definition describes a sequence of random variables whose difference from its expected value is bounded: there is a constant $C$ such that $|K_n - \mathbb{E}(K_n)| \le C$ holds for all $n$.
Next we give the combinatorial characterization of output sums with bounded variance in the case of a not necessarily independent identically distributed input sequence.
In that case, $an + O(1)$ is the expected value of the output sum, and Statement (b) holds for all states $s$ of the final component.
If $\mathcal{M}$ is furthermore strongly connected, a further equivalent assertion can be given. In the case that the value of the output function is 0 or 1 for each transition, there are only two trivial output functions with asymptotic variance zero. The next theorem gives a combinatorial characterization of output functions of a Markov chain which are asymptotically independent. As this characterization is given by the covariance, we can restrict ourselves to two output functions without loss of generality.
The covariance of $K_n^{(1)}$ and $K_n^{(2)}$ is characterized in terms of functional digraphs of the final component.
Examples
In this section, we first prove the asymptotic joint normal distribution of the Hamming weights of two different digit expansions by using Theorem 2. Then we investigate the independence of length-2 blocks of 0-1-sequences by using Theorem 3. In both cases we start with two transducers to construct a Markov chain with two output functions, once as a Cartesian product, once via Remark 2.2. Example 4.1 (Width-$w$ non-adjacent forms). Let $2 \le w_1 < w_2$ be integers. We consider the asymptotic joint distribution of the Hamming weight of the width-$w_1$ non-adjacent form ($w_1$-NAF) and the Hamming weight of the $w_2$-NAF. The width-$w$ non-adjacent form is a binary digit expansion with digit set $\{0, \pm1, \pm3, \dots, \pm(2^{w-1}-1)\}$ and the syntactical rule that at most one of any $w$ adjacent digits is non-zero.
It will turn out that this distribution is normal if and only if the variance-covariance matrix is regular. Using Theorem 2, we have to find closed walks in the corresponding Markov chain such that all coefficients in (1) have to be zero.
The transducer $T(w)$ in Figure 2 computes the Hamming weight of the $w$-NAF of the integer $n$ when the input is the binary expansion of $n$ (cf. [15]). It has $w+1$ states. Next, we construct the Cartesian product of the transducers for $w_1$ and $w_2$ and choose any non-degenerate probability distribution, i.e., with all probabilities non-zero, for the outgoing transitions of each state. Thus, we obtain a Markov chain $\mathcal{M}$ with $(w_1+1)(w_2+1)$ states and two different output functions $h_1$ and $h_2$ corresponding to the outputs of the transducers for $w_1$ and $w_2$, respectively. We can now use Theorem 2 to prove that these two Hamming weights are asymptotically jointly normally distributed.
The Cartesian product of two closed walks in $T(w_1)$ and $T(w_2)$ with the same input sequence is a closed walk in $\mathcal{M}$. We construct three different closed walks and prove that all three coefficients in (1) have to be zero. For brevity, we denote a closed walk in the Cartesian product $\mathcal{M}$ and its projections to $T(w_1)$ and $T(w_2)$ by the same letter.
First, we choose the closed walk $C_1$ starting in state 1 with input sequence $0$. We obtain $h_1(C_1) = 0$ in $T(w_1)$, $h_2(C_1) = 0$ in $T(w_2)$ and $1(C_1) = 1$. Second, we choose the closed walk $C_2$ starting in 1 with input sequence $10^{w_2-1}$. Because $w_1 < w_2$ and because of the loop at state 1, $C_2$ is a closed walk in $T(w_1)$ and in $T(w_2)$. We obtain $h_1(C_2) = 1$ in $T(w_1)$, $h_2(C_2) = 1$ in $T(w_2)$ and $1(C_2) = w_2$. The third choice depends on whether $w_1 = w_2 - 1$ or not: • $w_1 \neq w_2 - 1$: We choose the closed walk $C_3$ starting in 1 with input sequence $10^{w_1-1}10^{w_1-1}0^{\alpha}$, where $\alpha = \max(w_2 - 2w_1, 0)$. On the one hand, this is a closed walk in $T(w_1)$ consisting of two copies of the cycle $1 \to w_1 \to 1$ and $\alpha$ copies of the loop at state 1. On the other hand, this is a closed walk in $T(w_2)$ consisting of the cycle $1 \to w_2 \to 1$ and the correct number of loops at state 1. • $w_1 = w_2 - 1$: We choose the closed walk $C_3$ starting in 1 with input sequence $10^{w_1-1}10^{w_1-1}10^{w_1-1}$.
On the one hand, this is a closed walk in $T(w_1)$ consisting of three copies of the cycle $1 \to w_1 \to 1$. On the other hand, this is a closed walk in $T(w_2)$ consisting of the closed walk $1 \to w_2 \to w_2+1 \to w_2 \to 1$ and the correct number of loops at state 1. We obtain $h_1(C_3) = 3$ in $T(w_1)$, $h_2(C_3) = 2$ in $T(w_2)$ and $1(C_3) = 3w_1$. This yields a system of linear equations for the coefficients $a_0$, $a_1$ and $a_2$ whose coefficient matrix is regular, so the system only has the trivial solution. Thus, the Hamming weights of the $w_1$-NAF and the $w_2$-NAF are asymptotically jointly normally distributed, independently of the choice of the distributions for the Markov chain.
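As an illustrative check (our own computation from the walk values stated above, assuming that condition (1) has the form $a_0 1(C) + a_1 h_1(C) + a_2 h_2(C) = 0$, as suggested by the proof in Section 5), the case $w_1 = w_2 - 1$ gives the linear system

    \[
    \begin{aligned}
    a_0 \cdot 1    + a_1 \cdot 0 + a_2 \cdot 0 &= 0,\\
    a_0 \cdot w_2  + a_1 \cdot 1 + a_2 \cdot 1 &= 0,\\
    a_0 \cdot 3w_1 + a_1 \cdot 3 + a_2 \cdot 2 &= 0.
    \end{aligned}
    \]

The first equation forces $a_0 = 0$, after which $a_1 + a_2 = 0$ and $3a_1 + 2a_2 = 0$ force $a_1 = a_2 = 0$, so only the trivial solution exists. The other case can be checked in the same way once the values of $h_1(C_3)$, $h_2(C_3)$ and $1(C_3)$ for the walk with input $10^{w_1-1}10^{w_1-1}0^{\alpha}$ are computed.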
The next two examples investigate the asymptotic independence of length-two blocks of 0-1-sequences. Example 4.2 (10- and 11-blocks). The two transducers in Figure 3 count the numbers of 10- and 11-blocks in 0-1-sequences. After deleting the outputs, both transducers are the same. Thus, any non-degenerate probability distribution on the outgoing edges of the states gives a Markov chain with two output functions $k_{10}$ (for the 10-blocks) and $k_{11}$ (for the 11-blocks).
Because of the two loops and the cycle $0 \to 1 \to 0$, Theorem 2 implies that the numbers of 10- and 11-blocks are asymptotically normally distributed. The next question is: for which choices of probability distributions are the numbers of 10- and 11-blocks asymptotically independent? All functional digraphs with one or two components are given in Figure 4. Using Theorem 3, we obtain a system of equations for the transition probabilities such that the numbers of 11-blocks and 10-blocks are asymptotically independent: first by definition, then by (2), and finally by the independence condition. This system has non-trivial real solutions, i.e., solutions where all probabilities are non-zero, for every $0 < p_{1\to 1} < 1$; then we have $2 - \sqrt{2} < p_{0\to 0} < 1$. Thus, for these transition probabilities, the number of 10-blocks and the number of 11-blocks are asymptotically independent.
One example of a non-trivial solution is $p_{1\to 1} = p_{1\to 0} = 0.5$, $p_{0\to 0} \approx 0.7192$ and $p_{0\to 1} \approx 0.2808$. Note that for the symmetric distribution $p_{0\to 0} = p_{0\to 1} = p_{1\to 1} = p_{1\to 0} = 0.5$, we obtain asymptotic dependence of the numbers of 10- and 11-blocks. Example 4.3 (00- and 11-blocks). The two transducers in Figure 5 count the numbers of 00- and 11-blocks in 0-1-sequences. They have the same underlying graph and the same input labels. Thus, choosing any non-degenerate probability distribution on the outgoing edges of the states yields a Markov chain with two output functions.
Because of the two loops and the cycle $0 \to 1 \to 0$, Theorem 2 implies that the numbers of 00- and 11-blocks are asymptotically normally distributed.
These equations have no solution with $0 < p_e < 1$ for all transitions $e$. Thus, the numbers of 00- and 11-blocks are asymptotically dependent for all choices of the input distributions, as expected.
Proofs
In this section, we prove the results from Section 3. Most of the proofs follow the same ideas as in [18]. The main differences are that one has to replace "complete transducer" by "Markov chain" and the input sum by the output sum $K_n$. We first prove Theorem 3 with the help of two lemmas. For one of these lemmas, we use a version of the Matrix-Tree Theorem for weighted directed forests proved in [5,20]. At the end of this section, we prove Theorems 1 and 2. Let $A = \{i_1, \dots, i_n\}$ and $B = \{j_1, \dots, j_n\}$ with $i_1 < \dots < i_n$ and $j_1 < \dots < j_n$. For $F \in \mathcal{F}_{A,B}$, we define a function $g\colon B \to A$ by $g(j) = i$ if $j$ is in the tree of $F$ which is rooted in vertex $i$. We further define the function $h\colon A \to B$ by $h(i_k) = j_k$ for $k = 1, \dots, n$. The composition $g \circ h\colon A \to A$ is a permutation of $A$. We define $\operatorname{sign} F = \operatorname{sign}(g \circ h)$.
If $|A| \neq |B|$, then $\mathcal{F}_{A,B} = \emptyset$. If $|A| = |B| = 1$, then $\operatorname{sign} F = 1$ and $\mathcal{F}_{A,B}$ consists of all spanning trees rooted in $a \in A$.
Theorem (All-Minors-Matrix-Tree Theorem [5,20]). For a directed, weighted graph with loops and multiple edges, let $L = (l_{ij})_{1 \le i,j \le N}$ be the Laplacian matrix, where $*$ denotes an arbitrary matrix block. If the Markov chain is strongly connected, the blocks $*$ are not present (they have 0 rows).
We first use the All-Minors-Matrix-Tree Theorem to connect the derivatives of the characteristic polynomial of the transition matrix with a sum of weighted digraphs in the next lemma.
This lemma can be proven in the same way as [18, Lemma 5.3] using the All-Minors-Matrix-Tree Theorem [5,20].
The following lemma will be used for $m \ge 2$ output functions later on; it asserts in particular that $v(s_1, \dots, s_m)$ and the other functions appearing in it are analytic functions in a small neighborhood of $(0, \dots, 0)$.
Proof:
Starting from the moment generating function of $(K_n^{(1)}, \dots, K_n^{(m)})$ and (3), we obtain an expression in terms of $F_1(x_1, \dots, x_m, z)$, $F_2(x_1, \dots, x_m, z)$ and $f(x_1, \dots, x_m, z)$ for "polynomials" $F_1$ and $F_2$, i.e., finite linear combinations of $x_1^{\alpha_1} \cdots x_m^{\alpha_m} z^{\beta}$ for $\alpha_i \in \mathbb{R}$ and $\beta$ a nonnegative integer. The function $F_2$ corresponds to the determinant of the non-final part of the Markov chain.
We obtain the coefficient of z n by singularity analysis (cf. [7]): Since the final component of M is again a Markov chain, the dominant singularity of 1/f (1, . . . , 1, z) is 1 by the theorem of Perron-Frobenius (cf. [9]). By the aperiodicity of the final component, this dominant singularity is unique and it is ρ(1, . . . , 1) = 1.
Next, we consider the non-final components of the Markov chain using the same arguments as in [18]. The corresponding non-final component $\mathcal{M}_0$ is not a Markov chain, as its transition matrix is not stochastic. Let $\mathcal{M}_0^+$ be the Markov chain that is obtained from $\mathcal{M}_0$ by adding loops with the missing probabilities where necessary. The dominant eigenvalue of the transition matrix of $\mathcal{M}_0^+$ is 1. As the transition matrices of $\mathcal{M}_0$ and $\mathcal{M}_0^+$ satisfy element-wise inequalities but are not equal (at $(x_1, \dots, x_m) = (1, \dots, 1)$), the theorem of Perron-Frobenius (cf. [9, Theorem 8.8.1]) implies that the dominant eigenvalues of $\mathcal{M}_0$ have absolute value less than 1. Thus, the dominant singularities of $F_2(1, \dots, 1, z)^{-1}$ are at $|z| > 1$.
Then $\Sigma$ is singular if and only if the diagonal matrix $D$ is singular. This is equivalent to condition (4) holding for some $j \in \{1, \dots, m\}$. Now consider the output function $t_{j1} k_1 + \cdots + t_{jm} k_m$. By Theorem 1, (4) is equivalent to $t_{j1} k_1(C) + \cdots + t_{jm} k_m(C) = 0$ holding for all cycles of the final component (since the expected value of this output function is $O(1)$).
If we shift back the output function such that the expected value is no longer bounded, we obtain an additional summand $a_0 1(C)$.
The asymptotic joint normal distribution follows from Lemma 5.4 and the multidimensional Quasi-Power Theorem [6, Theorem 2.22].
Surveying the Oral Drug Delivery Avenues of Novel Chitosan Derivatives
Chitosan has come a long way in biomedical applications: drug delivery is one of its core areas of imminent application. Chitosan derivatives are the new generation variants of chitosan. These modified chitosans have overcome limitations and progressed in the area of drug delivery. This review briefly surveys the current chitosan derivatives available for biomedical applications. The biomedical applications of chitosan derivatives are revisited and their key inputs for oral drug delivery have been discussed. The limited use of the vast chitosan resources for oral drug delivery applications, speculated to be probably due to the interdisciplinary nature of this research, is pointed out in the discussion. Chitosan-derivative synthesis and practical implementation for oral drug delivery require distinct expertise from chemists and pharmacists. The lack of enthusiasm could be related to the inadequacy in the smooth transfer of the synthesized derivatives to the actual implementers. With thiolated chitosan derivatives predominating the oral delivery of drugs, the need for representation from the vast array of ready-to-use chitosan derivatives is emphasized. There is plenty to explore in this direction.
Introduction
Chitosan, a β-(1,4)-linked polymer of glucosamine and N-acetyl-glucosamine [1][2][3], is obtained following the deacetylation of chitin. Chitin is found extensively in the exoskeletons of crustaceans and insects and in the cell walls of bacteria and fungi [4]. The quality of chitosan is influenced by the source of chitin, the separation method and the degree of deacetylation [5]. The major advantages of chitosan are that it is nontoxic, mucoadhesive, hemocompatible and biodegradable, and that it exhibits antioxidant, antitumor and antimicrobial properties. These properties render chitosan a highly attractive biomaterial option. An iconic characteristic of chitosan is that it does not provoke intense inflammation or induce an unwanted immune response. Researchers have confirmed that chitosans with different molecular weights and degrees of deacetylation exhibit low toxicity [6][7][8][9]. The cationic nature of chitosan underlies its bactericidal and bacteriostatic properties [10,11]. However, chitosan is not soluble in neutral or alkaline aqueous solutions, a major disadvantage that limits its widespread application in living systems [12].
Chitosan's surface adherence comes in handy when delivering useful molecules across mucosal pathways and adsorbs molecules that do not have any affinity for mucus [13].
Chitosan, through its permeation-related attributes, is able to open tight epithelial junctions [14]. Chitosan also plays a role in coagulation: it accelerates wound healing by enabling interactions between its amino groups and platelets [15]. These hemostatic properties are exploited in wound-healing applications. As a wound-dressing material, chitosan offers chemoattraction, macrophage and neutrophil activation, analgesic properties, acceleration of granulation tissue formation and re-epithelization, limited scar formation and contraction, hemostasis and antibacterial properties [16]. The antitumor properties of chitosan and its derivatives have been well demonstrated in both in vitro and in vivo models [17]. The beneficial effects of antioxidants are well known [18], and chitosan and its derivatives are able to scavenge free radicals in vitro [19,20]. The biodegradability of chitosan in biological organisms is yet another valuable feature. Within the body, chitosan is depolymerized by endogenous enzymes. The degradation products, N-acetyl-glucosamine and glucosamine, are nontoxic to the human body. These degraded intermediates do not accumulate in the body and have no immunogenicity.
This review focuses on surveying the various novel chitosan derivatives that are available for use as drug delivery options. The milestones achieved with chitosan derivatives in the area of oral drug delivery are comprehensively reviewed, the limited implementation of the various chitosan derivatives for oral drug delivery is highlighted, and plausible reasons for this gap are discussed. The accomplishments that could be achieved through utilization of the available resources are addressed under future perspectives.
Comprehensive List of Novel Chitosan Derivatives
This section deals with a brief overview of the various chitosan derivatives that have been synthesized and are available for biomedical applications. The synthesis and their characterization and their applications have been elaborately reviewed by various authors [21][22][23]; here, we are restricted to a snapshot of their names. Figure 1 gives an overview of the various modification processes involved in the making of various chitosan derivatives.
N-(aminoalkyl) chitosan is a broad category of chitosan derivatives, which houses many other forms. The encapsulation of calcium alginate beads with poly(L-lysine) (PLL) is the most accomplished encapsulation system for sustained delivery of bioactive agents. However, due to its high cost, large-scale usage of this system for oral vaccination of animals is not possible. This is why a more economic and reliable chitosan-alginate microencapsulation system was sought. Regarding succinyl, quaternized and octanoyl chitosan, porous chitosan microspheres for the delivery of antigens have been reported by Mi et al. [24]; the porous chitosan microspheres were chemically modified to incorporate carboxyl, hydrophobic acyl, and quaternary ammonium groups.
Mitomycin C-conjugated N-succinyl chitosan is another class of chitosan derivatives. N-succinyl-chitosan, owing to its carboxyl groups, has low toxicity and excellent biocompatibility and is retained in the body as a drug carrier for prolonged periods. This is the reason why highly succinylated succinyl-chitosan [25,26] can be dissolved in alkaline aqueous media, whereas chitosan cannot [27]. Succinyl-chitosan reacts easily owing to its -NH2 and -COOH groups. The N-alkyl and acylated chitosan derivatives, which greatly benefit from the introduction of an alkyl or acyl chain, contribute greatly to chitosan's molecular design. This modification of chitosan with hydrophobic branches improves its solubility properties [28,29]. The introduction of an alkyl chain into the water-soluble modified chitosan N-methylene phosphonic chitosan enabled the co-existence of hydrophobic and hydrophilic branches [30]. The alkyl groups in N-lauryl-N-methylene phosphonic chitosan weaken its hydrogen bonds and provide good solubility in solvents. Possessing amphiphilic properties, which are typical for surfactants, this derivative has prospective demand in the pharmaceutical and cosmetic fields.
Chitosan hydrochloride derivatives have been demonstrated for the effective in vitro release of ofloxacin from mucoadhesive erodible ocular inserts and for ocular pharmacokinetics [31]. Thiolated chitosans are obtained by the modification of chitosan with 2-iminothiolane [32], in order to improve the properties of chitosan as an excipient in drug delivery systems. Chitosan-2-iminothiolane is obtained by grafting 2-iminothiolane onto the chitosan backbone. It exhibits excellent in situ gelling properties and improved mucoadhesive and drug-releasing properties due to the thiol groups on chitosan. Phosphorylated chitosan, which is prepared by reacting chitosan with orthophosphoric acid and urea in DMF [33] or with phosphorus pentoxide in methanesulphonic acid, is a water-soluble derivative of chitosan with huge potential for drug delivery.
MCC and SNOCC chitosan derivatives are a biomedically significant class. Mono-N-carboxymethyl chitosan (MCC) is a polyampholytic chitosan derivative, soluble at both neutral and alkaline pH, synthesized by treating chitosan with glyoxylic acid [34]. These derivatives are highly soluble and applicable for the administration of polyanionic drugs. It has also been demonstrated by the same group that MCC can improve low molecular weight heparin (LMWH) transport through Caco-2 cells.
Anionic chitosan derivatives were also attempted. N-sulfonato-N,O-carboxymethylchitosan (SNOCC) was produced [35], which retains around 50% of its nitrogen centers on the glucose subunits as free amino groups [36], which contribute to its unique biomedical characteristics.
PEGylated Chitosans are a prominent group of derivatives. Chitosan-PEG for oral peptide delivery was attempted by Prego et al. [37]. PEGylation of chitosan is apt for oral peptide/protein delivery, because generally PEGylation improves biocompatibility [38] and improves stability in GI fluid [37]. PEGylated chitosan showed enhanced solubility of hydrophobics.
Oral Drug Delivery by Chitosan Derivatives
Although drug delivery is a broad field, backed up by innumerable reviews on chitosan and drug delivery and a good number of reviews on chitosan derivatives, this review specifically delves into oral drug delivery applications. The sections below consolidate what has been achieved in the area of oral drug delivery based on chitosan derivatives and micro-/nanoparticulate chitosan.
Chitosan/Chitosan Derivatives
When drugs are administered orally, they must be able to survive various ranges of pH and gastrointestinal tract (GIT) secretions. The very process of oral drug absorption rests on transport (via passive diffusion, carrier-mediated transport, or pinocytosis) across the GIT membrane. This is impacted by various GIT physiological factors. The oral mucosa has a thin epithelium and rich vascularity, which makes it ideally suited for buccal and sublingual administration [39]. The release of drugs from chitosan and its derivatives follows the conventional behavior that holds for chitosan. Drug release is influenced by the hydrophilicity of chitosan and the pH of the swelling solution. The chitosan-drug release mechanism involves swelling, diffusion of drugs through the polymeric matrix and polymer erosion [40] (Figure 2). Figure 3 lists the limitations that chitosan and its derivatives have broken when it comes to oral drug delivery. Drug delivery via the oral route is the easiest and the most convenient for patients. Chitosan, because of its mucoadhesive nature, is able to protect labile drugs from GIT enzymatic degradation. Additionally, it is able to enhance absorption of the administered therapeutic agent without affecting the biological system. This makes chitosan a valuable candidate as an oral delivery agent. Not only chitosan, but also chitosan micro-/nanoparticles have been demonstrated for oral drug delivery. Intestinal disinfection, suppression of Helicobacter pylori and dealing with ulcerative colitis have been accomplished following treatment with antibiotic-loaded chitosan particles. Amoxicillin and clarithromycin loaded into chitosan particles inhibited H. pylori [41,42]. The mucoadhesive properties of chitosan enabled prolonged delivery and oral bioavailability of acyclovir, an antiviral agent, because acyclovir chitosan microspheres could enhance drug retention in the upper GIT [43]. Protection against GIT degradation, improvement of oral bioavailability of insulin and enhancement of bioadhesion have been reported as a result of its encapsulation into chitosan microspheres [44]. Chitosan-based delivery systems have been applied for the protection of insulin from degradation in the upper GIT. Furthermore, they have been used to release insulin at the colon (through degradation of the chitosan glycosidic linkage by the colon microflora) [45]. Chitosan microspheres coated with cellulose acetate butyrate and loaded with 5-aminosalicylic acid (5-ASA) to treat ulcerative colitis have been reported; here, the bioadhesive nature of chitosan microspheres comes in handy [46].
Another study reported localization of 5-ASA in the colon and low drug systemic bioavailability following oral administration of 5-ASA-loaded chitosan-Ca-alginate microparticles to Wistar male rats [47]. The fact that chitosan is highly soluble in the acidic medium, leading to drug burst in the stomach, has been mitigated using pH-sensitive polymer coatings [48][49][50].
Chitosan derivatives have also been reported for the oral delivery of therapeutic peptides and proteins. Unmodified native chitosan itself has proven useful for oral peptide and protein delivery (e.g., through its capability to open tight junctions and its mucoadhesive properties); this is even more the case for chitosan derivatives. Recently, the potential of certain modified chitosans, including TMC [51], thiolated chitosan [52,53] and chitosan-enzyme inhibitor conjugates [54][55][56], for noninvasive gene delivery has been widely reported. In addition, thiolated chitosan is able to inhibit efflux pumps, in particular P-glycoprotein (P-gp). In this way, thiolated chitosan comes in handy for the oral delivery of P-gp substrates [57][58][59]. The potential of chitosan, TMC and MCC for oral vaccine delivery has been previously reviewed [60]. We touch on the highlights of these reviews [61,62] here.
The effect of two different trimethyl chitosans (TMC) on the oral absorption of buserelin, a peptide drug, after intraduodenal administration in rats is reported [63] Both formulations significantly enhanced buserelin plasma levels. Enhanced absorption in the presence of TMC60 (60% trimethylation) is because of the inherent ability of TMC60 to Chitosan-based delivery systems have been applied for the protection of insulin from degradation in the upper GIT. Furthermore, it has been used to carry out the release of insulin at the colon (through degradation of the chitosan glycosidic linkage by colon microflora) [45]. Chitosan microspheres coated with cellulose acetate butyrate, loaded with 5-aminosalicylic acid (5-ASA) to treat ulcerative colitis is reported. Here, the bioadhesive nature of chitosan microspheres comes handy [46]. Another study reported localization of 5-ASA in the colon and low drug systemic bioavailability following oral administration of 5-ASA-loaded chitosan-Ca-alginate microparticles to Wistar male rats [47]. The fact that chitosan is highly soluble in the acidic medium, leading to drug burst in the stomach, has been mitigated using pH-sensitive polymer coatings [48][49][50].
The effect of two different trimethyl chitosans (TMCs) on the oral absorption of buserelin, a peptide drug, after intraduodenal administration in rats has been reported [63]. Both formulations significantly enhanced buserelin plasma levels. The enhanced absorption in the presence of TMC60 (60% trimethylation) is due to the inherent ability of TMC60 to open tight junctions. The impact of TMC solutions on octreotide in vitro permeation and in vivo absorption in rats was also investigated [63]. The intrajejunally administered TMC solution led to a fivefold increase in the absorption of octreotide compared to octreotide alone. The effect of various liquid formulations on the oral bioavailability of octreotide was studied in pigs [64]. Studies with MCC and SNOCC on the oral delivery of LMWH [34,35] confirmed that these chitosan derivatives at a concentration of 3% improved the oral bioavailability of LMWH.
In vivo studies using thiolated chitosan tablets were carried out with peptide drugs as well as with efflux pump substrates. Enteric-coated tablets containing a chitosan-TBA conjugate and salmon calcitonin were tested for oral administration to rats. Besides chitosan-TBA, the tablets contained two different chitosan-enzyme inhibitor conjugates (a chitosan-BBI conjugate and chitosan-elastatinal) [65]. Oral administration of this chitosan conjugate decreased plasma calcium levels for several hours [66]. In another study, a stomach-targeted delivery system for salmon calcitonin was investigated using tablets containing chitosan-TBA as well as chitosan-pepstatin [67]. The efficacy of chitosan-TBA/GSH for oral peptide delivery was studied using the peptide drug antide. Antide was not absorbed after oral administration on its own; however, absorption of the drug was reported following oral administration of chitosan-TBA/GSH tablets [26]. Besides peptides and proteins, the oral bioavailability of efflux pump substrates was improved using thiolated chitosan tablets. Improved oral bioavailability of the P-gp substrate Rhodamine 123 (Rho-123) has been reported [59]. Guggi et al. used optimized tablets comprising chitosan-TBA of lower molecular mass (75-150 kDa instead of 400 kDa) and demonstrated a 5.5-fold increase in the Rho-123 AUC in comparison to a Rho-123 buffer solution. Guggi et al. also investigated the effect of various calcitonin-containing tablets on the blood calcium level of rats after oral administration. Compared to tablets containing calcitonin and chitosan only, a marginal reduction of the calcium level was observed after administration of chitosan-pepstatin conjugate tablets [67]. Oral insulin delivery using insulin and a chitosan-aprotinin conjugate showed a reduced blood glucose level 8 h after oral administration [68].
Microparticulate Chitosan Derivatives Oral Drug Delivery Systems
Authors have reported the preparation of liposome microspheres coated with TMC and chitosan-EDTA. In vivo studies on the oral absorption of insulin confirmed that chitosan-EDTA-coated liposomes decreased blood glucose [69]. Microspheres based on chitosan-succinate proved their potential for oral delivery of insulin [25]. The delivery system was tested in vivo in diabetic rats; with chitosan-succinate microspheres, the relative pharmacological efficacy showed a fourfold improvement [25]. Intragastric administration of calcitonin-containing liposomes coated with dodecylated chitosan was demonstrated in rats, and similar results were obtained for chitosan-phthalate microspheres. PEGylated chitosan was tested for the oral delivery of salmon calcitonin. Alginate-chitosan microspheres with a narrow size distribution were prepared by a membrane emulsification technique in combination with ion (Ca2+) and polymer (chitosan) solidification. The blood glucose level of diabetic rats was effectively reduced, and the effect lasted for as long as 60 h after oral administration of the insulin-loaded alginate-chitosan microspheres. Therefore, the alginate-chitosan microspheres were found to be promising vectors showing good efficiency in the oral administration of protein or peptide drugs [70]. Chitosan microparticles were prepared using the precipitation/coacervation method to obtain biodegradable carriers. The entrapped ovalbumin was released after intracellular digestion in the Peyer's patches, and it was shown that the labeled chitosan microparticles could be taken up by the epithelium of the murine Peyer's patches. Since uptake by Peyer's patches is an essential step in oral vaccination, these results confirmed that chitosan microparticles are useful as a vaccine delivery system [71]. Chitosan and chondroitin sulphate microspheres have been prepared and reported for the controlled release of metoclopramide hydrochloride after oral administration [72]. Microparticles prepared by ionic crosslinking between tripolyphosphate (TPP) and chitosan (Cs) were applied to improve the oral bioavailability of curcumin. The developed microparticles are reported to successfully enhance the dissolution of the poorly water-soluble drug curcumin and, eventually, to improve its oral bioavailability effectively [73].
Nanoparticulate Chitosan Derivatives Oral Drug Delivery Systems
TMC-based insulin-loaded nanoparticles were investigated, and it was reported that insulin-TMC polyelectrolyte complexes exhibited higher colloidal stability in simulated intestinal fluid and protected insulin from tryptic degradation [74]. TMC nanoparticles have also been demonstrated for oral vaccine delivery: intragastric (IG) administration of TMC nanoparticles containing the model vaccine urease resulted in higher IgG and IgA levels [75]. Another study reported the efficiency of TMC as a vector for in vitro and in vivo gene delivery [76]. Three different TMC-based nanoparticles encapsulating pDNA encoding green fluorescent protein (GFP) were shown to deliver the gene successfully. Nanoparticles based on chitosan-TGA and pDNA for oral delivery have also been reported [53], as have acrylic nanoparticles combined with chitosan-TBA. In vivo studies with thiolated chitosan nanoparticles for oral delivery are still lacking; however, oral insulin delivery using thiolated poly(acrylic acid) nanoparticles [77] and intranasal gene delivery using chitosan-TGA nanoparticles have been demonstrated [52]. Fucoidan (FD) has hypoglycemic effects, and TMC/FD nanoparticles were loaded with insulin. TMC/FD NPs are pH sensitive and defend insulin from degradation in the GIT; moreover, they enhance the cellular transport of insulin across the intestinal barrier [78]. The delivery of insulin via glycerol monocaprylate-modified chitosan nanoparticles has also been demonstrated [79]. A nanoemulsion was coated with two different PEGylated chitosans; in vivo studies in rats showed that the oral uptake of salmon calcitonin was higher when administered in carriers coated with PEGylated chitosan than with the nanoemulsion alone [37]. Table 1 gives the consolidated list of chitosan derivatives that have been employed for oral drug delivery applications. Table 1. Chitosan derivatives that have been used for oral drug delivery applications.
Future Endeavors
This review briefly surveyed the current scenario of oral drug delivery using chitosan derivatives. Drug delivery is a field that chitosan has impacted enormously. We ran a PubMed search using the keywords "chitosan and drug delivery", "chitosan derivatives and oral drug delivery", and "chitosan derivatives and drug delivery". Backed by roughly 10,000 publications between 1981 and 2022 according to this search, chitosan has indeed contributed generously to drug delivery. Novel chitosan derivatives, the second-generation innovations emerging from chitosan, account for 2635 publications on drug delivery applications.
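The counts quoted above can be reproduced approximately with a programmatic PubMed query. The sketch below is illustrative rather than the method actually used for this review; it relies on Biopython's Entrez interface, the e-mail address is a placeholder, and the exact numbers returned will vary with the query syntax and the date the search is run.

# Illustrative sketch: counting PubMed records for the search terms quoted above.
# Requires Biopython (pip install biopython); NCBI asks for a real contact e-mail.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder - replace with your own address

terms = [
    "chitosan AND drug delivery",
    "chitosan derivatives AND drug delivery",
    "chitosan derivatives AND oral drug delivery",
]

for term in terms:
    # esearch with rettype="count" returns only the number of matching records
    handle = Entrez.esearch(db="pubmed", term=term, datetype="pdat",
                            mindate="1981", maxdate="2022", rettype="count")
    record = Entrez.read(handle)
    handle.close()
    print(f"{term}: {record['Count']} records")

Counts obtained this way are only a proxy for the publication trends discussed here, since they depend on how the keywords are combined.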
Chitosan derivatives are well reported for their use in delivery of poorly soluble drugs, for colon-targeted drug delivery, for mucosal drug delivery, ocular drug delivery and topical delivery [81][82][83][84].
Chopra et al. [85] have extensively reviewed the advances and potential applications of chitosan derivatives as mucoadhesive biomaterials in modern drug delivery. The drawbacks of chitosan in drug delivery have been overcome through derivatives such as carboxylated, thiolated, and acylated chitosan and various conjugates, and Tan et al. have reviewed the applications of quaternized chitosan as an antimicrobial agent, including its antimicrobial activity, mechanism of action, and biomedical applications in orthopedics [86]. These derivatives have become an appropriate platform for sustained release at a controlled rate, prolonged residence time, improved patient compliance through reduced dosing frequency, and enhanced bioavailability, leading to significant improvements in therapeutic efficacy.
Currently, chitosan derivative nanoparticles are mainly used for sustained release, preparation of targeted drugs, and as vectors for gene therapy. As delivery carriers, chitosan and its derivatives are usually formulated as microspheres, nanoparticles, micelles, and gels [87,88]. Besides these options, chitosan derivative nanoparticles are also used for the delivery of polypeptides: they interact with peptides through strong hydrogen bonding and electrostatic interactions, yielding peptide-loaded nanoparticles. Fatty-acid-modified quaternary ammonium chitosan nanoparticles loaded with insulin have been shown to be beneficial [89]. Chitosan derivative nanoparticles have also been applied to gene delivery. Gene therapy is a promising strategy for challenging diseases, and a key step is the successful delivery of genes [90,91]. Chitosan derivative nanoparticles, as non-viral vectors, have excellent solubility, biodegradability, biocompatibility, and non-toxicity, and a higher transfection rate than unmodified chitosan nanoparticles [92]. Methoxy polyethylene glycol-modified trimethyl chitosan (mPEG-TMC) has been covalently linked to doxorubicin (DOX) and cis-itaconic anhydride (CA) for better anti-tumor activity [93,94]. O-carboxymethyl chitosan inhibited tumor cell migration in vitro [95]. A poly-β-amino ester nanoparticle loading a gene showed a higher cell transfection rate after the addition of thiolated O-carboxymethyl chitosan [96]. These are a few notable examples of the drug delivery potential of chitosan derivatives, which has been dealt with in detail in earlier reviews.
Chitosan derivatives were developed to overcome many of the limitations that chitosan was facing, and on that basis one would expect correspondingly higher research interest. The actual trend falls well below this expectation. For oral drug delivery specifically, publications on chitosan derivatives remain within the 500-article mark, roughly one fifth of the publication count for chitosan derivatives and drug delivery in general. Figure 4 summarizes this trend. However, as this review points out, chitosan derivatives have a high potential contribution to biomedical applications and drug delivery that, we stress, has not been fully tapped for oral drug delivery applications. This review hopes to provoke some thought and awareness in this area of research.
Non-invasive oral drug delivery is the crown of drug delivery approaches, and chitosan derivatives are the latest-generation upgrades; a fusion of the two should break numerous boundaries and limitations. The reason for the low enthusiasm could be the truly interdisciplinary nature of this area, in which synthetic chemists and pharmacologists need to collaborate to access the full potential of each other's expertise. There is no dearth of chitosan derivatives; as pointed out in this review, diverse chitosan derivatives are on the market. Yet only thiolated chitosans have been applied predominantly, along with a few other scattered versions. There is a whole range of options to consider, and the avenues they would open up are yet to be looked into. This review hopes to enthuse researchers in this direction.
Combining the nano-aspects of chitosan with the synthesis of chitosan derivatives represents definitive progress in this area. Nanoforms have always pushed the limitations of various applications, and there is surely much more to derive from nanostructurization of chitosan derivatives. Oral drug delivery has benefitted greatly from the use of nanochitosan forms; combining chitosan derivatives with nano-approaches could prove highly beneficial.
Conclusions
The objective of this review was to showcase the wealth of available chitosan derivatives and to evaluate their achievements in the area of oral drug delivery. Numerous reviews exist on chitosan and drug delivery, and chitosan derivatives and their drug delivery applications are also well reported. We reviewed the comparatively less-reported application of chitosan derivatives to oral drug delivery. During the review process, it became clear that there is no doubt as to the advantages of employing chitosan derivatives for oral drug delivery purposes. However, as pointed out in the review, there is a huge gap between the available knowledge of synthesized chitosan derivatives and their oral drug delivery applications. Many derivatives have been synthesized, yet only a few have been used for oral drug delivery applications. The possible reasons for this gap have been discussed, and the need to bridge it has been emphasized. There is definitely much to harness and more to achieve through proper inclusion of chitosan derivatives that have so far not been attempted for oral drug delivery applications. | 6,357.4 | 2022-05-24T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Superoxide Dismutase: A Key Enzyme for the Survival of Intracellular Pathogens in Host
Superoxide dismutase (SOD) is a crucial enzyme required to maintain the redox potential of cells. It plays a vital role in protecting normal cells from reactive oxygen species (ROS) produced during infections by many intracellular pathogens. SOD removes excess superoxide radicals (O2•−) by converting them to hydrogen peroxide (H2O2) and molecular oxygen (O2). Several superoxide dismutase enzymes have been identified based on the metal ion used as a cofactor. Human SOD differs from that of intracellular pathogens in having Cu/Zn and Mn as metal cofactors, whereas the SODs of intracellular pathogens such as Trypanosoma, Leishmania, Plasmodium, and Mycobacterium have iron (Fe) as metal cofactor. Iron superoxide dismutase (FeSOD) is an essential enzyme in these pathogens that neutralizes the oxygen free radical (O2•−) and prevents the formation of the peroxynitrite anion (ONOO−), helping the pathogens escape from redox-based cytotoxic killing. Moreover, most intracellular bacteria, such as Salmonella and Staphylococcus, hold MnSOD or FeSOD in their cytoplasm, whereas the periplasm of some pathogenic bacteria and fungi contains a Cu/Zn-cofactored enzyme identified as CuZnSOD. This chapter will review the various types of SOD present in intracellular pathogens and their role in the survival of these pathogens inside their host niche.
Introduction
Reactive oxygen species are primarily by-products of redox processes and may also be produced to initiate intracellular signaling and antimicrobial activity. The general phenomenon is to maintain the ROS level in the cell through antioxidant enzymes and antioxidant molecules present in cells [1]. One of the prime sources of ROS in mammalian cells is the respiratory chain in mitochondria. It is well established that ROS generation is an essential modulator of inflammatory reactions in mammals. The enzyme NADPH oxidase induces the oxidative burst, leading to a dramatic increase in oxygen consumption and enhanced phagocytosis. Activated macrophages induce the expression of IFN-γ and TNF-α cytokines, improving NADPH oxidase activity and resulting in the production of ROS such as the superoxide radical (O2•−). The O2•− species are converted into the hydroxyl radical (HO•), hydrogen peroxide (H2O2), and peroxynitrite (ONOO−) by spontaneous or enzymatic reactions [2,3]. Activation of nitric oxide synthase (iNOS, or NOS2) in macrophages stimulates increased secretion of nitric oxide (NO) and •NO-metabolite levels within the cell. Superoxide is the first ROS produced by mitochondria; it is highly reactive and does not diffuse readily from cells, since the leading site of ROS production is the inner mitochondrial membrane. H2O2 is derived from mitochondrial ROS (superoxide) detoxified by superoxide dismutase. ROS detoxification is assigned to ROS-generating sites in the cell, such as mitochondria, the glycosome, the endoplasmic reticulum, and the cytosol. Hydrogen peroxide (H2O2) is not considered a free radical by definition since it lacks unpaired electrons. Still, NO, which is a free radical, has also been involved in ROS-mediated damage; however, NO has a dual nature, being beneficial as well as vicious [4-6].
Aerobic organisms exhibit two major antioxidant defense systems to minimize the ROS-mediated damage caused by oxygen free radicals. The first is enzymatic defense, and the second consists of low-molecular-weight antioxidants such as vitamins and phytochemicals. In general, cells control oxidative stress through three essential antioxidant enzymes: (i) superoxide dismutase, a class of oxidoreductase enzymes that contain metal ions in their active site (Fe, Mn, and/or Cu/Zn) and are responsible for converting the superoxide anion into H2O2; (ii) glutathione peroxidase, which is responsible for the reduction of H2O2 and organic hydroperoxides using glutathione as the hydrogen donor; and (iii) catalase, which is responsible for the breakdown of H2O2 into O2 and H2O [7]. Since the activity of glutathione peroxidase requires glutathione as hydrogen donor, the NADPH-dependent reduction of oxidized glutathione is needed to maintain a steady state of glutathione for this activity [1].
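For reference, the net reactions catalyzed by these enzymes, together with the NADPH-dependent regeneration of reduced glutathione mentioned above, can be written as follows (standard textbook stoichiometry, not taken from the cited references):

\begin{align*}
\text{Superoxide dismutase:} \quad & 2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \longrightarrow \mathrm{H_2O_2} + \mathrm{O_2} \\
\text{Glutathione peroxidase:} \quad & \mathrm{H_2O_2} + 2\,\mathrm{GSH} \longrightarrow \mathrm{GSSG} + 2\,\mathrm{H_2O} \\
\text{Catalase:} \quad & 2\,\mathrm{H_2O_2} \longrightarrow 2\,\mathrm{H_2O} + \mathrm{O_2} \\
\text{Glutathione reductase:} \quad & \mathrm{GSSG} + \mathrm{NADPH} + \mathrm{H^+} \longrightarrow 2\,\mathrm{GSH} + \mathrm{NADP^+}
\end{align*}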
Superoxide dismutase catalyzes the dismutation of the oxygen free radical to O2 and H2O2 in the cell. SOD enzymes also participate in signaling pathways by controlling ROS action and protecting the cells from the toxic effects of superoxide radicals. Intracellular SODs mainly restrict superoxide action, which harms the cells by damaging Fe-S cluster-containing enzymes. Extracellular SODs also guard the cells from superoxide released by the host or pathogens; for example, extracellular SODs of microbial pathogens protect them from ROS-mediated killing by host cells. The host cells' antioxidant system includes enzymes such as SOD, catalases, and peroxidases [8,9].
Superoxide dismutase
The evolutionary history of the metalloenzyme superoxide dismutase (SOD) is ancient, predating the divergence of eubacteria from archaea. It is a ubiquitous protein present in all living organisms and plays a vital role in the defense against superoxide radicals in the cell. SOD catalyzes the conversion of two molecules of the virulent oxygen free radical (O2•−) into molecular oxygen (O2) and hydrogen peroxide (H2O2) using two equivalents of H+ ions [10]. SOD is marked as a strong free radical scavenger that can eliminate the toxic effects of superoxide produced during the reduction of molecular oxygen. The SOD enzyme family has been classified based on several factors, one of which is the metal ion. In general, SODs contain the metal cofactor at their catalytic core and are classified into three major groups: copper/zinc (Cu/Zn-SOD) [11,12], manganese (Mn-SOD) [13], and iron (Fe-SOD) [14-16]. MnSOD, FeSOD, and CuZnSOD are encoded by the genes sodA, sodB, and sodC, respectively. Nickel (Ni)- and iron-zinc (Fe/Zn)-containing isozymes have also been identified in several bacteria [17,18]. FeSOD has mainly been reported in prokaryotes and a few protozoan parasites, whereas MnSOD and CuZnSOD are found in both prokaryotes and eukaryotes. All these isoforms were identified based on their diverse sensitivities to cyanide (CN) and H2O2: Cu/Zn-SOD is extremely sensitive to CN and H2O2 [19], Mn-SOD is insensitive to CN and H2O2 [20], while Fe-SOD is not sensitive to CN but is sensitive to H2O2 [21]. In addition, Mn-SOD and Fe-SOD are both inhibited by chloroform-ethanol, whereas Cu/Zn-SOD is insensitive [22].
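Mechanistically, the dismutation proceeds through two half-reactions in which the active-site metal M (Cu, Mn, Fe, or Ni, depending on the isoform) cycles between its oxidized and reduced states; this is standard SOD chemistry rather than a result from the cited references:

\begin{align*}
\mathrm{SOD\text{-}M^{(n+1)+}} + \mathrm{O_2^{\bullet-}} &\longrightarrow \mathrm{SOD\text{-}M^{n+}} + \mathrm{O_2} \\
\mathrm{SOD\text{-}M^{n+}} + \mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} &\longrightarrow \mathrm{SOD\text{-}M^{(n+1)+}} + \mathrm{H_2O_2}
\end{align*}

Summing the two half-reactions recovers the net dismutation of two superoxide radicals into O2 and H2O2.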
Moreover, SODs of intracellular bacteria are further classified into three groups based on their localization: Mn- and Fe-cofactored SODs are found in the cytosol, whereas the third type, cofactored by Cu/Zn, is associated with the periplasm or anchored to the lipid of the outer envelope [23,24]. The bacterial Cu/Zn-SOD dismutates superoxide produced by the host cell during phagocytosis and thereby contributes to bacterial virulence [25,26]. Additionally, a few families of SODs use a Ni ion as cofactor at their catalytic core [27]. A study has shown that the superoxide dismutase from Streptococcus is capable of cofactor substitution, using Fe in place of Mn [28]. On the other hand, Leishmania tropica, Trypanosoma brucei, and Crithidia fasciculata have superoxide dismutases that are insensitive to cyanide but sensitive to azide and peroxide [29]. The SODs of trypanosomatids have Fe as metal cofactor at their catalytic core and are categorized as iron superoxide dismutases (Fe-SOD). Other protozoan parasites, such as Plasmodium falciparum and Entamoeba histolytica, also have Fe-SOD, and enzyme-mediated free radical catabolism in these parasites is fully Fe-SOD dependent [30]. The Fe-SOD isoform was first discovered in Escherichia coli in 1973 by Yost and Fridovich; subsequently, the same isoform was characterized in T. cruzi in 1977. Like trypanoredoxin (TR), the SODs of T. cruzi differ from those of the mammalian host. Trypanosomatids, other protozoan parasites (P. falciparum and E. histolytica), some plants, and Archaea possess only Fe-SOD, whereas humans and other mammalian hosts contain Cu/Zn-SOD and Mn-SOD as core metal cofactors (Figure 1) [31].
This chapter will discuss the role of superoxide dismutase in various intracellular pathogens: protozoan parasites of the genera Trypanosoma, Leishmania, Plasmodium, and Toxoplasma; intracellular bacterial pathogens of the genera Mycobacterium, Salmonella, Francisella, and Staphylococcus; and intracellular fungal pathogens of the genera Cryptococcus and Histoplasma.
Role of SOD in intracellular parasites
Several intracellular protozoan parasites cause severe illness in human beings and, if left untreated, can be fatal. These intracellular parasites belong to the genera Plasmodium, Leishmania, and Trypanosoma, causing a spectrum of diseases such as malaria, leishmaniasis, African sleeping sickness, and Chagas disease in humans [1]. The antioxidant defenses of pathogenic protozoan parasites are significantly distinct from each other as well as from those of their mammalian host. Trypanosomatids as well as Plasmodium species have an Fe-containing SOD isoform, which is typically found in bacteria but absent in other eukaryotic cells [32,33]. The main function of Fe-SOD is to neutralize the superoxide radicals (O2•−) formed in the cell [34]. Parasite persistence is determined by a balance between the capacity of the immune response and the parasite's resistance against free radicals produced by host cells. Leishmania-infected macrophages are able to produce inflammatory cytokines, ROS, and •NO derivatives, which usually lead to the killing of the phagocytosed microorganism. However, Leishmania and Trypanosoma spp. are among the few protozoa that can survive and resist the cytotoxic environment within the macrophage and, further, are able to replicate in such a hostile condition (Table 1) [4,5].
Trypanosomiasis
Chagas disease is a parasitic disease caused by the intracellular parasite Trypanosoma cruzi. Around 6-7 million people are affected worldwide, mainly in Latin America, and the disease is listed among the 17 neglected tropical diseases (NTDs) classified by the WHO (WHO, 2021). Present chemotherapy relies on two available drugs, the 5-nitrofuran nifurtimox (NFX) and the 2-nitroimidazole benznidazole [65]. T. cruzi contains only Fe-dependent superoxide dismutase (Fe-SOD). The parasite has two dimeric Fe-SOD isoforms, one mitochondrial and one cytosolic. However, Mateo et al. [35] investigated and characterized four Fe-SODs in T. cruzi epimastigotes, mainly cytosolic. The level of Fe-SOD increases during the differentiation of short stumpy forms of the parasite into dividing procyclic forms [66]. Therefore, Fe-SODs could be a promising drug target for the development of anti-chagasic drugs because of their exclusivity to T. cruzi. Furthermore, the crystal structures of the cytosolic Fe-SOD and the mitochondrial Fe-SOD from T. cruzi suggest that each enzyme has two polypeptide chains and two active sites, each built around an Fe2+/Fe3+ ion. In Chagas disease, phagocytosis of parasites by macrophages is the host's first line of defense. The macrophage produces the superoxide radical (O2•−), which diffuses into parasitophorous vacuoles and creates a toxic environment for the parasites. However, T. cruzi is also equipped with an antioxidant network to counter host-derived ROS activity. During infection, parasites are internalized into the phagolysosomal compartment and activate the NADPH oxidase 2 complex (Nox2) of the host macrophage [67]. Nox2 activity in macrophages results in intraphagosomal formation of oxygen free radicals (O2•−) and O2•−-derived ROS, which are required to restrain parasite proliferation and disrupt its differentiation in the early stage of infection. Macrophages derived from Nox2-deficient (gp91phox−/−) mice produce marginal amounts of superoxide radical and are more susceptible to parasite infection than macrophages derived from wild-type mice. Nox2-derived superoxide radical plays a crucial role in controlling T. cruzi infection in the early phase of a murine model of Chagas disease [68]. Inhibition or ablation of the Nox2 enzyme has been shown to be detrimental to controlling the infection of a number of pathogens in vitro and in vivo [69,70].
Table 1. Distribution of superoxide dismutases (SODs) and their subclasses in various intracellular pathogens, and their role in the pathogenesis of the respective diseases.
Trypanosoma brucei is a protozoan parasite that causes sleeping sickness in humans in many countries of sub-Saharan Africa. Various sub-species of the parasite cause the disease and are responsible for more than 90% of all trypanosomal disease in humans [71]. Overexpression of SOD-B1 in T. brucei resulted in hypersensitivity to trypanocidal agents such as benznidazole and gentian violet. A similar study in L. chagasi revealed that an increase in SOD-B1 protein leads to resistance toward paraquat and nitroprusside [72]. Deleting one copy of the Sod-B1 gene in L. chagasi increased sensitivity to these drugs and significantly decreased parasite survival within the host macrophage. T. brucei possesses four SOD isoforms, of which three are iron-dependent and very similar to prokaryotic SODs. Localization studies reveal that two of the four SODs are predominantly found in the glycosome (TbSOD-B1 and TbSOD-B2) and the other two in mitochondria (TbSOD-A and TbSOD-C) [30]. Overexpression of the cytosolic Fe-SOD-B of T. cruzi conferred more resistance to phagocytic killing by macrophages and increased intracellular proliferation compared with wild-type (WT) parasites. Fe-SOD-B-overexpressing mutant parasites showed higher infectivity than WT, an advantage that was lost in gp91-phox−/− macrophages, emphasizing the role of O2•− in parasite killing [67]. TcFeSOD-A gene amplification increases TcFeSOD protein expression and enzyme activity in T. cruzi and induced resistance to benznidazole and gentian violet treatments [44]. Reduced expression of TbSOD-B, which is responsible for detoxifying this highly toxic radical in the parasite, leads to rapid accumulation of the superoxide anion within the trypanosome [74].
Leishmaniasis
Leishmaniasis is an intracellular protozoan disease caused by Leishmania parasites and is usually prevalent in tropical and subtropical regions of the world [36,75]. The Leishmania parasite infects host macrophages, survives in parasitophorous vacuoles of the macrophage, and escapes oxidative killing by neutralizing ROS activity. Leishmania Fe-SODs can be classified into two types based on their localization: the FeSOD-A isoform is localized in mitochondria and is related to cellular respiration, while FeSOD-B1 and FeSOD-B2 are localized in glycosomes and reduce the oxidative stress generated by cellular reactions [37]. L. major contains the Sod-B1, Sod-B2, and Sod-C genes on chromosome 32 and the Sod-A gene on chromosome 8. The Sod-B1 and Sod-B2 genes are organized in tandem in both L. chagasi and L. donovani. Metacyclic promastigotes of L. amazonensis lacking one allele of the Sod-A gene failed to replicate in macrophages and were severely attenuated in their ability to establish cutaneous lesions in mice. In addition, the reduction of SOD-A expression in parasites resulted in increased susceptibility to oxidative damage, and the loss of SODA/sod-A function in promastigotes compromised their differentiation into axenic amastigotes. Hence, SOD-A promotes Leishmania virulence by protecting the parasites against oxidative stress and initiating ROS-mediated signaling mechanisms that are required to determine infective forms [37]. L. chagasi SOD-B1 null mutant parasites are not viable inside host macrophages, and parasites lacking one SOD-B1 allele have markedly reduced viability [38]. Moreover, WT and SOD-B1/Δsodb1 L. major promastigotes have an equal capacity to establish infection in murine bone marrow macrophages; however, in contrast to WT parasites, L. major SOD-B1/Δsodb1-deficient parasites decline in number over time in macrophages. These results suggest that a normal level of SOD-B1 is required for L. major endurance in macrophages and virulence in mice [76]. The Fe-SOD transcript level and enzyme activity are higher in the amastigote than in the promastigote stage of L. chagasi when treated with nitroprusside and paraquat [72]. In Leishmania, FeSOD-A appears to be the first line of defense against ROS and is crucial for parasite survival inside macrophages. Antimony (SbIII)-resistant L. (Viannia) braziliensis (LbSbR) and L. (Leishmania) infantum (LiSbR) lines express higher FeSOD-A specific enzyme activity compared with wild-type controls and show more resistance toward antimony (SbIII) [77,78]. Moreover, miltefosine-resistant L. donovani is able to induce the overexpression of LdFeSOD-A, which protects from drug-induced cytotoxicity, reduces superoxide generation, and is involved in suppression of oxidative stress-induced programmed cell death by reducing phosphatidylserine exposure and DNA damage [79,80]. Increased exposure of L. donovani to miltefosine promotes resistance through the release of LdFeSOD-A from mitochondria into the cytosol. This release of LdFeSOD-A into the cytosol, or the inhibition of LdFeSOD-A import into the mitochondria, makes the mitochondria even more susceptible to oxidative stress due to the accumulation of ROS. The mitochondria of the parasite are thus more vulnerable to ROS, leading to programmed cell death, emphasizing the enzyme's role in maintaining healthy mitochondria [39].
Malaria
Malaria is caused by intracellular protozoan parasites belonging to the genus Plasmodium. Malaria is endemic in most tropical countries and in subtropical regions of Asia, Africa, and South and Central America. Plasmodium differentiates and replicates inside hepatocytes and is then released as merozoites into the bloodstream, which subsequently invade red blood cells (RBCs) [81]. The Plasmodium parasite uses SOD to reduce the toxicity of ROS throughout the intra-erythrocytic stage of its survival. The SOD activity in Plasmodium falciparum and rodent malaria species is characterized as iron-dependent and constitutes the first level of the parasite's antioxidant defense system [40,81,82]. P. falciparum contains two distinct genes coding for different SODs, PfFeSOD-1 and PfFeSOD-2 [40]. PfFeSOD-1 is a cytosolic protein expressed during the intra-erythrocytic cycle of the parasite [41,83]. FeSOD-1 has also been reported in P. ovale, P. malariae, and P. vivax, and in closely related apicomplexan parasites such as Toxoplasma gondii [42]. Since FeSOD-1 is a cytosolic protein, it is unlikely to act on the superoxide anion in the parasite food vacuole during hemoglobin digestion; thus, it is plausible that parasites take up a large amount of Cu/Zn-SOD from the host erythrocyte to detoxify the superoxide anions in their organelles [84]. The Plasmodium parasite utilizes SOD enzymes to limit the toxicity of ROS produced during hemoglobin degradation in the erythrocytic cycle, and these enzymes play a crucial role in parasite persistence and intracellular survival during the intra-erythrocytic stage of the life cycle. FeSOD1 of Plasmodium vinckei (PvSOD1) also plays a central role in the oxidative defense of these parasites; PvSOD1 is inhibited by H2O2 and peroxynitrite, but not by cyanide or azide [85]. The FeSOD-2 of P. falciparum is a mitochondrial SOD with an elongated N-terminal protein extension, reminiscent of a bipartite apicoplast-localized protein [43,86]. An inhibition study of recombinant P. falciparum FeSOD suggested that SOD is a highly selective drug target for designing antimalarial drugs. The study further identified several compounds with antimalarial activity against P. falciparum, including against a strain moderately resistant to chloroquine [87].
Toxoplasmosis
Toxoplasma gondii is an obligate intracellular protozoan pathogen that infects nearly all warm-blooded animals. Toxoplasmosis is one of the most prevalent parasitic diseases, with an estimated one-third of the global population at risk, yet it is still considered a neglected parasitic disease [88]. T. gondii causes life-threatening illness in developing fetuses and in immunocompromised persons [89]. In chronic infection, T. gondii spreads through the circulatory system to various organs such as the heart and brain [90]. T. gondii RH tachyzoites treated with resveratrol and pyrimethamine significantly increased SOD activity to restrain ROS action and promote their survival [44]. Interestingly, human macrophages failed to produce ROS during T. gondii infection [91], possibly due to an immune evasion mechanism of the parasite. T. gondii targets the host NADPH oxidase by reducing the expression of the Nox4 transcript and protein, resulting in a diminished release of intracellular ROS; in infected cells, Nox4 gene expression was associated with activation of PI3K/AKT signaling [92]. However, although the superoxide dismutase and catalase enzymes might play a role in intracellular survival, they do not provide a basis for differences in virulence in mice [93]. In T. gondii, SODs are found in nearly all developmental stages of the parasite, suggesting their importance in detoxifying superoxide radicals to protect the parasite. T. gondii contains three types of SOD. SOD-B1 (Fe-SOD) is different from the Mn-binding SOD of humans; it is a cytoplasmic and essential enzyme, and SOD-B1 gene knock-outs are lethal for the parasite [94,95]. SOD2 and SOD3 are found in the mitochondria of the parasite and have conserved iron-binding residues; they are very similar in primary sequence to SODs from P. falciparum [45]. T. gondii superoxide dismutase (TgSOD) also affects the intracellular multiplication of both bradyzoite and tachyzoite forms of the parasite. A recombinant DNA vaccine containing a T. gondii antigen gene elicited high levels of antibodies and a Th1-type immune response with significant production of IFN-γ and low levels of IL-4 or IL-10 in BALB/c mice [96]. Moreover, a DNA vaccine containing the TgSOD gene triggered potent humoral and cellular immune responses and stimulated biased protective immunity against acute T. gondii infection in BALB/c mice [46]. Mice immunized with a SOD-DNA vaccine of L. amazonensis were partially protected from the parasite upon challenge and showed a mixed immune response, including the production of IFN-γ and IL-4 from CD4+ and CD8+ T lymphocytes [69]. In addition, a SOD vaccine of Brugia malayi was also shown to trigger a typical Th1 response against infective larvae and microfilariae in jirds with filarial infection [97]. These findings reveal that SOD-based vaccines, whether protein- or DNA-based, have potential efficacy for controlling intracellular pathogens by activating protective Th1-type immune responses in animals.
Role of SOD in intracellular bacteria
Several intracellular bacteria cause severe illness in human beings and, if left untreated, can be fatal. Most pathogenic bacteria contain MnSOD or FeSOD in their cytoplasm, while CuZnSOD has been found in the periplasm of pathogenic bacteria and plays an essential role during phagocytosis [11,23]. In addition to their ability to detoxify free radicals during aerobic growth, bacterial SODs are also critical determinants of virulence. In several intracellular bacterial infections, SOD-C acts as a critical virulence factor, and its localization to the periplasm protects the bacteria from host-derived ROS [49,98-100]. Moreover, many virulent bacteria maintain two copies of the sodC gene [101]; the evolutionary maintenance of an extra sodC gene copy indicates that SOD is essential for pathogenic bacteria to survive inside the host niche [101]. These pathogens belong to the genera Mycobacterium, Salmonella, Staphylococcus, and Francisella, causing a spectrum of diseases such as tuberculosis, leprosy, typhoid, boils, furuncles, cellulitis, and tularemia.
Tuberculosis and leprosy
Mycobacterium is an intracellular bacterial genus causing two distinct disease manifestations in humans, tuberculosis and leprosy. Tuberculosis (TB) is caused by M. tuberculosis, a leading infectious agent that claims over a million deaths worldwide each year [102]. M. tuberculosis encounters several exogenous and endogenous redox pressures throughout its pathogenic life cycle and therefore uses various in-house enzymes to detoxify and neutralize the redox stress produced by host cells. Catalase-peroxidase, superoxide dismutase, and alkyl hydroperoxidase are the enzymes involved in the clearance of oxidative stress [47].
M. tuberculosis is a highly pathogenic bacterium that contains Fe-SOD and expresses 93-fold more superoxide dismutase than the non-pathogenic mycobacterium M. smegmatis, which has Mn-SOD; M. tuberculosis also exports more of the enzyme than M. smegmatis [48]. The superoxide dismutase (SOD) of M. tuberculosis is a 207-residue enzyme with a molecular mass of 23 kDa [103]. Treatment with diethyldithiocarbamate, a potent inhibitor of SOD, increased M. lepraemurium survival in murine splenic macrophages [104], suggesting that the SOD protein is relevant to the long-term survival of mycobacteria in vivo [104]. M. tuberculosis has two distinct SOD proteins, SOD-A and SOD-C. SOD-A, a Mn/Fe-SOD, is one of the main extracellular proteins, whereas SOD-C, a Cu,Zn-SOD present at much lower levels, is located in the outer membrane of the bacterium. SOD-C is upregulated during phagocytosis by macrophages, suggesting its importance in protecting the M. tuberculosis membrane against damage from superoxide radicals [25]. The SOD of M. tuberculosis scavenges oxygen free radicals and inhibits the release of NO by inhibiting iNOS activity. It impairs acquired immunity by down-regulating IFN-γ expression and by controlling caspase-dependent apoptosis. SOD also inhibits innate immunity by down-regulating TLR2 expression and controlling TLR2-dependent signaling in the cells [104].
Mycobacterium leprae is the causative agent of leprosy, or Hansen's disease. M. leprae is the only known bacterial pathogen that infects superficial peripheral nerves. It is an intracellular pathogen that infects both myelinated and non-myelinated Schwann cells of the nerve and proliferates within cells of the monocyte/macrophage series. Unlike the central nervous system, peripheral nerves are not shielded from the host immune response by the blood-brain barrier [105]. Hence, the ability of M. leprae to escape the phagocytic actions of macrophages may be a critical factor in its pathogenicity [106]. The SOD activity of M. leprae is lower than that of other mycobacterial species such as M. lepraemurium and M. phlei [107]. Therefore, clearance of M. leprae infection via the SOD pathway appears to follow a distinct pattern and is not dependent on macrophage activation and differentiation.
Salmonellosis
Salmonella typhimurium is a facultative intracellular bacterium that resides within modified phagosomes in macrophages, which promotes replication and escape from killing by ROS [108]. S. typhimurium infects a wide range of hosts, including animals, humans, and poultry. It causes acute gastroenteritis in humans and a typhoid-like disease in mice that, if left untreated, is essentially 100% fatal [50]. Salmonella infects the epithelial wall of the intestine and escapes from the innate immunity and ROS activity of the host. The SOD of S. typhimurium protects the bacterium from excessive ROS activity produced outside or inside the host cell [109,110]; thus, SOD is considered a critical factor for bacterial survival through neutralization of ROS activity [111]. Inactivation of the sod-A gene in Salmonella species is associated with limited protection from ROS and decreased virulence during mouse infection [26,109].
A sod-A-deficient bacterium displayed a slightly lower growth rate compared to the wild-type strain, and the loss of the sod-A gene impairs the ability of the mutant bacteria to infect the host cell. Consequently, the sod-A mutant bacterium is highly susceptible to the bactericidal action of host cells and has also shown attenuated virulence. More specifically, SOD-A plays a vital role in biofilm formation, increased resistance against oxidative stress, and overcoming the bactericidal complement system of serum [51]. Salmonella combats phagocytic free radicals by producing a periplasmic superoxide dismutase: the periplasmic Cu,Zn-cofactored superoxide dismutase (SOD-C) protects S. typhimurium from extracellular, phagocyte-derived oxidative damage. Salmonella deficient in the sod-C gene has shown reduced survival inside the macrophage, increased ROS susceptibility, and attenuated virulence during in vivo infection. Conclusively, SOD protects periplasmic or inner-membrane targets by countering the phagocytosis-dependent oxidative burst and inducible nitric oxide synthase activities during in vivo infection [49]. The evolutionary acquisition of the sod-C gene in Salmonella species confers an increased virulence trait on the bacterium [52].
However, the cytosolic Mn-SOD enzyme is essential for detoxifying intracellular superoxide radicals but is not involved in virulence [112]. The SOD of Streptococcus suis confers resistance to oxidative stress and ROS-generating herbicides, which are known to cause severe damage to DNA, RNA, and protein molecules, and might thereby contribute to its virulence in mice [53].
Tularemia
Francisella tularensis is an intracellular pathogen that causes a disease called tularemia. The disease is considered a potential biological threat to humans due to its extreme infectivity and substantial capacity to cause severe illness and death. The hallmark of the bacterium is its capability to survive and replicate within macrophages [113] and other cell types [114,115]. The bacterium's survival depends on its ability to combat the microbicidal activities of macrophages, such as ROS and reactive nitrogen species. F. tularensis requires oxygen for growth and possesses ROS-scavenging enzymes such as superoxide dismutases, peroxidases, and catalases [116,117].
Like other bacterial pathogens, F. tularensis contains two types of SOD gene: FeSOD (sod-B) and CuZnSOD (sod-C). SOD-B plays a dual role in protecting F. tularensis from host oxidative stress. First, SOD-B binds iron with high affinity and limits the availability of the iron required to produce the highly lethal hydroxyl radical (OH•). Second, detoxification of superoxide prevents the damage to DNA, proteins, and lipids associated with O2•− toxicity [53,54]. By dismutating superoxide, SOD-B also decreases the reaction of O2•− with NO to form peroxynitrite (ONOO−) and protects the bacteria from ONOO− toxicity [55]. ONOO− has been shown to have a significant role in the IFN-γ-induced killing of the F. tularensis live vaccine strain (LVS) by murine macrophages [99,118]. The genome sequence of F. tularensis LVS possesses a single functional copy of the sod-B gene [117]. Hence, alterations of the sod-B gene leading to reduced SOD-B enzyme expression might be associated with high sensitivity to oxidative stress, suggesting that sod-B is essential for bacterial survival under oxidative stress conditions. Furthermore, the increased survival of mice infected with sod-B mutant F. tularensis suggests that SOD-B plays a role in virulence [56].
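The chemistry underlying this dual protection can be summarized by two well-known reactions that SOD-B helps to suppress, namely iron-catalyzed hydroxyl radical formation (the Fenton reaction) and the combination of superoxide with nitric oxide to give peroxynitrite; these are standard textbook reactions rather than results from the cited references:

\begin{align*}
\text{Fenton reaction:} \quad & \mathrm{Fe^{2+}} + \mathrm{H_2O_2} \longrightarrow \mathrm{Fe^{3+}} + \mathrm{OH^-} + \mathrm{OH^{\bullet}} \\
\text{Peroxynitrite formation:} \quad & \mathrm{O_2^{\bullet-}} + \mathrm{NO^{\bullet}} \longrightarrow \mathrm{ONOO^-}
\end{align*}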
A recent study suggests that SOD-C (CuZnSOD) of F. tularensis also plays a vital role in virulence. SOD-C is localized in the periplasm, where it protects the bacterium from superoxide radicals (O2•−) derived from host cells. An F. tularensis sod-C deletion (∆sodC) mutant, and a ∆sodC mutant with additionally attenuated sod-B gene expression (sodB ∆sodC), exhibited attenuated intracellular survival in IFN-γ-activated macrophages compared to the wild-type F. tularensis LVS. Transcomplementation of the sod-C gene in the ∆sodC mutant, or inhibition of the IFN-γ-dependent production of O2•− or NO, enhanced the survival of the sod mutant bacteria in macrophages. The virulence of the sodB ∆sodC mutant was significantly more attenuated than that of the ∆sodC mutant. Furthermore, the absence of IFN-γ, iNOS, or PHOX restored the virulence of the ∆sodC mutant strains, suggesting that the CuZnSOD of the bacterium plays a critical role in counteracting the bactericidal activities of ROS and RNS. The ∆sodC and sodB ∆sodC mutants were also significantly attenuated for virulence in intranasally challenged C57BL/6 mice compared to the wild-type F. tularensis LVS, indicating that SOD-C is required for resisting host-generated ROS and contributes to the survival and virulence of F. tularensis in mice [119].
Staphylococcus (boils and toxic shock syndrome)
Staphylococcus aureus is a gram-positive bacterium that causes a broad spectrum of diseases in humans. It is a facultative intracellular bacterium that invades and replicates within many types of phagocytic and non-professional phagocytic cells, such as endothelial cells, mammary cells, fibroblasts, and osteoclasts [120]. The bacterium asymptomatically colonizes about one-third of the global population and is a leading cause of antibiotic-resistant infections [121]. Methicillin-resistant S. aureus (MRSA) strains are among the most dangerous and have shown resistance to all β-lactam antibiotics as well as other antimicrobials [122]. S. aureus is capable of subverting xenophagy and escaping into the cytosol of the host cell during intracellular infection [118,123,124]. During intracellular survival, S. aureus protects itself from the oxidative burst by numerous mechanisms, including enzymes such as SODs that detoxify ROS [125,126]. S. aureus possesses two distinct SODs, SOD-A and SOD-M, both of which are cytoplasmic and reported to be Mn-dependent [57,127]. All staphylococcal species contain the SOD-A protein, while S. aureus also has the unique protein SOD-M [58]. The loss of either SOD-A or SOD-M in a skin model of infection, or the loss of both SODs in a systemic mouse model of infection, diminishes the ability of S. aureus to cause disease, highlighting the importance of SOD in virulence [128,129].
S. aureus lacking both SODs is more sensitive to killing by host cells during manganese starvation, suggesting the importance of SOD in overcoming nutritional immunity. Mn starvation imposed by the host protein calprotectin reduces staphylococcal SOD activity during in vitro and in vivo infection; hence, Mn deficiency renders S. aureus more sensitive to oxidative stress and neutrophil-mediated killing [128,130,131]. The SOD-A protein is essential for countering oxidative stress and disease progression when manganese is abundant, whereas SOD-M is important under manganese-deplete conditions. SOD-A is strictly manganese-dependent, whereas SOD-M can use either of two different metals, showing similar enzymatic activity when loaded with manganese or iron. During host-imposed Mn starvation, S. aureus exploits the ability of SOD-M to utilize Fe to retain SOD activity. Consequently, S. aureus enhances its ability to overcome nutritional immunity and resist oxidative stress, ultimately promoting virulence and infection [59].
Role of SOD in intracellular fungal infections
The superoxide dismutases of pathogenic fungi use Cu/Zn or Mn metals as cofactors. The enzymes are localized in the cytosol as well as in mitochondria and are involved in cell differentiation and multi-stress responses. Mitochondrial Mn-SODs prevent damage from oxidative, osmotic, and thermal stresses in yeast cells. SOD proteins have been shown to contribute to the virulence of many intracellular pathogenic fungi, such as C. neoformans [60] and H. capsulatum, both of which are capable, to some degree, of neutralizing the lethal levels of ROS produced by host cells [64]. C. neoformans has Cu/Zn-SOD and Mn-SOD, while H. capsulatum has Cu/Zn-SOD. However, some fungal pathogens and fungus-like oomycetes have a unique SOD class, the Cu-only SODs (SOD5). SOD5 enzymes are closely related to the ubiquitous class of Cu/Zn-SODs but lack the Zn cofactor [34] and are believed to act at the substrate level [132-134]. Unlike Cu/Zn-SODs, which are found both intra- and extracellularly, Cu-only SODs are found exclusively extracellularly and appear primarily appended to GPI-anchored proteins of the cell surface [135,136]. Cu-only SODs have been shown to protect pathogens from the host oxidative burst regulated by immune cells [9] (Table 1).
Cryptococcosis
Cryptococcus neoformans (Cn) is a facultative intracellular fungal pathogen that can propagate inside host macrophages during many stages of experimental and human infection [137,138]. Cryptococcus is a soil fungus that causes life-threatening meningitis in immunocompromised patients [139,140]. It is an encapsulated pathogenic yeast whose capsule is composed primarily of glucuronoxylomannan (GXM). This polysaccharide helps the fungus play both defensive and offensive roles during pathogenesis: it protects the fungus against phagocytosis and promotes intracellular pathogenesis through the cytotoxic release of polysaccharide into macrophage vacuoles [137]. Cryptococcus rarely causes clinically visible infection in healthy hosts, but it can remain latent and persist inside macrophages [61,62]. C. neoformans var. gattii predominantly infects individuals with a normal immune response, whereas var. grubii and var. neoformans are common in immunocompromised individuals. C. neoformans var. gattii hinders the macrophage phagocytic response, whereas the other two varieties are readily killed by ROS released by phagocytic cells [141,142].
C. neoformans resists ROS-mediated oxidative killing by macrophages by inducing SOD activity, which likely plays an important role in the virulence of this fungus. Exogenous supplementation of SOD significantly controlled fungal growth by enhancing human neutrophil function, suggesting that SOD plays a protective role during C. neoformans infection [63]. Cryptococcus neoformans var. gattii contains two types of SOD: the copper/zinc-dependent SOD1 and the Mn-dependent SOD2 isoenzymes [143]. Both SOD1 and SOD2 are intracellular SODs, and deletion of their encoding genes reduces fungal virulence in an in vivo model of infection; the mutant fungus also shows increased sensitivity to pharmacologically induced intracellular oxidative stress [144]. The sod1 mutant of C. neoformans showed three characteristic features: (1) high sensitivity toward oxidative killing by human polymorphonuclear (PMN) cells and by the redox-cycling agent menadione; (2) markedly attenuated virulence in mouse infection, together with significantly increased susceptibility to in vitro killing by human neutrophils; and (3) defects in the expression of a number of virulence factors such as laccase, urease, and phospholipase. Complementation of the sod1 mutant with SOD1 restored the virulence factors and menadione resistance. Hence, the antioxidant function of SOD1 is critical for the pathogenesis of the fungus during intracellular survival [60,142,145].
Histoplasmosis
Histoplasma capsulatum is an intracellular fungal pathogen that grows as yeast cells within the host and is endemic in parts of the Americas, including Latin America. Macrophages efficiently phagocytize Histoplasma cells but fail to kill the fungus despite ample ROS production. Histoplasma counters host ROS-mediated oxidative stress with three proteins that are possibly involved in defending it from ROS. Studies of sod1- and sod3-deficient Histoplasma strains have shown the spatial specificity of the SOD1 and SOD3 superoxide dismutases for internal and external (i.e., host-derived) superoxide, respectively. SOD-3 is the primary extracellular SOD, and its expression is significantly enriched in the pathogenic phase of the fungal cells. Histoplasma SOD-3 confers greater resistance to phagocytic killing by host cells, leading to an increased capacity to cause disease in immunocompetent hosts. In in vivo studies, sod-3-deficient Histoplasma strains showed attenuated virulence in mice. Furthermore, the restoration of ∆sod3 mutant virulence in mice unable to produce superoxide radicals conclusively proves that SOD3 functions in the detoxification of superoxide generated by the host. SOD-3 thereby prevents the superoxide-dependent killing of Histoplasma yeast cells, and the host requires ROS production to control Histoplasma infection. Hence, SOD-3 is a central virulence factor of Histoplasma and helps the fungus survive the oxidative stress produced by host phagocytic cells during infection [64].
Conclusion
Superoxides are critical molecules produced by host cells to counter intracellular pathogens during infection. ROS are mainly produced within the mitochondria of cells as by-products of normal cell respiration, and defects in oxidative phosphorylation could lead to an increase or decrease in ROS production by host cells. ROS-mediated destruction can directly affect components of the electron transport system of host cells. Therefore, to limit ROS activity, host cells have evolved SODs, such as Cu/ZnSOD and MnSOD, to control the ROS that they themselves produce. More importantly, the immune cells of the host use ROS as defense molecules against various kinds of human pathogens during infection.
Intracellular pathogens, in contrast, are furnished with all types of SODs, including NiSOD, Fe- or MnSOD, and CuZnSOD, and they use these SODs to neutralize the free radicals produced by host cells during infection. The SODs of intracellular pathogens can modulate the interaction with phagocytic cells at the onset of phagocytosis by altering the local concentration of superoxide anion in the parasitophorous vacuoles of host cells. SODs of these pathogens are also required to neutralize the O2•− generated by IFN-γ-activated macrophages, but are not necessary for survival in quiescent macrophages. However, the role of SOD in combating other infections does not depend solely on the phagocytic ability of macrophages. In conclusion, the SODs of intracellular pathogens are key determinants of their survival inside the host niche. Furthermore, they also play a vital role in the severity of disease and the virulence of these pathogens by protecting them from extracellular, host-derived ROS activity. | 8,831.2 | 2021-10-14T00:00:00.000 | [
"Medicine",
"Biology"
] |
Refractory autoimmune hemolytic anemia in a systemic lupus erythematosus patient: A clinical case report
Abstract Warm autoimmune hemolytic anemia (AIHA) is a hematologic disorder with an incidence of 1-3 per 10^5 individuals/year. Patients with systemic lupus erythematosus (SLE) develop AIHA in 3% of adult cases and 14% of pediatric cases. We report a case of AIHA refractory to multiple lines of treatment in a patient with SLE, who eventually responded to a proteasome inhibitor-based combination. A patient with systemic lupus erythematosus was diagnosed with symptomatic autoimmune hemolytic anemia. The patient was refractory to multiple lines of treatment including prednisone, intravenous immune globulin, methylprednisolone, rituximab, cyclophosphamide, mycophenolate mofetil, and splenectomy. She eventually had a beneficial response to a proteasome inhibitor-based combination of bortezomib plus mycophenolate mofetil. The treatment of refractory autoimmune hemolytic anemia can be challenging. Patients with AIHA refractory to primary or secondary treatments must resort to receiving novel therapeutic modalities, including combinations targeting plasma cell, T-cell, and B-cell proliferation.
| INTRODUCTION
Warm autoimmune hemolytic anemia (AIHA) is a hematologic disorder with an incidence of 1-3 per 10^5 individuals/year and an accompanying prevalence of 17:100,000. 1 Its pathophysiologic process involves IgG antibodies (warm agglutinins) targeting antigens on red blood cells (RBCs). This, in turn, initiates premature erythrocyte destruction through the reticuloendothelial or complement systems within the liver and spleen. 2,3 Erythrocytes coated by IgG antibodies are recognized by macrophages in the spleen and undergo membrane removal or phagocytosis. 3 Approximately 40%-50% of cases of warm AIHA stem from an idiopathic cause, primarily from immune system activation, deficiency, or dysregulation. The remaining cases are associated with autoimmune or lymphoproliferative diseases, immunodeficiencies, infections, pregnancy, solid tumors, allogeneic stem cell transplant, and drug reactions. 3,4 Furthermore, patients with systemic lupus erythematosus (SLE) develop AIHA in 3% of adult cases and 14% of pediatric cases. 5 We report a case of AIHA refractory to multiple lines of treatment in a patient with SLE, who eventually responded to a proteasome inhibitor-based combination of bortezomib plus mycophenolate mofetil (MMF), leading to an ongoing partial response.
To our knowledge, this is the first report describing successful use of this combination regimen for a patient with heavily pre-treated refractory AIHA.
| CASE REPORT
A 44-year-old African American female with SLE presented to Yale New Haven Hospital as a transfer from a community hematology practice due to worsening symptomatic anemia. The patient had been diagnosed with SLE, displayed a +ANA (antinuclear antibody) titer of 1:1280 and +Sm/RNP (Smith/ribonucleoprotein antibodies), and experienced arthralgias and fatigue; she had been evaluated by a rheumatologist and treated with hydroxychloroquine for a brief period of time. Overall, she was felt to have mildly symptomatic SLE without any active visceral organ involvement; thus, SLE was not deemed active enough to merit continuous systemic immunosuppressive therapy. Two years following her SLE diagnosis, she developed anemia with a hemoglobin of 7 g/dl and was diagnosed with warm AIHA with a positive direct antibody test (DAT) of 3+ IgG. At the time of diagnosis of AIHA, bone marrow biopsy demonstrated normocellular marrow with erythroid hyperplasia and no evidence of plasmacytoma, lymphoproliferative disorder, or any plasma cell neoplasm. Serum protein electrophoresis and immunofixation electrophoresis showed no monoclonal protein, and CT scans did not demonstrate any hepatosplenomegaly or pathologic lymphadenopathy. She was initially treated with prednisone 1 mg/kg daily for several weeks with no response and developed worsening anemia with a hemoglobin of 4 g/dl, requiring repeated red blood cell (RBC) transfusions. Rituximab was added to the regimen without any response as hemoglobin decreased to 3.7 g/dl. The patient received RBC transfusions and was transferred to our center for inpatient management in consideration of plasmapheresis. Over the next few months, the patient received several lines of therapy, including intravenous immune globulin, high-dose methylprednisolone intravenously (IV) 1 g/day for 3 days, repeat doses of rituximab intravenously 375 mg/m2 once weekly for 4 doses in combination with steroids, five sessions of therapeutic plasma exchange, and later cyclophosphamide (1000 mg IV every 21 days for four cycles), with lack of response. Hemoglobin remained in the 3-4 g/dl range and the patient continued to require transfusions. Thus, further treatment was needed, and MMF at 500 mg BID was administered as the next line of immunosuppressive therapy, with a transient partial response and an improvement of hemoglobin to 7.6 g/dl followed by subsequent worsening of anemia. The patient continued to experience episodes of lightheadedness and dyspnea on exertion.
Due to her severe refractory course, she underwent a splenectomy. The patient had a successful post-operative recovery, but the procedure resulted in only a minimal improvement in hemoglobin. Post-operatively she continued a prednisone taper in combination with MMF. Her hemoglobin remained between 6 and 7 g/dl, while she still noted dizziness and required RBC transfusions once a week. The patient was next treated briefly with azathioprine, but this was discontinued due to lack of response and poor tolerance. Given that there was no longer a clear standard of care, a discussion at a consensus hematology conference with multiple experts in the field determined that bortezomib was the patient's best next step. The drug was commercially obtained and covered by the patient's insurance; therefore, this was not an investigational drug use and no IRB approval was needed. The patient was started on bortezomib as the next line of therapy at a dose of 1.3 mg/m² subcutaneously weekly on days 1, 8, 15, and 22 of every 28-day cycle. Within 2-3 months of therapy, she experienced a gradual hematologic response with decreasing transfusion requirements. Eventually, she achieved a partial hematologic response with hemoglobin in the 8-9 g/dl range and became transfusion independent. Given her encouraging response to bortezomib, this therapy was continued with an ongoing partial response for 24 cycles and subsequently discontinued. During this time, she had clinical improvement with fewer episodes of dyspnea on exertion and remained transfusion independent. Of note was the development of transfusion hemosiderosis (peak ferritin 4770), which required iron chelation with deferasirox.
After the initial course of bortezomib, the patient remained off systemic therapy with an ongoing partial response for 2 years. Subsequently, she presented with increasing exertional dyspnea and developed recurrence of AIHA with hemoglobin decreasing from 9 to 7 g/dl. Once hemoglobin dropped to the 6-7 g/dl range, treatment was resumed with weekly bortezomib at 1.3 mg/m² on days 1, 8, 15, and 22 of every 28-day cycle. Initially, she had minimal response to bortezomib and had to resume RBC transfusions. A combination of bortezomib and rituximab was attempted; however, the patient developed a severe infusion-related reaction during the rituximab infusion, which was therefore interrupted. In the following days, she displayed clinical manifestations of serum sickness (fever, malaise, arthralgia, myalgia, abdominal pain, dyspepsia, and a faint morbilliform rash) attributed to rituximab; therefore, the latter was discontinued. Based on previously published literature, [6][7][8] consideration was given to daratumumab as the next line of treatment; however, daratumumab administration was not feasible due to medication coverage issues. Thus, the patient received bortezomib at a dose of 1.3 mg/m² once weekly on days 1, 8, 15, and 22 of each 28-day cycle in combination with MMF and continued this combination for six months. This combination therapy eventually resulted in a partial response with an improvement of hemoglobin to 9 g/dl and transfusion independence. She continued this combination regimen, completed six cycles of bortezomib, and remained in partial response on a steady dose of MMF for 12 months. Of note, during the treatment, the patient developed a segmental pulmonary embolism, a common event in the context of AIHA, and was treated with apixaban. Her transfusion-related hemosiderosis was effectively treated with iron chelation therapy using oral deferasirox.
Our report is unique as it describes a case of severe AIHA completely refractory to multiple lines of treatment, later successfully managed with a proteasome inhibitor-based combination of bortezomib plus MMF leading to a long-lasting partial response. To our knowledge, this is the first report describing successful use of this combination regimen for heavily pre-treated refractory AIHA.
| DISCUSSION
The management and prognosis of refractory AIHA continue to perplex hematologists. This stems from an incomplete understanding of the disease process, the complex nature of its pathophysiology, patient heterogeneity, and the lack of large-scale clinical trial data and treatment standardization. Historically, the first-line therapy for AIHA has been a corticosteroid regimen, which is successful in roughly two-thirds of patients. 9 The remaining one-third of patients, however, require additional lines of treatment, including splenectomy, rituximab, azathioprine, cyclophosphamide, and cyclosporine. [9][10][11] For many years, the preferred second-line treatment for these patients was splenectomy, but recent guidelines have begun to favor rituximab given its greater effectiveness and tolerability. 10 Additionally, these patients suffer from a multitude of other problems stemming from their medications and physiologic states. It is well documented that patients often have infectious complications due to the immunosuppressive nature of their treatments; those who undergo splenectomy are further susceptible to serious infections. 12,13 Clinicians' attention to infectious complications, through vigilant vaccination and antimicrobial prophylaxis, is crucial. Additionally, as observed with our patient, those with AIHA often experience venous thromboembolic events. 14 Specifically, it is estimated that 15%-33% of adults with warm AIHA will have a venous thromboembolic event.
As discussed previously, our patient was refractory to several lines of treatment. However, she attained a response to bortezomib leading to a durable partial remission. After relapse, she received a combination of bortezomib and MMF resulting in a durable partial hematologic response, transfusion independence, and significant clinical improvement. Recently, bortezomib (a 26S proteasome inhibitor) 15 has been used as a successful therapy for immune hemolytic anemia. Bortezomib inhibits the ubiquitin-proteasome pathway, which brings about apoptosis through an augmented unfolded protein response in antibody-producing cells. 16,17 Additionally, it has widespread immune system effects by downregulating NF-kB inflammatory signaling, impairing antigen presentation, and depleting autoreactive T cells, B cells, and plasma cells, thereby reducing antibody and autoantibody responses. Since AIHA is an autoantibody-mediated process, agents suppressing B-cell and plasma cell autoimmunity have proven effective in its therapy. Plasma cell-directed therapy in combination with other immunosuppressants, such as the MMF used in our case, appears effective and promising. In one case report, the authors (after previously using bortezomib to treat a patient with pure red-cell aplasia stemming from an ABO-mismatched stem cell transplantation) employed the same treatment in a patient with cold agglutinin disease (CAD) secondary to IgM κ monoclonal gammopathy. 18 These findings support the use of bortezomib for treatment of AIHA by targeting the plasma cells responsible for producing pathogenic autoantibodies. 18,19 Daratumumab is an IgG kappa anti-CD38 monoclonal antibody approved for treatment of plasma cell neoplasms. In a case report, a patient with AIHA following an allogeneic hematopoietic stem cell transplantation was successfully treated, with improvement of hemolysis, upon administration of daratumumab. 20 The role of complement is increasingly recognized in the etiopathogenesis of hemolytic anemias. Thus, eculizumab is emerging as a novel treatment strategy for AIHA. In one trial, inhibition of terminal complement activation by eculizumab led to a significant reduction in hemolysis, decreasing anemia, fatigue, and the need for blood transfusions. 21 Additionally, the recent DECADE trial found that eculizumab was able to suppress cold agglutinin-mediated hemolysis in patients; however, the circulatory symptoms of the study subjects were not significantly improved. These findings show promise for eculizumab's use in the treatment of hemolytic anemia. As evidenced by this report, there are numerous modalities that can be successfully utilized to treat AIHA, including several combination therapies. In our experience, a combination of plasma cell-directed therapy with a proteasome inhibitor along with suppression of T cells with MMF, a selective inhibitor of inosine monophosphate dehydrogenase (IMPDH), proved to be an efficacious therapy. Despite multiple treatment options, many patients with AIHA remain refractory to treatment. Ongoing studies are exploring novel therapies targeting plasma cells, the B-cell lineage, and other pathways; for instance, ongoing clinical trials are evaluating daratumumab hyaluronidase (NCT05004259), isatuximab (NCT04661033), BTK inhibitors, and other agents in adults with warm AIHA to advance current treatment. Future novel combination therapies hold promise.
Further research and clinical trials are needed to achieve progress in therapy for patients with refractory autoimmune hemolytic anemia.
"Medicine",
"Biology"
] |
Molecular and dissociative adsorption of CO and SO on the surface of Ir(111)
This study investigates the molecular and dissociative adsorption of CO and SO molecules on the perfect and a defective Ir(111) surface. It is aimed at providing a broad spectrum of adsorption site...
I. INTRODUCTION
Owing to both fundamental and applied interest, the chemisorption and physisorption of small molecules on transition metal surfaces has received considerable attention. [1][2][3] The surface-molecule interaction has been studied for potential applications in different technological and industrial processes. For instance, the interaction between transition metal surfaces and small molecules can trigger important surface-catalyzed reactions. 4 Other applications include corrosion, lasers, and sensors. 1,3,[5][6][7][8][9][10] Moreover, the reconstruction of transition metal surfaces draws much attention from both researchers and technologists. While the adsorption of small molecules, such as CO, NO, O 2 , and H 2 , has been reported to lift reconstruction, the adsorption of small adsorbates, such as C, N, and O, causes reconstruction. [11][12][13][14] In the past, several theoretical and experimental investigations of atomic and molecular adsorption on transition metal surfaces have been performed. 1,[15][16][17][18][19][20][21][22][23] The strength of chemisorption and the preference for specific adsorption sites are among the main concerns of studies in this field.
The model for chemisorption on transition metal surfaces was first proposed by Nørskov and mainly features the importance of the position of the d-band center relative to the HOMO and LUMO of the adsorbate. 24 Carbon monoxide (CO) as a probe molecule on transition metal surfaces is one of the most studied chemisorption systems from both experimental and theoretical points of view. Many of the studies give due attention to the geometrical properties, binding site, coverage effects, and chemisorption mechanism. 1,18,[25][26][27] The chemisorption mechanism of CO was first modeled by Blyholder in 1964, 25 according to which CO adsorption takes place in a linear metal-C-O structure involving charge transfer from the CO 5σ orbital to unoccupied metal orbitals and back-donation from the metal to the empty 2π orbitals of CO. Several literature studies report the properties of CO adsorption on transition metal surfaces. 1,16,17,26,28 Both dissociative and molecular adsorption of CO on metallic surfaces is observed. Previous research reveals that, on going from left to right and from 3d to 5d in the periodic table of transition metal elements, dissociative adsorption is suppressed. 4 The first self-consistent density functional theory (DFT) study of chemisorption on metal surfaces was reported by Ying et al., 29 which was followed by a vast number of theoretical adsorption studies of atoms and molecules on different surfaces. The increasing number of studies in this field is mainly attributed to its importance in catalysis and other applications. 1 Despite the fact that the adsorption of small molecules on different metal surfaces, including Ni, Pb and Pt, 16 has been intensively studied, less attention has been given to some of the transition metals. For instance, the adsorption of small molecules on Ir has not been sufficiently studied and its basic properties, such as the adsorption structure, are not fully understood even for the most stable (111) surface. Ir shows a wide variety of potential applications as a heterogeneous catalyst in the chemical industry. 17 Catalysts of both clean Ir and its alloys are used in reactions that require the activation of C-H bonds. 30 Ir is also reported to be a potential catalyst for COx-free production of hydrogen from ammonia, production of hydrogen gas from gasoline, and selective catalytic reduction of NOx. [31][32][33][34] Therefore, it is worthwhile to investigate dissociative and molecular adsorption on the Ir surface, which might provide better understanding of the mechanism and features of molecule-Ir interactions. This study is thus designed to investigate the properties and mechanism of adsorption and possible dissociation of the selected molecules on the Ir(111) surface.
As a continuation of one of our previous studies, which addressed atomic adsorption on the Ir surface, 35 we have performed a theoretical investigation of the functionalization of the Ir surface, this time focusing on dissociative and molecular adsorption. In this work, the adsorption of CO and SO molecules on the Ir(111) surface has been analyzed, with due attention given to the site preference, structural parameters, and energetics of the system. The electronic structure has been used to provide detailed information on the molecule-surface interaction. The rest of the paper is organized as follows: details of the theoretical method are presented in Sec. II, followed by the discussion of results obtained from the DFT calculations in Sec. III, and the final section (Sec. IV) is devoted to the summary and conclusions.
II. METHOD
The interaction of the molecules with the Ir(111) surface has been investigated by means of first-principles density functional theory (DFT) calculations as implemented in the Vienna ab initio simulation package (VASP). 36 The projector augmented wave (PAW) 37 method and a plane-wave basis set with an energy cutoff of 450 eV are used. The exchange-correlation functional is described by the Perdew-Burke-Ernzerhof (PBE) 38 potential of the generalized gradient approximation (GGA). 39 All parameters in the calculation are chosen to converge the total energy to within 10 μeV. A 3 × 3 × 1 k-point grid is used to sample the Brillouin zone in all calculations. The substrate was represented by three layers of 5 × 5 Ir atoms for all the adsorbates included in this study. All atoms were fully relaxed except for those of the bottom Ir layer, which were fixed at their initial bulk-truncated positions. The C(S)O/Ir(111) adsorption was modeled by placing one C(S)O molecule onto a (5 × 5) unit cell; the molecule was placed only on one (the relaxed) side of the slab. The x and y axes were set parallel and the z axis perpendicular to the substrate plane. The supercell has lateral dimensions of 13.57 × 13.57 Å² with periodic boundary conditions along the surface in order to represent an infinite sheet. The supercell is believed to be large enough to avoid interactions between adsorbates in periodically repeated adjacent cells. The substrate is bounded by a 20 Å vacuum region along the vertical direction (z axis) in order to avoid interactions between repeated slabs. The calculation has been performed for different high-symmetry adsorption sites. The combined system was fully relaxed with spin-polarized calculations from the initial configuration, in which the adsorbates were set about 1.9 Å above the surface of the substrate. The geometry optimization of C(S)O/Ir(111) and of the individual components, such as the free C(S)O molecule and the clean Ir(111) surface, was performed using the conjugate gradient (CG) algorithm. The first-order Methfessel-Paxton scheme 40 was used to smear the electronic states with a width of 0.1 eV.
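As a rough illustration of the setup described above, the following is a minimal sketch (not the authors' actual input) of how such a slab model could be prepared with the ASE interface to VASP; the "ontop" starting site, the tag-based constraint, and the parameter mapping are illustrative assumptions.

```python
from ase import Atoms
from ase.build import fcc111, add_adsorbate
from ase.constraints import FixAtoms
from ase.calculators.vasp import Vasp

# Three-layer 5x5 Ir(111) slab; vacuum=10.0 adds ~10 A of vacuum on each side,
# i.e. roughly a 20 A gap between periodic images along z.
slab = fcc111("Ir", size=(5, 5, 3), vacuum=10.0)

# Fix the bottom layer at its bulk-truncated positions (ASE tags layers from the top).
slab.set_constraint(FixAtoms(mask=[atom.tag == 3 for atom in slab]))

# CO with the C end down, placed ~1.9 A above a top site.
co = Atoms("CO", positions=[(0.0, 0.0, 0.0), (0.0, 0.0, 1.15)])
add_adsorbate(slab, co, height=1.9, position="ontop", mol_index=0)

# PBE, 450 eV cutoff, 3x3x1 k-points, first-order Methfessel-Paxton smearing (0.1 eV),
# spin polarization, conjugate-gradient relaxation, 1e-5 eV energy convergence.
slab.calc = Vasp(xc="pbe", encut=450, kpts=(3, 3, 1), ismear=1, sigma=0.1,
                 ispin=2, ibrion=2, nsw=200, ediff=1e-5)
energy = slab.get_potential_energy()  # triggers the relaxation when VASP is available
```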
The total energy of the isolated molecule, which is required to calculate the adsorption energies, was obtained from a calculation with a single molecule in a supercell of dimensions 15 × 16 × 17 Å³. The unequal box lengths are introduced to break symmetry and lower the energy. The Brillouin zone in this case is sampled with only the gamma point. The DFT calculation employing the GGA is found to be quite successful in determining the bond length of the free CO molecule. The optimized bond length obtained for the free CO molecule at equilibrium is in good agreement with the experimental value, with an error of less than 2%.
The adsorption energy of an adsorbate/molecule on the surface was calculated as E_ads = E_total − (E_substrate + E_ad), where E_total, E_substrate, and E_ad are the spin-polarized total energies of the molecule-slab system, the pristine slab, and the isolated molecule, respectively.
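The bookkeeping implied by this definition is trivial but worth spelling out; the numbers in the comment are illustrative values only, not results from the paper.

```python
def adsorption_energy(e_total, e_substrate, e_molecule):
    """E_ads = E_total - (E_substrate + E_molecule); a negative value means binding."""
    return e_total - (e_substrate + e_molecule)

# e.g. adsorption_energy(-500.967, -484.000, -15.000) -> -1.967 (eV)
```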
III. RESULTS AND DISCUSSION
The Ir(111) surface was represented by a slab of three layers, with periodic boundary conditions applied laterally. The supercell has a 5 × 5 lateral periodicity and contains a total of 75 Ir atoms. Molecular adsorption was considered on only one face of the slab. Spin-polarized relaxation was performed for all atomic positions of the molecule and all Ir atoms of the top two layers. The bottom layer of the substrate was kept fixed in all calculations in order to model the surface properties.
The DFT calculation was performed for different possible adsorption sites. Both vertical and horizontal orientations of the molecules on the surface were taken into account. For the vertical configuration, four high-symmetry adsorption sites were identified: V-top, V-bridge, V-fcc, and V-hcp. For the horizontal orientation, the analogous sites (including the H-bridge and H-hollow sites referred to below) were considered.
A. Ir(111)-CO system
This section of the paper presents and discusses the calculated results for the CO adsorption on the Ir(111) surface. The site preference and structural properties were investigated. The study involves both vertical and horizontal orientations of the CO molecule. Our calculation reveals that no adsorption is possible with the horizontal orientation; all the initial configurations considered in this study relaxed to structures with the vertical orientation of the CO molecule. Moreover, the O end-on configurations were repulsive for all adsorption sites. Hence, all vertical configurations mentioned hereafter refer to the C end-on (C atom facing the surface) configurations. The energetics and structural properties of the adsorption are summarized in Table I.
The adsorption configurations of CO on the Ir(111) surface, both the initial configuration and the one after adsorption, are shown in Fig. 1. For all initial configurations with vertical orientation, the CO molecule was found to adsorb at the corresponding site and to relax into a structure with almost the same vertical orientation. As shown in Table I, after optimization the C-O bond distance ranges between 1.165 Å and 1.204 Å, longer than the bond length of the isolated CO molecule (1.15 Å). The observed bond elongation confirms the activation of the CO molecule on the Ir(111) surface. The adsorption strength, however, follows a different trend, increasing when moving CO from the fcc to the bridge to the hcp to the top site with vertical orientation of the molecule on the surface. Therefore, one can deduce that the C-O elongation does not follow the energetic trend.
In Table I, d_Ir-C denotes the bond length between the C atom and the nearest Ir atom, h the planar-averaged vertical height of the C atom above the surface, and d_sur the maximum deviation of an Ir atom of the top layer from the surface; energies and lengths are given in eV/molecule and Å, respectively.
The study reveals that the top site with vertical orientation is the most stable adsorption site, with an adsorption energy of −1.967 eV/CO. It is more stable than the bridge and three-fold hollow sites by 0.389 eV and 0.374 eV, respectively. This result is in accordance with the values from PW91 and RPBE calculations. 1,17 Experimental observations also show that the top site is energetically the most favorable adsorption site for CO; 1,41 the adsorption energy, however, is overestimated compared with the experimental value of about −1.7 eV. 41,42 The adsorption of the CO molecule has been thoroughly investigated on different transition metal surfaces, such as Fe, Ni, Cu, Pb, and Rh, for which the adsorption energy ranges from −1.28 eV to −5.2 eV. [43][44][45][46][47] The calculated value for CO adsorption on the Ir surface lies within this range, implying fairly strong chemisorption comparable to that on the aforementioned surfaces.
The adsorption geometry at the most stable adsorption site, V-top, shows the smallest bond length enlargement of the CO molecule and the largest deformation of the surface compared with all the other sites. This is believed to be a direct consequence of the strong binding of C on top of an Ir atom of the surface. At this adsorption site, C is closer to the one Ir atom to which it binds than to all the other neighboring atoms. Hence, the CO molecule interacts predominantly with this Ir atom, which leads to the largest deviation of that Ir atom from the surface with a minimal effect on the molecular structure of CO. This is supported by the smallest Ir-C bond length (1.848 Å), the largest height h (2.111 Å) of the CO molecule above the surface, and the maximum deformation (0.263 Å) of the surface compared with all the other adsorption sites.
We have also investigated the electronic structure of the CO/Ir(111) system with the purpose of providing further information on the adsorption mechanism and site preference. The projected density of states (PDOS) has been calculated for the most energetically favorable adsorption site, V-top. The PDOS for the CO molecule and for the nearest Ir atom of the surface has been analyzed. The calculation has been performed both before adsorption [for the isolated CO molecule and the isolated Ir(111) surface] and after adsorption [CO/Ir(111) system] for comparison purposes. Figure 2 shows that the density of states of the free CO molecule is discrete. For the adsorbed CO molecule, on the other hand, the DOS distribution is found to be continuous over some specific energy ranges. The continuous distribution of the DOS over those particular energy ranges implies the presence of a series of unoccupied states. Comparing the PDOS of the free CO molecule with that of the adsorbed one reveals a clear shift of states down to lower energy levels. The sharp states observed for the free CO molecule are broadened into resonances at lower energies, implying the occurrence of electron transfer from the CO molecule to the Ir surface.
The DOS distribution of the d states of the closest Ir atom of the surface, as shown in Fig. 2, exhibits a significant difference between the free surface and the CO/Ir system. Between −1 eV and −4 eV and around −7 eV, a number of new DOS peaks arise compared with those of the clean Ir(111) surface. The new DOS peaks are indicators of the transfer of electrons from CO to the surface. Between −4 eV and −7 eV and near the Fermi level, the DOS of the nearest Ir atom of the CO/Ir system is found to be slightly lower than that of the clean surface. This phenomenon, too, can be taken as evidence of the possible electron transfer between the CO molecule and the surface. Therefore, we deduce that the CO adsorption on the Ir(111) surface is driven by hybridization of electronic states, which is also reported to be the mechanism for the adsorption of CO on the surfaces of different simple metals, including Al, Co, and Ni. 2 Figure 2 further shows that the 5σ and 2π* orbitals are affected significantly by the adsorption. The charge depletion from the CO molecule is believed to cause the changes in the 5σ and 1π orbitals. The alteration in the 2π* band evidences the transfer of charge from the metal 5d band to the 2π* orbital of the CO molecule. This is consistent with the Blyholder model of CO adsorption on metallic surfaces. 25
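A rough sketch of how such site-projected d-DOS curves could be extracted for this kind of before/after comparison, assuming pymatgen is available and the VASP run wrote orbital-projected DOS (e.g., with LORBIT switched on); the site index is a placeholder that would have to be matched to the Ir atom actually bound to CO.

```python
from pymatgen.io.vasp.outputs import Vasprun
from pymatgen.electronic_structure.core import OrbitalType

vr = Vasprun("vasprun.xml", parse_dos=True)
cdos = vr.complete_dos

binding_ir = cdos.structure[0]          # placeholder index of the Ir atom below C
d_dos = cdos.get_site_spd_dos(binding_ir)[OrbitalType.d]

energies = d_dos.energies - d_dos.efermi   # energies relative to the Fermi level
densities = d_dos.get_densities()          # spin-summed d-projected DOS

# Repeating the same extraction for a clean-slab run and overlaying the two curves
# reproduces the kind of comparison discussed in the text.
```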
B. Ir(111)-SO system
The adsorption of the SO molecule on the Ir(111) surface has also been thoroughly investigated. Both horizontal and vertical orientations of the molecule were considered in the initial configurations and, unlike the CO case, adsorption is now observed for both orientations. However, the O end-on configurations of the vertical orientation were found to be repulsive for all adsorption sites except the top site, for which weak binding with a binding energy of −0.056 eV was observed. Therefore, the vertical configurations hereafter refer to the S end-on (S atom facing the surface) configurations. All the initial configurations selected to investigate the SO adsorption are similar to those used for the CO adsorption, as shown in Fig. 1. The values computed for the physical quantities of interest for the SO adsorption are summarized in Table II.
Unlike the CO adsorption case, optimized structures were obtained for both vertical and horizontal orientations of the SO molecule on the surface, implying that SO can adsorb on the Ir(111) surface in both orientations. However, the adsorption in the latter (horizontal) orientation is found to be energetically the most favorable; it is more stable by at least 0.09 eV than the former orientation, implying that the orientation of the SO molecule significantly affects the adsorption.
The most stable site for molecular adsorption is found to be the H-bridge site, with an adsorption energy of −2.992 eV/SO. The molecular adsorption takes place with bond length enlargement, which clearly indicates activation of SO on the Ir(111) surface. The adsorption geometry at the most stable molecular adsorption site, H-bridge, shows the largest bond length enlargement of all the molecular adsorption sites. The bond length of SO adsorbed at this site is 1.602 Å, considerably larger than that of the free molecule. The height of the molecule above the surface at this site is also among the largest, second only to that of the adsorption at the V-top site. The adsorption strength, in general, shows a positive correlation with the enlargement of the bond length of the molecule; stronger adsorption causes a larger bond length enlargement.
In addition to molecular adsorption, dissociative adsorption of SO on the Ir(111) surface was also observed. Among all the initially selected structures, dissociative adsorption occurred in only one case, namely the structure starting from the H-hollow adsorption site. Interestingly, the dissociative adsorption is energetically more favorable than the molecular adsorption: there is a 0.714 eV energy difference between the dissociative adsorption and the most stable molecular adsorption, in favor of the former. The optimized structure for SO adsorption at the H-hollow site (horizontal orientation of SO between successive fcc-hollow and hcp-hollow sites on the Ir(111) surface) exhibits independent adsorption of the S and O atoms at two successive fcc-hollow sites (Fig. 3), yielding a 3.022 Å separation between S and O. The DFT calculation of the atomic adsorption also reveals that the fcc-hollow site is the most stable adsorption site for both S and O atoms on the Ir(111) surface. Hence, the dissociative adsorption of SO can be understood as the S and O atoms of the molecule being pulled apart from each other by their respective interactions with the corresponding closest fcc-hollow sites. This, in turn, triggers breaking of the S-O bond, ending with independent adsorption of the atoms at their energetically favorable adsorption sites.
The nudged elastic band (NEB) method has been employed to investigate the transition state (TS) of the dissociative adsorption of SO along the minimum energy pathway (MEP), converged to a total energy accuracy of 10⁻⁵ eV. The energy barrier for the dissociative chemisorption of the SO molecule on the Ir(111) surface is found to be about E_b = 0.6 eV. Note that the dissociation barrier is defined as the energy difference between the initial and transition states, as shown in Fig. 4.
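The paper does not detail the NEB setup beyond the convergence threshold; a sketch of an equivalent workflow using ASE (file names, image count, and force threshold are assumptions, not the authors' settings) could look as follows.

```python
from ase.io import read
from ase.neb import NEB
from ase.optimize import BFGS
from ase.calculators.vasp import Vasp

# Endpoints: relaxed molecular SO/Ir(111) and dissociated S+O/Ir(111) structures,
# assumed to be stored (with their energies) in the trajectory files below.
initial = read("SO_molecular.traj")
final = read("SO_dissociated.traj")

images = [initial] + [initial.copy() for _ in range(5)] + [final]
neb = NEB(images, climb=True)          # climbing-image NEB for a sharper TS estimate
neb.interpolate(method="idpp")

for image in images[1:-1]:
    image.calc = Vasp(xc="pbe", encut=450, kpts=(3, 3, 1),
                      ismear=1, sigma=0.1, ispin=2)

BFGS(neb, trajectory="neb.traj").run(fmax=0.05)

# Barrier = highest image energy minus the initial-state energy.
energies = [img.get_potential_energy() for img in images]
barrier = max(energies) - energies[0]
print(f"Dissociation barrier ~ {barrier:.2f} eV")
```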
The observed molecular and dissociative adsorption of SO on the Ir(111) surface was further investigated through the electronic structure computed for the most energetically favorable adsorption sites: H-bridge for molecular adsorption and H-hollow for dissociative adsorption, as shown in Fig. 5. The adsorption induces a significant change in the electronic structure. The two peaks observed between −10 eV and −12 eV for the non-interacting systems are merged and shifted down to lower energy with higher weight after the adsorption. The adsorption also changes the density of states of the SO molecule from discrete to continuous over some specific energy ranges, implying charge transfer at those energy levels. The DOS of the d states of the nearest Ir atoms shows a moderate decrease after the adsorption compared with the isolated case. An appreciable difference in the electronic structures is also observed between the molecular adsorption (SO at the H-bridge site) and the dissociative adsorption (SO at the H-hollow site). The two peaks mentioned above shift further to lower energy and lose weight for the dissociative adsorption compared with the molecular adsorption. Between −8 eV and −4 eV, the DOS of the d states of the nearest Ir atoms for the molecular adsorption is moderately lower than that for the dissociative adsorption.
C. Adsorption on the defective surface
Finally, we have investigated the impact that point defects may have on the adsorption of CO and SO molecules on the Ir(111) surface. It is now commonly accepted that defects can drastically alter the chemistry of surfaces by lowering energy barriers and causing large changes in adsorption energies. The presence of defects, consisting of certain guest atoms or vacancies on a surface, can alter the overall interaction of the surface with adatoms and molecules. Hence, we extend this study to the adsorption of CO and SO on the defective Ir(111) surface. The corresponding most stable adsorption sites, as discussed in Secs. III A and III B, were selected for this investigation. The point defect on the surface was modeled by incorporating either a vacancy or a guest atom on the surface: the vacancy is formed by removing one Ir atom from the surface, whereas the guest atom replaces an Ir atom on the surface. The defect site is arbitrarily chosen on the surface, but the adsorbate was placed near the defect site. Our investigation shows no stability problem for the defective surface. Three different guest atoms were selected for this study: Ag, Au, and Pt. The energetic and structural parameters computed for the adsorption on the defective surface are summarized in Table III, which shows that the CO adsorption is favored by the defect, as verified by stronger binding of the molecule to the surface compared with the clean surface. Both forms of defect, the vacancy and the substitutional impurity, yield stronger binding. For instance, the adsorption on the vacancy-containing surface is more stable than that on the clean surface by 0.136 eV. Similarly, the adsorption on the surface with an impurity is more stable than that on the clean surface by 0.346 eV, 0.287 eV, and 0.298 eV when the guest atom is Ag, Au, and Pt, respectively. The bond length enlargement also shows a slight change, increasing in most cases. The height of the adsorbed molecule above the surface decreases significantly for the vacancy case, dropping from 2.111 Å to 1.936 Å, whereas a much smaller change is observed in the presence of an impurity. This difference is expected, since the vacancy creates more space for the molecule to come closer to the surface by avoiding the repulsive interaction that would have occurred had the vacancy been filled by an Ir atom.
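The two defect models described above (a missing Ir atom and a substitutional guest atom) are straightforward to generate; the following ASE snippet is a sketch in which the top-layer defect site is chosen arbitrarily, mirroring the arbitrary choice mentioned in the text.

```python
from ase.build import fcc111

slab = fcc111("Ir", size=(5, 5, 3), vacuum=10.0)

# Pick one top-layer Ir atom (tag == 1) as the defect site; the choice is arbitrary.
i_def = [atom.index for atom in slab if atom.tag == 1][0]

# Vacancy-type defect: remove the chosen Ir atom.
vacancy_slab = slab.copy()
del vacancy_slab[i_def]

# Substitutional defect: replace the chosen Ir atom with a guest atom (Ag, Au, or Pt).
doped_slab = slab.copy()
doped_slab[i_def].symbol = "Ag"
```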
The DFT calculation reveals that the CO adsorption on the surface doped with an Ag atom yields the most stable structure, with an adsorption energy of −2.313 eV, the lowest of all the CO adsorption cases included in this study. Compared with the clean surface, the CO adsorption on the Ag-doped surface is energetically favored by 0.346 eV. The electronic structure, discussed below, has been computed with the purpose of elucidating the adsorption on the Ag-doped Ir(111) surface.
Unlike the CO adsorption, the SO adsorption was not promoted, in terms of energy, by the surface defects investigated in this study. Rather, the SO adsorption on the perfect surface was found to be energetically favorable, as can be verified by comparing the corresponding quantities in Table III with those in Table II. The most stable adsorption on the defective surface is obtained for the Ag-doped surface, which lies 0.356 eV higher in energy than the dissociative adsorption on the perfect surface. The vacancy-type defect was even observed to hinder the dissociative adsorption, yielding only molecular adsorption of the SO molecule with an enlarged bond length. Although both the molecular and dissociative adsorption of SO on the defective surface take place at higher energy than on the perfect surface, they give a larger S-O bond enlargement or a larger separation between the dissociated S and O atoms. For instance, the bond length enlargement for the molecular adsorption of SO on the defective surface (vacancy defect) is about 2.2% greater than that on the perfect surface. Similarly, in the case of the dissociative adsorption, the separation between the dissociated atoms increases by 15.19%, 15.25%, and 2.68% for the Pt-, Au-, and Ag-doped surfaces, respectively, compared with the perfect surface.
Moreover, the Pt atom is found to significantly reduce the energy barrier for the dissociative adsorption of SO (Fig. 6). The NEB calculation gives an energy barrier on the Pt-doped surface of E_b = 0.12 eV, which is only 20% of that on the clean surface. This can be attributed to the catalytic effect of the Pt atom, which can be termed single-atom catalysis. Single-atom catalysis is at the forefront of catalysis research due to its high efficiency and low cost, and Pt is well known for its large potential in single-atom catalysis applications. Our investigation reveals that the situation is entirely different for the other guest atoms, Ag and Au, for which the energy barrier instead increases. This result suggests that even noble metals do not all behave in the same way with regard to single-atom catalysis.
The computed electronic structure, as shown in Figs. 7 and 8, is heavily affected by the adsorption. The sharp states of the free CO molecule are broadened into resonances at lower energies, indicating that electrons are transferred from the CO molecule to the Ir surface. The density distribution of the d states of the nearest Ir atoms of the Ir(111) surface after CO adsorption is slightly lower than that of the clean surface, which indicates the transfer of electrons from the surface to the CO molecule. Similarly, the DOS of the Ag atom decreases appreciably after the adsorption, suggesting that the adsorption process also involves charge exchange with the guest atom. Figure 8 shows that the DOS of the s and p states undergoes a major change; the two peaks from the SO p states near the Fermi level and the two peaks from the SO s states around −11 eV, which were observed in the electronic structure of the isolated molecule, disappear after the adsorption. The DOS distribution of the adsorbed SO is also found to be continuous within a certain energy range, which shows a series of occupied states within that particular energy range.
IV. CONCLUSION
The molecular and dissociative adsorption of CO and SO molecules on the Ir(111) surface has been thoroughly investigated using a first-principles approach. Different initial configurations were selected in order to establish the preferred adsorption site and the corresponding geometry. The study involves both perfect and defective surfaces, with the purpose of investigating the role of surface defects in the adsorption of the molecules, in addition to providing a deeper understanding of the adsorption mechanism of small molecules on the Ir(111) surface. The defective surface involves either a vacancy or a guest atom (Ag, Au, or Pt) replacing one Ir atom of the surface.
The molecular adsorption of CO at the top site is found to be the most stable configuration. CO adsorbs in an end-on manner, with the C-O bond perpendicular to the Ir(111) surface and the C atom facing the surface. The adsorption involves bond elongation, implying activation of CO on the surface. The CO adsorption at the preferred site exhibits the smallest Ir-C bond length, the largest height of the molecule above the surface, and the maximum deformation of the surface compared with all the other adsorption sites included in this study. The DFT calculation also reveals that the adsorption significantly changes the electronic structure of both the molecule and the surface, indicating that the adsorption involves charge transfer between the surface and the molecule. The adsorption is thus believed to be driven by the hybridization of electronic states. The computed electronic structure also suggests that the CO adsorption on the Ir(111) surface follows the Blyholder model of CO adsorption on metallic surfaces. The molecular adsorption of CO occurs on both the perfect and defective Ir(111) surfaces; however, the latter is found to be energetically more favorable. All of the defects modeled in this study significantly enhance the adsorption; all the adsorptions on the defective surface take place at lower energy than on the perfect surface. Despite the confirmed molecular adsorption, no dissociative adsorption of CO was observed. The theoretical investigation of the SO adsorption on the Ir(111) surface reveals that both molecular and dissociative adsorption are possible. The adsorption at the H-bridge site gives the most stable molecular adsorption configuration, characterized by lower energy than all the other adsorption sites. The adsorption is accompanied by bond length enlargement, implying activation of SO on the surface. It also significantly changes the electronic structure of both the molecule and the surface, suggesting that the adsorption involves hybridization of different electronic states. In addition to the molecular adsorption, dissociative adsorption of SO on the Ir(111) surface was also observed; it is energetically more favorable even than the molecular adsorption, with an energy 0.714 eV lower than that of the most stable molecular adsorption. Both molecular and dissociative adsorption of SO were found not to be favored, in terms of energy, by the surface defects inspected in this study. Although the adsorption on the defective surface takes place at higher energy, it gives larger S-O bond elongation or larger separation between the dissociated atoms, which implies that the defective surface drives enhanced SO activation. The NEB investigation of the transition state of the dissociative adsorption of SO along the converged MEP reveals an energy barrier of about 0.6 eV for the clean surface, which is reduced to 0.12 eV on the Pt-doped surface, suggesting the remarkable potential of Pt for single-atom catalysis applications. We believe that the guest-atom-induced reduction in the energy barrier for the dissociation of molecules is promising for further single-atom catalysis applications. This report is expected to aid and guide experimentalists by giving good insight into the dissociation of molecules.
In addition, the information provided here can be used as a benchmark for further investigations of the dissociation of small molecules on transition metal surfaces, which are highly desirable for catalysis applications.
"Chemistry",
"Physics"
] |
MINIMAL LOSS RECONFIGURATION CONSIDERING RANDOM LOAD: APPLICATIONS TO REAL NETWORKS
This paper approaches the minimal loss reconfiguration problem, taking into account the load variations of the systems, through a stochastic reconfiguration process. The Monte Carlo method is used to account for the natural load variation, and a normal probability function is used to generate random load levels at the nodes. The results of this work show the existence of a set of branches that are frequently eliminated. This generates a set of tree branches that best represents the overall randomness of the load, which we call the "Expected Branch Set (EBS)". The topology associated with the EBS coincides with that obtained using the average demand values. This makes it unnecessary to run a large number of tests to find the topology that best accounts for the load variation. The proposed algorithm was applied to two test networks and to a large real network.
INTRODUCTION
Network reconfiguration is an alteration process of the topological structure of distribution feeders through changes in the on/off state of the sectionalizing switches. During normal operating conditions, networks can be reconfigured to reduce the power losses caused by the Joule effect. This process is known as the minimal loss reconfiguration problem. One of the first papers published in this field was presented by Merlin and Back [1], who developed a heuristic approach. This solution scheme starts with a fully meshed system in which all the switching elements are closed. They are then opened one by one until all the closed loops are eliminated and a radial system is obtained. However, the application of this method to real systems is not practical due to the significant computational effort required. The method was later modified by Shirmohammadi and Hong [2], who reduced the computing time by applying a more efficient load flow. Another research approach to minimal loss reconfiguration was proposed by Civanlar [3]. In that paper, an analytical expression is developed in order to estimate the loss reduction produced by opening and closing actions without altering the radiality of the system. The authors also provide some criteria to eliminate undesired switching. This kind of solution, called the "branch interchange algorithm", is based solely on heuristics. Sarfi [4] presents a survey of the area of distribution system reconfiguration, ranging from the fundamental work of Merlin and Back to the state of the art in 1993. Regarding loss reduction in real-time operation considering load variability, Wagner [5] indicated that an important loss reduction was obtained through simulations in Canadian networks during a one-year period. R. Broadwater [6] presented algorithms to reduce losses through load estimators that consider the load variability. Chen [7] showed the benefits of hourly reconfiguration based on short- and long-term loss reduction. An optimal power flow model for minimal losses is applied by C. Brian [8]; that paper presents only results and conclusions about hourly reconfiguration for on-line power operation in an energy control center. Peponis [9] obtained loss minimization by the installation of capacitors and by network reconfiguration, also taking into account the impact of load modeling; the application method is presented in [10]. A heuristic constructive method for minimal loss reconfiguration is proposed by T. E. McDermott in [11]: in each stage, by means of an incremental loss evaluation, a new node is added that introduces minimal losses. López [12] presented an algorithm for minimal loss reconfiguration based on the dynamic programming approach. This method is quite simple and the results are obtained in a very short computing time, so it is applicable to large real systems; therefore, it opens a way to real-time reconfiguration of networks. Finally, López [13] presented an application of the algorithm to on-line reconfiguration considering demand variability using daily profiles of various loads (industrial, commercial, public lighting).
One important conclusion of these previous works is that, in the short and medium term, the load variation is not relevant to the topological solutions; i.e., independently of all possible values of demand, the total number of topologies to be considered is extremely small. This is because the objective function of the optimization problem levels off around the optimum zone (the objective function is less sensitive to the load demand in this region). Consequently, we analyze the reconfiguration problem as a probabilistic problem, assuming the existence of random events that could affect the resulting topologies. In this paper, the switching actions to reduce losses take into account the varying nature of the loads. This is done through the Monte Carlo Method Applied to Reconfiguration (MCR): node powers are described by a Normal Probability Function (NPF) with an expected power (μ) and a standard deviation (σ). The MCR is applied first to two test networks and then to a real network in order to evaluate the reconfiguration advantages considering the random load at each node. The studies are made with the model developed in [12], including demand aspects such as the load models themselves (constant P, Z or I). Daily load patterns, such as those shown in [13] and [16], are used to assume average demand values and standard deviation values. Based on the preceding information, values of σ of 15%, 30% and 50% are used in this paper. These standard deviation values amply cover the load variations of real systems.
Minimal Loss Problem
The minimal loss reconfiguration problem in distribution systems, through topological changes, can be written as an optimization problem [1][2][3][4] in which objective (1) minimizes the active power losses, subject to the following constraints: Equation (2) corresponds to the balance of load currents at each node, Equation (3) to the feeders' thermal limits, and Equation (4) to the voltage constraints at each node; finally, constraint (5) is the radiality restriction of a primary distribution system.
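The equations themselves do not survive in this extract; a generic formulation consistent with the constraint descriptions above, written in notation assumed here rather than taken from [1]-[4], is:

```latex
\begin{align}
\min_{\text{radial topologies}} \quad & f = \sum_{k \in B_{\text{closed}}} R_k \,\lvert I_k \rvert^{2}
        && \text{(1) active power losses} \\
\text{s.t.} \quad & \sum_{k \in \Omega_n} I_k = I_n^{\text{load}} \quad \forall n
        && \text{(2) current balance at each node} \\
& \lvert I_k \rvert \le I_k^{\max} \quad \forall k
        && \text{(3) feeder thermal limits} \\
& V^{\min} \le V_n \le V^{\max} \quad \forall n
        && \text{(4) node voltage limits} \\
& \text{the energized network forms a radial tree}
        && \text{(5) radiality}
\end{align}
```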
Demonstrative Example
The procedure can be described using the small test system shown in figure 1, whose parameters are given in table 1. Node 1 is the only source. The summarized results are shown in table 2. At the beginning, we consider no connection at all, so that there are no losses. Then, in the first stage, we consider connecting one node to the unique source; there are only two choices: connect node 2 or node 3. Connecting 1-2 leads to losses equal to 0.035 (p.u.), while connecting 1-3 leads to losses equal to 0.200 (p.u.). We choose to connect 1-2 (figure 2.1), so the selected variable is X1*=2 and the following state is S=2. Similarly, in stage 2 the selected variable is X2*=3 and the next state is S=3 (figure 2.2). In stage 3 the selected variable is X3*=4 and the next state is S=4 (figure 2.3). Stage 4 does not imply big changes because there is only one possible way to incorporate state 5 (figure 2.4), giving f*(5)=1.474 (p.u.). When state S=5 is incorporated in stage 4, a "horizontal chaining" is produced: a new state is not incorporated, but the branch distribution is recombined. For stage 5, the selected variable is X5*=3 and the next state is S=2, which implies that branches 2-4 and 3-4 are interchanged in stage 6 (figure 2.5). Stage 7 does not make any changes in the topological structure (figure 2.5). Finally, branches 1-2, 1-3, 3-4 and 4-5 make up the definitive configuration. The loss for this configuration is 1.840 (p.u.).
General Algorithm of Reconfiguration
The minimal loss reconfiguration is solved by the dynamic programming approach. The following algorithm describes the method [12].
i) System data: number and rating of power substations and feeders, topology, and switching possibilities of the power apparatus connected to the network.
ii) Actual operation: evaluate the actual system conditions such as node voltages and real and reactive losses.
iii) Graph compression: when there is a set of nodes with a non-reconfigurable radial topology, an equivalent node representing the load of the subsystem is considered.
iv) Possible node connections: the process goes from each source node of the network (substation) to the final load nodes, connecting each new possible node.
v) Loss functional evaluation: in each stage, the connection of the new node that produces the lowest increment in the loss functional is added to the tree.
vi) Radial load flow: determine the voltage profile, currents and losses; nodes are treated according to the load type (constant P, Z or I).
vii) Backtracking process: the effect of the last load connection on the structure is evaluated by applying a backward process.
viii) Constraints: verification of thermal limits in substations and feeders, voltage profiles and other constraints; if a constraint is not fulfilled, a transfer of loads between substations should be made and step iv) performed again.
ix) Radial systems: the process continues until all loads are connected to the network; if not, return to step iv).
x) Final loss evaluation: a fast radial load flow is applied to determine the network's final losses.
THE MONTE CARLO RECONFIGURATION
The Monte Carlo Method (MCM) is basically a statistical simulation that uses a random sequence of numbers to describe the statistical behavior of a variable (in this case the node demand). This work uses an NPF with characteristic statistical indices: expected value (μ) and standard deviation (σ). The distribution of a normal variable is entirely determined by these two parameters. The NPF is described by equation (6), which determines the "bell-shaped" curve shown in figure 3. The premise of the MCM, as a mathematical technique, implies the search for an efficient solution rather than an exact one [14]. The composite demands of the substations vary in time according to the mix of industrial, commercial, residential, street lighting and mixed loads [16][17][18][19], following different patterns of behavior. The use of the NPF accounts for the natural composite load variations, based on the characteristics of the demand predictors and the data acquisition systems (ranges, variances and Pearson variation coefficients). On the other hand, practice indicates that the composite demand values at a substation exhibit moderate kurtosis and low Fisher asymmetry (skewness) coefficients. Thus, the probabilistic values of the demand (P n and Q n) can be correctly represented by quasi-mesokurtic functions, i.e., by an NPF [15]. Consequently, the expected value and standard deviation of the node composite demands were assumed, i.e., mean loads and realistic standard deviations for the electrical systems studied (two test systems and one real system), according to the specific node and the study to be carried out. This permits working with two test systems from the literature whose particular stochastic behavior is unknown. Moreover, the above supports the application of our method to the operation planning of distribution networks.
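A small sketch of the load-sampling step implied by this description: each node's active and reactive powers are drawn from a normal distribution with the node's expected value and a standard deviation given as a fraction of that value (15%, 30% or 50%); the truncation at zero is an added assumption, not something stated in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_node_loads(p_mu, q_mu, sigma_frac):
    """Draw one random realization of active/reactive node demands from an NPF."""
    p_mu = np.asarray(p_mu, dtype=float)
    q_mu = np.asarray(q_mu, dtype=float)
    p = np.clip(rng.normal(p_mu, sigma_frac * p_mu), 0.0, None)
    q = np.clip(rng.normal(q_mu, sigma_frac * q_mu), 0.0, None)
    return p, q

# Example: three nodes with expected demands in p.u. and sigma = 30% of the mean.
p, q = sample_node_loads([1.0, 0.5, 2.0], [0.4, 0.2, 0.8], sigma_frac=0.30)
```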
ALGORITHM USED
The algorithm developed to determine the minimal-loss topology, considering the random variation of the node loads, is made up of the following steps [12], which combine the dynamic-programming reconfiguration with the Monte Carlo load sampling described above.
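A sketch of how that loop might be organized, paraphrased from the surrounding sections (3000 evaluations, random node loads, tallying of opened branches, and the EBS as the most frequently opened set); the reconfigure call stands for the dynamic-programming solver of [12] and is hypothetical here, while sample_node_loads is the sketch from the previous section.

```python
from collections import Counter

def monte_carlo_reconfiguration(network, p_mu, q_mu, sigma_frac, n_runs=3000):
    counts = Counter()
    n_open = 0
    for _ in range(n_runs):
        p, q = sample_node_loads(p_mu, q_mu, sigma_frac)   # random demand scenario
        opened = reconfigure(network, p, q)                # hypothetical solver of [12]
        n_open = len(opened)                               # fixed by the radiality constraint
        counts.update(opened)
    # Expected Branch Set: the branches opened most frequently over all runs.
    ebs = [branch for branch, _ in counts.most_common(n_open)]
    return ebs, counts
```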
APPLICATIONS
Method Validation
Table 3 presents relevant data for both test systems and for the real system: the number of sources, nodes and lines, and the active and reactive powers at each node. The real system includes commercial, residential, industrial and public lighting loads [12].
Testing System 1
When applying the MCR to System 1 (see figure 4), five branches must be eliminated (opened) to obtain a radial topology with minimal losses. Tables 5, 6 and 7 show the results for standard deviations of 15%, 30% and 50%, respectively. The results show the eliminated-branch sequences, the number of the eliminated branch, the nodes between which the eliminated branch is connected (N p -N q), the occurrence, and the branch elimination percentage with respect to the 3,000 evaluations.
Testing System 2 [20]
The MCR is applied to System 2. In this system, 22 branches must be eliminated to obtain a radial topology with minimal losses. Tables 8, 9 and 10 show the summarized results for standard deviations of 15%, 30% and 50%, respectively. The results show the eliminated-branch sequences, the occurrence, and the branch elimination percentage with respect to the 3,000 evaluations.
Real System [21]
The MCR was applied to determine the minimal-loss topologies of the real system, taking into consideration the random values of the node loads. For this system, 43 branches must be opened in each reconfiguration step to maintain a radial topology with minimal losses. Tables 11, 12 and 13 present the summarized results obtained for standard deviations of 15%, 30% and 50%, respectively. These tables show the change in the opening occurrence and in the percentage of eliminated branches when applying the reconfiguration, starting from sequence 43.
Topologies comparison in real system
Table 14 shows a comparison between the losses obtained when reconfiguring the real system, applying the method proposed in [12], and the losses obtained when using the EBS topology at different load levels (75%, 100% and 125%).
RESULTS ANALYSIS
The results obtained for System 1 (tables 5, 6 and 7) show that one topology can be considered the most often repeated when taking into account random node loads with various standard deviations. Such an "EBS topology" is obtained by opening five branches to maintain a radial topology. For System 2, the same behavior is observed: an "EBS topology" is obtained by eliminating 22 branches (tables 8, 9 and 10). However, it must be mentioned that, for both test systems, two relevant facts occur when the standard deviation increases (15%, 30% and 50%): the branch elimination percentage decreases (80.83%, 50.50% and 46.37% for sequence 5 in System 1, tables 5, 6 and 7), and an increase in the number of distinct eliminated (opened) branches is observed (11, 18 and 24 branches are eliminated in System 1, tables 5, 6 and 7).
For the real system, 43 branches must be eliminated to obtain a radial topology with minimal losses. Tables 11, 12 and 13 show the different eliminated branches and their occurrence. It can be observed that for the last eliminated branch (sequence 43) there is an occurrence of 93.73%, 92.60% and 92.10% for standard deviations of 15%, 30% and 50%, respectively (tables 11, 12 and 13). Beyond that branch, the opening occurrence at sequence 43 falls suddenly to 5.93%, 7.03% and 7.73% for standard deviations of 15%, 30% and 50%. Moreover, as in the test systems, an increase of the standard deviation produces a decrease in the opening occurrence of the eliminated branches and, on the other hand, an increase in the number of opened branches.
Table 14 shows the loss levels when a reconfiguration is applied to the real system (using the methodology presented in [12]), taking into account different load levels of the system (75%, 100% and 125%). The table also shows the results obtained when applying only one radial load flow to determine the losses with the "EBS topology" at the same load levels. Between the losses obtained by reconfiguring the real network at the different load levels and those obtained using the "EBS topology", the differences are very small (6.851E-5% in case 2).
CONCLUSIONS
This paper presents a method for probabilistic minimal-loss reconfiguration in electrical distribution systems considering random node loads. The loads are modeled with a normal probability function, with an expected value (μ) and standard deviations (σ) based on the authors' own experience as well as on that reported by other authors.
In each system analyzed, 3,000 reconfiguration evaluations were performed with the purpose of finding in each case an Expected Branch Set (EBS). This expected branch set, or "EBS topology", is what best represents the random behavior of the system demand.
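As a rough illustration of this procedure (a sketch only: the minimal-loss reconfiguration routine itself is replaced by a toy placeholder, and all names are illustrative), the following Python snippet samples normally distributed node loads and counts how often each branch ends up opened over the 3,000 evaluations, so that the most frequently opened branches can be taken as the EBS:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def reconfigure_min_losses(loads, n_open=5):
    # Placeholder standing in for the minimal-loss reconfiguration of [12];
    # here it simply "opens" the n_open branches adjacent to the lightest
    # loads, purely so that the sketch runs end to end.
    return tuple(int(b) for b in np.argsort(loads)[:n_open])

def expected_branch_set(mean_loads, sigma_ratio=0.15, n_eval=3000, n_open=5):
    counts = Counter()
    for _ in range(n_eval):
        # Each node load is drawn from N(mu, (sigma_ratio * mu)^2).
        loads = rng.normal(mean_loads, sigma_ratio * np.abs(mean_loads))
        for branch in reconfigure_min_losses(loads, n_open):
            counts[branch] += 1
    occurrence = {b: 100.0 * c / n_eval for b, c in counts.items()}  # in %
    ebs = [b for b, _ in counts.most_common(n_open)]  # the "EBS topology"
    return ebs, occurrence

# Example: ebs, occ = expected_branch_set(np.full(33, 100.0), sigma_ratio=0.30)
```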
In practice, only this EBS topology must be evaluated to account for the random load. The results obtained when analyzing the real system show that the highest-frequency EBS topology gives loss levels that are very close to the optimal losses obtained by the Monte Carlo reconfiguration process, taking into account all the random load levels. Therefore, the EBS topology turns out to be a useful tool in the planning and operation phases of distribution systems.
On the other hand, in all the studied cases, both test and real systems, it was found that an increase of the node power standard deviation (a larger load variation) causes a decrease in the opening frequency of the eliminated branches. At the same time, an increase in the number of eliminated branches is observed.
Finally, the results obtained in this paper are consistent with the heuristic results presented in [13], where the on-line hourly load variation is considered.
Figure 2. Development stages of the minimal loss topology.
Figure 3. Probability Function with Normal Distribution.
Table 1. Test System Characteristics.
Table 2. Application of DP Method to the Test System.
Table 3. Parameters of Test Systems.
Table 14. Loss Level Comparison.
| 3,889.2 | 2008-06-01T00:00:00.000 | ["Engineering"] |
Collective Attention and Stock Prices: Evidence from Google Trends Data on Standard and Poor's 100
Today's connected world allows people to gather information in shorter intervals than ever before, widely monitored by massive online data sources. As a dramatic economic event, the recent financial crisis increased public interest in large companies considerably. In this paper, we exploit this change in information gathering behavior by utilizing Google query volumes as a "bad news" indicator for each corporation listed in the Standard and Poor's 100 index. Our results provide not only an investment strategy that gains particularly in times of financial turmoil and extensive losses by other market participants, but also reveal new sectoral patterns between mass online behavior and (bearish) stock market movements. These findings, based on collective attention shifts in search queries for individual companies, can hence help to identify early warning signs of financial systemic risk. However, our disaggregated data also illustrate the need for further efforts to understand the influence of collective attention shifts on financial behavior in times of regular market activity with less dramatic changes in search volumes.
Introduction
In the past decade, connections between people all around the globe have increased dramatically due to technological innovations related to the internet. The ongoing worldwide computerization and integration provides great opportunities for scientists to enhance our understanding of the complex systems in which humans live today. The increasing availability of massive social media data abets efforts trying to explain collective behavior with methods stemming from the natural sciences [1][2][3][4][5][6][7], allowing the transfer of knowledge about mechanisms already found in, for instance, complex ecological systems [8][9][10].
Given the impact of the recent financial crisis on economic wealth, political decisions and personal fortunes, researchers have devoted special interest to patterns in modern financial markets [11][12][13][14][15][16][17][18][19][20][21][22][23]. For an approximation of collective financial behavior, diverse online sources have already been used that yield complementary results: strong correlations are reported between trading volumes of securities and the frequency with which brand names appear on Twitter [24] and the number of daily search queries on Yahoo [25], respectively. Editing activity in Wikipedia is linked to critical events in the near future [26], and the text content of daily tweets has been analyzed with respect to its mood and found to be predictive of changes in the values of the Dow Jones Industrial Average [27]. Moreover, the Bitcoin crypto-currency and its price dynamics have been shown to exhibit, besides more fundamental and technical drivers [28], strong relationships with the number of new users, Wikipedia page views and search queries provided by Google Trends data [29,30]. The latter publicly available service seems to be especially fruitful for scientists seeking to comprehend collective financial behavior. Therein, Google provides access to aggregated information on the volume of queries for specific search terms over time. Although mostly capturing the attention of uninformed investors [31], these online search query data have delivered useful information to predict trading volumes [32], to diversify portfolio risks [33], and to quantify trading behavior with given keywords [34] or with semantic topics derived by a latent Dirichlet allocation ("topic modelling") of Wikipedia articles [35].
To explain collective financial actions, it is helpful to recall Herbert Simon's [36] famous notion that decisions of economic actors start with the gathering of information, yet that the attention of those actors is rather limited compared to the amount of available information. His observation stems from the 1950s, but it seems to be more valid than ever in modern societies. Real-time information supply from countless online sources makes selection processes increasingly important for investors and an even richer research area for scientists. However, a series of problems clouds the possibilities of social media data [37][38][39]. In addition to more general issues of adequate methodological standards for analyzing large social media data discussed by Ruths and Pfeffer [40], we can add, in accordance with Sun and others [41], that the cited studies concerned with collective human behavior and financial markets mainly focus on the prediction of composite indices. In contrast, the influence of collective attention shifts on individual stock price movements is so far a widely unexplored question.
This paper tries to fill this gap by investigating not only aggregate compositions but also individual stock prices and their connection to firm-specific volumes of Google search queries. The disaggregated data set allows the examination of the direct relationship between stock prices and company search volumes on different levels of aggregation. By including company-level information about sectoral affiliations we can study how diverse ways of doing business in different branches inspire different information gathering strategies. Moreover, we can also investigate individual company performances. The connection between stock markets and Google search volume becomes evident on all levels during the recent financial turmoil; a period in which returns of a Google Trends based strategy by far exceed average market developments. Our findings are consistent with the intuition that recessions are highly suitable for market predictions made by collective behavior indicators, since such downturns draw mass attention to economic issues in general and therein to the most affected business sectors in particular.
Results
For our analysis we gathered the search volumes provided by Google Trends for all companies listed in the Standard & Poor's 100 (SPY) in August 2014 (see the Material section for further details and S1 File for the actual data) [42]. The SPY index composition is based on one hundred large and well-established "blue chips" and represents ten major economic branches defined by the Global Industry Classification Standard [43]. We used the full corporate name in combination with "company" as the search term to avoid semantic ambiguities. The scores produced by Google Trends consist of the volume of each search query relative to the total number of searches carried out at each point in time. The results are reported weekly and the subsequent data set was collected for the period between 4 January, 2004 and 4 January, 2014. For five companies we could not find any Google Trends scores with the described search terms; they are excluded from further analysis (an overview of the missing data is provided in S2 File). The following results are therefore based on the remaining 95 stocks, and a stricter sample (yielding similar results) is discussed in the Material section.
In order to identify shifts in collective information retrieval behavior we calculate relative changes in search volumes for each stock: Δn_i(t, Δt) = n_i(t) − N_i(t−1, Δt), with N_i(t−1, Δt) = (n_i(t−1) + n_i(t−2) + ... + n_i(t − Δt)) / Δt, where n_i is the relative search volume for stock i and Δt is set to three weeks, as done by Preis and colleagues [34]. To analyze the average change for each week over all company search queries we use ⟨Δn(t, Δt)⟩ = (1/N) Σ_i Δn_i(t, Δt), the mean of the individual changes over all N stocks. Utilizing the sectoral information of each stock underlines the negative relationship between searches and markets, which, however, differs substantially across industries, as shown in Fig 2. The largest attention shifts take place with regard to financial corporations during the Subprime crisis. Additionally, the company-level data reveal some more specific intersections with real-world events. Within the Materials sector, for instance, search queries see large changes in 2004 and 2005 due to regulatory inspections about DuPont's involvement in the release of perfluorooctanoic acid (PFOA, also known as C8) into drinking water [44].
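A minimal sketch of the relative-change calculation defined above (variable names are illustrative; `volumes` is assumed to be a weeks-by-stocks array of Google Trends scores):

```python
import numpy as np

def relative_change(volumes, dt=3):
    # Delta n_i(t, dt) = n_i(t) - (n_i(t-1) + ... + n_i(t-dt)) / dt
    weeks, _ = volumes.shape
    delta = np.full(volumes.shape, np.nan)
    for t in range(dt, weeks):
        baseline = volumes[t - dt:t].mean(axis=0)  # N_i(t-1, dt)
        delta[t] = volumes[t] - baseline
    return delta

# Average change over all companies for each week, <Delta n(t, dt)>:
# mean_change = np.nanmean(relative_change(volumes), axis=1)
```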
Despite these peaks in public attention, Figs 1 and 2 provide only "weak" (i.e., descriptive) evidence. Using a simple regression model with a basic control variable in terms of the S&P 500 volatility index reveals that the statistical relationship between the development of the SPY index and the average change over all (and sectoral) company search queries is indeed negative, as expected, but not significant (Table 1). This non-significance holds on the general level and for each considered sector. Hence, during regular market developments, which make up the better part of the observation period, there are no big changes in the public interest in large corporations; only through extraordinary events do people seem to modify their search behavior in this respect.
Although there is no "hard" evidence for a general connection between the average (or sectoral) shift in search queries and the SPY index development, peak times of financial turmoil are visibly accompanied by increased collective attention. The main part of this paper tries to exploit these collective attention shifts and investigate their relationship with individual stock prices. For this purpose, we implement a hypothetical trading strategy based on company-level Google Trends scores. The intuition behind our strategy is to take an investment position that utilizes collective attention as an indicator for "bad news" and treats an increase in collective search queries as a signal to go "short", as was successfully done in [33,34]. Following this approach, we first set all portfolios to an arbitrary value of 1. We implement the proposed strategy by selling a certain stock i at closing price p_i(t) on the first trading day of week t and buying it back at closing price p_i(t+1) on the first trading day of the consecutive week, if the relative change in search queries is higher than the weekly average (i.e., Δn_i(t−1,Δt) > 0). The cumulative return R_i of this "short position" then changes by log(p_i(t)) − log(p_i(t+1)). If, in contrast, Δn_i(t−1,Δt) < 0, the relative change in search volume indicates no "bad news", but neither an immediate incentive to buy stock i with regard to changes in collective attention. In this case, we rely on the SPY index as a general indicator for collective financial behavior. We identify shifts in SPY prices by calculating relative changes as shown above for the Google Trends scores. Thus, the "long position" is taken if the SPY index at the beginning of a trading week is higher than its average over the three preceding weeks (i.e., Δn_spy(t−1,Δt) > 0). The cumulative return R_i then changes by log(p_i(t+1)) − log(p_i(t)). If the index is below average at the beginning of a week we go "short". As before, the cumulative return R_i of this "short position" is then aggregated by log(p_i(t)) − log(p_i(t+1)). In summary, our investment strategy utilizes, on the one hand, Google Trends as an indicator to "short-sell" certain stocks with high attention scores, which appear especially around large market movements. On the other hand, our strategy follows the SPY index development and its mapping of general financial behavior.
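To make the decision rule concrete, here is a compact sketch of the strategy (not the authors' original code; it assumes that the weekly closing prices and the lagged relative changes of search volume and SPY index are already aligned per stock, and all names are illustrative):

```python
import numpy as np

def google_trends_strategy(prices, delta_search, delta_spy):
    # prices:       p_i(t), closing prices on the first trading day of each week
    # delta_search: Delta n_i(t-1, dt), known before trading week t starts
    # delta_spy:    Delta n_spy(t-1, dt), known before trading week t starts
    weekly = np.log(prices[1:]) - np.log(prices[:-1])  # log(p(t+1)) - log(p(t))
    position = np.where(
        delta_search[: len(weekly)] > 0, -1,            # "bad news": go short
        np.where(delta_spy[: len(weekly)] > 0, 1, -1),  # else follow the index
    )
    return 1 + np.cumsum(position * weekly)              # portfolio starts at 1
```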
In Fig 3, the mean performance of the Google Trends strategy for all stocks contained in the Standard & Poor's 100 is illustrated by a blue line. Each company thereby has the same weight in the constructed portfolio. To get an approximation of the average market evolution we depicted the development of the index itself as a red line. Their difference is dyed blue. As a baseline we implement a random strategy, in which investment decisions are generated in an uncorrelated manner by buying and selling the SPY index randomly. This simulation was executed 10,000 times and the reported results are the mean for each point in time of this procedure. The dashed lines indicate the standard deviation of the simulations. Applied to all stocks of the SPY index in the period between 2004 and 2014, the accumulated return of the Google Trends strategy clearly exceeds the average market development. To compare our hypothetical investment strategy we implement a "buy and hold" strategy based on the stocks of each sector to get an "industry-based strategy", buying all shares of an industry at the beginning and selling them at the end of each week. The mean cumulative returns of all stocks of one sector are used as the sector-specific baseline and depicted as red lines. Fig 4 draws a more differentiated picture of the performance of the Google Trends strategy: it outperforms the industry average in every sector. However, for some sectors the Google Trends strategy produces significantly higher returns than in others. Clearly, the most profound attention shifts occur in Financials. The strategy increases the value of a hypothetical portfolio that trades financial stocks by over 320%. For branches relatively unaffected by the financial crisis (e.g., Health Care or Telecommunication Services) our Google Trends strategy generates lower profits, yet still considerably higher gains than the average of the respective industries. This cross-sectoral pattern is consistent with the above reasoning that collective attention shifts are a major influence factor in stock price formation, since the strategy performs particularly well in times of large market movements and with companies that are at the center of massive public interest in the period between 2004 and 2014.
Finally, we can investigate the effect of the proposed Google Trends strategy for individual companies. In Fig 5 we present the performance of the 10 highest-weighted constituents of the Standard & Poor's 100 [44]. Generally, the same pattern of success is visible. For all companies the Google Trends based strategy generated considerable gains, and in all but one case (Johnson & Johnson) the cumulative profits were higher than the 36% of the index. Similar to the observations on the sectoral level, the results are related to the way of doing business and the industry affiliation, respectively. The companies benefiting most from the investment strategy are banks and other financial institutions (e.g., JP Morgan Chase), since they were at the center of the financial turmoil during the Subprime crisis. In contrast, firms with businesses that are not directly connected with the financial market (e.g., Procter & Gamble) are less suited to the investment strategy. Again, collective attention shifts as a negative trading signal apply especially well to corporations that were hit hard by the recent financial crisis.
Discussion
Our results demonstrate that publicly available data on collective information gathering behavior can help investors in times of financial turmoil to hedge portfolio values and even extend their profits. In the period between January 2004 and January 2014 we investigated Google Trends data for companies listed in the Standard & Poor´s 100 index and detected increases in search volumes during the large market slumps in 2008 and, to a lower degree, in 2010. Investors who would have utilized these company-level search queries as an indicator for "bad news" could have gained considerable profits, particularly for sectors that were in the center of the Subprime crisis.
Interpreting Google Trends as an approximation of collective informational needs, the results may represent a general pattern in modern investment decision making regarding the importance of collective attention shifts for stock price formation. In times of large changes and great uncertainty, the necessity to collect information about investment assets like stocks seems to be especially high [12,15,17,18,33,34]. In today's world this means "googling" the companies one is interested in. At the same time, economic issues see a rise in media coverage during such market upheavals [13] and, hence, are more likely incorporated into everyday-life conversation topics. These (and many more conceivable) micro-social processes become manifest in increasing search volumes for affected companies and are succeeded by decreasing stock prices. Therefore, changes in collective attention are an important reason for the particular success of our trading strategy during the recent crisis and within afflicted sectors and companies; an explanation that follows directly from Herbert Simon's suggestion of attention as a scarce commodity that governs the gathering of information, especially in times of economic uncertainty [36].
However, the pattern applies particularly well if the state of the economy is turning drastically [20]. This means that collective attention in terms of Google Trends data serves especially well as an indicator of "bad news" and subsequently falling prices, which can be exploited by going "short" on those assets. In contrast, people do not seem inclined to search in great numbers for corporations that present, for instance, respectable annual reports or announce noteworthy sales numbers. Thus, collective attention for large companies may follow the general media logic that "good news is no news", or in our case more precisely, that "only bad news is relevant news" when used as an investment signal. In this way, the interplay between collective information gathering and financial behavior may even contribute to the overreaction of investors during financial crises and the subsequent magnification of economic slumps.
As a consequence, Google Trends data mainly offer a possibility to investigate collective financial behavior and search queries in negative economic contexts. Regular market environments with steadily rising prices, in contrast, seem less connected to collective attention shifts, as far as current evidence tells us. Nevertheless, we are convinced that in the near future more disaggregated data will be available, so that regular market contexts can also be investigated in greater depth and advance our knowledge about complex social systems.
Material
We retrieved search volume data from the Google Trends website (http://www.google.com/trends) every day between August 23, 2014 and August 29, 2014 for all companies in the Standard and Poor's 100 index. The index composition is taken as of August 29, 2014 and has not changed since then. Search volume data are restricted to requests of users located in the USA, the home location of all companies contained in the Standard and Poor's 100 index. The series are reported weekly on a Sunday-to-Saturday frequency. Since only five search terms can be looked up simultaneously, we retrieved the data for each company separately. To avoid semantic ambiguities we used the full company name plus the word "company". The search volumes are normalized by Google with a maximum of 100, serving as a scaling factor for the rest of the series. Due to this normalization, Google Trends results depend on the time of observation. Therefore, we provide in S1 File the original data used in this article to facilitate the reproduction of our results.
However, there are missing values of different degrees within the dataset. For five companies we could not find any Google Trends scores with the described search terms. They have no values for all 522 weeks and are therefore not included in the analysis. A detailed distribution of the missing values can be found in S2 File. For 71 companies we have "complete" data in the sense that not more than four weeks are missing for the whole observation period. Calculating the results only for those companies, Fig 6 shows a very similar development, with a slightly better performance of the proposed investment strategy. Thus, the results for the entire dataset represent a lower boundary for possible profits, i.e., stricter data cleaning can even improve the profits.
To further support the reliability of the proposed trading strategy, we calculated another possible approach: we reversed the whole trading strategy by buying (instead of selling) if search volumes are above average. If this case does not apply, we follow the SPY index, buying if it is below its mean (i.e., Δn_spy(t−1,Δt) < 0) and selling otherwise, which is for both elements of the strategy the exact opposite of what was initially suggested. The subsequent results are shown in S4 File and represent the inversion of the curve presented in Fig 3. Thus, if an investor had applied this (hypothetical) trading strategy and used Google Trends scores as a buying signal, she would have generated a huge loss.
Furthermore, the stock prices were downloaded from Yahoo Finance (http://finance.yahoo.com) on a daily basis for every trading day between January 4, 2004 and January 4, 2014. Closing prices p_i(t) at the first trading day of a week are matched to the Google Trends data of the previous week, when the information would have been available to hypothetical investors. In a regular trading week, for instance, closing price p_i(t) on Monday would correspond to Google search queries for i in the previous week, which are available on the preceding Saturday.
All steps of the analysis described above were conducted with the open-source language Python. All figures were drawn with the R package "ggplot2". S4 File. Cumulative performance of a strategy that reverses the proposed investment mechanism. The reversed trading strategy buys (instead of selling) if search volumes are above average and otherwise follows the SPY index, buying if it is below its mean and selling otherwise, which is for both elements the opposite of the strategy in Fig 3. (TIF)
| 4,706.8 | 2015-08-10T00:00:00.000 | ["Economics"] |
powerLang: a probabilistic attack simulation language for the power domain
Cyber-attacks on power-related IT and OT infrastructures can have disastrous consequences for individuals, regions, and whole nations. In order to respond to these threats, the cyber security assessment of IT and OT infrastructures can foster a higher degree of safety and resilience against cyber-attacks. Therefore, the use of attack simulations based on system architecture models is proposed. To reduce the effort of creating new attack graphs for each system under assessment, domain-specific languages (DSLs) can be employed. DSLs codify the common attack logics of the considered domain. Previously, MAL (the Meta Attack Language) was proposed, which serves as a framework to develop DSLs and generate attack graphs for modeled infrastructures. In this article, powerLang, a MAL-based DSL for modeling IT and OT infrastructures in the power domain, is proposed. Further, it allows analyzing weaknesses related to known attacks. To compose powerLang, two existing MAL-based DSLs are combined with a new language focusing on industrial control systems (ICS). Finally, this first version of the language was validated against a known cyber-attack.
Introduction
Recent deliberate disruptions of electrical power and energy systems (Defense Use Case 2016; Petermann et al. 2011) have shown that cyber-attacks on power assets can have disastrous consequences for individuals, regions, and whole nations. Attackers use malicious code to manipulate the controls of power grids, energy providers, and other critical infrastructure (Liu et al. 2011; Wang and Rong 2009). These manipulations result in catastrophic real-world physical damage, like major power outages or city-wide disruptions of any service that requires electric power (Defense Use Case 2016; Petermann et al. 2011; Rosas-Casals et al. 2007). In order to respond to these threats, the assessment of the power domain's cyber security fosters a higher degree of safety for the entire society dependent on electric power.
Unfortunately, assessing the cyber security of an entire domain and its single entities is difficult. To identify vulnerabilities, security-relevant parts of the system must be understood, and all potential attacks need to be identified (Morikawa and Yamaoka 2011).
There are three challenges that need to be solved: First, all relevant security properties of a system need to be identified. Next, it is difficult to collect the necessary information about the systems. Last, the collected information needs to be processed to detect all weaknesses that can be exploited. To address this, the use of attack simulations based on system architecture models has been proposed (e.g., Ekstedt et al. 2015; Holm et al. 2015). These approaches take a model of the system and simulate cyber-attacks to identify weaknesses. In other words, this is an execution of many parallel virtual penetration tests. This allows the security assessor to focus on collecting the information about the system that is necessary for the simulations.
As the previous approaches rely on a static implementation, the use of MAL (the Meta Attack Language) (Johnson et al. 2018) was proposed. This framework for domain-specific languages (DSLs) defines which information about a system is required and specifies the generic attack logic. Since MAL is a meta language (i.e., the set of rules that should be used to create a new DSL), it does not define a particular domain of interest. Therefore, this work aims to create and evaluate a MAL-based DSL for the simulation of known cyber-attacks in the power domain.
To achieve this goal, we reuse two existing MAL-based DSLs: coreLang, which is designed to model common IT infrastructures, and sclLang (Ling 2020), which covers the internal structure of substations. To bridge the gap between the IT world, represented by coreLang, and the technical world of sclLang, we propose icsLang to provide a means for modelling OT environments.
The rest of this work is structured as follows: First, we present the state of the art; second, we explain the objectives for our language; then, we provide the knowledge needed to create powerLang and describe how it is composed; next, we demonstrate powerLang based on the Ukrainian scenario and discuss the generated insights, before we conclude our work.
State of the art
Our work relates to three areas of previous work: model-driven security engineering, attack/defense graphs, and security assessment in the power domain.
Within the domain of model-driven security engineering, a large number of domain-specific languages has been proposed (Alam et al. 2007; Basin et al. 2011; Jürjens 2005; Paja et al. 2015). These languages typically allow modeling a system's design with respect to its components and the interactions among them. Additionally, these languages encompass security properties such as constraints, requirements, or threats. Different formalisms and logics are used to provide the opportunity for model checking and searching for constraint violations. However, not all languages support automated analysis (Almorsy and Grundy 2014; Lund et al. 2010). Instead, they solely offer the capability to model security-relevant properties, and the analysis needs to be conducted manually.
The concept of attack trees was made popular by the work of Bruce Schneier (1999; 2000), further formalized by Mauw and Oostdijk (2005), and extended to include defenses by Kordy et al. (2010). There are several approaches elaborating on attack graphs (Ingols et al. 2009; Kordy et al. 2014; Williams et al. 2008). These theoretical descriptions led to the development of different tools using attack graphs, which mostly build on collecting information about existing systems and automatically creating attack graphs. For example, the TVA tool (Noel et al. 2009) models security conditions in networks and enriches them with exploits that describe the transitions between these security conditions. Such attack graphs can be extended to probabilistic attack graphs, e.g., by facilitating Bayesian networks. Frigault et al. (2008) use the TVA tool to generate attack graphs and transform them into dynamic Bayesian networks. Finally, they enrich them with probabilities using CVSS (Common Vulnerability Scoring System) scores, like Xie et al. (2010), who model uncertainties in the attack structure, the attacker's actions, and alert triggering.
Hitherto, we have united the approaches of attack graphs and system modeling in our previous works like P2CySeMoL (Holm et al. 2015) and securiCAD. Our central idea was to automatically generate probabilistic attack graphs based on an existing system specification. The generated attack graph then serves as an inference engine and produces predictive security analysis results from the system model. However, in these works, the languages used to create the attack graphs were hard-coded. Therefore, we have proposed MAL, which allows creating domain-specific languages. So far, several languages have been built in MAL. One example is vehicleLang, which allows modeling cyber-attacks on modern vehicles. Another example is coreLang, which contains the most common IT entities and attack steps and is included in the presentation of MAL (Johnson et al. 2018). Further, we proposed the automated creation of MAL languages by translating existing concepts from the ArchiMate language to MAL.
Next, we discuss works that elaborate on structural and security-related aspects of the power domain. First, Jiang et al. (2018) propose a DSL and a repository to represent power grids and related IT components. The SGAM (Smart Grid Architecture Model) (CEN-CENELEC-ETSI 2012) defines a technical reference architecture that describes the functional information flows between the main domains of smart grids. Further, it integrates several system and subsystem architectures. Cherdantseva et al. (2016) conducted a systematic literature review to identify different methods to assess risk in Supervisory Control and Data Acquisition (SCADA) systems. Another review was conducted by Franke and Brynielsson (2014) considering cyber situational awareness. They cluster their findings, relate them to national cyber strategies, and give suggestions for future research. Sharifi and Yamagata (2016) research risks for the resilience of urban power supply and identify cyber threats as one origin of risk.
In addition to the reviews of existing literature, researchers have conducted other kinds of surveys as well. Wang and Lu (2013) assessed different challenges that arise from cyber threats in the smart grid domain. Masood (2016) conducts similar research on cyber threats to nuclear power plants and, additionally, provides attack trees to model the threats.
Finally, we discuss approaches that help to improve the security in the power domain. Habash et al. (2013) provide a framework to overcome the cyber-physical threats of the power grid and have evaluated it in Canada by applying it for one year. Ten et al. (2008) suggest a vulnerability assessment to systematically evaluate the vulnerabilities of SCADA systems at the three levels of system, scenarios, and access points.
Objectives
Our creation of powerLang follows a design-centered approach: building on the existing means of MAL, we design a language that enables stakeholders in the power domain to easily assess threats to their IT and OT environments. This is needed because those responsible in the power domain usually prioritize availability and resilience of the power supply (Wang and Lu 2013) and thus often lack knowledge of security-related aspects.
The entire language development is embedded in a bigger project that will equip utility providers with a fully fledged framework of integrated tools to improve the security of the power grid (EnergyShield 2020).
The value chain in the power domain is comprised of five main categories of stakeholders: bulk generation, transmission, distribution, distributed energy resources, and customer premises (CEN-CENELEC-ETSI 2012). Creating a language covering the entire chain is ambitious. Therefore, we opt for an iterative, continuous extension of our language to cover further aspects of the domain. In our first iteration, we chose to represent the transmission domain, as it was prioritized in our project due to the reachability of the practitioners.
The main concern of our practitioners was the ability to have existing attacks, like the Ukraine scenario (Defense Use Case 2016), reproducible in models created with the language. Thus, the language needs to cover IT as well as OT parts. Additionally, we received a list of concrete assets that they use in their OT environment. The objectives for our language can therefore be characterized as follows:
• Known attacks should be discovered by performing simulations that use powerLang.
• The language should reflect the different needs of IT and OT environments.
• The terminology of the power domain should be reflected in the language.
Creating powerLang
In this section, we describe how powerLang is composed. We take a closer look at the reused languages, at icsLang, which bridges the identified gap between these two languages, and at the mechanics we apply to combine them.
MAL and reused languages
Before we present powerLang as a whole, we give first a short introduction to MAL so that the reader can understand the mechanics of MAL-based languages. Further, we give an overview of the reused languages. For detailed insights, we refer to the original publications introducing MAL (Johnson et al. 2018), coreLang , and sclLang (Ling 2020).
Introduction to MAL
Hitherto, we proposed the use of system architecture models to conduct attack simulations (e.g., Ekstedt et al. 2015;Holm et al. 2015). However, our previous approaches used static implementations to represent assets and the related security information within the tools. Thus, changes to these underlying structures require high effort. Therefore, we developed MAL (Johnson et al. 2018). MAL itself defines which information about a system is required and specifies the generic attack logic. MAL is a meta language (i.e. the set of rules that should be used to create a new DSL) and represents no particular domain of interest. But it is to be used for creating languages that represent a particular domain. Before we explain such a concrete language, we present the basic building blocks of MAL, to ease the understanding of these languages.
First, a MAL language contains so-called assets, which are the main elements found in the domain under study. Next, assets contain attack steps, which are actual attacks/threats that can be exploited. An attack step can be connected with one or more following attack steps to create an attack path. Together, the attack paths form the attack graph used for the simulation. Assets can be linked by associations to model possible transitions of an attacker among them. Further, inheritance between assets is allowed, as known from object orientation. Finally, there are categories that allow organizing assets, and probability distributions can be assigned to attack steps. Such a distribution expresses the effort needed to complete the related attack step.
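As a loose illustration of these building blocks (plain Python rather than MAL syntax; all asset and step names are hypothetical), an attack step can be thought of as a node with follow-up steps and an effort distribution, and the effort along a path can be estimated by Monte Carlo sampling:

```python
import random

# Hypothetical assets and attack steps; each step lists its follow-up steps
# and a sampler for the effort an attacker needs to complete it.
attack_steps = {
    ("Host", "connect"):    {"next": [("Host", "compromise")],
                             "effort": lambda: random.expovariate(1.0)},
    ("Host", "compromise"): {"next": [("Network", "access")],
                             "effort": lambda: random.normalvariate(5, 0.2)},
    ("Network", "access"):  {"next": [], "effort": lambda: 0.0},
}

def mean_path_effort(path, trials=1000):
    # Monte Carlo estimate of the total effort along one attack path.
    total = 0.0
    for _ in range(trials):
        total += sum(attack_steps[step]["effort"]() for step in path)
    return total / trials

# e.g. mean_path_effort([("Host", "connect"), ("Host", "compromise"),
#                        ("Network", "access")])
```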
Next, we give a short example of what a MAL language looks like. This example, which is a snippet of a complete language, contains attack steps on three assets. It illustrates how the attack steps are connected with each other: for example, if one achieves blockingOperation, one is able to reach overspeed on the Turbine and, as a result, finally reach plantDamage and powerOutage on the power plant. Additionally, blockingOperation is annotated with Normal(5, 0.2), meaning that the attacker's effort is described by a normal distribution with a mean of 5 and a standard deviation of 0.2. In the last lines of the example, the associations between the assets are defined.
coreLang
As illustrated before, MAL provides the basics to create a threat modeling language from scratch. However, many languages created with MAL share a common set of concepts. To reduce unnecessary redundant work, we developed coreLang, which is comprised of predefined assets that appeared in different languages created with MAL. Thus, coreLang can serve as a starting point to model more domain-specific languages or even act as a rudimentary language to model simple environments. In the following, we give a short overview of the basic elements of this language and refer to the original publication for all the details. Figure 1 presents the overall structure of coreLang. The extends relationship expresses an inheritance between the parent and the child assets. Consequently, all attack steps and defenses of the parent asset are available in the child, too. However, it is possible to either complement the existing attack steps with further logic or to completely overwrite the logic. In contrast, the association relationship expresses possible paths for attackers between different assets. In other words, the associations describe the links between assets that can be used by attackers to traverse from one asset to another. For coreLang, we have identified six different main categories: system, vulnerability, user, IAM (Identity and Access Management), data resources, and networking. In the following, we will describe those categories and the related design decisions.
System
Fig. 1 Overview of coreLang
The first category we shed light on is system. This is the collection of assets that usually represent the computing instances in an environment and thus form the natural attack surface. First, we created an asset called Object (inspired by the object in object-oriented programming languages) that provides common functionality to all inheriting assets. Basically, an Object is the simplest form of an asset that can be compromised by a Vulnerability. On the one hand, Object is specialized into System, which specifies the hardware on which Applications can run. The attacker can DoS everything that is running on it and, after some effort, gain access (using physical control) to the OS. On the other hand, Object is specialized into Application, which specifies pretty much everything that is executed or can execute other applications. Lastly, this category contains PhysicalZone, which is the location where systems are physically deployed.
Vulnerability The basic idea of creating a MAL-related language is to provide a set of already known attack steps to the modeller. However, this incorporates two types of shortcomings. First, we concentrate on known attack steps, but there are also attack steps that are not known yet. Second, we stay on a relatively abstract level for coreLang. Consequently, we cannot provide all possible attack steps upfront, as the attack steps are very diverse for different assets. To overcome this issue, we provide a set of Vulnerability and Exploit assets. On the one hand, these assets can be used as extension points by other language developers. On the other hand, we provide a standard set of Vulnerability and Exploit. These can be used by the end user to model attack steps that were not known at the time of creating the language. Basically, an Object can have a Vulnerability with different levels of complexity to take advantage of. This Vulnerability can then be facilitated by an Exploit that leads to different levels of access to the exploited Object.
User This category contains only the representation of a User. The User serves as attack surface for social engineering attacks. The most apparent attack modeled in this asset is the phishing, which can lead to either credential theft or takeover of the user's computer. However, this asset will be extended to represent more attacks in future iterations.
IAM is an accepted way to manage different identities representing users and their access to certain applications (Witty et al. 2003). Therefore, the category IAM is comprised of Identity, which represents a user group. After authentication or compromise of an Identity, the attacker assumes its privileges. This leads to both legitimate and illegitimate access. Access to an Identity is usually secured by means of Credentials. Those Credentials can be stolen or guessed by the attacker directly (e.g., via brute force), or the User can be convinced to enter them herself (e.g., via social engineering).
DataResources This category groups the assets that are usually communicated. First, there is Data that represents any form of data that can be stored or transmitted. An attacker can perform the classical actions of read, write, and delete. Second, we have defined Information as an abstract concept that is incorporated in Data.
Networking
The last category elaborates on networking-related assets. First, we have identified the Network, where a network zone is a set of network-accessible applications. The border of such a Network is a RoutingFirewall, which specifies a router with firewall capabilities that connects many networks. Lastly, there are Connections between Applications that allow communication across different Networks and, consequently, lateral movement of an attacker.
sclLang
The IEC 61850 standard was developed to help with the transition as substations became increasingly automated and digitalized (IEC Standard 2003). One part of this standard is the Substation Configuration Description Language (SCL). SCL was created to help with the compatibility of different vendors and to share configurations of Intelligent Electronic Devices (IEDs) (IEC 2018). Because modern substations built according to the IEC 61850 standard already have SCL configuration files, we decided to build a DSL based on SCL so that this existing information can be used to create threat models.
In sclLang (Ling 2020), the assets and their associations are precisely as specified in SCL, as seen in Fig. 2. These assets are divided into three different categories. The Functional assets are related to the main substation functionality and include, for example, transformers that are most often used to alter the voltage level. The Product category includes the products used in a substation, for example the IEDs that are used to enable automation. Finally, the Communication category includes all assets that are needed for the communication of the IEDs.
Regarding attacks, the attack steps in sclLang are access, communicate, execution, impact and hasRouter.
The attack steps access, execution and impact are derived from the tactic categories found in the ATT&CK Matrix for Enterprise (MITRE 2020a). Please note that access has been renamed from the category Initial Access. These categories of attacks were used to keep the complexity of the attacks down in the first version of the language. The attack step communicate was added to model how an attacker may communicate throughout the substation. Finally, the attack step hasRouter was added because whether an IED has the routing function enabled or not affects the attacker's possibility to move throughout the substation.
Access The attack step access is the first attack that an external attacker would perform to gain initial access to a substation. It is possible that, for instance, a disgruntled employee would not need this attack step. After the attacker has access, they can move further through the substation by reaching other attack steps.
Communicate Modern IEC 61850 substations are built with communication networks, and this attack step shows how the attacker would be able to move through the network and therefore also through the substation. Most of the communication happens through the access points, which can be physical or logical interfaces. If one IED has two different access points that are part of two different subnetworks, it is possible for the attacker to move between the subnetworks in a substation.
Execution The term execution in attacks is often defined as running malicious code. In a substation it is not necessarily code that creates certain actions but instead sending and receiving logical nodes. Logical nodes are sent for automation purposes to increase the voltage when a specific voltage level is detected. In this sense, one can consider that sending logical nodes is similar to execution of malicious tasks in an IT system. If an attacker reaches the execution attack step it is possible to potentially shut down the entire substation.
Impact The attack step impact means that the attacker has managed to make some alteration in the substation. As described above under execution, the substation automation happens with logical nodes. This means that if an attacker can make an impact on a logical node, perhaps making the voltage level seem lower than it actually is, a malicious execution to increase the voltage level can occur and cause damage to the substation. It is also possible for the attacker to use the impact attack step to alter the clock, which is essential to the synchronization of substations, and cause disruptions in this way.
HasRouter The attack step hasRouter is reached only if the router functionality is enabled on an IED. Similar to IT networks a router is needed to move between subnetworks in a substation. With this attack step it is possible for an attacker to move throughout the substation in between subnetworks.
In terms of MAL, this means that the AccessPoint can either have a Router or not. If the Router exists, then the attacker can reach the attack step and communicate across Subnetworks. However, if no Router exists, the attacker will not succeed in communicating.
icsLang
Following, we will focus on icsLang (depicted in Fig. 3), which is based on the ATT&CK Matrix for ICS (MITRE 2020b).
Fig. 3 Overview of icsLang
The main asset in this language is the IcsAsset. It represents the abstract common behavior of all represented assets. Therefore, we have created a connection to IcsNetwork to illustrate that assets can communicate with each other if they are attached to the same network. The rest of the language is structured along the MITRE ATT&CK categories level 2 (supervisory control), level 1 (control network), and security.
Level 2
The supervisory control LAN level includes the functions involved in monitoring and controlling physical processes and the general deployment of systems. The central asset on this layer is the ControlServer, which operates the Controllers on level 1 and also computes their output (Stouffer et al. 2011). These ControlServers and Controllers are configured, maintained, and diagnosed using EngineeringWorkstations. To operate the entire system, the operators use human-machine interfaces (HMIs) that provide a graphical user interface to EngineeringWorkstations and ControlServers. Lastly, there is a DataHistorian on level 2, which provides access to external users who are interested in the data for archival or analysis purposes.
Level 1
The control network level includes the functions involved in sensing and manipulating physical processes. This is usually done by Controllers that are controlled by ControlServers situated on level 2. However, this connection is not direct but goes through an IOServer, which provides the interface between the control system LAN applications and the field equipment monitored and controlled by the control system applications.
Additionally, there exists a safety layer consisting of safety instrumented systems (SIS). The function of protective relaying is to cause the prompt removal from service of an element of a power system when it suffers a short circuit or when it starts to operate in any abnormal manner that might cause damage or otherwise interfere with the effective operation of the rest of the system.
Security Originally, the ATT&CK Matrix for ICS does not include security-related assets explicitly. However, for every asset there are mitigations defined that can also be modelled as assets. First, we introduced an AntiVirus that can detect malicious files on connected assets, leading to a higher effort that an attacker has to spend. Second, we added a Firewall that can block certain ports, enforce white lists, or apply intrusion detection systems (IDS).
Attack Steps
The ATT&CK Matrix for ICS contains 81 different techniques that can be used to exploit an ICS environment. Each of these techniques can be an attack step in our icsLang. As justifying whether every single technique is included in the language would be quite extensive, we focus on the eleven overarching tactics, similar to the approach followed in the substation DSL. Based on the tactics, we argue whether the related techniques are included as attack steps or not.
When we create MAL languages, we assume worst-case scenarios, meaning that the attacker already knows the architecture of the attacked organization. This leads to the decision that techniques related to collection, discovery, and lateral movement do not need to be modelled in icsLang. Additionally, command and control related techniques, which describe the way the attacker connects to the environment, are not taken into account.
Another set of techniques that we do not model explicitly in our language relates to evasion, persistence, and inhibit response function. These techniques aim to mask the attacker's actions or to create hooks that should prevent the attacker from being locked out by countermeasures. This cannot be represented in MAL directly but is already included in the given probabilities that a certain attack step is successful.
The rest of the techniques follow the common attacker behavior pattern of gaining access, escalating privileges, and harming the system (Ramsbrock et al. 2007). First, the techniques related to initial access provide either the initial attack surface or the entry point to an asset due to lateral movement through the network. If access is established, the attacker can perform techniques related to execution to perform a privilege escalation. Reaching this stage allows performing attack steps that end up in impact on the asset or impair process control. Impact means in this case that ICS systems, data, and their surrounding environment can be manipulated, interrupted, or destroyed. Impair process control leads to manipulation, deactivation, or physical damage of control processes.
Probability Distributions on Attack Steps
For MAL-based languages it is essential to define probability distributions on attack steps and defenses to describe how much effort an attacker has to spend to exploit certain attack steps or to which degree a defense is successful. Unfortunately, research that assesses such probability distributions is quite scarce, and often we have to rely on expert knowledge to model them. In the following, we give an overview of the applied probability distributions that can be found in the literature. Nonetheless, we like to highlight that the following distributions do not reflect the entire reality but rather are approximately reasonable as long as no more solid studies on each and every attack step have been conducted. Additionally, the main contribution of this paper is the structures provided before, not the figures presented next.
First, (Baggett 2008) has identified that if anti-malware is enabled on a system, an attacker's success probability decreases to 90%. Therefore, we model a defense on different attack steps such as modifyControlLogic or rootkit with a Bernoulli distribution of 0.9. This means that if an anti-malware is in place it will be effective in 10% of the cases.
Second, research (Holm 2014; Sommestad and Hunstad 2013) has elaborated on identifying the effort that an attacker needs to spend to bypass an IDS. This effort depends on different parameters, e.g., whether the IDS is tuned and updated regularly. However, for modeling reasons we assume the worst case that the IDS is neither patched nor updated, leading to an exponential distribution with a median of 3.5 days.
Third, from an interview with a domain expert in combination with vulnerability data from the US National Vulnerability Database (NVD), we conclude that a man-in-the-middle attack is successful in 99% of the cases if no defense is in place and in 1% if the communication is encrypted. Accordingly, we model defenses with corresponding Bernoulli distributions.
Fourth, the probabilities of finding an entrance through a misconfigured firewall are gathered from a non-published expert survey performed in June 2016. According to the survey, a one-day effort will result in a 28% probability of finding an entrance; after ten days the probability is 55%. The estimated gamma parameters then become 0.33 and 74.
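The distributions above could be encoded roughly as follows (a sketch only, under the assumptions that the two gamma parameters are shape and scale, and that the exponential rate is chosen so the median bypass time is 3.5 days):

```python
import numpy as np

rng = np.random.default_rng()

def succeeds_despite_anti_malware() -> bool:
    # Bernoulli(0.9): with anti-malware in place the attacker still succeeds
    # in 90% of the cases, i.e. the defense is effective only 10% of the time.
    return rng.random() < 0.9

def mitm_succeeds(encrypted: bool) -> bool:
    # Man-in-the-middle: 99% success without defenses, 1% with encryption.
    return rng.random() < (0.01 if encrypted else 0.99)

def ids_bypass_days() -> float:
    # Exponential effort with a median of 3.5 days (unpatched, untuned IDS).
    return rng.exponential(3.5 / np.log(2))

def firewall_entrance_days() -> float:
    # Gamma(shape=0.33, scale=74): roughly a 28% chance of finding an
    # entrance after one day and 55% after ten days.
    return rng.gamma(shape=0.33, scale=74.0)
```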
Linking languages
So far, we have presented different languages that can be used to model different parts in the energy domain. However, to provide a fully functional language that covers the needs of the energy domain, these languages need to be linked to each other. Generally, there are two ways to link MAL-based languages to each other: First, associations between different assets of languages can be created. In this case, additional attack steps need to be created to make use of these associations. Second, inheritance relations between the assets of the different languages can be established.
In our approach, we rely on the second way, as it turned out to require less effort to maintain the links when one of the languages is updated. In relation to the Purdue Enterprise Reference Architecture (PERA), the three languages can be situated on different levels. coreLang refers mainly to levels 3 and 4, representing the business IT. icsLang is mainly situated on levels 2 (supervisory control) and 1 (control network), while sclLang covers levels 1 and 0 (physical processes, sensors, and actuators). Consequently, we link coreLang to icsLang and icsLang to sclLang as presented in Fig. 4.
To link coreLang to icsLang, we specify that the IcsNetwork is a specialized Network and that IcsAsset is an Application. Due to this, the assets of both languages are linked to each other. Additionally, the attack steps need to be connected. Therefore, we overwrite authenticate so that it leads to the icsLang-specific authenticatedAccess. Further integration is not needed. Similarly, we proceed with linking icsLang and sclLang. We define that Equipment is a Controller, IED is a SIS, and Server is a ControlServer. To link the attack steps to each other, we overwrite authenticatedAccess so that it leads to execution on Equipment.
Fig. 4 Assets involved in linkage of languages
On the IED, we overwrite access so that it leads to communicate. The same applies to Server, where authenticatedAccess leads to communicate.
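Conceptually, this inheritance-based linking resembles subclassing in an object-oriented language. The following Python analogy (not actual MAL code; names mirror the assets above) only illustrates how an overridden step redirects to a more specific follow-up step:

```python
class Application:                     # coreLang
    def authenticate(self):
        return ["access"]              # generic follow-up attack steps

class Network:                         # coreLang
    pass

class IcsAsset(Application):           # icsLang: an IcsAsset is an Application
    def authenticate(self):
        return ["authenticatedAccess"] # overridden to the icsLang-specific step

class IcsNetwork(Network):             # icsLang: an IcsNetwork is a Network
    pass

class Controller(IcsAsset):            # icsLang
    pass

class Equipment(Controller):           # sclLang: Equipment is a Controller
    def authenticated_access(self):
        return ["execution"]           # authenticatedAccess leads to execution
```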
Demonstration
In 2015, an IT attack on the Ukrainian electric power grid cut power for about 225,000 people (Defense Use Case 2016). This attack was the first documented successful cyber-attack on OT infrastructure affecting civilians. The attack was characterized by its coordinated and targeted approach to the critical infrastructure power supply. It involved a total of seven substations at 110 kV and 23 substations at 35 kV over a period of three hours. Manual interventions were needed to return to normal operations. Figure 5 shows the simplified IT/OT architecture of a DSO created with powerLang to simulate the behavior of an attacker. The attackers in the Ukrainian scenario used a spear-phishing attack on the office PCs of the network operators as the initial attack vector (Defense Use Case 2016). The attackers gained remote access to the spear-phished users' PCs by using the malware BlackEnergy 3 (ThreatSTOP 2016). This initial foothold was used to move laterally in the network. Somehow the attackers were then able to capture virtual private network (VPN) credentials and thus got access to the OT network. Their main goal was to control the central SCADA systems of the DSO. Therefore, they took over control of the HMI, which allowed them to access several controllers that operate switches, which ultimately led to the blackout.
Additionally, the attackers performed supporting attacks to retain their access to the infrastructure and to hinder defensive responses. For example, firmware manipulation attacks were carried out against serial-to-Ethernet gateways in the process network and against the uninterruptible power supply. Further, KillDisk was executed on operator workstations to keep them down. This denied service on these devices, increased the downtime, and complicated the rebuilding of the network by the personnel (Defense Use Case 2016). However, these supporting attacks are not directly part of our simulations, as they are reflected in the given probability distributions.
Given this architecture, we can perform the attack simulations in securiCAD. The attacker's first steps are as described above. After that, however, the path splits into three different ways. Two of them involve the attacker using different VPN credentials to proceed to the OT network. The third follows another approach via the business intelligence (BI) interface of the data historian. From the data historian, the attacker can choose different ways of operating the switches. This is also in line with our experience from discussions with practitioners, who state that once attackers make it into the OT environment, they are almost free to do anything. Finally, the attacker gains access to the I/O server and then to the controller, no matter which way they took. Apart from the attack path generation, securiCAD also calculates the most probable path, which in this case is the one that was taken in reality.
Discussion
The application of powerLang in the context of the known attack on the Ukrainian power grid reveals some open issues that are mainly related to the early stage of development of the language. First, we found that the number of possible attack steps in icsLang is too large to readily understand how an attacker behaves. This stems from the fact that we adopted the attack steps described in MITRE ATT&CK for ICS. As the analysis of the attacker is not solely based on the simulation results, but also on a visual analysis by the security assessor, the number of attack steps should be reduced to a humanly manageable amount. Therefore, we will analyze the attack steps and merge them where feasible in a second iteration.
Second, powerLang covers only general IT and OT aspects related to the needs of power grid operators. Aspects of power plants or smart metering are not included so far. Due to the modular structure of powerLang, these aspects can easily be added in the future.
Third, we recognized that the extension mechanisms in MAL are not sufficient. To combine different, independently developed languages, the source code currently needs to be copied from one repository to another. Therefore, we plan to package individual languages and to provide a package distribution similar to the way it is handled in Maven for Java libraries. Furthermore, we are considering solutions for linking languages to each other in a less tight way than inheritance.
Fourth, a real-world evaluation of powerLang is missing so far. However, it is already planned within the EnergyShield (2020) project and will be conducted in the Bulgarian demonstrator.
Conclusion
Within this article, we have developed powerLang, which enables power domain practitioners to model their IT/OT architecture and simulate the behavior of attackers within it. This allows them to identify possible choke points and improve their infrastructure's security more efficiently. To demonstrate this capability, we provided an exemplary model simulating the Ukrainian scenario.
However, there is still some work to be done. First, we only tested our language for a scenario related to DSOs. It is already planned to prove the language in other environments and even in real-world settings. Second, many attack steps are still missing probability distributions that describe the expected effort to be spent. For now, these distributions are substituted by the probabilities inherited from the IT-DSL. Nonetheless, future research needs to be conducted to gain more reliable knowledge about the effort required for OT assets, as this can differ significantly from IT assets.
Apart from improvements to powerLang, this work has also shown that MAL's capabilities for integrating existing languages with each other to create new languages need to be improved. In particular, a mechanism is needed for creating the links between the languages, and namespaces are needed to avoid naming issues. Furthermore, technical support is desirable to ease the distribution of languages, as is known from Maven for programming frameworks.
"Computer Science"
] |
Patient-level explainable machine learning to predict major adverse cardiovascular events from SPECT MPI and CCTA imaging
Background Machine learning (ML) has shown promise in improving the risk prediction in non-invasive cardiovascular imaging, including SPECT MPI and coronary CT angiography. However, most algorithms used remain black boxes to clinicians in how they compute their predictions. Furthermore, objective consideration of the multitude of available clinical data, along with the visual and quantitative assessments from CCTA and SPECT, are critical for optimal patient risk stratification. We aim to provide an explainable ML approach to predict MACE using clinical, CCTA, and SPECT data. Methods Consecutive patients who underwent clinically indicated CCTA and SPECT myocardial imaging for suspected CAD were included and followed up for MACEs. A MACE was defined as a composite outcome that included all-cause mortality, myocardial infarction, or late revascularization. We employed an Automated Machine Learning (AutoML) approach to predict MACE using clinical, CCTA, and SPECT data. Various mainstream models with different sets of hyperparameters have been explored, and critical predictors of risk are obtained using explainable techniques on the global and patient levels. Ten-fold cross-validation was used in training and evaluating the AutoML model. Results A total of 956 patients were included (mean age 61.1 ±14.2 years, 54% men, 89% hypertension, 81% diabetes, 84% dyslipidemia). Obstructive CAD on CCTA and ischemia on SPECT were observed in 14% of patients, and 11% experienced MACE. ML prediction’s sensitivity, specificity, and accuracy in predicting a MACE were 69.61%, 99.77%, and 96.54%, respectively. The top 10 global predictive features included 8 CCTA attributes (segment involvement score, number of vessels with severe plaque ≥70, ≥50% stenosis in the left marginal coronary artery, calcified plaque, ≥50% stenosis in the left circumflex coronary artery, plaque type in the left marginal coronary artery, stenosis degree in the second obtuse marginal of the left circumflex artery, and stenosis category in the marginals of the left circumflex artery) and 2 clinical features (past medical history of MI or left bundle branch block, being an ever smoker). Conclusion ML can accurately predict risk of developing a MACE in patients suspected of CAD undergoing SPECT MPI and CCTA. ML feature-ranking can also show, at a sample- as well as at a patient-level, which features are key in making such a prediction.
Introduction
In recent years, there has been a surge in the use of machine learning (ML) techniques in cardiovascular imaging. As the number of imaging modalities for evaluating patients with potential coronary artery disease (CAD) increases and the technology continues to improve, there is an abundance of data available to consider when making clinical judgments. However, the large number of variables and growing volume of imaging data can make it challenging to accurately assess patients. Artificial intelligence (AI) and ML can assist in this process by providing helpful prompts based on a wide range of clinical and imaging variables [1]. Indeed, ML algorithms have been shown to be valuable tools in patient risk stratification and diagnostic assessments [2,3]. Coronary computed tomography angiography (CCTA) is a non-invasive diagnostic procedure used to assess coronary arteries for CAD. It has a high negative predictive value, meaning that a negative CCTA result effectively rules out significant CAD [4,5]. Another important non-invasive diagnostic test is single photon emission computed tomography (SPECT), which mainly assesses the functional significance of coronary stenosis and guides management. Assessment of plaque and perfusion burden using CCTA and SPECT adds incremental prognostic value in patients suspected of CAD [6][7][8].
The prevalent approach to clinical prediction typically involves the selection of potentially relevant variables by experts, followed by regression/classification analysis. Recent advancements in ML render this classical approach restrictive (it uses only one model type), inefficient (it requires manual tuning of hyperparameters), and potentially biased (predictor pre-selection). AutoML aims to alleviate the computational cost and human expertise required to develop well-performing ML pipelines [9,10]. Despite advancements in ML-based prediction models in healthcare, one major obstacle to the adoption of these models is that many of them are considered "black boxes", which refers to their lack of interpretability [11]. There have been calls for more research on how these models operate [12][13][14][15]. The inability to interpret predictive models can erode trust in them, particularly in cardiovascular medicine where decisions can have serious consequences. In medicine, black box models will have a significant role and, in many cases, are not too different from other areas where we lack complete biological or clinical understanding [16]. However, just as it is beneficial to understand the mechanisms behind diseases and therapies, it can be helpful to have a greater understanding of how ML models arrive at their conclusions [17]. There has been a surge in research on explainable ML in an effort to address this issue [18]. Various methods for exploring the reasoning behind AI predictions have been developed [19,20]. One effective method is to build a secondary, more transparent model, such as a decision tree or random forest, through which the input [...]

CCTA

CCTA scans were obtained using a 3rd generation SOMATOM FORCE Scanner (Siemens, Forchheim, Germany) after 2016 (n = 530) and a Philips 64-slice CT (Philips Healthcare, Amsterdam, Netherlands) before 2016 (n = 426). Image acquisition was performed in accordance with the Society of Cardiovascular Computed Tomography (SCCT) guidelines [24]. Patients with a heart rate of 65 beats per minute or higher were given intravenous metoprolol, and 0.4 mg sublingual nitroglycerin was given to all patients immediately before image acquisition. During image acquisition, 60-100 cc of contrast was injected, followed by a saline flush. Axial scans were obtained with prospective electrocardiographic gating. The image acquisition included the coronary arteries, left ventricle, and proximal ascending aorta.
The images were evaluated using a 3D workstation, with various post-processing techniques such as axial, multiplanar reformat, maximum intensity projection, and cross-sectional analysis. Type and location of lesion were visually evaluated using an 18-segment model according to SCCT guidelines [24]. Atherosclerosis in each segment was defined as tissue structures larger than 1 mm² within or adjacent to the coronary artery lumen that could be distinguished from pericardial tissue, epicardial fat, or the vessel lumen.
The percent coronary stenosis was determined by comparing the luminal diameter of the obstructed segment to the luminal diameter of the most normal-looking site and classified as none (0%), mild (1%-49%), moderate (50%-69%), or severe (≥70%) based on the degree of narrowing of the luminal diameter. Anatomically obstructive CAD by CCTA was defined as at least 50% stenosis in the left main artery and at least 70% stenosis in the proximal, mid, and distal branches of the left anterior descending, left circumflex, and right coronary artery, but not including side branches. Findings were reported using the SCCT Coronary Artery Disease Reporting & Data System (CAD-RADS) [25]. The segment involvement score was used to quantify the burden of disease using CCTA. Using an 18-segment coronary artery model, the presence of plaque in each segment was scored as 0 or 1, regardless of the degree of stenosis. The sum of all involved segments was calculated for each patient. Plaques were classified as non-calcified (NC SIS) or calcified/partially calcified (C/PC SIS) based on a Hounsfield unit threshold of <130.
Calcified/partially calcified plaques were further divided into calcified (C SIS) and partially calcified (PC SIS) based on the uniformity of calcification.
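A minimal sketch of the per-patient CCTA scoring described above; the per-segment data structure is hypothetical, but the stenosis bins and the segment involvement score follow the definitions in the text.

```python
# Minimal sketch (data structure hypothetical) of the per-patient CCTA scoring
# described above: stenosis is binned by the stated thresholds, and the segment
# involvement score (SIS) counts segments containing any plaque.
def stenosis_category(percent_stenosis):
    if percent_stenosis == 0:
        return "none"
    if percent_stenosis < 50:
        return "mild"
    if percent_stenosis < 70:
        return "moderate"
    return "severe"

def segment_involvement_score(segments):
    # Each of the 18 segments scores 1 if any plaque is present,
    # regardless of the degree of stenosis; SIS is the sum over segments.
    return sum(1 for s in segments if s["has_plaque"])

segments = [{"has_plaque": True, "stenosis": 65},
            {"has_plaque": False, "stenosis": 0},
            {"has_plaque": True, "stenosis": 80}]
print(segment_involvement_score(segments))                   # 2
print([stenosis_category(s["stenosis"]) for s in segments])  # ['moderate', 'none', 'severe']
```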
SPECT MPI
SPECT MPI scans were obtained using either an INTIVO scanner (Siemens, Forchheim, Germany) or a Philips Brightview scanner (Philips Healthcare, Amsterdam, Netherlands). Image acquisition was performed in accordance with the American Society of Nuclear Cardiology guidelines [26]. Gated SPECT stress and stress-rest imaging were performed using either a 1- or 2-day protocol as appropriate, with regadenoson as the stressing agent. Gated end-systolic and end-diastolic left ventricular volumes were used to calculate the ejection fraction. Perfusion was graded on a 5-point scale in all segments, and summed stress, rest, and difference scores were evaluated [27]. Scar was defined as a summed rest score >0, ischemia as a summed difference score >0, and significant ischemia as a summed difference score ≥7. All studies were interpreted by experienced imaging cardiologists with at least 10 years of experience.
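A hedged sketch of the SPECT summary measures defined above; the per-segment scores and the summed difference score formula (stress minus rest) are assumptions, since the text only states that the summed scores were evaluated.

```python
# Hedged sketch of the SPECT summary measures defined above. Per-segment scores
# on the 5-point scale (0-4) are summed; the summed difference score is assumed
# to be stress minus rest, which is not spelled out in the text.
def spect_summary(stress_scores, rest_scores):
    sss = sum(stress_scores)              # summed stress score
    srs = sum(rest_scores)                # summed rest score
    sds = sss - srs                       # summed difference score (assumed definition)
    return {"SSS": sss, "SRS": srs, "SDS": sds,
            "scar": srs > 0,              # summed rest score > 0
            "ischemia": sds > 0,          # summed difference score > 0
            "significant_ischemia": sds >= 7}

print(spect_summary(stress_scores=[2, 1, 0, 3], rest_scores=[1, 0, 0, 1]))
```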
Follow-up and outcome data
Patients' clinical history, comorbidities, medications used, laboratory testing, and previous diagnostic modalities were captured. Medical records of enrolled patients were checked to record updates in relevant clinical and laboratory endpoints. The primary outcome was major adverse cardiovascular events (MACE), which is a composite of all-cause death, myocardial infarction (MI), and late revascularization (PCI or CABG) occurring more than 90 days after the earliest imaging date. MI was defined according to the 4th universal definition of myocardial infarction [28]. All patients were followed from the date of their first imaging study. Patients were censored at either the occurrence of outcomes or the last known date of contact with the healthcare system as recorded in their medical records.
Machine learning methodology
Feature selection. The dataset of this study included 42 clinical variables, 12 SPECT variables, and 83 CCTA variables. A full list of variables is outlined in S1 Table in S1 File. The main goal of feature selection is to improve the performance of a predictive model and reduce the computational cost of modelling. In this work, we used a feature selection technique, the ANOVA F-test, to reduce the high dimensionality of the feature space before the modelling phase [29]. ANOVA is a set of parametric statistical models and estimation procedures for determining whether the means of two or more samples come from the same distribution. The F-statistic is used to calculate the ratio of between-group to within-group variance. The ANOVA F-test is a univariate statistical test in which each feature is compared to the target feature to check the statistical relationship between them.
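An illustrative sketch of this step using scikit-learn's univariate selection; the data below are synthetic stand-ins (the study dataset is not public), and the column names are placeholders.

```python
# Illustrative sketch of the ANOVA F-test ranking with scikit-learn. The data
# here are synthetic stand-ins (the study dataset is not public), and column
# names are placeholders.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X_arr, y = make_classification(n_samples=956, n_features=137, n_informative=10,
                               random_state=0)
X = pd.DataFrame(X_arr, columns=[f"feat_{i}" for i in range(137)])

selector = SelectKBest(score_func=f_classif, k=10)   # keep the 10 highest-F features
X_top10 = selector.fit_transform(X, y)

ranking = pd.Series(selector.scores_, index=X.columns).sort_values(ascending=False)
print(ranking.head(10))                              # features ranked by F-value
```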
AutoML method. Our framework employs a state-of-the-art AutoML framework, Auto-Sklearn [30], for model selection and hyperparameter optimization to establish the predictive model of MACE. Auto-Sklearn is implemented on top of Scikit-Learn [31] (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters) and uses SMAC for algorithm selection and hyperparameter tuning. Auto-Sklearn combines the best-performing models into a single ensemble, using the ensemble selection methodology [32], to improve the performance and robustness of the output model. Ensemble selection is a greedy approach that starts with an empty ensemble and iteratively adds models to maximize the validation performance. Models were optimized and evaluated by the area under the receiver operating characteristic (ROC) curve (AUC). The final ensemble model obtained from Auto-Sklearn was trained and evaluated using a 10-fold cross-validation approach (8 folds were used for training, 1 for validation, and 1 for testing, repeated over 10 iterations).
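The following is a hedged sketch of this AutoML step with auto-sklearn; API details vary by library version, and the time budgets and other settings are illustrative, not the study's actual configuration. X_top10 and y come from the feature-selection sketch above.

```python
# Hedged sketch of the AutoML step with auto-sklearn (API details vary by
# version; time budgets and other settings here are illustrative, not the
# study's actual configuration). X_top10 and y come from the selection sketch above.
import autosklearn.classification
import autosklearn.metrics

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=3600,                 # total search budget in seconds
    per_run_time_limit=300,                       # budget per candidate model
    resampling_strategy="cv",
    resampling_strategy_arguments={"folds": 10},  # 10-fold cross-validation
    metric=autosklearn.metrics.roc_auc,           # optimize AUC
)
automl.fit(X_top10, y)
automl.refit(X_top10, y)                          # refit the ensemble after CV
proba = automl.predict_proba(X_top10)[:, 1]
print(automl.leaderboard().head())                # models in the final ensemble
```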
ML model explainability
To gain trust from clinical users and regulatory bodies in the predictions made by the ensemble model obtained from Auto-Sklearn, we explain the model's behavior on both the global and patient levels. For the global explanation, we utilized the permutation feature importance technique, which measures the decrease in the prediction performance of the model after the feature's values are permuted, breaking the relationship between the feature and the true outcome [33]. The drop in model performance thus indicates how much the model depends on the feature: the more significant the drop in performance, the more important the feature is to the model, and vice versa. For the patient-level (local) explanation, we utilized Local Interpretable Model-agnostic Explanations (LIME), which explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction [34]. LIME highlights the main features that contribute toward and against the prediction for a specific patient.
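A sketch of the two explanation levels, continuing the objects from the sketches above (automl, X, X_top10, y, selector); the repeat counts, random seed, and chosen patient are illustrative.

```python
# Sketch of the two explanation levels, continuing the objects from the sketches
# above (automl, X, X_top10, y, selector); parameters are illustrative.
import numpy as np
from sklearn.inspection import permutation_importance
from lime.lime_tabular import LimeTabularExplainer

feat_names = list(X.columns[selector.get_support()])

# Global: drop in AUC when each feature is permuted
result = permutation_importance(automl, X_top10, y, scoring="roc_auc",
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:10]:
    print(feat_names[i], round(result.importances_mean[i], 4))

# Patient-level: LIME explanation for one (hypothetical) patient
explainer = LimeTabularExplainer(np.asarray(X_top10), feature_names=feat_names,
                                 class_names=["no MACE", "MACE"], mode="classification")
exp = explainer.explain_instance(np.asarray(X_top10)[0], automl.predict_proba,
                                 num_features=10)
print(exp.as_list())    # features pushing toward / against MACE for this patient
```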
Statistical analysis
Variables that could take on any value within a certain continuous range were shown as a mean with a standard deviation, while variables that fell into specific categories were shown as a proportion with a percentage. These were compared using either a Student's t-test or a chi-square test, depending on the type of variable. Accuracy measures, including sensitivity, specificity, positive and negative predictive values, as well as likelihood ratios, were calculated and reported along with their respective 95% confidence intervals. All analyses were done using Stata 16.0 (StataCorp, College Station, Texas), and a two-tailed p-value of 0.05 was considered statistically significant.
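The study used Stata for these measures; the following is an equivalent Python sketch with 95% Wilson confidence intervals. The 0.5 probability cut-off and the objects proba and y carried over from the sketches above are assumptions.

```python
# Python equivalent (the study used Stata) of the reported accuracy measures,
# with 95% Wilson confidence intervals; the 0.5 probability cut-off and the
# objects proba/y from the sketches above are assumptions.
from sklearn.metrics import confusion_matrix
from statsmodels.stats.proportion import proportion_confint

y_pred = (proba >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()

sens = tp / (tp + fn)
spec = tn / (tn + fp)
acc = (tp + tn) / (tp + tn + fp + fn)
sens_ci = proportion_confint(tp, tp + fn, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, method="wilson")
print(f"sensitivity {sens:.2%} {sens_ci}, specificity {spec:.2%} {spec_ci}, accuracy {acc:.2%}")
```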
Baseline characteristics
The study included 956 patients (mean age 61.1 ± 14.2 years, 54% men, 89% hypertension, 81% diabetes, 84% dyslipidemia). The median interval between SPECT and CCTA was 15 days (interquartile range: 2-101). 60% of the patients had imaging within 30 days. In the cohort, over three-fourths of the patients underwent CCTA either on the same day as SPECT or after SPECT (200 CT then SPECT vs 756 SPECT then CT).
Baseline characteristics of the study population are summarized in Table 1. Most patients (56%) were experiencing symptoms such as chest pain or shortness of breath, and 80% were taking aspirin or clopidogrel. Table 2 shows the CCTA and SPECT findings, while Table 3 shows the clinical outcomes in this cohort.
Feature selection
The ANOVA F-value is calculated for each feature, and the features are ranked by their F-values in descending order. Fig 1 shows the top ten predictive features identified by the ANOVA F-test. These features were used in building the ML model.
AutoML model of MACE
During a median of 31 months of follow-up (interquartile range of 12 to 65 months), 102 patients (10.7%, or 29.2 events per 1000 person-years) experienced at least one major adverse cardiovascular event (MACE). Of these events, more than half (54%, or 55 patients) were all-cause mortality, while about a quarter (19 PCI and 8 CABG procedures) were due to the need for late revascularization. More detailed imaging and clinical evaluations were reported in our previous publication using this data [8].
The developed ML ensemble model demonstrated high specificity but moderate sensitivity in predicting major adverse cardiovascular events (MACE). The model had a sensitivity of 69.61% (95% confidence interval [CI] 59.71% to 78.33%), specificity of 99.77% (99.16% to 99.97%), and accuracy of 96.54%, as shown in Table 4.
ML model interpretability
Global interpretability: Fig 2 shows the permutation feature importance scores for each feature in the ensemble model for predicting MACE risk. The most important 6 features were all CCTA parameters (segment involvement score, number of vessels with severe plaque ≥70%, ≥50% stenosis in the left marginal coronary artery, calcified plaque, ≥50% stenosis in the left circumflex coronary artery, and plaque type in the left marginal coronary artery). The 7th and 9th most important features were clinical risk factors (past medical history of MI or left bundle branch block, followed by being an ever smoker), while the 8th and 10th features were also CCTA parameters (stenosis degree in the second obtuse marginal of the left circumflex artery, followed by stenosis category in the marginals of the left circumflex artery).
Patient-level interpretability: Figs 3 and 4 show LIME explanations for two randomly selected patients. Ten top global predictive features were identified in the study, eight of which were CCTA attributes related to plaque burden and stenosis, and two of which were clinical features. The most significant factor in predicting major adverse cardiovascular events (MACE) was the presence of partially calcified plaque (consisting of both calcified and noncalcified components). The number of vessels with severe plaque (greater than or equal to 70%) and a left main stenosis of greater than or equal to 50% were also found to be important predictors of MACE. It is worth noting that features beyond the top ten had a minimal impact on the model's performance.
Discussion
We created an accurate, personalized ML method for predicting MACE risk for patients who undergo combined CCTA and SPECT imaging. This method combines all the available clinical, CCTA, and SPECT data variables, without making assumptions about the individual factors or their interplay. To optimize trust and gain a better understanding of how the ML predictions were made, we used a model-agnostic explanation of MACE predictions at the global and individual patient levels. Interestingly, of the top 10 features contributing most to the ML prediction, 8 were CCTA attributes and 2 were clinical variables. SPECT imaging features were not among the most important ones, as the first SPECT feature came in 19th place among all 138 variables combined in the global feature importance permutation. Notably, as seen in Fig 1, features beyond the top 10 contributed very little to the ML model's predictive performance.
Recent advancement in computed tomography assessment of the coronary vasculature has expanded the number of useful parameters that can be measured and collated. With continued development, the amount, intricacy, and quality of the data arising from CCTA is increasing exponentially [35]. Evolving evidence also points to the utility of quantification of plaque burden for risk stratification of patients, with and without known CAD, for adverse events [36,37]. CCTA provides a comprehensive evaluation of the coronary circulation by combining information on anatomy, function, and biology, which can be used to identify the risk level of patients and guide the selection of preventative measures tailored to the individual [5,[38][39][40]. AI and ML techniques have been developed and deployed to increase the efficiency and reliability of CCTA imaging in image acquisition, image processing, and automated quantification of plaque, stenosis, and inflammation [41][42][43]. Risk prediction has also been attempted with promising results by applying similar ML algorithms to CCTA-derived perivascular fat attenuation and more complex plaque quantifications [44][45][46]. Furthermore, Slomka et al. reported a large multicenter study that tested a ML model relying on clinical and CCTA data to predict the 5-year all-cause death risk in patients suspected of CAD [47]. The ML model they evaluated achieved a higher area under the curve when compared to risk assessment relying on CCTA severity scores alone. Similarly, Dey et al. evaluated the performance of a ML approach to predict cardiac death and myocardial infarction by integrating [...].

Nuclear cardiology has also attracted much attention from bioinformaticians and computer scientists due to the growing need for such tools to keep up with the accelerating developments in the imaging technologies of the field. Particularly for a very widely available and frequently used non-invasive cardiac imaging modality such as SPECT, and given its use as an adjunct diagnostic and prognostic assessment, ML tools have been developed to enhance its utility and predictive abilities [49]. ML techniques have improved the quality of the acquired SPECT image, enhanced the quantification of various parameters, and boosted its diagnostic and prognostic utility for cardiac patients. One of the largest efforts in this domain is the REFINE SPECT registry of Slomka et al.,
which has produced several important reports [50]. One such report showed that a ML approach outperformed automatically reported total perfusion defects as well as clinicians' assessments in predicting early revascularization in 1980 patients suspected of CAD [51]. That same report also offered explainability by ranking the most important SPECT features. In subsequent iterations, they reported their work on finding the minimum number of variables that would retain the ability of their ML model to predict MACE incidents [52] and to predict abnormal MPI from pre-test information [53]. Our work builds on these previous efforts and takes them one step further by using ML techniques to integrate a large number of variables emerging from CCTA imaging, SPECT imaging, and clinical patient data. This approach is closer to the real world, as CCTA and SPECT images are available for a large proportion of patients suspected of, or already diagnosed with, CAD. The ML model we evaluated in this study allows seamless integration of these parameters and offers explanations at the global as well as the individual patient level. Interestingly, CCTA parameters showed a higher predictive ranking in how they contribute to the overall performance of the ML model in predicting MACE incidents. Of the top ten most important features in the model, 8 came from CCTA imaging and 2 from the clinical data, while no SPECT features were among the top 10.
Artificial intelligence has been applied in various ways in cardiovascular medicine, including using ML techniques for diagnostic procedures involving imaging techniques and biomarkers, as well as using predictive analytics for personalized therapies with the goal of improving outcomes [54]. ML algorithms have also been used to predict the risk of cardiovascular diseases [55][56][57], in cardiovascular imaging [54,[58][59][60], to forecast outcomes following revascularization procedures [61,62], and to identify potential new drug targets [63][64][65]. Using ML in clinical decision making can provide a more comprehensive analysis of all available data on patients suspected of CAD, which may be difficult for physicians to do objectively [66][67][68][69]. Cardiologists have to integrate several data sources, including clinical data, CT imaging, and SPECT stress testing, among others, to make clinical decisions, but there is no consistent method for integrating all of this information [70][71][72]. Additionally, guidelines recommend including certain variables in the myocardial perfusion imaging (MPI) report, but there is currently no standardized way to include them [73]. The interpretation of MPI is subjective, and the risk assessment of a patient based on clinical, stress test, and imaging results can vary depending on the physician's knowledge and experience, as well as the difficulty of properly evaluating individual factors. It is unlikely that physicians would be able to accurately and consistently consider all clinical and imaging factors for risk assessment in an individual patient scenario, whether they are interpreting the imaging report or treating the patient. Using explainable ML models offers a more reliable computational method for integrating all available information, while providing an explanation for the prediction at both the global and individual patient levels. The potential benefit of this ML predictive model also extends to improving accuracy and consistency not only across medical centers, including those with different amounts of experience, levels of medical care, and varying geographical locations and socioeconomic disparities, but also within such centers among their clinical and imaging physicians [74]. Furthermore, the intention behind the two-level explainability reporting is to help contribute to ML model transparency, interpretability, and trustworthiness. Since the "black box" nature of most ML models is one of the challenges to their understanding and adoption, this would help to alleviate the tendency to mistrust the unknown behavior of ML models.
Limitations
The sample size in this single-center study was relatively small for a ML study, and follow-up was limited to a median of 31 months; however, the results were significant. Furthermore, additional research is needed to determine the utility for prospective clinical implementation and to validate the score in multi-center and external studies. Additional variables could be considered in future studies, including those from other cardiovascular imaging modalities, and it may be useful to evaluate the ML risk stratification in specific subpopulations, such as patients with suspected disease or early revascularization. It is also unknown how well the score will extrapolate to different centers, patient populations, and follow-up times. Other ML approaches may provide more advanced risk prediction but would likely require larger datasets and more sophisticated computational resources. Finally, because we included patients who had undergone both CCTA and SPECT imaging, the study cohort may have included higher-risk patients, which may partially explain the lower sensitivity. However, this was mitigated by excluding patients with known CAD.
Conclusion
We developed an ML model that can accurately predict the risk of developing a MACE in patients suspected of CAD undergoing SPECT MPI and CCTA. ML feature-ranking can also show, at a global as well as a patient level, which features are key in making such a prediction. Of the top 10 most important predictive features, 8 came from CCTA imaging and 2 came from clinical variables. ML explainable models of MACE prediction using clinical and imaging variables can help ensure high accuracy of risk prediction as well as consistency within and across healthcare centers and systems.
Fig 3 shows the explanation of a patient correctly predicted as low risk of MACE. The top four factors detected by the ensemble model that increased the risk of MACE were number of vessels with severe plaque ≥70%, left main plaque type, ever smoker, and left main stenosis ≥50%. Fig 4 shows the explanation of a patient correctly predicted as high risk of MACE. The top five features, including partially calcified plaque (calcified and noncalcified), left circumflex stenosis ≥50%, left main plaque type, number of vessels with severe plaque ≥70%, and ever-smoker, contributed to the prediction.
Fig 1 illustrates the top ten predictive features and their incremental contribution to the predictive performance of the ML model.
"Medicine",
"Computer Science"
] |
Feasibility Study on the Use of Fly Maggots (Musca domestica) as Carriers to Inhibit Shrimp White Spot Syndrome
The shrimp aquaculture industry has encountered many diseases that have caused significant losses, with the most serious being white spot syndrome (WSS). Until now, no cures, vaccines, or drugs have been found to counteract the WSS virus (WSSV). The purpose of this study was to develop an oral delivery system to transport recombinant proteinaceous antigens into shrimp. To evaluate the feasibility of the oral delivery system, we used white shrimp as the test species and maggots as protein carriers. The results indicated that the target protein was successfully preserved in the maggot, and the protein was detected in the gastrointestinal tract of the shrimp, showing that this oral delivery system could deliver the target protein to the shrimp intestine, where it was absorbed. In addition, the maggots were found to increase the total haemocyte count and phenoloxidase activity of the shrimp, and feeding shrimp rVP24-fed maggots significantly induced the expression of penaeidin 2. In the WSSV challenge, the survival rate of the group fed rVP24-fed maggots was approximately 43%. This study showed that maggots can be used as effective oral delivery systems for aquatic products and may provide a new method for aquatic vaccine delivery systems.
Introduction
Shrimp aquaculture produces crustaceans of substantial economic value and is an important source of income in many countries worldwide. According to Food and Agriculture Organisation (FAO) statistics, in 2018, the farmed shrimp industry produced more than 9.4 million tons of product worth more than USD 69.3 billion (Litopenaeus vannamei, 52.9%; Procambarus clarkii, 18.2%; Eriocheir sinensis, 8.1%; Penaeus monodon, 8.0%; Macrobrachium nipponense, 2.5%; Macrobrachium rosenbergii, 2.5%; other crustaceans, 7.8%). As with other high-density aquaculture species, shrimp are threatened by many pathogens and diseases, including acute hepatopancreatic necrosis disease, Taura syndrome virus, and white spot syndrome virus (WSSV) [1]. WSSV is highly lethal to all shrimp life stages, and because 100% cumulative mortality can be reached within 3-10 days under farming conditions, it results in massive economic losses worldwide. WSSV is the causative agent of white spot syndrome (WSS), a disease with a wide host range, infecting all crustaceans. The disease is characterised by the appearance of white spots of 0.5 mm to 2.0 mm in diameter on the inside surface of the carapace, appendages, and the cuticle covering the abdominal segments. WSSV, the only member of the genus Whispovirus, which was assigned to a new family, Nimaviridae, is a large, enveloped, ellipsoid, double-stranded DNA virus [2][3][4][5][6][7].
To date, at least 58 structural proteins have been identified in WSSV, 30 of which are classified as envelope proteins. VP28 and VP26 are the most abundant, accounting for 60% of the envelope proteins. Envelope proteins play important roles in viral infection, such as the recognition of and attachment to receptors on the host cell surface [8][9][10][11][12][13]. Many studies have shown that envelope proteins interact with host proteins, such as VP28/PmRab7 and VP187/β-integrin [9,14], and recently, the interactions between WSSV envelope proteins and host proteins were further confirmed. After interactions between nine structural proteins were identified, the 'infectome' concept was proposed [8]. A part of the infectome binds to chitin-binding protein, and VP24 connects it to the rest of the infectome [15].
Blocking the interactions between WSSV envelope proteins and host receptors could reduce viral infection burdens. Studies have found that envelope proteins, such as VP19, VP24, VP28, and VP53A, can inhibit WSSV infections [16][17][18][19], and many anti-WSSV strategies have used recombinant WSSV envelope proteins to block or induce the shrimp immune response to WSSV infection. Recombinant proteins have been examined and expressed in various systems, including viruses, yeast, Escherichia coli, baculovirus, and transgenic animals [20][21][22][23][24]. Although recombinant protein-expressing systems show effectiveness against WSSV in shrimp, the issues of efficiently transferring a recombinant protein or immune stimulant into shrimp intestines without damaging the protein structure, and translating research results into marketable technologies need to be addressed. Oral administration is a convenient, labour-saving vaccination method for aquaculture compared with injection or immersion and is less stressful to shrimp.
Many insects are used as feed in aquaculture, including housefly maggots (Musca domestica), black soldier flies (Hermetia illucens), silkworm pupae (Bombyx mori), and mealworms (Tenebrio molitor) [25,26]. Housefly larvae, or maggots, are used for environmental and medical purposes, such as recycling waste food and removing gangrenous tissue from wounds, as well as in the production of maggot meal. Maggot meal contains 39-61.4% crude protein, 12.5-21% lipids, and 5.8-8.2% crude fibre and is rich in phosphorus, trace elements, and B complex vitamins [27]. The biological value of maggot meal is equivalent to that of whole fish meal, and the larvae contain antibiotics and no anti-nutritional or toxic factors [28]. However, the mass commercial production of fly maggots for raising shrimp is still in the development stage. There is a need for technological innovation to enhance maggot production and the application of maggot meal in shrimp aquaculture.
The purpose of this study was to determine the possibility of transporting recombinant WSSV protein into maggots and to investigate the protective effects against WSSV in shrimp. The data showed that the recombinant protein accumulated in a biologically embedded manner in the maggot, and the maggot protected the recombinant protein from enzymatic digestion during transport to the shrimp intestine, where it induced specific immune gene expression. The results from this study suggest that maggots are a potential oral vaccine system for shrimp aquaculture.
Experimental Animals
Litopenaeus vannamei shrimp (average ~5 ± 1 g) were purchased from southern Taiwan. Shrimp were selected for their vigour and hard-shell quality and kept in 320 ppt sterilised seawater for 7 days at 26 ± 1 °C until the challenge experiment. Adult houseflies (Musca domestica) collected in proximity to the National Taiwan Ocean University were used to establish breeding colonies. Larval migration within the manure mass during development was assessed using clear plastic containers (H: 30 cm × W: 20 cm × L: 20 cm). A mixture of meat and water was provided ad libitum in open containers as an oviposition substrate for the flies. The larvae were harvested before pupation at the second instar, after approximately 2-4 days of growth, and were rinsed in PBS buffer before being used in the following analyses.
Virus and Viral Inoculum
We used WSSV (GenBank Accession no. AF440570) isolated in 1994 in Taiwan from infected Penaeus monodon [7]. The haemolymph was collected from experimentally WSSV-infected shrimp (L. vannamei; mean weight: 15 g), diluted 1:4 with PBS, and frozen at −80 °C. The shrimp were inoculated with the virus by feeding with infected shrimp meat, and the natural infection was monitored. The infected shrimp meat was prepared as follows: specific pathogen-free shrimp were treated with 100 µL WSSV (1.7 × 10⁵ copies/ng) via intramuscular injection, and infected shrimp were collected 7 days into the virus challenge. Then, the shrimp meat was extracted, chopped, and mixed evenly, and 0.5 g of infected shrimp meat was transferred to a new microcentrifuge tube and stored at −20 °C. The WSSV content of the infected shrimp meat was measured using the Innocreate Bioscience WSSV QD Kit (Innocreate Bioscience, New Taipei City, Taiwan) and estimated according to the relevant standards and formulas provided in the kit. The viral content was maintained at 10⁵ copies/µL.
Expression of Recombinant WSSV VP24 (rVP24) and Enhanced Green Fluorescent Protein in E. coli
The WSSV envelope protein VP24 was amplified from the genomic DNA of the WSSV T-1 strain with the primers VP24-F/VP24-R, and enhanced green fluorescent protein (EGFP) was amplified from the EGFP plasmid with the primers EGFP-F/EGFP-R (Table 1). The resultant recombinant plasmids pET28b-rVP24 and pET28b-EGFP were transformed into the E. coli BL21 (DE3) strain. E. coli BL21 (DE3) cells were cultured in LB medium (10 g of tryptone, 5 g of yeast extract, 10 g of NaCl, and 1 L of distilled water) with 25 µg/mL kanamycin at 37 °C, and protein expression was induced with 1 mM isopropyl-β-D-thiogalactopyranoside at 30 °C for 8 h. The recombinant protein, tagged with six consecutive histidines, was purified with QIAexpressionist nickel-nitrilotriacetic acid metal-affinity chromatography (Qiagen, Hilden, Germany) according to the manufacturer's recommendations. The resins were washed with buffer (pH 8.0) containing 50 mM sodium phosphate, 0.3 M sodium chloride, and 10 mM imidazole, and the protein was eluted with buffer (pH 8.0) containing 50 mM sodium phosphate, 0.3 M sodium chloride, and 250 mM imidazole. The eluted protein was then concentrated using Amicon Ultra-15 centrifugal filters (Merck Millipore, Burlington, MA, USA) in PBS buffer and stored at 4 °C for further antiserum production.
Antisera Production
New Zealand white rabbits were used to develop polyclonal antisera against rVP24. In brief, the rabbits were hyperimmunised by injection with 250 µg protein emulsified in complete Freund's adjuvant. Subsequent booster injections were carried out with 250 µg protein emulsified in incomplete Freund's adjuvant. The antisera were collected after the antibody titre had peaked.
Evaluation of Protein Carrying Ability of Maggots
After the expression of rVP24 was induced in E. coli, the culture was centrifuged for 2 min at 13,000 rpm and the supernatant discarded. The E. coli pellet was mixed with PBS buffer. Second-instar maggots were rinsed in PBS buffer, which was removed before the following analysis. The maggots (rVP24-fed maggots) were placed in 9 cm Petri dishes, fed the suspension of E. coli carrying pET28b-rVP24, and collected after 30, 45, 60, 75, 90, 105, and 120 min. After all maggots had ingested at least 30 µg of rVP24, they were washed in PBS buffer and drained. The maggot samples were freeze-dried using a lyophilisation system (Kingmech, New Taipei City, Taiwan) at −40 °C for 16 h before being stored in a humidity-control box. Western blotting was used to evaluate the rVP24 protein degradation time in the maggot digestive tract. E. coli expressing EGFP was used as an indicator for evaluating the protein-carrying ability of the maggots. The recombinant plasmid pET28b-EGFP containing green fluorescent protein in the open reading frame was used as an enrichment indicator. The rEGFP-fed maggots were prepared using the same method described for the rVP24-fed maggots. The location of the recombinant protein in maggots and shrimp was observed using fluorescence microscopy (Olympus IX71, 395 nm, Tokyo, Japan).
Evaluation of the Ability of the Maggot Vector to Deliver Protein to the Shrimp Digestive System
To study the delivery of the recombinant protein to the digestive system of L. vannamei shrimp, they were fed freeze-dried maggots containing E. coli transformed with rEGFP as an indicator, as previously described. Negative control shrimp were fed normal maggots. Shrimp weighing 10 ± 1 g were first used to stock an indoor 80-L aquarium and fed commercial feed twice daily (09:30 and 19:00) for 1 week to acclimate to the experimental conditions. At the end of the acclimation period, the shrimp were fed commercial shrimp feed in the morning and fly maggot feed in the evening. During the experimental period, the water temperature was 26 ± 1 °C, and the photoperiod followed a 12:12 light:dark schedule. Shrimp from each group were randomly sampled each day of the feeding trial and anaesthetised by placing them on ice, and the alimentary canal was removed. After rinsing with PBS, the stomach and intestines were separated aseptically with tissue scissors. The stomach and intestines were observed using fluorescence microscopy.
SDS-PAGE
A discontinuous electrophoresis buffer system with 4% stacking gel and 12% resolving gel was used for protein separation. All samples were boiled for 10 min after the addition of sample loading buffer and subsequently electrophoresed at a voltage of 80 V for the stacking gel and 120 V for the resolving gel until the bromophenol blue reached the bottom of the gel. Protein bands were visualised by staining with Coomassie Brilliant Blue R-250.
Western Blotting
For Western blotting analyses, proteins separated by SDS-PAGE were transferred onto a polyvinylidene difluoride membrane (Merck Millipore, Burlington, MA, USA) by semidry blotting. Membranes were blocked in 5% skim milk (Difco Laboratories, Sparks, MD, USA) in TBS (0.2 M NaCl and 50 mM Tris-HCl, pH 7.4). Immunodetection was performed by incubating the blot in rabbit anti-VP24 serum diluted 1:5000 in TBS with 5% skim milk for 1 h at room temperature. Subsequently, goat anti-rabbit IgG antibody conjugated with horseradish peroxidase (Sigma-Aldrich, St. Louis, MO, USA) was used at a concentration of 1:10,000, and detection was performed with Western Blot Chemiluminescence Reagent (NEN Life Sciences, Boston, MA, USA).
WSSV Testing by Conventional PCR
Screening of shrimp gill tissue to identify WSSV-positive samples was conducted using the IQ2000 WSSV PCR Kit (GeneReach, Taichung, Taiwan). DNA was extracted from the pleopods using the supplied DNA lysis buffer in accordance with the manufacturer's instructions. WSSV-positive samples were graded as extremely light, light, moderate, or heavy using the banding pattern of PCR products, as recommended by the manufacturer.
Quantitative Real-Time PCR Assay
The WSSV QD Kit quantitative system (Innocreate Bioscience, New Taipei City, Taiwan) was used to quantify the absolute WSSV genomic DNA copy number in the pleopods collected from the WSSV-infected shrimp in the feeding study. Three pooled samples were prepared for each datapoint, with each pooled sample containing pleopods from three shrimp. The shrimp DNA and WSSV genomic DNA were then extracted using the DTAB/CTAB DNA extraction kit (GeneReach, Taichung, Taiwan). The samples were analysed on a real-time PCR system in accordance with the instructions provided in the WSSV QD Kit manual. The real-time PCR data were analysed using 7500 software (Applied Biosystems, Foster City, CA, USA). To calculate the results (copies/µL), the following equations were applied to convert the values into WSSV copies per nanogram of shrimp DNA:

Shrimp DNA = shrimp DNA copy number / shrimp index (10872)

Ratio of virus copies to shrimp DNA = WSSV DNA copy number / shrimp DNA

To assess the reproducibility of the standard curve, standard reactions were performed three times independently, including duplications of each reaction. The data were analysed using the statistical program and presented as the mean ± SD.
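The two conversion equations can be transcribed directly into a small helper function; 10872 is the shrimp index constant quoted above, and the example numbers are made up.

```python
# Direct transcription of the two conversion equations above into a helper
# function; 10872 is the shrimp index constant quoted in the text, and the
# example numbers are made up.
def wssv_copies_per_ng_shrimp_dna(wssv_copies, shrimp_dna_copies, shrimp_index=10872):
    shrimp_dna = shrimp_dna_copies / shrimp_index     # Shrimp DNA (per the kit formula)
    return wssv_copies / shrimp_dna                   # WSSV copies per ng of shrimp DNA

print(wssv_copies_per_ng_shrimp_dna(wssv_copies=5.0e5, shrimp_dna_copies=2.0e6))
```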
Feeding Trial
To compare the differences in WSSV load and immune parameters of shrimp after feeding with the maggot vector, we used the IQ2000 WSSV Kit to select WSSV-positive shrimp and graded their positivity from extremely light to moderate. All the shrimp (10 ± 1 g) were randomly allocated to four groups of triplicates in 12 tanks. The groups were classified according to diet (commercial shrimp feed, pET28b-fed maggot vector, rVP24-fed maggot vector, and normal maggot). All maggot diets were prepared as described in Materials and Methods Section 2.5: the maggots were freeze-dried 1 h after feeding on the respective E. coli; the pET28b-fed maggots served as the plasmid negative control, to check that the empty plasmid confers no protection; the rVP24-fed maggots were given to the experimental group; and normal maggots, which were not fed any E. coli, were given to the negative control group. The shrimp were fed commercial shrimp feed in the morning and maggot feed in the evening for 15 days. During the feeding trial period, three shrimp from each group were randomly sampled on days 0, 3, 6, 9, 12, and 15.
Total Haemocyte Count
To conduct the total haemocyte count (THC), three shrimp from each group were randomly sampled. Pooled samples were prepared for each data point, with each pooled sample containing pleopods from three shrimp. Haemolymph (300 µL) was withdrawn from the ventral sinus of each shrimp and mixed with anticoagulant buffer (27 mM sodium citrate, 336 mM NaCl, 115 mM glucose, and 9 mM EDTA, pH 7.0) at a 1:1 ratio. The haemocytes were counted using a haemocytometer and a phase-contrast microscope (Nikon, Tokyo, Japan).
Phenoloxidase Activity
Phenoloxidase (PO) activity was measured following a previously described method. Anticoagulant buffer was mixed with the haemocytes at a ratio of 1:1, and the cells were collected by centrifugation at 1000× g for 20 min at 4 °C. The cell pellet was resuspended in 1 mL cacodylate-citrate buffer (10 mM sodium cacodylate, 0.2 M NaCl, 10 mM trisodium citrate, pH 7.0) and centrifuged again. The supernatant was removed, and the pellet was resuspended in 200 µL cacodylate buffer (10 mM sodium cacodylate, 0.2 M NaCl, 10 mM CaCl2, 0.26 M MgCl2, pH 7.0). The aliquot was divided equally into two tubes, with one tube used for measuring total PO activity and the other for measuring background PO activity. Cacodylate buffer (100 µL) was added to the sample tube for measuring total PO activity, and 100 µL of 1% SDS buffer was added to the sample tube for measuring background PO activity. After 10 min, 50 µL of 0.3% L-dihydroxyphenylalanine was added to the tubes for 5 min. A spectrophotometer at 490 nm was used to measure PO activity, and the amount of inactive PO was calculated as the total available PO minus the PO activity before SDS treatment.
RNA Extraction and Real Time-PCR Analysis
Shrimp gill tissues were homogenised in 1 mL of TRIzol reagent (Thermo Fisher Scientific Inc., Waltham, MA, USA) and then subjected to 2-propanol extraction and ethanol precipitation of total RNA in accordance with the manufacturer's recommendations. The total RNA was centrifuged in 75% ethanol at 14,000× g for 30 min at room temperature, and the pellet was dissolved in diethylpyrocarbonate (DEPC)-treated water and quantified by spectrophotometry. After RNA extraction, 1 µg of total RNA was used for cDNA synthesis using HiScript I Reverse Transcriptase (BIONOVAS, Toronto, ON, Canada) with an oligo(dT)18 primer according to the manufacturer's protocol. The cDNA synthesis conditions were 65 °C for 5 min, 42 °C for 60 min, and 70 °C for 15 min. Real-time PCR was performed using the Applied Biosystems 7500 Real-Time PCR System (Applied Biosystems, Waltham, MA, USA) on a TOptical thermocycler (Analytik Jena AG, Jena, Germany). The gene expression of penaeidin 2 (PEN2), crustin, superoxide dismutase (SOD), clotting protein (CP), Litopenaeus vannamei toll receptor (LvToll), and elongation factor-1α (EF-1α) was measured using the primers listed in Table 1. The real-time PCR reaction contained 1 µL of cDNA template, 10 µL of 2× qPCRBIO SyGreen Master Mix, and 0.8 µL each of the forward and reverse primers (10 pmol/µL). The amplification conditions were an initial denaturation at 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 60 s. The melting curve and cooling were performed in the last step of the PCR. The expression levels of the target genes were normalised to EF-1α, a shrimp housekeeping gene. The fold change in relative gene expression compared with the control group was determined by the standard 2^-ΔΔCt method. The changes were analysed using an unpaired-sample t-test. Statistical significance was accepted at p < 0.05, and high significance was accepted at p < 0.01. All data are expressed as mean ± standard deviation (mean ± SD).
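The 2^-ΔΔCt calculation with EF-1α as the reference gene can be written out as a short function; the Ct values in the example are made up, not study data.

```python
# Sketch of the 2^-ΔΔCt calculation described above, with EF-1α as the
# reference gene; the Ct values are made-up examples, not study data.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    delta_ct_treated = ct_target_treated - ct_ref_treated    # normalise to EF-1α
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# PEN2 in a maggot-fed shrimp relative to the control (illustrative Ct values)
print(fold_change(ct_target_treated=24.1, ct_ref_treated=18.0,
                  ct_target_control=26.3, ct_ref_control=18.2))   # = 4.0-fold up
```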
In Vivo Neutralisation Assay
According to the Guide for Animal Use Protocol of the Institutional Animal Care and Use Committee (IACUC) of National Taiwan Ocean University, ethical approval was not required. The shrimp were acclimatised in the laboratory for about 1 week before the experiment. The experimental shrimp were then further divided into four groups, with three replicates of 20 shrimp in each group. The shrimp were fed as follows: the positive control group 1 was fed with commercial shrimp feed, the plasmid negative control group 2 was fed pET28b-fed maggots, group 3 was fed normal maggots, and group 4 was fed rVP24-fed maggots. During the neutralisation assay period, the diet and water conditions were the same as in the above feeding trial, and the WSSV inoculum step was initiated after 9 days. During the experimental period, the shrimp survival rates of each group were recorded every day. The cumulative survival rates were calculated and subjected to a paired sample t-test, and differences were considered significant at p < 0.05.
Optimum Conditions for Incorporating Recombinant E. coli into Maggots
The maximum time that the recombinant protein remains intact in the maggots is an important factor. Thus, the aim of this analysis was to evaluate the digestion time of the recombinant protein in the maggot. The second-instar fly maggots were fed the E. coli solution containing rVP24. Using Western blotting, we observed the presence of rVP24 from 30 to 105 min, before it disappeared by 120 min. The data showed the recombinant protein was maintained in the maggot for at least 105 min (Figure 1). After feeding the maggots E. coli solution containing rEGFP for 1 h, the EGFP signal was observed in the maggots via fluorescence microscopy, and no signal was seen in the negative control normal maggots (Figure 2).
Delivery of Recombinant Protein to the Shrimp Digestive System
Delivery of the recombinant protein to the digestive system of the shrimp was demonstrated directly by fluorescent imaging. When maggots containing E. coli expressing EGFP protein (rEGFP-fed maggots) were fed to the shrimp, the fluorescent EGFP signal was seen in the gastrointestinal tract within 3 h, and no fluorescence was observed in the stomach and intestine of shrimp fed normal maggots (Figure 3A,B). We then ascertained whether the target recombinant protein accumulated in the digestive system of the shrimp when they were continuously fed the maggot vector. During the feeding trial period, one shrimp was randomly sampled every day to observe the gastrointestinal tract. Fluorescence was seen to increase in the shrimp stomach and intestine during the feeding period. Fluorescence was detected in the stomach at all data points. On the 1st day, a weak fluorescence signal was detected in the shrimp intestine wall. On the 2nd day, more fluorescence was detected in the shrimp intestine wall. On the 3rd day, the fluorescence was similar to that on the 2nd day, but its distribution was wider. A significant increase in the extent of fluorescence was observed on the intestinal wall from the 4th to the 7th day (Figure 4A). No fluorescent signal was observed in the shrimp stomach or intestine during the feeding trial with normal maggots (Figure 4B). Furthermore, we monitored the decrease in fluorescence after we stopped providing the rEGFP-fed maggots, in order to understand how long rEGFP can remain in the gastrointestinal tract. When we stopped feeding the rEGFP-fed maggots, we could still observe fluorescence in the stomach and intestine wall from day 1 to day 4. The rEGFP fluorescent signal was maintained in the gastrointestinal tract of the shrimp for 4 days, and there was no fluorescent signal after the 5th day (Figure 5).
In Vivo Test of WSSV Genome Copy Numbers, THC, and PO Activity
The changes observed in the total haemocyte count (THC) in the haemolymph of WSSV-positive shrimp are shown in Figure 6A. Compared with the control group, the THC in the normal maggot and pET28b-fed maggot groups changed by a relatively large extent, with significantly lower THC than the control group on days 3 and 6, but higher THC on day 9. However, the rVP24-fed maggot group maintained a relatively stable THC that was slightly higher than that of the control group. The change in phenoloxidase (PO) activity in the haemolymph was similar to that of the THC: PO activity was lower in the normal maggot group than the control group on day 3, but then rose on days 6, 9, and 12, peaking on day 12, and gradually stabilising on day 15. The PO activity of the pET28b-fed maggot group was lower than that of the control group on day 3, but rose on days 6 and 12, peaked on day 12, and gradually stabilised on day 15 ( Figure 6B). During the feeding trial, all WSSV-positive shrimps survived. The WSSV genome copy number peaked on day 9 in all groups, but no WSSV was detected in the shrimp on days 12 and 15 ( Figure 6C,D).
Expression of Innate Immune-Related Genes
We performed gene expression analysis to evaluate the transcription levels of immune genes in the gills of L. vannamei fed the maggot feed and control diets. All maggot-fed groups showed a significant downregulation of clotting protein (CP) and Litopenaeus vannamei toll receptor (LvToll) gene expression ( Figure 7A,B). The expression of crustin was upregulated in the normal maggot group on day 6 and significantly upregulated on day 15, but this returned to a level similar to the control group on the other days. We also noted the rVP24 maggots showed significant downregulation of crustin expression at every time point ( Figure 7C). Significantly upregulated expression of PEN2 was seen in the rVP24 maggots on days 3, 6, 9, and 12, returning to a level similar to the control group on day 15 ( Figure 7D). The SOD gene expression levels were significantly higher in the pET28b maggot group only on day 6 and returned to a level similar to the control group on the other days. The other groups showed downregulated expression at all time points ( Figure 7E).
Figure 7. (E) SOD expression in haemocytes from Litopenaeus vannamei fed commercial shrimp feed (Control), normal maggots (NM), pET28b-fed maggots (pET28b), or rVP24-fed maggots (rVP24) during the 15-day feeding trial. Data are expressed as mean ± SEM (n = 3), and significant differences from the control group are indicated by asterisks: ⁕ p < 0.05, ⁕⁕ p < 0.005, ⁕⁕⁕ p < 0.0005.
WSSV Neutralisation In Vivo by rVP24-fed Maggots
The protection conferred by the maggot vector was evaluated through an experimental WSSV challenge. Shrimp were infected with the virus by feeding them WSSV-infected shrimp meat, and the survival rates were evaluated for 2 weeks. Compared with the control group, a significant difference (p < 0.05) appeared from the 5th day, and a greater significant difference (p < 0.0005) appeared from the 8th day and remained until the end of the experiment; the rVP24-fed maggot group maintained the greater significant difference from the 6th to 14th days, while the normal maggot and pET28b-fed maggot groups maintained highly significant differences (p < 0.005) from the 6th to 14th days. There were some significant differences between the normal maggot, pET28b-fed maggot, and rVP24-fed maggot groups, but the overall trends were not significantly different. The final survival rate for the rVP24-fed maggot group was 43.3%, that of the normal maggot group was 23.3%, and that of the pET28b-fed maggot group was 25% (Figure 8).
Figure 8. Group 1, positive control fed commercial shrimp feed; group 2, plasmid negative control fed pET28b-fed maggots; group 3, fed normal maggots; group 4, fed rVP24-fed maggots (×). In all groups, the virion inoculum was equivalent to 10^5 copies. The line marked with an asterisk (⁕) is significantly higher than that of the control groups; double asterisks (⁕⁕) indicate highly significant differences (p < 0.005), and triple asterisks (⁕⁕⁕) indicate p < 0.0005.
Discussion
Maggots have always been considered to have excellent properties as feed additives or fishmeal because they have multiple antimicrobial peptides and a large quantity of high-quality protein. The antibacterial compounds found in maggots are capable of lysing over 90% of Gram-positive and Gram-negative bacteria, including Pseudomonas aeruginosa, Klebsiella pneumoniae, and methicillin-resistant Staphylococcus aureus, within 15 min by changing the membrane potential of the bacteria [29,30]. However, as maggot production is still limited and the optimum percentage of maggots to add to shrimp feed has not yet been determined, maggot feed still needs to be developed [31][32][33]. In this study, we demonstrated that a recombinant WSSV protein vector delivered using maggots as an oral vaccine system induced a protective immune response in shrimp. The improved efficacy of this oral vaccine system over other oral vaccine systems may be due to several factors: First, using the natural food of shrimp as a carrier facilitated the uptake of the recombinant protein; second, the E. coli cell wall and the maggot intestine protected the recombinant protein from gastrointestinal degradation and enabled its delivery and accumulation in the shrimp gastrointestinal system; third, the antimicrobial peptides in the maggot or incorporated E. coli could produce an innate immune response.
The recombinant WSSV protein used was VP24, which plays a key role in the WSSV infectome, and EGFP was chosen as an indicator for the evaluation of the protein-carrying ability of the maggot. Maggots were fed E. coli that were induced to express VP24 and collected every 15 min for a total of 2 h. As shown in Figure 1, the VP24 antibody signal did not weaken until 2 h after feeding, indicating that rVP24 persisted in the maggots for 105 min. This result is consistent with that found by other research teams, who showed that bacteria are digested by fly maggots within approximately 1 to 1.5 h [34]. The Coomassie blue staining showed the presence of a major 75 kDa band that was likely to be the insect's haemocyanins, and the Western blot assay detected specific antibody signals at approximately 75 kDa and 30 kDa in both the normal maggot group and the rVP24-fed maggot group. Compared to the rVP24 E. coli liquid, these were considered to be non-specific signals caused by the proteins in the maggot. These non-specific signals were used as the basis for subsequent judgements. To directly observe the capacity of the maggot to carry recombinant proteins, the maggots were fed rEGFP and freeze-dried. Through an inverted fluorescence microscope, we observed green fluorescence in the maggot after feeding, indicating that the maggots had the ability to carry and preserve the recombinant protein (Figure 2). Based on the results of the maggot digestion experiments, the maggots were freeze-dried 1 h after feeding on E. coli in subsequent experiments.
After feeding rEGFP-fed maggots to the shrimp, green fluorescent signals were observed in the stomach (Figure 3A) and intestines (Figure 3B), which proved that the maggot can protect the recombinant protein from degradation so that it reaches the shrimp's gastrointestinal tract. Moreover, rEGFP may accumulate in the shrimp intestines when the rEGFP-fed maggots are fed continuously (Figure 4). Once the rEGFP-fed maggot diet was stopped, the green fluorescent signal was maintained in the shrimp's gastrointestinal tract for 4 days until it disappeared on day 5 (Figure 5). Continued feeding of rEGFP-fed maggots was shown to cause the recombinant protein to accumulate in the shrimp gastrointestinal tract, where it was maintained until its digestion.
During the process of aquaculture, shrimp are easily infected with WSSV, and the survivors become asymptomatic carriers. We selected WSSV-positive shrimp to observe changes in the THC, PO activity, and WSSV copy number after they were fed with maggots. In the maggot feeding trial, WSSV was still detectable on days 3, 6, and 9 following ingestion, but after day 12, no WSSV was detected in any group (Figure 6). Previous studies found that THC levels and PO activity declined with prolonged infection. We also found this phenomenon in the normal maggot and pET28b-fed maggot groups on day 3. As feeding days progressed, THC and PO activity gradually increased in the normal maggot and pET28b-fed maggot groups, while the WSSV copy number decreased, returning to the original level on the 15th day. This may be because the maggots stimulated THC and PO activity and cleared the WSSV. THC and PO activity in the rVP24-fed maggot group was maintained at a higher level than those in other groups, but the number of WSSV copies did not change significantly, which is similar to the situation in shrimp supplemented with immune stimulants after infection. Hence, supplementing with rVP24 not only maintained THC levels and PO activity from being inhibited by WSSV but also helped shrimp to reduce the WSSV copy number.
We chose genes relevant to different immune pathways: SOD of the proPO system, CP of the coagulation system, PEN2 and crustin of the antimicrobial peptide system, and the Toll-like receptor (LvToll), and observed whether the maggots regulated the expression of these genes to resist WSSV (Figure 7). The gene expression of CP and LvToll was downregulated in all groups. Although crustin and SOD expression increased significantly at some time points, there was no overall upregulation. We noticed the gene expression of PEN2 was significantly upregulated, and shrimp fed a diet of rVP24-fed maggots showed 4 to 17 times greater PEN2 expression than the control groups. According to Xiao et al.'s research, WSSV infection activates the Toll and IMD (NF-κB-related) signalling pathways that induce the production of penaeidins, including BigPEN, PEN2, PEN3, and PEN4. Specifically, PEN2 interferes with the ability of the receptor-binding protein VP24 to bind to the host receptor LvpIgR, thereby blocking viral entry into the target cells [35]. The real-time PCR data showed that the rVP24-fed maggots carried rVP24 to the digestive tract of the shrimp, where it successfully induced a specific immune gene response.
After confirming that maggots can deliver the recombinant protein to the shrimp intestine, we used an in vivo neutralisation assay to test whether the maggot delivery system can reduce the mortality rate after WSSV infection. Based on the results of the previous experiments, the shrimp were fed with commercial shrimp feed or the different maggots for 9 days before the in vivo neutralisation experiment to ensure sufficient protection and reduce the remaining WSSV in the body. Then, on the 9th day, the WSSV inoculation was started. The results (Figure 8) showed that the positive control group (group 1, given commercial shrimp feed) had a 0% survival rate on the 14th day after infection. The final survival rates of the normal maggot and pET28b-fed maggot groups were 23.3% and 25%, respectively. This could be due to compounds in the maggots stimulating the host defence system. Group 4, which was fed the rVP24-fed maggots, showed obvious protection, as the survival rate was 43.3%. Therefore, we concluded that the rVP24-fed maggots delayed or neutralised the WSSV infection. The in vivo neutralisation assay results were similar to previous VP24 in vivo neutralisation results [36]. In the first 10 days, the survival rate of the maggot-fed groups was maintained at 50%, then gradually decreased, which was possibly because the maggot feed replaced evening meals in our experiments, causing the shrimps to become nutrient-deprived for some time. The maggots may have to be used as an additional functional feed instead of as a replacement for fish meal.
Many studies have found that envelope proteins, including VP19, VP24, VP28, and VP53A, can inhibit WSSV infection. However, the most efficient method of delivering these preventive proteins to the shrimp gastrointestinal tract was unclear. To achieve efficient vaccination, the ability of the carrier to store the target protein, the ease of operation, and the palatability of the carrier to the shrimp must be considered. In this study, we successfully showed that the target protein persisted in the maggot and was stable at room temperature, and the treated maggot successfully delivered the target protein to the shrimp gastrointestinal tract. When we fed the shrimp the treated maggots, shrimp THC levels and PO activity levels were maintained, specific immunity gene expression was regulated, and the survival rate after WSSV infection was improved.
After hatching, the first-instar maggot is roughly 2 to 5 mm long, the second-instar maggot grows to around 10 mm, and the third-instar maggot grows to between 15 mm and 20 mm. The method used in this study can load recombinant protein into maggots of each instar, and the instar can be chosen according to the size of the farmed animals. This study established a preliminary method for using maggots, which are palatable to shrimp, as a | 9,612.4 | 2021-08-01T00:00:00.000 | [
"Biology"
] |
Autophagy: New Insights into Its Roles in Cancer Progression and Drug Resistance
Simple Summary Autophagy is a mechanism of lysosomal proteolysis that is utilized to degrade damaged organelles, proteins, and other cellular components. Although key studies demonstrate that autophagy functions as a mechanism of tumor suppression via the degradation of defective pre-malignant cells, autophagy can also be used as a mechanism to break down cellular components under stress conditions to generate the required metabolic materials for cell survival. Autophagy has emerged as an important mediator of resistance to radiation, chemotherapy, and targeted agents. This series of articles highlight the role of autophagy in cancer progression and drug resistance and underscores the need for new and more effective agents that target this process.
Autophagy is an evolutionarily conserved protein degradation process that is characterized by the formation of double-membraned vesicles (autophagosomes) that envelop bulk cellular material and/or organelles. Autophagosomes subsequently fuse with lysosomes and the degradation of their cargo is mediated by lysosomal proteases [1,2]. Autophagy is an essential cellular process that degrades damaged organelles and long-lived proteins and recycles cellular components to generate the metabolic building blocks required for cell survival. Induction of autophagy has been reported to play both pro-death and pro-survival roles depending upon the specific cellular context. Pro-death-mediated autophagy can occur when the level of degradation goes beyond the threshold of maintaining the required number of organelles needed for cell survival. Jeong et al. investigated the effects of cannabidiol on oxaliplatin resistance in colorectal cancer cells [3]. They determined that combining cannabidiol with oxaliplatin reduced the phosphorylation of nitric oxide synthase 3 (NOS3), nitric oxide (NO) production, and superoxide dismutase 2 (SOD2) expression, resulting in the generation of reactive oxygen species (ROS). Interestingly, the induction of ROS was associated with mitochondrial dysfunction, autophagy, and cell death. In addition, the combination of cannabidiol and oxaliplatin displayed superior anticancer activity as compared with either monotherapy and, notably, was able to overcome oxaliplatin resistance. While autophagy induction is frequently associated with cell survival and drug resistance, this study highlights that it can also contribute to cell death under certain circumstances.
The roles of autophagy in normal and malignant cells can be strikingly different. In normal cells, autophagy functions as a mechanism of tumor suppression by eliminating damaged organelles and proteins to promote cellular homeostasis. However, cancer cells preferentially utilize autophagy to drive metabolic reprogramming that degrades cellular components to generate the necessary energy needed for cell survival during periods of stress such as hypoxia, starvation, and during chemotherapeutic treatment [4]. In addition, basal autophagic activity has been determined to be higher in more advanced and metastatic tumors [5]. Ieni et al. analyzed advanced tubular gastric adenocarcinomas and measured the levels of the autophagy-related proteins microtubule-associated protein 1 light chain 3 (LC3A/B), Beclin-1, and activating molecule in Beclin-1-regulating autophagy protein-1 (AMBRA-1) by immunohistochemistry [6]. Immunostaining demonstrated that LC3A/B, Beclin-1, and AMBRA-1 were selectively expressed in tumor tissue and not in adjacent normal stromal cells. In addition, an autophagy-positive expression signature was associated with poorer overall survival in these patients. Collectively, the authors concluded that autophagy is associated with more aggressive advanced tubular gastric adenocarcinomas.
Most studies have focused on inhibition of autophagy at a distal point in the process through interference with lysosomal degradation using drugs such as hydroxychloroquine (HCQ) [7]. Indeed, HCQ and the related drug chloroquine (CQ) are the only autophagy inhibitors that have been evaluated in clinical trials to date. Given that these are very old drugs that were not optimized for autophagy inhibition activity during their initial discovery, there is a tremendous interest in developing new agents that target autophagy more robustly, particularly at more proximal points in the pathway, as this remains an underexplored strategy. Chen et al. investigated the anti-autophagy effects of MPT0L145, a novel inhibitor of phosphatidylinositol 3-kinase catalytic subunit type 3 (PIK3C3) and fibroblast growth factor receptor (FGFR) [8]. PIK3C3 belongs to the PI3K family of kinases and has been previously shown to be an essential factor that promotes autophagy [9,10]. The authors demonstrated that MPT0L145 perturbs autophagic flux and sensitizes cancer cells to the anticancer agents gefitinib and gemcitabine. This study further highlights that targeting PIK3C3 can overcome drug resistance associated with autophagy induced by chemotherapy.
In addition to these research articles, several excellent reviews summarizing key aspects of the field were published in this Special Issue. It has been well established that autophagy is an important mediator of therapeutic resistance to diverse classes of anticancer agents. Two outstanding articles discuss the mechanisms underlying autophagy-mediated treatment resistance and strategies to enhance chemosensitization through inhibition of autophagy [11,12]. An additional article specifically focuses on autophagy-driven resistance to histone deacetylase (HDAC) inhibitors [13]. Indeed, autophagy has been demonstrated to be a key resistance mechanism to HDAC inhibitor therapy, which has prompted the clinical evaluation of this therapeutic approach [14][15][16]. Taken together, these articles provide a comprehensive review of autophagy as a drug resistance factor and summarize the robust evidence in the literature that demonstrates that targeting autophagy can improve the anticancer activity of many chemotherapeutic agents.
Besides being a facilitator of drug resistance, upregulation of autophagy has been identified as a contributing factor that accelerates disease progression and metastasis in a multitude of tumor types. Saxena et al. review the roles of autophagy in esophageal squamous cell carcinoma and esophageal adenocarcinoma pathogenesis [17]. They also conclude that the development of novel agents that specifically activate or inhibit autophagy is essential to better understand the role of autophagy in malignant biology and to improve the clinical targeting of this pathway. In addition to disrupting the lysosome with agents such as HCQ, CQ, and ROC-325, upstream components of the autophagy machinery may prove to be viable therapeutic targets [4,18,19]. Some of the potential upstream targets in the cascade include the aforementioned PIK3C3 or vacuolar protein sorting 34 (VPS34) as well as UNC-51-like kinase 1 (ULK1) and autophagy-related gene 4 (ATG4). A particularly interesting target is ATG4, which is reviewed in this issue by Fu et al. [20]. ATG4 is required for autophagosome formation and studies have suggested that ATG4 may be a potential anticancer target due to its elevated expression in some cancer types [21]. Another interesting review describes the role of actin during autophagy and the development of drug resistance [22]. Actin has previously been demonstrated to be involved in the formation and maturation of autophagic vesicles during the autophagy process [23,24]. The authors describe how actin manipulation affects autophagy and highlight potential therapeutic targets in this pathway. These reviews illuminate the complexity of autophagy and underscore the need for new agents to innovatively modulate this pathway by targeting previously unexplored regulators of the process.
The articles in this Special Issue mesh perfectly with each other to highlight the significance of autophagy as a key mechanism that cancer cells utilize to drive malignant progression and drug resistance. The articles comprehensively discuss the rationale for developing novel autophagy-modulating agents and combining them with standard therapeutic regimens to improve clinical outcomes. They also establish the framework for further studies aimed at delineating the differences between inhibiting autophagy at proximal vs. distal (lysosomal) points. Hopefully, the development of specific and more potent compounds will enable optimized precision targeting of autophagy in future clinical studies. | 1,685.6 | 2020-10-01T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Prediction of monthly electric energy consumption using pattern-based fuzzy nearest neighbour regression
Electricity demand forecasting plays an important role in power system planning and operation. In this work, fuzzy nearest neighbour regression has been utilised to estimate monthly electricity demands. The forecasting model was based on the pre-processed energy consumption time series, where input and output variables were defined as patterns representing unified fragments of the time series. Relationships between inputs and outputs, which were simplified due to patterns, were modelled using nonparametric regression with a weighting function defined as a fuzzy membership of learning points to the neighbourhood of a query point. In the experimental part of the work the model was evaluated using real-world data. The results are encouraging and show the high performance of the model and its competitiveness compared to other forecasting models.
Introduction
Electric energy consumption forecasting is an essential issue in power system planning and operation. Medium-term forecasting is necessary for technical and operational purposes, such as: scheduling maintenance activities, planning of production levels and fuel purchases, and planning of network investments. From an economic viewpoint, energy consumption forecasts are fundamental for negotiating contracts between energy companies and concluding contracts with customers.
Fig. 1 shows a periodical time series representing monthly energy consumption for four European countries (the data from the ENTSO-E repository, www.entsoe.eu). In this figure, seasonal variations and a rising tendency can be observed, caused by the influence of the economic and technological development on the electric market. Seasonal variations reflect the annual cycle and are dependent on climatic factors, which are comparable during the same month of different years. Other factors affecting directly or indirectly the level of energy consumption are political decisions and economic policy. They can disturb the general rising trend and monthly fluctuations. They include: the emergence of alternative energy sources and technologies, fluctuating economic inflation, violent change in energy prices, industrial development, and global warming issues [1], [2].
The time series of monthly electric energy demand presented in Fig. 1 differ depending on the power system size and economic development of the country. Note the significant share of the random component in the time series and the larger amplitude of annual cycles for France than for other countries.
Two approaches have been developed for medium-term electric energy consumption forecasting [3]. The first one, called the conditional modelling approach, focuses on economic analysis, management and long-term planning and forecasting of energy load and energy policies. It considers socioeconomic conditions which impact energy demands, such as economic indicators and electrical infrastructure measures. These additional inputs are introduced to the model together with historical load data and weather-related variables. Such a model can be found in [4]. It includes macroeconomic indicators, such as the consumer price index, the average salary earning and the currency exchange rate.
The second approach, called the autonomous modelling approach, requires a smaller set of input information to forecast future electricity demand, primarily historical loads and weather factors. Because the economic factors are not taken into consideration, this approach is more suited for stable economies. Different forecasting models are used in this case, such as classical autoregressive integrated moving average (ARIMA) and multiple linear regression [5], as well as computational intelligence methods, e.g., neural networks [6]. Examples of such models can be found in [7], where ARIMA, neural networks and neuro-fuzzy systems are employed to forecast future load demand based on various weather-related parameters and historical load profiles. Another example is a model presented in [8], where interval load forecasting is proposed using multi-output support vector regression. In addition, a memetic algorithm is used to select input variables among the variable candidates, which include time-lagged loads and temperatures. In [9] a neural network is used for forecasting load time series components extracted using digital filtering. Evolving fuzzy neural networks are proposed for monthly electricity demand forecasting in [4]. In this solution fuzzy neurons represent the degree of importance of each input variable (loads, weather factors and daylight time). Different weights assigned to input variables lead to improved model accuracy and more precise prediction.
The forecasting model proposed in this work belongs to the latter category. It uses fuzzy nearest neighbour regression (FNNR), based on patterns of the time series fragments. An underlying assumption in this model is: if two fragments of the time series are similar in shape, then the fragments following them are also similar in shape [10]. This approach is especially attractive when the time series expresses a seasonal pattern. In our earlier works, we proposed models from the same class of pattern similarity-based nonparametric regression models: the model based on k-nearest neighbours (k-NN) [11] and the Nadaraya-Watson estimator [12]. The proposed FNNR allows the similarity degree between shapes of the time series fragments to be taken into account using fuzzy set theory.
The remainder of the paper is organised as follows. In Section 2, a time series representation using patterns of time series fragments is described. In Section 3, the forecasting model is defined using fuzzy nearest neighbour regression. The model is tested on real-world data in Section 4. Finally, the work is concluded in Section 5.
Patterns of time series fragments
In the first stage of the proposed approach, the load time series were pre-processed using methods presented in [10]. Input and output patterns were defined. The input pattern is an n-dimensional vector representing a time series fragment preceding the forecasted one. Let us denote the forecasted fragment by Yi = {Ei+1, Ei+2, …, Ei+m}, and the preceding fragment by Xi = {Ei-n+1, Ei-n+2, …, Ei}, where Ek is the monthly energy consumption and k is the time index. An input pattern xi = [xi,1 xi,2 … xi,n]^T represents the fragment Xi. Components of that vector are pre-processed points of the sequence Xi. For example [11]:

x_{i,t} = E_{i-n+t},   (1)
x_{i,t} = E_{i-n+t} / Ē_i,   (2)
x_{i,t} = E_{i-n+t} − Ē_i,   (3)
x_{i,t} = (E_{i-n+t} − Ē_i) / D_i,   (4)

where t = 1, 2, ..., n, Ē_i is the mean value of the points in sequence Xi, and D_i = sqrt(Σ_{j=1..n} (E_{i-n+j} − Ē_i)²). A pattern defined using (1) is a copy of the sequence Xi without processing. Pattern components defined using (2) are the points of the sequence Xi divided by the mean value of this sequence. Patterns (3) are composed of the differences between points and the mean sequence value. Pattern (4) is the normalised vector [Ei-n+1 Ei-n+2 … Ei]^T. All patterns defined using (4) have unity length, mean value equal to zero and the same variance.
The output pattern yi = [yi,1 yi,2 … yi,m]^T, representing the forecasted sequence Yi, has components defined similarly to the x-pattern components:

y_{i,t} = E_{i+t},   (5)
y_{i,t} = E_{i+t} / Ē_i,   (6)
y_{i,t} = E_{i+t} − Ē_i,   (7)
y_{i,t} = (E_{i+t} − Ē_i) / D_i,   (8)

where t = 1, 2, ..., m. In the above formulas (5)-(8), Ē_i and D_i are determined from the sequence Xi, and not from the sequence Yi. This is because the sequence Yi is not known at the moment of forecasting. To determine the forecast of the monthly energy consumption Ei+t on the basis of the forecasted y-pattern generated by the forecasting model, the transformed equations (5)-(8) are used. For example, in the case of (8) the forecasted energy consumption is calculated as follows:

Ê_{i+t} = ŷ_{i,t} · D_i + Ē_i.   (9)

Patterns xi and yi are paired (xi, yi). The set of these pairs determined from the history is used for learning the forecasting model.
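The following short Python sketch is our own illustration of this preprocessing (the function names make_patterns and decode are ours, and 0-based array indexing is assumed); it builds pattern pairs according to definitions (4) and (8) and decodes a forecast with Equation (9):

```python
import numpy as np

def make_patterns(E, n, m):
    """Build (x, y) pattern pairs from a monthly series E using definitions (4) and (8)."""
    pairs = []
    for i in range(n - 1, len(E) - m):
        X = E[i - n + 1:i + 1]                      # preceding fragment X_i (n points)
        Y = E[i + 1:i + 1 + m]                      # forecasted fragment Y_i (m points)
        mean = X.mean()
        disp = np.sqrt(((X - mean) ** 2).sum())     # dispersion D_i of X_i
        x = (X - mean) / disp                       # normalised input pattern, Eq. (4)
        y = (Y - mean) / disp                       # output pattern coded with X_i statistics, Eq. (8)
        pairs.append((x, y, mean, disp))
    return pairs

def decode(y_hat, mean, disp):
    """Equation (9): recover the energy forecast from a predicted y-pattern."""
    return y_hat * disp + mean
```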
Fuzzy nearest neighbour regression
The nearest neighbour estimate m(x) is defined as the weighted average of the y-patterns in a varying neighbourhood of the query x-pattern. Typically, this neighbourhood is defined through the x-patterns which are among the k nearest neighbours of the query pattern [11]. The value of k determines the number of training patterns from which the regression function is constructed and controls the degree of smoothing. The k-NN estimator gives a regression function which is discontinuous. At the points where the set of the nearest neighbours changes, jumps on the function graph are observed. To avoid this inconvenience, a fuzzy membership of the training points to the neighbourhood of the query point was introduced [13]. In this approach, each training point belongs to the query point neighbourhood with a degree depending on the distance between these points.
The regression function m(x) has the nonparametric form:

m(x) = Σ_j w(x, x_j) y_j / Σ_j w(x, x_j),   (10)

where the weighting function w(x, x_j) depends on the similarity or distance between patterns x and x_j. Usually it decreases monotonically with the distance. When using the fuzzy approach, the weighting function has the form of a membership function, e.g. a Gaussian-type function:

w(x, x_j) = exp(−(d(x, x_j)/σ)²),   (11)

where σ is a parameter controlling the width of the function, and d(x, x_j) is the Euclidean distance between patterns x and x_j. Estimator (10) is a linear combination of vectors y_j weighted by the membership degrees (11), which nonlinearly map the distance d(x, x_j). The greater the distance, the lower the weight. The width parameter σ decides about the bias-variance trade-off of the estimator. Too small a value results in under-smoothing, whereas too large a value results in over-smoothing. Thus, the selection of the width parameter is a key problem. In the training procedure the optimal value of σ is selected, as well as the optimal length of the input pattern n. These parameters are searched using the grid search method.
The training set contains pairs of patterns (xi, yi) which are historical for the forecasted sequence, i.e. those for which i = n, n+1, ..., i*−m, where i* is the index of the last month before the forecasted sequence. The forecasting task is to generate the forecasts for months i*+1, i*+2, ..., i*+m.
The forecasting procedure consists of four steps:
1. Pre-processing of the load time series into x- and y-patterns.
2. Calculating the weights for the training x-patterns using membership function (11).
3. Calculating the forecasted y-pattern from (10).
4. Decoding the forecasted y-pattern using the transformed equations (5)-(8) to get the monthly electricity demand for consecutive months i*+1, i*+2, ..., i*+m.
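A compact Python sketch of this estimator and procedure is given below; it is our own illustration (the name fnnr_forecast is ours), and the normalised weighted average is our reading of Equations (10) and (11):

```python
import numpy as np

def fnnr_forecast(x_query, X_train, Y_train, sigma):
    """Fuzzy nearest neighbour regression, Equations (10)-(11)."""
    d = np.linalg.norm(X_train - x_query, axis=1)          # Euclidean distances d(x, x_j)
    w = np.exp(-(d / sigma) ** 2)                          # Gaussian membership degrees (11)
    return (w[:, None] * Y_train).sum(axis=0) / w.sum()    # weighted average of y-patterns (10)
```

Given the pattern pairs built by make_patterns above, the 12-month forecast is then obtained by decoding the returned y-pattern with Equation (9).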
Experimental study
In this section, the proposed FNNR method was applied to forecast the monthly electricity load demand. The results were then compared with results of several reported statistical and machine learning methods for load demand forecasting. Data used in this research were taken from the publicly available ENTSO-E repository (www.entsoe.eu). They included monthly electricity demand for four European countries: Poland (PL), Germany (DE), Spain (ES) and France (FR). The time range of the data was 1998-2015 for PL, and 1991-2015 for the other countries. We constructed the forecasting models for 2015, using data from previous years for model learning. Two variants of forecasting were considered:
• Variant A - a model generated forecasts for all 12 months of 2015 (i* was the index of December 2014, m = 12);
• Variant B - for each month of 2015 a separate model was created which generated a one-step-ahead forecast (12 models created for i* corresponding to December 2014, January 2015, ..., and November 2015; m = 1).
The model parameters, σ and n, were selected using grid search in a leave-one-out cross-validation procedure.
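The parameter selection can be sketched as follows (again our own illustration, reusing the make_patterns, fnnr_forecast and decode helpers from the sketches above; the candidate grids and the exhaustive leave-one-out loop are assumptions made for clarity):

```python
from itertools import product
import numpy as np

def select_parameters(E, sigmas, lengths, m=12):
    """Grid search over (sigma, n) with leave-one-out cross-validation (illustrative)."""
    best, best_err = None, np.inf
    for sigma, n in product(sigmas, lengths):
        pairs = make_patterns(E, n, m)
        errs = []
        for k, (x, y, mean, disp) in enumerate(pairs):               # leave one pair out
            X_tr = np.array([p[0] for j, p in enumerate(pairs) if j != k])
            Y_tr = np.array([p[1] for j, p in enumerate(pairs) if j != k])
            y_hat = fnnr_forecast(x, X_tr, Y_tr, sigma)
            actual, forecast = decode(y, mean, disp), decode(y_hat, mean, disp)
            errs.append(np.mean(np.abs(actual - forecast) / actual) * 100)   # MAPE
        if np.mean(errs) < best_err:
            best, best_err = (sigma, n), np.mean(errs)
    return best, best_err
```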
Tables 1-8 present optimal values of parameters and Mean Absolute Percentage Errors (MAPE) obtained with these parameter values: validation errors (MAPEval) and test errors (MAPEtst for 2015). According to the tables, the selection of the best way of pattern definition seems to be difficult. Results depend on the time series features, such as a trend and the level of random, irregular influences. The optimal x-pattern lengths vary between 8 and 24 depending on the time series and pattern definition. Note that the optimal lengths are rarely equal to the annual cycle length, which is characteristic for these time series. Fig. 2 demonstrates test errors for individual months in both variants, A and B. Note that variant B, which generates one-step-ahead forecasts, does not always provide better results than variant A, in which the forecast horizon is 12 months. Errors for successive months are very varied. This is caused by the significant contribution of the random component in the data.
Examples of the forecasted y-pattern construction are presented in Fig. 3. Grey lines in these figures are the x- and y-patterns from the training set. A darker shade of grey indicates x-patterns which are closest to the query pattern and the y-patterns paired with them. These patterns have a higher value of the membership function (11), and consequently a greater impact on the forecast. The query pattern and the true y-pattern paired with it are drawn with thick solid lines. The forecasted y-pattern is drawn with a dotted line. Moreover, the optimal input pattern lengths are different for different pattern definitions (see Tables 1-8).
In Tables 9 and 10 the results of comparative models are shown: ARIMA, exponential smoothing (ES) and the Nadaraya-Watson estimator (N-WE) [12]. The proposed FNNR model belongs to the same group of nonparametric regression methods as N-WE; thus, the results of both models are similar. When comparing the errors of all models, it can be concluded that FNNR is competitive with the other models, but it should also be noted that the classical ES model outperformed all other models in six of eight cases.
Conclusion
This work proposes a practical methodology to forecast the monthly electric energy consumption using fuzzy nearest neighbour regression. This model is based on the assumption that the similarity of the input patterns implies the similarity of the output patterns paired with them. The patterns representing time series fragments are the key element of this approach. They unify data, reduce nonstationarity and filter out the trend. The main advantages of the model are the simple and understandable principle of operation and only two parameters to estimate: the length of the input pattern and the width of the membership function. Models with fewer parameters have better generalisability and do not require complex learning procedures.
We demonstrate the effectiveness of our approach on real-world data. Compared with commonly used methods, such as ARIMA and exponential smoothing, the proposed model results in similar errors on average. Better performance of the model is observed for more regular time series with a lower noise component and a stable relationship between input and output patterns. The factors which decrease this stability are the nonlinear trend and heteroscedasticity of the time series.
Table 1. Results for PL, variant A.
Table 2. Results for DE, variant A.
Table 3. Results for ES, variant A.
Table 4. Results for FR, variant A.
Table 5. Results for PL, variant B.
Table 6. Results for DE, variant B.
Table 7. Results for ES, variant B.
Table 8. Results for FR, variant B.
Table 9. MAPE of the forecasting models, variant A.
Table 10. MAPE of the forecasting models, variant B. | 3,207.4 | 2017-01-01T00:00:00.000 | [
"Computer Science"
] |
On Monotonic Pattern in Periodic Boundary Solutions of Cylindrical and Spherical Kortweg–De Vries–Burgers Equations
We studied, for the Korteweg–de Vries–Burgers equations on cylindrical and spherical waves, the development of a regular profile starting from an equilibrium under a periodic perturbation at the boundary. The regular profile at the vicinity of perturbation looks like a periodical chain of shock fronts with decreasing amplitudes. Further on, the shock fronts become decaying smooth quasi-periodic oscillations. After the oscillations cease, the wave develops as a monotonic convex wave, terminated by a head shock of a constant height and equal velocity. This velocity depends on integral characteristics of the boundary condition and on the spatial dimensions. In this paper explicit asymptotic formulas for the monotonic part, the head shock and a median of the oscillating part are found.
Introduction
The well-known Korteweg-de Vries (KdV)-Burgers equation for flat waves is of the form

u_t = −2uu_x + ε²u_xx + δu_xxx.   (1)
Its cylindrical and spherical analogues are

u_t = −2uu_x + ε²u_xx + δu_xxx − u/(2t)   (2)

and

u_t = −2uu_x + ε²u_xx + δu_xxx − u/t,   (3)

respectively; see [1,2]. The behavior of solutions of the Korteweg-de Vries (KdV) and KdV-Burgers equations has been intensively studied for about fifty years. However, these equations remain subjects of various recent studies, mostly in the case of flat waves in one spatial dimension [3][4][5][6][7]. At the same time, cylindrical and spherical waves have a variety of applications (e.g., waves generated by a downhole vibrator), and have been studied much less.
The case of the boundary conditions u(a, t) = A sin(ωt), u(b, t) = 0 and the related asymptotics are of special interest here. For numerical modeling we use x ∈ [0, b] instead of R+ for appropriately large b.
For the flat wave Burgers equation (δ = 0) the resulting asymptotic profile looks like a periodical chain of shock fronts with a decreasing amplitude (weak breaks or sawtooth waves). If dispersion is non-zero, each wavefront ends with high-frequency micro-oscillations. Further from the oscillator, shock fronts become decaying smooth quasiperiodic oscillations. After the oscillations cease, the wave develops as a constant height and velocity shock. It almost coincides with a traveling wave solution (TWS) of the Burgers equation [8,9].
A traveling wave solution is a solution of the form u = u(x + Vt). Such a solution travels with a constant velocity V along the x-axis, unchanged in its form. The well-known examples are solitons for the KdV equation and shock waves for the Burgers equation. For the existence of a TWS for all values of the parameter V it is necessary that an equation has Galilean symmetry.
In the case δ = 0, the Burgers equation has traveling wave solutions vanishing at x → +∞. They are given by an explicit formula [10], which is used below. Our aim is to obtain a similar description of a long-time asymptote for cylindrical and spherical waves with periodic boundary conditions. We demonstrate that, in the case of the above IVBP, the perturbation of the equilibrium state for Equations (2) and (3) ultimately takes a form similar to this shock. This paper is organized as follows. In Section 2, we demonstrate graphs of our numerical experiments for cylindrical/spherical Burgers/KdV-Burgers equations for different combinations to show their common patterns. In particular we demonstrate that, after the oscillations cease, a solution becomes a monotonic convex line terminated by a head shock.
In Section 3, we find symmetries of Equations (2) and (3). No Galilean symmetry is found, so no real TWS exists. The equations are then brought to a conservation law form, which is later used to obtain rough estimates for the median parameters of the solution. This rough estimate becomes exact for constant boundary conditions, and in Section 4 a very close asymptote for the said solution is found in self-similar, or homothetic, form u = u(x/t).
Yet, at the head shock this asymptotic is unsatisfactory. The head shock moves in unchanged form and with numerically equal velocity and amplitude, exactly as the Burgers traveling wave solution does. In Section 5, using a simple combination of a self-similar approximation and the Burgers traveling wave solution, we obtain a compact closed-form approximation. It coincides with the solution in its monotonic part, and this approximation correctly represents the median of the solution in its oscillating part. The quality of the approximation is verified numerically. A connection between the velocity of the solution's head shock and the median value at the start is obtained.
In the section "Conclusions" we formulate main result and discuss the remaining open questions.
Typical Examples
Here we demonstrate typical graphs for cylindrical and spherical Burgers waves (see Figures 1 and 2) and for cylindrical and spherical KdV-Burgers (Figures 3 and 4).
We obtained these graphs using the Maple PDETools package. The mode of operation used was the default Euler method, which is a centered implicit scheme.
The solution usually starts with a periodical chain of shock fronts with decreasing amplitudes (sawtooth waves). This weak-break/sawtooth profile is inherent to periodic waves in dissipative media. Sawtooth waves, their decay, amplitudes, width, etc., were intensively studied in the 1970s (see [1,2]) and later. One can also see a common pattern, previously not described, emerging in these figures. After the decay of initial oscillations, the graphs become monotonic declining convex lines, terminated by a shock. Recall that for flat waves this monotonic part almost coincides with a constant-height traveling wave solution of the Burgers equation [7]. The new feature of convex declining lines is caused by the space divergence. We obtain an analytical description of this pattern below.
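To make the numerical setup concrete, a minimal Python finite-difference sketch is given below. It is not the authors' code (the paper used Maple's PDETools with its default centered implicit scheme); the explicit time stepping, grid sizes and parameter values are our own illustrative choices, and such an explicit scheme needs a very small time step to remain stable. With delta = 0 it corresponds to the cylindrical/spherical Burgers runs of Figures 1 and 2; the geometric term follows the reconstructed Equations (2) and (3).

```python
import numpy as np

def kdvb_solve(n_geom=0.5, eps2=0.1, delta=0.0, A=1.0, omega=2*np.pi,
               b=50.0, nx=2001, t0=1.0, t_end=10.0, dt=1e-4):
    """Explicit time stepping for u_t = -2*u*u_x + eps2*u_xx + delta*u_xxx - n_geom*u/t,
    with u(0, t) = A*sin(omega*t) and u(b, t) = 0, starting from the equilibrium u = 0.
    n_geom = 0 (flat), 1/2 (cylindrical), 1 (spherical); delta = 0 gives the Burgers case."""
    x = np.linspace(0.0, b, nx)
    dx = x[1] - x[0]
    u = np.zeros(nx)
    t = t0                                     # start at t0 > 0 to avoid the 1/t singularity
    while t < t_end:
        ux = np.gradient(u, dx)
        uxx = np.gradient(ux, dx)
        uxxx = np.gradient(uxx, dx)
        u = u + dt * (-2.0*u*ux + eps2*uxx + delta*uxxx - n_geom*u/t)
        t += dt
        u[0], u[-1] = A*np.sin(omega*t), 0.0   # boundary conditions at x = 0 and x = b
    return x, u
```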
Symmetries
Since the cylindrical and spherical equations explicitly depend on time, their stock of symmetries is scarce. For the algorithm of symmetry calculations, see [11]. We found that the algebras of classical symmetries are generated by vector fields, denoted below by X, Y, Z and W. This list does not contain the Galilean symmetry, so no real traveling wave solution exists.
In particular, the symmetry algebra for:
• the cylindrical Burgers equation is generated by X, Y, Z;
• the cylindrical KdV-Burgers equation is generated by X, Z;
• the spherical Burgers equation is generated by X, Y, W;
• the spherical KdV-Burgers equation is generated by X, W.
Conservation Laws
First rewrite Equations (1)-(3) in an appropriate conservation law form,

u_t + (u² − ε²u_x − δu_xx)_x + n·u/t = 0,

where n = 0, 1/2, 1 for the flat, cylindrical and spherical cases, respectively. Hence, for solutions of the above equations, integrating over [0, L] and bearing in mind the initial value/boundary conditions u(x, 0) = u(+∞, t) = 0, for L = +∞ the integrals read

d/dt ∫₀^∞ u dx + (n/t) ∫₀^∞ u dx = u²(0, t) − ε²u_x(0, t) − δu_xx(0, t).   (10)

The right-hand side of Equation (10) can be computed in some simple cases or estimated. For instance, assume that ε²u_x(0, t) + δu_xx(0, t) is negligible compared to u²(0, t). Then the right-hand side reduces to u²(0, t), and an estimate for the mean value of the solution follows. Another example of exact estimation of the right-hand side of Equation (10) is the case of constant boundary conditions. Consider the boundary condition u(0, t) = M. The graphs of the solution are shown in Figure 5, left (compare their rates of decay, caused solely by the spatial dimensions). For the resulting compression wave u_x(0, t) = 0, so the right-hand side of Equation (10) equals M² − δu_xx(0, t). As Figures 1-4 show, for a periodic boundary condition, after the decay of initial oscillations, the graphs become monotonic convex lines. These convex lines break at x = V·T and at the height V. These monotonic lines are similar to the graphs of constant-boundary solutions; see Figure 5.
Self-Similar Approximations To Solutions
By observing the solution's graphs, one can clearly see (e.g., in Figure 5, right) that the monotonic part and its head shock develop as a homothetic transformation of the initial configuration (with t as the homothety parameter). Hence, we seek solutions in the self-similar form u(x, t) = y(x/t). Substituting it into Equations (1)-(3) and denoting ξ = x/t, we get the equation

−ξy′ + ny = 2yy′ + (ε²/t)y″ + (δ/t²)y‴

for y = y(ξ) and n = 0, 1/2, 1. For sufficiently large t we may omit the last two terms. It follows that appropriate solutions of these truncated ordinary differential equations are given by u_1(x, t) = C_1, C_1 ∈ R, n = 0, for the flat wave equation, and by explicit solutions u_2 and u_3 for the cylindrical and spherical cases. (The Lambert W function, also called the omega function or product logarithm, is a multivalued function, namely the branches of the inverse relation of the function f(w) = we^w, where w is any complex number.
For each integer k there is one branch, denoted by W_k(z), which is a complex-valued function of one complex argument. W_0 is known as the principal branch. When dealing with real numbers, the W_0 = LambertW function satisfies LambertW(x) · e^{LambertW(x)} = x. The Lambert W function, introduced in 1758, has numerous applications in solving equations, mathematical physics, statistics, etc.; for more detail, see [12].) Let V be the velocity of the signal propagation in the medium. Since at the head shock we have x = Vt and u = V, we obtain the condition for finding the constants C_i: it is y(V) = V. For flat waves, the resulting solution corresponds to a traveling wave solution of the classical Burgers equation.
For the cylindrical waves and for the spherical waves, the monotonic part is given by the corresponding explicit formulas above. These formulas show that the velocity is proportional to the value of a constant boundary solution at x = 0.
The corresponding graphs visually coincide with the graphs obtained by numerical modeling; for instance, see a comparison to the solution (at t = 100) for the problem in Figure 6, left.
Median Approximation
Yet, the monotonic part of the periodic boundary solution ends with a breaking, which travels with a constant velocity and amplitude, very much like the head of the Burgers traveling wave solution (Equation (6)). A rather natural idea is to truncate a self-similar solution, multiplying it by a (normalized) formula for the Burgers TWS. Namely:
• for the cylindrical waves, take ũ_2 (the self-similar solution u_2 multiplied by the normalized TWS factor);
• for the spherical waves, take ũ_3 (defined analogously from u_3).
This construction produces an approximation of astonishing accuracy (see Figure 6, right, and Figure 7); these graphs correspond to the spherical KdV-Burgers problem (it comes from Equation (3) after the change x → −x). Moreover, it is evident that the graphs of ũ_2, ũ_3 neatly represent the median lines of the approximated solutions over their whole ranges. By median we mean M(x) = (2πn/ω)^{-1} ∫_0^{2πn/ω} u(x, t) dt, n ∈ N, n ≫ 1 (for u(0, t) = sin ωt).
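As a small numerical check of this median, M(x) can be approximated by time-averaging stored snapshots of a computed solution over a whole number of boundary periods; the sketch below is our own illustration (the snapshot storage and the averaging window are assumptions):

```python
import numpy as np

def median_profile(u_snapshots, times, omega, n_periods=5):
    """Approximate M(x) = (2*pi*n/omega)^(-1) * integral over [0, 2*pi*n/omega] of u(x, t) dt
    by averaging stored solution snapshots u(x, t_k) over n_periods boundary periods."""
    T = 2 * np.pi * n_periods / omega
    times = np.asarray(times)
    mask = times <= times[0] + T                     # snapshots inside the averaging window
    U = np.asarray(u_snapshots)[mask]
    return np.trapz(U, times[mask], axis=0) / (times[mask][-1] - times[mask][0])
```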
Let us assess the quality of the ũ_2 and ũ_3 approximations numerically. Evaluate the trapezoid area under the ũ_2 and ũ_3 graphs, for the cylindrical and for the spherical equation, respectively. Hence, the mean value of the left-hand side of (10) can be estimated as follows.
This mean value can be also evaluated numerically. In the case illustrated by Figure 1 the direct numerical evaluation of the integral differs from the estimation (20) by 1%. It confirms the quality of the approximation.
For constant-boundary waves, the relation between M and V follows from Equation (12); of course, this result coincides with Equation (15). Hence, the mean value M of an arbitrary solution at the start of oscillations (or in a vicinity of the oscillator) is linearly linked to the velocity of the head shock. However, to find this mean value for an arbitrary border condition is a tricky task, because the integrands u_x and u_xx of the right-hand side of Equation (10) have numerous breaks. Still, one may get an (admittedly rough) estimation for M using Equations (11) and (21) for the flat, cylindrical and spherical cases. In all these cases it results in M ≈ A·√2/2 ≈ 0.71A (since the time average of A²sin²(ωt) is A²/2, the effective constant boundary value is A/√2). Numerical experiments also show (e.g., see Figure 3) that for the u|_{x=0} = A sin(t) boundary condition such a value is M ≈ A·a, where a ≈ 0.467 is the mean value for the 1·sin(t) condition. That is, M depends on A almost linearly.
Note that this value may be obtained via the velocity V of the head shock, which, in turn, can be measured with great accuracy by the distance passed by the head shock after a sufficiently long time.
Conclusions
In this paper, we studied the pattern formation in periodic boundary solutions of spherical and cylindrical KdV-Burgers equations. Such a solution usually starts with a periodical chain of shock fronts with a decreasing amplitude. When oscillations decay and cease, a solution proceeds as a monotonic convex line that ends with a head shock. This last pattern was not described previously and it is the main subject of the paper.
We obtained simple explicit formulas describing the monotonic part of the solution and its head break. These approximate formulas have great accuracy. Moreover, their graphs neatly represent the median lines of the approximated solutions on their entire ranges. (By median line we mean the level around which the periodical oscillations occur).
To obtain these approximations we used self-similar solutions of the dissipationless and dispersionless KdV-Burgers equation and a traveling wave solution of the flat Burgers equation. Formulas depend on only one parameter: either on the velocity of the signal propagation or on the median value of the solution in the vicinity of the periodic boundary.
Some open questions remain. Our approximations are very good for the one-parameter class of constant boundary solutions. The existence of a one-parameter family of solutions points to the existence of a suitable symmetry, but the classical symmetry analysis was, so far, unhelpful. Conservation laws allows us to assess the value of the approximation's parameter using the boundary condition, but the resulting estimation is rough.
Conflicts of Interest:
The author declares no conflict of interest. | 3,182.2 | 2021-01-29T00:00:00.000 | [
"Mathematics"
] |
An Automatic Exposure Method of Plane Array Remote Sensing Image Based on Two-Dimensional Entropy
The improper setting of exposure time for the space camera will cause serious image quality degradation (overexposure or underexposure) in the imaging process. In order to solve the problem of insufficient utilization of the camera’s dynamic range to obtain high-quality original images, an automatic exposure method for plane array remote sensing images based on two-dimensional entropy is proposed. First, a two-dimensional entropy-based image exposure quality evaluation model is proposed. The two-dimensional entropy matrix of the image is partitioned to distinguish the saturated areas (region of overexposure and underexposure) and the unsaturated areas (region of propitious exposure) from the original image. The ratio of the saturated area is used as an evaluating indicator of image exposure quality, which is more sensitive to the brightness, edges, information volume, and signal-to-noise ratio of the image. Then, the cubic spline interpolation method is applied to fit the exposure quality curve to efficiently improve the camera’s exposure accuracy. A series of experiments have been carried out for different targets in different environments using the existing imaging system to verify the superiority and robustness of the proposed method. Compared with the conventional automatic exposure method, the signal-to-noise ratio of the image obtained by the proposed algorithm is increased by at least 1.6730 dB, and the number of saturated pixels is reduced to at least 2.568%. The method is significant to improve the on-orbit autonomous operating capability and on-orbit application efficiency of space camera.
Introduction
With the rapid development of space remote sensing technology, the strong demand for satellite responsiveness and imaging quality is increasing [1]. The imaging parameters of a space camera are usually determined by the satellite earth station according to factors such as the solar altitude angle α, the ground object reflectivity ρ, and the weather at the target location before a conventional space camera system performs a shooting task. When the satellite passes overhead, the parameters are uploaded to the satellite system through an available satellite earth station. When the satellite reaches the target location, the imaging parameters are used to set the operative condition of the space camera to detect and collect the terrain object information. The a priori model is based on ideal conditions and often cannot produce ideal results in the actual imaging process.
Space cameras differ from ordinary cameras. They have a high dynamic range (HDR) and a large shooting area, and the remote sensing images they obtain contain a large amount of data and rich detail, all of which place higher demands on the exposure quality of the space camera [2,3]. Current conventional automatic exposure methods can be divided into three categories: determination of imaging parameters based on a prior model, image fusion, and determination of imaging parameters based on image statistics.
The first category uses a priori knowledge of the scene to set imaging parameters. Wang et al. proposed a method for predicting the optimal parameter combination, i.e., inputting empirical target refraction and image digital number (DN) into a radiative transfer model to output the expected target radiance, thus obtaining an expected absolute radiometric coefficient before launch, which can help with the rational use of relevant earth observation cameras [4]. Cao et al. proposed a method for autonomous imaging parameter adjustment based on solar elevation derived from remote sensing theory [5]. The factors that influence imaging quality and the relationship between apparent radiance and solar elevation are first discussed. Then, the integration time change under different roll angles in one orbital period and the internal links among solar elevation, roll angle, integral grade, and gain are analyzed. Finally, the best grading strategy is obtained and a two-dimensional lookup table that can be used for autonomous imaging parameter adjustment is built. The adaptability of this approach is poor: if the weather conditions of the target imaging area change while the camera system still consults the table built for the original parameter state, the acquired image data may not achieve the anticipated imaging effect, and the imaging task may even fail. The main disadvantage is that the camera system cannot adjust the imaging parameters adaptively based on the actual image data acquired.
The image fusion approach does not require adjusting the camera imaging parameters; it only needs to acquire multiple images. Traditional exposure fusion methods generate HDR images from a set of weight maps of low dynamic range (LDR) images with different exposures [6]. There is a variety of image fusion algorithms, from simple weighted averaging to complex methods based on advanced statistical image models. Ying et al. first used illumination estimation to calculate a weight matrix for the image, and then used the camera's response model to synthesize multiple exposure images to obtain the optimal exposure rate [7]. This method performs well in the underexposed areas but badly in the overexposed areas of the original image. Mertens et al. used three quality indices of brightness, contrast, and saturation to determine the contribution of a given pixel to the final composite image. The fused image has high contrast, but still cannot display the details of the brightest and darkest areas of the scene [8]. Effective image fusion methods require multiple pre-acquired images of different dynamic ranges [9], and their performance for targets under extreme lighting conditions is poor.
The third category establishes mathematical relationships between image quality indices and imaging parameters through the analysis of image statistics, and thereby adjusts the camera's imaging parameters such as shutter speed, aperture size, exposure time, and gain. The adjustment of shutter speed and aperture size usually depends on a high-precision mechanical structure, so the adjustable imaging parameters of the plane array space camera are usually exposure time and gain.
The simplest method is to measure the average gray value of the entire image or a specific area, and adjust the imaging parameters of the camera so that the average gray value of the image equals half of the camera's dynamic range [10,11]. Kuno et al. calculated the average brightness of the entire image when analyzing image brightness, and used it to indicate the brightness level of the whole image [12]. A target brightness is then set, and the imaging parameters are adjusted so that the brightness level of the image gradually approaches the target brightness. Methods based on the average gray value of the image can cause large areas of overexposure and underexposure at the same time, resulting in the loss of a large amount of image detail.
Later scholars used more advanced image quality evaluation indices such as the image gray histogram, one-dimensional entropy, and gradient to obtain the optimally exposed image. Montalvo et al. extracted the histogram of the region of interest from the histograms of the R and G channels of the RGB image, and then used the brightness of this area as the reference brightness to adjust the imaging parameters of the camera by histogram matching [13]. Torres et al. shifted the grayscale histogram of the image to a specified range by adjusting the imaging parameters of the camera, which can avoid overexposure or underexposure of the image to a certain extent [14]. Rahman et al. and Lu et al. proposed automatic exposure methods that use the maximum image entropy as the image quality evaluation index [15,16]. Their research shows that the entropy changes with the exposure parameters of the imaging system, and the imaging parameter corresponding to the largest entropy value is taken as the optimal imaging parameter. Zhang et al. proposed an active exposure control method to maximize the gradient information in the image [17]. They analyzed the derivative of the squared gradient and the photometric response function, and measured the change of the gradient with the exposure time to determine the optimal exposure time. Shim et al. also used the gradient information in the image to obtain an appropriate exposure time [18,19]. They define an information metric based on the magnitude of the gradient at each pixel, simulate exposure changes by applying different gamma corrections to the original image to find the gamma value that maximizes the gradient information, and then adjust the exposure time according to that gamma value. These algorithms do not perform well when both image detail and brightness must be considered.
Recently, Kim et al. proposed a new exposure quality evaluation method in which the entropy-weighted gradient of the image is used as the image quality evaluation index [20,21]. They used this index to obtain the optimal exposure time of the camera. The entropy matrix partitioning threshold is a fixed value in their method, because the shooting area is small with a large target, and it works better on scenes with little detail. Remote sensing images are rich in detail, and a target is often only a few pixels in size, so a fixed threshold can lead to massive loss of image detail. Each of these methods has advantages and disadvantages.
To address the problems above, this paper presents research on automatic exposure methods for plane array remote sensing images. A new image exposure quality evaluation model for remote sensing and an optimal exposure time determination method are proposed. We conducted experiments under different conditions to verify the robustness of the method. Experimental results show that, compared with current algorithms, the proposed algorithm is more sensitive to image details, brightness, information, and signal-to-noise ratio, which meets the quality requirements of remote sensing images.
The rest of the paper is organized as follows: the automatic exposure method for plane array remote sensing images based on two-dimensional entropy is proposed in Section 2; in Section 3, the proposed algorithm is experimentally compared with other algorithms and related discussions are carried out; Section 4 presents the conclusions of the paper.
Proposed Algorithm
The adjustable imaging parameters of the plane array space camera are usually exposure time and gain. This paper mainly studies the relationship between the exposure quality and the exposure time of the space camera, so as to obtain the optimal exposure time during imaging. As in previous research [20,21], and considering the sensitivity of two-dimensional entropy to image brightness, edges, and information, the two-dimensional entropy of the image is used as the starting point. Unlike previous studies, an adaptive two-dimensional entropy matrix partitioning threshold based on the maximum weighted variance is proposed, which is used to distinguish the saturated and unsaturated regions of the image. Then, the cubic spline interpolation method is introduced to calculate the optimal exposure time. The proposed algorithm takes the proportion of the saturated area in the image as its measure. With good sensitivity to image brightness, edges, information, and signal-to-noise ratio, it also minimizes the overexposed and underexposed areas in the image. The proposed algorithm adapts well to remote sensing imaging and provides useful guidance for obtaining high-quality original remote sensing images with a space camera.
It can be clearly seen from Figure 1 that the entropy value of the saturated area of the image is small, while the entropy value of the unsaturated area is larger. The image segmentation as shown in Figure 1d can be achieved by getting an appropriate entropy threshold.
The overall architecture of the algorithm is shown in Figure 2. The method consists of two modules: an exposure quality evaluation module and an exposure curve fitting module. First, multiple images are acquired with an exposure time step λ; then, the exposure quality of the collected images is calculated; next, curve fitting is performed on the exposure quality of these images; finally, the optimal exposure time is obtained from the fitted curve.
Two-Dimensional Entropy of Image
The one-dimensional entropy of an image, as an information feature, cannot capture the spatial distribution of pixels well. The advantage of using two-dimensional entropy as an image saturation measure is that it is more sensitive to features such as brightness, edges, and information. In remote sensing images, the target areas exhibit certain grayscale variations, while the gray levels of pixels in the overexposed and underexposed areas are basically uniform, which provides a good prerequisite for using two-dimensional entropy as a saturation measure.
Following the definition of Shannon entropy [22], the two-dimensional entropy H_{i,j} of the pixel I_{i,j} is computed from P_{GS_{i,j}}, where GS_{i,j} denotes the gray level of pixel I_{i,j} and P_{GS_{i,j}} denotes the probability of that gray level within the 9 × 9 neighborhood of I_{i,j}.
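As a minimal sketch, assuming the two-dimensional entropy of a pixel is the Shannon entropy of the gray-level distribution in its 9 × 9 neighborhood (one plausible reading of the definition above), the entropy matrix H can be computed as follows; the function name, gray-level range, and test image are illustrative.

```python
import numpy as np

def local_entropy(img, win=9, levels=256):
    """Per-pixel Shannon entropy of gray levels in a win x win neighborhood.

    Straightforward (unoptimized) sketch of an entropy matrix H; edge pixels
    use a reflected border so every pixel has a full neighborhood.
    """
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    H = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            counts = np.bincount(patch.ravel(), minlength=levels)
            p = counts[counts > 0] / counts.sum()
            H[i, j] = -np.sum(p * np.log2(p))
    return H

# Example: entropy matrix of a random 8-bit test image
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
H = local_entropy(img)
```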
Entropy Matrix Partitioning Threshold
The original image can be divided into region of saturation (ROS) and region of unsaturation (ROUS) by choosing an appropriate threshold to partition the two-dimensional entropy matrix because of the above-mentioned characteristics of the two-dimensional entropy matrix. The following method is adopted to obtain the partitioning threshold of the two-dimensional entropy matrix.
Assuming the threshold is th, when H_{i,j} < th the element is assigned to the ROS entropy matrix as H^{ROS}_{i,j}; conversely, when H_{i,j} > th it is assigned to the ROUS entropy matrix as H^{ROUS}_{i,j}. The variances of the two-dimensional entropy matrix over the saturated and unsaturated regions are calculated as

σ²_ROS = (1/N_ROS) Σ_{(i,j)∈ROS} (H^{ROS}_{i,j} − mean(H_ROS))²,  (2)
σ²_ROUS = (1/N_ROUS) Σ_{(i,j)∈ROUS} (H^{ROUS}_{i,j} − mean(H_ROUS))²,  (3)

where σ²_ROS and σ²_ROUS denote the variances of the entropy matrix over the saturated and unsaturated regions, N_ROS and N_ROUS denote the numbers of elements in those regions, mean(H_ROS) and mean(H_ROUS) denote the mean values of the entropy matrix elements of the saturated and unsaturated regions, and H^{ROS}_{i,j} and H^{ROUS}_{i,j} denote the elements of the saturated and unsaturated regions in the matrix H.
From (2) and (3), the total weighted variance of the two-dimensional entropy matrix H is

σ²_H(th) = p_ROS σ²_ROS + p_ROUS σ²_ROUS,  (4)

where p_ROS and p_ROUS denote the proportions of ROS and ROUS elements in the whole image when the threshold is th. Substituting (2) and (3) into (4) expresses σ²_H directly in terms of the entropy matrix elements. A set of candidate thresholds th_k covering the range of H is then enumerated; each th_k is evaluated and the corresponding σ²_H is calculated. The th_k at which σ²_H attains its maximum value is taken as the optimal threshold, th_optimal = arg max_{th_k} σ²_H(th_k).
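A minimal sketch of this threshold search, assuming the objective is the weighted variance p_ROS·σ²_ROS + p_ROUS·σ²_ROUS evaluated over a uniform grid of candidate thresholds; the grid size and function name are illustrative.

```python
import numpy as np

def optimal_entropy_threshold(H, n_candidates=100):
    """Pick the threshold th maximizing the total weighted variance
    p_ROS * var(H_ROS) + p_ROUS * var(H_ROUS) over a grid of candidates."""
    candidates = np.linspace(H.min(), H.max(), n_candidates + 2)[1:-1]
    best_th, best_score = None, -np.inf
    for th in candidates:
        ros = H[H < th]           # saturated-region entropy values
        rous = H[H >= th]         # unsaturated-region entropy values
        if ros.size == 0 or rous.size == 0:
            continue
        p_ros = ros.size / H.size
        p_rous = rous.size / H.size
        score = p_ros * ros.var() + p_rous * rous.var()
        if score > best_score:
            best_th, best_score = th, score
    return best_th
```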
Ratio of Saturated Area
Based on the results obtained in Section 2.2, a binary mask is defined to partition the two-dimensional entropy matrix and distinguish ROS from ROUS: Mask_{i,j} = 1 if H_{i,j} < th_optimal (ROS), and Mask_{i,j} = 0 otherwise (ROUS). According to the mask matrix Mask_{i,j}, elements of the ROS area take the value 1 and elements of ROUS take the value 0. The proportion of ROS elements in the image is

S = (1 / (m × n)) Σ_{i,j} Mask_{i,j},

where m and n denote the image size. Clearly, a smaller S value means a lower ROS ratio and better image exposure quality, while a larger S value means a higher ROS ratio and worse exposure quality. In the remainder of this paper, the S value is used as the image exposure quality evaluation standard and as the basis for adjusting the camera's exposure time.
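The mask and the saturated-area ratio S follow directly from the definitions above; a minimal sketch (function name illustrative):

```python
import numpy as np

def saturation_ratio(H, th):
    """Ratio S of saturated-region elements (H < th) in the image."""
    mask = (H < th).astype(int)   # 1 for ROS, 0 for ROUS
    return mask.mean()            # equals sum(mask) / (m * n)
```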
Exposure Quality Curve Fitting
For a certain scene, the plane array space camera can continuously expose the specified target when it is in the staring imaging mode. When shooting with an equal step of exposure time, the corresponding S value can be calculated for each image. We use the exposure time as the abscissa and the S value as the ordinate for curve fitting. Choosing a suitable curve fitting algorithm is the key to accurately determine the optimal exposure time. In this paper, the cubic spline interpolation method is used to fit the discrete points into a curve to show the relationship between the image exposure quality and the exposure time, so that the exposure time can be adjusted more accurately and the image with higher exposure quality can be obtained.
Cubic spline interpolation fits a smooth curve through a series of sample points. Mathematically, it obtains a set of piecewise curve functions by solving the three-moment equations. Because of its small computational cost and low complexity, cubic spline interpolation is widely used in many fields. The cubic spline relating exposure time to image exposure quality can be written piecewise as

S_i(t) = a_i + b_i (t − t_i) + c_i (t − t_i)² + d_i (t − t_i)³,  t ∈ [t_i, t_{i+1}],  (10)

where t is the exposure time, t_i is the abscissa of the i-th sample point, a_i, b_i, c_i, and d_i are coefficients to be determined, and S_i(t) is the exposure quality corresponding to the exposure time t. Unlike conventional cubic spline interpolation, the boundary condition of the spline curve in this paper is the not-a-knot condition [23], which requires the third derivative of the spline to be continuous at the second and the second-to-last sample points.
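SciPy's CubicSpline uses the not-a-knot boundary condition by default, so the exposure quality curve can be fitted and resampled at the finer step λ_min to locate a candidate optimal exposure time; the sample values below are illustrative, not measurements from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Exposure times (ms) and measured saturated-area ratios S (illustrative values)
t_samples = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0, 140.0, 160.0])
S_samples = np.array([0.31, 0.18, 0.09, 0.06, 0.08, 0.14, 0.22, 0.30])

# Not-a-knot is the default boundary condition for CubicSpline
spline = CubicSpline(t_samples, S_samples, bc_type="not-a-knot")

# Resample the fitted curve at the finer step lambda_min to locate the minimum S
lam_min = 1.0
t_fine = np.arange(t_samples[0], t_samples[-1] + lam_min, lam_min)
t_candidate = t_fine[np.argmin(spline(t_fine))]
print(f"candidate optimal exposure time: {t_candidate:.0f} ms")
```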
Fitting the exposure quality curve with cubic spline interpolation under not-a-knot boundary conditions has the following advantages: (1) interpolation can still be performed when there are few sample points; (2) there are no excessive errors at the start and end points of the sample; (3) it can characterize the change of exposure quality with exposure time at different exposure time steps; (4) the second-order smoothness of the curve conforms to the gradual variation of the exposure quality with the exposure time.
Rules for Determining the Optimal Exposure Time
Generally, the exposure quality curve of the images is concave, so the optimally exposed image corresponds to the lowest point of the curve. When the required accuracy of the target exposure time is λ_min, continuous exposures are taken with a step λ = 2λ_min to obtain multiple sample points, and the optimal exposure time can then be extracted from the fitted curve. For some shooting scenes, however, the relationship between image exposure quality and exposure time does not strictly satisfy a cubic spline function. Therefore, the following rule for determining the optimal exposure time is adopted to avoid such errors: the exposure time at the lowest point of the fitted curve (sampled with step λ_min) is accepted as t_optimal only if the S value of the actual image taken with that exposure time, S_a(t), does not exceed the minimum S value of the sample points; otherwise, the exposure time of the minimum sample point is taken as t_optimal. Here, t_optimal denotes the optimal exposure time of the scene, minS(t) denotes the minimum value obtained by sampling the curve with step λ_min, minSP(t) denotes the minimum value over the sample points, and S_a(t) denotes the S value of the actual image obtained by configuring the camera with the exposure time corresponding to the lowest point of the curve.
Verification and Analysis
In order to verify the algorithm proposed, we conducted multiple sets of experiments to simulate low ground reflectivity and high ground reflectivity in different scenes. First, the image exposure quality evaluation model proposed in this paper is verified; then, the performance of the cubic spline interpolation curve in improving the exposure accuracy of the imaging system and improving the image exposure quality is verified. In the experiment, the camera's default method (CDM), average gray level method (AGLM) [12], one-dimensional entropy method (ODEM) [15], entropy weighted gradient method (EWGM) [21], gradient-based method (GBM) [19], etc. are used for comparison, which proves the superiority of the proposed method.
The camera used in the experiment is acA1300-60gm from Basler. The parameter indices are shown in Table 1. In the experiment, the relative aperture value of the camera is 1:1.4 and the image size is 1280 × 1024.
Experiment under Bright Conditions
With an exposure time step λ_1 = 20 ms under bright conditions, eight consecutive shots of target 1 were taken. The exposure quality is highest when the exposure time is 41 ms. The optimal exposure images obtained by the other algorithms are also shown in Figure 3. Figure 3 shows that the default automatic exposure algorithm of the camera behaves similarly to the average gray level method: both have certain advantages in the overall brightness of the image, but the overexposed area in the image is not effectively limited. The one-dimensional entropy method focuses on the overall information of the image. The result of the gradient-based method is consistent with the one-dimensional entropy method; the contrast of the image is relatively strong, but the overexposure phenomenon also remains. The entropy-weighted gradient method comprehensively considers the amount of information and the edge details of the image, which suppresses overexposure and underexposure to a certain extent, but information loss in some areas is still unavoidable. The optimal exposure image obtained by the proposed algorithm has weaker contrast, but the blue part of its entropy matrix is the smallest; in other words, the overexposed and underexposed areas in the entire image are minimized, so the image details are preserved to the maximum extent.
Some other objective indices were used to compare the optimal exposure images of each method, as shown in Table 2. Table 2 shows that the optimal exposure image taken by the proposed algorithm has a small average gradient, but the number of saturated pixels is much lower than that of the other algorithms, and the signal-to-noise ratio is increased by at least 2.4031 dB. The average gradient used in this paper is computed from the image gradients, and the SNR is computed from I_max, I_min, and σ, where I_max and I_min denote the maximum and minimum gray levels of the image, respectively, and σ denotes the standard deviation of the image. To further improve the exposure accuracy and the exposure quality, curve fitting was performed on the S values of the obtained images, and the fitted curve was sampled with a step of 10 ms. The resulting curve is shown in Figure 4. It can be seen from the curve that the optimal exposure time is 41 ms. Actual shooting confirms that the quality of the image taken with an exposure time of 41 ms is better than that at 31 ms or 51 ms. The optimal exposure time is therefore determined accurately, which verifies the accuracy of the proposed algorithm.
Experiment under Dark Conditions
With an exposure time step λ_2 = 50 ms under dark conditions, eight consecutive shots of target 2 were taken. The exposure quality is highest when the exposure time is 205 ms. The optimal exposure images obtained by the other algorithms are shown in Figure 5. Figure 5 shows that the experimental results are consistent with those under bright conditions and reflect the superiority of the algorithm even more intuitively. The camera's default automatic exposure algorithm is similar to the average gray scale method: it has certain advantages in the overall brightness of the image, but the overexposed area in the image is not limited. The one-dimensional entropy method focuses on the overall information of the image. The result of the gradient-based method is consistent with the one-dimensional entropy method; the contrast of the image is relatively strong, but overexposure is again not limited. The entropy-weighted gradient method comprehensively considers the amount of information and the edge details of the image, which suppresses overexposure and underexposure to a certain extent, but information loss in some areas is still unavoidable. The entropy matrix of the optimal exposure image obtained by the proposed algorithm has the least blue part and the smallest saturated area, so the detailed information of the marked area, and even of the whole image, is clearly shown.
Some other objective indices were used to compare the optimal exposure images obtained by each method. As shown in Table 3, the optimal exposure image taken by the proposed algorithm in this scene has the smallest average gradient, but the number of saturated pixels is much lower than that of the other algorithms, and the signal-to-noise ratio is improved by at least 1.6730 dB. According to the fitted curve, the optimal exposure time is 180 ms. However, the actual quality of the image taken with an exposure time of 180 ms is worse than that taken at 205 ms. According to the rules for determining the optimal exposure time defined in Section 2.5, the optimal exposure time in this scene is therefore 205 ms, which verifies the necessity of the rules in Section 2.5.
In order to further improve the exposure accuracy and improve the exposure quality, we fit the S value of the images obtained by the proposed algorithm to a curve, and sample it with λ 2 = 25 ms. The resulting curve is shown in Figure 6.
Experiment Results Analysis
The above experimental results show that there are partial overexposure and underexposure in the images taken by the conventional automatic exposure algorithm, resulting in the loss of information of the image. The algorithm proposed can effectively solve this problem. Compared with several other algorithms, the algorithm proposed has better subjective and objective consistency. Subjectively, it can minimize the overexposed and underexposed areas in the image under the premise of making full use of the camera's dynamic range, so as to maximize the detailed information of the image. Objectively, although the average gradient of the image is relatively smaller compared with other methods, the number of saturated pixels in the image is the least and the signal-to-noise ratio is the highest, which exactly meets the quality requirements of remote sensing images.
During the experiments, it was found that other imaging parameters of the camera, such as the focal length, can affect the accuracy of the algorithm. In one of the experiments the camera was out of focus, and the resulting optimal exposure image is blurred, as shown in Figure 7; the overexposed area was not effectively suppressed. Preliminary analysis indicates that the defocus of the camera blurs the image, which in turn makes the two-dimensional entropy matrix of the image inaccurate and affects the segmentation of ROS and ROUS. As a result, the fitted exposure quality curve contains errors that ultimately affect the determination of the optimal exposure time. Future work will address the accurate determination of the optimal exposure time for imaging systems with strong noise.
Conclusions
In summary, a new automatic exposure method for plane array remote sensing images is proposed in this paper to appropriately evaluate the exposure quality of the image and then determine the optimal exposure time more accurately. The experiments conducted show that the theoretical model has advantages with respect to image brightness, edges, information volume, and signal-to-noise ratio, and is more suitable for remote sensing imaging than current methods. The automatic exposure method is designed for plane array space cameras, but it can also be extended to other types of cameras and applied in other engineering fields to make better use of the camera's dynamic range and obtain high-quality original images.
Author Contributions: Conceptualization, T.G., L.Z., and W.X.; methodology, T.G. and L.Z.; writing-original draft preparation, T.G. and L.Z.; writing-review and editing, T.G., L.Z., W.X., Y.P., R.F., T.Z., and X.C.; funding acquisition, W.X. and L.Z. All authors have read and agreed to the published version of the manuscript. Data Availability Statement: All data will be made available on request to the corresponding author's email with appropriate justification.
"Physics"
] |
Bayesian Simultaneous Credible Intervals for Effect Measures from Multiple Markers
Abstract Inference from multiple markers is often encountered in biomedical research, especially when comparing treatments. Most common statistical methods rely on hypothesis testing with a null hypothesis of no effect for each marker and on methods based on closed testing for obtaining the combined p-value. This article proposes the use of simultaneous credible intervals obtained from the joint posterior distribution of effect measures of multiple markers for combined inference. The advantage of this method is that it estimates the actual effect sizes and also allows different types of effect measures for different markers. We used equal tailed intervals and highest posterior density regions for the effect measures of multiple markers. Simulation studies were carried out to examine the frequentist properties of these simultaneous credible regions. These studies revealed that inference based on simultaneous intervals can differ from inference based on individual markers. The method is demonstrated through two case studies: (a) an observational study of wet age-related macular degeneration (wet-AMD); and (b) a reanalysis of a randomized controlled trial for treatment of migraine. The wet-AMD observational study was the motivation for the present research.
Introduction
In clinical studies, multiple markers or outcomes are studied frequently, especially when comparing two groups: treatment and control. The term marker will be used for "marker or outcome" in this article. If multiple markers are independent of each other, then the inference is straightforward. However, this is rarely the case in real-world situations. Although the dependence structure of these markers is often not known explicitly, obtaining the combined inference from all the markers is crucial. In observational studies, the interest could be in finding the difference between two groups of baseline measurements, between two groups of existing health conditions, or in comparing two treatments or markers in naturally or artificially paired units, such as the left and right eye. For carrying out combined inference from all the possibly correlated markers, a joint interval estimation method is a natural choice for effect measures derived from multiple markers. The effect measures should also be chosen appropriately for different markers. The most commonly used effect measures for a binary marker are the risk difference, risk ratio, and odds ratio.
Most of the frequentist methods dealing with multiple markers depend on multiple hypothesis testing. When dealing with multiple markers in the frequentist approach, if the joint distribution of all the parameters of interest is not available in closed form, one has to rely on the Bonferroni adjustment (Holm 1979; Simes 1986; Hochberg 1988; Hommel, Bretz, and Maurer 2007; Huque et al. 2010). Under the Bonferroni adjustment, equal Type I error is assigned to all the markers, while other modifications of the Bonferroni method suggest allocating the Type I error to selected markers based on a domain expert's opinion or on the p-value ordering. When multiple markers are assessed with various effect measures, the marginal p-values are obtained first, and combined inference is then carried out through a combination of these marginal p-values or an adjustment of the marginal p-values (Pesarin 1990; Westfall and Krishen 2001). Bayesian alternatives to multiple hypothesis testing are also employed quite often (Abramovich and Angelini 2006; Guo and Heitjan 2010). Furthermore, all observed markers may not be equally important for a specific analysis. For example, the hierarchy of the markers of interest is commonly studied before starting a clinical trial. The markers are then divided into families according to their hierarchy. Gate-keeping procedures are used on these families of markers to obtain adjusted p-values (Röhmel, Benda, and Läuter 2006; Dmitrienko and Tamhane 2007).
In all these multiple hypothesis testing approaches, the focus is more on controlling the family-wise error rate or the total Type I error than on quantifying the effect. However, when the interest is in quantifying the difference between two groups in the population with respect to various markers, identifying proper effect measures for the markers and obtaining simultaneous confidence or credible intervals for those effect measures can be an appropriate option. The interest may also be in carrying out simultaneous estimation and hypothesis tests with possibly different types of effect measures. In many cases, the hierarchy of the markers is not known and the interest lies in identifying a marker or markers that would be more efficient in distinguishing the two groups. When multiple markers are to be studied, a vast literature is available on multiple testing, but simultaneous testing is rarely seen (Roy and Bose 1953; Gabriel 1969) and simultaneous interval estimation is almost nonexistent. Frequentist methods of simultaneous confidence intervals mainly tackle the problem of continuous response by using the multivariate normal distribution. Furthermore, with the frequentist approach, the methods to obtain confidence intervals need to be adapted to the effect measure used. These frequentist methods cannot be used for small or sparse data, for which the normality assumption is not valid. We refer to Robin et al. (2019) for a good review of analysis methods for multiple markers in small sample studies.
In this article, we propose Bayesian simultaneous credible intervals for effect measures derived from multiple markers, in order to determine the difference between two groups. We extend the method for a single binary marker, discussed in Nurminen and Mutanen (1987), to m (> 1) binary markers. The tail probability from the joint posterior distribution can be treated as a one-sided Bayesian p-value for comparison with the frequentist approach of hypothesis testing. The upper- and lower-tail posterior probabilities are plotted for illustration purposes. The R Shiny app and the corresponding R code developed for exhibiting the joint distributions of various combinations of effect measures are available at the GitLab URL given in the supplementary materials. This app can also be used to obtain the tail probabilities from the joint posterior distribution of a subset of the effect measures for selected markers. Hence, it can be used as a Bayesian approach to gate-keeping procedures.
This article is organized as follows. In Section 2, we describe Bayesian methods for simultaneous credible intervals for a single binary marker. Three possible effect measures, viz., risk difference, risk ratio and odds ratio, are used to quantify differences between two groups. In Section 3, we extend this approach to multiple binary markers and we describe the use of this approach for gate-keeping procedures. In Section 4, we discuss the choice of prior parameters. In Section 5, we discuss frequentist properties of the Bayesian credible intervals with the help of simulation studies. In Section 6, applications of these methods to the data from an observational study of 100 patients suffering from wet-type of age-related macular degeneration (wet-AMD) and to the data from a published clinical trial are demonstrated. We compare Bayesian credible intervals from the case study with Wald and Score type frequentist confidence interval for a single marker. Section 7 gives concluding remarks.
One Binary Marker: Review of Bayesian Methods
Our interest is in comparison of two independent groups, with fixed sample sizes N 1 and N 2 , with regard to one binary marker E 1 . Let π i , i = 1, 2 be the probabilities of presence of E 1 for the two groups, respectively.
The difference between the two groups can be measured in terms of effect measures P(π 1 , π 2 ). Examples of these are: difference of proportions P(π 1 , π 2 ) = (π 2 − π 1 ), risk ratio (π 2 /π 1 ), and odds ratio ({π 2 /(1−π 2 )}/{π 1 /(1−π 1 )}). Bayesian credible intervals for these effect measures are studied in detail in Nurminen and Mutanen (1987). In this section we review their methods and in the next section we extend these methods to m multiple markers. The data of the two groups with a binary marker can be written as a 2 × 2 contingency table, Y, as shown in Table 1. For fixed group sizes N 1 and N 2 and given (π 1 , π 2 ), the probability of observing such a table Y can be written as the product of two binomial probabilities.
To understand the difference between the two independent groups with respect to the marker in the Bayesian way, we specify an appropriate prior for (π_1, π_2). We assume that π_1 and π_2 are independent a priori and assign a beta prior to π_i with parameters (a_i, b_i), i = 1, 2. This specification results in the joint prior

p(π_1, π_2) ∝ π_1^{a_1−1} (1 − π_1)^{b_1−1} π_2^{a_2−1} (1 − π_2)^{b_2−1},  (2)

where 0 ≤ π_i ≤ 1 and a_i > 0, b_i > 0, for i = 1, 2. The beta distribution (2) is a conjugate prior for the binomial likelihood (1), and hence the posterior distributions of π_1 and π_2 given Y are also independent beta distributions with parameters (a_i + y_i1, b_i + y_i0), i = 1, 2.
The posterior probability that the parameter P(π_1, π_2) exceeds p can be written as F(p|Y) = Pr(P(π_1, π_2) > p | Y) (3). For the sake of convenience, we omit the explicit conditioning on Y in (3) and hereafter denote F(p|Y) as F(p). The posterior distribution (3) can be approximated by the Monte Carlo estimator

F̂(p) = (1/S) Σ_{s=1}^{S} 1{P(π_1s, π_2s) > p},  (4)

where (π_1s, π_2s), s = 1, ..., S, are samples from the beta posterior distributions and S is sufficiently large. All the distributional characteristics, such as the posterior mode, posterior median, and credible intervals, can be obtained using (4).
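A minimal sketch of this Monte Carlo estimator for a single binary marker, assuming independent Beta(1, 1) priors; the counts are illustrative and not taken from the case studies.

```python
import numpy as np
rng = np.random.default_rng(1)

# Illustrative 2x2 data: y_i1 successes out of N_i in groups i = 1, 2
N1, y11 = 40, 12
N2, y21 = 45, 22
a, b = 1.0, 1.0          # Beta(1, 1) prior for both groups

S = 100_000
pi1 = rng.beta(a + y11, b + (N1 - y11), size=S)   # posterior draws, group 1
pi2 = rng.beta(a + y21, b + (N2 - y21), size=S)   # posterior draws, group 2

diff = pi2 - pi1                                   # effect measure P = pi2 - pi1
F_hat = lambda p: np.mean(diff > p)                # Monte Carlo estimate of F(p)
print("Pr(pi2 - pi1 > 0 | Y) =", F_hat(0.0))
```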
Credible Intervals for Effect Measures
Two methods are generally used to compute the credible intervals for the effect measure from the samples of posterior distribution (Kruschke 2014).
Equal tailed credible interval
Equal Tailed credible Interval (ETI) is defined such that equal probability is observed below the lower limit and above the upper limit of the interval. If the interval is with coverage probability (1−α), then the probability of observing a parameter below its lower limit is α/2 and the probability of observing a parameter above its upper limit is α/2. The equal tailed credible interval always contains the posterior median of the posterior distribution but may exclude the posterior mode.
Let the two-sided credible interval with coverage probability (1 − α) be denoted by (p_L, p_U). It is computed by solving F̂(p_U) = α/2 and 1 − F̂(p_L) = α/2. For a one-sided interval, if one is interested in large effects, only the upper limit is computed and the lower limit is the lowest possible value of the parameter. For the difference of proportions, for example, the one-sided credible interval therefore takes the form (−1, p_U] or [p_L, 1). In one dimension, these limits can easily be obtained by applying a bisection method or by using existing R packages such as HDInterval (Meredith and Kruschke 2020).
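In practice, the equal-tailed limits can be read off the sample quantiles of the posterior draws, which solves F̂(p_U) = α/2 and 1 − F̂(p_L) = α/2 up to Monte Carlo error; a short sketch reusing the draws from the previous example:

```python
import numpy as np

def equal_tailed_interval(samples, alpha=0.10):
    """Equal-tailed credible interval from posterior samples."""
    return np.quantile(samples, [alpha / 2, 1 - alpha / 2])

# e.g. 90% ETI for the difference of proportions from the draws above:
# lo, hi = equal_tailed_interval(diff, alpha=0.10)
```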
Highest posterior density set
The Highest posterior Density Interval (HDI) is the set of points in the parameter space that contribute most to the posterior density. It may not be a single connected interval. The set is constructed so that the density at any point inside the set is greater than the density at every point outside the set; here densities are empirical densities computed in ε-neighborhoods for a sufficiently small ε > 0, and the set contains posterior probability (1 − α). This is equivalent to finding a threshold k such that all the points whose neighborhood density is greater than or equal to k are included in the HDI and all the points whose neighborhood density is less than k are excluded. An algorithm to compute such a threshold using the bisection method is given in Turkkan and Pham-Gia (1993). However, a direct computation of this threshold is possible by sorting the empirical point densities for a fixed ε. In this article, we use this direct method, where the density of a point is the empirical density in its neighborhood. For the choice of ε in the empirical density computation, we refer to Waterman and Whiteman (1978). To compute the HDI, it is possible to use existing R packages that approximate empirical density functions by normal densities. In this article, we have used the actual empirical densities and not their approximations, because the actual empirical densities can be conveniently extended to higher dimensions. Since this is the highest posterior density set for the given threshold, the posterior mode always lies inside the set, but the posterior median may not be included in the HDI.
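A sketch of the direct method described above, assuming a fixed window half-width ε for the empirical neighborhood density; the choice of ε and the summary returned (the bounding range of the retained points, which can hide a disconnected set) are simplifications.

```python
import numpy as np

def hdi_by_density(samples, alpha=0.10, eps=0.01):
    """Approximate highest-posterior-density set from posterior samples.

    Each sample's density is estimated by the number of samples within a
    window of half-width eps; the (1 - alpha) highest-density samples form
    the HDI, which need not be a single interval.
    """
    x = np.sort(samples)
    # neighborhood counts: #{samples in [x_i - eps, x_i + eps]}
    dens = (np.searchsorted(x, x + eps, side="right")
            - np.searchsorted(x, x - eps, side="left"))
    order = np.argsort(dens)[::-1]                       # highest density first
    keep = order[: int(np.ceil((1 - alpha) * x.size))]
    k = dens[keep[-1]]                                   # density threshold
    in_hdi = dens >= k                                   # points inside the HDI
    return x[in_hdi].min(), x[in_hdi].max(), k           # bounding range, threshold
```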
Multiple Markers: Bayesian Simultaneous Credible Intervals
We extend the approach of Section 2 to m (> 1) binary markers here. Without loss of generality, let m = 2. As before, we consider two independent groups with fixed and known sizes, N_1 and N_2. Our interest is in comparing the two groups with regard to two binary markers E_1 and E_2. We represent the data of these two markers in a 2 × 2 contingency table for each group, as shown in Table 2. Here, y_ijk (j, k = 0, 1; i = 1, 2) is the frequency in each cell of the contingency table for group i and responses j and k for the two markers, respectively. Let π_ijk be the corresponding cell probability, and let π_i = {π_ijk}_{j,k=0,1} and π = (π_1, π_2) be the vector of the four probabilities for the two groups. Note that Σ_{j,k} π_ijk = 1 for each i. Furthermore, for each group i, the probability of presence of the marker E_1 is π^(1)_i = π_i11 + π_i10 and the probability of presence of the marker E_2 is π^(2)_i = π_i11 + π_i01. The likelihood for the parameters π given the data Y = (y_ijk; j, k = 0, 1; i = 1, 2), shown in Table 2, for the two groups can be written as the product of two independent multinomial distributions.
The conjugate prior for the multinomial likelihood is a Dirichlet distribution. We assume that π_1 and π_2 have independent Dirichlet distributions with parameters a_i = (a_i11, a_i10, a_i01, a_i00), i = 1, 2, i.e., p(π_i) ∝ Π_{j,k} π_ijk^{a_ijk − 1}. A posteriori, π_1 and π_2 are independent and distributed according to Dirichlet distributions with parameters (a_ijk + y_ijk), i = 1, 2, and j, k = 0, 1. The posterior distribution of any function of π, for example the marginal probabilities of presence π^(1)_i and π^(2)_i, i = 1, 2, can be obtained from the posterior distribution of π.
Extending the concept of the effect measure P of Section 2, we can now define P_1(π^(1)_1, π^(1)_2) for marker 1 and P_2(π^(2)_1, π^(2)_2) for marker 2. For example, P_1 could be the difference of proportions π^(1)_2 − π^(1)_1. Since the proposed approach allows the use of different effect measures simultaneously, we suppress the explicit formulas and write only P_j in the sequel. The joint posterior probability that the two measures (P_1, P_2) are at least p = (p_1, p_2), respectively, given the data, can be written as F(p|Y) = Pr(P_1 > p_1, P_2 > p_2 | Y).
As before, hereafter we will denote F(p|Y) as F(p).
The above formulation can be extended to multiple binary markers, m > 2, with Dirichlet distribution serving as the prior for 2 m probabilities for each group.
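A minimal sketch of this construction for two binary markers, assuming the reference distance prior D(1/4) and illustrative cell counts: each group's posterior is a Dirichlet distribution, the marginal presence probabilities are sums of cell probabilities, and any pair of effect measures can be evaluated draw by draw.

```python
import numpy as np
rng = np.random.default_rng(2)

# Illustrative cell counts (y_i11, y_i10, y_i01, y_i00) for groups i = 1, 2
y1 = np.array([10, 8, 6, 16])
y2 = np.array([18, 7, 9, 11])
a = np.full(4, 0.25)            # reference distance prior D(1/M) with M = 4

S = 50_000
post1 = rng.dirichlet(a + y1, size=S)   # group 1 posterior draws of (pi_1jk)
post2 = rng.dirichlet(a + y2, size=S)   # group 2 posterior draws of (pi_2jk)

# Marginal probabilities of presence: marker 1 = cells 11 + 10, marker 2 = 11 + 01
p1_m1, p2_m1 = post1[:, 0] + post1[:, 1], post2[:, 0] + post2[:, 1]
p1_m2, p2_m2 = post1[:, 0] + post1[:, 2], post2[:, 0] + post2[:, 2]

P1 = p2_m1 - p1_m1              # e.g. difference of proportions for marker 1
P2 = p2_m2 / p1_m2              # e.g. risk ratio for marker 2
joint_tail = np.mean((P1 > 0.0) & (P2 > 1.0))   # Pr(P1 > 0, P2 > 1 | Y)
```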
Simultaneous Credible Intervals for Effect Measures
Credible intervals for the effect measures of interest for two markers are the simultaneous intervals obtained from the joint posterior distribution of π. In this section, we extend the methods discussed in Section 2.1 to higher dimensions.
Equal tailed credible interval
The two-sided simultaneous equal tailed credible intervals (ETI) for P_j, j = 1, 2, are denoted by (P_L1, P_U1) and (P_L2, P_U2) and are computed from the marginal posteriors as F̂_{p_j}(P_Uj) = α̂/2 and 1 − F̂_{p_j}(P_Lj) = α̂/2, where F̂_{p_j} is defined for one dimension as in (3) for marker E_j, and α̂ is the largest possible value of α such that the joint posterior probability of (P_1, P_2) falling in the resulting product of intervals is at least (1 − α) (condition (9)). The simultaneous interval, denoted by I_s, is then formed as the Cartesian product of (P_L1, P_U1) and (P_L2, P_U2).
The interval is computed using a bisection method described in the following algorithm.
A similar algorithm using bootstrap sampling is given in Montiel Olea and Plagborg-Møller (2019). Although the computation seems to be dependent on a seed used for generating the posterior samples, the required accuracy can be obtained with sufficiently large number of samples.
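The following is a hedged sketch of one natural bisection, assuming condition (9) is the requirement that the Cartesian product of the marginal equal-tailed intervals carries joint posterior mass at least 1 − α; the stopping tolerance and names are illustrative.

```python
import numpy as np

def simultaneous_eti(P1, P2, alpha=0.10, tol=1e-4):
    """Bisection on alpha_hat so that the product of the marginal ETIs
    has joint coverage of approximately 1 - alpha under the posterior draws."""
    lo, hi = 0.0, alpha                       # alpha_hat cannot exceed alpha
    while hi - lo > tol:
        a_hat = (lo + hi) / 2
        l1, u1 = np.quantile(P1, [a_hat / 2, 1 - a_hat / 2])
        l2, u2 = np.quantile(P2, [a_hat / 2, 1 - a_hat / 2])
        cover = np.mean((P1 > l1) & (P1 < u1) & (P2 > l2) & (P2 < u2))
        if cover >= 1 - alpha:
            lo = a_hat                        # intervals can be tightened further
        else:
            hi = a_hat                        # intervals are already too narrow
    a_hat = lo
    return (np.quantile(P1, [a_hat / 2, 1 - a_hat / 2]),
            np.quantile(P2, [a_hat / 2, 1 - a_hat / 2]))
```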
Highest posterior density set
A Highest posterior Density Interval or region (HDI) may not be a single connected region. The density at any point inside the HDI is greater than the density at every point outside the set. The HDI is computed by ordering the point densities and finding a threshold k such that all the points with neighborhood density greater than or equal to k are included in the HDI and all the points with neighborhood density less than k are excluded.
Hence, computing the HDI reduces to finding a threshold k such that the points with neighborhood density at least k carry posterior probability (1 − α). The HDI is then the union of the rectangular neighborhoods of the points with density greater than or equal to k.
Gate-Keeping Procedures for Multiple Markers
In order to apply the proposed methods to clinical trials, we briefly describe some gate-keeping procedures and their connection with the proposed methods. When multiple markers have a natural hierarchy of importance based on treatment effect, then the markers are grouped into families according to their importance. Gate-keeping procedures are applied to address the multiplicity problems by explicitly taking into account the hierarchical structure (Röhmel, Benda, and Läuter 2006;Dmitrienko and Tamhane 2007). Gate-keeping procedures are used in clinical trials, where each family plays the role of a gate-keeper for the next family. If the available data supports that there is no treatment effect for the markers from the higher ranked families, then the lower ranked families will not be assessed. In this section, we will elaborate how simultaneous credible regions can perform the following gate-keeping procedures.
Serial Gate-Keeping
In serial gate-keeping, hypothesis testing is sequential. Hence, only if all the null hypotheses in the current family are rejected, the markers from the next family are analyzed. In serial gatekeeping, the interest lies in rejecting all the null hypotheses. Hence, the resultant test statistic is the minimum of all the test statistics or the adjusted p-value is the maximum of the marginal p-values for the markers. The simultaneous intervals discussed in this article correspond to the serial gate-keeping procedure, where the objective is to find the effect of all the markers simultaneously.
Parallel Gate-Keeping
In parallel gate-keeping, the interest lies in rejecting at least one null hypothesis. Hence, the resultant test statistic of the combined hypothesis is the maximum of all the test statistics computed for the markers. This can be translated into a simultaneous interval setting, by defining the interval as a set of all the points having at least one co-ordinate lying in the corresponding marginal posterior interval.
Choice of Prior Parameters
For two binary markers, we specified Dirichlet prior for multinomial probabilities (Section 3). The important question is about the choice of prior parameters (a i11 , a i10 , a i01 , a i00 ), i = 1, 2, especially because it can be interpreted as an increase in the sample size for small sample correction. In order to understand the role of prior parameters from this angle, we look at the posterior means.
The posterior means of the probability of presence of marker 1 for the groups i = 1, 2 are E[π^(1)_i | Y] = (a_i1. + y_i1.) / (c_i + N_i), where c_i = Σ_{j,k} a_ijk, y_i1. = y_i11 + y_i10, and a_i1. = a_i11 + a_i10.
The parameter of interest is the effect measure P(π^(1)_1, π^(1)_2), and to study its posterior distribution we use the posterior distributions of π^(1)_1 and π^(1)_2. For the difference of proportions with equal sample sizes N_1 = N_2 = N and the same symmetric prior with all Dirichlet parameters equal to a, the posterior mean of the effect measure is (y_21. − y_11.)/(N + 4a) for two markers; for m markers, the above expectation is (y_21. − y_11.)/(N + Ma). It is clear that the sample size in each group increases by the sum of the Dirichlet prior parameters. In the case of equal sample sizes and the same prior, the assessment of the effect measure is therefore based on sample sizes that are increased equally by (4a). Note that m binary markers result in M = 2^m categories and hence require M Dirichlet parameters for each group. In this case, the sample size increases by M with the uniform prior D(1) and by M/2 with Jeffreys prior D(1/2). Such an increase in the sample size would have a visible impact, especially for small sample sizes and a large number of markers. Another prior choice, the reference distance prior (Berger, Bernardo, and Sun 2015), is the D(1/M) Dirichlet prior with all parameters equal to the inverse of the number of categories (1/M). This choice of prior increases the sample size by only 1. In the simulation study for multiple markers (Section 5), we have considered the reference distance prior in addition to the uniform and Jeffreys priors.
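A small numerical illustration of this sample-size inflation, under the stated assumptions of equal group sizes and a symmetric Dirichlet prior with all parameters equal to a; the counts are made up.

```python
# Posterior mean of the difference of proportions for marker 1 under the
# assumptions in the text: equal group sizes N and a symmetric prior D(a),
# giving (y21. - y11.) / (N + M * a) with M = 2**m cells per group.
N, y11_dot, y21_dot, m = 50, 20, 30, 2
M = 2 ** m
for a, name in [(1.0, "uniform D(1)"), (0.5, "Jeffreys D(1/2)"), (1 / M, "reference D(1/M)")]:
    print(f"{name:>18}: {(y21_dot - y11_dot) / (N + M * a):.4f}")
```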
It is known that for m = 1 (categories M = 2), Jeffreys prior is β(1/2, 1/2) and it corresponds to adding 1/2 to each cell. This is equivalent to the continuity correction used in the frequentist approach in order to facilitate computation in the case of sparsity. It is to be noted that the reference distance prior is same as Jeffreys prior for m = 1.
The parameter of interest is P(π_1, π_2), but we have chosen to specify priors for (π_1, π_2). In order to visualize the prior distributions of P(π_1, π_2) induced by the chosen priors on π_1 and π_2, the prior densities of (π_2 − π_1) and (π_2/π_1) are shown in Figure 1 of the supplementary materials. For the difference of proportions, uniform priors on (π_1, π_2) give a triangular density, and beta distributions with large parameters give bell-shaped densities. For the risk ratio, priors with small beta parameters show a roughly linear density curve with negative slope over risk ratio ∈ (0, 10), and beta distributions with large parameters give bell-shaped densities.
Simulation Studies
The main aim of this section was to explore the frequentist properties of simultaneous credible regions, using simulation study for m = 1, 2 markers. We have followed the recommendations for frequentist simulation study in Ollila et al. (2022). We also investigate the influence of prior parameters when the sample sizes are small or the data are sparse.
A small simulation study was also performed to examine the effect of well-suited and ill-suited priors. For this study, data were generated with proportions drawn from beta distributions with small variance, with the expected value of π_1 equal to 0.5 and that of π_2 in {0.3, 0.5, 0.7}. In addition to the three sample size combinations discussed earlier, a larger sample size of 200 in each group was also considered to examine the effect of the prior on the posterior when the data have varied sample sizes.
Multiple Markers
For multiple markers, to determine the parameters for the simulation, we used the real data described in Section 6.1 as well as other scenarios, including equal probability of presence for both markers in both groups and high correlation between the markers. The six simulation scenarios are described in Table 3.
Table 3. Simulation scenarios for two binary markers: six scenarios of marginal probabilities of presence of two markers in two groups. NOTE: ρ_2 is the Kendall's τ correlation between the two markers and ρ is the corresponding correlation between the bivariate normally distributed variables. When the probability of presence is 0.5 for both markers, ρ_2 = (2/π) arcsin(ρ), as in scenario 3. For the other scenarios, ρ was chosen by trial and error so that, on average, the simulated data give binary variables with the required correlation ρ_2.
A sample of size N_i in group i was generated using the given marginal probabilities (π^(1)_i, π^(2)_i) and their association in terms of Kendall's τ. We generated the datasets using the package simdata (Kammer 2020) in R. In the first step, data from a multivariate normal distribution with a specified correlation matrix were generated. In the next step, the data were dichotomized with the given probabilities to obtain binary variables. The dichotomization of two bivariate normal variables reduces Pearson's correlation ρ to the Kendall's τ correlation ρ_2 between the binary markers. When the dichotomization probability is 0.5 for both variables, ρ_2 = (2/π) arcsin(ρ) (Rousson 2014). For other choices of dichotomization probabilities, we configured the value of ρ, by performing a large number of simulations, so that the required ρ_2 is obtained after dichotomization. For each of the six scenarios, 1000 datasets were simulated and analyzed.
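The paper uses the R package simdata for this step; the sketch below is an independent NumPy/SciPy equivalent of the described procedure, generating correlated binary markers by dichotomizing bivariate normal draws at the quantiles matching the target marginal probabilities. The value of ρ needed to hit a target Kendall's τ would still have to be calibrated by simulation, as in the text.

```python
import numpy as np
from scipy.stats import norm

def simulate_binary_markers(n, p1, p2, rho, rng):
    """Correlated binary markers via dichotomized bivariate normal draws."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    # Marker present when the latent normal falls below its p-quantile
    return (z[:, 0] < norm.ppf(p1)).astype(int), (z[:, 1] < norm.ppf(p2)).astype(int)

rng = np.random.default_rng(3)
x1, x2 = simulate_binary_markers(200, p1=0.6, p2=0.9, rho=0.5, rng=rng)
```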
Exploration of frequentist properties: For a single marker, we computed two types of frequentist confidence intervals, Wald and Score confidence intervals, and two types of Bayesian credible intervals, ETI and HDI for each effect measure. Noninformative uniform prior and Jeffreys prior were used for Bayesian methods. The estimands in this exploration were the coverage probabilities of each confidence and credible intervals. The coverage probabilities of the true value of the effect parameter were calculated for each type of intervals described above.
For multiple markers, the coverage of simultaneous posterior regions was studied with three priors for the multinomial probabilities: uniform, Jeffreys, and the reference distance prior.
Table 4. Coverage probabilities for 15 scenarios of (π_1, π_2) with balanced sample sizes for the effect measure difference of proportions, computed with the following methods: Bayesian equal tailed credible interval (ETI) with Beta(1, 1) prior, Bayesian highest posterior density credible region (HDI) with Beta(1, 1) prior, ETI with Jeffreys prior, HDI with Jeffreys prior, Wald confidence interval, Newcombe confidence interval (NC), and Miettinen-Nurminen confidence interval (MN).
One Marker
The coverage probabilities for the effect measure difference of proportions with equal sample sizes in both groups are given in Table 4. The coverage probabilities for this effect measure with unbalanced group sample sizes are given in Table 5. Similar tables for the effect measures ratio of proportions and odds ratio are given in Tables 1-4 of the supplementary materials. For all the effect measures, the ETI and the frequentist score interval showed the desired coverage. The Wald interval showed good coverage for larger sample sizes compared with smaller sample sizes. The HDI showed the lowest coverage in all scenarios and for all effect measures. For unbalanced sample sizes, when one of the probabilities of presence of the marker is 0.1 or 0.9 and the effect measure is the ratio of proportions or the odds ratio, the coverage probability of the Bayesian credible intervals appears to be lower than in the balanced sample size scenarios.
To understand the effect of prior parameters used in the analysis in comparison with the distributions used for generating the data, a small simulation study was performed. The results of this study are given in Table 6. In the case of informative priors, when the analysis prior was the same as the prior used for simulating the data, the coverage probability was close to the desired value. When the analysis prior and the actual prior were different from each other, the coverage probability was lower, especially for small sample sizes. The effect of ill-specified prior was prominent with imbalanced sample sizes. On the other hand, with medium to large sample sizes, the effect of a prior was negligible. Noninformative or weakly informative priors, like uniform prior and Jeffreys prior, show good coverage probabilities in all the scenarios.
Multiple Markers
The coverage probabilities of the simultaneous ETI and the HDI region are shown in Table 7. Posterior joint distribution of the effect measures of two markers was computed with uniform prior (Dirichlet D(1) prior), Jeffreys prior (Dirichlet D(1/2) prior) and reference distance prior (Dirichlet D(1/4) prior). As expected, equal tailed credible interval showed better coverage as compared to HDI. Since the joint distribution of the effect measures was not available, frequentist simultaneous confidence regions are not reported for multiple markers.
It is to be noted that, with noninformative priors, the equal tailed credible region includes the true parameter more often than the HDI, although the HDI is the smallest region containing the points of highest density. In all the scenarios of the simulation study, the coverage probability with Jeffreys prior was less than the coverage probability obtained with the uniform prior. Although the difference is small, it indicates the importance of the choice of prior. In scenarios 5 and 6 with highly correlated markers, the coverage probability of the HDI improved compared with the first four scenarios. Further, the coverage probabilities of the posterior region with Jeffreys prior and with the reference distance prior are comparable, indicating the benefits of the reference distance prior over the uniform and Jeffreys priors for a large number of markers.
Application 1: Observational Study of Wet-AMD
We considered data of 100 wet age-related macular degeneration (wet-AMD) patients whose baseline and sixth-month measurements were available. The measurements included visual acuity (VA), presence of abnormal fluid within the retina as a macular cyst, presence of abnormal fluid between the retina and the retinal pigment epithelium (neuroepithelial detachment, NED), and pigment epithelial detachment (PED). All the patients underwent the same treatment, with slight variation in the duration between injections. Wet-AMD being an age-related disease, the patients were grouped according to their age at the time of diagnosis: baseline age less than 80 years and baseline age 80 years or above. The aim of this analysis was to understand how the two groups differed with respect to the markers and how the baseline age affected the initial treatment effect in terms of these markers. The markers Cyst, NED, and PED were binary, while VA was continuous. For the purpose of illustration, VA was dichotomized: 1 indicating that VA at 6 months was greater than or equal to baseline VA, and 0 indicating that VA at 6 months was less than baseline VA. With simultaneous credible intervals, we checked whether these markers distinguished the age groups at baseline and after initial treatment. A general description of this cohort and the data are available in Ollila et al. (2022).
Analysis of One Marker
We analyzed each of the three markers (Cyst, NED, PED) separately, at baseline and at 6 months, using the method described in Section 2.1, and computed 90% credible intervals for the difference of proportions between the two age groups with the noninformative Beta(1, 1) prior. The resulting empirical posterior densities are given in Figure 1. In each plot of Figure 1, the x-axis shows the parameter of interest, the difference of proportions, and the y-axis shows the posterior density. The upper panels show the posterior densities for the baseline measurements of each marker and the middle panels show the posterior densities for the measurements of the same markers after 6 months of treatment. The posterior density of the difference of proportions for the marker Cyst after the initial treatment has less variance than its baseline counterpart and is more concentrated on the positive side of "no effect". This indicates that the presence of Cyst is generally higher in the age group of 80 and above, with an ETI of (−0.08, 0.16) at baseline and (−0.03, 0.17) after 6 months of treatment. This reduction in the presence of Cyst shows that the treatment worked better for younger patients. The credible interval of approximately (−0.25, 0.04) for the difference of proportions for NED indicates that the presence of NED is lower in the age group of 80 and above at baseline. After 6 months of treatment, the posterior has not changed drastically. The presence of PED does not seem to differ between the two groups at baseline or after the initial treatment, but the posterior distribution after 6 months has shifted left compared with the baseline posterior distribution.
In all the figures, the triangle on the x-axis indicates the posterior median and the filled circle indicates the posterior mode. Almost all of the posterior distributions are unimodal and symmetric about their posterior modes; hence, the posterior mode and median coincide. Figure 1(g) shows the posterior distribution of the difference of proportions between the two age groups for a binary marker indicating improvement in VA at six months after the start of the treatment. The improvement in VA in the first 6 months is clearly better in the lower age group, since the 90% credible interval lies completely below zero. Figure 2 shows the posterior densities of baseline Cyst with uniform, Jeffreys, and an informative prior. For the presence of Cyst, prior study results were available in Chakravarthy, Evans, and Rosenfeld (2010). This information was used to specify an informative prior for Cyst in both groups. Figure 2(c) shows the posterior density of the baseline Cyst measurements with the informative prior. The posterior densities with the informative and the noninformative prior look similar, but with the informative prior the posterior mode is shifted slightly away from zero compared with the noninformative prior.
Many frequentist methods, such as the Wald and score methods and methods with continuity correction (Newcombe 1998), are available for computing a confidence interval for the effect measure of a single binary marker. However, most of these confidence intervals are obtained by inverting a specific hypothesis test. We obtained these intervals by testing, for each marker, the null hypothesis of no difference between the age groups against the alternative hypothesis of a nonzero difference. We compared the Bayesian ETIs with these frequentist intervals in one dimension.
The actual values of the credible intervals and the frequentist confidence intervals for the improvement in VA are given in Table 8. The HDI is not included in the table since it may not be a connected interval and therefore cannot be written as two limit points. With the noninformative Beta(1, 1) prior, the Bayesian posterior credible interval is close to the frequentist confidence interval. Overall, we observed that with the noninformative Beta(1, 1) prior, the Bayesian posterior credible intervals are close to the frequentist Agresti-Caffo and Newcombe confidence intervals.
Analysis of Multiple Markers
Next, we analyzed multiple markers jointly for the comparison of the two age groups. In Figure 3, the pairwise joint credible intervals of the markers at the sixth month of the treatment period are shown. On the y-axis, the values of the effect measure of the first marker are displayed, and on the x-axis, the values of the effect measure of the second marker are displayed. The dashed rectangle shows the equal-tailed rectangular simultaneous interval (ETI) for two markers, and the region marked by the smaller rectangles shows the HDI. The mode of the joint posterior density is shown by a dark circle on the region. The inference obtained from each individual marker is reflected to a large extent in the pairwise joint distributions. However, while the one-dimensional 90% interval for VA clearly excluded zero, all the simultaneous intervals include zero. From Figure 3(a), it is clear that the joint posterior of the difference of proportions for NED and VA improvement has equal variation on both effect measures, and the posterior mode is below zero for both NED and VA. The joint posterior density of the effect measures for Cyst and VA in Figure 3(c) shows a longer right tail for VA; Cyst has smaller variation than VA. A similar pattern is observed in Figure 3(d) for Cyst and NED: the Cyst effect measure has smaller variation than NED, and the distribution has a longer right tail for NED. Figure 4 shows Bayesian simultaneous credible intervals for the difference of proportions for the two markers Cyst and NED, computed from the joint posterior distribution with three different priors: uniform, Jeffreys, and reference distance. Compared with the uniform prior, the Jeffreys prior and the reference distance prior give smaller posterior regions. In Figure 4(a), with the uniform prior, the posterior mode is away from the observed difference of proportions, while with the Jeffreys prior they are close to each other. In Figure 4(c), the posterior mode is shifted toward zero from the observed value on both dimensions.
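One illustrative way to obtain such a rectangular equal-tailed simultaneous region from joint posterior draws is sketched below; the bisection calibration of a common per-margin tail probability is an assumption made for illustration, not the authors' exact algorithm.

```python
# Hedged sketch: rectangular equal-tailed simultaneous region from joint
# posterior draws of m effect measures (rows = draws, columns = markers).
import numpy as np

def rectangular_eti(samples, level=0.90, tol=1e-4):
    """Find a common per-margin tail probability whose marginal equal-tailed
    limits form the smallest rectangle covering `level` of the joint draws."""
    lo_a, hi_a = 0.0, 1.0 - level
    while hi_a - lo_a > tol:
        a = 0.5 * (lo_a + hi_a)
        lo = np.quantile(samples, a / 2, axis=0)
        hi = np.quantile(samples, 1 - a / 2, axis=0)
        coverage = np.all((samples >= lo) & (samples <= hi), axis=1).mean()
        if coverage >= level:
            lo_a = a            # rectangle can be shrunk further
        else:
            hi_a = a            # rectangle became too small
    a = lo_a
    return np.quantile(samples, a / 2, axis=0), np.quantile(samples, 1 - a / 2, axis=0)

# Usage: draws = joint posterior samples of (Cyst, NED) differences of proportions;
# lower, upper = rectangular_eti(draws, level=0.90)
```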
Visualizing simultaneous intervals for three or more markers is not trivial. The cumulative joint probability is computed such that all effect measures are either below a fixed value (the left tail probability) or above a fixed value (the right tail probability). The cumulative joint probabilities over a grid of these common diagonal values are plotted in Figure 5. In this example, the diagonal values of the difference of proportions are plotted on the x-axis and the corresponding cumulative joint probability is plotted on the y-axis. These plots are obtained from the R code available at the gitlab URL given in the supplementary material.
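The authors provide R code for these curves; as a hedged illustration, an equivalent computation from joint posterior draws might look like the following sketch.

```python
# Sketch: diagonal cumulative joint tail probabilities for m effect measures.
import numpy as np

def diagonal_tail_curves(samples, grid):
    """samples: (draws, m) joint posterior draws; grid: common diagonal values d.
    Returns P(all effect measures < d) and P(all effect measures > d) for each d."""
    left = np.array([np.all(samples < d, axis=1).mean() for d in grid])
    right = np.array([np.all(samples > d, axis=1).mean() for d in grid])
    return left, right

# Example grid for differences of proportions, which lie in (-1, 1):
# grid = np.linspace(-1, 1, 201); left, right = diagonal_tail_curves(draws, grid)
```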
The uppermost panels show the pairwise (two markers simultaneously) left and right cumulative joint probabilities, respectively. The leftmost curve in both plots shows the joint cumulative probability for VA and NED. From their marginal distributions and their joint distribution, it can readily be inferred that the effect measure for these two markers is negative when comparing the two age groups. From this curve, it can also be seen that more than 80% of the probability mass lies below zero for the difference of proportions for NED and VA improvement. The rightmost curve is from the joint posterior of Cyst and PED. The left-panel curve for Cyst and PED shows that less than 20% of the joint probability mass lies below zero, indicating that the difference between the two age groups is positive for this joint distribution. Overall, in the pairwise distributions, the pairs containing VA improvement as one of the markers have more probability mass below zero, and the pairs with Cyst as one of the markers have more probability mass above zero.
A similar trend is seen in the plots in the middle panel of Figure 5, where the joint posterior tail probabilities are plotted for the effect measures of three markers simultaneously. For the combinations with Cyst as one of the markers, a substantial part of the posterior probability mass lies above zero, while the combinations without Cyst are dominated by the effect measure of improvement in VA. It can be inferred from these graphs that Cyst is an important marker jointly with VA and helps in differentiating the two age groups.
Application 2: Clinical Trial-Migraine Study
To illustrate the use of the proposed method in a randomized clinical trial, we reanalyzed a small section of the data from a randomized, placebo-controlled, double-blind clinical trial designed to compare multiple treatments for migraine with placebo (Ho et al. 2008). We considered two groups of 50 patients each, with one group on one of the treatments and the other on placebo. These data were taken from the StatXact (version 12) software (Cytel Inc. 2019). Here, we considered four markers: pain freedom at 2 hr post treatment (EP1), absence of phonophobia at 2 hr post treatment (EP2), absence of photophobia at 2 hr post treatment (EP3), and sustained pain freedom up to a 24 hr period (EP4). Of these markers, EP1, EP2, and EP3 were co-primary markers, while EP4 was the secondary marker. Figure 6 shows the posterior densities of the difference of proportions for all four markers with the uniform prior and the Jeffreys prior. From Figure 6, it is clear that the posterior densities of the difference of proportions corresponding to EP2 and EP4 are shifted away from zero to the right, and the 90% credible intervals fall completely on the right side of zero. For the markers EP1 and EP3, the shift is comparatively smaller and the lower limits of the credible intervals are close to zero. The markers EP2 and EP4 clearly show the treatment effect. The posterior distributions with the two noninformative priors, uniform and Jeffreys, show very little difference. Figure 7 shows the joint cumulative posterior distribution for the diagonal values of multiple markers. The inference obtained from the individual markers is further strengthened by the inference from the joint distribution. The upper panels show the pairwise joint left and right cumulative probabilities, respectively. The rightmost cumulative joint probability curve is for the joint probability of EP2 and EP4 and shows more posterior probability mass on the right side of zero. The leftmost curve is the joint probability of EP1 and EP3. A similar trend is seen in the middle panel for the joint probabilities of the effect measures of the various combinations of three markers out of the four. The lower panel shows the joint probabilities for all four markers and clearly shows a positive treatment effect from the joint posterior. Table 9 lists the frequentist individual and adjusted p-values (adjusted for the presence of other markers) for serial and parallel gate-keeping. It also shows the Bayesian empirical posterior probability of "no effect" computed from the marginal posterior and the joint posterior distribution. Since the frequentist method only provides an adjusted p-value and not a confidence interval for multiple markers, it was not possible to compare the Bayesian credible intervals with their frequentist counterparts. Since the dataset was small, we used the exact nonparametric method from StatXact (Cytel Inc. 2019) to compute the frequentist individual-marker p-values and adjusted p-values.
For all three primary markers, the individual or marginal frequentist and Bayesian p-values are comparable. The joint posterior probability of "no effect" in all the markers simultaneously is very small, indicating the treatment effect. The frequentist serial gate-keeping p-value is 0.076, which is generally inflated because of the discrete nature of the exact p-value computations.
Furthermore, the Bayesian posterior probability of "no effect" for the secondary marker EP4 is 0.018, which agrees with the frequentist inference of rejection of the null hypothesis with the exact p-value of 0.004.
The use of the proposed methods for gate-keeping in clinical trials is illustrated in Figure 8, which shows the equal-tailed simultaneous regions at the 90% credible level for the two markers EP1 and EP4. The left figure shows the ETI with serial gate-keeping, while the right figure shows the ETI with parallel gate-keeping.
Discussion
Comparison of two groups with regard to multiple markers occurs frequently in randomized clinical trials as well as in observational studies. Statistical inference using all markers simultaneously would provide a coherent approach for the comparison; however, this has rarely been carried out.
In this article, we have attempted to assess m binary markers using simultaneous Bayesian credible intervals. When multiple markers are available for comparing two groups, the inference based on all of them simultaneously is often different from that based on individual markers alone. There is often a loss of information in the latter inference, and the results may be misleading. As shown in the wet-AMD application, visual acuity as a single marker can differentiate between the two age groups very well, since zero is excluded from the credible region obtained from the marginal distribution. But when visual acuity is considered together with the other markers, all the simultaneous credible regions of two markers include the point (0, 0), indicating a not-so-extreme difference between the age groups, contrary to what is suggested by the individual marker. Further, although Cyst did not give a clear indication of an extreme effect measure when studied as a single marker, all its joint posterior probability curves indicate that Cyst is an important marker along with VA and helps in differentiating the two age groups.
The major advantage of using the proposed Bayesian method is that it works seamlessly for marker-specific effect measures, unlike frequentist methods. Another problem with the frequentist approach is that, when the effect measures do not follow a multivariate normal distribution, it is difficult to obtain simultaneous confidence intervals, and the only inference in such a case is a single p-value, which is often criticized. With the Bayesian analysis, the joint posterior distribution of all the effect measures is itself available, and any characteristic of the effect measures can be obtained. As shown in Section 4, the choice of the prior parameters for the Dirichlet prior enables the estimation of the proportions when the sample size is small or the data are sparse. This can be viewed as analogous to the ad hoc continuity corrections employed in the frequentist approach. Based on the simulation studies, we recommend use of the Dirichlet prior with all parameters equal to the inverse of the number of categories (1/M) in most situations. This choice of prior is recommended especially for small or sparse data because it does not increase the sample size dramatically.
In the frequentist approach, an appropriately chosen effect measure is estimated or tested directly. Here we have specified priors on the multinomial probabilities, because specifying priors directly on the effect measures results in nuisance parameters. Moreover, the choice of prior may not be obvious for the effect measure.
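A small sketch of this prior specification is given below; the cell counts are hypothetical, and the mapping from joint response cells to marker-level effect measures is only indicated in a comment.

```python
# Sketch: Dirichlet prior on the multinomial cell probabilities with all
# parameters equal to 1/M, as recommended above (M = 2^m joint response cells).
import numpy as np

rng = np.random.default_rng(0)

m = 3                                        # number of binary markers
M = 2 ** m                                   # joint response categories
counts = np.array([5, 0, 3, 1, 0, 2, 0, 4])  # hypothetical sparse cell counts for one group
alpha = np.full(M, 1.0 / M)                  # Dirichlet(1/M, ..., 1/M) prior

cell_prob_draws = rng.dirichlet(alpha + counts, size=50_000)
# Each row is one posterior draw of the M cell probabilities; marker-level
# proportions are obtained by summing the cells in which that marker is present,
# and the effect measures compare those sums between the two groups.
```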
From the simulation study, it was clear that the frequentist methods worked better for equal and large sample sizes in the two groups than for imbalanced or small sample sizes, whereas the Bayesian methods showed consistent overall performance. We have already noted that, although the highest posterior density region is the smallest region containing the points of highest density, it does not achieve the required coverage of the true parameters. On the other hand, the equal-tailed credible regions show good coverage of the true parameters, comparable to the frequentist methods of Agresti-Caffo or Newcombe, which are preferred over the traditional Wald or score confidence intervals. For multiple markers, the equal-tailed regions showed better coverage than the highest posterior density regions. However, the highest posterior density region is a better visualization of the posterior probability distribution. In addition, the proposed methods can provide inference for the gate-keeping procedures popular in clinical trials. The advantage of these methods is that they can be used even when the hierarchy of markers is not known a priori. The method can also serve as a good Bayesian alternative to the simultaneous confidence interval method for tests such as the noninferiority test discussed in Tang and Yu (2020).
Throughout this article, we have assumed that the group sizes are fixed. However, similar methods can be developed by appropriately choosing the likelihood function when the group sizes are unknown. For example, the Poisson likelihood for counts is appropriate in such situations, and the gamma distribution is the conjugate prior for this likelihood. Furthermore, the relative risk or risk ratio can be considered as an effect measure for each marker, and simultaneous credible intervals can be obtained along similar lines as in Section 6.
A limitation of the proposed methods is the visualization of simultaneous credible regions for a large number of markers m.
Equal-tailed regions can be specified by their lower and upper limits, but the highest posterior density regions cannot be given by two endpoints, so visualization is important for them. Visualizing simultaneous regions beyond three dimensions is not possible; hence, one has to rely on the cumulative probability curves to understand the joint posterior distribution, as in Figure 7. When m increases, the number of possible combinations of responses increases to 2^m. The methods can easily become computationally intensive, and hence alternative methods might be needed. | 11,096.4 | 2022-08-10T00:00:00.000 | [ "Mathematics" ] |
TOP-Rank: A Novel Unsupervised Approach for Topic Prediction Using Keyphrase Extraction for Urdu Documents
In Natural Language Processing (NLP), topic modeling is a technique to extract abstract information from documents containing large amounts of text. This abstract information leads to the identification of the topics in a document. One way to retrieve topics from documents is keyphrase extraction. Keyphrases are a set of terms which represent a high-level description of a document. Different keyphrase extraction techniques for topic prediction have been proposed for several languages, e.g., English and Arabic. However, this area still needs to be explored for other languages such as Urdu. Therefore, in this paper, a novel unsupervised approach for topic prediction for the Urdu language is introduced which is able to extract more significant information from documents. For this purpose, the proposed TOP-Rank system extracts keywords from the document and ranks them according to their position in a sentence. These keywords, along with their ranking scores, are used to generate keyphrases by applying syntactic rules in order to extract more meaningful topics. These keyphrases are ranked according to the keyword scores and re-ranked with respect to their positions in the document. Finally, our proposed model identifies the top-ranked keyphrases as topically significant, and the keyphrase with the highest score is selected as the topic of the document. Experiments are performed on two different datasets and the performance of the proposed system is compared with existing state-of-the-art techniques. Results show that our proposed system outperforms existing techniques and is able to produce more meaningful topics.
I. INTRODUCTION
In the last two decades, with the growth in the use of the World Wide Web (WWW), many news forums such as news channels, reporters, and column writers broadcast their daily news and articles on their websites. This evolution in online news and other electronic forums has created numerous challenging tasks for researchers, who have to find useful information in trillions of unstructured data records. Building on the latest research in natural language processing (NLP) and statistics, researchers have developed several new techniques for extracting valuable information from a collection of documents using hierarchical or probabilistic models called topic models [1]. The key benefit of topic modeling is determining patterns among words or phrases and clustering documents which share similar patterns. In other words, a topic model is a generative model for documents which specifies a simple probabilistic process by which documents can be produced. Furthermore, topic modeling is a statistical method that facilitates organizing, summarizing, and understanding large amounts of textual data [2].
Topic modeling and keyphrase extraction techniques help identify the title of a document, which in turn helps readers choose the most relevant documents. However, to assign a title to a document, one has to read the whole document and then assign the most appropriate title. This is a time-consuming task and requires manual interaction by a human. Therefore, there is a need to build a system which can automatically read the document and assign the most appropriate title. Considerable effort has gone into topic prediction for different languages. In various applications, topic modeling has been applied for detecting and retrieving information [3]. Moreover, topic modeling has been applied to countless fields including text clustering, document tagging, film genre identification, sentiment analysis, etc. [4]-[6]. Several techniques are available for topic modeling to identify or extract latent information from a text document, e.g., Latent Semantic Analysis (LSA) [7], Probabilistic Latent Semantic Analysis (PLSA) [8], and Latent Dirichlet Allocation (LDA) [9]. However, very limited work has been performed for the Urdu language due to its complex structure [10]-[12]. Furthermore, topic modeling with keyphrase extraction has been studied for languages like English, Chinese, and Arabic, but no such technique exists in the literature for the Urdu language. Urdu is a morphologically rich but resource-poor language [13]. It is Pakistan's national language, and more than 170 million people all over the world use it for communication (https://en.wikipedia.org/wiki/Urdu). It is a language with a rich grammar and a wide range of derivations and inflections in a single word, which makes it a difficult language to process. Since Urdu is new to the fields of NLP and information retrieval (IR), relatively little research has been performed on it. Many models and tools developed for other languages cannot operate on Urdu because of its completely distinct language structure [14]. English is written from left to right, but the Urdu script is written from right to left. Recognition of phrases in English is simple compared with Urdu because English follows certain standards, i.e., space insertion, the notion of capitalization, etc. In Urdu, however, there is no standard for space insertion and no notion of word capitalization. Hindi and Urdu are close only for speakers of both languages; the writing styles of the two languages are distinct.
The Urdu text classification problem has been studied by several researchers in recent years [15]-[20]. However, these approaches rely on classifying documents and do not consider title prediction. Assigning a title to a document is different from the classification problem, as it deals with predicting the title of a single document, and each document may have a different title depending on the information it provides, even within the same domain. Document classification, on the other hand, groups similar documents into one class based on the similarity of the text in the documents and assumes that all the documents belong to the same domain. This is why assigning a title to a document is more challenging.
Building on the above explanation, in this paper we propose a novel TOP-Rank approach for topic prediction by extracting top-ranked keyphrases in the Urdu language. In the first step, the proposed approach pre-processes the text to remove invalid characters and stop words and to identify sentence boundaries. After preprocessing, our system identifies keywords and assigns a rank (score) to each extracted keyword on the basis of its position in a sentence. After ranking the keywords, the proposed system extracts keyphrases of different sizes from the document based on the extracted keywords, and these keyphrases are ranked by adding the scores of the keywords in each phrase. Once the keyphrases are ranked, the top-ranked keyphrases are selected and re-ranked by revisiting the document, and the scores of these keyphrases are updated based upon their occurrences in the document. Finally, the keyphrase with the highest score is assigned as the topic of the document. We have conducted experiments on Urdu-language datasets which contain multiple documents from several different domains. The effectiveness of our proposed model is evaluated on these datasets and compared with state-of-the-art topic modeling-based approaches. Experimental results show that our proposed model outperforms the topic modeling-based approaches and produces promising results on the Urdu-language datasets.
The rest of the paper is organized as follows: section two highlights related work and section three presents the proposed methodology. Explanation of dataset and experimental evaluations is given in section four and finally we conclude our work in section five.
II. RELATED WORK
Several supervised and unsupervised approaches have been developed for topic modeling for different languages. In the supervised line of research, classifiers are trained on textual data annotated with keyphrases to determine whether a document or phrase is a topical keyphrase or not. Huang et al. [21] proposed a supervised topic modeling technique, the Siamese Labeled Topic Model (SLTM), for English at the sentence level. Its working mechanism was similar to pLSA in that words were distributed on the basis of their labels, and artificial neural networks were used for training. Wang et al. [22] proposed a hierarchical Dirichlet process-based inverse regression (HDP-IR) model for the evaluation of e-commerce reviews. HDP-IR contained three components: a non-parametric component, an inverse regression component, and a coupling component. The first component was used to build the HDP to capture the uncertainty of the data concerning topics; the second component was influenced by the multinomial inverse regression (MNIR) model; and the third component combined the first two and integrated the topics into the logistic regression within the MNIR model. Zeng et al. [23] proposed an expectation-maximization algorithm for topic modeling by computing the maximum likelihood. For the distribution of topics in a document, they demonstrated fast online expectation maximization (FOEM), which was able to converge LDA's likelihood function to a local stationary point. By dynamically scheduling fast-speed and streaming parameters for low memory use, FOEM was more efficient for lifelong topic modeling on large amounts of data. Li et al. [24] designed generative models for multi-labeled document classification and trained two different extensions of LDA, named frequency LDA (FLDA) and dependency frequency LDA (DFLDA), for the multi-label document categorization task. These two models aimed to incorporate observations of label frequency and label dependency into traditional LDA. FLDA used label frequency information to generate a labeled Dirichlet prior for each document, while DFLDA introduced a topic layer to capture co-occurrence relationships among labels.
In unsupervised approaches, various measures such as TF-IDF and topic proportions are used to identify topic-associated terms. In topic prediction, keywords are ranked based on their relevance to the topic [25]. Latent semantic analysis (LSA) was developed for analyzing the relationships between a set of documents and the terms they contain. Documents are compared by taking the cosine of the angle between them, such that values close to 1 indicate similar documents, whereas values close to 0 indicate different documents [7]. Bastani et al. [26] developed an intelligent system to analyze consumer complaints, labeled each document with different keywords, and trained an LDA model for topic prediction from these complaints. Venkatesaramani et al. [27] proposed a two-step approach to topic modeling for short texts collected from tweets and YouTube comments; they used TF-IDF-based clustering to find similarity between comments. Zhang and He [28] proposed an approach for extracting topics for events on social media using reinforced knowledge. Their methodology consisted of three steps: first, they ran a topic model based on word embeddings and the structure of the conversation to mine the preceding topic of each event.
In the second step, they mined a set of reinforced knowledge from the previously extracted topics. Finally, using the reinforced knowledge sets, they extracted the final topic for every event. Wang et al. [29] proposed a system for news-topic recommendation by extracting keywords from news articles. They applied the rapid automatic keyword extraction (RAKE) method to extract keywords from online news and then ranked the extracted keywords using a position rank algorithm based on syntactic rules. Alhawarat and Hegazi [30] proposed a topic modeling technique for an Arabic-language news dataset. They used the LDA and k-means clustering algorithms; for topic prediction they reduced the vector space model and then extracted the hidden topics from the documents as a feature selector.
Several approaches have adopted keyphrase extraction. Bougouin et al. [31] proposed an approach to keyphrase extraction for topic prediction that used a topic-based model with clustering. After forming clusters of topics, a graph was generated in which the topic clusters were the vertices and edges were created by counting the keyphrases in the clusters that appeared together; the distance calculated between clusters defined the edges between them. They used the TextRank algorithm to rank the topics, and finally the keyphrase with the highest rank was considered the topic. Danilevsky et al. [32] did not consider the length of the keyphrase; they identified topics and grouped them into clusters. Each cluster was assigned words from the document, these words were formed into keyphrases and ranked based on purity, completeness, and coverage, and finally the topmost keyphrases were selected as the topic. Parveen et al. [33] proposed a graph-based model in which every node was considered a topic. Each edge in the graph was normalized according to sentence length so that long sentences did not gain an advantage. Furthermore, they utilized the Hyperlink-Induced Topic Search (HITS) algorithm to rank sentences. Boudin [34] extracted keyphrases by using multipartite graphs and sentence clusters to incorporate topical knowledge, and Alfarra and Alfarra [35] proposed a graph-based technique for extracting keyphrases from a single document which utilized the phrases and terms in a sentence rather than focusing on the structure of the document.
Wan et al. [36] proposed an unsupervised graph-based approach for both summarization and keyword extraction. They generated sentence-to-sentence, sentence-to-word, and word-to-word graphs. Another graph-based approach was proposed by Danesh et al. [37], which used TF-IDF, term length, and the position of first occurrence (PFO) of keywords in its ranking mechanism. The main contribution was the use of PFO, which ranks keyphrases that pass a certain threshold and decreases the term frequency score of a term if it appears within other terms. Ali, Wang, and Haddad [38] assigned each word a syntactic category (i.e., noun, adjective, etc.) and identified several syntactic patterns, and phrases were extracted according to these syntactic patterns. Furthermore, these keyphrases were ranked using TF-IDF and the TextRank algorithm. Corina and Cornelia [39] developed PositionRank, a graph-based approach for keyphrase extraction. They built graphs in which words are nodes and edges carry weights, the weights being assigned based on how many times the words appear together.
Shakeel et al. [10] proposed a topic modeling technique for the Urdu language based upon standard LDA, called Urdu-LDA (ULDA). They utilized the Gibbs sampling technique, a Markov Chain Monte Carlo (MCMC) algorithm, for extracting topics from Urdu text documents. Rehman et al. [11] proposed a probabilistic topic model with a variational Bayes approach for Urdu documents (VB-ULDA). They described two versions of VB-ULDA, with stemmer (VB-ULDA (WS)) and without stemmer (VB-ULDA (WiS)), for topic modeling of Urdu text articles. Rehman et al. [12] proposed a nonparametric Bayesian hierarchical LDA (hLDA) model for topic modeling in Urdu text articles (uhLDA). For statistical and probabilistic inference, they used the Gibbs sampling algorithm and extracted topics based on the hierarchies of the terms used in the documents.
There are several approaches to document classification in the Urdu language. Ahmed et al. [16] proposed an SVM-based classifier for Urdu news headline classification; they used predefined classes to group similar headlines into a single class. Zia, Akhtar, and Abbas [17] presented a comparative study of different classification algorithms for Urdu document classification, analyzing the algorithms with respect to the features selected to classify text. Akhtar et al. [19] used a Single-layer Multisize Filters Convolutional Neural Network (SMFCNN) to classify Urdu documents and presented a comparison with several machine learning algorithms. Akhtar et al. [20] presented another comparison of deep learning algorithms, in which they selected four deep learning algorithms and compared their performance on Urdu document classification along with four machine learning algorithms. Rasheed, Banka, and Khan [40] proposed a feature selection approach for Urdu news article classification; they used an LSI model to extract useful features from the news articles and used SVM for the classification.
III. PROPOSED MODEL
This section provides an overview of the proposed methodology for topic prediction using position-based top-ranked keyphrase extraction. Figure 1 illustrates the architecture of the proposed model, which is divided into several steps: the first step is the preprocessing of the text data; the second step extracts the position of each word from the target document, generates keyphrases with the help of syntactic rules, and ranks these keyphrases; the third step extracts the keyphrases with the highest position ranks. To extract the most relevant and important keyphrases from an article, the proposed system re-ranks the keyphrases in step four, and in step five it extracts the top-ranked keyphrases and selects the topic for the target document.
A. TEXT PREPROCESSING
Text pre-processing is an important step in NLP tasks as it transforms text into a more digestible form and improves the performance of the algorithms. To achieve this, part-of-speech (POS) tagging is performed to allow syntactic filters, since it helps in the extraction of noun, adjective, or pronoun phrases from the text document. For this purpose, we use the well-known POS tagger for Urdu, CLE. After generating the POS tags, several steps are performed to clean the dataset, e.g., removal of invalid characters, stop word removal, and sentence segmentation, as explained in the following subsections.
1) INVALID CHARACTER REMOVAL
In pre-processing, the first step is to remove invalid symbols such as punctuation marks, links, and special characters like !, ?, @, /, #, $, %, ^, &, *, (, ), etc., from the text, as these are unnecessary elements.
2) STOP WORD REMOVAL
In the Urdu language, there is a particular kind of stop word known as Haroof-e-Jaar (postposition), analogous to prepositions in English, which appear before the object. Table 1 elaborates the difference between prepositions in English and postpositions in Urdu: in English, ''on'' comes before the object ''table'', but in Urdu the postposition meaning ''on'' comes after the object word for ''table''. Table 2 shows some Urdu Haroof-e-Jaar (postpositions). These words provide no meaningful information about the document and are therefore removed from the target documents.
Other than Haroof-e-Jaar, there are also words that are used to link two words in Urdu, known as Haroof-e-Izafat, as shown in Table 3. Haroof-e-Izafat are words which create relationships between nouns and adjectives. Table 4 illustrates examples of the use of Haroof-e-Izafat within a sentence. It can be observed that Haroof-e-Izafat add explanation that helps in understanding the meanings of the phrases. Therefore, the proposed technique removes all the stop words (Haroof-e-Jaar) but does not remove Haroof-e-Izafat, as they express information useful for the identification of topics.
3) SENTENCE SEGMENTATION
In any language, a collection of words makes sentences, and sentences create the document. Each sentence in a document contains significant information. To extract this information, the proposed system splits sentences on the basis of the dash ''-'', which has the <SM> tag in the POS tagger for Urdu. The position of each keyword is counted from the start of each sentence, because important keywords that carry significant information about the document occur at the starting positions of sentences. Figure 2 highlights the impact of keyword positioning in a sentence: although one of the keywords shown is also frequent, the other has more importance in the document due to its position in the sentence and represents a more meaningful topic.
B. WORD POSITION RANKING
After completing preprocessing, the position of each keyword in the document is extracted by the proposed technique. The proposed system aggregates information from all positions of a word's occurrences in the document. The main idea behind position ranking is to assign a higher weight (or probability) to words that occur at the start of a sentence. For example, if a word appeared at the 2nd, 5th, and 10th positions in a document, then its weight is calculated as 1/2 + 1/5 + 1/10 = 0.8. Equation 1 shows how the weight of each word w is calculated, where w_i is a particular word in the document and j is a position at which it occurs; that is, the weight of a word is the sum of the reciprocals of its occurrence positions. Summing up the positional weights for a given word grants more confidence to frequently occurring words while taking into account the position weight of each occurrence.
However, it is to be noted that the weight remains the same at every position in a document once it is calculated. For example, if the aggregated weight of the word ''Pakistan'' is calculated as 1.6 in the target document, then its aggregated weight remains the same at every position where the word ''Pakistan'' occurs.
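As a hedged illustration of this weighting (with tokenisation and stop-word filtering assumed to have been done already), the sketch below aggregates the reciprocal-position scores; whether positions reset per sentence or run over the whole document is our reading of the text.

```python
# Sketch of the positional word weighting: a word's score is the sum of the
# reciprocals of its (1-based) positions, so earlier and more frequent words
# score higher.
from collections import defaultdict

def word_position_weights(sentences):
    """sentences: list of token lists (POS-filtered, stop words removed)."""
    weights = defaultdict(float)
    for tokens in sentences:
        for position, word in enumerate(tokens, start=1):
            weights[word] += 1.0 / position
    return dict(weights)

# A word seen at positions 2, 5 and 10 gets 1/2 + 1/5 + 1/10 = 0.8,
# matching the worked example in the text.
```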
C. KEYPHRASE GENERATION
Keyphrases describe the most important topics of a document; therefore, to identify keyphrases, nouns and adjectives are considered as candidate words because they hold the key information in a sentence for the identification of a topic. Combined, these words can represent more valuable information than their independent usage. The proposed system generates keyphrases by following these syntactic rules:
R1: Each keyphrase is comprised of a given size (number of words).
R2: A keyphrase never starts or ends with a Haroof-e-Izafat.
R3: After the first noun or adjective, if a Haroof-e-Izafat is associated with it in the sentence, then it is included in the keyphrase.
Algorithm 1 highlights the process for the generation of keyphrases. The algorithm takes a POS-tagged file as input and preprocesses it, which includes invalid character removal, stop word removal, and sentence segmentation. First of all, the algorithm ensures that a keyphrase starts or ends with a noun or adjective and that Haroof-e-Izafat do not appear at the start or end of any keyphrase. Lines 10-11 check the start and end of the keyphrase and ensure that no Haroof-e-Izafat appears there (according to rule R2); if any appears at these positions, such keyphrases are discarded, because such a keyphrase does not produce any meaningful information. If the system is at the start or end of the keyphrase, then lines 12-18 generate keyphrases by joining nouns and adjectives. Lines 21-26 generate keyphrases if the system is not at the start or end of the keyphrase; here rule R3 is applied, and if any Haroof-e-Izafat appears after the first noun or adjective of the keyphrase, it is considered part of the keyphrase. The algorithm generates several keyphrases from each sentence, repeats itself for each sentence, and outputs the list of keyphrases; in the end, all extracted keyphrases start and end with a noun or adjective. The complete steps of keyphrase ranking are shown in Algorithm 2. This algorithm takes as input the list of keyphrases generated by Algorithm 1 and the keyword scores calculated in subsection B. It calculates the keyphrase rank by adding the positional weight of each word and outputs the list of ranks of all keyphrases; lines 5-7 add the score of each word in a keyphrase to generate the overall score of the keyphrase.
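A compact sketch of the ranking step described for Algorithm 2 is shown below; it assumes keyphrases are represented as token lists and reuses the positional word weights from the previous subsection.

```python
# Hedged sketch of keyphrase ranking (Algorithm 2): a keyphrase's score is the
# sum of the positional weights of its words.
def rank_keyphrases(keyphrases, word_weights):
    """keyphrases: list of token lists; word_weights: dict of word -> weight."""
    ranks = {}
    for phrase in keyphrases:
        ranks[" ".join(phrase)] = sum(word_weights.get(word, 0.0) for word in phrase)
    return ranks
```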
E. EXTRACTING HIGHER RANK KEYPHRASES AND RE-RANKING
Keyphrases are comprised of multiple words. To extract the highest-ranked keyphrases from the target document, the proposed system sorts the keyphrases on the basis of their scores, extracts the ten highest-ranked keyphrases, and further uses them for re-ranking.
After extracting the higher-ranked keyphrases, the next task is to extract the most relevant keyphrase from the document. For this purpose, we have defined a novel re-ranking mechanism which revisits the document from the keyphrase perspective and re-ranks keyphrases based upon their occurrences in the document. For example, suppose a keyphrase has an initial rank of 3.1; if, on revisiting the document, this keyphrase is identified more than once, then it carries more of the abstract information about the document. Therefore, the keyphrase is re-ranked by increasing its score based on its occurrences in the document, and this helps to extract more relevant keyphrases. The proposed model revisits the document for all top ten keyphrases. If any keyphrase appears more than once in the document, then the rank of the keyphrase is incremented by one for each occurrence. By doing this, the system obtains top-ranked keyphrases which are more accurately predicted topics of the target document. Algorithm 3 highlights the mechanism to re-rank keyphrases: if a keyphrase appears more than once in the document, then for each occurrence its score is incremented by one.
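A minimal sketch of this re-ranking step is given below; the exact counting convention (one increment per occurrence when a keyphrase appears more than once) is our reading of the text, and simple substring matching stands in for the revisiting step.

```python
# Sketch of the re-ranking step: keep the ten highest-scoring keyphrases, then
# revisit the document and bump each keyphrase's score by one per occurrence
# whenever it appears more than once.
def rerank_keyphrases(ranks, document_text, top_n=10):
    top = sorted(ranks.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    reranked = {}
    for phrase, score in top:
        occurrences = document_text.count(phrase)
        reranked[phrase] = score + (occurrences if occurrences > 1 else 0)
    # The highest-ranked keyphrase after re-ranking is taken as the document topic.
    return sorted(reranked.items(), key=lambda kv: kv[1], reverse=True)
```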
IV. EXPERIMENTAL EVALUATION
This section explains the experimental evaluation of the proposed model and elaborates our results, including a comparison with related techniques. Evaluation measures are vital for assessing the performance of the model, and almost all evaluation measures depend on the nature of the data. To assess the performance of the proposed system, a number of experiments were performed on a variety of datasets. We chose two datasets. The first dataset (D1) was prepared by us and contains 640 documents collected from different websites and news sources, e.g., express.pk/, bbc.com/urdu/, urdupoint.com/ and urdu.geo.tv. It contains documents from five different domains: politics, sports, entertainment, health, and economy. Each domain consists of more than 120 different documents, and each document contains several sentences. Table 5 presents a detailed overview of dataset D1. The second dataset (D2), Northwestern Polytechnical University Urdu (NPUU), was prepared by [19] and contains more than 10,000 documents from six different domains: business, crime, entertainment, politics, science and technology, and sports. Table 6 presents a detailed overview of this dataset. The D2 dataset contains documents collected from news websites and annotated manually by human annotators. Tables 5 and 6 present the document domains along with the total number of documents in each domain, the percentage of each domain in the dataset, and the total number of words in all documents in the domain, for the D1 and D2 datasets respectively.
B. RESULT ANALYSIS
To evaluate the proposed system, the documents from both datasets D1 and D2 were processed multiple times to extract the top-ranked keyphrases as topics for the documents. The accuracy of the proposed system is measured by the following equation.
Accuracy = (no. of correctly predicted documents / total no. of documents) × 100 (2)
The topics predicted by the system are evaluated for TOP-Rank (the proposed model), position rank, and TF-IDF predictions at different window sizes (keyphrase lengths).
To check the correctness of the proposed system, a systematic procedure marked a predicted topic as true if it fulfilled the condition that the topical keyphrase was at the top of the ten predicted keyphrases; this was applied for all three mechanisms, TOP-Rank, position rank, and TF-IDF.
For result analysis, keyphrases of different sizes are selected: unigram (k = 1), bigram (k = 2), trigram (k = 3), and n-gram (k = 4 and k = 5). A unigram contains a single word, e.g., ''Pakistan''; a bigram contains two words, e.g., ''Prime Minister Pakistan''; a trigram contains three words, e.g., ''Prime Minister of Pakistan''; and keyphrases containing four or five words are considered n-grams, e.g., ''Prime Minister of Pakistan Imran'' and ''Prime Minister of Pakistan Imran Khan'', respectively. Keyphrases with different window sizes (k) are compared with the highest positional rank keyphrases and with TF-IDF. The averaged keyphrase relevance with the document is given in Tables 7 and 8 for each window size, i.e., unigram, bigram, trigram, and n-gram, for datasets D1 and D2 respectively. Each document contains on average forty originally assigned keyphrases, but the system extracts only the ten highest-ranked phrases. Tables 7 and 8 present the average top-ranked keyphrase relevance with the target document for all classes at each window size over datasets D1 and D2 respectively. It can be noted that TF-IDF performs better in some cases for unigrams because it extracts single keywords. However, its accuracy starts decreasing when the size of the keyphrases increases, and it produces more inaccurate results compared with position rank and TOP-Rank keyphrases. The main reason is that it extracts keyphrases based only on the frequency of the words, ignoring their positional influence in the document. Position rank produces lower accuracy than TOP-Rank because it uses only the positional weights of the words in keyphrases; however, in some cases positional rank keyphrases performed better than TOP-Rank at certain window sizes in certain classes due to a limitation of the re-ranking mechanism: in TOP-Rank, re-ranking in some cases caused irrelevant keyphrases to become top-ranked, which counts as a falsely predicted topic for the TOP-Rank mechanism. In the overall evaluation, however, TOP-Rank outperformed both positional rank keyphrases and TF-IDF.
We can observe from Tables 7 and 8 that for unigrams TOP-Rank, position rank, and TF-IDF produce similar results, but overall TF-IDF performs best at the unigram level. Position rank performs better at the trigram level for politics, health, and entertainment in D1, and similarly it performs better for science and technology and entertainment in NPUU. As the window size for keyphrase prediction increases, TOP-Rank clearly performs better on both datasets and produces more accurate and meaningful topics. Moreover, more accurate results were found for trigrams and n-grams because, when the window size is increased, keyphrases provide more accurate information about the document, whereas single keywords, i.e., unigrams, cannot deliver accurate information about the document. Further, it is evident that TOP-Rank performs better than position rank and TF-IDF, producing more accurate topics for the documents in both datasets. Figures 4 and 5 present an overall comparison of the proposed TOP-Rank model with position rank and TF-IDF for the D1 and D2 datasets respectively. It is clear from the figures that TF-IDF has the highest accuracy for unigrams; however, position rank and TOP-Rank also produce comparable performance. On the other hand, as the length of the keyphrases increases, our proposed TOP-Rank model shows the highest accuracy.
These figures clearly depict that for unigrams all three approaches yield similar results, but as the window size increases to trigrams and n-grams, TOP-Rank outperforms the other two and extracts more accurate results. From the figures it is evident that TOP-Rank outperformed both position rank and TF-IDF and produced better results, which establishes the effectiveness of the re-ranking mechanism adopted in the TOP-Rank model.
V. CONCLUSION
Topic modeling is a key method in machine learning and NLP for extracting significant information from a document. This information helps to identify important topics in the document, which can further be utilized for topic prediction. Topic modeling has been widely explored in different languages; however, there is limited work on topic modeling for Urdu documents. In this research we have introduced a framework to extract meaningful topics from Urdu documents using TOP-Rank keyphrase extraction based on positional weights. No existing technique addresses keyphrase-based topic extraction for the Urdu language, and re-ranking of keyphrases has not been applied before. The framework first extracts the positions of each keyword in the target document and ranks the keywords according to their positions. After assigning positional weights to each keyword in the document, keyphrases of different sizes are generated by applying several syntactic rules. These generated keyphrases are then ranked according to the scores of their keywords, and the higher-ranked keyphrases are extracted. For the extraction of more relevant keyphrases, a re-ranking of the keyphrases is introduced which extracts the top-ranked keyphrases as potential topics of the target document, and the one with the highest score is selected as the topic of the document. We have conducted experiments on two Urdu-language datasets which contain multiple documents from several different domains. Our framework produces better results and generates more meaningful topics for the Urdu language compared with existing techniques; it is capable of extracting more accurate and meaningful topics and outperformed the existing approaches. Our methodology has some limitations, such as the efficiency and accuracy of the POS tagger. In future work, the proposed approach will be enhanced by using graph-based ranking of the keyphrases and by improving the results of the POS tagger for the Urdu language.
NATASH ALI MIAN received the M.C.S. degree from the University of Lahore, the M.S.(CS) degree from SZABIST, Islamabad, and the Ph.D. degree from NCBA&E, Lahore. He is currently working as an Assistant Professor with the School of Computer and Information Technology (SCIT), Beaconhouse National University. He specializes in software engineering with special interest in requirement engineering, self-adaptive systems, Internet of Things, cloud computing, formal methods, and reverse engineering and databases.
MUHAMMAD WASEEM IQBAL received the Ph.D. degree from The Superior College (University Campus) Lahore. He is currently working as the Head of Software Engineering Department, The Superior College (University Campus). He specializes in human computer interaction (HCI), with special interest in adaptive interfaces for mobile devices, usability evaluation of mobile devices for normal, and visually impaired people and user context ontological modeling.
ABBAS KHALID received the master's degree in computer science from the University of Central Punjab, Lahore, Pakistan, and the Ph.D. degree from Lancaster University, U.K. He is currently an Assistant Professor with the University of Lahore, Lahore. He has over 15 years of experience in academics and research. His research interests include communication systems, the Internet of Things, and robotics.
TAHIR ALYAS (Member, IEEE) received the master's degree in computer science and the Ph.D. degree from the School of Computer Science, NCBA&E, Lahore, Pakistan. He is currently working as the Head of the Department of Computer Science, Lahore Garrison University, Lahore. His research interests include cloud computing, fog computing, Hyper-convergence, the IoT, and intelligent age. He is also Oracle certified in Cloud Infrastructure Architect, Associate, Professional, and Oracle Autonomous Database Cloud 2019 Specialist.
MOHAMMAD TUBISHAT received the B.Sc. degree in computer science and the M.Sc. degree in computer and information sciences from Yarmouk University, Jordan, in 2002 and 2004, respectively, and the Ph.D. degree in computer science (artificial intelligence-natural language processing) from the University of Malaya, Malaysia, in 2019. He is currently working as a Lecturer with the Asia Pacific University of Technology and Innovation, Kuala Lumpur, Malaysia. His research interests include natural language processing, data mining, artificial intelligence, machine learning, optimization algorithms, data science, and sentiment analysis. | 7,816.2 | 2020-01-01T00:00:00.000 | [ "Computer Science" ] |
A Thought Experiment On Gravity Based On Falling Objects: Investigation Of Science Teachers’ Thinking Process
This study aims to analyse the thought processes of science teachers who are master's students in science education, using a thought experiment on gravity based on falling objects. The phenomenological approach, one of the qualitative research methodologies, was used to achieve this aim. Purposive sampling was used to investigate eight science teachers continuing their master's degrees. Data were collected through interviews and a thought experiment on gravity based on falling objects. The teachers participated in face-to-face problem-solving, think-aloud, and retrospective questioning sessions. The results reveal that the teachers mostly showed secondary effects such as establishing a new relationship between quantities, carried out thought experiments in order to predict, and preferred scientific concepts and hypothetical simulations as sources of thinking. Likewise, spatial reasoning-symmetry-compound simulation and experience were preferred equally often but less frequently. The results also show that the science teachers had strong self-efficacy judgments, a mastery of the curriculum, an unpleasant attitude when dealing with difficulties, and hypothetical thinking skills.
Introduction
Physics helps us understand the universe and how physical phenomena take place. Physics not only helps us understand the universe but also supports technological growth by emulating nature (Özel, 2004). The importance of knowing and teaching the principles of physics at both the secondary and high school levels cannot be overstated. As a result, science literacy and science education are becoming more and more important. School-based science education appears to play a significant role in addressing such a common problem as a lack of interest in science (Raes et al., 2014). To adapt quickly to life and achieve success, students must understand the world of science and how to benefit from it. In the learning-teaching process, the teacher adopts the position of an individual who studies, questions, explains, discusses, and transforms the information source into a product, while encouraging and directing at the same time (MoNE, 2018). The way people understand and interpret a topic or a situation differs across periods and approaches. Thinking is a collective activity in which everyone takes part. Predictions, or a mixture of information, arise in the mind as the knowledge we draw on in this activity. The images we see around us, or the visualization exercises we do in our minds, represent our mental processes. Looking back, various methodologies and approaches have been used to interpret thought processes.
Review Of Thought Experiments
The idea of thought experiments was first suggested by the Danish physicist Hans Christian Örsted in the 19th century. Örsted explained the impact of thought experiments on hypotheses and conjectures; however, he neither discussed nor examined a historical thought experiment (Witt-Hansen, 1976). Ernst Mach is generally acknowledged to be the first scientist to use thought experiments in the literature. Mach created the first systematic explanation of the thought experiment notion and highlighted the evolution of thought experiments and their significance for the development of the mind (Gendler, 1998). The use of thought experiments is debatable from a philosophical and scientific perspective. There is no precise definition of the idea of thought experiments in the literature; for this reason, thought experiments have several meanings and explanations. Sorensen (1992) and Wilkes (1988) regarded thought experiments as a source of scientific information. Thought experiments with open-ended consequences are used to stimulate thinking (Bunzl, 1996). According to James Robert Brown (1991), thought experiments are difficult to define. Thought experiments are mental exercises that can be imagined; they are based solely on hypotheses, and no conclusions can be drawn from calculations alone. However, upon closer inspection, it becomes clear that they are thought experiments. According to Gilbert and Reiner (2000), thought experiments are complementary to genuine experiments, and the two have a lot in common. The student must take some sort of active role in his or her own learning process to achieve permanent learning. Kuhn (1963) asserts that the basic strategy for teaching science and giving it meaning is to get rid of all unnecessary details. Scientific findings that are independent of context, or issues with theories and laws, suggest that these should be taken into consideration. As was previously established, no scientific material is used in thought experiments; it is the process of mentally understanding scientific knowledge. Thought experiments, also known as the laboratory of the mind, are used to explain and analyse thinking processes, a relatively new technique in the interpretation of scientific thought (Acar & Gürel, 2016; Gelen et al., 2017; Gilbert & Reiner, 2000).
The Current Study
Gravity is one of the main topics of science and physics education. It is a common occurrence in daily life and is suitable for multidisciplinary applications. Before starting formal education, people can see the sky with their naked eyes and continue to live with second-hand information about events occurring in the sky, adding meaning to them through their own interpretations. Unfortunately, misunderstandings arise from notions that are regularly encountered in daily life and are extremely difficult to correct (Yürük et al., 2000). If teachers, who are crucial actors in education, hold such errors, this information, contrary to scientific facts, is transferred from generation to generation and persists in this way. In a study of Israeli children aged 9-17 years, Bar et al. (1997) discovered that children frequently view the source of gravity as a magnetic force that requires a medium-air-to be carried from the ground to the object. According to Watts (1982), gravity is "selective" for 12-year-old British children because it does not apply to bodies at rest or objects thrown into the air. Palmer (2001) found that 11- to 16-year-old Australian students believe that gravity is a phenomenon that occurs exclusively on the Earth. Vosniadou (1994) observed that Greek children do not regard the Earth as a planet until the end of primary school, but rather as a physical entity with its own laws, and Baldy and Aubert (2005) found that this differentiation persists among 15-year-old students. When traditional teaching methods are applied, students' concepts are resistant and change little with age.
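As a point of reference for the misconceptions summarized above, the standard Newtonian treatment of a falling body can be stated in two limiting cases. This is an illustrative aside rather than part of the study's data, and the symbols (h for drop height, ρ for air density, C_d for the drag coefficient, A for the cross-sectional area) are introduced here only for the example.

```latex
% Free fall in vacuum: the acceleration is independent of mass,
% so two bodies released together land at the same time.
\[
m\,\dot{v} = m g
\quad\Rightarrow\quad
t_{\mathrm{fall}} = \sqrt{\tfrac{2h}{g}}
\qquad (\text{no dependence on } m)
\]
% Fall with quadratic air drag (v measured downward): the terminal
% speed does depend on mass and cross-sectional area.
\[
m\,\dot{v} = m g - \tfrac{1}{2}\,\rho\, C_d\, A\, v^{2}
\quad\Rightarrow\quad
v_{t} = \sqrt{\tfrac{2 m g}{\rho\, C_d\, A}}
\]
```

For two geometrically similar bodies of the same density, m grows as r^3 while A grows as r^2, so v_t grows as the square root of r: once air resistance matters, the larger body reaches the higher terminal speed, whereas in vacuum both land together.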
In this context, this study aims to evaluate the thinking processes of science teachers who receive postgraduate education to advance their professional development, in relation to a thought experiment devised for gravity based on falling objects.
Research Questions
Based on the aim stated above, the research questions of this study were determined as follows:
RQ1. What are the secondary effects on the thinking processes of science teachers who are master's students in science education and who conduct a thought experiment designed to explain gravity based on falling objects?
RQ2. What are the aims of science teachers who are master's students in science education when conducting a thought experiment designed to explain gravity based on falling objects?
RQ3. What are the sources of thought of science teachers who are master's students in science education while conducting a thought experiment designed to explain gravity based on falling objects?
Research Model
In the phenomenology approach, participants have first-hand knowledge of the phenomenon being studied in all of its aspects (Creswell, 2012). According to Nitsche (2020), phenomenology as a research approach has become more popular in education recently. The field of education commonly uses two types of phenomenological approaches: descriptive and interpretive. In this study, the descriptive phenomenology approach, one of the qualitative research methodologies, was preferred.
Research Participants
Two criteria were established in accordance with the purpose of the research: being a science teacher and continuing postgraduate education after graduation. In line with the purposeful sampling method, the participants consisted of eight volunteer teachers who were actively teaching and continuing their postgraduate education. Table 1 shows the demographic characteristics of the participants.
Thought Experiment
The thought experiment on gravity which was used in this study is illustrated in Figure 1.
Data Collection
In this study, face-to-face problem-solving sessions, thinking aloud, and retrospective questioning methods were conducted with the teachers to analyse their thinking processes regarding the thought experiment on gravity based on falling objects. Within the scope of the research, a thought experiment was created for the unit 'gravity based on falling objects' in the 7th-grade Turkish Science Curriculum. The experiment addresses the learning outcome 'S/he discovers that gravity can be explained through falling objects.' Data were collected from the teachers through interviews on this thought experiment.
Data Analysis
The researchers carried out the transcription immediately after each problem-solving session to avoid missing words and phrases. After transcription, the entire material obtained was read several times to gain familiarity. A research diary was kept during the data-collection phase. The material recorded in the research diary helped the researchers probe the participants' responses more deeply. A diary was also kept while becoming familiar with the data, and the data were explored and analysed with the help of the information in the diary. The coding was done by printing out the transcribed data and marking it with coloured pencils. Separate coding was done for each participant's transcript, and notes were collected in the diary while codes were selected according to how often they were used, by comparing them with the codes of the other participants. The codes were examined regularly, and coding continued until no new codes could be established. The categorization process was carried out from induction to deduction after the coding phase was completed; category creation was considered complete when the categories reached saturation. The data were analysed under three main topics to answer the research questions: the sources of thought, the purposes for which those sources were used, and how the participants' thoughts were affected in line with those purposes. All coding and analyses made throughout the research were reviewed by a researcher who is an expert in this field. Figure 2 illustrates the steps of this phenomenological investigation.
Findings Of The Study
Within the scope of the study, there are three main problems addressed in this part of the research. The results are presented using a combination of deductive and inductive methodologies, and the data are presented in Table 2.
Secondary Effects of Science Teachers on The Gravity Thought Experiment
The secondary effect enables us to instantly comprehend the participants' thought patterns or methods of thinking after they have completed the thought experiment. The participant can build a consistent link between the past knowledge and the thought experiment, engage in conflict, or create a new schema by using thought experiments. These findings were categorized into three groups.
Establishing a New Coherence Relationship
The participant establishes coherence between their prior knowledge and the thought experiment while carrying it out. When a participant reads the thought experiment, s/he establishes consistency by applying prior knowledge to solve the difficulty that arises.
P4: …Both will be subjected to the same gravitational forces exerted by the Earth. As a result, they both arrive in the world at the same time. Or, in the case of a falling object, does the heavier one fall faster? However, because it is in space, there is no friction force. The smaller item falls faster after entering the Earth's atmosphere because it exerts more frictional force on the bigger object and less on the smaller object. If we include friction, we can see that the smaller meteor would drop more quickly. However, whether we consider an item in celestial bodies or on the Earth, we may believe that if we drop two huge bottles of water, one 5 litres and one 1 litre, from the fifth floor, we think as if the heavier one will fall faster. However, when the friction of the environment is taken into account, the smaller one would fall quicker because the smaller atmosphere will not block it, but the larger one will. Then I choose tiny meteor.
Establishing a New Conflict Relationship
It is a circumstance in which the participants notice a discrepancy between their prior knowledge and the thoughts they present during the thought experiment.
P1: If we assume that the force of the Earth would remain the same, the smaller one should become a meteorite in a shorter time… Right now, I'm considering abandoning my first response. In reality, there exist gravitational forces between the masses… Hmm, is it different when it comes to space? Stuffy atmosphere… The smaller one is quicker, but it asks for our time rather than our speed. Would it have been different if the question had been about speed? The result would be different because kinetic energy increases as mass increases. However, when it comes to duration, I believe the smaller meteor will become a meteorite in a shorter time. But what if you're a meteorite? Their scenario prior to entering the environment, perhaps their initial scenario, as I previously stated. However, I can predict that the speed of the smaller meteor will drop after it enters the atmosphere since it will be subjected to less air resistance.
Activating a New Schema or Schemas
It is the visualization of new schema or schemas in the mind of the participant doing thought experiments based on prior knowledge. In other words, while looking for a solution to the problem, s/he associates some concepts related to and not related to the problem and produces new interpretations.
P2: They reached the atmosphere and began to burn since one is bigger than the other. Because it is larger around it, the larger one must burn more due to friction. Exactly, there should be more friction in the atmosphere, not combustion. Hmmm, that the effect of combustion is accelerated by friction. I'm thinking… It needs to burn when it hits the atmosphere regardless, or at least some of it does at first, but not all of it burns, and the rest falls as meteorites. I think about the moment when it burned in the atmosphere. Is it accelerating? If I assume there is no acceleration, the one with higher mass should fall faster and arrive sooner. But if I look at the friction, the larger mass must be subjected to higher friction because the area is bigger. However, gravity forces the greater mass to drop first. When I consider it, the one with the greater bulk falls first. I discover two solutions here if I start from this point. If they have the same mass, and one has a greater area, we may assume the same. The bigger one will most certainly fall sooner, or if we consider that it accelerates more, the one with the greater mass will descend faster.
Thinking Process of Science Teachers on The Gravity Thought Experiment
Researchers can understand why people do the thought experiment by looking at their thinking purposes. The results describing the participants' purposes for participating in the thought experiment are categorized into three groups, and the data are presented in Table 3.
Prediction
While offering solutions to the challenges, the participants attempt to provide viable solutions to circumstances that they have never experienced before or for which they have not made any remarks, even if they have.
P3: … I believe it to be the larger one, but I'm not certain. Because the big one has lots of energy, I reasoned. The only response I can provide is that because kinetic energy is dependent on both mass and velocity, but is larger in one, it will be prone to more friction. Its speed will also slow down. I couldn't offer a whole healthy response because I'm now confused. Therefore, one of them has more bulk, and the other has less.
Conviction
In the context of scientific knowledge, conviction is the participant's supporting of their answer to the thought experiment with a formula or a law, or presenting it within the framework of specific norms.
P6:
The one has a huge mass since the gravitational force is proportional to the mass, i.e., the larger the mass, the higher the gravitational force. As a result, the bigger one has more mass, i.e., the bigger one has a mass. Because greater gravity is given to a larger mass, it arrives more quickly. As a result, it arrives faster because it will be subjected to increased gravitational force.
Explanation
Explanation is when the participants use an example to communicate their ideas about the thought experiment.
P8: … When you observe them side by side, you can tell that the speed of one meteorite is different from the speed of the other. Since that implies, they didn't become side by side before, and if their speeds are the same from the start, which I don't believe they are, the mass will not influence the speed with which they approach the earth because there is no air there. There is no friction because there is no air. Because there is no friction, both are affected by the same gravitational force. We were doing this in the experimental environment as follows. When we left a little mass and a large mass in a closed container, their fall speed and duration were the same, but when we left them in an atmosphere containing air, things changed. Of course, because there isn't any air here, which one will turn into a meteorite first? But, as I already stated, they weren't initially adjacent to each other when we saw them side by side. Someone is almost certainly faster than the other… Let's pretend the large asteroid is moving quickly. If the huge meteor is moving quickly, it suggests the smaller meteor is moving quickly as well. And the huge meteor will very certainly surpass the little one. As a result, it will arrive sooner. On the contrary, if the tiny meteor is quick, the small meteor will arrive first and crash. As a result, I don't believe there is a clear answer to this question.
Table 3
Thinking Process of Science Teachers on Gravity Thought Experiment.
Thinking Sources Used by Science Teachers on The Gravity Thought Experiment
This section presents the findings of the thinking sources employed by the thought experiment participants under six subheadings and data is presented in Table 4.
Spatial Reasoning
It is the ability of the person to produce a solution more easily by changing the existing circumstance according to herself/himself.
P2: It's been a while since I've sat on the earth, and we're doing so right now, but I couldn't visualize it… It needs to burn when it hits the atmosphere regardless, or at least some of it does at first, but not all of it burns, and the rest falls as meteorites. I think about the moment when it burned in the atmosphere. Is it accelerating? If I assume there is no acceleration, the one with higher mass should fall faster and arrive sooner. But if I look at the friction, because the area is bigger, the larger mass must be subjected to higher friction. However, gravity forces the greater mass to drop first.
Symmetry
It is the formation of an opinion following the norms of nature and the participant's perception of the situation in the problem.
P8: There is no friction because there is no air. Because there is no friction, both are affected by the same gravitational force. We were doing this in the experimental environment as follows. When we left a little mass and a large mass in a closed container, their fall speed and duration were the same, but when we left them in an atmosphere containing air, things changed. Of course, because there isn't any air here, which one will turn into a meteorite first? But, as I already stated, they weren't initially adjacent to each other when we saw them side by side. Someone is almost certainly faster than the other… Let's pretend the large asteroid is moving quickly. If the huge meteor is moving quickly, it suggests the smaller meteor is moving quickly as well. And the huge meteor will very certainly surpass the little one. As a result, it will arrive sooner. On the contrary, if the tiny meteor is quick, the small meteor will arrive first and crash. As a result, I don't believe there is a clear answer to this question.
Compound Simulation
While dealing with the problem scenario at hand, the participant turns to various circumstances and situations that would not arise in reality.
P4: …Both will be subjected to the same gravitational forces exerted by the earth. As a result, they both arrive in the world at the same moment. Or, in the case of a falling object, does the heavier one fall faster? However, because it is in space, there is no friction force. The smaller item falls faster after entering the earth's atmosphere because it exerts more frictional force on the bigger object and less on the smaller object. If we include friction, we can see that the smaller meteor would drop quicker… However, when the friction of the environment is taken into account, the smaller one would fall quicker because the smaller atmosphere will not block it, but the larger one will. Then I chose a tiny meteor.
Experience
It is the participant's use of their own experiences as a source in the thought experiment since they have already faced the problem or have experienced a circumstance comparable to the one in the thought experiment.
P1: … There was a paper experiment, for example, when we put the normal A4 paper on the ground, and at the same time we dropped the crumpled paper from the same height, the one with the smaller surface area would fall faster. Meteorites are subjected to the Earth's gravity field. The force exerted by the earth will be the same. According to this reasoning, the smaller one should become a meteorite in a shorter time.
Hypothetical Simulation
Because thought experiments are based on real-life scenarios, the participant has instinctively experienced the circumstance previously but presents it without realizing it.
P7: … Which would come first, the extremely huge or a rather smaller one? I believe the smaller one would have come sooner. If you're wondering the reason, it's because it's smaller in mass, or because it appeared to move more quickly. The smaller one appeared to be approaching me quicker, but the large meteorite fragmentation and other factors were blocking it.
Scientific Concepts
It is the participant's answer to the issue scenario in the thought experiment by explaining the ideas through an acquisition previously taught to the students in the curriculum, an experiment done or seen, or by employing the analogy approach.
P6: …The one has a huge mass since the gravitational force is proportional to the mass, i.e., the larger the mass, the higher the gravitational force. As a result, the bigger one has more mass, i.e., the bigger one has a mass. Because greater gravity is given to a larger mass, it arrives more quickly. As a result, it arrives faster because it will be subjected to increased gravitational force.
Table 4 Thinking Sources Used by Science Teachers on Gravity Thought Experiment.
Table 5
The Scope of The Topic Being Tested and Its Contribution to The Thought Process.
Discussion Of Results
Below is a discussion of the above-mentioned findings. A combination of mental actions is called imagination. The term "thought experiment" refers to mental experimentation. Thinking while imagining is a cognitive activity that leads to outcomes dependent on thought processes. Examining the mental processes of teachers is seen as beneficial in terms of education. The subject of "gravity based on falling objects", which is the subject of the research, is included in international science teaching programs. For example, according to a study on 15-year-old students' perceptions of falling bodies (Baldy & Aubert, 2005), students at this age use a variety of explanatory systems to explain the phenomenon, depending on where it happens. The idea that objects fall due to gravity is applied only to events that happen on the Earth; since there is no atmosphere on the Moon or in space, objects are believed to float because they are in a vacuum. According to Galili (2001), the "too-complex" view of current physics "deprives us of a golden chance" to assist students in gaining a greater comprehension of the ideas of gravity, which, according to Vosniadou (1994) and Bar et al. (1997), ninth graders do not comprehend. Einstein's theory provides a geometric explanation for objects falling independently of gravity: students should be able to see that bodies "simply" have the effect of "deforming" the space-time that surrounds them and that this deformation affects their course as they pass near one another.
The study aims to examine the thinking processes of science teachers who continue their graduate education in the field of science education when they conduct thought experiments designed to explain gravity based on falling objects. Table 1 shows that the information held by the participants and the information in the problem are generally consistent. The teacher's subject knowledge (Johnson & Cotterman, 2015), conceptual background, and pedagogical abilities associated with innovations (Avidov-Ungar & Forkosh-Baruch, 2018; Zhu et al., 2013) contribute to the teacher's command of the curriculum. Since the thought experiment used in the study was aimed at a learning outcome in the Turkish Science Curriculum, the participants, both as teachers continuing their graduate education in the relevant field and as on-the-job teachers, may have easily interpreted the solution suggestions for the problems and established a relationship between them. As shown in Table 2, establishing a new conflict relationship and activating a new schema or schemas took place much less frequently than establishing a new coherence relationship. According to Daniel (2016), problem-solving is the process of using mental and physical talents to solve a problem. Several factors have an impact on the participant's ability to solve the problem; emotional condition is one of them. If the participant is in a tense or anxious mood during the problem-solving session, s/he may be led to create a conflict between the knowledge s/he already has and the information in the problem. When activating new schemas, the participant turns to other scenarios related to the current situation in the problem, instead of developing a new conflict relationship or a new coherence relationship (Clement, 2008). The person may have encountered the problem in daily life, but s/he may have tried to discover the reasons and interpreted it differently because s/he did not pay attention to it. The process of thinking is used to reach a conclusion in any scenario. The study aims to investigate the thinking processes of science teachers in the process of conducting a thought experiment designed to explain gravity based on falling objects.
When Table 3 is examined, it is seen that the highest frequency belongs to prediction. Internal information systems in long-term memory are triggered when people begin to think about and make predictions about a system (Clement, 2008). The participants' professional experience ranged from one to thirteen years. According to Bağçeci and Kinay (2013), teachers with five years or less of professional experience act more hastily than teachers with more than twenty years, while those with more than twenty years have more self-confidence. Since the participants are teachers continuing their education, the possibility of their responses being incorrect may have worried them and made them guess instead of solving the problem completely. It is understood from Table 3 that solving by presenting evidence, which is one of the aims of thought experiments, ranks second among the participants. Using a law, a scientific rule or a formula provides evidence while creating answers to the thought experiment. The major purpose of the Turkish education system is to raise individuals who are equipped with the knowledge, abilities and behaviours that are part of its competencies. The eight competencies that science teachers have are the most significant elements in helping students gain the eight basic skills specified in the Turkish Qualifications Framework (TQF). The participants came up with answers to the problems by offering evidence. This circumstance demonstrates that the participants are fully aware of the fundamental competency in science and technology, which is one of the eight competencies, as well as the knowledge and abilities in the three dimensions outlined in item 43 of the Turkish National Education Basic Law (SPO, 2000). When Table 3 is examined further, it is seen that explanation has the lowest frequency: only one participant conducted the thought experiment in order to explain. According to Clement (2008), performing thought experiments for the aim of explanation and arguing about the circumstance is merely a means of offering comparable or different instances of the scenario, without the objective of providing evidence. This, however, suggests that the general cultural knowledge described in item 43 of the TNEBL is not at a sufficient level. The fact that the examples given are related to daily life can be interpreted as the ability to learn, one of the eight key competencies of the TQF, through natural events. Our personal differences emerge when we unconsciously use our thought resources while executing the thinking process. According to the data in Table 4, the participants generally chose scientific concepts and hypothetical simulation as sources of thought. According to the literature, people should not only think logically and mathematically but also process their views through an emotional filter (Damasio, 2006). The fact that participants prefer scientific concepts as a source of cognition may be an indicator of their hypothetical thinking abilities. Experiments and intuitions, considered mental activities, are combined to gather information (Bergson, 2013). Since the participants tend to think more scientifically than intuitively in science teaching, hypothetical simulation was used less as a source of thought. This shows that teachers' self-efficacy perceptions are high. Table 4 shows that experience, compound simulation, symmetry, and spatial reasoning resources are all used in equal amounts. While the right hemisphere of the brain benefits from data that are present, the left hemisphere generates data that are not, based on speculation or inference; in other words, the left hemisphere of the brain constantly generates hypotheses by continually inferring broad meanings (Boydak, 2017). The purpose of reflective thinking is to reveal acquired implicit knowledge. Participants who use their experiences as a resource in the thinking process can be said to think reflectively, because these experiences are used as a resource without awareness. Analytical thinking principles encourage considering different possibilities before focusing on the best of these options (Nuroso et al., 2018). According to Tian et al. (2014), analytical thinking is the capacity to know the details or break down an issue into smaller components and grasp the interrelationships between them. As a result, it can be thought that people who use compound simulation as a source of cognition exhibit analytical and integrative thinking. If the scenario of the problem is too complex for the participant, s/he will try to solve it by making spatial changes and making the problem more comfortable and easier to solve (Lindsay, 1988), because the problem-posing skill is related to creative thinking ability (Contreras, 2013; Puspitasari et al., 2018; Van Harpen & Sriraman, 2013; Wulandari et al., 2018). As a result, people who use spatial reasoning as a source of thought can think creatively. It can be said that the participants who use symmetry as a source of ideas can think vertically. According to Frank (2013), vertical thinking is an analytical, sequential, and limited process. It uses the negative to avoid certainty, forces irrelevant information to be excluded, and always chooses the most likely path.
Conclusions
The results obtained in response to the research questions on the gravity thought experiment are given in the order in which they were received.
Secondary Effects of Science Teachers
When science teachers conducted the thought experiment based on falling objects, three types of secondary effects arose in their thinking processes: establishing a new coherence relationship, establishing a new conflict relationship, and activating a new schema or schemas. It was determined that the participants had a grasp of the science curriculum, field expertise, and conceptual infrastructure, since they established new coherence relationships. The establishment of a new conflict relationship was attributed to the participants' uncomfortable or anxious moods. It was observed that the participants made use of their surroundings as they activated new schemas.
Thinking Process of Science Teachers
It was found that the science teachers most frequently conducted the thought experiment in order to make predictions. According to the findings, the participants were in an uneasy mood when they made predictions about the solution of the thought experiment; they displayed the basic competency in science and technology by presenting conviction; and, through their explanations, they demonstrated the TQF ability to learn by making use of natural events.
Thinking Sources Used by Science Teachers
According to the findings, although the participants relied on scientific concepts and hypothetical simulation the most, they turned to different thinking sources according to the difficulty they experienced. They chose spatial reasoning, symmetry, compound simulation, and experience as sources equally often. It was observed that the participants' hypothetical thinking skills and self-efficacy perceptions were high. It was also observed that they showed reflective thinking when they used their experiences, analytical and integrative thinking when they used compound simulation, creative thinking when they used spatial reasoning, and vertical thinking when they used symmetry.
Finally, physics courses are challenging in every country, including ours (Faisal & Martin, 2019). Thought experiments are used to explain the results of physical theories and to bridge abstract concepts (Uyar & Karamustafaoğlu, 2022; Velentzas & Halkia, 2013).
Figure 1. The Thought Experiment Designed for Gravity Based on Celestial Bodies.
Figure 2. The Steps of This Phenomenological Investigation.
Table 2 Secondary Effects of Science Teachers on Gravity Thought Experiment.
(1-Establishing a New Coherence Relationship, 2-Establishing a New Conflict Relationship, 3-Activating a New Schema or Schemas, P-Participant)
| 7,526.4 | 2023-11-01T00:00:00.000 | ["Education", "Physics"] |
Gallium-Protoporphyrin IX Inhibits Pseudomonas aeruginosa Growth by Targeting Cytochromes
Pseudomonas aeruginosa is a challenging pathogen due to both innate and acquired resistance to antibiotics. It is capable of causing a variety of infections, including chronic lung infection in cystic fibrosis (CF) patients. Given the importance of iron in bacterial physiology and pathogenicity, iron-uptake and metabolism have become attractive targets for the development of new antibacterial compounds. P. aeruginosa can acquire iron from a variety of sources to fulfill its nutritional requirements both in the environment and in the infected host. The adaptation of P. aeruginosa to heme iron acquisition in the CF lung makes heme utilization pathways a promising target for the development of new anti-Pseudomonas drugs. Gallium [Ga(III)] is an iron mimetic metal which inhibits P. aeruginosa growth by interfering with iron-dependent metabolism. The Ga(III) complex of the heme precursor protoporphyrin IX (GaPPIX) showed enhanced antibacterial activity against several bacterial species, although no inhibitory effect has been reported on P. aeruginosa. Here, we demonstrate that GaPPIX is indeed capable of inhibiting the growth of clinical P. aeruginosa strains under iron-deplete conditions, as those encountered by bacteria during infection, and that GaPPIX inhibition is reversed by iron. Using P. aeruginosa PAO1 as model organism, we show that GaPPIX enters cells through both the heme-uptake systems has and phu, primarily via the PhuR receptor which plays a crucial role in P. aeruginosa adaptation to the CF lung. We also demonstrate that intracellular GaPPIX inhibits the aerobic growth of P. aeruginosa by targeting cytochromes, thus interfering with cellular respiration.
INTRODUCTION
Pseudomonas aeruginosa is a challenging bacterial pathogen due to both innate and acquired resistance to several antibiotics (Moore and Flaws, 2011). This bacterium is capable of causing a variety of infections, including chronic lung infection, which represents the main cause of morbidity and mortality in patients suffering from cystic fibrosis (CF) (Murphy, 2006; Davies et al., 2007). The success of P. aeruginosa as an opportunistic pathogen relies, at least in part, on its metabolic versatility, including the ability to obtain energy from different sources under a variety of environmental conditions (Williams et al., 2007; Arai, 2011). P. aeruginosa possesses a branched respiratory chain terminated by oxygen or nitrogen oxides, allowing growth by aerobic respiration or by denitrification under anaerobic conditions, respectively (reviewed in Arai, 2011). Moreover, P. aeruginosa is able to ferment arginine and pyruvate anaerobically (Vander et al., 1984; Eschbach et al., 2004). Aerobic respiration in P. aeruginosa relies on five terminal oxidases (Matsushita et al., 1982, 1983; Fujiwara et al., 1992; Cunningham and Williams, 1995; Cunningham et al., 1997; Stover et al., 2000; Donohue, 2002, 2004). Three of these enzymes, the aa 3 terminal oxidase (Cox), the cbb 3 -1 (Cco-1), and the cbb 3 -2 (Cco-2), are cytochrome c-type oxidases, while the other two, i.e., the cyanide-insensitive oxidase (Cio) and the bo 3 oxidase (Cyo), are quinol oxidases (Figure 1). All these terminal oxidases contain heme and are differentially expressed depending on the growth conditions, likely as a consequence of their different affinities for oxygen (Alvarez-Ortega and Harwood, 2007; Kawakami et al., 2010). Denitrification is ensured by a set of enzymes which sequentially convert nitrate (NO 3 − ) to molecular nitrogen (N 2 ). Among the denitrification enzymes, only nitrite reductase (Nir) and nitric oxide reductase (Nor) contain heme as a cofactor (Figure 1).
Like almost all pathogenic bacteria, P. aeruginosa has an absolute need for iron to cause infections and to persist within the host (Ratledge and Dover, 2000). Iron is required as a cofactor of many key enzymes involved in respiration, DNA synthesis and defense against reactive oxygen species (Andrews et al., 2003). However, in the human host, iron is poorly available to bacteria due to its incorporation into heme-containing molecules (e.g., hemoglobin and myoglobin) and iron carrier proteins (e.g., transferrin and lactoferrin) (Weinberg, 2009). This iron-withholding capacity represents the first line of the host defense against invading pathogens, a phenomenon known as "nutritional immunity" (Skaar, 2010). To circumvent iron-limitation, P. aeruginosa possesses several systems that actively acquire this essential metal, such as (i) the production of the siderophores pyoverdine (Pvd; Meyer and Abdallah, 1978; Cox and Adams, 1985) and pyochelin (Pch; Cox et al., 1981; Heinrichs et al., 1991); (ii) the ability to utilize a wide range of siderophores synthesized by other organisms (Cornelis and Matthijs, 2002; Cornelis et al., 2009); and (iii) the ability to acquire Fe(II) through the Feo system (Cartron et al., 2006). In addition, P. aeruginosa can utilize heme-iron by expressing two distinct heme-uptake systems, namely phu and has (Ochsner et al., 2000). The phu system allows the direct acquisition of heme from hemoproteins, which bind to the outer membrane receptor PhuR (Ochsner et al., 2000). In the has system, a secreted hemophore, HasA, withdraws heme from hemoproteins and delivers it to the outer membrane receptor HasR (Létoffé et al., 1998). Given the similarity with the well-known has system of Serratia marcescens (Rossi et al., 2003; Létoffé et al., 2004), it is likely that the has system of P. aeruginosa positively regulates its own expression, via the sigma factor HasI and the anti-sigma factor HasS, upon interaction of heme-loaded HasA with the HasR receptor (Llamas et al., 2014). The expression of both the has and phu heme-uptake systems is shut down in the presence of sufficient intracellular iron, due to the negative regulation exerted by the ferric-uptake regulator (Fur) protein (Ochsner et al., 2000).
FIGURE 1 | Branched respiratory chain of P. aeruginosa. Cio, Cyo, Cox, Cco-1, and Cco-2 represent the five terminal oxidases that reduce oxygen to water under aerobic conditions. Cio and Cyo are quinol oxidases, while Cox, Cco-1, and Cco-2 are cytochrome c oxidases. Nar, Nir, Nor, and Nos are nitrate reductase, nitrite reductase, nitric oxide reductase, and nitrous oxide reductase, respectively. These enzymes transfer electrons to nitrogen oxides under anaerobic conditions. Nar receives electrons directly from the quinone pool, while the other three receive electrons via the cytochrome c or from the small blue-copper protein azurin. a, b, c, and d represent different types of low-spin heme, while a 3 , b 3 , d 1 , and o 3 indicate the high-spin ones (modified from Arai, 2011).
It has been shown that P. aeruginosa aerobic respiration and iron-uptake capabilities play pivotal roles during chronic lung infection in CF patients. In particular, three terminal oxidases (Cco-1, Cco-2, and Cio) sustain bacterial growth in the CF lung, a particular environment in which P. aeruginosa iron-uptake abilities are thought to evolve toward heme utilization (Alvarez-Ortega and Harwood, 2007; Marvig et al., 2014; Nguyen et al., 2014).
The paucity of effective antibiotics to treat P. aeruginosa infections has made bacterial respiration and/or iron metabolism promising targets for the development of new anti-Pseudomonas drugs (Ballouche et al., 2009; Foley and Simeonov, 2012; Imperi et al., 2013). The possibility of using iron mimetics as novel therapeutics to interfere with iron metabolism has been explored (Kaneko et al., 2007; Banin et al., 2008; Minandri et al., 2014). Ga(NO 3 ) 3 , the active component of the FDA-approved formulation Ganite®, has successfully been repurposed as an antimicrobial drug (Rangel-Vega et al., 2015). Interestingly, Ga(NO 3 ) 3 has been shown to be very active against P. aeruginosa, by interfering with iron-dependent metabolic pathways (Kaneko et al., 2007; Bonchi et al., 2015). The antibacterial properties of Ga(III) reside in the fact that, unlike Fe(III), Ga(III) cannot be reduced under physiological conditions, whereas redox cycling is critical for many iron-dependent biological functions, including respiration (Breidenstein et al., 2011). Moreover, the heme mimetic GaPPIX [i.e., Ga(III) coupled with the heme precursor protoporphyrin IX] has been shown to possess good antibacterial activity against several bacterial species, including Staphylococcus aureus and Acinetobacter baumannii (Stojiljkovic et al., 1999; Arivett et al., 2015; Chang et al., 2016). GaPPIX is likely to exploit heme-uptake routes to enter bacterial cells, where it could substitute for heme in heme-containing enzymes, including cytochromes, catalases, and peroxidases, resulting in the perturbation of vital cellular functions (Stojiljkovic et al., 1999). Due to the similarity between GaPPIX and heme, GaPPIX is predicted to interfere with heme-dependent b-type cytochromes, thus impairing their function and ultimately inhibiting bacterial respiration.
In this work, the in vitro effect of GaPPIX on P. aeruginosa was tested under iron-depleted conditions, as those encountered during infection. The entrance routes of GaPPIX into P. aeruginosa cells and possible targets of GaPPIX were investigated. We demonstrate that the sensitivity of P. aeruginosa to GaPPIX depends on both intracellular iron levels and the expression of heme-uptake systems. Furthermore, we show that GaPPIX enters P. aeruginosa cells mainly through the heme-uptake receptor PhuR. Evidence is also provided that intracellular GaPPIX inhibits the aerobic growth of P. aeruginosa by targeting heme-dependent b-type cytochromes.
Bacterial Strains and Growth Conditions
Strains and plasmids used in this work are listed in Table 1. P. aeruginosa clinical isolates are listed in Table S1. P. aeruginosa strains from frozen cultures were maintained on Luria Bertani (LB) agar before being transferred to liquid culture media. Bacteria were cultured in iron-free Casamino Acids medium (DCAA, Visca et al., 1993), supplemented or not with 100 µM FeCl 3 , at 37°C with vigorous shaking. When required, antibiotics were added to the media at the following concentrations for Escherichia coli, with the concentrations used for P. aeruginosa shown in parentheses: ampicillin, 100 µg/ml; carbenicillin (300 µg/ml in LB and 200 µg/ml in DCAA); and tetracycline, 12.5 µg/ml (100 µg/ml). DCAA agar plates were prepared by the addition of 15 g/l bacteriological agar (Acumedia, Neogen Corporation). When GaPPIX was required, a 50 mM stock solution of GaPPIX (Frontier Scientific) was prepared in dimethyl sulfoxide (DMSO) and stored at 4°C in the dark. When Ga(NO 3 ) 3 was required, a 100 mM stock solution of Ga(NO 3 ) 3 (Sigma-Aldrich) was prepared in double-distilled water and stored at −20°C.
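As a side note on the preparation of the test concentrations, the volumes follow the usual C1·V1 = C2·V2 dilution relation. The sketch below is only an illustration: the 200 µl well volume is taken from the susceptibility test described in the next section, and the two-fold dilution series is an assumption inferred from the concentrations reported in the Results, not a stated part of the protocol.

```python
# Illustrative dilution arithmetic (not the authors' protocol): volume of a
# 50 mM GaPPIX/DMSO stock needed to reach a target concentration in a
# 200 µl microtiter well, via C1*V1 = C2*V2.

STOCK_MM = 50.0    # GaPPIX stock concentration, mM
WELL_UL = 200.0    # assumed final volume per well, µl

def stock_volume_ul(target_um: float) -> float:
    """Volume of stock (µl) giving `target_um` µM in a WELL_UL µl well."""
    target_mm = target_um / 1000.0           # µM -> mM
    return target_mm * WELL_UL / STOCK_MM    # V1 = C2*V2/C1

# assumed two-fold series from 100 µM down to ~0.39 µM
for conc in (100 / 2**i for i in range(9)):
    print(f"{conc:7.2f} µM -> {stock_volume_ul(conc):6.3f} µl of stock")
```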
Susceptibility Testing
The activity of GaPPIX, Ga(NO 3 ) 3 and Hemin (Hm) (Sigma-Aldrich) on P. aeruginosa was tested in 96-well microtiter plates (Falcon). Briefly, bacterial cells were grown over-night in DCAA supplemented with 100 µM FeCl 3 in order to obtain high cell densities, then washed in saline and diluted to an OD 600 of 0.01 in 200 µl of DCAA containing increasing concentrations (0-100 µM) of GaPPIX, Ga(NO 3 ) 3 or Hm. Microtiter plates were incubated for 24 h at 37 • C with gentle shaking (120 rpm). Growth (OD 600 ) was measured in a Wallac 1420 Victor3 V multilabel plate reader (PerkinElmer). The minimum inhibitory concentration (MIC) of gallium compounds was visually determined as the lowest concentration that completely inhibited P. aeruginosa growth. As a control experiment the same procedure was performed, except that 100 µM FeCl 3 was added in the medium containing the highest concentration of gallium compounds tested (100 µM).
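To make the readout concrete, the sketch below shows one generic way of extracting an MIC-like value and an IC50 from such OD600 readings: the MIC is taken as the lowest concentration with essentially no residual growth, and the IC50 is linearly interpolated. The numbers are invented, and the 5% threshold is a numerical stand-in for the visual MIC determination described above, not the analysis actually used in the study.

```python
import numpy as np

# Hypothetical OD600 readings after 24 h; the untreated well (0 µM)
# defines 100% growth. All values are illustrative only.
conc_um = np.array([0, 0.38, 1.56, 6.25, 12.5, 25, 50, 100], dtype=float)
od600 = np.array([1.20, 1.05, 0.90, 0.75, 0.60, 0.35, 0.10, 0.01])

growth_pct = 100.0 * od600 / od600[0]

# MIC: lowest tested concentration with essentially no growth
# (<5% of the untreated control, as a stand-in for "visually determined").
inhibited = conc_um[(growth_pct < 5.0) & (conc_um > 0)]
mic = inhibited.min() if inhibited.size else None

# IC50: concentration at 50% growth, by linear interpolation
# (np.interp needs increasing x, so sort by growth percentage).
order = np.argsort(growth_pct)
ic50 = np.interp(50.0, growth_pct[order], conc_um[order])

print(f"MIC  = {mic} µM")
print(f"IC50 ~ {ic50:.1f} µM")
```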
The antibacterial activity of gallium compounds was also assessed by disk diffusion assays. Briefly, cells from an over-night culture in DCAA supplemented with 100 µM FeCl 3 were washed and diluted in saline to OD 600 = 0.1, then seeded on the surface of DCAA agar plates supplemented or not with FeCl 3 . Sterile 6-mm blank disks (ThermoFisher-Oxoid) soaked with 10 µl of a 15 mM solution of either GaPPIX or Ga(NO 3 ) 3 were deposited on the agar surface and the Zone Of growth Inhibition (ZOI) was measured (in mm) after 16 h of incubation at 37 • C.
To observe the rescue effect of Hm and Hemoglobin (Hb), disks were soaked with 10 µl of a 7.5 mg/ml solution of bovine hemin chloride (Sigma-Aldrich) in 10 mM NaOH or bovine hemoglobin (Sigma-Aldrich) in phosphate buffered saline (PBS) and deposited on the plate surface near the disk soaked with GaPPIX. The appearance of a half-moon-shaped growth area around the disk soaked with Hm or Hb was detected after 16 h of incubation at 37°C.
To express hasR in the hasR phuR mutant, a 2932 bp fragment containing the hasR gene with its own promoter region was amplified by PCR from the PAO1 genome using primers hasR compl FW and hasR compl RV (Table 1). The product was then digested with KpnI and BglII and directionally cloned into the corresponding sites of the shuttle vector pUCP18, giving plasmid pUCPhasR. To express phuR in the hasR phuR mutant, a 2575 bp fragment containing the phuR gene with its own promoter region was amplified by PCR from the PAO1 genome using primers phuR compl FW and phuR compl RV (Table 1). The product was then digested with EcoRI and KpnI and directionally cloned into the corresponding sites of the shuttle vector pUCP18, giving plasmid pUCPphuR. To express hasR and phuR in the hasR phuR mutant strain, the pUCPhasRphuR plasmid previously described (Minandri et al., 2016) was used.
Generation of P. aeruginosa Mutants
For mutant construction, E. coli and P. aeruginosa strains were grown in LB, with or without antibiotics, at 37 and 42 • C, respectively, with vigorous aeration. Previously described suicide plasmids ( Table 1) were used according to procedures detailed elsewhere (Milton et al., 1996;Frangipani et al., 2008).
Measurement of Cytochrome c Oxidase Activity in P. aeruginosa Intact Cells
Cytochrome c oxidase activity was assayed by using the artificial electron donor N,N,N',N'tetramethyl-p-phenylene diamine (TMPD) (Fluka). Briefly, bacteria were grown over-night in DCAA supplemented with 100 µM FeCl 3 , then washed in saline and inoculated in DCAA to a final OD 600 = 0.05. When the mid-exponential growth phase was reached (≈6 h post inoculum), cells were washed once in saline and adjusted to an OD 600 = 1 (corresponding to ≈10 9 CFU/ml).
Then, 10 8 bacterial cells (100 µl) were suspended in 1.4 ml of 33 mM potassium phosphate buffer (KPi, pH 7.0). The reaction was started by the addition of 5 µl of a 0.54 M TMPD solution to the sample cuvette. The rate of TMPD oxidation was recorded spectrophotometrically at 520 nm for 8 min at 25 • C. Results were expressed as µmol TMPD oxidized/min −1 /10 8 cells using 6.1 as the millimolar extinction coefficient of TMPD (Matsushita et al., 1982).
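For clarity, converting the recorded absorbance change into µmol TMPD oxidized per minute is a direct Beer-Lambert calculation. The sketch below is a hedged illustration: the 1 cm light path and the ≈1.5 ml total assay volume (1.4 ml buffer plus 0.1 ml of cell suspension) are assumptions, since they are not stated explicitly above.

```python
# Illustrative Beer-Lambert conversion for the TMPD oxidase assay.
# Assumed (not stated in the protocol above): 1 cm path length and
# ~1.5 ml total assay volume (1.4 ml KPi + 0.1 ml cell suspension).

EPSILON_MM = 6.1   # millimolar extinction coefficient of TMPD, mM^-1 cm^-1
PATH_CM = 1.0      # assumed cuvette path length, cm
VOLUME_ML = 1.5    # assumed total assay volume, ml

def tmpd_rate_umol_per_min(delta_a520_per_min: float) -> float:
    """µmol TMPD oxidized per minute in the whole cuvette.

    Since 10^8 cells were added to the assay, this value is also the
    rate per 10^8 cells.
    """
    conc_mm_per_min = delta_a520_per_min / (EPSILON_MM * PATH_CM)  # mM/min
    return conc_mm_per_min * VOLUME_ML  # mM == µmol/ml, so scale by volume

# example: an absorbance increase of 0.12 per minute at 520 nm
print(f"{tmpd_rate_umol_per_min(0.12):.4f} µmol TMPD/min per 10^8 cells")
```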
Isolation of Outer Membrane Proteins (OMPs) and SDS-PAGE Analysis
OMPs were isolated following the sarcosyl solubilization method (Filip et al., 1973), with some modifications. Briefly, bacteria from over-night cultures in DCAA supplemented with 100 µM FeCl 3 and 200 µg/ml Cb were washed in saline, then diluted to OD 600 = 0.05 in 60 ml DCAA supplemented with 200 µg/ml Cb, and incubated over-night at 37 • C. Cells were collected by centrifugation (2500 × g, 20 min), washed with 5 ml of 30 mM Tris HCl (pH 8, Sigma-Aldrich) and suspended in 1 ml of the same buffer. Bacteria were lysed by sonication in an ice bath (8 × 20 s cycles in a Sonics Vibra-Cell TM VCX 130 sonicator), punctuated by 20 s intervals (50% power). Phenyl methyl sulfonyl fluoride (PMSF, Sigma-Aldrich) was added to cell lysate at 1 mM final concentration. Unbroken cells were removed by centrifugation at 2400 × g for 20 min, and supernatants were transferred to fresh tubes. Sarcosyl (N-laurylsarcosinate sodium salt, Sigma) was added to the supernatant to a final concentration of 2%. After 1 h incubation at room temperature with gentle shaking, the mixture was centrifuged for 2 h at 55,000 × g at 4 • C. OMP pellets were suspended in 40 µl 2 x SDS-PAGE loading dye (Sambrook et al., 1989), boiled for 10 min, then separated by 8% SDS-PAGE and visualized by Coomassie brilliant blue staining.
Statistical Analysis
Statistical analysis was performed with the software GraphPad Instat (GraphPad Software, Inc., La Jolla, CA), using One-Way Analysis of Variance (ANOVA), followed by Tukey-Kramer Multiple Comparisons Test.
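For readers who prefer an open-source route, the same analysis can be reproduced outside GraphPad InStat; the sketch below runs a one-way ANOVA followed by Tukey-style pairwise comparisons in Python using scipy and statsmodels. The replicate values and group names are invented and serve only to show the shape of the workflow.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented OD600 replicates for three treatment groups (illustrative only).
untreated = [1.18, 1.22, 1.20, 1.19]
gappix = [0.62, 0.58, 0.65, 0.60]
gappix_fe = [1.15, 1.17, 1.12, 1.19]  # GaPPIX plus FeCl3 rescue

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(untreated, gappix, gappix_fe)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey-Kramer style pairwise comparisons (alpha = 0.05).
values = np.concatenate([untreated, gappix, gappix_fe])
groups = (["untreated"] * len(untreated)
          + ["GaPPIX"] * len(gappix)
          + ["GaPPIX+Fe"] * len(gappix_fe))
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```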
P. aeruginosa is Inhibited by GaPPIX under Iron-Deplete Conditions
It has been previously reported that GaPPIX has no effect on P. aeruginosa (Stojiljkovic et al., 1999). This result is quite surprising given that P. aeruginosa is able to utilize heme as an iron source, by expressing two heme-uptake systems, i.e., has and phu (Ochsner et al., 2000). However, since the effect of GaPPIX had previously been investigated in iron-rich media (Stojiljkovic et al., 1999), we reasoned that under these conditions iron availability would have impaired Ga(III) activity. To verify this hypothesis, we preliminarily tested the effect of GaPPIX on P. aeruginosa PAO1 growth using the iron-poor medium DCAA (Visca et al., 1993), supplemented with increasing concentrations of GaPPIX, the iron-binding porphyrin Hemin, or Ga(NO 3 ) 3 , the latter being very active on P. aeruginosa in this medium (Bonchi et al., 2015). Ga(NO 3 ) 3 completely inhibited P. aeruginosa growth at 12.5 µM, and its activity was abrogated by the addition of FeCl 3 (Figure 2A), consistent with previous findings (Kaneko et al., 2007; Frangipani et al., 2014). Although the minimal inhibitory concentration (MIC) could not be determined for up to 100 µM GaPPIX (Figure 2A), exposure of PAO1 to GaPPIX reduced bacterial growth by 50% (IC 50 ) at 12.5 µM (Figure 2A). Also in the case of GaPPIX, growth inhibition was completely reversed by the addition of FeCl 3 (Figure 2A). As expected, exposure of P. aeruginosa PAO1 to Hemin promoted bacterial growth at concentrations ranging between 1.55 and 25 µM, in line with the ability of P. aeruginosa to use Hemin as an iron source (Ochsner et al., 2000).
The GaPPIX susceptibility of P. aeruginosa PAO1 was also tested using the disk diffusion assays in DCAA agar plates supplemented or not with an excess of FeCl 3 (600 µM) ( Figure 2B). In FeCl 3 -supplemented DCAA, both GaPPIX and Ga(NO 3 ) 3 caused no inhibition of PAO1 growth. Conversely, in DCAA a clear ZOI was observed around the GaPPIX and Ga(NO 3 ) 3 disks ( Figure 2B). Different from the ZOI formed by Ga(NO 3 ) 3 , the ZOI formed by GaPPIX was less transparent (Figure 2B), consistent with the evidence that no MIC (full inhibition) could be determined for GaPPIX in liquid DCAA (Figure 2A). Although more transparent, the ZOI caused by Ga(NO 3 ) 3 was smaller than that of GaPPIX ( Figure 2B). These preliminary data indicate that iron-deplete conditions render P. aeruginosa PAO1 susceptible to GaPPIX-mediated growth inhibition.
The Response of P. aeruginosa Cells to GaPPIX Depends on Intracellular Iron Carryover
The above results prompted us to investigate the effect of the intracellular iron content on GaPPIX-dependent growth inhibition. To this aim, the effect of GaPPIX was compared between P. aeruginosa PAO1 cells that had been pre-cultured in either DCAA containing 100 µM FeCl 3 (to increase the intracellular iron content) or DCAA without FeCl 3 (to lower the intracellular iron content). Iron-starved bacterial cells were significantly more susceptible to GaPPIX (P < 0.001) compared with those pre-cultured with FeCl 3 (Figure 3A). In particular, upon the addition of 0.38 µM GaPPIX, the growth of iron-starved PAO1 cells was reduced by 40% compared with cells pre-cultured in the presence of 100 µM FeCl 3 (Figure 3A).
To further investigate the correlation between the intracellular iron content and GaPPIX-dependent growth inhibition, GaPPIX susceptibility was evaluated on P. aeruginosa mutants impaired in Fe(III)-siderophore uptake systems, i.e., mutants unable to synthesize pyoverdine (pvdA), pyochelin (pchD), or both siderophores (pvdA pchD) (Figure 3B). While GaPPIX-dependent growth inhibition was similar in the wild type and the pchD mutant, both the pvdA and pvdA pchD mutants were extremely sensitive to GaPPIX (Figure 3B). In particular, 0.38 µM GaPPIX inhibited the growth of the pvdA and pvdA pchD mutant strains by 75 and 78%, respectively, compared with the untreated cultures, while it reduced the growth of the wild-type strain and of the pchD mutant by only 40 and 30%, respectively (Figure 3B). Altogether, these data indicate that the response of P. aeruginosa PAO1 to GaPPIX also depends on the carryover of intracellular iron.
GaPPIX Is Preferentially Taken Up via the P. aeruginosa PhuR Receptor
To investigate the hypothesis that GaPPIX may enter P. aeruginosa cells by exploiting the same routes as heme, P. aeruginosa mutants carrying a deletion of either of the known heme receptors ( hasR and phuR mutants; Table 1) were generated. The effect of GaPPIX on these mutants, as well as on a hasR phuR double mutant lacking both heme receptors (Minandri et al., 2016), was investigated in DCAA in the presence of 12.5 µM GaPPIX (IC 50 ; Figure 4A). While all strains showed the same growth profiles in the untreated medium, both phuR and hasR phuR mutants grew better than the wild type or the hasR mutant in the presence of 12.5 µM GaPPIX, displaying ≈50% higher growth levels relative to the wild type or the hasR mutant ( Figure 4A). These data suggest that, among the P. aeruginosa heme-uptake systems, phu has a more prominent role than has in the uptake of GaPPIX. Then, the effect of GaPPIX on heme-receptor mutants was evaluated in DCAA agar plates, by performing the disk diffusion assays ( Figure 4B). Results showed a similar ZOI (27.6 ± 2.0 mm) for both the wild-type strain and the hasR mutant, while a smaller ZOI (24.5 ± 0.7 mm) was observed for the phuR mutant, indicating a less susceptible phenotype ( Figure 4B, Table S2). In addition, no ZOI was observed for the hasR phuR double mutant, indicating a fully resistant phenotype ( Figure 4B). These observations indicate that both has and phu systems are implicated in GaPPIX transport, although the phu system appears to be the preferential route for the entrance of GaPPIX in P. aeruginosa cells (Figure 4B).
The Sensitivity of P. aeruginosa to GaPPIX Depends on the Expression of the Heme-Uptake Receptors
To further investigate the contribution of the HasR and PhuR receptors to GaPPIX-uptake, we individually expressed multicopy hasR, phuR, or both hasR and phuR in the hasR phuR mutant strain (using plasmids pUCPhasR, pUCPphuR, or pUCPhasRphuR, respectively) ( Figure 5A). The effect of GaPPIX on these strains was initially tested by the disk diffusion assays ( Figure 5A). While, the empty pUCP18 vector did not alter the susceptibility of hasR phuR to GaPPIX (cfr Figures 5A, 4B), the expression of hasR from the multicopy plasmid pUCPhasR made the hasR phuR mutant more susceptible to GaPPIX (ZOI = 27.6 ± 2.0 mm) ( Figure 5A, Table S2). The effect of GaPPIX was even more pronounced in the has phuR mutant overexpressing either phuR ( has phuR carrying the multicopy plasmid pUCPphuR; ZOI = 34.0 ± 1.0 mm) or both hasR and phuR ( has phuR carrying the multicopy plasmid pUCPhasRphuR; ZOI = 33.3 ± 0.5 mm) ( Figure 5A, Table S2). GaPPIX sensitivity of the has phuR strain expressing hasR, phuR, or both genes, was also evaluated in DCAA liquid medium, in the presence of different concentrations of GaPPIX ( Figure 5B). All strains grew equally in the untreated medium, and GaPPIX did not affect the growth of hasR phuR/pUCP18 up to 25 µM ( Figure 5B). Conversely, strains hasR phuR/pUCPhasR, hasR phuR/pUCPphuR, and hasR phuR/pUCPhasRphuR were very sensitive to GaPPIX. In particular, 0.38 µM GaPPIX reduced the growth of the hasR phuR/pUCPhasR strain by 56%, and by >80% in both hasR phuR/pUCPphuR and hasR phuR/pUCPhasRphuR strains ( Figure 5B). This effect was much more pronounced than that observed for the parental strain PAO1 (Figure 2A). Of note, no further growth reduction was observed for both the hasR phuR/pUCPphuR and hasR phuR/pUCPhasRphuR mutant strains at > 0.38 µM GaPPIX. The increased sensitivity of the hasR phuR strain expressing either hasR or phuR, relative to the wild type, can be explained by the overexpression of heme receptors from the multicopy plasmid pUCP18 (Figure 5A). To confirm this hypothesis, HasR and PhuR protein levels were visualized by SDS-PAGE analysis of OMPs purified from the different P. aeruginosa strains cultured in DCAA ( Figure 5C). By comparing P. aeruginosa outer-membrane-proteins profiles of the wild type, the phuR or the hasR phuR mutant strains, the lack of a ca. 75 kDa protein in the phuR or the hasR phuR mutants, was observed. This was in good agreement with a predicted molecular mass of 82 kDa for the mature PhuR receptor. Moreover, a protein band at that position was evident in SDS-PAGE electropherograms of the hasR phuR/pUCPphuR and the hasR phuR/pUCPhasRphuR complemented mutants ( Figure 5C). Similarly, a protein band corresponding to ca. 94 kDa, consistent with the HasR receptor mass, was absent in the hasR and hasR phuR mutants, while it was clearly detectable in the hasR phuR/pUCPhasR and hasR phuR/pUCPhasRphuR complemented mutants ( Figure 5C). In line with previous results (Ochsner et al., 2000), protein levels greatly differed between PhuR and HasR, the latter being poorly expressed in wild-type PAO1. These results confirm that both HasR and PhuR direct GaPPIX entrance in P. aeruginosa cells, and argue for a prominent role of PhuR as a consequence of its higher expression levels, compared with HasR.
To confirm the specificity of GaPPIX for both heme-uptake systems, we investigated whether the growth-inhibitory effect of GaPPIX could be rescued by the presence of Hemin (Hm) or Hemoglobin (Hb), which are known to deliver iron via heme-uptake receptors (Ochsner et al., 2000). To this aim, the heme-uptake mutant hasR phuR overexpressing either PhuR or HasR was tested in the GaPPIX disk diffusion assay in the presence of Hm and Hb (Figure 5D). Both Hm and Hb partly rescued the growth of the hasR phuR mutant overexpressing either PhuR (from pUCPphuR) or HasR (from pUCPhasR), thus confirming that (i) Hm, Hb and GaPPIX compete for heme receptors and (ii) GaPPIX enters P. aeruginosa cells through PhuR and HasR (Figure 5D).
It has been observed that P. aeruginosa isolates evolving during chronic lung infection in CF patients tend to accumulate mutations in siderophore loci, concomitant with preferential utilization of heme iron (Cornelis and Dingemans, 2013; Marvig et al., 2014; Andersen et al., 2015). To simulate this situation, we tested the GaPPIX susceptibility of a siderophore-defective P. aeruginosa mutant overexpressing both the PhuR and HasR receptors (pvdA pchD/pUCPhasRphuR). Whereas exposure of the pvdA pchD mutant to GaPPIX reduced bacterial growth by 82% at 0.38 µM, expression of both hasR and phuR from the multicopy plasmid pUCPhasRphuR made the pvdA pchD mutant extremely susceptible to GaPPIX, displaying 90% growth reduction (IC 90 ) at 0.38 µM (Figure 5E). Notably, full inhibition of the pvdA pchD/pUCPhasRphuR strain was observed upon challenge with 50 µM GaPPIX.
GaPPIX Targets the Aerobic Respiration of P. aeruginosa
GaPPIX has been proven effective against a wide range of pathogenic bacteria by targeting metabolic pathways that require heme as an enzymatic cofactor, such as cellular respiration (Stojiljkovic et al., 1999). Thus, we investigated whether GaPPIX could interfere with the activity of terminal oxidases implicated in P. aeruginosa aerobic respiration. In particular, we focused on Cco-1, Cco-2, and Cio, which have been shown to sustain P. aeruginosa growth under low oxygen conditions, as those encountered in the lung of CF patients (Alvarez-Ortega and Harwood, 2007;Kawakami et al., 2010). To this aim, we initially tested the sensitivity of cytochrome c oxidases (i.e., Cox, Cco-1, and Cco-2) to GaPPIX. Strains deleted of the whole operon encoding the terminal oxidase Cox ( cox) or both the Cco-1 and Cco-2 terminal oxidases ( cco) were generated in the same parental strain used to generate the heme-receptor mutants ( Table 1). The effect of GaPPIX was then assayed on these cytochrome-defective mutants using the TMPD redox indicator, which is an artificial electron donor to the cytochrome c (Matsushita et al., 1982). Oxidation of TMPD to a blue indophenol compound indicates electron flow to the cytochrome c terminal oxidases. Thus, cytochrome c oxidase activity was measured on P. aeruginosa PAO1 and in the cox and cco mutants grown in DCAA supplemented or not with a subinhibitory concentration of GaPPIX (4 µM). In whole cells cultured in the untreated medium, no cytochrome c oxidase activity could be measured in the cco strain (Figure 6A), confirming that in our conditions the TMPD test mainly measures the activity of Cco. Indeed, the cox mutation does not affect the TMPD oxidase activity (Figure 6A), as previously reported (Frangipani and Haas, 2009). This is because Cox is known to be poorly expressed during P. aeruginosa exponential growth (Kawakami et al., 2010). Interestingly, 4 µM GaPPIX reduced the respiratory activity by more than 50% in the wildtype strain PAO1 and the cox mutant, compared with the untreated condition ( Figure 6A). These observations suggest that Cco-1 and Cco-2 terminal oxidases are sensitive to GaPPIX. To confirm these preliminary results, the effect of GaPPIX was tested on a mutant expressing only Cco-1 and Cco-2. To this aim, a cyo cio cox triple mutant strain was generated. Disk diffusion assays showed that the cyo cio cox mutant was more sensitive to GaPPIX than the wild type (ZOI = 34.6 ± 1.24 vs 27.6 ± 2.0 mm, respectively) ( Figure 6B, Table S2). Similar results were obtained in DCAA liquid cultures. PAO1 wild type and the cyo cio cox mutant showed a similar growth profile in the untreated medium (Figure 6C), whereas exposure to 0.38 µM GaPPIX reduced bacterial growth by 40% and 68%, respectively, relative to the untreated cultures ( Figure 6C). Interestingly, it was possible to determine an IC 90 at 82 µM for the cyo cio cox mutant strain (Figure 6C). These results confirm that Cco-1 and Cco-2 are targeted by GaPPIX.
Then, the effect of GaPPIX on the Cio terminal oxidase was assessed. To this purpose, sodium azide (NaN 3 ) was used as a specific inhibitor of copper-dependent oxidases, i.e., all terminal oxidases except Cio (Cunningham and Williams, 1995). Preliminarily, we determined the minimal NaN 3 concentration inhibiting all terminal oxidases except Cio in DCAA, by comparing the growth of wild-type PAO1 and the cio mutant in the presence of increasing NaN 3 concentrations (250-1000 µM). We observed that 350 µM of NaN 3 completely inhibited the cio mutant without affecting PAO1 growth (data not shown). Then, the sensitivity of Cio to GaPPIX was tested by performing a GaPPIX disk diffusion assays with wild-type PAO1 in DCAA supplemented or not with 350 µM NaN 3 . It was observed that PAO1 remains sensitive to GaPPIX in the presence of 350 µM NaN 3 , displaying a ZOI even greater than that obtained for PAO1 without NaN 3 (36.6 ± 3.0 vs 27.6 ± 2.0 mm, respectively) ( Figure 7A, Table S2). This result provides evidence that Cio is a target for GaPPIX. To strengthen this evidence, a P. aeruginosa cyo cco cox triple mutant, which expresses only Cio (Table 1) was constructed and assayed for GaPPIX susceptibility. Disk diffusion assay results showed that the cyo cco cox mutant was more sensitive to GaPPIX than the wild-type PAO1 (ZOI = 30.0 ± 0.7 vs 27.6 ± 2.0 mm, respectively) ( Figure 7A, Table S2). Similar results were also obtained in DCAA liquid medium, showing that GaPPIX significantly reduced (P < 0.001) the growth of the cyo cco cox mutant relative to the wild type, at concentrations ranging between 0.38 and 6.25 µM ( Figure 7B). Altogether, the above results indicate that P. aeruginosa Cco-1, Cco-2, and Cio terminal oxidases are targets for GaPPIX.
P. aeruginosa Clinical Isolates Are Sensitive to GaPPIX
The expression of P. aeruginosa genes encoding heme-uptake systems has recently been detected in sputum samples collected from CF patients (Konings et al., 2013), and an evolution toward preferential heme utilization has been documented in P. aeruginosa during the course of chronic lung infection in CF patients (Marvig et al., 2014; Nguyen et al., 2014). Given the importance of heme in sustaining P. aeruginosa growth during infection, we have comparatively assessed the response to Ga(NO 3 ) 3 and GaPPIX in a collection of P. aeruginosa clinical isolates from CF and non-CF patients (Figure 8, Table S1). Although GaPPIX (up to 100 µM) never abolished P. aeruginosa growth, the majority of clinical isolates (>70%) were sensitive to GaPPIX, displaying IC 50 values in the range 0.1-15.2 µM (Table S1). Moreover, all but one of the P. aeruginosa clinical isolates were significantly more susceptible than the reference PAO1 strain (Figure 8). In line with previous reports (Bonchi et al., 2015), all clinical isolates except one (FM1, Table S1) were very sensitive to Ga(NO 3 ) 3 , showing IC 50 values ranging from 0.2 to 9 µM (Table S1).
DISCUSSION
The ability of pathogenic bacteria to colonize the host and cause infections is dependent on their capability to acquire iron and generate energy to sustain in vivo growth (Ratledge and Dover, 2000;Alvarez-Ortega and Harwood, 2007;Hammer et al., 2013). The success of P. aeruginosa as a pathogen relies on the presence of several iron-uptake systems (reviewed in Llamas et al., 2014), as well as on a multiplicity of terminal oxidases which allow bacterial respiration in vivo. Both iron-uptake systems and respiratory cytochromes have been shown to contribute to P. aeruginosa fitness during chronic lung infection in CF patients (Alvarez-Ortega and Harwood, 2007;Konings et al., 2013). Recent observations have documented an adaptation of P. aeruginosa toward heme iron acquisition in the CF lung, where bacterial energy metabolism mainly relies on the three terminal oxidases Cco-1, Cco-2, and Cio, all of which have high affinity for oxygen (Alvarez-Ortega and Harwood, 2007). These data suggest that heme utilization pathways and respiratory cytochromes could represent candidate targets for the development of new anti-Pseudomonas drugs (Alvarez-Ortega and Harwood, 2007;Marvig et al., 2014;Nguyen et al., 2014). Indeed, targeting bacterial membrane functions such as cellular respiration, are considered promising therapeutic opportunities, especially in the case of persistent or chronic infections (Hurdle et al., 2011). Given that all terminal oxidases require heme as a cofactor, and that heme-uptake systems are expressed during chronic lung infection, in this work we have investigated the effect of the heme-mimetic GaPPIX against P. aeruginosa. We focused on Cco-1, Cco-2, and Cio since P. aeruginosa uses any of these three terminal oxidases to support the microaerobic growth necessary to thrive in the lung of CF patients. Cox and Cyo are not expressed or strongly repressed under these conditions (Alvarez-Ortega and Harwood, 2007).
We have initially demonstrated that GaPPIX is able to reduce the growth of P. aeruginosa only under iron-limiting growth conditions. However, different from Ga(NO 3 ) 3 , bacterial growth was never completely inhibited at GaPPIX concentrations up to 100 µM (Figure 2A), in line with the fact that the ZOI for GaPPIX was less transparent compared with that generated by Ga(NO 3 ) 3 in the disk diffusion assays (Figure 2B). This diverse response of P. aeruginosa upon exposure to GaPPIX or Ga(NO 3 ) 3 (Figures 2A,B) could be explained by the fact that GaPPIX and Ga(NO 3 ) 3 enter bacterial cells through different pathways. Ga(NO 3 ) 3 may enter P. aeruginosa cells (i) by diffusion; (ii) through the HitAB iron transport proteins (García-Contreras et al., 2013); or (iii) via the siderophore Pch . On the other hand, we have demonstrated that GaPPIX can cross the P. aeruginosa outer membrane only through the heme-receptors HasR and PhuR, since a hasR phuR mutant is fully resistant to GaPPIX (Figure 4). Indeed, overexpression of heme receptors in the hasR phuR mutant makes this strain susceptible to GaPPIX, at even lower GaPPIX concentrations compared with wild-type PAO1 (Figures 5A,B). However, it should also be taken into consideration that GaPPIX and Ga(NO 3 ) 3 likely have different targets. In fact, while Ga(NO 3 ) 3 is known to target a variety of essential iron-containing enzymes (Bernstein, 1998;Soo et al., 2016), less is known about GaPPIX targets. Several studies have demonstrated that the antibacterial activity of GaPPIX relies on the molecule as a whole, since GaPPIX cannot be cleaved by bacterial enzymes (Stojiljkovic et al., 1999;Hammer et al., 2013). In fact, we demonstrated that the homolog of GaPPIX (Hemin) did not affect P. aeruginosa PAO1 growth. Indeed, Hemin promoted bacterial growth at concentrations ranging between 1.55 and 25 µM (Figure 2A), likely as a consequence of iron delivery to the cell, combined with positive regulation of the has system (Llamas et al., 2014). Hence, GaPPIX might be erroneously incorporated in hemecontaining proteins such as cytochromes. However, due to the multiplicity of pathways involving cytochromes, exposure to GaPPIX never results in a complete growth inhibition. This hypothesis is supported by the observation that GaPPIX is more active against P. aeruginosa mutants deleted in some of the cytochrome-dependent terminal oxidases (Figures 6, 7). In fact, a P. aeruginosa mutant that only expresses the terminal oxidases Cco-1 and Cco-2 ( cyo cio cox) is much more sensitive to GaPPIX than the wild-type strain. In addition, the cyo cio cox mutant showed a 68% growth reduction in liquid DCAA at 0.38 µM GaPPIX, compared to the untreated cultures, and an IC 90 of 82 µM (Figures 6B,C). Along the same lines, a P. aeruginosa strain that only relies on the terminal oxidase Cio to respire oxygen, is more sensitive to GaPPIX than the wildtype strain (Figure 7). Taken together, our results demonstrate that GaPPIX targets P. aeruginosa respiratory cytochromes Cco-1, Cco-2, and Cio, which are exclusively found in bacteria (Cunningham and Williams, 1995;Pitcher and Watmough, 2004), although we cannot discriminate which of the Cco cytochromes is preferentially targeted by GaPPIX (the cco-1,2 strain is mutated in both). Moreover, it is tempting to speculate that GaPPIX may also inhibit the other terminal oxidases Cyo and Cox (Figure 1), as well as some of the enzymes involved in denitrification, such as the heme-containing protein complexes Nir and Nor (Figure 1). 
Moreover, GaPPIX could also be incorporated into heme-containing enzymes involved in the protection from oxidative stress, increasing the susceptibility of P. aeruginosa to reactive oxygen species.
Although it was not possible to determine the MIC of GaPPIX for wild-type PAO1, it is worth to point out that GaPPIX was extremely active against a P. aeruginosa mutant impaired in siderophore production ( pvdA pchD) and overexpressing both HasR and PhuR heme receptors from plasmid pUCPhasRphuR (Figure 5). Ninety percent growth reduction and full inhibition were observed upon exposure of this mutant to 0.38 and 50.0 µM GaPPIX, respectively. It is tempting to speculate that such strong inhibition could also occur in the CF lung, where siderophore-defective P. aeruginosa variants emerge during chronic infection, and heme represents the principal iron source (Marvig et al., 2014;Nguyen et al., 2014). Inhibition could further be enhanced under the microaerobic conditions encountered by P. aeruginosa in the CF airways (Hogardt and Heesemann, 2010), where the three high affinity terminal oxidases targeted by GaPPIX (Cco-1, Cco-2, and Cio) are essential for bacterial growth (Alvarez-Ortega and Harwood, 2007). Irrespective of the Ga(III) delivery system and of the energy metabolism adopted by P. aeruginosa, the balance between Fe(III) and Ga(III) availability in vivo will be the main determinant of Ga(III) efficacy. The inhibitory activity of GaPPIX was not limited to the prototypic strain PAO1, as it was also exerted on a representative collection of P. aeruginosa clinical isolates (Table S1). The great majority of clinical isolates (>70%) was sensitive to GaPPIX, irrespective of their origin, and all but one were significantly more susceptible than PAO1 (IC 50 ≤ 3.2 µM, Table S1).
Interestingly, studies on several human cell lines report that GaPPIX does not show cytotoxicity at concentrations ≤ 128 µM (Stojiljkovic et al., 1999; Chang et al., 2016), far above the concentrations that we found active on P. aeruginosa clinical isolates. Moreover, GaPPIX did not appear to affect the health and behavior of mice when administered by intraperitoneal injection (25-30 mg/kg) followed by four daily doses of 10-12 mg/kg (Stojiljkovic et al., 1999), though it reduced the survival of Galleria mellonella larvae by 50% (LC 50 ) when injected at 25 mM (Arivett et al., 2015).
Although further studies are needed to assess the effect of GaPPIX against P. aeruginosa infection in vivo, our work should encourage future research directed to the development of hememimetic drugs targeting cellular respiration for the treatment of P. aeruginosa chronic lung infection.
AUTHOR CONTRIBUTIONS
PV and EF designed research; SH performed research; SH, EF, and PV analyzed data; SH, EF, and PV wrote the paper.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fcimb.2017.00012/full#supplementary-material
TruFLaaS: Trustworthy Federated Learning as a Service
The increasing availability of data generated by Internet of Things (IoT) and Industrial IoT (IIoT) devices, as well as privacy and law regulations, have significantly boosted the interest in collaborative machine learning (ML) approaches. In this direction, we claim federated learning (FL) as a promising ML paradigm where participants collaboratively train a global model without outsourcing on-premises data. However, setting up and using FL can be extremely costly and time consuming. To effectively promote the adoption of FL in real-world scenarios, while limiting the overhead and knowledge of the underlying technology, service providers should offer FL as a Service (FLaaS). One of the major concerns while designing an architecture that provides FLaaS is achieving trustworthiness among involved typically unknown participants. This article presents a blockchain-based architecture that achieves trustworthy FLaaS (TruFLaaS). Our solution provides trustworthiness among third-party organizations by leveraging blockchain, smart contracts, and a decentralized oracle network. Specifically, during each FL round, the service provider supplies a sample, without overlapping, of its validation set to validate all partial models submitted by clients. By doing so, poor models, which tend to degrade performance or introduce malicious backdoors, are identified and discarded. Due to the transparency of the blockchain, not changing the validation set would enable participants to forge a malicious partial model that passes the validation phase. We evaluate our approach over two well-known IIoT data sets: the reported experimental results show that TruFLaaS outperforms the state-of-the-art literature solutions in the field.
Index Terms-Blockchain, federated learning (FL), federated learning as a service (FLaaS), security, trust, trustworthiness.
I. INTRODUCTION
THE INCREASINGLY widespread adoption of Internet of Things (IoT) and Industrial IoT (IIoT) devices is notably contributing to the design and development of next-generation services [1]. As reported in recent statistics, the number of such devices will surpass 125 billion by 2030 [2], generating an unprecedented amount of data that paves the way for new applications based on artificial intelligence (AI) [3]. However, traditional machine learning (ML) techniques, which require data centralization, are not feasible when a remarkable amount of information comes from multiple locations, both in terms of privacy awareness [4] and energy consumption [5]. Furthermore, law and privacy regulations, such as the European General Data Protection Regulation (GDPR) [6], hinder centralized ML approaches that may lead to potential data leakages. These reasons are pushing industrial and academic communities toward more decentralized and collaborative ML approaches. In this direction, federated learning (FL) is envisioned as a promising ML paradigm in which the parties involved, which share common goals, collaboratively train a global model. Unlike centralized ML, which typically relies on cloud-based resources, data are no longer sent to a central entity, as training is performed directly on remote clients using on-premises data. Each client trains a local ML model with its own data and, subsequently, sends it to a server that combines all the partial models retrieved according to an aggregation strategy [7]. Nowadays, different services and companies, which could also be competitors, have to face similar problems that can be effectively solved through the use of distributed ML. Adopting a collaborative approach to train ML models can be extremely beneficial, especially for small/medium enterprises that may not have enough on-premises data to build useful models on their own. For example, in smart manufacturing environments, the various equipment and uneven load distribution may lead to unbalanced data regarding faults. In such a context, implementing a diagnostic model requires gathering a large amount of high-quality fault data, which is a hard task. Therefore, the lack and imbalances in fault samples represent two main factors that negatively affect the performance of fault diagnosis models [8]. These limits can be overcome through the training of a shared model, bringing advantages to every participant. Edge nodes deployed in multiple sites and the use of FL allow the exploitation of unbalanced samples to train models with excellent accuracy, generalizability, and efficiency [9].
Although FL is gaining much popularity in various fields [10], [11], [12], [13], there are just a few works that propose to provide FL as a Service (FLaaS) [14] to interested third parties. In the last decade, cloud providers have offered many cloud-based ML as a Service (MLaaS) solutions [15]. Providing FLaaS with minimum overhead and knowledge of the underlying technology is a key factor in promoting the successful use of FL solutions. An effective FLaaS should be designed to: 1) offload developers from collecting data, allowing them to only focus on the algorithm to implement; 2) preserve data privacy by avoiding data transfers from the owner to external entities; and 3) provide trustworthiness among unknown participants, which is the main focus of this work. For example, regarding the fault diagnosis use case reported above, smart manufacturing enterprises may want to be sure that the employed model can effectively predict a certain failure.
Although the FL paradigm enables tackling some AI-based challenges, such as preserving privacy, the global model can still be the target of different attacks (i.e., model poisoning and inference attack) [16].Regarding trustworthiness, the main concern that hampers FL adoption in third-party applications is the presence of malicious clients and servers that negatively impact the performance of FL training and introduce malicious backdoors [17].Furthermore, the traditional FL architecture based on the client-server model suffers from a single point of failure, low scalability, and tampering of the global model, including possible biases that induce some partial models over others [18].
To effectively improve the trustworthiness of the whole FL process, this article proposes a blockchain-based trustworthy FLaaS (TruFLaaS).Our solution provides trustworthiness among third-party contributors to the FL process by leveraging blockchain, smart contracts, and decentralized oracle networks (DONs).TruFLaaS proposes a novel validation strategy to aggregate partial models, resulting in an improved quality of the global model.Clients' models are validated by a smart contract through a sample of the validation data set given by the service provider through a DON.By evaluating the partial models on defined quality metrics (e.g., accuracy), we can generate high-quality global models.We associate a level of trust, updated during each round, with each client in order to properly weigh their contributions.To the best of our knowledge, we are the first to propose a validation protocol that leverages a smart contract to directly validate partial models.The experiments demonstrate that TruFLaaS outperforms conventional baselines and the state-of-the-art literature under different circumstances that are particularly relevant to FL scenarios.The following summarizes the major contributions of this article.
1) We present a novel blockchain-based architecture for enabling TruFLaaS.Our solution combines blockchain, smart contracts, and a DON to build a collaborative trustworthy AI model training system that can resist attacks from the server and malicious participants.2) We design a novel validation protocol based on smart contracts and a DON.The DON is needed to dynamically feed the smart contract with a sample of the validation data set.3) We propose a weighted aggregation strategy that takes into account the level of trust of each participant.To properly consider contributions, each client has a level of trust that is given by its performance achieved during all previous rounds.
The remainder of this article is structured as follows.Section II motivates the need for the FLaaS and discusses the main guidelines to design a TruFLaaS.Section III presents the blockchain-based architecture for enabling trustworthy FL, while Section IV discusses in detail the validation protocol as well as the level of trust of participants.Section V evaluates the proposed approach and presents experimental results.Section VI analyzes related work on trustworthiness and FL.Finally, Section VII draws our conclusions.
II. MOTIVATION AND DESIGN GUIDELINES
FL is emerging as a valuable solution for creating ML models in a distributed manner without sacrificing data privacy.Despite the benefits, setting up and using FL can be extremely expensive and time consuming, especially in some sectors, such as industry or healthcare, where the necessary infrastructure and expertise are often lacking.FLaaS provides clients with an easy way to use FL with limited overhead and technological knowledge, allowing them to eliminate the heavy burden task of developing and tuning algorithms and tools.Furthermore, FLaaS is flexible to meet different participants' requirements while implementing FL training.
To facilitate the understanding of our proposal and to practically clarify the motivations behind the primary TruFLaaS design choices, we introduce an example that will be used as a reference use case throughout this article. Let us consider a company that sells industrial machines and offers a predictive maintenance service. All the customers who use a specific machine are interested in joining such an FLaaS, since predicting the breakdown of an industrial component brings significant advantages [19], such as reducing maintenance costs and increasing production capacity. Since all the machines of the same model share the same characteristics, they are prone to the same performance degradation trend over time. For this reason, the environmental conditions experienced by each machine can help other customers understand the reasons behind, and thus prevent, a component fault. In this context, when a smart manufacturing company buys a machine, it also obtains the infrastructure needed to run FL training. The parameters resulting from each local training are then sent to the vendor, which aggregates them into a more generalized model able to predict the behavior of the machine under a wide spectrum of circumstances.
Therefore, due to the above consideration, FLaaS is designed to address the following scenarios.
1) FL training for a single client on an existing ML problem without the need of developing and tuning algorithms.For example, a smart manufacturing enterprise may want to model the temperature of a given machine to avoid overheating.2) FL training between two or more clients to solve an existing task, which is the same for all the involved parties (e.g., predictive maintenance) giving access to a wider knowledge spectrum.3) FL training two or more clients to solve a novel task not explored yet.For example, a smart manufacturing enterprise may want to model the temperature of a given machine during a specific month.
4) Allowing clients to specify the requirements to address while implementing FL training.Clients' requirements comprise quality metrics, aggregation strategy, and the number of nodes involved.Although FLaaS can bring many advantages to third-party organizations, there are several challenges arising that need to be properly addressed.
A. Trustworthiness
Clients require transparency in the FL process, especially if they do not fully trust each other and/or the service provider.For example, the node in charge of aggregating models might have biases and prefer one update over another.In addition, it is important to check the models sent by clients, as they may be Byzantine.Aggregation of a malicious model could generate a global model with poor performance and/or backdoors [20], Therefore, to effectively allow unknown clients to collaborate, service providers have to guarantee that: 1) all partial models are equally treated without possible biases inducing to prefer some partial models over others and 2) malicious attempts to arbitrarily alter the global model are properly mitigated through a validation process of partial models.Concerning the smart manufacturing example reported above, there are two potential targets for an attack: 1) the service provider (i.e., vendor) and 2) the customers (i.e., smart manufacturing industries).A malicious competitor of the service provider could be interested in joining the FL training as a client to negatively contribute to the global model or make the service unavailable, impacting the service provider's reputation and reliability.On the other hand, a contender of a third-party customer could buy that equipment only to participate in the FL process and degrade the performance of the global model, exposing the customer to potential machine failures.
These considerations lead us to claim that trustworthiness is one of the major requirements that service providers have to guarantee to offer an effective FLaaS.With this idea in mind, we propose the exploitation of blockchain and smart contracts to make more trustworthy the validation and aggregation processes of partial models.Many research works propose enriching FL with blockchain, but the use of blockchain is mainly devoted to avoiding a single point of failure, providing higher reliability, and tracking participants' contributions.This ensures accountability and maintains data providence [21], [22], [23], [24].However, unlike our proposal, the validation and aggregation processes in these studies are usually performed off-chain, meaning they occur outside the blockchain network.
However, performing these operations off-chain significantly reduces the usefulness of the blockchain, while also reducing the benefits of its consequent overhead.Off-chain approaches, which do not rely on smart contracts, are neither public nor verifiable.Therefore, the results obtained do not represent solid evidence and could be challenged by clients.Furthermore, a participant may not have adequate or enough validation data to perform an accurate validation process.For example, in the case of predictive maintenance, a smart manufacturing enterprise may not have data on a particular fault and therefore could not assess whether the model can predict it.
Although keeping the validation on the blockchain avoids Byzantine contributions that could introduce backdoors in the global model, on-chain validation also brings challenges to address.In particular, validating a partial model through a smart contract implies that validation data are published on the blockchain.Hence, malicious contributors could intentionally craft an evil model that achieves satisfying performances on the validation data.Therefore, to offer a reliable FLaaS, service providers must implement security mechanisms that allow partial models to be validated using smart contracts without making validation data publicly available.Moreover, due to the transparency by-design nature of blockchain environments, service providers have to offer the possibility to verify how validation was performed.
B. Privacy-Preserving Techniques
Although one of the main advantages of FL is to avoid data exchange between the server and clients, personal information can still be inferred by analyzing the partial models submitted by the clients [25]. Therefore, to further improve privacy, privacy-preserving techniques are generally employed [26], [27]. Differential privacy (DP) is one of the most representative approaches [28]. It consists of adding artificial noise to the partial model before sending it for aggregation. However, while higher noise will result in higher privacy, the global model performance would be worse and a longer convergence time of the training process is likely to be required. Hence, an adequate tradeoff is needed to preserve privacy while guaranteeing satisfying performance. Another approach that is emerging in FL consists of using homomorphic encryption [29]. It is an encryption technique that allows operations to be performed on encrypted data without having to decrypt them first. The result is provided in encrypted form and is equivalent, when decrypted, to that obtained by performing the same operation on plaintext data. In this way, clients encrypt partial models before sending them. The result of the aggregation is an encrypted global model that can be used by clients once decrypted.
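To make the DP option concrete, the following minimal Python sketch clips a client's weight update and perturbs it with Gaussian noise before it is shared for aggregation; the clipping bound and noise scale are illustrative values, not parameters prescribed by TruFLaaS.

```python
import numpy as np

def privatize_update(weights, clip_norm=1.0, sigma=0.1, rng=None):
    """Clip a client's weight update and add Gaussian noise (DP-style).

    `weights` is a list of NumPy arrays (one per layer). `clip_norm` and
    `sigma` are illustrative hyperparameters: a larger sigma gives stronger
    privacy but degrades the aggregated model and slows convergence.
    """
    rng = rng or np.random.default_rng()
    # Clip the global L2 norm of the update to bound each client's influence.
    total_norm = np.sqrt(sum(np.sum(w ** 2) for w in weights))
    scale = min(1.0, clip_norm / (total_norm + 1e-12))
    clipped = [w * scale for w in weights]
    # Add zero-mean Gaussian noise calibrated to the clipping bound.
    return [w + rng.normal(0.0, sigma * clip_norm, size=w.shape) for w in clipped]
```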
C. Authentication and Authorization
Participants of FLaaS need to interact with each other transparently and securely.However, the highly distributed operating environment of FL hinders the adoption of centralized identities to identify clients and regulate their participation in FL training.Centralized approaches own and control clients' data that could be also shared with other services without their awareness.In addition, storing sensitive information in a unique server increases the risk of data leakage.Due to these considerations, a framework that offers FLaaS has to implement decentralized authentication and authorization mechanisms.Decentralized identities are only under the control of the data owner that decides with whom to share its information.The authorized access to FLaaS can be regulated through decentralized identifiers (DIDs) and verifiable credentials (VCs) [30].A DID is a new type of identifier that enables verifiable, decentralized digital identity [31].VCs are claims made by an issuer that states something about a subject [32].DIDs and VCs enable claim-based identity, a method of authenticating entities in other systems.
Specifically, to use FLaaS, clients need VCs with the necessary permissions to join the desired FL processes.Moreover, since participants may have different requirements, the service provider has to guarantee that the issued VCs allow clients to only join the FL training that satisfies its demands.
D. Incentive and Penalization
Clients are always reluctant to share their data; hence, incentives are needed to attract sufficient distributed training data and computation power. Therefore, to effectively involve as many positive participants as possible, resulting in a high-quality global model, service providers have to implement mechanisms to reward clients according to their contributions [33]. It should be stressed that, to reward participants correctly and avoid low participation rates or financial losses, contributions have to be accurately evaluated. However, implementing incentives is not enough, because Byzantine participants may participate only to attempt to gain a reward. Thus, service providers must also determine penalization mechanisms to discourage spamming and incorrect computations, which could impact the quality of the global model.
III. TRUFLAAS ARCHITECTURE AND PRIMARY DESIGN CHOICES
This section describes the architecture of TruFLaaS whose main components are highlighted in Fig. 1, and their interactions in Figs. 2 and 3. Clients interact with the service that offers FLaaS.The blockchain, smart contracts, validation set, and DON are the architectural entities that enable achieving a TruFLaaS.A DON is a middleware layer that enables to deliver off-chain validation data to the blockchain in a secure and reliable manner.The use of blockchain and smart contracts improves the trust of the participants in the FL process.Specifically, a smart contract validates partial models on a validation set provided by the DON.Then, it aggregates clients' contributions weighting them according to their level of trustworthiness.Honest participants are promoted to participate through incentives, while malicious adversaries are discouraged by penalization mechanisms.TruFLaaS provides the flexibility needed to meet the demands of different clients.To the best of our knowledge, TruFLaaS is the first designed and implemented framework that provides trustworthiness in the FLaaS paradigm and performs validation of partial models in FL by leveraging smart contracts and a DON.
A. Service Provider
The service provider is the entity that offers FLaaS to its clients for tasks that are usually worthy for all participants.For instance, it is noteworthy that all the smart manufacturing enterprises that use certain machinery are willing to prevent its breakdown.However, although the goal may be common, clients still may have different requirements in terms of metrics, number of involved nodes, and aggregation strategy.For example, a client may want to obtain the global model as soon as possible, hence, it can determine a threshold of participants that must be satisfied to aggregate collected partial models.On the other hand, another client may not be interested in retrieving the global model in a short time window since it may prefer to wait longer in order to collect a higher number of contributions.Therefore, the service provider has to implement a flexible service capable of meeting different demands.Clients provide such information to the service provider which in turn uses them to implement a smart contract that realizes an FL process compliant with them.The service provider is also responsible for registering clients and providing them with a valid identifier to interact with the blockchain.This way guarantees that only an identifiable and authorized client participates in the proper FL training.
Furthermore, since the service is directly offered by the service provider on a task that is under its control (e.g., predictive maintenance of its machines), we can assume that it has a validation data set large enough to validate partial models [8].After each round of FL training, the service provider feeds a sample of this set into the blockchain.Such data are used to validate all the partial models before aggregating them.Given two distinct rounds, the validation sample has to be different to avoid possible model forging attacks.Otherwise, malicious participants could exploit it to build a crafted partial model that passes validation checks.This could result in the introduction of backdoors in the global model that can compromise its integrity and effectiveness.Thus, to securely and reliably inject data into the blockchain, the service provider has to count on a DON that allows it to accurately fetch data off-chain and deliver it to the blockchain.
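As an illustration of the round-by-round sampling just described, the sketch below partitions the provider's validation set into disjoint per-round samples; the partitioning scheme, sample size, and the don.publish call in the usage comment are assumptions introduced only for illustration.

```python
import numpy as np

def round_validation_samples(validation_set, num_rounds, seed=0):
    """Split the provider's validation set into disjoint per-round samples.

    Because each round receives indices that are never reused, a client that
    inspects the data published for one round cannot tailor a partial model
    to the sample that will be used in the next round.
    """
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(validation_set))
    for chunk in np.array_split(indices, num_rounds):
        yield [validation_set[i] for i in chunk]

# Hypothetical usage: one disjoint sample per FL round.
# for round_id, sample in enumerate(round_validation_samples(V, num_rounds=10)):
#     don.publish(round_id, sample)   # don.publish is a placeholder API
```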
1) Decentralized Oracle Network: Oracles are trusted third entities that serve as bridges between blockchains and external systems.They enable smart contracts to make computations leveraging inputs and outputs from the real world.An oracle is a software component that queries, verifies, and authenticates external data sources and then relays that information to the blockchain.As previously mentioned, TruFLaaS relies on oracles to provide the validation data set to the smart contract.These oracles are responsible for feeding the smart contract with the validation data set managed by the service provider.This approach helps to ensure the integrity and security of the validation process, as the oracles act as trusted intermediaries between the participants and the smart contract.However, the usage of a single oracle leads to a central point of failure, which could contradict the decentralization principles of blockchain technology and lead to security vulnerabilities.The issue is known in the literature as the "blockchain oracle problem" [34].To address this challenge, DONs have emerged as a solution [35].A DON employs a combination of multiple independent oracle node operators and multiple sources of reliable data to provide decentralized and secure access to offchain information.By leveraging the collective intelligence of multiple independent nodes, a DON helps to ensure the reliability and accuracy of data inputs into the blockchain network, while also maintaining the decentralization and security that blockchain promises.
In TruFLaaS, we use a DON as a middleware layer between the service provider and the blockchain.Without a DON, validation data have to be published on the blockchain.As a negative side effect, all participants may exploit published validation data for forging a partial model that, even if malicious, passes the validation phase.To ensure the integrity and security of the FL process, validation data should only be provided after the necessary requirements for performing aggregation have been met.For example, concerning the predictive maintenance use case, a malicious client may attempt to construct a partial model that achieves satisfying performance on the validation data although its model fails while estimating the remaining useful lifetime (RUL) when it falls under a certain threshold.
B. Client
Clients collect data provided by IoT and IIoT devices and use them to train local models, independently from the ML algorithm employed.Once the training is completed, the partial model is forwarded to a smart contract deployed on the blockchain.Thus, a client has a module to perform tasks related to the FL and hosts a node of the blockchain to join the service.Before joining an FL, the client has to express its willingness to join an FL.In case existing processes do not meet its demands, the client can provide new conditions to the service provider that meet them while setting up a novel FL training.
C. Blockchain Node
As discussed above, we leverage blockchain and smart contracts to improve the trustworthiness among unknown participants in the FL process.Therefore, the service provider has to deploy blockchain nodes.Clients may either run locally a blockchain node, as shown in Fig. 1, or connect to one of those deployed by the service provider.On the one hand, running a blockchain node can provide clients with direct visibility of FL processes.On the other hand, it comes with the potential downside of consuming a nonnegligible amount of client resources, which can be a challenge for clients with limited computing power or storage capacity.Therefore, it is important to carefully consider the tradeoffs between direct monitoring and resource consumption when deciding whether to run a blockchain node or connect to one of the proxy nodes and submit its partial model.This flexible configuration increases the ease of use of FL since each participant is not involved in the operations to manage a blockchain node, while it has to only train partial models and send them to the corresponding smart contract.
1) Validation and Aggregation:
The validation and aggregation of the global model are performed through a smart contract, which is implemented according to the client's requirements.The smart contract first verifies whether the client is authorized or not to join the desired FL process.Then, before validating and aggregating partial models, it waits until the training requirements are satisfied.For example, if the aggregation strategy foresees that all participants have to provide their contributions, the smart contract waits until all the clients have submitted their partial models and then it sends a request for the validation set for that round.As anticipated above, the validation set is provided by the service through a DON.All the partial models are validated against the collected validation set and, if they achieve satisfying performance on the selected threshold, are considered in the aggregation phase.
2) DID-Based Access Control System: We regulate access to the FLaaS through DIDs and VCs.Each client has only one digital identity, which is a DID issued by the service provider, but has multiple claims (i.e., VCs) that prevent misuse of services and Sybil attacks [36].Such identity information is not stored or controlled by other parties, rather they are kept in a wallet under the surveillance of the user, thus, improving both the control over the client's data and the degree of trust and security for external entities (e.g., apps or service providers) [37].Our DID-based access control system comprises the following actors.
IV. TRUFLAAS TRUSTWORTHINESS PROTOCOL
This section discusses the main phases that enable achieving trustworthiness in an FLaaS environment.Let us consider a service provider s that offers FL training f l ∈ F to its client set C to collaboratively train, according to given requirements, a global model mg l on a given task.To join the FLaaS offered by s, a client c i ∈ C must be already registered with that s.Once c i is registered with s, it owns a DID issued by s that enables it to join the FLaaS.
A. Starting and Joining FL Training
A client c i can either join an existing f l or start a new one if the requirements implemented by existing processes do not satisfy its demands.TruFLaaS employs a robust authorization workflow, illustrated in Fig. 2, that leverages DIDs and VCs to regulate all interactions.In the following, we detail the steps involved in initiating a new f l or becoming a member of an existing one.
1) c i presents its DID and provides s with the requirements that f l has to address.Table I summarizes the parameters that can be customized.2) In case there are no preexisting smart contracts sc l that meet the client's needs, s creates a novel sc l that verifies and aggregates partial models according to the specified criteria.However, if such sc l does exist, refer to step 3).3) s returns to c i a VC vc i,l signed with its DID that enables c i to interact with the deployed sc l .4) once the local training is completed, c i signs with its DID the previously obtained vc i,l generating a VP vp i,l .
Then, it provides such vp i,l and the partial model mp i,l to sc l .5) sc l verifies the validity of vp i,l through the DID of s, which has released the vc i,l , and subsequently grants or denies the participation to f l .It is worth noting that starting a new f l is an expensive operation that should be avoided if there is existing training that already satisfies the client's demands.
B. Trust Level
Each client c i ∈ C is assigned a trust level t i,l ∈ [0, 1]. As pointed out in [38], most trust models in P2P networks distinguish trust toward a peer into direct and indirect. Direct trust is based on previous interactions with that peer, while indirect trust is based on that peer's global reputation. We denote with TAR i,l the Transaction Acceptance Rate, defined as TAR i,l = TA i,l / T i,l, where TA i,l is the number of accepted transactions and T i,l is the total number of transactions made, both referred to the f l process. The Global Trust Value, denoted with GT i, is defined over the trust levels obtained by the client in past or current FL processes. Thus, leveraging the direct and indirect trust components (TAR i,l and GT i, respectively), we calculate the trust level t i,l. These values are updated at each round, through the specific smart contract (Fig. 3, step 11). At this point, we also consider whether to revoke the vc i,l from client c i in case its TAR i,l does not meet minimum requirements (i.e., pass a threshold). At each round, the average TAR μ TAR among all participants and the corresponding standard deviation σ TAR are calculated. Then, we estimate whether or not a client can, considering the number of remaining rounds, exceed the threshold value set to μ TAR − σ TAR. In case it fails, the corresponding vc i,l is revoked and the client is excluded from f l.
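The sketch below mirrors this trust bookkeeping in Python. Since the exact expressions for GT i and t i,l are not reproduced here, the sketch assumes that GT i is the mean of the trust levels earned in past or current FL processes, that direct and indirect trust are combined with equal weight, and that the revocation check projects a best case of one accepted transaction per remaining round; these are illustrative assumptions rather than the definitive TruFLaaS formulas.

```python
from statistics import mean, pstdev

def tar(accepted, total):
    """Transaction Acceptance Rate for one FL process: TAR = TA / T."""
    return accepted / total if total else 0.0

def global_trust(past_trust_levels):
    """Assumed indirect trust: mean of trust levels from past/current processes."""
    return mean(past_trust_levels) if past_trust_levels else 0.0

def trust_level(direct_tar, gt, alpha=0.5):
    """Assumed combination of direct (TAR) and indirect (GT) trust."""
    return alpha * direct_tar + (1 - alpha) * gt

def should_revoke(client_tar, all_tars, rounds_done, rounds_left):
    """Revoke the client's VC if it can no longer reach mu_TAR - sigma_TAR.

    The best-case projection assumes one transaction per round and that every
    remaining transaction is accepted.
    """
    threshold = mean(all_tars) - pstdev(all_tars)
    total_rounds = rounds_done + rounds_left
    best_possible = (client_tar * rounds_done + rounds_left) / total_rounds
    return best_possible < threshold
```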
C. Validation and Aggregation
After having started a novel f l or joined an existing one, a client is provided with the necessary vc i,l to contribute to that training.In the following, we detail the validation and aggregation workflow depicted in Fig. 3.These steps are repeated for each round k.
1) Each c i,l ∈ C l collects local data from the deployed IoT/IIoT devices. 2) Data are used to locally train a partial model mp k i,l. 3) Each c i,l provides mp k i,l and vp i,l, which is obtained by signing vc i,l through its DID, to sc l, which validates and aggregates all partial models MP k l. Algorithm 1 shows the algorithm implemented by the smart contract to validate and aggregate partial models. 4) sc l forwards vp i,l to a smart contract responsible for authorizing participants. 5) This smart contract will grant or deny access to f l. Specifically, it jointly verifies the validity of vp i,l and ensures that the embedded vc i,l has not been revoked. 6) Before aggregating all partial models MP k l, sc l waits until the aggregation requirements are met and validates MP k l against a validation set V k l ⊂ V l provided by a DON d. 7) and 8) sc l requests from d the validation sample V k l for the current round; this sample must differ from those used in previous rounds, otherwise c i,l could craft a partial model that achieves satisfying performance on a known V l. 9) d returns V k l to sc l. 10) sc l validates MP k l against V k l. To be accepted, a partial model must achieve performance equal to or better than a specific threshold. We employ the interquartile range (IQR) method for detecting outliers. This method does not use the median, mean, and standard deviation, being more robust to extremely large or small values. The IQR is calculated as Q3 − Q1, where Q1 is the first quartile of the data, and Q3 is the third quartile. To detect outliers, we calculate the threshold as Q1 − 1.5 * IQR. Any partial model whose score falls below this value is considered an outlier and consequently discarded. 11) According to the collected metrics, sc l sends the updated reputations to a smart contract employed to trace the t i of each c i. 12) This smart contract calculates the trust levels T k of each c i,l and returns them to sc l. 13) sc l aggregates all validated mp k i ∈ MP k l, weighting them according to the corresponding t k i ∈ T k. 14) The global model mg k l is provided to all c i,l. It is worth outlining that transparent collaboration among smart contracts plays a key role in achieving a TruFLaaS. In particular, such a design choice is justified by the following considerations.
1) All MP k l are validated and aggregated without any biases, guaranteeing the correctness of gm k l. 2) Only c i,l satisfying the authorization process can join f l. 3) Reputations of c i,l are calculated by a smart contract using as input the output of sc l. Thus, we ensure the correctness of t i for each participant.
Algorithm 1: Smart Contract - Validation and Aggregation
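The actual Algorithm 1 is implemented as a NodeJS smart contract and its listing is not reproduced here; the following Python sketch only illustrates the core logic it embodies, i.e., IQR-based rejection of outlier partial models followed by trust-weighted averaging, with function and variable names chosen for illustration.

```python
import numpy as np

def iqr_filter(scores):
    """Return the indices of partial models whose validation score is not an
    outlier. Scores are treated as 'higher is better'; the rejection threshold
    is Q1 - 1.5 * IQR, as described in the validation step above."""
    q1, q3 = np.percentile(scores, [25, 75])
    threshold = q1 - 1.5 * (q3 - q1)
    return [i for i, s in enumerate(scores) if s >= threshold]

def aggregate(partial_models, scores, trust_levels):
    """Validate partial models and aggregate the accepted ones, weighting each
    contribution by the submitting client's trust level."""
    accepted = iqr_filter(scores)
    weights = np.array([trust_levels[i] for i in accepted], dtype=float)
    weights = weights / weights.sum()
    # Each partial model is assumed to be a list of NumPy arrays (one per layer).
    layers = zip(*[partial_models[i] for i in accepted])
    global_model = [np.tensordot(weights, np.stack(layer), axes=1) for layer in layers]
    return global_model, accepted
```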
D. Incentives and Penalization
To start a novel f l or join an existing one, clients use tokens, which are by design the natural incentive mechanism for blockchain-based platforms. Tokens are purchased from s and earned by clients through positive participation. Moreover, in order to participate in an already settled f l, tokens are also required to discourage malicious behavior. Participants who provide incorrect contributions are penalized by having a portion of their tokens withdrawn in proportion to their contribution quality score (GT i). This ensures that all participants have a vested interest in contributing high-quality work and helps maintain the integrity of the f l. In more detail, at the time of the creation of a new f l, the budget b l (i.e., tokens) is locked up into the corresponding sc l by the c i,l that initialized that f l. This budget represents an incentive to promote participation in a freshly started f l. Indeed, at the end of f l, it will be distributed among the participants C l according to the corresponding TAR i,l. The reward r i,l assigned to each c i is calculated by sharing b l in proportion to TAR i,l, that is, r i,l = b l · TAR i,l / Σ c j,l ∈ C l TAR j,l. Such an incentive scheme fairly distributes b l according to the contributions of all the c i,l ∈ C l. For each c i,l, the contribution corresponds to its TAR i,l. It is clear that, given c i,l, c j,l ∈ C l, and TAR i,l > TAR j,l, it follows that r i,l > r j,l. Furthermore, to deter malicious behavior, before joining f l, each c i,l has to deposit an amount of tokens d i,l bounded by b l (1/GT i). Thus, the higher a client's reputation, the less it will have to deposit, and vice versa. This amount will be fully returned to the participant c i,l at the end of f l only if the TAR i,l is greater than a threshold. Otherwise, the amount returned will be equal to d i,l TAR i,l. Such a mechanism is a strong deterrent to voluntarily submitting malicious or inaccurate models, as it would result in an economic loss of tokens.
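A minimal sketch of this token flow is given below: the locked budget b l is shared in proportion to each client's TAR, the required deposit shrinks as GT i grows, and a low TAR forfeits part of the deposit; the proportional-split expression follows the description above and should be read as an illustrative assumption.

```python
def rewards(budget, tars):
    """Distribute the locked budget b_l proportionally to each client's TAR."""
    total = sum(tars.values())
    return {c: budget * t / total for c, t in tars.items()} if total else {}

def required_deposit(budget, global_trust):
    """Deposit bounded by b_l * (1 / GT_i): better-reputed clients lock fewer tokens."""
    return budget / max(global_trust, 1e-9)

def returned_deposit(deposit, tar_value, threshold):
    """Return the full deposit only if TAR exceeds the threshold; otherwise d_i,l * TAR_i,l."""
    return deposit if tar_value > threshold else deposit * tar_value
```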
V. EVALUATION RESULTS
To validate our proposal and compare it with the existing literature, we consider predictive maintenance and botnet attack detection use cases, which are of high interest for industrial deployment environments and call for data collection from multiple distributed sources.We first describe the implementation setup for our experiments and the employed data sets, then we present the details of the performed experiments, and, finally, we discuss the performance indicators that we have experimentally measured, by drawing some related considerations.
A. Implementation Setup
TruFLaaS can be integrated into any blockchain infrastructure that supports smart contracts and DONs. For instance, for the following assessment and evaluation, we have made TruFLaaS work with Hyperledger Fabric, an open-source, modular, and extensible framework for deploying permissioned blockchains. Fabric-based applications are enterprise-grade and offer a high level of security, scalability, and performance [39]; in particular, Fabric smart contracts are written in general-purpose languages, such as Java, Go, and NodeJS. To implement the proposed validation protocol, we have implemented our smart contracts in NodeJS by using TensorFlow libraries. This choice is motivated by the need to recreate an ML model directly in the smart contract: TensorFlow is one of the few frameworks that implement ML also in JavaScript [40]. Concerning the DON, we have used Provable, whose only requirement is to deploy a specific smart contract that acts as a connector between the blockchain and the outside world. Our experiments were run on a Python-simulated FL framework.
B. Data Set
For the predictive maintenance use case, we selected the NASA Turbofan Jet Engine data set [41], which is a widely accepted and well-known baseline data set from NASA for engine degradation modeling. It enables estimating the RUL of the considered engine; the data set was generated through the simulation of the commercial modular aero-propulsion system. Specifically, it comprises four subdatasets, with temporal signals from 21 sensors (e.g., temperature and fuel flow ratio); each of the subdatasets considers different combinations of operational conditions and fault modes. To employ the data set effectively, we first performed a data preprocessing step to remove features with nonconsistent values. In addition, since the training set does not present RUL values but only the number of time cycles of engine usage, we had to calculate them manually. For the purpose of the following evaluation, we assume that RUL decreases linearly over time so that it would have a value of 0 at the last time cycle of the engine: for each engine, RUL is calculated as max_time_cycle − time_cycle; moreover, as usual for regression problems, we have normalized the input values. Finally, we have split the data set into training and testing subsets for 100 engines to replicate the behavior of an FL network during the training phase.
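A condensed Python sketch of this preprocessing is reported below; the column names (unit_id, time_cycle, and the sensor columns) are assumptions about the CSV layout and may differ from the actual files.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def preprocess_turbofan(df):
    """Derive RUL labels and normalize sensor features.

    Assumes `df` has a `unit_id` column, a `time_cycle` column, and sensor
    columns; features whose value never changes are dropped as uninformative.
    """
    df = df.copy()
    # RUL decreases linearly and reaches 0 at the engine's last recorded cycle.
    df["max_time_cycle"] = df.groupby("unit_id")["time_cycle"].transform("max")
    df["RUL"] = df["max_time_cycle"] - df["time_cycle"]
    sensor_cols = [c for c in df.columns
                   if c not in ("unit_id", "time_cycle", "max_time_cycle", "RUL")]
    sensor_cols = [c for c in sensor_cols if df[c].nunique() > 1]
    df[sensor_cols] = MinMaxScaler().fit_transform(df[sensor_cols])
    return df[["unit_id"] + sensor_cols + ["RUL"]]
```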
Concerning the botnet attack detection use case, we employed the N-BaIoT data set [42], which contains real traffic data gathered from nine commercial IoT devices authentically infected by Mirai and BASHLITE.Malicious traffic is divided into multiple different attacks (e.g., network scanning and firmware), thus, enabling us to use it for multiclass classification: 10 classes of attacks, plus 1 class of benign.To prepare the data for training, we first used a Label Encoder to convert the target value for each sample into a numerical value.The target value indicates the type of network traffic, either benign or belonging to one of the ten possible attacks.Next, we applied one-hot encoding to these values, resulting in a vector for each sample.Additionally, we normalize each feature by using a MinMaxScaler, which scales the data in the range of [0, 1].To minimize the number of features, we implemented a feature selection mechanism based on an ExtraTreesClassifier. Tree estimators are utilized to compute feature importance through impurity calculations, which can subsequently be used in combination with the SelectFromModel meta-transformer to eliminate irrelevant features.
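The corresponding sketch for the N-BaIoT pipeline is shown below: label encoding of the 11 traffic classes, one-hot targets, MinMax scaling, and tree-based feature selection; the data-frame column names, including the target column, are illustrative assumptions.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from tensorflow.keras.utils import to_categorical

def preprocess_nbaiot(df, target_col="traffic_type"):
    """Encode the 11-class target, scale features to [0, 1], and keep only the
    features that an ExtraTreesClassifier considers informative."""
    y = LabelEncoder().fit_transform(df[target_col])      # class indices 0..10
    y_onehot = to_categorical(y, num_classes=11)          # one-hot targets
    X = MinMaxScaler().fit_transform(df.drop(columns=[target_col]))
    selector = SelectFromModel(ExtraTreesClassifier(n_estimators=100, random_state=0))
    X_selected = selector.fit(X, y).transform(X)          # impurity-based selection
    return X_selected, y_onehot
```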
C. Experiments
To show the effectiveness of our solution, we conducted several experiments considering the application domains of predictive maintenance and botnet attack detection. In each of them, we considered both honest clients with a limited data set and malicious nodes, which aim to either disrupt the training process or introduce backdoors within the global model. We compare TruFLaaS against both the conventional baseline (i.e., no validation mechanisms) and TrustFed [21], i.e., a framework for fair and trustworthy FL. For the sake of fairness, in the predictive maintenance use case, we use the same FL model, configured as follows. The input layer takes input_size × 24 input neurons for each window. Two middle dense layers of 24 × 24 neurons follow, while the output dense layer is mapped on 24 × 1 neurons. The ReLU activation function is used for all the layers, and weights are adjusted through the stochastic gradient descent (SGD) optimizer. Mean absolute percentage error (MAPE) is used to evaluate the accuracy of each model at the end of the final aggregation, while we calculated the mean absolute error (MAE) to validate partial models and identify the most beneficial ones for training. However, TrustFed was not thoroughly validated against multiple data sets and models. Therefore, for the botnet attack detection scenario, we developed a novel FL model consisting of four layers. The first two layers are Dense layers with ReLU activation functions, comprising 64 and 32 neurons, respectively, followed by a Dropout layer with a rate of 0.2 to minimize overfitting. The final layer is a Dense layer with 11 neurons, one for each class to be predicted, utilizing a softmax activation function. Since the problem is a multiclass classification, we used categorical cross-entropy as the loss function. Moreover, since this use case does not aim to solve a regression problem, in Table II, we report the final evaluation metrics for the experiments conducted.
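The two models described above can be sketched in Keras as follows; the layer shapes, activations, dropout rate, loss for the botnet model, and SGD optimizer for the predictive maintenance model come from the text, while the loss used to train the RUL model and the optimizer for the botnet model are assumptions made here for completeness.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import SGD

def build_rul_model(input_size: int) -> Sequential:
    """Predictive maintenance: input_size x 24, 24 x 24, 24 x 24, 24 x 1 layers,
    ReLU everywhere, SGD optimizer; the MAE loss is an assumed choice."""
    model = Sequential([
        Dense(24, activation="relu", input_shape=(input_size,)),
        Dense(24, activation="relu"),
        Dense(24, activation="relu"),
        Dense(1, activation="relu"),
    ])
    model.compile(optimizer=SGD(), loss="mae",
                  metrics=["mean_absolute_percentage_error"])
    return model

def build_botnet_model(num_features: int) -> Sequential:
    """Botnet detection: Dense(64)-Dense(32)-Dropout(0.2)-Dense(11, softmax),
    categorical cross-entropy loss; the SGD optimizer is an assumed choice."""
    model = Sequential([
        Dense(64, activation="relu", input_shape=(num_features,)),
        Dense(32, activation="relu"),
        Dropout(0.2),
        Dense(11, activation="softmax"),
    ])
    model.compile(optimizer=SGD(), loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```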
1) Heterogeneous Data Distribution: First, we consider a scenario where the distribution of data among honest clients is heterogeneous. Hence, some nodes have more or fewer data samples than others. Having clients with heterogeneous data distribution is one of the major cases that justifies the adoption of FL: this situation is quite common in industrial environments since the size of enterprises directly affects the amount of data generated. We run this type of experiment while varying the number of participants and with different percentages of nodes having augmented data. 2) Heterogeneous Data Distribution on Rare Cases: This experiment is a special case of the previous set. In FLaaS, clients may be interested only in a specific subtask (e.g., RUL under a given threshold or a specific botnet attack). We focus on a deployment environment where some nodes have no data on a particular class of events, which we define as rare cases. For example, there may be smart manufacturing enterprises that have not experienced the breakdown of specific machinery yet or have never been affected by a certain attack. In this experiment, for the predictive maintenance use case, we discriminate the data records according to their RUL values. In particular, we tag as rare records all the samples inside a low percentile of a pseudo-normal distribution, by separating the low RUL values from the others. We identify, through statistical analysis, a subset of the data containing the records with low RUL. To do that, we calculate the 10th percentile values on both the training and the validation sets. Since the NASA data do not follow a normal distribution, we apply a standardization process so that the z-score table can be used to calculate exact areas as for a normally distributed population. Mathematically, the standardization operation is described by the formula z = (x − μ)/σ, where z is the z-score value, x is the observation value, μ is the mean of the distribution, and σ is the standard deviation of the distribution. Figs. 4 and 5 illustrate the obtained percentiles on the training and validation data sets, respectively. For the botnet attack detection use case, a multiclass classification approach is used. Thus, we consider the two attack classes with the lowest occurrence as rare cases. Specifically, as shown in Fig. 6, such classes are represented by junk and scanning attacks. Our experiments involve varying the number of nodes without rare cases and the strategy for discarding nodes. We test four strategies using two validation sets: one with only rare cases (Rares) and another with the same data distribution as the global test set (Overall). The first strategy entails discarding nodes that perform poorly on the validation set that contains only rare cases. The second strategy involves discarding nodes that perform poorly on the second validation set. The third strategy requires discarding nodes that perform poorly on both the first and second validation sets. Finally, the fourth strategy involves discarding nodes that perform poorly on either the first or second validation set.
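A simplified sketch of the rare-case threshold computation and of the four discard strategies follows; the per-node error dictionaries, the error cut-off, and the variable names are illustrative placeholders rather than the exact quantities used in the experiments.

```python
import numpy as np

def rare_case_threshold(rul_values: np.ndarray, percentile: float = 10.0) -> float:
    """Standardize the RUL values with z = (x - mu) / sigma and return the
    raw-RUL value that corresponds to the requested lower percentile."""
    mu, sigma = rul_values.mean(), rul_values.std()
    z = (rul_values - mu) / sigma
    return float(np.percentile(z, percentile) * sigma + mu)

def discard_nodes(err_rares: dict, err_overall: dict, strategy: str, cut: float) -> set:
    """Return the node ids to discard given per-node validation errors on the
    rare-only set (Rares) and on the overall-distribution set (Overall)."""
    bad_rares = {n for n, e in err_rares.items() if e > cut}
    bad_overall = {n for n, e in err_overall.items() if e > cut}
    if strategy == "rares":         # strategy 1: poor on the rare-only set
        return bad_rares
    if strategy == "overall":       # strategy 2: poor on the overall set
        return bad_overall
    if strategy == "both":          # strategy 3: poor on both sets
        return bad_rares & bad_overall
    return bad_rares | bad_overall  # strategy 4: poor on either set
```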
3) Model Forging Attack: Ensuring the security and integrity of FL platforms is a major concern due to their widespread adoption in various domains. In model forging attacks, a malicious participant could craft a partial model to introduce backdoors into the global model or simply disrupt the training process. To assess the resilience of TruFLaaS against this class of attacks, we conducted several experiments with different percentages of malicious nodes. In these experiments, as done in TrustFed, we simulated malicious nodes' behavior by performing training on data containing random noise to corrupt the model.
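The malicious behavior simulated in these experiments can be reproduced, for example, by replacing a client's local training data with random noise before its local training round; the noise distribution and scale below are arbitrary illustrative choices.

```python
import numpy as np

def corrupt_training_data(x: np.ndarray, y: np.ndarray, noise_scale: float = 1.0):
    """Model forging attacker: train on random-noise inputs and labels so the
    resulting partial model disrupts (or backdoors) the aggregation."""
    rng = np.random.default_rng()
    x_bad = rng.normal(0.0, noise_scale, size=x.shape)
    y_bad = rng.normal(0.0, noise_scale, size=y.shape)
    return x_bad, y_bad
```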
D. Results and Associated Considerations
The experimental results reported in this section show that our solution outperforms conventional baselines and TrustFed under all the considered circumstances.
1) Heterogeneous Data Distribution: TrustFed only aggregates partial models whose accuracy falls within the interval defined by the mean plus or minus the standard deviation. However, the TrustFed approach neglects clients with a heterogeneous data distribution that results in high-quality partial models (which can achieve performance results that exceed the upper bound of the interval). TruFLaaS, instead, discards only the partial models whose performance is below its threshold; in the aggregation phase, it involves all the partial models whose accuracy is greater than the mean minus the standard deviation. Figs. 7 and 8 show how TruFLaaS outperforms TrustFed by better identifying the contributions of clients with significantly larger data sets. TruFLaaS not only reaches the target accuracy faster than the others but also gains a greater advantage as the number of nodes with augmented data increases.
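The difference between the two selection rules can be sketched as follows; the accuracy array stands for the per-round validation accuracies of the submitted partial models, and the trust-based weighting that TruFLaaS applies after selection is omitted here.

```python
import numpy as np

def trustfed_select(acc: np.ndarray) -> np.ndarray:
    """TrustFed-style rule: keep only models whose accuracy lies within
    [mean - std, mean + std]; high-performing outliers are dropped too."""
    mu, sigma = acc.mean(), acc.std()
    return (acc >= mu - sigma) & (acc <= mu + sigma)

def truflaas_select(acc: np.ndarray) -> np.ndarray:
    """TruFLaaS-style rule: keep every model above mean - std, so
    high-performing models are never discarded."""
    mu, sigma = acc.mean(), acc.std()
    return acc >= mu - sigma
```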
2) Heterogeneous Data Distribution on Rare Cases: In this type of experiment, for the sake of fairness, we have not compared TruFLaaS with TrustFed because the latter does not make any distinction on the rare cases and our approach would certainly perform better. Figs. 9 and 10 highlight the experiment results when varying the number of nodes with no rare data and the strategy used to discard nodes with poor performance. We can observe that in the predictive maintenance scenario the first two strategies, i.e., discarding nodes that perform poorly on the validation set comprising only rare cases and discarding nodes that perform poorly on the validation set with the same distribution as the test set, perform better than the other aggregation strategies considered. In the botnet attack scenario, we can see the resilience of our solution to an increasing number of nodes without rare data. The accuracy is not negatively affected by the higher number of heterogeneous nodes.
3) Model Forging Attack: In the predictive maintenance use case, since the FL configuration employed by TrustFed was not leading to acceptable results in comparison with our solution, we increased the number of FL rounds to 100. Figs. 11 and 12 sharply outline how TruFLaaS is more robust than TrustFed against model forging attacks as the number of malicious nodes varies. This is mainly motivated by the fact that TrustFed, by using the mean and standard deviation to detect outliers, is less accurate in the presence of excessively large or small values. For example, there might be an outlier with such poor performance that it significantly shifts the total mean. In this case, TrustFed ends up accepting outliers with performance that would not be accepted under nominal conditions. Moreover, since the partial models are not weighted, aggregating an outlier can completely ruin the training performed up to that point. It is also interesting to observe that, in the predictive maintenance experiment, TrustFed performs worse than the baseline with 0 malicious nodes. This could be caused by the fact that TrustFed also removes nodes that reach performance much greater than the average.
On the contrary, TruFLaaS employs a more robust outlier detection algorithm, achieving very similar performance in all cases. Moreover, thanks to the weights based on the number of accepted transactions (i.e., the level of trust), sporadic errors during the validation process have very little influence on the global model and do not disrupt the whole training process. These results provide valuable insights into the robustness of our proposal and help ensure that it can effectively protect against model forging attacks.
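A sketch of the trust-weighted aggregation is given below; the exact trust bookkeeping is handled by the smart contracts, so here each client's weight is simply taken as proportional to its number of accepted transactions.

```python
import numpy as np

def aggregate_weighted(models: list, accepted_tx: list) -> list:
    """Weighted FedAvg: `models` is a list of partial models (each a list of
    layer weight arrays); each contributes proportionally to its owner's
    number of accepted transactions (i.e., its level of trust)."""
    weights = np.asarray(accepted_tx, dtype=float)
    weights = weights / weights.sum()
    return [
        sum(w * layer for w, layer in zip(weights, layers))
        for layers in zip(*models)  # iterate layer-wise across clients
    ]
```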
VI. RELATED WORK
One of the biggest challenges facing the widespread adoption of FL in real-world scenarios is the lack of trust among unknown participants. However, in recent years, there has been a growing interest in designing novel solutions that can increase the trustworthiness and fairness of FL environments. Many of these proposals are made possible by blockchain technology, which, in some cases, only ensures the correctness of the generated global model by replacing the centralized server (as in [47] and [48]). In this section, we review some of the most relevant works that aim to enhance the trustworthiness of FL.
Table III provides an overview of such approaches and their main limitations, while Table IV summarizes their key features.
A. Accountability and Fairness
Blockchain technology can be utilized as a reliable data source that offers all participants a consistent and transparent view of the stored data. For this purpose, Lo et al. [23] employ blockchain to enable accountability and improve fairness in FL systems. Data-model provenance is ensured through the blockchain, which stores the hashed values of the data and of the local and global model versions. To increase fairness, the authors present an algorithm that dynamically samples training data from poorly represented classes according to the inverse of the weight distribution of the data set used for testing. Their approach can contribute to building fairer models in a scenario where each client trusts the others. However, they do not ensure that malicious participants are excluded from the model aggregation, which is, as discussed above, one of the major concerns in an FLaaS context. Abdel-Basset et al. [24] presented Fed-Trust, a blockchain-orchestrated edge intelligence framework for trustworthy cyberattack detection in IIoT. However, their approach does not bring remarkable novelties in terms of validation of partial models, since the verification phase consists in allowing fog nodes to collect the block comprising the partial models from all contributors to calculate the global model. In this work, trustworthiness is intended as one of the main targets of cyberattacks against the IIoT, which can be protected through a distributed temporal convolution generative network.
B. Validation Mechanisms
The absence of adequate validation mechanisms leads to the aggregation of any submitted model, leaving room for the introduction of malicious backdoors. Consequently, many papers in the literature have been dedicated to introducing novel validation methods. TrustFed [21] is a blockchain-based framework for fully decentralized cross-device FL systems. It provides fairness by removing malicious participants from the training distribution through statistical outlier detection techniques, and it employs blockchain and smart contracts to maintain participating devices' reputations. However, this approach also removes outliers that may correspond to clients performing significantly better, for instance because their local training set is bigger than those of all the other participants. Experimental results demonstrate that TruFLaaS outperforms TrustFed under different circumstances. Chen et al. [22] presented a blockchain-based decentralized FL framework that validates partial models through a decentralized validation mechanism. During each round of the FL training, a set of devices is selected to act as validators. All the local updates are validated by all of them using their local data sets. After having observed the experimental results, a validator casts its vote on the legitimacy of each model. Collecting votes from multiple validators enables removing malicious devices associated with a negatively voted model. Such an approach improves robustness since validation can still be performed properly even when several validators are compromised.
Li et al. [8] proposed a dynamic verification strategy to decrease the influence of abnormal clients on the global model. Similarly to our work, the authors use a secondary server-side data set to validate the contribution of each client. Only the partial models that achieve satisfying accuracy are involved in the model aggregation process. However, their approach still suffers from all the weaknesses related to using a centralized server for model aggregation. Recently, there has been a rise in novel approaches that ensure the correctness of partial model aggregation without depending on blockchain and smart contracts. However, these new techniques are still vulnerable to a single point of failure, which can compromise the entire system. Wang et al. [45] proposed PTDFL, a decentralized FL scheme that prioritizes privacy and trustworthiness. Their method employs a local proof mechanism to verify that the partial model submitted by the client is the genuine output of its training. However, it is worth noting that a malicious model could still be aggregated if the corresponding proof is correct. The previous two approaches do not rely on blockchain technology, which means they are exposed to the drawbacks associated with utilizing a centralized server for model aggregation. Gao et al. [46] developed SVeriFL, a novel protocol based on BLS and multiparty security that enables verifying the integrity of the partial models provided by clients and the correctness of their aggregation. However, like the previous works, the authors do not prioritize the quality of the submitted partial models. Furthermore, their protocol relies on a trusted authority, which introduces an additional element of centralization and potential vulnerability.
C. Reputations and Weighted Contributions
To improve trustworthiness among participants, clients can be selected according to their reputations, and partial models can be weighted based on a trust score associated with each participant. Kang et al. [43] proposed to evaluate reputation, stored on a consortium blockchain, when selecting the participants of an FL training. According to the authors, reputation is measured from each candidate's training task completion history and its past record of good or unreliable behavior. Their approach specifically targets mobile devices. Indeed, it may not apply to scenarios with a restricted number of clients, which make it impractical to discard nodes in advance. In these cases, only a small number of participants may satisfy the threshold, making FL pointless. When the number of clients is not huge, all the contributions can be relevant to improving the quality of the global model.
Cao et al. [44] introduced FLTrust, a Byzantine-robust FL method that protects against malicious attacks by training a server model, as if it were a client, on a small, manually collected clean training data set. FLTrust assigns a trust score to each local model update based on its similarity with the server model update. Such trust scores are then used for weighting local model updates and generating the global model. Beyond the weaknesses of the centralized server, FLTrust heavily depends on the training data set provided to the server. In addition, honest clients that perform much better could be assigned a low trust score.
VII. CONCLUSION
Service providers can simplify and promote the use of FL by offering it as a service. FLaaS significantly reduces the overhead and technological knowledge required to develop and tune algorithms and tools for collaboratively training a global model on a shared task. However, despite the discussed advantages, designing and developing effective FLaaS still raises several technical challenges that have to be properly addressed.
In particular, the lack of trustworthiness among unknown participants is one of the major factors that hinder the adoption of FL in real-world scenarios. To overcome this concern, this article proposes a novel blockchain-based architecture and approach that transparently validates partial models by leveraging blockchain, smart contracts, and a DON. In particular, before being aggregated, partial models are validated against a sample of the validation set, different for each round, provided by the service provider through a DON. TruFLaaS uses smart contracts to track the level of trust of each participant, which is used to weigh the contribution of each partial model during the aggregation phase. The extensive experimentation work described in this article shows that TruFLaaS outperforms conventional baselines and the state-of-the-art literature for the detection of malicious nodes under different relevant families of use cases, i.e., when forging an ad-hoc model to pass the validation process, when discarding low-quality models on rare data, or when making a global model converge while varying the number of malicious nodes.
Manuscript received 15 November 2022; revised 12 April 2023; accepted 25 May 2023. Date of publication 5 June 2023; date of current version 7 December 2023. This work was supported in part by the SERICS Project through the NRRP MUR Program, which is funded by the EU-NGEU under Grant PE00000014. (Corresponding author: Carlo Mazzocca.)
7) d requests V_l^k from s; 8) s provides V_l^k to d. Given two rounds j and z, where j < z, V_l^z must differ from V_l^j.
Fig. 7. Predictive maintenance: heterogeneous data distribution-Accuracy comparison of different node selection strategies with (a) 0 nodes having augmented data, (b) 10 nodes having augmented data, and (c) 25 nodes having augmented data.
Fig. 10. Botnet attack detection: heterogeneous data distribution on rare cases-Accuracy comparison of different node selection strategies with (a) 0 nodes without rare data, (b) 10 nodes without rare data, and (c) 25 nodes without rare data.
Claim Verifier: Verifying claims is implemented through a smart contract. A client signs a verifiable presentation (VP), which embeds a VC, with its DID and sends it to the claim verifier, which checks whether the client owns a valid VC to participate in the FL training for which it is applying.
TABLE II: BOTNET ATTACK DETECTION-TRUFLAAS EXPERIMENTS RESULTS
TABLE III: COMPARISON OF RELATED WORK APPROACHES AND LIMITATIONS
TABLE IV: COMPARISON OF RELATED WORK BASED ON THEIR FEATURES | 13,262.2 | 2023-12-15T00:00:00.000 | [
"Computer Science",
"Education",
"Engineering"
] |
GPU-Based Cloud Service for Smith-Waterman Algorithm Using Frequency Distance Filtration Scheme
As the conventional means of analyzing the similarity between a query sequence and database sequences, the Smith-Waterman algorithm is feasible for a database search owing to its high sensitivity. However, this algorithm is still quite time consuming. CUDA programming can improve computations efficiently by using the computational power of massive computing hardware such as graphics processing units (GPUs). This work presents a novel Smith-Waterman algorithm with a frequency-based filtration method on GPUs, rather than merely accelerating the comparisons while still expending computational resources on unnecessary comparisons. A user-friendly interface is also designed for potential cloud server applications with GPUs. Additionally, two data sets, H1N1 protein sequences (query sequence set) and the human protein database (database set), are selected, followed by a comparison of CUDA-SW and CUDA-SW with the filtration method, referred to herein as CUDA-SWf. Experimental results indicate that reducing unnecessary sequence alignments can improve the computational time by up to 41%. Importantly, by using CUDA-SWf as a cloud service, this application can be accessed from any computing environment of a device with an Internet connection, without time constraints.
Introduction
The Smith-Waterman (SW) algorithm searches a sequence database to identify the similarities between a query sequence and subject sequences [1,2]. However, this algorithm is prohibitively expensive in terms of time and space complexity; the exponential growth of sequence databases also poses computational challenges [3]. Owing to the computational challenges of the Smith-Waterman algorithm, some faster heuristic solutions (e.g., FASTA [4] and BLAST [5,6]) have been devised to reduce the time complexity, at the cost of degraded sensitivity of the alignment results.
The feasibility of using massive computational devices to enhance the performance of many bioinformatics programs has received considerable attention in recent years, especially many-core devices such as FPGAs [7][8][9], Cell/BEs [10][11][12], and GPUs [13]. The recent emergence of GPUs has led to devices with hundreds of cores whose computational power exceeds one TFLOPS, and NVIDIA released the CUDA programming environment [14], which allows programmers to use a common programming language (e.g., C/C++) to develop GPU-related applications that enhance the computing performance. Additionally, the feasibility of using GPUs to accelerate the SW database search problem has been widely studied; the pioneering work was proposed by Liu et al. [15], who developed the SW algorithm using OpenGL for general-purpose GPUs (GPGPU). Following the development of the CUDA programming model, SW-CUDA [16], the CUDA-based SW solution on GPUs, could run on multiple G80 GPUs. However, SW-CUDA distributed the SW algorithm among multicore CPUs and GPUs, making it highly CPU-dependent and unable to utilize the entire computational power of GPUs. Thereafter, CUDASW++ 1.0 [17], designed for multiple G200 GPUs, deployed all of the SW computations on GPUs to fully utilize their power. In contrast to previous works, CUDASW++ 2.0 [18] addresses the SW database search problem and optimizes the SIMT abstraction in order to outperform CUDASW++ 1.0. The previous research significantly improves the performance of the SW algorithm; in particular, CUDASW++ 2.0 significantly reduces the search time in protein database searches.
However, when using a sequence to query a protein database, biologists do not require all results between the query sequence and all database sequences; they are only interested in the results whose similarity exceeds a certain level. Therefore, many computations can be omitted when performing protein database searches if the minimal difference of all alignment combinations can be known in advance, allowing us to omit the extremely different combinations and retain only the promising combinations for the SW alignment. Related research in recent years has heavily focused on multicore and multicomputer systems. Having received considerable attention in bioinformatics research, cloud computing integrates a large amount of computational power and storage resources, and it provides different services through a network, such as infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). With these cloud services, users can access the desired services without location constraints. Therefore, a cloud service focuses on acquiring services via a remote connection through a network, such as the Amazon EC2 service, which is an IaaS and provides various virtual machines with operating systems for users. Another service, the Google App Engine, is a PaaS cloud computing platform for developing and hosting web applications in Google-managed data centers. Other services using the SaaS model include G-mail and Dropbox. This cloud computing platform can be viewed as an extended SaaS concept, which refers to customized software made available via the Internet. Thus, no real computing environment needs to be set up on a local client, since these software applications do not require each end user to manually download, install, configure, or run the software on their own computing environments. By using cloud services, users can even use a mobile device to complete tasks that previously could only be completed on a PC.
This work implements an efficient CUDA-SW program for a SW database search on GPUs. A real-time filtration method based on the frequency distance [19], referred to hereinafter as CUDA-SWf, is also designed to reduce unnecessary computations efficiently. Before the database search, a frequency vector is constructed for the query sequence and the database sequences. Frequency distances are then computed on GPUs for all combinations between the query and database sequences. The frequency distance refers to the minimum difference between the query and a database sequence; recording it allows us to determine which combinations should proceed to a SW alignment, whose results are then output. Additionally, a friendly user interface (UI) is designed for a potential cloud server with GPUs. The cloud service is combined with GPU computing: the SaaS concept is adopted, and a UI is provided to access the service through a network. On our test data sets, CUDA-SWf can reduce the computational time by up to 41% compared with CUDA-SW. Moreover, CUDA-SWf is about 76x faster than its CPU version.
The rest of this paper is organized as follows. Section 2 briefly describes the preliminary concepts for SW algorithm, CUDA programming model, and related works for SW algorithm on GPUs. Section 3 then introduces the method of CUDA-SW algorithm and the implementations of the frequency filtration method. Next, Section 4 summarizes the experimental results. Conclusions are finally drawn in Section 5, along with recommendations for future research.
Related Works
The SW algorithm is designed to identify the optimal local alignment between two sequences by estimating the similarity score of an alignment matrix. The computation is based on a scoring matrix such as BLOSUM62 [20] or PAM250 [21] and on a gap-penalty function. Given two sequences S1 and S2 whose lengths are l1 and l2, respectively, the SW algorithm calculates the similarity score through the recurrences

H(i, j) = max{0, E(i, j), F(i, j), H(i−1, j−1) + sc(S1[i], S2[j])},
E(i, j) = max{H(i, j−1) − α, E(i, j−1) − β},
F(i, j) = max{H(i−1, j) − α, F(i−1, j) − β},

where sc denotes the character substitution scoring matrix, α represents the gap opening penalty, and β refers to the gap extension penalty. A scoring matrix sc gives the substitution rates of amino acids in proteins, as derived from alignments of protein sequences.
The recurrences are initialized as H(i, 0) = H(0, j) = E(i, 0) = F(0, j) = 0 for 0 ≤ i ≤ l1 and 0 ≤ j ≤ l2. The maximum local alignment score is the maximum value of the H function. Estimating each cell of H depends on its left, upper, and upper-left neighbors, as shown by the three arrows in Figure 1. Additionally, this data dependency implies that all cells on the same minor diagonal of the alignment matrix are independent of each other and can be calculated in parallel. Thus, the alignment can be estimated in minor-diagonal order from the top-left corner to the bottom-right corner of the alignment matrix, where calculating minor diagonal i only requires the results of minor diagonals i−1 and i−2.

CUDA Programming Model (CUDA 3.2). Compute unified device architecture (CUDA) is an extension of C/C++ with which users can write scalable multithreaded programs for GPU computing. A CUDA program is implemented in two parts: host and device. The host part is executed by the CPU, and the device part is executed by the GPU. A function executed on the device is called a kernel. A kernel can be invoked as a set of concurrently executing threads, organized hierarchically into thread blocks and grids. A grid is a set of independent thread blocks, and a thread block contains many threads. The grid size is the number of thread blocks per grid, and the block size is the number of threads per thread block. Threads in a thread block can communicate and synchronize with each other through a per-block shared memory, whereas threads in different thread blocks cannot communicate or synchronize directly. Besides shared memory, four memory types are available: per-thread private local memory, global memory for data shared by all threads, texture memory, and constant memory. Of these memory types, the fastest are the registers and shared memory. The global memory, local memory, texture memory, and constant memory are located in the GPU's device memory. Apart from shared memory, which is accessed by a single thread block, and registers, which are accessed only by a single thread, the other memories can be used by all of the threads. The caches of texture memory and constant memory are limited to 8 KB per streaming multiprocessor. The optimal access pattern for constant memory is all threads reading the same memory address. The texture cache is designed for threads that read addresses in close proximity, in order to achieve improved reading efficiency. The basic processing unit in NVIDIA's GPU architecture is called the streaming processor, and many streaming processors perform the computation on the GPU. Several streaming processors are integrated into a streaming multiprocessor. When the program runs a kernel function, the GPU device schedules thread blocks for execution on the streaming multiprocessors. The SIMT scheme means that threads run on a streaming multiprocessor in small groups of 32, called warps. For instance, on an NVIDIA GeForce GTX 260, each streaming multiprocessor has 16,384 32-bit registers and 16 KB of shared memory. The registers and shared memory used by a thread block affect the number of thread blocks assigned to a streaming multiprocessor, which can be assigned up to 8 thread blocks. More details on this and other versions of CUDA can be found in the CUDA programming guides.
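As a concrete illustration of the recurrences above, the following minimal sketch (not the paper's CUDA kernel) computes the SW similarity score with affine gap penalties using only two rows of storage, mirroring the row-by-row, linear-space scheme applied per thread in Part 3 below; the substitution function sc is assumed to be a lookup into a matrix such as BLOSUM62.

```python
def sw_score(s1: str, s2: str, sc, alpha: float = 10.0, beta: float = 2.0) -> float:
    """Smith-Waterman similarity score with affine gaps (H/E/F recurrences),
    keeping only two rows so space is O(2 * len(s2)) rather than O(l1 * l2)."""
    n = len(s2)
    h_prev = [0.0] * (n + 1)   # H of row i-1
    f_prev = [0.0] * (n + 1)   # F of row i-1
    best = 0.0
    for i in range(1, len(s1) + 1):
        h_cur = [0.0] * (n + 1)
        f_cur = [0.0] * (n + 1)
        e = 0.0                # E(i, 0) = 0
        for j in range(1, n + 1):
            e = max(h_cur[j - 1] - alpha, e - beta)
            f_cur[j] = max(h_prev[j] - alpha, f_prev[j] - beta)
            h_cur[j] = max(0.0, h_prev[j - 1] + sc(s1[i - 1], s2[j - 1]), e, f_cur[j])
            best = max(best, h_cur[j])
        h_prev, f_prev = h_cur, f_cur
    return best

# Hypothetical usage with a dictionary-backed scoring table:
# score = sw_score(query, subject, lambda a, b: blosum62[(a, b)])
```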
SW Algorithm on GPUs.
The SW algorithm has been implemented on several platforms, including FPGAs, Cell/BEs, and GPUs [7][8][9][10][11][12][13][14][15][16][17][18]. Comparing a query sequence with all database sequences is more practical than comparing it with a single sequence [22][23][24][25][26] (pairwise comparison). Many works have implemented the SW algorithm on GPUs. Liu et al. [13] first attempted to implement the SW algorithm on a GPU by using OpenGL. The SW algorithm has subsequently been implemented on NVIDIA graphics cards by using CUDA [14,16]. For database searches, many efficient methods implement the SW algorithm either with one comparison per thread, called intertask parallelization, or with one comparison per thread block, called intratask parallelization [27]. By using intertask parallelization [27], this work calculates the similarity score of each pair of input sequences with a single thread. Additionally, a related work developed a method to perform large sequence alignments, producing not only a similarity score but also alignment results, within hardware limitations [28]. Those works improved the performance of the SW database search by using GPUs to reduce the time spent. However, increasing the efficiency of a database search remains a priority concern. Performing a protein database search involves finding the most similar protein sequences in a specific database, a task that biologists frequently perform. However, many low-quality results are produced when all database comparisons are made, reflecting the low similarity between the query sequence and many database sequences. The ability to identify those sequences and exclude them from deep comparisons significantly decreases the computational time, and a qualified filtration algorithm under this circumstance allows us to reduce computational resources and time. The most similar sequence can be obtained by filtering out the sequences with dissimilar character composition, followed by a series of computations. When sequences are filtered, the level of filtering depends on the length of the query sequence. Longer database sequences are generally preserved to avoid discarding sequences that may contain the query sequence. Hence, a longer query sequence implies a more effective filtering of the algorithm implemented in this work.
CUDA-SW and CUDA-SWf Methods
There are two methods, CUDA-SW and CUDA-SWf, designed and implemented in this work. By integrating the frequency-based filtration method [19], CUDA-SWf performs better than CUDA-SW by reducing the number of comparisons. The CUDA-SWf algorithm can be divided into three parts.

Part 1: Inputs Processing (Host, CPU). The inputs of CUDA-SWf are a query sequence and a specific protein database with a large number of sequences. Before the filtration on the device (GPU) is performed, these inputs must be processed in the following steps.
(1) For a query sequence, CUDA-SWf records the query string and the query length, and then analyzes the character composition of the string to construct a frequency vector (FV) for the query sequence. This query FV is an integer array with 26 indices that records the number of occurrences of each letter of the alphabet in the string. Finally, the query string is stored in a character array, the query length is stored as an integer, and the query FV is stored in an integer array.
(2) For a protein database, CUDA-SWf scans the entire database and then records the sequence string and sequence length of each database sequence, which are stored in the host memory. All database strings are stored in three one-dimensional arrays: the first stores the concatenated characters of all database sequences; the second stores the length of each database sequence; and the third stores the start position of each database sequence in the concatenated character array. The sequence length must be shorter than 2,000 characters because, when executing the SW algorithm, some data must be stored in the local memory, whose size per thread is limited. In this step, CUDA-SWf does not construct the frequency vector for each database sequence, because the database contains a large number of sequences and constructing all the frequency vectors sequentially on the host would be costly. Instead, CUDA-SWf constructs a frequency vector for each database sequence on the device (GPU) when executing the filtration method (run-time filtration).

Part 2: Implementation of the Frequency Filtration Method (Device, GPU). The inputs on the host are first transferred from the host to the device. Because the query data are used but never updated, the query string, query length, and query frequency vector are stored in the constant memory. The database sequence data (the three arrays above) are too large and are therefore stored in the global memory.
When implementing the filtration method, we assume that two similar sequences found by the SW algorithm share a certain number of identical characters. As restated, counting the differing characters helps to filter out the dissimilar sequences whose character compositions differ greatly. Counting the differing characters between each database sequence and the query sequence is relatively easy; CUDA-SWf lets one thread analyze the difference between the query and one database sequence. To analyze these differences, each thread must construct an FV for its database sequence. Similar to the query FV, the FV of each database sequence is an array with 26 indices storing the number of occurrences of each letter. Next, summing the differences between the counts of each letter in the query FV and the database FV allows us to quantify the difference in their character composition, which is called the frequency distance (FD). The frequency distance is a lower bound on the number of differences between two sequences. The details of FV and FD can be found in the literature [19].
Finally, a variable called the mismatch percentage (MP) determines whether to perform a SW comparison. MP is the maximum allowed difference ratio between a query and a database sequence; a small value implies a strict filter, because only a small FD is allowed, whereas a large value implies a loose filter. When the FD between a query sequence and a database sequence exceeds the difference allowed by MP, the maximum possible similarity ratio of these two sequences cannot be satisfied, and this database sequence can be filtered out. When the FD is below the allowed difference, the required similarity ratio may be satisfied, and this database sequence should undergo a SW comparison with the query sequence. To prevent long database sequences from being filtered out merely because their length inflates the FD, when calculating FD, if the database sequence is longer than the query sequence, CUDA-SWf always compares this database sequence with the query sequence using the SW algorithm. In doing so, the situation in which the query sequence is a local (partial) subsequence of a database sequence is not missed.
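The filtration logic can be sketched as follows. The exact FD formula used by CUDA-SWf is given in [19]; the variant below, which takes the larger of the summed positive and negative per-letter count gaps as a lower bound on the number of differing characters, is an assumption made for illustration, and MP is expressed as a fraction of the query length.

```python
def frequency_vector(seq: str) -> list:
    """26-entry vector with the number of occurrences of each letter A..Z."""
    fv = [0] * 26
    for ch in seq.upper():
        idx = ord(ch) - ord("A")
        if 0 <= idx < 26:
            fv[idx] += 1
    return fv

def frequency_distance(fv_q: list, fv_d: list) -> int:
    """Assumed lower bound on the number of differing characters between two
    sequences (see [19] for the definition actually used by CUDA-SWf)."""
    pos = sum(max(q - d, 0) for q, d in zip(fv_q, fv_d))
    neg = sum(max(d - q, 0) for q, d in zip(fv_q, fv_d))
    return max(pos, neg)

def passes_filter(query: str, db_seq: str, mp: float) -> bool:
    """Keep a database sequence if it is longer than the query (it may contain
    the query locally) or if its FD stays within MP percent of the query length."""
    if len(db_seq) > len(query):
        return True
    fd = frequency_distance(frequency_vector(query), frequency_vector(db_seq))
    return fd <= mp * len(query)
```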
Part 3: SW Comparison (Device and Host).
Following the selection by the frequency filtration method, CUDA-SWf performs the SW comparison of each selected database sequence with the query sequence. CUDA-SWf uses one thread per SW comparison, which is called intertask parallelization. To improve the load balance and memory access pattern, CUDA-SWf moves the selected database sequences to the host memory before making the SW comparisons, sorting them and rearranging their memory layout for two purposes: (i) improved load balance for each thread in the same thread block and (ii) coalesced global memory access [17]. In the CUDA programming model, a thread block occupies the resources of a streaming multiprocessor (SM) until all threads in the same thread block complete their computations. To improve the load balance for intertask parallelism, CUDA-SWf must ensure that all threads in the same thread block are assigned sequences of similar length, which is achieved by sorting the database sequences so that sequences of similar length are grouped together, as shown in Figure 2. To simplify the work in CUDA-SWf, the sorting is performed on the CPU. After sorting the database sequences, CUDA-SWf converts the memory configuration from row major to column major, as shown in Figure 3, in order to coalesce global memory accesses. Therefore, all threads in a thread block access sequences in a contiguous memory space. During the execution of the SW algorithm, the alignment sequences must be stored in the global memory and then moved to the local memory of a multiprocessor. The Fermi architecture has a per-SM L1 cache and a unified L2 cache to service loads/stores to global memory; to maximize cache efficiency, all threads in the same warp should access the alignment data in global memory together.
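A host-side sketch of the sorting and of the row-major to column-major conversion is shown below; the block width of 128 matches the thread-block size used later in the experiments, while the padding character and the NumPy representation are illustrative choices rather than the actual packed arrays of Part 1.

```python
import numpy as np

def pack_column_major(db_seqs: list, block_size: int = 128, pad: str = "X") -> list:
    """Sort sequences by length so threads in a block get similar workloads,
    then store each group of `block_size` sequences in column-major order so
    that consecutive threads read consecutive memory addresses."""
    db_seqs = sorted(db_seqs, key=len)
    blocks = []
    for start in range(0, len(db_seqs), block_size):
        chunk = db_seqs[start:start + block_size]
        width = max(len(s) for s in chunk)
        grid = np.full((width, len(chunk)), pad, dtype="<U1")  # row = symbol index
        for t, seq in enumerate(chunk):                        # t = thread index
            grid[:len(seq), t] = list(seq)
        blocks.append(grid)
    return blocks
```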
To output the alignment result through the trace-back path, the original SW comparison must calculate and store the values of an m × n matrix (m denotes the query length and n the selected database sequence length), which explains why its space complexity is O(n^2) when m is assumed equal to n. In this work, CUDA-SWf only reports the similarity score, not the alignment itself, and does not need to record the trace-back path, so the run-time space per thread can be reduced to O(2n), which makes intertask parallelization suitable. Because each thread compares one selected sequence with the query sequence, the shared memory cannot hold all the alignment data; CUDA-SWf therefore stores the alignment data of each thread in the local memory. In the Fermi architecture, it is still efficient to store data in the local memory thanks to the L1/L2 caches; the performance of the local memory is not far from that of the shared memory and can even be better when bank conflicts occur in shared memory. The SW comparison of each thread can be divided into three steps. (i) Create alignment data: when the comparison starts, each thread creates two integer arrays (a previous-row buffer and a current-row buffer), each of size equal to the length of the selected sequence, stored in the local memory; this is feasible because the local memory is limited to 16 KB per thread and the maximum length of a database sequence is 2,000. (ii) Row-by-row comparison: since CUDA-SWf only outputs the alignment similarity score, the previous-row buffer is first initialized to 0; then each cell of the current row is calculated and its score is stored in the current-row buffer, after which the values of the current-row buffer are moved to the previous-row buffer, and the next row is calculated until all rows are finished. (iii) Store the maximum score and final output: when each row comparison is completed, CUDA-SWf checks and records the maximum score; finally, it stores the maximum score in the global memory, moves it to the host memory, and outputs the database sequences that are similar to the query sequence. The flowchart of CUDA-SWf is shown in Figure 4. The CUDA-SW method is simply CUDA-SWf without the frequency filtration method.

Experimental Results

The test platform was a Xeon E5506 2.13 GHz with 12 GB RAM running on a Linux operating system. The protein sequence database was the human protein database downloaded from NCBI (http://www.ncbi.nlm.nih.gov/); the query sequences were selected from the H1N1 virus database of the Influenza Virus Resource at NCBI (http://www.ncbi.nlm.nih.gov/genomes/FLU/FLU.html). The testing data sets include the following: (1) 32,799 protein sequences of human with an average length of 555 as the database, and (2) H1N1 virus protein sequences randomly selected from the NCBI H1N1 virus database, with length brackets of 100, 200, 300, 400, 500, 600, and 700, as query sequences. After deleting the protein sequences with length larger than 2,000, 32,133 human sequences were used in the following tests. The gap open penalty was set to 10.0; the gap extension penalty was set to 2.0; the scoring matrix was BLOSUM62. Next, the MP was set to 10%, 30%, 50%, and 100%, reflecting the allowed number of different characters between the query sequence and the database sequences. When the MP is set to 100%, no filtration is applied in CUDA-SWf. The number of threads in a thread block is set to 128; the number of thread blocks depends on the number of sequences that must be compared with the query sequence.
Table 1 shows the overall computation time of the CPU version of the SW algorithm, CUDA-SW, and CUDA-SWf for the human protein database and H1N1 virus sequences under various query sequence lengths with an MP of 10%. The overall computation time of CUDA-SWf is the sum of the computation times of its parts. Table 1 indicates that the proposed frequency filtration method can reduce up to 46% of the computation time by filtering out the database sequences whose minimum difference ratio exceeds 10%. Two further observations follow from Table 1. First, the computation time increases when the query sequence (H1N1 virus) length increases, since the time complexity of the SW algorithm is proportional to the query sequence length. Second, the improvement ratio increases when the query sequence length increases. The reason is that few database sequences are filtered out when the query sequence is short: most database sequences are then longer than the query and must undergo SW comparisons in Part 3 of CUDA-SWf. Table 2 shows the overall computation time of CUDA-SWf for the human protein database and H1N1 virus sequences under various MPs with a query length of 700. Table 2 indicates that the number of selected database sequences decreases as the MP decreases. When MP is 100%, all 32,133 human protein sequences are selected for the subsequent SW comparisons; when MP is 10%, only 21.8% of the 32,133 human protein sequences are selected. Therefore, the computation time of CUDA-SWf is reduced from 8.27 to 4.4 (close to a 47% improvement ratio). When applying the filtration method, extra computation time is needed for CUDA-SWf to construct the FV and calculate the FD for each database sequence and to sort the database sequences on the host. From Table 2, the best score can be found by CUDA-SWf under all MPs, which implies that the frequency filtration method in CUDA-SWf is suitable for the database search problem. Besides, in Table 2, the worst score found by CUDA-SWf when MP is 10% is close to that when MP is 100%. This phenomenon indicates that a selected database sequence with a low FD may still differ greatly from the query sequence. Therefore, the FD can be used to filter out dissimilar sequences; however, it cannot be used to determine the similarity score. Figure 5 shows the speedup ratio of CUDA-SW and CUDA-SWf compared with the CPU version of the SW algorithm for the human protein database and H1N1 virus sequences under various query sequence lengths with an MP of 10%. From Figure 5, the speedup ratios of CUDA-SW range from 7x to 41x, while those of CUDA-SWf range from 7x to 76x. The improvement is significant when the query sequence length is larger than 400, owing to the large number of database sequences filtered out.
For the user interface, this work constructs a workbench for CUDA-SWf with QT Creator 2.4.1 (http://qt.nokia.com/products) on Ubuntu 10.04.1, as shown in Figure 6. As a cross-platform application framework, QT is used to design the same UI for different operating systems; the UI then transfers the input data to a cloud server through a network. Figure 6 shows the 7 steps needed to run the CUDA-SWf method.
Step 1 (select the scoring matrix). The scoring matrix is needed when performing the SW comparison. Five matrices are provided in this work: Blosum50, Blosum62, Blosum80, PAM100, and PAM250.

Step 2 (select the gap penalty). Users can select the desired penalty. The gap open penalty range is 5∼20, and the gap extension penalty range is 0∼10.
Step 3 (select the query sequence). Users select a sequence as the query sequence. If a new query file is needed, it can be created using File(F)->New(N).
Step 4 (select the database). A database can be selected, or a new one can be prepared with the "Create FV file" button. Users can download the database from NCBI. The button prepares the database for the frequency filtration method by creating two files: the first is the new database sorted by length, and the second is the FV file derived from the new database file.
Step 5 (select the filter ratio (MP)). The filter ratio allows users to determine how strictly CUDA-SWf applies the filtration method. Users can choose from 10%∼100%. A value of 10% means that only the sequences with more than 90% character-composition similarity to the query are computed.
Step 6 (select the FV file). Users select the FV file created at Step 4, which is used to execute the frequency filtration method.
Step 7 (execute CUDA-SWf). Two modes can be selected, CPU or GPU; the GPU version requires CUDA. After execution, the result window is shown (Figure 7). If an error occurs, a message is displayed in the otherwise empty text line.
Conclusions
This work designs and implements a novel CUDA-SWf method to solve the Smith-Waterman database search problem with a frequency-based filtration method and CUDA. The proposed method uses intertask parallelization to calculate the frequency distances and perform the Smith-Waterman comparisons on a single GPU. Experimental results demonstrate that the proposed CUDA-SWf method achieves up to a 76x speedup ratio on a single GPU in terms of computation time. Moreover, CUDA-SWf can improve the computational time by up to 41% over CUDA-SW without the frequency filtration method. These results demonstrate that CUDA-SWf can accelerate the Smith-Waterman algorithm on GPUs and that such filtration ideas are worth pursuing further to enhance the performance of CUDA applications. | 6,592.6 | 2013-04-03T00:00:00.000 | [
"Computer Science"
] |
Application of Composite Deflecting Model in Horizontal Well Drilling
Based on the current difficulties in controlling the horizontal well trajectory and the high cost of deflecting while drilling, a compound deflecting BHA (bottom hole assembly) with a diameter-adjustable stabilizer (DAS) and a bending-adjustable housing (BAH) is presented. According to the DAS operational principles and its stress condition in operation, the computational formula relating the wedge's axial moving displacement and the piston's radial telescopic displacement of the DAS, driven by the drilling fluid pressure, is presented. This formula is verified by laboratory experimental simulation. The three-points-circle method is utilized to calculate the geometrical build-up rate of the compound deflecting BHA, and the result is verified by field data. The method is then used for the design and calculation of compound BHAs. The research can be used as a reference for compound deflecting drilling in horizontal wells. The flow rate and pressure difference have a very serious impact on the erosion of the flow regulator, so they should be controlled when the DAS works, and it is suggested that the flow regulator be maintained and replaced frequently when in service.
Introduction
With the large-scale development of unconventional oil and gas (shale gas and tight oil), American horizontal drilling operations have increased greatly, and their drilling footage started to exceed that of vertical drilling after 2010. The number of American horizontal wells rapidly increased from 1144 wells in 2000 to 17,721 wells in 2012, and their proportion among all wells rose from 3.68% in 2000 to 36.65% in 2012; namely, more than 1/3 of new American wells were horizontal in 2012. It is expected that American horizontal wells will approach 22,000 in 2017, accounting for over 40% of all wells. With the large-scale development of unconventional oil and gas, American horizontal drilling footage is expected to rise steadily to as high as 70% in proportion in 2018 [1][2][3].
This indicates that horizontal drilling technology has been widely applied. The horizontal drilling process is mainly divided into prespudding engineering, drilling engineering, completion engineering, postdrilling treatment, oil testing, and fracturing. The drilling engineering consists of the 1st vertical section, the 2nd vertical section, the 2nd deviated section, and the 3rd horizontal section. According to the average data from 3 horizontal wells of a gas field of Sinopec in China, the average total investment for a well is 24.3 million RMB. The drilling engineering investment is 13.25 million RMB, accounting for 54.5% of the total; the drilling cost of the 2nd deviated section is 5.6933 million RMB, accounting for 23.4% of the total [4,5].
In horizontal drilling, the drilling technology and drilling cost of the 2nd deviated section are among the main problems hindering the large-scale application of horizontal wells. In the 2nd deviated section drilling process, in order to control the well trajectory and prevent deviation, a rotary steering drilling tool is usually used; sometimes a multiple fixed stabilizer BHA or a combination of a DAS and a motor is used. As shown in Figure 1, the DAS diameter change is controlled by adjusting the drilling fluid pressure at the surface, and deflection is achieved by changing the diameter. However, the rotary steering drilling tool is very expensive, and a multiple fixed stabilizer BHA needs frequent replacement of stabilizers, with continual pulling out and tripping in causing high cost and labor intensity. In contrast, the combination of a DAS and a motor with a BAH is able to achieve deviation prevention, fast drilling, effective control of the well trajectory, and low drilling cost [6]. As shown in Figure 2, the BAH is integrated with the motor, and the bending angle of the BAH can be adjusted to achieve deflection. During the drilling process, the DAS and BAH combine to achieve composite deflection. How to use the combination of DAS and motor with BAH to accurately control the well trajectory remains to be solved. The working principle of the variable diameter stabilizer and its function in directional drilling were introduced in [7,8]. The variable diameter stabilizer was designed, and its working mechanics were calculated [9,10]. The variable diameter stabilizer was developed and applied in the field [11,12]. In order to make the variable diameter stabilizer play a better role in directional wells, it was used in combination with a positive displacement motor and antiwear, antitorque tools [13][14][15]. However, the compound deflection of a variable diameter stabilizer and a BAH is seldom applied in the field, and the mechanism and model of this compound deflection have seldom been studied in the application process. Therefore, it is necessary to carry out research on the compound deflection model of the DAS and motor with BAH.

Introduction of DAS and BAH. As shown in Figure 1, the DAS is mainly composed of a body, flow regulator, spring, piston, wedge, signal institution, and joint. During drilling, the tool is controlled by the pressure difference generated by the drilling fluid. When the displacement of the mud pump increases, the drilling fluid actuates the flow regulator due to the pressure difference. Under the effect of the pressure difference, the wedge starts to move axially downward, pushing the piston radially out, and the spring is compressed. When the displacement of the mud pump decreases, the pressure difference generated by the flow regulator starts to decrease. The spring then rebounds and pushes the wedge to move axially upward, while the piston retracts radially back to its starting position. Deflection is achieved through the piston's radial displacement.
As shown in Figure 2, the BAH is mainly composed of a lower shell, curved axle, adjusting ring, and upper shell. A deflection angle from 0° to 3° can be obtained by adjusting the matching relationship between the shell (lower shell and upper shell) and the curved axle.
Conventional BHA with DAS.
The DAS is able to change its diameter in the well. Therefore, the combination of the DAS and a motor can effectively control the well trajectory and reduce the pulling out and tripping in caused by the diameter limitations of a fixed stabilizer. Drilling practice shows that the ROP increases by about 24% if the combination of the DAS and motor is used. The rotary BHA is shown in Figure 3(a): bit + near-bit fixed stabilizer + straight motor + DAS + drill collar + drill pipe. The steering power BHA is shown in Figure 3(b): bit + near-bit fixed stabilizer + fixed bending housing motor + DAS + drill collar + drill pipe [8,16].
Compound Deflecting BHA with DAS.
A motor with a fixed-angle bending housing can only satisfy the demand for one fixed build-up rate; it increases not only the quantity of tools but also the operational cost. The multistage BAH can adjust the housing angle repeatedly in the well [17][18][19]. The combination of the multistage BAH and the DAS allows the BHA to achieve compound deflection. The current computational method considers the gap between the fixed stabilizer and the wall; therefore, a new computational method is needed to calculate the build-up rate of the compound deflecting tool in horizontal drilling. As shown in Figure 4, the compound deflecting BHA is bit + near-bit fixed stabilizer + motor with BAH + DAS + drill collar + drill pipe.
Piston Movement Mechanical Analysis of DAS.
The simplified structure is shown in Figure 5. During drilling, the pressure difference at the bottom of the borehole acts on the flow regulator and makes it move. The driving force F_P of the drilling fluid is applied to the six pistons via the wedges [9, 10].
As Figure 1 depicts, the flow regulator receives the driving force F_P from the drilling fluid:
F_P = ΔP · S,
where ΔP is the pressure difference (MPa) and S is the throttling area of the flow regulator (m²).
The restoring force F_T of the spring is
F_T = k · Δl,
where k is the spring stiffness, k = 92 N/mm, and Δl is the spring compression length (m). The telescopic assembly is mainly subjected to the driving force of the drilling fluid, the restoring force of the spring, and the force from the borehole wall [9, 20, 21]. Taking the piston and the wedge in turn as the object of study, a static analysis was made on the telescopic assembly. The stress condition of the piston is shown in Figure 6.
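As a quick numerical check of these two relations, the sketch below evaluates the driving and restoring forces for an assumed pressure difference and throttling area (the area and compression values are illustrative, not taken from the paper; only k = 92 N/mm comes from the text).

```python
# Driving force of the drilling fluid and restoring force of the spring
# for the DAS telescopic assembly (illustrative inputs where noted).

def driving_force(delta_p_mpa: float, throttling_area_m2: float) -> float:
    """F_P = ΔP * S, with ΔP converted from MPa to Pa; result in newtons."""
    return delta_p_mpa * 1e6 * throttling_area_m2

def spring_restoring_force(k_n_per_mm: float, compression_mm: float) -> float:
    """F_T = k * Δl, with k in N/mm and Δl in mm; result in newtons."""
    return k_n_per_mm * compression_mm

if __name__ == "__main__":
    dp = 2.0     # MPa, maximum working pressure difference quoted later in the paper
    s = 1.5e-3   # m^2, assumed throttling area (not given in the text)
    k = 92.0     # N/mm, spring stiffness from the paper
    dl = 10.0    # mm, assumed spring compression

    print(f"F_P = {driving_force(dp, s):.0f} N")
    print(f"F_T = {spring_restoring_force(k, dl):.0f} N")
```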
When the piston contacts the borehole wall, the static equilibrium equation of the piston is established, where F_N is the normal pressure of the housing bore on the piston (N); F_f1 is the friction force between the piston and the bore (N); F_f2 is the friction force between the piston and the wedge (N); F_N1 is the normal pressure of the wedge on the piston (N); and α is the dip angle of the piston slope (°). In addition, F_f1 = f_1 · F_N and F_f2 = f_2 · F_N1, where f_1 is the static friction coefficient between the piston and the housing bore (dimensionless) and f_2 is the static friction coefficient between the piston and the wedge (dimensionless).
Figure 6: Stress condition of the piston.
The stress condition of the wedge is shown in Figure 7, and its equilibrium equation is established in the same way. During the downward movement of the wedge, the piston is pushed out simultaneously and the diameter of the tool increases until it reaches the borehole wall. From these equations, the relation between the pressure difference ΔP and the guiding body moving displacement Δl is obtained.
Substituting D_1 = 97 mm, D_2 = 45 mm, and k = 92 N/mm into equation (7) gives the relation between the moving displacement Δl and the tool diameter D when the piston is pushed out, evaluated for α = 6° and α = 7°; the result is shown in Figure 8.
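The paper's equations (7)-(9) are not reproduced above; as a schematic stand-in, the sketch below assumes the common wedge kinematics in which the piston's radial travel equals the wedge's axial travel times tan α, so the tool diameter grows as D(Δl) = D_0 + 2·Δl·tan α. Treating D_1 = 97 mm as the collapsed tool diameter is an assumption for illustration only.

```python
import math

# Schematic wedge kinematics for the DAS piston extension.

def tool_diameter(d0_mm: float, axial_travel_mm: float, alpha_deg: float) -> float:
    """Tool diameter after the wedge moves axially by axial_travel_mm."""
    radial = axial_travel_mm * math.tan(math.radians(alpha_deg))
    return d0_mm + 2.0 * radial  # pistons extend on opposite sides of the body

if __name__ == "__main__":
    d0 = 97.0  # mm, taken here as the collapsed tool diameter (assumption)
    for alpha in (6.0, 7.0):            # dip angles considered in the paper
        for dl in (0, 10, 20, 30, 40):  # mm, wedge axial displacement
            d = tool_diameter(d0, dl, alpha)
            print(f"alpha = {alpha}°, Δl = {dl:3d} mm -> D = {d:6.1f} mm")
```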
Experimental Study.
The experimental simulation was made to study the relation between the guiding body moving displacement and the tool diameter change of the DAS when a pressure difference was applied. The experimental devices were the DAS, an electric push rod and related accessories, a core length measurement apparatus, and so forth, as shown in Figure 9 [9]. To meet the needs of the experiment, the DAS was printed on a 3D printer using a polymer material. By coding the attached encoder, the moving displacement of the electric push rod, and thus of the guiding body, can be controlled. Meanwhile, the diameter of the tool at every displacement was measured with the core length measurement apparatus. The result is shown in Table 1.
Geometric Build-Up Rate Computation of the Curved Section in Horizontal Wells
The three-points-circle method was proposed in [24, 25]. Afterwards, the method attracted considerable attention and became widely applied [26, 27]. Based on the three-points-circle method, the effects of the diameter change of the DAS and the bending change of the BAH on the geometric build-up rate of the compound deflecting BHA are considered comprehensively here. Figure 10 shows a compound deflecting BHA: bit + near-bit fixed stabilizer + motor with BAH + DAS + drill collar + drill pipe. For the compound deflecting BHA, the coordinate parameters include y_2 = L_1, where L_1 is the distance between the near-bit fixed stabilizer and the bit (m); L_2 is the distance between the near-bit fixed stabilizer and the motor with BAH (m); L_3 is the distance between the motor with BAH and the DAS (m); and γ is the bending angle of the BAH (°). The telescopic distance δ of the DAS piston has a great effect on the build-up rate of the compound deflecting BHA. Assuming that the lower stabilizer touches the borehole wall, the DAS coordinates can be written as x_3,δ = x_3 + Δx and y_3,δ = y_3 + Δy, where Δx = (L/L_1)δ cos β and Δy = −(L/L_1)δ sin β; x_3,δ and y_3,δ are the DAS coordinates accounting for the piston's telescopic movement (m); and L and β are intermediate variables.
For a compound deflecting BHA with a single bend, the formulas above can be simplified because the housing bending angle is small. Here k_c is the geometric build-up rate of the compound BHA ((°)/30 m); k_δ is the build-up rate due to the piston's telescopic movement of the DAS, k_δ = 10.8δ/(π L_s L_1), in (°)/30 m; and L_s is the stabilizer length (m). Therefore, the build-up rate formula of the compound deflecting BHA with a single bend follows, with L_s = L_2 + L_3, L_T = L_1 + L_s, and λ = L_3/L_s.
Table 2 compares the measured build-up rate with the computed one. When the well depth is 1740-1760.67 m, the measured build-up rate is 6.65 (°)/30 m and the computed build-up rate differs from it by 5.354%. In the interval 1777.65-1788.2 m, the measured build-up rate is 6.80 (°)/30 m, the computed build-up rate is 6.6791 (°)/30 m, and the error between them is 1.778%, which is very small. The small error between the measured and computed build-up rates demonstrates the feasibility of the computational model.
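The three-points-circle idea can be illustrated numerically: fit a circle through three tool-contact points (bit, near-bit stabilizer, DAS), take its curvature, and convert it to a build-up rate in (°)/30 m. The sketch below is a generic implementation of that geometric step, not the paper's exact formula, and the contact-point coordinates are illustrative.

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three (x, y) points (three-points-circle method)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Twice the triangle area from the cross product of two edge vectors.
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1]) - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    if area2 == 0:
        return float("inf")  # collinear points -> straight hole, zero curvature
    return a * b * c / (2.0 * area2)

def buildup_rate_deg_per_30m(p1, p2, p3):
    """Geometric build-up rate implied by three contact points, in (°)/30 m."""
    r = circumradius(p1, p2, p3)
    return math.degrees(30.0 / r) if math.isfinite(r) else 0.0

if __name__ == "__main__":
    # Illustrative contact points (metres): bit, near-bit stabilizer, DAS,
    # with a small lateral offset at the DAS standing in for piston extension.
    bit = (0.0, 0.0)
    near_bit = (0.0, 1.1)   # L1 = 1.1 m above the bit
    das = (0.02, 6.3)       # L1 + L2 + L3 = 6.3 m, lateral offset 20 mm
    print(f"{buildup_rate_deg_per_30m(bit, near_bit, das):.2f} (°)/30 m")
```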
Case Application and Computation.
The compound deflecting BHA is: bit + near-bit fixed stabilizer + motor with BAH + DAS + drill collar + drill pipe. The parameters of the compound BHA are L_1 = 1.1 m, L_2 = 1.2 m, and L_3 = 4.0 m. The required build-up rate is 14.0°/30 m, and the candidate radial extension distances of the DAS piston are 0, 2, 4, 6, and 8 mm. Based on the enumeration method, the required bending housing angle and the telescopic displacement of the DAS are obtained as shown in Figure 11.
According to these results, when the structural angle is γ = 1.75° and the telescopic displacement of the DAS is δ = 2 mm, the required build-up rate can be met with little error. Hence, for a given build-up rate demand, the structural angle and the telescopic displacement of the DAS can be obtained by the enumeration method in order to adjust the build-up rate, as sketched below.
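The enumeration idea can be sketched as follows. The mapping from (γ, δ) to the three contact points used here is a schematic stand-in for the paper's formulas (the housing bend tilts the assembly above the bend point and the piston adds a lateral offset at the DAS), so the printed best combination is illustrative rather than the paper's result.

```python
import math

L1, L2, L3 = 1.1, 1.2, 4.0   # m, from the case application
TARGET = 14.0                # required build-up rate, (°)/30 m

def buildup_rate(gamma_deg: float, delta_mm: float) -> float:
    """Build-up rate from the three-points-circle geometry (schematic mapping)."""
    g = math.radians(gamma_deg)
    bit = (0.0, 0.0)
    stab = (0.0, L1)  # near-bit stabilizer
    # DAS point: straight to the bend at L1+L2, tilted by γ over L3, plus piston offset δ.
    das = (L3 * math.sin(g) + delta_mm / 1000.0, L1 + L2 + L3 * math.cos(g))
    a, b, c = math.dist(stab, das), math.dist(bit, das), math.dist(bit, stab)
    area2 = abs((stab[0] - bit[0]) * (das[1] - bit[1]) - (das[0] - bit[0]) * (stab[1] - bit[1]))
    if area2 == 0:
        return 0.0
    r = a * b * c / (2.0 * area2)
    return math.degrees(30.0 / r)

if __name__ == "__main__":
    # Enumerate housing angles 0-3° (step 0.25°) against the candidate piston extensions.
    combos = ((g / 100.0, d) for g in range(0, 301, 25) for d in (0, 2, 4, 6, 8))
    g_best, d_best = min(combos, key=lambda gd: abs(buildup_rate(*gd) - TARGET))
    k = buildup_rate(g_best, d_best)
    print(f"γ = {g_best:.2f}°, δ = {d_best} mm -> {k:.2f} (°)/30 m (target {TARGET})")
```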
Erosion Model of the Flow Channel.
High-speed fluid passes through the DAS annular flow channel during normal drilling, as shown in Figure 12. When the well trajectory needs to be changed, the pressure difference of the drilling fluid becomes larger at the inlet of the funnel-shaped flow regulator in the DAS flow channel. Under this pressure difference, the DAS piston is extended, ultimately deflecting the wellbore. When drilling fluid at a large flow rate passes through the DAS flow channel, erosion easily occurs at the inlet of the flow passage. Because the drilling fluid contains small solid particles, the high-speed solid-liquid two-phase flow is likely to erode the channel wall. The flow regulator is especially prone to erosive wear because of the abrupt change in the shape of the flow channel, so a numerical calculation is necessary. If the inlet of the flow passage is eroded, the flow rate and pressure difference change, the pressure difference is no longer directly proportional to the piston extension, and the well trajectory control becomes inaccurate. For these reasons, it is necessary to evaluate the erosion of the flow channel. The calculation model of the DAS flow channel was established according to Figure 13, and a nonuniform structured mesh technique was used to mesh the inlet wall surface, the annulus wall surface, and the outlet wall surface. The inlet wall surface was meshed densely to ensure the accuracy of the calculation. As shown in Figure 14, the model uses a tetrahedral mesh with 159,203 cells, and a fully implicit multigrid coupled solution technique is used for the numerical simulation.
In this paper, two computational models are used: a continuous phase model and a particle tracking model. A model based on the k-ε fluid dynamics equations and the particle motion equation is applied to predict the erosion rate in the flow channel; the instantaneous equations of mass, momentum, and energy conservation are solved together with the turbulence model. Given the characteristics of the mud-particle two-phase flow in the channel, the erosion of the flow regulator is related to the particle impact velocity, impact angle, and other factors.
Therefore, the erosion model of the CFD software is used in this paper [28, 29].
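CFD packages typically compute a wall erosion rate by summing particle contributions weighted by an impact-angle function and a velocity exponent. The sketch below shows that generic form; the coefficient, velocity exponent, and piecewise angle function are illustrative placeholders, not the calibrated values used in the paper.

```python
import math

def impact_angle_function(theta_rad: float) -> float:
    """Piecewise impact-angle function for a ductile wall (illustrative values)."""
    theta = math.degrees(theta_rad)
    if theta <= 20.0:
        return theta / 20.0 * 0.8
    return 0.8 * math.cos(math.radians(theta - 20.0))

def erosion_rate(particles, face_area_m2: float,
                 c_diam: float = 1.8e-9, vel_exp: float = 2.6) -> float:
    """
    Erosion rate in kg/(m^2·s) on one wall face:
        R = sum_p( m_dot_p * C(d_p) * f(theta_p) * v_p**b ) / A_face
    `particles` is an iterable of (mass_flow_kg_s, impact_angle_rad, speed_m_s).
    """
    total = sum(m_dot * c_diam * impact_angle_function(theta) * v ** vel_exp
                for m_dot, theta, v in particles)
    return total / face_area_m2

if __name__ == "__main__":
    # A handful of representative particle parcels hitting the funnel inlet (assumed values).
    parcels = [(2e-4, math.radians(15), 12.0),
               (1e-4, math.radians(35), 18.0),
               (5e-5, math.radians(60), 22.0)]
    print(f"{erosion_rate(parcels, face_area_m2=1e-4):.3e} kg/(m^2·s)")
```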
The Evaluation of Erosion under Different Pressure Differences.
To study the effect of the pressure difference on flow channel erosion, the same numerical model as before was used. At the maximum working differential pressure of 2 MPa, the DAS piston is fully extended. The erosion behavior of the flow channel is studied as the pressure difference is varied within the range 0-2 MPa: the inlet pressure is 0.5-2 MPa, the outlet pressure is 0 MPa, and the other boundary conditions are the same as the previous settings. Table 3 shows the erosion rate of the flow channel when the pressure difference is 0.5 MPa, 1 MPa, and 2 MPa, respectively. When the pressure difference increases from 0.5 MPa to 2 MPa, the erosion rate increases from 6.49 × 10⁻⁵ to 8.03 × 10⁻⁴ kg/(m²·s), roughly an order of magnitude (the rate at 0.5 MPa is only about 8.1% of that at 2 MPa), which shows that the pressure difference has a great influence on the erosion of the flow channel. Figure 15 shows that the erosion still occurs mainly at the funnel-shaped flow channel of the flow regulator. It is therefore recommended to operate the DAS at as small a differential pressure as possible, and to maintain and replace the flow regulator so that the flow control of the DAS remains accurate.
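Using the two endpoint values reported in Table 3, a quick calculation gives the ratio of the erosion rates and the exponent that a simple power-law fit R ∝ ΔP^n would imply; the power-law form itself is an assumption for illustration, not a claim made by the paper.

```python
import math

# Endpoint erosion rates reported for 0.5 MPa and 2 MPa, in kg/(m^2·s).
r_low, r_high = 6.49e-5, 8.03e-4
dp_low, dp_high = 0.5, 2.0  # MPa

ratio = r_high / r_low
n = math.log(ratio) / math.log(dp_high / dp_low)  # exponent if R ∝ ΔP**n

print(f"erosion-rate ratio : {ratio:.1f}x")
print(f"low/high fraction  : {r_low / r_high:.1%}")
print(f"implied exponent n : {n:.2f}")
```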
The Evaluation of Erosion at Different Flow Rates.
The turbulence model and the discrete phase model are employed, and the inlet flow rate is varied from 40 L/s to 80 L/s. The normal and tangential reflection coefficients of the wall boundary are defined as polynomial functions of the particle impact angle, and in the erosion model the impact angle function is defined to describe the ductile damage of the flow channel wall and to calculate the erosion. Table 4 shows that the erosion rate of the DAS flow channel increases as the flow rate increases: when the flow rate rises from 40 L/s to 80 L/s, the erosion rate increases by approximately 46.6%. The erosion occurs mainly at the abrupt change of the funnel-shaped flow channel at the inlet, where it is very severe. Therefore, this part of the structure should use erosion-resistant material and an antierosion design, and in actual service the flow regulator should be well maintained.
Conclusions
(1) The compound deflection model of the DAS and motor with BAH is presented, and field data show that the model is feasible. It is recommended to keep refining the model in future applications. (2) The computational formula relating the wedge's axial displacement and the piston's radial telescopic displacement of the DAS, driven by the drilling fluid pressure, is presented. Laboratory experiments show that the formula is feasible.
(3) The results show that the flow rate and pressure difference have a very serious impact on flow regulator erosion. It is suggested that the flow rate and pressure difference be controlled, and that the flow regulator be maintained and replaced frequently while in service.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article. | 4,580.8 | 2020-03-09T00:00:00.000 | [
"Engineering"
] |
Repenting for Antisemitism: “To Elevate Evil into a State of Goodness”
This paper examines repentance as a practice of transformation that aims to improve relations between the Jewish and Christian communities but also changes Christian theology itself. In his lectures, On Repentance, Orthodox rabbi and philosopher Joseph Soloveitchik maintains that “it is the memory of sin that releases the power...to do greater things than ever before. The energy of sin can be used to bring one to new heights.” My subtitle invokes Soloveitchik’s claim that a certain “quality of repentance...elevates evil to a state of goodness.” Following Maimonides in his Mishneh Torah (1170-1180), Soloveitchik unpacks the interlocking precepts of teshuvah, the Hebrew term for repentance that literally translates as turning back or returning to God. The basic precepts of repentance are not unlike the three-step process of the Roman Catholic sacrament of penance. What attracts me, a Lutheran theologian, to the Roman Catholic sacrament of penance and the Jewish ritual observance of teshuvah is precisely their clearly outlined set of obligations that provide texture and structure to the process. Repentance, in Soloveitchik’s startling declaration, has the power to transform historical wrongdoing into a state of productive and constructive “goodness.” This is an audacious hope, even in religious communities, where the teachings and practices of repentance appear to lie increasingly fallow; it seems outlandish. The language of repentance has largely been replaced by that of reconciliation. The terminology of reconciliation is popular in the fields of transitional justice, cultural memory, politics, and theology, while penance sounds quaint and antiquated, too “religious” for serious consideration in the areas of politics, law, and psychology. A quick search confirms that there are over ten times more titles on “reconciliation” than on “repentance” in the Library of Congress and the libraries of Boston College
from perpetrators. For all of these reasons, it is worth "returning" to repentance and teshuvah in the aftermath of atrocity and systemic evil in order to map the lengthy and circuitous process of transformation.
Jean Amery wrote his essay on "Resentments" amid the Auschwitz Trial in Frankfurt am Main in 1965. This became the fourth chapter in his seminal book At the Mind's Limits: Contemplations of a Survivor on Auschwitz and Its Realities. Here, he eloquently and passionately defended the right of victims to deny reconciliation and to hold on to "hard feelings" and "grudges": 11 In two decades of contemplating what happened to me, I believe I have recognized that forgiving and forgetting induced by social pressure is immoral. Whoever lazily and cheaply forgives, subjugates himself to the social and biological time-sense, which is also called the "natural" one. Natural consciousness of time actually is rooted in the physiological process of wound-healing and became part of the social conception of reality. But precisely for this reason it is not only extramoral, but also antimoral in character. Man has the right and the privilege to declare himself in disagreement with every natural occurrence, including the biological healing that time brings about. 12 Amery defended his resentments in principle, because he demanded nothing less than the "annulment of time" and the "negation of the negation" before any forgiveness was to be extended to the German nation. 13 His resentments-including of German youth-intended to arouse "self-mistrust" and the desire to "reject everything, but absolutely everything, that [Nazi Germany] accomplished in the days of its deepest degradation, and what here and there may appear as harmless as the Autobahns." 14 Nothing short of "the spiritual reduction to pulp" in an "actual practice" of repentance was required to alleviate his trauma and resentment. By 1965, this process had barely begun in West Germany. And Amery saw very little hope that such "an extravagant moral daydream" would ever come to pass: "Nothing of the sort will happen, I know, despite all the worthy efforts of German intellectuals." 15 As a secular thinker, he famously disavowed atonement as something that "has only theological meaning and therefore is not relevant for me." 16 Amery underestimates the possibilities of atonement, which in my view, should not be relegated to the heavens. Repentance, taken seriously, implements an "externalized and actualized" performance of steps that take place in "the field of history." 17 As the "culture of contrition" and the broad program of Vergangenheitsaufarbeitung [working off the past] gained momentum in Germany, some of what Amery envisioned occurred: the cultural and political conditions created by National Socialism shifted. Susan Neiman, an American Jewish philosopher living in Berlin, provocatively titled her book Learning from the Germans: Race and the Memory of Evil to make that point. There, she examines whether and how the strategies of "working-off-the-past can prepare the ground for freer futures." 18 Comparing German practices of accountability and reparation to U.S. American cultural memory of slavery and racial segregation, she concludes that the German penitential regime, initially enforced by the Allied victors but later embraced voluntarily, successfully transformed the culture.
11 Jean Amery, At the Mind's Limits: Contemplations by a Survivor on Auschwitz and Its Realities, Sidney Rosenfeld and Stella P. Rosenfeld, trans. (New York: Schocken Books, 1986), 68. 12 Ibid., 72. 13 Ibid., 79. 14 Ibid., 78. 15 Ibid., 79. 16 Ibid., 77. 17 Ibid., 77.
While Germany is still struggling with the persistent presence of antisemitism, racism, and nationalism, a resilient, racially and religiously pluralistic democracy emerged from the ashes of National Socialism and the Holocaust. For Neiman, as well as other scholars who compare the German case to Austria and Japan, this political and cultural transformation is linked to the collective embrace of repentance as a personal and political practice. 19
Jewish-Christian Relations after the Holocaust
The history of Jewish-Christian relations after the Holocaust provides another case study by which to query the power of repentance to bring about theological and institutional change. Until 1945, most Christians and Christian institutions found nothing morally or theologically objectionable in a habitual denigration of Jews and Judaism. Most Christians faithfully repeated that God had rejected the Jews and punished the people of Israel for failing to recognize and accept Jesus as Messiah. Sporadically, these doctrines erupted into actual violence ranging from expulsions to pogroms, from personal assault to the destruction of entire communities. Casual violence in the form of verbal and artistic defamation is deeply embedded in the history of the Christian West. 20 Rosemary Radford Ruether has called anti-Judaism "the left hand of Christology," pointing to its iconic depictions in paintings such as the "Living Cross," where a hand emerges from the left side of the cross to stab the blind-folded synagogue through the heart. Violence against Jews, anti-Judaism, has taken root at the very center of the Christian message, Ruether argued, and extended into all areas of Christian liturgy and doctrine, exegesis and music, art and literature, architecture and education. Christians habitually characterized and condemned the people of Israel as blind, obdurate, hypocritical, deceitful, arrogant, and conspiratorial. Consequently, they exacted punishment from Jews in the form of repression, harassment, ghettoization, exile, and murder.
The "teaching of contempt," as Jules Isaac called this body of anti-Jewish doctrines, only became an embarrassment to Christian churches after the European genocide of the Jews. 22 The images and reports of extermination rattled the conscience of Christian leaders and activists in a way that the "mere" conflagration of hundreds of synagogues across Germany in November 1938 had not. It bears remembering that burning down synagogues remained within the acceptable boundaries of Christian anti-Judaism. Martin Luther had counseled authorities to do exactly that; his writings against the Jews remained in circulation, republished and distributed in 1938 by the Bishop of Thuringia, Martin Sasse. 23 Only a handful of pastors and priests protested publicly or were arrested for condemning the conflagration of synagogues in their sermons the following Sunday. There were essentially no street protests or offers of sanctuary to Jewish communities by local churches. 24 Although the exceptions should not be minimized, for the majority, theological anti-Judaism was too ingrained and normalized to allow for forceful and unambiguous resistance against the politics of National Socialism. 25 Without denying the exceptional and heroic efforts that were made, or the national and local differences that existed, the paralysis of Christendom in the face of genocidal antisemitism demands a reckoning with theological teachings, scriptural interpretations, and liturgical practices. The eradication of the habitual denigration of Judaism requires a prolonged process of repentance, which Alice Eckardt recalls James Parkes explaining, will take at least three hundred years. 26 Repentance means a return to God's path and commandments. It is the same word in all three Abrahamic monotheistic religions. As Muslim scholar Mahmoud Ayoub explains: "Like the Hebrew teshuvah and the Greek metanoia, the [Arabic] word tawbah means 'oft turning' to God… The active participle tawwab implies an attitude of constant turning… Repentance is not a metaphysical or theological concept, but rather a practical attitude or state of moral and religious consciousness." 27 Although repentance refers to a return, it is not a backward-oriented, restorative move. The point of repentance is not to return to the past but rather to cultivate the conditions for an open and different future. All three Abrahamic religions mobilize people's ability to reach for wholeness, connection, and fullness of life, and they extend this possibility to any and all repentant sinners. As Hannah Arendt puts it: without being forgiven, released from the consequences of what we have done, our capacity to act would, as it were, be confined to one single deed from which we could never recover; we would remain the victims of its consequences forever, not unlike the sorcerer's apprentice who lacked the magic formula to break the spell. 28 Repentance affirms the possibility of change and transformation. The past is not the future, and we are not condemned to repeat the same beliefs and actions in endless cycles of repetition. In order to atone for wrongdoing, repentance requires recognition, expressions of remorse, truthful confession, and willingness to engage in penitential service that renders restitution to the victim, community, and God. While Jew-hatred is deeply entrenched in Christianity and the history of Western civilization, it is neither essential nor inevitable. Antisemitism is not eternal, despite Hendryk Broder's book title making that claim. 
Rather, repentance can transform traditions of contempt into teachings of respect, even if such a prospect appears little more than an "extravagant moral daydream," as Jean Amery put it. 30 How does this happen? According to Soloveitchik's reading of Maimonides, there must be intellectual and moral recognition of one's sins (hakarat haḥet), as well as feelings of remorse and self-loathing (ḥarata). These two precepts align with the Christian mandate of contritio cordis, heartfelt contrition, which involves both cognitive understanding and moral emotion. Jewish practice requires a firm commitment from a ba'al or ba'alat teshuvah (penitent) to avoid future repetition of the wrongful act (azivat haḥet) as well as restitution and compensation (peira'on), which must be rendered either directly to the victim or to the community in general. These two clearly spelled out precepts find a faint echo in the Christian sacramental step of satisfactio operis, which traditionally involved the performance of specific tasks of charity, prayer, and austerities. Both the Roman Catholic and the Jewish process of teshuvah require verbal confession (vidui - confessio oris), which can be made personally or within a liturgical ritual context. This commandment to confess has puzzled Jewish commentators, since there is no priest who would receive such a confession. Maimonides explains the need for verbal articulation of wrongdoing on the basis of the Talmudic principle that "unspoken matters that remain in the heart are not significant matters" (b. Kiddushin 49b). 31 According to Maimonides, writes Soloveitchik, "confession is the concretization of repentance. Speech, the verbalizing of confession, endows the thought of repentance with reality. It is the climax and final chord of the long and torturous internal process of repentance." 32 Therefore, he argues, the book of Leviticus, which addresses and regulates the offering of guilt sacrifices, adds verbal confession as an extra step: "He shall confess the sin he has committed upon it" (Lev 5:5).
It is the performative, external completion of steps and tasks that make repentance especially relevant for the context of antisemitism. Culpable histories must be publicly expressed and acknowledged in order to be expiated. David Blumenthal likens teshuvah to a spiral, which can begin at any point, and reaches deeper over time as it changes a person at ever more profound levels. 33 Soloveitchik similarly uses organic metaphors for a "repentance that sprouts forth and grows in the course of a long and drawn-out process typified by doubt and speculation, soul-searching and spiritual reckoning." 34 Can this spiritual process that culminates in atonement with God on Yom Kippur be taken out of its liturgical context and applied to more profane intellectual and moral processes of "coming to terms" with antisemitism?
Repentance in Church Documents after the Shoah
In the aftermath of the Shoah, numerous declarations on Jews and Judaism were issued by national churches as well as ecumenical and international church bodies. The sheer number of synods, assemblies, councils, commissions, and review panels that were convened to study, debate, and vote on declarations on the Jews, Judaism, Christian-Jewish relations, and the Holocaust is remarkable. 35 These statements form a body of work that can be examined for insights into the process of repentance for the Holocaust and Christian Jew-hatred. 36 They constitute a peculiar form of theological work, distinct from both the academic production of scholarly theology and liturgical and homiletic proclamations of faith.
As a distinct vector of theological production, such statements may not be as effective as their proponents like to assume. Eva Fleischner has noted that "most Christians have no idea of the recommendations various churches have made in the past forty or so years." 37 Official church statements aim to articulate communal principles of faith, but they are regularly ignored, taught neither in seminaries nor in churches. Nevertheless, these Christian declarations about Jews and Judaism, the Holocaust and antisemitism, demonstrate a growing recognition of theological error and willingness to institute theological, liturgical, exegetical, and educational changes. In Franklin Sherman's words, as a body of work, they add up to a "major 'turning,' what the Hebrew term teshuvah denotes, at the official levels." 38 And Eva Fleischner affirmed in 2005, the churches engaged in "serious soul searching and, in many cases, …a conversion of heart and repentance" in response to the Shoah. 39 Immediately after 1945, national and international church bodies rushed to declare antisemitism a "sin against God" (WCC 1948) and a "denial of the spirit and teaching of our Lord" (WCC 1946), but the precise function, shape, and history of anti-Judaism remained opaque and largely unacknowledged. It took two decades before the central charge of Jewish guilt for the crucifixion of Christ was officially retracted by various church bodies, most notably in the Second Vatican Council's Nostra Aetate. Many Christians, and certainly all Nazis, were convinced that Jews deserved to be punished. For centuries, Jews were pictured as persecutors of Christ, members of a conspiracy to entrap the innocent man of God. God himself had rejected this people. According to the Gospel of Matthew, the entire people was guilty and had called "his blood upon us and our children" (Mt 27:25). This concept of "collective guilt" has historically applied exclusively to the people of Israel. Punishing the Jews was God's plan and, by extension, every Christian's duty, a righteous act of self-defense. Contemporary conspiracy myths, such as Protocol of the Elders of Zion and the recent QAnon conspiracy, build upon the Gospel passion narrative of entrapment and wrongful conviction of Jesus by a cabal of powerful Jewish leaders.
The atrocity of the Shoah created the possibility for guilt reversal: for the first time in history, the murderers of Jews were seen as guiltier than their Jewish victims. This "sea change" did not occur, though, until the mid-1960s. The first church body to disavow the deicide charge as a "tragic misunderstanding" was the House of Bishops of the Episcopal Church in the USA in 1964, who wrote, "To be sure, Jesus was crucified by some soldiers at the instigation of some Jews. But this cannot be construed as imputing corporate guilt to every Jew in Jesus' day, much less the Jewish people in subsequent generations." In 1965, the Second Vatican Council in Rome overwhelmingly passed Nostra Aetate, widely acclaimed as the revolutionary moment in Jewish-Christian relations. It similarly states: True, the Jewish authorities and those who followed their lead pressed for the death of Christ; still what happened in His passion cannot be charged against all the Jews, without distinction, then alive, nor against the Jews today. Although the Church is the new people of God, the Jews should not be presented as rejected or accursed by God, as if this followed from the Holy Scriptures. 41 The attribution of Jewish guilt had been the cornerstone on which the election of the Gentile Christian church was built. It establishes the reason for God's rejection and replacement of the people of Israel. Any repentance for the churches' silence and complicity in the Holocaust, therefore, must begin with superseding supersessionism and the replacement of replacement theology.
In their 2002 statement, "A Sacred Obligation: Rethinking Christian Faith in Relation to Judaism and the Jewish People," the American ecumenical Christian Scholars Group, founded in 1969, laid out ten "sacred obligations" to demand theological, exegetical, liturgical, and educational changes in light of centuries of anti-Judaism. The book, published in 2005, belongs to the genre of consultative theological work. The platform of ten obligations opens with a denunciation of the erroneous traditional portrayal of Jews as "collectively responsible for the death of Jesus and therefore accursed by God…We acknowledge with shame the suffering this distorted portrayal has brought upon the Jewish people. We repent of this teaching of contempt. Our repentance requires us to build a new teaching of respect." 42 Repentance grounds the search for new interpretations that seek to integrate the theological integrity and religious vitality of rabbinic Judaism into Christians theology, exegesis, and ethics. Revising the Jewish-Christian relation is, the authors argue, "a central and indispensable obligation of theology in our time:" It is essential that Christianity both understand and represent Judaism accurately, not only as a matter of justice for the Jewish people, but also for the integrity of Christian faith, which we cannot proclaim without reference to Judaism. Moreover since there is a unique bond between Christianity and Judaism, revitalizing our appreciation of Jewish religious life will deepen our Christian faith. 43 Self-knowledge triggered by repentance results in not only negative emotions, such as shame and self-mistrust, but innovation and creativity. Repentance is a future-oriented practice that generates new paradigms and patterns of relating to self and others.
Paul's Olive Tree and the Biology of Grafting
Since Nostra Aetate, Paul has become the guarantor of new covenantal thinking. Instead of speaking of superseding and replacing the Jews in God's one and only covenant, Romans 11 introduces the metaphor of the olive tree, into which a new branch has been grafted: But if some of the branches were broken off, and you, a wild olive shoot, were grafted in their place to share the rich root of the olive tree, do not boast over the branches. If you do boast, remember that it is not you that support the root, but the root that supports you. … For if you have been cut from what is by nature a wild olive tree and grafted, contrary to nature, into a cultivated olive tree, how much more will these natural branches be grafted back into their own olive tree (Romans 11: 17-18, 24).
The science of plant biology and grafting makes the metaphor of the olive tree even more compelling. Biologist and historian of science Hans-Jörg Rheinberger has written extensively about the biology of grafting as well as its metaphorical use in epistemology. Grafting, he points out, provides distinct benefits over other techniques of biological cultivation, such as hybridization, transplantation, and vaccination. In hybridization, transplantation, and vaccination, the genetic material of two different plants mixes and mingles, is absorbed and integrated. In grafting, the genetic difference between the two plants remains for the duration of the lifetime of the plant(s). The graft is neither a parasite that destroys the host, nor does it diminish the growth of the original tree: Despite complete bonding at the interface, the grafts retain their specific genetic identity, although the base plant influences the development and quality of the graft, which grows more or less, and bears more or less fruit. However, they remain heteronomous to each other. 44 This suggests that in Paul's metaphor, the Christian graft does not take over the Jewish root, nor does its sap all of its strength thereby harming the growth of the "Jewish" branches. This image of the grafted olive tree provides an alternative model by which to envision Jewish-Christian relations, one not consumed by covenantal rivalry, exclusivism, and triumphalism. 45 The metaphor of the Gentile church as a wild graft plugged into the established olive tree permits and celebrates difference and dependence within covenantal theology. Such covenantal plurality is exceptional, as even Paul retreats to the more prevalent narrative tradition of filial competition. In Galatians 4:21-31, he claims the Church to be the sole legitimate heir of Abraham through the free woman Sarah, while the Jews are heirs of the older brother Ishmael, born to the enslaved Hagar. With his two wives and two sons, Abraham embodies the competitive conflict over the attentions of the F/father. 46 This legacy of patriarchal rivalry and exclusive identity formation has been critiqued often and eloquently. It begins with the rivalry of Cain and Abel, as Regina Schwartz argues in The Curse of Cain: The Violent Legacy of Monotheism. 47 And it continues with Jacob and Esau's lament, "Has God only one Blessing?" cited by Mary Boys in her eponymous rereading of the covenantal dilemma. 48 The notion that election must entail rejection has been the source of much historical repression and persecution. It underlies the violence of the left hand of Christology that emerges from the "living cross" to stab the synagogue, and the imagery of the triumphant Ecclesia who looks upon the dethroned and destroyed Synagoga. 49 The triumph of one necessitates the annihilation of the other.
Repenting Supersessionism: Religious Pluralism
There is broad consensus across Christian denominations and Christian theology that anti-Judaism is an evil that ought to be avoided and that supersessionism is implicated. There is no agreement, though, on how exactly supersessionism can be replaced. In light of the covenantal model of the grafted olive tree, I want to examine two pathways that have emerged in recent years. The first pathway is developed by scholars in the Jewish-Christian dialogue who take the metaphor of the grafted olive tree as validation of theological difference, diversity, and religious pluralism within the language of the covenant. The second pathway is taken by Christian theologians, often evangelical, who understand Paul's olive tree as invitation to affirm a Jewish-Christian intimacy and affection that blurs the boundaries between the two religious communities and traditions. The difference between these two positions is partly denominational, but more importantly, hinges on the legitimacy of Christian mission to the Jews.
Members of the Christian Scholars Group exemplify the first pathway. The seventh statement of their "A Sacred Obligation" explicitly repudiates any and all missionary campaigns directed at Jews: "Christians should not target Jews for conversion." 50 This statement directly responds to the missionary mandate, known as The Great Commission, that concludes the Gospel of Matthew: "go and make disciples of all nations, baptizing them in the name of the Father and of the Son and of the Holy Spirit" (Mt 28:19). Christians have taken this as an obligation to share the "good news" with the world in general, and with the Jews in particular. "A Sacred Obligation" rejects this out of repentance for the violence it caused.
In his contribution "Covenant and Conversion," Philip Cunningham critiques the triumphalist confidence that lies at the core of missionary campaigns that feel "competent to blame others for failing to 'believe' the proclamation of the good news." 51 Theology, he cautions, is a human enterprise that must remain cognizant of the supreme sovereignty and transcendence of God: "no human being can know the mysterious workings of God's purposes and grace in the heart of another." 52 The faith of medieval Jews "who refused forced baptism out of fidelity to their covenant with God and so were slain by fanatical Christians cannot be faulted for rejecting the gospel." 53 Indeed, their martyrdom and steadfastness inspires respect. Cunningham counsels eschatological humility: "Conceivably, at the end of days Jews will come to appreciate why Christians revere Christ Jesus, while Christians will come to value Jewish love for the Torah. Both may profoundly recognize the presence of their divine covenant partner in the other and so will exclaim with Paul, 'Oh, the depth of the riches and wisdom and knowledge of God.'" 54 For theologians such as Phil Cunningham, repentance for anti-Judaism leads to humility and requires a theological paradigm shift toward religious pluralism. Indeed, in another publication, Cunningham draws on Walter Cardinal Kasper's felicitous phrase of Judaism as the "sacrament of every otherness." 55 In the Jewish other, Christianity confronts, and either accepts or rejects, difference and diversity. The false equation of monotheism with monoculture has long authorized the violent suppression of heresy and blasphemy, dissent and difference. Cunningham disavows the vision of a world in which every living human being is baptized in Christ, which would eradicate all other cultivars of religious wisdom and observance, leaving a much poorer and diminished world. He questions the ultimate goal of Christian mission and its universalist vision for "equating the church with the kingdom of God. However, the church is the servant of God's kingdom, not the kingdom itself…" 56 Such a vision of the triumph of the Christian church aims to control and conquer multiplicity and difference, beginning with the Jewish "No" to Jesus proclaimed by Christians as the messiah to Israel.
50 Christian Scholars Group, "A Sacred Obligation: Rethinking Christian Faith in Relation to Judaism and the Jewish People," in Seeing Judaism Anew, xvi. 51 Philip A. Cunningham, "Covenant and Conversion," 157. 52 Ibid., 157. 53 Ibid., 157. 54 Ibid., 158, citing Rom 11:33. 55 Phil Cunningham, "Judaism as Otherness," in Jewish-Christian Relations (2/29/2004) https://www.jcrelations.net/article/judaism-as-sacrament-of-otherness.html?tx_extension_pi1%5Bac-tion%5D=detail&tx_extension_pi1%5Bcontroller%5D=News&cHash=f0d431225705ae589d2697cfe7f0d25e [January 21, 2021]. 56 Phil A. Cunningham, "Covenant and Conversion," Seeing Judaism Anew, 160.
The Vietnamese Catholic theologian Peter Phan, also a member of the Christian Scholars Group, similarly pursues a "Christian theology of religious pluralism." He sees the Jewish-Christian relation as the foundation for all other interfaith encounters. The Asian religions, not constrained by narratives of fraternal rivalry and monotheistic monoculture, have more easily coexisted and allowed for multiple religious belonging. For Phan, religious plurality exists as "not just a matter of fact but also a matter of principle." 57 His theological project, Being Religious Interreligiously, however, earned him a "critical notification" by the doctrinal commission of the United States Conference of Catholic Bishops in 2007. 58 Phan is looking for a different covenantal model that envisions a "process of complementarity, enrichment, and even correction [which] is two-way, or even reciprocal." 59 This approximates and returns to Paul's metaphor of the grafted olive tree, where reciprocity and relationality ensure vitality and reproductive success not only of the graft but also of the host and its branches. An "inclusivist-pluralist Christology" understands its vital dependence on the Other not as weakness and diminishment but as enrichment and fortification. 60
Figure: Tree grafted with 40 different types of fruits. 61
Repenting Supersessionism: Jewish-Christian Affection and Intimacy
A different approach to repentance for anti-Judaism is exemplified by the "Society for Post-Supersessionist Theology." Its members are committed to develop Christian theologies that affirm and respect Judaism, but they are unwilling to completely denounce missionary movements. This leads to an embrace of the Jewish other, which seeks to "unite" Jews and Christians "in the Messiah." 62 In their mission statement, the "Society for Post-Supersessionist Theology" writes: The Society understands post-supersessionism as a family of theological perspectives that affirms God's irrevocable covenant with the Jewish people as a central and coherent part of ecclesial teaching. It seeks to overcome understandings of the New Covenant that entail the abrogation or obsolescence of God's covenant with the Jewish people, of the Torah as a demarcator of Jewish communal identity, or of the Jewish people themselves. The Society welcomes participation from all who seek to advance post-supersessionist theology. The Society especially seeks to promote perspectives that remain faithful to core Christological convictions; that affirm the ekklesia's identity as a table fellowship of Jews and Gentiles united in the Messiah; and that engage with Jewish thought and tradition as an expression of ecclesial partnership with the Jewish people as a whole." 63 The aim of establishing "table fellowship of Jews and Gentiles united in the Messiah" threatens to erase the genetic difference between the Jewish tree and its Christian branches. Groups, such as "Messianic Judaism," "Friends of Israel," and "Jews for Jesus" practice "table fellowship." Jews who follow the Torah and observe kashrut do not practice table fellowship with Christians. They are not-not now and not ever-united around the belief in Jesus as the Messiah of Israel. 64 As evangelical Protestant churches reach out in dialogue and support the state of Israel, their unwillingness to denounce missionary movements undermines their efforts. 65 What emerges is an affection and intimacy that fails to respect the genetic difference between Judaism and Christianity. For instance, increasingly Christians are celebrating Jewish holidays (especially Passover) and appropriate Jewish liturgies and ceremonial garb (tallis, tefillin). When U.S. Vice President Mike Pence invited Rabbi Loren Jacobs of the Congregation Shema Yisrael to offer a commemorative prayer for the victims of the deadly shooting in the Pittsburgh Synagogue, Jacobs concluded the prayer "in the name of Jesus." The rabbi is the leader of a "Messianic synagogue," in other words, an evangelical church in Detroit, MI. 66 Such events deliberately violate and erase the boundary between Judaism and Christianity. 67 Another example is the conflict over the approval of the cable station "God TV" by the Israeli Council for Cable and Satellite Broadcasting. 68 Scandal erupted in the Israeli media when the TV station's CEO, Ward Simpson, asserted both his Jewishness by virtue of his Jewish mother and his absolute right "to share the good news of Messiah with my own people," proclaiming that "Yeshua is the Messiah of Israel." 69 While evangelical Christianity is a strong political ally of the state of Israel, its investment and entanglement in missionary organizations such as the "Jews for Jesus," "Friends of Israel," "messianic Judaism," "Christian Mission to Israel," and "Christian Witness to Israel" erases the distinctions and particular expressions of communal Jewish life. 
This new affection and embrace of Israel is explained as a form of repentance. On Aug. 10 and 11, many Jews worldwide observed the religious holiday of Tisha B'Av by fasting, praying and reading Bible passages related to the destruction of the first Temple by the Babylonians. Religious Jews have kept Tisha B'Av for centuries as a day of communal mourning. But this year they were joined by a growing number of evangelical Christians who now observe the holiday to lament the historical persecution of Jews by the church. The new … Such a reversal leaves the competitive paradigm of top and bottom, privileged and secondary, intact.
But, to invoke the biological metaphor once more, the graft is not second-class or an afterthought. It is deliberately chosen for hardiness, pest-resistance, maturity, and ease of propagation, to name just a few of the benefits that have made grafting a frequent and popular horticultural technique. Joining two plants together while maintaining their distinctions creates abundance and benefit. The relationship of "stock" and "scion," root and branch, is neither a mother-daughter relationship nor fraternal competition but a distinct model of flourishing that thrives on difference. Repentance for anti-Judaism has the potential to unleash this kind of fruitfulness. But it cannot flow from self-abjection and degradation. That will inevitably lead to resentment and rebellion against the privileged and primary. Instead, repentance for Christian anti-Judaism requires willingness to accept difference and disagreement as a theological good.
By now, it should be clear that I consider the first pathway, exemplified by members of the Christian Scholars Group, the more radical and appropriate form of teshuvah because it rebuilds Christian theology around respect for theological difference and disagreement. The second pathway remains beholden to claims of theological supremacy and cannot repudiate missionary and inclusionary incorporations of Jewish otherness. This ultimately undermines genuine encounters between equals.
Conclusion
The Jewish tradition affirms that the doors of repentance are always open. The "repentant sinner is greater than a truly righteous man," writes Soloveitchik, having passed through the cleansing fires of contrition: 73 Hate is more emotional and more volatile than love. The destructive forces are stronger than the constructive forces. A thoroughly righteous man is not given to feelings of hatred or jealousy…But a man who has sinned and repented may be able-if he proves worthy-to utilize the dynamism of the forces of evil which had enveloped him before and elevate them… and make them operate on behalf of the forces of good. 74 In the Gospels, Jesus teaches that there will be more joy in heaven over one sinner who repents and returns than over ninety-nine righteous who do not need to repent (Luke 15:7). Repentance means wrestling with the powerful forces that fuel hatred and contempt. Without a doubt, antisemitism is a powerfully persuasive poison. It requires active and long-term engagement to decontaminate toxic traditions and to transform them into life-giving teachings of theological integrity. | 7,873.6 | 2021-04-05T00:00:00.000 | [
"Philosophy"
] |
A Portable Lipid Bilayer System for Environmental Sensing with a Transmembrane Protein
This paper describes a portable measurement system for current signals of an ion channel that is composed of a planar lipid bilayer. A stable and reproducible lipid bilayer is formed in outdoor environments by using a droplet contact method with a micropipette. Using this system, we demonstrated that the single-channel recording of a transmembrane protein (alpha-hemolysin) was achieved in the field at a high-altitude (∼3623 m). This system would be broadly applicable for obtaining environmental measurements using membrane proteins as a highly sensitive sensor.
Introduction
Planar bilayer lipid membranes (BLMs) have been proposed as a useful platform for various potential applications including single ion channel analysis, drug screening, nanopore sensing at the single-molecule level, and nanopore DNA sequencing. [1][2][3] For sensing with a BLM system in outdoor environments, the following properties are imperative: 1) portability of the measurement system, 2) low-noise measurement, and 3) preparation of a reproducible and stable BLM. Although several types of lipid bilayer micro-chips have been reported, [4][5][6][7] none of the systems has satisfied all of these requirements at the same time.
In this study, we propose a system for constructing portable, low-noise, and reliable BLM experiments as a platform for conducting membrane protein measurements in outdoor environments. The stable and reproducible formation of the BLM is achieved by a method referred to as the ''droplet contact method'' [8][9][10][11][12][13] in a double-well (DW) chip (Figure 1a,b). This chip has Ag/AgCl electrodes on the bottom of the chamber and is connected to a handheld amplifier via a laptop PC as shown in Figure 1b-d. We here apply this portable system to environmental nanopore sensing using alpha-hemolysin (aHL). [14] We also evaluate the applicability of this system for obtaining single-channel current recordings in the field at high altitude (∼3623 m). This demonstration serves as a proof of concept for our portable system in outdoor operation.
Materials
The following reagents were used in this study: poly(methyl methacrylate) (PMMA) substrate (Mitsubishi Rayon; Japan); KCl, K2HPO4, KH2PO4, and ethylenediamine tetraacetic acid (EDTA; Wako; Japan); 1,2-diphytanoyl-sn-glycero-3-phosphocholine (DPhPC) and phosphocholine from egg yolk (EggPC) (Avanti Polar Lipids; Alabama); n-decane (Sigma-Aldrich; St. Louis). Buffered electrolyte solutions were prepared from ultrapure water. The ultrapure water (>18 MΩ·cm) was obtained from a Milli-Q system (Millipore). Wild-type aHL (Sigma-Aldrich; St. Louis) was obtained as a monomer polypeptide isolated from Staphylococcus aureus in the form of a lyophilized powder and dissolved at a concentration of 1.0 mg protein/mL in ultrapure water. During use, samples were diluted to the desired concentration using a buffered electrolyte solution and stored at 4 °C. For measurements in the field, all samples were stored at room temperature.
Device fabrication
The device consists of a DW chip with the wells separated by a thin poly(chloro-p-xylylene) (parylene) film with micropores (described below). [13] The DW chip was made from poly(methyl methacrylate) with dimensions of 30 × 20 × 4 mm and fabricated using an automated CAD/CAM (computer-aided design/computer-aided manufacturing) modeling machine (MM-100; Modia Systems; Japan). Each well was 4 mm in diameter and 3 mm in depth. The intersectional plane of the overlapped area of the wells, in which the parylene film was settled, was 2 mm in width.
The parylene film was fabricated using a general photolithography method. First, a 5-μm-thick parylene film was coated on a single-crystalline silicon substrate by chemical vapor deposition. Then, a thin aluminum layer was deposited on the parylene film and patterned using a standard photolithographic process. Using the aluminum layer as a mask, the exposed parylene film was etched by oxygen plasma. After the aluminum layer was removed, the parylene sheet with micropores (5 pores, 100 μm or 150 μm in diameter) was peeled off of the silicon substrate using tweezers.
Ag/Cr was deposited and patterned on the PMMA substrate as wired electrodes for the electrical recording from the chambers to a handheld patch-clamp amplifier. The chambers with the parylene films and the wired substrate were connected using thermocompression bonding. The bottoms of the chambers, which made contact with droplets, were coated with Ag/AgCl paste (BAS; Japan).
Lipid bilayer preparation and channel reconstitution using the droplet contact method
The droplet contact method for BLM formation is relatively simple compared with conventional methods such as the painting method [15]. In addition, the BLM formed in our system showed a stable lifetime of around 2 weeks with reconstituted alamethicin channels. [11] First, the DPhPC or EggPC/n-decane (20 mg/mL) solution (5-7 μL) was initially dropped into each well. Next, the buffer solution (18-20 μL) was dropped into both wells. Within a few minutes of adding the buffer solution, the lipid monolayers made contact, and the BLM was formed. If the BLM ruptures during the process, it can be reformed by repainting using a hydrophobic stick (plastic needle, As One, Japan).
We defined the two droplets set on the working and ground electrodes as droplets A and B, respectively. A symmetrical buffer solution (1 M KCl, 10 mM phosphate-buffered saline (PBS), 1 mM EDTA, pH 7.4) was used for both droplets in this study. aHL was dissolved in the droplet at a concentration of 0.6 μM.
Channel measurements
The channel currents were monitored using a Pico or Pico2 (Tecella; CA) handheld amplifier and an Axopatch 200B (Axon Instruments) patch-clamp amplifier with a Digidata 1440A digitizer (Molecular Devices). The signals were detected through a 5-kHz low-pass filter at a sampling frequency of 20 kHz with the Pico (and Pico2), or a 1-kHz low-pass filter at a sampling frequency of 5 kHz with the Axopatch 200B, at 23 ± 1 °C. Analysis of the channel current was performed using pCLAMP ver. 10.6 (Molecular Devices; Sunnyvale) and Igor Pro 6.2 (WaveMetrics; Oregon).
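As an illustration of this acquisition chain, the sketch below applies a 5 kHz low-pass filter to a current trace sampled at 20 kHz, mirroring the Pico settings described above; the filter order and the synthetic trace are assumptions for demonstration, not the instrument's internal implementation.

```python
import numpy as np
from scipy.signal import bessel, filtfilt

FS = 20_000      # Hz, sampling frequency used with the Pico amplifier
CUTOFF = 5_000   # Hz, low-pass cutoff

def lowpass(trace: np.ndarray, fs: float = FS, cutoff: float = CUTOFF) -> np.ndarray:
    """4-pole Bessel low-pass filter applied forward and backward (zero phase)."""
    b, a = bessel(4, cutoff / (fs / 2.0))
    return filtfilt(b, a, trace)

if __name__ == "__main__":
    # Synthetic trace: a 100 pA open-channel step plus white noise (illustrative).
    t = np.arange(0, 0.5, 1.0 / FS)
    current_pa = np.where(t > 0.25, 100.0, 0.0) + np.random.normal(0.0, 5.0, t.size)
    filtered = lowpass(current_pa)
    print(f"baseline RMS before/after filtering: "
          f"{current_pa[t < 0.25].std():.2f} / {filtered[t < 0.25].std():.2f} pA")
```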
Noise validation of the channel current in the system
The noise validation was carried out using two different system types: one in which the Ag/AgCl electrodes were directly inserted into the buffer solution, and one in which the electrodes were wired on the bottom of the wells.
Field testing
The field test of the portable system was performed at the summit of Mount Fuji in Japan using our DW chip connected to a handheld amplifier and a laptop PC (Toshiba Dynabook R730; Japan) on August 5-6, 2011. The weight of this portable system is relatively light (less than 3 kg) compared with a conventional lipid bilayer system. The altitude, wind speed, and barometric pressure at the time of testing were measured using an ADC Pro (Brunton; USA). All chemicals and equipment were transported up the mountain by four people (R. K., K. K., Y. T., and T. K.).
Results and Discussion
Reducing current noise by wiring electrodes on the bottom of the well
In a conventional system, the Ag/AgCl electrodes for recording ion channel currents are often directly inserted into the buffer solution (see the schematic in Figure 2a). While this is an efficient method for laboratory-based experiments, this potentially unstable configuration is not suitable for a hand-held system. Bottom-well wiring is a common technique used in MEMS (Micro-Electro-Mechanical Systems) fabrication; thus, we applied such wiring to our device and examined the current noise, which can be influenced by the electrode configuration. Figure 2a shows the power spectra of the aHL channel current noise from the two different systems (electrodes inserted into the aqueous droplets or positioned on the bottom of the wells). The frequency-dependent noise characteristics were improved in the bottom-electrode system. The power density of the wired system was over one order of magnitude lower at frequencies below 1000 Hz and also showed a significant reduction at low frequencies. In the inserted-electrode system, the Ag/AgCl electrodes have several interfaces, aqueous buffer/n-decane and n-decane/air, and capacitor noise should thus be generated from these interfaces. [16] On the other hand, the electrodes in the bottom-electrode system contact only the aqueous phase, without such interfaces. The aqueous solution lies below the oil phase and wets the electrode surface because the density of the aqueous solution is greater than that of n-decane. [9] In addition, the RMS noise (baseline noise) in the inserted-electrode and bottom-electrode systems at 1 kHz filtering was determined to be 2.4 ± 0.4 pA and 0.9 ± 0.2 pA, respectively (the headstage noise of our equipment was approximately 0.5 pA RMS under the same conditions). These results demonstrate the advantage of the bottom-electrode system for obtaining low-noise channel recordings.
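The comparison in Figure 2a can be reproduced in outline by estimating the power spectral density and the RMS baseline noise of a recorded trace. The sketch below uses Welch's method on synthetic noise traces whose amplitudes are placeholders chosen to match the reported RMS values, not the measured data.

```python
import numpy as np
from scipy.signal import welch

FS = 20_000  # Hz, sampling rate of the recordings

def noise_summary(trace_pa: np.ndarray, fs: float = FS):
    """Return (frequencies, PSD in pA^2/Hz) and the RMS noise of a current trace."""
    freqs, psd = welch(trace_pa, fs=fs, nperseg=4096)
    rms = np.sqrt(np.mean((trace_pa - trace_pa.mean()) ** 2))
    return (freqs, psd), rms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder baseline traces standing in for the two electrode configurations.
    inserted = rng.normal(0.0, 2.4, FS)  # ~2.4 pA RMS, electrodes in the droplets
    bottom = rng.normal(0.0, 0.9, FS)    # ~0.9 pA RMS, electrodes on the well bottom
    for name, trace in (("inserted", inserted), ("bottom", bottom)):
        (f, psd), rms = noise_summary(trace)
        idx = np.argmin(np.abs(f - 100.0))
        print(f"{name:8s}: RMS = {rms:.2f} pA, PSD near 100 Hz = {psd[idx]:.2e} pA^2/Hz")
```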
Rapid aHL channel current measurement
The mechanism of aHL channel formation involves initial formation of the BLM by contact between the two lipid monolayers, followed by insertion of aHL monomers into the BLM, which then assemble and ultimately form a nanopore. Despite the versatility of this approach for BLM formation and channel reconstitution, the time required for channel formation has not been well characterized. The time required for the appearance of an initial aHL current signal was therefore examined using two different methods: the drop-off and the repainting method (see Materials and Methods). Figure 2b presents the histogram of the time to initial aHL reconstitution using the drop-off method. The most frequent reconstitution time was approximately 4 min after dropping the aHL solution. The time required with the repainting method, however, was approximately 10 times shorter than that for the drop-off method (Figure 2c). The lipid phase can be deformed and the lipid bilayer readily re-formed by simply stroking the contact area with a hydrophobic stick. This approach is highly useful for repeated measurements and supports rapid data acquisition.
Field test: channel current measurement of aHL at high altitude
The portability of a BLM system for nanopore or membrane-receptor sensing is one of the essential requirements for obtaining measurements in a natural environment. To demonstrate this capability using our chip and a handheld amplifier, we performed single-channel measurements of aHL at a high-altitude location (the summit of Mount Fuji in Japan). The experimental setting is depicted in Figure 3a. The actual altitude was 3623 m. The DW chip with the handheld amplifier, connected to a laptop personal computer (PC) via a USB (Universal Serial Bus) cable, was placed on the ground, and the bilayer formation procedure was conducted. We found that the current noise of the aHL recording was relatively low even without grounding in the natural environment, as shown in Figure 3b; the low noise is likely due to the lack of electrical noise sources in the area. Nonetheless, we sometimes observed vibration noise generated by strong winds. The channel conductance of aHL was ∼1 nS, which is similar to the value measured in the laboratory. The frequency dependence of the current noise measured in the laboratory and in the field is shown in Figure 3c. The power density of the field measurement was more than one order of magnitude higher than that of the laboratory measurement below 100 Hz, which can be attributed to the vibration noise. In addition, at 50 Hz, electromagnetic noise clearly appeared in the field measurement, which was likely generated by the laptop PC.
In summary, a BLM was prepared in a DW chip connected to a handheld amplifier and a PC. The portable chip with bottom electrodes enabled low-electrical-noise measurements. Because of the reproducible and stable BLM formation in the chip, the channel current signal of aHL could be measured at the single-molecule level at high altitude in a natural environment. We believe that this portable system could be applied to a wide variety of environmental measurements, including biomimetic separation (e.g. nano-filtration of water [17]), measurement of deleterious gases [14], and detection of odorant chemicals in nature [18].
"Biology"
] |
Redefining the scientific method: as the use of sophisticated scientific methods that extend our mind
Abstract Scientific, medical, and technological knowledge has transformed our world, but we still poorly understand the nature of scientific methodology. Science textbooks, science dictionaries, and science institutions often state that scientists follow, and should follow, the universal scientific method of testing hypotheses using observation and experimentation. Yet, scientific methodology has not been systematically analyzed using large-scale data and scientific methods themselves, as it is viewed as not easily amenable to scientific study. Using data on all major discoveries across science, including all Nobel Prize and major non-Nobel Prize discoveries, we can address the question of the extent to which "the scientific method" is actually applied in making science's groundbreaking research and whether we need to expand this central concept of science. This study reveals that 25% of all discoveries since 1900 did not apply the common scientific method (all three features), with 6% of discoveries using no observation, 23% using no experimentation, and 17% not testing a hypothesis. Empirical evidence thus challenges the common view of the scientific method. Adhering to it as a guiding principle would constrain us in developing many new scientific ideas and breakthroughs. Instead, assessing all major discoveries, we identify here a general, common feature that the method of science can be reduced to: making all major discoveries has required using sophisticated methods and instruments of science. These include statistical methods, particle accelerators, and X-ray methods. Such methods extend our mind and generally make observing, experimenting, and testing hypotheses in science possible, doing so in new ways and ensuring their replicability. This provides a new perspective on the scientific method, embedded in our sophisticated methods and instruments, and suggests that we need to reform and extend the way we view the scientific method and discovery process.
Science is fascinating because discoveries like new vaccines, more efficient forms of electricity generation, and new medical therapies can spread across the globe and improve the lives of many people. Science and discoveries have enhanced our ability to understand and predict many aspects of our natural and social world. Einstein's special relativity revolutionized physics in the 20th century and how we understand the relationship between space and time. Darwin and Wallace's theory of evolution via natural selection transformed biology and how we comprehend the historical origins of our species. Franklin, Crick, and Watson's discovery of the double helix structure of DNA radically redefined genetics and how we conceive the way genetic information of living organisms is stored, copied, and passed along. These scientists fundamentally changed the way we view the world, but they did not carry out an experiment to make these path-breaking discoveries. In fact, hundreds of major scientific discoveries did not use "the scientific method", as defined in science dictionaries as the combined process of "the collection of data through observation and experiment, and the formulation and testing of hypotheses" (1). In other words, it is "the process of observing, asking questions, and seeking answers through tests and experiments" (2, cf. 3). Many recent science textbooks also present the scientific method as a sequence of steps or a process of observing, experimenting, and testing hypotheses, as shown in systematic studies of university-level science textbooks across science (4-7). The common scientific method is thus embedded in science dictionaries and textbooks (4-7). A study of major science institutions like the National Science Foundation and the National Institutes of Health also found that they primarily endorse this scientific method focused on hypothesis testing, and generally not other exploratory research methods that do not test a predefined hypothesis (8). Researchers have not, however, yet used large representative data to assess the extent to which the scientific method is actually applied in science, or they have investigated it only at an abstract level (9, 10). In general, this universal method is commonly viewed as a unifying method of science and can be traced back at least to Francis Bacon's theory of scientific methodology in 1620, which popularized the concept (11). This seminal book in many ways laid the foundation of the philosophy of science and fundamentally influenced generations of scientists and the common conception of how science is conducted, which remains widespread and institutionalized today (4-8, cf. 12). However, before hypothesizing about science, what its general method is and how it should be conducted, we need to first assess the evidence on how science is actually conducted in practice. Assessing science's major discoveries across scientific fields and time provides a new systematic way to do so and enables us to evaluate how this universal concept of scientific methodology holds up. Science's major discoveries are defined here as all 533 Nobel Prize-winning discoveries in science (from the first year of the prize in 1901 to 2022) (13) and all other major discoveries that were made prior to or did not receive a Nobel Prize; these are derived from all science textbooks (a total of seven) that provide a top 100 list of the greatest scientists and their discoveries and that span scientific fields and history (14-20) (with textbooks specific to a field or time period not included). After excluding duplicate cases within the seven textbooks, 228 other major discoveries remained. A total of 761 major discoveries, which have driven humankind's knowledge, have thus been included in the study. The main source for compiling the data in this study is the main publication of each discovery, which indicates the methods used to make the breakthrough (in the case of discoveries earning a Nobel Prize, the prize-winning papers) (13). For further description of the data, see the figure captions (for greater detail on the data, see the companion study that outlines the features and characteristics of science's major discoverers) (21).
Examining science's major discoveries, we find that the common scientific method (the combined use of observation, experimentation, and hypothesis testing) is applied in making 71% of all discoveries; and the share is 75% for all discoveries in contemporary science, defined as all Nobel Prize and major non-Nobel Prize discoveries since 1900. Among all major scientific discoveries, we find that 94% have required using observation, 81% testing a hypothesis, and 75% experimentation (Fig. 1), with some hypotheses tested using experimental research designs and others using only observation. Science thus does not always fit the textbook definition.
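As an illustration of how such shares can be tabulated once each discovery-making publication has been coded for the three features, a minimal sketch is given below. The five rows of toy data are invented purely for demonstration; the study's actual coding covers the 761 discoveries described above.

```python
import pandas as pd

# Toy coding table: one row per discovery, True/False for each methodological feature.
df = pd.DataFrame({
    "observation":     [True, True, False, True, True],
    "experimentation": [True, False, False, True, True],
    "hypothesis_test": [True, True, False, False, True],
    "sophisticated_method_or_instrument": [True, True, True, True, True],
})

shares = df.mean()  # share of discoveries using each feature
classic = (df["observation"] & df["experimentation"] & df["hypothesis_test"]).mean()

print(shares.to_string())
print(f"classic scientific method (all three features): {classic:.0%}")
```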
Comparison across fields provides evidence that the common scientific method was not applied in making about half of all Nobel Prize discoveries in astronomy, economics and social sciences, and a quarter of such discoveries in physics, as highlighted in Fig. 2b. Some discoveries are thus non-experimental and more theoretical in nature, while others are made in an exploratory way, without explicitly formulating and testing a pre-established hypothesis. Importantly, the common scientific method does not take into account that all Nobel Prize discoveries across fields require applying sophisticated methods (such as statistics and randomization techniques) or instruments (such as centrifuges and computers) (Fig. 2b).
Fig. 1. Methods of science pyramid: share of each methodological approach used for making discoveries. Data reflect all 761 major discoveries.
Fig. 2. Share of discoveries made using the classic and the sophisticated scientific method, across time and fields. Data reflect all 761 major discoveries (including all Nobel Prize discoveries) (a), and all 533 Nobel Prize discoveries (b). Each discovery-making publication is classified as using observation if the study describes collecting observational data (using eyesight) (bar 1 in the figure), as using experimentation if the study conducted an experiment (bar 2), and as testing a hypothesis if the study formulated and assessed a proposed explanation (rather than conducted exploratory research) (bar 3). The publication is classified as using the classic scientific method if the study applied the three features (bar 4). In contrast, the publication is classified as using the sophisticated scientific method if the study applied a complex scientific method or instrument (bar 5), as defined below. The 10 most commonly used scientific methods and instruments, among all Nobel Prize discoveries, include statistical/mathematical methods, spectrometers, X-ray methods, chromatography, centrifuges, electrophoresis, lasers, (electron) microscopes, particle accelerators, and particle detectors. Analysis expanding the data in (b) to include, in addition, the other major discoveries that did not earn a Nobel Prize but were made within the same time period (633 discoveries in total) illustrates comparable results (except for astronomy) and serves as a robustness check, with for example the share of discoveries made applying "the classic scientific method" at 40, 35, 75, 93, and 89% across these five fields, respectively.
When we systematically assess all major discoveries, what is the common method of science that we use to be able to do science and make discoveries? We find that one general feature of scientific methodology is applied in making science's major discoveries: the use of sophisticated methods or instruments. These are defined here as scientific methods and instruments that extend our cognitive and sensory abilities, such as statistical methods, lasers, and chromatography methods. They are external resources (material artifacts) that can be shared and used by others, whereas observing, hypothesizing, and experimenting are, in contrast, largely internal (cognitive) abilities that are not material (Fig. 2). Applying sophisticated methods or instruments is thus a necessary condition for discovery in contemporary science. We find that a number of sophisticated methods and instruments have each been used in making at least 10% of all major discoveries, such as centrifuges, X-ray diffraction, and spectrometers; statistical methods, for example, have been used in making 62% of all discoveries. Without such scientific tools, discovery and scientific progress are not possible. In fact, this sophisticated scientific method is more distinctive of science, as the most common scientific methods and instruments, such as particle accelerators, electrophoresis methods, and X-ray diffraction, are largely only used in science. In contrast, we also often make observations, test hypotheses, and experiment in business, industry, public policy, and everyday life, and these activities are thus not prototypical or distinctive of science alone. Recognizing the vast importance of such complex methods and instruments adds an essential element to understanding science and especially how science has evolved from its early origins in directly observing, hypothesizing, and experimenting to now only being able to do so by using such complex tools. The classic scientific method dominated how science was done for much of history (especially when early scholars like Bacon described it) (11), but now sophisticated scientific methods dominate contemporary science by enabling us to observe, experiment, and test hypotheses in much more diverse, complex, and efficient ways. Just as science has evolved, so should the classic scientific method, which is construed in such general terms that it would be better described as a basic method of reasoning used for human activities (non-scientific and scientific).
While features of science such as observation, experimentation, and hypothesis testing are commonly used in science and making discoveries, they are thus not universal. An experimental research design was not carried out when Einstein developed the law of the photoelectric effect in 1905 or when Franklin, Crick, and Watson discovered the double helix structure of DNA in 1953 using observational images developed by Franklin. Direct observation was not made when, for example, Penrose developed the mathematical proof for black holes in 1965 or when Prigogine developed the theory of dissipative structures in thermodynamics in 1969. A hypothesis was not directly tested when Jerne developed the natural-selection theory of antibody formation in 1955 or when Peebles developed the theoretical framework of physical cosmology in 1965. These scientists all earned a Nobel Prize for these discoveries, but they did not directly apply or generally could not apply the "scientific method" to make their discovery. The common scientific method captures much of scientific practice but not all domains. If we were to abide by the common definition of the scientific method, Copernicus (22), Darwin (23), Einstein (24), Franklin, Crick, and Watson, and many others would not be viewed as having applied it, as they did not directly carry out experiments to make their seminal breakthroughs. These scientists have, however, become iconic figures of science.
In general, scientific methods, like scientists, come in many sizes, shapes, and levels of sophistication. We use many methods to conduct science across fields: combining mathematics with measurement instruments, statistics with experimentation, X-ray diffraction, spectrometers, and particle detectors with systematic observation, and hundreds of other combinations. We may think of the diverse methods needed in immunology, oceanography, neuroscience, and astrophysics, or chemistry, agronomy, and behavioral economics. We cannot do science without our sophisticated methods and instruments, which make it possible, for most phenomena in science, to observe, experiment, test hypotheses, and especially do exploratory research in the first place, and also to do so in new and innovative ways (Table 1). The sophisticated scientific method integrates the use of observation, experimentation, and hypothesis testing into our central methods and instruments (Fig. 3). Replicability, a central feature of science, is also tied to particular sophisticated methods, such as statistical methods and X-ray devices. Different researchers applying sophisticated methods ensures that studies, theories, and discoveries are replicable (whereas observation, experimentation, and hypothesis testing are too general to do so, are subject to each researcher applying them differently, and are thus more susceptible to researcher bias). Sophisticated methods make research more accurate and reliable and enable us to evaluate the quality of research.
Overall, with the classic scientific method, we would not be able to label many major scientific discoveries as scientific, though they have vastly impacted science and our lives. The concept of the common scientific method, as a golden principle connecting the scientific community together, can be misunderstood as being universal. It is an idealization, embedded in university science textbooks (4-7), science dictionaries (1-3), and several major science institutions (8), that can be confusing for students and less-experienced researchers when learning about science and scientific discoveries and realizing it does not always apply.
We do science and make breakthroughs using our diverse and complex methodological toolbox. We can best view the method of science as the use of our sophisticated methodological toolbox. The classic scientific method needs to be integrated into and redefined as the sophisticated scientific method that better reflects actual scientific practice: scientific methodology is defined as the use of sophisticated scientific methods or instruments (such as mathematics, particle accelerators, and chromatography methods), which are systematic techniques and tools that extend our cognitive and sensory abilities, are generalizable, and enable better observing, hypothesis testing, problem-solving, and experimenting, and thus acquiring knowledge about the world.
A generalizable method or instrument means that it is applicable in different contexts to do science. This definition can provide a more accurate understanding of the nature of scientific methodology. It also directs our attention to refining and expanding our sophisticated methodological toolbox, which is what enables us to drive science and push the scientific frontier. Other features of science's major discoveries are outlined in a series of forthcoming papers and the forthcoming book The Motor of Scientific Discovery. Ultimately, the best path to discovery is not the classic scientific method but the sophisticated scientific method.
Table 1. Main types of methodological approaches used in science and making discoveries. Data reflect all 761 major discoveries (including all Nobel Prize discoveries) (first row of data), and all 533 Nobel Prize discoveries (second row of data). Applying observation and a complex method or instrument together is decisive in producing nearly all major discoveries, at 94%, illustrating the central importance of the empirical sciences in driving discovery and science.
"Philosophy"
] |
Jet mass distribution in Higgs/vector boson + jet events at hadron colliders with $k_t$ clustering
We address the issues of clustering and non-global logarithms for jet shapes in the process of production of a Higgs/vector boson associated with a single hard jet at hadron colliders. We perform an analytical fixed-order calculation up to second order in the coupling as well as an all-orders estimation for the specific invariant mass distribution of the highest-$p_t$ jet, for various jet algorithms. Our results are derived in the eikonal (soft) limit and are valid up to next-to-leading logarithmic accuracy. We perform a matching of the resummed distribution to next-to-leading order results from MCFM and compare our findings with the outputs of the Monte Carlo event generators Pythia 8 and Herwig 7. After accounting for non-perturbative effects we compare our results with available experimental data from the CMS collaboration for the Z + jet production. We find good agreement over a wide range of the observable.
Introduction
The invariant mass of a jet is a typical example of a jet shape that plays an important role in the study of the substructure of jets, testing QCD, and identifying new-physics signals. Being sensitive to soft and/or collinear emissions from the parton initiating the jet and from the other incoming and outgoing partons, this observable provides an indispensable means for probing various aspects that are relevant to achieving better accuracy in QCD calculations. Examples of such aspects include, on the non-perturbative side, hadronisation corrections, the underlying event, and pile-up interactions, and on the perturbative side, initial- and final-state radiation, colour flow, the resummation of large logarithms, etc. Analytical calculations for these aspects pave the way for deeper insight into QCD processes, better control of theoretical uncertainties, and a precise quantification of missing higher-order contributions and their significance, all of which are issues that remain opaque in Monte Carlo event generators.
In this paper we shed light on the resummation of large logarithms that arise due to a miscancellation of soft and collinear singularities between real emissions and their corresponding virtual corrections. The convergence of the perturbative series for the invariant jet mass ($m_j$) distribution is spoiled by the presence of large logarithms of the ratio of the jet mass to its transverse momentum $p_t$, $L = \ln(m_j/p_t)$, at each order in perturbation theory. In the exponent of the integrated distribution, these logarithms take the form $\alpha_s^n L^m$, with $\alpha_s$ being the strong coupling constant and $m \leq (n+1)$, and thus they require an all-orders resummation. A next-to-leading logarithmic (NLL) resummation ensures that all single logarithms of the form $\alpha_s^n L^n$ are resummed, in addition to the leading (double) logarithms (LL) $\alpha_s^n L^{n+1}$.
The jet mass is a non-global observable, i.e., an exclusive observable that is sensitive only to gluon emissions which end up inside the jet. To ensure a proper NLL resummation then its distribution must carefully be treated for a class of large single logarithms known as non-global logarithms (NGLs), which are related to secondary non-Abelian emissions of soft gluons [1,2]. Furthermore, another type of large single logarithms known as clustering logarithms (CLs) [3,4], related to primary gluon emissions off the hard Born configuration, needs to be resummed when the jets are reconstructed using jet algorithms such as k t [5,6] and Cambridge-Aachen (C-A) [7,8]. The anti-k t clustering algorithm [9] is known to cause no CLs (see for instance refs. [10,11]). The full resummation of both NGLs and CLs has thus far proven to be a formidable challenge. The resummation of NGLs is usually achieved numerically via a Monte Carlo approach [1,2] in the large-N c limit (N c being the number of quark colours), though full-colour numerical resummation has been provided in refs. [12,13] based on an analogy between small-x BFKL resummation in Regge scattering and the Weigert equation [14]. Additionally, NGLs may also be resummed via an evolution equation known as the Banfi-Marchesini-Smye (BMS) equation [15] valid at large N c .
In this work we study the jet mass distribution in the process of production of a single jet associated with a vector boson (γ, Z or W) or a Higgs boson H at the Large Hadron Collider (LHC). In ref. [11], the jet mass distribution was calculated at NLL accuracy combined with next-to-leading order (NLO) results in Z + jet and di-jet processes at hadron colliders,¹ for jets defined with the anti-$k_t$ clustering algorithm. The NGLs were computed therein analytically at fixed order (at $\mathcal{O}(\alpha_s^2)$) and numerically to all orders in the large-$N_c$ approximation. In the context of soft-collinear effective theory the jet mass distribution was also studied in ref. [16] for di-jet events, in ref. [17] for γ + jet events, and in ref. [18] for H + jet events. We elaborate herein on the work of ref. [11] by considering the jet mass distribution when jets are reconstructed using the $k_t$ or C-A clustering algorithms. We additionally consider other vector bosons, namely γ and W, as well as Higgs boson + jet production processes. On the experimental side, the jet mass distribution in W/Z + jet events at the LHC was studied by the CMS collaboration [19], where the jets were reconstructed using various jet algorithms. Additional jet substructure techniques such as trimming, filtering, and pruning were also addressed in the same work [19]. We do not address these techniques in the present paper.
We compute NGLs and CLs at fixed order, specifically at $\mathcal{O}(\alpha_s^2)$ where they first appear, for the invariant mass distribution of the highest-$p_t$ jet. We provide results for the following three jet algorithms: $k_t$, C-A and anti-$k_t$, where we note that for the latter algorithm NGLs were first computed in ref. [11] and that CLs are absent. Moreover, we approximate the all-orders resummed CLs and NGLs by an exponential of the $\mathcal{O}(\alpha_s^2)$ result in the case of the $k_t$ and C-A algorithms. This is justified by the fact that for the anti-$k_t$ algorithm the said exponential approximates the all-orders numerical result very well, as we shall demonstrate.² We then compare the NLL-resummed and NLO-matched result for the jet mass distribution, which includes the resummed global and non-global (NGLs and CLs) form factors convoluted with the Born cross-section and corrected for NLO effects for each of the four V/H + jet processes, with results from Pythia 8 [20] and Herwig 7 [21,22] parton showers. Finally, we estimate the non-perturbative corrections to this distribution and compare our predictions with experimental data from the CMS collaboration [19] for the jet mass distribution in Z + jet events at the LHC. This paper is organised as follows. In section 2 we discuss the kinematics of the processes under consideration and define our observable. We calculate, in section 3, the distribution of the jet mass at leading order and construct the resummed global form factor up to NLL accuracy in the exponent. In section 4 we compute the leading CLs at $\mathcal{O}(\alpha_s^2)$ for both the $k_t$ and C-A clustering algorithms, which happen to give identical results at this particular order. We also calculate NGLs at $\mathcal{O}(\alpha_s^2)$ for the aforementioned jet algorithms in addition to the anti-$k_t$. We are then able to assess the impact of the various clustering algorithms on NGLs. In section 5 we discuss the all-orders resummation of NGLs and CLs. In section 6 we compare our NLL-resummed result including NLO corrections for the jet mass distribution with the outputs of the Pythia 8 and Herwig 7 parton showers. In section 7 we estimate the non-perturbative corrections to the distribution, which include hadronisation corrections and the underlying event, and compare our results with the experimental data. Finally, in section 8, we draw our conclusions.
¹ Note that our convention for LO, NLO, etc., is different from that in ref. [11]. In our convention, the LO differential distribution is proportional to a delta function.
² While the all-orders numerical resummation of NGLs for the anti-$k_t$ algorithm may be computed using the Monte Carlo code of ref. [1], as was done in ref. [11], we found that this code produces unreliable results for some dipoles in the case of $k_t$ clustering. Note that the C-A algorithm is not implemented in the code of ref. [1].
Processes and kinematics
In this paper we are interested in the calculation of both CLs and NGLs at single logarithmic accuracy, for the jet mass distribution in the process of production of a single jet associated with a vector (W/Z/γ) or Higgs boson at hadron colliders. For this purpose, it suffices to consider the eikonal (soft) approximation in the squared matrix elements for the emission of gluons. The emitted gluons are assumed to be strongly ordered in transverse momenta, i.e., $k_{tn} \ll \cdots \ll k_{t2} \ll k_{t1} \ll p_t$, where $k_{ti}$ is the transverse momentum of the $i$-th emission and $p_t$ is that of the outgoing hard jet. The latter ordering simplifies the calculations of the emission amplitudes while being sufficient for capturing the single logarithmic CLs and NGLs.
For a vector boson + one jet production in hadron collisions, there are three partonic channels that contribute to the Born process, namely: $q\bar{q} \to g$, $qg \to q$, and $\bar{q}g \to \bar{q}$. For $W^{\pm}$ production, flavour changing needs to be taken into account at the Born level, but this does not affect the QCD structure of initial- and final-state radiation. As for the Higgs + one jet process there are four partonic channels to be considered. These are: $qg \to qH$, $\bar{q}g \to \bar{q}H$, $q\bar{q} \to Hg$, and $gg \to Hg$. As far as QCD calculations are concerned all the mentioned channels, whether for Higgs or vector boson production, are in fact identical as they all involve three hard coloured (QCD) partons and a colour-neutral boson. This means that the resummation of the jet mass distribution is essentially identical in all of the said channels, with differences pertaining to just the Born cross-section and the associated colour factors for the various channels. We note that the relevant total cross-sections have been calculated up to next-to-next-to-leading order (NNLO): Higgs + jet in refs. [23][24][25][26], Z + jet in refs. [27,28], W + jet in ref. [29], and γ + jet in ref. [30].
In the current work, we henceforth consider the three partonic channels shown in figure 1: $(\delta_1): q\bar{q} \to g + X$, $(\delta_2): qg \to q + X$, and $(\delta_3): gg \to g + X$, where X refers to the colour-neutral boson (γ, Z, $W^{\pm}$ or H). We label the incoming partons with (a) and (b) and the outgoing parton initiating the hard jet with (j). The four-momenta of the three hard Born partons and the emitted soft gluons may be parametrised as $p_a = x_a \tfrac{\sqrt{s}}{2}(1,0,0,1)$, $p_b = x_b \tfrac{\sqrt{s}}{2}(1,0,0,-1)$, $p_j = p_t(\cosh y, \cos\varphi, \sin\varphi, \sinh y)$, and $k_i = k_{ti}(\cosh\eta_i, \cos\phi_i, \sin\phi_i, \sinh\eta_i)$, where $\eta_i$ and $\phi_i$ are the rapidity and azimuth of the $i$-th emission and $y$ and $\varphi$ are those of the outgoing hard jet, measured with respect to the beam axis. The incoming partons a and b carry momentum fractions $x_a$ and $x_b$ of the incoming protons, and $\sqrt{s}$ is the collision centre-of-mass energy. We shall be ignoring recoil against soft emissions throughout, as it is beyond single logarithmic accuracy.
Jet mass observable and jet algorithms
We study the normalised (squared) invariant mass of the outgoing hard jet j, defined in the soft limit by $\varrho = m_j^2/p_t^2 \simeq \sum_i 2\, p_j \cdot k_i / p_t^2$, where the sum is over all emitted soft gluons which end up inside the hard jet after the application of a jet algorithm on the final-state partons. Notice that we are considering massless quarks and that the soft approximation has been assumed in the above equation, whereby $p_j \cdot k_i \gg k_\ell \cdot k_m$. The $k_t$, C-A and anti-$k_t$ jet algorithms work as follows. For each pair (im) of hadrons in the final state one defines a distance $d_{im} = \min(k_{ti}^{2p}, k_{tm}^{2p})\, \Delta R_{im}^2/R^2$, with $\Delta R_{im}^2 = (\eta_i - \eta_m)^2 + (\phi_i - \phi_m)^2$, and for each single hadron a beam distance $d_i = k_{ti}^{2p}$, for some fixed jet radius parameter R. Here the parameter p = 1, 0, −1 for $k_t$, C-A, and anti-$k_t$ clustering, respectively. If the smallest of all of these distances is $d_{im}$, then particles i and m are combined into a single particle with four-momentum $p_i + p_m$, whereas if the smallest is $d_i$ then particle i is considered as a jet and is removed from the list of particles. This procedure is iterated until one is left only with jets in the final state. For the $k_t$ algorithm, and in the regime of strongly-ordered emissions, the clustering of particles starts with the softest real gluon. Then, in a given event this softest gluon is dragged towards the next-to-softest real parton within a circle of radius R in the (η, φ) plane. If no such harder parton exists then this softest gluon is considered as a jet and is removed from the list of partons. The process is then repeated until no particles are left. When clustering two partons together, the resulting pseudo-jet is essentially aligned along the direction of the harder one, and its four-momentum is just that of the harder parton.
For the anti-$k_t$ algorithm, on the other hand, clustering starts with the hardest particle, and hence it works in the opposite way to $k_t$ clustering. For the C-A algorithm, only geometric distances between partons in the (η, φ) plane decide how clustering happens. Particles which are closest to each other get clustered first.
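To make the recombination procedure above concrete, the following is a compact reference implementation of generalized-$k_t$ clustering in Python (E-scheme recombination, massless inputs). It is a pedagogical sketch rather than the tool used in the paper; in practice jets are built with FastJet [39], and all function and variable names here are illustrative.

```python
import numpy as np

def four_momentum(pt, eta, phi):
    # Massless four-momentum (E, px, py, pz) from (pt, eta, phi).
    return np.array([pt * np.cosh(eta), pt * np.cos(phi),
                     pt * np.sin(phi), pt * np.sinh(eta)])

def kinematics(p):
    # Back to (pt, eta, phi); assumes non-zero transverse momentum.
    E, px, py, pz = p
    pt = np.hypot(px, py)
    return pt, np.arcsinh(pz / pt), np.arctan2(py, px)

def delta_r2(a, b):
    deta = a[1] - b[1]
    dphi = np.mod(a[2] - b[2] + np.pi, 2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return deta ** 2 + dphi ** 2

def cluster(particles, R=0.6, p=1):
    """Generalized-kt clustering: p = 1 (kt), 0 (C-A), -1 (anti-kt).
    `particles` is a list of (pt, eta, phi) tuples; returns the final jets
    in the same format."""
    objs = [(pt, eta, phi, four_momentum(pt, eta, phi)) for pt, eta, phi in particles]
    jets = []
    while objs:
        best = None                                  # (distance, i, m); m is None for a beam distance
        for i, oi in enumerate(objs):
            d_beam = oi[0] ** (2 * p)                # d_i = kt_i^(2p)
            if best is None or d_beam < best[0]:
                best = (d_beam, i, None)
            for m in range(i + 1, len(objs)):
                om = objs[m]
                d_im = min(oi[0] ** (2 * p), om[0] ** (2 * p)) * delta_r2(oi, om) / R ** 2
                if d_im < best[0]:
                    best = (d_im, i, m)
        _, i, m = best
        if m is None:                                # smallest distance is a beam distance:
            jets.append(objs.pop(i)[:3])             # particle i becomes a jet
        else:                                        # otherwise merge i and m (E-scheme)
            merged = objs[i][3] + objs[m][3]
            for k in sorted((i, m), reverse=True):
                objs.pop(k)
            objs.append(kinematics(merged) + (merged,))
    return jets

# Example: a hard parton plus two soft gluons, clustered with the kt algorithm.
print(cluster([(200.0, 0.1, 0.0), (3.0, 0.4, 0.3), (1.0, 2.5, 1.0)], R=0.6, p=1))
```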
Jet mass distribution
In what follows we calculate at NLL accuracy the jet mass distribution for a given channel δ, defined by (following the notation of refs. [11,31]) where d 2 σ δ /dB δ d̺ is the differential cross-section with respect to both the Born configuration B δ and the jet mass observable ̺. Details of the differential Born configuration dB δ are discussed further in appendix A. The integrated jet mass distribution is obtained by integrating dΣ δ (ρ)/dB δ over B δ with some chosen kinematical cuts (which we denote by Ξ B ), and summing over all Born channels. That is Following ref. [11], we write eq. (4) in the region ρ ≪ 1 in the factorised form where dσ 0,δ /dB δ is the differential partonic Born crosssection for channel δ (see appendix A) and the factor C B,δ (ρ) depends on the Born kinematics and has the perturbative expansion where C (n) B,δ (ρ) are channel-dependent terms that correct the resummation for non-logarithmically-enhanced terms. The ρ-dependent function f B,δ (ρ) resums all the large logarithms. It has the form [31] where the function Lg 1 resums the leading (double) logarithms (LL) of the form α n s L n+1 ,g 2 resums next-toleading (single) logarithms (NLL) of the form α n s L n , and α sg3 resums next-to-next-to-leading logarithms (NNLL) of the form α n s L n−1 , and so on, with L = ln(R 2 /ρ). The LL functiong 1 receives contributions from soft-collinear emissions from the parton initiating the jet and depends on its colour Casimir scalar. The NLL functiong 2 receives contributions from various sources: (a) hard-collinear emissions from the outgoing hard parton, (b) soft wide-angle emissions from all hard partons, (c) starting at O(α 2 s ), NGLs from soft wide-angle correlated secondary emissions, and (d) CLs, when jet algorithms other than anti-k t are implemented for jet reconstruction, from soft wide-angle primary emissions off the hard partons. These again appear starting from O(α 2 s ). The whole functiong 1 and parts ofg 2 , namely contributions (a) and (b) stated above, have been determined in ref. [11] for the anti-k t algorithm. The exact same result also applies for the case of k t and C-A clustering as the effect of jet algorithms first appears at O(α 2 s ). Our task is to determine the other two contributions tog 2 , namely (c) NGLs and (d) CLs, for k t and C-A algorithms. Before doing so, we review in the next section the basic calculations that lead to the determination ofg 1 and contributions (a) and (b) ofg 2 .
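The logarithmic structure described in this paragraph can be summarized schematically as follows (a standard NLL form written here for orientation; it is not quoted verbatim from the paper):

$$
f_{B,\delta}(\rho) \;=\; \exp\Big[\, L\,\tilde{g}_1(\alpha_s L) \;+\; \tilde{g}_2(\alpha_s L) \;+\; \alpha_s\,\tilde{g}_3(\alpha_s L) \;+\; \cdots \Big],
\qquad L = \ln\!\big(R^2/\rho\big),
$$

so that $L\,\tilde{g}_1$ collects the LL terms $\alpha_s^n L^{n+1}$, $\tilde{g}_2$ the NLL terms $\alpha_s^n L^n$, and $\alpha_s\,\tilde{g}_3$ the NNLL terms $\alpha_s^n L^{n-1}$.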
3.1 Fixed-order calculation
In this section we compute the jet mass distribution at leading order in QCD and present the all-orders resummed result. Our calculations are valid in the eikonal approximation and accurate up to NLL accuracy. First, we define the antenna functions relevant for the squared matrix elements for the emission of soft gluons. It is worth noting that these antennae are purely angular functions, i.e., they involve no energy or momentum dependence.
Consider the process of emission of a single soft gluon off the three-hard-legs Born configuration (abj), i.e., the process a+b → j+k 1 shown in figure 2. The corresponding factorised eikonal amplitude squared is given by with ∆ δ = {(ab), (aj), (bj)} denoting the three dipoles formed from the partons in channel δ. The colour factor C iℓ is defined as where T i are the generators of the SU(N c ) group with Casimir scalar given by for quarks (and anti-quarks) and T 2 i = C A = N c for gluons. Conservation of colour implies that for our leading-order process a + b → j + k 1 we have [32]: T a + T b + T j = 0, where the generators are taken as if all partons were incoming. Explicitly written, the colour factors relevant to our dipoles are: The term W R 1,δ is the eikonal amplitude squared for the emission of a real soft gluon in the partonic sub-process δ. The corresponding virtual correction in the eikonal limit is simply W V 1,δ = −W R 1,δ . Notice that we are adopting the notation used in our previous work on eikonal amplitudes for e + e − → di-jet process [32]. In our recent paper [33] we have generalised the latter to the case of hadron collisions, specifically considering three-hard-legs Born processes. The corresponding phase-space factor is given by whereᾱ s = α s /π = g 2 s /4π 2 , g s is the strong coupling, and ξ 1 = k t1 /p t . The running of the coupling is irrelevant at one loop and only becomes important at higher orders. The full resummation that we present later will include running-coupling effects.
Following the procedure of measurement operators (see for instance ref. [34]), we write the jet mass distribution at one loop as where the function Ξ in (k 1 ) ensures that the angular integration region for gluon k 1 is such that it gets clustered to the hard jet when the jet algorithm is applied. At this order all jet algorithms essentially work in the same manner, and Ξ in (k 1 ) is then a simple Heaviside step function; At higher loops, as we shall see, this is not as simple. Substituting the expression of the eikonal amplitude squared (10) into eq. (13) we obtain with the antenna function w 1 iℓ for each emitting dipole in ∆ δ given by . (15c) Note that the upper limit of the k t1 integral is the renormalisation scale µ R = p t , which translates into an upper limit 1 on ξ 1 . In order to perform the angular integrations we introduce the polar variables (r 1 , θ 1 ) such that and make a change of variables in the integration such that dη 1 dφ 1 = 1 2 R 2 dr 2 1 dθ 1 . One may expand the jet mass ̺ 1 defined in eq. (2) as a series in R as follows In fact, at single logarithmic accuracy it suffices to keep just the first term in this expansion, and thus we write the step function in eq. (14) as Θ ξ 1 R 2 r 2 1 − ρ . We now perform the integrations for each dipole.
-The dipole (ab): The contribution of the in-in dipole (ab) to eq. (14) at single logarithmic accuracy may be written as follows with L = ln(R 2 /ρ) being the large logarithm that we aim to resum. This contribution corresponds to soft wide-angle radiation from the in-in dipole into the interior of the measured outgoing jet, and is thus free from collinear logarithms.
For the in-jet dipole (aj) eq. (14) reads Note here that the step function Θ ξ 1 R 2 r 2 1 − ρ which restricts ξ 1 > ρ/(R 2 r 2 1 ) also implies that R 2 r 2 1 > ρ since ξ 1 < 1. This serves as a collinear regulator for the integral over r 1 , which would otherwise diverge, resulting in an overall double logarithm as well as a single logarithm. Evaluating the ξ 1 integration yields We perform the integration over θ 1 by expanding the integrand as a series in R and neglecting higher-order terms that have small coefficients. Thus we find The first term in this expansion corresponds to soft and collinear emissions from the outgoing hard leg (j) into its own jet. It contributes at the double logarithmic level, giving the result which is independent of the jet radius (other than in the argument of the logarithm). The other terms in the expansion (21) are purely soft wide-angle contributions, hence we can set ρ → 0 in the lower limit of integration over r 2 1 , and throw away the sub-leading ln r 2 1 term in the integrand. Performing the integration we obtain (23) We note that the coefficient of R 8 in this expression is vanishingly small (O(10 −7 )).
-The dipole (bj): For the other in-jet dipole (bj) the only differences relative to the dipole (aj) are the colour factor C aj → C bj and a minus sign to be inserted in the exponent of the exponential in the integrand of eq. (19), i.e., exp(R r 1 cos θ 1 ) → exp(−R r 1 cos θ 1 ). This is equivalent to a change R → −R (the rest of the integral is invariant under this change). This actually does not produce any differences in the integration since only even powers of R appear in the results (22) and (23).
We can therefore write the assembled soft-collinear double-logarithmic result as and the soft wide-angle single-logarithmic contribution as with This result was first derived in ref. [11], and it actually exponentiates to all orders. However, the running coupling, whose argument is the invariant transverse momentum κ 2 t1,(iℓ) = k 2 t1 /w 1 iℓ of the emission k 1 with respect to the emitting dipole (iℓ) [35], contributes at higher orders and modifies the single logarithmic contribution f where β 0 is the one-loop coefficient of the QCD β function. Accounting for the running coupling for the double logarithmic contribution f is more subtle. In fact the running coupling introduces additional single logarithmic components which depend on the renormalisation scheme. We discuss the all-orders resummed result in the following subsection.
Resummed global result
The full NLL-resummed global form factor f global B,δ (ρ) has been computed in ref. [11] (eqs. (3.3), (3.11) and appendix C therein). The interested reader is referred to the latter reference, together with ref. [31], for details. Here we only state its form, which is given by [11] f global with γ E the Euler-Mascheroni constant (γ E ≈ 0.577) and Γ denotes the Gamma function. The radiator R and its derivative with respect to L, R ′ , are presented in appendix B. We note that the global form factor is identical for all jet algorithms. We also note that the expression of f global B,δ (ρ) may be deduced from the general form presented in ref. [31] as we show in appendix B.
In the next section we treat the case of two-gluon emission, where clustering and non-global logarithms first appear.
Two-gluon emission
In the eikonal approximation, the factorised squared amplitude for the emission of two real gluons k 1 and k 2 off the three-hard-legs Born configuration is given by [33] where the one-loop amplitude squared W R i,δ , which builds up the reducible part of the above two-gluon squared amplitude (first term on the right-hand side), is given in eq. (10), and the irreducible contribution W RR 12,δ reads The virtual corrections at this order are Following ref. [34], and implementing the measurementoperator method, we write the jet mass distribution at this order as with phase-space factor dΠ 12 Here, the first integral produces the primary-emission contribution, which contains CLs, and the second integral gives NGLs. The functions Ξ p , where p stands for primary, and Ξ NG , where NG stands for non-global, result from the application of the jet algorithm and restrict the angular integration regions for gluons k 1 and k 2 .
Clustering logarithms
In this subsection we focus on the primary-emission integral in eq. (32) and leave the treatment of the correlatedemission NGLs term to the next subsection. The primaryemission contribution may be split into two parts. The first is the global component which results from integrating both gluons within the measured jet region. This has, however, been accounted for by the all-orders resummed formula (28) discussed in the previous section, and will thus be skipped here. The second part is related to the way jet algorithms cluster gluons and results in large single logarithms that are referred to as clustering logarithms [3,4,36]. These logarithms are a result of miscancellation between real emissions and virtual corrections. The key point is that while real gluons may be dragged into/out of the jet by other real gluons and thus get clustered together, virtual gluons can neither drag nor get dragged. We note that CLs are totally absent when jets are reconstructed using the anti-k t algorithm. At two loops, the C-A and k t algorithms produce identical CLs, but they start to differ at higher orders as was shown in ref. [36].
To perform the first integral in eq. (32) we begin by simplifying the clustering function Ξ p (k 1 , k 2 ). To this end, we introduce the same change of variables as in eq. (16) such that (η 1 , φ 1 ) → (r 1 , θ 1 ) and (η 2 , φ 2 ) → (r 2 , θ 2 ). Note that the upper limit of r i is π/ (R | sin θ i |) since we have −π < φ i − ϕ < π. We then have for the k t clustering algorithm [4,36] where the algorithm distances d im have been defined in eq. (3). The first term exactly reproduces half the square of the one-loop result (14), i.e. 1/2! [f (1) B,δ ] 2 , and persists at higher orders as 1/n! [f (1) B,δ ] n . This signifies that the oneloop result simply exponentiates into the global form factor discussed before. It is the second term, the CLs term, that we shall focus on in the remainder of this subsection.
We write the CLs contribution at this order as follows The expressions inside the square brackets are the oneloop eikonal amplitudes squared (10) for gluons k 1 and k 2 , respectively. To single logarithmic accuracy the ξ integrations factor out from the rest of the integrals yielding the result L 2 /2!, and we are left with where where the first term represents contributions from independent dipoles, that is, each dipole consecutively emits softer gluons at each order independently of the other dipoles. This situation is analogous to that in e + e − annihilation to di-jet process (see for instance ref. [36]). The second term in eq. (36) represents contributions arising from the interference of dipoles in channel δ.
To carry out the integrations we expand the integrand as a power series in R and use the change of variable $\theta_1 - \theta_2 \to \theta_1$ for the angular integrations. We obtain the following result for the dipole-interference part. Notice that the interference term $F_{2,\mathrm{int}}$ is not symmetric under the interchange of the dipoles (aj) and (ab), or (bj) and (ab), as opposed to the dipoles (aj) and (bj). This stems from the fact that integrands such as $w^{1}_{aj} w^{2}_{ab}$ and $w^{2}_{aj} w^{1}_{ab}$ are not identical, though symmetric under $(r_1, \theta_1) \leftrightarrow (r_2, \theta_2)$. Since the angular restrictions on $k_1$ and $k_2$ are not identical, the results one obtains for the two mentioned terms are different. This boils down to the effect of the $k_t$ algorithm, which does not treat the two gluons symmetrically. Furthermore, independent and interference terms involving the in-in (ab) dipole vanish in the limit $R \to 0$.
Substituting the results (37) back into eq. (36) we obtain the corresponding CLs coefficients for each channel.
They read
for channel $q\bar{q} \to g + X$, for channel $qg \to q + X$, and for $gg \to g + H$. We show in figure 3 a plot of the CLs coefficient $\frac{1}{2!}F_2^{\delta}$ as a function of R for the various channels δ. We notice that gluon-initiated jets have a larger CLs coefficient than quark-initiated jets, mainly due to the corresponding colour factors ($C_A = 3$ and $C_F = 4/3$, respectively). These series expansions in R converge, and at small values of R it suffices to keep only the leading terms. At very small values of R we observe that the CLs coefficient reduces to a simple limiting form. This result for CLs obtained here for H/W/Z/γ + jet events at hadron colliders coincides with that found in refs. [10,36,37] for the jet mass distribution in $e^+e^- \to$ di-jet events. It does, however, deviate from it as R increases due to initial-state radiation from the incoming partons. In line with the findings of refs. [4,36] we expect the term (35) to simply exponentiate to all orders. Nonetheless, there will be new CLs terms at each order that are not captured by the latter exponential and that are highly non-trivial to deduce (see ref. [36]). Moreover, we expect that at higher orders the small-R limit of the CLs coefficient in H/V + jet events at the LHC will coincide with that in $e^+e^- \to$ di-jet events found in ref. [36].
Fig. 3. Two-loops CLs coefficient as a function of jet radius R in the $k_t$ and C-A algorithms for the three channels.
In the next subsection we compute NGLs at two loops.
k t and C-A clustering algorithms
We now turn to the evaluation of the correlated secondaryemission contribution in eq. (32) for the k t and C-A clustering algorithms. To this end we write where the clustering function reads As before, the integration over ξ 1 and ξ 2 yields L 2 /2!, and we may write with NGLs coefficient Performing the integration, as in the previous subsection, we obtain the results for G (iℓ) 2 for each dipole as a series in R In terms of channels we have for channel qq → g + X, for channel qg → q + X, and for gg → g + H. Moreover, in the small-R limit we observe that The result for channel δ 2 is exactly the same small-R limit found in the case of jet shapes in e + e − → di-jet events (see for instance ref. [37]). Results for channels δ 1 and δ 3 in the limit R → 0 are also the same and differ from those for channel δ 2 only in the colour factor. In figure 4 we plot the NGLs coefficient 1 2! G δ 2 at this order as a function of jet radius R. Once again we notice that gluon-initiated jets have larger NGLs coefficient due to their large gluonemission colour factor (C A ). We observe from the plots in figures 3 and 4 that the CLs coefficient for the gg → g channel grows larger with R while that for NGLs does not change much.
Moreover, in order to assess the overall impact of CLs and NGLs at this order, we plot in figure 5 the combined coefficient of the single logarithm $\bar{\alpha}_s^2 L^2$ resulting from the non-global nature of our observable. We note that at large jet radii ($R \gtrsim 1.0$) and for all partonic channels the CLs and NGLs tend to balance each other out, though not entirely. For small jet radii the said single-logarithmic CLs + NGLs coefficient is quite large in magnitude, especially for gluon-initiated jets.
Anti-k t clustering algorithm
For the sake of assessing the effect of clustering on NGLs, we report the results for the NGLs coefficient in the anti- k t algorithm. Note that there are no CLs in this case. The corresponding integral is identical to that in eq. (40) except for the clustering function. It reads The results we obtain for each dipole are In terms of channels we have G δ1,akt These results are in agreement with those reported in ref. [11]. Notice again that the R → 0 limit of the above expressions produces a result (which is proportional to 1.645 = ζ 2 ) that is identical to that reported in ref. [37] for e + e − → di-jet process. We plot in figure 6 the NGLs coefficient 1 2! G δ,akt 2 with anti-k t -clustered jets as a function of the jet radius R for the various partonic channels. As is clearly evident from the plots, NGLs in the anti-k t algorithm are much larger compared to those in the C-A or k t clustering case. This is made clearer in figure 7 where NGLs coefficients for each dipole are plotted for both k t and anti-k t algorithms. This observation was also made in previous studies of NGLs with k t clustering [4,36,38]. While k t clustering induces another tower of large single logarithms, namely CLs, it actually diminishes the impact of NGLs. Additionally, as we observed in the previous subsection 4.2.1, the induced CLs play a role of further reducing NGLs since their coefficients have opposite signs. This may hint at a (phenomenological) favour for the k t (or C-A) clustering algorithm over the anti-k t algorithm.
All-orders treatment of CLs and NGLs
Including the resummation of NGLs and CLs together with the global form factor (28) then the all-orders NLLresummed jet mass distribution may be cast into where S δ (ρ) and C δ (ρ) account for the resummation of NGLs and CLs, respectively. We note that, unlike the global form factor, the factors S δ and C δ are algorithmdependent.
In the anti-k t algorithm, the NGLs form factor results from multiple correlated gluons outside the jet that coherently emit the softest gluon into the jet. For the k t and C-A clustering algorithms, gluons can be moved into and out of the jet by the clustering, thus NGLs can be induced when more than one gluon is emitted within the jet region from an ensemble of harder gluons. The NGLs factor S δ can be computed numerically and in general only in the large-N c limit [1,15]. For the e + e − → di-jet process, finite-N c results do exist though [12,13]. Moreover, the CLs form factor results from multiple independent (primary) emissions that are clustered by the k t or C-A algorithm. Just like NGLs, the latter CLs can also be resummed numerically.
For the anti-$k_t$ algorithm, the all-orders numerical resummation of NGLs may be obtained from the dipole-evolution Monte Carlo code of ref. [1], as reported in ref. [11] for the various dipoles. We see from figure 8 that the exponential of the two-loops result (46) approximates very well the all-orders numerical result for the NGLs factor in the Z + jet process. The same is observed for the other processes. Hence we shall confine ourselves to simply using the exponential of the two-loops result for the $k_t$ and C-A algorithms. To this end we write, for a given channel δ, the NGLs factor as an exponential in the evolution parameter t, where $G_2^{\delta}$, for the $k_t$ and C-A algorithms, is given in eq. (44), and the evolution parameter t is defined in terms of the running coupling. Note that at fixed order t reduces to just $\bar{\alpha}_s L$.
Fig. 8. The full resummed differential jet mass distribution in the anti-$k_t$ jet algorithm with the NGLs factor taken as an exponential of the two-loops result and as an all-orders numerical result obtained from ref. [11]. We explain in the next section how these plots are obtained.
As for CLs, it was shown in refs. [4,36] that the perturbative CLs series exhibits a pattern of exponentiation, and that the exponential of the two-loops result is a very good approximation to the numerically-resummed CLs factor obtained from the code of ref. [1]. Therefore, and just as we did with NGLs, we shall be using the exponential of the two-loops result for the CLs resummed factor C δ (ρ). Thence where F δ 2 is given for k t and C-A algorithms in eq. (38).
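As a numerical illustration of the exponentiation just described, the sketch below evaluates the NGLs and CLs factors as exponentials of generic two-loop coefficients, using the fixed-coupling limit $t \simeq \bar{\alpha}_s L$ quoted in the text instead of the full running-coupling definition of t. The coefficient values and the coupling are placeholders; in practice the channel- and algorithm-dependent coefficients of eqs. (38) and (44) would be used.

```python
import numpy as np

def t_fixed_coupling(rho, R=0.6, alpha_s=0.12):
    # Fixed-coupling limit of the evolution parameter: t ~ (alpha_s / pi) * ln(R^2 / rho).
    return (alpha_s / np.pi) * np.log(R ** 2 / rho)

def two_loop_exponential(rho, coeff, **kwargs):
    # Exponential of the two-loop result, exp(coeff * t^2 / 2!), used here as a stand-in
    # for both the NGLs factor S_delta (coeff = G2) and the CLs factor C_delta (coeff = F2).
    t = t_fixed_coupling(rho, **kwargs)
    return np.exp(0.5 * coeff * t ** 2)

rho = np.logspace(-4, -0.5, 8)
S = two_loop_exponential(rho, coeff=-15.0)   # placeholder G2 (NGLs typically suppress)
C = two_loop_exponential(rho, coeff=+5.0)    # placeholder F2 (opposite sign per the text)
print(np.round(S * C, 3))
```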
Comparison to Pythia 8 and Herwig 7 parton showers
In this section we present comparisons of our results for the jet mass distribution with those obtained from the Pythia 8 [20] and Herwig 7 [21,22] parton showers (PS), where the jets are clustered with FastJet [39]. The resummed result is obtained by convoluting $d\Sigma_\delta/dB_\delta$, given in eq. (6), with parton distribution functions (we use MSTW 2008 (NLO) PDFs [40] and $\mu_F = \mu_R = 200$ GeV). For double-checking, we perform the convolution using two different methods. In one method we simply use a Monte Carlo code to integrate over the momentum fractions of the partons $x_a$ and $x_b$ and over the transverse momentum $p_t$ and rapidity y of the jet, as explained in detail in appendix A.
In the other approach we generate a set of unweighted parton-level Born events using MadEvent from MadGraph [41,42] in the "Les Houches Event File" format [43], with the cuts Ξ B being applied. We then weigh each event by the resummed form factor S δ (ρ) C δ (ρ) f global B,δ (ρ), sum over all events, and divide by the effective luminosity L = N tot /σ 0 , with N tot the total number of events and σ 0 the Born cross-section calculated with MadGraph. This results in the integrated distribution given in eq. (5) from which the differential distribution can straightforwardly be obtained. To avoid low-p t resummation we impose a cut on p t of the final-state jet, e.g., p t > 200 GeV, i.e., we only consider high-p t jets, at a centre-of-mass energy √ s = 7 TeV.
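A bare-bones version of this second (event-weighting) approach could look as follows; reading the Les Houches file and evaluating the resummed form factor are left as stand-ins, and all names are illustrative rather than the code actually used.

```python
import numpy as np

def integrated_distribution(events, rho_values, form_factor, sigma_born):
    """Integrated jet-mass distribution Sigma(rho) from unweighted Born events.

    events      : list of dicts with the Born kinematics of each event,
                  e.g. {"pt": ..., "y": ..., "channel": ...} (LHE parsing not shown).
    form_factor : callable (rho, event) -> S_delta * C_delta * f_global, the
                  resummed weight attached to each event.
    sigma_born  : Born cross-section (e.g. from MadGraph), giving the effective
                  luminosity L = N_tot / sigma_born.
    """
    lumi = len(events) / sigma_born
    sigma = np.zeros(len(rho_values))
    for ev in events:
        for i, rho in enumerate(rho_values):
            sigma[i] += form_factor(rho, ev)
    return sigma / lumi
```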
In our resummed result we also include an approximation to the NLO effects on the distribution through the NLO factor C^(1)_{B,δ}(ρ). The full NLO distribution may ideally be calculated analytically using the full squared amplitude with two partons in the final state as well as the virtual corrections to the Born cross-section. Though possible in principle, this is a delicate task. The alternative numerical approach would be to exploit fixed-order programs and obtain the factor C^(1)_{B,δ}(ρ) as a fully differential distribution in the Born configuration, and then perform the integration, including the resummed form factor, over the Born kinematics. In practice this is not feasible. Instead, one can obtain an NLO factor C^(1)_{B,δ}(ρ) that is averaged over the Born configuration [44] and insert it in eq. (6) as if it were unintegrated over dB_δ. In this paper we employ this method and estimate the Born-configuration-averaged factor C^(1)_δ(ρ) as was done in refs. [11,44], using the NLO jet mass distribution obtained from the fixed-order program MCFM [45,46].
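A hedged sketch of how such a Born-averaged constant can be extracted numerically is given below: in the small-ρ region one subtracts the O(α_s) expansion of the resummed result from the fixed-order distribution and normalises to the Born cross-section. The precise definition used in refs. [11,44] may differ in detail; this is only an illustration of the logic.

```latex
% Schematic extraction of the Born-averaged NLO constant (assumed conventions):
\begin{equation}
  C^{(1)}_{\delta}(\rho) \;\simeq\;
  \frac{\Sigma^{\rm NLO}_{\delta}(\rho)\;-\;\Sigma^{\rm NLL,\,\alpha_s}_{\delta}(\rho)}
       {\sigma_{0,\delta}}\,,
  \qquad \rho \ll 1 ,
\end{equation}
% where \Sigma^{NLL,\alpha_s} denotes the O(\alpha_s) term of the expanded
% resummed distribution and \Sigma^{NLO} is obtained from MCFM.
```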
In refs. [11,44], the NLO factor C^(1)_δ(ρ) was calculated in the small-ρ limit as a constant, and the ρ-dependence of the NLO contribution to the jet mass distribution was then included at the stage of matching. This is equivalent to using the full ρ-dependence of C^(1)_δ(ρ). (At NLO, new channels open up, specifically processes with incoming qq′ or two gluons, that are not present at the Born level; these channels are not logarithmically enhanced and only contribute a small correction to the distribution.) In figure 9 we show plots of the differential jet mass distribution 1/σ dΣ/d√ρ, where Σ(ρ) is defined in eq. (5), in Z + jet events at the LHC with k_t clustering. We choose two values of the jet radius: one for which the size of NGLs + CLs is expected to be small, R = 1.0, and another where NGLs + CLs are expected to be important, e.g., R = 0.6. The global and pure-resummed distributions are normalised to the Born cross-section, while the resummed + C^(1), Pythia 8, and Herwig 7 distributions are normalised to the total cross-section.
We observe from the R = 1.0 plot in figure 9 that the global and full-resummed distributions are quite close to the Pythia 8 PS result, indicating the smallness of the effect of NGLs and CLs factors in this case. For the R = 0.6 plot, there is a clear difference between the global and Pythia 8 PS curves, and our full resummation, which is based on the exponential of the two-loops NGLs and CLs result, seems to do better. We also note that the NLO term C (1) slightly modifies the peak and tail of the distribution especially for R = 0.6, bringing it even closer to the Pythia 8 PS result.
As is clear from the plots, the Pythia 8 PS result seems to be in better agreement with our resummed distribution near the peak than Herwig 7. This observation was also made in ref. [11]. It should be noted, however, that a more comprehensive comparison is feasible only when one includes non-perturbative effects, where different event generators are then expected to be in agreement. We do this in the next section.
In figure 10 we plot the same distribution employing the C-A algorithm. We recall that up to two loops both the k_t and C-A algorithms produce identical results. This means that the resummed formula, which includes the exponential of the two-loops NGLs and CLs as well as the C^(1)_δ terms for all channels, is the same in both algorithms. We expect, however, differences between the two cases when one performs an all-orders NGLs and CLs resummation, and also when one includes the higher-order C^(n) terms. We compare, in figure 10, the resummed + C^(1) result with the Pythia 8 PS result employing both the k_t and C-A algorithms, where we notice that the peak of the distribution is slightly higher in the latter algorithm. We additionally show in figure 11 the differential jet mass distribution in the process gg → Hg for jet radii R = 1.0 and R = 0.6. Notice from figures 5 and 6 that, for this channel, the combined effect of NGLs and CLs at two loops in the k_t algorithm is small compared to that of NGLs with anti-k_t clustering, and so we expect our resummed distribution to fit the PS result well in the case of k_t clustering. This is indeed the case, as is clear from figure 11, particularly for the Pythia 8 PS.
Finally, in figure 12, we plot the resummed differential jet mass distribution in the processes W + jet and γ + jet at the LHC with k t clustering and R = 1.0. Our results are in general in good agreement with Pythia 8 results particularly near the peak of the differential distribution. The discrepancy between the results of Pythia 8 and Herwig 7 may be lifted when nonperturbative effects are included, as we do in the next section.
Matching to fixed order
Before we end this section we discuss the matching of the resummed result with the NLO fixed-order distribution. In fact, including the constant term C (1) δ (ρ) in eq. (6), the expansion of the resummed distribution now agrees with the fixed-order result over the entire range of ρ, except for the small correction due to the missing channels at the Born level (specifically the channel with incoming gg).
Additionally, as was shown in ref. [11], the NLO distribution has a kinematical end point at ρ_max = tan²(R/2), which the resummed distribution does not have. In order to match the resummed distribution to the NLO result, specifically at the end point, we introduce a change of the large logarithm [10] such that the modified logarithm L′ vanishes when ρ → ρ_max and L′ → L when ρ → 0. We then use the simple matching formula in which both Σ_NLL and Σ_NLL,αs include the C^(1) term. The subtracted term Σ_NLL,αs cancels both the large logarithms and the C^(1) terms in Σ_NLO(ρ), leaving only corrections due to the channels missing at the Born level. We show in figure 13 a plot of the matched differential jet mass distribution compared to the fixed-order result from MCFM for the Z + jet process at the LHC. In this plot the resummed curve is plotted with the standard definition of the large logarithm (L = ln(R²/ρ)), and thus does not possess the end-point behaviour, while the matched curve does have an end point exactly as in the MCFM curve. We note from this figure that the matched curve coincides with the resummed curve at small ρ, indicating a perfect cancellation of the large logarithms between the expanded result Σ_NLL,αs and the fixed-order MCFM result Σ_NLO.
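For concreteness, a commonly used form of the end-point-aware logarithm and of the additive matching is sketched below. The exact variant adopted in the text (following ref. [10]) may differ, so this should be read as an assumption-laden illustration rather than the paper's formula.

```latex
% One standard choice of modified logarithm and additive matching (assumption):
\begin{align}
  L' &= \ln\!\left(\frac{R^{2}}{\rho} - \frac{R^{2}}{\rho_{\max}} + 1\right),
  \qquad
  L' \xrightarrow{\;\rho\to\rho_{\max}\;} 0, \qquad
  L' \xrightarrow{\;\rho\to 0\;} L ,\\[2pt]
  \Sigma^{\rm matched}(\rho) &= \Sigma^{\rm NLL}(L')
      \;+\; \Sigma^{\rm NLO}(\rho)\;-\;\Sigma^{\rm NLL,\,\alpha_s}(L') .
\end{align}
```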
Comparison to CMS data
In order to compare our results with the experimental data we first need to account for non-perturbative effects from hadronisation corrections and the underlying event. One commonly used numerical approach to extract these corrections is to compute the ratio of the results obtained from Monte Carlo event generators with non-perturbative effects switched on and off. In this paper we include these corrections analytically by considering the mean value of the change in the jet mass, δm²_j, due to these non-perturbative effects. This change was computed in ref. [47]. Here µ_I is an arbitrary matching scale (chosen to be of the order of a few GeV) and α_0 is the coupling averaged over the non-perturbative low-k_t region, α_0 = (1/µ_I) ∫_0^{µ_I} α_s(k_t) dk_t. The quantity A(µ_I) is rescaled by the so-called Milan factor (M = 1.49 for anti-k_t clustering and M = 1.01 for k_t clustering [48]) to account for gluon decay. The constant K is defined in the appendix.
Non-perturbative effects are dominated by the contributions of the dipoles involving the outgoing jet, which scale like O(R) and account for hadronisation corrections, while the smaller O(R⁴) contributions from the incoming legs account for the underlying event. Since the mean value of δm²_j depends on both the Born channel and the kinematics, we perform the shift of the jet mass on an event-by-event basis; that is, we make the change m²_j → m²_j − δm²_j in the resummed form factor and then perform the convolution. Furthermore, we shift the terms C^(1)(ρ) accordingly.
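The event-by-event bookkeeping of this shift can be illustrated with the following Python sketch. The functional form and numerical size of δm²_j are placeholders (assumptions); only the replacement m²_j → m²_j − δm²_j inside the form factor is meant to be representative of the procedure described above.

```python
import numpy as np

def delta_m2(pt, R, lam_hadr=2.0, lam_ue=4.0):
    """Schematic non-perturbative shift of the squared jet mass (GeV^2):
    an O(R) hadronisation piece plus an O(R^4) underlying-event piece.
    The coefficients lam_hadr and lam_ue are placeholders, not fitted values."""
    return lam_hadr * pt * R + lam_ue * pt * R ** 4

def shifted_form_factor(m2, pt, R, resummed):
    """Evaluate the resummed form factor at the shifted jet mass m2 - delta_m2."""
    m2_pert = m2 - delta_m2(pt, R)
    if m2_pert <= 0.0:
        return 0.0                      # below the non-perturbative cut-off
    rho = m2_pert / pt ** 2
    return resummed(rho)

# usage sketch with a toy resummed factor
resummed = lambda rho: np.exp(-0.1 * np.log(1.0 / rho) ** 2)
print(shifted_form_factor(m2=45.0 ** 2, pt=350.0, R=0.7, resummed=resummed))
```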
We compare, in figure 14, the NLL+NLO resummed result (with the C^(1) term), including non-perturbative corrections, with experimental data from the CMS collaboration [19,49] (obtained with an integrated luminosity L = 5 fb⁻¹), in the Z + jet process at the LHC with anti-k_t clustering and R = 0.7. We also include in this figure the Monte Carlo results obtained from interfacing MadGraph with Pythia 8 [50,51] and Herwig 7, including hadronisation corrections and the underlying event. The plots in this figure are for the un-normalised jet mass variable m_j rather than the normalised one, √ρ = m_j/p_t. In this figure we have 300 GeV < p_t < 450 GeV. CTEQ6L parton distribution functions [52] have been used both in the convolution and in the MadGraph/Pythia 8/Herwig 7/MCFM results. For the best fit we choose µ_I = 3.5 GeV. This plot shows good agreement between the data and the resummed prediction over the entire range of the jet mass, as well as with the Monte Carlo simulation. (Figure 14: resummed differential m_j distribution with anti-k_t clustering and R = 0.7 in Z(→ ℓ⁺ℓ⁻) + jet events, with ℓ = e, µ, compared to experimental data from CMS [19] and to MadGraph+Pythia 8 and Herwig 7 results. The experimental data are taken from ref. [49].)
We note that the NLL+NLO+NP curve is cut off at around 40 GeV, which is just a manifestation of the shift of the resummed distribution to the right, as explained above. The value of the NLL+NLO+NP distribution at say m j = 40 GeV is related to the value of the resummed distribution at m 2 j − δm 2 j , with δm 2 j (see eq. (56)) varying from 20 GeV to around 40 GeV depending on the p t of the jet and partonic channel. Hence, we have no result for the non-perturbative distribution below this value (∼ 40 GeV) of the jet mass. Additionally, due to the Landau-pole singularity at small values of the jet mass, the distribution is unreliable in the region to the left of the Sudakov peak.
Conclusions
In this paper we have presented state-of-the-art detailed fixed-order calculations as well as all-orders estimates of distributions of important observables at the LHC. Specifically we have considered a typical jet-shape observable that has been studied quite substantially in the literature, namely the invariant jet mass. It is a member of a large class of observables known as non-global observables, that have so far proven to be quite delicate to treat. The subtleties in the analysis of such observables stem from the fact that they are defined for a restricted phase-space region. This is unlike global observables which are defined over the whole phase space. The former non-global observables receive contributions that are totally absent for their global counterparts. These contributions appear at each higher order in perturbation theory and have so far shown no pattern of iteration.
We have extended the work of ref. [11] in various directions: (a) we have implemented two clustering algorithms, k_t and C-A, instead of just the anti-k_t algorithm considered in that reference (computations in the k_t and C-A algorithms are generally much more difficult to handle than in the anti-k_t case); (b) we have computed CLs, which are completely absent for anti-k_t and thus not treated in [11]; (c) we have investigated the jet mass distribution in various processes, namely W/Z/γ/H + one jet, while only Z + jet was considered in the said reference; and (d) we have provided analytical expressions for our results in the form of power-series expansions in the jet radius R. As the experimental data [19] for the jet mass distribution in Z + jet events at the LHC with anti-k_t clustering became available only after the publication of ref. [11], we have made the comparison of the resummed result with these experimental data herein.
We have confirmed previous results that were arrived at in studies of e + e − annihilation processes. These include, for instance, the observation that NGLs are decreased by the application of jet clusterings other than anti-k t . In other words, NGLs are more significant when anti-k t is used. This may hint at the advantage of using other jet clustering algorithms in order to bypass the difficulties posed by NGLs. Additionally, we showed that in the limit of very small jet-radius parameter the NGLs and CLs at hadron colliders coincide with those at e + e − colliders. Moreover, we have been able to identify new features that are not present in the simple e + e − annihilation case such as the significance of initial-state radiation and its impact on the jet mass distribution. The jet mass provides a tool to discriminate gluon and quark-initiated jets as their corresponding jet mass distributions were shown to be quite different.
It is worth, as a continuation to this project, investigating other crucial hadronic processes at the LHC such as di-jet production. The latter represents an important background for numerous potential new physics signals. Another issue that is also worth tackling is performing calculations beyond two-gluon emission. This will provide a deeper insight into the nature of QCD hadronic processes that have not been fully understood so far.
where τ = 7.65 and F = 0.09. The total Born cross-section σ_0 is simply the integral of dσ_{0,δ}/dB_δ (including the kinematical-cuts function Ξ_B) over B_δ, summed over the possible channels δ, where f_i denotes the parton density function for the corresponding incoming parton (i), evaluated at a factorisation scale µ_F. Substituting the Mandelstam variables into the delta function in the integrand, the delta function can be used to perform the integration over one of the momentum fractions, say x_b. Since x_b > 0, then x_a > e^y p_t/√s and y < ln(√s/p_t). Additionally, since x_b < 1, then y > −ln(√s/p_t) and x_a > (e^y p_t/√s + M²/s)/(1 − e^{−y} p_t/√s).
The latter inequality supersedes x_a > e^y p_t/√s, and furthermore, since x_a < 1, we deduce a corresponding bound on the rapidity, which also supersedes the condition |y| < ln(√s/p_t). We perform the integration over p_t, y and x_a, either in the Born cross-section or in the jet mass distribution, numerically via a Monte Carlo method.
B Resummed Global form factor
The Sudakov global form factor that resums global logarithms is given in eq. (28). The radiator R is composed of contributions from double-logarithmic soft-collinear and single-logarithmic hard-collinear emissions from the outgoing leg (j), and from single-logarithmic soft wide-angle emissions from all legs. The leading-order soft wide-angle contribution that we calculated in section 3.1 simply exponentiates to all orders. Additionally, the non-global and clustering corrections appear as a factorised part that multiplies the resummed global form factor. The remaining soft-collinear and hard-collinear contributions may be obtained using the general formalism of resummation introduced in ref. [31] as we show below.
First we write the definition of the observable ϱ in terms of the transverse momentum k_t^(ℓ), rapidity η^(ℓ), and azimuth φ^(ℓ) of a single soft-collinear emission with respect to the direction of the hard leg (ℓ). Emissions that are collinear to the incoming legs (a) and (b) do not end up inside the jet, so they do not contribute to its mass; hence ϱ^(a) = ϱ^(b) = 0. For emissions that are collinear to the outgoing leg (j), we introduce a coordinate rotation that takes the momentum of leg (j) to the z axis, in which frame the jet momentum becomes p_j^(j) = p_t cosh y (1, 0, 0, 1) (72a). The jet mass observable, being invariant under rotations, can then be evaluated in this frame. Comparing the definition of the normalised invariant jet mass (73) to the general parametrisation of observables from ref. [31], where Q = p_t is the hard scale, we see that a_j = b_j = g_j = 1 and d_j = 2 cosh y.
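For the reader's convenience, the soft-collinear parametrisation that this comparison refers to can be written out explicitly; this reconstruction assumes the standard form of ref. [31] together with the rotated-frame kinematics quoted above.

```latex
% Standard parametrisation of ref. [31] for a single soft emission k collinear
% to leg (j), with hard scale Q = p_t (assumed form):
\begin{equation}
  V^{(j)}(k) \;=\; d_j\, g_j(\phi)
     \left(\frac{k_t^{(j)}}{p_t}\right)^{\!a_j} e^{-b_j\,\eta^{(j)}} .
\end{equation}
% With p_j = p_t\cosh y\,(1,0,0,1) in the rotated frame one finds
\begin{equation}
  \varrho^{(j)} \;=\; \frac{(p_j+k)^2}{p_t^2}
  \;\simeq\; \frac{2\,p_j\cdot k}{p_t^2}
  \;=\; 2\cosh y\,\frac{k_t^{(j)}}{p_t}\, e^{-\eta^{(j)}} ,
\end{equation}
% i.e. a_j = b_j = 1, g_j(\phi) = 1 and d_j = 2\cosh y, as stated in the text.
```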
Employing the master formula for resummation (eq. (3.6) of ref. [31]) we obtain the expression of the radiator R_δ(ρ), for a given Born channel δ, in the MS-bar renormalisation scheme: R_δ(ρ) = C_j [L g_1(α_s L) + g_2(α_s L) + g_{2,coll}(α_s L)] + g_{2,wide}(α_s L) [C_ab R²/2 + (C_aj + C_bj) h(R)], (75) with h(R) given in eq. (26). Here C_j is the colour factor associated with leg j, C_j = C_F for an outgoing (anti-)quark jet and C_j = C_A for an outgoing gluon jet, and C_iℓ is the colour factor for the dipole (iℓ) introduced in the main text. The functions appearing above depend on λ = α_s(R p_t) β_0 ln(R²/ρ). The factor B_j accounts for corrections due to hard-collinear emissions off the outgoing jet j, with T_R the Dynkin index (normalisation constant) for the SU(N_c) generators, T_R = 1/2, and n_f = 5 the number of active quark flavours. Additionally we have K = C_A (67/18 − π²/6) − (5/9) n_f and β_0 = (11 C_A − 2 n_f)/(12π). (78) In the master formula we excluded the single-logarithmic soft wide-angle term, referred to in ref. [31] as ln S(T), and calculated it manually in section 3.1; it appears as the last term in the radiator (75). The derivative of the radiator R with respect to L, relevant for the expression (28), follows directly.
C Fixed-order expansion
For the sake of matching we need the fixed-order expansion of the resummed form factor (49). We can cast the latter in the form dΣ_δ(ρ)/dB_δ = (dσ_{0,δ}/dB_δ) C_δ(ρ) exp[Σ_{n,m} G_{nm} ᾱ_s^n L^m], where the expansion coefficients in the exponent, G_nm, up to O(ᾱ_s²), are as follows.
| 13,675.2 | 2021-04-22T00:00:00.000 | [ "Physics" ] |
Distributed Adaptive Synchronization for Complex Dynamical Networks with Uncertain Nonlinear Neutral-Type Coupling
Distributed adaptive synchronization control for complex dynamical networks with nonlinear derivative coupling is proposed. The distributed adaptive strategies are constructed from the directed connections among nodes. By means of parameter separation, the nonlinear functions can be transformed into a form that is linear in the unknown parameters. Effective distributed adaptive techniques are then designed to eliminate the effect of the time-varying parameters and to make the considered network synchronize to a given trajectory in the sense of the square error norm. Furthermore, the coupling matrix is not assumed to be symmetric or irreducible. An example shows the applicability and feasibility of the approach.
Introduction
A complex network is a large set of interconnected nodes, where the nodes represent individuals in the graph and the edges represent the connections among them; examples include the climate system [1], biological neural networks [2], and the human brain system [3]. Many natural and man-made systems can be successfully modeled and characterized by complex networks [4]. Such systems may exhibit uncertainties, time delays, nonlinearity, neutral properties, hybrid dynamics, distributed dynamics, and chaotic dynamics.
Synchronization phenomena have been found in different forms in complex networks, such as fireflies flashing in the forest, the beating of hearts, and the routing of messages on the internet. Thus, synchronization is one of the meaningful issues in the study of the dynamical characteristics of complex dynamical networks. A considerable number of papers on this topic have appeared (see [5][6][7] and references therein). Recently, various control techniques have been reported to achieve network synchronization (see [4,[8][9][10][11][12][13][14][15][16] and references therein). Some control schemes [8][9][10][11][12][13] were based on a solution of the homogeneous system, in which it may be difficult to obtain the state information of an isolated node.
Consequently, utilizing information from the neighborhood of each node to realize network synchronization is more reasonable. Paper [17] introduced the concept of a control topology to describe the whole controller structure. In [18], based on local information of the node dynamics, an effective distributed adaptive strategy was designed to tune the coupling weights of a network. A considerable number of controlled synchronization techniques have been derived for complex dynamical networks based on the assumption that the coupled nodes of the CDN have the same dynamics (see the above papers and the references therein). In reality, complex networks are more likely to have nodes with different dynamics. For example, in a multi-robot system, the robots can have distinct dynamic structures or different parameters. Recently, special attention has been focused on the synchronization of complex dynamical networks with nonidentical nodes [19][20][21][22]. Paper [20] investigated the synchronization problem of a complex network with nonidentical nodes via open-loop controllers. Paper [22] considered nonlinearly coupled networks with non-identical nodes and designed pinning control to obtain synchronization criteria. On the other hand, the structures of many real-world network systems change over time and contain unknown parameters. Very recently, some papers studied complex networks with unknown time-varying coupling strength [23][24][25][26]. In these results, non-identical nodes were not considered. Only in [21] was the time-varying complex network with non-identical nodes investigated, and a criterion of globally bounded synchronization of the maximum state deviation between nodes was developed.
In other respects, new complex network models have been proposed to reflect the complexity arising from the network structure. Thus, the problem of neutral-type couplings has also been widely investigated [27][28][29][30][31]. However, in the above studies, only linear derivative coupling is considered. More recently, [32] studied synchronization in a class of dynamical networks with distributed delays and nonlinear derivative coupling. Considering the preceding discussion, a complex dynamical network with non-identical nodes, nonlinear derivative coupling, and time-varying coupling strength has not yet been addressed.
Inspired by the aforementioned results, the problem of adaptive synchronization is studied for complex dynamical networks with non-identical nodes, nonlinear derivative couplings, and unknown time-varying coupling strength. A prominent feature of this network is that its complexity originates not only from the nonlinear dynamics of the nodes, but also from the complex coupling strength. The difficulty in dealing with the nonlinear derivative couplings with unknown time-varying parameters is resolved by using the parameter separation method. The distributed adaptive learning laws of the periodically time-varying and constant parameters and the distributed adaptive controllers are constructed to guarantee that the system is asymptotically stable and that all closed-loop signals are bounded.
The remainder of the paper is organized as follows. The problem statement and preliminaries are given in Section 2. Section 3 gives the main results and proofs. In Section 4, an illustrative example is provided to verify the theoretical results. Finally, the conclusion is given.
Define the synchronization error as the deviation of each node state from s(t), where s(t) is a solution of the dynamics of the isolated node to which all node states are expected to synchronize. Then, network (1) is said to achieve asymptotic synchronization in the sense of the square error norm, and system (1) is said to be globally asymptotically synchronized onto the state s(t). To achieve the control objective in (3) and (4), we need an adaptive control strategy for the nodes in network (1). The controlled network is then given by (5), where u_i(t), i = 1, 2, ..., N, are the adaptive controllers to be designed. Since the row sum of the coupling matrix is zero, the coupling and derivative-coupling terms vanish when evaluated at the synchronization state, which yields the dynamical error equations (6). Our objective is to design a distributed adaptive control law u_i(t) so as to obtain the convergence of the synchronization errors.
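Because several symbols were lost in the source, a schematic version of a nonlinear neutral-type coupled network and its error definition is written out below; the notation x_i, s, u_i, a_ij, b_ij, Γ, Γ̃, f_i is introduced here purely for illustration and need not coincide with the paper's equations (1)-(6).

```latex
% Schematic controlled network with nonlinear (neutral-type) derivative coupling:
\begin{align}
  \dot{x}_i(t) &= f_i\bigl(x_i(t),t\bigr)
      + c(t)\sum_{j=1}^{N} a_{ij}\,\Gamma\bigl(x_j(t)\bigr)
      + d(t)\sum_{j=1}^{N} b_{ij}\,\tilde{\Gamma}\bigl(\dot{x}_j(t)\bigr)
      + u_i(t), \qquad i=1,\dots,N,\\
  e_i(t) &= x_i(t) - s(t), \qquad \dot{s}(t) = f_0\bigl(s(t),t\bigr).
\end{align}
% Zero-row-sum coupling matrices (\sum_j a_{ij} = \sum_j b_{ij} = 0) make the
% coupling terms vanish on the synchronization manifold x_1=\dots=x_N=s, which is
% what produces the error dynamics referred to as eq. (6) in the text.
```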
In order to derive the main results, the following assumptions and lemmas are introduced.
Assumption 4 (see [33]). For the unknown continuous functions appearing in the node dynamics and couplings, a separation-type inequality holds. In this paper, we separate the unknown parameters from the nonlinear function in (8) according to the separation principle in [33]; thus, Assumption 4 is reasonable and easily satisfied. By using Assumption 4, we are able to solve the synchronization problem for a class of nonlinearly parameterized systems with nonlinear derivative couplings.
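One common statement of the separation principle invoked here (in the spirit of ref. [33]; the exact inequality of Assumption 4 is not reproduced in the source, so this is an assumed standard form) is the following.

```latex
% Parameter-separation lemma (a standard form, quoted as an assumption):
% for any continuous function f(x,\theta) there exist nonnegative continuous
% scalar functions a(x) and b(\theta) such that
\begin{equation}
  \bigl| f(x,\theta) \bigr| \;\le\; a(x)\, b(\theta)
  \qquad \text{for all } x,\ \theta .
\end{equation}
% This lets the unknown (possibly time-varying) parameter \theta be factored out
% of the nonlinearity, so adaptive laws need only estimate a scalar bound.
```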
Assumption 6. In the given network (1), the time-varying parameters are unknown periodic functions with a known period.
From Assumption 6, it is easy to see that the corresponding composite functions and each element of the coupling matrix are periodic with the same period. Suppose that each of them can be decomposed into an unknown continuous periodic function with a known period and an unknown constant parameter.
Assumption 7. In network (1), the inner coupling matrix Γ and the related coupling functions are bounded, where the bounds are positive constants.
Assumption 8. Assume that the state and the state derivative of system (1) are measurable.
Remark 9. This assumption is necessary to design the controller and the adaptive laws. Assumption 8 may seem restrictive; an observer for the state derivative will be considered in future work.
Proof. In fact, if the function were unbounded, then for every bound there would exist a point of the compact interval at which the function exceeds that bound. As the function is continuous, the properties of continuous functions on a compact interval then lead to a contradiction with this assumption. The proof is completed.
Adaptive Synchronization of the Complex Dynamical Networks
In this section, a distributed adaptive controller and distributed adaptive laws are designed to drive the given system to synchronize to the given trajectory, and convergence of the synchronization process is proved.
The constant-parameter distributed update law and the time-varying distributed periodic adaptive learning laws are designed with positive constant gains and with continuous, strictly increasing nonnegative functions that vanish at zero and saturate at their prescribed values. The following theorem gives a sufficient condition for the controlled network (5) to achieve asymptotic synchronization.
Consider the system (6) and the proposed control laws (14)-(16); it can be seen that the right-hand side of (6) is continuous with respect to all of its arguments. According to the existence theorem for differential equations, (6) has a unique solution on an initial subinterval of the period. This guarantees the boundedness of the solution over that subinterval; therefore, we need only focus on the remaining part of the period.
The derivative of V(t) with respect to time is computed first, after introducing some notation. From (6) and (13), the first term on the right-hand side of (19) can be bounded, and applying (14), the third term on the right-hand side of (19) can be bounded as well. Let us focus on the second and fourth terms on the right-hand side of (19). On the considered interval, since the functions entering the learning laws are continuous and strictly increasing, and according to Lemma 11, these terms can also be estimated. Substituting (23)-(27) into (19), we obtain (28); it is obvious that there exist sufficiently large positive constants such that the resulting bound holds. Since the time-varying parameter is continuous and periodic and every element of the parameter matrix is a continuous function, their boundedness can be obtained. The boundedness of the error and of the trace term leads to the boundedness of V(t); that is, V(t) is bounded on the whole interval. Since its initial value is bounded, the finiteness of the error follows by an integral technique.
In the next step, the asymptotic convergence of the synchronization error is proved.
Simulation Example
To demonstrate the theoretical result obtained in Section 3, a dynamical network with six non-identical nodes is considered. According to Theorem 14, synchronization of the complex dynamical network can be guaranteed by the distributed adaptive controllers in (15) together with the distributed adaptive learning laws (16)-(18). Figure 1 shows the error evolutions under the designed controller. In this example the nonlinearity is a bounded function, and we clearly see that the states of the network asymptotically synchronize with the states of the desired orbit. Figure 2 depicts the time evolution of the controller, and Figure 3 shows the evolution of the estimated time-varying parameters. Figures 2 and 3 show that all signals in the network are bounded. Figures 4 and 5 show that the time-varying parameters are periodic and bounded. Remark 15. It is not difficult to draw the evolution of the other elements of the estimated parameter matrix; here, we only take the first row as an example. Compared with existing results [26,28], the main contribution of this paper is the asymptotic synchronization achieved for complex networks with nonlinear neutral-type coupling under the designed controller.
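The flavour of such a simulation can be conveyed by the following much-simplified Python sketch: six diffusively coupled nodes with slightly different dynamics are driven towards a reference trajectory by a basic adaptive feedback gain. This is not the paper's distributed periodic learning law (15)-(18); the node dynamics, gains and coupling matrix are illustrative assumptions only.

```python
import numpy as np

N, dim, dt, steps = 6, 2, 1e-3, 20000
rng = np.random.default_rng(1)

# zero-row-sum, Laplacian-like coupling matrix (illustrative, not the paper's)
A = rng.uniform(0.0, 1.0, (N, N)); np.fill_diagonal(A, 0.0)
A -= np.diag(A.sum(axis=1))

def f(x, i):
    """Slightly non-identical node dynamics (Van der Pol-like, illustrative)."""
    mu = 1.0 + 0.05 * i
    return np.array([x[1], mu * (1 - x[0] ** 2) * x[1] - x[0]])

def f_ref(s):
    """Dynamics of the isolated reference node."""
    return np.array([s[1], (1 - s[0] ** 2) * s[1] - s[0]])

x = rng.normal(0.0, 1.0, (N, dim))            # node states
s = np.array([1.0, 0.0])                      # reference trajectory state
k = np.zeros(N)                               # adaptive feedback gains
gamma = 5.0                                   # adaptation rate (assumed)

for _ in range(steps):
    e = x - s                                 # synchronization errors
    coupling = A @ x                          # linear diffusive coupling (simplified)
    u = -(k[:, None] * e)                     # adaptive feedback control
    for i in range(N):
        x[i] += dt * (f(x[i], i) + coupling[i] + u[i])
    s += dt * f_ref(s)
    k += dt * gamma * np.sum(e ** 2, axis=1)  # gains grow while errors persist

print("final max |e_i| =", np.abs(x - s).max(), " gains =", np.round(k, 2))
```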
Conclusion
In this paper, the synchronization problem for a complex dynamical network with nonlinear derivative couplings is solved via a distributed adaptive control method. The adaptive strategies are tied to the network topology. By combining inequality techniques with parameter separation, and by introducing a composite energy function, the convergence of the tracking error and the boundedness of the system signals are derived. Moreover, the coupling matrix is not assumed to be symmetric or irreducible. Finally, a typical example was simulated to verify the proposed theoretical results.
| 2,448.6 | 2013-01-01T00:00:00.000 | [ "Mathematics" ] |
Structural, Optical and Dynamic Properties of Thin Smectic Films
: The problem of predicting structural and dynamic behavior associated with thin smectic films, both deposited on a solid surface or stretched over an opening, when the temperature is slowly increased above the bulk transition temperature towards either the nematic or isotropic phases, remains an interesting one in the physics of condensed matter. A useful route in studies of structural and optical properties of thin smectic films is provided by a combination of statistical–mechanical theories, hydrodynamics of liquid crystal phases, and optical and calorimetric techniques. We believe that this review shows some useful routes not only for the further examining of the validity of a theoretical description of thin smectic films, both deposited on a solid surface or stretched over an opening, but also for analyzing their structural, optical, and dynamic properties.
Introduction
Free-standing smectic films (FSSFs) and lipid membranes are curious and ubiquitous fluid-like objects in the realm of soft-matter science. In the long-scale limit they can be considered to be a new class of two-dimensional state embedded into three-dimensional space. Here the free-standing liquid crystal (LC) systems, composed of a stack of smectic layers confined by surrounding air [1] or water [2], or deposited on a solid surface, will be considered. Since there is no substrate, these freely supported fluid-like films represent an excellent model of low-dimensional systems for the study of surface effects as the film thickness is reduced.
It should be pointed out that, in contrast to thin fluid-like films, there is a parallel world of crystalline [3] and glassy freely supported membranes [4] and graphene films [5], but we will focus here on the fluid-like films composed of nano-sized sheets of LC matter.
Competition between surface and finite-size effects leads to unusual physical properties of LC systems with a finite number of layers. One of the most interesting features of such LC systems is that, under appropriate conditions, they can be spread across an opening to form FSSFs, which can be considered to be a stack of smectic layers confined by the surrounding air or water. Both the meniscus, which connects the film with the solid frame, and the surface tension of the film are responsible for the stability of these films. Moreover, the presence of the surface tension is believed to be responsible for the intriguing surface ordering phenomena exhibited by these films. Unlike the preferential surface melting exhibited by conventional solids, the FSSF/air or FSSF/water interface appears to enhance the order of the surface layers, so that they become ordered at temperatures well above the bulk smectic-A-isotropic (AI) [1] or smectic-A-nematic (AN) [6] transition temperatures T_AI(N)(bulk). This interesting phenomenon has been reported for LC compounds exhibiting either of these transitions. This review aims to show some useful routes not only for further examining the validity of theoretical descriptions of thin SmA films, both deposited on a solid surface or stretched over an opening, when the temperature is slowly increased above the bulk transition temperature towards either the nematic or isotropic phases, but also for analyzing their structural, optical, thermodynamic and dynamic properties.
Layer-Thinning Transitions and Optical Reflectivity Observed in FSSFs
One of the most effective approaches to the experimental investigation of thin LC films, both deposited on solid substrates and freely suspended, is the study of their optical properties, namely the optical transmission spectra [2,6] and the optical reflectivity [1,2,6], especially with the use of fluorescence confocal microscopy [24,29] and X-ray reflectivity [30]. By measuring the optical reflectivity R of free-standing SmA films composed of certain LC compounds, a remarkable phenomenon of layer-thinning transitions in FSSFs upon heating above T_AI(N)(bulk) has been revealed. During these transitions an FSSF with an initial thickness of several tens of smectic layers can thin layer by layer, and the film thickness reduction is suitably described by a power-law expression N(l) ∼ l^{−ν}, where N is the number of layers, l = [T_AI(N)(N) − T_AI(N)(bulk)]/T_AI(N)(bulk) is the reduced temperature, T_AI(N)(N) is the temperature of the layer-thinning transition in an N-layer smectic film, and ν is the critical exponent, whose value belongs to the interval [0.5, 0.9] for different compounds [31]. It has been shown both by calorimetric and optical studies that free-standing LC films composed of 5-n-alkyl-2-(4-n-(perfluoroalkyl-metheleneoxy)phenyl) (H10F5MOPP) molecules may exhibit the following thinning sequence: N = 15 → 11 → 9 → 8 → 7 → 6 → 5 → 4 → 3 → 2, as the film temperature is increased. The two-layer SmA film composed of H10F5MOPP molecules ruptures in air at a temperature approximately 30 K above T_AI(bulk) [1], and the thinning transition is thermally driven and irreversible. For example, a two-layer film does not rupture for more than 5 h at 30 K above T_AI(bulk) and does not spontaneously thicken when cooled well into the SmA phase [32].
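As a small illustration of how this power law is used in practice, the following Python sketch fits the exponent ν from synthetic (N, T_AI(N)) data; the numbers are invented for the example and are not the measured values of refs. [1,31].

```python
import numpy as np

T_bulk = 350.0            # bulk transition temperature (K), illustrative
nu_true = 0.7             # exponent used to generate the synthetic data

# synthetic layer numbers and transition temperatures consistent with N ~ l^(-nu)
N = np.array([2, 3, 4, 5, 6, 7, 8, 9, 11, 15], dtype=float)
l = (1.0 / N) ** (1.0 / nu_true)          # reduced temperature l for each N
T_N = T_bulk * (1.0 + l)                  # T_AI(N) = T_bulk * (1 + l)

# recover nu from a straight-line fit of log N versus log l
l_data = (T_N - T_bulk) / T_bulk
slope, intercept = np.polyfit(np.log(l_data), np.log(N), 1)
print(f"fitted nu = {-slope:.3f}  (generated with nu = {nu_true})")
```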
Only a small number of compounds possessing partially fluorinated alkyl chains show a layer-by-layer thinning process in air; films of cyanobiphenyl compounds, for instance nCB, simply rupture when heated above T_AI(bulk). In turn, a thinning effect has been observed in thin smectic films composed of decylcyanobiphenyl (10CB) and dodecylcyanobiphenyl (12CB) molecules in water upon heating above T_AI(bulk) [2]. It was shown that these stable smectic films, composed of cyanobiphenyl molecules, can be prepared in water with the help of a surfactant that induces homeotropic anchoring of the LC molecules at the film/water interfaces. These films undergo a one-step thinning transition with the following thinning sequence: N = 9 → 8 → 7 → 6 → 5 → 4 → 3 → 2, in which the film thickness decreases in a stepwise fashion until the film ruptures at ∼2 K above T_AI(bulk) [2]. This behavior is different from that of FSSFs in air: on heating to the isotropic phase, FSSFs in air either simply rupture, for the vast majority of LC compounds, or show a layer-by-layer thinning transition, for LC compounds possessing partially fluorinated alkyl chains. Therefore, it clearly indicates that the film's environment plays a crucial role in the formation of the thinning effect, and FSSFs present unusual physical properties which are associated with the interplay of surface and finite-size effects.
Experimentally obtained values both for the reflectivity R(T) and heat capacity C p (T) decrease in a series of sharp steps separated by wide plateaus as the temperature is increased (see Figures 1 and 2, Ref. [1]). Moreover, the plateau values have been found to obey the laws R(T) ∼ N 2 (N < 15) and C p (T) ∼ N. Therefore, it is clear that the steps correspond to discrete reductions of the film thickness, demonstrating the unique nature of this melting transitions. These transitions have been attributed to the reduction of smectic fluctuations in the bounding layers. It takes place because these films thin as the interior layers undergo the SmA-isotropic transition and the disjoining pressure [12] acting, for instance, across the N-layer and (N − 1)-layer smectic film, is responsible for removal of one smectic layer from the N-layer smectic film into the surrounding reservoir during the layer-thinning process. It has also been shown, by means of high-resolution optical reflectivity investigations [32] that partially fluorinated compound, such as 2-4-(1,1-dihydro-2-(2-perfluorobutoxy) perfluoroethoxy) phenyl-5-octyl pyrimidine (H8F(4,2,1)MOPP), reveal a substantial compression of the smectic layer spacing during the layer-thinning transitions. It was found that upon heating the N-layer perfluorinated film (N = 10, 9, 8, 7, ..., 3) to its maximum temperature T AI (N) of existence, the average film layer thickness L decreases monotonically to a certain minimum value L m , and then, at the thinning transition to the (N − 1)-layer film, L jumps to a nearly initial value. Upon further heating, the average smectic layer thickness in the new (N − 1)-layer FSSF exhibits a similar behavior. It should be noted that a change in the average layer spacing can be as large as ∼0.1 nm, and the minimum value of L in the N-layer film, which is reached at the temperature T AI (N), decreases with decreasing the number N of the film layers. In other words, the minimum value of L in the nine-layer FSSF is smaller than in the ten-layer film, and L m for eight-layer film is smaller than that in the nine-layer one, etc. These results are in contrast with data on the optical reflectivities of FSSF's made of a hydrogenated LC compounds, such as n-pentyl-4 -pentanoyloxy-biphenyl-4-carboxylate (54COOBC), composed of molecules with ordinary alkyl tails without fluorine atoms [33]. Though free-standing films of this material also undergo layer-thinning transitions upon heating above T AI (bulk), their reflectivities, at given number N of the film layers, do not change with increasing temperature up to its maximum value T AI (N). If the reflectivity of the N-layer FSSF does not change upon heating up to the temperature T AI (N) of its thinning transition, then the average layer thickness in this film is completely temperature independent. The origin of such diverse behavior of the smectic layers in FSSFs of different mesogens is not clear up to now.
The free-standing SmA films with two free bounding surfaces represent an excellent example of low-dimensional systems for the study of surface effects. Among other properties of such systems, research of surface tension behavior during the layer-thinning transitions is of both academic and applied interest. Recently, by means of the high-resolution calorimetric technique, direct measurements of the surface tension γ of the partially fluorinated 5-n-alkyl-2-(4-n-(perfluoroalkyl-metheleneoxy)phenyl) films in air, during the layer-thinning transitions, has been carried out [10]. It has been found that at each thinning the film tension γ abruptly drops to a lower value and then continues to increase with a smaller slope. This effect tends to repeat for the rest of the sequence of the layer-thinning transitions N = 15 → 11 → 9 → 8 → 7 → 6 → 5 → 4 → 3 → 2 as the film temperature is increased, where each thinning is characterized by abrupt drops to lower values of γ(T), after which it then continues to increase with a smaller positive slope.
Most of experimental techniques to study the effect of the molecular alignment of LC compounds in thin cells are focused on geometric aspects of the molecular anchoring and stabilization at solid surfaces. To achieve a better stabilization of LC mesophases and to gain a more precise control of their behaviors, various methods to prepare bounding surfaces have been developed [33][34][35][36][37]. The primary method was based on producing micro-grooves on polymer coated surface and, next, on using the rubbing technologies. By employing the atomic force microscopy, the rubbing process has significantly been improved to allow the formation of grooves on the microscale [34]. Alternative methods involve photolithography, enabling the imprinting of appropriate patterns onto photoactive polymer substrates [35], the technique of creating self-assembled alignment surfaces or the process of controlled self-assembly of LC molecules on surfaces [36], as well as the process of realigning of LC molecules at surfaces by means of the infrared laser beam [37]. Thus, experimental studies have shown that the introduction of fluorine functionality into a LC molecule often leads to the change of the observed behavior significantly from that of similar, fully hydroalkyl analogs. Exploring and understanding the mechanisms which connect the presence of fluorine with those molecules' novel macroscopic properties are therefore important both to advance technology and to clarify the relevant underlying physical interactions.
On the other hand, the theoretical treatment of both the structural and dynamic properties of thin smectic films is not an easy task and requires a certain number of simplifying assumptions, which may only be justified by comparison between model predictions and experiment. Thus, the combination of optical and calorimetric techniques with theoretical treatments provides a powerful tool for the investigation of both structural and dynamic properties of mesogenic compounds containing flexible moieties, especially partially fluorinated chains. In this Section, an LC system confined between two parallel surfaces will be considered. Both the structural and optical properties of such an LC system will be treated in the framework of the mean-field model [25,26]. The smectic phase, consisting of N smectic layers (each of thickness d) oriented parallel to the bounding surfaces, was considered to be the initial state of the LC system. The geometry of the LC system used for the theoretical analysis is shown in Figure 1. Two types of interactions will be considered: first, the short-range interactions, rapidly decaying with distance, of the smectic layers with the confining surfaces (V_i, i = 0, s) and, second, the long-range van der Waals interactions between smectic layers (V_ij, i ≠ j) [25,26]. Taking into account that the lateral extent of the smectic layers is much larger than the thickness Nd, we can suppose that all the physical quantities depend only on the z-coordinate counted from the lower bounding surface.
The set of effective anisotropic potentials Φ_i (i = 1, ..., N) within the ith smectic layer can be introduced in the framework of the mean-force approach [25,26], where η_i is the local orientational order parameter (OP), defined as the average of P_2(cos β_i), while σ_i is the local translational order parameter, defined as the corresponding average weighted by cos(2π z_i); here P_2(cos β_i) is the second-order Legendre polynomial, β_i is the polar angle, i.e., the angle between the long axis of a molecule from the ith layer and the director, z_i = z/d is the dimensionless position of the molecule from the ith layer, and the overbar has been (and will be) omitted in the following equations. The potentials V_i and V_ij describe the interactions of the ith layer with the confining surfaces and the pair interactions between the ith and jth molecules, respectively. Here ⟨...⟩_i is the statistical-mechanical average with respect to the one-particle distribution function of the ith layer [7,38], where T is the absolute temperature of the system, k_B is the Boltzmann constant, and Z_i is the partition function of the ith layer. Please note that in the smectic phase both OPs η_i and σ_i are nonzero, whereas in the nematic phase η_i ≠ 0 and σ_i = 0. Finally, in the isotropic phase both OPs η_i and σ_i are equal to zero. Both sums A_i and B_i can be considered to be weighted local order parameters. The constant α = 2 exp[−(π r_0/d)²] implicitly characterizes molecular packing within smectic layers, and r_0 is a characteristic length associated with the rigid core of the molecule.
Taking into account the experimental results, both potentials Φ_s and Φ_ij are well described by simple decaying functions of distance: an exponentially decaying surface potential [Equation (6)] and a corresponding interlayer potential [25,26]. Please note that both parameters V_s and V_0 are positive, because all of the abovementioned interactions are attractive. For simplicity, we assume that the surface potential Φ_s [Equation (6)] is symmetric with respect to the distances from the two surfaces. Furthermore, the characteristic length scale ξ, specifying the range of the surface interactions, has been chosen equal to one [27]. It should be pointed out that the inverse-square distance dependence of the pair interlayer potential Φ_ij is in accordance with the theoretical result obtained for a pair of interacting surfaces [39,40]. Here, for convenience, the inverse reduced temperature V_0 ∼ 1/T has been used rather than the temperature itself [25,26].
The set of OPs η_i and σ_i corresponding to the ith layer of a film composed of a stack of N layers can be obtained by solving the system of 2N nonlinear self-consistent Equations (1)-(5), at a given number of film layers N, temperature T, and the two parameters α and V_s/V_0 of the model. The distributions of the OPs η_i and σ_i across the N = 100 layer smectic film, at nine values of V_0, are shown in Figures 2a-d, 3e-h, and 4i [41], respectively. It should be pointed out that the ratio V_s/V_0 > 7 corresponds to the case of rather strong surface interactions.
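To indicate how such a 2N-dimensional self-consistency problem can be attacked numerically, here is a much-simplified Python sketch based on a McMillan-type per-layer potential with nearest-neighbour averaging and an enhanced coupling in the bounding layers. The functional form, parameter values and integration ranges are assumptions for illustration and do not reproduce Equations (1)-(7) in detail.

```python
import numpy as np
from scipy import integrate

def p2(c):
    return 0.5 * (3.0 * c ** 2 - 1.0)

def layer_order_parameters(a, b, v, alpha, theta):
    """(eta, sigma) for one layer with mean fields a, b (simplified McMillan form)."""
    def boltz(c, z):
        phi = -v * (a + alpha * b * np.cos(2.0 * np.pi * z)) * p2(c)
        return np.exp(-phi / theta)
    Z, _ = integrate.dblquad(boltz, 0.0, 1.0, 0.0, 1.0)
    eta, _ = integrate.dblquad(lambda c, z: p2(c) * boltz(c, z), 0.0, 1.0, 0.0, 1.0)
    sig, _ = integrate.dblquad(lambda c, z: p2(c) * np.cos(2.0 * np.pi * z) * boltz(c, z),
                               0.0, 1.0, 0.0, 1.0)
    return eta / Z, sig / Z

def solve_film(n_layers=10, v0=1.0, w_surf=2.0, alpha=0.8, theta=0.2,
               n_iter=60, mix=0.5):
    """Fixed-point iteration for the profiles eta_i, sigma_i across the film."""
    eta = np.full(n_layers, 0.8)
    sig = np.full(n_layers, 0.8)
    v = np.full(n_layers, v0)
    v[0] = v[-1] = w_surf                          # enhanced coupling at the surfaces
    for _ in range(n_iter):
        eta_new, sig_new = eta.copy(), sig.copy()
        for i in range(n_layers):
            nbrs = slice(max(0, i - 1), min(n_layers, i + 2))
            a, b = eta[nbrs].mean(), sig[nbrs].mean()   # nearest-neighbour mean field
            eta_new[i], sig_new[i] = layer_order_parameters(a, b, v[i], alpha, theta)
        eta = mix * eta_new + (1.0 - mix) * eta
        sig = mix * sig_new + (1.0 - mix) * sig
    return eta, sig

if __name__ == "__main__":
    eta, sig = solve_film()
    print("eta profile:  ", np.round(eta, 3))
    print("sigma profile:", np.round(sig, 3))
```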
"Calculations showed that both OPs η i and σ i are positive for all layers, when the temperature takes sufficiently low values (V 0 ≥ 1.765) [41] (Figure 2a). This means that the SmA phase exists in the whole confined LC system. Because both the surface and interlayer interactions are attractive, smectic layers are more strongly stabilized in the middle part of the LC system than in the vicinity of the bounding surfaces. On the other hand, the studied system is surface stabilized, and hence the LC molecules exhibit the SmA ordering in domains close to surfaces. As the temperature further rises, the smectic ordering begins to vanish in the vicinity of the bounding surfaces and, simultaneously, the nematic ordering starts to arise in these domains [42]. This is illustrated in Figure 2b, where the orientational OP η i > 0, whereas the translational OP σ i is equal to zero. Calculations showed that for N = 100, the nematic phase starts to arise at the temperature T 1 [41], corresponding to V 0 = 1.765, whereas the smectic ordering still prevails in the central domain of the LC system. Figure 2c,d shows how with further increase of temperature, the layer melting transition from the smectic-A to the nematic ordering propagates, largely into interior of the LC system. In turn, as shown in Figure 3e, at higher temperature T 2 (corresponding to V 0 = 1.735), the smectic phase completely disappears in the central domain of the LC system and, afterwards, for the temperature T 3 (corresponding to V 0 = 1.418), the local isotropic domain (associated with η i = 0 and σ i = 0) begins to form in the vicinity of the bounding surfaces, as shown in Figure 3f. As a consequence, Figure 4i shows that with increase of temperature, the frontiers between the isotropic and centrally arising nematic domains move (Figure 3g-h), until the nematic ordering completely vanishes (at the temperature T 4 , corresponding to V 0 = 1.38). In the case when V 0 ≤ 1.38, i.e., above the reduced temperature T 4 , the isotropic phase occurs in whole the LC sample, except for the small domains close to surfaces, where the surface interactions hamper the disorder process, promoting the smectic order, which persists also at high enough temperatures. Accordingly, smectic layers formed in the immediate vicinity of each of surfaces can coexist with nematic and centrally formed smectic domain (Figure 2b-d), or can coexist with isotropic and centrally formed nematic domain (Figure 3f-h). When temperature increases, fronts between nematic or isotropic domains and the SmA domain, as well as fronts between isotropic and nematic domains move, mainly towards the center of the LC system. Results presented in Figures 2-4 show the very complex behavior of both orientational and translational OPs, due to the interplay between pair long-range intermolecular and nonlocal, relatively short-range [41] surface interactions. Calculations also showed that the SmA, nematic and isotropic phases can coexist, whereas the phase transitions from SmA to nematic, as well as from nematic to isotropic phases, as the temperature increases, does not occur simultaneously in the whole volume of the LC system but only in some domains of the LC sample. It should be pointed out that there are four characteristic temperatures, T i , (i = 1, 2, 3, 4), at which particular phases arise or vanish. (Please note that T i ∼ 1/V (i) 0 is the corresponding value of the reduced temperature.) 
For instance, at temperature T 1 the nematic phase starts to form in the vicinity of the bounding surfaces. Simultaneously, the smectic-A phase disappears within these domains, as shown in Figure 2b. How it is shown in Figure 3e, the vanishing process of the smectic phase in the central domain of the LC system takes place at somewhat higher temperature T 2 . In turn, Figure 3f shows that like the nematic phase, the isotropic phase begins to appear also in the vicinity of the bounding surfaces, but at temperature T 3 > T 2 . Finally, at temperature T 4 > T 3 , the nematic phase completely disappears." [42].
Clearly, when the system is not very thin, its interior (sufficiently far from surfaces) is controlled by interlayer interactions. However, when the thickness of a system is relatively small, in comparison with the range of surface interactions, the behavior of the system is dominated by surface anchoring couplings. Calculations showed [25] that the profiles presented in Figure 5 are qualitatively consistent with those derived also for N = 25, but assuming that surface potentials are strictly local and that two-layer potentials are independent of distance between the layers [7,[19][20][21]. This indicates that the underlying method based on averaging such potentials at each iteration of self-consistent procedure applies for rather very thin real systems, entirely or almost entirely governed by surface anchoring interactions. It should be pointed out that in the framework of the abovementioned mean-field approach for description of LC system confined in the microsized volume [25,26], the reduced temperature T was defined as T ∼ 1/V 0 . In the case where it is necessary to calculate the temperature values with high accuracy, for example, in the case of unusual layer-thinning transition observed in FSSF, composed of partially fluorinated H10F5MOPP molecules [1], a precise definition of the dimensionless temperature is needed. It will be done in Section 2.2.3.
In turn, a new type of scaling behavior of the LC system interacting with the solid substrate will be analyzed in the next Section.
Finite-Size Effect in Thin LC Films on a Solid Surface
Effects of surface ordering in LC systems confined between solid boundaries are of great theoretical and experimental interest. In the previous Section a new theoretical approach for analyzing the effect of surfaces on local molecular ordering in thin LC systems with planar geometry of the smectic layers [25,26,41] was introduced. These results showed that due to the interplay between pair long-range intermolecular forces and nonlocal, relatively short-range, surface interactions, both orientational and translational orders of molecules across confining cells are complex. In particular, it has been demonstrated that the SmA, nematic, and isotropic phases can coexist [25,26]. The phase transitions from SmA to nematic, as well as from nematic to isotropic phases, occur not simultaneously in the whole volume of the system but begin to appear locally in some domains of the LC sample. Phase-transition temperatures are demonstrated to be strongly affected by the thickness of the LC system. The dependence of the corresponding shifts of phase-transition temperatures on the layer number is shown to exhibit a power-law character. This new type of scaling behavior is concerned with the coexistence of local phases in finite systems. The influence of a specific character of interactions of molecules with surfaces and other molecules on values of the resulting critical exponents now will be analyzed.
The set of temperature parameter shifts t_i(N), i = 1, 2, 3, 4, defined relative to the thick-film reference values V_0^(i)(500), as a function of the number N of smectic layers, can be obtained by solving the system of 2N nonlinear self-consistent Equations (1)-(5), at a given number of film layers N, temperature T, and the two parameters α and V_s/V_0 of the model; the results of the calculation of the temperature parameter shifts t_i(N) (i = 1, 2, 3, 4) [41] are shown in Figure 6.
Here the shifts exhibit a power-law dependence on N, with the values of the critical exponents α_i (i = 1, 2, 3, 4) being independent of N. The values of α_i (i = 1, 2, 3, 4) corresponding to the appropriate local phase transitions are: α_1 = −1.402, α_2 = −1.122, α_3 = −1.488, and α_4 = −1.172, respectively. Calculations showed that the approximate relations α_1 ≈ α_3 and α_2 ≈ α_4 are satisfied [41]. Critical exponents α_1 and α_3 describe the processes of appearance (at least in the vicinity of the bounding surfaces) of the nematic and isotropic phases, respectively, while α_2 and α_4 characterize the processes of disappearance (in the central domain of the LC system) of the SmA and nematic phases, respectively. The values of the pairs of exponents (α_1, α_3) and (α_2, α_4) differ distinctly from each other because the indices α_1 and α_3 correspond to the local phase transitions that take place in the noncentral parts of the LC system, dominated by the surface interactions, while the exponents α_2 and α_4 are associated with the local phase transitions within the central part of the LC system, dominated by the van der Waals interactions. Therefore, the values of the critical exponents are determined by the type of interactions existing in the LC system [41]. Please note that the values of the exponents α_1 and α_3 are not identical, and α_2 is not exactly equal to α_4, although the exponents of a given pair (α_1, α_3) or (α_2, α_4) are associated with the same domain of the LC cell (i.e., the peripheral or central regions, respectively). Surface interactions play the dominating role in the vicinity of the bounding surfaces, whereas the central part of the LC system is dominated by the intermolecular interactions. However, each of the indices of a given pair refers to transitions between different phases, characterized by different thermodynamic behavior of the studied LC system, which is why the indices of a given pair are not equal. The analysis of ordering in the studied surface-stabilized LC systems has been carried out for a set of ratios between the values of the surface and intermolecular interactions [41]. This ratio corresponds to the temperatures of transitions between the studied local phases, which are differently located with respect to the surfaces of the considered LC system. All phases, and the phase transitions between them, are characterized by a wide range of values of that interaction ratio. It would be interesting to compare theoretical and experimental results for the phase-transition temperatures and for the profiles of the order parameters across surface-stabilized LC systems; however, to date there are no experimental results addressing these problems. (Figure 7: values of the critical exponents α_i, i = 1, 2, 3, 4 [42] corresponding to the appropriate local phase transitions; the solid lines represent the best linear fits.)
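A log-log fit of the kind used to extract such exponents ("best linear fits" in Figure 7) can be illustrated in a few lines of Python; the data below are synthetic placeholders, not the values computed in ref. [41].

```python
import numpy as np

# synthetic shift data t_i(N) ~ A * N**alpha with alpha = -1.4 (placeholder)
N = np.array([25, 50, 100, 200, 400], dtype=float)
t_shift = 3.0 * N ** (-1.4)

# critical exponent = slope of log(t_shift) versus log(N)
alpha_fit, log_A = np.polyfit(np.log(N), np.log(t_shift), 1)
print(f"fitted critical exponent alpha = {alpha_fit:.3f}")   # ~ -1.4
```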
The finite-size effect studied theoretically in this review can be a motivation for experimental studies, by applying, e.g., the fluorescence scanning laser confocal microscopy or the X-ray technique. The initial experimental results obtained by applying the technique of fluorescence scanning laser confocal microscopy [24,42,43] confirmed the inhomogeneity of molecular ordering in finite surface stabilized LC systems [42].
It should be pointed out that the full phase diagram and the set of phase transitions of an LC system interacting with a solid surface can be obtained in the framework of a mean-field approach which takes into account the translational-translational, orientational-orientational, and mixed correlations [44,45]. In the next Section, this will be demonstrated using the example of freely suspended smectic films.
Mean-Field Theory with Anisotropic Forces for Description of the Layer-Thinning Transition in FSSFs
In this Section we present an overview of mean-field approaches, with anisotropic forces [7,12,17], for describing the structural and thermodynamic properties of free-standing smectic films, such as the Helmholtz free energy, entropy, and heat capacity. In this framework, a free-standing smectic-A film is composed of N discrete smectic layers, each with a thickness of the order of the molecular length d, and with a total number of particles M = N N_i, where N_i is the number of molecules per layer, assumed to be the same for all layers. The molecules within each layer are assumed to interact only with molecules of the same layer and of the two neighboring layers. In the framework of these mean-field approaches, a set of potentials Φ_i (i = 1, ..., N) acting within the ith smectic layer can be introduced (Equation (9)) [7,38], where z_i is the dimensionless distance across the smectic film, V_0 is the force constant responsible for the molecule-molecule interaction, W_0 is the parameter corresponding to the "enhanced" pair interactions in the bounding layers, and the constant α implicitly characterizes the molecular packing across the smectic layers [38]. Physically, these approaches replace V_0 by W_0 within the first and last layers, whereas for all interior layers 1 < i < N the interaction coefficient V_0 is left unchanged. It should be pointed out that the effective anisotropic potential Φ_i (i = 1, ..., N) in the form of Equation (9) is a reduced version of the potential Φ_i in the form of Equation (1). The set of OPs η_i and σ_i corresponding to the ith layer of a smectic film composed of a stack of N SmA layers in air can be obtained by solving the system of 2N nonlinear self-consistent Equations (3)-(5), with the effective anisotropic potential Φ_i (i = 1, ..., N) in the form of Equation (9), at a given number of film layers N, temperature T, and the two model parameters α and W_0/V_0. Having obtained the set of OPs η_i and σ_i (i = 1, ..., N), one can calculate the full Helmholtz free energy of the LC system as F(N, T) = Σ_{i=1}^{N} F_i, where F_i is the Helmholtz free energy corresponding to the ith layer. In turn, the dimensionless Helmholtz free energy per molecule of the ith layer, f_i = F_i/(N_i V_0), can be calculated from the partition function Z_i of that layer [7,12,17,46].
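A minimal numerical sketch of such a self-consistency calculation is given below. Since Equation (9) is not reproduced here, the layer-coupled potential used in the sketch — nearest-layer averaging of the order parameters, with the coupling V_0 replaced by W_0 in the two bounding layers and a Boltzmann weight favoring alignment — is an illustrative assumption rather than the published form; the function solve_film and all parameter values are hypothetical.

```python
import numpy as np

# Minimal sketch of a McMillan-type self-consistency loop for an N-layer SmA film.
# The exact layer-coupled potential of Equation (9) (Refs. [7,38]) is not reproduced
# in the text, so the form below -- nearest-layer averaging of the order parameters,
# with the coupling V0 replaced by W0 in the two bounding layers -- is an illustrative
# assumption, not the published potential.

def p2(u):
    """Second Legendre polynomial P2(cos beta), with u = cos beta."""
    return 0.5 * (3.0 * u**2 - 1.0)

def solve_film(N=25, theta=0.67, alpha=1.05, w_over_v=5.0,
               n_iter=500, mix=0.3, n_u=64, n_z=64):
    """Fixed-point iteration for the layer order parameters eta_i, sigma_i."""
    u, wu = np.polynomial.legendre.leggauss(n_u)   # cos(beta) nodes/weights on [-1, 1]
    z = (np.arange(n_z) + 0.5) / n_z               # dimensionless z within one layer
    cosz = np.cos(2.0 * np.pi * z)
    eta = np.full(N, 0.8)
    sig = np.full(N, 0.8)
    coupling = np.ones(N)
    coupling[0] = coupling[-1] = w_over_v          # "enhanced" bounding layers
    for _ in range(n_iter):
        eta_new, sig_new = np.empty(N), np.empty(N)
        for i in range(N):
            lo, hi = max(0, i - 1), min(N, i + 2)  # nearest-layer averages (assumed)
            eta_bar, sig_bar = eta[lo:hi].mean(), sig[lo:hi].mean()
            # Boltzmann exponent favoring alignment, with theta = 3 kB T / V0
            # (assumes an attractive McMillan-type potential -V[...]P2).
            field = (3.0 * coupling[i] / theta) * (
                eta_bar + alpha * sig_bar * cosz[None, :]) * p2(u)[:, None]
            boltz = np.exp(field - field.max())
            norm = np.sum(boltz * wu[:, None])
            eta_new[i] = np.sum(p2(u)[:, None] * boltz * wu[:, None]) / norm
            sig_new[i] = np.sum(p2(u)[:, None] * cosz[None, :] * boltz
                                * wu[:, None]) / norm
        eta = (1 - mix) * eta + mix * eta_new      # damped fixed-point update
        sig = (1 - mix) * sig + mix * sig_new
    return eta, sig

eta, sig = solve_film()
print("bounding-layer OPs:", eta[0].round(3), sig[0].round(3))
print("central-layer OPs: ", eta[12].round(3), sig[12].round(3))
```

The damping factor `mix` is only there to stabilize the fixed-point iteration; any standard nonlinear solver could be used instead.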
Within the scope of our research interest is also the experimentally observed phenomenon of the stepwise reduction of the heat capacity [7,17] as the temperature θ is raised above the dimensionless bulk transition temperature θ_AI(bulk) = 3k_B T_AI(bulk)/V_0 [1]. In order to calculate the values of c_v, one must first calculate the entropy of the system per molecule, obtained from the layer contributions s_i, where s_i is the dimensionless entropy per molecule corresponding to the ith layer. Recently, the phenomenon of stepwise behavior of the surface tension upon heating a smectic-A film above θ_AI(bulk) has been observed experimentally [9]. It was shown that the film tension Γ abruptly jumps to a higher value at each thinning and then continues to increase with a smaller slope [9]. In the framework of the mean-field approach, the dimensionless surface tension γ = Γ ā/V_0 of the smectic film per molecule at constant volume v = V/M can be calculated as in [11,46], where p is the dimensionless pressure per molecule and ā is the area per molecule at constant p and v. At the same time, the calculation of the surface tension Γ = (∂F/∂A)_{V,T} must take into account the fact that V = A N d = const, where F is the Helmholtz free energy of the smectic film and A is the LC/vacuum interface area.
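A short sketch of how the entropy and heat capacity could be extracted numerically from a free-energy curve is given below; the relations s = −df/dθ and c_v = θ ds/dθ, as well as the toy data, are assumptions made for illustration, and the exact dimensionless conventions of Refs. [7,17,46] may differ by constant factors.

```python
import numpy as np

# Sketch: entropy and constant-volume heat capacity from a free-energy curve f(theta).
# Assumes the standard relations s = -df/dtheta and c_v = theta * ds/dtheta in the
# dimensionless units of the model; the exact normalization used in Refs. [7,17,46]
# may differ by constant factors, so this is illustrative only.

def entropy_and_cv(theta, f):
    s = -np.gradient(f, theta)           # dimensionless entropy per molecule
    cv = theta * np.gradient(s, theta)   # dimensionless heat capacity per molecule
    return s, cv

# toy free-energy data (placeholder, not the computed curves of Figure 12)
theta = np.linspace(0.60, 0.80, 401)
f_toy = -0.5 * (0.80 - theta) ** 2
s_toy, cv_toy = entropy_and_cv(theta, f_toy)
print(cv_toy[200])
```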
In the case when the FSSF is subjected to an external electric field E directed either across (E ∥ k̂) or along (E ∥ î) the smectic layers, the set of effective anisotropic potentials Φ_i (i = 1, ..., N) can be rewritten in the form of Equation (12) [11], where Δ = ε_0 ε_a E²/(n_0 V_0) is the dimensionless parameter corresponding to the electric field E applied across or along the smectic layers. Here ε_0 is the dielectric permittivity of vacuum, ε_a is the dielectric constant of the smectic film, and n_0 is the number density. In the framework of the mean-field approach, the dimensionless Helmholtz free energy corresponding to the ith layer can be written as in [11]. Equations (3)-(5), with the effective anisotropic potential Φ_i (i = 1, ..., N) in the form of Equations (9)-(12), together with Equation (15), are the relations needed to calculate the structural, optical, and thermodynamic properties of free-standing SmA films. The set of external parameters used in the calculations is N, α, and W_0/V_0. For films composed of the partially fluorinated H10F5MOPP molecules, both calorimetric and optical reflectivity studies were carried out with initially 25-layer-thick films, above the bulk SmA-Isotropic transition temperature (T_AI(bulk) ∼ 358 K). Taking this fact into account, in the theoretical investigations the initial thickness of the film was chosen as N = 25 [7,11,12,17,19-21]. According to McMillan's theory [38], the first-order bulk AI transition occurs for α ≥ 0.98, so the choice of α = 1.05 is acceptable. When choosing the value of W_0/V_0, one is usually guided by the fact that the partially fluorinated free-standing smectic films composed of the H10F5MOPP molecules are stable above θ_AI(bulk). This allows the assumption that the value of the interaction constant W_0 should be greater than V_0. In a number of theoretical investigations [7,11,12,17,19-21], strong surface-enhanced pair interactions with W_0 = 5 V_0 have been chosen. Taking into account that the partially fluorinated compound H10F5MOPP has the bulk SmA-I transition temperature T_AI(bulk) ∼ 358 K (θ_AI(bulk) ∼ 0.675) and that, for α = 1.05, according to McMillan's theory [38], the ratio k_B T_AI(bulk)/(0.2202 V_0) = 1.021, one can estimate the value of V_0 as ∼2.2 × 10^−20 J. Please note that the values of the dimensionless temperature θ = 3k_B T/V_0 used below typically vary between 0.60 (∼318.2 K) and 0.80 (∼424.3 K) [7,11,12,17].
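The estimate of V_0 and the θ ↔ T conversion quoted above can be reproduced with a few lines of arithmetic; the sketch below simply restates those numbers.

```python
# Sketch: back-of-the-envelope estimate of V0 and the theta <-> T conversion used in
# the text (theta = 3 kB T / V0). Numbers follow the H10F5MOPP values quoted above.
kB = 1.380649e-23          # J/K
T_AI_bulk = 358.0          # K, bulk SmA-Isotropic transition of H10F5MOPP
V0 = kB * T_AI_bulk / (0.2202 * 1.021)             # McMillan relation quoted in the text
print(f"V0 ~ {V0:.2e} J")                          # ~2.2e-20 J
print(f"theta_AI(bulk) ~ {3*kB*T_AI_bulk/V0:.3f}") # ~0.675
for theta in (0.60, 0.80):
    print(f"theta = {theta:.2f} -> T ~ {theta*V0/(3*kB):.1f} K")
```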
In the next Section we will review several examples of numerical simulation of the layer-thinning transitions in free-standing partially fluorinated smectic films as the temperature is increased above T AI (bulk).
The set of model parameters used in these calculations is N = 25, α = 1.05, and W_0 = 5 V_0. In the low-temperature region 0.60 ≤ θ ≤ 0.675 (318.2 K ≤ T ≤ 358 K), the results for the orientational η_i(θ) (Figure 8a) and translational σ_i(θ) (Figure 8b) OPs showed [7] that these equations have a stable unique solution, which is characterized by high values of both η_i(θ) (Figure 8a, squares and up and down triangles) and σ_i(θ) (Figure 8b, squares and up and down triangles), in the vicinity of the bounding surfaces as well as near the film center. In the high-temperature region 0.685 ≤ θ ≤ 0.8 (363.3 K ≤ T ≤ 424.3 K), one also has a stable unique solution, which is characterized by the vanishing of both OPs η_i(θ) and σ_i(θ) near the film center, whereas in the vicinity of the bounding surfaces both OPs still maintain relatively high values. In [7,19], this type of solution was called a "quasi-smectic" state. At intermediate temperatures 0.675 ≤ θ ≤ 0.685 (358 K ≤ T ≤ 363.3 K), both types of solutions of the self-consistent equations exist, although, for clarity, Figure 8 shows only the quasi-smectic profiles.
Calculations also showed that both the η_i(θ) and σ_i(θ) profiles exhibit strong ordering in the vicinity of the bounding surfaces, owing to the stronger pair interactions within the first and last layers than within the interior layers; this ordering decreases rapidly with distance from those surfaces. For instance, both the η_i(θ) (Figure 8a, up triangles (i = 5)) and σ_i(θ) (Figure 8b, up triangles (i = 5)) OPs fall continuously to finite values [7], whereas the parameters corresponding to the interior layers close to the film center (Figure 8a,b, down triangles (i = 10)) drop to 0.
Furthermore, on the basis of the behavior of the free energy, one can calculate the values of the layer-thinning transition temperatures [7]. For instance, in the case of strong (W 0 = 5 V 0 ) "enhanced" pair interactions in the bounding layers, the value of the temperature θ AI (N = 25) is equal to ∼ 0.678 (T AI (N = 25) ∼ 359.6 K). Here θ AI (N = 25) and T AI (N = 25) denote the dimensionless and dimensional layer-thinning transition temperatures, respectively. According to these calculations [7], the distributions of the OPs η i (θ) and σ i (θ) across the 25-layer smectic film, at three dimensionless temperatures θ = 0.65 (∼ 344.74 K), 0.67 (∼ 355.35 K), and 0.69 (∼ 366 K), are characterized by a monotonic decrease of both η i (θ) and σ i (θ) with increasing distance (or number of layers) from the bounding surface towards the interior of the film.
In the case of strong (W_0 = 5 V_0) "enhanced" pair interactions in the bounding layers (see Figure 9a-c [7]), these distributions are characterized by minima in the middle part of the film and by decreasing values of the OPs with increasing temperature. Having obtained the profiles of the OPs η_i(θ) and σ_i(θ), and using Equations (10) and (11), the distributions of both the dimensionless Helmholtz free energy f(i) (Figure 10a) and the entropy s(i) (Figure 10b) can be calculated [7]. Calculations performed for three temperatures θ [7], 0.66 (down triangles), 0.665 (up triangles), and 0.67 (squares), showed that the free-energy profiles demonstrate a monotonic growth of f(i) up to the 8th layer from each boundary, beyond which the function f(i) saturates and does not change with further increase of i. Physically, this means that all film layers are subjected to attractive forces from the bounding surfaces. The results of the calculations shown in Figure 8 indicate that at temperatures close to the layer-thinning value θ_AI(N = 25) ∼ 0.678, strong ordering takes place only in the vicinity of the bounding surfaces, whereas far from the surfaces the ordering drops to lower values than in the bounding layers. As a result, one finds that when the temperature varies from just below θ_AI(N = 25) down to θ = 0.66, the differences between the Helmholtz free-energy f(i) profiles become smaller (see Figure 10a, contrasting the up and down triangles with the squares). The same tendency can be seen for the entropy s(i) profiles (see Figure 10b). The distribution of the free-energy profiles across the smectic film changes dramatically as the temperature increases. When the layer-thinning transition temperature corresponding to the case of strong interactions (W_0 = 5 V_0) for a film initially containing 25 layers is reached, the interior layers become unstable and the system undergoes the discontinuous transition to the quasi-smectic state [7]. Such an effect was already seen in the behavior of the order parameters in Figure 8. The distributions of both the f(i) and s(i) profiles in the high-temperature region θ > 0.678 are shown in Figure 11a,b.
Figure 13. Same as in Figure 12a,b [7], but for the next three film thicknesses: N = 10 (curves 1), 8 (curves 2), and 6 (curves 3), respectively.
Calculations were carried out for the set of model parameters [7] α = 1.05 and W_0 = 5 V_0. Figures 12a,b and 13a,b show the dependences of f(θ) and s(θ) on θ for several film thicknesses: N = 25 (curve 1), N = 13 (curve 2), and N = 11 (curve 3) in Figure 12a,b, and N = 10 (curve 1), N = 8 (curve 2), and N = 6 (curve 3) in Figure 13a,b, respectively. The results of the calculations showed that the SmA-I transition occurs through the sequence of layer-thinning transitions 25 → 13 → 11 → 10 ... as the temperature is increased. The calculated free energy f(θ) per molecule for the 25-layer film (Figure 12a, curve 1) varies smoothly with increasing θ, whereas s(θ) (Figure 12b, curve 1) exhibits a discontinuous rise of more than 40 k_B per molecule at θ_AI(N = 25) ∼ 0.678 (T_AI(N = 25) ∼ 359.6 K), due to the transition to the quasi-smectic state and the corresponding change in slope of the free-energy curve. A similar discontinuity in s(θ) is seen in Figure 13b for N = 10 (curve 1). Discontinuities in s(θ) also occur for the other values of N, but they are not visible in the figures because of the fixed vertical scale.
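Numerically, such a layer-thinning transition can be located as a discontinuity in s(θ); the sketch below illustrates this on synthetic data (the threshold and the toy entropy curve are placeholders, not the computed curves of Figures 12 and 13).

```python
import numpy as np

# Sketch: locating a layer-thinning transition as a jump in the entropy curve s(theta).
# The threshold and the synthetic data are illustrative; in practice s(theta) would be
# the mean-field result plotted in Figures 12b and 13b.

def find_jumps(theta, s, threshold=1.0):
    """Return temperatures where s(theta) rises by more than `threshold` in one step."""
    ds = np.diff(s)
    return theta[1:][ds > threshold]

theta = np.linspace(0.66, 0.70, 401)
s_synthetic = 2.0 * theta + np.where(theta > 0.678, 40.0, 0.0)  # toy 40 kB jump at 0.678
print(find_jumps(theta, s_synthetic))   # ~[0.678]
```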
Following the transition of the N-layer film to the quasi-smectic state, the number of layers (N − n) remaining in the film with non-vanishing smectic order near the film center was determined such that it provides a lower free energy than the N-layer state at the same temperature, together with a higher transition temperature. Calculations showed [7] that the next stable state with lower free energy occurs at N = 13, then at N = 11, etc. The corresponding layer-thinning temperatures are θ_AI(25) ∼ 0.678, θ_AI(13) ∼ 0.697, θ_AI(11) ∼ 0.706, θ_AI(10) ∼ 0.7106, θ_AI(9) ∼ 0.717, θ_AI(8) ∼ 0.729, θ_AI(7) ∼ 0.736, and θ_AI(6) ∼ 0.743 [7]. In the following Section, several structural, thermodynamic, and optical properties of free-standing smectic films at temperatures above θ_AI(bulk) will be considered.
2.2.5. Heat Capacity, Surface Tension, Disjoining Pressure and Optical Reflectivity of FSSFs
A great variety of thermodynamic properties has been observed in FSSFs. Among others, a very interesting phenomenon is the stepwise reduction of the heat capacity c_v(θ) when the temperature is increased above θ_AI(bulk). In the framework of the abovementioned mean-field approach, the effect of the temperature θ on c_v(θ) at constant volume of the 25-layer smectic film, for the two cases of strong interactions W_0 = 5 V_0 and 10 V_0, has been investigated numerically [7]; the results are shown in Figure 14a,b [7]. Calculations showed that the heat-capacity anomaly c_v(θ) ∼ 10^4 (i.e., the heat-capacity peak in Figure 14a) at the temperature θ_AI(N = 25) ∼ 0.678 (T_AI(N = 25) ∼ 359.6 K) is associated with the interior first-order SmA-I transition, at which the entropy rises discontinuously by more than 40 k_B per molecule (Figure 12b, curve 1); away from the peak, in the temperature range 0.655 ≤ θ ≤ 0.677, the value of c_v(θ) (Figure 14a) varies between ∼280 at θ ∼ 0.655 and ∼450 at θ = 0.677. Please note that when the "enhanced" pair interactions in the bounding layers are twice as strong (W_0 = 10 V_0), the effect of the temperature θ on c_v(θ) is qualitatively the same (see Figure 14b) and the layer-thinning transition temperature θ_AI(N = 25) is practically the same as in the weaker case; in the temperature range 0.655 ≤ θ ≤ 0.677 the value of c_v(θ) (Figure 14b) varies between ∼300 at θ ∼ 0.655 and ∼480 at θ = 0.677.
"Heat capacity c v (θ) values of the partially fluorinated H10F5MOPP 25-layer film, calculated in the framework of the mean-field approach, at temperature θ ∼ 0.674 (∼ 357 K), below both the bulk SmA-Isotropic transition temperature and the layer-thinning transition temperature corresponding to strong (W 0 = 5 V 0 ) interactions in the bounding layers, are equal to ∼ 420, or ∼ 82. 5 [7], respectively. In turn, the measured, by means of calorimetric techniques, value of C P , at the same temperature corresponding to "plateau" values of the heat capacity, is equal to ∼ 80 [ µJ cm 2 K ], or ∼ 5.9 × 10 −21 [ J K mol ] [1]. Hence, it has been obtained a good agreement between the theoretically predicted [7] and experimentally obtained [1] results. In recalculations of the theoretical values of c v (N) per H10F5MOPP molecule to compare with the measured C p (N) values, it has been used the fact that the total number of molecules M per unit area in the film, denoted as n s , can be estimated as n s = n 0 l, where n 0 ∼ 1.5 × 10 21 cm −3 is the number density and l = Nd is the thickness of the N-layer film. Since d is of the order of the molecular length ∼ 3.0 nm [33], n s can be estimated as The result of comparing of the calculated value on c v (N), obtained in the framework of the mean-field approach, and the experimentally measured values of C v (N) shows that the extended McMillan's approach "enhanced" by anisotropic interactions in the bounding layers, with W 0 = 5 V 0 , is more suitable for describing both the structural and thermodynamic properties of a partially fluorinated H10F5MOPP smectic film than with W 0 = 10 V 0 , which gives c v ∼ 450, or ∼ 86. 7 The calculated data [7] on the dimensionless heat capacity c v (N) per molecule, and the recalculated dimensional heat capacity C v (N), corresponding to N layer films, as well as the "plateau" temperatures θ(N) for the sequence of the abovementioned layer-thinning transitions (with W 0 = 5V 0 ) are collected in Table 1. Calculations showed that these plateau temperatures satisfy [7]: θ(25) < 0.678 < θ(13) < 0.697 < θ(11) < 0.706 < θ(10) < 0.7106 < θ(9) < 0.717 < θ(8) < 0.729 < θ(7) < 0.736 < θ(6) < 0.743, where the numbers correspond to the successive layer-thinning transition temperatures given earlier. The observed data on C p (N) for the free-standing partially fluorinated H10F5MOPP smectic films also correspond to a series of "plateau" values for the sequence of the layer-thinning transitions 25 → 15 → 11 → 9 → 8... etc., [1]. In the range of film thicknesses investigated, the reduction of C v (N) is, at least qualitatively, in agreement with the experimentally observed decrease of C p (N) with decrease of N. Comparisons of the theoretical and experimental reductions of heat-capacity values at "plateau" regions in thin smectic films away from the layer-thinning transition temperatures should be unaffected by questions of the layer-thinning mechanisms. Nevertheless, these mechanisms may affect the "anomalies" shown by the heat-capacity peaks in Figure 14. Such anomalies have not been presented in experimental studies of SmA layer-thinning transitions, to our knowledge, but only in studies of SmA to hexatic-B transitions of smectic films [22], and we hope the present review will spur further experimental work in this direction [7].
Recently, a high-resolution study of the film tension γ as the film is heated through the layer-by-layer melting process has been carried out, and a sawtooth behavior of the surface tension upon heating the FSSF above θ_AI(bulk) was observed [9,10]. How confinement influences γ(θ) of a thin smectic film when one or several layers are squeezed out into the meniscus has been investigated theoretically [11,12,47], in the framework of the mean-field approach with anisotropic forces [7]. "The effect of the temperature θ both on the dimensionless Helmholtz free energy f(θ) (see Equation (10)) and on the surface tension γ(θ) (see Equation (13)) per H10F5MOPP molecule in the smectic film, in the case when there is no electric field (E = 0 or Δ = 0), corresponding to the sequence of layer-thinning transitions 25 → 13 → 11 → 10 → 9 → 8 → 7 → 6, has been investigated numerically by solving the set of 2N self-consistent nonlinear Equations (3)-(5), with the effective anisotropic potential Φ_i (i = 1, ..., N) in the form of Equation (9). The results are shown in Figure 15a,b [11].
Figure 15. Plot of f(θ) (a) and γ(θ) (b) per H10F5MOPP molecule vs. θ [11], corresponding to the sequence of layer-thinning transitions 25 → 13 → 11 → 10 → 9 → 8 → 7 → 6.
Calculations showed that above θ ∼ 0.675 (∼358 K), both the 25-layer free energy f(θ) and the surface tension γ(θ) grow monotonically with a small slope and then abruptly jump to lower and higher values, respectively, at θ_AI(N = 25) ∼ 0.678 (∼359.6 K), where the film thins to 13 layers. This effect repeats for the rest of the sequence of layer-thinning transitions 13 → 11 → 10 → 9 → 8 → 7 → 6, where each thinning is characterized by an abrupt jump to a lower value of f(θ) and a higher value of γ(θ), after which both quantities continue to increase with a smaller, practically constant, positive slope. Please note that the value of γ(25) per H10F5MOPP molecule of the 25-layer film is lower by a factor of ∼4 than the value of γ(6) per molecule of the 6-layer film, where the numbers correspond to the successive steps of the layer-thinning process described earlier.
In order to carry out a direct comparison between the high-resolution measured data on the surface tension Γ [9,10] and the value of γ(θ) calculated in the framework of the extended McMillan approach "enhanced" by anisotropic interactions in the bounding layers with W_0 = 5 V_0 [11], the dimensionless γ was converted to its dimensional counterpart. The calculated value of γ(θ = 0.67) for the H10F5MOPP 25-layer film at the temperature θ = 0.67 (∼353 K) is equal to 0.0073, or Γ = 0.0181 N/m, while the measured value of Γ is 0.014 N/m. Hence, good agreement between the theoretically predicted [11] and experimentally obtained [9,10] results has been obtained.
These results show that the extended McMillan's approach "enhanced" by anisotropic interactions in the bounding layers is suitable for describing both the structural and thermodynamic properties of a partially fluorinated H10F5MOPP smectic film through the sequence of the abovementioned layer-thinning transitions.
"To examine the external field ∆'s effect on the layer-thinning transition sequence and both on f (θ) and s(θ), calculations of the above values, for the case when the electric field E is directed across the film [11,47], has been carried out. Calculations showed that the dimensionless field ∆'s effect on the layer-thinning sequence is reflected in the change of the layer-thinning transition sequences and of both values of the first multilayer jumps in the thickness and the corresponding layer-thinning temperatures θ AI (N). For instance [11,47] , where the film thins to 12, 15, and 14 layers, respectively. This effect tends to repeat for the rest sequence of layer-thinning transitions 12 → 11 → 10 → 9 → 8 → 7 → 6, for ∆ = 0.02, 15 → 13 → 12 → 10 → 9 → 8 → 7 → 6, for ∆ = 0.04, and 14 → 12 → 10 → 9 → 8 → 7 → 6, for ∆ = 0.08, respectively, where each thinning is characterized by abrupt jump to the lower values of f (θ), and then continues to increase with the smaller, positive slope. Both the temperature θ and field ∆'s effects on the γ(θ) per H10F5MOPP molecule in the smectic-A film, for the cases of ∆ = 0.02, 0.04, and 0.08, is shown in Figure 17a-c, respectively. These calculations showed that at each thinning the film tension γ(θ) abruptly jumps to the higher value and then continues to increase with the smaller slope [11,47], with growth of θ within the temperature interval θ AI (N) < θ < θ AI (N − n). The calculated data both on the dimensionless γ(∆) and dimension Γ(∆) surface tension per H10F5MOPP molecule for the 25-layer film [11], vs. ∆, at the fixed temperature 359 K, are collected in Table 2 [11]. According to these calculations, the ∆'s effect is characterized by increase of γ(∆) up to 29% with increasing of ∆ from 0.0 up to 0.08.
We can now estimate the magnitude of the electric field E necessary for the experimental observation of the effect of Δ on the first layer-thinning transition temperature, which changes from θ_AI(N = 25) ∼ 0.678 (∼359.6 K) in the case of Δ = 0.0 to θ_AI(N = 25) ∼ 0.7 (∼371.3 K) in the case of Δ = 0.02. This can be achieved by applying an electric field E ∼ 1.36 × 10^−2 [C/m²] across the 25-layer smectic-A film.
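A rough way to connect the dimensionless parameter Δ with a field scale is sketched below, assuming Δ = ε_0 ε_a E²/(n_0 V_0); the dielectric constant ε_a used in the sketch is a placeholder value not given in the text, so the output indicates an order of magnitude rather than reproducing the quoted ∼1.36 × 10^−2 C/m².

```python
import math

# Sketch: estimating the field scale behind the dimensionless parameter
# Delta = eps0 * eps_a * E^2 / (n0 * V0). The dielectric constant eps_a below is a
# placeholder chosen only for illustration (it is not given in the text), so the
# resulting numbers give an order of magnitude rather than reproduce the quoted value.
eps0 = 8.854e-12      # F/m
eps_a = 10.0          # assumed dielectric constant of the film (hypothetical)
n0 = 1.5e27           # m^-3, number density quoted earlier
V0 = 2.2e-20          # J, interaction constant estimated earlier
Delta = 0.02          # dimensionless field parameter

E = math.sqrt(Delta * n0 * V0 / (eps0 * eps_a))   # electric field in V/m
D = eps0 * eps_a * E                              # displacement ~ surface charge, C/m^2
print(f"E ~ {E:.2e} V/m, D ~ {D:.2e} C/m^2")
```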
Therefore, based on these calculations one may conclude that the external electric field affects not only the layer-thinning transition sequences but also the magnitude of the first multilayer jump in the film thickness, and that it increases the value of the surface tension. The latter effect is caused by the enhancement of the order in the surface layers under the influence of the electric field applied across the layers.
These results indicate that the mean-field approach based on the extended McMillan theory can be usefully applied for describing the effect of an external electric field both on the layer-thinning transitions and on the surface tension of free-standing smectic films. It has been shown, by solving the self-consistent nonlinear equations for the order parameters, that in the regime of strong interaction with W_0/V_0 = 5, both the layer-thinning transition temperatures and the values of the surface tension grow with increasing Δ. Taking into account that there is good agreement between the theoretical predictions and experimental results, this work lends credibility to the theoretical interpretation of the surface tension data and to the validity of the mean-field approach." [11].
Optical techniques, such as measurements of the optical transmission spectra [6] or of the optical reflectivity [1,2,6] of thin smectic films, are among the most effective experimental tools for studying these films with high resolution. Measurements of the optical reflectivity R of a thin smectic film composed of partially fluorinated molecules (H10F5MOPP) revealed the remarkable phenomenon of layer-thinning melting in smectic films upon heating above θ_AI(bulk) [1]. It was shown that the experimentally obtained values of R(θ) decrease in a series of sharp steps separated by plateaus as the temperature is increased [1,6]. In turn, the theoretical description of the reflectivity R(θ), carried out in the framework of the mean-field approach [8,47], shows that the values of R(θ) also decrease in a series of stepwise reductions as the temperature is increased above θ_AI(bulk). In the limiting case when the smectic film is sufficiently thin and the wavelength of the incident radiation (with wavenumber k_0) lies in the visible range, the reflectivity R can be written as in Equation (16) [8,47], where the refractive indices n_i² can be expressed in terms of the OP η_i and the film thickness L_i corresponding to the ith layer. In turn, the film thickness L_i can be found from Equations (17)-(19) [7,8], where P is the disjoining pressure acting on the film layers from the bounding surfaces and B_i is the compressibility modulus of the ith layer. Here L(N) = (1/N) Σ_{i=1}^{N} L_i, where L_i is the thickness of the ith layer, σ_i(N, T) and σ_b(T) are the values of the translational OPs corresponding to the ith layer and to the bulk SmA phase, respectively, and B_0 and L_0 are the compressibility modulus and the layer thickness in the absence of the disjoining pressure, respectively. It should be pointed out that the set of translational OPs σ_i corresponding to the ith layer, as well as the change ΔF = F(N, T) − F(N − 1, T) of the total Helmholtz free energy of the smectic film, can be calculated in the framework of the abovementioned mean-field theory [7,8,12]. Please note that the change ΔF of the total Helmholtz free energy of the smectic film is equal to the work which must be performed on a unit surface area of the film to decrease its thickness by one layer. Here F(N, T) is the full Helmholtz free energy corresponding to the N-layer smectic film. In principle, two cases can be realized: when the value of ΔF is positive, the disjoining pressure P prevents the thinning of the FSSF and the film layers are subjected to a stretching force; when the value of ΔF is negative, the disjoining pressure promotes thinning of the smectic film and its layers are subjected to a compressive force. Calculations showed that both the η_i and σ_i OPs of the smectic layers demonstrate strong ordering in the bounding domains, and the profiles of η_i and σ_i are characterized by a rapid decrease of both OPs with distance from those surfaces. This nonuniformity of the film was taken into account when the reflectivity and the layer-thinning compression were computed.
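For orientation, a weak-reflection (Born-like) estimate of the reflectivity of a film much thinner than the optical wavelength is sketched below; it is an illustrative stand-in for Equation (16), and since the published relation between n_i² and η_i is not reproduced in the text, the refractive-index profile is treated as an input.

```python
import numpy as np

# Sketch: thin-film reflectivity in the limit where the total film thickness is much
# smaller than the optical wavelength. The weak-reflection (Born-like) estimate
# R ~ (k0/2)^2 * [sum_i (n_i^2 - 1) L_i]^2 is used here as an illustrative stand-in for
# Equation (16); the published relation between n_i^2 and the order parameter eta_i is
# not reproduced in the text, so n_i is treated as an input profile.

def reflectivity(k0, n, L):
    """k0: vacuum wavenumber [1/m]; n, L: per-layer refractive index and thickness [m]."""
    return (0.5 * k0 * np.sum((np.asarray(n) ** 2 - 1.0) * np.asarray(L))) ** 2

k0 = 2 * np.pi / 633e-9                 # He-Ne wavelength, for illustration
N = 25
L = np.full(N, 3.0e-9)                  # ~3 nm layers
n = np.full(N, 1.5)                     # uniform index, placeholder profile
print(f"R ~ {reflectivity(k0, n, L):.3e}")  # rough estimate, valid only for weak reflection
```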
The effect of the electric field Δ on the smectic layers should give rise to a change of their thicknesses L_i(N, θ, Δ). According to Equations (17)-(19), the thickness L_i(N, θ, Δ) of the ith film layer is a function of the disjoining pressure P(N, θ, Δ) and of the compressibility modulus B_i = B_0[σ_i/σ(bulk)]² of each layer i of an FSSF of a given thickness N.
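A minimal sketch of this layer-compression picture, assuming a simple linear-elastic response L_i = L_0(1 − P/B_i) with B_i = B_0[σ_i/σ_b]², is given below; the elastic modulus B_0 and the order-parameter profile are placeholder values, and Equations (17)-(19) of the original work may contain additional terms.

```python
import numpy as np

# Sketch: compression of individual smectic layers by the disjoining pressure. A simple
# linear-elastic response L_i = L0 * (1 - P / B_i), with B_i = B0 * (sigma_i/sigma_b)^2,
# is assumed here for illustration; Equations (17)-(19) of the original work may contain
# additional terms not reproduced in the text.

def layer_thickness(P, sigma, sigma_bulk, L0=3.0e-9, B0=1.0e7):
    """P: disjoining pressure [Pa]; sigma: per-layer translational OPs; B0 [Pa] assumed."""
    B = B0 * (np.asarray(sigma) / sigma_bulk) ** 2
    return L0 * (1.0 - P / B)

sigma = np.array([0.9, 0.7, 0.5, 0.45, 0.45, 0.5, 0.7, 0.9])  # toy profile, strong at surfaces
L = layer_thickness(P=3.1e3, sigma=sigma, sigma_bulk=0.6)
print((L / 3.0e-9).round(4))   # interior layers (smaller sigma_i) are compressed more
```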
The effect of the temperature θ on the dimensionless disjoining pressure P(θ, Δ), defined as the dimensional disjoining pressure divided by V_0 n_0 and investigated in the framework of the mean-field approach, is shown in Figure 18. These calculations correspond to the sequences of layer-thinning transitions for three cases [47]: Δ = 0 (case I), Δ = 0.08 with the field directed across the layers (E ∥ k̂, case II), and Δ = 0.08 with the field directed along the layers (E ∥ î, case III). They showed that the external electric field (Δ = 0.08), directed either across (case II) or along (case III) the smectic film, has a strong influence on P(θ, Δ). Indeed, in both cases II and III, the values of P(θ, Δ = 0.08) (Figure 18b,c) are on average two orders of magnitude greater than the value of P(θ, Δ = 0) (Figure 18a) for case I. Please note that the nature of this effect of the electric field on P(Δ) stems from the effect of Δ on the Helmholtz free energy f(Δ). Indeed, the values of f(Δ = 0.08) are approximately one order of magnitude greater than the values of f(Δ = 0) in the absence of the electric field (see Figures 15 and 16). As a result, one finds that the values of P(Δ = 0.08) are on average two orders of magnitude greater than the value of P(Δ = 0). This means that the average dimensional disjoining pressure P in the smectic film with N = 25 layers is [47] P ∼ 6.6 × 10^5 N/m² for Δ = 0.08 with E directed across the layers, ∼4.3 × 10^5 N/m² for Δ = 0.08 with E directed along the layers, and ∼3.1 × 10^3 N/m² for Δ = 0.
Based on these calculations, one can conclude that the layer-thinning transitions are characterized by an abrupt (stepwise) increase of P(N) when the film thins from an N-layer to an (N − 1)-layer film, then from an (N − 1)-layer to an (N − 2)-layer film, and so on. All smectic layers during the thinning process are subjected to a compressive force which grows as the number of layers N decreases [12]. The effect of the electric field Δ on the smectic layers should give rise to a change of their dimensionless thicknesses L_i(N, θ, Δ), i.e., the dimensional layer thicknesses scaled by L_0. The behavior of the dimensionless smectic-layer thickness profiles L_i(N = 25, Δ) across the 25-layer partially fluorinated H10F5MOPP smectic film, for several values of Δ [47], showed that the interior film layers are compressed much more strongly than the bounding layers. In the case of Δ = 0.0, the interior layers are compressed more weakly than in the cases when the electric field is applied. Calculations also showed that with decreasing film thickness the largest compression of the interior layers increases, from L_{i=13}(N = 25) ∼ 0.98 for the 25-layer film to L_{i=5}(N = 10) ∼ 0.845 for the 10-layer film. Physically, this means that in thinner films all the layers are subjected to larger compressive forces than in thicker ones. The effect of the dimensionless field Δ on the average film thickness L(θ, Δ) of the smectic film, corresponding to the sequence of the abovementioned layer-thinning transitions, is shown in Figure 19 and is characterized by a stepwise decrease of L(θ, Δ) [8,47]. Calculations showed that at each thinning the film thickness abruptly jumps to a higher value and then continues to decrease with a smaller slope, with growth of θ within the temperature interval θ_AI(N) < θ < θ_AI(N − n). Here N − n is the number of smectic layers remaining in the film after each thinning.
The behavior of the average film thickness L(T) measured in the smectic film composed of 2-(4-(1,1-dihydro-2-(2-perfluorobutoxy)perfluoroethoxy)perfluoroethoxy)phenyl-5-octyl pyrimidine (H8F(4,2,1)MOPP) molecules also exhibits upward jumps at each thinning transition [33]. These results show that the extended McMillan approach "enhanced" by anisotropic interactions in the bounding layers is suitable for describing the stepwise reductions of the smectic film thickness through the sequence of the abovementioned layer-thinning transitions. Hence, good agreement has been obtained between the theoretically predicted [8] and the experimentally observed decrease of L(θ) with decreasing N, for the FSSF composed of partially fluorinated H8F(4,2,1)MOPP molecules.
How the temperature θ and the electric field Δ affect the reflectivity R(θ, Δ) of the smectic film through the sequence of the abovementioned layer-thinning transitions has been investigated in the framework of the mean-field approach [47]. The calculated results are shown in Figure 20 and indicate that the reflectivity also exhibits stepwise reductions of R(θ, Δ) during the sequence of layer-thinning transitions. Figure 20 shows the dimensionless reflectivity R̄(θ, Δ) = R(θ, Δ)/(L_0² k_0²) vs. θ, for case II and several values of Δ [47]: 0 (a), 0.02 (b), 0.04 (c), and 0.08 (d), respectively. These results indicate that the mean-field approach based on the extended McMillan theory can be usefully applied to describe not only the layer-thinning transitions, which cause the films to thin in a stepwise manner as the temperature is increased above θ_AI(bulk), but also several structural, thermodynamic, and optical properties of free-standing smectic films. Taking into account the good agreement between theoretical predictions and experimental results, this mean-field approach lends credibility to the theoretical interpretation of a wide range of structural and optical data.
In the next Section the diffusion phenomena in thin smectic films will be discussed.
Translational and Orientational Diffusion across the Smectic Films
Although several approaches have been proposed to theoretically describe the diffusion process in liquid crystals [48-51], it is still too early to speak of a theory that would make it possible to describe the diffusion processes in thin smectic films based only on the form of the Hamiltonian. In the bulk of the SmA phase, the translational diffusion process across the smectic layers implies a passage over a potential barrier Φ. Taking into account that in the smectic-A phase the coordinate system is chosen so that the direction of the z-axis coincides with the direction of the director n̂, the potential barrier Φ(z + d) = Φ(z) is a periodic function of z with the period d, which is the layer spacing. The jump rate for molecular diffusion in the bulk of the SmA phase can be described, for instance, by the translational diffusion model [52], which assumes a stochastic Brownian process in which each molecule moves in time as a sequence of small steps caused by collisions with its surrounding molecules and under the influence of the potential Φ(z) set up by these molecules. This diffusional process can be described by the translational diffusion tensor whose principal elements (D_xx = D_yy = D_⊥, D_zz = D_∥) are determined in a frame fixed on the molecule.
Recently, a molecular model based upon the random walk theory [52] was proposed to describe translational diffusion in freely suspended smectic films [53]. It was shown that the calculation of the translational diffusion coefficient (TDC) D across the smectic layers, both in the bulk of the film and in the vicinity of the bounding surfaces, requires the set of η_i and σ_i OPs obtained using the mean-field McMillan approach [7] with anisotropic forces [39].
"The random walk theory allows us to calculate the translational diffusion across the smectic layers when a molecule makes a jump from (i + 1)th to ith layer. It can be realized when the molecule reaches "the boundary" between these layers with a "positive" momentum. Here the layers are counted from the film/air interface to the bulk of the film. In that case, the TDC can be written as [53] where dz is the mean-square jump length from (i + 1)th to ith layers, h (z) is the one-particle distribution function (see Equation (5)), and τ i,i+1 is the time required the molecule to jump from (i + 1)th layer to the ith ones. In turn, the time τ i,i+1 can be written as where τ 0 is the time of oscillation of the molecule about the equilibrium position in the bulk of the smectic film, and is the height of the potential barrier. Here Φ i (max) and Φ i+1 (min) are the values of maxima and minima neighborhood potentials belonging to ith and (i + 1)th layers, respectively. For calculation of the potential barrier ∆Φ i,i+1 one needs an effective anisotropic periodic potential within the ith smectic layer. By implementing the integration in the last equation one obtains the set of expressions for the height of the potential barrier where z i = z i /d is the dimensionless space variable. Notice that the overbar in the space variable z has been (and will be) eliminated in the last as in the following equations. Furthermore, it is convenient to rewrite expressions for potential barriers as [53] − Having obtained the set of OPs η i and σ i (i = 1, ..., N) one can calculate the potential barrier ∆Φ i,i+1 , the mean-square jump length from (i + 1)th to ith layers [53] Calculations showed [53] that the distribution of the profiles D i,i+1 (θ, ∆ = 0) /D N/2,N/2+1 (θ, ∆ = 0) across the 25 layer smectic film, in the absence of the electric field (∆ = 0.0), corresponding to three temperature θ values 0.67 (squares), 0.675 (up triangles), and 0.677 (down triangles), respectively, are characterized by the monotonic increase of the ratio D i,i+1 (θ, ∆ = 0) /D N/2,N/2+1 (θ, ∆ = 0) up to the middle-film's values, with increasing distance (or number of layers) from the bounding surface towards the interior of the film. In the case of strong (W 0 = 5V 0 ) "enhanced" pair interactions in the bounding layers these distributions demonstrate monotonic growth of the value of D i,i+1 (θ, ∆ = 0) /D N/2,N/2+1 (θ, ∆ = 0) up to the eighth layer from each boundary, where the function D i,i+1 (θ, ∆ = 0) /D N/2,N/2+1 (θ, ∆ = 0) saturates and does not change with further increase of i. In turn, near the bounding surface the motional constant D i,i+1 (θ, ∆ = 0) /D N/2,N/2+1 (θ, ∆ = 0) drops to zero, i.e., the strong "enhanced" pair interactions completely suppresses the diffusion process in the bounding layers. The distribution of the number of D i,i+1 (θ, ∆ = 0) /D N/2,N/2+1 (θ, ∆ = 0) profiles across the smectic films, during the sequence of the layer-thinning transitions 25 → 13 → 11 → 10 [53], as the temperature is increased above the value θ AI (bulk) ∼ 0.675, is shown in Figure 22. Here, calculations have been carried out in the absence of the electric field (∆ = 0.0) [53]. The electric field E s effect on the dimensionless translational diffusion coefficient D i,i+1 (θ, ∆) /D N/2,N/2+1 (θ, ∆) as a function of layer number i, in the smectic film with N = 25 layers, both in the cases of ∆ = 0.0 and ∆ = 0.08) [53], is shown in Figure 23. 
These calculations showed that the electric field Δ has a weak effect on the distribution of D_{i,i+1}(θ, Δ)/D_{N/2,N/2+1}(θ, Δ) across the 25-layer smectic film and that the diffusion process is completely suppressed in the bounding layers [53]. Such behavior of the TDC is due to the fact that the potential barrier ΔΦ_{1,2} is much larger than ΔΦ_{i,i+1} (i = 2, ..., N/2) as the temperature is increased above the bulk value θ_AI(bulk), because of the strong ordering in the vicinity of the bounding layers.
It should be noted that the abovementioned mean-field model is applicable to diffusion across the smectic layers, because it deals with the potential barrier set up across the smectic layers, not within them.
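A minimal sketch of this jump-rate picture is given below; the Arrhenius form τ = τ_0 exp(ΔΦ/k_B T) and the one-dimensional random-walk estimate D = ⟨dz²⟩/(2τ) are assumptions used for illustration, and the barrier values are toy numbers rather than the profiles of Ref. [53].

```python
import numpy as np

# Sketch of the jump-rate picture described above: a molecule hops from layer (i+1) to
# layer i over a barrier dPhi_{i,i+1}. The Arrhenius form tau = tau0 * exp(dPhi / kB T)
# and the 1D random-walk estimate D = <dz^2> / (2 tau) are assumptions used here for
# illustration; the exact expressions are given in Ref. [53].

def jump_diffusion(dz2, dPhi_over_kT, tau0=1.0e-11):
    """dz2: mean-square jump length [m^2]; dPhi_over_kT: barrier in units of kB*T."""
    tau = tau0 * np.exp(np.asarray(dPhi_over_kT))
    return np.asarray(dz2) / (2.0 * tau)

# toy barrier profile: very high near the bounding layer (i = 1), lower in the interior
barriers = np.array([12.0, 6.0, 4.5, 4.0, 4.0])          # dPhi / kB T for i = 1..5
D = jump_diffusion(dz2=(3.0e-9) ** 2, dPhi_over_kT=barriers)
print((D / D[-1]).round(4))   # diffusion strongly suppressed near the bounding surface
```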
To calculate the dimensional value of the translational diffusion coefficient D_b(N) in the smectic film, one can use the Maclaurin expansion of the momentum autocorrelation function f(t) = ⟨p(0) · p(t)⟩/⟨p(0) · p(0)⟩, where the angle brackets indicate the equilibrium ensemble average [52]. For this purpose, the function f(t), in the form of a damped oscillation, has been adopted for the calculation of the dimensional value of the diffusion coefficient [53]. Taking into account that the coefficients of the Maclaurin series in time of the function f(t) are in principle calculable [53], a two-parameter functional expression for f(t) can be adopted [54], in which the parameter α determines the rate of decay and δ gives the rate of oscillation relative to the time scale determined by α. All this allows one to express the diffusion coefficient D_b(N) in terms of α and δ [54], where the parameters α and δ are given in Ref. [53]. The calculated value of D_b(N = 25), at T = 367 K (θ = 0.67) and d = 3.26 nm [55], for the 25-layer partially fluorinated H10F5MOPP smectic film, is ∼6 µm²/s. In turn, the experimentally obtained data on D_b(N) (N = 25, 13, 11, 10), for the 25-layer smectic film composed of 4-octyl-4′-cyanobiphenyl molecules, give ∼3 µm²/s [56,57]. Hence, good agreement between the theoretically predicted [53] and experimentally obtained [55] results has been obtained." [53]. "In turn, the rotational dynamics of a uniaxial molecule in an anisotropic phase can be described in the framework of the rotational diffusion model [58], which is based on the concept that the molecular reorientation proceeds through a random sequence of large-amplitude angular jumps from one orientation to another [51]. In that model, a molecule is considered to be an ellipsoid aligned along, or close to, n̂, and a diffusional jump results in rotation of the molecule from β ∼ 0 to β ∼ π. It is assumed that a molecule makes a jump by the minimal successful angle π if it reaches "the boundary" between the β ∼ 0 and β ∼ π orientations and has a positive angular-momentum projection p_β onto an axis perpendicular to n̂. In the framework of this model, the rotational self-diffusion (RSD) coefficient D_ω ≡ D_⊥ can be written in terms of the rotational jump rate (Equation (31)) [51], where I is the moment of inertia of the molecule with respect to the minor axis of the ellipsoid, p_β and p_φ are the angular-momentum components, and F(p_φ, p_β, φ, β) is the one-particle distribution function of the LC film on the solid surface. The function F does not depend on the azimuthal angle φ and, moreover, in the vicinity of the equilibrium state the momentum projections are correlated neither with each other nor with the conjugate angles φ and β. In this case, the function F can be written as a product of three functions, F = F(p_φ) F(p_β) f(cos β), where F(p_φ) and F(p_β) are Maxwellian distribution functions, whereas f(cos β) is the ODF. By integrating Equation (31), one obtains the final expression for the RSD coefficient (Equation (33)) [51,59,60]. Thus, D_ω is a function of the temperature θ, the moment of inertia I, and the value of the ODF at β = π/2. Physically, this means that the one-particle function f(cos β) of the LC phase has a rather sharp maximum at β = 0 (i.e., around the director n̂), rapidly decreasing as β tends to π/2. At β = π/2 the function f(β) is small but finite, and it defines the "gate" width in orientational space through which the molecule diffuses from one orientation to another.
Therefore, having obtained the ODF f(π/2), one can calculate, using Equation (33), D_ω as a function of θ and I. A numerical analysis of the rotational diffusion processes in a thin smectic film (N = 25) deposited on a solid surface (with W_0/V_0 = 3.0 and W_1/V_0 = 10.0) showed that only a strong electric field, Δ = 0.1, has a visible effect on the dimensionless coefficient D_ω(i, θ)/D_ω(bulk) (i = 1, 25) [61] (see Figure 24). Here W_0 and W_1 are the two parameters of the LC system defining the enhanced pair interactions in the LC/vacuum and LC/solid bounding layers, respectively, and the dimensionless electric field Δ = 0.1 can be obtained by applying a field of ∼7 × 10^−2 C/m² across the 25-layer smectic film. Calculations showed that the motional constant D_ω(i, θ)/D_ω(bulk) decreases by up to 20% [61] in the low-temperature range (0.57 ≤ θ ≤ 0.59) as Δ increases from 0 to 0.1, both in the first (i = 1, Figure 24a) and the last (i = 25, Figure 24b) layers [61]. In the high-temperature range (0.59 ≤ θ ≤ 0.61) the effect of the electric field Δ decreases, and it finally disappears at the end of the temperature interval (0.57 ≤ θ ≤ 0.61). In the case of a strong electric field (Δ = 0.1), calculations showed that the curves describing D_ω(i, θ)/D_ω(bulk) vs. θ, both for the first (i = 1) and the last (i = 25) layers, are practically congruent [61].
The effect of the parameter α on D_ω(i, θ, α)/D_ω(bulk) for two values of Δ is shown in Figures 25 and 26. Calculations showed [61] that when the electric field is absent (Δ = 0), in the low-temperature limit 0.57 ≤ θ ≤ 0.61, the lower values of α, i.e., 0.6 and 0.7, produce the higher values of D_ω(25, θ)/D_ω(bulk) (see Figure 26b), whereas in the first layer (i = 1) the lower values of α produce the higher dimensionless RSD coefficient only at the beginning of that temperature interval; at the end of the interval [0.57 ≤ θ ≤ 0.61] the higher values of D_ω(1, θ)/D_ω(bulk) are produced at α = 0.8 (see Figure 26a). In the case of a strong electric field (Δ = 0.1), the dependence of D_ω(i, θ)/D_ω(bulk) on θ is shown in Figure 26 and demonstrates the same qualitative behavior as in the field-free case only in the last (i = 25) layer (see Figure 26b); in the first layer (i = 1), at the end of that temperature interval, the higher values of D_ω(1, θ)/D_ω(bulk) are produced at α = 0.7 (see Figure 26a). With the growth of Δ from 0 to 0.1, the biggest values of D_ω(i, θ)/D_ω(bulk) (i = 1, 25) [61] are produced by the molecules with the lower values of the alkyl tail length, α = 0.6, 0.7, and 0.8. These calculations of the effect of the alkyl tail length α on D_ω(i, θ, α)/D_ω(bulk) (i = 1, 25), displayed in Figures 24-26, showed that the parameter α has a strong effect on the rotational diffusion process in a smectic film deposited on a solid surface and subjected to a strong electric field. In all the cases described above, the moment of inertia of the molecule was kept unchanged when α was varied [61]. Calculation of the coefficient D_ω(bulk) in the bulk of the LC phase composed of 8CB molecules, at T = 307 K, with I = 7.41 × 10^−44 kg m² and f(π/2) ∼ 10^−4, gives D_ω(bulk) ∼ 3 × 10^8 s^−1, which is in good agreement with the experimental ²H NMR values of (0.5−3) × 10^8 s^−1 [62]. Please note that the function f(π/2) has been obtained by solving the system of nonlinear Equations (3) and (4) for the two bulk OPs η_b and σ_b, with the effective anisotropic potential Φ(z, cos β) = −V_0[η_b + ασ_b cos(2πz/d)]P_2(cos β)." [61].
Taking into account that, from an order-of-magnitude point of view, there is good agreement between the theoretical predictions and the experimental results for the RSD coefficient in the bulk of the LC phase, this mean-field model lends credibility to the theoretical interpretation of the motional data and to the validity of this theoretical approach.
We conclude Section 2 by pointing out that the combination of the mean-field models with the experimental techniques provides a powerful tool for exploring and understanding the mechanisms which clarify the relevant underlying physical interactions.
Dynamics of the Layer-Thinning Processes in Free-Standing Smectic Films
The dynamic properties of LC systems confined within small spaces, such as free-standing smectic films, are quite different from their bulk dynamics. A unique property of FSSFs is their ability to exhibit layer-thinning transitions above T_AI(bulk). The balance between surface and finite-size effects leads to unusual layer-by-layer thinning, in which the interior layer(s) is(are) squeezed out by the bounding ones. It has been assumed that the squeezing-out is initiated by a thermally activated nucleation process in which a density fluctuation forms a small hole of critical radius in the center of the circular smectic film [17]. If the hole inside the film is taken to be of circular shape with a radius ℓ and with a thickness of the order of the molecular length d, then under the effect of the pressure gradient ∇P the squeezing-out process develops between the squeezed-out and non-squeezed-out areas. In that case [12,17], ∇P is responsible for driving out one or several smectic layer(s) from the N-layer smectic film. The dynamics of the bounding area, which is separated by the layer-thinning transition front, during the layer-thinning transition N → N − 1 will be described by using the conservation laws for mass and linear momentum [17,18], with and without accounting for the coupling between the smectic film and the meniscus.
Squeezing-Out Dynamics of Layer-by-Layer Thinning Transition in FSSF without Accounting for the Meniscus
"The evolution of the bounding area, which is separated by the layer-thinning transition front, from the N-layer to (N − 1)-layer smectic film, without accounting for the effect of meniscus, has been studied on the basis of conservation laws for mass and linear momentum [17]. The dynamics of the bounding area between the squeezed-out and non-squeezed-out areas in the FSSF has been investigated for the case of the circular shape with the area A 0 = πR 2 , where R is the radius of the total smectic film. In the framework of this approach the squeezing-out process starts from a small hole in the center of the circular smectic film. This hole is formed as a result of the thermally activated nucleation process in which a density fluctuation forms the small hole in the FSSF [17]. The evolution of the size of the small hole is determined by the balance between the bulk and surface thermodynamic forces, and the nucleus grows only when its size δ = π 2 d exceeds a critical value δ c . Since the boundary between the squeezed-out and non-squeezed-out areas in the FSSF has been investigated for the case of the circular shape with the area A(t) = πr 2 (t), and the squeezing-out process continues until the area of the circle A(t) reaches the value A 0 = πR 2 , the shape of the bounding line can be treated as dislocation. Please note that during the thinning process all smectic layers are subjected to the compressive force P acting across the FSSF upon heating to the isotropic temperature. This allows us to assume that the disjoining pressure [12,17,63] P acting across the N-layer and (N − 1)-layer smectic film is responsible for develop of the pressure gradient ∇P between the squeezed-out and non-squeezed-out areas [12,17]. And since the disjoining pressure P(N − 1) acting across the (N − 1)-layer film is greater then P(N) acting across the N-layer smectic film [12,17,63], one can assume that the disjoining pressure (DP) is responsible for the pressure gradient ∇P which drives the squeezed-out smectic layer. All this allows us to assume that the conservation laws for mass and linear momentum must be held. Bearing in mind that the layer-thinning process in FSSF is characterized by removal of interior isotropic layer(s) from the overheated film, it should be taken into account only the continuity equation and the Navier-Stokes equation for the velocity field v(r, t). Taking into account the thickness of the smectic film, one can assume, with high accuracy, that the mass density ρ across the FSSF does not change, and one deals with an incompressible fluid. The incompressibility condition gives that [17] ∇ · v = 0, (34) whereas the linear momentum balance equation can be written as [17] ρ ∂v(r, t) ∂t = −∇ r P(r) + ∇ r σ rz (r, t), where σ rz is the stress tensor component corresponding to the viscous force. In this case, it is convenient to choose a cylindrical coordinate system, where only one nonzero radial component v(r, t) of the velocity vector v, directed parallel to the smectic layers, exists whereas the pressure is a function only of the radius r, andê r is the unit radial vector. The relevant solutions of Equations (34) and (35) can be written in the forms and respectively, and α 4 is the shear viscosity coefficient. Substituting Equations (38) and (39) This equation has a solution [17,64] ln r R where the ∆P = P(N) − P(N − 1) is a disjoining pressure dropping across the front of the moving boundary area during the layer-thinning transition N → N − 1. 
Taking into account that A(t) = πr²(t), the last equation can be rewritten in the form of Equation (42) [17], which contains the factor (1/2π) ln(A/A_0). In the following, the second time-derivative term in Equation (42) has been neglected. Indeed, at the left end of the time interval [0, t_R], r varies very slowly and the second-order time derivative of r² or A(t) can be ignored, whereas at the right end of the same interval lim_{t→t_R} ln(A(t)/A_0) = 0. Here t_R is the time needed to completely squeeze out one smectic layer. Under these circumstances, Equation (42), which determines the area A(t), takes the form of Equation (43), and its solution can be written as Equation (44), where κ_c = πℓ_c² and ℓ_c is the radius of the critical nucleus. In turn, the expression for the velocity v(r, t) can be written as Equation (45), where r ∈ [ℓ_c, R]. The time t_R needed to completely squeeze out one smectic layer can be obtained from the condition A(t_R) = A_0 (Equation (46)). Here we need to take into account the fact that the time t_R is inversely proportional to ΔP = P(N) − P(N − 1), and that this value is always negative.
To examine the effect of ΔP on t_R in the FSSF corresponding to the N → (N − 1) layer-thinning transition, one needs the values of the disjoining pressures P(N) and P(N − 1) when the FSSF thins from the N-layer to the (N − 1)-layer film. The disjoining pressure P(N, T) of FSSFs has been studied numerically for two cases: first, for the initially 25-layer film composed of partially fluorinated 5-n-alkyl-2-(4-n-(perfluoroalkyl-methyleneoxy)phenyl (H10F5MOPP) molecules in air [12], and, second, for the initially 10-layer film composed of decylcyanobiphenyl (10CB) molecules in water [63], on heating towards the isotropic temperature. Calculations showed that the layer-thinning transitions are characterized by abrupt (stepwise) rises of P(N − 1) with respect to P(N) when the film thins from the N-layer to the (N − 1)-layer film, and that all smectic layers, during the thinning, are subjected to a compressive force which grows as the number of layers N decreases [12,63]. Such behavior of P(N) yields negative values of the difference ΔP = P(N) − P(N − 1) [12,63]. Data on ΔP, both for the free-standing partially fluorinated H10F5MOPP film in air [12] and for the cyanobiphenyl 10CB film in water [63], are collected in Table 3 [17].
With Equation (46) and data on ∆P one can calculate the values of t R [in sec] and the average velocity u = R/t R [in m/sec]. These data also are collected in Table 3.
The calculation results collected in Table 3 correspond to the 25-layer FSSF composed of partially fluorinated H10F5MOPP molecules in air (the first seven lines from the top) [17], and to the 10-layer FSSF composed of 10CB molecules immersed in water (the fifth, fourth, and third lines from the bottom), respectively. The measured data on the average thinning speed of single-layer thinning in a 5-layer smectic film composed of H8F(4,2,1)MOPP molecules in air are given in the last two lines [15]. The disjoining-pressure ΔP calculations performed using two different approaches, Ref. [12] (see Table 3, lines (1) to (4); case I) and Ref. [19] (see Table 3, lines (5) to (7); case II), gave results that differ from each other on average by one order of magnitude. Please note that the values of ΔP have been calculated for the same sequence of layer-thinning transitions, 10 → 9 → 8 → 7, in the FSSF composed of the same partially fluorinated H10F5MOPP molecules. As expected, the values of the time t_R calculated using the two different approaches (I) and (II) also differ from each other on average by one order of magnitude. The results of the calculations also showed that the disjoining-pressure ΔP values calculated for the FSSF in water (case III) and in air differ from each other by several orders of magnitude. The values of ΔP which drive the squeezing-out of one smectic layer in case III are two orders and one order of magnitude higher than in cases I and II, respectively [17]. This variation in the ΔP values means that the time t_R required for completely squeezing out one smectic layer in case III is much shorter than in both cases I and II. The same holds for the average velocities u = R/t_R, the results for which are shown in the last column of Table 3 [17]. In all these calculations the value of α_4 is equal to 0.1 Pa s and R = 100 µm.
The condition determining the value of the critical radius r_c can be obtained by minimizing the energy W required to form a small circular hole. This energy, formed by three contributions, can be written in the form of Equation (47) [17], where r denotes the radius of the hole. Here the first contribution is due to the line tension, while the second one is the interfacial contribution. Finally, the third term in Equation (47) is an elastic energy contribution. It should be noted that the first contribution to Equation (47) is positive; γ is the interfacial LC/air tension, n is the number of squeezed-out layers, and B is the compressional elastic constant, which has the dimension of an energy per volume V = πr²(N − n)d.
Here t_R(max) is equal to the time t_R corresponding to the 10 → 9 layer-thinning transition in the FSSF composed of partially fluorinated H10F5MOPP molecules in air. The results of the calculations of the circular area A(τ)/A_0 show that the squeezing-out is accelerated at the final stage of the process. Indeed, for instance, in the case of the layer-thinning transition 10 → 9, the velocity u(R, t_R) at the edge of the circular smectic film of radius R is equal to 2.52 m/s, which is one order of magnitude higher than the average velocity u = R/t_R, equal to 0.26 m/s. In all these calculations the value of the shear viscosity was α_4 ≈ 0.1 Pa s. In turn, the results of experimental studies [65] indicate that the value of the rotational viscosity coefficient (RVC) γ_1 in the bulk of the LC phase is very different from the value of the RVC near the bounding surfaces. This fact must therefore be taken into account to obtain more reliable estimates of the shear viscosity coefficient α_4.
It should also be noted that the value of the effective radius of the nucleus has a strong influence on the dissipation at the smectic film/meniscus interface. Indeed, it has been shown that in smectic films the separated dislocations are coupled by means of the dissipation in the meniscus [66], and this dynamic coupling may change the effective radius of the nucleus by up to 10 times with respect to the static critical radius. As a result, the time required to completely squeeze out one smectic layer in the smectic film will increase several times." [17].
Squeezing-Out Dynamics of Layer-by-Layer Thinning Transition in FSSF Accounting for the Meniscus
"In the previous paragraph we described the dynamics of squeezing-out of the number of smectic layers from the N-layer film without accounting for the effect of the meniscus. The evolution of the bounding area, which is separated by the layer-thinning transition front, from the N-layer to (N − 1)-layer smectic film, without accounting for the effect of meniscus, has been studied on the basis of conservation laws for mass and linear momentum [17]. The dynamics of the bounding area between the squeezed-out and non-squeezed-out areas in the FSSF has been investigated for the case of the circular shape of the smectic film. This section will generalize the previous case to the case of accounting for the influence of the meniscus [17,18]. In this approach, the mechanism which is responsible for squeezing-out process, is based on the concept of the the disjoining pressure. In the previous paragraph, it was shown that the disjoining pressure P(N) acting across the N-layer film is smaller than P(N − 1) acting across the N − 1-layer smectic film [12,17]. As a result, it is formed the pressure gradient ∇P which drives the squeezed-out smectic layer in the zone far from the meniscus. We will assume that the influence of the meniscus extends only to the area R − δ ≤ r ≤ R closely adjacent to the interface between the FSSF and meniscus. As a result, the process of evolution of the bounding area will be affected by the additional pressure P 1 caused by the coupling of the smectic film with the meniscus. Here δ is the distance, counted from the smectic film/meniscus edge, where that effect occurs. All this allows us to assume that the conservation laws for mass and linear momentum must be held. Thus, the equation of the balance of linear moments acting on the unit volume of the smectic film, taking into account the influence of the meniscus, can be written as [18] ln y (τ) where τ = t/t N is the dimensionless time, t N is the normalization time, and ∆P will be modeled by the linear function of the radius r (t) as Here y (τ) = A (τ) /A 0 [18] and ∆P = P(N) − P(N − 1) is a disjoining pressure dropping across the front of the moving boundary area during the layer-thinning transition N → N − 1. If in the previous paragraph the value of disjoining pressure dropping across the moving front during the layer-thinning transition N → N − 1 was determined only by the disjoining pressure ∆P = P(N) − P(N − 1), now an additional pressure P 1 acting from the meniscus on the smectic film should be accounted. Thus, the P 1 s effect will be extended to the submicrometer's distance δ, and that effect will be modeled by the linear function ∆P (r) of the distance r. Therefore, at r = R, ∆P is equal to P 1 + ∆P, whereas at r = R − δ, ∆P is equal to ∆P. The justification of the choice of the linear form of the distance dependence of ∆P is dictated by the submicrometer's range of δ.
In the previous paragraph, without taking into account the influence of the meniscus, it was shown that Equation (42) can be simplified: both the second time derivative term and the nonlinear term in Equation (42) [17] can be neglected. Neglecting these two terms in Equation (49) is justified by the fact that the velocity v is small: at the left end of the time interval [0, t_R], y(t) varies very slowly and one can ignore the second-order time derivative of y(t), whereas at the right end of the same interval lim_{t→t_R} y(t) = 1. Here t_R is the time needed to completely squeeze out one smectic layer. It should be noted that the time t_N does not always coincide with the time t_R. Taking into account the above limitations, Equation (49), for the determination of y(τ), reduces to the linear Equation (50) [18], where λ_1 and λ_2 are two parameters of the smectic system expressed through t_R, P_1, ∆P, δ, α_4, and the radius r_c of the critical nucleus. The linear Equation (50), with the initial condition y(τ = 0) = (r_c/R)², has a closed-form solution, and the relationship y(τ) = 1 can then be used to obtain the value of the dimensionless time τ_R(P_1, δ) needed to completely squeeze out one smectic layer accounting for the pressure P_1. In the limiting case P_1 = 0, when the effect of the meniscus on the smectic film is negligible, Equation (50) has the solution given by Equation (53). It should be noted that, when the influence of the meniscus can be ignored, the time calculated using Equation (52) coincides with the value of the time given by Equation (46) [17]. In turn, the velocity v(r, t) of the boundary between the squeezed-out and non-squeezed-out domains of the smectic film can be determined using v(r, t) = (1/(2πr)) dA/dt (Equation (54)). Thus, the velocity v(r, t) is proportional to the first-order time derivative of y(τ) and inversely proportional to the radius r(τ).
Further detailed analysis is given for the evolution of the bounding area from the N-layer to the (N − 1)-layer smectic film during the layer-thinning process, both when accounting for the second time derivative term and the nonlinear term in Equation (49) (case I) and when both of these terms are neglected (case II) [18]. The results of the calculations of the dimensionless squeezed-out area y(τ) (see Equation (49)) and of the velocity v(r, τ) (see Equation (54)) of the bounding area between the squeezed-out and non-squeezed-out domains, accounting for the meniscus effect, are presented below. The effect of the disjoining pressure ∆P = P(N) − P(N − 1), of the additional pressure P_1 caused by the coupling of the smectic film with the meniscus, of the distance δ, and of the radius r_c of the critical nucleus on the nature of the squeezing-out dynamics is investigated for a number of dynamic regimes. Calculations were performed for the following values of ∆P and r_c [18]: ∆P = −0.66 × 10³ N/m², r_c = 3.43 nm for the case 10 → 9; ∆P = −2.64 × 10³ N/m², r_c = 3.38 nm for the case 9 → 8; and ∆P = −3.98 × 10³ N/m², r_c = 3.23 nm for the case 7 → 6, respectively. In all these calculations the value of α_4 is equal to 0.1 Pa s and R = 100 µm.
In case I, the nonlinear ordinary differential Equation (49) has been solved using the fourth-order Runge–Kutta method [67], and the results of the calculation of the reduced area y(τ) = A(τ)/A_0 vs. τ = t/t_R(P_1 = 0), without accounting for the meniscus effect (P_1 = 0), are shown in Figure 28 (curves 1) for several layer-thinning transitions: 10 → 9 (Figure 28a), 9 → 8 (Figure 28b), and 7 → 6 (Figure 28c) [18], respectively. In case II (curves 2), the reduced area y(τ) = A(τ)/A_0, calculated using Equation (53), is also shown in Figure 28 as a function of the reduced time τ = t/t_R(P_1 = 0). Here τ_R(I, II) is the time needed to completely squeeze out one layer from the N-layer smectic film without accounting for the pressure P_1, i.e., the time when y(τ_R(I, II)) is equal to 1. In this case τ_R(I) was obtained by means of the numerical solution of Equation (49), whereas τ_R(II) was calculated using Equation (53) [18]. These calculations showed that the numerical result (case I) for the evolution of y(τ) vs. τ, for the layer-thinning transition 10 → 9, is approximately 20% faster than the analytical result (case II) (see Figure 28a). It should be noted that at the final stage of the process of squeezing-out of the smectic layers, the results obtained numerically and using the analytical expressions approach each other; for instance, in the cases 9 → 8 and 7 → 6 this difference almost disappears (see Figure 28b,c) [18]. Having obtained the evolution of the y(τ) function in the process of thinning of the smectic film, one can calculate the velocity v(τ) vs. τ for two layer-thinning transitions, the 10 → 9 and 7 → 6 regimes. The results of these calculations are shown in Figure 29 [18].
Figure 29. Plot of v(τ) vs. τ = t/t_R(P_1 = 0) [18], for the 10 → 9 (a) and 7 → 6 (b) squeezing-out regimes, respectively. Curves (1) and (2) correspond to cases I and II, respectively.
Curve (1) was calculated analytically using Equations (53) and (52), while curve (2) was calculated numerically, using the data on the function y(τ). Both results indicate that the dimensionless velocity is characterized by a gradual increase of v(τ) with increasing τ. In addition, the comparison indicates that the influence of both the second time derivative term and the nonlinear term in the Navier–Stokes equation (Equation (49)) can be neglected in further calculations.
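For reference, the kind of fourth-order Runge–Kutta integration used for the numerical solution in case I can be sketched as follows. This is only a generic illustration: the right-hand side rhs below is a placeholder linear form with hypothetical constants lam1 and lam2, since the full form of Equations (49) and (50) is not reproduced in this review; in practice the integration would be stopped once y reaches 1 (layer fully squeezed out).

```python
# Minimal sketch: fourth-order Runge-Kutta integration of a first-order ODE
# dy/dtau = f(tau, y), of the kind used to solve the squeezing-out equation numerically.

def rk4_integrate(f, y0, tau_end, n_steps):
    """Integrate dy/dtau = f(tau, y) from tau = 0 to tau_end with RK4."""
    h = tau_end / n_steps
    tau, y = 0.0, y0
    trajectory = [(tau, y)]
    for _ in range(n_steps):
        k1 = f(tau, y)
        k2 = f(tau + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(tau + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(tau + h, y + h * k3)
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        tau += h
        trajectory.append((tau, y))
    return trajectory

# Placeholder linear right-hand side with hypothetical film parameters lam1, lam2;
# it only stands in for the actual Equation (50) of the cited work.
lam1, lam2 = 0.5, 4.0
rhs = lambda tau, y: lam1 + lam2 * y

# Start from the critical-nucleus area fraction y(0) = (r_c / R)^2.
y0 = (3.43e-9 / 100e-6) ** 2
path = rk4_integrate(rhs, y0, tau_end=1.0, n_steps=1000)
print(path[-1])  # (tau, y) at the end of the integration
```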
The results of the calculation of the dynamics of thinning for the 10 → 9 squeezing-out regime, under the action of ∆P and accounting for the coupling of the smectic film with the meniscus, are shown in Figure 30a,b.
Figure 30. Plot of y(τ) = A(τ)/A_0 vs. τ = t/t_R(P_1 = 0) [18], for the 10 → 9 squeezing-out regime in the smectic film in air and for two values of P_1: −0.1∆P (curve 1) and −0.9∆P (curve 2), respectively. (a) δ = 600 r_c; (b) same as in (a), but for δ = 300 r_c.
Figure 30a,b shows the evolution of y(τ) vs. τ = t/t_R(P_1 = 0) for two values of the pressure P_1 acting on the smectic film: first, P_1 = −0.1∆P (curve 1) and, second, P_1 = −0.9∆P (curve 2), respectively. These calculations were performed for two values of the distance δ: first, δ = 600 r_c (∼2 µm) and, second, δ = 300 r_c (∼1 µm). The calculation results also showed that the meniscus has a strong effect on the time t_R(P_1), whereas the distance δ practically does not affect that time. Indeed, for P_1 = −0.9∆P (∼0.6 × 10³ N/m²) the value of the dimensionless time τ_R(P_1 = −0.9∆P) (with τ_R normalized by t_R(P_1 = 0)) is equal to 9.88 (∼4 × 10⁻³ s) (Figure 30a, curve 2), whereas for P_1 = −0.1∆P the value of the dimensionless time τ_R(P_1 = −0.1∆P) is equal to 1.11 (Figure 30a, curve 1), which is almost 9 times smaller. In both of these cases the distance δ is equal to 600 r_c (∼2 µm). When the distance δ decreases by a factor of 2, from δ = 600 r_c to 300 r_c, the time changes from τ_R(P_1 = −0.9∆P, δ = 600 r_c) ≈ 9.88 (Figure 30a, curve 2) to τ_R(P_1 = −0.9∆P, δ = 300 r_c) ≈ 9.76 (Figure 30b, curve 2), which is practically the same. The same tendency holds when the pressure P_1 is reduced by a factor of 9, from P_1 = −0.9∆P to −0.1∆P: in this case, the time τ_R(P_1 = −0.1∆P, δ = 600 r_c) ≈ 1.11 (Figure 30a, curve 1) is the same as τ_R(P_1 = −0.1∆P, δ = 300 r_c) ≈ 1.11 (Figure 30b, curve 1). These calculations showed that the influence of the meniscus is significant only when P_1 → −∆P, and that it needs to be taken into account only over distances δ of a few percent of the value of R. It is clearly seen that the evolution of the A(τ)/A_0 profile is slowed down only at the final stage of the squeezing-out process, when the value of the pressure P_1 approaches −∆P. Taking into account the fact that the value of ∆P = P(N) − P(N − 1) is negative, one can conclude that the positive value P_1 = −∆P completely suppresses the squeezing-out process, whereas negative values of P_1 accelerate the squeezing-out process during the layer-thinning transition.
Therefore, on the basis of this dynamic model of the squeezing-out process, one can conclude that the external pressure caused by the coupling of the smectic film with the meniscus has a strong effect on that process, and this dynamic coupling may significantly change the time needed to completely squeeze out one or several layer(s) from the free-standing smectic film. Having obtained the data on t_R(P_1 ≠ 0), one can calculate the average velocity u = R/t_R(P_1 ≠ 0) (in m/s). For instance, in the case of the 10 → 9 transition and P_1 = −0.9∆P, the time t_R(P_1 = −0.9∆P) is equal to 9.88 × t_R(P_1 = 0). Taking into account that t_R(P_1 = 0) for the case 10 → 9 is equal to 38.9 × 10⁻⁵ s (see Table 1 of Ref. [17]), one can estimate the average velocity u(R, P_1 = −0.9∆P) at the edge of the circular smectic film of radius R = 100 µm as 0.026 m/s, which is in reasonable agreement with the value of the velocity of the thinning front, ∼0.06 m/s, obtained by means of video measurements in an overheated free-standing smectic film in air [15].
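As a quick check of the numbers quoted above, the following lines reproduce the estimate of the average front velocity from the tabulated squeeze-out time; all values are taken directly from the text.

```python
# Average thinning-front velocity for the 10 -> 9 transition with P1 = -0.9*dP.
t_R_no_meniscus = 38.9e-5   # s, squeeze-out time without meniscus pressure (Table 1 of [17])
slowdown = 9.88             # tau_R(P1 = -0.9*dP) / tau_R(P1 = 0), from the calculation above
R = 100e-6                  # m, film radius

t_R = slowdown * t_R_no_meniscus   # squeeze-out time with meniscus coupling
u = R / t_R                        # average front velocity
print(f"t_R = {t_R:.2e} s, u = {u:.3f} m/s")   # ~0.026 m/s
```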
Thus, the results of the calculations performed in the framework of the abovementioned dynamic model, which takes into account the influence of the meniscus on the smectic film, showed that the pressure gradient ∇P which develops between the squeezed-out and non-squeezed-out areas is responsible for the successive removal of one or several layer(s) from the N-layer smectic film during the layer-thinning process. It has been assumed that the squeezing-out is initiated by a thermally activated nucleation process in which a density fluctuation forms a small circular hole (void) of critical radius in the center of the circular smectic film. The origin of ∇P is the disjoining pressure (DP) acting across the N-layer and (N − 1)-layer smectic films, respectively. Taking into account the additional pressure P_1, which is responsible for the coupling of the smectic film with the meniscus, a more realistic description of the thinning process in a free-standing smectic film, when the temperature is slowly increased above θ_AI(bulk), has been proposed. In the framework of this model it was shown that the time t_R(P_1 ≠ 0) needed to completely squeeze out one layer from the N-layer smectic film is inversely proportional to ∆P + P_1. Bearing in mind that the value of ∆P = P(N) − P(N − 1) is negative, one can conclude that the positive value P_1 = −∆P completely suppresses the squeezing-out process, whereas negative values of P_1 accelerate the squeezing-out process during the layer-thinning transition." [18].
It should be noted that there is not yet a clear consensus on the mechanisms by which the layer-thinning occurs. Different mean-field theories have been used to obtain a qualitative description of the layer-thinning transitions [7,8,11,12,14,16,19,63], but the theoretical thinning temperatures were much larger than the experimental values [1]. Common features of all these theories are the existence of enhanced smectic ordering at the free surfaces of the film and the fact that thinning occurs when the smectic ordering in the interior of the film becomes sufficiently weak. Apart from details of the models used, the main differences among the theories lie in the description of the kinetic processes by which layer-thinning occurs, i.e., whether this happens by uniform squeezing-out of the melted interior [17,18] or via spontaneous nucleation of dislocation loops between domains of differing thickness [13,14,66]. Another mean-field theory [16], based on the generalization of the de Gennes model for a "presmectic" fluid confined between two solid walls by including a quadratic term in the surface smectic OP while neglecting the external field term, also provides a simple analytical formula for the variation of T_AI(N) with N. Hence, further study on a wider range of compounds will be required to sort out the correlation between the transition temperatures resulting from the mean-field approaches and the experimental measurements.
Please note that only the first set of mean-field approaches [7,8,11,12,19–21] provides an opportunity to calculate the disjoining pressure responsible for setting up the pressure gradient which drives the squeezed-out smectic layer.
Conclusions
In this review, some recent progress made in the area of predicting the structural and dynamic behavior associated with thin smectic films, either deposited on a solid surface or stretched over an opening, when the temperature is slowly increased above the bulk transition temperature towards either the nematic or isotropic phase, has been discussed. The theoretical treatments of both the dynamic and static processes of flexible molecules in thin smectic films require a certain number of simplifying assumptions, which may only be justified by comparison between model predictions and experimental results. For instance, according to the set of mean-field theories followed here, thinning takes place when the smectic layer structure throughout the middle of the film vanishes. In an alternative theory, supported by experimental study, layer-thinning occurs in compounds which undergo first-order SmA-I transitions by spontaneous nucleation of dislocation loops, the growth of which causes a film to thin. A model of this thinning, predicting a layer-thinning transition temperature T_AI(N) dependence, is functionally different from the power-law relation first described in [1] but fits experimental data closely. Another mean-field theory, based on the generalization of the de Gennes model for a "presmectic" fluid confined between two solid walls by including a quadratic term in the surface smectic OP while neglecting the external field term, also provides a simple analytical formula for the variation of T_AI(N) with N which fits experimental data very closely. Hence, further study on a wider range of compounds will be required to sort out the correlation between the transition temperatures resulting from the mean-field approaches and the experimental measurements.
Thus, the combination of experimental techniques, such as optical and calorimetric measurements, and theoretical approaches, based on the extended McMillan theory, provides a powerful tool for investigating both the structural and dynamic properties of real smectic films, deposited on a solid surface or stretched over an opening.
| 25,862.2 | 2020-04-20T00:00:00.000 | ["Physics"] |
A Method of Reducing Flight Delay by Exploring Internal Mechanism of Flight Delays
This paper explores the internal mechanism of flight departure delay for Delta Air Lines (IATA code: DL) from the viewpoint of statistical law. We roughly divide all delay factors into two sorts: the propagation factor (PF) and nonpropagation factors (NPF). From the statistical results, we find that the distribution of flight departure delay caused only by NPF exhibits an obvious power-law (PL) feature, which can be explained by a queuing model, while the original distribution of flight departure delay follows a shifted power law (SPL). The mechanism behind the SPL distribution of flight departure delay is considered to be the result of aircraft queuing for take-off due to airport congestion together with the propagation delay caused by late-arriving aircraft. Based on this mechanism, we develop a specific measure for formulating flight planning from the perspective of mathematical statistics, which is easy to implement and reduces flight delays without increasing operational costs. We analyze the punctuality performance of 10 of the busiest and highest-delay-ratio airports among the 155 airports where DL took off and landed in the second half of 2017. Then, the scheduled turnaround time for all flights and the average scheduled turnaround time for all aircraft operated by DL are counted. Finally, the effectiveness and practicability of our method are verified with the flight operation data of the first half of 2018.
Introduction
Flight delay is one of the major issues in aviation systems all over the world. Such delay events downgrade the functioning of airlines and cause tremendous losses in terms of human life, the economy, and traffic [1,2]. To alleviate the harm of flight delay, considerable work has been done [3–9]. Actually, the air transportation system is a rather complex system, which has traditionally been described as a graph with vertices representing airports and edges representing direct flights during a fixed time period [10]. These graphs are called aviation networks. Recently, much research has been carried out from the viewpoint of complex networks [11–13], covering almost all kinds of aviation network features.
Many networks in nature display rather complex structures that often seem random and unpredictable. Barabási and Albert discovered that many real networks [14,15] exhibit the scale-free feature, in which the vertex connectivity follows a PL distribution. The fundamental mechanisms leading to the PL distribution are considered to be growth and preferential attachment [16,17]. On the other hand, in Ref. [18], the author proposed an SPL model with a parameter which controls the relative weights of the power-law and exponential behaviors. Empirical investigations of many real-world networks [19–21] also show SPL distributions. These works provide effective theoretical support for exploring the internal mechanism of flight delay and proposing effective measures to alleviate its harm.
There are many factors that cause flight delay; the Bureau of Transportation Statistics (BTS) classifies them into five categories [22]: (1) aircraft arriving late, (2) national aviation system (NAS) delay, (3) air carrier delay as a result of crew, baggage loading, or maintenance problems, (4) extreme weather conditions such as hurricanes or blizzards, and (5) security-related delays. If one flight is delayed, then a subsequent flight might also be delayed because it is awaiting that inbound aircraft. This kind of delay is called propagation delay [3,5,22–25], and it is quite substantial (more than one-third of the delays) [3]. On the other hand, since the schedule of one aircraft is quite tight, the en-route absorption of the departure delay of the last flight is very limited, and the delay of subsequent flights is relatively predictable, while the delay caused by NPFs is hard to predict. Thus, quantitative research on propagation delay is of great significance, as it helps to come up with solutions.
In order to alleviate propagation delay, researchers have proposed modifying scheduled departure times so as to re-allocate the existing slack in the flight schedule [3,6,26–28]. These studies share similar research methods: they allow the scheduled departure time to vary within a time window, then establish an objective function with several constraints, and finally obtain the optimal solution. They focus on the impact of schedule modification on system performance to maximize the utilization of aviation resources. We are more concerned with how to reduce the flight delay ratio and hope to propose a concrete, practical method. In the following, we propose a specific implementation method, not an objective function, although we use the same idea as the previous studies, that is, modifying the scheduled departure time. We take advantage of the predictability of propagation delay and assume that there is no newly formed delay (delay caused by NPFs) after changing the plan; the effectiveness and practicability of our method are verified with the flight operation data of the first half of 2018. The structure of this paper is organized as follows: Section 2 presents a statistical law for the airline DL and explores the internal mechanism of flight delay. Section 3 contains the analysis of the operation performance of different airports, the statistical results for the scheduled turnaround time of all flights and the average scheduled turnaround time of every aircraft, and the specific method that is put forward. Section 4 presents and discusses the empirical results. In Section 5, conclusions and some hints for future research are given.
Statistical Law and Internal Mechanism
We collected primary records of flight operations from July 1, 2017 to December 31, 2017 for Delta Air Lines. The flight operation data were downloaded from the website of the Bureau of Transportation Statistics (BTS) [29]. Our analysis focuses on the departure delay rather than the arrival delay, because the arrival delay is approximately linearly related to the departure delay [30]. In general, the departure delay is commonly measured as the difference between the scheduled and the actual flight departure time. The Federal Aviation Administration (FAA) defines a flight departure delay as a flight departing at least 15 minutes behind schedule. The detailed information on the primary data is listed in Table 1.
In order to vividly describe the flight delay, we plot in Figure 1 the probability distribution function (PDF) of the departure delay, with the statistical interval (bin width) of the PDF equal to 15 minutes. Clearly, the departure delay distribution shows an attenuation trend which is faster than linear attenuation in the double logarithmic chart. Therefore, we consider that the departure delay distribution is well approximated by an SPL. As shown in Figure 1, the SPL fitting function of Formula (1) describes the empirical data very well: the statistical data are shown as black filled circles, while the red fitting line shows the fitting result of Formula (1), in which the three fitted constants are approximately 132.43, 25.83, and 2.74. These constants are estimated by least-squares fitting, and the goodness of fit is about R² ≈ 0.999.
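For illustration, a least-squares fit of this kind can be set up as follows. Since Formula (1) is not reproduced here, the shifted-power-law form P(l) = a(l + b)^(−c) and the binned data below are assumptions used only to make the sketch self-contained.

```python
# Minimal sketch of fitting a shifted power law (SPL) to a binned delay PDF.
# The functional form and the synthetic data are assumptions; a, b, c only stand
# in for the paper's three fitted constants.
import numpy as np
from scipy.optimize import curve_fit

def spl(l, a, b, c):
    return a * (l + b) ** (-c)

# Hypothetical binned data: bin centres (minutes) and empirical probabilities.
rng = np.random.default_rng(0)
bin_centres = np.arange(15, 600, 15, dtype=float)
pdf = spl(bin_centres, 132.43, 25.83, 2.74) * (1 + 0.05 * rng.standard_normal(bin_centres.size))

params, _ = curve_fit(spl, bin_centres, pdf, p0=(100.0, 20.0, 2.0))
residuals = pdf - spl(bin_centres, *params)
r_squared = 1 - np.sum(residuals**2) / np.sum((pdf - pdf.mean())**2)
print(params, r_squared)
```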
To explore the internal mechanism of flight departure delay, we first investigate the factors causing flight delays. As shown before, the delay factors include five categories; we consider that these five kinds of factors can be roughly divided into two sorts: the propagation factor (PF), i.e., category (1), aircraft arriving late, and the nonpropagation factors (NPF), which include the other four. Flight delays caused by NPFs are more accidental, while delay propagation has a more direct relevance. Delay propagation occurs when late arrivals at an airport cause late departures, which in turn cause late arrivals at the destination airports. In general, the air traffic controller will set an appropriate turnaround buffer time to prevent propagation delay when formulating flight planning [7], although this method reduces revenue-making flight time and incurs schedule time costs. From the following statistical results, we find that the current measure of setting buffer time does not play a prominent role.
Actually, a key challenge in exploring the internal mechanism of flight delay is extracting effective information from the raw data, because the existing data do not provide direct information to distinguish between the different types of delay factors [23]. The other reason is that a flight delay may not merely be attributed to a late arrival of the flight immediately preceding it, but may also be attributed to one or more other factors (NPFs). In order to quantitatively study the propagation delay and simplify the cause-explanation of late arrival in the present work, we consider that a delayed flight for which the time between the last actual arrival and the current scheduled departure is less than a given threshold is attributed to PF. We know that the scheduled turnaround time consists of two portions, namely the schedule buffer time and the standard ground service time [31]. For different types of aircraft, the required standard ground service time is about 30-50 minutes (generally speaking, the larger the passenger capacity of the aircraft, the longer the necessary ground service time). That means that if the time between the last actual arrival and the current scheduled departure is less than 30-50 minutes, the delay can be attributed to propagation delay.
To explore the impact of PF on the statistical law of the departure delay, we remove the departure delays caused by PF from the raw data. Since the data we collected do not contain information about the passenger capacity of the different aircraft, we plot in Figure 2 the departure delay distribution (with the delayed flights caused by PF removed) obtained by setting the threshold to 30, 40, and 50 minutes for all aircraft.
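The separation of PF and NPF delays described above could be implemented along the following lines. This is only a sketch: the column names (tail_number, sched_dep, actual_arr, dep_delay_min) are assumptions, not the BTS field names, and the datetime columns are assumed to be parsed already.

```python
# Minimal sketch of separating delayed flights into propagation-factor (PF) and
# nonpropagation-factor (NPF) cases: a delayed flight whose gap between the previous
# actual arrival and the current scheduled departure is below the threshold is
# attributed to PF and removed, leaving the NPF-only delays.
import pandas as pd

def remove_pf_delays(flights: pd.DataFrame, threshold_min: float = 40.0) -> pd.DataFrame:
    df = flights.sort_values(["tail_number", "sched_dep"]).copy()
    # Previous actual arrival of the same aircraft.
    df["prev_actual_arr"] = df.groupby("tail_number")["actual_arr"].shift(1)
    gap = (df["sched_dep"] - df["prev_actual_arr"]).dt.total_seconds() / 60.0
    delayed = df["dep_delay_min"] >= 15            # FAA definition of a delayed flight
    pf = delayed & (gap < threshold_min)           # delay attributed to propagation
    return df[~pf]                                 # keep on-time flights and NPF-only delays
```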
It exhibits a PL distribution instead of an SPL distribution, given by Formula (2), where the prefactor is a constant and the exponent (scaling parameter) is a constant parameter of the distribution. We obtain the values of these two constants by least-squares fitting (after taking the log of both sides, the exponent becomes the slope of the line). As shown in Figure 2, the main part of the distributions fits well with the fitting function of Formula (2), while the tail of the distributions (larger delays) does not appear to be captured by it. However, the goodness of fit R² for all distributions with different thresholds is bigger than 0.99. From the data, we find that the number of delayed flights with delay larger than 500 minutes is about 400-500, accounting for only 0.085-0.106% of the total number of flights. The fact that the scaling spans close to two orders of magnitude, from minutes to hours, indicates that most flight delays (70.51% for DL) are shorter than one hour. With increasing threshold, the value of the distribution function becomes smaller. Obviously, the longer the necessary ground service time, the greater the number of delayed flights attributed to PF and the fewer the number of delayed flights attributed to NPFs. On the other hand, the smaller the threshold value, the better the fit of the curve using the PL function.
In the statistical process, we used different thresholds to obtain PL distributions, which shows that the distribution of flight delays caused by NPFs does exhibit the characteristics of a PL distribution. To understand the origin of this observed PL distribution, we have to recognize the airport runway restrictions and the take-off queue size as significant causal factors affecting the actual departure time [32]. When one flight is delayed by NPFs, such as extreme weather, the flights behind it at the same airport usually delay too. When the emergency returns to normal, the take-off of the waiting aircraft is a queuing process. Therefore, the distribution characteristic shown in Figure 2 can be regarded as the consequence of a decision-based queuing process [17,33,34]: when some perceived priority is executed, the waiting times of the planes queuing for take-off show a PL characteristic, with most flights taking off rapidly, whereas a few experience very long waiting times. Therefore, the mechanism behind the SPL distribution of flight departure delay is considered to be the result of aircraft queuing for take-off due to airport congestion together with the propagation delay caused by late-arriving aircraft.
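To make the queuing argument concrete, the toy simulation below runs a generic highest-priority-first queue, in the spirit of the decision-based queuing models cited above (it is not a reproduction of the specific model of Refs. [17,33,34]); most tasks are served almost immediately while a few wait very long, so the waiting times are heavy-tailed.

```python
# Toy decision-based queue: each step one task arrives with a random priority and the
# highest-priority waiting task is executed; the waiting-time histogram is heavy-tailed.
import random
from collections import Counter

random.seed(0)
queue = [(random.random(), 0)]                 # (priority, arrival_step)
waits = []
for step in range(1, 200_000):
    queue.append((random.random(), step))      # one new task arrives each step
    i = max(range(len(queue)), key=lambda k: queue[k][0])   # pick highest priority
    _, arrived = queue.pop(i)
    waits.append(step - arrived)

counts = Counter(waits)
for w in sorted(counts)[:10]:                  # short waits dominate; long waits are rare
    print(w, counts[w])
```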
Method
According to the previous mechanism of flight delay, we can deal with flight delay from two aspects, namely airport congestion and propagation delay. The most effective way to reduce queuing time is to build multiple airport runways; however, this requires a huge investment. From the perspective of statistics, a new method is developed to improve flight on-time performance. This method consists of two stages: (1) data statistics and summarization; (2) implementation steps.
Data Statistics and Summarization.
Due to airport congestion, delays originating from these airports spread to downstream flights, so the operation performance of airports plays a vital role in the punctuality ratio of airlines. The data that we collected contain not only the departure and arrival times, but also the carrier, the tail number of the aircraft, and the airports of departure and arrival. Next, we assess the operation performance of each airport and compute the scheduled turnaround time for all flights and the average scheduled turnaround time for every aircraft.
This measure is associated with airport operational efficiency and is used to improve the planning of flight connectivity and the robustness of the flight plan. In our method, we modify the existing flight schedule and redistribute part of the schedule buffer time in the flight schedule without changing the total slack time of the day or the total daily number of flights.
In order to properly reset the slack, we count the scheduled turnaround time of all flights and the average scheduled turnaround time of all aircraft operated by DL in the second half of 2017. Since there are typically no flights between 0 and 6 o'clock, we do not take this longer overnight gap into account when calculating the scheduled turnaround time. On the other hand, the records available in BTS are not always complete for all aircraft.
To promote the quality of the statistics, we take 100 flights within 6 months as the filtering threshold, which means that aircraft with fewer than 100 take-off records are not counted in our statistics in the present work. After filtering, a total of 728 aircraft are counted, and the total number of turnarounds for these aircraft is 347,073. The scheduling of aircraft turnarounds is a consequence of both the operational policies and the scheduling strategies of an airline. For different airlines, the average scheduled turnaround time is quite different: Southwest Airlines in the USA shows a low average aircraft turn time of 17 minutes and United Airlines an average turn time of 50 minutes [36]. From Ref. [36], we know that Delta Air Lines shows an average turnaround time of 46.7 minutes, where the database includes information from September 1987 to May 1994. According to our statistics, the average scheduled turnaround time of all flights is about 75.3 minutes and the standard deviation is about 92.9 minutes. This shows that the scheduled turnaround time of flights has increased greatly nowadays, which is particularly advantageous for our method of redistributing part of the schedule buffer time. The number distribution of the scheduled turnaround time is shown in Figure 4(a); almost all flights' scheduled turnaround times are longer than 30 minutes, so we set the minimum necessary turnaround time to 30 minutes in our method.
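The following sketch shows how such turnaround statistics could be computed from BTS-style flight records with pandas; the column names (tail_number, sched_arr, sched_dep) are assumptions, not the BTS field names, and the datetime columns are assumed to be parsed already.

```python
# Minimal sketch of the turnaround-time statistics: scheduled turnaround = next scheduled
# departure minus current scheduled arrival for the same tail number, ignoring overnight
# gaps, keeping only aircraft with at least 100 recorded turnarounds.
import pandas as pd

def turnaround_stats(flights: pd.DataFrame, min_flights: int = 100) -> pd.Series:
    df = flights.sort_values(["tail_number", "sched_arr"]).copy()
    df["next_sched_dep"] = df.groupby("tail_number")["sched_dep"].shift(-1)
    turn = (df["next_sched_dep"] - df["sched_arr"]).dt.total_seconds() / 60.0
    same_day = df["next_sched_dep"].dt.date == df["sched_arr"].dt.date   # drop overnight turns
    df["turnaround_min"] = turn.where(same_day)
    counts = df.groupby("tail_number")["turnaround_min"].count()
    keep = counts[counts >= min_flights].index
    kept = df[df["tail_number"].isin(keep)]
    return kept.groupby("tail_number")["turnaround_min"].mean()   # average per aircraft
```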
While recent studies on air traffic delays focus primarily on the operation performance of different airlines [22,35], we are interested in the operation performance of different airports. As we know, airports are distributed in different locations, and the punctuality ratios of different airports are very different due to weather conditions and other regional factors. From our statistical results, we find that there are 44 airports with more than 2,000 take-off flights in the second half of 2017, and the 10 of these 44 airports with the highest delay ratios are reported in Table 2. We can see that the airport SEA has more delayed flights than BOS, but the total delay is smaller. That means that the flight delays at airport BOS are mostly larger than at SEA, so a delay at airport BOS will have a greater impact on subsequent flights.
Initial delays affect the downstream flights, but small delays do not have much impact due to the scheduled turnaround buffer time. The study of the delay distributions of the various airports is therefore necessary, not only of the delay ratio. In Figure 3, we compare the flight departure delay distributions of the 10 airports. From Table 2, we know that the airports JFK, LGA, LAX, and SEA concentrate a large part of Delta Air Lines' flights, but the characteristics of their delay distributions are not very different from each other. The shape of the delay distribution of the different airports is similar, and a small difference can only be observed when one focuses on the EWR airport: the EWR airport shows a bias toward larger delays and may have a greater impact on subsequent flights than the other airports.
Figure 3. Log-log plots of the departure delay distributions for the 10 busy airports with the highest delay ratio.
Insufficient scheduled turnaround time is another important factor causing propagation delay. The scheduled turnaround time stands for the time spent by an aircraft on the ground from scheduled arrival to scheduled departure from the gate, which is used by the aircraft to absorb the last flight's delay and to complete full off-loading and loading, maintenance of the aircraft and, where required, catering and cabin cleaning procedures.
In Figure 4(b), we can see that almost all aircraft's average scheduled turnaround times are about 50-140 minutes. If we set the necessary turnaround time too large, then the change to the flight plan is small, and the effect of restraining delay propagation will not be obvious.
Implementation Steps.
The overall approach is based on the flight delay mechanism whereby newly formed delays usually occur at busy airports due to airport/airspace capacity constraints and spread to downstream flights operated by the same aircraft. From our data, it is possible to trace the propagation of delay from airport to airport: if a particular aircraft is scheduled to fly from airport A to airport B and then to airport C and departs from A with a long delay, part or all of that delay will be propagated downstream and result in a departure delay at B and, possibly, subsequently at C. In this section, we develop a new method for formulating flight planning by using the previous statistical results.
Since the newly formed delay is hard to predict when we formulate the flight planning, we simply assume that flights departing from the 10 highest-delay-ratio airports mentioned above will experience this kind of delay. Actually, we cannot reduce the newly formed delay by optimizing flight plans, but we can mitigate the propagation effects of the last flight's delay by postponing the scheduled departure times of subsequent flights. On the other hand, we have to keep the scheduled departure time of the next flight unchanged and reserve enough turnaround time (greater than the necessary turnaround time) for the next flight. This means that we can delay the scheduled departure time of the current flight, and the maximum amount of delay is equal to the schedule buffer time between the current and the next flight operated by the same aircraft. According to our statistical results, the scheduled turnaround time varies greatly between different flights and different aircraft, but the required standard ground service time is about 30-50 minutes, so we set the necessary turnaround time to 30, 40, and 50 minutes as mentioned earlier. The specific measure is illustrated in Figure 5, which marks the scheduled departure and arrival times, the actual departure and arrival times, and the scheduled buffer time, standard ground service time, and scheduled turnaround time of consecutive flights operated by the same aircraft. Consider an aircraft that flies from airport 1 to airport 4: if airport 1 belongs to one of the 10 busiest airports in the previous statistics, then we delay the scheduled departure time of flight 2, and the amount of delay is equal to the scheduled turnaround time between flight 2 and flight 3 minus the necessary turnaround time. All in all, if the time interval between the actual arrival time of flight 1 and the scheduled departure time of flight 2 is larger than the required ground service time, flight 2 will take off on time.
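A sketch of how this adjustment could be applied to a table of flight records is given below. The airport list and the "scheduled turnaround minus necessary turnaround time" rule follow the text, but the column names (tail_number, origin, sched_dep, sched_arr) are assumptions and the datetime columns are assumed to be parsed already.

```python
# Minimal sketch of the proposed adjustment: if a flight's previous flight (same aircraft)
# departed from one of the 10 high-delay-ratio airports, postpone its scheduled departure
# by the buffer available before the following flight.
import pandas as pd

HIGH_DELAY_AIRPORTS = {"SFO", "EWR", "JFK", "LGA", "MIA", "PBI", "ORD", "BOS", "LAX", "SEA"}

def adjust_schedule(flights: pd.DataFrame, necessary_turn_min: float = 30.0) -> pd.DataFrame:
    df = flights.sort_values(["tail_number", "sched_dep"]).copy()
    grp = df.groupby("tail_number")
    prev_origin = grp["origin"].shift(1)                        # airport of the previous flight
    next_sched_dep = grp["sched_dep"].shift(-1)                 # departure of the following flight
    turn_to_next = (next_sched_dep - df["sched_arr"]).dt.total_seconds() / 60.0
    slack = (turn_to_next - necessary_turn_min).clip(lower=0)   # buffer that can be redistributed
    eligible = prev_origin.isin(HIGH_DELAY_AIRPORTS) & next_sched_dep.notna()
    df["adjusted_sched_dep"] = df["sched_dep"]
    df.loc[eligible, "adjusted_sched_dep"] = df.loc[eligible, "sched_dep"] + pd.to_timedelta(
        slack[eligible], unit="m")
    return df
```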
Empirical Results
In order to verify the effectiveness and practicability of our method, we collected an additional six months of flight operation data from the first half of 2018. We use the method of this article to adjust the flight planning and compare the number of delayed flights before and after adjustment for the first six months of 2018. From the previous statistical results, we know that the 10 busiest and highest-delay-ratio airports are SFO, EWR, JFK, LGA, MIA, PBI, ORD, BOS, LAX, and SEA. We assume that if one flight departs from one of these 10 airports, it will generate a newly formed delay and cause the next flight operated by the same aircraft to be delayed as well. Strictly speaking, however, the latter flight's delay may not merely be attributed to the late arrival of the flight immediately preceding it, but may also be attributed to one or more other factors. In other words, the actual departure delay is sometimes hard to predict when we change the flight plan with our method, while the delay caused only by PF is not. Therefore, to simplify the prediction of current flight delays in the present work, we do not take into account the newly formed delay when the last flight by the same aircraft departed from one of the 10 highest-delay-ratio airports. The six-month data set comprises 463,322 flight operation records, with a total of 84,828 flights departing from the 10 highest-delay-ratio airports. Actually, since there are typically no flights between 0 and 6 o'clock, the delay of the last flight of each day does not propagate to the first flight of the next day. Therefore, without considering the delay propagation of the last flight of each day, we only adjust the scheduled departure times of 72,902 flights instead of 84,828 flights. Comparing the results before and after adjustment, we find that the departure delay ratio drops from 13.91% to 12.06%, 12.25%, and 12.39% with the necessary turnaround time equal to 30 minutes, 40 minutes, and 50 minutes, respectively. The change in the number of delayed flights in each delay interval is presented in Figure 6.
Obviously, we can see that the number of delayed flights in almost all delay intervals has decreased, and the smaller the necessary turnaround time, the more the delay and the delay ratio are reduced. However, we cannot set the necessary turnaround time too small in our method, because large aircraft require a relatively long turnaround time, and too small a value does not correspond to actual operations. The other reason is that the operation of flights is full of uncertain factors; the slack time is reserved to help deal with unexpected situations and improve the robustness of the flight plan. On the other hand, our method is quite effective in the case of short delays, but not in the case of long delays. This is due to the limited slack time reserved by the airline in formulating the flight plan. Many delayed flights with small delays are able to take off on time after our measure, but flights with larger delays have only slightly reduced delays.
Conclusion
By data mining and statistical analysis, we study the distribution characteristics and inherent mechanism of flight departure delay for DL. From the statistical results, we find that the distribution of flight departure delay follows an SPL, and when we eliminate the effects of PF, the distribution of departure delay exhibits an obvious PL feature instead of an SPL. The queue model which executes the highest-priority item on its list helps to understand the mechanism of the PL feature. We consider that the mechanism behind the SPL distribution of flight departure delay is the result of aircraft queuing for take-off due to airport congestion and of the propagation delay caused by late-arriving aircraft.
Based on the above mechanism, we develop a specific measure to mitigate propagation delay without increasing operational costs. Specifically, if one aircraft takes off from an airport with a higher delay ratio, we delay the scheduled departure time of the next flight operated by the same aircraft by an amount equal to the schedule buffer time between the next flight and the subsequent flight. It is proved that our approach is quite effective in reducing flight delay, although the effect is not significant for flights with larger delays.
In addition, our approach is based on the predictability of propagation delays and mathematical statistics, which provides a new way to optimize flight schedules. Although this is by no means intended as an exhaustive study, it nonetheless provides a starting point to motivate future research, namely more accurate forecasting of the newly formed delays and finding the optimal amount of slack to be redistributed.
Data Availability
The data used to support the findings of this study can be found on the website of the Bureau of Transportation Statistics (BTS) at http://www.bts.gov.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
| 5,987.8 | 2019-12-30T00:00:00.000 | ["Computer Science"] |
VEHICLE TRAJECTORY BASED CONTROL DELAY ESTIMATION AT INTERSECTIONS USING LOW-FREQUENCY FLOATING CAR SAMPLING DATA
Control delay is an important parameter used in the optimization of traffic signal timings and in the estimation of the level of service at signalized intersections. However, it is also a parameter that is very difficult to estimate. In recent years, floating car data have emerged as an important data source for traffic state monitoring as a result of their high accuracy, wide coverage, and availability regardless of meteorological conditions, but they have so far contributed little to control delay estimation. This article proposes a vehicle trajectory based control delay estimation method using low-frequency floating car data. Considering the sparseness and randomness of low-frequency floating car data, we use historical data to capture the deceleration and acceleration patterns. Combined with the low-frequency samples, the spatial and temporal ranges where a vehicle starts to decelerate and stops accelerating are calculated. These are used, together with the control delay probability distribution function obtained from the geometric probability model, to calculate the expected value of the control delay for each vehicle. The proposed method and a reference method are compared with the ground truth. The results show that the proposed method has a root mean square error of 11.8 s compared to 13.7 s for the reference method in the peak period; the corresponding values for the off-peak period are 9.3 s and 12.5 s. In addition to better accuracy, the mean and standard deviation statistics show that the proposed method outperforms the reference method and is, therefore, more reliable. This successful estimation of control delay from sparse data paves the way for a more widespread use of floating car data for monitoring the state of intersections in road networks.
Introduction
Traffic control delay (the difference between the actual travel time influenced by traffic signals and the reference travel time under free-flow conditions) is an important performance indicator for evaluating signal control systems and the Level Of Service (LOS) of traffic operations at intersections. However, in the current traffic data detection infrastructure, control delay is not directly measurable. A variety of theoretical models have been developed to estimate the control delay of signalized intersections. Cheng et al. (2016) reviewed the estimation models and classified their development into three stages. Stage 1 covers the 1920s-1970s, and the approaches proposed in this stage largely considered random arrivals; these models failed to provide accurate results under a high saturation degree. To improve the accuracy of delay estimation at high saturation levels, the coordinate transformation technique and time-dependent models were derived, and progression factors accounting for the filtering impact of upstream intersections were introduced, from the 1970s to the 2000s (Stage 2). Due to inaccurate approximation of specific traffic conditions, some modified approaches and supplementary terms were derived from 2000 onwards (Stage 3). The drawback of the theoretical delay estimation models is that, while they all achieve satisfactory accuracy when traffic is undersaturated, their performance declines to varying degrees under a high saturation degree. Although some modified models can give acceptable estimation results by introducing additional factors, the models become more complicated and more parameters need to be calibrated. Besides, these models need signal timing information and traffic volumes collected by fixed sensors such as loop detectors as input. Therefore, the theoretical models can only provide control delay estimates for intersections equipped with fixed sensors. In recent years, probe vehicle technologies able to register vehicle trajectories have created an opportunity to address the limitations of the current systems in estimating traffic control delay. In theory, probe vehicle or floating car data have the potential to provide high-accuracy vehicle position, time, and their derivatives over a wide spatial-temporal coverage. Although probe vehicle data are spatially and temporally sparse due to the limitations of storage and transmission, they have been widely used for the estimation of various traffic parameters (Comert, Cetin 2009; Rahmani et al. 2015; Shi et al. 2017). However, up to now there has not been much research focusing on delay estimation based on sparse probe data; the aim of this article is to contribute to the estimation of delays at signalized intersections, making use of low-frequency trajectory data.
As early as 1991, researchers explored the plausibility of using floating car data to estimate control delay at intersections. Quiroga and Bullock (1999) proposed a forward-and-backward-acceleration method for detecting critical delay points and then estimating control delays. Colyar and Rouphail (2003) improved the prediction accuracy of the Quiroga and Bullock (1999) method by accounting for the influence of traffic conditions. Ko et al. (2008) estimated delay components based on speed profiles. Čelar et al. (2018) developed an algorithm based on average acceleration and deceleration rates and phase duration; the method aims to eliminate the delay that is not affected by traffic signals. Li et al. (2018) developed a virtual detection box methodology to generate control delay measures with high-fidelity commercial probe vehicle trajectory data; the method does not encounter privacy issues, because no actual trajectory data are transferred to the computer. However, these methods assume that the sampling frequency of the Global Positioning System (GPS) data is 1 Hz, which is not always available in reality.
Applying these methods to low-frequency GPS data results in low-accuracy delay estimation. Liu et al. (2006) attempted to assess the sensitivity of delay to the sampling frequency. The results show that delays measured from data at a sampling interval of 10 s are consistent with the values from an interval of 5 s in 74% of the cases. However, when the sampling interval is 60 s, the level of consistency drops to 37%. So the methods above are not suitable for low-frequency data.
To accurately obtain control delay from low-frequency floating car data, the main challenge is how to detect where and when a vehicle starts to decelerate and stops accelerating. He and Ye (2014) proposed a method which delimits the affected area of the intersection on the basis of the queue length and calculates the times when a vehicle enters and leaves the affected area from low-frequency sample points. Although this method is simple and has a high computational efficiency, the affected area of the intersection is assumed to be fixed, whereas it is actually related to the queue length and varies between cycles. Wang et al. (2016) developed a piecewise model representing vehicle motion as it passes an intersection; an optimization method is used to determine the locations and times of the initiation of deceleration and the stoppage of acceleration. Their model assumes that a vehicle travels at free-flow speed before deceleration; however, when traffic is congested this assumption may not hold. Some researchers have proposed methods for reconstructing the trajectory from low-frequency floating car data, from which the critical points can be inferred. Hao et al. (2014) proposed a model investigating all possible driving mode sequences between two consecutive GPS updates; with the likelihood quantified using an a priori distribution, a detailed trajectory is reconstructed and used to calculate delay. In principle, this should work well even for floating car data whose sampling interval is 60 s; however, the distribution of each scenario's likelihood is difficult to obtain a priori. Wan et al. (2016) proposed an Expectation Maximization (EM) algorithm to reconstruct the maximum likelihood trajectory; however, the method has low computational efficiency and poor real-time performance. These methods also need signal timing information, which is not always available.
Several somewhat different approaches have been developed by researchers. Liu et al. (2013) addressed the sparseness problem by introducing the Principal Curve method into the calculation of turn delay; in their study, the sampling interval of the floating car data is 10 s. Ban et al. (2009) employed piecewise linear interpolation to obtain traffic delays; however, traffic delay is different from control delay. Neumann et al. (2010) computed turn-dependent delay times by introducing a simple linear model, which arises from the superposition of two types of turn-dependent delays and the free-flow travel time; free-flow speed and delay are estimated as model parameters. However, because of a lack of reference values, the results were not verified. Turn delay and traffic delay are different from control delay, so if these methods are used to estimate control delay, their performance is uncertain.
The limitations of the methods above can be classified into three categories: (1) they do not account for the randomness of low-frequency sampling due to the dynamic nature of traffic flow and hence do not provide the reliability of the results; (2) the data some models use are not always available, such as high-frequency probe vehicle data and signal timing information; (3) some models are designed to estimate turn delay or traffic delay, which may not be applicable to estimating control delay. This article aims to contribute to the delay estimation of signalized intersections with common low-frequency floating car data. It addresses data sparseness by introducing the Principal Curve method and by using the expected value instead of a single point estimate for high-accuracy vehicle control delay estimation. Using historical data, the deceleration and acceleration patterns of vehicles passing through an intersection are constructed with the Principal Curve method and combined with the low-frequency data to compute the spatial and temporal ranges of the deceleration onset points and acceleration end points. Using this information together with the control delay probability distribution function obtained from the geometric probability model, the expected value of the control delay is calculated.
The main contributions of this article are as follows: 1) the proposed method tackles the control delay estimation problem for signalized intersections without fixed sensors; 2) historical data are used to capture vehicle motion/dynamics at signalized intersections, reducing the sensitivity to the randomness of sampling, and no additional assumption is introduced; 3) the method developed calculates the expected value of the vehicle control delay, resulting in an improvement of both accuracy and stability.
Preliminaries
As illustrated in Figure 1, control delay comprises three parts: (1) deceleration delay d_b; (2) stop delay d_s; (3) acceleration delay d_a. T_d is the time when the vehicle begins to decelerate; T_s1 is the time when the vehicle stops decelerating; T_s2 is the time when the vehicle starts to accelerate; T_a is the time when the vehicle stops accelerating; L_d is the location where the vehicle's deceleration process starts; L_s is the location where the vehicle stops; L_a is the location where the vehicle's acceleration process ends; v_f is the free-flow speed; d_b is the delay caused by the vehicle decelerating from T_d to T_s1; d_s is the stop delay while the vehicle is stationary; d_a is the acceleration delay caused by the vehicle accelerating from T_s2 to T_a; d is the control delay the vehicle experiences through the intersection and is the sum of the deceleration, stop, and acceleration delays.
Each delay component is calculated from these quantities (a sketch of the standard expressions is given below). From these expressions, the time and location at which a vehicle starts to decelerate and stops accelerating are the most important quantities for control delay estimation (the stoppage period does not need to be identified separately), and these two points are therefore referred to as critical points. From low-frequency floating car data it is not always possible to obtain a complete picture of the vehicle's passage through the intersection; hence, the objective is to detect the critical points from the sparse floating car data.
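The explicit expressions were not preserved in this copy. Under the usual definition of control delay as time lost relative to free-flow travel over the same distance, a plausible reconstruction is given below; the symbols Δs_dec and Δs_acc (distances covered while decelerating and accelerating) are introduced only for this sketch and may differ from the authors' notation.

```latex
d_b = (T_{s1} - T_d) - \frac{\Delta s_{\mathrm{dec}}}{v_f}, \qquad
d_s = T_{s2} - T_{s1}, \qquad
d_a = (T_a - T_{s2}) - \frac{\Delta s_{\mathrm{acc}}}{v_f}, \qquad
d = d_b + d_s + d_a .
```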
Methodology
In order to address the limitation of data sparseness, historical data are used to explore the deceleration and acceleration patterns. This helps to capture the changes in vehicle motion or dynamics through intersections to obtain the space-time ranges of the critical points. From these data and based on the geometric probability model, the distribution function of the delay values is obtained, from which the expected value of delay is calculated.
Travel pattern analysis in the spatial dimension
Detecting the "critical points" is the first prerequisite in control delay estimation. However, as Figure 2 shows, for a vehicle through an intersection only several sample points could be obtained, providing an incomplete trajectory. From such data the location of the critical points are unknown making it impossible to accurately estimate control delay. Hence, more trajectory data are required. Therefore, we use historical data to capture the vehicle travel (motion or dynamics) patterns through intersections. When a vehicle passes through an intersection, the dwell time depends on the arrival time of the vehicle and signal timing scheme, and has the characteristic of randomness. Deceleration and acceleration are relatively stationary processes largely not impacted by the environment. Therefore, the travel pattern of deceleration and acceleration processes were explored by mining historical data. In order to provide a reference for the historical values, the historical data must meet the two conditions that the instantaneous speed of historical sample points is above 0 and must be within the close position as the trajectory to estimate.
Extracting historical data according to these conditions, the historical sample points are divided into two parts: (1) before a vehicle stops; (2) after a vehicle stops. The deceleration and acceleration patterns are investigated separately. Because the historical data consist of discrete sparse points of uncertain quantity, the Principal Curve method is adopted for curve fitting, generating the sample points required to determine the critical points. In practical terms, the Principal Curve method deals with raw data noise and non-uniform distribution, both common in traffic data. As illustrated in Figure 3, the horizontal axis represents the distance from the centre of the intersection, and the vertical axis represents the instantaneous speed. Points A and B represent the critical points whose locations are to be estimated; the stars represent the current sampling points and the dots the historical sample points. Figure 3 shows that adding historical data better captures the travel patterns. The fitted function represents the vehicle speed distribution at different locations on the road and therefore increases the number of usable sample points at each stage of the vehicle's movement; the distance and speed ranges of the critical points can also be determined from it. When new data arrive, the fitted function can be updated, as sketched below.
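A minimal sketch of building such a speed-distance profile from historical points is shown below. It uses simple distance binning as a crude stand-in for the Principal Curve fit; the function name, bin width and units are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def speed_distance_profile(dist_m, speed_kmh, bin_m=5.0):
    """Average historical instantaneous speeds in fixed distance bins.

    A crude stand-in for the Principal Curve fit described above: it turns
    sparse historical (distance, speed) samples from one intersection
    approach into a speed-versus-distance profile.  dist_m is the distance
    to the intersection centre; bin_m is an illustrative bin width.
    """
    dist_m = np.asarray(dist_m, dtype=float)
    speed_kmh = np.asarray(speed_kmh, dtype=float)
    edges = np.arange(dist_m.min(), dist_m.max() + bin_m, bin_m)
    centres, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (dist_m >= lo) & (dist_m < hi)
        if in_bin.any():                     # skip bins with no historical points
            centres.append(0.5 * (lo + hi))
            means.append(speed_kmh[in_bin].mean())
    return np.asarray(centres), np.asarray(means)
```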
Spatial-temporal range delineation
To determine the critical points automatically from the profile of speed versus distance, we adopt the forward acceleration method proposed by Quiroga and Bullock (1999). They proposed forward and backward average acceleration methods to detect the critical points automatically from an acceleration profile; in this article, the fitted function based on the historical data is a speed-distance curve.
The acceleration is defined as the differential of velocity with respect to distance, a_i = (v_i - v_{i-1}) / (s_i - s_{i-1}), where a_i is the acceleration associated with point i, v_{i-1} and v_i are the speeds at points i-1 and i, and s_{i-1} and s_i are the positions of points i-1 and i. Note that a_i is different from the common acceleration a = Δv/Δt. We next argue that, when a vehicle decelerates or accelerates, a_i shows the same trend of change as a: assume a vehicle decelerates at rate a with initial speed v_1 and initial position s_1, and after Δt its speed is v_2 and its position is s_2, so that a_i = (v_2 - v_1)/(s_2 - s_1) = aΔt/(s_2 - s_1). When the vehicle travels at a constant speed, a_i = 0; when the vehicle decelerates, a_i increases, and as the speed becomes smaller, a_i increases further. Similarly, when the vehicle accelerates, a_i decreases gradually, and once the vehicle regains free-flow speed, a_i = 0. Thus a_i exhibits the same variation tendency as the common acceleration a.
The expression above can be used to determine where the acceleration is significantly different from zero, and hence to detect the deceleration onset point. However, it applies only to the deceleration process; for the acceleration process, a backward method is adopted, evaluated at points i = n + 1, n + 2, ..., N + 1.
In contrast to the forward average acceleration method, the backward average acceleration method is used to determine where the acceleration becomes essentially zero, enabling the acceleration end point to be detected. Using the forward and backward acceleration methods together, the spatial range of the critical points is determined. For convenience, as shown in Figure 3, let A and B denote the critical points identified by the method, with their spatial and speed ranges given by the corresponding intervals on the fitted curve. s_a is the distance from point A to the centre of the intersection and s_b the distance from point B to the centre. Points 1, 3 and 4 are the sample points of the trajectory to be estimated. By the definition of control delay, if t_a1 (the travel time between A and 1) and t_b3 (the travel time between B and 3) are known, the control delay can be calculated. Following the literature (Clement et al. 2004), it is assumed that the vehicle travels between A and 1, and between B and 3, at a constant acceleration. The prior knowledge consists of historical deceleration and acceleration values, whose lower and upper bounds are a_1 and a_2 respectively.
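A minimal sketch of the critical-point detection on the fitted speed-distance profile follows. The threshold value, function names and the simple point-wise test are illustrative assumptions; the actual method uses forward/backward average accelerations rather than this simplified check.

```python
import numpy as np

def detect_critical_points(dist_m, speed_ms, a_thresh=0.05):
    """Locate the deceleration-onset point A and acceleration-end point B.

    Operates on the fitted speed-versus-distance profile, assumed strictly
    monotone in distance, using a_i = (v_i - v_{i-1}) / (s_i - s_{i-1}) as a
    stand-in for the forward/backward average accelerations of
    Quiroga and Bullock (1999).  a_thresh, in (m/s)/m, is illustrative.
    """
    v = np.asarray(speed_ms, dtype=float)
    s = np.asarray(dist_m, dtype=float)
    a = np.diff(v) / np.diff(s)          # differential of speed w.r.t. distance
    active = np.abs(a) > a_thresh        # "significantly different from zero"
    if not active.any():
        return None, None                # no deceleration/acceleration detected
    idx = np.flatnonzero(active)
    return s[idx[0]], s[idx[-1] + 1]     # distances of A and B from the centre
```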
In Figure 3, if point 3 is not available and only point 4 is known, t_b4 (the travel time between B and 4) can be calculated in the same way. The number of sample points therefore has no impact on the calculation of the time ranges, and the method works under different sampling scenarios.
The expected value of delay
Based on this information, the control delay d of the trajectory can be calculated from t_a1, t_b3, the time t_13 between sample points 1 and 3, and the free-flow speed v_f (a plausible form of the omitted expression is given below).
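The expression itself was not preserved here. Assuming that A and B lie on opposite sides of the intersection centre, so that the free-flow distance from A to B is s_a + s_b, one plausible reading is the following; this is a reconstruction, not necessarily the authors' exact formula.

```latex
d = \left(t_{a1} + t_{13} + t_{b3}\right) - \frac{s_a + s_b}{v_f}.
```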
To compute the expected value of the control delay, we use the geometric probability model, in which the set of possible outcomes of the random experiment is infinite and each basic outcome is equally likely. The magnitude of the probability is reflected by the length of the line segment in which the objective function intersects the feasible area. For convenience, the range of t_a1 + t_b3 + t_13 is illustrated in Figure 4.
The shaded area represents the feasible region. The length of the line segment formed by the intersection of the objective function with the square represents the probability that d equals a given value. When the objective function passes through the corner of the feasible region with the largest delay, it equals d_4 and the probability is 0. As the value of the objective function decreases from d_4, the probability increases gradually until the delay equals d_3; between d_3 and d_2 the probability is largest and constant; and as the delay decreases from d_2 to d_1, the probability falls linearly to 0. The resulting probability density curve is shown in Figure 5.
The expected value of the delay is then obtained from this probability density (a sketch is given below).
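The formula itself was not preserved. Assuming the density in Figure 5 is the trapezoid just described (zero at d_1 and d_4, flat between d_2 and d_3), its mean has the standard closed form below; the function name and the requirement d_1 < d_2 <= d_3 < d_4 are assumptions of this sketch, and the omitted formula may differ in detail.

```python
def expected_control_delay(d1, d2, d3, d4):
    """Mean of the trapezoidal delay density sketched in Figure 5.

    Assumes the density is zero at d1 and d4, rises linearly on [d1, d2],
    is flat on [d2, d3] and falls linearly on [d3, d4], with d1 < d2 and
    d3 < d4.  This is the standard mean of a trapezoidal distribution.
    """
    upper = (d4 ** 3 - d3 ** 3) / (d4 - d3)   # contribution of the falling edge
    lower = (d2 ** 3 - d1 ** 3) / (d2 - d1)   # contribution of the rising edge
    return (upper - lower) / (3.0 * (d3 + d4 - d1 - d2))
```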
Experimental tests
To capture the deceleration and acceleration dynamics of vehicles passing through the intersection, five months of historical low-frequency trajectory data were used. The data contain longitude, latitude, speed, time and direction, sampled in time at a frequency of 1/30 Hz. The historical data are processed in the following steps: 1) trajectories in which the vehicle experiences an obvious stop are selected, judged by the speeds of the sample points: if any point has a speed below 5 km/h, the vehicle is considered to have stopped, and the trajectory is divided into a deceleration part and an acceleration part; 2) for all selected trajectories, the distance between the stop location and the centre of the intersection is calculated, and the trajectories are grouped into equal distance intervals (20 m in our study); 3) the speed before deceleration and after acceleration is estimated for each trajectory: for the deceleration part, if there is one sample point its speed is taken as the speed before deceleration, and if there are two or more points the speed before deceleration is the maximum of the speeds of the first two points and the average speed between them; for the acceleration part, the speed after acceleration is obtained analogously from the last two points; the deceleration trajectories are then classified by speed before deceleration and the acceleration trajectories by speed after acceleration; 4) the deceleration and acceleration trajectories are divided into subsets according to speed and stop position, and each subset is fitted with the Principal Curve method to capture the vehicle dynamics; 5) for a new trajectory, the distance between the stop location and the intersection is calculated and the speeds before deceleration and after acceleration are estimated; combined with the corresponding fitted curve, the expected value of the control delay is calculated with the proposed method. The parameter values (5 km/h, 20 m) were chosen from field experience.

A field experiment was conducted to validate the proposed method. The study site is the intersection of Songshan and Huanghe roads in Harbin (China). Both roads are arterials, and a large shopping centre nearby makes the traffic conditions complex, with obvious changes at different times of the day. The speed limit is 50 km/h on both roads.
To evaluate the accuracy and reliability of the proposed method, a set of high-frequency trajectory data was collected on 13 September 2017. Eight probe vehicles were equipped with GPS receivers collecting data at 1 Hz. The vehicles were driven in the north-south direction, traversing straight through the intersection repeatedly during the periods 07:00-10:00 and 16:00-19:00, covering the morning and evening peaks (07:00-09:00 and 17:00-19:00, respectively). The process lasted six hours and generated 144 valid high-frequency GPS tracks.
Low-frequency floating car data were subsequently generated from the high-frequency data at the typical interval of 30 s. The trajectory data comprise the GPS points from the time a probe vehicle enters the intersection to the time it departs. Distance is discretized at the 20 m level, based on a series of tests of the sensitivity of control delay accuracy to distance resolution and computational efficiency. In the field observations, 95% of the vehicles' acceleration and deceleration rates were below 2.8 m/s^2. In Haas et al. (2004), the average deceleration rate is 0.1·g for speeds of 20 to 25 mph and 0.18·g for speeds of 35 to 40 mph. To make the estimation results insensitive to the parameters and keep the assumption robust, the acceleration and deceleration range is taken to be between 0.1·g and 0.2·g. Control delays were then computed from the low-frequency probe vehicle data using both the method proposed in this article and the reference method.
Accuracy of individual probe vehicle control delay
Given that vehicles travelling at different speeds behave differently, the GPS tracks were classified into two categories according to the speed at which the vehicle starts to decelerate: above 30 km/h is classified as the high-speed pattern, otherwise as the low-speed pattern.
The observed control delay values are calculated from the high-frequency floating car data, and the low-frequency floating car data are generated by resampling them. The low-frequency data are then processed to generate control delay values for each class using both the method proposed in this article and the reference method. Figure 6 compares the observed and estimated control delay values for the different speed patterns; the horizontal and vertical axes represent the observed and estimated control delay values, respectively, and the black dotted line is the 45-degree line, so the closer the points are to it, the more accurate the method. As shown in Figure 6, the proposed method performs better than the reference method for both the low-speed and high-speed patterns, and for both methods the accuracy is higher for the high-speed pattern. In the low-speed pattern the vehicle speed is relatively low and the traffic volume large, so vehicle movement is complex; for example, a vehicle may experience a second stop, meaning it does not pass the intersection within a single signal cycle.
Estimation accuracy of the control delay of the intersection
To further demonstrate the effectiveness of the proposed method, a reference approach developed by He and Ye (2014) is adopted. It was chosen because it is not sensitive to the number of low-frequency sample points and achieves satisfactory accuracy, with 85% of its control delay estimates within 10 s in absolute error. The reference method delineates the affected area of the intersection according to the historical queue length; it is assumed that the vehicle travels at free-flow speed outside the affected area, starts to decelerate on entering it, and regains free-flow speed on leaving it. As shown in Figure 7, a trajectory through points P_0, P_1, P_2 is generated as a vehicle travels through the intersection, and the area between S and E is the affected area. The vehicle is assumed to travel at a uniform speed outside this area, so the real travel time from S to E can be calculated from the sample points, with L_1 the distance between point P_0 and point S. The control delay is then obtained from this travel time, the distance L between points S and E, and the free-flow speed v_f (see the sketch below).
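The two expressions were not preserved here. The control delay part follows directly from the quantities defined above; the form of T_SE, the real travel time from S to E, depends on details of the reference method that are not reproduced, and the name d_ref is introduced only for this sketch.

```latex
d_{\mathrm{ref}} = T_{SE} - \frac{L}{v_f}.
```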
To quantitatively evaluate the accuracy of the estimation results, the Root Mean Square Error (RMSE) is selected as the evaluation indicator, providing a measure of the goodness of fit between the estimated and observed values: RMSE = sqrt( (1/n) Σ_{i=1}^{n} (x_i - x̂_i)^2 ), where x_i is the i-th estimated control delay value of the intersection and x̂_i is the i-th observed delay value of the intersection.
Table 1 lists the RMSE of the proposed and reference methods against the ground-truth control delay for the peak and off-peak hours.
For low-frequency probe car data the sampling is random, which means that a single trajectory may give rise to many different sample point sequences. To test the stability of the proposed and reference methods, for each observation interval all high-frequency trajectories were resampled at low frequency 10 times, and the control delay of the intersection was calculated 10 times by each method. The results, presented in Table 2, show that the proposed method improves on the accuracy of the reference method by 14% in the peak hour and 26% in the off-peak period.
Table 2 also shows that the standard deviation of the control delays estimated with the proposed method is smaller than that of the reference method, indicating better reliability.
To further demonstrate the reliability of the proposed method, a box plot is used to show the distribution of the control delay estimates from 16:00 to 19:00. Figure 8 presents the distributions of the control delay estimates of the intersection obtained with the proposed and reference methods; the blue point is the ground-truth control delay of the corresponding time period, and P and R denote the proposed and reference methods. Twelve 15-minute time periods between 16:00 and 19:00 were used, and for each period the control delay of the intersection was estimated ten times by resampling the high-frequency trajectories at a 30 s interval. As shown in Figure 8, for most time periods the control delay estimates of the proposed method are more tightly distributed than those of the reference method and, in general, their mean is closer to the ground-truth value, indicating better reliability.
Computational efficiency is another factor to be evaluated. The computer used for data processing and analysis had an Intel Core i5-8250 CPU (4 cores, 1.6 GHz), 4 GB of memory and a 1 TB hard disk, running 64-bit Windows 10. For an arterial with five intersections and link lengths of about 800 m, five months of historical data were used to capture vehicle dynamics through each signalized intersection. This computation takes 5 to 10 min but needs to be performed only once; calculating the control delay of each intersection takes about 7 s. These figures show that the approach has satisfactory computational efficiency and can be used in real time.
Because the sample interval may affect accuracy and reliability, we analyse the sensitivity of the estimation accuracy to the sample interval. For all time periods, the control delay of the target intersection was estimated from low-frequency trajectory data at different sample intervals and the RMSE calculated. The results are shown in Figure 9.
As shown in Figure 9, although the RMSE increases as the sample interval becomes longer, the rate of growth is low and hence, at least for the range of sampling interval analysed (from 30 to 60 s), the accuracy of control delay estimation is largely insensitive to the sample interval.
Conclusions and recommendations
This article presents a novel method to estimate control delay at road intersections from low-frequency floating car data. In order to address the limitations of data sparseness, historical data are used to explore the deceleration and acceleration patterns. This helps to capture the changes in vehicle motion or dynamics through intersections to obtain the space-time ranges of the critical points. From these data and based on the geometric probability model, the distribution function of the delay values is obtained, from which the expected value of delay is calculated.
Both the proposed method and a reference method are compared against the ground-truth control delay of the target intersection for different time periods. The results show that the proposed method has an RMSE of 11.8 s compared to 13.7 s for the reference method in the peak period; the corresponding values for the off-peak period are 9.3 s and 12.5 s. In addition to better accuracy, the mean and standard deviation statistics show that the proposed method outperforms the reference method.
The method proposed in this article can be used to estimate control delay at road network intersections from sparse data (e.g. from floating cars), which is important for traffic management and control and hence for improving the overall operational efficiency of a road network. However, some limitations of the research methodology should be highlighted in order to enhance its applicability and transferability. First, as observed control delay is not easy to obtain, the statistical analysis is based on observations at a single intersection over six hours; more data are needed to validate the conclusions. Second, the sample size of the trajectory data is not considered in this research and may affect the performance of the method; the relationship between sample size and performance will be investigated in the future.
"Engineering"
] |
Humidity Sensor Composed of Laser-Induced Graphene Electrode and Graphene Oxide for Monitoring Respiration and Skin Moisture
Respiratory rate and skin humidity are important physiological signals that have become an important basis for disease diagnosis, and they can be monitored by humidity sensors. However, it is difficult to employ high-quality humidity sensors on a broad scale due to their high cost and complex fabrication. Here, we propose a reliable, convenient, and efficient method to mass-produce humidity sensors. A capacitive humidity sensor is obtained by ablating a polyimide (PI) film with a picosecond laser to produce an interdigital electrode (IDE), followed by drop-casting graphene oxide (GO) as the moisture-sensitive material on the electrode. The sensor has long-term stability, a wide relative humidity (RH) detection range from 10% to 90%, and high sensitivity (3862 pF/%RH). In comparison to previous methods, this technology avoids the complex procedures and high cost of conventional interdigital electrode preparation. Furthermore, we discuss the effects of the electrode gap size and the amount of graphene oxide on humidity sensor performance, analyze the humidity sensing mechanism by impedance spectroscopy, and finally monitor human respiratory rate and skin humidity changes in a non-contact manner.
Introduction
Humidity sensors have a large number of applications in many fields, such as agricultural production, industrial manufacturing, food processing, and environmental monitoring [1][2][3][4], and they have recently been applied to monitoring human health [5,6]. With the global spread of the COVID-19 virus, non-contact sensing and respiratory monitoring have become important tools for preventing and controlling respiratory infectious diseases. The moisture content of breath and changes in skin moisture reflect the body's metabolism and health status, so it is especially important to obtain data on changes in respiratory rate and skin humidity by a non-contact method, which places great demands on the sensitivity, real-time performance and reliability of humidity sensors.
There are many kinds of humidity sensors, mainly capacitive [7,8], impedance [9,10], current [11], voltage (i.e., self-powered humidity sensors) [12][13][14], fiber optic [15], quartz crystal microbalance (QCM) [16], and resonant surface acoustic wave (SAW) [17] types. In terms of power supply, humidity sensors can be classified into passive and self-powered sensors. Passive sensors are limited in their applications by the need for an external power supply; self-powered sensors are of interest because they supply their own power. However, a redox reaction occurs during the power supply process [14], which reduces the lifetime of the sensor, and the energy storage problem of self-powered sensors also remains to be solved.

The average power of the laser was 50 W, the power percentage was 20%, and the frequency was 200 kHz. Then, the copper wire was glued to the common electrode with conductive silver glue. Finally, the GO solution was drop-coated on the electrode with a pipetting gun and dried naturally in the room for 48 h to form a uniform GO film.
Materials Characterizations
A Leica DVM6 depth-of-field (DOF) microscope was used to observe the morphology of GO and LIG. Raman spectra were recorded with a Raman spectrometer (Pioneer Technology RTS2, Bordentown, NJ, USA) using a 532 nm laser as the excitation source. The sensor capacitance was measured using an LCR meter (TH2829A, Tonghui Electronic Co. Ltd., Changzhou, China). A high-precision Bluetooth humidity sensor (Jiali Technology Co., Ltd., Chengdu, China) was used to monitor humidity, with a humidity accuracy of 1.5% RH and a resolution of 0.1% RH. The gas flow rate was controlled by a gas flow controller (LZB-3WB, Shunlaida Measurement Co., Ltd., Nanjing, China).
Humidity Sensing System
As shown in Figure 2, all measurements were performed at 30 °C. The gas from the synthetic air (N2: 78%, O2: 22%) bottle was the dry gas, supplied at a flow rate of 1 L/min, and the gas bubbled through deionized water was the moist gas. The dry and moist gases were mixed in different proportions by adjusting the flow meters to obtain a stable humidity environment. The sensor was placed in the test chamber and its capacitance measured with the LCR meter, while the ambient humidity in the chamber was measured in real time by a commercial hygrometer. Humidity sensitivity is defined as S = (C_RH - C_0)/(RH - RH_0), where S denotes the sensitivity, C_RH the capacitance at RH% humidity, and C_0 the capacitance at the reference humidity RH_0%. The response and recovery time of the sensor is the time taken for the capacitance to go from its initial value to 90% of the stable value.
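A minimal sketch of how these two figures of merit can be computed from measured data is given below; the function names, array-based threshold search and units are illustrative assumptions rather than the authors' analysis code.

```python
import numpy as np

def sensitivity_pf_per_rh(c_rh_pf, c0_pf, rh, rh0):
    """Capacitive sensitivity S = (C_RH - C_0) / (RH - RH_0), in pF/%RH."""
    return (c_rh_pf - c0_pf) / (rh - rh0)

def response_time_s(t_s, c_pf):
    """Time for the capacitance to move from its initial value to 90% of the
    total change of one step response, following the definition above."""
    t = np.asarray(t_s, dtype=float)
    c = np.asarray(c_pf, dtype=float)
    target = c[0] + 0.9 * (c[-1] - c[0])
    direction = np.sign(c[-1] - c[0])             # +1 for response, -1 for recovery
    crossed = np.flatnonzero((c - target) * direction >= 0)
    return t[crossed[0]] - t[0] if crossed.size else None
```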
Morphology and Structure Analysis of LIG
Three sensors with different electrode gaps were designed, 50 µm, 150 µm, and 360 µm, corresponding to electrode areas of 32.34 mm², 41.34 mm², and 59.34 mm², respectively, with an electrode width of 290 µm, as shown in Figure 3a-c. Sensors without drop-coated GO are called PI-based sensors, and sensors with drop-coated GO are called GO-based sensors. For convenience, we name the sensors with different gap sizes and different volumes of GO solution as LIGM-N, where M represents the gap size and N the volume of GO solution; for example, in LIG150-60, 150 represents a gap size of 150 µm and 60 represents 60 µL of GO solution. As shown in Figure 3d,g, in the absence of GO the surface is black with a gap in the middle and has many small holes about 3-5 µm in diameter. During laser ablation the laser is emitted at a certain frequency, resulting in a crumpled carbon electrode morphology with stacked layers accompanied by bulges and small holes: bulges are caused by PI melting, and holes by the bulges cracking. The middle gap is formed because the instantaneous power of the laser is so high that it raises the PI film to more than 1000 K, causing an explosive phase change with boiling, vaporization, and decomposition of the polyimide. The high temperature carbonizes the PI film, which results in microhumps around the ablation area. The cross-sectional DOF image in Figure 3h shows that the carbon layer is about 50 µm higher than the PI film. Figure 3i shows that the PI film becomes a layered structure after drop-casting GO, and the whole electrode has three parts: the bottom layer of yellow PI, the middle layer of black carbon electrode, and the upper layer of GO. When the GO solution is 30 µL, the GO film is flat and the gap can be clearly seen, while when the GO solution is 120 µL the gap becomes blurred and the GO film is porous, as seen in Figure 3d-f. This confirms that the GO layer becomes thicker as the amount of GO increases.
As shown in Figure 4, point (3) is obviously different from points (1) and (2), and point (3) is a typical polyimide characteristic peak [42]. In contrast, points (1) and (2) have distinct peaks characteristic of the carbon structure, i.e., the D peak, G peak, and 2D peak, at 1357 cm⁻¹, 1580 cm⁻¹, and 2695 cm⁻¹, respectively. These peaks are identified as a graphene structure, with the D peak representing the amorphous carbon structure and the G peak representing carbon-carbon bond stretching, which can be considered a graphite structure. The ratio of the 2D peak to the G peak is a good indication of the presence of high-quality monolayer graphene. I_2D/I_G is 0.5 near the center of the electrode, from which it can be inferred that there is multi-layer graphene of around 4-5 layers [43]. There is no obvious 2D peak in the electrode's edge spectrum, but there are distinct D and G peaks, indicating the formation of graphite and amorphous carbon structure. Near the center, the temperature is quite high and the polyimide decomposes fast, producing higher mass graphene, whereas the edge region has a lower temperature and slower decomposition, producing amorphous carbon and a thicker graphite structure. The presence of graphene enhances the conductivity of the electrode.
As shown in Figure 4, point (3) is obviously different from points (1) and (2), an (3) is a typical polyimide characteristic peak [42]. In contrast, points (1) and (2) h tinct peaks characteristic of the carbon structure, i.e., D-peak, G-peak, and 2D-pe responding to positions at 1357 cm −1 , 1580 cm −1 , and 2695 cm −1 . These peaks are id as a graphene structure, with the D peak representing the amorphous carbon s and the G peak representing the carbon-carbon bond stretching, which can be con as a graphite structure. The ratio of 2D to G can be a good indication of the pre high-quality monolayer graphene. I2D/IG is 0.5 near the center of the electrode, an be inferred that there is a multi-layer graphene with around 4-5 layers [43]. The obvious 2D peak in the electrode's edge spectrum, but there are distinct D and G p indicates the formation of the graphite and amorphous carbon structure. Near the the temperature is quite high, and the polyimide decomposes fast, producing high graphene, whereas the edge region has a lower temperature and slower decomp producing amorphous carbon and a thicker graphite structure. The presence of g enhances the conductivity of the electrode.
Comparison of PI-Based Sensors with Different Electrode Gap Sizes
Figure 5a-c shows the humidity response of PI-based sensors with different gaps. The capacitance of the sensor decreases gradually with increasing gap size. This can be explained qualitatively by the parallel-plate capacitor equation C = εε_0·S/d, where d is the gap size, S is the cross-sectional area of the electrode fingers, ε is the relative permittivity between the interdigital electrodes, and ε_0 is the permittivity of vacuum, as shown in Figure 5d. Additionally, the capacitance of these sensors increases as the relative humidity rises (Figure 5a-c). ε_water is 78.4, much higher than the 3.4 of ε_PI in the electrostatic field; when the humidity rises, the PI film absorbs more water molecules, increasing its dielectric constant and thus the capacitance. The high dielectric constant of water arises because water molecules are polarized in an electric field, and this polarization responds well to a low-frequency field; when the field frequency increases, the polarization of the water molecules cannot keep up with the changing field direction [44,45], so the dielectric constant increases more slowly. The capacitive response of the sensor is largest at 100 Hz. Although the PI-based humidity sensor responds to changes in humidity, its sensitivity is relatively low; therefore, we enhance the sensitivity by applying GO to the surface of the interdigital electrode.
Comparison of GO-Based Sensors with Different Electrode Gap Sizes
For the three gap sizes (50 µm, 150 µm, 360 µm) of GO-based sensors, the drop-coated GO solution is 30 µL, 60 µL, 90 µL, or 120 µL, among which LIG150-60 has the highest sensitivity, as seen in Table 1. Figure 6a-d depicts the capacitance response curves of the sensor with a 150 µm gap for the various GO loadings. The lower the frequency, the higher the capacitance response, consistent with the PI-based sensors; we therefore used a frequency of 100 Hz to evaluate sensor performance. As shown in Figure 6b, when the humidity varies from 10% to 90% RH, the capacitance of LIG150-60 changes from 18.8 pF to 3.09 × 10^5 pF, and the sensitivity reaches 3862 pF/%RH, much greater than that of the PI-based sensors. An increase in the dielectric constant between the interdigital electrodes is not sufficient to explain this change in capacitance; moreover, the capacitance response increases much more strongly at high RH than at low RH. We try to explain this phenomenon by impedance spectroscopy (Figure 6e-h: impedance spectra and Bode diagram of LIG150-60 at different RH levels, and schematic illustrations of the sensing mechanism at low humidity below 40% and high humidity above 50%).
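As a quick consistency check (our own arithmetic, not part of the original text), the quoted sensitivity follows from the definition given in the Humidity Sensing System section:

```latex
S = \frac{C_{90\%} - C_{10\%}}{90 - 10}
  = \frac{3.09\times 10^{5}\,\mathrm{pF} - 18.8\,\mathrm{pF}}{80\,\%\mathrm{RH}}
  \approx 3.9\times 10^{3}\ \mathrm{pF}/\%\mathrm{RH},
```

in agreement with the reported 3862 pF/%RH.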
As seen in the LIG150-60 impedance spectrum in Figure 6e, at 10% to 30% humidity the spectrum is an approximate semicircle whose diameter can be taken as the internal resistance of the sensor [46], about 100 kΩ to 140 kΩ. Because of the large number of hydrophilic groups on the GO surface [47], water molecules are mostly chemisorbed on the GO surface by hydrogen bonding and cannot move freely. As shown in Figure 6g, the conductivity then depends mainly on electrons in the electrode and the GO, and the increase in capacitance relies mainly on the increase in dielectric constant after water adsorption. As the humidity continues to increase, the first water molecule layer is formed. When the humidity rises to 50%, the diameter of the semicircle continues to decrease, meaning that the internal resistance of the sensor decreases, while a straight line with a slope of approximately 45 degrees appears in the low-frequency band, indicating the appearance of Warburg impedance caused by the diffusion of charge carriers at the GO film/electrode interface. At this point, owing to the increasing number of water molecules, a physical adsorption layer forms on the first water layer, as illustrated in Figure 6h. According to the Grotthuss transport mechanism (H2O + H3O+ = H3O+ + H2O) [21,48], hydrated hydrogen ions form in the adsorbed layer as conductive carriers, and the capacitance of the sensor depends mainly on the diffusion of hydrated hydrogen ions at the GO film/electrode interface. At humidity above 80%, multiple physically adsorbed layers have formed; the semicircle has disappeared and only a straight line remains, indicating that the sensor performance is determined mainly by the Warburg impedance generated by ion diffusion. The higher the humidity, the more hydrated hydrogen ions are present, and the diffusion capacitance increases sharply. Figure 6f shows the Bode diagram of LIG150-60 at different humidity levels. The phase curves at all humidity levels intersect at about 100 Hz, which indicates that the phase angle is essentially unchanged at 100 Hz regardless of the humidity level; the change in impedance at 100 Hz can therefore represent the change in capacitance. At other frequencies the impedance and phase angle vary with humidity and the uncertainty increases, which is another reason for selecting 100 Hz as the test frequency. The impedance plot in Figure 6f also shows that, at 100 Hz, the impedance gradually decreases as humidity rises, representing a gradual increase in capacitance, consistent with Figure 6b.
To investigate the effect of different gaps and different GO amounts on the performance of the humidity sensor, we measured the capacitive response of sensors with different parameters, as shown in Figure 7a-c. Figure 7a shows that the sensitivity does not increase monotonically with the amount of GO but has an optimal value: with 60 µL of GO solution the capacitive response is largest, while at 90 µL it becomes smaller again. This may be because the GO film thickens as the amount of GO increases, producing a larger resistance between the carbonized electrode and the GO film and thus a smaller capacitive response. Figure 7b is similar, and Figure 7c differs slightly, mainly because the area of the electrode with a 360 µm gap is almost twice that of the 50 µm gap electrode, so there is less GO per unit area. From the electrode area and the amount of GO, the capacitive response is best when the GO loading is 1.45-1.86 µL/mm²; the details are given in Table 1. Figure 7a-c further shows that the sensor with a 50 µm gap is not the most sensitive; rather, the sensors with wider gaps are more sensitive. The sensor with a narrow gap has a limited area, and when the GO solution is 30 µL there are fewer hydrated hydrogen ions at high relative humidity, resulting in a weak diffusion capacitance; as shown in Figure 7d, the capacitive response is then inversely related to the gap size. Conversely, when the GO solution is 120 µL, the electrodes with larger gaps produce more hydrated hydrogen ions because of their larger areas, ion diffusion is enhanced, and the diffusion capacitance grows rapidly at high relative humidity; as seen in Figure 7e, the capacitive response is then positively correlated with the electrode gap. In summary, when the amount of GO is small the sensitivity is inversely related to the electrode gap size, while when the amount of GO is sufficient the sensitivity is proportional to the electrode gap.
Figure 8a shows the capacitive response of LIG150-60 over five cycles between 80% and 40% RH; the adsorption time is 58 s and the desorption time 15 s. As mentioned above, water molecules combine with hydrophilic groups and adsorb on the GO surface when the humidity rises. However, the adsorption process is not uniform: some areas may already have adsorbed many water molecules while others have none, which hinders the ion diffusion process. The first chemisorbed layer must be completed on the GO surface before the Grotthuss effect allows the ions to diffuse sufficiently and the diffusion capacitance to increase rapidly. The desorption process, by contrast, is a shift from high to low humidity: at high humidity a complete layer of physically adsorbed water molecules has formed on the GO surface, but its thickness is not uniform, and in areas with fewer water molecules the water is released quickly and completely from the GO surface. As soon as these areas are free of water molecules, the diffusion process is impeded and the diffusion capacitance drops rapidly. Since the area from which water molecules need to be released is smaller, the desorption time is shorter, whereas adsorption requires a much larger area to form a complete chemisorbed layer on the GO surface and therefore takes much longer. As is well known, humidity hysteresis is a key parameter of humidity sensor performance.
The black and red curves in Figure 8b represent the adsorption and desorption curves from 10% to 90% RH, respectively; the maximum hysteresis occurs at around 80% RH and is about 1.2%. Furthermore, we measured the capacitance of the LIG150-60 sensor weekly over a period of 42 days to evaluate its long-term stability; the capacitance varies very little at each humidity level, confirming the sensor's long-term stability, as shown in Figure 8c. Table 2 lists recently reported capacitive humidity sensors of different types; our sensor has the highest sensitivity, although its response time is longer, which may be related to the large proportion of the device occupied by the IDE structure.
Respiratory and Skin Humidity Monitoring
Owing to its superior performance, the sensor can be used for non-contact monitoring of human physiological signals, such as sweating, breathing, and non-contact fingertip sensing. Non-contact sensing is preferable to contact sensing because it prevents sweat from contaminating the sensing surface and allows the sensor to be reused. Figure 9a shows a mask for respiratory monitoring and how it is worn, and Figure 9b a photograph of sweat monitoring on the wrist. Before testing, the sensor is set at a height of 6 cm above the desktop; the wrist is then placed between the desktop and the sensor and kept motionless during the test. The distance may have an error of 1-2 mm, but this does not affect the trend of the detection curve. Figure 9c depicts nose breathing and Figure 9d mouth breathing. The capacitance variation is only a dozen nanofarads for nose breathing but tens of nanofarads for mouth breathing, a clear difference: compared with the oral cavity, the nasal cavity is smaller, contains less water, and the exhaled air is less moist. Figure 9e shows the capacitance response of a finger approaching the sensor; the response depends on the proximity distance, being larger at 2 mm than at 10 mm, a characteristic with potential applications in non-contact positioning and human-computer interaction. Figure 9f shows the non-contact monitoring of human sweating. Stage 1 represents normal ambient humidity; in stage 2 the wrist is near the sensor (without contact) and the sensor capacitance changes to about 100 nF; in stage 3 the person drinks water and the capacitance remains basically unchanged; in stage 4 the body starts to sweat and the capacitance increases rapidly; in stage 5 sweating peaks and the capacitance reaches about 250 nF; in stage 6 the wrist is withdrawn. The sensor is thus highly sensitive to human sweat, demonstrating its potential for monitoring human physiological processes.
Conclusions
In this work, we ablated a PI film with a picosecond laser to obtain an interdigital electrode and enhanced the humidity response with GO. The method has a simple fabrication process and low cost. After laser ablation of the PI film, graphene is generated on the electrode of the PI-based sensor, which contributes to the conductivity, and the smaller the electrode gap size, the greater the capacitive response. The effects of the electrode gap size and the amount of GO on the performance of GO-based sensors were investigated: the sensitivity is inversely related to the electrode gap size when the amount of GO is small, and proportional to the electrode gap when the amount of GO is large. There is also an optimal amount of GO, the sensor being most sensitive when the drop-coated GO is in the range of 1.45-1.86 µL/mm². Owing to its high sensitivity, rapid response, and minimal hysteresis, the sensor can monitor human physiological signs such as breathing and perspiration.
Author Contributions: X.F.: conception, investigation, formal analysis, methodology, writing - original draft, visualization, data collation, project management. J.H.: resources, investigation, data collation, funding acquisition. W.S.: resources, conceptualization, terminology, data collation, review, supervision, project management, funding acquisition. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the National Natural Science Foundation of China (62073089) and Guangdong Ocean University Education Quality Project (PX-112175).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
"Materials Science"
] |
Generalized cut operation associated with higher order variation in tensor models
The cut and join operations play important roles in tensor models in general. We introduce a generalization of the cut operation associated with the higher order variations and demonstrate how they generate operators in the Aristotelian tensor model. We point out that, by successive choices of appropriate variational functions, the cut operation generalized this way can generate those operators which do not appear in the ring of the join operation, providing a tool to enumerate the operators by a level by level analysis recursively. We present a set of rules that control the emergence of such operators.
The Virasoro algebra has a natural extension, the w_{1+∞} algebra, whose role in 2-dimensional gravity and in some integrable models has been well investigated. In particular, the constraints of w_{1+∞} type coming from the higher order contributions of the variation [31] have turned out to be algebraically independent and nontrivial in some matrix models, such as the two-matrix model. Here we discuss such higher order contributions arising from the change of the integration measure under the variation.
The basic structure of the contributions from the action is the join operation defined in (1.1), where K and K' are arbitrary operators and summation over repeated indices is implied. When appropriate keystone operators are chosen for K and K', a block of independent operators called the join pyramid is successively generated by the join operation; in other words, the join operation forms a ring whose elements are independent operators and whose multiplication is given by (1.1). There are, however, operators not contained in the join pyramid, and these cannot be ignored because the cut operation, which underlies the contribution from the variation of the integration measure, generates them. These pieces of structure were discovered in [24]. (In contrast, the cut and join operations in the one matrix model, namely r = 2, can be depicted as going up and down between integer points on a one-dimensional half-line.) The cut operation is defined by

  ∆K = ∂²K / (∂A_{a_1 a_2 ··· a_r} ∂Ā_{a_1 a_2 ··· a_r}),   (1.2)

and corresponds to going up one stair (by one level) in the join pyramid. There is no systematic way to predict when a new operator appears and, in the situation of [24], one can only try to discover this by acting with the cut operation on all operators in the join pyramid. Taking the above mentioned role of the cut and join operations into account, we expect the cut operation to play an important role in resolving the enumeration problem of the operators in tensor models. Below we investigate higher order contributions to the constraints from the variation of the integral measure.
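As a minimal illustration of how ∆ acts (our own check, not reproduced from [24] or from the equations omitted here), consider the simplest invariant K_1 = A_{a_1···a_r} Ā_{a_1···a_r}:

```latex
\Delta K_1 = \frac{\partial^2 \left(A_{b_1\cdots b_r}\,\bar A_{b_1\cdots b_r}\right)}
                  {\partial A_{a_1\cdots a_r}\,\partial \bar A_{a_1\cdots a_r}}
           = N_1 N_2 \cdots N_r ,
\qquad
\Delta\!\left(K_1^{\,2}\right) = 2\left(1 + N_1 N_2 \cdots N_r\right) K_1 ,
```

so acting with the cut operation on the (disconnected) level-2 operator K_1² produces the level-1 operator K_1, i.e. the cut lowers the level by one.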
This paper is organized as follows: In section 2, the higher order variations of the integration measure are considered. In section 3, we discuss the successive choices of the variational function. In section 4, we check that our choice of the variational function is correct up to the level 6 operators. In section 5, a procedure of generating the operators not included in the join pyramid is described.
Higher order variation
Let us consider the rank r = 3 Aristotelian tensor model. Let $A$ be a rank-3 tensor with components $A_{a_1 a_2 a_3}$, and let $\bar{A}$ be its conjugate with components $\bar{A}_{a_1 a_2 a_3}$. Each index $a_i$, $i = 1, 2, 3$, runs over $1, \cdots, N_i$ and is colored red, green, and blue, respectively. The shift of the integration variables of the partition function is defined by $A \to A + \delta A$ and $\bar{A} \to \bar{A} + \delta\bar{A}$, with the shift (2.1) for arbitrary K. From the line element, its response under (2.1) follows, and the measure is therefore transformed by the factor $\det(1+F)$. The cut operator $\Delta$ (1.2) corresponds to the first-order term $\mathrm{tr}\,F$ of this transformation. We are interested in the higher order contributions to the response of the measure under the general variation, and in the gauge-invariant operators which are contained in $\det(1+F)$. We use the following pictorial representation of the operators: the tensor $A$ (resp. $\bar{A}$) is denoted by a white circle (resp. a black dot), and contractions of indices are denoted by colored lines connecting the white circles and the black dots. The connected operators come from $\mathrm{tr}\,F^n$. The number of $A$'s in the operator under consideration is called the level of the operator. In the case of n = 2 and higher, the generalized cut operation $\mathrm{tr}\,F^n$ raises the level, and it can therefore be used as a procedure which generates the higher level operators, while the usual cut operation (1.2) lowers the level of the operators by one. In the next section, we see that all connected operators at each level are included in $\mathrm{tr}\,F^n$ if K is appropriately chosen.
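To make the origin of the $\mathrm{tr}\,F^n$ terms explicit, one can use the standard expansion below; it is only a restatement of textbook identities, under the assumption (implicit in the text) that $F$ denotes the Jacobian matrix of the shift of the integration variables, so that the measure picks up the factor $\det(1+F)$:

$$\det(1+F) \;=\; \exp\!\big[\mathrm{tr}\log(1+F)\big] \;=\; \exp\!\Big[\sum_{n\geq 1}\frac{(-1)^{n+1}}{n}\,\mathrm{tr}\,F^{n}\Big] \;=\; 1 + \mathrm{tr}\,F + \tfrac{1}{2}\big[(\mathrm{tr}\,F)^{2} - \mathrm{tr}\,F^{2}\big] + \cdots$$

The n = 1 term reproduces the usual cut operation, while the terms with n ≥ 2 are the generalized cut operations discussed in the next section.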
Choice of K
In this section, we seek the appropriate choice of the variational function K needed to construct all operators. Let us temporarily choose (3.1). The operator (3.1) is a linear combination of the level 2 operators, and all operators in $\mathrm{tr}\,F^n$ are then of level $k = n$. Although $\mathrm{tr}\,F^n$ consists of the three second-derivative combinations of $K_{\leq 2}$ with respect to $A$ and $\bar{A}$, it turns out that only one of these combinations is necessary below. Pictorially, this is expressed in (3.4).
In the subsections that follow, we will show that all operators at the first few levels, denoted generically by $k$, are included in $\mathrm{tr}\,F^n$.
level k = 1
The only connected operator is $K_1 = A_{a_1 a_2 a_3}\bar{A}_{a_1 a_2 a_3}$. In the case of n = 1, $\mathrm{tr}\,F$ is the cut operation itself, as mentioned above. Conversely, $K_1$ can be obtained in the form of, for example, a suitable trace expression. Here the trace "Tr" denotes the contraction of all indices; in the pictorial representation, it corresponds to connecting the two open lines with the same color on the two sides.
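As a small consistency check of the cut operation (1.2) at this level, the following sketch (a toy computation with tiny index ranges, not code from the paper) evaluates $\Delta K_1$ symbolically and confirms that it equals $N_1 N_2 N_3$:

```python
# Toy check of the cut operation Delta K = sum_{abc} d^2 K / (dA_abc dAbar_abc)
# applied to K_1 = sum_{abc} A_abc * Abar_abc.  The index ranges N_i are kept tiny.
import sympy as sp
from itertools import product

N1, N2, N3 = 2, 2, 3
A    = {i: sp.Symbol(f"A_{i}")    for i in product(range(N1), range(N2), range(N3))}
Abar = {i: sp.Symbol(f"Abar_{i}") for i in product(range(N1), range(N2), range(N3))}

K1 = sum(A[i] * Abar[i] for i in A)                  # the level-1 operator A.Abar

cut_K1 = sum(sp.diff(K1, A[i], Abar[i]) for i in A)  # the cut operation (1.2)
print(cut_K1)                                        # prints 12, i.e. N1*N2*N3
assert cut_K1 == N1 * N2 * N3
```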
level k = 2
All connected operators are listed in appendix A2 of [24]. Similarly to the case of level k = 1, $\mathrm{tr}\,F^2$ contains the corresponding level 2 operators, and the remaining operators of this level are also obtained in a similar way.
level k = 3
All connected operators are listed in appendix A3 of [24]. At n = 3, $\mathrm{tr}\,F^3$ contains not only the operators already reachable within the join pyramid, but also the operator displayed in (3.14). The last one, $K_{3W}$, cannot be obtained by the join operation. Hereafter, as in [22][23][24], such operators are called secondary operators. In the original procedure of [22][23][24], we had to act with the original cut operation (1.2) on all of the level k = 4 operators in order to discover the secondary operator $K_{3W}$.
level k = 4
The independent operators are listed in appendix A4 of [24]. At n = 4, $\mathrm{tr}\,F^4$ contains the operators of this level.
level k = 5
At level k = 5, $K_{XXV}$, $K_{XXVI}$, and $K_{XXVIII}$ are still missing even with the generalized cut operation of this paper under the choice (3.1). In order to resolve this, let us replace (3.1) by (3.23). In this case we obtain additional contributions, whose subscripts r(ed), g(reen), and b(lue) denote the color which acts trivially. Eq. (3.23) is a linear combination of operators whose levels are greater than or equal to 2. The levels of the operators in $\mathrm{tr}\,F^n$ are not always equal to n in such a case; to be more specific, operators of level k must be included in $\mathrm{tr}\,F^n$ for some n ≤ k. One can then observe (3.29). In addition, $K_{XIV}$ is a secondary operator, which the generalized cut operation can already generate with (3.1).
We now arrive at a conjecture: in order to predict all connected operators at the higher levels, all we need to do is to add the new secondary operators found at each lower level to K successively. Then all connected operators at a given level k are included in $\mathrm{tr}\,F^n$ for some n ≤ k.
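The conjecture suggests a simple recursive procedure. The sketch below is only schematic pseudocode in Python form: the callables passed in (the operator extraction from $\mathrm{tr}\,F^n$, the secondary-operator test, and the extension of K) are hypothetical placeholders for the actual tensor-model algebra, not an implementation of it.

```python
# Schematic recursion implied by the conjecture: extend the variational function K
# by the new secondary operators found at each level, then read the connected
# operators of that level off tr F^n for n <= level.  All four arguments are
# hypothetical placeholders for the actual tensor-model algebra.
def enumerate_operators(max_level, initial_K, operators_in_trFn, is_secondary, extend_K):
    K, known = initial_K, {}
    for level in range(1, max_level + 1):
        ops = set()
        for n in range(1, level + 1):                      # n <= level suffices
            ops |= {op for op in operators_in_trFn(K, n) if op.level == level}
        known[level] = ops
        for op in ops:                                     # append new secondaries to K
            if is_secondary(op):
                K = extend_K(K, op)
    return known
```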
Examination at level 6
At level 5, $K_{4C}$ and $K_{22W}$ (together with its differently colored versions, of course) appear as the new secondary operators. Hence, we choose the correspondingly extended variational function $K_{\leq 4}$, and we checked by direct inspection that all operators at level 6 are included in $\mathrm{tr}\,F^n$ for some n ≤ 6 with this $K_{\leq 4}$. We plan to elaborate upon this in the future. In particular, we found 10 independent secondary operators at level 6 up to the coloring, displayed in (4.9)-(4.18). The secondary operators can be constructed as the trace "Tr" of an appropriate product of the objects (2) (in its colored versions), $(3W)_r$, and so on.
Construction of the secondary operators
In the previous section we have seen that, up to level 6, all operators appear as constituents of $\mathrm{tr}\,F^n$ (n ≤ 6). In particular, the secondary operators are constructed as the trace of a product of the ingredients (2) (in its colored versions), $(3W)_r$, and so on. A natural question then arises as to which combinations of these ingredients the secondary operators consist of. Unfortunately, we do not have a complete answer. However, there appear to be some rules governing the correspondence between a "word" of ingredients and each of the secondary operators.
The join operation $\{K, K_2\}$ is the following operation in the pictorial representation: one of the white circles (resp. black dots) in K (resp. $K_2$) is removed, and the open lines with the same color are then connected with each other. Thus, if an operator can be split into two sub-diagrams by cutting one line per color, it appears in the join pyramid. From this fact, the following corollary follows at once: since the operators containing a loop can always be split into two diagrams, as shown in Fig. 1, they are obtained by the join operation.
Since the existence of loops in a diagram means that the operator can be obtained by the join operation, a diagram that includes (2)′ in any of its colorings cannot correspond to a secondary operator, by construction.
Moreover, none of the ingredients (2) (in its three colorings) can be repeated with the same color, because loops are always generated in such cases; for example, (2)(2) generates one green loop. This restriction on the repeated use of ingredients with the same coloring extends to the objects with subscript r, g, b, such as $(3W)_r$. For example, $(3W)_r(2)$ can always be split by cutting the lines depicted by the thick black lines. In fact, (3.14), (3.20), (3.22), (3.29), (3.28), (3.30), and (4.9)-(4.15) satisfy these restrictions. However, we have not been able to formulate rules for (4.16)-(4.18) by the computation up to level 6. In addition, we have seen cases in which different "words" yield the same operator. Despite the incompleteness of the currently constructed rules, in principle our procedure successfully generates all secondary operators level by level, recursively.
| 2,608.2 | 2019-03-25T00:00:00.000 | ["Mathematics"] |
Analytic Optimization of the Halbach Array Slotless Motor Considering Stator Yoke Saturation
Hybrid subdomain analysis is utilized to optimize the design of a high-speed compressor motor. Permanent magnets (PMs) are arranged in a Halbach array and bounded by a carbon fiber sleeve. The stator core has no slot structure, which is advantageous in reducing the iron loss. Since core saturation takes place, a magnetic equivalent circuit (MEC) is used to find the permeability as a function of the flux density of the stator yoke. Furthermore, since the core is not infinitely permeable in the subdomain analysis, the solution is obtained in all subdomain regions: shaft, PM, air gap, coil, and stator yoke. The results of the hybrid analysis are compared with those of finite-element analysis (FEA); the flux density and torque match closely even under yoke saturation. The stator yoke height is optimized against the rotor outer radius so that torque, power density, and efficiency are maximized, while the stator outer radius, stack length, and coil area are kept fixed.
I. INTRODUCTION
The use of extreme high-speed motors (≥100 krpm) is increasing in applications such as compressors and micro-turbine generators [1], [2]. In high-speed motors, the iron loss should be considered since the eddy current loss is proportional to the square of the frequency. Most of the iron loss takes place in the stator core. In an effort to minimize the iron loss, a slotless permanent magnet (PM) motor is considered [3]. In such a case, the effective air gap tends to be large. The Halbach PM arrangement is often beneficial in a large air gap machine, since it can steer all PM fields to the air gap side. Further, the Halbach array also makes it possible to build the rotor without a back iron. Lee et al. [4] used a Halbach array slotless motor as a propulsion motor for lightweight aircraft, since it required high power density and high efficiency.
Subdomain analysis is used to design PM motors of various shapes, since it avoids repeated use of time-consuming finite-element analysis (FEA) [5], [6]. It relies on a large matrix inversion to obtain the coefficients in the general solution of the Laplace and Poisson equations. The solving process is relatively simple when the core is assumed to be infinitely permeable, because the field then enters the iron core boundary perpendicularly. The analysis becomes more complex when the core permeability is finite. In such a case, the field components must also be matched at the boundaries between teeth and slots, the so-called r-edge boundary [7]. Hannon et al. [8] summarized the overall Fourier-based modeling of electrical machines.
To reflect the core saturation in the subdomain analysis, it is necessary to apply a different permeability depending on the flux density. This means that iterative matrix calculations are required, since the permeability and the flux density cannot be calculated separately. Thus, hybrid models which incorporate the magnetic equivalent circuit (MEC) were proposed to handle the core saturation problem. Liang et al. [9] reflected the core saturation by decreasing the equivalent air gap permeability obtained from an MEC analysis. This required an iteration between the subdomain and MEC calculations until they reached the same field solution. On the other hand, Guo et al. [10] obtained the flux density results without using an iteration process. Instead, three different methods were used to analyze interior PM motors: the FEA for the complex rotor configuration, conformal mapping for the slotting effect, and the subdomain analysis for load conditions. However, the rotor geometry was oversimplified in the subdomain model. In this work, a high-speed PM motor is designed using a Halbach array and a slotless stator. A hybrid analysis is proposed to account for the stator yoke saturation. The electromagnetic results of the hybrid analysis are very close to the FEA results. In the optimal design, the split ratio, defined as the ratio of the air-gap radius to the stator outer radius, is determined. Fig. 1 shows a sectional view of a high-speed motor for a turbo compressor. The design goals for maximum speed and power are 120 000 rpm and 4.0 kW, respectively. The motor has a two-pole structure with a three-phase distributed winding.
II. HYBRID ANALYSIS
In this work, the hybrid analysis is utilized to design the Halbach array slotless motor with stator yoke saturation. The roles of the MEC and subdomain methods are assigned according to the advantages of each method. The MEC method is used to find the relative permeability of the stator yoke (μ_i). The detailed flux density of each region is then obtained by the subdomain analysis with the μ_i provided by the MEC analysis: the permeability is found in the inner loop of the MEC analysis, whereas the overall motor performance is evaluated using the subdomain analysis in the outer loop.
A. MEC Analysis for Stator Permeability
Before establishing the subdomain analysis, it is necessary to find a proper permeability μ_i of the stator yoke. The stator permeability depends on the nonlinear B-H curve of the material. Thereby, the core flux density and its permeability form an iterative relation in solving the MEC: μ_i is required for the calculation of B, and a value of B is necessary to read μ_i from the B-H curve. Kano et al. [11] solved the MEC repeatedly to find a proper pair (B, μ_i) using the update algorithm of (1) and (2), where B_y and H_y are the magnetic flux density and field intensity of the stator yoke, respectively, d is the damping constant, and k is the iteration index.
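Since the update equations (1) and (2) are not reproduced above, the sketch below only illustrates the general damped fixed-point idea on an assumed single-loop magnetic circuit with an illustrative B-H table; the circuit topology, all numerical values, and the B-H data are placeholders, not the values of this design.

```python
# Damped fixed-point iteration for the pair (B_y, mu_i) of the stator yoke.
# The one-loop MEC (MMF source + fixed PM/air-gap reluctance + yoke reluctance)
# and the B-H data are illustrative placeholders only.
import numpy as np

H_tab = np.array([10, 50, 100, 200, 500, 1000, 5000, 20000])   # A/m (assumed)
B_tab = np.array([0.2, 0.8, 1.2, 1.5, 1.7, 1.8, 1.95, 2.1])    # T   (assumed)

def mu_r(B):
    """Relative permeability read from the (assumed) B-H curve."""
    H = np.interp(B, B_tab, H_tab)
    return B / (4e-7 * np.pi * H)

def solve_yoke(F_mmf=325.0, R_fixed=2.0e6, l_y=0.05, A_y=1.0e-4, d=0.5,
               tol=1e-6, max_iter=200):
    """Return (B_y, mu_i) from a damped iteration; d is the damping constant."""
    B = 1.0                                            # initial guess, T
    for _ in range(max_iter):
        mu = mu_r(B)
        R_yoke = l_y / (4e-7 * np.pi * mu * A_y)       # yoke reluctance
        B_new = (F_mmf / (R_fixed + R_yoke)) / A_y     # flux / area from the MEC
        B_next = (1 - d) * B + d * B_new               # damped update
        if abs(B_next - B) < tol:
            return B_next, mu_r(B_next)
        B = B_next
    return B, mu_r(B)

B_y, mu_i = solve_yoke()
print(f"B_y = {B_y:.2f} T, mu_i = {mu_i:.0f}")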
Once the motor dimensions are determined, the stator permeability is calculated using (1) and (2). A simple MEC is depicted on the section diagram. Each PM is denoted by a current source along with a parallel resistance. The reluctance and MMF are calculated using the geometric dimensions and magnetic permeability [12]. Here, the stator coil MMF is not considered as the armature reaction is relatively minor. Details about determining reluctance and MMF in the MEC model are omitted here.
B. Subdomain Analysis
where A is the vector potential in each region, μ_0 is the vacuum permeability, J_i is the current density in the i-th coil region, and the remaining source terms are the radial and circumferential magnetizations of the PM, respectively [6]. The general solutions are written in the Appendix. The unknown coefficients in the solutions are determined by the boundary conditions. There are two types of boundary conditions: one is imposed over the angle interval at each θ-edge boundary (r = r_si, r_c, r_m, r_ro), and the other is imposed over the radius interval at each r-edge boundary (θ = φ_ci ± φ_c/2). It should be noted that the r-edge boundary must be set on the boundary between different phase coil regions; it is handled by adding the series in r [7].
where $Z_{1u} = (r_{si}/r_{so})^{up}$ and $Z_{2m} = (r_c/r_{si})^{F_m}$. In the same way, a second relation follows from the tangential-field condition between $H_{\theta 1}$ and $H_{\theta 2i}$. At $r = r_c$ and $\theta \in [\phi_{ci} - \phi_c/2,\ \phi_{ci} + \phi_c/2]$, continuity of the vector potential, $A_{z2i} = A_{z3}$, yields a relation in which $Z_{3u} = (r_m/r_c)^{up}$, together with the corresponding boundary condition on the tangential field. At $r = r_m$ and $\theta \in [0,\ 2\pi/p]$, continuity of the vector potential, $A_{z3} = A_{z4}$, yields a relation in which $Z_{4u} = (r_{ro}/r_m)^{up}$; the associated field boundary condition involves $\mu_m$, the relative permeability of the PM. At $r = r_{ro}$ and $\theta \in [0,\ 2\pi/p]$, continuity of the vector potential yields
$$a_{4o} + b_{4o}\ln r_{ro} = b_{5o}\ln(r_{ro}/r_{ri}), \qquad (24)$$
where $Z_{5u} = (r_{ri}/r_{ro})^{up}$, together with the corresponding field boundary condition. The above equations summarize the θ-edge boundary conditions. Note that $\eta_{io}$, $\zeta_{io}$, $\eta_i$, $\zeta_i$, $\kappa_{okp}$, $\kappa_{okn}$, $\kappa_{iscp}$, $\kappa_{iscn}$, $\kappa_{issp}$, and $\kappa_{issn}$ are coefficients arising from the Fourier series expansion; due to the page limit, only two of their definitions are written out. The remaining equations summarize the r-edge boundary conditions. As in the θ-edge problem, $\nu_r$, $\nu_l$, $\nu_{rs}$, $\nu_{rF_mp}$, and $\nu_{rF_mn}$ are coefficients of the series expansion [7]; again, only two are shown. The vector potential is solved by assembling all the regional boundary equations (8)-(31) into a single matrix equation whose unknowns are the coefficient vectors. The θ-edge conditions between Region 1 and Region 2i are condensed in $Q_{11}$, $Q_{12}$, and $P_1$. The same conditions between Region 2i and Region 3 are summarized in $Q_{22}$, $Q_{23}$, and $P_2$, the conditions between Region 3 and Region 4 in $Q_{33}$, $Q_{34}$, and $P_3$, and the conditions between Region 4 and Region 5 in $Q_{44}$, $Q_{45}$, and $P_4$; the r-edge conditions within Region 2i are collected in $Q_{52}$ and $P_5$.
III. FEA VALIDATION
The motor parameters for validation are listed in Table I. The analytic results for the air gap flux density are compared with the 2-D FEA results under a loaded condition in Fig. 3(b); the two results agree well in both the radial and circumferential directions. Fig. 4(a) and (c) shows the field intensity H and flux density B along the middle arc of the stator yoke. The FEA and analytic results for B are in full agreement, whereas the two results differ considerably for H. This is because the stator yoke was not segmented into many pieces with a different permeability assigned to each segment according to its level of saturation. This becomes clear when the relative permeability shown in Fig. 4(b) is examined: the FEA result is 8000 in the non-saturated region but drops to 30 in the highly saturated area, whereas in the subdomain analysis a constant value of 60 is assigned to the entire stator. It is surprising that the B values match well despite the differences in H and μ_i. The reason is that the stator has no slots and the back yoke is very narrow; as a result, it is mostly saturated and the machine behaves like an air-cored machine, leaving only the fundamental sinusoidal flux component. Accordingly, the single-domain solution with an average μ_i obtained from the MEC yields a similar B even in the stator yoke. Fig. 6(a) shows the design constraints and the design variable. It is assumed that the stator outer diameter, the magnet height, the stack length, and the carbon sleeve height are fixed. The coil current density, i.e., the coil area, is also fixed. However, the rotor radius is considered as a design variable and is limited by the maximum speed (r_m ≤ 13 mm [1]). The split ratio is defined as γ = (r_m + 1)/r_so, where "1" is the height of the carbon sleeve in mm, r_m is the magnet outer radius, and r_so is the stator outer radius [2]. Therefore, when the split ratio changes, the yoke height is traded against the PM area, as the coil area is fixed. The yoke saturates at a large γ, while the air gap flux density and torque are reduced at a low γ. Fig. 5(a) shows a comparison of the time-stepping torque calculated by the Maxwell stress tensor [5] when the split ratio is 0.56. Here, "subdomain" refers to a subdomain result with μ_i = ∞ in the stator yoke, whereas "hybrid" means the subdomain result with a finite μ_i obtained from the MEC. The proposed hybrid analysis yields a torque identical to the FEA result. Note also from Fig. 5(b) that the average torque increases with the split ratio up to γ = 0.56; after that, it drops due to stator saturation as the stator yoke height rapidly decreases. It should be emphasized that the FEA and this hybrid subdomain analysis match well over the whole range.
IV. DESIGN OPTIMIZATION
Power density is defined as the mechanical power divided by the total weight of the active material, i.e., $T\omega_r/(m_y + m_c + m_s + m_m)$, where T is the magnetic torque, ω_r is the mechanical speed, and m_y, m_c, m_s, and m_m are the weights of the iron yoke, coil, sleeve, and PM, respectively. The electrical power P_e applied to the motor is the sum of the mechanical power, the copper loss, and the iron loss, where R_ph is the phase resistance, k_h is a hysteresis constant, k_e is an eddy current constant, β is the Steinmetz constant [13], and v_y is the volume of the stator yoke. The efficiency is then defined as χ_e = Tω_r/P_e. Note that the copper loss is constant, since the coil area is kept the same independently of the split ratio. On the other hand, B_y and v_y are affected by the split ratio through the change in the yoke height. Fig. 6 shows the change in power density and efficiency as the split ratio changes. The efficiency steadily decreases as the yoke height is reduced, whereas the power density increases until γ = 0.56, because the torque is maximized at γ = 0.56 (as shown in Fig. 5) while the yoke mass decreases. Beyond that point, the power density also drops due to yoke saturation.
The two design criteria can be combined with a weighting factor W such that δ = p_n W + e_n (1 − W), where p_n and e_n are the power density and efficiency normalized by target values. The bar graph in Fig. 6(b) shows the evaluation factor when W = 0.5, the target power density is 12 kW/kg, and the target efficiency is 0.95. Based on it, the optimal design is determined as γ = 0.56. Fig. 7 shows the flux density contours of two case designs, for γ = 0.5 and 0.56. Design 1 has a greater yoke height and is thus efficiency oriented; Design 2 has a larger rotor radius and is thus a power-density-oriented design.
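As a numerical illustration of how the two criteria are combined, the sketch below evaluates the power density, the efficiency χ_e, and the evaluation factor δ for a few candidate designs. The torque, masses, and losses are invented placeholders (only the 120 krpm speed and the target values of 12 and 0.95 follow the text), so the numbers are not the paper's results.

```python
# Illustration of the combined design criterion delta = p_n*W + e_n*(1-W).
# Torque, masses and losses below are invented placeholders, not the paper's data.
import numpy as np

W = 0.5                                      # weighting factor from the text
target_pd, target_eff = 12.0, 0.95           # target power density and efficiency
omega_r = 2 * np.pi * 120_000 / 60           # 120 krpm mechanical speed, rad/s

# candidate designs: (split ratio, torque [Nm], active mass [kg], total loss [W])
designs = [(0.50, 0.300, 0.36, 260.0),
           (0.56, 0.318, 0.34, 270.0),
           (0.60, 0.305, 0.33, 300.0)]

for gamma, T, mass, losses in designs:
    P_mech = T * omega_r                     # mechanical power, W
    pd  = P_mech / mass / 1000.0             # power density, kW per kg of active mass
    eff = P_mech / (P_mech + losses)         # efficiency chi_e = T*w_r / P_e
    delta = (pd / target_pd) * W + (eff / target_eff) * (1 - W)
    print(f"gamma={gamma:.2f}  P={P_mech/1e3:.2f} kW  pd={pd:.2f} kW/kg  "
          f"eff={eff:.3f}  delta={delta:.3f}")
```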
V. CONCLUSION
A high-speed compressor motor is designed with the Halbach PM arrangement. The Halbach arrangement combines well with the slotless motor, since the circumferentially magnetized PMs steer all of the radial field toward the air gap. As a design tool, hybrid subdomain analysis was utilized. When yoke saturation takes place, the assumption of infinite permeability is no longer valid; the finite-permeability core region must then be solved, which makes the solution process much more complex. Since the permeability decreases with saturation, a proper value of permeability has to be found for each flux density, and to this end the MEC was used iteratively. The hybrid subdomain analysis yielded results very similar to the FEA in both field and torque. In determining the rotor size and the height of the stator yoke, both power density and efficiency were considered. The design study shows that the optimal design is found at a larger rotor radius, accepting a slight reduction in efficiency. Based on this optimized design, the actual motor is now being manufactured.
| 3,633 | 2021-02-01T00:00:00.000 | ["Physics", "Engineering"] |
About creation of machines for rock destruction with formation of apertures of various cross-sections
The article presents the results of experimental research on the destruction of high-strength rock by a bladeless tool. Rational layouts for arranging indenters on the impact part of drill bits and a diamond tool are justified. New design solutions for reinforcing bladeless drill bits, which allow blast-holes of various cross-sections to be drilled, are presented.
Statement of the problem
The development of new systems for mineral extraction and the improvement of known ones are connected with the wide application of drilling operations of various kinds [1][2][3][4][5]. The primary way of destroying high-strength rocks is the percussive method: blows struck on the tool can be transmitted to the object being destroyed with the highest energy per unit time. Drilling without rotation of the tool around its axis [6] allows the drilling machine design to be simplified considerably, since the rotation mechanism is no longer needed, and allows blast-holes of any geometrical form to be drilled. The percussive method of destruction can be realized with a bladeless tool, i.e., non-coring bits with indenters [7].
Commercial tests showed considerable advantages of indenters over blade-equipped tools in terms of endurance, drilling rate, specific consumption of hard alloy, and drilling costs. Equipping bits with indenters makes it possible to create an almost unlimited number of alternative bit designs by simple means: by changing, for example, their shape, number, and arrangement for the specified mining and technological conditions of the deposit, the most suitable tool design can be chosen. At present, however, there are no well-founded recommendations for choosing the size, shape, and number of indenters, or for their optimal arrangement on the drilling tool.
The project was financially supported by the Ministry of Education and Science of the Russian Federation within the Federal target program 'Research and development of priority directions of scientific-technological complex development of Russia for 2014-2020' (agreement No. 14.607.21.0028 from 05.06.2014, unique ID RFMEFI60714X0028).
Problem solution
In order to confirm the possibility of drilling rocks without axial rotation of the tool, as well as to reveal the features of penetration of bladeless indenters, experimental research [8] was conducted. During the experiment, indenters were driven into rocks by the universal testing machine 'IK-500.01' (Figure 1). The experimental technique was as follows. The rock block was installed on the lower platform of the stand. A vertically oriented non-coring bit was fixed in the upper part of the stand so that it could rotate in the course of the test. The loading rate and the maximum load were set at the hydraulic station. After the stand was switched on, contact between the bit and the granite block was established, and the bit was then loaded at a rate of V_H = 1 kN/s. The computer control system recorded the dependence of the bit penetration depth into the granite on the magnitude of the load (a 'force-penetration' diagram). The first cycle of the test proceeded until the rock was crushed and the penetrating tool formed a characteristic hole under the indenters. The subsequent test was carried out after cleaning the hole of the crushed material and installing the non-coring bit into the same hole. The loading proceeded until the applied force reached F_max = 100 kN. After each test, the broken rock was analyzed.
The above-mentioned 'force-penetration' diagrams allow us to study the mechanism of dynamic rock destruction by the tool. The 'force-penetration' diagram makes it possible to identify the various stages of the process of tool penetration into the rock. The first section of the curve, up to the reduction of the load, indicates the accumulation of elastic deformation resulting in brittle fracture of the rock. The next section shows a sharp drop in the load at a small depth of penetration. Then, under almost constant load, the depth of tool penetration increases; this stage corresponds to brittle fracture of the rock. The cycle is then repeated.
Analysis of the results
During the experiment the following results have been obtained.
As a result of a statistical analysis of various functional relationships, it was determined from the 'force-penetration' diagrams that a hyperbolic function gives the closest fit, where P is the magnitude of the load, h is the depth of tool penetration, and k_i (i = 1, 2, 3) are empirical coefficients that characterize the object of destruction. The physical meaning of the coefficients k_i is the compliance of the destroyed object to the tool. The resulting function adequately describes the experimental results and allows the effectiveness of the tool to be judged.
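Because the explicit hyperbolic expression is not reproduced above, the sketch below assumes one plausible three-parameter hyperbolic form, P(h) = k1*h/(k2 + h) + k3, purely to illustrate how empirical coefficients of this kind could be extracted from a recorded force-penetration diagram; both the functional form and the data points are illustrative assumptions, not the authors' relation or measurements.

```python
# Illustrative fit of a force-penetration diagram with an assumed hyperbolic law
# P(h) = k1*h/(k2 + h) + k3.  The data points below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(h, k1, k2, k3):
    return k1 * h / (k2 + h) + k3

h_mm = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0])          # penetration, mm
P_kN = np.array([18.0, 31.0, 48.0, 58.0, 65.0, 74.0, 79.0, 82.0])   # load, kN

k, _ = curve_fit(hyperbolic, h_mm, P_kN, p0=(100.0, 3.0, 0.0))
print("k1=%.1f kN, k2=%.2f mm, k3=%.1f kN" % tuple(k))
```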
During penetration of a single indenter, a tightly compacted and quite thin bowl-shaped layer of rock is formed under the contact pad. Its diameter is slightly smaller (by 1-2 mm) than the diameter of the hole. The rest of the destroyed material consists of less compacted, shattered layers that can be removed without considerable effort with a sharp object.
During simultaneous penetration of two, three, or four indenters (Figure 3), the rock located between the indenters is destroyed by large-scale spalling. At the optimum distance between three neighboring indenters, l_i = 1...2.5 d_i, located at the vertices of an equilateral triangle, the total volume of destruction increases on average by a factor of 1.5-2.5. From the analysis of the obtained 'force-penetration' diagrams, the distribution of the applied force over the number of simultaneously penetrating indenters was established. If the force per indenter is related to the volume of fracture (Figure 4), the optimum scheme from the point of view of energy consumption is the one in which three indenters are located at the vertices of an equilateral triangle. Thus, the interference effect of closely spaced and simultaneously penetrating indenters was verified experimentally. For a given blast-hole diameter, it is possible to find a mutual arrangement of three indenters that allows holes to be drilled without rotating the tool around its geometrical axis. This drilling mode makes it possible to obtain holes of non-circular cross-section. It should also be noted that the figure formed by the neighboring indenters need not be a precise equilateral triangle; the angle between the sides can lie in the range 60 ± 12°. Examples of the recommended layouts of indenters on the working part of non-coring bits are shown in Figure 5. Figure 6 shows a bit with a rational arrangement of indenters by means of which, in churn drilling carried out with the participation of the author, a blast-hole of rhombic cross-section was obtained.
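One way to visualize the geometric constraint is to place the three indenters at the vertices of an equilateral triangle inscribed so that their outer edges reach the blast-hole wall and then to check the resulting spacing against the recommended range quoted above. The helper below is purely illustrative; the hole diameter and indenter diameter are assumed values, not dimensions from the experiments.

```python
# Place three indenters of diameter d_i at the vertices of an equilateral triangle
# and check that the spacing stays inside the experimentally recommended range
# l = (1 ... 2.5) * d_i.  Blast-hole diameter and indenter size are assumed values.
import math

def indenter_layout(hole_diameter, d_i):
    R = hole_diameter / 2.0 - d_i / 2.0          # circle on which the indenter centers lie
    spacing = R * math.sqrt(3.0)                 # side of the equilateral triangle
    vertices = [(R * math.cos(a), R * math.sin(a))
                for a in (math.radians(90 + 120 * k) for k in range(3))]
    ok = d_i <= spacing <= 2.5 * d_i             # recommended range l = 1...2.5 d_i
    return vertices, spacing, ok

verts, l, ok = indenter_layout(hole_diameter=42.0, d_i=16.0)   # mm, assumed
print(f"spacing l = {l:.1f} mm, within recommended range: {ok}")
```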
Conclusion
The new designs of drill bits and diamond tools make it possible to drill short holes of various cross-sections, including non-circular ones. In the sharp corners of such a blast-hole, stress concentrators are created, which offers the possibility of purposeful action on the rock mass with an essential decrease in drilling footage.
| 1,677.2 | 2016-04-01T00:00:00.000 | ["Materials Science"] |
Antimicrobial Activity and Cytotoxicity of Ag(I) and Au(I) Pillarplexes
The biological activity of four pillarplex compounds featuring different metals and anions was investigated. The toxicity of the compounds against four bacterial strains [Bacillus subtilis (ATCC6633), Staphylococcus aureus (ATCC6538), Escherichia coli (UVI isolate), Pseudomonas aeruginosa], one fungus (Candida albicans), and a human cell line (HepG2) was determined. Additionally, a UV-Vis titration study of the pillarplexes was carried out to check their stability against changes in pH and chloride concentration and to evaluate their applicability in physiological media. All compounds are bioactive: the silver compounds showed higher activity against bacteria and fungi, and the corresponding gold pillarplexes were less toxic toward human cells.
INTRODUCTION
Since the early 2000s, coinage metal complexes featuring N-heterocyclic carbenes (NHC)-a ligand class with a facile tunability toward sterics, electronics, and solubility-have been employed as bioactive compounds (Herrmann, 2002;Mercs and Albrecht, 2010;Hopkinson et al., 2014). As first examples, silver (I) NHC complexes have been used as antimicrobial compounds, pioneered by Youngs et al. (Kascatan-Nebioglu et al., 2004;Melaiye et al., 2004), and a respective applicability of such compounds has been shown for a variety of complexes ever since (Figure 1) (Kascatan-Nebioglu et al., 2007;Hindi et al., 2009;Oehninger et al., 2013;Liang et al., 2018). Hereby, a slow release of silver ions originating from the decomposition of the NHC complexes is expected to be the cause of their activity, which can be rationalized by the comparably labile metal-carbene bond (with respect to other late transition metal-NHC bonds) (Kascatan-Nebioglu et al., 2007). The more stable gold (I) NHC complexes were also employed in studies investigating their antibiotic potential (Lazreg and Cazin, 2014). One possible target are (seleno)-cysteine moieties in proteins, e.g., thioredoxin reductase, accompanied by the inhibition of the enzyme, which is similar to the mode of action proposed for the approved metallodrug Auranofin (Baker et al., 2005;Schuh et al., 2012). This is expected in particular for gold(I) mono-carbene complexes, which can dissociate one (labile non-NHC) ligand to coordinate the sulfur or selenium atom (Rubbiani et al., 2011Cheng et al., 2014;Meyer et al., 2014;Arambula et al., 2016;Bertrand et al., 2017;Karaca et al., 2017a;Schmidt et al., 2017;Zhang et al., 2018). In case of the di-NHC complexes, which are more stable toward dissociation, a different mode of action can be observed. Casini and coworkers were able to show stacking of Au(I) di-caffeine NHC complexes in G4 quadruplex DNA structures, inhibiting telomerase activity (Bertrand et al., 2014;Bazzicalupi et al., 2016;Karaca et al., 2017b). Hereby, the overall structure of the intact complex (being planar, cationic, and possessing a conjugated system for stacking) determines the ability to interact in a non-covalent binding, forming supramolecular aggregates. A related supramolecular recognition of biomolecules causing bioactivity was discovered by Michael Hannon and coworkers, who were using cylindrical metal helicates-a class of supramolecular coordination complexes (SCCs, Figure 1)-to interact with different DNA structures (Meistermann et al., 2002;Oleksi et al., 2006;Hannon, 2007;Ducani et al., 2010;Phongtongpasuk et al., 2013;Malina et al., 2016). They showed, that the overall charge of the compounds (4+) as well as the aromatic parts of the ligands were crucial for supramolecular recognition of the negatively charged DNA. In general, such supramolecular coordination compounds are discussed as a promising class for future applications as metallodrugs or drug delivery systems (Casini et al., 2017).
We recently introduced the pillarplexes (Figure 1), a new family of SCCs which are structurally similar to Hannon's cylindrical helicates yet additionally exhibit a pore that allows encapsulation of guest molecules inside the complex (Altmann and Pöthig, 2016). These compounds are octanuclear coinage metal complexes with two coordinating macrocyclic NHC ligands. Due to their in-built functionality (e.g., luminescence, easily tunable solubility) arising from the metal-complex character, the pillarplexes are even more versatile than their highly successful organic relatives, the pillararenes (Ogoshi et al., 2008, 2016). The latter have also been applied in biomedical applications very recently, for reducing the cytotoxicity and improving the anticancer bioactivity of oxaliplatin (Hao et al., 2018). In the case of the pillararenes and metallocage systems (Casini et al., 2017), the cavitand itself shows no bioactivity and can therefore be used to modulate the selectivity and activity of an actual metallodrug.
Our pillarplexes combine the possibility of behaving like NHC complexes, i.e., as metallodrugs themselves, with the possible applications of cavitands. Therefore, to explore the future potential of our pillarplexes in the biomedical context, we conducted a toxicity study. We tested the antimicrobial activity of the four metal complexes (3-6), the metal salts, and the ligand precursor salts (1, 2) toward four different bacterial strains [Bacillus subtilis (ATCC6633), Staphylococcus aureus (ATCC6538), Escherichia coli (UVI isolate), Pseudomonas aeruginosa] as well as one fungus (Candida albicans). We also evaluated the toxicity of the complexes toward a human cell line (HepG2) in order to clarify whether related future research directions might be promising to follow. Finally, we conducted a stability study of the pillarplexes toward changes in pH and chloride ion concentration, which has implications for the use of the compounds under physiological conditions.
General Details
Compounds 1-6 were prepared according to the reported procedures (Altmann et al., 2015;Altmann and Pöthig, 2016). Chemicals were purchased from commercial suppliers and used without further purification if not stated otherwise. Liquid NMR spectra were recorded on a Bruker Avance DPX 400 and a Bruker DRX 400 at room temperature if not stated otherwise. Chemical shifts are given in parts per million (ppm) and the spectra were referenced by using the residual solvent shift as internal standards. Emission spectra were recorded on a Agilent Cary 60 UV-Vis. Nutrient agar plates were prepared according to the instructions provided by Oxoid where 28 g of nutrient agar (CM0003) was needed to make 1 L of nutrient agar broth. 11.2 g of the agar was added to three 400 ml glass bottles. Four hundred milliliters of distilled water was added into each bottle containing the nutrient agar and was dissolved by stirring. After sterilization, the nutrient agar bottles were cooled to 50 • C and then placed into a 50 • C water bath for the temperature to remain constant. The nutrient agar was then poured halfway into 9 cm sterile petri dishes in HEPA filtered laminar flow cabinets to minimize the risk of contamination. The nutrient agar plates were then left to solidify and were refrigerated at 4 • C. Mueller-Hinton agar plates were prepared from Mueller-Hinton agar medium (Sigma-Aldrich) and agar (Oxoid LP0011). Twenty-two grams of the Mueller-Hinton medium was added into 1 L of distilled water in a volumetric flask and dissolved with a magnetic flea at speed 6-7 and temperature 300 • C for ∼10 min (IKA Labortechnik). Fifteen grams of agar was added to the mixture and the stirring continued at speed 5-6 and at temperatures between 200 and 250 • C until the mixture began to boil. After sterilization, the Mueller-Hinton broth was cooled to 50 • C and stirred slightly with a magnetic flea for ∼1 min (IKA Labortechnik). Sixty milliliters of the Mueller-Hinton broth was poured into each 13 cm petri dish using the media dispensing machine (IBS Integra Biosciences Technomat) using aseptic techniques. The Mueller-Hinton plates were allowed to cool and then stored in a refrigerator at 4 • C. Plates were sterilized and stored at 4 • C in the refrigerator before use. One liter of a 0.9% solution of sodium chloride was prepared and sterilized at 121 • C for 20 min at 1 atm. The bacteria were streaked onto a nutrient agar plate using a sterile loop and incubated at 37 • C overnight. The fungus was streaked using a sterile loop onto a TSA plate and incubated at 25 • C for 48 h. Fresh streaks were prepared for each disc diffusion assay.
Disc Diffusion Assays for Antimicrobial Activity
Antimicrobial activity was measured using the disc diffusion assay essentially as described in guidelines from Clinical and Laboratory Standards Institute CLSI (2012). The bacteria were maintained on Nutrient agar (Oxoid), while the fungus was maintained on Sabouraud dextrose agar (Oxoid). An inoculum of the test microorganisms were made by resuspending freshly overnight grown colonies into 2 mL of a sterile salt solution (0.9% NaCl). The test organism was diluted to McFarland standard density no. 2 and mixing thoroughly (McFarland, 1907). For the Gram-negative bacteria and fungus, 60 µL of the inoculum was added to 25 mL of sterile salt solution, while 120 µL was added for the Gram-positive bacteria.
To prepare the plates for the disc diffusion assay Mueller-Hinton agar 2 (Sigma-Aldrich) were covered with 5 mL of the freshly made inoculate. The surplus inoculate was removed and the plates were then left in a laminar flow hood until the surface of the plates were completely dry.
Six millimeter filter discs were impregnated with 10 µL volume of the ligand precursor compounds 1 [L(PF 6 ) 4 ] and 2 [L(OTf) 4 ], silver pillarplexes 3 [Ag 8 L 2 (PF 6 ) 4 ] and 4 [Ag 8 L 2 (OAc) 4 ], and gold pillarplexes 5 [Au 8 L 2 (PF 6 ) 4 ] and 6 [Au 8 L 2 (OAc) 4 ]. The concentration of the compounds used were 10 mM. Further filter discs were also impregnated with 10 µl of: dimethyl sulfoxide (DMSO) acting as a negative control; 10 mM of silver nitrate, 10 mM of gold chloride acting as model compounds for free metal ions; and antibiotic discs including pre-impregnated 30 mg/ml gentamycin sulfate discs (BD BBL Sensi-Disk) (E. coli, S. aureus, P. aeruginosa), pre-impregnated 30 mg/ml tetracycline discs (BD Sensi-Disc) (B. subtilis), and 10 mM of Miconazole nitrate discs (Sigma-Aldrich) (C. albicans), acting as positive controls. The filter discs were placed evenly on 13 cm Mueller-Hinton agar plates separated to avoid overlapping inhibitions zones. The plates were incubated overnight at 32 • C for the bacteria or 25 • C for the fungus. The inhibition zones were measured with a caliper. All experiment was performed at least three times.
In vitro Toxicity in HepG2 Liver Cells
Human hepatocarcinoma cell line HepG2 (HB-8065, ATCC, Manassas, VA, USA) was cultured in MEM-Glutamax (5.5 mM glucose) supplemented with 10% fetal bovine serum (Gibco, Life Technologies AG, Basel, Switzerland), 100 µg/mL streptomycin, and 100 units/mL penicillin (both from Gibco, Life Technologies AG, Basle, Switzerland). Cells were incubated at 37 • C under a 5% CO 2 atmosphere. For viability assays, cells were seeded in white 96-well Nunc plates at a density of 20,000 cells/well and left overnight to adhere before experiments were conducted.
The compounds were dissolved in DMSO at concentrations ranging from 10^-3 to 10^-6 M and were added to white 96-well plates (the maximum DMSO concentration in the wells was lower than 1%) containing 20,000 HepG2 cells/well. Plates were incubated for 24 h at 37 °C in a 5% CO2 atmosphere. After 24 h, AlamarBlue cell viability reagent (Thermo Fisher, Carlsbad, CA, USA) was added as a 10% solution, and the plates were placed back in the incubator for 4 h. AlamarBlue is a redox indicator yielding a fluorescence signal proportional to the number of viable cells in each well (O'Brien et al., 2003). The fluorescence signal was measured in a microplate reader (Clariostar, BMG Labtech, Ortenberg, Germany) at 550 nm/603 nm (excitation/emission). Data from four replicates were used to calculate the half-maximal inhibitory concentration (IC50) using a four-parameter logistic (4PL) regression of response against log(concentration) in GraphPad Prism 7 (GraphPad Software Inc., USA). The experiment was repeated twice with similar results.
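For reference, a four-parameter logistic fit of the kind performed in GraphPad Prism can be reproduced in Python as sketched below; the viability data are invented placeholders, and the IC50 is read off as the concentration at the curve's inflection point.

```python
# Four-parameter logistic (4PL) fit of viability vs. log10(concentration),
# as used for the IC50 determination.  The data below are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(logc, bottom, top, logIC50, hill):
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((logc - logIC50) * hill))

conc_M    = np.array([1e-6, 3e-6, 1e-5, 3e-5, 1e-4, 3e-4, 1e-3])
viability = np.array([98.0, 95.0, 90.0, 70.0, 35.0, 12.0, 5.0])   # % of control

popt, _ = curve_fit(four_pl, np.log10(conc_M), viability,
                    p0=(0.0, 100.0, -4.3, 1.0))
ic50 = 10.0 ** popt[2]
print(f"IC50 approx {ic50 * 1e6:.1f} micromolar")
```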
Stability Tests of Pillarplexes Against Chloride
The titrations of the silver pillarplex 4 and the gold pillarplex 6 against chloride ions were carried out by stepwise addition of increasing volumes of a 3.072 M sodium chloride solution to 2 mL of a 1.38 × 10^-5 M aqueous pillarplex solution, followed by thorough mixing in a quartz cuvette. The UV-Vis absorption spectra were recorded immediately after each addition. The measured absorbance was corrected for the increase in sample volume.
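The volume correction mentioned above amounts to multiplying each measured absorbance by the dilution factor; a minimal sketch (with invented readings) is:

```python
# Dilution correction for a stepwise titration: the measured absorbance is scaled
# by (V0 + V_added)/V0 so that values remain comparable to the initial volume.
# The absorbance readings below are invented placeholders.
V0_mL = 2.0
added_uL   = [0, 5, 10, 20, 40, 80]            # cumulative volume of NaCl solution added
A_measured = [0.95, 0.90, 0.84, 0.76, 0.66, 0.55]

for v, a in zip(added_uL, A_measured):
    a_corr = a * (V0_mL + v / 1000.0) / V0_mL
    print(f"+{v:3d} uL  A_meas = {a:.3f}  A_corr = {a_corr:.3f}")
```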
Stability Tests of Pillarplexes Against pH
The stability of the silver pillarplex 4 and the gold pillarplex 6 at different concentrations of trifluoromethanesulfonic acid (HOTf) was monitored by UV-Vis spectroscopy. One milliliter of a 2.76 × 10^-5 M aqueous pillarplex solution was injected into an equal volume (1 mL) of HOTf solution with pH values of 2, 4, 5, and 6 in the quartz cuvette. The absorption spectra were recorded after 1 min and after 1, 7, 24, 48, and 72 h (see Supplementary Material).
Antimicrobial Activity Studies
The results of the antimicrobial studies are summarized in Table 1.
Both silver compounds (entries 3 and 4) show antimicrobial activity against all bacterial strains as well as against the fungus. The activity is independent of the anion present, as the results are identical within the margin of error. In comparison to the positive controls (entries 10-12) the overall activity is moderate; however, within statistical uncertainty it is identical to that of AgNO3 (entry 7), which has been used as an antibiotic since ancient times (Danscher and Locht, 2010). Hence, we suspect a release of silver ions via decomposition of the pillarplexes, which is in agreement with the general behavior of silver(I) NHC complexes, as stated above.
The gold pillarplexes show lower to no activity (entries 5 and 6). Compound 6, the completely water soluble acetate, shows no activity against any of the microbes, whereas, the more lipophilic compound 5 shows a selective moderate activity against Gram-negative E. coli and Gram-positive S. aureus.
In contrast, AuCl3 shows activity against all bacterial strains (interestingly, not against the fungus), which of course might additionally be influenced by the redox activity of the gold(III) ion. However, we suspect that the gold pillarplexes are more stable in the physiological environment and therefore do not release uncoordinated metal ions, which would explain their lower activity. Similarly, if the gold complexes decomposed, a toxicity similar to that of the free ligand precursors would be expected. In general, such imidazolium salts are known to be potentially toxic, depending on different factors, e.g., lipophilicity or the anions (Gravel and Schmitzer, 2017). In our case, the two macrocyclic polyimidazolium ligand precursors (entries 1 and 2) show only moderate and very selective toxicity, solely against the Gram-positive bacteria S. aureus and B. subtilis. Gram-positive bacteria lack the outer membrane surrounding the cell wall; this outer membrane excludes, by various mechanisms, certain drugs from penetrating the bacterial cell (Hancock, 1997) and could be the reason for the antimicrobial selectivity of compound 2. For the latter, no activity at all was observed in the case of the gold pillarplexes, which is why we rule out a possible decomposition.
Cell Toxicity Studies
The results of the toxicity study of the compounds against human HepG2 liver cells are summarized in Table 2. The IC50 values were determined for all compounds; however, the silver pillarplexes (3 and 4) as well as AgNO3 and AuCl3 all showed precipitation to some degree. This can influence both the uptake of the compounds by the cells and the absorbance read-out, resulting in ambiguous measurement results, which we have marked with an asterisk in Table 2.
In general, all tested compounds exhibit biological activity. Both ligand precursors (1 and 2) exhibited low toxicity levels, which corresponds to the determined IC50 values. In contrast, high cell toxicity was observed at concentrations higher than 100 µM for all pillarplexes (see figures in the Supplementary Material). According to the determined IC50 values, the gold congeners are more active within the pairs of pillarplexes with the same anions (3 vs. 5 and 4 vs. 6).
However, they also show a higher baseline RFU compared to the silver compounds, indicating that the gold compounds are less toxic.
With regard to the effect of the anions, the more water-soluble compounds (2, 4, 6: triflates or acetates) show higher activity than the less water-soluble hexafluorophosphate salts (1, 3, 5). In general, the same trends as in the antimicrobial assay are observed with the HepG2 cells. The silver pillarplexes appear to be more toxic and more active than their gold counterparts. Precipitation was observed for the silver pillarplexes as well as for AgNO3 and AuCl3, whereas the gold pillarplexes did not exhibit any stability or solubility issues.
Stability Tests
To evaluate possible reasons for the observations made during the bacterial and cell tests, we conducted a UV-Vis titration study. In detail, we checked the influence of varying chloride and proton concentrations on the stability and solubility of the pillarplex compounds. We first evaluated the absorption properties of the two water-soluble pillarplex acetates 4 and 6 in aqueous solution, as well as of the ligand precursor (Figure 2A). All compounds absorb in the UV range: the silver complex 4 shows an absorption maximum at 226 nm, whereas the gold complex 6 absorbs at 245 nm and the ligand precursor at 209 nm. The molar extinction coefficients of the pillarplex compounds at the wavelengths of maximal absorption are 9.33 × 10^4 ± 6.41 × 10^2 M^-1 cm^-1 (4) and 1.21 × 10^5 ± 4.64 × 10^3 M^-1 cm^-1 (6). The titration of the pillarplexes against an increasing amount of chloride ions in aqueous solution reveals a very different behavior of the silver compared to the gold compound (Figure 2B). The absorption signal of the silver complex 4 drops immediately upon addition of up to 0.5 mmol NaCl (about a 17,000-fold excess of chloride); after that, no significant change is observed in the absorption spectra upon addition of further equivalents of chloride. This is close to the physiological chloride concentration (0.9%), which might explain why precipitation was observed in the biological tests for the silver-containing pillarplexes. The gold compound 6 also shows a decay of the absorption signal upon chloride addition; however, the drop is less pronounced, and at 0.9% chloride concentration there is still a significant absorption (85% of the initial value). At higher chloride contents we observed a larger variation of the measured values, which we cannot yet explain. However, even after addition of 1 mmol NaCl (about a 35,000-fold excess) the characteristic absorption band at 245 nm is still observed for compound 6 (see Supplementary Material), strongly indicating that the gold complex is significantly less affected by chloride addition and is still present in solution under physiological conditions. Figure 3 shows the pH-dependent decay of the pillarplex compounds 4 and 6 over time. From our previous work on pillarplex rotaxanes we already knew that, in the case of silver, the metal ions can be released quickly in the presence of an excess of the strong trifluoromethanesulfonic acid (Altmann and Pöthig, 2017). This was reproduced for the empty pillarplex 4 (Figure 3A), for which an immediate drop of the absorption signal at 226 nm was observed at pH 2, indicating very fast decomposition to the protonated imidazolium precursor; the resulting UV-Vis spectrum agrees with that measured for the ligand precursor (Figure 2A). At higher pH values, the decomposition of 4 is significantly slower and almost identical for pH 4-6. A similar behavior was observed for the gold complex 6, although the decay at pH 2 is significantly slower than that of its silver analog (Figure 3B). Interestingly, at the higher pH values the relative drop of the absorption signal is more pronounced than for the silver complex. However, in the case of 6 the absorption spectrum after the assumed decomposition does not resemble that of the ligand precursor, but rather corresponds to the spectrum of 6 with lower absorption intensity.
Therefore, we additionally conducted an NMR experiment to check for protonation of the NHC ligands at pH 2. As a result, no protonated species was detected, strongly indicating that the gold pillarplexes are stable even at low pH (see Supplementary Information Figure S15).
CONCLUSION
In general, the silver pillarplexes behave like comparable silver complexes and show antimicrobial and antifungal activity as well as moderate toxicity toward human HepG2 cells. The corresponding gold complexes were inactive against most bacterial strains and the fungus, and showed lower HepG2 toxicity. The observed effects most likely originate from the increased stability of the gold pillarplexes compared to the silver pillarplexes, as evidenced by the UV-Vis titrations and the 1H NMR experiment. The fact that the gold complexes seem comparably non-toxic and stable opens up the possibility of using them as carriers for selective drug delivery or for the modified release of drugs that fit inside the pillarplex cavity.
AUTHOR CONTRIBUTIONS
AP: project conception and supervision, manuscript composition, and writing. PA: synthesis and characterization of pillarplexes. SG: synthesis and characterization of pillarplexes, UV-Vis studies. JK: UV-Vis studies. OH: biological testing supervision, data analysis, manuscript writing. SA: biological testing, data analysis. HWL: biological testing, supervision, data analysis. AS: biological testing, data analysis. TG: biological testing supervision, data analysis.
ACKNOWLEDGMENTS
AP thanks the Fonds der chemischen Industrie (FCI) for funding of the project (Sachkostenzuschuss) as well as the Leonhard-Lorenz-Stiftung for financial support. SG thanks the CSC for a personal scholarship and the TUM Graduate School for financial support. OH thanks the Research Council of Norway (RCN) for a mobility grant (grant number: 240215).
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fchem.2018.00584/full#supplementary-material. Additional information on the IC50 determinations as well as on the UV-Vis experiments is available as supplementary material.
| 5,010.6 | 2018-11-27T00:00:00.000 | ["Biology", "Engineering"] |
First do no harm: extending the debate on the provision of preventive tamoxifen
The Breast Cancer Prevention Trial (BCPT-P-1) demonstrated that tamoxifen could reduce the risk of invasive breast cancer in high-risk women by 49%, but that it could also increase the risk of endometrial cancer, vascular events and cataracts. This paper provides an estimate of the net health impacts of tamoxifen administration on high-risk Canadian women with no prior history of breast cancer. The results of the BCPT-P-1 were incorporated into the breast cancer and other modules of Statistics Canada’s microsimulation POpulation HEalth Model (POHEM). While the main intervention scenario conformed as closely as possible to the eligibility criteria for tamoxifen in the BCPT-P-1 protocol, 3 additional scenarios were simulated. Predicted absolute risks of breast cancer at 5 years of 1.66%, 3.32% and 4.15% were calculated for women 35 to 70 years of age. When the BCPT-P-1 results were incorporated into the simulation model, the analysis suggests no increase in life expectancy in this risk group. Tamoxifen appeared to be beneficial for women with a 5-year predicted risk of 3.32% or greater. The results of these simulations are particularly sensitive to the reduction in mortality observed in the BCPT-P-1, as well as being sensitive to other characteristics of the simulation model. Overall, the analysis raises questions about the use of tamoxifen in otherwise healthy women at high risk of breast cancer. © 2001 Cancer Research Campaign
For this analysis, tamoxifen's impact on breast and endometrial cancer (BC and EC), deep vein thrombosis (DVT), stroke, coronary heart disease (CHD), fractures, and cataracts has been evaluated. In addition, mortality from causes other than those listed above was explicitly modeled, but without any preceding morbidity. A wide variety of data sources were culled (references are contained in the original breast cancer reports). Breast and endometrial cancer incidence data were obtained from the national cancer registry by age group. Breast cancer risk factors were taken from the National Breast Screening Study and from vital statistics records. Baseline incidence for the remaining diseases under study was calculated from the electronic health care records maintained by the province of Manitoba. Mortality rates for the individual diseases were modelled to reflect as closely as possible those from Canadian vital statistics records. No mortality was associated with cataracts or DVT in the model, but they were still modeled because their incidence is significantly affected by tamoxifen. The use of hormone replacement therapy (HRT) is included since it affects whether women are eligible for tamoxifen. These modules have been validated by ensuring that the incidence (when known), overall life expectancy, and disease-specific mortality rates generated by POHEM correspond to those observed in Canada. See Appendix A for details on the diseases in the model and the methodology used.
Assumptions in modeling preventive tamoxifen for Canadian women
POHEM was used to simulate the administration of preventive tamoxifen to a representative cohort of Canadian women. The reference case assumed no provision of preventive tamoxifen. The main intervention scenario (Scenario 1) conformed as closely as possible to the eligibility criteria for tamoxifen in the BCPT-P-1 protocol. Women were defined to be at increased risk for breast cancer in the simulation if they: (i) were 60 to 70 years of age; or (ii) were 35-59 years of age with a 5-year predicted risk of breast cancer of at least 1.66%; or (iii) had a history of lobular carcinoma in situ (LCIS); (iv) had no history of deep vein thrombosis or endometrial cancer; and (v) had not taken hormone replacement therapy in the 3 months prior to starting on tamoxifen.
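The eligibility rules above translate directly into a small boolean check; the function below is only a schematic restatement of the listed criteria (the field names are invented for illustration and are not POHEM variables).

```python
# Schematic restatement of the Scenario 1 (BCPT-P-1-like) eligibility criteria.
# The dictionary keys are invented for illustration, not POHEM variable names.
def eligible_for_tamoxifen(w):
    high_risk = (
        60 <= w["age"] <= 70
        or (35 <= w["age"] <= 59 and w["five_year_risk"] >= 0.0166)
        or w["history_lcis"]
    )
    no_contraindication = not (w["history_dvt"] or w["history_endometrial_cancer"])
    no_recent_hrt = not w["hrt_last_3_months"]
    return high_risk and no_contraindication and no_recent_hrt

example = {"age": 52, "five_year_risk": 0.021, "history_lcis": False,
           "history_dvt": False, "history_endometrial_cancer": False,
           "hrt_last_3_months": False}
print(eligible_for_tamoxifen(example))    # True
```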
Estimating the 5-year predicted risk
For this analysis, we used the Gail algorithm for estimating the probability (risk) of breast cancer over time (Gail et al, 1999), in the same way as in the BCPT-P-1, to predict each simulated woman's 5-year risk of breast cancer. POHEM's simulated individual risk profiles, in aggregate, replicate Canadian risk factor distributions for age, family history, nulliparity or age at first live birth, number of breast biopsies, and age at menarche. For each synthetic woman, the odds ratios from Gail et al were used in combination with the age-specific Canadian breast cancer incidence rates to estimate the 5-year predicted risk. If this predicted risk was high enough to place the woman in the eligible range, the intervention simulated was 20 mg of tamoxifen per day for 5 years. Tamoxifen administration was assumed to have been stopped at the onset of deep vein thrombosis, breast cancer, or endometrial cancer.
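The actual Gail algorithm combines several relative-risk factors with age-specific baseline rates; the following is a deliberately simplified sketch of that idea (a baseline 5-year probability scaled by a product of relative risks), with invented numbers, and is not the published Gail model or the POHEM implementation.

```python
# Highly simplified stand-in for a Gail-type calculation: a baseline 5-year
# probability is scaled by the product of the woman's relative risks.
# Baseline rates and relative risks below are invented placeholders.
def five_year_risk(age_group_baseline, relative_risks):
    rr_total = 1.0
    for rr in relative_risks.values():
        rr_total *= rr
    return min(1.0, age_group_baseline * rr_total)

baseline_50_54 = 0.009                       # assumed 5-year baseline probability
rrs = {"first_degree_relatives": 2.3,        # assumed relative risks
       "age_at_first_birth": 1.2,
       "breast_biopsies": 1.3}
risk = five_year_risk(baseline_50_54, rrs)
print(f"5-year predicted risk = {risk:.3%}  (eligible if >= 1.66%)")
```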
Incorporating all outcomes from the BCPT-P-1
When the results of a clinical trial are reported, much emphasis is put on the primary endpoint for which it was designed. In the case of preventive tamoxifen, the highlight of the trial was the 49% reduction in breast cancer incidence. However, other outcomes were measured (e.g., endometrial cancer, deep vein thrombosis, etc.) with varying degrees of precision. In performing an evaluation of the global impact of an intervention, many researchers use the point estimates of the different outcomes. However, since some outcomes are not statistically significant, or are of borderline statistical significance, some judgement is required as to which are likely to be real. This means that different researchers could arrive at different conclusions regarding which outcomes to include. Instead of using the point estimates of the relative risks (which would force a subjective decision about which of the relative risks are significant), we used the entire information on their distribution, as published from the BCPT-P-1. Table 1 summarizes the relative risks (RRs) of developing certain diseases, and their confidence intervals, as derived from the BCPT-P-1. Our approach is to perform a multivariate analysis that takes into account parametric uncertainty based on the distribution of the input parameters. POHEM draws from the distribution of the input parameters and associates distinct parameter values to different sub-samples. For this analysis, 40 sub-samples were used. Effectively, this is similar to conducting 'pseudo-trials' that would have resulted in point estimates within the confidence interval for each of the outcomes under study. The variation between sub-samples is used to calculate standard errors for the simulation results. In this manner, the multivariate distribution of the outcomes can be estimated and parametric uncertainty can be incorporated into the simulation run. Furthermore, using the information from the distributions allows for the calculation of standard errors that reflect the uncertainties of the outcome. For example, if a relative risk has a very wide confidence interval (i.e. its effect is highly uncertain), it will not have any significant impact on the final results. Using the approach of 'pseudo-trials' allows us to keep the information on all outcomes measured in the trial without having to make subjective decisions.
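A minimal sketch of this idea is shown below: each relative risk is treated as lognormal, its log-scale standard deviation is recovered from the published 95% confidence interval, and one value is drawn per sub-sample ('pseudo-trial'). The point estimate and interval used here are placeholders rather than the Table 1 entries.

```python
import numpy as np

rng = np.random.default_rng(0)

def lognormal_params(rr_point, ci_low, ci_high):
    """Recover the log-scale mean/sd of a lognormal RR from its 95% CI."""
    mu = np.log(rr_point)
    sigma = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    return mu, sigma

def draw_pseudo_trial_rr(rr_point, ci_low, ci_high, n_subsamples=40):
    """One RR per sub-sample, i.e. one 'pseudo-trial' realisation each."""
    mu, sigma = lognormal_params(rr_point, ci_low, ci_high)
    return rng.lognormal(mean=mu, sigma=sigma, size=n_subsamples)

# Invasive breast cancer: an RR consistent with a roughly 49% reduction
# (point estimate and CI below are illustrative placeholders).
draws = draw_pseudo_trial_rr(0.51, 0.39, 0.66)
print(draws.round(2))
```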
The long-term effects of tamoxifen
Since the median follow-up period in the BCPT-P-1 trial was less than 5 years, the longer-term consequences of tamoxifen use in women without breast cancer are not known. It was assumed that the relative risks (RRs) for breast and endometrial cancer and fractures would return (linearly) to 1.0 within 5 years following cessation of therapy. It was also assumed that the RRs of coronary heart disease would return to 1.0, one year after cessation of therapy. For the other outcomes, it was assumed that the RRs would return to 1.0 immediately after cessation of therapy, as some of the biological effects of tamoxifen are thought to be promptly reversed on cessation of the drug.
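The assumed washout can be written as a simple function of time since cessation; the numerical example below is illustrative.

```python
def post_cessation_rr(rr_on_treatment, years_since_stop, washout_years):
    """Linear return of a relative risk to 1.0 over `washout_years` after stopping.

    washout_years = 5 for breast/endometrial cancer and fractures,
                    1 for coronary heart disease, and
                    0 (immediate return) for the remaining outcomes,
    per the assumptions described above.
    """
    if washout_years <= 0 or years_since_stop >= washout_years:
        return 1.0
    frac = years_since_stop / washout_years
    return rr_on_treatment + frac * (1.0 - rr_on_treatment)

# e.g. an on-treatment RR of 0.51, 2 years after stopping, with a 5-year washout
print(post_cessation_rr(0.51, 2, 5))  # 0.706
```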
Reference case and new scenarios
The reference case assumes that no tamoxifen was administered. Scenario 1 simulates the BCPT-P-1. Since tamoxifen is not administered without the potential for harmful side effects, 3 additional scenarios were simulated in order to evaluate the effectiveness of preventive tamoxifen on different sub-populations. The second scenario used more conservative eligibility criteria. It was assumed that women were at high risk if they were 35 to 70 years of age with a 5-year predicted risk of ≥ 1.66%, as calculated by the Gail model. The last two scenarios assumed an even more restricted population of women for preventive tamoxifen. Tamoxifen was administered in these scenarios if the woman was between 35 and 70 and had a 5-year predicted risk of breast cancer of at least 3.32% (twice the eligibility criteria of BCPT-P-1), and 4.15%, respectively. A range of additional scenarios has been modelled (≥ 2.08%, ≥ 2.49%, ≥ 2.91%, ≥ 3.74%), but is not presented. The results are available on request.
Sensitivity analyses
Sensitivity analyses are used to determine the impact on the results of changes to one or more parameter assumptions in the analysis. If the conclusions drawn from a simulation model are not affected by such sensitivity analyses, it can be assumed that the conclusions are robust with regard to the assumptions examined.
One sensitivity analysis explored the impact of a longer duration of the protective effects of tamoxifen after cessation of therapy: the RRs of cancers and fractures were assumed to return (linearly) to 1.0 within 10 (rather than 5) years following cessation. A second analysis retained the baseline assumption of a 5-year tailing off of the protective effects of tamoxifen, but instead assumed that tamoxifen had a beneficial effect on mortality from cancers other than breast and endometrial (RR of 0.57), and a detrimental effect on all other causes of death (RR of 1.21), as inferred from Table 11 of the BCPT-P-1 study report.
In this latter case, our analysis of the detailed counts of deaths by cause suggests a statistically significant beneficial effect on 'other cancer' mortality (i.e. excluding breast and endometrial cancer), even though the published results showed no significant difference in overall mortality between the placebo and tamoxifen arms of the trial, and the study found no significant difference in 'other cancer' incidence between the 2 arms. This result appeared paradoxical to us. These effects on mortality by cause may be due to the relatively small numbers of events and the short follow-up time for mortality effects. Nonetheless, a sensitivity analysis was performed to determine if the published reduction in other cancer mortality and increase in other non-cancer mortality might have an impact on the overall results.
RESULTS
Since each POHEM scenario covers a different population with a different underlying risk of breast cancer and other diseases, the reference for each scenario is provided in Table 2. Table 3 provides the changes observed between each 'reference case without tamoxifen' and the preventive tamoxifen scenario. Table 4 presents the sensitivity analyses of the results to alternative assumptions regarding the long-term effects of tamoxifen.
In the reference case (Scenario 1 in Table 2), life expectancy for eligible Canadian women in 1991 was 83.9 years, and 67.8% of all women could expect to survive to age 80. Almost 9% could expect to have breast cancer at some point in their lives, and to spend almost 11 years living with the disease, and 3.2% could expect to die from breast cancer.
When we look cross-sectionally at women in the year 2000, using BCPT-P-1 eligibility criteria, 23% of the Canadian population would be eligible for tamoxifen. However, from a population health perspective, it is also important to look at the lifetime potential of being eligible for this drug. Overall, applying the BCPT-P-1 eligibility criteria to Canadian women (Scenario 1 in Table 2) would result in over 85% of all women being subjected to tamoxifen administration at some point in their lives. These women taking tamoxifen could anticipate a significant decrease (P < 0.05) in life expectancy of about 0.04 years, while the proportion surviving to age 80 would decrease by about 0.2%. The burden of breast cancer would fall, but the burdens of CHD, endometrial cancer, stroke, hip fracture and cataracts would all increase. Mortality from other diseases in the simulation would also fall. These simulation results suggest that preventive tamoxifen may not be beneficial to the health of Canadian women when offered according to the eligibility criteria of BCPT-P-1.
To determine if tamoxifen might be beneficial to various subsets of women at higher risk of breast cancer, several alternative scenarios were examined (Scenarios 2 to 4 in Tables 2 and 3). For the sub-population of Scenario 3 (5-year predicted risk of at least 3.32%), there would be a significant increase (P < 0.05) in life expectancy of 0.06 years, and a 0.2% increase in the proportion reaching age 80. Finally, Scenario 4 shows the results of administering tamoxifen to the 1.7% of the population of women whose risk of breast cancer would, at some point in their life course, be 2 1/2 times higher than the BCPT-P-1 threshold, at 4.15%. This scenario estimates a significant increase (P < 0.05) in life expectancy of 0.07 years. For this population, there would be a marked decrease (3.1%) in the incidence of breast cancer and a decrease of 0.7 years lived with breast cancer.
(In Table 1, the RR for 'Other Cancer Mortality' is the ratio of the number of deaths due to cancer (excluding breast and endometrial) per person-year of follow-up in the intervention group to the same quantity in the control group; the RR for 'Other Non-Cancer Mortality' is the corresponding ratio for deaths due to causes other than cancer, coronary heart disease, hip fracture or stroke. Source: Table 11 of Fisher et al, 1998.)
Table 4 shows the results of 2 sensitivity analyses, juxtaposed against the 'standard' BCPT-P-1 intervention scenario (Scenario 1). The first of these analyses explores the impact of a longer duration of anti-cancer protective effects from tamoxifen, with the benefit tailing off over a 10-year, rather than a 5-year period. Even when it is assumed that the effects of tamoxifen last over a 10-year period following cessation of therapy, the life expectancy of women would not increase. The results are consequently not sensitive to this assumption. The second analysis retains the baseline assumption of a 5-year tailing off of the protective effects of tamoxifen, but assumes that tamoxifen has a beneficial effect on mortality from cancers other than breast and endometrial (RR of 0.57), and a detrimental effect on all other causes of death (RR of 1.21). The simulation results indicate an increase in life expectancy of 0.13 years accompanied by an increase in the proportion reaching age 80. In this scenario, the probability of an increase in life expectancy was estimated to be 60%. When compared to the reference case scenario, it can be seen that the results of the simulation are highly sensitive to the assumption that there is no reduction in other cancer mortality.
DISCUSSION
Tamoxifen has been used as an adjuvant therapy for metastatic breast cancer and to decrease the incidence of contralateral breast cancer for over 2 decades (Jordan, 1990, 1995; Love et al, 1991; Early Breast Cancer Trialists' Collaborative Group, 1992; Tomas et al, 1995; Fisher et al, 1996; Early Breast Cancer Trialists' Collaborative Group, 1998). This has created considerable debate (Bruzzi, 1998; Pritchard, 1998; Fisher, 1999; Lippman and Brown, 1999; Noe et al, 1999; Radmacher and Simon, 2000), particularly regarding issues such as impact on cardiovascular disease (Love et al, 1991), duration of administration (Jordan, 1990; Fisher et al, 1996) and quality of life (Day et al, 1999). However, there is a general consensus that, for breast cancer patients, the survival benefits of tamoxifen far outweigh the adverse effects (Jordan, 1990; Early Breast Cancer Trialists' Collaborative Group, 1998).
More recently, attention has been focused on tamoxifen's potential to prevent breast cancer in 'high risk' women. The release of the findings of the Breast Cancer Prevention Trial (BCPT-P-1) in March 1998 resulted in unprecedented media coverage and precipitated additional debate regarding the risks and benefits of tamoxifen administration. The trial showed a 49% reduction in breast cancer, but also showed that there were some life-threatening adverse effects associated with tamoxifen administration (Gail et al, 1999). Adding to the debate were the results of two European tamoxifen chemoprevention trials, which were unable to confirm the P-1 trial findings, but which also used different sample sizes and eligibility criteria (Powles et al, 1998; Veronesi et al, 1998).
One of the major issues concerning the BCPT-P-1 results is that the trial population has not been followed long enough to produce reliable mortality data or to determine the net health benefit to society of a tamoxifen breast cancer prevention strategy (Pritchard, 1998; Lippman and Brown, 1999; see also the April 19, 2000 issue of JNCI for comments and critiques by Rockhill et al, and responses by Lippman and Brown, and by Fisher).
Gail and his colleagues have developed a methodology to determine the population of women most likely to benefit from preventive tamoxifen (Gail et al, 1999), and the FDA in the United States has given approval for the use of tamoxifen for women at increased risk of breast cancer. However, according to some, there is still uncertainty as to whether tamoxifen only delays the appearance of breast cancer or truly prevents the disease itself (Zeneca, 1998;Radmacher and Simon, 2000).
Should all healthy women 60 years of age and older take tamoxifen, when only 8.3% are likely to get breast cancer after that age? Given that the BCPT-P-1 showed a 49% reduction in incidence of invasive breast cancer for women on the tamoxifen arm, how many of these cancers have actually been prevented permanently, how much of the reduction is due to the inhibition of growth of occult tumours, and what impact will there be on lifetime breast cancer mortality? Assuming that there is support for the thesis that tamoxifen inhibits the growth and progression of ER-positive tumours, which are generally found in older women, what proportion of premenopausal women at high risk would actually benefit from its administration? There has also been considerable discussion regarding the importance of evaluating breast cancer risk, as well as the methodology for doing so (Jordan, 1990; Costantino et al, 1999; Gail et al, 1999; Radmacher and Simon, 2000; Smith and Hillner, 2000). Finally, concern has been expressed over the ethics of administering tamoxifen to a healthy population of women and about the importance of considering its clinical toxicology (Jordan, 1990, 1995; Emanuel et al, 2000).
In our model, the eligibility criteria, risk factors and relative risks from the BCPT-P-1 were applied to a simulated cohort of Canadian women, using Canadian incidence and mortality rates and breast cancer management patterns. Besides breast cancer, the analysis also assessed tamoxifen's effects on endometrial cancer, CHD, DVT, stroke, hip fractures, cataracts, and all other causes of mortality.
Certain important differences between our analysis and that of other researchers are relevant. The most important difference is that the POHEM microsimulation includes life expectancy, and not just the more proximate end-points of breast cancer incidence or prevalence (Fisher et al, 1998; Radmacher and Simon, 2000). We evaluated the lifetime impact and net benefit of providing preventive tamoxifen to high-risk women for a 5-year period, whereas others used 5-year risks or effects (Gail et al, 1999; Smith and Hillner, 2000). Additionally, Noe and colleagues used incidence rates from the trial, rather than the baseline incidence in an actual population. In our analysis, we considered the overall impact of all diseases mentioned in the BCPT-P-1 trial, whereas Noe et al (1999) based their analysis on only those diseases showing statistically significant differences (breast and endometrial cancer, pulmonary embolism and cataract surgery).
Although Gail et al developed tools to address the harmful risks of preventive tamoxifen, their risk-benefit analysis was based upon the number of events. In POHEM, life-years gained or lost due to these events are also considered. In Gail's analysis, endometrial cancer, pulmonary embolism, hip fracture, stroke and breast cancer were all considered to be of equal weight. By looking directly at the mortality associated with these events, one can more accurately assess the impact of these events on lifetime health. Smith and Hillner (2000) have stated that tamoxifen for breast cancer prevention should be cost-effective under nearly all circumstances, but acknowledge that the risk reduction due to tamoxifen might not result in a reduction in breast cancer deaths. However, since their analysis did not take into account the life-years lost due to the harmful side effects of tamoxifen, they may have over-estimated the benefits of tamoxifen. Furthermore, the POHEM approach incorporates the uncertainty associated with the input parameters as measured by the BCPT-P-1, allowing for the calculation of confidence intervals.
When comparing the results of clinical trials to 'real-life' situations, it is important to distinguish between efficacy and effectiveness. The BCPT-P-1 trial showed the efficacy of administering tamoxifen in a clinical trial setting to reduce breast cancer incidence (Fisher et al, 1998). However, practice patterns, tests, follow-up, and survival within a clinical trial setting are not the same as in the general population. In order to assess effectiveness, the setting of the analysis should be the general population, with standard practice patterns and outcomes. For this reason, in our POHEM simulation, standard disease progression and mortality data were used. In the case of endometrial cancer, there were no deaths in the BCPT-P-1. Although most endometrial cancers might be prevented with proper screening and tests, in Canada (NCIC), as in the United States (SEER) (Ries et al, 1997), there is still mortality associated with this cancer. It has recently been reported that endometrial cancers discovered in women taking adjuvant tamoxifen are more advanced at diagnosis and less likely to have a favourable outcome compared with those in women who have not taken tamoxifen (Bergman et al, 2000).
Our study has several limitations. First, the results are based on hypothetical rather than real cases, and the proportion of cases receiving specific tests and treatments is based upon the proportion of cases in various categories in the databases used. Survival data were taken from administrative sources. However, even with these limitations, the breast cancer model of diagnostic and therapeutic procedures and disease progression has been calibrated by reproducing incidence and overall life expectancy, and approximating disease-specific mortality, as seen in Canada.
Few women develop breast cancer in their lifetime. However, according to the criteria of the BCPT-P-1, 85% of women would be eligible to take preventive tamoxifen at some point in their lives. Based on the results of this POHEM simulation, although tamoxifen has a substantial benefit in reducing breast cancer incidence and mortality, the detrimental effects of tamoxifen on endometrial cancer, coronary heart disease, stroke, and deep vein thrombosis may counter-balance the protective effect tamoxifen has on breast cancer for the majority of the women meeting the eligibility criteria of BCPT-P-1. As a consequence, the results of this simulation analysis raise important questions about the use of preventive tamoxifen. In the United States, tamoxifen is approved to reduce the incidence of breast cancer in high risk women who are 35 or older and have a 5-year predicted risk of ≥ 1.67%, as calculated by the Gail model (Gail et al, 1999).
The results of our simulations are highly sensitive to the assumption regarding 'other cancer' mortality. If it is assumed that the reduction in mortality for 'other cancers' observed in the BCPT-P-1 is not artefactual, our analysis suggests that preventive tamoxifen could be effective even for the BCPT-P-1 entry criteria. However, this mortality reduction is somewhat paradoxical. As there was no difference in the incidence of other cancers in the two arms of the trial (RR = 1.0), a reduction in mortality would imply that tamoxifen had a therapeutic effect on these cancers. Alternatively the difference could be an artefact of the short follow-up.
As a consequence of these uncertainties, additional trials and longer follow-up of prevention trials would be useful to determine whether preventive tamoxifen can reduce all-cause mortality, as well as the more proximate endpoint of breast cancer incidence. New oestrogen-suppressing drugs, such as raloxifene and anastrozole, are now being introduced and evaluated, and will require the same kind of careful evaluation given to tamoxifen. The ongoing Multiple Outcomes of Raloxifene Evaluation (MORE) (Cummings et al, 1998) and the Study of Tamoxifen and Raloxifene (STAR) (National Cancer Institute, 1999) clinical trials are attempts to find an intervention that will prevent breast cancer with a minimum of side effects. Overall, the analysis raises questions about the use of preventive tamoxifen in otherwise healthy women at high risk of breast cancer.
APPENDIX A
For all diseases mentioned below, relative risks (RRs) from the BCPT-P-1 were applied to the respective incidence rates.
Breast cancer module
The sources of data for the POHEM breast cancer module have previously been described in detail. The breast cancer module starts with age-gender incidence patterns based on the Canadian Cancer Registry (only female breast cancer is modeled) (National Cancer Institute of Canada, 1995). For this analysis, the average incidence rates are adjusted for risk factor exposures, using relative risks from the Gail model. The risk factors for the breast cancer tamoxifen intervention are derived from the following sources: the (Canadian) National Breast Screening Study (NBSS) provided information on family history of breast cancer, age at menarche and age at menopause (Miller et al, 1992); Vital Statistics provided data on age of the mother at the birth of a first child and nulliparity; hormone replacement therapy (HRT) was derived from health surveys; and the number of previous breast biopsies was calculated from the Manitoba electronic database of health care records (Roos et al, 1987; Roos, 1999).
Since breast cancer survival critically depends on staging, data on stage at diagnosis were obtained through special arrangements with provincial cancer registries. The stage distribution used for the model was Stage I-46%, Stage II-41%, Stage III-7%, and Stage IV-6%.
Following diagnosis and initial treatment, breast cancer can progress along different paths. Based on the stage of the disease at the time of diagnosis, treatment approaches and follow-up schedules are assigned according to observed proportions, as part of the Monte-Carlo microsimulation method. For women diagnosed at Stage I, II or III, three transitions are possible: from diagnosis to local recurrence, from diagnosis to distant recurrence (or metastasis), or directly from diagnosis to death.
Once a woman has a local recurrence, transitions to distant recurrence or to death are possible. Finally, when a woman is diagnosed with a distant recurrence (Stage IV), the recurrence is assigned to one of two sites: visceral or non-visceral. The only transition allowed at this point is to death, and that transition occurs at a different pace depending on the site. In general, the visceral site has a poorer survival.
Durations between these various discrete events or survival times have been estimated from detailed longitudinal microdata obtained from Saskatchewan and Northern Alberta. These stochastic waiting times are typically represented by piecewise Weibull distributions.
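As an illustration of this device, the sketch below draws a waiting time from a piecewise Weibull hazard by inverting the cumulative hazard against a unit-exponential draw. The breakpoints and parameters are invented and are not those estimated from the Saskatchewan and Northern Alberta data.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_piecewise_weibull(pieces, rng):
    """Draw one waiting time from a piecewise Weibull hazard.

    `pieces` is a list of (t_start, shape, scale); the hazard on
    [t_start_i, t_start_{i+1}) is that of a Weibull(shape, scale).
    Sampling inverts the cumulative hazard against an Exp(1) draw.
    """
    target = rng.exponential(1.0)
    for i, (t0, k, lam) in enumerate(pieces):
        t1 = pieces[i + 1][0] if i + 1 < len(pieces) else np.inf
        h0 = (t0 / lam) ** k                      # cumulative hazard at piece start
        h1 = (t1 / lam) ** k if np.isfinite(t1) else np.inf
        if target <= h1 - h0:                     # event falls inside this piece
            return lam * (target + h0) ** (1.0 / k)
        target -= h1 - h0                         # consume this piece's hazard
    return np.inf

# e.g. time from diagnosis to distant recurrence: higher early hazard, flatter later
pieces = [(0.0, 1.4, 6.0), (5.0, 0.9, 15.0)]
times = [sample_piecewise_weibull(pieces, rng) for _ in range(5)]
print(np.round(times, 2))
```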
Coronary heart disease module
The progression of coronary heart disease (CHD) is based on Weinstein et al (1987), with case fatality matching 1991 Canadian CHD mortality from vital statistics. Incidence of CHD is modeled as one of four possible events: sudden death, cardiac arrest, myocardial infarction or angina. Because no national CHD incidence rates are available in Canada, baseline incidence rates are derived by inverting the Weinstein disease progression model, working back from 1991 Canadian CHD mortality rates, while taking account of the underlying risk factor distribution and Framingham relative risk functions (from section 37 of the Framingham reference study) (Abbott, 1987). The major risk factors for CHD (cholesterol, blood pressure, smoking, age and gender) are derived from the cross-sectional 1978-1979 Canada Health Survey (Health and Welfare Canada, 1981) and are smoothed using a transport flow analysis to provide the simulation of longitudinal risk behaviours (Gentleman et al, 1990).
Hormone replacement therapy (HRT)
HRT use was one of the eligibility criteria of the BCPT. The incidence of HRT usage has been derived from the cross-sectional prevalence of HRT in Canada's 1994 National Population Health Survey, and a small survey on duration of use conducted by the University of Ottawa, standardized to the general population of women.
The relative risks of hip fracture, CHD and breast cancer are all affected by HRT. The magnitudes of these effects have been taken from the Office of Technology Assessment (OTA) study of Hormone Replacement Therapy (U.S. Congress, 1995) and a literature review. The risk of CHD is assumed to instantly decrease by half when HRT is taken and to return to normal levels when HRT is stopped.
Hip fracture
The hip fracture model (Flanagan et al, 1997) is based on the natural history of bone mineral density (BMD) which has been extracted from the U.S. Office of Technology Assessment Study of HRT. Baseline hip fracture rates are derived from the 1986-1990 Manitoba electronic physician and hospitalization records.
Endometrial cancer
The Canadian Cancer Registry was used to obtain endometrial cancer incidence by age groups (National Cancer Institute of Canada, 1995). Although there was no mortality associated with endometrial cancer within the BCPT-P-1 trial, the mortality rates for endometrial cancer and the other individual diseases were modeled to reflect those from Canadian vital statistics records.
Stroke
Electronic health care records from the province of Manitoba were used to calculate incidence and survival for stroke. The time till death due to stroke was calculated as a piecewise Weibull function.
Cataracts
Cataracts were also modeled in several stages, using the Manitoba database referred to above. Once a woman was diagnosed with cataracts, the time until surgery on the first eye was modeled using a piecewise Weibull function. The time between the first and second eye surgery was also modeled based on a Weibull curve.
Deep vein thrombosis (DVT)
Incidence of DVT was modeled using information from the Manitoba database referred to above. No mortality was modeled for DVT.
Parametric uncertainty
To reflect parametric uncertainty in our simulation, 40 replicates of each scenario were simulated. For each replicate, a vector of RRs was drawn using a Latin hypercube sample design (Ma et al, 1993; Cronin et al, 1998) from independent lognormal distributions with the 95% confidence intervals shown in Table 2. Then, 40 cohorts of 100 000 women each were simulated, based on these 40 vectors of RRs, for a total of 4 000 000 cases. The variability of several of the key outcomes was then derived from their distributions over the 40 replicates. From these sub-populations, the probability of an increase in life expectancy can be estimated empirically.
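A schematic version of this sampling step is shown below; the outcome list and the confidence intervals are placeholders, and the stratified-permutation construction stands in for the published Latin hypercube designs cited above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def latin_hypercube_rr(rr_table, n_replicates=40):
    """Latin hypercube draw of one RR vector per replicate.

    `rr_table` maps outcome -> (point estimate, 95% CI low, 95% CI high);
    each RR is treated as an independent lognormal, as in the text.
    """
    outcomes = list(rr_table)
    n_dim = len(outcomes)
    # one stratified uniform per equal-probability slice, per dimension
    u = (rng.random((n_replicates, n_dim)) + np.arange(n_replicates)[:, None]) / n_replicates
    for j in range(n_dim):                     # independent random pairing of strata
        u[:, j] = u[rng.permutation(n_replicates), j]
    draws = {}
    for j, name in enumerate(outcomes):
        point, lo, hi = rr_table[name]
        mu, sigma = np.log(point), (np.log(hi) - np.log(lo)) / (2 * 1.96)
        draws[name] = np.exp(mu + sigma * norm.ppf(u[:, j]))
    return draws

# placeholder entries, not the Table values
rr_table = {"breast cancer": (0.51, 0.39, 0.66), "endometrial cancer": (2.5, 1.4, 4.9)}
vectors = latin_hypercube_rr(rr_table)
print({k: v[:3].round(2) for k, v in vectors.items()})
```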
"Medicine",
"Economics"
] |
A search for correlation-induced adiabatic paths between distinct topological insulators
Correlations in topological states of matter provide a rich phenomenology, including a reduction in the topological classification of the interacting system compared to its non-interacting counterpart. This happens when two phases that are topologically distinct on the non-interacting level become adiabatically connected once interactions are included. We use a quantum Monte Carlo method to study such a reduction. We consider a 2D charge-conserving analog of the Levin-Gu superconductor whose classification is reduced from $\mathbb{Z}$ to $\mathbb{Z}_4$. We may expect any symmetry-preserving interaction that leads to a symmetric gapped ground state at strong coupling, and consequently a gapped symmetric surface, to be sufficient for such reduction. Here, we provide a counter example by considering an interaction which (i) leads to a symmetric gapped ground state at sufficient strength and (ii) does not allow for any adiabatic path connecting the trivial phase to the topological phase with $w=4$. The latter is established by numerically mapping the phase diagram as a function of the interaction strength and a parameter tuning the topological invariant. Instead of the adiabatic connection, the system exhibits an extended region of spontaneous symmetry breaking separating the topological sectors. Frustration reduces the size of this long-range ordered region until it gives way to a first order phase transition. Within the investigated range of parameters, there is no adiabatic path deforming the formerly distinct free fermion states into each other. We conclude that an interaction which trivializes the surface of a gapped topological phase is necessary but not sufficient to establish an adiabatic path within the reduced classification. In other words, the class of interactions which trivializes the surface is different from the class which establishes an adiabatic connection in the bulk.
I. INTRODUCTION
Recent years have witnessed an intense research effort to understand topological phases of matter [1][2][3][4]. Symmetry-protected topological (SPT) phases are described by equivalence classes of phases under symmetric adiabatic deformation. This means that two SPTs belonging to the same phase can be deformed into each other without closing the bulk gap or breaking the protecting symmetries, whereas two distinct SPT phases cannot. SPTs protected by internal symmetries such as time-reversal or particle-hole symmetry have been extensively studied for free-fermion systems 5,6. These are all characterized by gapless anomalous surface states whose existence is a direct consequence of the bulk topology, a phenomenon known as bulk-boundary correspondence.
The inclusion of interactions can modify the topological character of free-fermion SPTs in at least three different ways: (i) The spontaneous breaking of protecting symmetry can lead to the disappearance of surface states and consequently alter the topological classification. (ii) Correlations can induce topological order that is characterized by long-range entanglement as in fractional quantum Hall states 7,8 , fractional topological insulators 9 , or quantum spin liquids [10][11][12][13] . These states do not have a non-interacting analog. (iii) The free-fermion topological classification may be reduced due to the existence of symmetric adiabatic paths in the space of interacting Hamiltonians connecting states that are disconnected at the non-interacting level [14][15][16][17][18][19][20][21][22][23][24][25] .
The first example of a single-particle topology reduction was considered by Fidkowski and Kitaev in Ref. 14. In this work, the authors study a spinless superconductor in one dimension (Kitaev chain), with spinless time-reversal symmetry T^2 = +1, representing class BDI. They constructed an explicit interaction which preserves the symmetry and has a unique and symmetric ground state, yet gaps out 8 topological Majorana boundary modes and allows an adiabatic connection between bulk states whose winding numbers differ by 8. This implies a reduction of the non-interacting classification from Z to Z_8. Later on, this result has been generalized to different symmetry classes and higher dimensions [15][16][17][18][19][20][21][22][23][24][25]. Whereas the early work by Fidkowski and Kitaev 14 used the simple properties of the 1D model to show explicitly that two phases differing by a winding of 8 can be adiabatically connected, most of the investigations in higher dimensional systems relied on bulk-boundary correspondence to argue that the existence of an interaction which symmetrically gaps out the topological boundary modes is sufficient to establish the collapse of the non-interacting classification. These approaches employed several arguments such as studying the possibility of gapping surface states [15][16][17] or 0D defects that follow from dimensional reduction 18,19, as well as investigating the signatures of these boundary states in the entanglement spectrum 20,21. Other bulk-based approaches include studying the braiding statistics arising from gauging the symmetry 22 or group cohomology 23.
In spite of this body of work, which provided a comprehensive answer to the question of the general interaction-reduced classification for topological phases protected by internal symmetries, only a few works 26 addressed the fate of a given topological phase in the presence of a specific symmetric interaction. In particular, we would like to investigate the question whether an interaction which symmetrically gaps out surface states at sufficient strength also enables an adiabatic symmetric path connecting two distinct non-interacting SPTs. In this work, we show that this is not generally true. As we will illustrate in detail, the caveat is that the interaction results in transitions into symmetry-broken phases along the paths, preventing adiabaticity despite the fact that at strong enough interaction strength the ground state is symmetric.
To show this, we consider a simple model of four identical layers of topological insulators protected by charge conservation and an internal Z 2 symmetry (a non-superconducting analog of the one considered by Levin and Gu 22 ) whose non-interacting Z classification is reduced to Z 4 . The noninteracting theory has an emergent SU (4) symmetry corresponding to rotations among the four different flavors and is characterized by the topological winding number w = 4. However, we consider an interaction of much lower symmetry which reduces this flavor rotation symmetry to U (1) × U (1) (while still preserving the symmetries protecting the topological classification). It is worth mentioning here that our approach differs from earlier works 21,26,27 which considered highly symmetric interactions to avoid possible symmetry breaking. Here, we consider an interaction with very low symmetry precisely to show that symmetry breaking is unavoidable along the adiabatic path although the ground state is still symmetric at large enough interaction strength.
Starting from a microscopic model, we employ the projective auxiliary-field Quantum Monte Carlo method [28][29][30][31] to study its ground state phase diagram. We find that an adiabatic connection between the w = 4 and w = 0 phases of the model is not possible due to the appearance of an extended region of spontaneous symmetry breaking that additionally separates the single-particle topological sectors. To overcome this problem, we add extra terms to the Hamiltonian to frustrate the long-range order. At weak levels of frustration, the region of spontaneous symmetry breaking is reduced in size until it gives way to a first order phase transition at strong frustration that still blocks the adiabatic connection. As a result, we conclude that the interaction we considered, while sufficient for gapping out surface states, is insufficient for the existence of a symmetric adiabatic deformation between the trivial and nontrivial phases. This poses a counterexample to the criterion derived by some of the authors in Ref. 19, which is thereby shown to be necessary but not sufficient.
This article is organized as follows. In Sec. II, we define the two-dimensional microscopic model and discuss its symmetries. In Sec. III, we analyze possible mean-field scenarios with spontaneous symmetry breaking and identify the most dangerous channel. Additionally, we present the analytic solution in the limit of infinite interaction strength and find a unique, symmetric and gapped ground state. In Sec. IV, we briefly discuss the projective auxiliary-field Quantum Monte Carlo (QMC) method. We present the numerically extracted phase diagram in Sec. V, which exhibits a region of spontaneous symmetry breaking that gives way to a first order phase transition at strong frustration. In Sec. VI, we conclude with a discussion of the implications of the phase diagram and suggest future avenues.
II. MODEL & SYMMETRIES
Here, we design a two-dimensional microscopic model that obeys an anti-unitary time-reversal symmetry (TRS) and a unitary Z_2 symmetry 18,19. The latter can be implemented, e.g., as the conservation of the z component of the spin S_z modulo 2. The topology of this free-fermion model is given by a Z-valued winding number w that is related to the number of helical Dirac cones at the edge of the sample. In the presence of correlations, the classification is expected to be reduced from Z to Z_4. We define a specific interaction term that should allow for adiabatic deformations of free-fermion states whose winding numbers differ by (multiples of) ∆w = ±4.
We begin by introducing the free-fermion part of the model, $H_0 = \sum_k \Psi^\dagger_k H(k)\Psi_k$, which represents two copies of a quantum Hall system with opposite Chern numbers, i.e., a topological insulator [32][33][34][35] that preserves the spin projection σ_z. Here, Ψ†_k is the creation operator of a four-component spinor with momentum k. The Bloch Hamiltonian of Eq. (1) is block diagonal, and we denote the sub-blocks by H_±(k); they represent Chern insulators with Chern number ±1 36. The energy scale is set by t, which will be used as the unit of energy (t = 1) throughout the rest of this manuscript. Eq. (1) corresponds to a gapped Dirac cone at each of the four time-reversal invariant momenta, with the gap dictated by m(k). For the choice of parameters λ = 0, λ = −2 or λ = −4, at least one of the Dirac cones remains gapless (compare with Fig. 1), whereas any other value describes an insulating state. These points correspond to topological phase transitions. The last term in Eq. (1a), with ∆ = 0.25, is used to break the particle-hole symmetry within H_±(k). We note that the dimensional reduction arguments used in Ref. 19 relied on the existence of chiral symmetry in the continuum model. That symmetry is explicitly broken here, following the aforementioned scheme of keeping as few symmetries as possible. Also in real materials, particle-hole symmetry is only an approximate low-energy symmetry, unless one considers superconductors.
The Hamiltonian H(k) obeys a spurious U(1) symmetry exp{iφσ_z} instead of the required Z_2 Ising symmetry R = σ_z. It separates the H_±(k) sectors, where ± is given by the eigenvalues of R. Additionally, there is one independent, anti-unitary time-reversal symmetry T = σ_y τ_y K, where K refers to complex conjugation. Hence, the model is very closely related to the well-known topological insulators (TIs). There, one may introduce spin-orbit coupling which breaks the spin conservation as long as the time-reversal symmetry is respected.
In order to discuss the topology of the gapped states, it is useful to first focus on H_+. Defining the three-component vector d = (sin(k_x), sin(k_y), m(k)), the Chern number of H_+ is given by $C_+ = \frac{1}{4\pi}\int_{\mathrm{BZ}} d^2k\, \hat{d}\cdot\left(\partial_{k_x}\hat{d}\times\partial_{k_y}\hat{d}\right)$ with $\hat{d} = d/|d|$ 33. As can be seen in Fig. 1, the parameter λ tunes the system from a trivial insulator (λ > 0) through a semi-metal with a Dirac cone at k = (π, π) (λ = 0) to a Chern insulator (−2 < λ < 0) with Chern number +1. At λ = −2, the system exhibits one Dirac cone at each of k = (0, π) and k = (π, 0). The winding number of the full Hamiltonian is then given as w = (C_+ − C_−)/2, where C_− = −C_+ due to the time-reversal symmetry connecting the two sectors. Note that small values of ∆ only modify the energy of the bands and, as long as the band gap does not close, the wave functions do not change, since ∆ represents a (momentum-dependent) chemical potential. Hence the topology is insensitive to (small) ∆.
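As a numerical illustration, the winding of the d-vector can be integrated on a discretized Brillouin zone. Since Eq. (1) is not reproduced here, the mass term m(k) = λ + 2 + cos(k_x) + cos(k_y) used below is an assumption, chosen only because it closes the gap at λ = 0, −2 and −4 as described above.

```python
import numpy as np

def chern_number(lam, n_k=300):
    """Chern number of H_+ from the discretized d-vector winding.

    Assumes m(k) = lam + 2 + cos(kx) + cos(ky); this explicit form is an
    assumption standing in for Eq. (1) of the paper.
    """
    k = np.linspace(-np.pi, np.pi, n_k, endpoint=False)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    d = np.stack([np.sin(kx), np.sin(ky), lam + 2 + np.cos(kx) + np.cos(ky)])
    d_hat = d / np.linalg.norm(d, axis=0)
    # central finite differences on the periodic grid
    dx = (np.roll(d_hat, -1, axis=1) - np.roll(d_hat, 1, axis=1)) / (2 * 2 * np.pi / n_k)
    dy = (np.roll(d_hat, -1, axis=2) - np.roll(d_hat, 1, axis=2)) / (2 * 2 * np.pi / n_k)
    berry = np.einsum("ixy,ixy->xy", d_hat, np.cross(dx, dy, axis=0))
    return berry.sum() * (2 * np.pi / n_k) ** 2 / (4 * np.pi)

for lam in (0.5, -1.0, -3.0):
    # expect 0 in the trivial region and +/-1 in the topological regions
    # (the overall sign depends on conventions)
    print(lam, round(chern_number(lam)))
```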
To study the topological reduction from Z to Z_4, we introduce four copies of H_0 labelled by an orbital index o ∈ {A, B, C, D}. The dimensional reduction scheme 19 can be used to derive the form of the interactions that allow the adiabatic connection by introducing a lattice of zero-dimensional defects. Such defects inherit the bulk topology in the sense that they exhibit n topologically degenerate zero modes, where n matches the topological invariant of the bulk 37. For two-dimensional models, this is done by first realizing one-dimensional edge modes at a domain wall and secondly adding an oscillating mass term along this domain wall with appropriately chosen symmetries, such that each node of the mass term localizes zero modes. This construction in turn allows one to derive an explicit interaction term which gaps those defects without breaking any symmetry. Using this recipe, we design the interaction term of Eq. (3), built with the projectors P_α = ½(1 + iα γ_3 γ_4), satisfying P_α = P_α^2, and the γ matrices acting on the original Dirac components as γ_{1,2} = σ_0 τ_{x,y} and γ_{3,4,5} = σ_{z,y,x} τ_z. Note that the spurious U(1) symmetry is generated by R = iγ_4 γ_5 and that the operators R, γ_4 and γ_5 form an SU(2) algebra. Hence, the symmetry generates continuous rotations in the γ_4 γ_5 plane. Using only γ_5 in the interaction terms breaks the U(1) symmetry down to the required discrete Z_2 symmetry that transforms γ_5 → −γ_5. Physically, this interaction introduces correlated pair hopping of electrons between layers A → B and D → C while flipping the R charge. This term does allow, e.g., two R = + electrons to be scattered into two R = − ones, such that the R charge is only conserved modulo 2, which illustrates the U(1) → Z_2 reduction 38. We note that this interaction term is not invariant under any rotation between the flavors since it singles out a very specific channel of coupling the flavors. Thus, out of the SU(4) flavor rotation of the non-interacting theory, only a U(1)×U(1) symmetry remains, corresponding to simultaneous phase rotations of Ψ_A and Ψ_B or Ψ_C and Ψ_D.
III. MEAN-FIELD THEORY & ATOMIC LIMIT
Before presenting the method and the numerical results, let us develop an intuition for the symmetric and symmetry-breaking phases that the model presented in the previous section admits. Here, we discuss various mean-field scenarios and identify the channel which is most likely to exhibit symmetry-breaking long-range order. We also argue that this phase may only be realized when the coupling strength is comparable to or larger than the band gap, U_{c,1} < U. Additionally, we will solve the atomic limit analytically for U → ∞. We find a unique and symmetric ground state that is gapped from the rest of the spectrum. Hence this state is also stable with respect to finite values of U. Accordingly, the symmetry-broken phase is bounded both from below and above, with U_{c,1} < U < U_{c,2}. Interestingly, the states of the limiting cases all share the same symmetry, such that an intermediate symmetry-broken phase is not required. This provides another argument, complementary to the edge state analysis of Ref. 19, for the existence of an adiabatic path and the corresponding reduction of the topological classification.
A. Weak interaction limit and Mean-field scenarios
To discuss mean-field scenarios relevant at weak interactions, it is very useful to divide the four layers into two pairs, namely (A, B) and (C, D), and to introduce two pseudo-spin operators [Eq. (4)] with β = x, y, where the Pauli matrices µ_β act on the layer index within each pair. This allows us to rewrite the interaction as in Eq. (5), which shows the possibility to minimize the energy in the ζ = − channel, given that U > 0, by the generation of pseudo-magnetic order in the xy-plane of the operator M^β_i defined in Eq. (6). Note that the operators M^β_i contain terms like Ψ†_{i,A} γ_5 Ψ_{i,B}. The required R symmetry transforms γ_5 to −γ_5 and is therefore broken by the order parameter. Physically, M^β_i introduces single-electron hopping processes, e.g., from layer B to A, that flip the R charge. We have illustrated this for the helical edge states in Fig. 2(b). This also points out that the order parameter generates a gapped edge spectrum.
Additionally, this order parameter anti-commutes with the Hamiltonian of Eq. (1), such that it also introduces a gap for the bulk semi-metals. As Eq. (5) points out, the interaction is symmetric under rotations around the z-axis of the pseudo-spins; hence the orientation of M within the xy-plane is arbitrary, and the ordered state also breaks this rotational symmetry spontaneously.
Without loss of generality, we choose the x direction for the magnetization and introduce H_MF = m Σ_i M^x_i. In Fig. 2(a) we present the energy spectrum for H_0 + H_MF with open boundary conditions in the y direction. The solid black lines represent a non-zero mean-field expectation value m > 0, and the red lines overlay the symmetric version (m = 0). We can clearly observe the gap introduced in the formerly massless Dirac edge state.
To make the connection between this mean-field scenario and the phase diagram, let us discuss the limiting cases. Keeping the interaction strength small, we expect stable Dirac cones for λ ∼ 0 and λ ∼ −2, as the density of states at the Fermi level vanishes at half filling [39][40][41][42][43][44]. The insulating states provide an intrinsic energy scale, namely the band gap, such that the correlations must reach comparable strength before they lead to significant changes. Hence we expect that a symmetry-broken phase, if it occurs at all, appears only at finite interaction strength U > U_{c,1}.
B. Strong interaction limit
The more interesting limiting case is the strongly interacting one with U/t ≫ 1. Starting from the limit t = 0, we can solve H_int analytically, as the lattice sites completely decouple and we are left with a zero-dimensional problem. In the following, we calculate the full spectrum and show that there is a unique ground state. For readability, let us drop the position index i for the remainder of this analytic derivation. Note that P_± act as projectors such that the Fock space can be decomposed into two separate blocks which have an identical spectrum. Hence it is sufficient to focus on one subspace, say H_+. Let ε_i be an eigenvalue of H_+ with degeneracy g_i. The full spectrum is then given by ε_i + ε_j with degeneracy g_i g_j.
In the previous Sec. II, we chose a basis for the γ-matrices in which R is diagonal, to remind the reader of popular QSH models. Here, however, it is more convenient to choose a different basis in which both γ_5 = σ_x τ_z and iγ_3 γ_4 = σ_x τ_0 are diagonal 45. Note that the local fermion degrees of freedom of H_+ within one layer o are then fully classified by the eigenvalues s_5 = ± of γ_5, such that P_+ Ψ_{o,s_5} = Ψ_{o,s_5} and γ_5 Ψ_{o,s_5} = s_5 Ψ_{o,s_5}. This leads to the definition of four new spin operators, such that the Hamiltonian can be written as H_+ = h_ac − h_ad − h_bc + h_bd, where we used the shorthand notation h_ij = U(S^+_i S^-_j + h.c.). Note that this maps Eq. (3) to a model of spinful fermions on four sites arranged on a ring (a-c-b-d). Each site can be empty, doubly occupied, or singly occupied by a fermion with up or down spin. The total Hilbert space is then 2^8 = 256 dimensional. The Hamiltonian conserves the parity at each site 46 as well as the total S_z spin component. Using the local parities, we can decompose the Hilbert space into 16 sixteen-dimensional subspaces which can be studied as shown below.
We start by considering the subspaces where at least both the a and b sites or both the c and d sites are parity even. There are 7 sixteen-dimensional subspaces with this property. Observe that every second site on the ring then hosts a spin singlet, such that spin-flip processes on all bonds are prohibited. Hence, the Hamiltonian and all eigenvalues vanish.
Next, we discuss the cases where only the a or b site as well as only the c or d site is singly occupied, which occurs for 4 sixteen-dimensional subspaces. Then, only one bond operator, say h_ac, is non-zero. From spin conservation it follows that the sectors with maximal value of |S_z| have a vanishing Hamiltonian, as the spin-flip operators cannot act on those states. The S_z = 0 subspace contains only two eigenstates with non-zero eigenvalues ±U, given by |↑_a ↓_c⟩ ± |↓_a ↑_c⟩.
Third, we consider the subspaces where exactly one parity is even (there are 4 such subspaces), e.g., site b, such that H_+ = h_ac − h_ad. Once more, the states with maximal |S_z| = 3/2 are eigenstates of energy zero. In the S_z = ±1/2 subspace, the Hamiltonian has eigenvalues ±√2 U and zero. The last and most interesting subspace has only odd-parity sites. Like before, the two states with S_z = 2 are eigenstates with vanishing energy. In the S_z = ±1 sector we find eigenvalues of 0 and ±2U (from here on out, we drop the indices for readability). Finally, for S_z = 0 there are multiple states with vanishing energy, but only one state is associated with each of the eigenvalues ±2√2 U, respectively. In summary, the full spectrum of H_+ consists of 186 states of vanishing energy, 16 states with energies ±U and ±√2 U each, 2 modes for ±2U and a unique state at energy ±2√2 U each. It is also interesting to notice that the lowest excitation has an energy of ω = 2(√2 − 1)U and changes the spin by ∆S_z = ±1. It is straightforward to show that the lowest states φ_0 and φ_1 are related to each other by an operator whose structure matches M^β_i defined in Eq. (6): the b and d sites have negative γ_5 eigenvalues, and this, combined with the second relative minus sign between (a, b) and (c, d), indicates the relation to the operator already identified as the most dangerous mean-field channel. Here, we learn that this operator also mediates the lowest excitation of the strongly interacting limit.
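The statements about the odd-parity sector can be checked with a few lines of exact diagonalization, treating each of the four sites a, b, c, d as a spin-1/2 (i.e., restricting to the all-singly-occupied sector). The sketch below is an independent check rather than the derivation used in the text.

```python
import numpy as np

# Exact diagonalization of H_+ = h_ac - h_ad - h_bc + h_bd in the
# all-singly-occupied (odd-parity) sector, with h_ij = U (S^+_i S^-_j + h.c.).
# Per the text, this sector should contain eigenvalues 0, +/-2U and +/-2*sqrt(2)*U.
U = 1.0
sp = np.array([[0, 1], [0, 0]], dtype=float)   # S^+
sm = sp.T                                       # S^-
idn = np.eye(2)

def site_op(op, site, n_sites=4):
    """Embed a single-site operator at `site` in the 4-spin Hilbert space."""
    mats = [idn] * n_sites
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

a, b, c, d = 0, 1, 2, 3
def h_bond(i, j):
    return U * (site_op(sp, i) @ site_op(sm, j) + site_op(sm, i) @ site_op(sp, j))

H = h_bond(a, c) - h_bond(a, d) - h_bond(b, c) + h_bond(b, d)
eigvals = np.linalg.eigvalsh(H)
print(np.unique(eigvals.round(6)))   # approx. [-2.828, -2, 0, 2, 2.828] in units of U
```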
We have shown that the block H_+ exhibits a gapped and unique ground state, which is therefore also symmetric. The overall ground state of the full lattice model is then a product state that is also gapped, unique, and respects all symmetries. Allowing finite but small values of t will change details of the ground-state wave function; in particular, it will no longer be a product state of purely local states. However, it will remain unique and symmetric until the hopping energy scale set by t becomes comparable to the many-body gap. This is quite opposite to local interactions with a degenerate ground state, e.g., the regular Hubbard model on the two-dimensional square lattice, where small perturbations can generate symmetry-broken states, e.g., anti-ferromagnetic order induced by a finite hopping amplitude.
To summarize, the limiting cases all exhibit unique ground states that share the same symmetries. As the strongly interacting state is a local product state, it is a representation of an atomic limit and very likely adiabatically connected to the trivial band insulators. The topological insulator is also stable against small interactions that were specifically chosen to allow a connection of the topological state to the strongly interacting limit, as the topologically protected defect states are symmetrically gapped 19. However, especially for intermediate coupling strength along the path from the topological insulator to the Mott phase at strong interactions, the energy scales mix and spontaneous symmetry breaking might occur. The dangerous channel here is given by pseudo-magnetic order.
FIG. 3. Comparison of the single-particle spectrum A(ω) at the Dirac point k = (π, π), extracted by analytic continuation using the stochastic maximum entropy method, with the band dispersion (solid black line) extracted from fitting the tail of the time-displaced Green's function. This shows that the assumption of a single low-energy excitation is justified.
IV. METHOD
Why should we go beyond mean field? The considerations of Sec. III allow two scenarios for intermediate coupling strength, both in agreement with the theorems on non-interacting systems. Either the order parameter vanishes and the bulk gap closes at the topological phase transition, or the non-zero order parameter ensures a finite bulk gap at the expense of a broken symmetry. However, the interaction constructed following the rules of Ref. 19 aims at a different path, namely an adiabatic connection of two topologically distinct phases. Such a setup has to keep the band gap finite while preserving all protecting symmetries. Hence this connection cannot be made on a mean-field level and requires an intermediate state without a quasiparticle description. In other words, one replaces poles of the Green's function that cross the Fermi surface (band gap closing) by zeros (no spectral weight) in order to change the topological invariant.
To solve the interacting system, we use the ALF package 31, a general implementation of the auxiliary-field Quantum Monte Carlo method 28. The zero-temperature version of this algorithm provides access to ground-state properties by taking a trial wave function |ψ_T⟩, here the non-interacting ground state, and projecting it onto the correlated one by applying the exponentiated Hamiltonian, exp(−ΘH)|ψ_T⟩ 47,48. Here Θ controls the projection length; the result converges exponentially fast, and we choose Θ = 20 for the remainder of this work. The implemented auxiliary-field QMC algorithm uses the Trotter decomposition of the partition sum, $Z = \mathrm{Tr}\, e^{-\beta(H_0+H_{\mathrm{int}})} = \mathrm{Tr}\,[e^{-\Delta\tau H_0} e^{-\Delta\tau H_{\mathrm{int}}}]^{N_{\mathrm{Trotter}}} + O(\Delta\tau^2)$ with $\Delta\tau = \beta/N_{\mathrm{Trotter}}$, as well as a discrete Hubbard-Stratonovich transformation for interaction terms written as perfect squares, $e^{\Delta\tau \hat{O}^2} = \sum_{l=\pm1,\pm2} \gamma(l)\, e^{\sqrt{\Delta\tau}\,\eta(l)\,\hat{O}} + O(\Delta\tau^4)$. The Monte Carlo weight of each configuration {l_{τ,i,ζ,α,β}}, where τ labels the imaginary time slice and i, ζ, α, β are the indices of Eq. (5), is determined by integrating out the fermions and is given by the determinant of a matrix W({l_{τ,i,ζ,α,β}}).
The simulation of this model in the above formulation does not suffer from a sign problem, owing to an anti-unitary symmetry T̃. We note that γ_1 γ_5 commutes with the projection P_α and anti-commutes with γ_5, such that the operators coupled to the auxiliary fields are left invariant. This symmetry is satisfied for each field configuration and squares to −1, such that the eigenvalues of W({l_{τ,i,ζ,α,β}}) come in complex-conjugate pairs, which guarantees the positivity of each configuration's weight ∼ det[W({l_{τ,i,ζ,α,β}})] 49. Three observables are the main focus of this study. The first derivative of the free energy, ∂F/∂U = −(β/U)⟨H_int⟩, signals a first-order phase transition with increasing interaction strength U. To detect second-order phase transitions, we define a correlation ratio r = 1 − S(q = δq)/S(q = 0), where S(q) is the correlation function of the order parameter in momentum space and δq is the smallest non-zero momentum available on the given lattice size. Observe that q = 0 assumes a homogeneous instability, which is justified according to the mean-field analysis. In the case of long-range homogeneous order, S(0) diverges linearly with the system size L^2, whereas the correlation function remains finite for any other value of q; hence r → 1 in the thermodynamic limit. In systems without long-range order, S(q) is a smooth function such that r converges to zero for large lattices. As r is an RG-invariant quantity, it exhibits a crossing point for different system sizes at the phase transition. The third observable of interest is the single-particle gap, which allows us to track the semi-metallic Dirac cones that separate the insulators of different topology at U = 0.
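As a sketch of how r can be evaluated, the snippet below Fourier transforms real-space order-parameter correlations into a structure factor S(q) and forms the ratio. The correlation data are synthetic, and the normalization convention is an assumption that cancels in the ratio.

```python
import numpy as np

def correlation_ratio(corr_real_space, L):
    """r = 1 - S(dq)/S(0) from real-space order-parameter correlations.

    `corr_real_space[i, j]` holds <M_ri . M_rj> on an L x L lattice with sites
    flattened in row-major order; dq = (2*pi/L, 0) is the smallest nonzero
    momentum on this lattice.
    """
    x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    r_vec = np.stack([x.ravel(), y.ravel()], axis=1)          # site coordinates
    def S(q):
        phase = np.exp(1j * (r_vec @ q))
        return np.real(phase.conj() @ corr_real_space @ phase) / L**2
    dq = np.array([2 * np.pi / L, 0.0])
    return 1.0 - S(dq) / S(np.zeros(2))

# synthetic example: exponentially decaying correlations (no long-range order)
L = 8
x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
coords = np.stack([x.ravel(), y.ravel()], axis=1)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
corr = np.exp(-dist / 1.5)
print(correlation_ratio(corr, L))   # r < 1 here; it approaches 1 with long-range order
```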
The single-particle Green's functions are measured for both particle and hole excitations. If we assume a single quasi-particle mode at low energies, gapped from higher-energy excitations, then both Green's functions behave as G_k(τ) ∼ a_k exp(−τ ε_k) for large values of τ, where ε_k is the excitation energy and a_k its spectral weight. As a sanity check of this assumption, we compare the extracted energies with the full spectrum (see Fig. 3) determined by MaxEnt 50, which shows that the assumption is justified.
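A minimal sketch of this fitting procedure: on the large-τ window, log G_k(τ) is fitted by a straight line whose negative slope estimates ε_k. The synthetic data below stand in for QMC measurements.

```python
import numpy as np

def fit_gap_from_tail(tau, g_tau, tail_fraction=0.5):
    """Estimate the excitation energy from G_k(tau) ~ a_k * exp(-tau * eps_k).

    A straight-line fit of log G against tau over the large-tau window gives
    eps_k as minus the slope (and log a_k as the intercept).
    """
    n_tail = int(len(tau) * tail_fraction)
    t, g = tau[-n_tail:], g_tau[-n_tail:]
    slope, intercept = np.polyfit(t, np.log(g), 1)
    return -slope, np.exp(intercept)

# synthetic check: one dominant mode plus a quickly decaying correction
tau = np.linspace(0, 10, 101)
g = 0.7 * np.exp(-0.35 * tau) + 0.3 * np.exp(-2.0 * tau)
eps, weight = fit_gap_from_tail(tau, g)
print(round(eps, 3), round(weight, 3))   # close to 0.35 and 0.7
```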
V. RESULTS
We have shown above that the limit of strong interaction generates a gapped and symmetric ground state. Those two properties also apply to both non-interacting ground states, representing −2 < λ < 0 and 0 < λ, such that the adiabatic connection seems to be plausible, at least in principle. To test this hypothesis, we will track the semi-metallic phase and analyze the most dangerous correlation function identified in the mean-field considerations.
A. Tracking the semi-metal
Note that the two Hamiltonians with ±(λ + 2) can be mapped onto each other. First, the mapping has to shift the momenta as k → k + (π, π), such that m_{(λ+2)}(k) = −m_{−(λ+2)}(k + (π, π)). To absorb the sign changes in the first three terms of Eq. (1a), the Dirac spinor has to transform as Ψ_{k,o} → γ_4 Ψ_{k,o}. Therefore, the position of the semi-metal with two Dirac cones at (π, 0) and (0, π) has to remain fixed at λ = −2.0, whereas the Dirac cone at (π, π) generically occurs at renormalized values λ ∼ 0. It is interesting to notice that γ_4 anti-commutes with R, such that the winding is inverted, w → −w, and the two Hamiltonians represent opposite topologies.
One might also be concerned that the interaction could lead to a meandering of the Dirac cones within the Brillouin zone. If we had kept the PHS with Δ = 0, the cones would be symmetry-constrained to the time-reversal invariant momenta. On the one hand, we fine-tuned the symmetry breaking such that the Dirac cones remain gapless in the free-fermion system, and on the other hand, the numerical results show that the Dirac cones remain where they are.
Let us introduce λ_c(U) as the critical value at which the semi-metal marks the topological phase transition between the TI with winding w = +4 and the trivial insulator. For the free-fermion system, we have λ_c(0) = 0. To locate the phase transition, we set a fixed interaction strength, e.g., U = 1.0, and scan the single-particle spectrum for various values of λ. The resulting excitation energies ε_{(π,π)} of the Dirac cone are presented in Fig. 4(a). The results depend on the lattice size, and a visual extrapolation suggests a semi-metal at λ_c(1.0) = −0.04 ± 0.02. We repeat this analysis for various values of U and also confirm the symmetry-constrained position λ_c(U) = −2.0 for the Dirac semi-metal with cones at (0, π) and (π, 0), which separates the w = +4 TI from the one with winding w = −4 at −4.0 < λ < −2.0. The results are summarized in panel (c).
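A sketch (ours, with made-up gap values in the spirit of Fig. 4(a)) of the visual extrapolation described above: fit the finite-size gap at the Dirac point against 1/L and read off whether the extrapolated value vanishes.

import numpy as np

def extrapolate_gap(L_values, gaps):
    # Linear fit of the gap versus 1/L; returns the L -> infinity intercept.
    x = 1.0 / np.asarray(L_values, dtype=float)
    slope, intercept = np.polyfit(x, np.asarray(gaps, dtype=float), 1)
    return intercept

# Hypothetical gaps eps_(pi,pi) for a few lattice sizes at two values of lambda.
L_values = [6, 8, 10, 12]
near_critical = [0.52, 0.40, 0.33, 0.27]   # extrapolates close to zero -> semi-metal
gapped        = [0.92, 0.84, 0.80, 0.78]   # extrapolates to a finite value -> insulator

print("extrapolated gap (near critical):", round(extrapolate_gap(L_values, near_critical), 3))
print("extrapolated gap (gapped)       :", round(extrapolate_gap(L_values, gapped), 3))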
B. Symmetry-broken phase
Here, we focus on the intermediate region of the phase diagram where the energy scales of the correlation and the kinetic energy compete with each other. In the mean-field analysis (see Sec. III), we identified this regime between the TI and the Mott insulator at strong interactions as the one most prone to spontaneous symmetry breaking with long-range pseudo-magnetic order.
FIG. 4. On the left-hand side, we present the extracted energies ε_{k=(π,π)} for the lowest particle/hole excitation. The system-size scaling suggests a gap closing at λ_c = −0.04 ± 0.02. In the central panel, the correlation ratio r is presented, and the size scaling is consistent with a symmetry-broken phase between U_c = 1.65 ± 0.02 and U_c = 3.2 ± 0.2. The right-hand side summarizes various scans in the phase diagram.
Let us start with a fixed value of λ = −0.5 and analyze the correlation ratio r with increasing interaction strength U. The resulting data are depicted in Fig. 4(b) for various lattice sizes. We clearly see that the ratio systematically decreases with increasing L if the correlation strength is smaller than U_c = 1.65 ± 0.02 or larger than U_c = 3.2 ± 0.2, such that there is no long-range order in these regimes. In the intermediate region, the correlation ratio increases with system size, which indicates spontaneous symmetry breaking due to a finite pseudo-magnetic order in the xy-plane of M_i^β. The critical values stated above are extracted from the crossing point where the ratio coincides for all lattices. The second phase transition, from the ordered phase to the Mott insulator, requires larger interaction strengths, such that the QMC simulations become more challenging, hence the smaller lattice sizes and the larger error estimate. Again, we repeat this calculation for various values of λ and summarize the phase boundary in panel (c).
C. Phase diagram
In panel (c), we plot the full phase diagram and confirm the expected stability of the Dirac semi-metals as well as of the insulators at weak coupling strength. The simulations also detect the symmetric state at strong correlation. In the middle of the phase diagram, where kinetic and potential energies are of similar order, we find long-range order in exactly the dangerous channel that we identified in Sec. III. This phase breaks the protecting R symmetry and therefore allows a hybridization of counter-propagating edge modes, as shown in Fig. 2. As a result, we cannot find an adiabatic path between the two non-interacting topological insulators. Instead, any path in this phase diagram either contains a semi-metallic state or passes through a symmetry-breaking phase.
D. Can frustration remedy the problem?
The main idea is to add the z-component of the pseudo-spins defined in Eq. (4) and use it to frustrate the in-plane order without changing the wave functions of the limiting cases. To form a proper SU(2) algebra, we have to drop the γ^5 matrix acting on the Dirac components, since (γ^5)^2 = 1. Observe that this z-component generates rotations within the xy-plane of the pseudo-spins that leave the Hamiltonian invariant. Additionally, the transformation (A ↔ B) combined with (C ↔ D) is also a symmetry operation, under which S_i^{1/2,α,z} → −S_i^{1/2,α,z} is inverted. Hence, any unique ground state has to be an eigenstate of Σ_{i,α=±} (S_i^{1,α,z} + S_i^{2,α,z}) with a vanishing expectation value. In the large-U limit, the sites and α-sub-blocks decouple, such that we introduce an additional interaction term H_frust = V Σ_{i,α=±} (S_i^{1,α,z} + S_i^{2,α,z})^2, which minimizes S^z locally without changing the ground state.
As depicted in Fig. 5, weak frustration does reduce the size of the symmetry-broken phase. In panel (a), we present the phase diagram for V = 0.75. The symmetry-broken phase now extends only to λ ∼ 0.5, whereas in the unfrustrated model it reaches λ ∼ 0. Additionally, we find that the long-range ordered region is shifted towards weaker coupling strength U, while its extent in U has also been reduced. This trend is also clearly visible in panel (b), where we kept λ = −2.0 fixed. The Dirac cones at (π, 0) and (0, π) persist for weak coupling strengths U and V. Increasing U generates the long-range ordered state before the appearance of the symmetric Mott insulator at large U. With higher levels of frustration, the symmetry-broken phase is replaced by a direct first-order phase transition between the Dirac semi-metal and the Mott insulator. In Fig. 5(c), we show that this first-order phase transition also extends to λ > −2.0.
FIG. 5. The left-hand side shows the phase diagram for weak frustration V = 0.75, with a smaller region of long-range order compared to V = 0, which seems to be most stable at λ = −2. In the middle, we present the phase diagram at fixed λ = −2.0 for various frustration strengths V. For V > 2.5, the symmetry-broken phase is replaced by a first-order phase transition. On the right-hand side, the phase diagram is presented for high levels of frustration.
VI. DISCUSSION
In this study, we found an interaction which, at sufficient strength, trivializes the topological phase w = 4. At the same time, surprisingly, it does not allow for an adiabatic path between this topological phase and the trivial phase w = 0. The semi-metal separating the non-interacting insulators persists for small coupling strengths U. It is terminated either by a first-order transition to a symmetric Mott insulator related to the large-U limit, or by a second-order phase transition to a long-range ordered phase, which in turn is separated from the symmetric Mott insulator by another second-order transition at larger U. Similarly, the topological insulator either undergoes a direct first-order transition to the Mott phase or a second-order transition into a symmetry-broken phase, followed by another transition into the symmetric Mott phase.
We emphasize that our results do not contradict the statement that the non-interacting classification is reduced from Z to Z_4 with interaction. Instead, our results show that the existence of a specific interaction which symmetrically gaps out the surface states of a topological insulator is not sufficient to establish that the corresponding bulk phase can be adiabatically connected to a trivial phase. This contradicts a popular line of reasoning which focuses on gapping out the surface states as a criterion for establishing the classification reduction. The underlying reason for the failure of such arguments is that the stability of surface states can essentially be reduced to a zero-dimensional problem by studying the zero-energy states within defects constructed in a specific way 19. However, this decreased dimensionality has strong implications for the possibility of spontaneous symmetry breaking in the ground state, which may only occur in one (two) or higher dimensions for discrete (continuous) symmetries according to the Mermin-Wagner theorem 51. And it is exactly this mechanism which blocks the path we were looking for.
From the two-dimensional bulk perspective, the phase diagram exhibits various critical points. Even though the focus of this study was to establish the phases themselves, it is interesting to discuss those critical theories briefly. There are Wilson-Fisher transitions between the topological insulator and the ordered phase as well as between the ordered phase and the Mott insulator. Additionally, we expect the critical point where the semi-metal is gapped by symmetry breaking to be described by a Gross-Neveu theory 44,52 .
The results of this work raise a few questions, mainly focused on the missing pieces required to find the adiabatic path. In Fig. 6, we sketch two alternative scenarios for the bulk phase diagram: (a) symmetric mass generation for the Dirac cone, as well as (b) a separated region of symmetry breaking that terminates the semi-metal line. Several studies have reported the formation of a correlated single-particle gap in SU(4)-symmetric Dirac systems without the generation of long-range order 26,27. Most surprisingly, it is claimed to be a second-order transition. In Ref. 53, the authors propose a theory that involves fractionalization in order to explain this exotic phase transition. It would be very promising to include the same kind of bulk criticality in our setup in order to find the adiabatic connection and then investigate the details of how this affects the topological aspects. In contrast, scenario (b) shows that this symmetric mass generation is not required and that there also exist more conventional options in which the symmetry-broken region we find is split into two separate ones.
However, there is no obvious approach to engineer this phase diagram. One possibility is to consider the symmetry of the interaction. Upon using highly symmetric interactions in the flavor space, previous studies 54 found a direct second-order transition to the symmetric Mott phase, suggesting the possibility of an adiabatic path between the topological and trivial phases. In contrast, the interaction we use has very low symmetry in the flavor space, and we always found a symmetry-broken phase or a first-order transition which completely blocks such an adiabatic path. It would be interesting to see if interactions which are more symmetric than ours but less symmetric than those considered previously could yield a phase diagram similar to Fig. 6(a).
FIG. 6. There exist at least two alternative bulk scenarios, with (a) symmetric mass generation (green cross) or (b) a symmetry-breaking region (red circle) with long-range order (LRO) terminating the semi-metal between topologically distinct non-interacting states. Observe that the region with LRO connects to only one semi-metal and not to both, as it does in our numerical phase diagram in Fig. 4(c). Both scenarios allow an adiabatic path and are not realized in the range of parameters investigated in this study.
Another possibility again involves the bulk-boundary correspondence, similar to previous studies, in order to fine-tune the energy scales of the model. The energy scale of the helical edge state is set by its Fermi velocity v_edge. Interestingly, it is possible to reduce v_edge without a significant change of the bulk gap 55. This is achieved by localizing the Berry curvature in momentum space. Hence, we can control, and therefore separate, the bulk energy scale Δ_bulk from the edge scale v_edge by using bulk parameters, without introducing open boundary conditions. This constitutes a promising avenue towards the aforementioned hierarchy of energy scales v_edge < U < Δ_bulk, a hierarchy that is also often used in the arguments on the reduced topological classifications. There, the interaction is supposed to be strong enough to gap out the edge spectrum and therefore trivialize the topology, but still weak enough to avoid the broken symmetry in the bulk. It is usually believed that the phase transition of the edge then coincides with the topological phase transition of the bulk and thus enables the adiabatic connection.
However, this also demonstrates an important subtlety of the bulk-boundary correspondence in such systems. In particular, what would happen if we consider an interaction which is very weak compared to the bulk energy scale, U/Δ_bulk ≪ 1, so that we expect the phase to be adiabatically connected to a non-interacting bulk topological phase, while at the same time being rather strong compared to the characteristic energy scale at the edge, U/v_edge > U^c_edge/v_edge, where U^c_edge is the critical interaction strength? In this situation the bulk and edge phase transitions do not have to coincide, at least based on the involved energy scales. There are at least two distinct scenarios in this case: (i) the edge may gap out by spontaneously breaking the protecting symmetry, or (ii) it may gap out symmetrically and become topologically trivial 56. Whereas the latter scenario violates the bulk-boundary correspondence, the former does not. In this case, we expect the symmetry at the edge to be restored as the strength of the interaction is increased beyond a characteristic scale associated with the bulk rather than the edge. Investigating which of these scenarios is realized represents a very interesting extension of the current work.
| 10,340.4 | 2019-12-16T00:00:00.000 | ["Physics"] |
Robust Parametric Control of Spacecraft Rendezvous
This paper proposes a method to design a robust parametric controller for the autonomous rendezvous of spacecraft with uncertain inertial information. Model uncertainty is added to the traditional C-W equations to formulate the dynamic model of the relative motion. Based on eigenstructure assignment and model reference theory, a concise control law for spacecraft rendezvous is proposed, whose parameters are fixed by solving an optimization problem. The cost function accounts for the stabilization of the system as well as other performance requirements. Simulation results illustrate the robustness and effectiveness of the proposed control.
Introduction
With further exploration of space, a set of complex missions is on the space development agenda, such as assembling large-scale structures, sending and retrieving astronauts, repairing, rescue, docking, and orbital propellant resupply, all based on autonomous rendezvous technology [1]. Owing to this essential role, many scholars have focused on the control problem during rendezvous, and a number of results have enlightened deeper research. In approximately circular orbits, the C-W equations [2], derived by Clohessy and Wiltshire, have been widely applied to describe the relative motion between neighboring spacecraft. The early stage of control design based on the C-W equations produced a number of open-loop methods such as V-bar, R-bar, dual-impulse, and multiple-impulse approaches [3]. As control theory flourished, plenty of advanced control methods have been used to solve rendezvous problems, such as artificial potential functions in [4], sliding mode control in [5], adaptive control in [6], and H-infinity theory in [7].
Although the C-W equations supply an explicit description of the relative motion of spacecraft, there is an obstacle in practice: the real-time angular velocity of the target spacecraft cannot be obtained accurately because of detection errors and environmental perturbations. This parameter uncertainty directly affects the control force and the system stability. It is therefore necessary to investigate an uncertain model for spacecraft rendezvous that does not depend on an accurate value of the real-time angular velocity. Traditional robust control methods can deal with parametric uncertainty to achieve rendezvous, but some desired system characteristics are hard to include in the control design.
In this paper, the spacecraft rendezvous problem with an uncertain parameter is solved by a robust parametric method which leaves freedom to improve the system performance. The robust control integrates eigenstructure assignment and model reference theory to propose a concise control law for spacecraft rendezvous which takes into account system performance such as control constraints and fuel saving. In the rest of this paper, a relative motion model with uncertainty for spacecraft rendezvous is established; the design of the robust parametric control law follows; finally, we apply the robust parametric control to an example to illustrate the effectiveness of the design approach.
Equations of Motion.
The coordinate frame for the rendezvous of the two spacecraft is based on the target spacecraft orbit, as described in Figure 1. We set the origin at the target's mass center; the three coordinates indicate the along-track, radial, and out-of-plane components of the position vector of the chaser satellite in the target satellite's local-vertical-local-horizontal (LVLH) frame, respectively. Spacecraft rendezvous in a circular orbit obeys the C-W equations, in which these coordinates stand for the relative position between the chase spacecraft and the target spacecraft, the mean angular velocity of the target spacecraft enters as a parameter, and the control accelerations on each axis act as inputs.
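For orientation, a standard form of the C-W (Hill) equations is the following; this is our reconstruction under the common convention that x is the radial, y the along-track and z the out-of-plane component, with ω the target's mean angular velocity and u_x, u_y, u_z the control accelerations, so the axis labels may differ from the paper's ordering:
\[
\ddot{x} - 2\omega\dot{y} - 3\omega^{2}x = u_{x},\qquad
\ddot{y} + 2\omega\dot{x} = u_{y},\qquad
\ddot{z} + \omega^{2}z = u_{z}.
\]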
According to these equations, the state vector is formed from the relative positions and velocities, the control vector from the three control accelerations, and the relative positions are taken as the output vector. The relative motion can then be written in a linear state-space form, ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), where 0_3 represents the 3×3 matrix with all elements equal to zero and I_3 represents the 3×3 unit matrix appearing in the blocks of the system matrices.
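A short sketch (ours; variable names are illustrative, not the paper's notation) of assembling such a state-space model for the C-W dynamics given above and checking that the pair (A, B) is controllable, which is the prerequisite for the eigenstructure-assignment design used below:

import numpy as np

def cw_state_space(omega):
    # State vector [x, y, z, xdot, ydot, zdot]; x radial, y along-track, z out-of-plane.
    A = np.zeros((6, 6))
    A[:3, 3:] = np.eye(3)                        # position rates are the velocities
    A[3, 0], A[3, 4] = 3 * omega**2, 2 * omega   # radial channel
    A[4, 3] = -2 * omega                         # along-track channel
    A[5, 2] = -omega**2                          # out-of-plane channel
    B = np.vstack([np.zeros((3, 3)), np.eye(3)]) # accelerations act on the velocities
    C = np.hstack([np.eye(3), np.zeros((3, 3))]) # outputs: relative position only
    return A, B, C

omega = 7.2921e-5                                # rad/s, roughly geostationary
A, B, C = cw_state_space(omega)
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(6)])
print("controllability matrix rank:", np.linalg.matrix_rank(ctrb))  # expect 6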
Problem Description.
The classical C-W equations require the accurate real-time angular velocity, which is difficult to obtain due to detection errors. Therefore, we add an uncertain term to the angular velocity to make the system model closer to reality. When the angular velocity is perturbed in this way, the system model can be described by the nominal state-space model together with an uncertain term in the system matrix. The objective of the control law design is to make the system output track the reference signal, i.e., y(t) → r(t), where y(t) is the output of the system and r(t) represents the reference relative position between the chase spacecraft and the target spacecraft. Meanwhile, the uncertainty complicates the stability of the system, which must be taken into consideration when designing the control law.
Design of Robust Parametric Control
The design of the control law aims at driving the chase spacecraft to the reference point while keeping the closed-loop system stable. It can be separated into two parts: a stabilization controller and a trajectory tracking controller.
Trajectory Tracking Controller.
To begin with, we design the tracking controller based on model reference theory. Lemma 1, taken from [8], supplies the theoretical basis for linear tracking problems.
Lemma 1. For the system, if a stabilizing feedback control law exists, then a control law consisting of this feedback term plus a feedforward term achieves tracking of the reference signal; the feedforward gains can be calculated from the associated algebraic equations. According to Lemma 1, the rendezvous system can track the reference position whenever the feedback control law stabilizes the system. The critical task of the controller design is therefore to find a robust stabilizing control law. Regarding eigenstructure assignment for linear systems, some useful results from [8] will be utilized in the later parts.
Lemma 2. Suppose A ∈ R^{n×n}, B ∈ R^{n×r}, and (A, B) is controllable, and let s_i, i = 1, 2, ..., n, be a set of complex numbers which are symmetric about the real axis. Then the feedback gain matrix and the closed-loop eigenvector matrix satisfying the eigenstructure assignment equations are given parametrically in terms of arbitrary vectors f_i ∈ C^r, i = 1, 2, ..., n, and of right-coprime polynomial matrices N(s) and D(s) satisfying (sI − A)^{-1}B = N(s)D(s)^{-1}. For the rendezvous system in this paper, these polynomial matrices can be calculated explicitly according to Lemma 2. Lemma 2 supplies a concise parametric formula for the state feedback law in which the poles of the closed-loop system appear explicitly. Proper poles not only guarantee the stabilization of the system but also allow system characteristics to be enhanced through optimization in specific respects. Besides, the parametric method offers design freedom through the free parametric vectors f_i, i = 1, 2, ..., n, which enables us to adjust these parameters for system stabilization and performance.
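To illustrate how a chosen set of closed-loop poles translates into a feedback gain, here is a sketch (ours) using scipy's place_poles on the C-W model sketched earlier; this is ordinary pole/eigenstructure assignment rather than the specific parametric formula of Lemma 2, and the pole values are arbitrary placeholders.

import numpy as np
from scipy.signal import place_poles

omega = 7.2921e-5
A = np.zeros((6, 6)); A[:3, 3:] = np.eye(3)
A[3, 0], A[3, 4], A[4, 3], A[5, 2] = 3 * omega**2, 2 * omega, -2 * omega, -omega**2
B = np.vstack([np.zeros((3, 3)), np.eye(3)])

desired_poles = [-0.02, -0.025, -0.03, -0.035, -0.04, -0.045]   # placeholder choices
K = place_poles(A, B, np.array(desired_poles)).gain_matrix      # feedback u = -K x

achieved = np.linalg.eigvals(A - B @ K)
print("achieved closed-loop poles:", np.sort(achieved.real))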
Stabilization Controller. Using the control law
the closed-loop system can be described by the closed-loop system matrix together with the uncertainty term Δ. When the closed-loop matrix is nondefective and the closed-loop system possesses the required poles s_i (i = 1, 2, ..., n), the sufficient condition for the stabilization of the system with the uncertainty term Δ given in [9] involves a symmetric positive definite solution of an associated matrix equation. Lemma 3 provides the parametric expression, based on the eigenstructure of the system, for the quantities entering this condition.
Optimization of Control Law.
We have established the connection between the system characteristics and the parameters s_i and f_i through the design of the control law. Therefore, the design problem for the rendezvous system can be converted into a nonlinear optimization problem whose constraints specify the desired regions of the closed-loop eigenvalues. The performance index is chosen as a weighted sum of three parts, with weighting factors multiplying terms that depend on the initial state of the system. The first part of (27) accounts for the input constraint. The second part of (27) takes fuel consumption into consideration. The last part of (27) is used for the global stability of the rendezvous system. The optimization problem can be solved conveniently with the optimization toolbox in MATLAB. The resulting poles s_i (i = 1, 2, ..., n) and free parametric vectors f_i (i = 1, 2, ..., n) are then fixed and used to calculate the feedback matrix of the robust control law.
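A schematic sketch (ours) of how such a pole/parameter optimization could be set up with scipy.optimize.minimize; the cost terms below merely stand in for the input-constraint, fuel and stability terms of (27), and the weights, initial state and pole parametrization are placeholders, so this only illustrates the workflow, not the paper's exact index.

import numpy as np
from scipy.optimize import minimize
from scipy.signal import place_poles

def make_cw(omega=7.2921e-5):
    A = np.zeros((6, 6)); A[:3, 3:] = np.eye(3)
    A[3, 0], A[3, 4], A[4, 3], A[5, 2] = 3 * omega**2, 2 * omega, -2 * omega, -omega**2
    B = np.vstack([np.zeros((3, 3)), np.eye(3)])
    return A, B

A, B = make_cw()
x0 = np.array([1000.0, -500.0, 200.0, 0.0, 0.0, 0.0])   # illustrative initial relative state

def cost(p):
    poles = -np.abs(p) - 1e-3                  # keep all poles strictly in the left half-plane
    K = place_poles(A, B, poles).gain_matrix
    u0 = K @ x0                                # control demanded at the initial state
    w1, w2, w3 = 1.0, 1e-3, 10.0               # placeholder weights
    return (w1 * np.max(np.abs(u0))            # stand-in for the input-constraint term
            + w2 * np.sum(K**2)                # stand-in for the fuel-consumption term
            + w3 * np.max(poles))              # stand-in for the stability/decay-rate term

guess = np.linspace(0.02, 0.07, 6)
res = minimize(cost, guess, method="Nelder-Mead", options={"maxiter": 300})
print("optimized closed-loop poles:", np.round(-np.abs(res.x) - 1e-3, 4))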
Numerical Simulations
In this section, the control law designed through the method proposed above is tested on an example of spacecraft in the final approach phase of a rendezvous mission, with the assumption that the target is in a geosynchronous orbit. The rendezvous process reaches the desired state under the designed control law once inequality (22) is satisfied. The rendezvous trajectory and the relative position of the two spacecraft are shown in Figures 2 and 4, which also demonstrate the effectiveness of the design. Owing to the chosen optimization function, the control inputs are constrained to [−1, 1], as can be seen in Figure 3.
The motion along each axis changes smoothly, as shown in Figure 3, so that the simulated system comes closer to real engineering requirements.
Conclusion
This paper has proposed a method to design a robust control law for spacecraft rendezvous in the final approach, subject to parameter uncertainty, in a near-circular orbit. Based on eigenstructure assignment and model reference theory, the control law is constructed from the closed-loop poles and the available design freedom. By solving an optimization problem, we obtain the poles and parametric vectors used to calculate the control law, which has been shown to be effective in simulation.
Figure 4: Relative position of the two spacecraft.
| 2,088.8 | 2014-05-19T00:00:00.000 | ["Engineering", "Physics"] |
Lagrangian Insertion in the Light-Like Limit and the Super-Correlators/Super-Amplitudes Duality
In these notes we describe how to formulate the Lagrangian insertion technique in a way that mimics generalized unitarity. We introduce a notion of cuts in real space and show that the cuts of the correlators in the super-correlators/super-amplitudes duality correspond to generalized unitarity cuts of the equivalent amplitudes. The cuts consist of correlation functions of operators in the chiral part of the stress-tensor multiplet as well as other half-BPS operators. We will also discuss the application of the method to other correlators as well as non-planar contributions.
Introduction
Generalized unitarity [1,2,3] is a method that has been tremendously successful in computing loop-level scattering amplitudes (for a review see for instance [4]) so it is natural to attempt to apply a similar method to the computation of correlation functions.
One strategy is to compute form factors, sew them together to momentum space correlation functions and subsequently Fourier transform back into real space [5]. This approach has many merits: form factors of some operators have been shown to have simple structures reminiscent of the ones found in scattering amplitudes [6,7] and although the work cited here deals with N = 4 super-Yang-Mills it could easily be applied to other theories as well. We will employ this approach when dealing with the wholly supersymmetric case as it provides a simple way of dealing with the fermionic variables. Unfortunately, correlation functions are best expressed in real space so some of the symmetries of the expressions may not be apparent until after the Fourier transform.
Our focus will be on a different strategy: since we only consider correlation functions in N = 4 super-Yang-Mills we may employ the Lagrangian insertion procedure [8] which we will reformulate such that it becomes similar to generalized unitarity, introducing a notion of cuts in real space 1 . The advantage of this approach is that we stay in real space the whole time.
We will apply this method to the super-correlators/super-amplitudes duality which relates correlation functions of operators in the chiral part of the stress-tensor multiplet to scattering amplitudes at the level of the integrands in planar N = 4 super-Yang-Mills [10,11,12,13]. The duality was inspired by the duality between amplitudes and Wilson loops [14,15,16,17] whose supersymmetric version was found in [18,19]. The duality between scattering amplitudes and Wilson loops can be complicated to deal with at the quantum level because of the appearance of divergences needing to be regularized 2 and in an attempt at clarifying matters, it was made part of a triality with correlation functions in a special light-like limit being dual to Wilson loops [21] and at the integrand level to scattering amplitudes [10,12,13]. In [22] twistor space methods were used to prove the equivalence between the supersymmetric correlation functions and the Wilson loop introduced in [18].
The duality provides a simple example to try out this approach as one can define generalized unitarity cuts for the dual scattering amplitudes. The cuts of the correlation functions will turn out to be equivalent to the generalized unitarity cuts of the dual scattering amplitudes as long as the duality is correct in the Born approximation. The cuts will consist entirely of correlation functions of half-BPS operators whose form factors we are going to need. The calculations will not depend on the number of operators/external states in the correlation functions/amplitudes.
The duality between correlation functions and Wilson loops has also been expanded to include additional operators [23]. This duality has been discussed using Feynman diagram techniques in [24] and using twistor space methods in [25]. Even though there is no duality with scattering amplitudes, it might still be possible to compute the correlation functions with the cuts introduced here as we will discuss in the last part of the notes.
The notes are structured as follows: section 2 deals with generalized unitarity and lists the form factors we are going to need, section 3 deals with the Lagrangian insertion procedure and introduces the notion of real space cuts, section 4 deals with the duality and how to compute cuts for the correlation functions, section 5 discusses more general correlation functions and section 6 sums up the results. Note that both real space and momentum spinors appear throughout this paper: section 2 uses momentum spinors, section 4 uses real space spinors and section 4.2 uses both types of spinors.
Generalized Unitarity
Generalized unitarity is a method for computing perturbative quantities at loop order. It has been used with great success to calculate scattering amplitudes, but it can be applied to form factors [6,7,26,27,28,29,30,31,32] and to correlation functions [5] as well, by considering Fourier transforms of the operators with some arbitrary momenta flowing in. The method exploits information found at lower loop orders by setting internal propagators on-shell. Formally, this can be thought of as replacing specific propagators with on-shell delta functions, as sketched below. This procedure divides the desired scattering amplitude into amplitudes of lower loop order. Since the method specifically refers to propagators, it depends deeply on the existence of a Feynman diagram representation, but it avoids using Feynman rules directly and instead uses on-shell amplitudes as its building blocks, which, at least in gauge theories, are a lot simpler.
2 See [20] for a discussion of some of the anomalies that this can cause.
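For orientation, the standard on-shell replacement used in generalized unitarity can be written as follows (a generic textbook form, not necessarily this paper's conventions for factors of i and 2π):
\[
\frac{i}{\ell^{2} + i\varepsilon} \;\longrightarrow\; 2\pi\,\delta^{(+)}(\ell^{2}) \;=\; 2\pi\,\delta(\ell^{2})\,\theta(\ell^{0}) .
\]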
Generalized unitarity does not seem to be as effective when applied to correlation functions as it is for scattering amplitudes as correlation functions are best formulated in real space whereas generalized unitarity is a method that must be applied in momentum space so many of the symmetries of the expressions will not become apparent before one performs a Fourier transform. Nonetheless we will use this technique when considering the wholly supersymmetric case where it shall prove to be quite useful.
To apply generalized unitarity to correlation functions requires the use of form factors, which are quantities in between correlation functions and scattering amplitudes, consisting of both local operators and on-shell external states. The operators that will be relevant to us are of the type given in (2), where we have used harmonic variables to make the corresponding projections of the super-space, super-charges and scalar fields, respectively. Here a, b are SU(2) indices, α is a spinor index and A, B are the usual R-symmetry indices. We will follow the notation and conventions of [12,13] closely with respect to both harmonic variables and spinors, some of which can be found in appendix A. The form factors for these types of operators have been dealt with in [7,31]. For our purposes we are only going to need MHV form factors, together with the knowledge that the other form factors can be found using MHV rules. For d = 2, the super-Fourier transform of the MHV form factor is given by (4). This particular operator is part of the stress-tensor multiplet, and its highest component is the on-shell chiral Lagrangian that will also appear as part of the Lagrangian insertion procedure. In order to write (4) in terms of the super-space variables one has to do an inverse super-Fourier transform, so the on-shell chiral Lagrangian corresponds to the part of (4) proportional to (γ)^0. For d > 2, MHV form factors will have a fermionic content that, in addition to the super-momentum conserving delta function, consists of a polynomial of degree 2(d−2) in η_{−a} = (ī)^A_{−a} η_A. We are not interested in the explicit expressions of the form factors, only in the relation between the form factors for an operator T_d and the form factors for an operator T_{d−1}. For this purpose we consider the quantity F_{T_d}, which is the form factor stripped of the fermionic delta function. These quantities satisfy an interesting relation found through BCFW recursion [31], given in (8), where the primes on 1 and n − 1 denote that they have been shifted in order to respect conservation of momentum and super-momentum. Note that it is always possible to use conservation of super-momentum to rewrite F such that it does not depend on the Grassmann variables of two of the external legs.
Lagrangian Insertion in the Light-Like Limit
Lagrangian insertion is a useful method for constructing correlation functions in N = 4 super-Yang-Mills. It exploits the fact that, after a suitable rescaling of fields, differentiation of a correlation function with respect to the coupling will bring down a factor of the on-shell chiral Lagrangian: which is also the operator that appeared in the expansion of the operator, T 2 , see (5). This trick allows one to relate the lth order correction of the correlator: to the l − mth order correction of the correlator: throwing away any contact terms. In addition to being easier than a direct application of Feynman rules it also gives the correlator in a form that mimics more closely the form that scattering amplitudes have in momentum space; notice for instance that computed this way the lth order correction will naturally contain l variables to be integrated over similar to the way that the loop order l of scattering amplitudes contain l loop momenta. Normally one would compute the correlator in (13) using standard Feynman rules but inspired by generalized unitarity we will instead consider different limits of the type: where each limit consists of a set of distances becoming light-like. In the denominator the Lagrangian insertions have to be replaced by other operators since the Lagrangians cannot be connected directly to each other but only by going through vertices so the lowest non-zero correlator would be at some loop level; the relevant operators will be the lowest fermionic components of the operators described in section 2 as we want the denominator to just be a collection of scalar propagators. The light-like distances fall into three different categories: y i − y j , y i − x j and x i − x j though we will mainly be interested in the first two types, the last type being important for a BCFW recursion relation [33,34] 3 . Similarly to generalized unitarity no limit will give the full result but each limit will determine a specific part of the full expression and one will have to compute several different limits until the integrand is completely fixed. It is of course not immediately obvious that these limits will completely determine the integrand, or to borrow an expression from generalized unitarity that the correlation function is cut-constructible, but for the correlators discussed in this paper we will argue that it is indeed the case.
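Schematically, and with normalizations and combinatorial factors that vary between references suppressed (this is our paraphrase, not a quote of the paper's equation), the insertion formula reads
\[
g^{2}\,\frac{\partial}{\partial g^{2}}\, G_{n}(x_{1},\dots,x_{n}) \;\propto\; \int d^{4}x_{n+1}\, \big\langle\, \mathcal{O}(x_{1})\cdots\mathcal{O}(x_{n})\,\mathcal{L}(x_{n+1})\,\big\rangle ,
\]
so that iterating it l times expresses the l-th order correction through the Born-level correlator with l chiral Lagrangian insertions, integrated over the insertion points.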
The Super-Correlators/Super-Amplitudes Duality
The duality between correlation functions and scattering amplitudes considers operators of the type (5) placed at points x 1 to x n with neighbouring points being light-like separated but otherwise generic thereby creating a polygon. The sides of the polygon are identified with on-shell momenta: while the superspace variables are identified with the fermionic parts of the supertwistor: The duality then states that the ratio of the correlation function over its Born-level expression is equal to the square of a color-ordered amplitude divided by its tree-level MHV formula: where g is the coupling constant. An N k MHV amplitude will correspond to 4k factors of the super-space variables on the correlator side of the duality but the lowest non-trivial order of a correlation function with that many super-space variables is proportional to g 2k which is the reason behind the factor dependent on the coupling constant. Let us consider the correlation function with several Lagrangian insertions and see what happens when they become light-like separated from other operators. As mentioned in [24] the divergence of a side of a light-like polygon is related to the number of derivatives minus the number of propagators between the two operators. From this perspective one might expect that the fermions and field strengths present in (5) would create something more divergent than the scalars. However due to the chirality of the operator this is not the case: at the Born level the correlation function is given completely in terms of the scalar components of the operators while at higher loop orders the fermions and field strengths can only connect through vertices that will lower the divergences to that of simple scalar propagators.
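In formulas, and suppressing the coupling-constant factors just mentioned (our schematic transcription of the statement above, not the paper's exact equation), the duality reads
\[
\lim_{x_{i,i+1}^{2}\to 0}\; \frac{G_{n}}{G_{n}^{\text{Born}}} \;=\; \left(\frac{\mathcal{A}_{n}}{\mathcal{A}_{n}^{\text{MHV,tree}}}\right)^{2},
\qquad x_{i,i+1}\equiv x_{i}-x_{i+1}=p_{i},
\]
understood at the level of the integrands in the planar theory.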
When inserting Lagrangians this conclusion will still hold as the chiral on-shell Lagrangian is simply the highest component of the operator (5) and the divergences found in the limit (14) should be only of the type that simple scalar propagators would give. This is important as the scattering amplitudes of N = 4 super-Yang-Mills do not have any internal propagators squared for generic external momenta.
We will use the approach of [24] in the case of a purely scalar polygon where it will provide some clear insight but we will not use it for the supersymmetric case because it becomes rather cumbersome, especially finding the correct fields that sit at the corners of the polygon, and although the sides of the polygon do seem to act like the supersymmetric Wilson loops of [18,19] the appearance of ghosts at higher loop orders complicates matters.
Scalar Polygon
The scalar polygon will interact like a Wilson loop separating the space into two parts. This explains the origin of the appearance of the amplitude squared: the inside of the polygon will give one factor the amplitude and the outside another. Our goal will then be to show that cuts with all Lagrangians inside the polygon correspond to the generalized unitarity cuts of the corresponding amplitude; the generalization to cuts with Lagrangian insertions both inside and outside will then be straightforward.
It is important that the cuts separate the inside of the polygon into parts that do not interact except through the shared internal lines. As an example, consider the cut in figure 1(a), where lines represent distances that have been made light-like 4. This cut will turn out to correspond to the generalized unitarity cut in figure 1(b), so there should not be any direct interaction between the sides x_2 − x_3 and x_3 − x_4 5, just like there are no explicit factors of ⟨23⟩ or [23] in the generalized unitarity cut. It is not immediately obvious that this is satisfied: for instance, the diagram in figure 2(a), where a scalar polygon interacts through gluons with a single Lagrangian insertion, will contribute to the cut, while the diagram in figure 2(b), which is the same but with an additional gluonic interaction between the two sides of the polygon, would ruin this property and so should not contribute to the cut.
To see that the cuts do in fact separate the polygon into two parts only interacting through shared internal lines, consider the following: a side of the polygon spanned between the points x_i and x_{i+1} connects through m vertices to m different Lagrangian insertions, as shown in figure 3. To more easily distinguish between the insertion points and the points on the polygon, we will use tildes when enumerating the insertion points and their spinors, harmonic variables and fermionic variables. Each Lagrangian insertion will supply a single derivative, so the diagram will be proportional to the expression in (18), a product of the corresponding propagators acted on by these derivatives.
4 We will be more specific about what we mean by these diagrams later.
5 Except of course through the outside of the polygon, but as mentioned this will be interpreted as part of the other amplitude in the duality.
If we focus on the right-most integral, given in (20), we see that it clearly becomes divergent when the distance between x_i and ỹ_1 becomes light-like. From the point of view of the integral, this divergence arises because the integral becomes proportional to (1 − t_1)^{−1}, which diverges in the upper limit. Notice also that if x_i and ỹ_1 are not light-like separated, (20) contributes a factor of (1 − t_2), which would ruin the divergences for the subsequent Lagrangian, and indeed for all subsequent Lagrangians, since the addition of one propagator and one derivative cannot raise the divergence, only maintain it. If x_i and ỹ_1 are light-like separated, the integral will not influence the subsequent integrals, and one can do the same analysis for the second right-most integral 6.
Reiterating this argument leads to the conclusion that the diagram in figure 3 only contributes to the cut where a specific y becomes light-like separated from x i if all the Lagrangians to the right of y are also light-like separated from that point and that the diagram only contributes to the cut where y becomes light-like separated from x i+1 if all the Lagrangians to the left of y are also light-like separated from that point.
This shows that the necessary separation does appear. Let us consider what happens when ỹ_1 becomes light-like separated from x_i and include the spinor structure from the Lagrangian insertion in (20). Defining (x_i − ỹ_1)_{αα̇} to be λ^α_1 λ̃^{α̇}_1, we obtain the contribution (21). This contribution then has to be added to the one with the Wilson loop vertex on the other side of x_i, which can be found through an equivalent calculation, though the sign will be opposite 7. Additional vertices can be added on the gluon line connecting the scalar polygon with ỹ_1, and one can show that they will act like the Wilson line vertices. This point is slightly non-trivial, as the counting arguments from [24] do not remove all of the unwanted terms. There will be some remaining terms that cannot be described as simple Wilson line vertices times the quantities in (18), (20) and (21), but these terms will not depend on the point x_{i+1}, and when adding the contributions from having a Wilson loop vertex on either side of x_i these terms cancel out. This means that in the limit where a Lagrangian insertion point ỹ_1 is made light-like separated from x_i, the diagram acts as if the operator at x_i is of the type T_3 and the operator at ỹ_1 has its fermionic degree lowered by two, which can be put diagrammatically in the form of (23), where full lines represent distances made light-like after dividing by a scalar propagator and vertices where d lines meet correspond to local operators of the type T_d 8, and we have suppressed a numerical factor including the coupling constant 9. Because the line connecting x_i and ỹ_1 acts like a regular Wilson line, it is straightforward to generalize to the case where x_i is light-like separated from more than one Lagrangian insertion. For a generalization to the wholly supersymmetric case we will, however, need more effective methods.
Supersymmetrization
To find the correct supersymmetrization of the cuts we are going to use generalized unitarity cuts dividing the real space cuts into products of form factors each with exactly one operator. Note that diagrams where the propagator between two operators is completely canceled will give something proportional to a real space delta function; those involving two points on the polygon are excluded because all those points are assumed to be distinct in the duality and the Lagrangian insertion procedure specifically throws away any contact terms so the generalized unitarity cuts with only single-operator form factors completely determine the cuts. In fact the arguments are sufficiently general that we may conclude that the correlation functions in the light-like limit with any number of Lagrangian insertions can be determined completely by generalized unitarity cuts consisting of only single-operator form factors.
The relevant form factors will be the ones found in section 2. We are not going to compute the full generalized unitarity cuts only draw certain conclusions about the fermionic structure of the real space cuts.
It will continue to make sense to divide the correlation function into a part inside and a part outside of the polygon because when connecting the form factors into a polygon there is going to be some number of propagators, r, and twice as many fermionic delta functions, 2r, on the polygon. Upon integration we will be left with r fermionic delta functions all depending on spinor products where one of the spinors correspond to momentum flowing along a side of the polygon; in the light-like limit these spinors will become proportional to the real space spinors (15) and so the fermionic delta functions will correspond to either the outside or the inside of the polygon interacting with the sides of the polygon without any direct interactions between the inside and the outside of the polygon. Planarity ensures that the denominators on the polygon as well as factors not part of the polygon will not give such direct interactions either.
In order to generalize (23), we are going to start with an ansatz and use generalized unitarity to confirm it. Our ansatz will be that the two fermionic delta functions get replaced by the expression in (24), where (1̃)^{+a}_A are the harmonic variables associated with the Lagrangian insertion at ỹ_1 and we use χ^{ã}_{1/1̃} to denote ⟨1 θ^{+ã}_{1̃}⟩, similar to the notation in (16). One should note that θ^A_{iα} does not appear freely in the construction of the correlation functions, as that would give twice as many Grassmann variables as for the scattering amplitudes; it only appears as part of very specific products with spinors and harmonic variables, so the second term should be interpreted in terms of the expansion given in (25). The factors (ıj)^{-1}_{a'a} are the inverse matrices of (ī)^A_{-a'} (j)^{+a}_A. In order to find the effect of this fermionic delta function on the super-Fourier transform of the form factors, we write it in terms of an integral over auxiliary Grassmann variables. When multiplied by the form factors, the resulting exponents can be removed by shifting the fermionic integration variables of the form factors. The delta function (24) is thereby replaced by imposing invariance under a specific shift of the fermionic variables, where the γ^α's are now functions of the γ̃^{α̇}'s and γ_a. To see the effect of this on the generalized unitarity cuts, we only need to consider the MHV form factor of an operator T_d placed at the point x_i and the super-momentum conserving delta functions from the inserted Lagrangian and the neighbouring points on the polygon; the specific details of the generalized unitarity cuts will of course differ a lot, but since the N^k MHV form factors can be computed from MHV rules, these elements will always be present. The form factor for T_d will have to share at least one leg with each of these other operators, which will give the divergences in the light-like limit. The momenta of these legs will be denoted P_1̃, P_{i−1} and P_i, and since they are responsible for the light-like divergences, we may replace the real space spinors λ_1̃, λ_{i−1} and λ_i by the momentum spinors λ_{P_1̃}, λ_{P_{i−1}} and λ_{P_i}, at the cost of rescaling γ_a. The quantity we are interested in is then the MHV form factor with legs ⋯, P_{i−1}, ⋯, P_i, ⋯, P_1̃, ⋯, n, together with the accompanying delta functions.
Notice that the shift (27) leaves the overall super-momentum conserving delta function invariant, and that for the first three delta functions, integrating over the Grassmann variables of the respective cut legs also makes them invariant under the shift. This first of all means that for d = 2 the shift is in fact a symmetry of the expression, because F^MHV_{T_2} does not depend on any Grassmann variables, as expected. For d > 2 there will be some additional Grassmann variables in F^MHV_{T_d}; we will write this function without any explicit dependence on either η_{P_{i−1}} or η_{P_i}, using conservation of super-momentum, and subsequently find the term proportional to η_{P_1̃ −a} ε^{ab} η_{P_1̃ −b}, as well as similar factors for all other directions that have been made light-like as part of the cut; from a Feynman diagram perspective we know that such a term should always be present. The integration over the variables γ_a can be used to remove this factor, and similar considerations for the other light-like distances will remove all the other Grassmann variables from F^MHV_{T_d}. The form factor is thus reduced to that of an operator T_2 times some spinor products, which can be found by comparing the remnant left from imposing the shift symmetries with F^MHV_{T_2}. It is important to note that (29) is not the entire generalized unitarity cut, only a small part, but the rest does not interfere with this calculation nor is it directly affected by it, and whereas the rest will change depending on the exact generalized unitarity cut considered, this part will always be present. Potentially, it should be possible to use generalized unitarity to find the correct spinor factor in a systematic way by exploiting relations like (8); however, we will instead use that we already found this factor for the scalar polygon in section 4.1. Combining the information gained from the two approaches, we arrive at the relation (31):
Again, full lines represent distances made light-like after dividing out scalar propagators, and vertices where d lines meet correspond to an operator T_d. As we used that all of the Grassmann variables from F^MHV_{T_d} in (29) were removed by imposing shift symmetries, it is implied that there are delta functions similar to (24) for all but two of the lines meeting at x_i.
We will now apply all this to a full cut where a string of m Lagrangian insertions are made light-like separated from each other and the polygon such that the inside of the polygon is split into two with x i being light-like separated from y1 and x j from ym. In terms of the diagram in figure 4(a) the cut can be defined as: We define spinors λ α r with r going from 1 to m + 1 ordered such that λ α 1 is a spinor corresponding to the light-like distance x i − y1 and λ α m+1 is a spinor corresponding to the distance ym − x j and introduce the factor: Finally we use (31) to write the cut in terms of the diagram in figure 4(b) and phrase it in variables common to scattering amplitudes using some of the identities found in appendix B: When reconstructing the part of the correlation function with the propagators corresponding to the cut, the products of the harmonic variables are removed; from a generalized unitarity perspective they correspond to normalizations of the external states. The rest of (34) is exactly equivalent to a generalized unitarity cut with m + 1 cut propagators as shown in figure 5 with the spinor products corresponding to the generalized unitarity cut of an MHV amplitude while everything beyond MHV lies in figure 4(b).
The general nature of (31) allows us to make cuts that also divide the two parts of this cut into smaller pieces just like a generalized unitarity cut can separate the loop amplitude into more than two lower level amplitudes; the calculation is not going to be different from the one above and it is straightforward to show that it will correspond to the correct generalized unitarity cuts.
For the sake of completeness let us point out that nowhere in the calculation leading up to the supersymmetric generalization in (31) did we use that the operator at y1 was the highest fermionic component of the multiplet and the calculation can easily be generalized to limits where the distance x i − x j becomes light-like; in this case the relevant delta functions will be: where as in (25) some of the Grassmann variables should be interpreted in terms of the specific products that appear in the construction of the correlation function.
Cut-Constructibility
In the previous sections it was shown that the real space cuts of the super-correlators correspond to generalized unitarity cuts of the equivalent super-amplitudes. The only issue that remains is whether or not the correlators are completely determined by the cuts, or, put differently, whether for every diagram there exists some sequence of propagators, not canceled by any numerator factors, that divides the diagram into separate patches. For this purpose we will consider the lth order correction to a super-correlator for which the total number of superspace variables sums up to 4k. This specific loop order can be computed by using l Lagrangian insertions, and it should be proportional to the coupling constant to the power 2l + 2k. As argued in the beginning of section 4.2, it can be completely determined by generalized unitarity cuts involving only single-operator form factors, and we know that the form factors for the operator T_2 with 2c_i external legs are proportional to the coupling constant to the power 2(c_i − 1) at tree level. Combining these two ways of counting the power of the coupling constant, and using that there are n + l operators, fixes the total number of form-factor legs (the counting is made explicit after this paragraph); because there are no external legs, half of this total is equal to the number of cut propagators. Every cut propagator comes with an integration over the Grassmann variables, and so the form factors should have exactly 4(n + k + 2l) Grassmann variables, which can only be accomplished if they are all MHV. Using the expression from (4) and the counting arguments from [24], we get that two form factors connected through a cut propagator have exactly the right number of momentum factors to give the divergence of a scalar propagator when the distance between the two corresponding points is made light-like. Of course, as shown in section 4.1, having the right number of momentum factors does not necessarily imply that the divergence appears. Still, because of the existence of an operator product expansion, the correlator should be a function of real space scalar propagators connecting pairs of points of the local operators. Anything not captured by the real space cuts would then be terms where a group of Lagrangian insertion points only connects among itself, or only once to a point on the polygon. We would expect that such terms correspond to unconnected diagrams and to diagrams proportional to a group structure constant with two identical indices, and so are not part of the correlation functions used in the duality. To see that this expectation holds, consider any of the unwanted terms, take the light-like limit of all the propagators present, and try to construct the generalized unitarity cuts corresponding to these limits. By considering the limit of all the available propagators, the arguments of section 4.1 are no longer relevant, and all the generalized unitarity cuts one can write down for the unwanted terms that are consistent with the above considerations will conform to our expectations.
Figure 6: Full lines represent distances made light-like after dividing by the appropriate propagators. O represents some arbitrary operator.
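To make the coupling-constant counting in the paragraph above explicit, here is our reconstruction of the elided relations (an assumption, but one that reproduces the 4(n + k + 2l) Grassmann variables quoted in the text): with n + l operators, each a T_2 form factor with 2c_i legs contributing g^{2(c_i − 1)} at tree level,
\[
\sum_{i=1}^{n+l} 2\,(c_i - 1) \;=\; 2l + 2k
\quad\Longrightarrow\quad
\sum_{i=1}^{n+l} c_i \;=\; n + k + 2l .
\]
Since there are no external legs, the number of cut propagators is half the total number of legs, \tfrac{1}{2}\sum_i 2c_i = n + k + 2l, and each cut propagator carries a four-fold Grassmann integration, giving the 4(n + k + 2l) Grassmann variables mentioned above.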
Note that these arguments did not explicitly use that the operators of the correlation function were light-like separated from each other though we did use that they were placed at distinct points.
More General Correlators
Let us finally turn towards other correlators as well as non-planar contributions and discuss how they can be computed.
One type to consider is correlation functions with both operators from the stress-tensor multiplet arranged in a light-like polygon and other operators not part of the polygon. These correlators are a natural extension to the duality between correlation functions and Wilson loops as the light-like limit simply gives the correlation function of a Wilson loop and the additional operators [23]. The additional operators can also be arranged to form a second Wilson loop.
It is still possible to define real space cuts for such correlators even though there is no duality with amplitudes: these cuts will include diagrams where Lagrangian insertion points are made light-like separated from the additional operators as shown in figure 6. It is not clear if such correlators will be cut-constructible, something that may well depend on the specific choice of operators. Adding a single operator to a light-like polygon could be a good starting point for considering correlation functions of other operators as many cuts will be similar to those of the light-like polygon. In the case of the additional operators forming a second Wilson loop the cuts will be similar to the ones used for the duality and can be computed from (31). One way to construct other correlators that are cut-constructible and also dual to scattering amplitudes is to consider operators of the type half-BPS operators as in (2) with d > 2 as shown in figure 7. This sort of diagram will appear as part of the cuts used in section 4 but one could also use this as the starting point and since equations (28) and (29) do not rely on integration over the super-space variables it should be dual to three different four-point amplitudes and a single six-point amplitude provided we introduce some additional fermionic delta functions like the ones in (24).
Operators at Generic Points
For the duality we considered operators in the stress-tensor multiplet in a specific light-like limit but it would be interesting to compute the cuts for the correlator with operators at generic points. As long as all operators are placed at distinct points the arguments given in section 4.3 are sufficient to argue that the correlation function is cut-constructible.
For the correlation function of only four purely scalar operators, the integrand is known to a high loop order [35,36,37,38,39]. We can use these results to check whether the correlators can be constructed from cuts and what those cuts are. The one-loop integrand consists of two types of terms: those that contribute to the duality between scattering amplitudes and correlation functions, and some additional terms that can be captured by cuts where all the distances from the insertion point to the four scalar operators become light-like. If this is to make sense as a cut, we require that also for similar limits at higher loop orders the Lagrangian insertion should still act like four scalars. This is obviously true for the part of the on-shell Lagrangian proportional to four scalars, but the arguments from section 4.1 can be used to argue that it will also be the case for the other parts. Consider for instance the diagrams in figure 9 and the limit where the Lagrangian in the center of the diagrams is made light-like separated from the four operators at the corners of the diagrams: the diagram in 9(a) contributes to the mentioned limit, but the diagram in 9(b) does not without some additional light-like limit involving two of the original five operators, whereas 9(c) does contribute without any additional limits. The same systematics continues with more interactions: only as long as the additional interactions are with the scalar lines will the diagrams contribute to the aforementioned limit without the need for any additional light-like limits involving two or more of the original five operators. For this reason we may conclude that in this limit the Lagrangian insertion acts like four scalars, and the cut can be described in terms of the correlation function where the Lagrangian insertion has been replaced by the lowest fermionic component of T_4.
By inspecting the results from the literature we see that at higher loop orders it is always possible to make the insertion points light-like separated from four other points and let the points of the original operators be light-like separated from each other in pairs, so the correlation function should be determined by the cuts if we include cuts where the Lagrangian insertions get replaced by T_4. These new cuts may appear identical to applying (31) twice: indeed, if a Lagrangian insertion is made light-like separated from two purely scalar operators using this relation, the result would be proportional to a correlation function with T_4 in place of the on-shell Lagrangian. The difference lies in the fact that for these new cuts the operators made light-like separated from the Lagrangian insertion may be light-like separated from only one other operator, and the factor pulled out in front of the correlation function when doing the cut appears to be different. For the relation (31) this factor consists of spinor products and could be found through a fairly simple Feynman diagram calculation, but the computation needed for the additional cuts does not seem to be as simple and the factor would involve tensors connecting the SU(4) indices of the harmonic variables.
Non-Planar Diagrams
The duality discussed in section 4 considers only planar diagrams, so we only formulated cuts for the planar theory. However, it is also possible to consider non-planar cuts; equations (28) and (29) do not in fact rely on planarity, so the Grassmann structure of (31) will be the same in the non-planar case. The kinematical factor can again be found from considering purely scalar operators, and the calculation is very similar to the one leading to (23). For the sake of clarity we only display the result with a limited number of operators, though the generalization is straightforward. We have introduced an additional point $x_j$ and defined spinors such that $(x_i - x_j)_{\alpha\dot\alpha} = \lambda_{j\,\alpha}\tilde\lambda_{j\,\dot\alpha}$; the limit needed for the cuts is then given by: One should note that outside the planar limit the cuts no longer separate the correlation functions into separate patches as in section 4, though the cuts may still simplify the expression.
Discussion
In these notes we have introduced a notion of cuts in real space and shown how this type of cut, applied to a specific set of limits of correlation functions, corresponds to generalized unitarity cuts of scattering amplitudes, confirming the super-correlators/super-amplitudes duality on a cut-by-cut basis. We also checked that the super-correlators considered in the duality are in fact completely determined by the real-space cuts. The results are hardly surprising, as the supersymmetric correlation functions have been found to be dual to the supersymmetric Wilson loop of [18], and since the duality between scattering amplitudes and correlation functions is at the integrand level no regularization issues should arise. Nonetheless it provided a simple example to try out this reformulation of the Lagrangian insertion technique.
The real space cuts are written in terms of correlation functions of other half-BPS operators but there is a non-trivial factor that emerges from doing the cut unlike for generalized unitarity where all the non-trivial information lies in the product of amplitudes. This might be a problem for more general correlation functions like the ones considered in section 5.1 where we identified a second type of cuts still written in terms of correlators of half-BPS operators but with factors that do not follow as easily as for the cuts used in the duality.
In general it would be interesting to extend this approach to other operators. It will certainly be possible to define the cuts but it is not clear if the correlators will be cut-constructible nor whether the cuts will be simple. The extension to non-planar diagrams is more straightforward though the cuts will no longer divide the correlation functions into separate patches.
Since the generalized unitarity methods are related to the twistor-space methods for scattering amplitudes, we expect that this approach is related to the twistor-space methods used in [22,40], and it would be interesting to find the direct relation.
Acknowledgments
I have benefited from discussions with Henrik Johansson and Radu Roiban and am grateful for useful comments on an early draft by Henrik Johansson and Gregory Korchemsky. This work is supported by the Knut and Alice Wallenberg Foundation under grant KAW 2013.0235. Figures in these notes were drawn using JaxoDraw [41].
A Harmonic Variables and Spinors
In this appendix we briefly sum up some of the conventions and notations used. The harmonic variables are matrices satisfying the following relations: Upper-case Latin indices are SU(4) indices, while lower-case Latin indices are SU(2) indices. Since we will be dealing with operators at many different points, it is convenient to use a notation that makes for an easy identification of the corresponding harmonic variables of each operator: we choose to denote the harmonic variables of the operator at point $x_i$ on the polygon by $(i)^{+a}_{A}$ and the harmonic variables of the Lagrangian insertion at point $y_m$ by $(m)^{+a}_{A}$. It is also useful to introduce the following product of harmonic variables: the product corresponds to the determinant of the matrix $({\imath}j)_a{}^{a'} = ({\imath})^{A}_{-a'}\,(j)^{+a}_{A}$.
As long as the product (41) is non-zero it is possible to define the inverse matrix $({\imath}j)^{-1}$, and by expressing SU(4) vectors in terms of $(i)^{+a}_{A}$ and $(j)^{+a}_{A}$ it is possible to show that the following is the identity matrix: We use Greek letters from the beginning of the alphabet for spinor indices, while $\mu$ and $\nu$ are reserved for regular Lorentz indices. The spinor indices are raised and lowered as follows: while the spinor product is defined to be: The Levi-Civita symbols are chosen to be: Lorentz vectors can be written in spinor notation by using Pauli matrices: $x_{\alpha\dot\alpha} = \sigma^{\mu}_{\alpha\dot\alpha}\,x_{\mu}$.
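The spinor conventions can be checked numerically. The short sketch below is illustrative only: it assumes the mostly-minus metric, the choice $\sigma^\mu = (1, \sigma_x, \sigma_y, \sigma_z)$, and that $\det x_{\alpha\dot\alpha}$ reproduces $x^2$, so that a light-like separation factorizes into a pair of spinors; the specific numerical values are not taken from the text.

```python
import numpy as np

# Pauli matrices sigma^mu = (1, sigma_x, sigma_y, sigma_z); mostly-minus signature assumed.
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def bispinor(x):
    """Map a Lorentz vector x^mu to the 2x2 matrix x_{alpha alphadot} = sigma^mu x_mu."""
    x0, x1, x2, x3 = x
    x_lower = np.array([x0, -x1, -x2, -x3])        # lower the index in the (+,-,-,-) signature
    return sum(s * c for s, c in zip(sigma, x_lower))

def minkowski_sq(x):
    x0, x1, x2, x3 = x
    return x0**2 - x1**2 - x2**2 - x3**2

# A light-like vector: x^2 = 0, so det(x_{alpha alphadot}) = 0, the matrix has rank one
# and factorizes as lambda_alpha * lambdatilde_alphadot.
x = np.array([5.0, 3.0, 0.0, 4.0])                 # 25 - 9 - 0 - 16 = 0
X = bispinor(x)
print("x^2 =", minkowski_sq(x))
print("det x_{alpha alphadot} =", np.linalg.det(X))

# Extract the two spinors from the rank-one matrix via an SVD decomposition.
u, s, vh = np.linalg.svd(X)
lam = u[:, 0] * np.sqrt(s[0])
lamt = vh[0, :] * np.sqrt(s[0])
print("reconstruction error:", np.max(np.abs(np.outer(lam, lamt) - X)))
```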
B Jacobians and Useful Identities
Changing from a measure for the fermionic variables $\theta^{+\tilde a}_{r\,\alpha}$ into a measure for the variables $\chi^{\tilde a}_{r/r} = \langle r\,\theta^{\tilde a}_{r}\rangle$ and $\chi^{\tilde a}_{r+1/r} = \langle r{+}1\;\theta^{\tilde a}_{r}\rangle$ is going to introduce the Jacobian $\prod_{r=1}^{m}\langle r\; r{+}1\rangle^{2}$.
The duality gives the scattering amplitudes in terms of the fermionic parts of the supertwistors. These can be related to the Grassmann variables $\eta^{A}_{i}$, where $(\eta_i)^0$ indicates a positive-helicity gluon of momentum $p_i$ and $(\eta_i)^4$ indicates a negative-helicity gluon, in the following way: For the Grassmann variables of the internal states there are different possible definitions; we choose: which gives the following super-momentum conserving delta function: The factor in front of the delta function cancels part of the Jacobian that arises when changing the measure for the $\chi^{A}$ variables into the measure for the $\eta^{A}$ variables, which is given by $\big(\langle i\,j\rangle\,\langle i\,1\rangle\,\langle 1\,2\rangle\cdots\langle m{+}1\;j\rangle\big)^{4}$. (53)
"Physics"
] |
INFLUENCE OF CHROMIUM CONCENTRATION ON THE ABRASIVE WEAR OF Ni-Cr-B-Si COATINGS APPLIED BY HIGH VELOCITY OXYGEN FUEL
This research work studies the wear characteristics and wear resistance of composite powder coatings, deposited by high velocity oxygen fuel (HVOF), which contain composite Ni-Cr-B-Si mixtures having different chromium concentrations (9.9%, 13.2%, 14%, 16% and 20%), at one and the same particle size and the same content of the remaining elements. The coating with 20% Cr does not contain B and Si. From each powder, composite coatings have been prepared both without preliminary thermal treatment of the substrate and with preliminary thermal treatment of the substrate up to 650 °C. The coatings have been tested under identical conditions of dry friction over a surface of firmly attached solid abrasive particles using a "Pin-on-disk" tribological testing device. Results have been obtained for the dependences of the hardness, mass wear, wear intensity, and absolute and relative wear resistance on the Cr concentration under identical conditions of friction. It has been found that for all the coatings the preliminary thermal treatment of the substrate leads to a decrease in the wear intensity. Upon increasing the Cr concentration the wear intensity diminishes, reaching minimal values at 16% Cr. In the case of coatings with 20% Cr concentration the wear intensity is increased, which is due to the absence of the components B and Si in the composite mixture, whereupon no intermetallic structures having high hardness and wear resistance are formed. The obtained results have no analogues in the current literature and have not been published previously by the authors.
Introduction
The basic priorities of contemporary engineering science and practice refer to enhancement of the energy effectiveness and functionality of industrial systems in harmonious coexistence with a clean environment, preservation of natural resources and improvement of the quality of life. These priorities are connected with the genesis of tribology as a contemporary science and technology of the contact processes of friction, wear and lubrication in technical systems [1÷3]. Lowering the wear intensity in machines is the central task of tribology and tribological technologies. Wear is the reason for more than 85% of machine failures and for huge expenses of materials and human resources for spare parts, consumables and maintenance during operation. The decrease in the wear intensity results not only in
Materials and technology
Ten types of HVOF coatings have been prepared, combined in five groups according to the chromium concentration in the powder mixture (9.9%, 13.2%, 14%, 16%, 20%) with approximately the same composition with respect to the other chemical elements. Two types of coatings with a given Cr concentration have been obtained for each group: without thermal treatment of the substrate (cold HVOF process) and with preliminary calcination of the substrate in a thermal chamber at a temperature of 650 °C for 60 minutes. The coatings with thermal treatment of the substrate are denoted by PHS. The group of coatings No. 9 and No. 10 contains 20% Cr and 80% Ni, without inclusion of the other elements present in the remaining coatings. Table 1 presents the designation, description, chemical composition, hardness and thickness of the studied coatings. All the coatings have been deposited on a substrate of one and the same material, a steel with the chemical composition C 0.15%, S 0.025%, Mn 0.8%, P 0.011%, Si 0.21%, Cr 0.3%, Ni 0.3% and hardness 193.6 ÷ 219.5 HV.
The particles in all the powder composites have one and the same size of 45 ± 2.5 µm. Before placing the powder composite in the thermal spraying system, it is heated for 30 minutes at 150 °C in a thermal chamber to remove moisture and other adsorbed organic molecules.
In order to increase the adhesion strength of the coatings, the substrate is prepared in three stages: cleaning, erosion with abrasive particles (blasting) and mechanical treatment. The cleaning is aimed at the removal of mechanical contaminants, adsorbed organic molecules, moisture and other components, and it is carried out using a solvent. The extraction of the adsorbed gas molecules and elements from the depth of the surface layer is achieved by heating the surface of the substrate with a flame to 100 °C, at a nozzle distance of 40 mm and an angle of 45°, or with a vapour spraying device. After this operation the surface is cleaned with a solvent again.
Upon erosion of the surface of the substrate (blasting), a definite level of substrate roughness is achieved, which is of essential importance for the adhesion strength of the coating. We used the abrasive material "Grit", in accordance with the requirements of the standard ISO 11126, having the following granular composition: 3.15 ÷ 1.4 mm - 9.32%; 1.63 ÷ 0.5 mm - 16.4%; 1.4 ÷ 1.0 mm - 15.8%; 1.0 ÷ 0.63 mm - 39.6%; 0.5 ÷ 0.315 mm - 9.32%; 0.315 ÷ 0.16 mm - 9.32%; particles with sizes below 0.15 mm of the various fractions - the remainder up to 100%. The abrasive consists of the following chemical compounds: SiO2 - 41%, combined in the form of silicates; AlO - 8.3%; MgO - 6.6%; CaO - 5.5%; and MnO - 0.4%.
The blasting system has the following technical parameters: input pressure 8 atm; operating pressure in the nozzle 4 atm; nozzle diameter 7 mm; distance between the nozzle and the surface 30 mm; angle of interaction of the jet with the surface 90°.
The coatings have been deposited using the MICROJET+Hybrid device, which makes use of a fuel mixture of acetylene and oxygen. The parameters of the technological regime of deposition of the coatings are listed in Table 2. In the case of deposition of coatings without thermal treatment (cold HVOF process), the surface of the substrate is heated using a flame to a temperature of up to 200 °C, which is measured by a laser infrared thermometer (INFRARED).
The coating is deposited in several layers. For the first layer the nozzle is situated at an angle of 45° and at a distance of 10 mm from the substrate, while for the consecutive layers it is at a distance of 25 mm. Coatings have been prepared with thicknesses in the range from 393 µm to 415 µm. The thickness of the coatings is measured by a portable Pocket Leptoskop 2021 Fe device at 10 points on the surface, and the arithmetic mean value is taken (Table 1).
After polishing, all coating surfaces have the same roughness Ra = 0.450 ÷ 0.455 µm, which is measured by recording the profile diagram using a "TESA Rugosurf 10-10G" profilometer. Samples of the same sizes have been prepared: for testing the abrasive wear they are plates of dimensions 25 mm × 25 mm × 6 mm, while for testing the erosive wear the samples are plates of dimensions 30 mm × 20 mm × 6 mm.
The hardness of the coatings is measured by a "Bambino" hardness tester on the Rockwell scale (HRC), taking the arithmetic mean of three measurements for each sample in order to eliminate possible effects of segregation.
Experimental procedures
The abrasive wear of the coatings is studied under conditions of dry friction during sliding along the surface with firmly attached abrasive particles.
The methodology consists in measuring the mass wear of the coatings after a definite friction path (number of cycles) under set permanent conditions: loading, sliding velocity, kind of abrasive and temperature of the environment. The mass of the samples before and after a definite friction path is measured by an electronic balance WPS 180/C/2 with an accuracy of 0.1 mg. In each experiment with each sample the abrasive surface is replaced, and prior to each measurement the sample is cleaned of mechanical and organic particles and thereafter dried using ethyl alcohol in order to prevent electrostatic effects.
After measuring the mass wear, the wear process characteristics are calculated: the reduced intensity of the wear process and the absolute and relative wear resistance.
The mass wear in [mg] is obtained as the difference between the initial mass of the sample and its mass after a definite number of friction cycles. The reduced intensity of the wear process i_r represents the mass wear m of the coating per unit of loading P and per unit of friction path length L; it is measured in mg/(N·m) and is estimated by the formula i_r = m/(P·L). The absolute abrasive wear resistance I_r is the reciprocal value of the reduced intensity, I_r = 1/i_r, and has dimension (N·m)/mg. The relative wear resistance R_i,j is a dimensionless quantity representing the ratio between the wear resistance I_r of the tested sample i and the wear resistance I_r of a sample taken as a standard j, determined under identical regimes of friction.
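As an illustration of the quantities just defined, the following sketch evaluates the mass wear, the reduced wear intensity i_r = m/(P·L), the absolute wear resistance I_r = 1/i_r and the relative wear resistance R_i,j. The masses and the reference resistance are placeholder numbers, not measured data; only the load P = 4.5 N and path length L = 80 m correspond to the test regime reported in this paper.

```python
def mass_wear(m_initial_mg, m_after_mg):
    """Mass wear m [mg]: initial sample mass minus the mass after N friction cycles."""
    return m_initial_mg - m_after_mg

def reduced_wear_intensity(m_mg, load_N, path_m):
    """Reduced wear intensity i_r [mg/(N*m)]: mass wear per unit load P and unit path length L."""
    return m_mg / (load_N * path_m)

def absolute_wear_resistance(i_r):
    """Absolute wear resistance I_r [(N*m)/mg]: reciprocal of the reduced wear intensity."""
    return 1.0 / i_r

def relative_wear_resistance(I_r_tested, I_r_reference):
    """Relative wear resistance R_ij (dimensionless): tested over reference wear resistance."""
    return I_r_tested / I_r_reference

# Illustrative masses only (test regime: P = 4.5 N, L = 80 m).
m = mass_wear(m_initial_mg=1250.0, m_after_mg=1246.5)     # 3.5 mg of material lost (hypothetical)
i_r = reduced_wear_intensity(m, load_N=4.5, path_m=80.0)
I_r = absolute_wear_resistance(i_r)
print(f"i_r = {i_r:.4f} mg/(N*m),  I_r = {I_r:.1f} (N*m)/mg")
print(f"R vs. a reference coating with I_r = 24 (N*m)/mg: {relative_wear_resistance(I_r, 24.0):.2f}")
```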
The abrasive wear is studied using a "Pin-on-disc" tribological tester with plane contact, using the functional scheme shown in Fig. 1. The studied sample with coating 1 (pin) is firmly attached in the holder 2 of the loading head 8, in such a way that the frontal surface of the sample is in contact with the abrasive surface 3, fixed to a horizontal disc 4. The disc 4 is driven by the electric motor 6 and rotates around its central vertical axis at constant angular velocity. The normal loading P is adjusted by means of the lever system at the center of the contact area between the sample and the abrasive surface. The friction path length, as a number of cycles (N), is selected and then measured by the revolution counter 7. The abrasive surface 3 is modeled by impregnated corundum P 320 of hardness 9.0 on the Mohs scale, whereby the standard requirement of at least 60% higher hardness of the abrasive with respect to the surface layer of the tested materials is observed. All the coatings have been investigated using the following friction regime parameters: loading 4.5 N, nominal contact surface area 2.25×10⁻⁶ m², nominal contact pressure 2.0 N/cm², sliding velocity 0.155 m/s, abrasive surface corundum P 320, temperature of the environment 21 °C.
Experimental results and discussion
Applying the above-described methodology and device, experimental results have been obtained for the mass wear, the reduced intensity, and the absolute and relative wear resistance of all the studied coatings listed in Table 1.
The results are represented in the Tables 3, 4 and 5.
In accordance with the data in Table 3, the kinetic curves of the mass wear have been plotted for all coatings with and without thermal treatment of the substrate, as represented in Figures 2, 3, 4, 5 and 6. Each graph includes the regression equation of the wearing process as a function of the friction path length, m = m(L), and the value of the wear intensity i_r at a friction path length L = 80 m. It is seen that in the case of dry abrasive friction the dependence of the mass wear on the sliding path has a linear character for coatings with and without thermal treatment of the substrate. The second observation from the analysis of these curves is that the wear of all coatings with thermal treatment of the substrate is lower than the wear of the coatings without thermal treatment. Figure 7 graphically represents the dependence of the wear intensity on the chromium concentration for coatings without and with thermal treatment of the substrate, for one and the same friction path length L = 80 m. The curves have a non-linear character with a clearly expressed minimum of the wear intensity at 16% Cr concentration for coatings with and without thermal treatment. In the first section, in which the chromium concentration changes from 9.9% to 16%, the wear intensity decreases with increasing chromium concentration, reaching a minimal value at 16% Cr: for coatings without thermal treatment of the substrate i_r = 0.91 × 10⁻² mg/(N·m), and for coatings with thermal treatment of the substrate i_r = 0.49 × 10⁻² mg/(N·m). In the second section, at the higher chromium concentration of 20% (coatings HN40 and HN40:PHS), the wear intensity increases sharply. In spite of the higher chromium concentration, the increase in the wear is due to the absence of the elements B, Si, Cu and others, which are contained in the other coatings. During the contact interaction of the flame jet with the substrate at high temperature, these elements form intermetallic compounds with the chromium, which lead to a decrease in the wear intensity […].
Fig. 7. Dependence of wear intensity on the concentration of chromium in the tested HVOF-coatings
The curve of the dependence of the wear resistance on the chromium concentration for coatings without and with thermal treatment of the substrate is reciprocal to the curve of the wear intensity (Figure 8). Figure 9 shows the diagram of the wear resistance of all the tested coatings, which gives clear evidence that the lowest wear resistance is displayed by the coatings having the lowest chromium concentration without thermal treatment of the substrate, I_r = 0.24×10² (N·m)/mg, while the greatest wear resistance is shown by the coatings with 16% chromium content with thermal treatment of the substrate, I_r = 2.04×10² (N·m)/mg. Table 6 presents results on the relative wear resistance, calculated by formula (4). The last two columns reflect the results for the influence of the thermal treatment of the substrate and for the effect of the chromium concentration in the powder composites, respectively. These results are represented in the form of diagrams in Figures 10 and 11. The strongest influence of the thermal treatment of the substrate on the wear resistance is observed for the coating 1355:PHS with 16% chromium concentration, for which the wear resistance is 1.84 times higher than that of the same coating without thermal treatment of the substrate. It is followed by the coating 80M60:PHS with 14% chromium concentration. For the remaining coatings the influence of the thermal treatment is almost the same, ranging from 1.14 to 1.21. The influence of the chromium concentration on the abrasive wear resistance of the coatings is greatest in the case of the coatings 1355:PHS and 1355 (16% Cr). For the coating with thermal treatment (1355:PHS) the wear resistance is increased 7.29 times, while for the same coating without thermal treatment (1355) it is increased 4.58 times, which is an extraordinary result. Another good result is the increase in the wear resistance of the coatings 80M60:PHS and HN40:PHS, which becomes higher almost to the same degree, 1.68 and 1.67 times respectively. The results on the wear resistance correlate with the hardness of the coatings having different Cr concentrations (Figure 12). Figure 13 illustrates the diagram of the interconnection between the abrasive wear resistance and the hardness of the tested coatings.
Regression models
The experimental curve of the dependence of the wear intensity i_r on the chromium concentration in the range 9.9% ≤ w ≤ 16% is considered (Fig. 14). The section of the curve where w > 16% is not considered because it includes coatings No. 9 and No. 10 (HN40 and HN40:PHS) with a chemical composition different from that of the other coatings. These coatings contain only nickel (80%) and chromium (20%), which does not give us reason to analyze them in parallel with the other coatings.
Experimental results for the wear intensity at more points on the curve in Fig. 7 are presented in Table 7. Graphically, the dependence of the wear intensity on the chromium percentage is shown in Fig. 14. Based on the regression analysis, analytical dependences of the wear intensity on the chromium concentration were obtained, presented as second- and third-degree polynomials.
For coatings without heat treatment of the substrate the dependence has the following form: The results are presented in the next figure. The adjusted R-square is 0.745894827 and shows that 74.58% of the variance of the wear intensity is predictable from the chosen factors (w³, w²), i.e. they are adequately included in the model. The value of the significance F at a significance level of 0.05 is 0.00012 < 0.05 (0.012% < 5%), i.e. the results are reliable (statistically significant) and the model is adequate. The P-values of the coefficients of the regression equation at a significance level of 0.05 are smaller than 0.000032, i.e. they are smaller than 0.05, which means that the coefficients are statistically significant, and the adequacy of the model is confirmed.
For coatings with heat treatment of the substrate the dependence has the following form: The results are presented in the next figure. The adjusted R-square is 0.74616892 and shows that 74.62% of the variance of the wear intensity is predictable from the chosen factors (w³, w²), i.e. they are adequately included in the model. The value of the significance F at a significance level of 0.05 is 0.00011 < 0.05 (0.011% < 5%), i.e. the results are reliable (statistically significant) and the model is adequate. The P-values of the coefficients of the regression equation at a significance level of 0.05 are smaller than 0.000019, i.e. they are smaller than 0.05, which means that the coefficients are statistically significant, and the adequacy of the model is confirmed.
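A sketch of this kind of regression analysis is given below: second- and third-degree polynomials of the wear intensity versus the chromium concentration, with R² and adjusted R² reported. The (w, i_r) points are placeholders rather than the values of Table 7, and plain least squares via numpy is assumed instead of whatever statistics package the authors used.

```python
import numpy as np

# Placeholder (w, i_r) points in the analysed range 9.9% <= w <= 16% -- not the Table 7 values.
w   = np.array([9.9, 11.0, 12.0, 13.2, 14.0, 15.0, 16.0])            # Cr concentration, %
i_r = np.array([4.1, 3.3, 2.6, 2.1, 1.6, 1.2, 0.9]) * 1e-2           # wear intensity, mg/(N*m)

def fit_polynomial(x, y, degree):
    """Least-squares polynomial fit returning coefficients, R^2 and adjusted R^2."""
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n, p = len(x), degree                      # p predictors: w, w^2, ..., w^degree
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    return coeffs, r2, r2_adj

for degree in (2, 3):
    coeffs, r2, r2_adj = fit_polynomial(w, i_r, degree)
    print(f"degree {degree}: coefficients {np.round(coeffs, 6)}, R^2 = {r2:.4f}, adjusted R^2 = {r2_adj:.4f}")
```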
Conclusions
The present research work compares results for the wear process characteristics and the wear resistance of composite powder coatings deposited by high velocity oxygen fuel (HVOF), which contain composite Ni-Cr-B-Si mixtures having different chromium concentrations (9.9%, 13.2%, 14%, 16% and 20%), at one and the same particle size of 45 µm and equal content of the other elements boron and silicon. The coating prepared with 20% Cr does not contain the elements B and Si. Each powder composition was applied to obtain coatings without preliminary thermal treatment of the substrate and with preliminary thermal treatment of the substrate up to 650 °C. The coatings have been tested under identical regimes of dry friction along a surface with firmly attached abrasive particles using a "Pin-on-disc" tribotester.
Results have been obtained for the dependence of the mass wear on the friction path length and for the variation of the wear intensity with the chromium concentration, for coatings without and with thermal treatment of the substrate.
It has been ascertained that for all coatings the preliminary thermal treatment of the substrate leads to a decrease in the wear intensity.
It has been shown that upon increasing the chromium concentration the wear intensity decreases non-linearly, reaching minimal values at 16% Cr. In the case of coatings with 20% Cr concentration the wear intensity is higher, which is due to the absence of the components B and Si in the composite mixture. In this case no new intermetallic structures having high hardness and high wear resistance are formed. A diagram of the interconnection between the hardness of the coatings and their abrasive wear resistance is presented.
Based on the regression analysis, analytical dependences of the wear intensity on the chromium concentration for coatings without and with heat treatment of the substrate were obtained, presented as polynomials of second and third degree.
"Materials Science"
] |
Frequency based Digital Image Forgery Detection Through Optimal Threshold Using SOELTP
INTRODUCTION: Image forgery detection is a very challenging task nowadays, as the latest tools and applications make forgery easy and altered images change our thoughts and perceptions. OBJECTIVES: A forgery detection system is needed to detect image forgery. METHODS: We propose a blind image forgery detection technique. An Optimal-threshold-based Enhanced Local Ternary Pattern (OELTP) technique is implemented on the smoothed image. Features are extracted in the frequency domain by implementing the Discrete Wavelet Transform (DWT) on the chrominance component of the image. A Support Vector Machine is used for classification. RESULTS: The forgery detection accuracy of the proposed technique is better than that of several previous works. CONCLUSION: The performance of the image forgery detection system has been improved by better localization of the forgery; the performance of the global threshold is improved by using the latest technique, while reducing the operational complexity.
Introduction
Image forgery is not only a current-day problem; it has existed for decades. The latest software tools and techniques make it very easy to perform. Nowadays, sharing information through images is widespread, and social networking sites play a significant role in spreading fake news. Post-processing is applied to images for a better appearance, but people also use it to spread fake news and to support their goals with false evidence. Directly or indirectly, people are misusing these tools. Research has found that the effect of an image on the mind is very long-lasting compared to other media. It changes our perceptions about what we should eat, wear, etc. If we observe, we will find a lot of fake images around us. These fake images change our perception of the past and affect our present and future as well. Therefore, a system is needed that detects image forgery in order to verify the authenticity of an image. Image forgery is categorized into two groups, active and passive, as shown in Fig. 1. In the active approach, images are protected by watermarks or digital signatures. The passive approach is further divided into two groups: copy-move and splicing. In copy-move forgery, some part of a single image is copied and pasted somewhere else in the same image. In splicing, parts of different images are copied and pasted into another image. The splicing technique is also known as blind because there is no prior information about the image. A number of operations are performed on images, such as resampling, filtering and contrast enhancement, mainly for two purposes: first to hide traces of tampering, and second for retouching. In this paper, we propose an image splicing forgery detection technique. Two operations are performed: training and testing. In training, features of the original and forged images are extracted through the feature extraction technique and the system is trained after classification. In testing, features are extracted from original and forged images as in training, and the classifier is used to determine whether the image is forged or not. Fig. 2 shows the image splicing detection technique.
Literature Review
Image forgery is very common now. Different techniques have been proposed to detect forged images based on feature extraction through various approaches such as illumination, JPEG compression, camera-based properties or limitations, and so on. Alahmadi et al. [1] discussed that the chrominance colour component of the YCbCr colour channel is much more suitable for forgery detection; features are extracted through LBP with DCT and classified by SVM. P. Cavalin et al. [2] implemented a texture-descriptor-based local binary pattern (LBP) technique on the traditional gray-level co-occurrence matrix (GLCM); a CNN and multi-scale patch-based recognition were used with a Fisher vector. Cortes and Vapnik [3] defined SVM as a very simple-to-use and reliable classifier; it works as a learning machine for classification, follows a linear mapping implemented over a very high-dimensional feature vector, and is also reliable for non-separable training data. Dong J et al. [4] presented the features of the CASIA image data sets, described their versions and showed how effective they are as a database. Goh J and Thing VL [5] proposed a hybrid framework used to develop the best feature set from all possible features of image tampering. Hakimi et al. [6] detected image forgery by implementing LBP in the DCT domain; the chrominance colour component is used to form non-overlapping blocks, and a frequency-based feature extraction technique is implemented using the K-Nearest Neighbour (KNN) algorithm to classify the image. Hakimi et al. [7] used the chrominance colour component to divide the image into non-overlapping blocks; to extract features from each block, LBP is implemented with the wavelet transform through principal component analysis (PCA), and SVM is used to classify whether an image is forged or not. He Z et al. [8] proposed an edge-gradient-based run-length matrix; DWT is used to find more features, extracted from the approximation coefficient (Low-Low) band, and the features are classified by a support vector machine (SVM) to identify forgery. He et al. [9] used the Markov approach to identify image forgery, with a block-based feature extraction technique implemented through DCT coefficients on intra-blocks and inter-blocks. Hsu, Chang et al. [10] noted that every camera has a different intensity response, generating geometry invariants of the pixels; the camera noise is used as a fingerprint to identify image splicing and to generate differences between pixel intensities. Muhammad et al. [11] used the chrominance colour component of the image to extract features through LBP histograms with the steerable pyramid transform (SPT), classified by a support vector machine (SVM). Kanwal, N. et al. [12] used an Otsu-based optimal threshold with the mean absolute deviation to extract image features; the energy of the OELTP is used as the primary feature of each block for dimensionality reduction. Ojala et al.
[13] implemented a rotation-invariant technique to extract uniform patterns from grayscale images, quantizing the angular space at a given spatial resolution; multiple operators are used for multiresolution analysis. Shah A and El-Alfy ES [14] proposed a novel approach for forgery detection implementing multilevel LBP on DCT. Tan and Triggs [15] proposed three-value codes (0, 1, -1) to implement LTP on an image, whereas LBP uses two-value codes (0, 1); the performance of LTP is more reliable than that of LBP with respect to noise sensitivity. Yao et al. [16] discussed that noise is generated on an image after post-processing operations, which plays an important role in forgery detection; the noise level function (NLF) is used to characterize noise, the image intensity is used to establish a relation between the NLF and the camera response function (CRF) for forgery detection, and differences in image intensity indicate image splicing. Yuan JH et al. [17] discussed an advanced LTP approach in the form of the ELTP technique; the complete enhanced local ternary pattern (CELTP) concept is used for feature extraction, and the threshold is generated through an auto-adaptive strategy instead of the traditional gray value of the central pixel.
Enhanced Local Ternary Pattern (ELTP)
LTP uses a constant threshold value, which is not fully invariant. A dynamic threshold concept is used in the enhanced local ternary pattern (ELTP) [17]. A mean absolute deviation (MAD) based rule is used to generate the threshold of a block. Implementing the ELTP technique on an image gives better results compared to LBP [25]. Fig. 5 represents the ELTP matrix generation process. First, a 3x3 image matrix is taken and the values -1, 0, 1 are assigned as per equation 2. The upper (+) and lower (-) segments of the ELTP matrix are used to find the decimal codes of the ELTP matrix [20], as per Fig. 5. The ELTP code of the pixel at coordinate (x, y) [17,18] is represented by equation 3.
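A minimal sketch of the ELTP coding just described for a single 3x3 block: the centre value is the block mean, the threshold is the mean absolute deviation, neighbours are assigned -1/0/1, and the upper (+) and lower (-) patterns are converted to decimal codes. The clockwise neighbour ordering and the example block are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def eltp_code(block):
    """ELTP codes for a 3x3 block: centre = block mean, threshold = mean absolute deviation (MAD)."""
    block = np.asarray(block, dtype=float)
    centre = block.mean()                       # I_C, eq. (1)
    thr = np.mean(np.abs(block - centre))       # ts, eq. (1): mean absolute deviation
    # Neighbours read clockwise from the top-left corner (ordering chosen for illustration).
    neighbours = block.flatten()[[0, 1, 2, 5, 8, 7, 6, 3]]
    ternary = np.where(neighbours >= centre + thr, 1,
               np.where(neighbours <= centre - thr, -1, 0))   # eq. (2)
    upper = (ternary == 1).astype(int)          # "+" pattern
    lower = (ternary == -1).astype(int)         # "-" pattern
    weights = 2 ** np.arange(8)
    return int(upper @ weights), int(lower @ weights), ternary

block = [[52, 60, 61],
         [48, 55, 70],
         [35, 50, 58]]
up_code, low_code, ternary = eltp_code(block)
print("ternary pattern:", ternary)
print("upper (+) code:", up_code, " lower (-) code:", low_code)
```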
Effect of DWT and SOELTP on Image
In the proposed work, a digital filtering technique is implemented on the image by DWT to obtain a time-scale depiction. The DWT is applied to the image to find features in the frequency domain [22,23]. The threshold of the digital image has been improved by applying a smoothing technique to the image. The Otsu technique is used to enhance the performance of the histogram of the image. The performance of Otsu's method is represented in Fig. 6.
After implementing the smoothing technique on the image, the performance of the threshold is improved. The histogram of the smoothed image is shown in Fig. 6(e), and the effect of Otsu's method after smoothing is shown in Fig. 6(f). Fig. 6(a) shows the noisy image, and Fig. 6(b) represents the histogram of the noisy image. The Otsu effect on the noisy image is represented in Fig. 6(c), and the effect of the smoothing technique is shown in Fig. 6(d).
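The preprocessing chain described in this section (3x3 averaging, Otsu threshold, single-level 2-D DWT of the smoothed channel) can be sketched as follows, assuming PyWavelets, SciPy and scikit-image are available; the synthetic noisy image merely stands in for a real chrominance channel.

```python
import numpy as np
import pywt                                   # PyWavelets
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu

# Synthetic noisy grey-level image standing in for the chrominance (Cb or Cr) channel.
rng = np.random.default_rng(0)
image = np.zeros((128, 128))
image[32:96, 32:96] = 0.8                     # a bright square on a dark background
image += rng.normal(0.0, 0.15, image.shape)   # additive noise

# 1) 3x3 averaging mask smooths the channel before thresholding (cf. Fig. 6(d)).
smoothed = uniform_filter(image, size=3)

# 2) Otsu's global threshold on the raw and on the smoothed channel.
print("Otsu threshold, raw image:     ", round(float(threshold_otsu(image)), 3))
print("Otsu threshold, smoothed image:", round(float(threshold_otsu(smoothed)), 3))

# 3) Single-level 2-D DWT: the LL (approximation) band carries the low-frequency
#    content on which the OELTP features are computed; LH/HL/HH hold edge detail.
LL, (LH, HL, HH) = pywt.dwt2(smoothed, "haar")
print("sub-band shapes:", LL.shape, LH.shape, HL.shape, HH.shape)
```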
Result and Discussion
We use the accuracy of the forgery detection for performance evaluation. Accuracy is estimated as a percentage from the TP, TN, FP and FN parameters, as represented by equation 4. The result of SOELTP is compared with previous works [1,5,9,11,12] in terms of forgery detection accuracy. A K-fold cross-validation technique is implemented for classification with an SVM classifier. The results of the proposed technique are based on three different open-access data sets: CASIA v1.0, CASIA v2.0 and Columbia [4]. The average accuracy rate is 99.02% on CASIA v1.0, 98.35% on CASIA v2.0 and 97.85% on Columbia [26]; the comparison with previous techniques such as LBP and OELTP is shown in Fig. 8. The performance of SOELTP is better than that of LBP and OELTP. The performance of the proposed technique compared with the previous state of the art is shown in Table 1.
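A sketch of the classification stage is given below: K-fold cross-validation of an SVM scored by accuracy, i.e. (TP + TN)/(TP + TN + FP + FN). The feature matrix here is random stand-in data; real feature vectors would come from the SOELTP/DWT pipeline, and the kernel and fold count are illustrative choices.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Stand-in feature matrix: one SOELTP/DWT feature vector per image (random here),
# with labels 1 = forged and 0 = authentic.
rng = np.random.default_rng(1)
n_images, n_features = 400, 64
X = rng.normal(size=(n_images, n_features))
y = rng.integers(0, 2, size=n_images)

# K-fold cross-validation of an SVM classifier, scoring by accuracy (eq. 4).
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean() * 100:.2f}% +/- {scores.std() * 100:.2f}%")
```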
Conclusion and Future Scope
In this paper, a robust overlapping block-based feature extraction technique has been discussed to detect image forgery. In the proposed work, DWT is implemented over the chrominance component of the image to access the features in terms of frequency. Before implementing the OELTP feature extraction technique on the image, we perform a smoothing technique to get a better histogram of the image. The accuracy of the proposed work gives a better result than the previous state of the art: 99.02% on CASIA v1.0, 98.35% on CASIA v2.0, and 97.85% on the Columbia dataset. In the future, the performance of the proposed work can be improved by localization of the forgery, improving the global threshold using the latest techniques, reducing the operational complexity, and exploring localization in depth.
Figure 1. Image Tampering Detection Techniques

The block-based feature extraction technique proposed in this paper is an advanced image splicing detection technique due to its local feature extraction ability. In the proposed work, an overlapping block-based feature extraction technique is used to extract the features. Three different open-access data sets, CASIA v1.0, CASIA v2.0 and Columbia, are used to evaluate the proposed work. The chrominance colour components of the YCbCr colour image are used to extract the features, because Cb and Cr are highly reliable for forgery detection [18]. The effect of the chrominance components is shown in Fig. 3. The discrete wavelet transform (DWT) technique is implemented on the image to hold the features in the form of frequencies. DWT breaks the image into four frequency bands (LL, LH, HL, HH) and gives satisfactory results in texture classification and edge detection [19]. We propose a unique feature extraction technique, the Smoothing Otsu-based Enhanced Local Ternary Pattern (SOELTP), to identify the features of the image. In this work, before implementing OELTP on the image, a smoothing technique is applied to the approximation coefficients (LL band) of the image. An Otsu-based dynamic threshold value is used to generate the effect of ELTP on images for feature extraction. The architecture of the proposed technique is shown in Fig. 4. The enhanced local ternary pattern (ELTP) uses ternary values (0, 1, -1) to encode the intensity differences of the neighbouring pixels. A 3x3 image mask is used to implement the proposed work. DWT generates the frequency-based features, and novel features are extracted after applying the OELTP technique over the smoothed images. They are normalized by the K-fold technique and classified by SVM to decide whether the image is forged or not.
Figure 3. RGB Image and Corresponding Luminance and Chrominance Component
Figure 4. Architecture of the Proposed Model and LTP

The intensity of the central pixel (I_C^e) and the threshold value (ts^e) of ELTP are defined in equation 1:

M = { I_i | i = 0, 1, 2, 3, …, 8 },  I_C^e = Mean(M),  ts^e = MAD(M)   (1)

In the above equation, M represents the 3x3 matrix of the image. The intensities of the surrounding pixels are represented by I_i. The intensity of the central pixel depends on the mean of the matrix, and the threshold value is based on the mean absolute deviation (MAD) of the surrounding pixels. The ELTP ternary matrix is generated by equation 2.
Figure 6.(a) Image Noise (b) Histogram of the Noise Image (c) Otsu's effect on Image (d) Effect of Smoothing on Noisy Image using 3x3 Averaging Mask (e) Histogram of Smoothed Image (f) Otsu effect on Smoothed Image
Figure 7. DWT Frequency Representation

Accuracy = (TP + TN) × 100 / (TP + TN + FP + FN)   (4)

where TP (True Positive) is the number of fake images identified as fake, TN (True Negative) is the number of original images identified as original, FP (False Positive) is the number of original images identified as fake, and FN (False Negative) is the number of fake images identified as original.
Figure 8. Performance Evaluation of the Proposed Technique Using LBP, OELTP and SOELTP Accuracy
Table 1. Performance of the proposed method's accuracy (in percent) compared with previous work
"Computer Science"
] |
Thermo‐Mechanical Modeling of Pre‐Consolidated Fiber‐Reinforced Plastics for the Simulation of Thermoforming Processes
Multi‐material design aims at the targeted combination of materials with different characteristics in order to meet technical requirements. Especially the combination of metals and fiber‐reinforced plastics (FRP) has led to innovative lightweight structures with high loading capacity and ductility in recent years. The process chain required to produce such structures is characterized by a variety of process parameters which have a significant influence on the quality of the manufactured workpiece. In order to treat the thermoforming process numerically, we present constitutive models for the metal and composite part to simulate the deformation behavior of these components. In addition, experimental setups are described to identify the required material parameters. Simulations of the manufacturing process will indicate correlations between material as well as process parameters and possible defects of the final structure that may occur during manufacturing.
Introduction
In recent years, innovative lightweight structures in multi-material design have been generated by the targeted combination of materials with different property profiles. Especially the combination of metals and FRP has opened new potentials for the generation of structural components with high load-bearing capacity, high ductility and minimum mass [1]. The development process of such structures places high demands on the engineer due to the complex interactions between design, dimensioning and manufacturing [2]. Numerical optimization of the production process of such lightweight assemblies is therefore of high importance [3]. It can reduce the time to market and can avoid the production of costly prototypes. The considered one-step thermoforming process consists of deep-drawing the metal sheet together with a pre-heated fiber-reinforced thermoplastic. The forming process and the plastic flow of the matrix material induce local changes of the fiber orientation. During the following cooling phase, delamination may occur because of the anisotropic properties of the FRP and the different thermal expansion coefficients of metal and composite, which cause residual stresses. These processes strongly affect the shape and loading capacity of the final workpiece [4].
Constitutive models and parameter identification
In the present paper the manufacturing of a hybrid structure consisting of a metal component and a bi-axial reinforced composite plate with a thermoplastic matrix and carbon fibers is considered. For numerical investigations of the forming process suitable material formulations and corresponding parameters must be determined first.
Metal sheet
The numerical simulation of metal parts and structures is widely used in industrial applications. It is applied to predict the behavior of the workpiece, especially in deep-drawing processes, with regard to thickness changes and wrinkling. The accuracy of the simulation depends on the material model for the plastic material behavior. The stress update is based on a rate formulation using the JAUMANN stress rate to calculate the stress increment. To determine the plastic strain increment, the HILL yield condition is used to account for the orthotropic deformation behavior of the rolled metal sheet. In (1), σ1 and σ2 are the in-plane principal stresses and the LANKFORD parameters r0 and r90 describe the shape of the yield surface. In order to identify the mechanical parameters, uniaxial tensile tests of samples with different orientations with respect to the rolling direction have to be performed.
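Since the explicit form of Eq. (1) is not reproduced above, the sketch below uses a common r0/r90 parameterization of the plane-stress HILL (1948) yield condition in the in-plane principal stresses; the Lankford values and the yield stress are illustrative numbers, not identified parameters of the considered sheet.

```python
import numpy as np

def hill48_plane_stress(sigma1, sigma2, r0, r90, sigma_y0):
    """
    Plane-stress Hill'48 yield function in the in-plane principal stresses,
    parameterized by the Lankford coefficients r0 and r90 and the uniaxial
    yield stress sigma_y0 in the rolling direction. Returns f < 0 inside the
    elastic domain and f = 0 on the yield surface.
    """
    G = 1.0 / (1.0 + r0)
    H = r0 / (1.0 + r0)
    F = r0 / (r90 * (1.0 + r0))
    phi = G * sigma1**2 + F * sigma2**2 + H * (sigma1 - sigma2)**2
    return phi - sigma_y0**2

# Illustrative Lankford parameters and yield stress for a deep-drawing steel.
r0, r90, sy = 1.8, 2.2, 180.0   # MPa
print("uniaxial, rolling direction, at yield:", hill48_plane_stress(180.0, 0.0, r0, r90, sy))
print("equibiaxial state at 180 MPa:         ", hill48_plane_stress(180.0, 180.0, r0, r90, sy))
```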
Fiber-reinforced Thermoplastic
The material model employed to describe the behavior of the composite assumes an additive superposition of the textile and matrix contributions. The latter is modeled by a hypoelastic-plastic constitutive law similar to the model used for the metal part. An associated flow rule combined with the VON MISES yield criterion describes the evolution of plastic deformation. The mechanical properties can be defined as temperature dependent. Fibers are described as an anisotropic hyperelastic material with orientation vectors stored at the integration points. Their tension/compression and shear behavior are decoupled and can be defined by means of stress-strain curves and the shear response as a function of the shear angle of the fibers. A modified integration rule in the thickness direction of the shell elements can account for the typically low bending stiffness at temperatures above the melting point while still retaining a high in-plane tension stiffness. A more detailed description of the material model is given in [5]. Experimental data for the parametrization are the force-displacement curves under tensile loads as well as the shear force vs. shear angle curve. The characteristic behavior for in-plane tension is determined from tensile tests on strip specimens. The typically non-linear shear force vs. shear angle curves are recorded using the picture-frame test. Gravimetric cantilever tests may be used to determine the temperature-dependent bending stiffness. The mentioned material tests are performed at different process-related temperatures. The temperature-dependent material parameters of the polymer matrix were identified based on stress-strain curves at varying temperatures of polyamide 6.6 from the CAMPUS plastics database [6].
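A much-reduced, one-dimensional illustration of the additive superposition idea: the composite stress is taken as the sum of a matrix contribution with a temperature-dependent stiffness (interpolated from a table) and a fiber contribution read off a tabulated tension stress-strain curve. All tables and numbers are invented for illustration; the actual model is a full hypoelastic-plastic / anisotropic hyperelastic shell formulation.

```python
import numpy as np

# Temperature-dependent matrix stiffness (illustrative values for a thermoplastic, MPa vs. degC).
T_table = np.array([23.0, 100.0, 200.0, 260.0])
E_matrix_table = np.array([3000.0, 1200.0, 300.0, 30.0])

# Tabulated fiber tension response along the fiber direction (strain vs. stress, MPa).
eps_fib_table = np.array([0.0, 0.005, 0.010, 0.015])
sig_fib_table = np.array([0.0, 600.0, 1150.0, 1650.0])

def composite_stress_1d(strain, temperature):
    """Additive superposition of matrix and fiber contributions for a uniaxial strain state."""
    E_m = np.interp(temperature, T_table, E_matrix_table)   # matrix stiffness at this temperature
    sigma_matrix = E_m * strain                              # elastic part only; plasticity omitted
    sigma_fiber = np.interp(strain, eps_fib_table, sig_fib_table)
    return sigma_matrix + sigma_fiber

for T in (23.0, 200.0, 260.0):
    print(f"T = {T:5.1f} degC, strain 1%: sigma = {composite_stress_1d(0.01, T):7.1f} MPa")
```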
Numerical results
Simulations were performed using the finite element software LS-DYNA. A metal sheet and an FRP sheet are formed simultaneously with varying process and material parameters. Fig. 1 shows the correlation between the process temperature and the formation of wrinkles. Lower temperatures lead to higher compressive stresses in the matrix, which induce more wrinkles.
(Fig. 1 panel labels: ϑm, ϑm + 10 K, ϑm − 10 K.) Further parameter studies were performed to show more correlations between process parameters, such as forming temperature and binder force, and the deformation behavior of the FRP and metal components and the tendency for defects such as wrinkles to occur.
"Materials Science"
] |
Measurement of differential $b\bar{b}$- and $c\bar{c}$-dijet cross-sections in the forward region of $pp$ collisions at $\sqrt{s}=13 ~ \mathrm{TeV}$
The inclusive $b \bar{b}$- and $c \bar{c}$-dijet production cross-sections in the forward region of $pp$ collisions are measured using a data sample collected with the LHCb detector at a centre-of-mass energy of 13 TeV in 2016. The data sample corresponds to an integrated luminosity of 1.6 fb$^{-1}$. Differential cross-sections are measured as a function of the transverse momentum and of the pseudorapidity of the leading jet, of the rapidity difference between the jets, and of the dijet invariant mass. A fiducial region for the measurement is defined by requiring that the two jets originating from the two $b$ or $c$ quarks are emitted with transverse momentum greater than 20 GeV$/c$, pseudorapidity in the range $2.2<\eta<4.2$, and with a difference in the azimuthal angle between the two jets greater than 1.5. The integrated $b \bar{b}$-dijet cross-section is measured to be $53.0 \pm 9.7$ nb, and the total $c \bar{c}$-dijet cross-section is measured to be $73 \pm 16$ nb. The ratio between $c \bar{c}$- and $b \bar{b}$-dijet cross-sections is also measured and found to be $1.37 \pm 0.27$. The results are in agreement with theoretical predictions at next-to-leading order.
Introduction
Measurements of bb and cc production cross-sections provide an important test of quantum chromodynamics (QCD) in proton-proton collisions. In these collisions, bottom and charm quarks are mostly produced in pairs by quark and gluon scattering processes, predominantly by flavour creation, flavour excitation and gluon splitting [1]. At the LHC energies beauty and charm quarks produced in the collisions are likely to generate jets through fragmentation and hadronization processes. Experimentally, one can infer the production of a beauty (charm) quark either through exclusively identifying b (c) hadron decays, or through the reconstruction of jets that are tagged as originating in heavy flavour quark fragmentation.
As bb-and cc-dijet differential cross-sections can be calculated in perturbative QCD (pQCD) as a function of the dijet kinematics, comparisons between data and predictions provide a critical test of next-to-leading-order (NLO) pQCD calculations [2,3]. Measurements of differential cross-sections of heavy-flavour dijets can also be a sensitive probe of the parton distribution functions (PDFs) of the proton. Among the LHC experiments, the PDF region with low Bjorken-x values is accessible only to LHCb, due to its forward acceptance [4]. Moreover, the knowledge of the inclusive b and c quarks production rate from QCD processes is necessary to understand the background contributions in searches for massive particles decaying into b or c quarks, such as the Higgs boson or new heavy particles.
In 2013, the LHCb collaboration measured the integrated bb and cc production cross-sections at a centre-of-mass energy of √ s = 7 TeV in the region of pseudorapidity 2.5 < η < 4.0, tagging the quark flavour via the reconstruction of displaced vertices [5]. The LHCb collaboration has also measured the b-quark production cross-section at √ s = 7 and 13 TeV in the region with pseudorapidity 2 < η < 5, using semileptonic decays of b-flavoured hadrons [6]. The ATLAS collaboration has measured the inclusive bb-dijet production cross-section [7], while the CMS collaboration has performed a measurement of the inclusive b-jet production cross-section [8]. The latter two measurements were performed at √ s = 7 TeV in the central pseudorapidity region, with |η| < 2.5. In this paper, a measurement of the inclusive bb-and cc-dijet cross-sections at √ s = 13 TeV is presented. The data sample used corresponds to a total integrated luminosity of proton-proton (pp) collisions of 1.6 fb −1 , collected during the year 2016. Cross-section measurements are also performed differentially as a function of the dijet kinematics. The ratio of the cc to the bb cross-sections is also determined. This is the first cc-dijet differential cross-section measurement at a hadron collider. This paper is structured as follows. The LHCb detector and the simulation samples used in this analysis are introduced in Sec. 2. Section 3 presents the selection of the events and the tagging of jets as originating from b and c quarks, as well as the definition of the variables used for the cross-section measurement. The fitting procedure is described in Sec. 4. The unfolding procedure used to convert the raw observables into generatorlevel observables is described in Sec. 5. Systematic uncertainties on the cross-section measurements are discussed in Sec. 6. The determination of the cross-section ratios is introduced in Sec. 7. Finally, results are shown in Sec. 8 and conclusions are drawn in Sec. 9.
Jet reconstruction and event selection
Jets are reconstructed using particle flow objects as input [22]. The objects are combined employing the anti-k T algorithm [23], as implemented in the Fastjet software package [24], with a jet radius parameter of R = 0.5. The offline and online jet reconstruction algorithms are identical, however minor differences between offline and online may arise from different reconstruction routines for tracks and calorimeter clusters that are used in the two contexts. Systematic uncertainties are evaluated to cover these small differences and described in Sec. 6.
To improve the rejection of fake jets, such as jets originating from noise and high energy isolated leptons, additional criteria, similar to those explained in Ref. [22], are imposed. In particular jets are required to contain at least two particles matched to the same PV, at least one track with p T > 1.2 GeV, no single particle with more than 10% of the jet p T and to have the fraction of the jet p T carried by charged particles greater than 10%. These requirements have been optimized using simulated samples produced with 2016 running conditions.
In this paper the jet flavours are distinguished by using a heavy-flavour jet-tagging algorithm, that is referred to as "SV-tagging". The SV-tagging algorithm reconstructs secondary vertices (SVs) using tracks inside and outside of the jet and is described in detail in Ref. [25]. In this algorithm tracks that have a significant p T and displacement from every PV are combined to form two-body SVs. Then good quality two-body SVs are linked together if they share one track, in order to form n-body SVs. If a SV is found inside the cone of the jet, the jet is tagged as likely to be originating from b-or c-quark fragmentation. To further distinguish light-flavour jets from heavy-flavour jets and b-jets from c-jets multivariate analysis algorithms as described in Ref. [25] are used. Two boosted decision tree (BDT) classifiers [26][27][28], that use as inputs variables related to the SV, are employed: one for heavy-/light-jet separation (BDT bc|q ) and the other for b-/c-jet separation (BDT b|c ).
The offline selection is applied to events that pass the trigger criteria for heavy-flavour dijets. Two offline-reconstructed jets originating from the same PV are selected as dijet candidates. The kinematic requirements in Tab. 1 are applied to the reconstructed jets. In the table and in the remainder of the paper, the leading jet, j0, is that with the largest pT, and j1 is the other jet in the pair. The kinematic selection includes a requirement on |∆φ|, the difference in the azimuthal angle between the jets. In 0.4% of the selected events multiple dijet candidates exist after applying all the requirements; the jet pair with the maximum sum of the pT of the two jets is selected in these cases. It has been verified in simulation that this choice does not bias the results. The fraction of events with multiple candidates found in simulation is similar to that in data. The differential cross-section is measured as a function of four observables: the leading jet pseudorapidity η(j0), the leading jet transverse momentum pT(j0), the dijet invariant mass mjj, and the rapidity difference between the jets, defined in terms of the jet rapidities y0 and y1.
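The fiducial dijet selection quoted above (pT > 20 GeV/c, 2.2 < η < 4.2, |∆φ| > 1.5, leading jet by pT, and the pair with the largest pT sum when several candidates survive) can be sketched as follows; the jet container format (dicts with pt, eta, phi) is an assumption made purely for illustration.

```python
import numpy as np

def select_dijet(jets):
    """
    Apply the fiducial dijet selection to a list of jets, each given as a dict with
    keys pt [GeV/c], eta and phi, and return the chosen (j0, j1) pair or None.
    """
    # Kinematic acceptance for each jet.
    accepted = [j for j in jets if j["pt"] > 20.0 and 2.2 < j["eta"] < 4.2]
    candidates = []
    for i in range(len(accepted)):
        for k in range(i + 1, len(accepted)):
            a, b = accepted[i], accepted[k]
            dphi = abs(a["phi"] - b["phi"])
            dphi = min(dphi, 2.0 * np.pi - dphi)          # wrap the azimuthal difference
            if dphi > 1.5:
                # Order so that j0 is the leading (highest-pt) jet of the pair.
                j0, j1 = (a, b) if a["pt"] >= b["pt"] else (b, a)
                candidates.append((j0, j1))
    if not candidates:
        return None
    # If several pairs survive, keep the one with the largest pt(j0) + pt(j1).
    return max(candidates, key=lambda pair: pair[0]["pt"] + pair[1]["pt"])

jets = [{"pt": 45.0, "eta": 3.1, "phi": 0.3},
        {"pt": 28.0, "eta": 2.6, "phi": 2.9},
        {"pt": 22.0, "eta": 4.0, "phi": 1.0}]
print(select_dijet(jets))
```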
Finally, a data sample in which a Z boson is produced in association with a jet and decays to a µ + µ − pair is used to measure efficiencies and assess several systematic uncertainties in this analysis. A similar selection to that of Ref.
[29] is applied, with some differences introduced to match the jet phase space considered in this analysis. This sample is further referred to as Z + jet.
Fitting procedure
A fit to SV-tagging-related observables is performed in order to extract the bb- and cc-dijet yields. The fit is performed in intervals of the dijet kinematics introduced in the previous section. The expected distributions of the tagging observables for bb, cc and background samples are obtained as histograms using simulated samples. Four tagging observables are used for disentangling the bb and cc processes from the background: the output of the classifiers BDT bc|q and BDT b|c for the leading jet and BDT bc|q and BDT b|c for the second jet. In principle a four-dimensional fit would give the best result in terms of the statistical uncertainty, but this is not optimal given the finite simulated sample sizes. Instead, two new observables are built as linear combinations of the four tagging observables: An alternative method, where the product of the SV-tagging observables is considered instead of the sum, is used to evaluate a systematic uncertainty on the procedure. Three different types of processes are expected in the data sample: same-flavour processes, different-flavour processes and background from light jets. In same-flavour processes two b-jets or two c-jets are detected in the acceptance; they are labeled as bb and cc. Different-flavour processes are bbq and ccq processes (with q = u, d, s, g) where one b- or c-jet is detected in the LHCb acceptance and the second jet in the dijet is a light-flavour jet; they are labeled as bq and cq. The bbcc process has a cross-section about three orders of magnitude smaller than the bbq and ccq processes [16], and is neglected in the fit. The background from light jets, where the two light jets may have different flavours, is labeled as qq'. Fit templates are constructed as two-dimensional (t0, t1) histograms, with a 20 × 20 binning scheme and t0, t1 ∈ [−2, 2]. For the same-flavour processes and the background from light jets, the histograms are filled with simulated events. For the different-flavour processes, two-dimensional (BDT bc|q, BDT b|c) single-jet templates with 20 × 20 bins are built using the bb, cc and light-parton simulation samples. Two-dimensional (t0, t1) different-flavour templates are then obtained from a convolution of two single-jet templates. Same-flavour, different-flavour and light-jet template projections in t0 and t1 obtained in this way are shown in Fig. 1. For different-flavour processes, separate bq, qb, cq, qc templates are considered where the first flavour is associated to j0 and the second to j1.
The outputs of the BDT bc|q and BDT b|c classifiers show a correlation with the jet p T , while they are almost uncorrelated with the jet pseudorapidity and the other kinematic variables. For this reason different templates are built for different intervals of [p T (j 0 ), p T (j 1 )]. The p T binning scheme is the following: [20,30] GeV/c, [30,40] GeV/c, [40,50] GeV/c, [50,60] GeV/c and p T > 60 GeV/c. As by definition p T (j 0 ) is higher than p T (j 1 ), only 15 non-empty [p T (j 0 ), p T (j 1 )] intervals are present.
In order to measure the differential bb- and cc-dijet yields, the dataset is divided into subsamples for each of the kinematic observables. For a given observable the data sample is divided in three dimensions, into bins of that observable and into bins of [p T (j 0 ), p T (j 1 )]. The [p T (j 0 ), p T (j 1 )] binning scheme is identical to that employed in the template construction. For each bin a (t 0 , t 1 ) fit is performed using the templates corresponding to the [p T (j 0 ), p T (j 1 )] interval, and the extracted bb- and cc-dijet yields are summed over the [p T (j 0 ), p T (j 1 )] bins in order to obtain the yields in the different intervals of the kinematic observable. The fits are performed with the yield of each species as a freely varying parameter, and each bin is fitted independently.
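The per-bin fits can be sketched as a binned extended maximum-likelihood fit with one freely varying yield per species; the snippet below is a schematic illustration under that assumption, with placeholder inputs, and is not a description of the actual fitting software.

    import numpy as np
    from scipy.optimize import minimize

    def fit_yields(data_hist, templates):
        # templates: dict mapping species (bb, cc, bq, qb, cq, qc, qq) to a
        # normalised 20 x 20 (t0, t1) template; returns fitted yields per species
        names = list(templates)
        stack = np.stack([templates[n].ravel() for n in names])   # (n_species, 400)
        data = data_hist.ravel()

        def nll(yields):
            mu = np.clip(yields @ stack, 1e-9, None)   # expected counts per (t0, t1) bin
            return np.sum(mu - data * np.log(mu))      # Poisson NLL up to a constant

        start = np.full(len(names), data.sum() / len(names))
        res = minimize(nll, start, method="L-BFGS-B", bounds=[(0.0, None)] * len(names))
        return dict(zip(names, res.x))

    # yields are then summed over the [pT(j0), pT(j1)] intervals for each bin of the
    # kinematic observable, e.g.
    # n_bb = sum(fit_yields(data[b], templates[b])["bb"] for b in pt_intervals)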
The results obtained by summing the fitted yields of bb, cc, bq + qb and cq + qc over the η(j 0 ) and [p T (j 0 ), p T (j 1 )] bins are shown in Fig. 2. The uncertainty on the fit includes the statistical uncertainty on the data and systematic uncertainties related to the fit procedure, the template modeling and the finite size of the simulation samples used to construct the templates. The evaluation of these systematic uncertainties is described in Sec. 6.
Pseudoexperiments are performed to assess the fit stability and to determine the fit bias and coverage. Relative biases of the order of 0.01% (0.02%) on the fitted bb (cc) yields are found; these values are used to correct the fit results. Moreover, the pseudoexperiments indicate that relative biases of the order of 10% on the fitted yield uncertainties are present. A correction is therefore also applied to the fit uncertainties.
Determination of the cross-section
The yields in each of the bins of the observables are used to calculate differential cross-sections at generator level, using an unfolding technique to correct for bin migrations due to detector effects and resolution. A least-squares method with Tikhonov regularisation [30] is employed. Generator-level jets are defined as jets clustered with the anti-k T algorithm [23] using simulated quasi-stable particles, excluding neutrinos, as inputs. The fiducial region of the measurement is defined by the kinematic requirements in Tab. 1 applied to jet observables at generator level. The cross-sections are evaluated as

dσ/dz (i) = A(i) / (ε(i) L ∆z(i)) × Σ_j U ij N(j),   (2)

where z is the variable under study at generator level, i indicates the index of the generator-level bin, ∆z(i) is the width of the bin, L is the integrated luminosity, N(j) is the number of fitted events in the bin [z reco j − ∆z reco , z reco j + ∆z reco ] defined using the reconstructed variables, A(i) is the acceptance factor for the bin i, ε(i) is the efficiency for the bin i and U ij is the unfolding matrix that maps reconstructed to generator-level variables. The acceptance factor is introduced in the cross-section formula to account for the migration of events into and out of the fiducial region, which is not accounted for in the unfolding matrix.
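Written out in code, the evaluation of Eq. 2 is a small linear-algebra step once the fitted yields, efficiencies, acceptance factors and unfolding matrix are available per bin; the sketch below uses toy numbers purely for illustration, and none of the values are taken from the analysis.

    import numpy as np

    def differential_xsec(n_fitted, unfold, acc, eff, bin_width, lumi):
        # dsigma/dz per generator-level bin i from fitted yields per reconstructed bin j
        # n_fitted: (n_reco,), unfold: (n_gen, n_reco), acc/eff/bin_width: (n_gen,)
        unfolded = unfold @ n_fitted                    # sum_j U_ij N(j)
        return acc * unfolded / (eff * lumi * bin_width)

    # toy example with two bins (all numbers illustrative)
    U = np.array([[0.9, 0.1], [0.1, 0.9]])
    xsec = differential_xsec(np.array([1200.0, 800.0]), U,
                             acc=np.array([0.95, 0.93]),
                             eff=np.array([0.15, 0.15]),
                             bin_width=np.array([10.0, 10.0]),
                             lumi=5.0e3)                # assumed luminosity in nb^-1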
The efficiency is written as the product ε(i) = ε reco × ε tag × ε trig , where ε reco is the jet reconstruction efficiency, ε tag is the jet tagging efficiency of reconstructed jets and ε trig is the trigger efficiency evaluated on tagged jets. The total efficiency is obtained using simulated bb and cc samples. Per-event weights are applied to simulated events in order to correct for data/simulation differences. For the trigger efficiency, ε trig , per-jet data/simulation weights are measured following the procedure in Ref. [31]. The trigger efficiency must also be corrected for the GEC requirement, since its efficiency is about a factor 0.6 lower in data than in simulation. The GEC efficiency is determined in data using independent samples of events with no GECs applied. Finally, ε trig is corrected for data/simulation differences in the efficiency of the other trigger requirements. To do this, per-jet weights are measured with a tag-and-probe technique, comparing data and simulation samples of Z + jet events. The total selection efficiency is found to be about 15% for the bb process and about 1.5% for the cc process. This difference is due to the SV-tagging efficiency, as explained in Sec. 3.
Unfolding matrices are obtained for each of the four considered observables using simulation. Uncertainties due to the finite simulated sample size in the unfolding matrix construction are propagated to the result. Since the detector response is known to be similar for b- and c-jets and the bb simulation sample is larger, the bb sample is used for both the bb and cc unfolding. A systematic uncertainty due to differences in the underlying dijet kinematics for bb and cc is discussed in Sec. 6.
Systematic uncertainties
Systematic uncertainties can affect the fitting procedure, the selection efficiencies, the acceptance factor, the unfolding and the integrated luminosity.
The systematic uncertainty affecting the GEC efficiency arises mainly from the different values obtained in data subsamples where no GECs are applied, since these have different compositions of bb, cc and qq events. The resulting uncertainty on the efficiency determination is 6.3%, correlated across all bins. Remaining differences between data and simulation in the trigger efficiency are taken into account using per-event weights. The statistical uncertainty on the weights is taken as a systematic uncertainty; the mean relative uncertainty associated with these weights is around 3%. An additional source of systematic uncertainty arises from the difference between the online and offline physical-object reconstruction algorithms. A subset of data events where only one jet is required to be SV-tagged at trigger level is used to assess this uncertainty. This systematic uncertainty is again around 3%.
Data-simulation corrections are applied to simulated events in the evaluation of the SV-tagging efficiency. The corrections and corresponding uncertainties follow Ref. [25], in which these values were computed with data taken at √ s = 7 and 8 TeV. A tag-and-probe technique was used on control samples with a jet and a W boson, or a B or D meson. The main systematic uncertainties arise from the modeling of the IP distribution for light jets in simulation, since a fit to this observable has been used to disentangle the different flavour components in the control samples prior to the SV-tagging requirement. These corrections and uncertainties are verified in Ref. [31] to agree within 3% between that data sample and the one used in this analysis. The systematic uncertainty for the SV-tagging efficiency, of around 20%, dominates the total uncertainty on the cross-section measurements. It is correlated across all bins of the analysis. Uncertainties affecting the jet identification efficiency are evaluated as described in Ref. [22], using the Z + jet data sample. The relative variation of the number of selected jets is compared between data and simulation, and the differences observed, which are at the level of 5% per jet, are used as a systematic uncertainty on ε reco .
The uncertainties associated with differences between data and simulation in the jet energy resolution and jet energy scale affect the unfolding procedure, the acceptance factor and the efficiency measurement. Both the jet energy resolution and scale uncertainties are evaluated as explained in Ref. [29], using the Z + jet data sample introduced in Sec. 3. To account for the jet energy resolution uncertainty, Z + jet events are used to evaluate the maximum Gaussian smearing that needs to be applied to the jet p T in simulation to achieve agreement with data within one standard deviation. To determine the uncertainty arising from the jet energy scale, the same events are used to evaluate the multiplicative factor that needs to be applied to the jet p T in simulation to achieve agreement within one standard deviation. The uncertainty associated with both of these effects is found to be negligible in this analysis.
Systematic uncertainties associated with the modelling of the templates may arise from differences between data and simulation in the BDT classifier distributions and affect the fitting procedure. In order to evaluate them, the analysis is repeated using two other variables related to the SV employed for the jet SV-tagging: the corrected SV mass [25] and the number of tracks in the SV. In analogy with t 0 and t 1 , new observables are built by summing these variables for j 0 and j 1 , and the fits are then performed in the new space. The differences between these and the nominal results are used to evaluate the systematic uncertainties, which are on average at the level of 3%. Concerning the fitting procedure itself, an alternative algorithm is applied, where the responses of the two BDT classifiers are multiplied rather than combined linearly. Once again, the fits are repeated and the results compared with the nominal ones. The resulting uncertainties are below 1% for the bb yields and about 10% on average for the cc yields.
In order to assess the uncertainty due to the finite simulated sample size in the template construction, new templates are obtained with a "bootstrapping with replacement" technique [32]. For each simulation sample used to build a template, comprising N events, N events are randomly extracted from the sample, allowing the same event to be taken multiple times (repetitions). This new set of events is then used to obtain a new template. It has been demonstrated that the distribution of fit results obtained with the bootstrap technique mimics the distribution of results due to the finite simulated sample size [33]. The width of the distribution of the fit results using the different bootstrap templates is taken as the uncertainty associated with the simulated sample size. The relative mean uncertainty is 0.8% for the bb and 3.6% for the cc yields. The finite simulated sample size also affects the efficiency evaluation, as well as the unfolding matrices. For the former, the effect is small compared to other uncertainties. For the latter, it is taken into account automatically by the unfolding algorithm [34].
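The bootstrapping step can be illustrated with a few lines of numpy; the template builder and fit function below are placeholders standing in for the procedures described above, and the random seed is arbitrary.

    import numpy as np

    rng = np.random.default_rng(2718)

    def bootstrap_yield_spread(sim_events, data_hist, build_templates, fit_yields,
                               species="cc", n_boot=200):
        # width of the fitted-yield distribution over bootstrapped templates, taken
        # as the uncertainty from the limited simulated sample size; sim_events is
        # assumed to be an indexable (e.g. structured numpy) array of N events
        n = len(sim_events)
        results = []
        for _ in range(n_boot):
            resampled = sim_events[rng.integers(0, n, size=n)]   # N draws with replacement
            templates = build_templates(resampled)
            results.append(fit_yields(data_hist, templates)[species])
        return np.std(results, ddof=1)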
The uncertainty on the modelling of initial-state (ISR) and final-state radiation (FSR) in simulation may affect the acceptance-factor determination, since different parametrisations of the gluon emission could change the jet kinematic distributions. Simulation samples in which the ISR and FSR parameters are varied are generated to determine the uncertainty. In particular, the multiplicative factor applied to the renormalisation scale for ISR (FSR) branchings, µ ISR R (µ FSR R ), is varied between 0.5 and 2, and the additive non-singular term in the ISR (FSR) splitting functions, c ISR N S (c FSR N S ), is varied between −2 and 2 [16]. The new acceptance factors are compared to the nominal ones, and their relative variation is taken as a systematic uncertainty. On average the variation is about 3%.

The unfolding matrix receives systematic uncertainties from all the different sources described in this section. These are propagated through the unfolding procedure. In the unfolding algorithm a regularisation parameter is chosen via a minimisation procedure. To further cross-check the algorithm, the regularisation parameter is varied around the minimum and the unfolding is repeated. Using a conservative approach, the variation is chosen to be ±50% of the parameter value. The difference with respect to the nominal result is taken as the systematic uncertainty associated with the unfolding procedure, which is below 1%. Another source of uncertainty is associated with the unfolding model. It is assessed by varying the underlying dijet kinematic distributions in the simulation samples used for the determination of the unfolding matrix. The unfolding procedure is repeated with this alternative set of unfolding matrices and the unfolded distributions are compared with the nominal ones. The relative variation in each bin is used as a systematic uncertainty. This is again on average below 1%.
Finally, the systematic uncertainty on the integrated luminosity is about 4%, determined as explained in Refs. [31,35].
The systematic contributions from each source are summarized in Table 2. In the table, the mean relative uncertainties calculated averaging over η(j 0 ) intervals are reported separately for bb and cc events. Since the different sources are considered to be uncorrelated, the total uncertainties are obtained by summing the individual uncertainties in quadrature. The unfolding-related systematic uncertainties are in principle correlated with some of the efficiency uncertainties, but since they are sub-dominant this correlation is neglected. The dominant systematic uncertainties are those related to the GEC efficiency, the SV-tagging and the fit procedure. Finally, a closure test is performed to assess the validity of the analysis procedure. For this, the simulation samples are used to prepare a test dataset, and the full analysis chain is applied, from the fit to the unfolding. The measured and reference cross-sections are compatible within their statistical and systematic uncertainties.
Ratio of cc-and bb-dijet cross-sections
This section presents the method used to determine the cross-section ratio, R, between cc and bb production. The measurement of R is also performed in the different bins of kinematic observables: leading jet η, leading jet p T , ∆y * and m jj . The same binning scheme for reconstructed and generator-level observables is used as for the cross-section measurements.
Since several experimental and theoretical uncertainties cancel in the ratio, it provides an excellent test of the SM and of pQCD, and can also provide valuable input to the global fits used to extract the proton PDFs [3].
In analogy with Eq. 2, the unfolded R can be obtained with the following formula:

R(i) = [ Σ_j U ij N cc (j) / ε cc tag (j) ] / [ Σ_j U ij N bb (j) / ε bb tag (j) ],   (3)

where i indicates the index of the bin defined for generator-level variables, N cc (j) (N bb (j)) is the number of fitted cc (bb) events in the bin j defined for reconstructed variables, U ij is the unfolding matrix introduced in Sec. 5 and ε cc tag (j) (ε bb tag (j)) is the cc (bb) tagging efficiency in bin j. Apart from the tagging efficiency, which depends on the properties of b- and c-hadrons, it has been verified in simulation that all other efficiencies and acceptance factors are compatible and fully correlated between b- and c-jets; therefore they cancel in the ratio and are not included in the formula. The correlation between N cc (j) and N bb (j) is neglected when determining the uncertainty on R. This correlation leads to a small change in the statistical uncertainty and is negligible compared to the total uncertainty on R, which is of the order of 20%.
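As a sketch, the ratio of Eq. 3 can be computed per generator-level bin as below, assuming the same unfolding matrix is applied to both flavours and that only the tagging efficiencies differ between numerator and denominator; all names are illustrative.

    import numpy as np

    def xsec_ratio(n_cc, n_bb, eff_tag_cc, eff_tag_bb, unfold):
        # R per generator-level bin: tagging-efficiency-corrected, unfolded cc yields
        # divided by the corresponding bb yields; common efficiencies, acceptance
        # factors and the luminosity cancel and are omitted
        num = unfold @ (n_cc / eff_tag_cc)
        den = unfold @ (n_bb / eff_tag_bb)
        return num / den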
Most sources of systematic uncertainty are common between the numerator and denominator of Eq. 3 and cancel in the ratio. The exceptions are the SV-tagging systematic uncertainty, since it is measured on complementary data samples for b- and c-jets [25]; the fit-procedure systematic uncertainty; the template-modelling systematic uncertainty; and the simulation sample size uncertainty. These systematic uncertainties are considered uncorrelated. Although those related to the fit procedure should in principle take into account the anti-correlation between the fitted bb and cc yields, these uncertainties are found to be negligible with respect to the statistical uncertainty.
Results and predictions
In this section the measurements of the bb- and cc-dijet differential cross-sections are presented, as well as the measurement of their ratio.
The measurements in this section are compared with the NLO pQCD cross-section predictions obtained with Madgraph5 aMC@NLO [2] for the matrix-element computation and Pythia for the parton shower. The predictions take into account the FSR and ISR contributions [16]. The NNPDF2.3 NLO set [36] is used as the PDF set for the calculation. At least two generator-level jets are required in the fiducial region, and the two jets with highest p T that fulfill the requirements are used to calculate the differential distributions. The renormalisation (µ r ) and factorisation (µ f ) scales are set dynamically to the sum of the transverse masses of all final-state particles divided by two. The scale uncertainty is obtained from an envelope of seven combinations of (µ r , µ f ) values, with µ r and µ f varied by factors of 0.5, 1 and 2. The PDF uncertainty is obtained as the envelope of 100 NNPDF2.3 NLO replicas. The uncertainties on the predictions are correlated across the kinematical intervals considered for the measurement. At high leading jet p T and m jj the prediction uncertainties are of the order of 15%, for both the bb and cc cross-sections. In principle more advanced techniques can reduce the prediction uncertainty [3] in the high-m jj region, while phenomenological studies at low mass, where the renormalisation and factorisation scale uncertainty is larger, do not exist. The measurements are also compared with a leading-order prediction obtained with Pythia for both the process generation and the parton shower.

Figure 3 shows the bb- and cc-dijet differential cross-sections as a function of the leading jet η, the leading jet p T , ∆y * and m jj . The cross-sections as a function of ∆y * and m jj are presented on a logarithmic scale, while in App. A they are presented on a linear scale. The numerical values of the measured cross-sections, the covariance matrices for the bb (cc) intervals and the cross-correlation matrix between the bb and cc intervals are reported in App. B. The total uncertainty is almost fully correlated across the bins, since it is dominated by common systematic uncertainties. The only uncorrelated contributions to the total uncertainty are the statistical uncertainty and the systematic uncertainty related to the finite simulated sample size, which are negligible with respect to the total uncertainty. Note that the leading jet p T and m jj ranges are reduced to [20, 70] GeV/c and [40, 150] GeV/c 2 respectively, because the unfolding produces cross-sections compatible with zero in the high-p T and high-mass bins. The measurements are generally slightly below the predictions. The compatibility of the measurements with the prediction, obtained including the uncertainties on both, is within 1 to 2 standard deviations. The predictions at low leading jet p T and m jj show large uncertainties, which are dominated by the renormalisation and factorisation scale uncertainty. The global compatibility of the measurements with the predictions, calculated considering the correlations between the different bins, is 0.9 σ for the bb- and 0.8 σ for the cc-dijet cross-sections. Figure 4 shows the cross-section ratio R as a function of the leading jet η, the leading jet p T , ∆y * and m jj . The R measurements are compatible with the prediction within its uncertainties.
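The scale-uncertainty envelope quoted for the predictions can be sketched as follows, assuming the common convention of using the seven (µ r , µ f ) combinations that exclude the two opposite-extreme variations; the prediction function is a placeholder, not an interface to the actual generator setup.

    import numpy as np

    # seven (mu_r, mu_f) multipliers: all 0.5/1/2 combinations except (0.5, 2) and
    # (2, 0.5); the exclusion of the opposite extremes is an assumed convention
    SEVEN_POINT = [(1, 1), (0.5, 0.5), (2, 2), (0.5, 1), (2, 1), (1, 0.5), (1, 2)]

    def scale_envelope(predict):
        # predict(mu_r, mu_f) -> array of predicted cross-sections per bin
        nominal = predict(1, 1)
        variations = np.stack([predict(mr, mf) for mr, mf in SEVEN_POINT])
        return nominal, variations.min(axis=0), variations.max(axis=0)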
It can be noticed that the measured ratio R is of the order of 1.4, significantly lower than the inclusive cc/bb ratio expected in pp collisions: this is due to the jet p T > 20 GeV/c requirement of the fiducial region, which partially compensates the effect of the different b- and c-quark masses.
The differential distributions are summed to obtain the integrated cross-sections in the fiducial region. In this way four different values of σ(pp → bb-dijet X), σ(pp → cc-dijet X), where X indicates additional particles produced in the collisions, and of R are obtained, one for each observable. The different measurements of the same quantity agree within their total uncertainty. The integrated measurements obtained from the ∆y * distributions have the smallest relative uncertainty from the fit procedure and are taken as the nominal results. The total integrated bb- and cc-dijet cross-sections and their ratio are presented in Table 3, where the statistical, systematic and luminosity uncertainties are given separately. The total cross-sections and R are compatible with the prediction from Madgraph5 aMC@NLO + Pythia within its uncertainty.
Summary
Measurements of the total and differential bb- and cc-dijet production cross-sections in pp collisions at √ s = 13 TeV in the LHCb acceptance have been presented. The ratio, R, between the cc- and bb-dijet cross-sections has also been measured. Results are presented for the fiducial region for generator-level jets with transverse momentum p T > 20 GeV/c, pseudorapidity 2.2 < η < 4.2 and azimuthal difference |∆φ| > 1.5.
The total measured bb-dijet cross-section in the fiducial region is reported in Table 3; the first uncertainty is the combined statistical and systematic uncertainty and the second is due to the precision of the luminosity calibration. The relative statistical uncertainty is 0.012%. The total measured cc-dijet cross-section in the fiducial region is σ(pp → cc-dijet X) = 72.6 ± 16.1 ± 2.9 nb, with a relative statistical uncertainty of 0.03%. The measured ratio between the two cross-sections, also reported in Table 3, has a relative statistical uncertainty of 0.03%. The total cross-sections and the ratio between the two are compatible with the Madgraph5 aMC@NLO + Pythia expectation within the total uncertainties. Differential cross-sections are measured as a function of the leading jet η, the leading jet p T , ∆y * and m jj and found to agree within 1 to 2 standard deviations with the predictions, depending on the intervals. The numerical values of the cross-sections and cross-section ratios are summarized in App. B. This is the first inclusive, direct measurement of the differential cc-dijet production cross-section at a hadron collider.

Table 3: The total bb-dijet and cc-dijet cross-sections and their ratio in the fiducial region, compared with the NLO predictions. The first uncertainty on the measurement is the combined statistical and systematic uncertainty and the second is the uncertainty from the luminosity. For the measurement of R the luminosity uncertainty cancels in the ratio. The statistical uncertainty for the cross-section and R measurements is also reported. For the predictions the first uncertainty corresponds to the scale uncertainty, the second to the PDF uncertainty.
[29] LHCb collaboration, R. Aaij et al., Measurement of forward W and Z boson production in association with jets in proton-proton collisions at √ s = 8 TeV, JHEP 05 (2016) 131, arXiv:1605.00951.

Figure 5: Measured differential bb- and cc-dijet cross-sections as a function of the (left) leading jet p T and (right) m jj on a linear scale. The error bars represent the total uncertainties, which are almost fully correlated across the bins. The next-to-leading-order predictions obtained with Madgraph5 aMC@NLO + Pythia are shown. The prediction uncertainty is dominated by the renormalisation and factorisation scale uncertainty. The leading-order prediction obtained with Pythia is also shown.
B Numerical results and covariance matrices
The numerical values of the measured differential bb- and cc-dijet cross-sections, cc/bb dijet cross-section ratios and their uncertainties are reported in Tables 4, 5, 6 and 7. The covariance matrices for the bb (cc) intervals and the cross-correlation matrix between bb and cc intervals are reported in Tables 8-19.

Table 16: Covariance matrix, corresponding to the total uncertainties, obtained between the leading jet p T intervals of the bb (horizontal) and cc (vertical) differential cross-sections. The unit of all the elements of the matrix is nb GeV/c 2 and the p T intervals are given in GeV/c.
"Physics"
] |
Integrin αvβ3 Signaling in Tumor-Induced Bone Disease
Tumor-induced bone disease is common among patients with advanced solid cancers, especially those with breast, prostate, and lung malignancies. The tendency of these cancers to metastasize to bone and induce bone destruction is, in part, due to alterations in integrin expression and signaling. Substantial evidence from preclinical studies shows that increased expression of integrin αvβ3 in tumor cells promotes the metastatic and bone-invasive phenotype. Integrin αvβ3 mediates cell adhesion to several extracellular matrix proteins in the bone microenvironment which is necessary for tumor cell colonization as well as the transmission of mechanical signals for tumor progression. This review will discuss the αvβ3 integrin receptor in the context of tumor-induced bone disease. Specifically, the focus will be the role of αvβ3 in modulating cancer metastasis to bone and tumor cell response to the bone microenvironment, including downstream signaling pathways that contribute to tumor-induced osteolysis. A better understanding of integrin dysregulation in cancer is critical to developing new therapeutics for the prevention and treatment of bone metastases.
Introduction
Advanced solid tumors frequently metastasize to bone, occurring in approximately 70-80% of patients with breast or prostate cancer, and in 30-40% of lung cancer patients [1]. Metastatic tumors disrupt normal bone remodeling to induce bone destruction by secreting factors (e.g., parathyroid hormone-related protein (PTHrP), interleukin-8, interleukin-11) that promote osteoclast formation. Subsequently, osteoclast-mediated bone resorption releases matrix-bound growth factors such as transforming growth factor beta (TGF-β), which further stimulate tumor growth and bone destruction [2,3]. Alternatively, metastatic tumors can secrete factors (e.g., bone morphogenetic proteins, insulin-like growth factors, endothelin-1) that promote osteoblast proliferation and differentiation, resulting in bone formation and sclerotic lesions [4]. This vicious cycle of tumor-induced bone disease (TIBD) results in severe comorbidities including extreme bone pain, spinal cord compression, hypercalcemia, and pathological fractures that significantly decrease patient quality of life and increase mortality [5][6][7]. Numerous preclinical studies have shown that the expression of specific integrin heterodimers, and their downstream signaling pathways, are perturbed in cancers that metastasize to bone. Most notably, integrin αvβ3 is upregulated in bone-metastatic tumor cells as well as multiple myeloma cells, and has been implicated in the progression of TIBD [8][9][10]. Interestingly, while integrin αvβ3 is also expressed in primary bone cancers such as osteosarcoma and chondrosarcoma, high αvβ3 expression has primarily been shown to promote metastasis of these tumors to the lung [11,12]. Hence, αvβ3 is a promising therapeutic target against bone metastases and the mechanisms by which it mediates the pathogenesis of secondary bone cancers and multiple myeloma are an area of extensive study [13]. This review will discuss integrin αvβ3 in the context of metastatic cancers in bone, particularly how αvβ3 modulates tumor cell response to the bone microenvironment as well as downstream signaling pathways that promote tumor-induced bone destruction.
The Biology of Integrin αvβ3
Integrin αvβ3 is a heterodimeric transmembrane glycoprotein that mediates cell adhesion to the extracellular matrix (ECM) through recognition of conserved arginine-glycine-aspartic acid (RGD) motifs in various ligands including osteopontin, vitronectin, and fibronectin [14]. Like other integrins, αvβ3 acts as a bidirectional signaling molecule. During "inside-out" signaling, adaptor proteins talin and kindlin bind the cytoplasmic tail of the β3 subunit, which not only links the integrin to the actin cytoskeleton but also causes conformational changes that increase its affinity for extracellular ligands [15,16]. In turn, ligation of activated αvβ3 triggers integrin clustering at the plasma membrane and recruitment of additional focal adhesion proteins (e.g., FAK, SFKs, paxillin, vinculin) which are important for actin cytoskeletal assembly as well as signal transduction ("outside-in" signaling) [17,18]. Integrin αvβ3 signaling is also modulated by lateral associations with growth factor receptors such as epidermal growth factor receptor (EGFR) [19] and TGF-β receptor II (TGFβRII) [20], and there is significant crosstalk between the downstream pathways (e.g., Ras-MEK-MAPK, PI3K-Akt, RhoA-ROCK) regulating cell migration, proliferation, and survival [21,22]. With respect to normal bone physiology, αvβ3 plays an important role in osteoclast-mediated bone resorption [23,24], angiogenesis [25,26], and phagocytosis of apoptotic cells [27].
Integrin αvβ3 Is Upregulated in Cancers that Metastasize to Bone
Metastasis is a multi-step process whereby cancer cells detach from the primary tumor, locally invade the surrounding tissue, transit through the vasculature or lymphatics, and colonize distant sites. Each stage of the metastatic cascade requires the activity of many different cell adhesion molecules, including integrins. Although several integrin heterodimers have been implicated in tumor cell interactions with the bone microenvironment (e.g., α2β1, α4β1, α5β1) [28], αvβ3 has been identified as a critical integrin for bone metastasis. Previous investigations have shown that the expression of integrin αvβ3 is increased in various bone-metastatic tumors such as breast, lung, and renal cancer compared to normal tissues [29]. One notable early study also demonstrated by immunohistochemistry that bone-residing metastases from breast cancer patients expressed higher levels of integrin αvβ3 compared to their respective primary tumors [30]. Collectively, these findings emphasize the importance of integrin αvβ3 in bone metastasis.
Another study illustrated that bone-metastatic subclones of a parental cancer cell line constitutively overexpressed integrin αvβ3 [31]. Specifically, a bone-tropic human breast cancer cell line (B02) was first established by repeated in vivo passages during which MDA-MB-231 breast carcinoma cells were injected into the left ventricle of the heart of nude mice and isolated from bone metastases [32]. The expression of various integrin heterodimers in these B02 cells was then assessed by immunoblotting and flow cytometry [31]. Results showed that integrin αvβ3 was overexpressed in B02 cells compared to the parental MDA-MB-231 cells while the cell surface expression of other integrins was not significantly different between the two cell lines.
In a more recent report, de novo expression of integrin αvβ3 in tumor cells that typically metastasize to the lungs was sufficient to promote homing to bone [33]. First, αvβ3 was exogenously expressed in the 66cl4 mouse mammary carcinoma cell line (66cl4beta3) and injected into the mammary fat pad of Balb/c mice. The 66cl4beta3-tumor bearing mice had significantly higher metastatic burden in the spine (20-fold increase) compared to mice that were inoculated with control 66cl4 cells. Spontaneous metastasis of 66cl4beta3 tumors to the long bones, particularly the femur, was also observed but these metastases were not detected in mice injected with control 66cl4 cells. Furthermore, several studies have shown that expression of functionally inactive αvβ3 mutants or treatment with αvβ3 antagonists significantly reduced the ability of tumor cells to colonize bone [9,31,34]. Taken together, these data demonstrate that integrin αvβ3 contributes to the osteotropism of metastatic cancer cells.
Expression of Tumor-Specific αvβ3 Promotes Bone Destruction
It is well-established that metastatic cancers induce osteoclastogenesis to initiate bone resorption, which facilitates tumor expansion in this metastatic niche [2,3,35]. Evidence from one preclinical study showed an increased number of osteoclasts adjacent to bone-residing tumors that overexpressed integrin αvβ3 [33]. In a previously described study, bone-metastatic human breast cancer cells that constitutively overexpressed αvβ3 (B02) induced significantly larger and more numerous osteolytic lesions in animals compared to the parental MDA-MB-231 cells from which they were derived [31]. In a later study by the same group, human MDA-MB-231 breast cancer cells were stably transfected to overexpress αvβ3 and subsequently injected into the tail vein of nude mice [36]. Mice bearing αvβ3-overexpressing tumors had significantly more bone destruction (2-fold increase) compared to mice inoculated with mock-transfected cells. Furthermore, treatment with the αvβ3 inhibitor PSK1404 significantly reduced the incidence of osteolysis in mice with αvβ3-overexpressing tumors. Interestingly, prostate cancer cells lacking integrin αvβ3 expression promote bone resorption while αvβ3-expressing prostate cancer cells stimulate bone formation, thus illustrating the role of αvβ3 in the development of osteoblastic lesions [9].
The molecular mechanisms by which tumor-specific αvβ3 promotes osteolysis are still being explored, but prior studies have shown that αvβ3 signaling resulted in the nuclear localization of transcription factors such as Runx2, which upregulated matrix metalloproteinases (e.g., MMP-9, MMP-13) and soluble receptor activator of NF-κB ligand (RANKL) to aid in bone matrix dissolution as well as osteoclast recruitment, differentiation, and function [37,38]. More importantly, integrin αvβ3 can augment TGF-β signaling [20] which has been shown to stimulate the expression of PTHrP by tumor cells and osteoblast expression of RANKL, thereby promoting osteoclast-mediated bone destruction [2,39]. In summary, these studies illustrate that increased αvβ3 expression in metastatic cancer cells contributes to the pathophysiology of tumor-induced bone destruction.
Integrin αvβ3 Modulates Tumor Response to the Rigid Bone Matrix
Over the past few decades, the ECM has been increasingly recognized as an important regulator of cell behavior and gene expression. For instance, matrix stiffness is increased in fibrotic soft tissues and has been linked to the malignant transformation of epithelial cells [40,41]. Matrix rigidity also stimulates integrin clustering, focal adhesion assembly, and RhoA-ROCK-dependent actomyosin contractility that can induce changes in gene expression. Mineralized bone is unique in that it has an elastic modulus ranging from 1.7 × 10^10 to 2.9 × 10^10 Pa, which is orders of magnitude more rigid than soft tissues (10^2 to 10^6 Pa) [42,43]. One study explored the effects of bone matrix rigidity on metastatic tumors by culturing osteolytic MDA-MB-231 breast cancer cells and non-osteolytic MCF-7 cells on rigid bone-like substrates [44]. MDA-MB-231 cells significantly upregulated their expression of PTHrP (2.5-fold increase) and other genes involved in TIBD in response to substrate stiffness while MCF-7 cells showed no difference in PTHrP expression. Although tumor-specific integrins were not investigated, strong evidence indicated that the effects of substrate rigidity on PTHrP expression were mediated by mechanically transduced signals, particularly through activation of ROCK.
The mechanism by which matrix rigidity mediates osteolytic gene expression in metastatic tumors was further elucidated in a more recent study [45]. Specifically, metastatic breast (MDA-MB-231), prostate (PC-3), and lung (RWGT2) cancer cells cultured on bone-mimetic rigid substrates had increased expression of both integrin αvβ3 and PTHrP compared to cells cultured on more compliant substrates. Subsequently, fluorescence resonance energy transfer and co-immunoprecipitation assays were performed to investigate whether αvβ3, in addition to TGF-β, was regulating PTHrP expression.
Results showed that colocalization of integrin αvβ3 and TGFβRII was significantly increased in tumor cells cultured on rigid substrates. The authors proceeded to demonstrate that rigidity-stimulated clustering of αvβ3 and TGFβRII activates Src which phosphorylates TGFβRII to induce p38 MAPK signaling and PTHrP expression. Inhibition of integrin αvβ3 in MDA-MB-231 cells using either an shRNA or the monoclonal antibody LM609 significantly decreased PTHrP expression. Furthermore, mice injected with MDA-MB-231 cells stably expressing shRNA against αvβ3 had reduced bone destruction. Collectively, these data indicate that crosstalk between integrin αvβ3 and TGFβ signaling modulates tumor cell response to the rigid bone microenvironment and promotes the transition of tumor cells to a bone-destructive phenotype.
Targeting Integrin αvβ3-Expressing Tumors in Bone
Currently, the standard of care for patients with TIBD consists of drugs that interfere with osteoclast-mediated bone resorption such as bisphosphonates [46] and RANKL inhibitors [47]. Clinical trials have demonstrated that these drugs are efficacious in reducing the frequency of skeletal-related events (SREs) (e.g., pathologic fractures, spinal cord compression, hypercalcemia) in patients with bone metastases [46]. However, there remains a need for therapies that directly target tumor cells residing in bone. Integrin αvβ3 is a promising therapeutic target for TIBD due to its high expression in metastatic tumors, angiogenic cells, and osteoclasts [48]; thus, αvβ3 antagonists could potentially disrupt multiple aspects of disease progression. Substantial evidence from preclinical investigations shows that treatment with integrin αvβ3-targeting peptides (e.g., ATN-161, S247, cilengitide), non-peptide small molecules (e.g., PSK1404), or monoclonal antibodies (e.g., LM609) significantly reduces tumor growth and osteolysis in a variety of cancer types [34,36,45,49].
Several αvβ3-targeting drug candidates have advanced to clinical trials for the treatment of osteoporosis and cancer. The RGD-mimetic cyclic peptide cilengitide was first developed for treatment of glioblastoma multiforme [50,51] but has been investigated for use in patients with advanced solid tumors including prostate cancer, non-small cell lung cancer, and squamous cell carcinoma. The humanized monoclonal antibody etaracizumab was also in clinical trials for prostate cancer, ovarian cancer, and metastatic melanoma [52]. More recently, the small molecule GLPG0187 was evaluated for its effects in patients with progressive glioma and other advanced solid malignancies [53]. Despite success in early clinical trials, many of these therapies did not produce clinically relevant outcomes compared to standard chemoradiotherapy; however, few studies specifically targeted cancer patients with bone metastases. To evaluate the efficacy of novel or existing αvβ3 antagonists against bone metastases, future trials will need to be more inclusive of patients with TIBD.
Concluding Remarks
Patients with advanced solid cancers frequently develop TIBD which involves growth of metastatic tumors in bone as well as osteoclast-mediated bone destruction. Despite palliative treatments, TIBD remains a highly debilitating disease for many cancer patients. Current therapies focus on inhibiting osteoclast-mediated bone resorption to reduce the risk of SREs, but there is a compelling need for therapies directly targeting metastatic tumor cells in bone. Despite the failure of existing drugs against advanced soft tissue tumors in clinical trials, integrin αvβ3 may be a promising therapeutic target for patients with TIBD as it is highly expressed in several bone-metastatic tumors including breast, prostate, and lung cancer. Preclinical studies have also demonstrated that the aberrant expression of tumor-specific αvβ3 promotes metastasis to bone, thereby increasing skeletal tumor burden and osteolysis. Mechanistically, integrin αvβ3 has been shown to mediate tumor cell response to the rigid bone microenvironment, which results in the upregulation of genes associated with bone destruction (Figure 1). Still, the exact mechanisms of integrin αvβ3 regulation in TIBD are not fully understood and the signaling pathways that are altered by changes in αvβ3 expression will need to be further explored in order to identify potential therapeutic targets. It is also important to note that because integrin αvβ3 is expressed by osteoclasts, proliferating endothelial cells, and certain immune cell populations, therapies that target αvβ3 may affect multiple aspects of TIBD in addition to bone resorption, including angiogenesis and inflammatory immune responses. Future studies will need to examine, in greater detail, the impact of integrin αvβ3 suppression on the tumor-bone microenvironment. A better understanding of integrin dysregulation in cancer and the mechanisms by which tumors respond to the bone microenvironment is crucial in order to develop novel therapeutics for the treatment of bone metastases.
Figure 1.
Expression of integrin αvβ3 promotes tumor growth and metastasis to bone. In the rigid bone microenvironment, αvβ3 interacts with TGFβRII to induce the expression of osteolytic genes such as PTHrP to stimulate osteoclast-mediated bone destruction.
Conflicts of Interest:
The authors declare no conflict of interest.
Can We Do More with Less? Analyzing the Organization of Flexibility of Space and Infrastructure at UDCs: A Case Study for Food Center Amsterdam
Background: How can flexible applications of the space and infrastructure of urban distribution centers (UDCs) be organized to help lower demands on space and infrastructure in cities? The application of flexible use of space and infrastructure can improve the efficiency of a UDC, but the challenge lies in the organization of the application of flexibility. Methods: The goal of this research was to identify how flexibility can be organized to impact overall societal benefits for the stakeholders in UDCs. This explorative and qualitative research was applied to the case of Food Center Amsterdam. Results: The results show that stakeholders have a limited understanding of the potential that flexibility can offer; that there is a need for an independent organizing capability and responsibility for collaboration on flexibility; and that a clear way to divide costs, benefits, risks, and opportunities in relation to stakeholder interests is required. Conclusions: Overall, flexibility shows potential to improve the efficient use of infrastructure and space. Further research avenues include the initiation of an organizing capability and distribution method for costs, benefits, risks, and opportunities between stakeholders. The remaining question is, can we get this organized in order to do more with less?
Introduction
The continuous "battle for space" between housing, businesses, public space, etc. in cities puts pressure on existing and new to develop space.This, combined with growing demands for space [1] and infrastructure (such as charging infrastructure for electric vehicles) [2] and developments in the accessibility of cities due to vehicle restrictions in central areas caused by zero-emission zones [3,4], limited time windows for delivery [5], and car-free zones [6], introduce new preconditions for logistics in cities. Urban distribution centers (UDCs), as part of the logistics in cities, require space for their building, operations, parking, etc.With the pressure for space increasing and new preconditions arising, this emphasizes the need for efficient use of the space of UDCs, in order to better deal with the overall battle for space in cities.
A possible way to improve the efficiency of space for UDCs is to apply flexibility in their development and use. Flexibility is defined as "the ability to be easily modified" [7]. It can exist in different forms, such as flexibility of physical infrastructure, management, stakeholders, and goals [8]. Furthermore, flexibility contains elements such as resilience, adaptability, and robustness [9]. This shows that both the form of flexibility and the applied elements can differ, indicating the broadness of the topic of flexibility. Examples of flexibility applications include the shared use of charging infrastructure of electric buses by logistic

The literature review focused on the current knowledge about the organization of forms of flexibility for UDCs. The literature was collected using (1) a search on Google Scholar and Scopus for (combinations of) the words "Urban/city distribution/consolidation centers/hubs", "flexibility", "infrastructure/space" and "organizing ability/cooperation/collaboration/stakeholder"; (2) references in the papers found; and (3) authors and papers as indicated by experts. Papers were subsequently selected based on their relevance in regard to the topics. The review was structured by starting with a review on flexibility and subsequently on collaboration between stakeholders on UDCs. This was followed by a framework being derived from the literature review.
Literature Review of Flexibility and Collaboration between Stakeholders for UDCs

Forms of Flexibility of Space and Infrastructure
Flexibility of space and infrastructure exists in a variety of forms, including flexibility in the physical infrastructure (and space), management of infrastructure, changing goals for space and infrastructure over time, and stakeholder involvement, as indicated for passenger transport hubs [8]. From the perspective of UDCs, a variety of literature related to specific forms of flexibility exists for logistic hubs, including shared space in UDCs [20] and automation in warehousing [21]. These applications of flexibility entail one or several forms of flexibility but lack an overview of the forms of flexibility for UDCs. To fill this gap in the overall view, the forms as identified by Pennings et al. were further applied [8].
In regard to the application of flexibility in collaborations, limited knowledge on the topic of flexibility and its application exists among stakeholders [8,9,22]. Furthermore, challenges in the integration of flexibility in projects limit its actual application [23]. This hampers its application and shows a gap in the understanding of what flexibility can bring to a project and how it can be applied.
Although the application of flexibility is seen as showing potential for lowering overall societal costs for passenger transport hubs, the challenge lies in the valuation of the added value of flexibility [14]. For UDCs, which similarly to passenger transport hubs are also battling for space, the question is whether there is also added value in the application of flexibility. Decision-making on whether or not to apply flexibility in projects is hampered by uncertainty about life cycle costs [9], the risk of entrenchment [24], and the costs (both financial and in time) in applying flexibility [23]. These elements indicate a gap in the valuation of flexibility and thereby the challenge in the trade-off of its application in projects.
To conclude, the literature shows gaps in the overview of what forms of flexibility exist for UDCs, the limited understanding of flexibility by the relevant stakeholders, and the challenges in the valuation of flexibility for UDCs.
Collaboration on UDCs
Collaboration in the logistic sector already exists in many forms, in order to improve efficiency by sharing available resources (such as warehouse space, vehicle load space, and personnel). This can bring financial, environmental, and social benefits, such as lower costs, less congestion, and lower CO2 emissions [25,26]. An important aspect of collaboration is that it should bring stakeholders extra profits, greater than the value when a stakeholder acts alone [26]. For UDCs, the literature shows that these can offer overall benefits compared to situations where every party works for themselves [13,15], which indicates a potential driver of collaboration. At the same time, limited examples of successful multi-user UDCs exist, which in turn raises the question of to what extent multi-user UDCs are successful in practical application, or whether acting alone provides more benefits. As seen in examples of operating UDCs [13,27], there is a strong dependency on government support and subsidies for these UDCs to continue operating. With growing demands on overall delivery and service levels, the expenses of creating and maintaining logistic facilities in urban areas can be overcome [28], which indicates the potential business case for UDCs, especially in the changing landscape of preconditions for logistics in urban areas. Overall, this shows potential for collaboration on UDCs. It shows that, in theory, much potential is seen, but at the same time the limited examples of successfully operating UDCs indicate that thresholds exist for the development of UDCs.
An essential element in order to make collaboration work is a viable value case (from a combined financial, environmental, and social perspective) and a working business case (from a purely financial perspective). The challenge in sharing (parts of) a logistic chain is that the value proposition is based on collective savings, instead of higher customer prices, which means that investment costs need to be compensated by operational efficiency and positive societal results that do not directly impact the financial benefits [29]. This highlights the challenge in satisfying the financial side as well as the environmental and social side. The challenge with the environmental and social parts is to find a willing stakeholder (e.g., local authorities) to pay for these benefits [29]. Different methods such as a business model canvas and business model analysis can help stakeholders in their assessment of decision-making [29,30] and help show whether benefits are financial, environmental, and/or social. Overall, a business model analysis can help to clearly demonstrate the financial, environmental, and social potential of solutions to each stakeholder, but challenges exist in finding a suitable and willing combination of stakeholders to form a beneficial combination. This includes the question of to what level governments and other stakeholders are willing to pay for social benefits while commercial parties can still make a profit. Furthermore, collaboration requires long-term commitment and aligned information and communication between partners [31], which might be in contrast to the short-term needs and agility a stakeholder requires. Overall, these challenges show the need for an understanding of the focus and willingness of stakeholders to invest in these different types of benefits and how suitable solutions can be found to overcome these challenges.
The literature shows a number of challenges for the development of collaboration on UDCs. These include the diversity of stakeholders and their objectives, and potential conflicting and common goals [16,32-35]; the conflicting interests of stakeholders [16,36]; lacking collaboration, the involvement of stakeholders and gaining stakeholder support [15,37]; the difference between the level of maturity of stakeholders for sharing, liability, insurance, transparency, and regulation frameworks [22]; the undefined overall problem owner [32]; the required neutrality of the person managing the process [37]; the timing of implementation [13]; the dispersed costs and benefits [38]; and the division of these costs and benefits between the stakeholders in collaboration [32]. Given the variety of possible challenges for stakeholders in collaborating, the question is which challenge(s) are the main issues hampering collaboration on UDCs and how (and whether) these can be overcome from each stakeholder's perspective. This highlights the importance of investigating stakeholders' interests and needs.
The literature shows possible solutions in a dedicated approach to identifying the value proposition(s) [31], as well as the setup of an agreement between the partners on how to fairly allocate profits gained from the collaboration between stakeholders [26]. These solutions require significant up-front knowledge of unique local situations, which highlights the effort required to make these work. This furthermore requires combining physical, financial, and information flows [13], which in turn introduces the challenge of the willingness of potential competitors to share information. As can be seen in other sectors, for example the construction sector, solutions lie in the authorization of an independent authority to facilitate the collaboration process, acknowledging interdependencies between stakeholders (which requires a holistic view), and creating awareness between stakeholders [39]. Overall, this shows solutions to the challenges in setting up and managing a collaboration, and highlights the need for different parties to make trade-offs. What is of interest here is what the different stakeholders see as possible solutions to their (and others') challenges.
For UDCs, the stakeholders involved can include municipalities, city regions, logistic service providers, retailers, user associations, the UDC management, and wholesalers [16,33]. At the same time, the specific stakeholders are case-dependent. Cities can play a key role as infrastructure providers, as coordinators for stakeholders to work together for efficient logistics [40], and in allocating costs and benefits [38]. This shows the puzzle of aligning stakeholders with common goals and bridging conflicts of interest to come to collaboration. Overall, this shows the importance of a full understanding of stakeholder involvement, interests, and drivers, in order to set up a successful collaboration, and the potential role of a city/government.
A variety of stakeholder management methods exist, based on topics such as engagement, decision-making, relationship-management communication, and innovation [41,42]. In regard to flexibility in UDCs in this research, the focus is on the stakeholder approach, since this contributes to the active integration of the interests of stakeholders [43], as indicated by the challenge in the development of collaborations. From the point of view of individual interests versus collective maximum value creation [44], the setting up of stakeholder governance (central to shared) is taken into account, to build on the understanding of the required organizational capability and the preferences of stakeholders.
Overall, this shows the importance of building an understanding of the ways in which collaboration between different stakeholders can be set up for UDCs. A combination of challenges and solutions can possibly be made into a positive business and value case. This requires clear insights into the interests of the different stakeholders involved.
A Framework for the Application of Flexibility and Its Added Value for UDCs
Based on elements from the literature review, a framework was developed. This framework, as shown in Figure 1, indicates the relations between the assets, stakeholders, and the financial benefits, based on the current situation (approached individually per stakeholder). It furthermore shows the approach in which forms of flexibility are applied in a collaboration between stakeholders for UDCs, with a subsequent influence on the financial, environmental, and social benefits. This shows the starting point for the application of flexibility, with the different forms of flexibility, which in turn sets a basis for a collaboration, with subsequent added societal value.
Method
This research is aimed at giving a qualitative indication of the concept of flexibility and potential approaches to incorporating forms of flexibility into infrastructure development and use in logistic hubs. A case study is applied, in order to build an in-depth understanding of the processes [45]. This can help analyze what gaps exist and give insights into important aspects of the topic [46]. The information required for this qualitative approach was obtained through interviews with relevant stakeholders. This approach was chosen since it can gather detailed information from different stakeholder perspectives.
The data collection consisted of four steps: (1) the determination of the relevant stakeholder types for the case, (2) the determination of suitable parties and roles to interview, (3) the conducting of the interviews, and (4) the analysis of the output of the interviews. This analysis consisted of (i) an in-depth thematic analysis of the collected output per interview and the identification of themes, (ii) the determination of common themes between the stakeholders and types of stakeholders, (iii) the identification of similarities and differences between stakeholders' perspectives per theme, and (iv) the processing of the information into the results, based on the structure of the framework.
The scope of this research was focused on a single urban hub location with a physical area as a common asset. Food Center Amsterdam (FCA) was chosen as the case study, since (1) it is an existing UDC in an urban area that is under pressure from the demand for space, (2) it has clear transport and data flows for specific products (perishable products), (3) its users make shared use of limited space, and (4) there is easy access to stakeholders for research purposes.
The case study was based on a set of representative interviews with stakeholders with specific knowledge and experience. To decide on the number of stakeholders, two criteria were used: (1) having at least two interviewees per stakeholder type (as indicated in Figure 1), and (2) reaching saturation in input from the interviews. In total, 17 people were interviewed during the period January-April 2023. Their roles and organizations are indicated in Appendix A. The interviews consisted of an explanation of the case study, the involved stakeholders, and the introduction of the forms of flexibility. All stakeholders were asked to answer the questions from their single stakeholder's perspective.
Based on the framework, the interview questions were classified as follows:
1. What is the understanding of flexibility and its forms?
2. What is the perception of the potential added value of forms of flexibility, both in general and case-specific?
3. What interests do stakeholders have in relation to flexibility applications?
4. In what way can collaboration be organized to achieve the potential value of flexibility?
[47,48]. The restructuring means that part of the area is being redeveloped as a residential area. FCA is located within an area where no extra electricity grid capacity is available, which means that they cannot obtain additional grid capacity on top of their current grid capacity [49]. Overall, it represents an existing urban UDC under redevelopment with certain preconditions. The focus of the research was on the sharing of infrastructure and space, and not the redevelopment per se, although this does offer new opportunities for applying flexibility.
The Physical Object Aspects of FCA
Regarding the physical development of the business area, this means that the available space will be reduced by approximately 35%, but a similar capacity of business is planned. This implies a more compact business area, with new and up-to-date infrastructure. Figure 2 shows the division of the area into residential and business areas. Part of the existing buildings in this area will be demolished and redeveloped, and part will be retained [50]. Overall, this indicates a growing pressure on available and new business infrastructure and space, all as part of the battle for space in Amsterdam.
The Stakeholders and Their Relations at FCA
The relevant stakeholders for FCA and their responsibilities are indicated in Table 1, and the relations between these stakeholders are indicated in Figure 3.
The relations between the stakeholders indicate different stakeholder positions, which in turn lead to different interests per type of stakeholder. These different interests can lead to different preferences in the way collaboration is organized and the desired outcome of the collaboration. These stakeholder interests and preferences on how the collaboration can be organized were incorporated into the question list for the interviews.
Table 1. Overview of the stakeholders and their responsibilities.

Wholesalers (Users of FCA with their own physical location; companies): This includes major (e.g., BidFood) and smaller companies (e.g., companies in specific perishable goods). These companies are all members of the cooperation. The wholesalers are mainly business-case driven.

Cooperation (Area management: includes both Vereniging Herstructurering and Coöperatie FCA): The cooperation represents the interests of all (member) companies in the FCA area and is responsible for the management of the area. It coordinates collaboration on, e.g., collective garbage disposal, collective energy approaches, and charging infrastructure. Vereniging Herstructurering exists to support the wholesalers with the process of the restructuring of the area.

Developer (consortium Marktkwartier): Developers are responsible for the redevelopment of the area. The consortium Marktkwartier holds the concession. The consortium is a joint venture of Ballast Nedam Development BV and VolkerWessels Vastgoed BV.

Municipality (Gemeente Amsterdam): The municipality is responsible for public space, land, and transport infrastructure, and is the commissioning party for the Marktkwartier (concession). The municipality is the owner of the land of FCA, and part is under leasehold ('erfpacht') [48]. Relevant departments of the municipality include Verkeer en Openbare Ruimte, Ruimte en Duurzaamheid, Economische Zaken, Grond en Ontwikkeling, and Deelnemingen. These departments can have different interests; therefore, the municipality can be seen as a number of stakeholders. The focus of the municipality has both societal and financial perspectives.
Results and Analysis of Organizing Collaboration on Flexibility
In the part below, the main results of the interviews are elaborated and discussed per topic, based on the structure of the framework shown in Figure 1. This starts with the forms of flexibility, followed by the perceived added value of forms of flexibility, the alignment of stakeholders, and the organization of flexibility between stakeholders.
There Is a Limited Understanding of What Flexibility Is and When to Apply It
When asked what applications of flexibility are seen, interviewees indicated shared parking, collective cooling and heating, collective solar panels and energy storage, shared charging infrastructure, and shared space and storage. Based on these four forms of flexibility [8], most interviewees indicated that they see flexibility in the physical domain, and to a lesser extent over time and in management. Flexibility of actor involvement was hardly mentioned, and when it was mentioned, it was seen in strong relation to management flexibility. This shows a limited application of the different forms of flexibility. This is not surprising per se, since these forms of flexibility are not generally applied. It does, however, show that, from this perspective of "unknown unknowns", much potential can exist in making the potential of different forms of flexibility known to the stakeholders involved. Furthermore, the interviewees indicated that path-dependencies and the increased impact of combined forms of flexibility can bring added value. Overall, this shows a lack in overall understanding and overview of the forms of flexibility, which can lead to a bias in decision-making [8,9,22]. With this limited knowledge on, and overview of all, options of flexibility, the application of flexibility is currently hampered. This indicates a need for a better understanding of flexibility by the stakeholders, in order to make appropriate trade-offs in flexibility.
The timing of the application of flexibility also has a strong influence on its effectiveness. The interviewees indicated that when initial plans, policies, and (long-term) contract requirements do not give sufficient space for flexibility in the initial stages of a development, this limits the options for flexibility in later stages of a project, where progressive insights into flexibility become visible and applicable. At the same time, uncertainty at the beginning of the project limits this application. This is in line with the literature [13] and indicates a chicken-and-egg situation. Overall, this shows the strong interlinkages between the different forms of flexibility and the limitations in application due to uncertainty and lack of clarity on where to start.
The Potential for a Value Case for Flexibility Is Seen, the Business Case Is More Difficult
Almost all interviewees saw a strong value case (a combination of social, environmental, and financial elements). These drivers were seen as lowering the overall space and infrastructure needed, lowering vehicle movements, and reducing the pressure on the electricity grid, as well as developing a possible showcase for the application of flexibility collaboration in hubs. Note that these drivers all look beyond the area of the UDC. This requires a clear overview of possible costs and benefits, which is currently lacking. As stated by the interviewees, "It is important to have a clear answer on why stakeholders should move from 'what's in it for you?' to 'what's in it for us all?'" and "we need to go from our focus on optimizing costs to optimizing benefits together".
For the business case, the interviewees indicated that they saw a business case to a lesser extent. As the main reason, they showed a limited understanding of the full potential added value of the application of flexibility, in line with the earlier findings in this research and the literature [14]. This does not mean that there is no positive business case, but it does indicate that an overview for developing one is missing. As drivers for the business case, interviewees saw reductions in costs for the use of infrastructure, storage and distribution, timely access to sufficient electricity grid capacity, and adding value to real estate and land. Challenges are identified in whether the business case of flexibility is more beneficial than other business cases (for example, it might be more beneficial for a real estate party to bring their own business models), in the balance of power between short-term profit and long-term interests, and in changing external priorities and needs over time (for example, when the concession for Marktkwartier was given, the issue around limited grid capacity was far less pressing). This requires a good understanding of costs, benefits, risks, opportunities, and the minimum required stake for each stakeholder in order to participate, in line with the existing literature [32]. This stakeholder mapping could help build a good understanding of each type of stakeholder interest (financial, environmental, and/or social) and thereby find combinations to collaboratively unlock the potential of flexibility.
Overall, the potential of the value case was seen, but that of a business case was seen to a lesser extent. This was partly due to limited insights into its actual added value, which in turn highlights the limited understanding of stakeholders about what flexibility can bring. In order to make this more visible, clarity is required before a project starts on what added value can be attained. This highlights a current lack in the approach towards projects.
A Challenge Lies in the Alignment of Stakeholders' Interests to Come to a Win-Win
In regard to stakeholders' own interests, these showed a range from purely financial to a combination of financial, environmental, and social. Wholesalers saw the continuity of their business as their main interest, which shows a strong financial drive. For the municipality, the interests were more from a social and environmental point of view. Developers indicated the need for a positive business case as their interest, together with the value of their real estate, which also indicates a strong financial drive. For the cooperation, the main interests were the representation of their members and the logistics system, as well as creating and maintaining a positive business climate. This makes this an indirect financial drive. This variety in interests between the types of stakeholders and the potential of conflicts of interest are in line with the literature [16,33,34]. This underlines the importance of building a full understanding of these positions, in order to address the mismatches and motivate stakeholders to collaborate.
In regard to motivating stakeholders to collaborate, a project developer put this as follows: "The biggest value is for stakeholders to understand their interest in adjusting their existing business cases to make use of the potential of flexibility". This understanding is further needed to better understand the trade-off between individual and collective flexibility for each stakeholder. The interviewees saw possible returns for collaboration in lower costs, since the costs would be shared between more parties and with quicker access to limited assets (such as charging infrastructure), but they would have to forfeit part of their own flexibility. As the main challenges for collaboration, interviewees indicated that each stakeholder focuses on their own interests, has a limited sense of urgency, and does not yet see the value of collective interests, as well as each party having their own beliefs and culture. Furthermore, the trade-off between individual versus collective flexibility was indicated as a challenge, since a stakeholder needs sufficient reassurance and benefits to make the decision towards collective flexibility. The question here is when the collective approach towards flexibility will be positive enough for stakeholders to overcome the risks. A challenge that needs to be overcome is that positive societal results do not directly impact financial benefits [29], therefore introducing a further challenge to make this puzzle fit. The understanding of this turning point is key to finding a balance in the system to realize a win-win situation for all. Overall, this shows a number of thresholds that should be overcome. This indicates the need for a distribution method for financial and societal costs and benefits between stakeholder types with different interests, to come to a win-win situation in the application of flexibility.
The Organization of Collaboration on Flexibility Requires Initiation, an Independent Authority, and a Fair Division of Costs and Benefits
When asked about the best way to organize collaboration on flexibility, the interviewees indicated several elements of importance. First, they indicated the importance of an initiator to start up the collaboration and an independent authority to protect the interests of all stakeholders and to develop and run this collaboration. For the initiator, the interviewees described this role as a stakeholder responsibility and a number of competencies a person executing this role should have. As the most logical party to do this, opinions differed between a governmental responsibility, the existing cooperation, and a combination of users. The arguments behind these choices all came from the financial and/or societal responsibilities of each of the parties. For the competency specifics, this required people who are intrinsically motivated and driven to achieve development or change. This was indicated as going further than solely having a role and responsibility. It indicated both the complexity in the fragmented stakeholder field, which is in line with the literature [8], and the specific demands stakeholders had for the actual person(s) executing this role. For the independent authority, the goal is to protect the interests of all stakeholders and to develop and run the collaboration. Some interviewees also indicated that this should be combined with a council consisting of stakeholders and independent experts, to protect the continuity of the collaboration and limit opportunism. This is in line with earlier findings for flexibility for passenger transport hubs [8] and further emphasizes the challenge of setting a clear scope for flexibility, as also indicated by the fragmented application of the literature to specific forms of flexibility [23]. The need for a neutral party to take charge is in line with [32,37] and highlights a currently missing role within the stakeholder landscape. Interviewee answers on which party should be in the lead varied between the cooperation and the municipality. Arguments for putting the municipality in charge were based on its broader societal and geographical focus. Arguments for the cooperation or single users were based on a positive business case as a driver and the need for the involvement of the members of the cooperation. A possible way to fill this role is by giving a mandate or a concession to a specialized and neutral third party.
Second, the interviewees indicated the importance of maintaining fairness regarding who is going to invest at what time and how costs, benefits, risks, and opportunities will be divided. In regard to setting up the allocation of costs and benefits, most interviewees indicated that this should be coordinated by the municipality, since it sets the rules and regulations, while the executing party could be a different party than the municipality. For the operation, the costs should be paid by the parties who make use of it. This requires that it should bring more benefit to use it than to approach this individually. This motivates the system to keep costs low. The challenge in the dispersed costs and benefits, and the division of these costs and benefits between stakeholders in collaboration, are in line with the literature [38] and the division of these factors [32]. For the allocation of benefits, the interviewees indicated that the benefits should be used to lower the costs of the service, to keep the service as attractive as possible. Part of the (initial) benefits could go to the party who set this up and took the risk. It was seen as important that the benefits returned back to the users. One way to achieve collective buy-in of flexibility would be to get the involved stakeholders to put skin in the game by investing in the initiation. However, this relies on a widening of the scope of responsibilities of the stakeholders involved, which of course would introduce its own challenges.
Furthermore, as other elements of importance, the interviewees indicated the importance of setting clear goal(s) together, clear agreement on the collective way of working, and clarity and commitment on the rules and regulations from the government. Overall, this is in line with the expectations from the literature [16,31,33,34,36,39]. This highlights the complexity of the puzzle to deal with the broad variety of preconditions, since it requires stakeholders to align their goals, a long-term commitment, and clarity from the government on rules and regulations.
Challenges for the feasibility of a collaboration were seen in the required number of parties in the collaboration. As questioned by one interviewee: "does flexibility need to fully involve all stakeholders or will a combination of a smaller number of relevant partners work better?" The interviewees indicated the risk of smaller collaborations forming between users, with possible consequences for other users, such as not receiving enough charging infrastructure in time for the application of the ZE-zone. To counter this fragmentation, several interviewees emphasized the importance of involving all stakeholders and that this could be handled through a collective approach. This would require the participation of each stakeholder.
Overall, the lack of full ownership and a clear collective goal limits the potential for organizing ownership to add more value. This shows the complexity of developing an organizing capability to set up a collaboration on the application of flexibility. It is recommended to build further understanding on how to initiate collaboration between stakeholders in collective value and business cases in complex multi-stakeholder environments in relation to flexibility.
Reflection on the Case and Its Added Value
Currently, limited flexibility is being applied in the FCA case. Although with the redevelopment there is much potential for further collaboration by sharing infrastructure and space, the question remains whether this will happen and to what extent. The way this will be organized is still unclear, although different options exist and some are partly in place. The case study showed a complex situation, due to the redevelopment of the area. In the interviews, it was already seen that, when flexibility is applied, this is expected to mix with other discussion points in relation to the redevelopment, thereby enlarging the discussion. The complexity of the issues facing users might overwhelm them, which leaves no or limited capacity to also initiate a collaboration focused on flexibility. An exception is where flexibility can help directly solve existing issues. This raises the question of whether the addition of flexibility will help or disturb the redevelopment process.
An added value of the case study is in giving a real-life picture of the application of flexibility in an environment with a wide variety of stakeholders, each with different interests and as potential competition for each other. This shows the potential struggle in aligning stakeholders in common interests and similar timelines, while at the same time proposing changes in their way of working. Although this case is not seen as the standard situation of UDCs, it does give an in-depth picture of the challenges ahead for applying flexibility to existing UDCs, with their own unique setup, challenges, and opportunities. One question that arose here is whether the benefits and opportunities of flexibility will outweigh its costs and risks. Flexibility is not a goal in itself but a means to an end, and it is important to keep this nuance in mind.
Since the case study was based on interviews, there was a potential weakness due to possible bias in the actual relationships between stakeholders [45]. In total, 17 interviews were conducted with a variety of stakeholders. The number of interviews was decided by having at least two representative stakeholders per stakeholder type and by achieving saturation of interviewee input. Although this number was seen as sufficient for explorative and qualitative research, it indicates limitations in the robustness of the case study. For validation of the research, it can be seen that many of the findings were topic-specific and in line with the existing literature. Given the context-specific and explorative nature of this research, this also indicates the limitation of the case study in extrapolating its findings to general cases.
Conclusions
The main question in this research was: how can flexibility be organized to impact the overall societal benefits for stakeholders in urban distribution centers? The research showed, for the case of Food Center Amsterdam, that this requires sufficient understanding by stakeholders of what flexibility is, in what forms it can be applied, and what added value it can bring. The main points of attention in making this work are seen in how to initiate a collective business and value case in complex multi-stakeholder environments, and the need for a distribution method for the financial and societal costs, benefits, risks, and opportunities between stakeholders with different interests, in order to come to a win-win situation in the application of flexibility. This requires a party to initiate and take ownership of the overall opportunity of applying flexibility, which in turn highlights the need for a new and neutral organizing capability. The case study showed a strong resemblance to specific findings in regard to collaboration between stakeholders [16,33,34]. It showed that, as long as an overall responsibility in facilitating the application of flexibility is not taken by a party, the solutions remain limited to known applications, leaving a vast area of opportunities to make better use of space and infrastructure untapped. This, in turn, can hamper relevant transitions, such as the one toward zero-emission vehicles, and negatively impact the business and value cases of the different stakeholders involved. This shows the novelty of this research, since it highlights the potential of applying flexibility, but at the same time indicates the limitations in the current approach towards space and infrastructure development and use for a UDC. Although this explorative research focused on a single case, its findings are relevant for the application of flexibility to UDCs in general. This leaves the questions of to what level the application of flexibility can be brought into practice for UDCs in general, and who is going to take the lead to do this. With the growing pressure on space and infrastructure, it is clear that flexibility can help us to do more with less.
Figure 1. Framework for the application of flexibility in UDCs.
Figure 2. The zoning plan for the FCA viewed from the west. The current area is encompassed by the dotted line, the future business area is in blue, and the future residential area is in orange. Adapted from Google Earth (Map data: Google © 2019).
Figure 3. Simplified relations between the stakeholders.
4. The Study Object: Food Center Amsterdam
4.1. The Context of Food Center Amsterdam
Food Center Amsterdam (FCA) is a physically enclosed (gated) wholesale market for food located in Amsterdam West. It houses approximately 70 companies, supplying retailers, the catering industry, supermarkets, and more. It was established in 1934, further developed over time, and is currently being restructured
| 9,977.8 | 2023-12-01T00:00:00.000 | [ "Environmental Science", "Business", "Engineering" ] |
In-Line Detection of Clinical Mastitis by Identifying Clots in Milk Using Images and a Neural Network Approach
Simple Summary The study focused on improving the detection of clinical bovine mastitis, the inflammation of the udder in cows as a response to intramammary infection, which can be identified by the presence of clots in the milk. Currently, automated milking systems do not detect this important disease very accurately. To address this, we developed a clots detection program using a neural network. This neural network was trained to recognize clots in milk samples from dairy cows by using a large number of pictures of milk filter socks, some with and some without clots. These pictures were divided into different sets for training, validating, and testing the program, respectively. The settings of the neural network were optimized using a genetic algorithm. The program’s interpretations were explained using a method called integrated gradients. The program was found to be 100% accurate in identifying clots in the test pictures. This suggests that the method could be very useful for automatically checking for clinical mastitis on dairy farms, although further field validation through integration into the existing systems is needed. Abstract Automated milking systems (AMSs) already incorporate a variety of milk monitoring and sensing equipment, but the sensitivity, specificity, and positive predictive value of clinical mastitis (CM) detection remain low. A typical symptom of CM is the presence of clots in the milk during fore-stripping. The objective of this study was the development and evaluation of a deep learning model with image recognition capabilities, specifically a convolutional neural network (NN), capable of detecting such clots on pictures of the milk filter socks of the milking system, after the phase in which the first streams of milk have been discarded. In total, 696 pictures were taken with clots and 586 pictures without. These were randomly divided into 60/20/20 training, validation, and testing datasets, respectively, for the training and validation of the NN. A convolutional NN with residual connections was trained, and the hyperparameters were optimized based on the validation dataset using a genetic algorithm. The integrated gradients were calculated to explain the interpretation of the NN. The accuracy of the NN on the testing dataset was 100%. The integrated gradients showed that the NN identified the clots. Further field validation through integration into AMS is necessary, but the proposed deep learning method is very promising for the inline detection of CM on AMS farms.
Introduction
From an economic viewpoint, mastitis is one of the most important diseases in dairy cows [1][2][3][4][5], due to its effects on animal health and the subsequent losses in milk production, as well as the need to discard abnormal milk or milk from diseased cows (European Union Directive EC/853/2004 and US Food and Drug Administration Grade A pasteurized milk ordinance). Depending on the study, the cost of each clinical mastitis (CM) case varies between USD 65 and 930 [2][3][4]. The early detection of CM can reduce both the economic impact and the long-term impact on cow health and welfare [6,7].
Many dairy farms are transitioning to automated milking systems (AMSs), with around 38,000 units installed worldwide in 2017 [8]. When using AMSs, there are fewer opportunities for the farmer to detect CM in individual animals. The current AMS incorporates a variety of milk monitoring and sensing equipment, but the sensitivity and specificity of its CM detection capabilities remain relatively low, with most systems having a sensitivity between 47 and 90% and a specificity between 56 and 99% [9,10]. For reference, the International Standards Organization (ISO) describes a standard target of 90% sensitivity and 99% specificity for the detection of abnormal milk (ISO/FDIS (Final Draft International Standard) 20966 [11]), Annex C (Automatic Milking Installations-Requirements and Testing). Most of these sensor systems try to detect mastitis by measuring and analyzing indirect parameters, such as (but not limited to) electrical conductivity, somatic cell count, milk flow rate, changes in milk color, milk yield per hour or quarter, and cow activity [7,10,[12][13][14][15].
A typical symptom of CM is the presence of clots in the milk during pre-milking, which has been proposed as the gold standard for the detection of CM [16,17]. Therefore, we propose to use an in-line camera to detect such clots in the filter after the pre-milking phase. A similar sensor has been proposed in the past but was limited in its capabilities of detecting the clots on the filter, as it was developed to score the quality of the milk and needed to be adapted for instances of different detriments occurring in the milk [18]. Another study proposed to measure clot density in quarter milk samples, which could be useful in monitoring milk quality and clinical mastitis [19]. The researchers used in-line filters to collect quarter milk samples and visually scored the clot density, based on the coverage of the filter area. They showed that high scores clustered within certain cows and periods, suggesting a potential threshold for detecting abnormal milk. The objective of the present study was the development and evaluation of a neural network (NN) capable of detecting such clots on pictures of the filters of the milking system after the pre- and/or milking phase at the cow level.
Experimental Data
The data for this study were generated by adding debris (including straw, hay, manure, bedding material, mud, teat sealer, calcium, and/or flies) and/or clots from used milk filters of AMSs to milk, before passing this milk through a circular milk filter (Universal Hygia Favorit filters, Universal dairy equipment) mounted in a PVC tube. Debris and clots were collected from 40 filters, half of which had clots and half of which did not. These samples were gathered from multiple AMSs and various cows. A vacuum pump provided suction for pulling the milk through the filter. The filters were painted blue for better visualization of the clots. An iPhone 6s was mounted in the PVC pipe to take a photo with the flashlight after each pass of milk. In total, 696 pictures were taken from filters with clots, and 586 pictures from filters without clots.
Image Analysis
For the training dataset, the images without clots were randomly resampled using the built-in Python random.uniform function to obtain an even number of images with and without clots for balancing the NN weights. In total, 1676 images were used for training, validating, and testing. During the training of the NN, the images were augmented by random rotations, flipping, rescaling, zooming, and shearing (Table 1), using the Keras ImageDataGenerator function. To avoid the NN learning features from outside of the filter, e.g., milk spatters on the PVC pipe, the PVC pipe was removed from the picture using OpenCV v4.1. The image was then rescaled to 500 × 500 pixels as the NN input (Figure 1). A genetic algorithm was used to optimize the hyperparameters of the NN (Figure 2) based on the validation dataset. A non-dominated sorting genetic algorithm II (NSGA-II) was used, with a population size of 20 and 10 optimization generations, using the accuracy as the fitness value [20]. The training of each child NN of the algorithm was ended after 50 epochs or when the training stopped improving for three epochs. Optimization was used for the following hyperparameters: the number of filters, the width of convolution and subsampling for each convolutional layer, the number of neurons for each fully connected layer, L2 regularization, and the dropout used for training (Table 2).
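To make the pre-processing step concrete, the sketch below illustrates how the circular filter region could be isolated with OpenCV and how a Keras ImageDataGenerator could be configured for the augmentation step. It is an illustrative reconstruction only: the mask coordinates, the augmentation ranges, and the file name are placeholders, since the exact values used in the study (Table 1) are not reproduced here.

```python
import cv2
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def mask_and_rescale(image_bgr, center, radius, size=(500, 500)):
    """Black out everything outside the circular filter region, then rescale.

    `center` and `radius` describe the filter area in pixel coordinates and
    would need to be calibrated for the actual camera setup (placeholders here).
    """
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    cv2.circle(mask, center, radius, 255, thickness=-1)   # filled circle
    masked = cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
    return cv2.resize(masked, size, interpolation=cv2.INTER_AREA)

# Augmentation generator; the exact ranges come from Table 1 in the paper and
# are not reproduced here, so these values are placeholders.
augmenter = ImageDataGenerator(
    rotation_range=180,
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.1,
    shear_range=0.1,
    rescale=1.0 / 255.0,
)

if __name__ == "__main__":
    img = cv2.imread("filter_photo.jpg")                      # hypothetical input file
    prepped = mask_and_rescale(img, center=(2016, 1517), radius=1400)
    batch = np.expand_dims(prepped, axis=0).astype("float32")
    augmented = next(augmenter.flow(batch, batch_size=1))     # one augmented sample
```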
For training, the SoftMax cross entropy was used to calculate the loss, with the Adam optimizer and the default parameters at a learning rate of 0.0001 for updating the weights of the network [21]. The network was trained using parameters optimized by the genetic algorithm, with each epoch evaluated using a validation dataset. This process was repeated for 100 epochs, selecting the best network weights based on validation results, to avoid overfitting the training dataset. The testing dataset was employed for statistical analysis. The network was built in Keras with a TensorFlow v2.0.1 backend [22]. The batch size was set to 16. Afterwards, the integrated gradients of the NN were calculated and compared to a completely black baseline image to obtain an insight into the input-output behavior of the neural network. The attribution of the input pixels to the output labels was projected as a mask over the input image using the OpenCV toolbox (Figure 3).
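The following minimal Keras sketch shows the training setup described above (Adam optimizer with a learning rate of 0.0001, softmax cross-entropy loss, a batch size of 16, and retention of the best weights according to validation performance). The toy architecture is a stand-in only; the residual network and the hyperparameters selected by the genetic algorithm (Table 2) are not reproduced, and train_ds/val_ds are assumed placeholders for the prepared datasets.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_toy_classifier(input_shape=(500, 500, 3), num_classes=2):
    """Minimal convolutional stand-in; the paper's residual architecture and
    genetic-algorithm-selected hyperparameters (Table 2) are not reproduced."""
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(4)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(4)(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_toy_classifier()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # learning rate as in the text
    loss="categorical_crossentropy",                          # softmax cross entropy
    metrics=["accuracy"],
)

# Keep the weights with the best validation accuracy to limit overfitting.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_weights.h5", monitor="val_accuracy", save_best_only=True
)

# train_ds and val_ds are assumed to yield batches of 16 (image, one-hot label) pairs:
# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[checkpoint])
```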
Statistical Analysis
The dataset was randomly divided into a training subset containing 60% of the data, a validation dataset containing 20%, and a holdout subset containing 20%, which was not used for tuning the model; this resulted in 1006 training, 335 validation, and 335 holdout images. The following metrics were calculated on the holdout dataset: accuracy, positive and negative predictive values, specificity, and sensitivity.
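A minimal sketch of the split and of the reported metrics is given below. The 60/20/20 proportions follow the text; the use of scikit-learn's train_test_split and the fixed random seed are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_60_20_20(images, labels, seed=42):
    """Random 60/20/20 train/validation/holdout split, as described in the text."""
    x_train, x_tmp, y_train, y_tmp = train_test_split(
        images, labels, test_size=0.4, random_state=seed)
    x_val, x_test, y_val, y_test = train_test_split(
        x_tmp, y_tmp, test_size=0.5, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, PPV, and NPV from binary labels (1 = clots)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```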
Results
The accuracy, specificity, positive and negative predictive values, and sensitivity results of the NN on the testing dataset were 100% (Table 3). The integrated gradients showed that the NN identified the clots and accurately distinguished the clots from other materials, including straw, hay, manure, bedding material, mud, teat sealer, calcium, and flies.
Discussion
The accuracy, positive and negative predictive values, specificity, and sensitivity of the NN on the holdout dataset are a clear improvement in comparison with other milk-based CM detection methods. Since the proposed method can reliably detect clots in foremilk, this method could be a feasible approach to detect CM in AMSs. Since clots during foremilking are considered the gold standard for detecting CM, the current approach has the potential to be a more reliable CM detection implementation for AMSs, in comparison with current CM detection sensors, which try to detect CM with other parameters, including electrical conductivity, L-lactate dehydrogenase, milk color, and somatic cell counting [9,16]. Many different sensors and algorithms have been proposed for the detection of CM, but, thus far, none of the published CM detection methods achieved the ISO target of at least 80% sensitivity and 99% specificity [23].
One of the main benefits of the current approach is the extremely high accuracy. The main frustration of dairy farmers is the current high number of false alarms made by the available CM detection sensors in AMSs [24]. Due to the low prevalence of (severe) CM, the majority of alerts will indeed be false positives, leading to a potential underreporting of CM cases, as farmers may stop investigating all alerts [23][24][25]. With the proposed sensor, the detection and management of severe mastitis on AMS farms could be significantly improved, reducing the number of false positives and ensuring that all cases of severe CM are accurately identified and treated and that milk is separated. Even if the practical implementation of the current sensor would not have an accuracy/precision of 100%, an NN, as we have proposed, can be adapted in order to maximize the accuracy, by penalizing false positive results during the training process or by calculating the receiver operating characteristic curve and setting a manual threshold for the minimal required specificity [26]. An additional benefit of using an NN is its robustness to the presence of a variety of detriments (such as straw, manure, udder or tail hair, sawdust, sand, and the remainders of internal teat sealants) in the image, without the need to retrain the algorithm for every possible detriment. This is in contrast with the previously proposed work, in which the fuzzy logic algorithm had to be adapted to recognize the different detriments [18]. If environmental changes (e.g., a change in filter type) would overwhelm the robustness of the NN, the weights of the NN could be updated on-site using transfer learning to adapt to the new environment [27]. If the farmer receives multiple false positive results from the sensor, he could initiate an update of the algorithm remotely, based on the incorrectly classified images, without the need to re-engineer the algorithm. In addition, the proposed NN approach could also use an optional reference image to differentiate and track animals with varying mastitis cases. The algorithm identifies changes in clots by comparing new images with the reference image, which would not be possible with the fuzzy logic algorithm.
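As an illustration of the thresholding idea mentioned above, the sketch below selects a decision threshold from the receiver operating characteristic curve such that a minimum specificity is respected. The 0.99 specificity floor and the scikit-learn dependency are assumptions for illustration; the study itself did not require such a threshold, since its holdout performance was 100%.

```python
import numpy as np
from sklearn.metrics import roc_curve

def threshold_for_min_specificity(y_true, clot_probabilities, min_specificity=0.99):
    """Pick a decision threshold whose specificity meets the chosen floor.

    `clot_probabilities` are the softmax outputs for the 'clots' class; among all
    thresholds that satisfy the specificity floor, the one with the highest
    sensitivity is returned.
    """
    fpr, tpr, thresholds = roc_curve(y_true, clot_probabilities)
    specificity = 1.0 - fpr
    acceptable = specificity >= min_specificity
    if not np.any(acceptable):
        return None  # no threshold satisfies the requirement
    best = np.argmax(tpr * acceptable)  # highest sensitivity among acceptable thresholds
    return thresholds[best]
```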
If a 3D representation of the filter could be created, e.g., by adding a second camera for 3D stereovision, the NN could also be adapted to calculate the volume of the clots. Calculation of the animal's clot volume allows us to monitor diseased animals with CM over time and to estimate the severity of the disease and clinical recovery, and, hypothetically, even the likelihood of a bacteriological cure. For example, if the clot volume of a diseased animal is decreasing between consecutive milkings, the animal is likely to be recovering. If few clots are present, increasing milking frequency may suffice as treatment, reducing antibiotic use. On the other hand, treatment can be necessary when many clots in milk are detected. Monitoring the dynamics, as well as the gradual increase in the densities, of the clots on the filter over a period of time could also be a valuable tool to identify cows with chronic mastitis [19].
The integrated gradients showed that the NN identified the region of the clots as an input feature (Figure 3). Neural networks are notorious for having a "black box" character. It is difficult to attribute the prediction of an NN to its input features and, thus, to know which pixels of an image are responsible for the network picking a certain label [28]. The aim of explainable artificial intelligence (AI) is to understand the input-output behavior of NNs. One such explainable AI method is the integrated gradients approach, in which the attribution of each pixel is calculated by summing the gradients (the partial derivative with respect to each input variable, while all others are held constant) of the network at different points on the path between a baseline image (e.g., a black image) and the actual input image (e.g., the image of the filter with clots). If the PVC pipe had not been removed from the input images, the integrated gradients would have clearly shown that the NN had learned to recognize the different milk spatter patterns instead of recognizing the clots.
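A compact TensorFlow sketch of integrated gradients against an all-black baseline, as described above, is shown below. The number of interpolation steps and the trapezoidal approximation of the path integral are implementation choices for this illustration, not values taken from the study.

```python
import tensorflow as tf

def integrated_gradients(model, image, target_class, steps=50):
    """Integrated gradients against an all-black baseline image.

    `image` is a single (height, width, channels) float tensor scaled to [0, 1];
    the returned attribution has the same shape as the input.
    """
    baseline = tf.zeros_like(image)
    # Images interpolated on the straight path from the baseline to the input.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), (-1, 1, 1, 1))
    interpolated = baseline[tf.newaxis] + alphas * (image - baseline)[tf.newaxis]

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        probs = model(interpolated)[:, target_class]
    grads = tape.gradient(probs, interpolated)

    # Trapezoidal (Riemann) approximation of the path integral of the gradients.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (image - baseline) * avg_grads
```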
The current study encountered limitations due to the relatively small dataset, which lacked diversity. Although image augmentation techniques were applied to introduce variability, such approaches do not compare to the larger, more complex datasets typically utilized in deep learning studies [29]. Furthermore, the consistency of the recording setup throughout the study, i.e., exclusively using an iPhone's flash for illumination, has left the model's robustness to alternate lighting conditions untested; this is a notable concern since, in field conditions, lighting can be inconsistent and obstructed. Consequently, the findings presented here should be interpreted as preliminary, serving as an exploratory investigation into the application of deep learning for mastitis detection. It is recommended that future research be conducted with more extensive datasets gathered from field conditions in AMSs, to thoroughly evaluate the model's performance and practicality.
While our proposed milk sensor can greatly enhance the detection and management of severe mastitis on AMS farms, it is important to remember that, due to the sudden onset of severe CM, sensor information based solely on changes in milk and measurements collected during milking may not be sufficient for all cases [23]. Cows with severe CM may not visit the AMS. Therefore, a combination of several sensor-based (including activity sensors) and AMS-based indicators may have to be incorporated to meet the necessary demands. It is worth noting that other proposed methods already work at the quarter level [10], and by incorporating additional filters and cameras into the AMS at the quarter, we could potentially enhance the performance of our proposed detection system to also achieve this level of monitoring. The performance of a sensor-based detection system may also be enhanced by the combination of sensor-based or automatic milking-based monitoring systems with additional monitoring strategies, such as visual observations. Therefore, while our sensor offers significant advancements, it should be used in conjunction with other tools and strategies for optimal results.
Figure 1. Image pre-processing steps on filter image after the passage of milk with clots. Panel (A): original image taken with a resolution of 4032 × 3034 pixels. Panel (B): image after applying a black mask on the region of the PVC pipe. Panel (C): resulting image used for the neural network after rescaling to 500 × 500 pixels.
Figure 2. High-level architecture of the neural network. CONV: convolutional block; BN: batch normalization; RELU: rectified linear activation unit; Max Pool 2D: max pooling operation; Dense: fully connected layer; Softmax: Softmax activation block with the two different output classes.
Figure 3. Visualization of the integrated gradients by an attribution mask over the original input image. Yellow indicates a high attribution of the indicated pixels to the output label of the neural network (NN). Panel (A) shows the integrated gradients of the currently used NN. Here the pixels (around) the clots attribute the most to the output label. Panel (B) shows the integrated gradients of an NN which was trained on images where the PVC pipe was not removed from the image. This 'cheating' NN used the milk spatters on the PVC pipe to identify if this was an image with clots.
Table 1. Image augmentation parameters used by the Keras ImageDataGenerator function during training of the neural network.
Table 2. Hyperparameters selected by the genetic algorithm after 10 optimization generations. For the convolutional layers, the first number indicates the residual block and the second number indicates the convolutional layer within the block. Abbreviations: Conv: convolutional layer.
Table 3. Coherence matrix of results.
| 4,970 | 2023-12-01T00:00:00.000 | [ "Agricultural and Food Sciences", "Computer Science" ] |
Evaluation of SNP calling using single- and multiple-sample calling algorithms by validation against array-based genotyping and Mendelian inheritance
Background With diminishing costs of next-generation sequencing (NGS), whole genome analysis is becoming a standard tool for identifying the genetic causes of inherited diseases. Commercial NGS service providers generally not only provide raw genomic reads but also deliver SNP calls to their clients. However, the question for the user arises whether to use the SNP data as is, or to process the raw sequencing data further through more sophisticated SNP calling pipelines with more advanced algorithms. Results Here we report a detailed comparison of SNPs called using the popular GATK multiple-sample calling protocol to SNPs delivered as part of a 40x whole genome sequencing project by Illumina Inc. of 171 human genomes of Arab descent (108 unrelated Qatari genomes, 19 trios, and 2 families with rare diseases), and compare them to the variants provided by the Illumina CASAVA pipeline. GATK multi-sample calling identifies more variants than the CASAVA pipeline. The additional variants from GATK are robust in terms of Mendelian consistency but weak in terms of statistical parameters such as the TsTv ratio. However, these additional variants do not make a difference in detecting the causative variants in the studied phenotype. Conclusion Both pipelines, GATK multi-sample calling and Illumina CASAVA single-sample calling, have highly similar performance in SNP calling at the level of putatively causative variants. Electronic supplementary material The online version of this article (doi:10.1186/1756-0500-7-747) contains supplementary material, which is available to authorized users.
Background
Numerous NGS pipelines and tools have been developed in recent years; they are valuable to users in the field but also create confusion when selecting the desired tool. Some of the commercial NGS pipelines are CLC Genomics Workbench, DNASTAR, CASAVA, Geneious, Genomatix Solutions, GenoMiner, Partek Genomics Suite, and so on. Most of the commercial NGS pipeline tools are targeted at biologists as end users and highlight an easy, user-friendly interface. Often, these commercial tools are difficult to customize for speed when processing large numbers of samples. Alternatively, commercial vendors offer to process and ship the complete variant sets along with the sequencing of samples. Noncommercial, open-source NGS pipelines such as GATK [1,2], SAMtools [3], SOAP [4,5], SNPAAMapper [6], WEP [7], and Atlas2 [8] are also used extensively in academia and many organizations. These open-source NGS pipelines are highly customizable but require expertise to set up optimally. Many studies have been done to evaluate NGS data analysis pipelines and tools. Bao S. et al. [9] evaluated various mapping and assembly software.
Pabinger et al. [10] surveyed around 205 NGS tools covering different analytical steps such as quality assessment, alignment, variant identification, variant annotation, and visualization. Nielsen et al. [11] evaluated various SNP and genotype calling algorithms. Although these studies have helped tremendously in determining which tools and pipelines to use, they do not answer the concrete question of whether to use the data provided by a commercial vendor or to put in the extra effort to run additional well-known open-source pipelines. Also, in situations where we fail to identify a causative variant in the data set provided by a commercial vendor, we may doubt the pipeline's ability to find the variants. Thus, it becomes important to compare the variant sets provided by commercial vendors with variants obtained through one of the well-reputed tools. Several studies have confirmed the GATK pipeline's excellent performance in detecting variants. The GATK pipeline is used in large projects, such as the 1000 Genomes Project and The Cancer Genome Atlas [1,12]. However, smaller labs and institutes often rely fully on commercial vendors for complete sequencing and analysis services. Illumina Inc. is a leader in providing NGS services. Illumina uses the CASAVA and ISAAC pipelines for variant detection. Illumina has reported a comparison among ISAAC, CASAVA, and GATK, mostly regarding the speed of completing the pipeline [13]. However, an independent, detailed comparison between the Illumina and GATK pipelines using a multi-sample calling algorithm in larger cohorts is necessary. Here we compare the variant sets supplied by the Illumina CASAVA pipeline and the well-known GATK pipeline in great detail on concrete study cases and discuss the differences from a user's perspective. In general, genotype calling errors by variant callers are associated with Mendelian violations when the caller is unaware of the family structure [14]. In this study, both GATK and CASAVA are unaware of the family pedigree, and therefore Mendelian inheritance is examined in familial samples for the genotypes of variants on which the pipelines disagree. As an additional independent quality control, we use genotyping array data from the Illumina OMNI 2.5 array. We present an evaluation of the CASAVA and GATK pipelines for three different data sets: 108 unrelated Qatari genomes, 19 trios from studies on obesity and diabetes, and 2 larger families with suspected rare genetic diseases.
CASAVA SNP calling
Illumina SNP calls were based on the CASAVA-1.9.0a1_110909 pipeline. SNPs and genotypes from the CASAVA pipeline were called for each sample individually. We created a pass-quality subset of these variants by keeping the variants for which the FILTER column in the VCF file has the value "PASS" and removing all other variants. Thus, the first set, without any quality filter, will be called CASAVA ALL, and the quality-filtered set will be called CASAVA PASS in this paper. In many cases, we compared the pipelines for a group of samples; in these cases, we merged the SNPs from the CASAVA pipeline using vcftools [15]. Similarly, we created a merged VCF for the quality-filtered (PASS quality) calls from the CASAVA pipeline by merging all the PASS-quality SNPs, based on the quality column annotation (genotype quality >20), from all single-sample VCF files.
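For illustration, here is a minimal sketch of the PASS-filtering step described above, operating directly on a plain-text VCF; the column index follows the VCF specification, and the file names are hypothetical (the study itself used vcftools for merging).

```python
def keep_pass_variants(in_vcf, out_vcf):
    """Write only header lines and records whose FILTER column equals 'PASS'."""
    with open(in_vcf) as fin, open(out_vcf, "w") as fout:
        for line in fin:
            if line.startswith("#"):            # keep all header lines
                fout.write(line)
                continue
            fields = line.rstrip("\n").split("\t")
            if fields[6] == "PASS":             # FILTER is the 7th VCF column
                fout.write(line)

# Hypothetical usage:
# keep_pass_variants("sample1.casava.vcf", "sample1.casava.pass.vcf")
```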
GATK best practice pipeline
In our in-house pipeline, we used Bowtie2 [16] to align the sequencing reads against human reference genome build 37. We also used other necessary tools such as SAMtools [3], Novosort, and Picard [17] to process and format the alignment files before processing them with GATK. We implemented the GATK best-practices pipeline to call SNPs and indels, using GATK version 2.4 and GATK-UnifiedGenotyper as the SNP caller with multi-sample variant calling. The reason for using multi-sample calling is to distinguish, in a cohort analysis, non-variant genotypes that are homozygous reference from missing genotypes; with single-sample calling, where genotypes are reported only for variant sites, we cannot tell whether a non-variant site reflects a missing genotype or agreement with the reference. Also, big projects like the 1000 Genomes Project have preferred multi-sample calling over single-sample calling [18]. We used GATK-UnifiedGenotyper instead of GATK-HaplotypeCaller, a similar or better variant caller from GATK, because of their similar accuracy in calling SNPs and the computational feasibility of running UnifiedGenotyper for a large number of samples; for more than 100 samples, according to the GATK website, GATK-UnifiedGenotyper is advised over GATK-HaplotypeCaller. The real advantage of HaplotypeCaller over UnifiedGenotyper is in calling indels, but in this paper we focus on SNPs only. Next, similarly to the CASAVA pipeline, we created two variant sets from our in-house GATK pipeline, GATK ALL (without any quality filter) and GATK PASS (keeping the variants for which the FILTER column in the VCF file has the value "PASS" and removing all other variants). The variants found by the GATK pipeline were recalibrated using the GATK walker VariantRecalibrator. The input true sites used in creating the model were SNPs from dbSNP human build 132 [19], genotyping OMNI array calls from the 1000 Genomes Project, and HapMap SNP calls, used to estimate the probability that SNPs are true genetic variants rather than sequencing or data-processing artifacts. The call sets were partitioned into quality tranches, and we kept variants up to the tranche that recovered 99% of known variants (truth sensitivity) in the GATK PASS variant set.
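To illustrate the motivation for multi-sample calling, the sketch below shows how a merged multi-sample VCF lets one distinguish a missing genotype ('./.') from a confident homozygous-reference genotype ('0/0') at a site, which a collection of single-sample VCFs reporting only variant sites cannot do. The genotype strings follow standard VCF notation; the example data are invented.

```python
def classify_genotype(gt_field):
    """Classify a per-sample GT entry from a multi-sample VCF record."""
    gt = gt_field.split(":")[0].replace("|", "/")
    if gt in ("./.", "."):
        return "missing"            # no call could be made for this sample
    if gt == "0/0":
        return "hom_ref"            # confidently homozygous reference
    return "variant"                # carries at least one alternate allele

# Hypothetical genotypes for one site across three samples in a merged VCF:
for sample, gt in [("S1", "0/1:30"), ("S2", "0/0:25"), ("S3", "./.")]:
    print(sample, classify_genotype(gt))
# With single-sample calling, S2 and S3 would both simply be absent from the VCF,
# so "hom_ref" and "missing" could not be told apart.
```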
Genotyping Omni array
Human genotyping array data are from the Illumina HumanOmni2.5-8 platform. This array has about 2.37 million tag SNPs from the 1000 Genomes pilot project with MAF ≥2.5%. Illumina Inc. supplied genotypes for all samples from the HumanOmni2.5-8 by performing the Illumina Infinium LCG assay and then calling the genotypes using their proprietary software, GenomeStudio. They provide a genotype for each of these probes with GenCall scores. Illumina recommends a GenCall score cut-off of 0.15 for their Infinium-assay-based products [20]; this recommended cut-off was used to test the concordance with the GATK and CASAVA pipelines. For all three evaluation data sets, although Illumina supplied annotated VCF files, we annotated both the Illumina and GATK VCF files using SnpEff [21] and AnnTools [22] to provide a uniform annotation for comparison between the pipelines.
Results
The summarized comparison results between the CASAVA and GATK pipelines are presented in Table 1. Both CASAVA and GATK have very high similarity to the OmniArray genotypes. However, when comparing all variants from NGS, GATK identifies a higher number of variants than CASAVA. The robustness of these additional variants is analyzed and discussed below in the results presented for the comparison between the pipelines on the various data sets.
Comparison of NGS pipelines with genotyping array
The Illumina Omni 2.5 platform can detect genotypes at 2.37 million SNP loci in the human genome. In every single individual, about 30% of these 2.37 million SNPs were present in either a heterozygous or a homozygous non-reference state. Illumina only reports genotypes for such variants in the VCF files; reference-allele homozygous calls are not differentiated from no-calls. We therefore compare the pipelines only on SNPs that are reported in the VCF files. Both pipelines have very high concordance (~99%) with the genotyping array data (Table 1). The GATK pipeline has a higher number of non-reference SNPs than CASAVA, but CASAVA has a slightly higher genotype match rate (99.67%) compared to GATK (98.33%). For quality-passed variants (CASAVA PASS, GATK PASS), both pipelines have approximately the same concordance with the Illumina genotyping OmniArray data (Table 1 and Additional file 2). False positives and false negatives in Table 1 are calculated assuming the Illumina OMNI 2.5 genotype data to be correct. GATK has many more false positives than CASAVA before the PASS filter, and the opposite holds after the PASS filter. To our surprise, the TsTv ratios of these false positives are not very far from the ideal TsTv ratio of 2.0-2.1. Furthermore, the TsTv ratio of the false positives called by GATK is better, i.e. closer to 2, than the TsTv ratio of the false positives called by CASAVA, both before and after the PASS filter. Moreover, the TsTv ratio of the false positives common to both pipelines is close to 2, suggesting that this small number of shared false positives could in fact be false negatives in the OMNI 2.5 genotyping array data.
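As a sketch of the two summary statistics used throughout this comparison, the code below computes genotype concordance against array calls and the transition/transversion (TsTv) ratio from simplified, parsed records; the data structures are stand-ins for the study's actual pipeline, and the example values are invented.

```python
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def tstv_ratio(snvs):
    """snvs: list of (ref, alt) single-base substitutions, e.g. [('A', 'G'), ('C', 'A')]."""
    ts = sum((ref, alt) in TRANSITIONS for ref, alt in snvs)
    tv = len(snvs) - ts
    return ts / tv if tv else float("inf")

def genotype_concordance(ngs_calls, array_calls):
    """Fraction of sites (keyed by 'chrom:pos') where NGS and array genotypes agree,
    restricted to sites genotyped by both platforms."""
    shared = set(ngs_calls) & set(array_calls)
    matches = sum(ngs_calls[site] == array_calls[site] for site in shared)
    return matches / len(shared) if shared else 0.0

# Invented example:
ngs = {"1:12345": "A/G", "1:22222": "C/C"}
omni = {"1:12345": "A/G", "1:22222": "C/T"}
print(genotype_concordance(ngs, omni))          # 0.5
print(tstv_ratio([("A", "G"), ("C", "T"), ("C", "A")]))   # 2.0
```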
Pipeline comparison in unrelated individuals
The Venn diagram in Figure 3 shows the comparison between the CASAVA and GATK pipelines for the combined variants of all 108 unrelated individuals. For the unfiltered variant sets in Figure 3A, GATK ALL and CASAVA ALL have an approximately equal number of SNPs (24.01 million for GATK and 23.99 million for CASAVA) and an equal number of unique SNPs (2.4 million for GATK and 2.39 million for CASAVA). However, if we look at individual samples from GATK and CASAVA in Figure 4E, we find that GATK has many more SNP calls than CASAVA (4.33 million by GATK and 4.02 million by CASAVA). This discrepancy, a similar number of variants at the population level but different numbers at the sample level, can be explained by exploring shared and unique variants across the samples (Figure 5). The number of shared variants among the 108 individuals identified by GATK is much larger than for CASAVA (Figure 5d). We can also explain the discrepancy between the population and sample levels by looking at the pipeline-specific calls (GATK ONLY and CASAVA ONLY calls). Theoretically, to justify the observed discrepancy, CASAVA ONLY calls should be very different across the 108 samples and GATK ONLY calls should be similar across the 108 samples. When we checked the 2.4 million GATK ONLY SNPs of the combined variant set (Figure 3A), we found that around 56.6% (1.29 million) were present in more than 5 out
of 108 samples. In contrast, of the 2.39 million CASAVA ONLY combined variants (Figure 3A), only 18.8% (0.45 million) were present in more than 5 out of 108 samples. The higher proportion of calls shared across samples among GATK ONLY SNPs, compared to CASAVA ONLY SNPs, reflects the effect of multi-sample calling in the GATK pipeline. We hypothesize that this effect is desirable, since the samples are from the same population. In other words, to have confidence in SNPs on which the pipelines disagree, the variant calls should agree across samples, provided that the samples originate from the same population. However, the variants identified by only one pipeline (GATK ONLY or CASAVA ONLY SNPs) have a lower TsTv ratio compared to variants that are common to both pipelines (Figure 4A and Figure 4B). The TsTv ratio of GATK ONLY SNPs before the PASS filter in Figure 4A is very low (1.096 ± 0.003). Similarly, the TsTv ratio of CASAVA ONLY SNPs in Figure 4B is low (1.485 ± 0.001). The lower TsTv ratio of pipeline-specific variants indicates the presence of false positives. Furthermore, the Het/Hom ratio of the GATK ONLY subset after the GATK PASS filter is very high, as shown in Figure 4D, indicating that GATK calls more heterozygous false positives than homozygous false positives. In general, the explanation for the lower TsTv ratio both before and after the PASS filter should be similar: the larger number of pipeline-specific variants contains more false positives. In addition to the pipeline-specific variant count, lower-quality variants could be the reason why the TsTv ratio of the GATK ONLY set in Figure 4A is much lower than the TsTv ratio of the GATK ONLY subset in Figure 4B. However, before the PASS filter the number of variants in the combined GATK ONLY set (2.4 million) is similar to the CASAVA ONLY set (2.39 million) and should therefore not yield a drastically different TsTv ratio. Moreover, the GATK ONLY subset has more variants shared among the 108 samples than CASAVA ONLY, so intuitively we would expect a better TsTv ratio for GATK ONLY than for CASAVA ONLY. The opposite behavior of the TsTv ratio can thus be attributed to GATK multi-sample calling, which may place a doubtful SNP in a sample at a particular locus if one or more other samples have a confirmed SNP at that locus. This suggests that multi-sample calling has the advantage of calling more variants, but at the cost of more false positives. Another possible explanation for the lower TsTv ratio of pipeline-specific variants could be the non-universal nature of the TsTv ratio [23]. We tested this by randomly sampling 2.4 million variants 10 times and computing the TsTv ratio; we found the TsTv ratio of these randomly sampled variants to be 2.051 ± 0.001, which excludes the non-universal nature of the TsTv ratio as a possible explanation. Thus, the lower TsTv ratio of the pipeline-specific (GATK ONLY and CASAVA ONLY) subsets is an indication of false positives. The SNPs on which the pipelines disagree can also be analyzed within a family structure to check for Mendelian violations, which we did by looking at 19 trios (father, mother, and offspring) and 2 families with homozygous recessive diseases. The pipeline difference after the PASS filter at the per-sample level (Figure 4F) is opposite to that before the PASS filter (Figure 4E), i.e. the number of SNPs per sample in the GATK call set is lower than in CASAVA. However, at the population level GATK called more SNPs both before and after the PASS filter (Figure 3A and Figure 3B).
It is important to see how the PASS filter changed the allele-frequency distribution in GATK and CASAVA. The minor allele frequency (MAF) distribution is plotted in Additional file 3 and the variant frequency distribution in Additional file 4 to show the effect of the PASS filter for both GATK and CASAVA. In Additional file 3, we can see that the PASS filter removes a larger share of the high-MAF variants and, therefore, the filtered set shows a higher proportion of low-MAF variants. In Additional file 4, we can see that the GATK distributions before and after the PASS filter are far apart, while the CASAVA distributions before and after the PASS filter partially overlap. This shows that many low-quality variants were identified by GATK in each of the 108 unrelated samples, and it also explains the higher number of false positives and the lower TsTv ratio of GATK compared to CASAVA before the PASS filter.
Pipelines comparison in trios
The CASAVA and GATK pipelines were compared for 19 trios from the Qatari population by taking the combined variant set of each trio separately (Figure 6 and Additional file 5). On average, GATK ALL has 7 million variants in any trio compared to 5.25 million variants in CASAVA ALL (Figure 6). The large difference between the GATK ALL and CASAVA ALL variant sets in any trio can be attributed to GATK multi-sample calling, but it raises the question of the quality of these extra variants. Both pipelines have an approximately equal percentage of variants with Mendelian violations (3.40% for CASAVA ALL and 3.47% for GATK ALL; Figure 6C). Taking Mendelian violation as a criterion to judge confidence in variants, the CASAVA pipeline missed the extra 1.75 million variants present in GATK ALL, which were of comparable quality. However, the lower TsTv ratio of 1.01 for Mendelian-violated GATK ALL variants, compared to a TsTv ratio of 1.47 for Mendelian-violated CASAVA ALL variants (Figure 6A), casts doubt on these extra 1.75 million variants of GATK ALL.
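A minimal sketch of the Mendelian-consistency check applied to the trios: a child's unphased genotype is consistent if one allele can be drawn from the mother and the other from the father. The genotype encoding and the example trio are illustrative assumptions, not data from the study.

```python
def violates_mendel(father, mother, child):
    """Return True if the child's unphased genotype cannot be explained
    by one allele from each parent."""
    f = father.split("/")
    m = mother.split("/")
    c = child.split("/")
    # Try both assignments of the child's alleles to the parents.
    consistent = (c[0] in f and c[1] in m) or (c[1] in f and c[0] in m)
    return not consistent

# Invented example trio:
print(violates_mendel("A/C", "A/A", "C/C"))   # True  (C cannot come from the mother)
print(violates_mendel("A/C", "A/A", "A/C"))   # False (consistent with both parents)
```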
Pipelines comparison for calling variants in monogenic homozygous recessive diseased families
We analyzed two different families with affected children.
In Family 1, the affected children were diagnosed with hypoplasia of the cerebellum, a monogenic homozygous recessive disease [24][25][26][27][28]. In Family 2, the affected children were diagnosed with abnormal pain sensation, which is also a monogenic homozygous recessive disease [29][30][31][32][33]. The numbers of variants between pipelines and between quality-filter sets follow a pattern similar to what we saw above in the comparison between the pipelines in trios. However, in these cases, the difference in Mendelian violations between the pipelines is strongly pronounced. The difference is largest between the CASAVA ALL and GATK ALL variant sets, and the details are shown in Table 2 and Table 3; CASAVA shows a substantially higher fraction of variants with Mendelian violations in both the CASAVA ALL and CASAVA PASS sets. In contrast, GATK has only 2.96% (187,920 out of 6,337,108) of variants with Mendelian violations in the GATK ALL set and 0.14% (7,122 out of 5,004,048) in the GATK PASS set (Table 2 and Additional file 7). Because both children are affected by hypoplasia of the cerebellum, and the parents and aunt are unaffected, the causative variant should be a homozygous variant [34]. We further investigated the pipelines' performance in finding the homozygous recessive variants. In this paper, we use the term Homozygous Recessive Condition (HRC) for any particular variant position in a family when all three of the following conditions are met: 1) all affected offspring are homozygous, 2) all affected offspring have the same genotype, and their genotype is different from that of the unaffected individuals in the family, and 3) all affected offspring follow Mendelian inheritance (e.g., Father GT = A/C, Mother GT = A/C, Affected Child 1 GT = C/C, Affected Child 2 GT = C/C). Both the CASAVA and GATK pipelines yield approximately similar numbers of HRC variants (Table 2). They also have similar numbers of region-specific or known variants, such as exonic, CDS, 3'UTR, 5'UTR, intronic, non-synonymous coding, and 1000 Genomes variants. Furthermore, the pipelines have a similar number of commonly known variants, such as those in the 1000 Genomes Project and Q108 (108 unrelated individuals from Qatar). After filtering out the known variants, we tried to map the remaining variants to genes known in the literature for the phenotype. We could not map the set of possible causative variants to known genes in this case. Therefore, we tried another real case of a homozygous recessive disease with a pair of normal and affected siblings.
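The three HRC criteria can be expressed as a simple predicate over the genotypes at one position; the sketch below does so and reproduces the example genotype pattern given in the text. The genotype encoding is an assumption, and the Mendelian check repeats the logic sketched in the trio section above.

```python
def violates_mendel(father, mother, child):
    # Same consistency test as in the trio sketch above.
    f, m, c = father.split("/"), mother.split("/"), child.split("/")
    return not ((c[0] in f and c[1] in m) or (c[1] in f and c[0] in m))

def is_homozygous(gt):
    a, b = gt.split("/")
    return a == b

def meets_hrc(father, mother, affected, unaffected=()):
    """Homozygous Recessive Condition at one variant position.

    affected / unaffected: genotype strings for affected offspring and for
    other unaffected individuals in the family (besides the parents).
    """
    # 1) all affected offspring are homozygous
    if not all(is_homozygous(gt) for gt in affected):
        return False
    # 2) all affected offspring share one genotype, which differs from the
    #    genotypes of the unaffected individuals in the family
    shared = set(affected)
    if len(shared) != 1:
        return False
    gt_affected = shared.pop()
    if gt_affected in set(unaffected) | {father, mother}:
        return False
    # 3) all affected offspring follow Mendelian inheritance
    return all(not violates_mendel(father, mother, gt) for gt in affected)

# Example from the text: Father A/C, Mother A/C, two affected children C/C
print(meets_hrc("A/C", "A/C", affected=["C/C", "C/C"]))   # True
```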
Diseased family 2
This family has a different structure because of the presence of unaffected siblings (Figure 2B), which gives extra power to evaluate the pipelines, since the unaffected siblings provide an inherent validation of the variants, e.g. homozygous recessive variants identified by both pipelines but with mismatching genotypes can be checked against Mendelian inheritance in affected and unaffected individuals separately. We present a detailed comparison of the pipeline performances for this family in Table 3. The additional parameters used to judge the pipelines in Table 3, as compared to the previous case in Table 2, are due to the two additional normal siblings in this case. Exclusively determined HRC variants are divided into two sets of variants for
analyzing pipeline performance: 1) HRC variants called by Pipeline1 but not by Pipeline2, with a mismatch in genotype calls between the pipelines, and 2) HRC variants called by Pipeline1 with no variant called by Pipeline2 in any of the five individuals.
In the first set of variants (Table 3, columns "GT mismatch by GATK" and "GT mismatch by CASAVA"), only one pipeline meets the HRC and the pipelines disagree on the genotype calls. The cases where one pipeline shows both HRC and Mendelian inheritance while the other pipeline shows neither could be a strong indication that the second pipeline's calls are wrong at these variants. In Table 2, for the CASAVA ALL and GATK ALL sets, we can see that out of 1,499 HRC variants determined exclusively by GATK, 929 (62%) had both Mendelian violations and different genotypes according to the CASAVA pipeline. In contrast, out of 781 HRC variants determined exclusively by CASAVA, only 244 (31%) had both Mendelian violations and different genotypes. Therefore, for exclusively determined HRC variants where the genotype calls mismatch between the pipelines, the GATK pipeline is more robust than the CASAVA pipeline when comparing all variants without any quality filter.
We also examined Mendelian violations in the other set of exclusively determined HRC variants, those for which the second pipeline called no variant in any member of the family (Table 3, columns "Absent in GATK" and "Absent in CASAVA"). Both CASAVA and GATK show almost no Mendelian violations in these cases. Table 3 also shows many other categories for comparing CASAVA and GATK. CASAVA identifies a slightly larger number of non-synonymous variants than GATK. However, GATK has a higher percentage of non-synonymous variants that are HRC variants compared to CASAVA. About one hundred of these non-synonymous variants from both pipelines are linked in the literature to 60 pain-related genes, identified using SnpEff [21] and AnnTools [22]. After excluding the common variants (variants present in a homozygous state in either the 1000 Genomes Project or the 108 unrelated Qatari individuals, and variants present in a heterozygous state with MAF >5%) from these non-synonymous variants, 5 variants remained for both pipelines (non-synonymous pain-related rare variants in Table 3). For both pipelines, of these 5 variants only one was an HRC variant, and it is most probably the causative variant.
Discussion
We found excellent performance of both the GATK and CASAVA pipelines when matching their genotype calls against the Illumina OmniArray genotype calls. However, we saw differences in the number of variants called by each pipeline in the unfiltered variant sets (CASAVA ALL, GATK ALL); in general, GATK identifies more variants because of its multi-sample calling algorithm. Most of these additional variants are of low quality but are not poor in terms of Mendelian inheritance. The CASAVA pipeline, in most cases, has a TsTv ratio closer to 2 than GATK. Since both the CASAVA and GATK pipelines were unaware of the pedigree structure while calling genotypes, Mendelian inheritance is a good criterion for judging the confidence of conflicting or discordant genotype calls in familial samples. In general, the GATK pipeline produced fewer Mendelian violations across all the different sets. Notably, the PASS filter in the GATK pipeline drastically reduces Mendelian violations, from 2.4% in GATK ALL to 0.14% in GATK PASS in disease family 1 and from 4.86% in GATK ALL to 0.19% in GATK PASS in disease family 2. In the CASAVA pipeline, however, the PASS filter does not reduce Mendelian violations significantly: from 4.78% in CASAVA ALL to 1.23% in CASAVA PASS in disease family 1 and from 8.03% in CASAVA ALL to 1.87% in CASAVA PASS in disease family 2. Assuming Mendelian violations to be inversely correlated with pipeline performance in cases of genotype mismatch where the other pipeline satisfies the HRC, GATK multi-sample calling performs better than CASAVA single-sample calling for these cases. However, we did not find any significant difference in the ability of these pipelines to identify the causative variants in the abnormal pain perception family. We also found extremely low Mendelian violation rates for exclusively determined homozygous-recessive-condition variants that were not called in any family member by the other pipeline, which suggests that both the GATK and CASAVA pipelines are robust in finding the functional variants. This broad agreement between the pipelines suggests that we can normally avoid re-calling variants with a more sophisticated algorithm, except for specific scientific goals. One such goal could be finding de novo mutations in samples where a comprehensive set of variants is desired; this can be obtained by combining the variant sets from both pipelines while tolerating false positives. Also, if the cohort sample size is large and the scientific goal depends on phased SNPs, it is desirable to use a more sophisticated SNP calling approach such as GATK multiple-sample calling.
On another note, the results presented here should hold for newer versions of GATK as well. To verify this, we performed a sensitivity analysis (see Additional file 9) for 10 different versions of GATK released over the last one and a half years on our diseased family 2 data set. The relative standard deviations of the variant counts across GATK versions for the before-filter and PASS-filter sets are only 0.89% and 2.02%, respectively, while the differences between GATK and CASAVA presented in this paper using GATK v2.4 are around 4.9% and 7.6% for the before-filter and PASS-filter sets, respectively. Similarly, the relative standard deviations of the TsTv ratio across GATK versions for the before-filter and PASS-filter sets are only 0.58% and 0.59%, respectively, while the differences between GATK and CASAVA using GATK v2.4 are around 8.2% and 2.4% for the before-filter and PASS-filter sets, respectively. Thus, different versions of GATK have very little effect on the number of variants identified and do not change the results and conclusions drawn in this paper using GATK v2.4.
We used three different types of data sets (108 unrelated individuals, 19 trios, and 2 diseased families) to cover a range of possible study designs. We found differences in the results for related and unrelated individuals. In general, the pipeline comparison results should hold for most possible data sets, with some limitations. We have only tested sequences produced on the Illumina platform, which allows a fair comparison of the pipelines, but the results might deviate for sequence reads from other platforms. Also, we have not tested for complex diseases like cancer, where somatic mutations are frequent.
"Biology",
"Computer Science"
] |
Disruption of the microbiota affects physiological and evolutionary aspects of insecticide resistance in the German cockroach, an important urban pest
The German cockroach, Blatella germanica, is a common pest in urban environments and is among the most resilient insects in the world. The remarkable ability of the German cockroach to develop resistance when exposed to toxic insecticides is a prime example of adaptive evolution and makes control of this insect an ongoing struggle. Like many other organisms, the German cockroach is host to a diverse community of symbiotic microbes that play important roles in its physiology. In some insect species, there is a strong correlation between the commensal microbial community and insecticide resistance. In particular, several bacteria have been implicated in the detoxification of xenobiotics, including synthetic insecticides. While multiple mechanisms that mediate insecticide resistance in cockroaches have been discovered, significant knowledge gaps still exist in this area of research. Here, we examine the effects of altering the microbiota on resistance to a common insecticide using antibiotic treatments. We describe an indoxacarb-resistant laboratory strain in which treatment with antibiotic increases susceptibility to orally administered insecticide. We further reveal that this strain harbors a gut microbial community that differs significantly from that of susceptible cockroaches in which insecticide resistance is unaffected by antibiotic. More importantly, we demonstrate that transfer of gut microbes from the resistant to the susceptible strain via fecal transplant increases its resistance. Lastly, our data show that antibiotic treatment adversely affects several reproductive life-history traits that may contribute to the dynamics of resistance at the population level. Together, these results suggest that the microbiota contributes to both physiological and evolutionary aspects of insecticide resistance and that targeting this community may be an effective strategy to control the German cockroach.
Introduction
The German cockroach, Blatella germanica, is a widespread urban pest of medical and economic importance. B. germanica is a known or suspected mechanical vector of numerous enteric bacterial pathogens [1][2][3][4][5] and is also a significant contributor to allergic asthma disease [6][7]. The saliva, feces and shed cuticle of the German cockroach contain several potent allergens that have been detected at concentrations associated with allergic sensitization and asthma morbidity in 10-13% of U.S. homes [8]. Due to growing concerns regarding the development of resistance to commonly used insecticides in field populations [9], the improvement of current tools for the management of cockroach infestations is an ongoing research priority.
The use of antibacterial treatments has the potential to facilitate control of a range of insects. Most arthropod species have evolved intricate relationships with symbiotic bacteria and depend on microbes for reproduction, development, metabolism, and immunity, to some extent [10]. Disrupting these commensal bacteria can have adverse effects on insect physiology resulting in death or reduced fitness. For example, direct mortality was observed in adult tsetse flies fed a blood-meal containing the antibiotic tetracycline as well as in lice fed four different antibiotics [11][12]. On the other hand, elimination of Wigglesworthia glossinidia from tsetse flies by oral provisioning of tetracycline reduced fecundity and pupal emergence of F1 offspring, while elimination of Sodalis glossinidius with streptozotocin reduced the longevity of the offspring [13]. Similarly, pea aphids fed rifampicin had shorter adult lifespans and reduced production of F1 offspring relative to controls [14]. In the omnivorous American cockroach, removal of symbiotic bacteria from the gut with metronidazole reduced weight gain in developing nymphs [15]. The removal of bacterial symbionts can not only affect insect lifespan and fecundity, but may also impact insecticide resistance [16]. Specifically, in both the bean bug, Riptortus pedestris, and oriental fruit fly, Bactrocera dorsalis, insecticide resistance has been attributed to the production of detoxification enzymes by Proteobacteria located in the midgut and antibiotic treatments can restore susceptibility to resistant individuals [17][18].
The German cockroach harbors a diverse microbiota. The primary cockroach symbiont, Blattabacterium cuenoti, is a vertically transmitted intracellular bacterium found at high titers in specialized cells of the fat bodies of all German cockroaches. During the nymph stage, the endosymbiont migrates to the ovaries and eventually becomes incorporated into developing oocytes, resulting in transmission from mother to offspring [19]. In addition, the gut microbiota of the German cockroach was recently characterized and found to be comprised primarily of Bacteroidetes, Firmicutes, Fusobacteria, and Proteobacteria [20]. This dynamic microbial community, which changes throughout development from nymph to adult, is thought to be acquired horizontally from the environment and diet, as well as vertically through the consumption of feces (coprophagy) from other members of a colony [20][21][22][23]. In both B. germanica and the related Panchlora cockroach, the microbial community also differs dramatically across sections of the gut [24][25].
Prior studies of multiple cockroach species have investigated the effects of the microbiota on host physiology, implicating commensal microbes in nutrient provisioning and metabolism, development, immunity, and aggregation behavior [15,[25][26][27][28][29]. However, the phenomenon of symbiont-mediated resistance has not been explored in B. germanica. Here, we sought to determine how the microbiota is involved in the development of insecticide resistance and the propagation of these traits through German cockroach populations using antibiotic treatments. Our results indicate that commensal gut bacteria are involved in physiological resistance to insecticide and support a role for both fat body and gut microbial communities in the regulation of reproductive life history traits that may contribute to the establishment of resistance at the population level.
Cockroach strains and maintenance
Three strains of German cockroach (Blatella germanica) were used in our experiments. The susceptible Orlando normal strain (ORL), which was colonized prior to the widespread use of insecticides (>60 years ago), was maintained without insecticide selection [30]. Further, an indoxacarb-resistant field strain (DE) was collected from Destin, FL in 2011 and subsequently maintained in the laboratory without insecticide selection. A portion of this field-collected colony was also separated and maintained under indoxacarb bait selection pressure by periodic treatment with Advion cockroach gel bait containing 0.6% indoxacarb (Syngenta, Basel, Switzerland), as previously described [9]. This strain was termed DEA. All cockroaches were reared in 15 × 9 inch plastic arenas held in environmentally controlled rooms at 27 °C and 45% relative humidity on a 12:12 light:dark cycle. Insect colonies and experimental insects were provided with cardboard harborages, water, and dog chow (Purina, St. Louis, MO) unless otherwise indicated.
Antibiotic and insecticide treatments
For analyses of the effects of bacteria on insecticide resistance (Figs 1 and 2), the antibiotic doxycycline (Sigma Aldrich, St. Louis, MO) was provided in either a customized gel bait or in water, while indoxacarb was administered either in the same gel or topically for determination of the LD 50 . Gel baits for oral toxicity experiments (Fig 1) consisted of the following by weight: 48% ground chicken feed (Purina), 1% potassium sorbate, 0.5% agar, and either 0.5% doxycycline, 0.05% indoxacarb, or both, with the balance consisting of distilled water. Active ingredient doses were selected to both minimize effects on gel palatability and allow for maximum differentiation of mortality between resistant and susceptible insect strains. Baits were administered to experimental replicates of 50 healthy cockroaches (15 males, 10 females, 25 nymphs/ group) in plastic arenas under standard conditions and mortality was assessed over a period of 4 days. Mortality curves were compared by two-way ANOVA using GraphPad Prism 5 (Graphpad Software Inc., La Jolla, CA).
For determination of the LD50 of topically applied indoxacarb (Fig 2), healthy male cockroaches from the resistant DEA colony were pre-treated by adding 0.5% doxycycline to their water source for 7 days. Insects were then anesthetized on ice prior to applying 1 μL of acetone containing either no insecticide (control) or one of a series of concentrations of indoxacarb (0.1-0.001 μg/μL), chosen such that at least 3 concentrations would yield 1-99% mortality [9]. Applications were made to the ventral thorax between the coxae of 20 cockroaches for each concentration using a microapplicator (Hamilton, Reno, NV) equipped with a glass syringe. After application, insects were transferred into arenas and maintained under standard conditions. Mortality was examined after 24 hours, and moribund cockroaches (those unable to right themselves when on their back) were counted as dead in our analyses. The LD50 was determined by nonlinear regression analysis using GraphPad Prism 5.
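The study fitted the dose-response data in GraphPad Prism; purely as an illustration, the sketch below fits a two-parameter log-logistic curve with SciPy and reads off the LD50 (the dose producing 50% mortality). The dose-mortality values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, ld50, slope):
    """Two-parameter log-logistic dose-response: fraction dead at a given dose."""
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

# Invented doses (ug indoxacarb/insect) and observed mortality fractions (n = 20 each)
doses = np.array([0.001, 0.003, 0.01, 0.03, 0.1])
mortality = np.array([0.05, 0.15, 0.40, 0.70, 0.95])

params, _ = curve_fit(hill, doses, mortality, p0=[0.02, 1.0])
ld50, slope = params
print(f"Estimated LD50 = {ld50:.4f} ug/insect (slope = {slope:.2f})")
```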
Microbiota sequencing
Sequencing of the cockroach microbiota (Fig 3) was carried out at Molecular Research DNA Lab (Shallowater, TX, USA) using an Illumina MiSeq system (Illumina, San Diego, CA) according to the manufacturer's guidelines. For each strain or treatment group, whole guts (foregut, midgut, hindgut) were dissected from 6 male cockroaches using sterilized tools. Insects were starved of food for 24 hours to minimize non-stably associated bacteria and were rinsed with ethanol and sterile water immediately before dissection. Gut tissues from the 6 insects were pooled together to account for variation between individuals from the same treatment, though this is expected to be minimal given cockroach aggregation behavior [31]. DNA was extracted using the DNeasy PowerSoil Kit (Qiagen, Hilden, Germany). Primers for the V4 variable region of the bacterial 16S rRNA gene, with a barcode on the forward primer [32], were used to conduct PCR with the HotStarTaq Plus Master Mix Kit (Qiagen). Cycle conditions were as follows: 94 °C for 3 minutes, followed by 28 cycles of 94 °C for 30 seconds, 53 °C for 40 seconds, and 72 °C for 1 minute, after which a final elongation step at 72 °C for 5 minutes was performed. Following PCR, products were checked on 2% agarose gels to verify successful amplification. The different samples were then pooled together in equal proportions based on their molecular weight and DNA concentrations. Pooled samples were purified using calibrated Ampure XP beads (Beckman Coulter, Brea, CA) and subsequently used to prepare a DNA library following the Illumina TruSeq DNA library preparation protocol.
Analysis of sequencing data
16S rRNA sequencing reads were demultiplexed using MRDNA software (http://www.mrdnafreesoftware.com). Reads were then processed and assembled into amplicon sequence variants (ASVs) using the most recent release of dada2 in R (https://benjjneb.github.io/dada2/index.html) [33]. Forward reads shorter than 240 base pairs and reverse reads shorter than 160 base pairs were discarded, as well as chimeric reads and any reads with more than 2 expected errors (see S1 Table for raw sequence statistics). Taxonomy was assigned using the dada2-formatted greengenes 13.8 training dataset (https://zenodo.org/record/158955#.W61TS2hKist) [34], and rarefaction analysis was done to confirm that sequencing captured all ASVs. Contaminating reads from the Blattabacterium genus, which were present due to small amounts of difficult-to-remove fat body tissue surrounding the dissected guts, were subsequently removed using the prune_taxa function in phyloseq. Relative abundance plots were constructed using phyloseq and ggplot2 [35], and Pearson's chi-square testing was conducted to identify significant differences in distribution between samples at the class level. Shannon Index analysis of alpha diversity was performed using phyloseq, and beta diversity analysis was performed using a complete-linkage hierarchical clustering model with the base R stats package, with Euclidean distance used as the distance metric for the dendrogram. Venn diagrams were generated using ggforce [36]. All raw sequencing reads associated with the manuscript were deposited into the NCBI Sequence Read Archive (SRA) under accession number SRP145206, BioProject PRJNA470750.
Fig 3. The gut microbiota varies among resistant and susceptible German cockroach strains. Whole guts from untreated cockroaches, or cockroaches continuously exposed to 0.5% doxycycline for 4 days, were dissected and DNA was isolated for PCR amplification and sequencing of bacterial 16S rRNA genes. (A) Relative abundance of ASVs that were called to the taxonomic rank of class. Taxa with <1% relative abundance were grouped together as "other." (B) Alpha diversity index (Shannon Index) of amplicon sequence variants (ASVs). (C) Hierarchical clustering analysis (beta diversity) of ASVs.
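The diversity analyses above were performed in R with phyloseq; as an illustration of the two metrics only, the sketch below computes a per-sample Shannon index and a complete-linkage dendrogram on Euclidean distances from an ASV count table using SciPy. The count table and its values are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over ASV relative abundances."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Invented ASV count table: rows = samples, columns = ASVs
asv_table = np.array([
    [120, 30,  5,  0, 45],   # ORL
    [ 80, 60, 40, 25, 10],   # DE
    [ 70, 55, 50, 30, 15],   # DEA
    [110, 35, 10,  5, 40],   # DEA.Ab
])
samples = ["ORL", "DE", "DEA", "DEA.Ab"]

for name, row in zip(samples, asv_table):
    print(name, round(shannon_index(row), 3))

# Complete-linkage hierarchical clustering on Euclidean distances (beta diversity)
Z = linkage(pdist(asv_table, metric="euclidean"), method="complete")
dendrogram(Z, labels=samples, no_plot=True)   # set no_plot=False to draw the tree
```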
Fecal transplant
For fecal transplant experiments (Fig 4), replicates of 50 insects of the susceptible lab strain (15 males, 10 females, 25 nymphs) were placed into arenas and pre-treated with 0.5% doxycycline by addition to their water source for 4 days. After 4 days, the antibiotic was removed and each arena received 1 gram of feces from a resistant, bait-selected colony, or from a susceptible colony (control). For the following 3 days, cockroaches were starved of food to promote the consumption of feces. Afterwards, dog chow was added to the arenas and cockroaches were allowed to recover from starvation for 24 hours before insecticide exposure. Insecticide treatments were carried out using customized gel baits as described above and survival curves were compared by two-way ANOVA using Graphpad Prism 5.
Incorporation of synergists into indoxacarb and antibiotic gel baits
The insecticide synergists piperonyl butoxide (PBO), a cytochrome P450 monooxygenase inhibitor, and diethyl maleate (DEM), a glutathione-S-transferase inhibitor, were incorporated into customized gel baits at a final concentration of 0.25% using the aforementioned formulas containing indoxacarb and doxycycline (Fig 5). Cockroaches were treated with these baits and mortality over a period of 4 days was recorded. Survival curves were analyzed by two-way ANOVA using GraphPad Prism 5.
Life history analysis
Analyses of the effects of microbes on life history traits were performed on the susceptible ORL strain (Fig 6). In control groups, individual oothecae (egg cases) were collected prior to hatching and placed into experimental areas under the standard conditions described above. Once the oothecae hatched, cockroaches were monitored at regular intervals and the time taken to reach adulthood was recorded. Cockroaches were also weighed at adulthood. After these measurements were taken, the insects were maintained in the same experimental containers until females developed mature oothecae. These oothecae were then collected, placed into new arenas individually, and the number of viable offspring produced from each ootheca was recorded. For comparison, the same measurements were taken from cockroaches that were continuously exposed to 0.5% doxycycline (Sigma Aldrich) by addition to their water source upon hatching. Development curves were analyzed by Mantel-Cox Log rank test, while adult weight and fecundity data were analyzed by t-test using Graphpad Prism 5.
PCR
The presence of Blattabacterium throughout development in antibiotic-treated cockroaches was examined by PCR (Fig 6). Single cockroaches were collected into Extract-N-Amp tissue preparation buffer (Sigma Aldrich) and homogenized using a pestle. DNA extraction was performed according to the manufacturer's protocol and 4 μL of extraction solution was used for amplification of insect or Blattabacterium sequences with Extract-N-Amp PCR ReadyMix on a MiniPCR system (Amplyus, Cambridge, MA). The primers and cycle conditions used were as previously described to amplify a 460 base pair fragment of the 16S RNA gene of B. cuenoti [37][38] or a 400 base pair fragment of insect mitochondrial DNA [39]. Amplified PCR products were verified by electrophoresis on 1% agarose gels using a bluegel integrated electrophoresis and visualization system (Amplyus). The experiment was replicated with 2 independent sets of cockroaches. Original images of gels used in the figure are presented in the supplementary material.
Antibiotic treatment enhances oral but not topical toxicity of indoxacarb
We began our investigation of microbe-mediated insecticide resistance in the German cockroach by examining the effects of antibiotic treatment on insecticide toxicity. Specifically, we first determined the outcome of combining doxycycline (0.5%) and the commonly used oxadiazine insecticide, indoxacarb (0.05%), in gel baits fed to 3 cockroach strains with differential levels of resistance (Fig 1). In these experiments, neither blank gels containing only chicken feed nor gels containing doxycycline without indoxacarb caused any mortality, while gels containing indoxacarb alone led to variable mortality that corresponded with the resistance status of the strain being tested. More importantly, in the resistant, bait-selected DEA (Fig 1A, ANOVA, n = 3, P<0.001) and resistant DE (Fig 1B, ANOVA, n = 3, P = 0.002) strains, gels containing a combination of doxycycline and indoxacarb elicited significantly higher mortality than the gels containing the insecticide alone. Unexpectedly, we did not observe the same mortality trend in the susceptible laboratory strain. In these cockroaches, the effects of the combination gel were not statistically different from those of the indoxacarb gel (Fig 1C, ANOVA, n = 3, P = 0.196). These results indicated that the impact of commensal microbes on insecticide resistance in the German cockroach is strain-dependent and not conserved across populations. One plausible explanation for the observed specificity is that microbe-mediated resistance is derived from microbial populations in the gut that may vary among cockroach strains. To test this hypothesis, we then examined the effect of antibiotic treatment on the toxicity of indoxacarb topically applied directly to the cuticle (Fig 2). This approach bypassed the gut barrier that is encountered by insecticides delivered in bait and prevented interaction between indoxacarb and the gut microbiota. Doxycycline had no effect on indoxacarb toxicity when it was administered in this fashion. That is, in the resistant, bait-selected strain, topically applied indoxacarb was equally toxic to cockroaches that were pre-treated with doxycycline for seven days and those that received no antibiotic treatment (LD50 of 0.0276 μg/insect and 0.0263 μg/insect, respectively). These results were in direct contrast to the data obtained in our oral toxicity experiments and further suggested that the impact of doxycycline on the oral toxicity of indoxacarb is dependent on antibiotic effects against microbial populations in the cockroach gut.
The gut microbiota differs among resistant and susceptible cockroach strains
To identify potential disparities among resistant and susceptible cockroaches that could explain the variable effect of doxycycline on indoxacarb toxicity, we next sequenced the gut microbiota of each strain under basal conditions, as well as after antibiotic treatment in the bait selected, resistant strain. Consistent with previous analyses [20-21, 23,25], albeit with some deviation, the microbiota of our insects consisted primarily of Proteobacteria, Bacteroidia, Firmicutes (i.e. Bacilli, Clostridia) and Fusobacteria. The relative abundances of these taxa in the gut of German cockroaches can vary drastically among individuals from different locations, and even among individuals in a laboratory environment kept under different dietary regimes [21,40]. In our samples, Gammaproteobacteria were prevalent due to the presence of what may be a non-Blattabacterium endosymbiont. Nonetheless, several differences in the remaining taxa were also observed between strains and treatment groups in our studies ( Fig 3A). Primarily, we found that alpha diversity was greater in resistant cockroaches than in the susceptible lab strain (Fig 3B) and these strains (DEA vs ORL) harbored a significantly different distribution of microbial taxa (chi-square, P<0.001). Treatment with antibiotic (DEA vs DEA.Ab) reduced alpha diversity and significantly altered the distribution of microbial taxa (chi-square, P<0.001), while selection with insecticidal bait (DE vs DEA) appeared to have minimal impact on diversity but still significantly altered the distribution of the taxa present (chi-square, P<0.001). Only the resistant but not insecticide selected strain (DE) harbored Methanomicrobia and Verrucomicrobiae at an abundance >1%, while the bait-selected strain (DEA) exclusively harbored Fusobacteria at an abundance >1%, but only in the absence of antibiotic treatment. Clostridia and Deltaproteobacteria were noticeably lacking in susceptible cockroaches, as neither was present at a relative abundance >1%. Additionally, the relative abundance of Bacteroidia was reduced in the susceptible strain. Treatment of the bait-selected, resistant strain (DEA) with doxycycline reduced the abundance of Deltaproteobacteria and Bacteroidia. In total, our beta diversity analysis showed that the communities of the resistant strains before and after bait selection were most similar and markedly dissimilar from the susceptible strain, but treatment of the bait-selected strain (DEA) with doxycycline shifted the composition of the microbiota towards one that more closely resembled that of the susceptible strain (Fig 3C). The aforementioned patterns were apparent even when highly abundant Gammaproteobacteria were not considered (S2 Fig).
Fecal transplant increases insecticide resistance in susceptible cockroaches
Having implicated variation in the gut microbiota as a potential contributor to insecticide resistance, we sought to determine whether resistance could be recapitulated by transfer of microbes from the gut of the resistant, bait-selected strain to the susceptible strain (Fig 4A). We hypothesized that this transfer could be accomplished by supplementing cockroaches with conspecific feces. Indeed, when susceptible cockroaches were fed feces obtained from a colony of resistant counterparts prior to insecticide treatment, mortality after 4 days of exposure to a gel bait containing indoxacarb was decreased relative to groups fed feces from their own susceptible colony as a control (Fig 4B, ANOVA, n = 3, P = 0.008). Sequencing of the gut microbiota of susceptible cockroaches transplanted with feces from the resistant, bait-selected strain further revealed that fecal transplant significantly altered the distribution of microbial taxa (chi-square, P<0.001) and introduced a number of unique assigned taxa that were not previously present (Fig 4C and 4D). In particular, the abundance of Bacteroidia appeared to expand, while Clostridia and Deltaproteobacteria, which are present in the resistant strains but negligible in the susceptible strain, were detected at augmented levels. Therefore, microbiota-mediated resistance can be partially recapitulated in susceptible cockroaches through a transfer of gut microbes that occurs during coprophagy.
Inhibition of glutathione-S-transferase prevents the effect of doxycycline on indoxacarb toxicity
Experiments involving the incubation of indoxacarb in bacterial cultures derived from cockroach feces produced no evidence of direct detoxification by these microbes (S3 Fig). We instead hypothesized that the gut microbiota may confer insecticide resistance through indirect detoxification, by enhancing the expression or activity of endogenous host detoxification pathways. This hypothesis was tested by inhibiting several key enzymes involved in xenobiotic metabolism. When we incorporated the glutathione-S-transferase (GST) inhibitor diethyl maleate (DEM) into gel baits containing indoxacarb, mortality in resistant, bait-selected cockroaches (DEA) exposed to the baits was equal regardless of whether or not the bait contained doxycycline (Fig 5A, ANOVA, n = 3, P = 0.093). These results were inconsistent with those obtained in the same strain without GST inhibition (Fig 1). In other words, inhibition of GST prevented doxycycline-mediated enhancement of indoxacarb toxicity against resistant cockroaches. However, in the context of cytochrome P450 inhibition using PBO, the effects of doxycycline were conserved, as addition of antibiotic to indoxacarb gel baits increased toxicity in the resistant, bait-selected strain relative to baits containing indoxacarb alone (Fig 5B, ANOVA, n = 3, P = 0.003). In contrast to GST inhibition, these results were consistent with the effects of doxycycline on indoxacarb toxicity in the same cockroaches not treated with inhibitors.
Perturbation of the microbiota adversely affects reproductive fitness
Insecticide resistance is propagated through a cockroach population because individuals that are able to survive insecticide exposure gain a reproductive advantage over susceptible insects in an environment where insecticide is present. At the same time, when the pressure of insecticide selection is removed, resistance is often accompanied by one or more fitness costs. Thus, our study also examined several life history parameters as a readout for the reproductive fitness of antibiotic-treated cockroaches, to determine if the microbiota is also involved in this evolutionary aspect of resistance (Fig 6). From the time of hatching from the ootheca, control cockroaches took an average of 41.5 days to reach adulthood (Fig 6A). Meanwhile, cockroaches continuously exposed to doxycycline took an average of 98.5 days, a statistically significant increase of more than 2-fold (Mantel-Cox Log rank, P<0.0001). When adults from each treatment group were weighed, additional differences were apparent (Fig 6B). Both males (0.038g vs 0.027g/insect) and females (0.06g vs 0.038g/insect) treated with antibiotic were significantly smaller in size when compared to control cockroaches (t-test, P<0.05). Antibiotic treatment further led to a marked drop in fecundity relative to controls (Fig 6C, 33.5 vs 8.6 viable eggs/ ootheca, t-test, P<0.05). Interestingly, while short-term doxycycline treatment had an effect on the gut microbiota (Fig 3), it did not rapidly remove Blattabacterium. Rather, Blattabacterium was detectable by PCR throughout the lifespan of cockroaches treated with antibiotic. Only during oogenesis did Blattabacterium appear to be fully eliminated, as it was not present in the offspring of antibiotic-treated cockroaches (Fig 6D and S4 Fig). Taken together these results indicate that the microbial community in the gut is not only directly involved in the physiological response to insecticide, but also contributes to reproductive fitness, likely in combination with Blattabacterium-mediated effects.
Discussion
In the present study, we reveal that the microbiota of the German cockroach contributes to resistance to an oxadiazine sodium channel blocker, as well as multiple aspects of the insect's life history. These results significantly advance current knowledge of the cockroach microbiota and the development of insecticide resistance from the biochemical to the evolutionary level.
At the core of our findings are two key observations: (1) microbial regulation of resistance to indoxacarb is specific to select, field-derived laboratory cockroach colonies, and (2) microbial regulation of resistance applies only to orally administered insecticide. These two stipulations indicate that resistance is not mediated by the fat body endosymbiont, Blattabacterium, which is present in all German cockroach strains and does not localize to the gut [41]. They also argue against the involvement of off-target antibiotic effects on host physiology. Instead, the ability of antibiotic treatment to increase the toxicity of indoxacarb in certain cockroach strains appears to be due to its effects on bacteria that are present in the gut of resistant, but not susceptible cockroaches. Sequencing of the gut microbiota of susceptible and resistant strains indeed revealed key differences between the two communities. Notably, the diversity of the microbiota was substantially lower in the susceptible Orlando strain that was colonized >60 years ago, suggesting that some microbial species and their respective functions may have been lost as part of the long-term laboratory adaptation process. A specific, causative agent of resistance was not pinpointed, but across our experiments there appeared to be a correlation with the presence of Deltaproteobacteria and Clostridia [42], which are known to be involved in the response to xenobiotics in the mammalian gut. Alternatively, it may be that resistance is not caused by the presence of a single taxon or species, but rather depends on the overall diversity and/or composition of the microbiota and its metabolic dynamics that are disrupted during dysbiosis caused by antibiotic treatment [43]. Testing for microbe-mediated resistance using antibiotics with varied spectra of activity in both indoxacarb-resistant strains and strains that are resistant to other insecticides may help determine the prevalence of this phenomenon as well as the bacteria responsible.
As it stands, the mechanism for microbe-mediated resistance to indoxacarb in the German cockroach has not been fully elucidated. It is possible that exposure to the insecticide may select for insects that harbor gut microbes that contribute to survival under this pressure. Similar shifts in the composition of the gut microbiota have been reported in aphids following treatment with spirotetramat [44]. Our fecal transplant experiments demonstrate that shifts in the microbial community of the gut, along with associated resistance traits, can be passed between individuals through coprophagy. These results support recent work evidencing vertical transmission of the microbiota via coprophagy in B. germanica [23,40], and expand the significance of this behavior to include a putative role in the propagation of insecticide resistance. Intriguingly, we found no evidence of direct detoxification of indoxacarb by bacteria cultured from the feces of resistant cockroaches. It should be noted, however, that these assays only examined the effects of culturable bacteria in liquid LB media. Thus, the possibility remains that some microbes that are fastidious, anaerobic, or of low abundance contribute to direct detoxification of indoxacarb in vivo [45].
A more probable mechanism involves indirect detoxification of indoxacarb by cells of the gut (i.e. detoxification through endogenous host pathways induced by the microbiota). In the German cockroach, indoxacarb undergoes cytochrome P450-dependent biotransformation that influences its toxicity [46]. In our experiments, inhibition of this process using the synergist PBO did not affect microbe-mediated resistance. On the other hand, when GST-dependent metabolism was blocked using DEM, microbe-mediated resistance was reversed, suggesting that the microbiota boosts either GST gene expression or enzymatic activity, thereby contributing to detoxification. Indoxacarb resistance has been linked to GST activity in other insects, supporting this conclusion [47]. Moreover, in the mosquito Anopheles stephensi, symbiotic bacteria from the midgut can alter GST activity [48]. Therefore, the effects of the microbiota on GST in the cockroach gut should be explored further using molecular methods to determine if specific bacterial metabolites play a role in its function.
In addition to affecting insecticide resistance, we show that antibiotic treatment changes cockroach development and reproduction. However, because effects on life history are conserved in the susceptible cockroach strain, the microbes involved in this biology are almost certainly different from those that mediate insecticide resistance. Whether the effects of doxycycline on Blattabacterium, which may provide nutritional benefits relevant to these traits [26,49], contribute to the phenotypes we observed is unclear given the unusual kinetics of Blattabacterium in response to antibiotic treatment. That is, while we and others [23] show that Blattabacterium endosymbionts do not appear to be killed by antibiotic exposure except during their transit to the developing oocyte, it is difficult to rule out sub-lethal antibiotic effects on these microbes that may impact their metabolic status. It is highly probable that the microbial community in the gut is involved in regulating metabolic rate, as in the American cockroach [50]. Because this community is efficiently targeted by doxycycline treatment, it is a more likely contributor to life history. Although the pathways by which the microbiota affects life history traits remain unknown, our studies lay a foundation for future work in this area. Of additional interest is the potential involvement of endosymbionts besides Blattabacterium, such as Wolbachia, which have been detected in a small percentage of German cockroach populations but remain understudied in this insect [51]. Regardless of the particular microbes involved, the above findings have important implications. By affecting reproduction and development, the microbiota could influence the growth of resistant populations under insecticide selection pressure and also explain the fitness costs commonly associated with resistance outside of this context [52]. That is, under exposure to insecticide, the microbial community in the gut may be altered. At the individual level, we show that these shifts can contribute to physiological resistance. Simultaneously, alterations in the microbiota may, depending on environmental pressure, increase or decrease reproductive fitness, leading to changes at the population level.
The relevance of our results to the control of resistant cockroach infestations outside the laboratory ultimately remains to be determined, as the impact of microbe-mediated resistance is likely to be dependent on interactions with other more prevalent host mechanisms of resistance. Moreover, the microbiome of field-collected cockroaches was found to be more variable than that of insects raised in the lab [40], and this variation may affect microbe-mediated resistance. No less, if the phenomenon presented here is confirmed in field populations, the approach of targeting commensal bacteria shows strong promise as a tool for integration into cockroach IPM programs. While our results suggest that this strategy is unlikely to be effective in boosting the toxicity of contact insecticides, antibacterial compounds can be easily incorporated into readily consumed bait products and their delivery through this route would require minimal effort but could substantially improve control with ingested active ingredients. Notably, a two-pronged approach that targets resistant populations by simultaneously enhancing | 7,358.4 | 2018-12-12T00:00:00.000 | [
"Biology"
] |
Determinants of adoption of improved maize varieties in Osun State, Nigeria
The paper examines the level of awareness and identifies the various improved maize varieties (IMVs) cultivated in Osun State, Nigeria. It also analyzes the socioeconomic factors influencing the adoption and intensity of use of improved maize varieties. A multistage sampling design was used to select 360 farming households, which were interviewed using a structured questionnaire. Data collected included demographic and socioeconomic characteristics of respondents such as age, household size, gender, farm size and other improved maize production related activities. Descriptive statistics and the double hurdle model were used as analytical tools. Results showed that the level of awareness of improved maize varieties was 97.8%. About 91.2% of the aware households were adopters while 8.8% were non-adopters. The double hurdle model estimates showed that age (t=4.50, p<0.05), level of education (t=3.33, p<0.05), farming experience (t=4.33, p<0.05), household size (t=2.18, p<0.05), farm size (t=4.02, p<0.05), and household's distance to market (t=2.26, p<0.05) were significant determinants of adoption of IMVs, while age of respondents (t=2.31, p<0.05), level of education (t=2.27, p<0.05), household size (t=2.79, p<0.05), farm size (t=2.51, p<0.05), frequency of contact with extension agents (t=10.46, p<0.05), off-farm income (t=2.19, p<0.05), and membership in association (t=2.46, p<0.05) determined the use intensity of improved maize varieties. The study concluded that policies that increase farmers' level of education and the effectiveness of extension service contact will facilitate adoption and use intensity of improved maize varieties.
INTRODUCTION
For most developing countries, agriculture provides a leading source of employment and contributes large fractions of their national income. In addition, provision of adequate food for an increasing population, supplying adequate raw materials to a growing industrial sector and providing markets for its products have been identified as part of the major roles of agriculture in the economy of Nigeria. However, how well agriculture performs these roles towards economic development depends largely on agricultural productivity. High-yielding seed varieties that are fertilizer responsive, tolerant to drought and resistant to pests are one of the key elements on which increased agricultural productivity per unit of land rests (Idachaba, 1994). As noted by Duflo et al. (2006), rapid population growth means that countries in Africa can no longer be viewed as a land-abundant region where food crop supply could be increased by expanding the land used in agriculture. Demographic and environmental pressures have made arable land scarce and increasingly marginal for food production in Africa.
Maize is one of the major cereal crops grown and consumed across all agro-ecological zones of Nigeria. It currently accounts for approximately 20% of domestic food production in West and Central Africa. It has also achieved the highest growth rate of the major crops since the 1970s (Kamara et al., 2006). Despite the high yield potential of maize, its production faces numerous constraints. Studies (Babatunde et al., 2008; Kudi et al., 2011) have shown that average maize yield is still low compared to its potential yield. Thus, not enough maize has been produced in Nigeria to meet both the food and industrial needs of the country. The International Institute of Tropical Agriculture (IITA) has developed extra-early-maturing, drought-tolerant, disease-tolerant and high-yielding maize varieties that are adapted for growth in West Africa. All these positive attributes of improved maize varieties can reduce chronic food shortages, stabilize rural income and lessen the risk of farming.
The importance of farmers' adoption of new agricultural technology has long been of interest to agricultural economists, extensionists and rural sociologists. It is believed that an effective way to increase productivity is broad-based adoption of new farming technologies (Minten and Barret, 2008). This hypothesis is supported by the substantial improvement in the productivity of cereal crops in the mid-1990s following extensive promotion of improved technologies by Sasakawa Global 2000, an international NGO working to improve the productivity of smallholder agriculture (Tura et al., 2009). Adoption of agricultural technologies refers to the decision to apply a technology and to continue with its use. The adoption decision is divided into three phases: acceptance, actual adoption, and continued use. It is generally a multistage process undertaken most often sequentially and influenced by a wide range of economic, social, physical and technical aspects of farming (Paudel and Thapa, 2004; De Graff et al., 2005). Low productivity levels have been attributed to the low yield potential of seed cultivars, susceptibility of seeds to biotic and abiotic stress, low adoption rates and other recommended management practices (Asnake et al., 2005).
The objective of this paper is to assess the level of technology (IMV) adoption, identify the IMVs cultivated in the study location, and analyze the socioeconomic determinants of adoption and intensity of use of IMVs. By understanding farming households' adoption patterns for improved maize varieties (IMVs), extension programmes can be better designed. Hence, the outcome of this study will enable agricultural policy makers to design policies that address the factors determining the adoption of IMVs.
STUDY AREA AND SAMPLING TECHNIQUE
The study was conducted in Osun State, in the southwestern part of Nigeria, which lies between latitudes 05° 58'N and 08° 07'N and longitudes 04° 00'E and 05° 05'E. It covers a total land area of approximately 14,875 km², with a total population of 3,423,535 (1,740,619 male and 1,682,916 female) and a population density of 238.1/km². The state has three Agro Ecological Zones (AEZs), namely the rain forest (Ife/Ijesa), derived savannah (Osogbo), and savannah (Iwo) zones.
The climate is tropical and characterized by a bi-modal rainfall pattern, with annual rainfall ranging from 800 mm in the derived savannah to 1500 mm in the rain forest, while the mean annual temperature varies from 21.1 to 31.1°C (OSSG, 2004). The state's soil type is the highly ferruginous tropical red soil and the vegetation is mostly rain forest.
The people of the State are mostly farmers, traders and artisans, with the larger percentage being farmers. The farmers cultivate permanent crops such as cocoa (Theobroma cacao), kolanut (Cola nitida and C. acuminata), plantain and banana (Musa spp.), oil palm (Elaeis guineensis) and citrus (Citrus spp.). They also cultivate arable crops, especially maize (Zea mays), with different varieties widely cultivated. Other arable crops cultivated include yam (Dioscorea spp.), cassava (Manihot esculenta), rice (Oryza sativa) and cocoyam (Colocasia spp.).
A multi-stage random sampling technique was used to select a sample of 360 maize farmers. The first stage involved purposive selection of four Local Government Areas (LGAs) noted for maize production in each of the three agro-ecological zones (AEZs) in Osun State, based on the classification of the state's Agricultural Development Programme (ADP). The second stage involved purposive selection of three high maize-producing villages in each of the LGAs. In the third stage, stratified random sampling was used to categorize maize farmers into adopters and non-adopters of improved maize varieties in each village. The fourth stage involved simple random selection of five maize farmers in each of the two categories, making a total of 360 respondents for the study.
Data collected include socioeconomic variables such as age, sex, farming experience, level of education, frequency of contact with extension services, credit availability, market access and farm size. The data also included the level of awareness and the IMVs cultivated in the study area.
Data analysis
Descriptive statistics were used to assess the level of awareness and adoption of IMVs as well as to identify the IMVs cultivated, while the double hurdle model was used to analyze the socioeconomic factors determining adoption and intensity of use of improved maize varieties.
Specification of double-hurdle model
The underlying assumption in the double-hurdle approach is that a farmer makes two decisions with regard to his willingness to grow improved maize. The first decision is whether to allocate a positive amount of land to improved maize varieties at all, while the second decision is about the share of land to allocate, conditional on the first decision.
Originally proposed by Cragg (1971), the double-hurdle model is a parametric generalization of the Tobit model, in which two separate stochastic processes determine the decision to adopt and the intensity of adoption of the technology (Greene, 2000). The double-hurdle model has an adoption (D) equation:

$D_i^* = \alpha' Z_i + \mu_i$, with $D_i = 1$ if $D_i^* > 0$ and $D_i = 0$ otherwise, (1)

where $D_i^*$ is a latent variable, $D_i$ is the observed variable representing the farmer's adoption decision, which takes the value 1 if the farmer adopts improved maize varieties and zero otherwise, $Z$ is a vector of explanatory variables hypothesized to influence adoption, $\alpha$ is a vector of parameters and $\mu_i$ is the error term. The level of adoption (Y) has the following equation:

$Y_i^* = \beta' X_i + \nu_i$, with $Y_i = Y_i^*$ if $Y_i^* > 0$ and $D_i = 1$, and $Y_i = 0$ otherwise, (2)

where $Y_i$ is the proportion of land area planted with improved maize varieties (signifying the extent or intensity of adoption), $Y_i^*$ is the unobserved or latent variable for the intensity of adoption, $X$ is a vector of explanatory variables hypothesized to influence the intensity of use of improved maize varieties, $\beta$ is a vector of parameters to be estimated and $\nu_i$ is the error term. The error terms are distributed as $\mu_i \sim N(0, 1)$ and $\nu_i \sim N(0, \sigma^2)$ and are assumed to be independent of each other. The log-likelihood function for the double-hurdle model following Greene (2000) is:

$\log L = \sum_{Y_i = 0} \ln\left[1 - \Phi(\alpha' Z_i)\,\Phi\!\left(\tfrac{\beta' X_i}{\sigma}\right)\right] + \sum_{Y_i > 0} \ln\left[\Phi(\alpha' Z_i)\,\tfrac{1}{\sigma}\,\phi\!\left(\tfrac{Y_i - \beta' X_i}{\sigma}\right)\right]$. (3)

In this case, the model relating to adoption was specified with the explanatory variables defined as: X1 = Education of the household head (years), X2 = Age of household head (years), X3 = Farming experience (years), X4 = Household size (number), X5 = Farm size (ha), X6 = Off-farm income (N), X7 = Access to credit (1 if yes, 0 otherwise), X8 = Frequency of extension service contacts (frequency), X9 = Distance to market (km), X10 = Membership in association (1 if yes, 0 otherwise), X11 = Seed availability (1 if adequate, 0 otherwise), X12 = Land security (1 if secured, 0 otherwise), while the dependent variable Y1 = adoption of improved maize varieties (1 if adopted, 0 otherwise) and the βs are coefficients of parameters to be estimated.
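For readers who want to see how such a two-stage specification is typically estimated, the sketch below is not taken from the study; it is a minimal, hypothetical example (simulated data, made-up covariates) of Cragg's independent double hurdle, fitted as a probit for the adoption decision plus a zero-truncated normal regression for use intensity, assuming the statsmodels and scipy libraries are available.

```python
import numpy as np
import statsmodels.api as sm
from scipy import optimize, stats

# Hypothetical data standing in for the survey variables (age, education, farm size, ...).
rng = np.random.default_rng(42)
n = 360
Z = sm.add_constant(rng.normal(size=(n, 3)))            # adoption covariates (hurdle 1)
X = sm.add_constant(rng.normal(size=(n, 3)))            # intensity covariates (hurdle 2)
adopt = (Z @ np.array([0.3, 0.5, -0.2, 0.4]) + rng.normal(size=n) > 0).astype(int)
share = np.clip(X @ np.array([0.4, 0.2, 0.1, 0.3]) + 0.3 * rng.normal(size=n), 0, None) * adopt

# Hurdle 1: probit model of the decision to adopt improved varieties.
probit = sm.Probit(adopt, Z).fit(disp=False)

# Hurdle 2: zero-truncated normal regression of use intensity, adopters only.
y, Xp = share[share > 0], X[share > 0]

def neg_loglik(params):
    beta, sigma = params[:-1], np.exp(params[-1])        # log-sigma keeps sigma positive
    mu = Xp @ beta
    ll = stats.norm.logpdf((y - mu) / sigma) - np.log(sigma) - stats.norm.logcdf(mu / sigma)
    return -ll.sum()                                      # truncation at zero handled by logcdf term

start = np.append(np.linalg.lstsq(Xp, y, rcond=None)[0], 0.0)
trunc = optimize.minimize(neg_loglik, start, method="BFGS")

print("probit (adoption):", np.round(probit.params, 3))
print("truncated regression (intensity):", np.round(trunc.x[:-1], 3))
```

In practice the two hurdles would use the survey covariates listed above rather than simulated ones; the separation into a probit and a truncated regression mirrors how the estimates are reported in the tables that follow.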
Level of awareness and adoption of IMVs
Descriptive statistics results showed that the majority (97.8%) of sampled households were aware of improved maize varieties. Of the aware households, the majority (91.2%) were adopters while 8.8% were non-adopters (Table 1).
Improved maize varieties cultivated in Osun state
Five improved maize varieties were commonly cultivated in the study area. DMR-ESR-Y was the most widely adopted (61.9%) while TZMSR-W was the least adopted (3.1%), as shown in Table 2.
Socioeconomic characteristics of respondents
The mean age of the total respondents was 58.6±13.3 years, an indication that the respondents were still within their active years. The mean ages of the adopters and non-adopters were 52.1±9.4 years and 54.2±10.8 years respectively. The mean age difference between adopters and non-adopters was observed to be significant at the 5% level. This indicated that age influences the adoption of improved maize varieties.
Respondents comprised both male and female household heads. The majority (83.6%) of households were male-headed while 16.4% were female-headed. The proportions of male-headed households among adopters and non-adopters were 84.7 and 74.4% respectively. This shows that male-headed households outnumbered female-headed households in each category of respondents. This could be attributed to various reasons related to the economic and social position of female-headed households, including labour shortages and limited access to required information and inputs.
Education is an important determinant of adoption decisions. It helps farmers to understand and interpret the information coming to them from any direction (Bekele and Mekonnen, 2010). To a greater degree, education determines farmers' ability to read and/or write. As shown in Table 3, 36.7% of the total respondents had no formal education, while 21.9, 28.6 and 12.8% completed primary, secondary and tertiary education respectively. If completion of primary school is taken as a measure of the ability to read and/or write, the finding revealed that 63.3% of the total respondents could read and/or write, while 36.7% of them could not. This indicated that the level of literacy is high; this may be the reason for the high rate of adoption shown in Table 1. The mean number of years spent acquiring formal education was 7.1±6.3 years for the total respondents, while it was about 6.3±2.5 years and 5.6±2.2 years for adopters and non-adopters respectively. The mean difference in years spent in school between adopters and non-adopters was significant at the 5% level. This indicated that there is a relationship between education and adoption of improved maize varieties in the study area. Membership of respondents in different farmers' associations is assumed to influence the adoption decisions of farm households. It gives farmers more access to inputs and information and better interpretation of available information related to new technology. More than half of the respondents (52.8%) were members of one farmers' association or another, while 47.2% were not. Further, within each category, a greater percentage of adopters (54.2%) were members of farmers' associations while a greater percentage of non-adopters (64.1%) were not.
The mean farm size of the total respondents was 3.2±2.2 ha. The mean farm size for the adopters' category (2.8±2.1 ha) was higher than that for non-adopters (2.6±2.0 ha). The mean difference in total farm size between the adopter and non-adopter categories was found to be significant at the 5% level. This shows that farm size has a relationship with adoption of improved maize varieties in the study area. Farming experience is likely to have a range of influences on adoption. A more experienced farmer appears to be more knowledgeable and may have a lower level of uncertainty about new technologies. Table 3 shows that the mean years of farming experience was 31.9±13.8 years for the total respondents. The adopter and non-adopter categories of respondents had means of 30.5±13.4 and 26.9±12.3 years respectively. The mean difference in farming experience between adopters and non-adopters was found to be significant at the 5% level. This indicates that farming experience influences adoption of improved maize varieties in the study area. Some respondents had other sources of income besides farming, which implied that they engaged in off-farm activities. Of the total respondents, 54.7% got their income solely from farming. Others combined petty trading (19.4%), artisanship (14.7%) and civil service (11.2%) with farming. Further, within each category, the majority of adopters (57.1%) got their income solely from farming. For the non-adopters, the largest group (43.6%) earned their income from farming and petty trading.
Determinants of adoption of improved maize varieties
The double hurdle model provides the results of the probit model for adoption and the truncated regression model for the use intensity of improved maize varieties in the study area. The estimated coefficients of the probit model and the truncated regression model are presented in Tables 4 and 5 respectively.
Factors determining adoption of improved maize varieties
Age of the household head was found to be a statistically significant variable at the 1% level, with a negative relationship. The negative relationship implies that age reduces the probability of adoption. A unit increase in the age of the respondent reduces the probability of adoption by 0.9%. This implies that the older the respondent, the lower the probability of adoption. This finding agrees with previous studies on technology adoption such as Bamire et al. (2002) and Akinola et al. (2008).
The coefficient of the level of formal education of the household head was positive and statistically significant at the 1% level. Education, which reflects the ability of respondents to read and/or write, increased the probability of adoption in the study area. Educated farmers are more analytical and more easily observe the obvious advantages of new technologies. The positive and significant influence implies that the higher the level of formal education, the higher the probability of adoption of improved maize varieties. This agrees with previous studies on technology adoption such as Lemchi et al. (2005) and Nnadi and Akwiwu (2008).
The coefficient of farming experience was positive and statistically significant at the 1% level. This implies that the more years of experience in farming, the higher the likelihood of adopting improved maize varieties. Increased years of farming experience furnish farmers with more knowledge that increases their rationality in the use of innovations. This is in consonance with Nnadi and Amaechi (2007), who explained increased years of farming experience as a valuable asset in adoption decision making. However, this contradicted Bamire et al. (2002), who indicated that the older the farmer, the less likely he is to adopt new ideas as he gains more confidence in his accustomed ways and methods, because experience affects an individual's mental attitude to new ideas differently and influences adoption in several ways.
Household size was statistically significant and positively related to the probability of adoption at the 5% level. The direct relationship implies that a large household size predisposes adoption of improved maize varieties. This may be due to the fact that a large household size is assumed to be an indicator of labour availability and that such a household would like to improve its food security. This is in agreement with the study conducted by Nnadi and Akwiwu (2006).
The total farm size of the respondent was positive and had a statistically significant influence at the 1% level on the adoption of improved maize varieties. Nowak (1987) argues that larger farm owners have more flexibility in their decision making, greater access to discretionary resources, and more opportunities to use new practices on a trial basis, with more ability to deal with risk. This could be explained by the fact that a large farm size presupposes large farm assets. Thus, farmers who had more assets were more disposed to adopt new technologies than those who had less. A similar result was reported by Nkonya et al. (1997) and Aklilu and De Graaf (2007).
The distance of the farmers' village to the market center was found to be statistically significant with a negative relationship at the 5% level. The negative relationship implies that the farther the distance between the farmers' village and the market center, the lower the probability of adopting improved maize varieties. This may be due to the fact that relative proximity to the market reduces marketing costs. Therefore, the longer distance to market, coupled with the better yield associated with improved maize and the heavy associated marketing costs, might be responsible for the lower probability of adoption. This result is consistent with other studies such as Tesfaye and Alemu (2001) and Kebede (2006).
With respect to the other variables, none was statistically significant. Off-farm income, membership in association, land security and improved seed availability were positive as expected. However, access to credit and frequency of extension services paradoxically had a negative influence on adoption of improved maize varieties in the study area.
Factors determining use intensity of improved maize varieties
The result of the truncated regression model for the use intensity of improved maize varieties upon adoption showed that the age of the household head was statistically significant at the 5% level but had a negative influence on the hectares of land cultivated (Table 5). This implies that the older the respondent, the smaller the land area planted with improved maize varieties. An increase in the age of the respondent reduced the use intensity of improved maize varieties. This agreed with previous studies on technology adoption, such as Bamire et al. (2002) and Akinola et al. (2007).
The level of formal education was positive and statistically significant at the 5% level. This outcome was expected and conforms to the study conducted by Bekele and Mekonnen (2010). An increase in the level of education of the respondents increased the intensity of use of improved maize varieties. The more educated a farmer, the better able he is to diagnose and observe the benefits of new technologies; hence, more hectares of land were put into cultivation of improved maize varieties.
Household size, which is an indication of labour availability, had a positive influence on the intensity of use of improved maize varieties and was significant at the 1% level. As household size increased, the area of land in hectares planted with improved maize varieties increased. An increase in the household size of the respondents increased the use intensity of improved maize varieties. This outcome was in line with expectation, as the sign could be either positive or negative (Zeller et al., 1998). The larger the household, the more the pressure to ensure food security and hence the cultivation of more hectares of land with improved maize varieties.
The coefficient of the farm size of the respondent was positive and statistically significant at the 5% level. As expected, the larger the farm size, the larger the area planted with improved maize varieties. An increase in the farm size of the respondent increased the area planted with improved maize varieties. This agreed with Gebremedhin and Swinton (2003) and Kabubo-Mariaura et al. (2010). Frequency of extension service contact was a positive and statistically significant variable in determining intensity of use at the 1% level. Households that had regular contacts with extension agents are more enlightened through advisory services and therefore better appreciate the benefits of a new technology. An increase in the frequency of contact with extension agents increased the intensity of use of improved varieties. This finding agrees with Knowler and Bradshaw (2007).
The coefficient of off-farm income was positive and statistically significant in determining the use intensity of improved maize varieties at the 1% level. This implies that the more income realized from off-farm engagements, the more hectares of land cultivated with improved maize varieties. An increase in income from off-farm engagements increased the intensity of use of improved maize. This may be due to the fact that more money is available to acquire more hectares of land, more seeds and associated inputs. The positive sign was in line with a priori expectation and agreed with Lapar and Pandey (1999).
Membership in association was statistically significant and had a positive relationship at the 5% level. The more respondents joined farmers' associations, the more hectares of land were cultivated with improved maize varieties. This implies that information related to the procurement and benefits of improved maize varieties was discussed and disseminated at farmers' meetings. An increase in the number of respondents in farmers' associations increased the use intensity of improved maize varieties. This finding is consistent with expectation and agrees with Akinola et al. (2008).
However, farming experience, access to credit and land security had a positive influence on the intensity of use of improved maize varieties as anticipated but were statistically insignificant. On the other hand, improved seed availability and the distance of the farmers' village to market were also statistically insignificant, with a negative relationship.
Conclusion
The study revealed that the level of awareness and adoption of improved maize varieties was high in the study area and that IMV adoption was profitable. The study concluded that the adoption decision for improved maize varieties was driven by a host of socioeconomic factors such as age, level of education, farm size, farming experience, household size and distance to the nearest market center, while the key socioeconomic factors influencing use intensity included age, level of education, farm size, household size, frequency of contact with extension agents, off-farm income and membership in associations. Policies should target strengthening maize farmers' access to improved education and frequent extension service contact to aid the acceptance and dissemination of agricultural technology information, which has the potential to increase the rate of adoption and intensity of use of improved maize varieties.
Table 1. Awareness and adoption of improved maize varieties.
Table 3. Socioeconomic characteristics of respondents.
Table 4. Probit estimates of socioeconomic factors determining adoption.
Table 5. Truncated regression estimates for use intensity of improved maize varieties. | 5,350.8 | 2015-03-31T00:00:00.000 | [
"Agricultural and Food Sciences",
"Economics"
] |
An Online Multidomain Validation Method for Wireless Sensor Nodes
If wireless sensor networks are deployed in electric power plants to provide equipment health data, it is essential that the data be accurate and reliable. Nodes of a wireless sensor network differ from wired test units, for they need to be distributed and are subject to several constraints. Online validation methods are therefore important to ensure that network data are reliable in important and safety-related fields. It has been proved that a calculation (such as an FFT) can be validated through second-order functionals (e.g., energy) in the time and frequency domains, and based on performing time-frequency signal analysis a different number of times, the principle of the test signal measurement validation method is introduced. Different online validation methods are presented for steady-state signals, metastable signals, and nonstationary signals. This method is shown, in theory and in experiment, to be highly reliable and to have low uncertainty.
The Challenge of Online Validation Method for Wireless Sensor Nodes
Wireless sensor networks can be applied in electric power plants (EPPs) to provide equipment health data, thanks to their advantages of being easy to install, cost effective, self-healing, having built-in redundancy, being noninvasive, and so on [1][2][3][4][5][6][7]. The difference between nodes used in a WSN and test units used in wired systems is shown in Table 1.
In a WSN, the reliability and accuracy of the test data are the main concerns [8]. Validation is an effective way to improve these properties.
Validation is defined as the process of checking whether a node of a WSN satisfies its specifications. Online validation typically focuses on functional and nonfunctional software properties that need to be ensured during execution under different environmental and other dynamic conditions [9,10]. In the meantime, physical models such as dynamic stochastic process signals, complicated random process signals, and complicated noise arise. Existing online validation or self-test methods typically focus on the following fields. The first is software-based self-test (SBST) of embedded processors and non-core components, which uses functional and structural test methods to cover static faults, mildly dynamic faults and so on [11][12][13][14][15][16]. The second is automated sensor self-validation algorithms, which use approximate reasoning techniques such as fuzzy logic to process sensor measurements in harsh industrial environments (normally slowly changing signals, such as temperature) and establish high confidence in them before the test data are used [17][18][19]. The third is redundancy-based online covalidation, which is one of the principal ways to counter the negative effects of random failures; however, as systems age, the likelihood of simultaneous failures of redundant safety systems becomes more of a concern [20][21][22][23][24][25][26].
In other related research, traditional hypothesis testing can be used by establishing confidence bounds and critical values for PIT [27,28]. Distributed control algorithms with all-to-all and limited communications based on source seeking can be applied in order to use communication resources effectively [29].
So the challenges for online validation methods include distributed dynamic data validation and the validation of nodes without redundancy. Currently, the Multidomain (MD) method is often used in online learning, adaptation, and sampling, for example through confidence-weighted parameter combination, classifiers, and domain-based representations with the MD sampler [30][31][32][33][34]. The main MD methods are FEDA (frustratingly easy domain adaptation), MDR (multidomain regularization), MTRL (multitask relationship learning), and so on. Unfortunately, these methods are not suitable for validating distributed sensor test data.
The main contributions of this paper are as follows.
(i) An MD method validates the correctness and trend prediction of test data, so that the data meet the specified test requirements while also meeting the constraints of distributed sensor nodes. The proposed approach can handle the dynamic data acquired by distributed wireless sensor nodes.
(ii) The method is compared with concurrent time-domain, frequency-domain, and time-frequency filtering in a harsh, noisy environment, and a prototype of the different-number-of-times time- and frequency-domain signal analysis is discussed.
(iii) Simulation results demonstrate that the method can meet accuracy and reliability demands, whereas the traditional filtering and validation used in wireless sensor nodes is typically unable to do so.
The remainder of this paper is organized as follows. In Section 2, the related background information and some assumptions of wireless nodes are given. Section 3 discusses the principle, formula, and analysis of MD method. The simulation results are presented in Section 4, and the conclusion is given in Section 5.
Construction Characteristics of Wireless Sensor Network Nodes
Here the construction characteristics of wireless sensor nodes are analyzed and three assumptions are given. As shown in Figure 1, there are two different construction forms of a wireless sensor network system.
Construction I. Like System 2, there are three different units; every unit has no or very little impact on the other parts and can do its task completely and independently.
Construction II. Like System 1, every unit has some impact on other parts, or every unit needs the other units to run correctly in order to accomplish its task.
Definition 1. "Dependence of system" means if parts A, B, C have task A, task B, task C, the degree that each one fulfills its task should depend on other tasks running or undertaking. ep ([0, 1]) stands for dependence of system (1) In a serial system, ep is almost 1; the reliability (MTBF ) is the minimum part's MTBF of system: And as component of system increases, the failure rate and ep increase. is failure rate, a positive value, so here the simplest model be considered as In this assumption, if dependence of system increases, the reliability of system will decrease. So distributed function completeness is needed.
Assumption 2. Every node in the system should have distributed function completeness (like System 2). As for the energy issue, data compression is an important means to decrease energy consumption and meanwhile improve the throughput of the network (as Figure 2).
For example, if the RF power and the throughput of the received data are not considered, then data = η · Rawdata, where η is the compression rate, P is the power, and TH is the throughput. The other two assumptions are as follows. The second assumption for a wireless vibration network node is that it should have a high data compression rate.
The third assumption for a wireless vibration network node concerns the other aspect, namely how tasks are arranged among the nodes: every node should have meaningful output, a real-time mark, and adequate precision.
For instance, consider a vibration sensor example. With those assumptions, the wireless vibration sensor nodes should acquire data, process data, and output data in a meaningful format. The simplest way to do this is to follow ISO 10816 and ISO 7919, where every node calculates the vibration density (given the constraints of sensor nodes in Table 1), that is,

v_rms = sqrt((1/N) Σ_{n=1}^{N} v(n)^2). (5)
Online Industrial Automated Sensor Self-Validation.
Wireless sensor networks deployed in EPPs can continuously monitor and assess the health of EPP structures, systems, and components (SSC).
It is well known that noises of sensor measurements contain (a) harmonic noise; (b) sudden large deviations (often caused by electromagnetic interference or external disturbances); (c) component noise under environment stressors; (d) measurement noise of improper installation and methods; (e) unpredicted noise.
For wired online monitoring systems of active components in EPPs (in our example, vibration sensor nodes are used), data analysis is based on pattern recognition for anomaly detection and so on.
(1) Sensor Self-Validation Algorithm. In an automated sensor self-validation system for cupola furnaces, noise and invalid data are removed from the digital outputs by a median filter, and every sensor then obtains a set of parameters that represent the temperature reading, its rate of change, and its variance. Secondly, all these data are sent to a fuzzy logic system for self-validation, which produces an output representing the self-confidence in that sensor's parameter set.
In this method, the temperature, the rate of change of temperature, and the variance of the change in temperature are the three inputs of the fuzzy system. But they are not suitable for vibration testing, for the vibration signal is dynamic and its noise is more complicated [35], and the wireless vibration sensor nodes have constraints as listed in Table 1.
(2) Online Multidomain Learning, Adaptation, and Sampling. Multi-domain learning combines characteristics of both multi-task learning and domain adaptation, drawing from both areas.
The multi-domain sampler can construct domain-based representations for an arbitrary multimodal distribution. It can be applied to a wide range of Bayesian inference problems and is particularly powerful in tackling problems with complicated posterior distributions.
Given its learning and sampling ability, it may be expected that this method can be effective in online validation.
Calculation Validation and Measurement Validation.
Validation is often characterized as using different methods to compute the same data in order to obtain high confidence.
From this, we here define calculation validation and measurement validation in online testing.
Definition 3 (calculation validation). A calculation process P1 stands for using a function f1 to calculate the input data data_in and obtain the output data data_out by one method, that is, data_out = f1(data_in). Another calculation process P2 stands for using a function f2 to calculate the input data data_in and obtain the output data data_out, or to recover the input data data_in from data_out. If the processing can succeed only when the calculation of P1 is correct in every step, then the process P2 is a calculation validation process.
Definition 4 (measurement validation). Because of test error, application demands, and so on, there are acceptance limits for calculation validation in measurement applications.
When the input data is converted to a signal, there is an error between the output data of process P1 and the output data of process P2. When the error meets the system's demands (or is less than the measurement acceptance limits), then process P2 is a measurement validation process.
An extension of measurement validation is to use different, independent ways of processing the data in order to obtain high precision and reliability.
The difference between validation and data fusion is that data fusion has multiple different data sources, whereas validation has only a single data source.
Online Multidomain Validation Method.
Here we refer to calculation validation and measurement validation carried out during software execution.
In this section, the theory or formula of online multidomain validation is presented.
(1) Functional Analysis. In signal space, the input signal will be denoted by x(n), n ∈ Z. The independent variable n is typically interpreted as the sample index. The output data will be denoted by y(k), k ∈ Z. The independent variable k is typically interpreted as the output index. As discussed in Section 2, the output signal or data y(k) has significant physical meaning.
Then finding the function of the validation process is a functional problem. In signal space (real physical L2 waveforms, viewed as vectors in the inner product space known as signal space), all input signals x(n) form a signal set S_in = {x1(n), x2(n), ...}; all output data form an output data set S_out = {y1(k), y2(k), ...}. All the maps from S_in to S_out form the functional set FS (e.g., the sampling process is a linear functional in L2 space).
Different maps from S_in to S_out may come from different discrete representations of the signals. If an orthogonal subspace V is spanned by {φ1, φ2, φ3, ..., φN} and the signal x ∈ V, then

x = Σ_{i=1}^{N} c_i φ_i.

The inner product and orthogonality properties then give

c_i = ⟨x, φ_i⟩ / ⟨φ_i, φ_i⟩.

In the frequency domain, the Fourier series forms an orthogonal basis.
As the Fourier transform is a unitary transformation, a second-order functional in the time domain can be converted to the corresponding second-order functional in the frequency domain, and the same holds for the inverse conversion.
Because the energy function is a second-order functional, calculation validation can be done from two different domains using this relationship.
The important relationships between these are the following.
(i) Parseval's theorem usually refers to the result that the Fourier transform is unitary; loosely, the sum (or integral) of the square of a function is equal to the sum (or integral) of the square of its transform:

Σ_n |x(n)|^2 = (1/N) Σ_k |X(k)|^2. (10)

(ii) The uncertainty principle limits the simultaneous time-frequency resolution one can achieve without interference, for a real signal:

Δf · Δt ≥ 1/(4π), (11)

where Δf is a (suitably chosen) measure of bandwidth (in hertz) and Δt is a (suitably chosen) measure of time duration (in seconds).
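As a concrete illustration of the calculation-validation idea behind (10), the following sketch (not from the paper; the tolerance value stands in for an assumed acceptance limit) compares the time-domain and frequency-domain energies of a block of samples after an FFT.

```python
import numpy as np

def validate_fft(x, rel_tol=1e-9):
    """Calculation validation of an FFT via Parseval's theorem (cf. formula (10))."""
    X = np.fft.fft(x)
    e_time = np.sum(np.abs(x) ** 2)            # energy computed in the time domain
    e_freq = np.sum(np.abs(X) ** 2) / len(x)   # same energy from the spectrum (1/N scaling)
    return abs(e_time - e_freq) <= rel_tol * e_time

x = np.random.default_rng(0).normal(size=1024)
print(validate_fft(x))   # True: the two energies agree within the acceptance limit
```

If a soft error or coding bug corrupts the FFT output on the node, the two energies diverge and the check fails, which is exactly the kind of independent cross-check the calculation-validation definition calls for.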
Although formula (5) is a second-order functional, unlike formulas (10) and (11) the definition of vibration density (measurement validation) is not an intrinsic property (i.e., it is not like energy and uncertainty, which are the same in different domains). This is where stochastic process signals and the different-number-of-times time- and frequency-domain signal analysis come in.
(2) Different Number of Times Time and Frequency Domain Signal Analysis. In an ergodic random process, some statistical values of the signal calculated over the space domain are the same as those calculated over the time domain.
To do measurement validation, we should first understand the properties of the test signal. Signals can be divided into steady-state signals, metastable signals, nonstationary signals, and so on.
To compare the simulated signal with the real test signal, two figures were put together (as Figure 3); the left is the simulated signal and the right is the real test signal. They are very similar. A simulated vibration random process signal can then be obtained by adding a random phase or random noise amplitude, and so on; that is, the signal takes the form of (12), and this signal may be more precise than the first simulated signal.
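For illustration only, a signal of the kind sketched above (harmonic components with random phases plus broadband noise) can be simulated as follows; the frequencies, amplitudes and noise level are arbitrary assumptions, not values from the paper.

```python
import numpy as np

def simulate_vibration(fs=2048, duration=1.0,
                       freqs=(50.0, 120.0, 300.0), amps=(1.0, 0.5, 0.2),
                       noise_std=0.1, seed=None):
    """Harmonic components with random phases plus Gaussian noise (cf. formula (12))."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * duration)) / fs
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))   # random phase per component
    x = sum(a * np.sin(2.0 * np.pi * f * t + p)
            for f, a, p in zip(freqs, amps, phases))
    return t, x + rng.normal(0.0, noise_std, size=t.size)     # random noise amplitude

t, x = simulate_vibration(seed=1)
```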
Even for a simulated signal such as (12), the calculation process and the validation process change at different times (because the random noise changes); in other words, at different measurement times the two processes are both different.
When doing validation (or seeking high precision), the steady value of the output data set (which may be the eigenvalue vectors of the input signal set) should be found. Then the space-, time- and frequency-domain properties of the signal should be considered, as for the signal in (13). Considering the space properties of the signal, we assume that the output data from the time domain and from the frequency domain have the same distribution (because of the unitary transformation and the duality of the bases).
(1) In a Single Time or Frequency Domain.
(i) If it is a steady-state signal, owing to its random error properties, the simplest way to obtain a steady value is to average the output data.
(ii) If it is a metastable signal, owing to its stochastic process properties, a powerful way to get a steady value is to use windowed (continuously sliding) averaging of the output data.
(iii) This arithmetic is not suitable for non-stationary signals.
(2) In the Time-Frequency Domain. For non-stationary signals, a hidden Markov chain or the KL distance should be considered.
(3) An Online Multidomain Validation Method for Vibration Sensor Nodes
(i) When doing FFT, use (10) to do calculation validation.
(ii) For a steady-state signal, when doing measurement validation (this is not suitable for shock or violent vibration tests), first calculate the moving average of the vibration density and the maximum peak-to-peak value; if the validation is true, output the mean value and a validated flag of 1, else output the maximum value and a validated flag of 0 (see the sketch after this list).
(iii) For a metastable signal, observe the new data in each single domain and obtain the trend of the data from the complementary domain; if its change trend is the same as predicted, use the new data as output, otherwise use the mean data as output, as for a steady-state signal. In the meantime, the data relationships in each domain will be similar, so the calculation can be done recursively.
(iv) For a nonstationary signal, as discussed in Section 2.2, if domain-based representations for an arbitrary multimodal distribution are obtained, a hidden Markov chain can be used to judge whether the new data match the prediction or not, as for the metastable signal.
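The sketch below is a minimal, hypothetical rendering of step (ii); the block length, window length and acceptance limit are assumed values, not parameters reported in the paper. The per-block vibration intensity (RMS) is tracked with a moving average and compared against its recent peak-to-peak spread.

```python
import numpy as np

def validate_steady_state(x, block=256, window=8, limit=0.05):
    """Measurement validation for a steady-state signal (step (ii) above)."""
    blocks = x[: len(x) // block * block].reshape(-1, block)
    rms = np.sqrt((blocks ** 2).mean(axis=1))                   # vibration intensity per block
    moving_mean = np.convolve(rms, np.ones(window) / window, mode="valid")
    spread = rms[-window:].max() - rms[-window:].min()          # peak-to-peak over recent blocks
    if spread <= limit * moving_mean[-1]:                       # inside the acceptance limit
        return moving_mean[-1], 1                               # mean value, validated flag 1
    return rms[-window:].max(), 0                               # maximum value, validated flag 0

value, flag = validate_steady_state(np.random.default_rng(2).normal(1.0, 0.01, size=8192))
```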
As shown in Figure 4, in a harsh environment or other situations with uncertain noise, uncertain working and calculation conditions of the device, and other unpredicted elements, multidomain parallel testing can improve reliability and reduce uncertainty. Obviously, if the test data of the different domains can be validated independently, the reliability of the system will be doubled.
And according to error theory, when the degrees of freedom of the test are doubled, the uncertainty of the system decreases by a factor of 1/√2.
Generally, if the number of test domains increases to N, the reliability will increase N times and the uncertainty of the system will decrease by a factor of 1/√N.
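The 1/√N scaling can be checked numerically; the short sketch below assumes independent, identically distributed unit-variance errors in each domain, which is an idealization rather than the paper's experimental data.

```python
import numpy as np

rng = np.random.default_rng(3)
for n_domains in (1, 2, 4):
    # Standard deviation of the combined estimate over many simulated trials.
    combined = rng.normal(0.0, 1.0, size=(100_000, n_domains)).mean(axis=1)
    print(n_domains, round(combined.std(), 3))   # approximately 1.0, 0.707, 0.5
```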
Conclusion 2. Vibration intensity increases when the noise intensity increases.
This result is the same when the noise is impulse noise rather than Gaussian white noise, and so on.
Shift Average of Noise Influence on Vibration Intensity Calculation.
As an example of decreasing noise, a method of shift (accumulated) averaging is simulated, both in the time domain and in the frequency domain (Figure 6). When averaging is used, the uncertainty decreases in both domains. Its field of application is steady-state signal testing. Figure 7 shows the change trends of two signals under time-frequency analysis: one is a mechanical fault signal and the other is a normal signal.
Experiment on the Rotation Lab. The uncertainty of the test results using only formula (5) plus windowing is 15% or more. When shift averaging is used, the uncertainty of the test results is less than 5%. And if a median-average pre-filter is used, the absolute error is also less than 5%.
The vibration module used for the test is shown in Figure 9 (it includes an ADUC7060 and an ADUC345). The multidomain validation method has been tested on this device. Steady-state signal and sudden-change signal validation were tested. The uncertainty of the test results is also less than 5%, and sudden changes can be predicted and validated from a different domain.
Conclusions
In this paper, the characteristics of wireless sensor network nodes are discussed first, and three assumptions are obtained: every node in the system should have distributed function completeness, every node should have a high data compression rate, and every node should have meaningful output, a real-time mark, and adequate precision.
Calculation validation can be done through second-order functionals, for example, using Parseval's theorem.
Moving averages, windowing and sliding averages are shown by simulations and experiments to be useful for validating steady-state signals and for judging the variation trend of metastable signals. According to these methods, when new test data of a metastable signal are obtained, recursive calculation and validation in multiple domains can be done. | 4,163.6 | 2013-07-21T00:00:00.000 | [
"Computer Science"
] |
POSITIONING OF THE PRECURSOR GAS INLET IN AN ATMOSPHERIC DIELECTRIC BARRIER REACTOR, AND ITS EFFECT ON THE QUALITY OF THE DEPOSITED TiOx THIN FILM SURFACE
Thin film technology has become pervasive in many applications in recent years, but it remains difficult to select the best deposition technique. A further consideration is that, due to ecological demands, we are forced to search for environmentally benign methods. One such method might be the application of cold plasmas, and there has already been a rapid growth in studies of cold plasma techniques. Plasma technologies operating at atmospheric pressure have been attracting increasing attention. The easiest way to obtain low temperature plasma at atmospheric pressure seems to be through atmospheric dielectric barrier discharge (ADBD). We used the plasma enhanced chemical vapour deposition (PECVD) method applying atmospheric dielectric barrier discharge (ADBD) plasma for TiOx thin film deposition, employing titanium isopropoxide (TTIP) and oxygen as reactants, and argon as a working gas. ADBD was operated in filamentary mode. The films were deposited on glass. We studied the quality of the deposited TiOx thin film surface for various precursor gas inlet positions in the ADBD reactor. The best thin film quality was achieved when the precursor gases were brought close to the substrate surface directly through the inlet placed in one of the electrodes. High hydrophilicity of the samples was demonstrated by contact angle (CA) tests. The film morphology was examined by atomic force microscopy (AFM). The thickness of the thin films varied in the range of (80 ÷ 210) nm depending on the composition of the reactor atmosphere. XPS analyses indicate that the composition of the films is closer to that of TiOxCy.
Introduction
Thin film deposition techniques and technologies have undergone serious development and cultivation in recent decades. A vast number of deposition methods are now available [6,7], but it still remains arduous to select the best deposition method which is, at the same time, environmentally friendly. The application of cold plasmas sustained at atmospheric pressure in combination with the chemical vapour deposition method seems to be a promising approach. Cold plasma is often produced in plasma jets, plasma torches and ADBD (for details about ADBD, see e.g. [5]). However, research in this area is still mostly restricted to the laboratory stage. For practical reasons, the most tested films are SiOx and TiOx coatings.
This paper summarizes results obtained when TiOx films are deposited on glass substrates using the PECVD method in an ADBD plasma reactor with TTIP (less toxic than TiCl4, which was used e.g. in [2]) as the precursor, namely the connection between the precursor gas inlet position in the ADBD reactor and the deposited film quality. Some preliminary results have been published in [3,4].
The reactor and the experimental conditions
Film deposition was performed with discharge power of about 350 mW [14 kV, 50 Hz].
The experiments were carried out in an open flow-through type plasma reactor with dimensions (90 × 79 × 41) mm. The scheme of the plasma reactor is shown in Fig. 1. Plasma was sustained between two brass electrodes [(45 × 8 × 18) mm HV electrode, (40 × 17 × 18) mm ground electrode] placed within the reactor. A barrier manufactured from a glass plate ((70 × 46 × 1) mm) covered the ground electrode. The distance between the electrodes was fixed at 4 mm. Three types of HV electrode were used in the experiments, all of them with identical external dimensions, but they were differentiated by the hole (diameter 3 mm) leading into the interelectrode region. The first electrode was without a hole, the hole of the second type was connected with one inlet (C) only, and the third type had two inlets, C and D (see Fig. 1; for simplicity, all four inlet positions are drawn here, although only one pair of inlets was used in each experiment).
ADBD was sustained in filamentary mode. Thin films were obtained at atmospheric pressure using titanium(IV) isopropoxide as the precursor (TTIP, Ti[OCH(CH3)2]4, 97% purity). TTIP was volatilized at a temperature of (30.0 ± 0.5) °C. It was mixed with argon in the evaporator, transported into the reactor and reacted with oxygen (or, in the first experiments, merely with dry air, when atmospheric oxygen took part in the reaction).
The gas flow rates were adjusted by means of mass flow controllers. Deposition tests were performed with TTIP/Ar flows of (0.5 ÷ 4.0) slm. The oxygen/dry air flow was controlled within (2.5 ÷ 10) slm.
The outer atmosphere was air with a relative humidity of (36 ÷ 47) % and a room temperature of about 20 °C. The deposition time was 10 minutes in all experiments.
Unfortunately, the reactor has no "clean interface", so surface-related chemical reactions or contamination by ambient air species began while the films were being removed from the reactor, before any test was initiated.
These chemical reactions also proceeded while the films were being stored. The deposited films were stored in darkness at room temperature (20 ÷ 23) °C, relative humidity (30 ÷ 40) %, in plastic boxes in air at atmospheric pressure.
Film analysis
The surface morphology of the films was examined using the atomic force microscopy (AFM) technique in non-contact mode, performed under ambient conditions on an FRT AFM Scanning Probe Microscope at the Technical University of Liberec. All (2 × 2) µm scans were processed by the Gwyddion software for SPM (scanning probe microscopy) data visualization and analysis.
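For readers reproducing the surface analysis, roughness parameters such as the mean roughness Sa and the RMS roughness Sq can also be extracted from exported AFM height maps outside Gwyddion. The sketch below assumes a (2 × 2) µm scan exported as a plain-text height matrix; the file name and calibration are hypothetical.

```python
import numpy as np

def roughness_parameters(height_nm):
    """Return (Sa, Sq) in nm for a 2-D AFM height map after plane levelling."""
    z = np.asarray(height_nm, dtype=float)
    # First-order plane levelling, as is commonly done before roughness
    # evaluation in SPM software such as Gwyddion.
    ny, nx = z.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    z_level = z - (A @ coeffs).reshape(z.shape)
    z_level -= z_level.mean()
    sa = np.mean(np.abs(z_level))           # arithmetic mean roughness
    sq = np.sqrt(np.mean(z_level ** 2))     # RMS roughness
    return sa, sq

# Hypothetical usage: heights in nm exported from a (2 x 2) um scan.
height = np.loadtxt("tiox_scan_2x2um.txt")
print("Sa = %.2f nm, Sq = %.2f nm" % roughness_parameters(height))
```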
To analyse the hydrophilicity, a contact angle (CA) test was applied. The CA was measured by the sessile drop technique a constant time (30 s) after depositing a distilled water drop of about 0.5 µl in volume. The CA of each sample was measured at 5 different positions at room temperature. The surface chemical composition of the films was investigated by X-ray photoelectron spectroscopy (XPS). A multi-channel hemispherical electrostatic analyser (Phoibos 100, Specs) was used. The Al Kα line (1486.6 eV) was used with an X-ray incidence angle of 45° to the surface plane. The analyser was operated in retarding-field mode, applying a pass energy of 40 eV for the survey scans and 10 eV for all core level data. The XPS peak positions were referenced to the aliphatic carbon component at 285.0 eV. Due to inadequate equipment we were unfortunately not able to clean the film profile by sputtering and perform XPS depth-profile tests, so our research had to focus on film surface tests only.
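As a minimal illustration of the two routine evaluation steps just described — averaging the five contact-angle readings per sample and charge-referencing XPS binding energies to the aliphatic C 1s component at 285.0 eV — the following sketch uses hypothetical raw values only.

```python
import numpy as np

# Five sessile-drop readings (degrees) taken at different positions on one sample.
ca_readings = np.array([4.8, 5.3, 5.1, 4.6, 5.2])            # hypothetical values
ca_mean, ca_std = ca_readings.mean(), ca_readings.std(ddof=1)
print(f"CA = {ca_mean:.1f} +/- {ca_std:.1f} deg")

# Charge referencing: shift the whole binding-energy scale so that the measured
# aliphatic C 1s component sits at 285.0 eV.
measured_c1s = 285.6                                          # hypothetical as-measured position (eV)
shift = 285.0 - measured_c1s
peak_positions = {"Ti 2p3/2": 459.4, "Ti 2p1/2": 465.2, "O 1s (TiO2)": 531.1}
corrected = {name: be + shift for name, be in peak_positions.items()}
print(corrected)
```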
Results
Deposition of the films proceeds through substrate-surface-limited reactions. The films were deposited in ADBD sustained in filamentary mode.
The deposited surface was in general irregularly corrugated and hummocky, with the thickness growing in spots where filaments bridged the electrodes. The locally growing or diminishing film thickness reflects local electric field inhomogeneities associated with the existence of filaments. The surface topography was similar to layers deposited with the TiCl4 precursor [2,10].
The gassing of the reactor was performed through various pairs of gas inlets (see Fig. 1). We used the following combinations:
(1.) A (air at atm. pressure) and B (TTIP/Ar)
(2.) A (air at atm. pressure) and C (TTIP/Ar)
(3.) C (TTIP/Ar) and D (oxygen)
(4.) C (oxygen) and D (TTIP/Ar)
For each combination of inlets we tested the effect of various TTIP and O2 concentrations in the TTIP/Ar/O2 mixture on the film characteristics. Only the best results are mentioned in the following summary.
1. The reactor was gassed both with TTIP/Ar (0.5 slm) and with dry air through inlets A and B in the walls of the reactor. Several different positions of both inlets were tested, but only powder-like structures were deposited on the glass substrate for all used combinations of A and B.
2.
To reduce pulverization, the reactor was gassed with dry air through inlet A in the wall of the reactor, and TTIP/Ar (0.5 slm) was fed through inlet C (the hole, 3 mm in diameter, in the HV electrode).
The film that formed on the glass substrate was very thin and barely detectable. We suppose that the TTIP/oxygen reaction was weak due to the low concentration of oxygen atoms (from the air) in the reactor atmosphere. The precursor molecules then flowed out of the inter-electrode space and reacted with oxygen atoms later (and we indeed observed a TiOx thin film deposited on the inner walls of the reactor).
AFM measurements revealed the existence of some salient features on the otherwise uniform powder-like film surface (Fig. 2). Small hummocks were also visible.
CA measurements were almost impossible due to the inhomogeneity and thinness of the film, and the results were not reproducible. Only powder-like TiOx structures were deposited on the substrate for TTIP/Ar flows higher than 0.5 slm.
The high-resolution XPS spectra for the main elements in the films are shown in Figs. 3–5. Figure 3 represents the Ti 2p spectrum, which consists of the 2p3/2 and 2p1/2 spin-orbit components located at 458.8 eV and 464.6 eV, respectively. This position of the peak maxima indicates that the main titanium compound is TiO2. The small components on the lower binding energy side correspond to sub-stoichiometric titanium oxide TiOx, x < 2.
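The quoted doublet positions are typically obtained by least-squares peak fitting of the high-resolution spectrum. A minimal sketch with two Gaussian components on a linear background is shown below; real XPS fits usually employ pseudo-Voigt line shapes and a Shirley background, and the synthetic data here (mimicking peaks near 458.8 eV and 464.6 eV) are for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, centre, width):
    return amp * np.exp(-0.5 * ((x - centre) / width) ** 2)

def ti2p_model(x, a1, c1, w1, a2, c2, w2, b0, b1):
    # Two Gaussian components (2p3/2 and 2p1/2) on a linear background.
    return gauss(x, a1, c1, w1) + gauss(x, a2, c2, w2) + b0 + b1 * x

# Synthetic spectrum roughly mimicking the reported Ti 2p doublet.
be = np.linspace(452.0, 472.0, 400)
counts = ti2p_model(be, 1000, 458.8, 0.9, 450, 464.6, 1.1, 50, 0) \
         + np.random.normal(0, 10, be.size)

p0 = [900, 459, 1.0, 400, 465, 1.0, 40, 0]        # initial guesses
popt, _ = curve_fit(ti2p_model, be, counts, p0=p0)
print("Ti 2p3/2 at %.2f eV, Ti 2p1/2 at %.2f eV" % (popt[1], popt[4]))
```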
The O 1s spectrum is shown in Fig. 4. The peak consists of two distinct components at 530.5 eV and 532.7 eV. The first component is assigned to TiO2, whereas the second component corresponds to carbon-oxygen species. The carbon-oxygen species are about three times more abundant (76 %) than the oxygen bound to titanium (24 %). We suppose that the carbon contamination was probably only superficial. This contamination may have originated both during the deposition process, from impurities in the air, and from post-discharge reactions and the adsorption of various species from the ambient air atmosphere after the film was removed from the reactor.
3.
To improve the quality of the films and optimize the deposition conditions, both components were introduced through the inlets in the HV electrode, and the atmospheric air was replaced by oxygen (C (TTIP/Ar) and D (oxygen)).
Nevertheless, problems still persisted. AFM analysis showed that the deposited films were not fully homogeneous (Fig. 6), and the formation of powder-like structures remained a problem. For further details, see [3].
4.
The best quality films were deposited with the same arrangement but, unlike in the previous combination of inlets, oxygen (2.5 slm) was fed through inlet C and the TTIP/Ar mixture (1 slm or 2 slm, i.e. a TTIP content of 0.05 or 0.10 %) was fed in through inlet D. The mixture entered the inter-electrode space through the hole (diameter 3 mm) in the HV electrode. Figure 7 is an AFM scan of the film surface deposited under optimum conditions. The surface is similar to the surface described in [10]. It is characterised by higher hummocks than in Fig. 6, where film deposition in surface-limited reactions was accompanied by volume-limited dust-generating reactions.
The CA tests proved that all samples were hydrophilic immediately after deposition. For an Ar/TTIP flow rate of 1 slm, the CA value was about 5° immediately after deposition. The hydrophilicity of the films remained almost unchanged for the first 7 days after deposition. Later, the wettability worsened, and within 28 days after deposition the CA value of all samples exceeded 40°.
Films deposited with a TTIP/Ar flow of 2 slm changed more rapidly from hydrophilic to hydrophobic.
The chemical composition of the films (TTIP/Ar flows (0.5 ÷ 4.0) slm, oxygen (2.5 ÷ 10) slm) was almost constant. High-resolution spectra for the main elements in the films are shown in Figs. 8–10.
The relatively high hydrocarbon contamination on the film surface was again most probably produced by post-discharge reactions and by the adsorption of various species from the ambient air atmosphere after the film was removed from the reactor. The carbon (contamination) is partially bonded to titanium (the C−Ti bond is at about 283 eV). Carbon mostly forms C−C (285 eV) backbone chains, some of which are partly oxidized (Fig. 8).
The O 1s spectra are shown in Fig. 9. Note that the FWHM (full width at half maximum) is more than 2 eV. This broadening is evidently caused by substoichiometric titanium oxides [1]. In addition, the second peak at 532.8 eV can be considered a contribution from single and double oxygen-carbon bonds [9].
Titanium is a reactive element and easily forms oxides and carbides, which can be seen in the Ti 2p curve (Fig. 10). The location of the strongest peaks at 458.8 eV and 465 eV indicates that the main titanium species is TiO2 [8]. The small peaks at 457 eV and 462.6 eV indicate the mixed presence of substoichiometric titanium oxides TixOy and titanium carbide TiC. The XPS spectra demonstrate that the titanium in the near-surface region is strongly oxidized, the dominant species being TiO2 and substoichiometric titanium oxides. The deposition process more likely produced TiOxCy films instead of the primarily desired TiOx films.
For more details see [4].
Conclusion
Film deposition on glass substrates was performed by the PECVD method in the ADBD plasma reactor. The plasma reactor was of an open flow-through type. The ADBD was sustained in filamentary mode. The reactor atmosphere consisted either of a TTIP/Ar/dry air or of a TTIP/Ar/oxygen mixture.
We studied the quality of the deposited TiOx thin film surface for various precursor gas inlet positions in the ADBD reactor and various precursor/oxidizer mixture compositions. The best film quality was achieved when the precursor and the oxidizer entered the discharge region immediately after they were mixed, through the hole adjacent to the substrate.
The surface topography was influenced by the nonequilibrium character of the ADBD, leading to an irregularly corrugated and hummocky film surface.
CA tests proved the high hydrophilicity of the samples immediately after deposition. Later, the wettability of the films diminished, and the CA value of all samples exceeded 40 degrees after 28 days; the changes were probably related to chemical reactions between the film surface and chemical groups present in the air atmosphere.
XPS tests indicate that the deposition process more likely produced TiOxCy films instead of the primarily desired TiO2 or TiOx films. All samples exhibit contamination with carbon, probably caused by post-discharge reactions and by the adsorption of various species from the ambient air atmosphere after the film was removed from the reactor. Some problems of this deposition method are related to the two different chemical processes that take place during deposition: surface-related chemical processes resulting in conventional PECVD film deposition, and undesired volume-related chemical processes resulting in dust production. The dust-producing mechanism prevails under certain working conditions (e.g. higher oxygen flow rates). Dust particles, once created, remain in the discharge region, and their layer(s) on the substrate hinder the effective formation of a more homogeneous film and degrade the film quality. Another problem of PECVD thin film deposition with ADBD seems to be the filamentary character of the ADBD in some applications, leading to the generation of hummocks that make the film surface rough.
Figure 3. Ti 2p XPS spectrum of the film, inlets A, C.
Figure 4. O 1s XPS spectrum of the film, inlets A, C.
Figure 5. C 1s XPS spectrum of the film, inlets A, C. | 3,488.8 | 2013-01-02T00:00:00.000 | [
"Physics"
] |
FAM20C directly binds to and phosphorylates Periostin
It is widely accepted that FAM20C functions as a Golgi casein kinase and has large numbers of kinase substrates within the secretory pathway. It has been previously reported that FAM20C is required for maintenance of healthy periodontal tissues. However, there has been no report that any extracellular matrix molecules expressed in periodontal tissues are indeed substrates of FAM20C. In this study, we sought to identify the binding partner(s) of FAM20C. FAM20C wild-type (WT) and its kinase inactive form D478A proteins were generated. These proteins were electrophoresed and the Coomassie Brilliant Blue (CBB)-positive bands were analyzed to identify FAM20C-binding protein(s) by Mass Spectrometry (MS) analysis. Periostin was found by the analysis and the binding between FAM20C and Periostin was investigated in cell cultures and in vitro. We further determined the binding region(s) within Periostin responsible for FAM20C-binding. Immunolocalization of FAM20C and Periostin was examined using mouse periodontium tissues by immunohistochemical analysis. In vitro kinase assay was performed using Periostin and FAM20C proteins to see whether FAM20C phosphorylates Periostin in vitro. We identified Periostin as one of FAM20C-binding proteins by MS analysis. Periostin interacted with FAM20C in a kinase-activity independent manner and the binding was direct in vitro. We further identified the binding domain of FAM20C in Periostin, which was mapped within Fasciclin (Fas) I domain 1–4 of Periostin. Immunolocalization of FAM20C was observed in periodontal ligament (PDL) extracellular matrix where that of Periostin was also immunostained in murine periodontal tissues. FAM20C WT, but not D478A, phosphorylated Periostin in vitro. Consistent with the overlapped expression pattern of FAM20C and Periostin, our data demonstrate for the first time that Periostin is a direct FAM20C-binding partner and that FAM20C phosphorylates Periostin in vitro.
…while Tenascin-C 7, CCN3 11 and BMP1 12 [bind] through the Fas I domains. These findings suggest that Periostin is a multifunctional protein, possibly through orchestrating these binding proteins inside and outside of the cell 13. FAM20C is a member of "Family with sequence similarity 20", consisting of three members: FAM20A, FAM20B and FAM20C. FAM20C, also known as DMP4 in mice 14, is highly expressed in chondrocytes, osteoblasts, osteocytes, odontoblasts, ameloblasts and cementoblasts, as well as in dentin, enamel and bone matrices 15. The expression pattern of FAM20C suggests that it has an important role in the formation of these mineralized tissues and the subsequent mineralization process. Recently, FAM20C was identified as an intracellular kinase, namely the Golgi casein kinase (GCK), which was first described in lactating mammary glands, where GCK enzymatically phosphorylates endogenous casein 16. GCK/FAM20C phosphorylates secretory pathway proteins within the Ser-X-Glu/phospho-Ser (SXE/pS) motif, which many of the small integrin-binding ligand, N-linked glycoprotein (SIBLING) family members possess 17. SIBLING proteins are known to function as nucleators or inhibitors of biomineralization. It has been previously reported that the acidic serine- and aspartate-rich motif (ASARM) peptide derived from osteopontin inhibits mineralization by binding to hydroxyapatite in a phosphorylation-dependent manner 18. FAM20C is also known as a causative gene for Raine syndrome (OMIM #259775) 19. Raine syndrome is a rare autosomal recessive disorder characterized by generalized osteosclerosis with periosteal bone formation, manifesting a distinctive facial phenotype 20. Regarding oral/craniofacial phenotypes, it has been reported that non-lethal Raine syndrome patients have small teeth with enamel dysplasia 20, enlarged gingiva 21 and amelogenesis imperfecta with significant gingivitis 22. A previous report suggested that FAM20C is required for the maintenance of healthy periodontal tissues 23. As FAM20C functions as an intracellular protein kinase in the secretory pathway, it is reasonable to speculate that there might be some secretory proteins working together with FAM20C in PDL tissues.
Here we show that Periostin was identified during the course of FAM20C protein purification by Mass Spectrometry analysis. FAM20C interacted with Periostin in cell cultures. Using recombinant proteins, FAM20C directly bound to Periostin in vitro. We further narrowed down the binding domain of FAM20C and found that Fas I domain in Periostin is necessary for the binding to FAM20C. Immunohistochemical analysis demonstrated that immunolocalization of FAM20C was observed in murine PDL, which was overlapped with that of Periostin. Periostin was phosphorylated by FAM20C in vitro.
Results
Identification of Periostin during the FAM20C protein purification process by Mass Spectrometry (MS) analysis. We intended to obtain FAM20C proteins; thus, FAM20C-stably transfected clones were first generated by transfecting the FAM20C-WT-V5/His or FAM20C-D478A-V5/His expression vector into HEK 293 cells. HEK 293 cells were selected for their higher transfection efficiency.
The conditioned media from the FAM20C-WT-V5/His-transfected cell clone or the FAM20C-D478A-V5/His-transfected cell clone were collected, and FAM20C proteins were purified by the Ni-NTA purification system. The expression of FAM20C-WT-V5/His and -D478A-V5/His was confirmed by Western blotting with both anti-V5 and anti-FAM20C antibodies after purification (data not shown). The purified FAM20C-WT-V5/His and FAM20C-D478A-V5/His proteins were prepared, electrophoresed, and the gel was stained with CBB. A single CBB-positive band was then found at a molecular weight corresponding to the size of the FAM20C-WT-V5/His or -D478A-V5/His protein (~75 kDa band in Fig. 1). The purified FAM20C-V5/His fusion protein was detected at a slightly higher molecular weight position than previously reported 17, likely due to the presence of the V5/His tag (~5 kDa). Interestingly, another band appeared at a higher molecular weight than the FAM20C-WT-V5/His or FAM20C-D478A-V5/His protein when the amount of FAM20C-WT-V5/His (Fig. 1A, lane 3, indicated by an arrow) or -D478A-V5/His (Fig. 1A, lane 5, indicated by an arrowhead) protein applied was increased, suggesting the presence of FAM20C-binding protein(s). These CBB-positive bands with broader appearance were cut out (Fig. 1A, indicated by an arrow and an arrowhead), treated with trypsin, and the digested peptides were subjected to protein identification analysis. To exclude false-positive peptides, the data were analyzed with the filter setting condition (Xcorr2: 2.2, Xcorr3: 3.5) as previously reported 24. This analysis revealed that the peptides of FAM20C-WT and -D478A were identified with a protein coverage of 58.73% (56 unique peptides and 197 total peptides found) and 59.25% (49 unique peptides and 179 total peptides found), respectively (…).
To determine whether the binding between POSTN and FAM20C was direct, without the presence of other molecules, we examined the binding between recombinant POSTN protein produced by the Sf21 baculovirus system and FAM20C-WT-V5/His or -D478A-V5/His protein purified from the conditioned media in a dose-dependent manner. The proteins were incubated and immunoprecipitation (IP)-Western blot analysis was performed. The results showed that the binding between POSTN and FAM20C-WT (…).
Figure caption (immunohistochemistry, cf. Fig. 4): … show higher magnification views of the open boxed areas in (a), (c), (e) or (g), respectively. Immunolocalization for FAM20C (brown color) was present in PDL (a, b), with preferential distribution at the bone surface region (b; arrows) and bone-embedded Sharpey's fibres (b; arrowheads). Periostin was present (brown color) in PDL (c, d) with strong positive signals along the thick PDL collagen, both at the bone surface and cementum surface regions of the PDL (d; arrows), and at bone-embedded Sharpey's fibres (d; arrowheads). No immunoreactivities were detected when non-immune IgG was used (e-h).
FAM20C phosphorylates Periostin in vitro.
We examined S-X-E FAM20C phosphorylation consensus sites in mouse Periostin and there were several potential phosphorylation sites (Fig. 5B). We then investigated whether Periostin protein could be phosphorylated by FAM20C in vitro. In vitro kinase assay was performed according to the previous reports 17, 25 and phosphorylated substances were enriched with Phos-tag agarose. Isolated phosphorylated proteins were separated by SDS-PAGE and detected by anti-Periostin antibody. Our results demonstrated that Periostin was detected when FAM20C WT (Fig. 5A, lane 4), but not D478A (Fig. 5A, lane 5), was incubated with Periostin. When Periostin alone was used in the absence of FAM20C protein, immunoreactivity to anti-Periostin antibody was not observed (Fig. 5A, lane 6). The data thus indicate that FAM20C phosphorylates Periostin in vitro.
Discussion
There has been increasing evidence that Periostin plays an important role in periodontal tissue development.
Periostin knock-out (KO) mice have been generated, and the homozygous mice showed periodontal tissue phenotypes [26][27][28]. In Periostin KO mice, PDL fibroblasts were irregularly distributed among collagen fibrils, and the collagen fibrils were disorganized 28. While the periodontium of WT and KO mice appeared intact when the teeth were unerupted, dramatic periodontal defects were observed after tooth eruption, including enlarged gingival tissue, attachment loss, irregular PDL width, alveolar bone loss and external root resorption. Removal of masticatory forces partially rescued the PDL phenotype, suggesting a potential role of Periostin at the transcription level 26. It has also been reported that Periostin deficiency leads to a decrease in collagen fibril diameter, which may explain why the PDL collagen fibrils of KO mice were susceptible to occlusal force and damaged 8. Besides the PDL defect, Periostin KO mice exhibited dwarfism and enamel defects, suggesting that Periostin plays crucial roles in bone and tooth development 27. Despite the possibly critical roles of Periostin reported in certain tissues including the PDL, no gene mutations have so far been found in humans. Among the FAM20 family, it has been reported that periodontitis may be a part of the clinical phenotypic spectrum of FAM20A mutations 29. Some reports showed the presence of periodontitis in the non-lethal type of Raine syndrome, in which FAM20C is mutated, but a genotype-phenotype correlation was not established. Although FAM20A is considered to be an allosteric activator of FAM20C kinase 30 and some clinical phenotypes in patients with FAM20A mutations overlap with those in patients with FAM20C mutations, it has not been ruled out that periodontitis in Raine syndrome patients is merely coincidental. In contrast, in mice, the involvement of FAM20C in periodontal tissues has been more extensively studied and established. The expression of Fam20C mRNA is detected in PDL fibroblasts at 5 and 7 weeks, and FAM20C protein is expressed in the PDL matrix in the incisor at 7 weeks 15. Consistent with this, our data demonstrated by immunohistochemistry that FAM20C is expressed in PDL tissues (Fig. 4). More recently, a conditional Fam20C KO mouse in cells expressing type I collagen (Fam20C cKO) was generated and showed a periodontal disease phenotype. Since the collagen fibers appeared thinner and unevenly distributed in these mice, the PDL structure is disrupted, which likely allows direct infiltration of bacteria from the periodontal pocket, leading to periodontal disease. Additionally, in Fam20C cKO mice, the expression level of Periostin in the PDL was dramatically reduced 23. A Fam20C global KO mouse (Fam20C −/− mouse) was generated by another group and demonstrated an approximately 20% mortality rate. Surviving Fam20C −/− mice showed decreased body weight and length, bone abnormalities and a lack of tooth enamel 31, manifesting clinical phenotypes similar to those of Raine syndrome patients. Therefore, these findings in mouse model studies and observations in human case reports strongly suggest a possible relationship between FAM20C and Periostin in PDL, bone and enamel, and that these two genes are genetically associated.
It has been known for almost 30 years that non-collagenous proteins in bone, dentin and enamel contain phosphoproteins with O-phosphoserine 32. Despite the critical biological importance of phosphorylation in non-collagenous matrix proteins, the kinase(s) responsible for this phosphorylation were not identified until recently. FAM20C kinase favors phosphorylation of an S-X-E/phosphorylated-S motif 17,25,33. As illustrated in Fig. 5B, Periostin has several S-X-E motifs, making it a potential candidate substrate for FAM20C kinase. Our data indicate that Periostin is phosphorylated by FAM20C (Fig. 5A); however, we could not identify the precise phosphorylation site(s) in Periostin using MS analysis, due to the lack of availability of peptides containing these motifs (Fig. 1B). Four S-X-E motifs are found in mouse Periostin (S140-N-E, S548-E-E, S611-K-E, S805-R-E; based on NCBI Reference Sequence NP_056599.1), of which three are conserved between mouse and human Periostin (S138-N-E, S546-E-E, S609-K-E; based on NCBI Reference Sequence NP_006466.2). The three conserved S-X-E motifs are located within Fas I domains, i.e. S140-N-E in mouse RD1, and S548-E-E and S611-K-E in mouse RD4. A possible reason why we could not retrieve any S-X-E motif peptides from our MS analysis is the detection limit. As trypsin was used for peptide digestion and K or R residues are present near the S-X-E motifs, the resulting peptides are too short to be detected by the analysis. For example, the S546-E-E-containing peptide is expected to be digested as "GMTSEEK", and the S609-K-E-containing peptide as "SK". At this point it is still unclear why we could not detect any S138-N-E-containing peptide; however, the peptide detection setting is normally 7-20 residues, and this S138-N-E-containing peptide may be too long to detect. Further studies are required to verify the location of the phosphorylation site(s) in Periostin.
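The argument about undetectable tryptic peptides can be checked directly from a sequence. The sketch below scans a protein sequence for S-X-E motifs and performs a naive in-silico trypsin digest (cleavage after K/R, ignoring the "not before proline" rule and missed cleavages), flagging motif-containing peptides that fall outside a 7–20-residue detection window; the short test sequence is made up for illustration.

```python
import re

def sxe_motifs(seq):
    """Return 1-based positions of S in S-X-E motifs (FAM20C consensus)."""
    return [m.start() + 1 for m in re.finditer(r"(?=S.E)", seq)]

def tryptic_peptides(seq):
    """Naive trypsin digest: cleave after K or R (missed cleavages ignored)."""
    return [p for p in re.split(r"(?<=[KR])", seq) if p]

def motif_detectability(seq, min_len=7, max_len=20):
    report = []
    for pos in sxe_motifs(seq):
        offset = 0
        for pep in tryptic_peptides(seq):
            if offset < pos <= offset + len(pep):   # peptide containing this serine
                report.append((pos, pep, min_len <= len(pep) <= max_len))
                break
            offset += len(pep)
    return report

# Made-up fragment mimicking the "GMTSEEK" and "SK" situations discussed above.
demo = "MKLAGMTSEEKSKEAVLR"
for pos, pep, detectable in motif_detectability(demo):
    print(f"S at {pos}: peptide '{pep}' ({len(pep)} aa) detectable={detectable}")
```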
The potential molecular functions of Periostin have been characterized. It has been reported that Periostin is associated with cell proliferation, migration and activation of the survival signaling pathway PI3K/AKT/mTOR 34. More recently, the functional domain responsible for this biological effect of Periostin has been identified and reported by two independent groups: one reported that a peptide sequence (amino acids 142-151) of Periostin stimulates chemotactic migration, adhesion, proliferation and endothelial tube formation of human endothelial colony forming cells in vitro 35, and the other that monoclonal antibodies recognizing amino acids 136-151 of Periostin inhibit Periostin-induced migration of human endothelial colony forming cells 36. Interestingly, the functional domain reported in both studies is located within a Fas I domain of Periostin, more specifically in RD1, which contains the S138-N-E motif (Fig. 5B). Therefore, this suggests that phosphorylation by FAM20C may regulate Periostin-mediated cell functions, and it is of particular future interest to investigate the biological role of phosphorylated Periostin and its molecular function in both healthy periodontal tissues and periodontal disease.
Materials and methods
Ethics statement. The use of animals and all animal procedures in this study were approved by the Institutional Animal Care and Use Committee (IACUC) at Boston University Medical Campus (approved protocol number: AN-15050), and all efforts were made to minimize animal suffering. This study was performed in accordance with the NIH Guide for the Care and Use of Laboratory Animals.
Cell culture. The human embryonic kidney (HEK) 293 cells were maintained as previously described 37,38 and used in this study.
Reagents and antibodies. X-tremeGENE 9 DNA transfection reagent was obtained from Roche Life Science. Recombinant mouse Periostin protein (2955-F2) was obtained from R&D Systems. The antibodies used in this study were as follows; anti-V5 (Life Technologies), anti-HA (clone 12CA5, Roche Life Science), anti-HA high affinity (clone 3F10, Roche Life Science), goat polyclonal anti-FAM20C (Santa Cruz Biotechnology), and rabbit polyclonal anti-Periostin (ab14041, Abcam) antibody. Rabbit polyclonal anti-Periostin antibody against RD1 domain of Periostin previously generated 39 was used in the immunohistochemical analysis.
FAM20C and Periostin expression vectors. Human FAM20C expression vector constructs, including the wild-type (WT), the Raine syndrome-mutant form (P328S), and a mutant form lacking kinase activity (D478A), were generated by PCR methods as previously reported 40. The plasmids harbor the FAM20C-WT, FAM20C-D478A, and FAM20C-P328S cDNAs followed by a V5-6×His tag (pcDNA3.1-FAM20C-WT-V5/His, …).
Purification of FAM20C proteins. Stably transfected HEK 293 cell clones overexpressing FAM20C-WT-V5/His or FAM20C-D478A-V5/His were generated, and the FAM20C-V5/His proteins were purified in the same manner as previously described 41. Briefly, cells were transiently transfected using X-tremeGENE 9 DNA transfection reagent with pcDNA3.1-FAM20C-WT-V5/His or pcDNA3.1-FAM20C-D478A-V5/His according to the manufacturer's protocol. The transfected cells were treated with 400 μg/ml of the neomycin analogue G418 and further cultured. Ten single colony-derived clones transfected with either FAM20C-WT or FAM20C-D478A were isolated and further cultured, and the expression of the FAM20C proteins was verified by Western blotting with the anti-V5 antibody. The cell clone expressing the strongest band intensity for FAM20C-WT-V5/His or FAM20C-D478A-V5/His was chosen and further cultured on a larger scale for protein production, and the conditioned media of FAM20C-WT-V5/His and FAM20C-D478A-V5/His were collected. The conditioned media were centrifuged at 1500 rpm for 5 min, the supernatant was incubated with Ni-NTA agarose beads (Qiagen), and the FAM20C-WT-V5/His and FAM20C-D478A-V5/His proteins were purified as previously described 38,41. The purified proteins were dialyzed against distilled water, lyophilized and resuspended in distilled water. The protein concentration was measured and the purified proteins were kept at -20 °C until use.
…POSTN-WT-HA and FAM20C-WT-V5/His, FAM20C-D478A-V5/His, or FAM20C-P328S-V5/His. The total amount of cDNA was kept constant by supplementation with empty vector. After 24 h of transfection, cell lysates were collected, immunoprecipitated with the anti-V5 antibody and subjected to Western blotting (WB) analysis using the anti-HA antibody to identify the binding. The same membrane was stripped with stripping buffer, and WB with the anti-V5 antibody was performed to verify the expression of the FAM20C-V5/His proteins. An aliquot of the same cell lysates was subjected to WB analysis with the anti-HA antibody to verify the expression of POSTN-WT-HA.
Protein identification by mass spectrometry (MS) analysis. Various amounts of purified
In vitro binding assay. The mouse recombinant Periostin and FAM20C-WT-V5/His or FAM20C-D478A-V5/His proteins were prepared in a FAM20C-protein dose-dependent manner and incubated in PBS, where the total amount of protein per sample was kept constant by adding bovine serum albumin (BSA). Samples were then immunoprecipitated with the anti-V5 antibody followed by Western blot (WB) analysis with the anti-Periostin (Abcam) antibody. The same membrane was stripped, and WB with the anti-V5 antibody was performed to verify the expression of the FAM20C-V5/His proteins. Immunohistochemistry. The maxillary periodontal tissue including the molars was dissected from mice (C57BL/6 strain, male, postnatal day 28), fixed, decalcified and embedded in paraffin. Immunohistochemical staining was performed as previously described 37 with the anti-FAM20C antibody (1:100 dilution), the anti-Periostin antibody (1:400 dilution) 39 or non-immune goat or rabbit immunoglobulin (IgG) (at the same concentration as the primary antibodies) as negative controls. The immunoreactivity was amplified using the VECTASTAIN Elite ABC HRP kit (Vector Laboratories Inc.). The sections were counterstained with hematoxylin (Sigma). | 4,118.8 | 2020-10-13T00:00:00.000 | [
"Biology",
"Medicine"
] |
Joint source and relay optimization for interference MIMO relay networks
This paper considers multiple-input multiple-output (MIMO) relay communication in multi-cellular (interference) systems in which MIMO source-destination pairs communicate simultaneously. It is assumed that due to severe attenuation and/or shadowing effects, communication links can be established only with the aid of a relay node. The aim is to minimize the maximal mean-square-error (MSE) among all the receiving nodes under constrained source and relay transmit powers. Both one- and two-way amplify-and-forward (AF) relaying mechanisms are considered. Since the exactly optimal solution for this practically appealing problem is intractable, we first propose optimizing the source, relay, and receiver matrices in an alternating fashion. Then we contrive a simplified semidefinite programming (SDP) solution based on the error covariance matrix decomposition technique, avoiding the high complexity of the iterative process. Numerical results reveal the effectiveness of the proposed schemes.
Introduction
Due to the scarcity of frequency spectrum in practical wireless networks, multiple communicating pairs are motivated to share a common time-frequency channel to ensure efficient use of the available spectrum. Co-channel interference (CCI) is, however, one of the main deteriorating factors in such networks, adversely affecting the system performance. The impact is even more pronounced in 5G heterogeneous networks, where there is an enormous amount of interference due to hyper-dense frequency reuse among small-cell and macro-cell base stations. Therefore, it is important to develop schemes to mitigate the CCI, which has been a major research direction in wireless communications over the past decades.
In the literature, various schemes have been proposed to keep CCI at an acceptable level. A conventional approach in MIMO systems is to exploit spatial diversity for suppressing CCI [1]. Such spatial diversity techniques have been used to solve many power control problems in interference systems for different network setups. In [2], a power control scheme has been designed […] and thus it is difficult to find an analytical solution.
To tackle this, we first devise an algorithm to optimize the source, relay, and receiver matrices alternately by decomposing the original non-convex problem into convex subproblems. To avoid the complexity of the iterative process, we then extend the error covariance matrix decomposition technique applied to point-to-point MIMO relay systems in [18] to interference MIMO relay systems. More specifically, under the practically reasonable high first-hop signal-to-noise ratio (SNR) assumption, we demonstrate that the problem can be decomposed into two standard semidefinite programming (SDP) problems to optimize the source and relay matrices separately. Note that a high SNR assumption has also been made in [19] to simplify the joint codebook design problem in single-user MIMO relay systems, and in [20,21] for multicasting MIMO relay design. Hence our work is a generalization to a multi-pair communication scheme taking co-channel interference into account.
The remainder of this paper is organized as follows. In Section 2, the interference MIMO relay system model is introduced. The joint optimal transmitter, relay, and receiver beamforming optimization schemes are developed in Sections 3 and 4, respectively, for one-way and two-way relaying. Section 5 provides simulation results to analyze the performance of the proposed algorithms in various system configurations, before concluding remarks are made in Section 6.
System model
Let us consider a communication scenario, as illustrated in Fig. 1, where each of the K source nodes communicates with the corresponding destination node sharing the same frequency channel via a common relay node. The direct link between each transmitter-receiver pair is assumed to be broken due to strong attenuation and/or shadowing effects. The kth source, the relay, and the kth destination nodes are assumed to be equipped with N s,k , N r , and N d,k antennas, respectively.
One-way relaying
In this section, we consider that communication takes place in one direction only. The relay node is assumed to work in half-duplex mode, which implies that the actual communication between the source and destination nodes is accomplished in two time slots. In the first time slot, the source nodes transmit the linearly precoded signal vectors B_k s_k, k = 1, ..., K, to the relay node. The received signal vector at the relay node is therefore given by

y_r = \sum_{k=1}^{K} H_k B_k s_k + n_r,   (1)

where H_k denotes the N_r × N_{s,k} Gaussian channel matrix between the kth source node and the intermediate relay node, B_k is the N_{s,k} × N_{b,k} source precoding matrix, and n_r is the N_r × 1 additive white Gaussian noise (AWGN) vector introduced at the relay node. Let us denote N_b = \sum_{k=1}^{K} N_{b,k} as the total number of data streams transmitted by all the source nodes. In order to successfully transmit N_b independent data streams simultaneously through the relay, the relay node must be equipped with N_r ≥ N_b antennas.
After receiving y_r, the relay node simply multiplies the signal vector by an N_r × N_r precoding matrix F and transmits the amplified version of y_r in the second time slot. Thus the relay's N_r × 1 transmit signal vector x_r is given by

x_r = F y_r.   (2)

Accordingly, the signal received at the kth destination node can be expressed as

y_{d,k} = G_k F y_r + n_{d,k} = \bar{H}_k s_k + \bar{n}_{d,k},   (3)

where G_k denotes the N_{d,k} × N_r complex channel matrix between the relay node and the kth destination node, n_{d,k} is the N_{d,k} × 1 AWGN vector introduced at the kth destination node, \bar{H}_k ≜ G_k F H_k B_k is the equivalent source-destination channel matrix, and \bar{n}_{d,k} ≜ G_k F(\sum_{j=1, j≠k}^{K} H_j B_j s_j + n_r) + n_{d,k} is the equivalent noise vector.
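A small numerical sketch of this one-way signal model (random matrices, K = 2 pairs) may help make the notation concrete: it builds the equivalent channel \bar{H}_k and the interference-plus-noise covariance, applies the linear MMSE receiver discussed later in the paper, and evaluates the per-user MSE. All dimensions and the identity precoders are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
K, Ns, Nr, Nd, Nb = 2, 2, 4, 2, 2
sigma_r2 = sigma_d2 = 0.01

H = [rng.normal(size=(Nr, Ns)) + 1j * rng.normal(size=(Nr, Ns)) for _ in range(K)]
G = [rng.normal(size=(Nd, Nr)) + 1j * rng.normal(size=(Nd, Nr)) for _ in range(K)]
B = [np.eye(Ns, Nb) for _ in range(K)]        # illustrative source precoders
F = np.eye(Nr)                                # illustrative relay matrix

for k in range(K):
    Hbar = G[k] @ F @ H[k] @ B[k]             # equivalent source-destination channel
    # Covariance of the equivalent noise vector (inter-user interference + noise).
    C = sigma_d2 * np.eye(Nd) + sigma_r2 * G[k] @ F @ F.conj().T @ G[k].conj().T
    for j in range(K):
        if j != k:
            Hj = G[k] @ F @ H[j] @ B[j]
            C = C + Hj @ Hj.conj().T
    # Linear MMSE receiver and resulting error covariance / MSE.
    W = np.linalg.solve(Hbar @ Hbar.conj().T + C, Hbar)   # W_k (Nd x Nb)
    E = np.eye(Nb) - W.conj().T @ Hbar                    # MMSE error covariance
    print(f"user {k}: MSE = {np.real(np.trace(E)):.4f}")
```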
All noises are assumed to be independent and identically distributed (i.i.d.) complex Gaussian random variables with mean zero and variance σ 2 n , where n ∈ {r, d} indicates the noise introduced at the relay or at the destination.
Remark. Note that the interference term in (3) does not appear in the received signal of the single-user MIMO relay system considered in [19] or in the multicasting MIMO relay systems considered in [20,21]. Hence the subsequent analyses remain considerably simpler in [19][20][21], whereas we need to deal with this troublesome interference term in this paper.
Considering the input-output relationship at the relay node given in (2), the average transmit power consumed by the MIMO relay node is defined as

tr(E{x_r x_r^H}) = tr(F E{y_r y_r^H} F^H),   (5)

where tr(·) denotes the trace of a matrix, E{·} indicates statistical expectation, and E{y_r y_r^H} = \sum_{k=1}^{K} H_k B_k B_k^H H_k^H + σ_r^2 I_{N_r} represents the covariance matrix of the signal vector received at the relay node.
For signal detection, linear receivers are used at the destination nodes for simplicity. Denoting by W_k the N_{d,k} × N_{b,k} receiver matrix used by the kth destination node, the corresponding estimated signal vector ŝ_k can be written as ŝ_k = W_k^H y_{d,k}, where (·)^H indicates the conjugate transpose (Hermitian) of a matrix (vector). Thus the MSE of signal estimation at the kth receiver can be expressed as

tr(E_k) = tr( (W_k^H \bar{H}_k − I_{N_{b,k}})(W_k^H \bar{H}_k − I_{N_{b,k}})^H + W_k^H C_k W_k ),   (7)

where E_k denotes the error covariance matrix at the kth receiver and C_k ≜ E{\bar{n}_{d,k} \bar{n}_{d,k}^H} (denoted here by C_k) is the combined interference and noise covariance matrix.
In the following subsections, we develop optimization approaches that minimize the worst-user MSE among all the receivers subject to source and relay power constraints.
Problem formulation
In this section, we formulate the joint source and relay precoding optimization problem for MIMO interference systems. Our aim is to minimize the maximal MSE among all the source-destination pairs while satisfying the transmit power constraints at the source as well as the relay nodes. To fulfill this aim, the following joint optimization problem is formulated:

min over {B_k}, F, {W_k} of  max_k tr(E_k)   (9a)
s.t.  tr(F E{y_r y_r^H} F^H) ≤ P_r,   (9b)
      tr(B_k B_k^H) ≤ P_{s,k},  k = 1, ..., K,   (9c)

where (9b) and (9c), respectively, constrain the transmit power at the relay node and at the kth transmitter to P_r > 0 and P_{s,k} > 0. Our next endeavor is to develop optimal solutions for this problem. Note that the problem is strictly non-convex, with matrix variables appearing in quadratic form, and hence a closed-form solution is intractable. Therefore, we first resort to developing an iterative algorithm for the problem and then propose a sub-optimal solution which has lower computational complexity.
Iterative joint transceiver optimization
In this subsection, we investigate the non-convex source, relay, and destination filter design problem in an alternating fashion. We optimize one group of variables while fixing the others. Given the source and relay matrices {B_k}, F, the optimal receiver matrices {W_k} are obtained by solving the unconstrained optimization problem min_{W_k} tr(E_k), since E_k does not depend on W_j for j ≠ k, and W_k does not appear in constraints (9b) and (9c). Using the matrix derivative formulas, the gradient can be written as ∇_{W_k^H} tr(E_k) = (\bar{H}_k \bar{H}_k^H + C_k) W_k − \bar{H}_k. Equating ∇_{W_k^H} tr(E_k) = 0 yields the linear MMSE receive filter given by

W_k = (\bar{H}_k \bar{H}_k^H + C_k)^{-1} \bar{H}_k,   (11)

where (·)^{-1} indicates the matrix inversion operation. Then, for given source and receiver matrices {B_k} and {W_k}, the relay precoding matrix F optimization problem can be formulated as in (12). Note that (12) is non-convex with a matrix variable, since F appears in quadratic form in the objective function as well as in the constraint. However, we can reformulate this problem as an SDP using the Schur complement [22] as follows. By introducing a matrix variable for each link, we conclude from the second equation in (7) that the kth link MSE will be upper-bounded if the corresponding error covariance matrix is dominated by that variable in the positive semidefinite sense. Here, A ⪯ B indicates that the matrix B − A is positive semidefinite (PSD). Now, by introducing a matrix variable that dominates F F^H in the PSD sense, and a scalar variable τ_r, the relay optimization problem (12) can be transformed into the SDP (14), where we have used the Schur complement to obtain (14c) and (14d). Note that problem (14) is convex and can, as a result, be efficiently solved using interior-point based solvers [23] at a maximal complexity order of O((K + 2N_r^2 + \sum_{k=1}^{K} N_{b,k}^2 + 2)^{3.5}) [24]. However, the actual complexity is usually much lower in many practical cases. Interested readers are referred to [24] for a detailed analysis of the computational complexity based on interior-point methods.
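The key convexification step — replacing the quadratic term F F^H by a new PSD variable bounded from below via a Schur complement — can be illustrated with a toy CVXPY problem. The sketch below is not the full relay problem (14); it only shows how X ⪰ F F^T is encoded as the linear matrix inequality [[X, F], [F^T, I]] ⪰ 0 (a real-valued version is used here, although CVXPY also supports complex matrices). The target matrix and power bound are arbitrary.

```python
import cvxpy as cp
import numpy as np

n = 3
F = cp.Variable((n, n))                   # relay-type matrix variable
X = cp.Variable((n, n), symmetric=True)   # stands in for F F^T

# Schur complement: [[X, F], [F^T, I]] >> 0  is equivalent to  X >> F F^T.
lmi = cp.bmat([[X, F], [F.T, np.eye(n)]]) >> 0

# Toy objective: keep F close to a target while limiting a "relay power" tr(X).
F_target = np.eye(n)
prob = cp.Problem(cp.Minimize(cp.norm(F - F_target, "fro")),
                  [lmi, cp.trace(X) <= 2.0])
prob.solve()
print("status:", prob.status, "  tr(X) =", float(np.trace(X.value)))
```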
Finally, we optimize the source matrices {B_k} using the relay matrix F and the receiver matrices {W_k} known from the previous steps. Let us define equivalent channel matrices \tilde{H}_{k,j} so that the MSE in (7) can be rewritten in terms of the vectors b_k ≜ vec(B_k), where vec(·) stacks all the columns of the matrix B_k on top of each other and bd(·) constructs a block-diagonal matrix taking the parameter matrices as the diagonal blocks; with this notation, (15) can be rewritten as a quadratic function of the b_k. By introducing M_k ≜ F H_k, the power constraint in (9b) can also be rewritten as a quadratic function of the b_k. Using (17) and (18), problem (9) can be written as problem (19), where I ≜ bd(I_{k1}, ..., I_{kk}, ..., I_{kK}) with I_{kk} = I_{N_{s,k} N_{b,k}} and I_{kj} = 0 if j ≠ k. Problem (19) is a standard quadratically-constrained quadratic program (QCQP), which can be solved using off-the-shelf convex optimization toolboxes [23]. In the following, we also provide an SDP formulation of problem (19), denoted (20), where τ_s is a slack variable and p ≜ \sum_{k=1}^{K} N_{s,k} N_{b,k}. Problem (20) can be solved at the maximal complexity order given in [24]. The proposed iterative optimization technique for solving the original problem (9) is summarized in Table 1.
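The structure of the min-max QCQP can be illustrated with a tiny CVXPY example: a slack variable turns max_k MSE_k into epigraph form, each user's MSE is a convex quadratic in the stacked precoder vector, and a quadratic power constraint is added. The matrices A_k and vectors c_k below are random stand-ins, not the actual equivalent-channel quantities built from (15)–(18).

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
K, n = 3, 6
A = [rng.normal(size=(n, n)) for _ in range(K)]   # stand-ins for equivalent channels
c = [rng.normal(size=n) for _ in range(K)]        # stand-ins for target vectors
P_s = 1.0

b = cp.Variable(n)        # stacked precoder vector (cf. b_k = vec(B_k))
tau = cp.Variable()       # epigraph variable for the worst-user MSE

constraints = [cp.sum_squares(A[k] @ b - c[k]) <= tau for k in range(K)]
constraints.append(cp.sum_squares(b) <= P_s)      # quadratic power constraint

prob = cp.Problem(cp.Minimize(tau), constraints)
prob.solve()
print("worst-user surrogate MSE:", prob.value)
```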
Since in each step of the iterative algorithm we solve a convex subproblem to update one set of variables, the conditional update of each set will either decrease or maintain the objective function (9a). From this observation, monotonic convergence of the iterative algorithm follows. However, the overall computational complexity of the iterative algorithm scales with the number of iterations required until convergence. Thus the complexity of iterative algorithms is often rather high. Note that the sum-MSE based iterative algorithms proposed in [8][9][10] have similar complexity orders. Hence, in the following subsection, we contrive an algorithm for the joint optimization problem such that the computational overhead is substantially reduced.
Simplified joint optimization algorithm
In the previous subsection, we optimized the source, relay, and receiver matrices in an alternating fashion. Here, we propose a simplified approach to solve problem (9) using the error covariance matrix decomposition technique. The following theorem paves the foundation of the simplified algorithm.
where T [T 1 , . . . , T K ] and D [D 1 , . . . , D K ] with T k and D k , respectively, defined as λ r and λ e,k , ∀k, are the corresponding Lagrange multipliers as defined in Appendix 1.
Proof See Appendix 1.
H k B k can be regarded as the MMSE receive filter of the first-hop MIMO channel for the kth transmitter's signal received at the relay node given by (1). The implication of the structure of the relay amplifying matrix in the proposed simplified design can be observed while applying the following theorem.
Theorem 2
The MSE term appearing in (9a) can be equivalently decomposed into Proof See Appendix 2.
Even given the structure, an analytical optimal solution to the joint optimization problem is still difficult to obtain due to the cross-link interference from the relay node to the destination nodes. Therefore, we resort to develop an efficient suboptimal solution. The following proposition provides the foundation of the proposed simplified suboptimal solution.
Proposition 1. In the practically reasonable high SNR regime, the term B_k^H H_k^H (·)^{-1} H_k B_k appearing in the relay transmit power expression can be approximated by an identity matrix.
Proof See Appendix 3.
The result in Proposition 1 is guided by the observation that the eigenvalues of B_k^H H_k^H (·)^{-1} H_k B_k approach unity with increasing first-hop SNR. It will be demonstrated in Section 5 through numerical simulations that such an approximation results in negligible performance loss while reducing the computational complexity significantly. Applying Proposition 1, the transmit power of the relay node defined in (5) simplifies accordingly. Therefore, problem (9) can be approximated as problem (25). Note that the optimal receiver matrices {W_k} can be obtained as in (11). Interestingly, the source and relay optimization variables {B_k} and the relay variable \bar{T} are separable both in the objective function and in the constraints of problem (25). Therefore, applying the results from Theorem 2 and Proposition 1, we can decompose problem (25) into a source precoding matrices optimization problem (26) and a relay amplifying matrix optimization problem (27). Note that the objective function in (26a) can be interpreted as the MSE of the kth transmitter's signal vector s_k. In particular, the equivalent received signal for the kth transmitter's signal in the first hop, received at the relay node, is given by y_r^{(k)} = H_k B_k s_k + \sum_{j=1, j≠k}^{K} H_j B_j s_j + n_r, treating other users' signals as noise. As such, the corresponding MMSE receiver is given by D_k in (23). Thus the MSE expression in (26a) actually represents the equivalent first-hop MSE of the kth transmitter's signal s_k.
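Proposition 1 is easy to verify numerically: as the first-hop noise variance decreases, the eigenvalues of B^H H^H (H B B^H H^H + σ² I)^{-1} H B approach one. The sketch below uses random matrices and, for simplicity, the interference-free (single-pair) case; it is only meant to illustrate the high-SNR behaviour, not to reproduce the proof in Appendix 3.

```python
import numpy as np

rng = np.random.default_rng(1)
Ns, Nr, Nb = 3, 6, 3
H = rng.normal(size=(Nr, Ns)) + 1j * rng.normal(size=(Nr, Ns))
B = rng.normal(size=(Ns, Nb)) + 1j * rng.normal(size=(Ns, Nb))
HB = H @ B

for sigma2 in [1.0, 1e-1, 1e-2, 1e-4]:
    # M = B^H H^H (H B B^H H^H + sigma^2 I)^{-1} H B
    M = HB.conj().T @ np.linalg.solve(HB @ HB.conj().T + sigma2 * np.eye(Nr), HB)
    eig = np.sort(np.real(np.linalg.eigvals(M)))
    print(f"sigma^2 = {sigma2:g}: eigenvalues ~ {np.round(eig, 3)}")
```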
Given the corresponding MMSE receiver D k , (26a) can be rewritten as where ω k σ r tr(D H k D k ) and ϒ k [ϒ k1 , . . . , ϒ kk , . . . , ϒ kK ] with ϒ kk = I N r and ϒ kj = 0, if j = k. Introducing an auxiliary variable t s , problem (26) can be rewritten as the following second-order cone program (SOCP): which can be efficiently solved by standard optimization packages at a complexity order of O ( K k=1 N 2 b,k + 1) 3 [24]. Thus, we can update {D k } and {B k } in an alternating fashion.
Regarding the relay amplifying matrix optimization, by introducingT HT Q, the relay matrix optimization problem (27) can be equivalently transformed to Let us now introduce a matrix variable Y k I N d ,k + G k QG H k −1 , and a scalar variable t r . Using these variables, the relay optimization problem (30) can be equivalently rewritten as the following SDP: Problem (31) is convex and the globally optimal solution can be easily obtained [23]. The complexity order of solving problem (31) is at most O ( K k=1 N 2 b,k + K k=1 N 2 d,k + K + 2) 3.5 [24]. Note that in the simplified algorithm, only the source matrices are obtained in an alternating fashion.
The overall joint optimization procedure is summarized in Table 2.
Two-way relaying
Two-way relaying is being considered as a promising technique for future generation wireless systems since twoway relaying can significantly improve spectral efficiency. Hence, in this section, we consider two-way relaying in an interference MIMO relay system where each pair of users transmit signals to each other through the assisting relay node. The information exchange in the two-way relay channel is accomplished in two time slots: MAC phase and the BC phase. During the MAC phase, all the users simultaneously send their messages to the relay node. Thus the signal vector received at the relay node during the MAC phase can be expressed as where H K+k G T k for k = 1, . . . , K and n r is the N r × 1 AWGN vector received at the relay node.
Upon receiving y r , the relay node linearly precodes the signal vector by an N r × N r amplifying matrix F and transmits the N r × 1 precoded signal vector x r in the MAC phase: x r = Fy r . (33) The received signal at the kth user in the BC phase is given by where we have definedk as the index of user k's partner (e.g.,1 = K + 1, K + 1 = 1), n d,k is the N d,k × 1 AWGN vector at the kth destination node. As in the case of the one-way relaying system, all noises are assumed to be i.i.d. complex Gaussian random variables with mean zero and variance σ 2 n . Since the transmitting node k knows its own signal vector s k and the full CSI of the corresponding sourcedestination link H T k FH k B k , each transmitter can completely cancel the self-interference component in (34). Thus, the effective received signal vector at the kth receiving node is given by Using (33), the transmission power required at the relay node can be defined as where is the covariance matrix of the signal received at the relay node from all the transmitters. Furthermore, the MSE of the estimated signal using an N d ×N b linear weight matrix W k at the kth receiving node can be expressed as Similar to the case of one-way relaying, the problem of optimizing the transmit, relay, and receive matrices for the two-way scenario can be formulated as where (39b) and (39c) indicates the corresponding transmit power constraints.
Iterative joint transceiver optimization
Similar to the one-way relaying scenario, it can be shown that the transmitter, relay, and receiver matrices can be optimized in an alternating fashion through solving convex sub-problems. In each iteration of the algorithm, the receiver weight matrices are updated as follows: The relay beamforming matrix F is optimized through solving the following SDP problem: where we have defined Finally, the optimal source precoding matrices are obtained by solving
Simplified non-iterative approach
Assuming a moderately high SNR in the MAC phase, it can be shown, similarly to the one-way relaying case, that the generic structure of the relay matrix is F = T D^H. Using this particular structure of F, the MSE at the kth receiver can be equivalently decomposed into two parts. Accordingly, the joint precoding design problem (25) can be decomposed into two sub-problems, namely a source precoding matrices optimization problem and a relay beamforming matrix optimization problem, which can be solved following a similar approach as for the one-way relaying scenario.
Numerical simulations
In this section, we analyze the performance of the proposed one-and two-way MIMO relay interference system optimization algorithms through numerical examples. For simplicity, we assume that the source and the destination nodes are equipped with N s and N d antennas each, respectively, and P s,k = P s , ∀k. We simulated a flat Rayleigh fading environment such that the channel matrices have zero-mean entries with variances 1/N s for H k , ∀k, and 1/N r for G k , ∀k. All the simulation results were obtained by averaging over 500 independent channel realizations. The performance of the proposed min-max MSE algorithms have been compared with that of the naive AF (NAF) algorithm in terms of both MSE and bit error rate (BER). The NAF algorithm is a simple baseline scheme that forwards the signals at the transmitters and the relay node assigning equal power to each data stream. In particular, the source and the relay matrices, in their simplest forms, in the NAF scheme are defined as B k = P s /N s I N s , for k = 1, . . . , K, In the first example, we compare the performance of the proposed min-max MSE-based one-way algorithms with that of the sum-MSE minimization algorithm in [8] as well as the NAF approach in terms of the MSE normalized by the number of data streams (NMSE) with K = 3, N s = 3, N r = 9, and N d = 3. Figure 2 shows the NMSE performance of the algorithms versus transmit power P s with fixed P r = 20 dB. Note that for the proposed simplified non-iterative algorithm, we plot the NMSE of the user with the worst channel (Worst) as well as the average perstream MSE of all the users (Avg.). On the other hand, for the rest of the algorithms, the worst-user NMSE has been plotted. The results clearly indicate that the proposed joint optimization algorithms consistently yield better performance compared to the existing schemes. It can also be revealed that the proposed iterative algorithm has the best MSE performance compared to the other approaches over the entire P s range. It is no surprise that the NAF algorithm yields much higher MSE compared to the other schemes since the NAF algorithm performs no optimization operation. Most importantly, the iterative sum-MSE minimization algorithm in [8] always penalizes the user with the worst channel condition.
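The Monte-Carlo setup described here is easy to reproduce. The sketch below draws one Rayleigh-fading realization with the stated per-entry variances and builds the NAF source precoders; the square-root scaling of B_k is our reading of the equal-power-per-stream allocation, and the relay-matrix scaling shown is only an assumption (its definition is truncated in the text above).

```python
import numpy as np

def rayleigh(rows, cols, var, rng):
    """Complex Gaussian channel matrix with per-entry variance `var`."""
    return np.sqrt(var / 2) * (rng.normal(size=(rows, cols))
                               + 1j * rng.normal(size=(rows, cols)))

rng = np.random.default_rng(3)
K, Ns, Nr, Nd = 3, 3, 9, 3
Ps, Pr = 10.0, 100.0             # linear-scale transmit powers (illustrative)

H = [rayleigh(Nr, Ns, 1.0 / Ns, rng) for _ in range(K)]   # source-relay channels
G = [rayleigh(Nd, Nr, 1.0 / Nr, rng) for _ in range(K)]   # relay-destination channels

# NAF baseline: equal power per stream at each source, so tr(B_k B_k^H) = Ps.
B = [np.sqrt(Ps / Ns) * np.eye(Ns) for _ in range(K)]

# Assumed NAF relay scaling so that the relay power constraint is met with equality.
Lam = sum(Hk @ Bk @ Bk.conj().T @ Hk.conj().T for Hk, Bk in zip(H, B)) + 0.01 * np.eye(Nr)
F = np.sqrt(Pr / np.real(np.trace(Lam))) * np.eye(Nr)
print("relay transmit power:", np.real(np.trace(F @ Lam @ F.conj().T)))
```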
Since the NAF algorithm does not allocate the transmit power optimally and equally divides the power among multiple data streams instead, the inter-stream interference and the inter-user interference increase significantly at higher transmit power. Hence, the MSE of the NAF algorithm does not improve notably at higher transmit power.
Further analysis of the results in Fig. 2 reveals that the proposed simplified algorithm yields a worst-user MSE performance which is comparable to that of the iterative algorithm, even in the low P_s region. This observation illustrates that the approximation made in the simplified algorithm incurs negligible performance loss compared to the iterative optimal design. On the other hand, the computational complexity of the proposed simplified optimization is less than that of even one iteration of the iterative design, making it much more attractive for practical interference MIMO relay systems. The numbers of iterations required for convergence to within 10^-3 in terms of MSE in a random channel realization for the iterative algorithm are listed in Table 3.
Table 3 — P_s (dB): 0, 5, 10, 15, 20, 25; iterations: 3, 3, 3, 4, 5, 5.
In the next example, we focus on the proposed simplified optimization scheme and compare its performance with that of the proposed iterative approach and the NAF algorithm in terms of BER. Quadrature phase-shift keying (QPSK) signal constellations were assumed to modulate the transmitted signals, and maximum-likelihood detection is applied at the receivers. We set K = 3, N_s = 2, N_r = 6, N_d = 3, and transmit 1000 N_s randomly generated bits from each transmitter in each channel realization. The BER performance of the algorithms is shown in Fig. 3 versus P_s with P_r = 20 dB. As we can see, the proposed simplified algorithm yields a much lower BER compared to the conventional NAF scheme. Compared with the iterative approach, the simplified algorithm has a much lower computational cost at the price of marginal performance loss.
In the last couple of examples, we analyze the performance of the two-way MIMO relaying scheme. The NMSE performance of the two-way relaying algorithms is shown for different number of communication links K in Fig. 4. This time we set N s = 2, N r = KN s , and N d = 6 to plot the NMSE of the proposed algorithms versus P s with P r = 20 dB. It can be clearly seen from Fig. 4 that as the number of links increases, the worst-user MSE keeps increasing. This is due to the additional crosslink interferences generated by the increased number of active users.
In Fig. 5, the BER performance of the proposed two-way relaying algorithms is compared with the sum-MSE-based algorithms originally proposed for one-way relaying in [8][9][10]. QPSK signal constellations were assumed to modulate the transmitted signals. We set N_s = 2, K = 3, N_r = KN_s, N_d = 6, P_r = 20 dB, and transmit 1000 N_s randomly generated bits from each transmitter in each channel realization. Most importantly, the iterative sum-MSE minimization algorithms in [8][9][10] always penalize the user with the worst channel condition in the two-way relaying system.
Conclusions
We considered a two-hop interference MIMO relay system and developed schemes to minimize the worst-user MSE of signal estimation for both one- and two-way relaying. First, we proposed an iterative solution for both relaying schemes by solving several convex subproblems alternately in an iterative fashion. Then, to reduce the computational overhead of the optimization approach, we developed a simplified non-iterative algorithm using the error covariance matrix decomposition technique based on the high SNR assumption. Simulation results have illustrated that the proposed simplified approach performs nearly as well as the iterative approach, while offering a significant reduction in computational complexity. 1 The min-max MSE criterion is considered by many to be more desirable than the min-sum MSE criterion in [8][9][10] because fairness is imposed and weaker users are not sacrificed for the minimization of the sum. | 6,209 | 2017-03-07T00:00:00.000 | [
"Business",
"Computer Science"
] |
Current-induced magnetic skyrmions oscillator
Spin transfer nano-oscillators (STNOs) are nanoscale devices which are promising candidates for on-chip microwave signal sources. For application purposes, they are expected to be nano-sized, to have a broad working frequency, narrow spectral linewidth, high output power and low power consumption. In this paper, we demonstrate by micromagnetic simulation that magnetic skyrmions, topologically stable nanoscale magnetization configurations, can be excited into oscillation by a spin-polarized current. Thus, we propose a new kind of STNO using magnetic skyrmions. It is found that the working frequency of this oscillator can range from nearly 0 Hz to gigahertz. The linewidth can be smaller than 1 MHz. Furthermore, this device can work at a current density magnitude as small as 10⁸ A m⁻², and it is also expected to improve the output power. Our studies may contribute to the development of skyrmion-based microwave generators.
Introduction
Skyrmions are topologically protected objects with particle-like properties that play an important role in many different contexts, such as liquid crystals [1], quantum Hall magnets [2], Bose-Einstein condensates [3], etc. Recently, with the development of observation technology, particularly in the domain of neutron scattering [4], spin-polarized scanning tunneling microscopy (STM) [5], Lorentz force microscopy [6][7][8], and electron holography [9], skyrmions have been observed in bulk ferromagnetic crystals, thin films and nanowires. The spin texture of magnetic skyrmions is a stable configuration that, in most systems, results from a balance between the ferromagnetic exchange coupling, the Zeeman energy from the applied field and the chiral interaction, known as the Dzyaloshinskii-Moriya interaction (DMI) [10][11][12]. The DMI is induced because of the lack of, or breaking of, inversion symmetry in the magnetic structure, either due to the non-centrosymmetric crystal lattice or to the interfaces between different materials [8].
Magnetic skyrmions were originally discovered in bulk ferromagnets lacking inversion symmetry, such as MnSi [13], FeGe [7,14], Fe0.5Co0.5Si [15] and other B20 transition metal compounds [16]. They were then observed in thin films and nanowires of similar materials [9,17,18], and recently in the multiferroic insulator Cu2OSeO3 [19]. In addition, a more stable two-dimensional skyrmion crystal has been created artificially by nanopatterning [20], and a spontaneous skyrmion ground state has been obtained numerically in Co/Ru/Co multilayer nanodisks without the DMI (the competition of the exchange energy, demagnetization energy and uniaxial anisotropy energy acts similarly to the DMI) [21]. Meanwhile, an effective method was reported to nucleate or annihilate isolated skyrmions experimentally by using STM on one monolayer of Fe grown on Ir(111) [22].
It was recently realized that magnetic skyrmions not only have mathematical beauty but can also be used in spintronic devices. Recent research has demonstrated that magnetic skyrmions have great potential to act as the next generation of magnetic memories in nanowires [23][24][25] because of two evident advantages: (i) skyrmions have a stable small size (10-100 nm or even as small as a few atoms in diameter), suggesting ultra-high-density data encoding, and (ii) skyrmions can be easily manipulated using an extremely low spin current density of only about 10⁶ A m⁻², which is about 10⁵ to 10⁶ times smaller than that required to drive magnetic domain walls [14,26,27]. These two unique properties point to an opportunity for the realization of many other novel skyrmion spintronic devices. Here, we propose another spintronic application of a skyrmion device: a spin transfer nano-oscillator (STNO). The STNO is used to generate microwaves [28]. Key features of STNO devices are: (i) small size (i.e. at the nanoscale), and (ii) broad and steady working frequency. Currently, STNOs are roughly divided into two kinds: (a) precessional motion of uniform magnetization [29] and (b) magnetic vortex oscillations [30][31][32][33][34][35][36]. Vortex-based STNOs can present a high output power [37] and a narrow spectral linewidth [32]. However, the current density used to manipulate the oscillation of the vortex is of the magnitude of 10¹¹ to 10¹² A m⁻². Moreover, one nanodisk allows the existence of only one magnetic vortex, which limits the output power of an STNO, and the size of a magnetic vortex is larger than that of a skyrmion [31,32]. In this work, using micromagnetic simulation, we demonstrate that a magnetic skyrmion can be excited into oscillation by a spin-polarized current, and the linewidth can be smaller than 1 MHz. Based on this effect, we propose a spintronic application of skyrmion-based STNOs. To improve its performance, an STNO with multiple skyrmions is further proposed. It is found that the range of working frequency is hugely extended. This device can work at a current density magnitude of 10⁸ A m⁻², with the start-up time of the oscillation markedly reduced. This device is also expected to improve the output power.
Model and simulation details
Figure 1 shows a simple schematic diagram of our STNO device (with a single skyrmion here), which consists of a fixed layer, a non-magnetic spacer (either a non-magnetic metal or a thin insulator), a free layer, and a pair of point contact electrodes at the top and bottom. The magnetization orientations of both the free layer and the polarizer are perpendicular to the sample plane. The electrical current flows perpendicularly to the nanodisk through the point contact electrodes and has a uniform distribution. The OOMMF public code [38], including the extension modules for the DMI (developed by S. Rohart's group) [39] and for the STT [40] (which acts only inside the area of the electrodes), is used to simulate the free layer at T = 0 K. The time dependence of the spin dynamics of each unit cell follows the extended Landau-Lifshitz-Gilbert (LLG) equation

dm_i/dt = −γ m_i × H_eff + α m_i × (dm_i/dt) + (u/l) m_i × (m_p × m_i) − (ξu/l) m_i × m_p,

where the third and fourth terms describe the coupling between spins and the spin-polarized current [41], m_i is the unit vector of the local magnetization, γ is the gyromagnetic ratio, α is the Gilbert damping coefficient and is set to 0.01, m_p (inside the area of the electrodes) is the current polarization vector, l is the thickness of the free layer, and ξ is the amplitude of the out-of-plane torque relative to the in-plane torque and is set to 0.1. The parameter u is proportional to JP/M_s, where J is the current density, P is the spin polarization and is set to 0.3, and M_s is the saturation magnetization. The effective magnetic field H_eff is the sum of the demagnetization field, the anisotropy field, the exchange field, the Oersted field induced by the applied current (according to our simulations the skyrmion dynamics is only weakly affected by the Oersted field, so, in order to isolate the intrinsic dynamics driven by the STT, the Oersted field is neglected here), and the DMI field. For bulk materials lacking inversion symmetry, the DMI in an atomic description is given by

H_DMI = −Σ_<i,j> d_ij · (S_i × S_j),

where d_ij is the DMI vector for the atomic bond ij (oriented along u_ij, the unit vector between atoms i and j, for bulk DMI) and S_i is the atomic moment unit vector. As the film is very thin, the magnetization variation along the film normal can be neglected. Supposing that the atomic spin direction evolves slowly on the atomic scale, the DMI can be treated as continuous, and the DMI energy becomes

E_DMI = ∫ D m · (∇ × m) dV,

where D is the continuous effective DMI constant and m_x, m_y and m_z are the components of the normalized magnetization m = M/M_s. We consider as the free layer a thin film of a chiral magnet with a DMI that supports vortex-like skyrmions; it is a nanodisk with a thickness of 0.6 nm and a radius R that varies from 20 to 70 nm. The unit cell size is chosen to be 0.5 × 0.5 × 0.6 nm³ for R ⩽ 30 nm and 1 × 1 × 0.6 nm³ for R > 30 nm. The material parameters are chosen similar to those of [25]: exchange stiffness constant A = 1.5 × 10⁻¹¹ J m⁻¹, uniaxial anisotropy constant K_u = 0.8 × 10⁶ J m⁻³, saturation magnetization M_s = 5.8 × 10⁵ A m⁻¹, and D varies from 2 to 9 mJ m⁻².
Considering these values, it is reasonable to assume the existence of more than one skyrmion in one nanodisk [42,43]. Therefore, to improve the performance of the STNO, we also propose a multiple-skyrmion STNO device which we will study in detail later.
Skyrmion nucleation and its stability
We initially created a skyrmion at the center of the nanodisk, as shown in figure 2 (this can be done by local injection of a spin-polarized current pulse perpendicular to the nanodisk, as in the experiment of [22] or as in [44]). The radius of the nanodisk R is set to 30 nm. For D < 2.5 mJ m⁻², the relaxed state is a ferromagnetic (FM) state. For 2.5 mJ m⁻² ⩽ D ⩽ 8.0 mJ m⁻², the relaxed state is a skyrmion at the center of the nanodisk, and its size (the diameter of the circle with m_z = 0) increases with increasing D [45]. Note that, for our system with edge effects, the skyrmion size is slightly different from that of skyrmion lattices in unbounded films, where the lattice period (proportional to A/D, and strongly influenced by the external magnetic field) decreases with increasing D; the reason has been discussed in [25]. To understand why the skyrmion stays at the center of the nanodisk, figure 3 shows the x, y and z components of the magnetization along the diameter (along the x axis) of a nanodisk containing no skyrmion. m_z is −1 (m_x = m_y = 0, the magnetic moment pointing down) at the center (position = 30 nm) of the nanodisk. m_z then increases and m_y becomes larger closer to the edge of the nanodisk, which reveals that the edge magnetization rotates in a plane parallel to the edge surface because of the DMI. In our case there is a skyrmion (with the magnetization direction at the boundary pointing down) in the disk, so a skyrmion positioned away from the center increases the total energy of the system. Due to energy minimization, it is energetically more favorable for the skyrmion to stay at the center.
To confirm that the spin structure is that of a skyrmion, we calculated the skyrmion number using the following formula:

S = (1/4π) ∫ q dx dy,  with q = m · (∂m/∂x × ∂m/∂y),

where q is the topological density. S is approximately equal to 1, confirming that the spin structure is indeed a skyrmion state [8]. For D ⩾ 9.0 mJ m⁻², the relaxed state becomes a multiple-domain state. Since we are exploring the potential application of STNOs based on skyrmions, our later simulations are performed with D = 3 mJ m⁻².
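A small Python sketch of this calculation on a discretized magnetization field is given below; the finite-difference evaluation of q and the uniform grid spacing are assumptions of the sketch, not details taken from the simulation code.

```python
import numpy as np

def skyrmion_number(m, dx=1.0, dy=1.0):
    """Skyrmion number S = (1/4*pi) * integral of q dx dy, with the
    topological density q = m . (dm/dx x dm/dy); `m` has shape (nx, ny, 3)
    and holds unit magnetization vectors on a regular grid."""
    dmdx = np.gradient(m, dx, axis=0)
    dmdy = np.gradient(m, dy, axis=1)
    q = np.einsum('ijk,ijk->ij', m, np.cross(dmdx, dmdy))
    return q.sum() * dx * dy / (4 * np.pi), q
```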
Current-induced dynamics of a single skyrmion
Then the spin-polarized current is applied through the nanocontact; the yellow arrow in figure 1 indicates the direction of the current, with the actual electron flow in the opposite direction. Note that an idealized uniform current distribution between the two electrodes is assumed in the simulations. First, R is set to 30 nm, the radius of the electrode r_e is set to 4.24 nm (unless noted otherwise, R and r_e are set to 30 nm and 4.24 nm in the later simulations), and a current density J = 1 × 10¹¹ A m⁻² is applied. Figure 4(a) shows the resulting trajectory of the guiding center (R_x, R_y) of the skyrmion (the topological center of the skyrmion), which is defined by [46] as
R_x = ∫ x q dx dy / ∫ q dx dy,  R_y = ∫ y q dx dy / ∫ q dx dy.  (5)

The force from the perpendicular spin-polarized current gradually drives the skyrmion out of the nanocontact region along a spiral trajectory, and finally the skyrmion settles into a persistent oscillation around the injection site of the current. The final steady oscillation radius of the skyrmion core is r_s ≈ 10.98 nm. In order to investigate the time evolution of the skyrmion motion, figure 4(b) shows R_x as a function of simulation time. R_x starts to oscillate as soon as the current is applied, but at first the amplitude is very small. The amplitude then increases gradually with simulation time, R_x starts to oscillate much more vigorously at about τ = 25 ns (τ is taken as the start-up time of the oscillation), and the amplitude then rises rapidly to a steady value. The final frequency of the steady oscillation is f ≈ 0.76 GHz. To describe the dynamics of the single-skyrmion oscillation, we use the approach developed by Thiele [47] for the skyrmion's translational motion. The skyrmion is considered to be a rigid particle, so its magnetization texture is assumed to depend on time only through the guiding center R(t). Following the treatment in [48,49], we obtain

G × dR/dt − α D · dR/dt + F_st − ∇U(R) = 0,  (6)

where G is the gyrocoupling vector, F_st is the spin transfer force, and U(R) is the potential acting on the skyrmion due to the boundary effect, with U ≈ (1/2) k ρ², where k denotes the spring constant of the restoring force. The damping tensor D is determined by the spin texture and can be expressed in terms of a shape factor η of the skyrmion. The component F_i of the force F_st is obtained by integrating the spin-transfer torque contribution over the point contact area P_c (equation (7)). We work in polar coordinates (ρ, φ), since the system is a disk, and the potential U for the skyrmion is expected to be symmetric about the z axis, i.e., U = U(ρ). We can also split the force F_st into two components, F_st = F_t + F_ρ, where F_t (the azimuthal component) corresponds to the first spin-torque term while F_ρ (the radial component) is related to the ξ term in equation (7). Therefore, equation (6) can be rewritten in terms of ρ and the azimuthal motion, where ρ is equal to r_s for the steady orbit, yielding expressions (9) and (10) for the oscillation frequency. Then, combining our analytical results with micromagnetic simulations, we calculate the oscillation frequency. Figure 5(a) shows the spin transfer force components F_t and F_ρ as functions of the oscillation radius r_s. Both F_t and F_ρ decrease, as expected, with the increase of r_s. The potential U is equal to the variation of the total energy of the system (relative to the case where no current is applied), as shown in figure 5(b). The variation of the total energy is almost a linear function of r_s², as shown in the inset of figure 5(b). The frequencies obtained from equations (9) and (10) are very close to our micromagnetic simulation result of 0.76 GHz. Note that, according to our simulation, kρ ≫ F_ρ, so the oscillation frequency is mainly determined by the potential U instead of F_ρ.
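Continuing the sketch above, the guiding center of equation (5) and the orbit radius can be evaluated from the topological density q; the cell-centered coordinates used below are an assumption of the sketch.

```python
import numpy as np

def guiding_center(q, dx=1.0, dy=1.0):
    """Guiding center (Rx, Ry) from the topological density q(x, y),
    following equation (5): R_i = integral(x_i * q) / integral(q)."""
    nx, ny = q.shape
    x = (np.arange(nx) + 0.5) * dx
    y = (np.arange(ny) + 0.5) * dy
    X, Y = np.meshgrid(x, y, indexing='ij')
    return (X * q).sum() / q.sum(), (Y * q).sum() / q.sum()

# The orbit radius r_s is the distance of (Rx, Ry) from the disk center.
```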
STNO with a single skyrmion
Since the DC spin-polarized current excites gigahertz skyrmion oscillation in the free layer, which gives rise to a temporal variation of the resistance through the magnetoresistive (MR) effect, the device can be used as an STNO. Figure 6 shows a schematic diagram of an STNO device with multiple pairs of point contact electrodes. The electrode at the center of the nanodisk is used to drive the skyrmion, and the others, which have a centrosymmetric distribution, are used to detect the voltage signal (one pair of detection electrodes is sufficient, but in order to improve efficiency we use six pairs here). Note that the detection current should not be so large as to influence the motion of the skyrmion. Several parameters describe the STNO performance, such as the working frequency f_w, linewidth, power dissipation, output power, phase noise, and Q factor. The working frequency, output power and linewidth are the three most important, so our later simulations mainly focus on these three. The output power delivered from the STNO to a load can be approximated in terms of the DC current I, the DC resistance R, the oscillation amplitude ΔR of the resistance induced by I, and the impedance R_L of the load [28]. It is evident that maximizing P_out requires maximizing ΔR, which requires a large oscillation amplitude. When the skyrmion moves into the area of a detection electrode, the magnetizations of the free layer and the fixed layer in the area of that electrode are almost antiparallel and the resistance is relatively high, as in case 3 in figure 6. When the skyrmion moves out of the area of a detection electrode, the magnetizations are parallel and the resistance is relatively low, as in cases 1, 2, 4, 5 and 6 in figure 6. Thus, the value of ΔR for our skyrmion-based STNO should be larger than that of a vortex-based STNO. What is interesting is that we can obtain six signals with different phases from the six pairs of detection electrodes. Each signal is a pulse signal with a frequency in the microwave range, and the duty ratio is about 1/6. Another way to generate microwaves uses the electromagnetic induction principle: if a coil is placed on top, covering half of the nanodisk, the skyrmion acts as a small magnet moving in and out of the coil area at a frequency in the microwave range, so a time-varying voltage is induced in the coil.
We then investigated the oscillation frequency of the skyrmion. When the spin-polarized current density is J < 1 × 10¹⁰ A m⁻², it is too weak to move the skyrmion away from the center of the nanodisk. In contrast, for J > 18 × 10¹¹ A m⁻², the skyrmion is destroyed and the free layer becomes FM again. When 1 × 10¹⁰ A m⁻² ⩽ J ⩽ 18 × 10¹¹ A m⁻², the current is large enough to supply sufficient STT to cancel out the intrinsic damping losses, and it leads to a steady oscillation of the skyrmion in the free layer. The oscillation frequency f and the linewidth can be obtained by taking a fast Fourier transform (FFT) of R_x, as shown in the inset of figure 4(b) (J = 10 × 10¹¹ A m⁻²). It is particularly worth mentioning that the linewidth (full width at half maximum of the power spectrum) is smaller than 1 MHz, which is a huge advantage for STNO applications. Figure 7 shows f, r_s and τ as functions of the current density J. With the increase of J, both f and r_s increase rapidly at first, as expected, as shown in figures 7(a)-(b). Further increasing J results in r_s gradually approaching a stable value of 14.5 nm due to the effect of the nanodisk edge; in the meantime, f decreases slowly. The reason why the range of f is very narrow (f stays around 0.7 GHz) is that the oscillation frequency is mainly determined by the potential U rather than by F_ρ, as discussed in connection with equation (9). To understand the effect of the nanodisk edge, the inset of figure 7(b) shows contour plots of M_z with J = 18 × 10¹¹ A m⁻². The yellow area indicates the area of the electrodes and the black circle is a perfect circle for comparison (the skyrmion core is deflected to the left of the center of the circle), which reveals that the skyrmion is squeezed by the edge of the nanodisk, leading to a deformation of the skyrmion, especially near the edge of the nanodisk. In addition, it is notable that the start-up time τ of the oscillation is extremely long when J is small (when J = 1 × 10¹⁰ A m⁻², τ = 774 ns). As J increases, τ decreases rapidly and then becomes stable, as shown in figure 7(c). For applications, the STNO is expected to work at smaller J and smaller τ; thus τ should be reduced, which will be discussed later.
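The frequency and linewidth extraction described here can be reproduced with a simple power-spectrum estimate of R_x(t); the sketch below assumes uniformly sampled data and reports only the dominant peak.

```python
import numpy as np

def oscillation_spectrum(rx, dt):
    """Power spectrum of the guiding-center coordinate Rx(t); the peak
    position gives the oscillation frequency f, and the full width at half
    maximum of that peak gives the linewidth."""
    rx = np.asarray(rx, float) - np.mean(rx)   # remove the DC offset
    spec = np.abs(np.fft.rfft(rx)) ** 2
    freqs = np.fft.rfftfreq(len(rx), d=dt)
    f_peak = freqs[np.argmax(spec[1:]) + 1]    # skip the zero-frequency bin
    return freqs, spec, f_peak
```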
Then J is set to 4 × 10¹¹ A m⁻² and we try to adjust f by regulating r_e. Figures 8(a)-(b) show the simulated f and r_s as functions of r_e. r_s increases almost linearly with r_e, while f first increases and then decreases with the increase of r_e. The range of f is again narrow, for the same reason as in equation (9). Note that r_e should not be larger than the skyrmion size; otherwise we cannot obtain a steady-state oscillation of the skyrmion, which stays at the center of the nanodisk instead. f can also be regulated by changing the radius of the nanodisk R, because the potential U can be adjusted strongly by the size of the nanodisk. Figure 9 shows f and r_s as functions of R with r_e = 2 nm and J = 2 × 10¹¹ A m⁻². r_s increases almost linearly with R, as shown in figure 9(b). In the meantime, f decreases, as expected, with the increase of R and finally approaches about 0 Hz as R approaches 70 nm, because the potential U decreases with the increase of R. Note that when R > 70 nm, the skyrmion is driven to a certain position and then no longer moves, which reveals that the oscillation of the skyrmion depends strongly on the forces from both the spin-polarized current and the edge of the nanodisk.
Stability of the STNO-a non-magnetic impurity in the nanodisk
In practical applications, it is inevitable that the nanodisk is impure, which may affect the STNO's performance.
To study the effect of impurities on the skyrmion dynamics, we place a non-magnetic circular impurity (a hollow area 4 nm in diameter) 25 nm from the center of the nanodisk, as shown in figure 10(a). It has been demonstrated that topological protection can drastically reduce the influence of defects on skyrmions [25]. Figure 10(b) shows the trajectory of the skyrmion core with J = 3.5 × 10¹¹ A m⁻², which reveals that the skyrmion can still oscillate, though its trajectory is slightly distorted (r_s becomes larger) at the location of the impurity. The reason why r_s becomes larger is that the size of the skyrmion changes (becomes smaller) while we estimate the location of the skyrmion from its guiding center. In fact, both the size and the shape of the skyrmion show slight variations. The deformation of the skyrmion is reflected in the total magnetization of the nanodisk; thus, to obtain a better understanding of the skyrmion dynamics near the impurity, we plot the projection of the trajectory in the ⟨M_y⟩ versus ⟨M_x⟩ plane (where M_x and M_y are the x- and y-axis components of the total magnetization, respectively) for J = 1, 2, 3 and 3.5 × 10¹¹ A m⁻², as shown in figure 10(c). As noted previously, r_s increases almost linearly with R for the same r_e and J, and the skyrmion is squeezed by the edge of the nanodisk, leading to a deformation of the skyrmion, especially near the edge of the nanodisk. For example, when the skyrmion is at the right side of the nanodisk (the maximum of x in real space), the deformation occurs mainly at the right side of the skyrmion, leading to a variation of M_y, because the magnetization there is along the y axis. Figure 10(c) shows that the ⟨M_y⟩ versus ⟨M_x⟩ trajectory is deformed, and the deformation becomes increasingly obvious with the increase of J. If J is increased further, beyond 3.5 × 10¹¹ A m⁻², the skyrmion is destroyed.
Let us give a brief summary of the STNO with a single skyrmion. The linewidth is smaller than 1 MHz and f can be adjusted by changing J, r e and R. However, for regulating J and r e , the range of f is very narrow (f is just around 0.7 GHz). For regulating R, f ranges from 0 Hz to about 1.4 GHz, but this is inconvenient for applications. In addition, τ is extremely long when J is small. Finally, the output power is limited when there is only one skyrmion in the nanodisk. To solve these problems, we try to nucleate multiple skyrmions in one nanodisk (figure 11), which is expected to decrease τ, while increasing output power and also the range of f.
STNOs with multiple skyrmions
As mentioned previously, it is reasonable to have more than one skyrmion in one nanodisk. Figure 11 shows the relaxed states of multiple skyrmions (two, four, five and six, respectively) in a nanodisk of R = 50 nm, forming a centrosymmetric structure. Thus, we propose a multiple-electrode STNO device as shown in figure 12 (R is set to 50 nm here and in the later simulations), in which there are multiple (six) point contact electrodes. In the relaxed state each skyrmion is subject to two forces, namely the repulsive force F_ss from the other skyrmions [50] and the repulsive force F_se away from the edge of the nanodisk. Since the skyrmions are in equilibrium, F_ss is balanced by F_se, as shown in figure 11(a). With an increasing number of skyrmions, the resultant force from the other skyrmions becomes stronger, so the distance r_s between the skyrmions and the center of the nanodisk increases, as shown in figures 11(d)-(f).
Then the spin-polarized current is applied to drive the skyrmion oscillation. One oscillation cycle (of period T, with corresponding oscillation frequency f) is completed when each skyrmion has rotated by 2π, as in going from the state of figure 11(a) to the state of figure 11(c). When one skyrmion moves to the location of the next skyrmion, one working period of the output signal is completed, so the working frequency is f_w = Nf, where N is the number of skyrmions. For applications, we are more concerned with f_w. Figure 13(a) shows f_w as a function of J for different N with r_e = 7 nm. It can be seen that f_w increases with the increase of J. For the case of N = 1, f_w ranges from 0.07 to 0.14 GHz, while for the cases of N > 1 the range of f_w is hugely extended. Taking the case of N = 2 as an example, f_w increases rapidly from nearly 0 GHz to 0.25 GHz and then slowly increases to 0.33 GHz. For the case of N = 3, the maximum f_w (0.36 GHz) is somewhat larger than that for two skyrmions, which is easy to understand. The point contact electrode is at the center of the nanodisk, and the relaxed state of the skyrmion is also at the center for the case of N = 1, so the spin-polarized current acts entirely on the skyrmion, and a larger J (J > 18 × 10¹¹ A m⁻²) destroys it. With the increase of N, the relaxed positions of the skyrmions move off-center and r_s increases, so the overlap between the spin-polarized current and the skyrmions becomes smaller and smaller; the skyrmions can therefore withstand a larger J, and the initial growth rate of f_w for the case of N = 2 is larger than that for N = 3. It is worth noting that there is hardly any overlap between the spin-polarized current and the skyrmions for N > 3 when r_e = 7 nm, so the skyrmions do not move. In order to study this issue, r_e is enlarged to 15.8 nm. Figure 13(b) shows the results for r_e = 15.8 nm. It can be seen that the skyrmions are driven into oscillation for all simulated systems. Similar to the case of r_e = 7 nm, for N > 1, the smaller N, the larger the initial growth rate of f_w. Moreover, the maximum f_w is relatively large for larger N, as expected, except for N = 6. It can be expected that f_w could be larger if we continued to increase J for N = 6, but such a large current makes it pointless for applications. It is worth mentioning that the maximum f_w for N > 1 is 1.07 GHz, which is about 4.46 times higher than that of a single skyrmion (0.24 GHz) in these simulations (J < 300 × 10¹¹ A m⁻²). The inset of figure 13(b) shows an enlarged view at smaller J, which reveals that the STNO can work at a current density magnitude of 10⁸ A m⁻² (2 × 10⁸ A m⁻² at least), which is of great benefit for applications. We then compare f_w for different N with r_e = 15.8 nm and J = 1 × 10¹² A m⁻², as shown in figure 13(c). f_w first increases, reaches a maximum at N = 4 and then decreases with the increase of N; f_w for N = 4 is 0.79 GHz, which is about 3.27 times higher than that of a single skyrmion. The reason why f_w reaches its maximum at N = 4 is related to the different initial growth rates, and the peak position shifts to a higher value with increasing J. Therefore, if the STNO works at r_e = 15.8 nm and J = 1 × 10¹² A m⁻², N = 4 is the best choice. In addition, the previous section showed that τ is extremely long when J is small for N = 1, because the skyrmion is initially at the center of the nanodisk, whereas for N > 1 the skyrmions are initially off-center.
Figure 13(d) shows the x component of the total magnetization (note that the variation of M_x is weak because the skyrmion is a circularly symmetric object) as a function of simulation time with r_e = 15.8 nm and J = 5 × 10⁸ A m⁻² for N = 3, which reveals that τ is extremely small. Thus, STNOs with multiple skyrmions can have a faster response.
For STNOs with multiple skyrmions, the total number of detection electrodes should be equal to the number of skyrmions to ensure that the skyrmions move into and out of the areas of the detection electrodes synchronously. The six signals can then be combined to enhance the output power, since they are synchronous.
In practice, there is a possibility that one skyrmion moves to the center of the nanodisk, which will impede the application. In order to solve this problem, we put a hollow area at the center of the nanodisk, and this provides an energy barrier that prevents the skyrmions from residing at the center of the disk, as shown in figure 14.
Similar to vortex-like skyrmions, hedgehog skyrmions in nanodisks can also oscillate when driven by a point contact current that is perpendicular to the nanodisk, which will be discussed in more detail in our next work.
Conclusion
In summary, we propose a spintronic application of skyrmion-based STNOs. For an STNO with one skyrmion, the working frequency can be adjusted by the current density, the radius of the point contact electrodes and the radius of the nanodisk. Additionally, the linewidth can be smaller than 1 MHz, which offers a huge advantage for STNO applications. For STNOs with multiple skyrmions, the range of working frequency is hugely extended; the minimum working frequency can be close to 0 Hz and the maximum is 1.07 GHz, which is about 4.46 times higher than that of a single skyrmion for R = 50 nm. Moreover, this device can work at a current density magnitude of 10⁸ A m⁻², and the start-up time of the oscillation is markedly reduced. This device is also expected to improve the output power. Our studies may contribute to the development of skyrmion-based microwave generators.
"Physics"
] |
Early Detection of Diabetes Using Random Forest Algorithm
Diabetes is one of the most chronic and deadly diseases. According to data from WHO in 2021, there were approximately 422 million adults living with diabetes worldwide, and this number is expected to continue to increase in the future due to various factors. Many studies have been conducted for early detection of diabetes by focusing on improving accuracy. However, a big problem in diabetes prediction is the selection of the right classification algorithm. This study aims to improve the accuracy of early detection of diabetes by implementing the Random Forest algorithm model. This research was conducted with the stages of data collection, data preprocessing, split data, modeling, and evaluation. This research uses the Pima Indian Diabetes data set. The results showed that the diabetes early detection model using the Random Forest algorithm produced an accuracy of 87%. This research shows that by using the Random Forest algorithm model, the performance of early detection of diabetes can be improved. However, there is still room for optimization of this performance, which is recommended for further research to carry out feature selection, data balancing, more complex model building, and exploring larger data.
Introduction
Diabetes mellitus is a chronic and deadly disease that has become a global health problem because its prevalence continues to increase from year to year [1]. According to the World Health Organization (WHO), diabetes mellitus is a chronic condition that occurs when the pancreas is no longer able to produce enough insulin or when the body is not able to use insulin effectively, which leads to increased levels of glucose (sugar) in the blood and can cause various health complications [2]. Diabetes can be classified into four types: type 1, type 2, gestational, and prediabetes [3]. The long-term effects of the disease include blindness, kidney failure, amputation, and even death. According to WHO data, in 2021 there were approximately 422 million adults living with diabetes worldwide. This number is expected to continue to rise in the future, to 625 million, due to factors such as an aging population, lifestyle changes, and increasing rates of obesity [4].
Research into the early diagnosis of diabetes is a general need because of the number of people who have diabetes around the world. The increase in cases is due to lifestyle changes and unhealthy diets. Automatic diagnosis can identify fatal complications such as heart disease, kidney problems, nerve damage, and eye problems. Nevertheless, automatic screening for diabetes can impose a huge financial burden on individuals and on the health system in general. Given the many cases of diabetes nowadays, it is necessary to take early action to address problems in the future, namely by making early predictions about diabetes.
This prediction of diabetes can be done by utilizing data from patients with diabetes that has been stored in a database to create a pattern for determining diabetes [5]. Machine learning (ML) technology has been widely used in a variety of fields [6], especially in the early detection of diabetes. Over the years, machine learning has solved many sophisticated and complex problems in a variety of fields such as marketing, business and retail, natural language processing, health, robotics, imaging, sound, gaming, etc. [7].
Previous researchers [8] created a model for diabetes prediction using two datasets, the Pima Indian Diabetes data and an early-stage diabetes risk prediction dataset, and evaluated several models such as Naïve Bayes, K-nearest neighbors, Decision Tree, Logistic Regression, Random Forest, SVM, AdaBoost classifier, gradient boosting classifier, and extra trees classifier. Among the proposed methods, the Super Learner Classifier model had the highest accuracy, 86%, for the Pima Indian dataset. For the early-stage diabetes risk prediction dataset, the KNN model had the highest accuracy, 97%. The study [9] used the DLPD (Deep Learning for Predicting Diabetes) model to predict diabetes and produced an accuracy of 94.02174% for the diabetes-type dataset and 99.4112% for the Pima Indian diabetes dataset; the experimental results show improvements of the recommended model over the baseline methods. In addition, a model for the prediction of diabetes has also been developed [10] using long short-term memory (LSTM) networks, convolutional neural networks (CNN), and their combination to extract complex temporal dynamic characteristics from HRV input data. These characteristics are passed to a support vector machine (SVM) for classification, and the proposed classification system can help doctors diagnose diabetes from an ECG signal with a very high accuracy of 95.7 percent. Further studies conducted by [11] used Gradient Boosting, Logistic Regression, and Naive Bayes for diabetes diagnosis, obtaining 86% accuracy for gradient boosting, 79% for logistic regression, and 77% for naive Bayes. Further, [12] built and evaluated predictive machine learning models using logistic regression, support vector machines, K-nearest neighbors, random forests, naive Bayes, and gradient boosting algorithms; with predictive accuracies of 86.28% and 86.29%, respectively, the random forest and gradient boosting models proved to be the best prediction models. Besides, research conducted by [13] detected diabetes mellitus early using predictive analysis; the results showed that the decision tree algorithm and the random forest had the highest specificities of 98.20% and 98.00%, respectively, performing best for the analysis of diabetes data, while naive Bayes achieved the best accuracy of 82.30%. Further, [14] developed a new super-learning model and obtained the best accuracy results in the detection of diabetes mellitus compared to the base learners for the early-stage diabetes risk prediction (99.6%), PIMA (92%), and Diabetes 130-US hospitals (98%) datasets, respectively.
Method
The method used in this research is presented as a flowchart in Figure 1 and starts with data collection, followed by data preprocessing, data splitting, modeling, and evaluation. Each step in this study was conducted sequentially. A detailed explanation of each step taken in this research is given below.
1. Data Collection
To detect diabetes in a person, this research uses the Pima Indian dataset, which is publicly accessible on the Kaggle platform [15]. The Pima Indian dataset has been one of the most frequently used datasets in machine learning and deep learning research on diabetes detection because it has a large sample size and a variety of features that include health history and demographic characteristics [16]. The dataset consists of 768 individual records with an age range of 21 to 81 years. A total of 500 records belong to the negative class, i.e., individuals who are not diagnosed with diabetes, while the remaining 268 records are from individuals diagnosed with diabetes. The dataset consists of 8 variables, namely number of pregnancies, glucose concentration, diastolic blood pressure, triceps skinfold thickness, 2-hour serum insulin, body mass index, diabetes pedigree function (a measure of hereditary risk), and age.
2. Preprocessing Data
Data preprocessing is done to prepare the data so that the modeling process runs well. Data preprocessing can include cleaning the data, changing data types, and other operations. In the preprocessing stage of this research, NaN values or empty rows in the dataset were identified; however, the dataset turned out to be complete, with no rows containing NaN. Next, the value 0 was checked in each feature. A zero can be legitimate, as in the pregnancies feature, where it makes sense that someone has never been pregnant. However, for features such as glucose, blood pressure, skin thickness, insulin, and BMI, a value of 0 is impossible. The mode value was therefore imputed in place of the zeros in each of these variables.
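A minimal Python/pandas sketch of this imputation step is shown below; the column names follow the Kaggle release of the Pima Indian Diabetes dataset, and computing the mode over the non-zero entries is an assumption of the sketch.

```python
import pandas as pd

df = pd.read_csv("diabetes.csv")  # Pima Indian Diabetes dataset (Kaggle)

# Features for which a value of 0 is physiologically implausible.
zero_invalid = ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]
for col in zero_invalid:
    mode_value = df.loc[df[col] != 0, col].mode()[0]
    df[col] = df[col].replace(0, mode_value)
```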
3. Split Data
At this stage, the dataset is divided: the patient data processed in the previous stage is split into 3 sets, namely training data, validation data, and testing data. The training data is used to train the Random Forest model, the validation data is used to test the performance of the model during training, and the testing data is used to test the performance of the model after training. The percentages used are 60% for training data, 25% for validation data, and 20% for testing data.
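Continuing the sketch above, the two-stage split can be written with scikit-learn as sketched below. Because the stated percentages add up to more than 100%, the fractions used here (a 60/20/20 split) are an assumption for illustration; the Outcome column name follows the Kaggle file.

```python
from sklearn.model_selection import train_test_split

X = df.drop(columns=["Outcome"])
y = df["Outcome"]

# First split off the test set, then carve a validation set out of the rest:
# 0.25 of the remaining 80% yields a 60/20/20 train/validation/test split.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.20,
                                                  random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest,
                                                  test_size=0.25,
                                                  random_state=42)
```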
4. Random Forest
The Random Forest algorithm is proposed to increase the accuracy of early diabetes prediction. This algorithm is a machine learning classifier that works by building an ensemble of decision trees, and it can be used for both regression and classification [17]. Random Forest belongs to the supervised learning family and was developed by Leo Breiman [18]. This method is one of the most accurate classification methods used in making predictions; it can handle very large numbers of input variables without overfitting and helps reduce the correlation between individual decision trees.

The relationship pattern between the features in the dataset and its class, namely 'outcome', which consists of class 1 (diabetic individuals) and class 0 (healthy individuals), can be seen in Figure 3. Based on the correlation between the pregnancies variable and the age feature, it can be seen that age and pregnancies are spread across both healthy and diabetic individuals; however, there is a slight tendency for healthy (non-diabetic) individuals to be around 30 years old or below and not to have been pregnant often. For the correlation of glucose and age, we can see a pattern where healthy (non-diabetic) individuals are mostly less than 40 years old with glucose less than 140. For the correlation of blood pressure and age, healthy (non-diabetic) individuals are those around 30 years old and below with blood pressure less than 100. For the correlation between skin thickness and age, healthy (non-diabetic) individuals are those around 30 years old and below with skin thickness around 40 and below. For the correlation of insulin and age, healthy (non-diabetic) individuals are those around 30 years old and below with insulin less than 400; however, there are also some individuals aged 50 to 60 years who do not suffer from diabetes although their insulin is very low. For the correlation of BMI and age, healthy (non-diabetic) individuals are those around 40 years old and below with BMI less than 40. For the correlation of the diabetes pedigree function and age, healthy (non-diabetic) individuals are those around 40 years old and below with a diabetes pedigree function of less than 0.8.
The model evaluation stage is carried out to measure performance; the metric used in this research is accuracy. From the process carried out and the model that was built, this research obtained a training accuracy of 78.18% and a testing accuracy of 87%. A comparison of these findings with previous research is given in Table 1. The research conducted here, applying the Random Forest algorithm, achieves higher accuracy than previous research using the super learner classifier model on the same dataset. The way the data were divided into training, validation, and testing sets during preprocessing likely contributed to these results. This research therefore shows that the Random Forest model, together with the preprocessing steps carried out, is superior in separating individuals suffering from diabetes from healthy individuals.
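Continuing the sketches above, training and evaluating the classifier takes only a few lines; the hyperparameters shown are scikit-learn defaults, since the settings used in the study are not reported.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```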
Conclusion
In this study, early detection of diabetes was carried out using the Random Forest algorithm. This research shows the superiority of the Random Forest algorithm over other algorithms for early detection of diabetes on the same data. The model built in this study achieved an accuracy of 87%, an improvement in performance over previous research. However, there is room for further development. Future research is expected to consider other factors to improve detection accuracy, such as implementing data balancing methods, applying feature selection, building more complex models, and exploring larger datasets.
Figure 1. Flowchart of the proposed method.
Figure 2. Comparison between patients with diabetes and patients without diabetes.
Figure 3(a). Correlation between class and the pregnancies feature.
Figure 3(d). Correlation between class and the skinthickness feature.
Table 1. Comparison of accuracy.
"Medicine",
"Computer Science"
] |
Pharmacological and toxicological studies of a novel goserelin acetate extended-release microspheres in rats
LY01005 is an investigational new drug product of goserelin acetate which is formulated as extended-release microspheres for intramuscular injection. To support the proposed clinical trials and marketing application of LY01005, pharmacodynamics, pharmacokinetics and toxicity studies were performed in rats. In the pharmacological study in rats, LY01005 induced an initial supra-physiological increase of testosterone at 24 h post-dosing, which then rapidly fell to castration level. The potency of LY01005 was comparable to that of the comparator Zoladex®, but its effect lasted longer and was more stable. A single-dose pharmacokinetics study in rats demonstrated that the Cmax and AUClast of LY01005 increased in a dose-proportional manner in the range of 0.45–1.80 mg/kg and that the relative bioavailability of LY01005 versus Zoladex® was 101.0%. In the toxicity studies, almost all of the positive findings for LY01005 in rats, including the changes in hormones (follicle-stimulating hormone, luteinizing hormone, testosterone, progestin) and in the reproductive system (uterus, ovary, vagina, cervix uteri, mammary gland, testis, epididymis and prostate), were related to the direct pharmacological effects of goserelin. Mild histopathological changes reflecting a foreign-body removal reaction induced by the excipient were also observed. In conclusion, LY01005 displayed a sustained-release profile of goserelin and exerted continuous efficacy in vivo in animal models, with a potency comparable to, but a more sustained effect than, that of Zoladex®. The safety profile of LY01005 was largely the same as that of Zoladex®. These results strongly support the planned LY01005 clinical trials.
Introduction
Goserelin is a potent synthetic decapeptide agonist analogue of the naturally occurring hormone known as gonadotropin releasing hormone (GnRH) (Chrisp and Goa, 1991). The pharmacological effects of goserelin are related to its occupation of the majority of GnRH receptors present on the pituitary which then become internalized, disappearing from the cell surface. As a result of the receptor occupancy, there is an initial surge of follicle-stimulating hormone (FSH) and luteinizing hormone (LH) secretion. Then, the secretions of FSH and LH are markedly suppressed due to the desensitization of GnRH receptors caused by the continued presence of goserelin. In males, this suppression leads to testes atrophy, suppression of testosterone secretion and prostate involution. In females, this will result in ovarian atrophy, a decrease in estradiol to castrate or post-menopausal values and involution of the uterus and mammary gland, as well as regression of sex hormone-responsive tumors (Brogden and Faulds, 1995;Cheer et al., 2005).
As compared to daily drug administration, depot formulations and implantable devices have many advantages. They can enhance patient compliance and reduce the total dose of drug required to achieve castrate testosterone levels. They can also minimize tissue damages related to frequent injections. Zoladex ® , a long-acting subcutaneous (s.c.) implantable formulation of goserelin acetate, was developed by AstraZeneca UK Limited and approved by FDA. It is indicated for the management of locally confined carcinoma of the prostate and palliative treatment of advanced carcinoma of the prostate in men. It is also indicated for the management of endometriosis and used as an endometrial-thinning agent prior to endometrial ablation for dysfunctional uterine bleeding in women and palliative treatment of breast cancer in pre-and perimenopausal women. However, implant formulations employed by Zoladex ® require the concurrent administration of a local anesthetic or a special injection technique (Perren et al., 1986;Filicori et al., 1993;Nukui and Morita, 2011). It is clear that there is clinical need to improve the administration convenience and patient compliance of the drug. This can be achieved by developing an injectable goserelin formulation to avoid the issues related to implant formulation.
Goserelin acetate extended-release microspheres for injection (Code name: LY01005) is a novel liquid intramuscular (i.m.) injection of goserelin acetate developed by Luye Pharmaceutical Co., Ltd. (Luye pharm). Unlike Zoladex ® implant (using 16-gauge needle), LY01005 can be given through a fine 21-gauge needle. As such, LY01005 does not require the concurrent administration of a local anesthetic or a special injection technique (e.g., using ice cubes or vapo-coolant spray for relieving pain induced by Zoladex ® implant injection). LY01005 could minimize the discomfort to patients and reduce the risk of injection site hematoma (especially important for patients who also take anticoagulants). In addition, LY01005 only requires conventional injection method to deliver the drug which can be done by a physician or other members of the healthcare team. The injection frequency can be tailored to enable more individualized, patient-orientated treatment, which can be given at home or in the office, and timed to coincide with regular check-ups.
Here we report a series of pharmacodynamics, pharmacokinetics, and toxicological studies of LY01005. Using Zoladex ® as a comparator, these studies were intended to support the clinical trials and marketing application of LY01005 as a reformulated drug product.
2 Materials and methods

2.1 Chemicals and reagents

LY01005 was provided by Luye Pharma. The active ingredient, goserelin acetate, was micro-encapsulated in PLGA at a concentration of 40 mg of goserelin acetate per gram of microspheres (4% drug content). The diluent for i.m. injection was a sterile, clear and colorless solution containing carboxymethylcellulose sodium (SCMC), sodium chloride and water for injection. Placebo microspheres (without goserelin acetate) and vehicle (1% SCMC) were also supplied by Luye Pharma. LY01005 and placebo microspheres were suspended in SCMC to the desired concentrations.

For the pharmacology studies, blood samples were collected on days 2, 4, 7, 10, 14, 18, 21, 24, 28, 32 and 35 after injection. Blood samples without anticoagulation were centrifuged at 3,000 rpm for 15 min at room temperature. Serum testosterone levels were analyzed using the Testosterone Parameter Assay Kit (R&D Systems, Inc., United States) according to the manufacturer's instructions.
Functional observational battery test in rats
Functional observational battery (FOB) tests were performed to evaluate the potential effects of LY01005 on neurobehavioral functions in rats. Rats were randomly assigned to the following groups: vehicle control (1% SCMC), placebo microspheres (287.54 mg/kg) and LY01005 at doses of 1.2, 3.6, 10.8 mg/kg, with 10 rats (n = 5/sex) per group. Animals received a single i.m. injection with the same dose volume of 3 mL/kg.
Effect of LY01005 on the respiratory function in conscious rats
Conscious rats were used to evaluate the potential effect of LY01005 on the respiratory function by DSI Respiratory Whole Body Plethysmography System. The rats were randomly assigned to groups of vehicle control (1% SCMC), placebo microspheres, or LY01005 at doses of 1.2, 3.6 and 10.8 mg/kg, respectively, with a dose volume of 3 mL/kg. The parameters of respiration rate, tidal volume and minute ventilation rate were evaluated at 1, 72, 240 and 648 h after i.m. injection.
Pharmacokinetics study in rats
Male rats were randomly divided into three groups (n = 6/ group) and were administered with a single i.m. dose of 0.45, 0.90 or 1.80 mg/kg of LY01005 (peptide base). The comparison experiment between LY01005 and Zoladex ® at the dosage of 1.80 mg/kg was performed in male rats (n = 6/group). Blood samples (0.5 mL) were collected from the eye socket prior to dosing and at 0.5, 1, and 6 h after injection on day 1 and on days 2, 4, 7, 9, 10, 11, 13, 15, 18, 21, 24 and 28. The plasma concentrations of goserelin were determined by LC-MS/MS. The pharmacokinetic parameters of goserelin were calculated using non-compartmental methods by the software Phoenix WinNonlin 6.3 (Pharsight, Mountain View, CA, United States).
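The pharmacokinetic parameters (Cmax, Tmax, AUClast) follow directly from the concentration-time profile; a minimal sketch of the calculation is given below for illustration, whereas the actual analysis used non-compartmental methods in Phoenix WinNonlin. The variable `measured_concentrations` is a placeholder for the assay results.

```python
import numpy as np

def nca_parameters(t, c):
    """Basic non-compartmental parameters from a concentration-time profile:
    Cmax, Tmax and AUClast by the linear trapezoidal rule."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    auc_last = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)
    return c.max(), t[c.argmax()], auc_last

# Sampling times from the study design, converted to days.
t_days = [0.5/24, 1/24, 6/24, 2, 4, 7, 9, 10, 11, 13, 15, 18, 21, 24, 28]
# cmax, tmax, auc_last = nca_parameters(t_days, measured_concentrations)
```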
2.5 Toxicology studies

2.5.1 Acute toxicity study in rats

Rats (n = 5/sex/group) were i.m. injected with a single dose of vehicle control (1% SCMC), placebo microspheres, or LY01005 at doses of 3.75, 15 and 60 mg/kg, respectively, using a dose volume of 12 mL/kg. The dosing day was defined as Day 1. Parameters for evaluation included mortality, clinical signs, body weights, food consumption, hematology, clinical chemistry, and gross and microscopic pathology evaluations. Scheduled necropsies were conducted in all animals on Day 29.
Sixteen-week subchronic toxicity study in rats
Rats (n = 15/sex/group) were administered an i.m. injection of vehicle (1% SCMC), placebo microspheres (287.54 mg/kg) or LY01005 at doses of 1.2, 3.6 or 10.8 mg/kg, respectively, once every 4 weeks for 16 weeks, followed by an 8-week recovery period. The drug was injected into the long adductor of the hindlimb using a dosing volume of 3.0 mL/kg. At the end of the treatment and recovery periods, 10 rats/sex/group and 5 rats/sex/group, respectively, were euthanized for pathological evaluations. Parameters evaluated included mortality, clinical signs, body weight, food consumption, hormone levels, hematology, clinical chemistry, organ weights, and gross and microscopic pathology.
The toxicokinetics study was performed in combination with the subchronic toxicity study. Each LY01005 group included 24 rats (n = 12/sex/group) for the TK study. Blood samples were collected from the rats in the LY01005 groups at pre-dose (0 h) and at 0.5, 1, 6, 24, 48, 96, 168, 264, 336, 432, 504 and 672 h post-dose after the 1st and the 4th dosing. The LC-MS/MS method was validated to determine the plasma concentrations of goserelin.
Statistical analysis
Quantitative data such as testosterone levels, body weight, food consumption, hematology, clinical chemistry, organ weights and ratios were presented as mean ± standard deviation. Quantitative data were evaluated using one-way analysis of variance (ANOVA). If the ANOVA was significant (p ≤ 0.05), Dunnett's t-test was then used for pairwise comparisons. Levene's test was used to analyze the homogeneity of variances. In the case of heterogeneity of variances, Kruskal-Wallis (K-W) H tests were used, and if there was a significant difference (p ≤ 0.05), Mann-Whitney (M-W) U tests were used for pairwise comparisons. The PRISTIMA 6.1.1 system (Xybion Medical Systems Corporation, United States) was used for the statistical analysis of body weight, food consumption, hematology, clinical chemistry, organ weights and ratios. GraphPad Prism 5.0 was used for the statistical analysis of testosterone levels.
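The decision flow described above can be sketched in Python with SciPy (Dunnett's test requires SciPy ≥ 1.11); this is an illustrative outline only, not the validated PRISTIMA/Prism workflow used in the study.

```python
from scipy import stats

def compare_to_control(control, *treated, alpha=0.05):
    """Levene's test for homogeneity of variances, then one-way ANOVA
    followed by Dunnett's test, or Kruskal-Wallis followed by pairwise
    Mann-Whitney U tests; returns the pairwise p-values, or None if the
    overall test is not significant."""
    if stats.levene(control, *treated).pvalue > alpha:
        if stats.f_oneway(control, *treated).pvalue <= alpha:
            return list(stats.dunnett(*treated, control=control).pvalue)
    else:
        if stats.kruskal(control, *treated).pvalue <= alpha:
            return [stats.mannwhitneyu(g, control).pvalue for g in treated]
    return None
```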
Single-dosing pharmacology study in rats
The pharmacological effects of goserelin mostly result from its occupation of the majority of GnRH receptors present in the pituitary gland. Initially there is a supra-physiological elevation of testosterone levels in males and of estradiol levels in females post-dosing. Thereafter, testicular atrophy and a decrease in testosterone secretion are observed in males, and ovarian atrophy and a decrease in estradiol to castrate or post-menopausal levels are observed in females, due to the long-lasting GnRH receptor blockade. Based on this unique pharmacological mechanism of action of goserelin, the serum testosterone level was chosen here as a pharmacodynamics biomarker in male rats. The results showed the expected initial supra-physiological elevation of testosterone at 24 h post-dosing in LY01005-treated rats. The testosterone level then rapidly fell to castration level on Day 4 and was maintained there until Day 35 in the 0.72 and 1.44 mg/kg LY01005-treated groups. There was no difference between LY01005 and Zoladex® treatments in their effects on serum testosterone levels from Day 1 to Day 21. However, the serum testosterone level in LY01005-treated rats was significantly lower than that of Zoladex®-treated rats from Day 24 to Day 35. The potency of LY01005 was similar to that of Zoladex®, but its effect lasted longer and was more stable (Figure 1).
Multiple-dosing pharmacology study in rats
Acute-on-chronic phenomenon in GnRH agonist therapy refers to the paradoxical increase of serum testosterone level at the end of the dosage interval which is generally attributed to the premature exhaustion of a depot formulation. Therefore, a multiple-dosing pharmacodynamics study is necessary to examine whether the acute-on-chronic phenomenon occurs during LY01005 treatment. As is shown in Figure 2 and as expected, the initial supraphysiological level of testosterone on Day 1 and the rapid reduction to castration level on Day 4 were observed both in LY01005-and Zoladex ® -treated animals. The testosterone concentrations significantly exceeded the castration level from Day 21 to Day 28 (the second dose), from Day 49 to Day 56 (the third dose), and on Day 84 in Zoladex ® -treated group. In contrast, the testosterone concentrations maintained at the castration level between Day 4 and Day 84 in LY01005-treated group. These data suggested that LY01005 treatment had a much lower risk than Zoladex ® treatment in inducing the acute-on-chronic phenomenon.
Functional observational battery tests in rats
The FOB is a non-invasive procedure designed to better quantify neurotoxic effects in animals resulting from exposure to chemicals in conjunction with other neuropathologic evaluation and/or general toxicity studies. In this study, 1 h after the drug administration, a decrease in walking and/or climbing was observed in rats receiving placebo microspheres or LY01005 10.8 mg/kg treatments. On Days 2, 7, 11 and 28, a reduction in movement grade was observed in rats receiving LY01005 3.6 and 10.8 mg/kg treatment as compared to the vehicle control and the pre-dose data. These changes in movement were most likely due to mechanical stimulation induced by intramuscular injection of placebo microspheres or LY01005 but not due to the pharmacological effect of goserelin. No other abnormalities were observed on the home-cage observation, hand-held observation, open-field observation, stimulus response observation, grip strength and body temperature parameters in the present study. It was concluded that the neurobehavioral functions of the rats were unaffected by the placebo microspheres (287.54 mg/kg) and LY01005 at doses of 1.2, 3.6, 10.8 mg/kg.
Respiratory functions in rats
As shown in Table 1, respiration rate, tidal volume and minute ventilation volume in rats receiving placebo microspheres or LY01005 did not differ significantly from those in rats receiving vehicle at 1, 72, 240 and 648 h post-dose (p > 0.05). The results indicate that the respiratory function of conscious rats was not affected by LY01005 at doses of 1.2, 3.6 and 10.8 mg/kg.
FIGURE 1
Effects of LY01005 on serum testosterone levels in male rats. *, p < 0.05, LY01005 compared to Zoladex ® at the same dose.
Single-dosing pharmacokinetic study in rats
An LC-MS/MS method was developed and validated to determine the plasma concentration of goserelin in rats, with a linearity range of 0.0200 ng/mL (lower limit of quantitation, LLOQ) to 30.0 ng/mL (upper limit of quantification, ULOQ). Following administration of LY01005, an initial burst of goserelin release was observed at 0.5 h, followed by a rapid decline within 6 h in the plasma concentration-time profile. Goserelin was then released continuously, and a secondary peak was observed from Day 5 to Day 10. The plasma concentration of goserelin was detectable up to 28 days post-dose (Figure 3). As shown in Table 2, Cmax and AUClast increased in a dose-proportional manner over the range of 0.45-1.80 mg/kg.
FIGURE 2
Effects of LY01005 on serum testosterone levels in male rats. *, p < 0.05, LY01005 compared to Zoladex® at the same dose.
The mean plasma concentration-time profiles of LY01005 and Zoladex® were also compared after single dosing (Figure 4). The mean pharmacokinetic parameters of the two groups are shown in Table 2. Compared to Zoladex®, the relative bioavailability of LY01005 was 101.0%.
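For context on the dose-proportionality statement above, the sketch below shows one common way of screening it: a linear-trapezoidal AUClast plus a power-model fit of log(AUC) against log(dose), where a slope near 1 is consistent with dose-proportional exposure. The dose levels echo the 0.45-1.80 mg/kg range quoted above, but the exposure values, intermediate dose and function names are hypothetical placeholders rather than the study's Table 2 parameters.

```python
import numpy as np

def auc_last(times_h, conc_ng_ml):
    """Linear trapezoidal AUC up to the last quantifiable concentration."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(conc_ng_ml, dtype=float)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

def proportionality_slope(doses_mg_kg, auc_values):
    """Power-model slope b from log(AUC) = a + b*log(dose); b close to 1
    suggests dose-proportional exposure."""
    b, _a = np.polyfit(np.log(doses_mg_kg), np.log(auc_values), 1)
    return b

# Hypothetical exposures for 0.45, 0.90 and 1.80 mg/kg groups (illustration only).
doses = [0.45, 0.90, 1.80]
aucs = [110.0, 215.0, 430.0]
print(round(proportionality_slope(doses, aucs), 2))  # close to 1 here
```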
Toxicology studies
Single-dose toxicity study in rats
In the single-dose toxicity study, rats were injected i.m. with vehicle control (1% SCMC), placebo microspheres, or LY01005 at doses of 3.75, 15 and 60 mg/kg, respectively. All animals survived until study termination.
FIGURE 4
Mean plasma concentration-time profiles of LY01005 and Zoladex® in rats.
A decrease in body weight was observed in LY01005-treated male rats, while increases in both body weight and food consumption were noted in LY01005-treated female rats. Slight decreases in red blood cells (RBC), hemoglobin (HGB) and hematocrit (HCT) and an increase in total bilirubin were noted in LY01005-treated male rats. In addition, grey-white nodules in the injection-site muscles were observed macroscopically in rats receiving placebo microspheres and all doses of LY01005, and were identified microscopically as foreign-body granulomas. A relationship of these changes to LY01005 treatment could not be excluded, suggesting the need for further monitoring in the repeated-dose toxicity study. In this acute toxicity study in rats, the maximum tolerated dose (MTD) of LY01005 was determined to be greater than 60 mg/kg.
Sixteen-week subchronic toxicity study in rats
In the subchronic toxicity study, rats were injected i.m. with vehicle (1% SCMC), placebo microspheres (287.54 mg/kg) or LY01005 at 1.2, 3.6 or 10.8 mg/kg, once every 4 weeks for 16 weeks, followed by an 8-week recovery period. Signs of swelling and/or scleroma were observed in rats receiving placebo microspheres or 3.6 and 10.8 mg/kg LY01005. During the treatment period, increases in body weight and food consumption were observed in female rats that received different doses of LY01005. These changes gradually returned to normal during the recovery period.
At the end of the treatment period, an increase in lymphocyte percentage (within 10%) and a decrease in neutrophil granulocyte percentage (approximately 30%) were observed in all LY01005-treated male rats, and increases in white blood cell and lymphocyte counts (approximately 50%-60%) were observed in LY01005-treated female rats. These changes were considered to be related to the phagocytosis and degradation of microspheres and to chronic inflammation caused by LY01005 injections. Slight decreases in RBC, HGB and HCT were observed in all LY01005-treated male rats and were considered to be related to LY01005-induced hormone changes. Decreases in ALB, TP and A/G were noted in LY01005-treated female rats, which might be related to liver abnormality induced by hormone changes. However, no noticeable change was found in the liver tissues by gross and histopathological examinations or in other liver function parameters. As a decrease in food consumption was observed in female rats, the decreases in ALB and TP might be caused by reduced food intake (Moriyama et al., 2008), and the decrease in A/G by the ALB decrease. Decreased Ca2+ and increased ALP were noted in female rats, consistent with a minimal to slight decrease in bone trabecular number; these findings were considered to be related to LY01005-induced hormone changes. All the changes discussed above had recovered completely by the end of the recovery period.
During the treatment period, FSH, LH, progestin and testosterone levels were significantly increased in the early phase and decreased, or tended to decrease, in the late phase in LY01005-treated rats. These hormone changes were directly related to the pharmacological mechanism of LY01005 and were completely or partially reversed by the end of the recovery period.
At the completion of treatment, dose-dependent decreases in the absolute weight and organ/body (and organ/brain) weight ratios of the testes, epididymides and prostate were noted in male rats at all LY01005 doses, and dose-dependent decreases in the absolute weight and organ/body (and organ/brain) weight ratios of the ovaries and uterus were observed in female rats at all LY01005 doses. The changes in the reproductive system (germ cell depletion accompanied by mineralization, reduced sperm number and cellular debris in the epididymal duct, prostate atrophy, and seminal vesicle atrophy in males; ovarian atrophy, uterine atrophy, cervical atrophy, vaginal atrophy and mammary gland acinar atrophy in females) were directly related to the pharmacological mechanism of LY01005. These changes mostly returned to normal by the end of the recovery period. Moreover, at the end of the recovery period, an increased number of follicles in the ovaries and hyperplasia of the luminal/glandular epithelium in the uterus and of the vaginal epithelium were observed in LY01005-treated female rats; epithelial hyperkeratosis of the cervix was noted in rats receiving 3.6 mg/kg LY01005; and epithelial mucification of the cervix and vagina was noted in rats receiving 10.8 mg/kg LY01005. These changes are attributable to hormone changes induced by LY01005 withdrawal.
During the treatment period, slightly decreased bone trabeculae and bone marrow hematopoietic cells and slightly increased bone marrow adipocytes were observed in LY01005-treated rats. These changes were considered secondary to LY01005-induced hormone changes.
Local nodules were observed at the injection sites in the placebo-microsphere and all LY01005 treatment groups. Histopathological examination revealed foreign-body granulomas in the peripheral connective tissue surrounding the sciatic nerve at the injection sites. These changes mostly returned to normal by the end of the recovery period.
The LC-MS/MS method was also validated to determine the plasma concentration of goserelin in the toxicokinetic (TK) studies, with a linearity range of 0.05-40 ng/mL; the results are presented in Figure 5 and Table 4. No significant sex differences were noted in drug exposure (AUClast) at any dose after the 1st and 4th dosing. After the 1st dosing, the average increase in drug exposure (AUClast) was lower than dose-proportional between 1.2 and 3.6 mg/kg in female rats, while the AUClast increase was higher than dose-proportional between 3.6 and 10.8 mg/kg in female rats. After the 4th dosing, the average drug exposure (AUClast) increased dose-proportionally between 1.2 and 10.8 mg/kg in both female and male rats. Drug accumulation was not apparent in the 1.2 mg/kg group (AUClast ratio: 1.51), whereas accumulation was apparent in the 3.6 mg/kg (AUClast ratio: 4.96) and 10.8 mg/kg (AUClast ratio: 2.81) groups.
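The AUClast ratios quoted above are accumulation indices, i.e., exposure after the 4th dose divided by exposure after the 1st dose at the same dose level, and dose-normalised exposure is the usual companion check for proportionality. The sketch below assumes those two definitions; the example numbers are placeholders chosen only to reproduce one of the quoted ratios, not the study's underlying AUC values.

```python
def accumulation_ratio(auc_last_dose_n, auc_last_dose_1):
    """AUClast after repeated dosing divided by AUClast after the first dose."""
    return auc_last_dose_n / auc_last_dose_1

def dose_normalized_exposure(auc_last, dose_mg_kg):
    """Dose-normalised exposure; roughly constant values across dose levels are
    consistent with dose-proportional kinetics."""
    return auc_last / dose_mg_kg

# Hypothetical AUClast values (ng*h/mL), chosen only to illustrate the definitions.
print(round(accumulation_ratio(auc_last_dose_n=148.8, auc_last_dose_1=30.0), 2))  # 4.96
print(round(dose_normalized_exposure(auc_last=148.8, dose_mg_kg=3.6), 1))         # 41.3
```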
In summary, the changes in hormones (FSH, LH, testosterone, progestin) and in the reproductive system (uterus, ovary, vagina, cervix uteri, mammary gland, testis, epididymis and prostate) were related to the pharmacological effects of goserelin, while the changes in bone and bone marrow were secondary to the hormone changes induced by LY01005. In addition, foreign-body granulomas at the injection sites may have been caused by the excipient. All of these changes had returned to normal by the end of the recovery period.
Discussion and conclusion
Maintaining effective and sufficient suppression of serum testosterone (below 50 ng/dL) is one of the essential strategies in the treatment of metastatic prostate cancer. Currently this is achieved primarily with front-line androgen-deprivation therapy agents such as long-acting GnRH agonists (goserelin, histrelin, leuprolide, and triptorelin) (Scher et al., 2008; Desai et al., 2021). GnRH agonists are known to cause a mechanism-related transient surge in testosterone level followed by a decrease to castration level. However, certain patients may fail to reach this primary therapeutic endpoint and may experience significant testosterone fluctuation during long-term maintenance treatment when repeated dosing is required. This clinical phenomenon is known as the end-of-dose or acute-on-chronic phenomenon (Sharifi et al., 2002; Yri et al., 2006; Gomella, 2009) and occurs in approximately 4%-10% of patients receiving GnRH agonist treatment (Berges, 2005). Prostate cancer patients who experience testosterone fluctuation during maintenance treatment are at obvious risk of recurrence (Attard et al., 2009; Eckstein and Haas, 2014).
In the present study, there were no significant differences in serum testosterone levels between LY01005- and Zoladex®-treated rats in the first three weeks of dosing. However, LY01005 appeared more effective than Zoladex® in the subsequent two weeks. Similar findings were observed in the three-dose pharmacology study in rats: testosterone concentration was maintained at castration level until the end of the 12-week study period in LY01005-treated rats, whereas it significantly exceeded castration level at the end of each dosing interval in Zoladex®-treated rats. The pharmacokinetic results provide a reasonable explanation for these pharmacodynamic differences between LY01005 and Zoladex®: the plasma concentration of LY01005 was detectable for up to 28 days, while Zoladex® was detectable for only 24 days. These data support the notion that LY01005 carries less risk of the acute-on-chronic phenomenon in clinical prostate cancer treatment than Zoladex®.
Safety pharmacology results showed that LY01005 had no noticeable effects on the central nervous system or respiratory system in rats up to a dose of 10.8 mg/kg, and no noticeable effect on the cardiovascular system in dogs up to a dose of 3.6 mg/kg (unpublished data). Acute and subchronic toxicity studies of LY01005 were conducted in rats and in dogs (unpublished data) by the intramuscular route of administration. Almost all of the positive findings from these studies, including the changes in hormones (FSH, LH, testosterone, progestin) and in the reproductive system (uterus, ovary, vagina, cervix uteri, mammary gland, testis, epididymis and prostate), were related to the direct pharmacological effects of goserelin. In addition, slightly decreased bone trabeculae and bone marrow hematopoietic cells and slightly increased bone marrow adipocytes were observed only in LY01005-treated rats; these changes were considered secondary to goserelin (LY01005)-induced hormone changes. Histopathological changes consistent with a foreign-body removal reaction induced by the excipient were also observed. We compared the toxicity-related measures of LY01005 (as reported here) with the published nonclinical safety data of Zoladex® (Goserelin Depot, 2012) and observed no noticeable differences between the two drugs.
In conclusion, LY01005 is an investigational new drug product of goserelin acetate formulated as extended-release microspheres. LY01005 can be administered through a much finer needle than Zoladex®, minimizing patient discomfort and reducing the risk of injection-site trauma. LY01005 had pharmacodynamic potency comparable to Zoladex® in rats but with a longer-lasting and more stable effect, which may be beneficial in reducing the risk of the acute-on-chronic phenomenon compared with Zoladex®.
Almost all of the findings in the toxicity studies were related to the known pharmacological effects of goserelin or were secondary to goserelin-induced hormone changes. These data strongly support the proposed clinical investigational plan and marketing application for LY01005.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
Ethics statement
The animal study was reviewed and approved by the IACUC of Shandong Luye Pharmaceutical Co., Ltd. and the IACUC of WestChina-Frontier PharmaTech Co., Ltd. under Good Laboratory Practice (GLP) conditions.
Author contributions
MY, YL, SC, GX, GX, DL, DG, and ZX performed the experiments and interpreted the results. YP wrote the manuscript. CX, YP, WH, and TJ conceived and designed the current study and revised the final manuscript.
Funding
This work was partially supported by the National Natural Science Foundation of China (82073888, 82273969), the Taishan Scholar Project, and the Natural Science Foundation of Shandong Province (ZR2021LSW011).
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 6,099.8 | 2023-02-21T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Chemo-sensors development based on low-dimensional codoped Mn2O3-ZnO nanoparticles using flat-silver electrodes
Background Semiconductor-doped nanostructure materials have attracted considerable attention owing to their electronic, opto-electronic, paramagnetic, photo-catalytic, electro-chemical and mechanical behaviors and their potential applications in different research areas. Doped nanomaterials are promising owing to their high specific surface area, low resistance, high catalytic activity, and attractive electro-chemical and optical properties. Transition metal-doped nanostructure materials are also scientifically significant owing to their extraordinary mechanical, optical, electrical, electronic, thermal, and magnetic characteristics. Recently, manganese oxide-doped semiconductor materials have gained significant interest as a way to develop their physico-chemical behaviors and extend their applications; they are relevant not only to fundamental magnetism but also to applications such as magnetic materials, bio- and chemi-sensors, photo-catalysts, and absorbent nanomaterials. Results The chemical sensor displays high sensitivity, reproducibility, long-term stability, and enhanced electrochemical responses. The calibration plot is linear (r2 = 0.977) over the 0.1 nM to 50.0 μM 4-nitrophenol concentration range. The sensitivity and detection limit are ~4.6667 μA cm-2 μM-1 and ~0.83 ± 0.2 nM (at a signal-to-noise ratio, SNR, of 3), respectively. To the best of our knowledge, this is the first report of the detection of 4-nitrophenol with doped Mn2O3-ZnO NPs using an easy and reliable I-V technique with a short response time. Conclusions As doped nanostructures, these NPs open a route to a new generation of sensors for toxic chemicals, but a concerted effort is still required before doped Mn2O3-ZnO NPs can be adopted comprehensively for large-scale applications and achieve higher potential density accessible to individual chemo-sensors. This report also discusses the prospective utilization of Mn2O3-ZnO NPs for carcinogenic chemical sensing, which could be applied to the detection of hazardous chemicals in ecological, environmental, and health care fields.
Introduction
Semiconductor codoped nanomaterials have received significant interest due to their electronic, optoelectronic, magnetic, catalytic, electro-chemical and mechanical behaviors and their potential applications in different research areas. Semiconductor nanomaterials are promising due to their high specific surface area, low resistance, high catalytic activity, and attractive electrochemical and optical properties [1,2]. Codoped nanostructure materials are also scientifically important owing to their extraordinary mechanical, optical, electrical, electronic, thermal, and magnetic characteristics. Lately, manganese-doped semiconductor materials have attracted significant attention as a way to develop their physico-chemical behaviors and extend their applications [3][4][5]. They are relevant not only to fundamental magnetism but also to applications such as magnetic materials, bio- and chemi-sensors, photo-catalysts, and absorbent nanomaterials [6][7][8][9]. Recently, only a few articles have been published on the synthesis of transition-metal-doped semiconductor nanomaterials, and these investigated the magnetic behaviors and potential applications only [10][11][12][13]. Here, codoped Mn2O3-ZnO NPs were prepared by an easy, facile, economical, non-toxic, repeatable, and reliable low-temperature wet-chemical technique. The nanostructure and morphology of the codoped Mn2O3-ZnO NPs were examined, and the material was applied to the development of a highly sensitive 4-nitrophenol chemo-sensor at room conditions. Generally, chemo-sensing has been developed with transition-metal oxide nanostructures for the recognition and quantification of various toxic chemicals such as phenyl-hydrazine, methanol, formaldehyde, ethanol, chloroform and dichloromethane, which are not ecologically safe and friendly [14][15][16][17][18]. The sensing mechanism of doped semiconductor metal oxide thin films relies primarily on the properties of mesoporous thin films generated by physisorption and chemisorption. Hazardous chemical detection depends on the current response of the fabricated thin film, caused by the presence of the chemical in the reaction format in the aqueous phase [19][20][21]. The key effort here is to determine the least amount of 4-nitrophenol that the fabricated Mn2O3-ZnO NP chemo-sensors can recognize in electrochemical investigation.
Phenolic compounds have attracted significant interest in the last decade owing to their eco-toxic effects on human health and on ecological and environmental systems. These toxic compounds (e.g., 4-nitrophenol) are produced by a number of polluting processes, such as the industrial production of plastics, pesticides, paints, drugs, composites, antioxidants, petroleum, and paper [22]. 4-Nitrophenol is recognized for its hazardous nature, carcinogenicity, toxicity, and persistence in the environment, and has become a common pollutant in nature and in waste water [23]. Because of its high solubility and stability in water, it has also been found in freshwater and marine environments, has been detected in industrial wastewaters, and is difficult to degrade by conventional methods. It is involved in most of the degradation pathways of organophosphorus pesticides, which decompose in soil and water to form 4-nitrophenol as an intermediate or final product [24,25]. Therefore, 4-nitrophenol is included in the Environmental Protection Agency List of Priority Pollutants (EPALPP) [26], and it is highly desirable to fabricate a chemo-sensor for the detection of such organic pollutants to protect the environment and human health. Significant attention has been focused on the development of simple, reliable, and ultrasensitive detection methodologies based on codoped nanomaterials. Generally, the detection of toxic 4-nitrophenol is accomplished using chromatographic techniques such as gas chromatography [27,28], high-performance liquid chromatography [29,30], liquid chromatography coupled with mass spectrometry [31], and capillary electrophoresis [32]. Electrochemical techniques, which offer fast, reliable, and direct real-time monitoring, are among the most widely used methods for the determination of nitro-phenolic compounds. Electro-analytical techniques have been applied to 4-nitrophenol detection and quantification with a modified glassy carbon electrode [33,34], a hanging mercury drop electrode [35] and a boron-doped diamond electrode [36]. The analytical signal is derived from the four-electron reduction of the nitro group [37] or from the direct two-electron oxidation of phenol to the corresponding o-benzoquinone [38][39][40]. Electrochemical chemo-sensors have attracted huge interest for the recognition and quantification of environmentally unsafe chemicals due to their reliable and fast response [41][42][43][44]. Chemo-sensor technology plays a significant role in ecological protection against environmental contamination and unintended seepage of harmful chemicals, which are a huge menace for eco-systems. Thus, for ecological and health monitoring, it is important to fabricate easy, simple, reproducible, reliable, and inexpensive chemo-sensors to detect toxic chemicals in aqueous systems. The sensitivity and detection limit of an electrochemical chemo-sensor depend strongly on the size, structure and properties of the doped nanomaterials on the fabricated electrode. Hence, doped nanostructure materials have received much attention and have been widely used as redox mediators in chemo-sensors [45][46][47][48].
Codoped nanomaterials are widely established for the recognition of toxic chemicals by electrochemical methods owing to their numerous benefits over conventional chemical methods, in particular their large surface area, for analyses in medical, health-care and environmental fields [49][50][51][52][53][54][55][56]. In general electro-analytical methods, bare codoped nanomaterial surfaces exhibit slow responses, surface fouling, noise, variable responses, a small dynamic range and low sensitivity for chemical recognition. Therefore, modification of the chemo-sensor surface with doped metal oxide nanostructure materials is urgently required to achieve highly sensitive, repeatable, and stable responses. An easy and reliable I-V electrochemical approach with relatively simple, appropriate, and economical instrumentation is therefore needed, one that displays higher sensitivity and lower detection limits than general techniques. Here, a consistent, scalable, and highly responsive I-V method is applied for the detection of 4-nitrophenol with codoped Mn2O3-ZnO NPs. The present approach represents a consistent, sensitive, low-sample-volume, easy-to-handle, and specific electrochemical method compared with existing UV, CV, LC-MS, LSV, FL, and HPLC methods [57][58][59][60]. A simple coating technique for preparing nanomaterial thin films with conducting coating agents was developed for the fabrication of doped Mn2O3-ZnO NP films. Low-dimensional doped Mn2O3-ZnO NP films with conducting coating agents were synthesized and used to detect 4-nitrophenol in phosphate buffer solution (PBS) by the reliable I-V method. To the best of our knowledge, this is the first report of the detection of 4-nitrophenol with doped Mn2O3-ZnO NPs using an easy and reliable I-V technique with a short response time.
Materials and methods
Manganese chloride (MnCl2·4H2O), zinc chloride (ZnCl2), 4-nitrophenol, ammonium hydroxide (25%), ethyl cellulose (EC), disodium phosphate, butyl carbitol acetate (BCA), ethanol, and monosodium phosphate were of analytical grade and obtained from Sigma-Aldrich. A stock solution of 1.0 M 4-nitrophenol was prepared in double-distilled water. The doped Mn2O3-ZnO NPs were investigated with UV/visible spectroscopy (Lambda-950, Perkin Elmer, Germany). FT-IR spectra of the Mn2O3-ZnO NPs were recorded in the mid-IR range with a Spectrum-100 FT-IR spectrophotometer (Perkin Elmer, Germany). A Raman Station 400 (Perkin Elmer, Germany) was used to investigate the Raman shifts of the Mn2O3-ZnO NPs using an Ar+ laser line (λ ~513.4 nm) as the radiation source. XPS measurements were performed on a Thermo Scientific K-Alpha KA1066 spectrometer (Germany) with monochromatic Al Kα X-ray radiation as the excitation source and a beam spot size of 300.0 μm. The spectra were recorded in the fixed analyzer transmission mode with a pass energy of 200.0 eV and scanned at low pressure (<10−8 Torr). X-ray powder diffraction (XRD) patterns were measured with an X'Pert Explorer PANalytical diffractometer using Cu-Kα1 radiation (λ = 1.5406 Å) at a generator voltage of ~40.0 kV and a current of ~35.0 mA. The morphology of the codoped Mn2O3-ZnO NPs was evaluated with an FE-SEM instrument (FESEM; JSM-7600F, Japan), and elemental analysis (EDS) was performed with an instrument from JEOL, Japan. The I-V technique was used to characterize the NP-modified sensor electrode with an electrometer (Keithley 6517A, USA) at room conditions.
Synthesis and growth mechanism of codoped Mn2O3-ZnO NPs
Initially, manganese chloride (MnCl2·4H2O) and zinc chloride (ZnCl2) were each dissolved gradually in deionized water to prepare 0.1 M solutions at room temperature. After addition of NH4OH to the mixture of metal chloride solutions, the mixture was stirred slowly for several minutes at room conditions. Mn2O3-ZnO NPs were synthesized by adding equimolar concentrations of manganese chloride and zinc chloride as starting (reducing) materials to the reaction cell (a Teflon-lined autoclave) for 12 hours. The solution pH was adjusted to 10.5 with the prepared NH4OH, and the solution was placed in the autoclave cell. The starting materials MnCl2 and ZnCl2 were used without further purification in the co-precipitation of the codoped Mn2O3-ZnO nanoparticle composition. The reducing agent (NH4OH) was again added drop-wise to the vigorously stirred MnCl2 and ZnCl2 solution mixture to produce the doped precipitate.
The growth mechanism of the doped Mn2O3-ZnO NPs can be explained in terms of the chemical reactions and the nucleation and growth of doped Mn2O3-ZnO crystals. The probable reaction mechanisms proposed for obtaining the codoped Mn2O3-ZnO nanomaterials are given below.
The precursors MnCl2 and ZnCl2 are soluble in alkaline medium (NH4OH reagent) according to equations (i)-(iii). After addition of NH4OH to the mixture of metal chloride solutions, the mixture was stirred strongly for several minutes at room temperature, and the reaction develops gradually according to equation (iv). The resultant product was then washed systematically with ethanol and acetone and dried at room temperature. Throughout the preparation, NH4OH acts as a pH buffer that controls the pH of the solution and slowly donates OH− ions. When the concentrations of Mn2+, Zn2+, and OH− ions exceed the critical value, precipitation of doped Mn2O3-ZnO nuclei begins. Because of the higher concentration of Zn2+ ions in solution, nucleation of the doped Mn2O3-ZnO crystals becomes easier owing to the lower activation-energy barrier of heterogeneous nucleation. As the Zn2+ concentration persists, a number of larger doped Mn2O3-ZnO crystals with a spherical particle morphology form at the nano-level. The shape of the codoped Mn2O3-ZnO NPs is approximately consistent with the growth pattern of codoped Mn2O3-ZnO crystals [61,62]. Finally, the as-grown codoped Mn2O3-ZnO NP products were calcined at 400.0°C for 4 hours in a furnace (Barnstead Thermolyne 6000 Furnace, USA). The calcined doped nanomaterials were characterized in detail in terms of their morphological, structural, and optical properties and applied to 4-nitrophenol chemical sensing.
Fabrication of AgE using doped Mn2O3-ZnO NPs
Phosphate buffer solution (PBS, 0.1 M, pH 7.0) was prepared by mixing Na2HPO4 (0.2 M) and NaH2PO4 (0.2 M) solutions in 100.0 mL of de-ionized water. The flat AgE was fabricated with doped Mn2O3-ZnO NPs using butyl carbitol acetate (BCA) and ethyl cellulose (EC) as conducting coating agents. The fabricated electrodes were then transferred to an oven at 65.0°C for 12 hours until the film was completely dry, uniform, and stable. An electrochemical cell was assembled with the codoped Mn2O3-ZnO NP-coated silver electrode as the working electrode and a palladium wire as the counter electrode. 4-Nitrophenol (~1.0 M) was diluted to different concentrations in DI water and used as the target chemical. The amount of 0.1 M PBS in the small beaker was kept constant at 10.0 mL during the chemical analysis. Analyte solutions were made at various 4-nitrophenol concentrations from 1.0 nM to 1.0 M. The sensitivity was calculated from the slope of current versus analyte concentration in the calibration curve, taking into account the active surface area of the doped Mn2O3-ZnO NP-fabricated chemo-sensor. The electrometer was used as the voltage source for the reliable I-V method in a two-electrode assembly. Owing to their high mechanical strength, good conductivity, high stability, large surface area, and extremely miniaturized dimensions, codoped Mn2O3-ZnO NPs have been extensively used in chemo-sensor modification and fabrication for 4-nitrophenol detection. The codoped Mn2O3-ZnO NPs were applied for the detection of 4-nitrophenol in the liquid phase at room conditions. Initially, the NP thin film was prepared using conducting binders (EC and BCA) and embedded on the flat AgE electrode. The development and fabrication steps are shown in the schematic diagram (Figure 1). The PdE and the doped Mn2O3-ZnO NP-fabricated AgE were used as the counter and working electrodes, respectively, as presented in Figure 1a.
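As a rough aid to the buffer-preparation step above, the sketch below estimates the Henderson-Hasselbalch mixing ratio of equal-molarity Na2HPO4 and NaH2PO4 stocks needed to reach pH 7.0. The apparent pKa2 value (~6.86 at roughly this ionic strength), the function name, and the neglect of activity corrections are all assumptions for illustration; the authors' exact recipe is not stated beyond the 0.2 M stocks.

```python
APPARENT_PKA2 = 6.86  # assumed apparent second pKa of phosphate at ~0.1 M ionic strength

def phosphate_mixing_fractions(target_ph):
    """Return the [HPO4^2-]/[H2PO4^-] ratio and the volume fractions of equal-molarity
    Na2HPO4 and NaH2PO4 stocks needed to reach the target pH (activity effects ignored)."""
    ratio = 10 ** (target_ph - APPARENT_PKA2)   # Henderson-Hasselbalch
    frac_dibasic = ratio / (1.0 + ratio)        # fraction contributed by the Na2HPO4 stock
    return ratio, frac_dibasic, 1.0 - frac_dibasic

ratio, f_na2hpo4, f_nah2po4 = phosphate_mixing_fractions(7.0)
print(f"ratio = {ratio:.2f}, Na2HPO4 ~ {f_na2hpo4:.0%}, NaH2PO4 ~ {f_nah2po4:.0%}")
# ratio = 1.38, roughly 58% Na2HPO4 : 42% NaH2PO4 under these assumptions
```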
Optical properties
The optical behavior of the codoped Mn2O3-ZnO NPs is one of the important features for evaluating their photo-catalytic properties. The optical absorption spectra of the NPs were recorded with a UV-visible spectrophotometer over the 200.0-800.0 nm range. From the absorption spectra, the absorption maximum of the doped Mn2O3-ZnO NPs is at about 284.0 nm, as presented in Figure 2a. The band-gap energy (Ebg) was estimated from the main absorption band of the NPs and found to be ~4.50704 eV according to formula (v),
where Ebg is the band-gap energy and λmax is the wavelength (~284.0 nm) of the absorption maximum of the doped Mn2O3-ZnO NPs. No extra peaks related to contaminants or structural defects were found in the spectra, which confirms the crystallinity of the prepared codoped Mn2O3-ZnO NPs [63,64].
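Formula (v) is not reproduced in this text; the sketch below only assumes the standard direct conversion Ebg = hc/λmax (≈1239.84 eV·nm divided by the wavelength in nm). Note that this standard constant gives roughly 4.37 eV for 284 nm, so the exact constant or wavelength behind the ~4.507 eV value reported above may differ slightly.

```python
HC_EV_NM = 1239.84  # Planck constant x speed of light, expressed in eV*nm

def band_gap_ev(lambda_max_nm):
    """Estimate the optical band-gap energy from the absorption maximum using E = hc/lambda."""
    return HC_EV_NM / lambda_max_nm

print(round(band_gap_ev(284.0), 2))  # ~4.37 eV for a ~284 nm absorption band
```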
The codoped Mn2O3-ZnO NPs were also investigated in terms of their atomic and molecular vibrations. To probe these vibrations, the FT-IR spectrum was measured mainly in the 450.0-4000.0 cm−1 region (Figure 2b).
Structural properties
The crystallinity and crystal phases of the doped Mn2O3-ZnO NPs were investigated. X-ray diffraction patterns of the codoped NPs are presented in Figure 3a. The Mn2O3-ZnO NP samples exhibited a conventional tetragonal structure. The as-grown doped Mn2O3-ZnO NPs had been calcined at 400.0°C in a muffle furnace to initiate the formation of nano-crystalline phases. The tetragonal lattice space group is I41/amd. The powder X-ray pattern corresponds to doped Mn2O3-ZnO NPs and can be attributed to the lattice sites of the NP semiconductor nanomaterials [68][69][70]. Furthermore, no impurity peaks were found in the XRD pattern, confirming the codoped Mn2O3-ZnO NP phase formation.
Energy-dispersive spectroscopy (EDS) of the calcined Mn2O3-ZnO NPs confirms the presence of Mn, Zn, and O in the calcined Mn2O3-ZnO material. As shown in Figure 3b, the NP material contains only manganese, zinc, and oxygen; the composition of Mn, Zn, and O is 33.07%, 18.81%, and 48.12%, respectively. No peaks related to any impurity were found in the EDS, demonstrating that the doped Mn2O3-ZnO NPs are composed only of Mn, Zn, and O. High-resolution FESEM images of the calcined Mn2O3-ZnO NPs are shown in Figure 3c-d. The FESEM images show the codoped material as aggregated nanoparticles. The average diameter of the doped Mn2O3-ZnO NPs is in the range of 22.7 nm to 50.0 nm, close to ~37.5 nm. The FESEM images clearly show that the crystalline nanomaterials prepared by the simple wet-chemical method are codoped Mn2O3-ZnO NP nanostructures with an aggregated, high-density, spherical nanoparticle morphology, suggesting that the combined codoped Mn2O3-ZnO NPs form a spherical particle-like morphology [71,72].
Chemical analysis
X-ray photoelectron spectroscopy (XPS) is a quantitative spectroscopic method that determines the elemental composition, empirical formula, chemical state and electronic state of the elements present in nanomaterials. Here, XPS measurements were used on the doped Mn2O3-ZnO NPs to examine the chemical states of the Zn, Mn, and O atoms. The full XPS spectra of Zn2p, Mn3s, Mn2p, and O1s are displayed in Figure 4a. The O1s spectrum shows a major peak at 532.9 eV (Figure 4b), assigned to lattice oxygen, which may correspond to oxygen (i.e., O2−) present in the doped Mn2O3-ZnO NP nanomaterials [73]. Figure 4c shows the Mn3s region [74]. Figure 4d shows the XPS spectra (spin-orbit doublet peaks) of the Mn2p(3/2) and Mn2p(1/2) regions of the semiconductor doped Mn2O3-ZnO NPs. The binding energies of the Mn2p(3/2) and Mn2p(1/2) peaks, at 644.7 eV and 655.3 eV respectively, indicate the existence of Mn, since these binding energies are similar to reference values [75]. In Figure 4e, the spin-orbit peaks of the Zn2p(3/2) and Zn2p(1/2) binding energies for the codoped Mn2O3-ZnO NPs appear at around 1025.7 eV and 1048.9 eV respectively, in good agreement with the reference data for Zn [76].
The I-V responses of the fabricated films are presented in Figure 5c, which displays the current changes of the developed films as a function of 4-nitrophenol concentration at room conditions. It was observed that with increasing analyte concentration the resultant currents also increased considerably, corroborating that the response is a surface process. A large range of 4-nitrophenol concentrations, from 1.0 nM to 1.0 M, was selected to study the analytical parameters. The calibration curve drawn from the variation of 4-nitrophenol concentration is presented in Figure 5d, showing the response current versus 4-nitrophenol concentration for the developed doped Mn2O3-ZnO NPs on the AgE electrode. The calibration plot shows that as the target analyte concentration increases the current response also increases, and at high 4-nitrophenol concentration the current reaches a saturated level, suggesting that the active surface sites of the doped NPs become saturated with analyte [77]. The sensitivity estimated from the calibration curve is close to ~4.6667 μA cm−2 μM−1. The linear dynamic range of this chemo-sensor extends from 0.1 nM to 50.0 μM (linearity, r2 = 0.977) and the detection limit was calculated as ~0.83 ± 0.2 nM (at an SNR of 3). Usually, the resistance of codoped Mn2O3-ZnO NP-modified electrodes/chemo-sensors decreases with increasing active surface area, owing to the intrinsic characteristics of the semiconductor materials [78][79][80]. In fact, oxygen (O2) adsorption plays a considerable role in the electrical behavior of the doped NP (n-type semiconductor) structures. Adsorption of oxygen ions (O2−) removes conduction electrons and increases the resistance of the doped Mn2O3-ZnO NPs. Active oxygen species (i.e., O2− and O−) are adsorbed onto the material surfaces at room conditions, and the quantity of such chemisorbed oxygen species depends strongly on the structural properties. At room conditions O2− is chemisorbed, while in the NP morphology both O2− and O− are chemisorbed and the O2− vanishes rapidly [81,82].
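The sensitivity, linearity and detection limit quoted above follow the usual calibration-curve treatment: a least-squares slope over the linear range, normalised by the electrode's active area, with the detection limit taken as three times the blank noise divided by the slope. The sketch below assumes exactly that treatment; the concentration-current pairs, electrode area and blank standard deviation are hypothetical placeholders, not the measured data behind Figure 5d.

```python
import numpy as np

def calibration_metrics(conc_uM, current_uA, electrode_area_cm2, blank_sd_uA):
    """Least-squares line over the linear range; returns area-normalised sensitivity
    (uA cm^-2 uM^-1), the coefficient of determination r^2, and the LOD at SNR = 3 (uM)."""
    c = np.asarray(conc_uM, dtype=float)
    i = np.asarray(current_uA, dtype=float)
    slope, intercept = np.polyfit(c, i, 1)
    pred = slope * c + intercept
    r2 = 1.0 - np.sum((i - pred) ** 2) / np.sum((i - i.mean()) ** 2)
    sensitivity = slope / electrode_area_cm2
    lod = 3.0 * blank_sd_uA / slope
    return sensitivity, r2, lod

# Hypothetical calibration points within the reported 0.1 nM - 50 uM linear range.
conc = [0.0001, 0.01, 0.1, 1.0, 10.0, 50.0]
curr = [0.02, 0.08, 0.60, 5.10, 47.0, 233.0]
sens, r2, lod = calibration_metrics(conc, curr, electrode_area_cm2=1.0, blank_sd_uA=0.001)
print(f"sensitivity ~ {sens:.2f} uA cm^-2 uM^-1, r^2 = {r2:.3f}, LOD ~ {lod*1e3:.2f} nM")
```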
Here, the 4-nitrophenol sensing mechanism of the doped Mn2O3-ZnO NP chemo-sensor is based on semiconducting metal oxides and proceeds through oxidation/reduction at the semiconductor NP surface. Depending on the dissolved O2 in the bulk solution or the surface air of the neighboring atmosphere, the following reactions (vi) and (vii) take place in the reaction medium.
Applications: detection of 4-nitrophenol using codoped Mn2O3-ZnO NPs
These reactions take place in the bulk system, at the air-liquid interface and in the neighboring atmosphere; owing to the small carrier concentration, they increase the resistance. The 4-nitrophenol sensitivity of the doped Mn2O3-ZnO NPs can be ascribed to their higher oxygen deficiency, which enhances oxygen adsorption. The larger the amount of oxygen adsorbed on the doped NP sensor surface, the higher the oxidizing capability and the faster the oxidation of 4-nitrophenol. The response to 4-nitrophenol is much larger than that to other toxic chemicals on the same surface under identical conditions [83][84][85]. When 4-nitrophenol reacts with the adsorbed oxygen (producing electrons) on the chemo-sensor surface, it is converted to 4-hydroxylaminophenol and water. The 4-hydroxylaminophenol is then oxidized to 4-nitrosophenol, with the subsequent reversible reduction releasing free electrons (2e−) into the conduction band (C.B.). This behavior can be elucidated through the proposed reactions (viii)-(x).
HO-Ph-NHOH → HO-Ph-NO + 2e− (C.B.) + 2H+
These reactions correspond to the oxidation of reducing carriers in the presence of the semiconductor doped Mn2O3-ZnO NPs. They increase the carrier concentration and hence reduce the resistance on exposure to reducing liquids/analytes. At room conditions, exposing the metal oxide surface to reducing liquids/analytes results in a surface-mediated adsorption process. Removal of iono-sorbed O2 enhances electron communication and hence the surface conductance of the thin film [86,87]. The reducing analyte donates electrons to the codoped Mn2O3-ZnO NP surface; accordingly, the resistance decreases slightly and the conductance is amplified, which is why the analyte response (current) increases with increasing applied potential. The electrons thus contributed produce a rapid increase in the conductance of the thin film. The unusual regions dispersed on the codoped Mn2O3-ZnO NP surface would improve the ability of the nanomaterial to absorb more O2 species, giving higher resistance in ambient air, as presented in Figure 6.
The response time was approximately 10.0 s for the doped Mn2O3-ZnO NP-coated electrode to reach a saturated steady-state current. The outstanding sensitivity of the NP chemo-sensor can be attributed to the good absorption (porous surfaces coated with conducting binders) and adsorption ability (large surface area), high catalytic activity, and good bio-compatibility of the codoped Mn2O3-ZnO NPs [88]. Owing to their large surface area, the NPs provide a favorable nano-environment for 4-nitrophenol adsorption and recognition with exceptional sensitivity. The codoped Mn2O3-ZnO NPs afford high electron-communication characteristics, which improve direct electron transfer between the active sites of the NPs and the chemo-sensor electrode surface [89,90]. The modified thin NP film had good consistency and reliability. Moreover, owing to the large dynamic surface area, the codoped Mn2O3-ZnO NPs offered a productive environment for detecting 4-nitrophenol (by adsorption) in large amounts [91,92]. To check the repeatability and storage stability, the I-V response of the codoped Mn2O3-ZnO NP-coated chemo-sensor was investigated for up to two weeks. After every experiment the fabricated chemo-sensor was washed carefully with PBS buffer solution, and no considerable reduction in the current responses was observed (recovery, ~95.2%). The sensitivity remained almost the same as the initial sensitivity for up to a week, after which the response of the developed doped Mn2O3-ZnO NP sensor gradually decreased. Table 1 compares the performance for 4-nitrophenol recognition of the doped Mn2O3-ZnO NPs with that of various other modified electrode materials.
Conclusions
The fabrication, assembly and integration of structural semiconductor doped Mn2O3-ZnO NPs onto conductive flat-silver electrodes have been investigated in detail for the detection of the toxic 4-nitrophenol compound using reliable I-V techniques. The sensor fabricated with codoped Mn2O3-ZnO NPs shows potential for 4-nitrophenol chemo-sensing, and encouraging improvement has been achieved in this investigation. Beyond the development of codoped nanomaterials, a number of significant issues still require further examination before this nanomaterial can be moved into profitable use for the mentioned applications. As doped nanostructures, the NPs open a route to a new generation of sensors for toxic chemicals, but a concerted effort is still required before doped Mn2O3-ZnO NPs can be adopted comprehensively for large-scale applications and achieve higher potential density accessible to individual chemo-sensors.
"Materials Science"
] |
Challenges of mobile learning – a comparative study on use of mobile devices in six European schools: Italy, Greece, Poland, Portugal, Romania and Turkey
Although mobile technology is not yet widely used in schools, and in some cases is even prohibited by internal regulations, the truth is that this technology, besides being a hallmark of contemporary life, is a powerful tool that challenges teachers and students to innovate in teaching and learning practices. This article intends to contribute to the understanding of this phenomenon. It is part of a project called "Bringing life into the classroom: innovative use of mobile devices in the educational process" (BLIC & CLIC), which aims to diagnose the use of mobile devices in the educational context for the development of digital skills by students and school teachers. This diagnosis will be the first output of the project, and the results will allow the (re)design of future interventions that respond to the project's general objectives.
Introduction
Schools are crowded with smartphones and tablets continuously connected to the internet. The popularity of these devices among the newest generation of students has increased so much that teachers feel challenged to innovate by integrating mobile technologies into the pedagogical designs they propose. This need to develop a mobile online environment is what enables mobile learning (Lencastre, Bento, & Magalhães, 2016). Mobile learning meets the needs of today's students, who want to use the tools they use outside of class for learning inside. Mobile learning is defined as learning across multiple contexts, through social and content interactions, using personal electronic devices (Crompton, 2015). This definition makes it clear how mobile technologies can extend learning spaces, no longer limited to regular or specific classroom hours but open to learning and pedagogical pluralism (Pachler, Bachmair, & Cook, 2010). With mobile technologies, students can learn both in the formal classroom and outside the school context. This gives students the opportunity to learn autonomously and intuitively by combining formal and informal learning processes (Trentin & Repetto, 2013). Mobile learning provides an active, participatory, motivated and personalised student experience, distinguishing modes of communication, collaboration, and interaction with information (Sharples, 2013), and empowering ubiquitous learning, networking, and lifelong learning. All this flexibility requires the teacher's openness to perform new roles in the teaching and learning processes using active and participative methodologies (Attewell & Savill-Smith, 2014).
This article presents a comparative study on mobile learning using the data collected in the scope of the project 'Bringing life into the classroom: innovative use of mobile devices in the educational process' (www.blicclic.com). This project, funded by the Romanian Agency for Erasmus plus (ANPCDEFP), addresses the use of mobile devices in educational context for the development of digital skills in students and teachers from six European schools: Colegiul Tehnic Edmond Nicolau Focsani (Romania), IS M. Filetico (Italy), 1st Lyceum of Rhodes -Venetokleio (Greece), Zespol Szkol im. por. Jozefa Sarny w Gorzycach (Poland), Agrupamento de Escolas da Maia (Portugal), Toki Halkali Anadolu Imam Hatip Lisesi (Turkey).
Based on the data collected in these schools, we seek to understand the uses of mobile devices in the educational context from the perspective of the teachers surveyed, namely: (i) to identify the teachers' competencies on mobile learning, (ii) to understand the pedagogical use of mobile technologies, and (iii) to understand the teachers' opinions on the use of mobile devices. This diagnosis will be used to design future interventions within this European project.
The article is organised as follows: Section 2 describes the method used for this study; Section 3 reports and discusses the results under three topics: teachers' competencies on mobile learning, pedagogical use of mobile technologies, advantages and disadvantages of using mobile phones in class. The final section concludes and provides recommendations for further research on mobile devices.
Methodology
We used the survey research method. Data were collected through an online questionnaire based on an instrument designed to diagnose the use of mobile technologies in the teaching and learning of a foreign language (English) (Lobato & Peter, 2012). The survey was self-administered to teachers of all levels at the schools participating in the project, from the six partner countries. The questionnaire had four general objectives: (i) to survey the project teachers' views about the importance of mobile technologies in an educational context; (ii) to diagnose the digital skills of teachers; (iii) to collect the opinion of teachers regarding the use of mobile devices in an educational context; and (iv) to analyse the opinions of teachers regarding the advantages and disadvantages of using mobile technologies in schools. The questionnaire consisted of seventeen closed questions and two open questions, taking on average approximately 20 minutes to complete.
The questionnaire was validated on the premise that data collection is a process that must ensure that what is collected serves the purpose of the study, as noted by De Ketele and Roegiers (1993). Once the type of questionnaire and the variables were defined, the analysis of another previously tested questionnaire was completed. Thereafter, the questionnaire was prepared for a pilot study, which resulted in a detailed analysis of the initial version and the subsequent construction of the final version.
As such, of the 20 questions in the original questionnaire (Lobato & Peter, 2012), 14 were kept unchanged and 6 were adapted. After adapting the Portuguese version, a link was sent to 5 teachers similar to the target audience (2 at Agrupamento de Escolas Gonçalo Mendes da Maia - English and Mathematics; 1 at Agrupamento de Escolas Castêlo da Maia - Portuguese; 2 at Agrupamento de Escolas Coronado e Castro - Portuguese), together with the following 9 questions to be answered after the questionnaire was completed: 1. How long did it take to complete the questionnaire? 2. Were the instructions clear? 3. Did you find any question ambiguous? If so, which and why? 4. Does the list of closed questions cover all the options? 5. Does any question influence the answer? 6. Did you decline to answer any question? 7. In your opinion, was any important topic omitted? 8. Did you consider the format of the questionnaire clear/attractive? 9. Would you like to add any comments?
After receiving the 5 responses of the teachers who participated in the pilot, we made the following changes: correction of questions 5 and 6, to allow simultaneous options; adaptation of the Likert scale, reducing it to 5 answer options for the statements (questions 7 to 16), because teachers felt there were too many, and confusing, options ("very confusing decision", "too many answer choices that make it confusing to answer"); elimination of question 18 ("Today it is impossible to live without a mobile phone and therefore also at school he should be used"), as teachers felt it repeated the previous question; and spelling correction of the last question (19).
The final version of the questionnaire (goo.gl/cD9Q3p) was translated into seven languages (English - the official language of the project - Romanian, Polish, Italian, Turkish, Greek and Portuguese), so that data could be collected among the teaching staff of each participating school. As such, convenience sampling was employed, consisting of teachers of the schools that are part of the project.
The questionnaire was sent to project coordinators in each country on March 1, 2017, with a deadline of 30 March 2017 for submission of responses. However, because only a small number of answers had been received by March 30, 2017, a new deadline of 31 April was proposed and a request was sent to the project coordinator (Petronia Moraru) to alert the partners. The questionnaire was sent via email to a total of 484 teachers, and 220 answers (45.5%) were obtained, which constitute the sample from which data were produced. Data collection took place in March and April 2017.
Data from the Likert-scale questions (7-16) were analysed using JASP 8.2. The analyses performed include frequency analysis and crosstab contingency tables. Cronbach's alpha was used as a measure of the internal consistency of these items in the questionnaire; alpha was 0.713, indicating a reasonable level of consistency.
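For readers who want to reproduce that kind of reliability check outside JASP, the sketch below implements the standard Cronbach's alpha formula on a respondents-by-items matrix. The simulated 220x10 Likert matrix, the random seed and the helper name are hypothetical; only the formula itself is standard.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    sum_item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - sum_item_vars / total_var)

# Hypothetical responses to questions 7-16 on a 1-5 Likert scale (illustration only).
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(220, 1))                       # shared underlying attitude
items = np.clip(base + rng.integers(-1, 2, size=(220, 10)), 1, 5)
print(round(cronbach_alpha(items), 3))
```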
Data from the open questions (17, 18) were submitted to qualitative analysis based on the techniques suggested by grounded theory (Strauss & Corbin, 1990), with the support of NVivo 11 software. For each question, the first step of the analysis was "open coding - the process of segmenting the data, examining them, comparing them, conceptualising them and categorising them" (Strauss & Corbin, 1990, pp. 60-61). We considered as the unit of analysis the "text blocks that reflect a particular topic", which "can be a sentence or two pages" (Ryan & Bernard, 2000, p. 782). For coding purposes in NVivo, these basic units of analysis were defined through free text selection.
In the case of question 17, related to the advantages of using mobile phones in the classroom, this process resulted in 14 categories. Afterwards, these categories were integrated into four conceptually higher categories: cognitive aspects; socio-affective aspects; methodological aspects; other aspects. This is the axial coding (Strauss & Corbin, 1990), aiming to restructure the data already coded through open coding.
For question 18, regarding the disadvantages of using mobile phones in the classroom, 11 categories were identified in the open coding phase. These were organised, in the axial coding phase, into four categories: cognitive aspects; socio-affective aspects; ethical aspects; other aspects. The category health aspects kept its open coding.
After examining the corpus, questioning and display functions were applied, which favour the understanding of the analysis. In this paper, we refer to word frequencies (Figure 8 and Figure 9), charts of search words, and models (vide goo.gl/G5WtYN). Given the space limitations, we do not explore the interpretation of each of the figures presented; that work will be presented in a future publication.
Results
The teachers that participated in this study are mostly women (Table 1) and predominantly between 36 and 54 years old (Table 2). Regarding country of origin, Portugal has the highest number of respondents (64) and Greece the lowest (22) (Table 1). Mobile phones and laptops clearly dominate the mobile devices owned (Figure 1). On a country basis, larger differences emerge in relation to personal laptops (from 26.9% in Italy to 100% in Romania). Portugal is the country where the fewest teachers report having their own mobile phone (54.7%) and Romania the one with the most (100%) (Figure 2). It is worth noting that Romania (n=23) has about a third of Portugal's respondents (n=64), but within the same age ranges (from 36 to 55 and over) (Table 2).
None of the other devices goes above 40%, the iPad being the closest to this value, in Greece and Italy (36.4% and 34.6%, respectively). Finally, it is worth noting that Portugal reports 23.4% of other devices, which are not identified (Figure 2).
Pedagogical use of mobile technologies
Most of the teachers in this study see pedagogical potential in the use of mobile devices in the classroom. More than 60% 'agree' or 'strongly agree' with statements such as 'mobile phone is a personal device that should be used in school' (Q11); 'mobile devices could be used in school activities' (Q12); and 'I see mobile devices as a pedagogical resource that should be explored' (Q13). Of the items that sought to measure this indicator, only Q16 ('nowadays it is impossible to live without a mobile phone, and therefore also in the school it should be used') raises more doubts, with almost a third of teachers disagreeing or strongly disagreeing with this statement.
Romania and Poland are where the belief that mobile devices afford pedagogical opportunities is strongest, and Italy where it is weakest. All four of these variables register values above 80% for the 'agree' and 'strongly agree' options in Romania and above 70% in Poland. In Italy, with the exception of Q12 (65.3%), the number of teachers agreeing or strongly agreeing with the other statements stops at 46% (Figure ii - goo.gl/G5WtYN).
The data show a significantly different picture when it comes to assessing whether teachers already take advantage of the potential mobile devices may bring to the classroom. Romania and Poland maintain higher figures, above 80%. Greece and Turkey are the only cases where the 'disagree' or 'strongly disagree' options are the highest (Figure 5). When asked about mobile devices in class being a distractor, Turkish teachers stand out (Figure 7). Notwithstanding, this is a less consensual issue, with the percentage of teachers who find mobile devices distracting (42.8%) close to that of those considering the opposite (36.8%) and a fifth of respondents choosing neither (Figure 6 and Figure 7). The word cloud reflects the frequency of words and allows us to situate the general ideas expressed by respondents regarding the positive aspects. We found that the words registering the highest occurrence refer to information searching by the students: using (50), information (35), students (29), search (21), access (20), interest (15). Indeed, among the many benefits reported by teachers, which we specify below, the most frequent is access to online information. A count of the number of units of analysis in each category by country (Figure iii at goo.gl/G5WtYN) shows the category access to information clearly highlighted. It is worth noticing that Portugal is where the idea is most frequent and Turkey the country that refers to it least.
The model resulting from the analysis of categorical data (Figure v at goo.gl/G5WtYN) presents a clearer x-ray of the data. The dimension positive aspects of mobile phone use in the classroom has four categories (cognitive; socio-affective; methodological; other), each with several subcategories obtained by the process described before. The circles refer to categories and subcategories of analysis and the lines to the countries whose teachers refer to them. With the exception of Turkey, which does not seem to have clear ideas about the advantages of using mobile phones in the classroom, teachers from the other countries acknowledge these advantages. In the case of the disadvantages of the use of mobile phones in the classroom, we have verified that the words of highest occurrence are those whose meaning refers to a possible distraction on the part of the students: using (74), students (62), distraction (56), in the forms distract, distracted, distracting, distraction, distracts and distractibility, and attention (25), in this case concerning lack of attention, as can be verified from the context where the word occurs. Thus, what worries teachers most seems to be the possibility that mobile phones in the classroom cause students to lose concentration and attention since, they argue, it is easy to wander to areas of personal interest that have nothing to do with schoolwork. This idea is confirmed by the number of units of analysis (Figure iv at goo.gl/G5WtYN), with the category distraction including by far the largest number of units in all countries. These are followed by situations related to lack of privacy and superficiality in the work performed. It is interesting to note that, once again, Portugal has the highest number of registered units of analysis, and that Turkey, which undervalued the positive aspects, reveals a greater awareness of the negative aspects.
The negative aspects of mobile phone use in the classroom comprise five categories (ethical; cognitive; socio-affective; health; other), four of which have several subcategories (Figure vi at goo.gl/G5WtYN). Our analysis shows that ethical issues, as well as health issues, are a concern among teachers of all countries.
It should be noted that teachers in Turkey refer to most of the negative aspects identified across the six countries and, even when asked about the positive aspects, they mention the negative ones. Perhaps Turkey is still at an early stage of using mobile phones in the classroom, which, like any other innovation, is initially seen more as a threat than as an advantage.
Concluding remarks
This study was conducted with the participation of 220 teachers, from elementary to secondary school levels, as well as vocational education, in the six countries participating in the Blic&Clic project. The analysis performed shows a high level of motivation towards the use of mobile devices as a pedagogical resource in the classroom. Romania and Poland are the countries where most teachers find this appealing, while Italy registers lower numbers. The possibility of easily accessing information is what teachers value most.
Nonetheless, actual use of these tools in classes is significantly lower and unequal across the different countries. Greece and Turkey present the highest figures for those who do not use mobile technologies in the classroom, while Romania and Poland are where more teachers report already taking these advantages. Both the statistical and the qualitative analysis present strong evidence that teachers are worried about mobile phones leading to distractions that disturb classroom work. In our analysis of negative aspects, this category contains by far the largest number of units in all countries.
Further research should explore in more depth what explains the gap between teachers' enthusiasm and the actual integration of mobile devices in school. While it is certain that some schools, or even national legislation, prohibit these devices in the classroom, our data point to other sorts of difficulties. This is particularly the case for distractions, but also for ethical and health issues. Given the differences between these countries, future work should identify what distinguishes the ones that seem more comfortable and confident about actually bringing mobile devices to class. | 4,019.4 | 2017-12-17T00:00:00.000 | [
"Education",
"Computer Science"
] |
Fast, greener and scalable direct coupling of organolithium compounds with no additional solvents
Although the use of catalytic rather than stoichiometric amounts of metal mediator in cross-coupling reactions between organic halides and organometallic counterparts significantly improves atom economy and reduces waste production, the use of solvents and the stoichiometric generation of main-group byproducts (B, Sn and Zn) hamper the 'greenness' and industrial efficiency of these processes. Here we present a highly selective and green Pd-catalysed cross-coupling between organic halides and organolithium reagents proceeding without additional solvents and with short reaction times (10 min). This method bypasses a number of challenges previously encountered in Pd-catalysed cross-coupling with organolithium compounds, such as strict exclusion of moisture, dilution and slow addition. The operational ease of this protocol combines industrially viable catalyst loadings (down to 0.1 mol%), scalability of the process (tested up to 120 mmol) and an exceptionally favourable environmental impact (E factors in several cases as low as 1).
Supplementary Tables
Supplementary Table 1. Pd-catalysed cross-coupling reaction of organolithium compounds and organic halides employing deep eutectic solvents (DES). Conditions: Commercially available PhLi (1.8 M in nBu2O) was added to a mixture of 1b (0.3 mmol, 56 mg) and Pd-PEPPSI-iPr under an inert atmosphere. However, we have repeated the synthesis (e.g. compound 2m) keeping the Schlenk flask open to the air, and a similar selectivity (>99%) and isolated yield (95%) were obtained.
Supplementary Note 2:
The authors have not experienced significant problems of exothermicity in comparison to usual couplings (or other catalytic reactions). The synthesis of 2aa was performed on a 6 mmol scale (1.25 g), with only a small temperature increase (from 25 °C to 28 °C) upon addition of the organolithium reagent.
Supplementary Note 3:
The authors have performed a cross-coupling of 1-bromonaphthalene and dry MeLi, prepared by removing the solvent of a commercial organolithium reagent under vacuum and subsequently transferring it to a glove box. The cross-coupling works, although with strongly reduced selectivity, with 2aa formed in up to 15%. We explicitly warn about the pyrophoric nature of dry organolithium species.
Supplementary Note 4: The authors have performed a cross-coupling of 1-bromonaphthalene and phenyllithium under the conditions given in General Procedure A, but using 1 equivalent of the lithium species rather than 1.2. Compound 2b was obtained in similar conversion and yield.
Supplementary Note 5:
We did not experience any problem with salt formation (for instance on stirring the reaction mixture) under any of the conditions we used.
Supplementary Note 6: Experimental procedure and calculation of the E-factor for the synthesis of 2ag, including aqueous work-up: The reaction mixture was quenched with 1 mL of water and extracted with 1 mL of AcOEt, and the organic phase was dried with anhydrous Na2SO4. Evaporation of the solvent under reduced pressure afforded the crude product, which was then filtered over a silica gel plug to afford the pure product. Yield 97%. The E factor including the water used for the work-up is 15.4. The E factor reported in the literature for the Suzuki coupling is 84.
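For reference, the E factor is simply the total mass of waste generated per mass of isolated product. The sketch below shows the bookkeeping with illustrative numbers only; the masses are placeholders, not the actual quantities used for 2ag, which are not reproduced here.

```python
def e_factor(input_masses_g, product_mass_g):
    """E factor = (total mass in - mass of isolated product) / mass of isolated product."""
    total_in = sum(input_masses_g)
    return (total_in - product_mass_g) / product_mass_g

# Placeholder masses (g) for substrate, organolithium solution, catalyst,
# quench water and extraction solvent -- not the real values for 2ag.
inputs = [0.207, 0.600, 0.010, 1.000, 0.900]
product = 0.165  # isolated product mass (g), placeholder

print(f"E factor = {e_factor(inputs, product):.1f}")
```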
General Procedure A for the Cross-Coupling with (Hetero)aryllithium Reagents
The corresponding commercially available or homemade (hetero)aryllithium reagent was added to a mixture of the substrate (1 mmol) and Pd-PEPPSI-iPr (1.5 mol%, 10.5 mg) at room temperature over 10 min. After the addition was complete, a saturated aqueous NH4Cl solution was added and the mixture was extracted with AcOEt or Et2O. The organic phases were combined and dried with anhydrous Na2SO4. Evaporation of the solvent under reduced pressure afforded the crude product, which was then filtered over a silica gel plug.
General Procedure B for the Cross-Coupling with Alkyllithium Reagents
The corresponding commercially available alkyllithium reagent was added to a mixture of the substrate (1 mmol) and Pd[P(t-Bu)3]2 (2 mol%, 10 mg) at room temperature over 10 min. After the addition was complete, a saturated aqueous NH4Cl solution was added and the mixture was extracted with AcOEt or Et2O. The organic phases were combined and dried with anhydrous Na2SO4. Evaporation of the solvent under reduced pressure afforded the crude product, which was then filtered over a silica gel plug.
General Procedure C for Reactions Carried out in 120 mmol Scale
Commercially available n-BuLi (100 mL, 1.6 M solution in hexane) was added via cannula to a mixture of the substrate (120 mmol, 27 g) and Pd[P(t-Bu)3]2 (0.4 mol%, 250 mg) at room temperature over 30 min, keeping the temperature between 20-25 °C with the use of an additional water bath. After the addition was complete, water was slowly added and the mixture was extracted with AcOEt or Et2O. The organic phases were combined and dried with anhydrous Na2SO4, and the solvent was removed under reduced pressure, affording the final product in reagent-grade quality.
Compound 3. CAS Registry Number: 613-37-6. Synthesized using catalytic system A with 1-bromo-4-methoxybenzene (1 mmol, 187 mg) and 798 µL of PhLi. Catalytic system A: reaction carried out at room temperature. White solid obtained after filtration over a silica plug (SiO2, n-pentane/Et2O 100:1), 155 mg, 84% yield. | 1,220.8 | 2016-06-02T00:00:00.000 | [
"Chemistry"
] |
Numerical study of fractional differential-algebraic systems by means of the Chebyshev pseudo-spectral method
A numerical treatment of a system of Caputo fractional-order differential-algebraic equations (SFDAEs) is presented in this article. The suggested method is based upon the shifted Chebyshev pseudo-spectral method (SCPSM). The shifted Chebyshev polynomials (SCPs) are used to reduce the SFDAEs to the solution of linear/nonlinear systems of algebraic equations. Using several test applications, the effectiveness and accuracy of the suggested approach are demonstrated graphically. Numerical comparisons between the proposed technique and other numerical methods in the existing literature are also made. The numerical results show that the proposed technique is computationally efficient, accurate and easy to implement.
Many physical phenomena are naturally described by a system of differential-algebraic equations (SDAEs). These types of systems arise in the modelling of mechanical systems subject to constraints, power systems, electrical networks, optimal control, chemical processes and in numerous other applications [17]. SFDAEs have recently proven to be a suitable tool in the modelling of numerous engineering and physical problems such as non-integer-order optimal controller design, electrochemical processes and complex biochemical systems [18].
The approximate and numerical solution of these types of systems has been a focus of several researchers, especially for nonlinear systems, because most of these systems do not have exact solutions. Numerical methods to solve SDAEs have been given, such as numerical algorithms for computing the matrix Green's operator [19], the implicit Runge-Kutta method [20], the Padé approximation method [21,22], the homotopy perturbation method [23], the Adomian decomposition method [24] and the variational iteration method (VIM) [25].
For FDEs, the spectral collocation method (also called the pseudo-spectral method) is widely applicable and commonly used to numerically solve different types of fractional differential equations [15,32]. In the collocation technique, the expansion coefficients are determined by constructing the approximate solution so that it satisfies the differential equation at suitably selected points from the domain, known as collocation points. Recently, various types of orthogonal polynomials and collocation points have been used in spectral collocation approximations [15,32].
CPs have many useful properties. Among others, these polynomials have very good properties in the approximation of functions. This has encouraged many researchers to use these polynomials for solving different types of differential equations and FDEs [32][33][34][35].
The proposed technique uses the properties of the Chebyshev polynomials (CPs) to reduce the SFDAEs to a system of algebraic equations, which greatly simplifies the problem. To the best of our knowledge, the numerical treatment of SFDAEs by SCPSM has not been established yet.
To check the accuracy of the suggested method, five numerical applications, including comparisons between our results and those achieved by other existing methods, are presented. This article is organized as follows: the basic definition of the Caputo fractional derivative and the main properties of the CPs are summarized in Section 2. In Section 3, the necessary theorems on the upper bound of the errors and the convergence analysis of the fractional derivatives of the SCPs are explained. Section 4 contains the procedure for the implementation of the suggested method for nonlinear FDAEs. Some applications are discussed in Section 5. Finally, a brief conclusion closes the paper in Section 6.
Definition 2.1:
A real function $f(t)$, $t > 0$, is said to be in the space $C_\mu$, $\mu \in \mathbb{R}$, if there exists a real number $p > \mu$ such that $f(t) = t^p g(t)$, where $g(t) \in C(0, \infty)$; and it is said to be in the space $C_\mu^m$ if and only if $f^{(m)} \in C_\mu$, $m \in \mathbb{N}$.
Main properties of the CPs
The CPs $T_n(z)$ of degree $n$ are determined by the following recurrence relation [32][33][34]:
$$T_{n+1}(z) = 2zT_n(z) - T_{n-1}(z), \qquad T_0(z) = 1, \quad T_1(z) = z.$$
The analytic form of $T_n(z)$, for $n \ge 1$, is
$$T_n(z) = \frac{n}{2}\sum_{k=0}^{\lfloor n/2 \rfloor} (-1)^k \,\frac{(n-k-1)!}{k!\,(n-2k)!}\,(2z)^{n-2k},$$
where $\lfloor n/2 \rfloor$ is the integer part of $n/2$. The SCPs $T^*_n(t)$ of degree $n$, defined on the interval $[0, L]$, are constructed by introducing the change of variable $z = \frac{2}{L}t - 1$, i.e. $T^*_n(t) = T_n\!\left(\frac{2}{L}t - 1\right)$. A square-integrable function $f(t)$ on $[0, L]$ can be approximated using the first $(m+1)$ terms of the SCPs as
$$f(t) \simeq \sum_{i=0}^{m} c_i\, T^*_i(t),$$
where the coefficients $c_i$ follow from the orthogonality of the SCPs with respect to the weight $1/\sqrt{Lt - t^2}$ (see [32][33][34]).
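As an illustrative check of these definitions (not code from the paper), the short sketch below builds the shifted Chebyshev polynomials on [0, L] through the recurrence in the power basis; for example, on [0, 1] it recovers T*_2(t) = 1 - 8t + 8t^2. The function name and L = 1 choice are ours.

```python
import numpy as np
from numpy.polynomial import polynomial as P  # power-basis helpers

def shifted_chebyshev(n_max, L=1.0):
    """Power-basis coefficients of T*_0, ..., T*_n_max on [0, L],
    built from T*_{n+1}(t) = 2(2t/L - 1) T*_n(t) - T*_{n-1}(t)."""
    z = np.array([-1.0, 2.0 / L])        # the mapped variable z = 2t/L - 1
    polys = [np.array([1.0]), z.copy()]   # T*_0(t) = 1, T*_1(t) = 2t/L - 1
    for _ in range(1, n_max):
        polys.append(P.polysub(2.0 * P.polymul(z, polys[-1]), polys[-2]))
    return polys

T_star = shifted_chebyshev(5, L=1.0)
print(T_star[2])                  # [ 1. -8.  8.]  ->  T*_2(t) = 1 - 8t + 8t^2
t = np.linspace(0.0, 1.0, 5)
print(P.polyval(t, T_star[3]))    # values of T*_3 at a few points in [0, 1]
```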
The approximation of the fractional derivatives of the SCPs and its convergence analysis
The approximate formulation of the non-integer-order derivatives, the truncation error and the convergence analysis of the SCPs are considered in the following theorems.
Theorem 3.1: (Chebyshev truncation theorem) [32]
The error in approximating $x(t)$ by the sum of its first $m$ terms is bounded by the sum of the absolute values of all the neglected coefficients: if $x_m(t) = \sum_{k=0}^{m} a_k T_k(t)$, then $|x(t) - x_m(t)| \le \sum_{k=m+1}^{\infty} |a_k|$ for all $x(t)$, all $m$, and all $t \in [-1, 1]$.
Theorem 3.2: Let $f(t)$ be approximated by SCPs as in (8) and suppose that $\alpha > 0$; then $D^{\alpha} f(t) \simeq \sum_{i=\lceil\alpha\rceil}^{m} \sum_{k=\lceil\alpha\rceil}^{i} c_i\, \psi^{(\alpha)}_{i,k}\, t^{k-\alpha}$, where the explicit expression for $\psi^{(\alpha)}_{i,k}$ is given in [32]. For the proof, see [32].
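The key fact behind expressions of this type is the Caputo rule for monomials, $D^{\alpha} t^{k} = \frac{\Gamma(k+1)}{\Gamma(k+1-\alpha)}\, t^{k-\alpha}$ for integer $k \ge \lceil\alpha\rceil$, with lower integer powers annihilated. The sketch below applies this rule term by term to a truncated power series (e.g. the power-basis form of an SCP expansion); it is a numerical illustration under these standard assumptions, not the paper's own code.

```python
import numpy as np
from scipy.special import gamma

def caputo_poly(coeffs, alpha, t):
    """Caputo derivative D^alpha of sum_k coeffs[k] * t**k at points t (alpha > 0)."""
    ceil_a = int(np.ceil(alpha))
    result = np.zeros_like(np.asarray(t, dtype=float))
    for k, a_k in enumerate(coeffs):
        if k >= ceil_a:  # lower integer powers are annihilated by the Caputo derivative
            result += a_k * gamma(k + 1) / gamma(k + 1 - alpha) * t ** (k - alpha)
    return result

t = np.linspace(0.1, 1.0, 5)
# f(t) = t^2: D^0.5 f(t) = Gamma(3)/Gamma(2.5) * t^1.5
print(caputo_poly([0.0, 0.0, 1.0], 0.5, t))
print(gamma(3) / gamma(2.5) * t ** 1.5)   # should match the line above
```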
Theorem 3.3: [32,35]
The Caputo fractional derivative of order $\alpha$ of the SCPs can be expressed in terms of the SCPs themselves; the explicit form of this expansion and of its coefficients is given in [32,35]. Theorem 3.4 [32,35] provides a bound on the error committed when this expansion is truncated.
Solution to the SFDAEs
In this section we explain the main steps of the procedure for applying the SCPSM to SFDAEs written in terms of the unknowns $y_1, y_2, \cdots, y_n$ and their fractional derivatives $D^{\alpha_i} y_i$, with $i = 1, 2, \cdots, l-1$, $t \ge 0$, $0 < \alpha_i \le 1$, subject to the given initial conditions (18).
Step 1: Approximate $y_i(t)$ using the SCPs as $y_i(t) \simeq \sum_{j=0}^{m_i} c_{ij}\, T^*_j(t)$.
Step 2: Use Eq. (12) to approximate the Caputo fractional derivatives; the SFDAEs (17) are then reduced to algebraic relations among the expansion coefficients.
Step 3: Approximate the initial conditions (18) using the SCPs.
Step 4: As suitable collocation points, use the roots of the SCPs $T^*_{m_i+1-\lceil\alpha_i\rceil}(t)$.
Step 5: The equations obtained in Step 4, together with Eq. (21), represent a system of linear/nonlinear algebraic equations in the unknown expansion coefficients. Step 6: Solve this algebraic system using the Newton iteration method to obtain the unknowns.
Step 7: The approximated solutions are then recovered as $y_i(t) \simeq \sum_{j=0}^{m_i} c_{ij}\, T^*_j(t)$ with the computed coefficients.
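To make the procedure concrete, the sketch below applies the same collocation idea to the simplest case α = 1 (an ordinary differential equation with an algebraic constraint), using a shifted Chebyshev expansion on [0, 1] and a generic root-finding call in place of the paper's Newton iteration. The toy system (y' = y, z = y², y(0) = 1) and all names are our own assumptions, not the paper's implementation.

```python
import numpy as np
from numpy.polynomial import Chebyshev
from scipy.optimize import fsolve

m = 5                                                              # truncation order for each unknown
pts_diff = 0.5 * (np.polynomial.chebyshev.chebpts1(m) + 1.0)       # m collocation points in (0, 1)
pts_alg = 0.5 * (np.polynomial.chebyshev.chebpts1(m + 1) + 1.0)    # m+1 points for the algebraic equation

def residuals(u):
    c, b = u[: m + 1], u[m + 1 :]
    y = Chebyshev(c, domain=[0, 1])        # y(t) ~ sum_j c_j T*_j(t)
    z = Chebyshev(b, domain=[0, 1])        # z(t) ~ sum_j b_j T*_j(t)
    r_ic = [y(0.0) - 1.0]                              # initial condition y(0) = 1
    r_diff = list(y.deriv()(pts_diff) - y(pts_diff))   # y'(t) - y(t) = 0 at collocation points
    r_alg = list(z(pts_alg) - y(pts_alg) ** 2)         # algebraic constraint z(t) - y(t)^2 = 0
    return np.array(r_ic + r_diff + r_alg)

u0 = np.zeros(2 * (m + 1))
u0[0] = 1.0                                 # crude initial guess
sol = fsolve(residuals, u0)
y_num = Chebyshev(sol[: m + 1], domain=[0, 1])
print(abs(y_num(1.0) - np.e))               # small error vs the exact solution y(t) = exp(t)
```

For genuinely fractional α one would replace `y.deriv()` by the Caputo derivative of the expansion, as in Theorem 3.2.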
Numerical applications
In this section, five numerical applications of SFDAEs are solved by the proposed technique; the applications include variable- and constant-coefficient, linear and nonlinear SFDAEs. Application 5.1: Consider the following variable-coefficient linear SFDAE [27][28][29], eq. (22), with the given initial conditions. For the special case α = 1, system (22) has an exact solution. By implementing the proposed technique with $m_1 = m_2 = 5$, the approximated equations for the initial conditions become $\sum_{j=0}^{5} c_j T^*_j(0) = 1$ and the analogous equation for the $b_j$. The equations obtained by collocating system (25) at the first five roots of the SCPs $T^*_5(t)$, together with Eqs. (26), represent a system of linear algebraic equations containing twelve equations for twelve unknowns, $c_j$ and $b_j$, $j = 0, 1, \ldots, 5$. These unknowns are obtained by using the Newton iteration method.
For α = 1, the estimated solutions are given in Table 1. The execution time for this problem was measured using Mathematica 10 software on an Intel(R) Core(TM) i3 CPU; it is apparent that the solution does not require much CPU time. It is remarkable that the proposed technique is very effective even when using only a few terms of the SCPs, and the overall errors can be made smaller by adding more terms of the SCPs.
In Table 2, the approximate numerical solutions for x(t) for α = 0.75 and 1 are compared with the solutions given by VIM [27], HAM [28] and TF [29]. These numerical results demonstrate the agreement between our method and the other numerical methods used in the comparisons.
Application 5.2:
Consider the following nonlinear SFDAE [27][28][29], with 0 < α ≤ 1 and given initial conditions. For the special case α = 1, the exact solution is known. By employing the proposed technique with $m_1 = m_2 = m_3 = 5$ and expansions of the form $\sum_{j=0}^{5} a_j T^*_j(t)$ for each unknown, we obtain a system of nonlinear algebraic equations containing eighteen equations for eighteen unknowns, $c_j$, $b_j$ and $a_j$, $j = 0, 1, \ldots, 5$. These unknowns are obtained by using the Newton iteration method.
For α = 1, the estimated solutions are x(t) = 0.9999 + t(3.0001 + t(2.4967 + t(0.85017 + ...))) and y(t) = 1 + t(6.00725 + t(13.647 + t(19.38 + t(4.417 + 11.9554t)))). The numerical solutions of Application 5.2 are shown graphically in Figure 2 and tabulated in Table 3(a-c) and Table 4(a,b). From Figure 2, it is easy to conclude that the obtained solutions depend continuously on the fractional derivative and that the estimated solutions are in good agreement with the exact solution at α = 1. Table 3(a-c) shows the effectiveness of the suggested technique even when only a few terms of the SCPs are used, and the accuracy of the method increases as more terms of the SCPs are added. The solutions also do not require much CPU time to produce very accurate numerical results. Table 4(a,b) demonstrates the accuracy of the proposed technique when compared with the numerical methods in [27][28][29].
Application 5.3: Consider the following variable-coefficient nonlinear SFDAEs [29], where 0 < α ≤ 1, with initial conditions:
At α = 1, the exact solution is known. By applying our proposed technique with $m_1 = m_2 = m_3 = 5$, $y(t) = \sum_{j=0}^{5} c_j T^*_j(t)$, $z(t) = \sum_{j=0}^{5} b_j T^*_j(t)$ and $w(t) = \sum_{j=0}^{5} a_j T^*_j(t)$ to the fractional system, we obtain a system of linear algebraic equations with unknowns $c_j$, $b_j$ and $a_j$, $j = 0, 1, \ldots, 5$. These unknowns are obtained by using the Newton iteration method. For α = 1, the estimated solutions are of the form y(t) = -1.56 × 10⁻¹⁷ + t(-0.0002 + t(1.004 + ...)). The numerical solutions of Application 5.3 are shown graphically in Figure 3, and the absolute errors between our approximate solutions and the exact solutions for different values of m, together with their CPU times, are given in Table 5(a-c). Application 5.4: Consider the following nonlinear SFDAEs [26] with the given initial conditions. At α = 1, system (31) has the exact solution $x(t) = t^2$, $y(t) = t^4$, $z(t) = 2t^3 + t + 1$. By applying our proposed technique with $m_1 = m_2 = m_3 = 5$, $x(t) = \sum_{j=0}^{5} c_j T^*_j(t)$, $y(t) = \sum_{j=0}^{5} b_j T^*_j(t)$ and $z(t) = \sum_{j=0}^{5} a_j T^*_j(t)$ to the fractional system, we obtain a system of linear algebraic equations with unknowns $c_j$, $b_j$ and $a_j$, $j = 0, 1, \ldots, 5$. These unknowns are obtained by using the Newton iteration method, and the corresponding results are given in Table 6(a-c).
The exact solution of this problem is $y_1(t) = t^{5/2}$, $y_2(t) = t^2$, $y_3(t) = \sin(t)$. By applying our proposed technique with $m_1 = m_2 = m_3 = 5$, $y_1(t) = \sum_{j=0}^{5} c_j T^*_j(t)$, $y_2(t) = \sum_{j=0}^{5} b_j T^*_j(t)$ and $y_3(t) = \sum_{j=0}^{5} a_j T^*_j(t)$ to the fractional system (33), we obtain a system of linear algebraic equations with unknowns $c_j$, $b_j$ and $a_j$, $j = 0, 1, \ldots, 5$. These unknowns are obtained by solving the algebraic system, yielding the estimated solutions. The numerical results of Application 5.5 are graphically illustrated in Figure 5. A numerical comparison between our solutions and the results in [31] is tabulated in Table 7(a-c). The numerical results demonstrate the effectiveness and accuracy of the proposed technique even when using only a few terms of the SCPs, and our results are quite similar to those given by SGOM [31].
Conclusion
In this paper, the SCPSM has been extended to solve linear and nonlinear SFDAEs. The numerical results of the tested applications provide good evidence of the applicability and efficiency of the suggested method. A specific advantage of the suggested implementation is that it transforms the fractional differential equations into a system of algebraic equations, which is easier to solve. Satisfactory results are obtained using only a few terms of the SCPs, and the efficiency of the proposed method increases when more terms of the SCPs are used. The obtained solutions depend continuously on the fractional derivative, and this observation confirms the physical meaning of the behaviour of the solution for the proposed real problems. The solutions also do not require much CPU time. | 3,008.6 | 2020-01-01T00:00:00.000 | [
"Mathematics"
] |
Fitting neutrino physics with a U(1)R lepton number
We study neutrino physics in the context of a supersymmetric model where a continuous R-symmetry is identified with the total Lepton Number and one sneutrino can thus play the role of the down type Higgs. We show that R-breaking effects communicated to the visible sector by Anomaly Mediation can reproduce neutrino masses and mixing solely via radiative contributions, without requiring any additional degree of freedom. In particular, a relatively large reactor angle (as recently observed by the Daya Bay collaboration) can be accommodated in ample regions of the parameter space. On the contrary, if the R-breaking is communicated to the visible sector by gravitational effects at the Planck scale, additional particles are necessary to accommodate neutrino data.
Introduction
Having already collected an integrated luminosity of 5 fb$^{-1}$, the LHC is starting to probe the nature of the (possible) UV completion of the Standard Model (SM). Supersymmetry (SUSY) is surely one of the best motivated SM extensions, since it elegantly solves the hierarchy problem. In the construction of the supersymmetric version of the SM (SSM), one finds dangerous operators that allow for proton decay. In order to forbid such operators, the common assumption is to enlarge the symmetry group to SU(2)×U(1)×G, where invariance under G forbids proton decay. Typically one assumes that G is a discrete group (R-parity $R_p$) under which ordinary particles are even while supersymmetric particles are odd. Besides forbidding Baryon (B) and Lepton (L) number violating operators that generate proton decay (and other flavor-changing processes), an immediate consequence of R-parity is to make the Lightest Supersymmetric Particle (LSP) absolutely stable. Of course there are alternatives to R-parity (e.g. one can impose invariance under other discrete groups, under L and/or B, or one can extend R-parity to a continuous U(1)$_R$ [1,2]), and one can even assume that proton decay is not forbidden, as in R-parity violating (RPV) theories (see [3] for a comprehensive review), where however the coefficients of the L and B violating operators must be strongly suppressed.
The case of G = U(1)$_R$, the continuous group that contains R-parity as a $Z_2$ subgroup, requires going beyond the minimal scenario. Indeed, the R-symmetry forbids Majorana gaugino masses, but Dirac gaugino masses are allowed if the gauge sector of the theory is enlarged to that of N = 2 SUSY, e.g. including adjoint superfields $\Phi^a_i$ for each gauge group $G_i$. If SUSY breaking is transmitted to the visible sector through a spurion D-term, $W_\alpha = D\,\theta_\alpha$, then a Lagrangian term of the form $\frac{1}{M}\int d^2\theta\, (W W^a_i)\,\Phi^a_i$ generates
Dirac mass terms of order $m_d \sim D/M$. R-symmetric models [4][5][6] represent an interesting possibility to explore for several reasons. First of all, gaugino one-loop contributions to the squared soft masses are finite [7], so that the fine-tuning issue for the gluino is softened (see e.g. [8]). In addition, the LHC phenomenology is non-standard, due both to the Dirac nature of the gluino [9][10][11][12] and to the presence of additional particles that can rather easily be detected [13]. Moreover, the Flavor Problem is also softened, since unsuppressed flavor-changing terms are now allowed for sufficiently heavy Dirac gaugino masses [4,14].
As recently explored in [6] and [20] (see also [15][16][17][18] and [19] for earlier attempts), it is not necessary to define the R-symmetry as the continuous symmetry containing R-parity. Indeed, proton stability can also be ensured by identifying the R-symmetry with Lepton number [6] or with Baryon number [20]. Both scenarios violate $R_p$, but proton stability is guaranteed without any suppressed coupling, since the model possesses either an accidental standard Baryon or standard Lepton number. We will focus here on a scenario where the R-symmetry is identified with Lepton number. One of the distinctive features of this idea is that it allows a sneutrino to play the role of the down-type Higgs [6] (the idea of a non-zero sneutrino vev has been extensively explored in the literature; see [21][22][23][24][25][26][27][28][29] for examples). Assuming the R-symmetry not to be spontaneously broken by the sneutrino vev, one is then forced to require vanishing Lepton number for the slepton doublet. This immediately implies that neutrino Majorana masses are forbidden by the R-symmetry and that the sneutrino vev, being unrelated to neutrino masses, can be large enough to give mass to the bottom quark.
However, the R-symmetry is not an exact symmetry, since an irreducible source of R-breaking ($\not\!R$ from now on) is given by the gravitino mass necessary to cancel the cosmological constant. This suggests a tight connection between neutrino physics and SUSY breaking.
In the specific model presented in [6], the R-symmetry was identified with the lepton number of a specific flavor, and only one non zero neutrino mass was generated. We want here to enlarge the R-symmetry to the total Lepton number to see whether this more realistic scenario can reproduce neutrino physics, analyzing in detail the parameter space compatible with the present experimental neutrino data.
R-symmetry as global lepton number
Let us now describe our framework. We generalize the model of [6] in such a way that the R-symmetry is identified with the global lepton number, U(1)$_R$ = U(1)$_L$. In particular, all the R-charges of the lepton doublets and singlets are fixed to 0 and 2, respectively (see Table 1). The $R_d$ electroweak doublet with R-charge 2, introduced to have an anomaly-free framework, will play the role of an inert doublet (since we do not want the R-symmetry to be spontaneously broken), while the role of the usual down-type Higgs doublet will be played by a combination of sleptons, as we will explain later on. Since R-symmetry invariance is incompatible with Majorana gaugino masses, it is necessary to introduce three adjoint superfields, $\Phi_{\tilde W, \tilde B, \tilde g}$, that couple to the ordinary gauginos via D-term SUSY breaking [7] to generate Dirac masses. The most general superpotential compatible with the given R-charge assignment is:
where $\lambda_{ijk} = -\lambda_{jik}$ follows from the antisymmetry of $L_i L_j$. The R-conserving SUSY-breaking soft Lagrangian is instead given in eq. (2.2). The R-symmetry cannot be an exact symmetry, since it is broken at least by the gravitino mass necessary to cancel the cosmological constant. To write down the $\not\!R$ soft SUSY-breaking Lagrangian, we need an ansatz for how the R-breaking is communicated to the visible sector. A minimal scenario is to assume that gravity conserves the R-symmetry [30], so that R-breaking effects are communicated to the visible sector only through Anomaly Mediation; however, we can also imagine that gravity effects at the Planck scale break the R-symmetry.
In the first case, which we will call Anomaly Mediation R-Breaking (AMRB) scenario, the soft R-breaking lagrangian is given by:
where $\mathcal{L}_{\rm Majorana} = m_{\tilde B}\,\tilde B\tilde B + m_{\tilde W}\,{\rm tr}(\tilde W\tilde W) + m_{\tilde g}\,{\rm tr}(\tilde g\tilde g)$. The first term contains gaugino Majorana masses of order $m \sim \frac{m_{3/2}}{16\pi^2}$, while the second one contains trilinear scalar interactions proportional to the supersymmetric Yukawa couplings.
Turning to the case in which gravitational effects at the Planck scale break the R-symmetry (which we will call the Planck Mediated R-Breaking (PMRB) scenario), the R-breaking structure is much richer than in the previous case, since now all the operators suppressed by some power of the Planck scale can contribute. The R-conserving superpotential and soft SUSY-breaking Lagrangian, eqs. (2.1), (2.2), are corrected by the corresponding $\not\!R$ contributions. The $\not\!R$ soft SUSY-breaking contribution has the same structure as in eq. (2.4), but now we simply expect all the generated terms to be of the order of the gravitino mass $m_{3/2}$ and the A-terms not to be aligned with the supersymmetric Yukawa couplings. Let us note the appearance of µ-terms and Majorana masses for the adjoint fermions, also of order $m_{3/2}$. As we will see, they will play an essential role in neutrino physics.
Let us now study how electroweak symmetry breaking works in this framework and how fermions get masses. Since all the sleptons have a $b_\mu$ term, eq. (2.2), in a general basis all sneutrinos will get a vev. However, we can use the freedom to rotate the slepton fields to work in a "single vev basis" where just one sneutrino gets a vev. We will denote with A, B, C the flavor indices in this basis, with $\hat L_A$ referring to the doublet that plays the role of the down-type Higgs. The superpotential can be rewritten accordingly, where $y_B \equiv \lambda_{ABB}$ and $y_C \equiv \lambda_{ACC}$. In the new basis, the R-conserving soft Lagrangian of eq. (2.2) maintains the same form, while the R-breaking ones of eqs. (2.4), (2.5) now read as in eq. (2.9). The first term gives slepton and squark left-right mixing (in the AMRB scenario the off-diagonal terms $A_{BC}$, $A_{CB}$ are zero), while the second term contains trilinear scalar interactions that do not involve the slepton that takes a vev. Let us stress that the gaugino Majorana masses and the scalar left-right mixing will play a crucial role in the generation of neutrino masses. The analysis of the scalar potential can be done along the lines of ref. [3], although in our case the situation is more involved. Indeed, when the left-handed slepton soft squared mass matrix is not flavor universal, a mixing between the sneutrino that takes a vev and the other two is in principle possible, so that we expect the physical Higgs to be an admixture of all three sneutrinos. On the contrary, when the squared mass matrix is flavor universal, the resulting scalar potential is the usual one [6]. We assume here for simplicity that, at leading order, the soft squared mass matrix is flavor universal, deferring the analysis of the non-flavor-universal case to future work.
From eq. (2.7) it is immediate to notice that the charged lepton of flavor A cannot acquire a mass through a SUSY-invariant Yukawa term, as the operator $L_A L_A e^c_A$ vanishes due to SU(2) invariance. Therefore, a mass for the lepton A must be generated by a hard SUSY-breaking sector through couplings between messengers and leptonic superfields [6]. However, in the present scenario, this sector will generate hard Yukawa couplings also for the B and C flavors.
If we assume that the main contribution to the B, C masses comes from the supersymmetric Yukawa couplings, the additional contribution from the hard sector must somehow be suppressed. This makes A = e the simplest possibility. Indeed, if A = τ, the τ lepton mass must be generated by the hard sector, while the hard contribution to the other masses must be suppressed (for example by requiring the hard Yukawa couplings $y_{ij}$ to satisfy $y_{ij} \lesssim 10^{-6}$). This corresponds to assuming a large hierarchy between the hard Yukawa couplings. The same line of reasoning can be applied to the A = µ case. If instead A = e, a hard Yukawa contribution which generates Yukawa couplings of order $y_e \sim O(10^{-6})$ for all the charged leptons does not give a too large contribution to the µ and τ masses, while providing the correct order of magnitude for the electron mass. Since in this case there is no need to introduce any large hierarchy in the new sector, it appears a more natural choice. A possible example of a hard Yukawa sector is given in [6]; however, let us stress that, since we will leave this sector largely undetermined, in what follows we will also analyze the cases in which A ≠ e.
As a last comment, let us stress that the interaction terms of $W_{\rm trilinear}$ (which are not present in [6]) closely resemble the trilinear interaction terms that appear in RPV theories [3]. However, in our case all the off-diagonal terms involving the flavor A, $\lambda^{(\prime)}_{Aij}$, are zero in the single vev basis, so that the number of parameters is reduced. Moreover, couplings of the type $\lambda^{(\prime)}_{Aii}$ now play the role of Yukawa couplings and are not free parameters.
We conclude that our scenario is a variation of RPV models (with fewer parameters), although, as we will see, a larger amount of R-parity violation in the neutrino sector than in the standard case will be allowed.
Electroweak precision measurements and flavor constraints
Let us now discuss in turn the experimental constraints coming from Electroweak Precision Measurements (EWPM) and from flavor physics.
One of the distinctive features of models where the R-symmetry is identified with Lepton Number is that all the supersymmetric partners, with the exception of the charged sleptons and sneutrinos, carry a non-vanishing lepton number. As a consequence, charged leptons and neutrinos can mix with the "new" spin-1/2 leptons (Dirac gauginos and higgsinos). A priori, the neutralino mass matrix is a 9 × 9 square matrix, while the chargino mass matrix is a 12 × 12 square matrix. However, in the single vev basis, the leptons of flavors B and C do not mix with any other fermion, so that the effective matrix is the same as in [6], to which we refer for a detailed analysis of the mass eigenstates. The important point to stress for our purpose is that in the R-symmetric limit all neutrinos are massless. Also, the same bounds on the sneutrino vev coming from the bounds on the coupling of the Z boson to charged leptons apply, i.e. for $M_{\tilde W} \sim 1$ TeV one should have $v_A \lesssim 40$ GeV.
Let us now turn to the bounds on trilinear couplings appearing in W Yukawa and W trilinear [32]. Since these are RPV couplings, we refer to [3,32] for a detailed description of the origin of the various bounds. It is interesting to notice that our framework has distinctive differences both with the model of [6] and with the standard RPV SUSY.
On the one hand, Lepton Flavor Violating (LFV) processes are allowed in our framework but not in [6], and the same is true also for semileptonic meson decays (such as rare decays of B and K mesons), unless we assume alignment between the matrices $(\lambda_{B,C})_{ij}$ and the quark mass matrix.
On the other hand, even though our situation is more similar to standard RPV SUSY, some bounds have a different interpretation. In particular, bounds that involve a product of two trilinear couplings can now involve one Yukawa coupling. In order to maximise the parameter space for the sneutrino vev, we read these bounds as vev-dependent constraints on the trilinear couplings appearing in $W_{\rm trilinear}$. For example, when A = e, these constraints are evaluated for $v_e = (10 - 80)$ GeV. At the same time, among the constraints that involve only one trilinear coupling and not a product, some refer to bounds on Yukawa couplings, thus implying a bound on the sneutrino vev.
In the following we will always assume that trilinear couplings not directly related to neutrino physics are always small enough to satisfy all the experimental constraints.
Table 2. Fits to neutrino oscillation data. Where two different values are given for one parameter, the upper and lower rows refer to Normal and Inverted Hierarchy, respectively.
Neutrino physics and U(1) R lepton number
In our model the R-symmetry is identified with the global Lepton number, so that U(1)$_R$ breaking corresponds to Lepton Number breaking. In the following section we will discuss how neutrino masses and mixing are generated from R-symmetry breaking effects. The problem of neutrino masses in models with an R-symmetry has been studied both for Majorana [33] and Dirac [34] neutrinos. Both scenarios require enlarging the particle content of the model by introducing right-handed neutrinos. Indeed, in the standard R-symmetric scenario [4,5], there is no natural connection between the R-breaking and Majorana neutrino masses, since these are allowed by the R-symmetry (all lepton superfields have R-charge 1). A priori, however, the R-symmetry does not forbid Dirac masses either, since their presence depends on the R-charge assignment of the right-handed neutrinos. This makes the connection between neutrino Dirac masses and R-symmetry breaking less stringent.
On the contrary, in our scenario there is a clear connection between Majorana neutrino masses and R-breaking effects, since such Majorana masses are clearly incompatible with the U(1) R symmetry. In this way, in principle we don't need to introduce any additional particle (i.e. right-handed neutrinos) in order to generate non zero masses. While, as we will see, this will be true for AMRB, in the case of PMRB additional structure will be necessary in order to reproduce neutrino masses and mixing, making this scenario less compelling.
Let us stress again that in our scenario the scale at which Lepton Number is broken is deeply connected with the scale of supersymmetry breaking through the gravitino mass, while in general the Majorana neutrino masses generated through the Weinberg operator call for a very large scale, which may or may not be connected to the scale of supersymmetry breaking.
Neutrino masses and mixings
Before analyzing the neutrino phenomenology in our framework, let us briefly summarize some features of a general neutrino mass matrix.
As is well known, the neutrino mass matrix is largely undetermined, since we lack information on the absolute neutrino mass scale and on the hierarchy between the mass eigenstates. For three active neutrinos, the present data are summarized in Table 2.
At the same time, CMB data point towards $\sum_i m_{\nu,i} \lesssim 0.6$ eV (see e.g. [38]), from which one can infer a loose upper bound $m_{\rm lightest} \lesssim 0.1$ eV for both hierarchies. Using these data in the expression of the neutrino mass matrix in terms of masses and mixing, we expect general forms for the mass matrix (in the $(\nu_e, \nu_\mu, \nu_\tau)$ basis) for the Normal Hierarchy (eq. (3.1)) and the Inverted Hierarchy (eq. (3.2)); in each case, the matrices on the left and on the right refer respectively to a small and to a large lightest neutrino mass. In what follows, taking the approach of [39], we will focus on specific forms of the neutrino mass matrix that we consider representative of the different phenomenological scenarios. In particular, we will focus on the two matrices of eq. (3.3), representative respectively of the Normal and Inverted Hierarchy cases for a small lightest neutrino mass. We do not show the corresponding matrices for the large lightest neutrino mass scenario because, as we will explain later on, they can be reproduced only in a very small region of parameter space. To construct these matrices, we fixed the lightest neutrino mass to $2 \times 10^{-5}$ eV, while the other parameters are fixed as follows: $\Delta m^2_{12} \simeq 7.6 \times 10^{-5}\ {\rm eV}^2$, $\Delta m^2_{13} \simeq 2.4 \times 10^{-3}\ {\rm eV}^2$, $\sin^2\theta_{12} \simeq 0.3$, $\sin^2\theta_{23} \simeq 0.47$, $\sin^2\theta_{13} \simeq 0.024$, i.e. we take $\theta_{13} \simeq 9^\circ$ as recently observed by the Daya Bay collaboration [40]. For simplicity, we have also assumed a vanishing CP-violating phase.
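As a cross-check of how such matrices follow from the oscillation parameters, the sketch below reconstructs the Majorana mass matrix in the flavor basis, $m_\nu = U\,{\rm diag}(m_1, m_2, m_3)\,U^T$, for the Normal Hierarchy benchmark quoted above (real PMNS matrix, vanishing CP phase, $m_{\rm lightest} = 2 \times 10^{-5}$ eV). The numbers produced are only illustrative of the structure of eq. (3.3), not the values quoted in the paper.

```python
import numpy as np

# Oscillation parameters used in the text (Normal Hierarchy, vanishing CP phase).
m1 = 2e-5                                    # lightest neutrino mass [eV]
m2 = np.sqrt(m1**2 + 7.6e-5)                 # from Delta m^2_12  [eV]
m3 = np.sqrt(m1**2 + 2.4e-3)                 # from Delta m^2_13  [eV]
s12, s23, s13 = np.sqrt([0.3, 0.47, 0.024])  # sin(theta_ij)
c12, c23, c13 = np.sqrt(1 - np.array([s12, s23, s13])**2)

# Real PMNS matrix U = R23 * R13 * R12 (delta_CP = 0, no Majorana phases).
R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
R13 = np.array([[c13, 0, s13], [0, 1, 0], [-s13, 0, c13]])
R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
U = R23 @ R13 @ R12

# Majorana mass matrix in the (nu_e, nu_mu, nu_tau) basis.
m_nu = U @ np.diag([m1, m2, m3]) @ U.T
print(np.round(m_nu / 1e-2, 3))   # entries in units of 10^-2 eV
```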
Neutrino physics in AMRB
Inspecting eq. (2.3), it is clear that the gaugino Majorana masses contribute to the neutralino-neutrino mass matrix. This resembles what happens in RPV theories with bilinear terms [3], where one neutrino gets a non zero mass already at tree level through its mixing with gauginos.
On the contrary, in this scenario all neutrinos remain massless at tree level. This is a striking difference with respect to the RPV case, and it can be understood by considering the approximate eigenstates of the neutrino mass matrix (calculated e.g. using the usual seesaw formula). The B and C flavors are by themselves approximate eigenstates and cannot get a mass through mixing with gauginos. At the same time, the flavor A mixes only with Higgsinos and adjoint fermions, so that the absence of mixing with gauginos and of Majorana masses for the adjoint fermions prevents $\nu_A$ from getting a tree-level mass.
It is now clear that, in the AMRB scenario, the only possibility for neutrinos to acquire a mass is through loop effects. In the (ν A , ν B , ν C ) basis, the main contributions at 1-loop are given by [3]:
• Loops with two supersymmetric trilinear couplings and one mass insertion in the scalar propagator due to Anomaly Mediation.
Since this term is proportional to the mass of the fermion circulating in the loop, the dominant contributions are given by the bottom quark, the strange quark and the tau lepton (we neglect the muon contribution because, due to color factors, it is subdominant with respect to the strange quark contribution). In these contributions, $\lambda_{Aii} = (m_d)_i/v_A$ is the $i$-th down-quark Yukawa coupling, $m_{\tilde b}$ is the common left-handed and right-handed sbottom mass scale, $\beta_b$ is the bottom β-function [31], and for simplicity we have assumed $\lambda_{B23} = \lambda_{B32}$, $\lambda_{C23} = \lambda_{C32}$.
In the lepton sector, the main contribution is given by an analogous expression, where $\lambda_{A33} = m_\tau/v_A$ is the tau Yukawa coupling and $\beta_\tau$ the tau β-function [31].
• Loops with two gauge couplings and one Majorana mass insertion in the gaugino propagator, where $M_{\tilde W}$ is the Dirac Wino mass and we have used the Anomaly Mediation contribution to the Majorana Wino mass, $m_{\tilde W} = \frac{g^2}{16\pi^2}\, m_{3/2}$. In the previous equations we neglected the mixing of $\nu_A$ with the adjoint gauginos (see eq. (3.4)): this is consistent in the portion of parameter space we will consider in the following numerical analysis.
Barring special relationships between the parameters involved, the neutrino mass matrix now has three non-zero eigenvalues. These depend on free parameters (trilinear RPV couplings and the gravitino mass) that can be chosen to fit the experimental data, but also on gauge couplings and masses that are constrained by collider bounds. In what follows we will always take as reference a "natural" spectrum for the supersymmetric partners, with only the squarks of the third generation below the TeV scale, while all other superparticle masses can be above the TeV scale. At the moment, the experimental bounds on this kind of spectrum are less severe than those obtained for almost degenerate squarks [8]. Note that Dirac gauginos have improved naturalness with respect to Majorana gauginos [7], and this allows us to have a natural gluino above the TeV scale and a heavier Wino. In what follows, we will take the Dirac Wino mass up to 10 TeV.
[Table 3: ranges of the scan parameters: 20-100, 300-1000, 200-1000, 20-100, 300-1000, 200-1000, 0.5-10.]
As already stressed, our scenario is a particular case of RPV SUSY (in particular, the loop contributions are the same in both cases), so it is interesting to compare the two situations. Usually in RPV scenarios the left-right sparticle mixing and the Majorana gaugino masses are at the EW scale, while in our case they are proportional to the gravitino mass and can be subleading for a small supersymmetry-breaking scale. This implies that, while usually one needs to suppress too large loop contributions to neutrino masses by putting severe upper bounds on the trilinear couplings [3], in our case the upper bound is translated onto the gravitino mass (with trilinear couplings usually allowed to saturate the bounds from EWPM and flavor physics, see section 2.1).
A loose upper bound on the gravitino mass can be derived from cosmological considerations. Indeed, as already stressed, the absolute neutrino mass scale is bounded from above by CMB measurements, $m_\nu \lesssim 0.6$ eV. This readily translates into an upper bound on the gravitino mass, which can be roughly estimated as follows. Since $m_{AA}$ is the only entry in the neutrino mass matrix that does not depend on the trilinear couplings, we can use it to roughly set the scale of the largest neutrino eigenvalue. For typical values of the sparticle masses ($m_{\tilde b,\tilde\tau} \simeq 1$ TeV, $M_{\tilde W} \simeq 10$ TeV) we obtain $m_{3/2} \lesssim 0.5$ GeV.
We will now study in detail whether, in the AMRB scenario, the phenomenological neutrino mass matrices can be reproduced in the cases where the flavor A is either the electron, the muon or the tau.
A = e: electronic Higgs
In this case we assign A = e, B = µ, C = τ. We perform our numerical scan over the parameters of Table 3, requiring the other variables to reproduce the phenomenological matrices and imposing the constraints of [32]. For simplicity, we have assumed degeneracy between LH and RH sparticles, and full family degeneracy in the slepton sector. Strictly speaking, this simplification implies that, barring accidental cancellations, a natural common slepton mass cannot be too large, since it enters in the determination of the Z mass through the minimization of the scalar potential. However, keeping in mind that only the LH slepton mass matrix affects the Higgs sector, and to get an idea of the general behavior of the model, we allow the common slepton mass to take larger values as well.
The main result of this section is that, while the Normal Hierarchy case can be reproduced only in a very small region of the parameter space (corresponding to $v_e \sim 100$ GeV and rather large Dirac Wino masses, $M_{\tilde W} \gtrsim 5$ TeV), a much larger portion of parameter space is available for Inverted Hierarchy. This can be understood by looking at the phenomenological matrices of eq. (3.3): the $m_{ee}$ entry in the Normal Hierarchy case is about one order of magnitude smaller than in the Inverted Hierarchy case. For this to happen, one needs a large sneutrino vev and a large Wino mass. This can be seen by noting that we can parametrize $m_{ee}$ as a sum of two terms, eq. (3.8), where the first comes from the squark and slepton loops ($\alpha \sim 1/\tilde m^2$) while the second is due to the Wino loop ($\beta \sim 1/M_{\tilde W}^2$). A large vev can suppress the first term, while a large Wino mass can suppress the second one. The available parameter space for Inverted Hierarchy is shown in figure 1, where the allowed region is the colored one. We do not show plots in the sparticle and Wino mass planes since these parameters are practically unconstrained. As can be seen, in the squark sector the diagonal trilinear couplings $\lambda_{333}$, $\lambda_{233}$ are rather small, both at most of order $O(10^{-2})$, while the off-diagonal trilinear couplings $\lambda_{332}$, $\lambda_{232}$ can be large, up to $O(10^{-1})$. In the lepton sector we again have couplings $\lambda_{233}$, $\lambda_{231}$ at most of order $O(10^{-1})$.
Another interesting consequence of our analysis is that we can set a more precise range for the gravitino mass, $1\ {\rm MeV} \lesssim m_{3/2} \lesssim 100\ {\rm MeV}$ (3.9). Furthermore, we also have an indication on the sneutrino vev: we can fit the neutrino mass matrix in our framework only if the sneutrino vev is somewhat large, $v_e \gtrsim 30$ GeV, i.e. $\tan\beta \equiv v_u/v_e \lesssim 6$. Let us also notice that for a larger sneutrino vev, a larger gravitino mass is allowed. This can be understood from eq. (3.8), from which it is clear that for small sneutrino vevs the term in brackets can be large, so that in general a small gravitino mass is needed to suppress this entry. On the contrary, for larger values of the vev the term in brackets is more suppressed, and a larger gravitino mass is allowed.
A comment on the situation for a larger lightest neutrino mass is in order. We have explicitly checked the situation for $m_{\rm lightest} \simeq 0.1$ eV, finding that only in a very small region of parameter space can the phenomenological neutrino mass matrix be reproduced. However, let us stress that in this case approximately the same region of parameter space can reproduce both Hierarchies, since now the typical form of the mass matrix in the two cases is similar (eqs. (3.1), (3.2)).
A = µ, τ : muon and tau Higgs
As pointed out in section 2, we consider the case of an electronic Higgs (A = e) more motivated from the point of view of the generation of the hard Yukawa couplings. However, for completeness we also study the other possibilities. In particular, as we will see, the A = µ case offers an interesting phenomenological situation different from the A = e case.
Let us start with A = µ, B = e and C = τ . In this case eq. (3.8) is valid for m µµ , which is similar for the two hierarchies. Thus, in general we expect that, unlike what happens in the A = e case, both Hierarchies should be reproduced. This is indeed what happens, as confirmed by the scan performed for the parameters of table 3 within the same approximations described in the previous section.
The results are shown in figures 2-3. Also in this case we have checked that increasing the lightest neutrino mass diminishes drastically the available parameter space (although also in this case both Hierarchies can be accommodated).
As can be seen from the plots, the range of parameters is roughly the same as the A = e case, although some details can change. An exception is given by the lepton trilinear coupling λ 133 , which is now allowed to be also of O(1). Regarding the muon-sneutrino vev and the gravitino mass, interestingly the situation does not change much with respect to the A = e case: we conclude that the bounds of eq. (3.9) are rather typical, for small neutrino masses, while they are no longer valid increasing the lightest neutrino mass.
Let us now comment on the Tau-Higgs case, i.e. A = τ. We have performed our analysis both in the approximation of vanishing (i) and non-vanishing (ii) muon mass. In case (i) there is no contribution from loops involving sleptons, so that one can solve for the Dirac Wino mass instead of scanning over it. The results of our scan show that a solution compatible with the phenomenological mass matrices, eq. (3.3), requires either very large trilinear couplings or very large Wino masses (well above 100 TeV). While the first possibility is excluded by the bounds coming from EWPM [32], the second one is in principle viable. However, since, as we already pointed out, we want to stick to a spectrum which is not too unnatural, we consider this possibility at best marginal.
In the case (ii) there is a non vanishing slepton loop contribution, in such a way that the scan on parameter space is quite similar to those of the two previous sections (with the exception that in this case one of the two trilinear coupling constants involved is the muon-Yukawa coupling, so that there is no need to scan over it). Nevertheless, also in this case compatibility with eq. (3.3) requires trilinear couplings incompatible with the bounds of [32].
The situation is summarized in Table 3; the conclusion is that the case A = τ can reproduce neither a Normal nor an Inverted Hierarchy spectrum.
Neutrino physics in PMRB
Let us now turn to the case where gravitational effects also break the U(1)$_R$ symmetry. The main difference with respect to the previous case is that now two non-zero neutrino masses are generated at tree level. To understand this, let us consider the mixing among fermions in the neutralino sector. In the R-symmetric limit, the R = −1 mass eigenstates are well approximated by mixtures of $\nu_A$ with the neutral Higgsinos and adjoint fermions, while $\nu_B$ and $\nu_C$ do not mix, as we have already noticed; the R = +1 states are instead combinations of the remaining neutral fermions. The inclusion of $\not\!R$ effects generates new mixing terms for all neutrinos, which in turn produce a mass term for $\nu_A$ and mixing terms $m_{AB}$, $m_{AC}$. Furthermore, a Majorana mass for the adjoint gauginos is generated, and through it the neutrino $\nu_A$ acquires an additional mass term. This is an example of an inverse seesaw mechanism [41][42][43][44][45][46][47][48], where the role of the right-handed Dirac neutrinos is played by the Dirac gauginos. Therefore, the tree-level mass matrix in the PMRB scenario, eq. (3.15), indeed has just one zero eigenvalue.
Let us first of all discuss the upper bound on the gravitino mass imposed by the condition $m_\nu \lesssim 0.6$ eV. Looking at the non-zero entries of the mass matrix, we see that in general the upper bound depends on the value of $\lambda_{T,S}$. As in the AMRB case, we focus on the $m_{AA}$ entry. When the first term is negligible, the inverse seesaw term gives an upper bound $m_{3/2} \lesssim 1-10$ keV for $M_{\tilde W} \simeq 1$ TeV and $v_A \simeq 100$ GeV. On the other hand, when the first term cannot be neglected, it dominates over the term coming from the inverse seesaw, and the upper bound now reads $m_{3/2} \lesssim 0.1\ {\rm keV}/\lambda_{S,T}$, which can be more stringent than in the previous case (depending on the value of $\lambda_{T,S}$). We conclude that, under these assumptions, in PMRB the upper bound on the gravitino mass can be significantly lower than the one of the AMRB scenario.
Let us now explain why in this case fitting neutrino physics calls for the introduction of a new sector in the model. Inspecting the phenomenological mass matrices of eq. (3.3), we see that both hierarchies require leading-order entries in the µ-τ sector, which cannot be accommodated by the mass matrix (3.15). This is true for any choice of the flavor A. At the same time, we expect loop factors to be much smaller than the tree-level entries, so that the overall picture cannot be modified much. This calls for the introduction of a new sector in the model. We can ask what is the minimal sector able to generate neutrino masses and mixing. First of all, we would like to generate neutrino physics without the need for a new source of R-breaking. This means we should consider a mechanism that generates neutrino masses and mixing when the lepton number is broken at a very low scale (the keV gravitino mass). The minimal possibility we can think of is an inverse seesaw mechanism with additional electroweak singlets (in general, such singlets may already be present in the sector that generates the hard Yukawa couplings [6]). Therefore, we introduce a right-handed Dirac neutrino (two singlets S and $\bar S$ with R = 0 and R = 2, respectively) and the corresponding terms in the superpotential (3.16). Each singlet gets a Majorana mass of the order of the gravitino mass through R-breaking effects, and this generates a Majorana neutrino mass of order $m_\nu \sim \lambda_i^2\, v_u^2\, m_S/M_S^2$, the usual inverse seesaw scaling. An interesting possibility for the Dirac mass $M_S$ is the TeV scale, since this opens up a link between neutrino physics and LHC physics; however, a complete analysis of this situation is beyond the scope of the paper and we defer it to a future work.
Conclusions
With a luminosity of about 5 fb$^{-1}$ already collected by the LHC, and without any hint of a signal so far, the available parameter space of standard supersymmetric models is getting more and more constrained. This motivates the study of a larger portion of the weak-scale supersymmetry landscape. Since neutrino physics can be a natural probe of new physics, it is natural to ask whether or not, given a specific framework, neutrino masses and mixing can be accommodated. In this work we have studied a supersymmetric scenario where a continuous R-symmetry is identified with the total Lepton Number, so that a possible
connection to neutrino physics is immediate. In particular, we have found that neutrino physics is strongly connected with the mechanism of R-symmetry breaking, which in turn is related to supersymmetry breaking.
When R-symmetry breaking effects are communicated to the visible sector solely via Anomaly Mediation, all neutrinos acquire mass at 1-loop level. The hierarchy that can be reproduced depends crucially on the flavor of the sneutrino that gets a vev and plays the role of down type Higgs. For small values of the lightest neutrino mass, and for A = e, the case of Normal Hierarchy is disfavored, since it can be reproduced only in a very limited portion of the parameter space. On the contrary, for A = µ, both hierarchies can be fitted in a consistent portion of parameter space. Finally, for A = τ , we are not able to reproduce neutrino phenomenology solely via loop effects. The situation changes increasing the lightest neutrino mass, since in this case both hierarchies can be accommodated for A = e and A = µ (but not for A = τ ), but only in a limited region of parameter space.
Another possibility is that R-breaking effects are communicated to the visible sector at the Planck scale. In this case two non-vanishing neutrino masses are generated at tree level, but with a pattern that does not allow the phenomenological matrices studied here to be reproduced. Since loop effects give subdominant contributions and cannot change the overall picture, we conclude that a new sector must be added to the theory in order to reproduce neutrino physics. The minimal possibility is to introduce additional singlets (which, however, may already be present in the sector that generates the hard Yukawa couplings) in order to obtain an inverse seesaw mechanism. The study of this possibility is, however, beyond the scope of this paper.
Since neutrino physics selects a particular region of the parameter space of the model, some consequences for Dark Matter and collider physics can be inferred. The cosmological upper bound on the sum of neutrino masses translates into an upper bound on the gravitino mass: m_{3/2} ≲ 0.5 GeV for AMRB (with a more precise range selected by the neutrino mass matrix fit, m_{3/2} ≈ 1 MeV-100 MeV), and m_{3/2} ≲ 10 keV for PMRB. In both scenarios the gravitino lifetime is long enough to evade all experimental bounds, so that it can be a Dark Matter candidate [49].
Furthermore, neutrino physics also selects a preferred order of magnitude for the trilinear couplings, both in the lepton and in the quark sector (with the general indication that the off-diagonal couplings are larger than the diagonal ones). This can have important consequences for LHC physics. Indeed, one can expect generation-changing squark decays (such as b̃_L → ν_B s_R or t̃_L → e⁺_B s_R) to dominate over the corresponding generation-conserving decays (b̃_L → ν_B b_R or t̃_L → e⁺_B b_R). A similar conclusion applies in the slepton sector, with decays like ν̃_B → bs or ẽ_B → st generally dominating over ν̃_B → bb or ẽ_B → bt. We defer to a future work [49] the detailed analysis of possible signals.
| 9,133.6 | 2012-05-01T00:00:00.000 | ["Physics"] |
On longitudinal moving average model for prediction of subpopulation total
In this paper the empirical best linear unbiased predictor of a subpopulation total is proposed under a longitudinal model in which both temporal and spatial moving average models of profile-specific random components are taken into account. Two estimators of the mean squared error of the predictor are proposed as well. The considerations are supported by two Monte Carlo simulation studies and a case study.
Introduction
In survey sampling, estimation or prediction of population characteristics is usually the key issue, but characteristics of subpopulations (domains) are of interest as well. What is more, in many cases we look for possibilities of increasing the accuracy, especially when the sample size in the domain of interest in the period of interest is small. Such domains are called small areas. In the case of longitudinal data we can "borrow strength" from different periods and/or domains and use the information on spatial and temporal correlation. In this paper a unit-level longitudinal model is proposed which is a special case of the Linear Mixed Model (LMM) with two random components which obey the assumptions of a spatial moving average model and a temporal MA(1) model. Verbeke and Molenberghs (2000, p. 24) and Hedeker and Gibbons (2006, p. 115) propose a longitudinal model which is a special case of the Linear Mixed Model with profile-specific random components, where the profile is defined as a vector of random variables for a population element in different periods. Here we define the profile as a vector of random variables for observations of an element in some domain, which allows the possibility of population changes in time to be taken into account. Hence, the profile is not element-specific but element-and-domain-specific. In the books mentioned above the assumptions are made only for the sampled elements, while we make assumptions for all population elements. What is more, those authors assume profiles to be independent, while here they are spatially correlated.
In many papers small area predictors are derived under both area-level and unit-level models where the spatial correlation is taken into account, but under the assumption that all data refer to a single time point (Molina et al. 2009; Petrucci and Salvati 2006; Petrucci et al. 2005; Pratesi and Salvati 2008; Chandra et al. 2007). The models are special cases of the Linear Mixed Model where one of the random components obeys the assumption of a SAR(1) process between subpopulations (which means that the same realization of the random component is assumed for all population elements which belong to the same domain). What is more, Salvati et al. (2009) propose the spatial M-quantile predictor, which turned out to be slightly more accurate than other predictors for contaminated data in their simulation studies.
When longitudinal data are studied, many predictors are considered, especially ones based on area-level models. Rao and You (1994) and Esteban et al. (2012) assume longitudinal area-level models with time effects under the assumption of an AR(1) model and independent area-level effects. In Marhuenda et al. (2013) an area-level model with AR(1) time effects and SAR(1) area effects is proposed. Singh et al. (2005), using the Kalman filtering approach, propose a spatio-temporal model. Ugarte et al. (2009) study semiparametric models combining both non-parametric trends and small area random effects using P-spline regression. Saei and Chambers (2003) propose many small area methods for longitudinal data as part of the EURAREA project. In the sections devoted to both unit-level and area-level models they consider independent area effects together with independent or autocorrelated time effects. Models with time-varying area effects are studied as well.
The unit-level model with spatially correlated area effects is also considered, but only for one period. Molina et al. (2010a), in the European Project SAMPLE, propose inter alia many area-level and unit-level models and predictors. In chapter 7 they study longitudinal area-level models with time-varying area effects, assuming independence of the effects between domains and an AR(1) model across time instants (independence of time-varying area effects is also considered). They also propose partitioned versions of the model, where domains are divided into two groups and the parameters of the distribution of the time-varying area effects differ between these groups. In chapter 8 they consider area-level time-space models which are special cases of the Linear Mixed Model with three random components, including assumptions of AR(1) and SAR(1) processes for the random components. In chapter 9 they consider unit-level models with independent and correlated time effects. In one of the models they assume three random components, including independent area effects and a time-varying area effect which obeys the assumptions of an AR(1) model across time instants and independence across areas.
In this paper we propose a longitudinal model and we derive the empirical best linear unbiased predictor under the model together with its MSE estimators. The main differences between the proposed approach and the proposals presented in other papers are as follows:
- random components in our model are profile-specific, while in other papers area effects, time effects or time-varying area effects are assumed, which means that in our case we do not assume that realizations of random components are the same within domains or within time instants or vary only between domains and time periods,
- in this paper we use the spatial moving average model to describe spatial dependence instead of the first-order spatial autoregressive model SAR(1),
- here we use the first-order temporal moving average model to describe temporal autocorrelation instead of the first-order autoregressive model,
- spatial dependence is assumed at a low aggregation level, between profiles instead of between domains,
- temporal autocorrelation is assumed at a low aggregation level, within profiles instead of within domains,
- in the model, changes of the population and changes of domains' affiliation in time are taken into account.
Basic notations
Longitudinal data for periods t = 1, . . . , M are considered. In the period t the population of size N_t is denoted by Ω_t. The population in the period t is divided into D disjoint subpopulations (domains) Ω_dt of size N_dt, where d = 1, . . . , D. Let the set of population elements for which observations are available in the period t be denoted by s_t and its size by n_t. The set of subpopulation elements for which observations are available in the period t is denoted by s_dt and its size by n_dt. The d*-th domain of interest in the period of interest t* will be denoted by Ω_{d*t*}. The vector Y_id = [Y_idj]_{M_id × 1} will be called the profile, and the vector Y_sid = [Y_idj]_{m_id × 1} will be called the sample profile. Let the vector Y_rid = [Y_idj]_{M_rid × 1} be the profile for non-observed realizations of random variables. The proposed approach may be used to predict the domain total for any (past, current or future) period, but under the assumption that the values of the auxiliary variables and the division of the population into subpopulations in the period of interest are known.
Superpopulation model
Special cases of the general or generalized mixed linear models are widely used in different areas, including for example genetics (e.g. Bernardo 1996), insurance (e.g. Wolny 2009) and statistical image analysis (e.g. Demidenko 2004, chapter 12). We consider superpopulation models used for longitudinal data (compare Verbeke and Molenberghs, 2000; Hedeker and Gibbons, 2006) which are special cases of the LMM. The following model is assumed:

Y_d = X_d β_d + Z_d v_d + e_d, d = 1, 2, . . . , D, (1)

where v_d = col_{1≤i≤N_d}(v_id), v_id is a random component and v_d (d = 1, 2, . . . , D) are assumed to be independent; e_d = col_{1≤i≤N_d}(e_id), where e_id is a random component vector of size M_id × 1 and e_id (i = 1, 2, . . . , N_d; d = 1, 2, . . . , D) are assumed to be independent; v_d and e_d are assumed to be independent.
What is more, the vector of random components v_d obeys the assumptions of the spatial moving average process, i.e.

v_d = (I_{N_d} + λ^(sp) W_d) u_d, where u_d ∼ (0, σ²_u I_{N_d}),

where W_d is the spatial weight matrix for profiles. Moreover, the elements of e_id obey the assumptions of the temporal MA(1) process, i.e.

e_idj = λ^(t) ε_id(j-1) + ε_idj, where ε_idj ∼ (0, σ²_ε).
The variance-covariance matrices of Y_d (where d = 1, 2, . . . , D) are functions of the unknown parameters δ = [σ²_ε, σ²_u, λ^(t), λ^(sp)]. If the population changes in time, new elements of the population, or observations of a population element after the change of its domain affiliation, form a new profile Y_id. It means that observations of the new population element will be temporally correlated within the profile and spatially correlated with other population elements within the subpopulation. If a population element changes its domain affiliation, its new observations will be temporally correlated (but temporally uncorrelated with the old observations) and spatially correlated with other population elements within the new subpopulation (but spatially uncorrelated with elements of the previous subpopulation).
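To make the assumed covariance structure concrete, a minimal R sketch is given below. It builds the variance-covariance matrix of Y_d for a single domain under the spatial moving average and temporal MA(1) assumptions stated above; the function names, the use of the Matrix package and the example values are illustrative choices, not part of the original computations.

```r
library(Matrix)  # for bdiag()

ma1_cov <- function(M, sigma2_e, lambda_t) {
  # MA(1) within a profile: e_idj = lambda_t * eps_id(j-1) + eps_idj, eps ~ (0, sigma2_e)
  V <- diag(sigma2_e * (1 + lambda_t^2), M)
  if (M > 1) for (j in 1:(M - 1)) V[j, j + 1] <- V[j + 1, j] <- sigma2_e * lambda_t
  V
}

sma_cov <- function(W_d, sigma2_u, lambda_sp) {
  # spatial moving average for profile-specific effects: v_d = (I + lambda_sp * W_d) u_d
  A <- diag(nrow(W_d)) + lambda_sp * W_d
  sigma2_u * A %*% t(A)
}

domain_cov <- function(W_d, M_id, sigma2_e, sigma2_u, lambda_t, lambda_sp) {
  # Z_d maps the scalar profile-specific effects to observations (vectors of ones per profile)
  Z_d <- as.matrix(bdiag(lapply(M_id, function(m) matrix(1, m, 1))))
  R_d <- as.matrix(bdiag(lapply(M_id, ma1_cov, sigma2_e = sigma2_e, lambda_t = lambda_t)))
  Z_d %*% sma_cov(W_d, sigma2_u, lambda_sp) %*% t(Z_d) + R_d
}

# Example: a domain with 3 profiles observed in 3, 3 and 2 periods
W_d <- matrix(c(0, .5, .5, .5, 0, .5, .5, .5, 0), 3, 3)
V_d <- domain_cov(W_d, M_id = c(3, 3, 2), sigma2_e = 1, sigma2_u = 2,
                  lambda_t = 0.35, lambda_sp = -0.4)
```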
To explain the idea of the model, let us suppose that we study a population of households divided into domains according to the type of the household (which includes the criterion of the number of persons who belong to the household). Let the variable of interest be expenditures on some goods and let us consider the problem of prediction of the expenditures for the domains. Based on the model, we assume that expenditures of two households of the same type (i.e. which belong to the same domain) are spatially correlated (where the distance may be measured in a geographical or economic sense). Moreover, we assume that expenditures of each household are temporally autocorrelated, assuming the MA(1) model. The assumption of the MA(1) model (which belongs to the class of short-memory time series models) implies that non-zero covariances are assumed only for lags equal to 1 (for periods t and t - 1). The assumption is more realistic than the assumption of temporal independence, and in the case of fast changes in the economy and in the economic situation of households it does not have to be treated as strong. Let us consider a situation when the type of a household changes, e.g. from a household which consists of two persons (a couple) into a household which consists of three persons (a couple and a child). Hence, we assume that the temporal correlation is broken. Moreover, the household is no longer spatially correlated with households of the previous type, but it becomes spatially correlated with households of the new type.
Best linear unbiased predictor
Let Z_sid be a known vector of size m_id × 1 (e.g. the vector of ones) and let Σ_ssid denote the submatrix obtained from Σ_id by deleting rows and columns for unsampled observations. Based on the Royall (1976) theorem it is possible to derive the formula of the best linear unbiased predictor (BLUP) of the subpopulation total, given by (5), where x_rd*t* is a 1 × p vector of totals of auxiliary variables in Ω_rd*t*, and γ_rd* is a (∑_{i=1}^{n_d*} M_rid*) × 1 vector of ones for observations in Ω_rd*t* and zeros otherwise. The predictor (5) is the sum of three elements. If t* is a future period, then s_d*t* = ∅, Ω_rd*t* = Ω_d*t*, and the first element of (5) (given by ∑_{i∈s_d*t*} Y_id*t*) equals zero. Hence, if the domain total of the auxiliary variable in the future period is known, and the division of the population into subpopulations in the future period is also known, then it is possible to use (5) to predict the future domain total of the variable of interest.
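Although the closed form of (5) is not reproduced here, its general Royall-type structure (a GLS estimate of β plus a covariance-based correction for the unobserved part of the domain) can be sketched in R as follows. All argument names are illustrative and the sketch assumes known variance parameters, i.e. known V_ss and V_rs.

```r
# Sketch of a Royall-type BLUP of a domain total theta = gamma_s' Y_s + gamma_r' Y_r.
blup_total <- function(Y_s, X_s, X_r, V_ss, V_rs, gamma_s, gamma_r) {
  V_inv    <- solve(V_ss)
  beta_hat <- solve(t(X_s) %*% V_inv %*% X_s, t(X_s) %*% V_inv %*% Y_s)  # GLS estimate of beta
  Y_r_blup <- X_r %*% beta_hat + V_rs %*% V_inv %*% (Y_s - X_s %*% beta_hat)
  as.numeric(crossprod(gamma_s, Y_s) + crossprod(gamma_r, Y_r_blup))
}
```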
The MSE of the BLUP given by (5) is the sum of two components, g_1(δ) and g_2(δ), given by (7) and (8), where Z_rd = diag_{1≤i≤N_rd}(Z_rid), Z_rid is a known vector of size M_rid × 1 (e.g. the vector of ones), Σ_rrid is the submatrix obtained from Σ_id by deleting rows and columns for sampled observations, and Σ_rsid is the submatrix obtained from Σ_id by deleting rows for sampled observations and columns for unsampled observations.
Empirical best linear unbiased predictor
Let the unknown variance parameters in (5) be replaced by their maximum likelihood (ML) or restricted maximum likelihood (REML) estimates under normality. We then obtain the two-stage predictor called the EBLUP. It remains unbiased under some weak assumptions (inter alia a symmetric, but not necessarily normal, distribution of random components for the model assumed for the whole population). The proof is presented by Żadło (2004) for the empirical version of the Royall (1976) BLUP and it is based on the results presented by Kackar and Harville (1981) for the empirical version of the BLUP proposed by Henderson (1950). The problem of MSE estimation based on the Taylor expansion is considered in many papers on small area estimation, but for the empirical version of the BLUP proposed by Henderson (1950). The first proposal of an MSE estimator of the empirical version of the BLUP proposed by Henderson (1950) was presented by Kackar and Harville (1984), but they did not prove the asymptotic unbiasedness of their MSE estimator. The landmark paper on the topic is the one written by Prasad and Rao (1990). They assume inter alia (as in this paper) independence of random variables for elements of the population from different domains and that the estimators of variance components are unbiased (which is not true for ML and REML estimators). They consider three special cases of the linear mixed model: the Fay and Herriot (1979) model, the nested error regression model and the random regression coefficient model. To derive the MSE estimator they use three approximations. They prove that two of them are of order o(D⁻¹) for all three considered models. They also prove that the third approximation is of order o(D⁻¹), but only for the Fay and Herriot (1979) model. Unbiasedness of estimators of variance components is not assumed by Datta and Lahiri (2000). They assume a linear mixed model with block-diagonal variance-covariance matrix (as in this paper) and they prove that the bias of their MSE estimator for ML and REML estimators of variance components is of order o(D⁻¹). But the proof is valid only if the variance-covariance matrix is a linear combination of variance components. Das et al. (2004) consider a different asymptotic set-up and derive the order of the bias of their MSE estimator. So far the problem of MSE estimation has been considered for the empirical version of the Henderson (1950) BLUP, while in this paper the empirical version of the BLUP proposed by Royall (1976) is studied. Using our notation, Royall (1976) derived the BLUP of a domain characteristic defined as a linear combination γ'Y, where γ is a known vector. Hence, the problem studied by Henderson (1950) may be treated as a special case of the problem considered by Royall (1976). The MSE estimator of the empirical version of the Royall (1976) BLUP is proposed by Żadło (2009). He presented a proof (under some regularity conditions) that the bias of the derived MSE estimator is of order o(D⁻¹). The proof is a direct generalization of the results presented by Datta and Lahiri (2000) for the empirical version of the Henderson (1950) BLUP. The MSE estimators presented below are special cases of the estimators derived by Żadło (2009), where it is assumed that the variance-covariance matrix is a linear combination of unknown variance parameters. For the proposed model (1) this assumption is not met, which means that the order of approximation of the MSE given by equation (9) and the order of the bias of the MSE estimators presented below [see (10) and (11)] are not proven to be o(D⁻¹).
Applying the results presented by Żadło (2009) under the model (1), for REML estimators of δ we obtain the MSE estimator given by (10), and for ML estimators of δ the MSE estimator given by (11), where for the proposed model (1) g_1(δ), g_2(δ) and g*_3(δ) are given by (7), (8), (14) and (15), respectively. The elements of ∂g_1(δ)/∂δ are given in the "Appendix" by (16)-(19). In the simulation study the proposed MSE estimator will be compared with the delete-one-domain jackknife MSE estimator (12) proposed by Chen and Lahiri (2002). For the proposed model (1) it uses δ̂_{-d}, an estimator given by the same formula as δ̂ but based on the data without the d-th domain, b_d*t*(δ) = g_1(δ) + g_2(δ), where g_1(δ) and g_2(δ) are given by (7) and (8), and the EBLUP given by (5) with δ replaced by δ̂_{-d}. It is known that the parametric bootstrap distribution approximates the true distribution of the EBLUP very well (see the proof presented by Chatterjee et al. 2008). Hence, it is also possible to use the parametric bootstrap method to estimate the MSE of the EBLUP. The problem for unit-level models in small area estimation is considered inter alia by González et al. (2007) and González et al. (2008). In each iteration of both the jackknife and the bootstrap method we need to estimate the parameters of the model (which is time-consuming). Because the number of iterations in the delete-one-domain jackknife procedure for the data considered in Sects. 6 and 7 is several times smaller than in the bootstrap method, we will use the jackknife method to estimate the MSE in the Monte Carlo simulation studies.
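A minimal R sketch of the delete-one-domain jackknife idea is given below. It assumes user-supplied functions est_delta() (parameter estimation), eblup_fun() (the EBLUP (5)) and b_fun() (the naive MSE part g_1 + g_2), so the names and interfaces are illustrative only.

```r
# Sketch of a delete-one-domain jackknife MSE estimator (in the spirit of Chen and Lahiri 2002).
jackknife_mse <- function(data, domain, est_delta, eblup_fun, b_fun) {
  doms      <- unique(domain)
  D         <- length(doms)
  delta_hat <- est_delta(data)
  theta_hat <- eblup_fun(data, delta_hat)
  b_hat     <- b_fun(delta_hat)
  bias_part <- 0
  var_part  <- 0
  for (d in doms) {
    delta_d   <- est_delta(data[domain != d, , drop = FALSE])  # re-estimate without domain d
    bias_part <- bias_part + (b_fun(delta_d) - b_hat)
    var_part  <- var_part + (eblup_fun(data, delta_d) - theta_hat)^2
  }
  b_hat - (D - 1) / D * bias_part + (D - 1) / D * var_part
}
```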
Monte Carlo simulation study: artificial data
The simulation study was conducted using the R package (R Development Core Team 2013). It is based on artificial longitudinal data from M = 3 periods. The population size in each period equals N = 400 elements. The population consists of D = 20 domains (subpopulations), each of size 10 elements. A balanced panel sample is considered: in each period the same 40 elements are observed. The sample sizes in the D = 20 domains are: 1 for seven domains, 2 for six domains and 3 for seven domains. Model parameters are estimated using the restricted maximum likelihood method: we wrote the restricted likelihood function for the model in the R language and then used the constrOptim function available in the stats R package to find the maximum. The number of iterations in the Monte Carlo simulation study is L = 2000. In the simulation study the simulation MSE of the EBLUP is computed as the average over the L iterations of the squared differences between the EBLUP and the domain total, and the simulation bias of the MSE estimator as the difference between the average of the MSE estimates over the L iterations and the simulation MSE, where the values of the EBLUP, the domain total and the MSE estimator are computed in each (l-th) iteration of the simulation study.
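The REML step mentioned above can be sketched as follows; restricted_loglik() stands for the user-written restricted log-likelihood of the model (it is not part of base R), and the box constraints on the parameters are an assumption.

```r
# Sketch: REML estimation of delta = (sigma2_e, sigma2_u, lambda_t, lambda_sp) with
# stats::constrOptim, minimizing the negative restricted log-likelihood.
ui <- rbind(c( 1, 0,  0,  0),   # sigma2_e  >  0
            c( 0, 1,  0,  0),   # sigma2_u  >  0
            c( 0, 0,  1,  0),   # lambda_t  > -1
            c( 0, 0, -1,  0),   # lambda_t  <  1
            c( 0, 0,  0,  1),   # lambda_sp > -1
            c( 0, 0,  0, -1))   # lambda_sp <  1
ci <- c(0, 0, -1, -1, -1, -1)
fit <- constrOptim(theta = c(1, 1, 0, 0),
                   f     = function(d) -restricted_loglik(d, data),
                   grad  = NULL, ui = ui, ci = ci)
delta_hat <- fit$par
```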
In the simulation, data are generated based on the model (1) assuming arbitrarily chosen parameters and different values of λ^(sp) and λ^(t). The spatial weight matrix (denoted by W_d) is a row-standardized neighborhood matrix (each population element has two neighbors). In the simulation study three predictors are considered:
- the spatial BLUP (SBLUP), given by (5), where the variance parameters are assumed to be known,
- the spatial EBLUP (SEBLUP), given by (5), where the variance parameters are replaced by their REML estimates,
- the BLUP derived under the assumption of no spatio-temporal correlation (BLUPind).
Because we are mainly interested in the spatial effect, in the simulation we assumed λ^(t) = {-0.5, 0.5} and λ^(sp) = {-0.9, -0.6, 0.6, 0.9}. In our opinion the comparison of the accuracies of the SEBLUP and its simplified version (under the assumption of the lack of spatio-temporal correlation of random effects and components) is crucial, because this predictor is the natural alternative to the SEBLUP. What is important, the comparison measures the effect of including the spatio-temporal correlation. An additional comparison between the mean squared errors of the SEBLUP and the SBLUP is also important, because it allows the loss of accuracy due to the estimation of model parameters to be measured. A sketch of the weight matrix construction is given below.
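The following minimal sketch builds such a row-standardized weight matrix; treating the elements of a domain as neighbours of the preceding and following element (with wrap-around) is an assumption made only for illustration.

```r
# Sketch: row-standardized neighbourhood matrix in which every element has two neighbours.
neighbour_W <- function(n) {
  W <- matrix(0, n, n)
  for (i in 1:n) {
    left  <- if (i == 1) n else i - 1   # wrap around so that each element has
    right <- if (i == n) 1 else i + 1   # exactly two neighbours
    W[i, c(left, right)] <- 1
  }
  W / rowSums(W)                        # row standardization
}
W_d <- neighbour_W(10)                  # a domain of 10 elements, as in the artificial data
```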
In each figure squares denote values of a given statistic for one of the D = 20 domains, and the black squares denote the mean values of the statistic over the D = 20 domains. Hence, we present not only the mean values of the considered statistics but their whole distribution [as in e.g. the simulation results presented by Białek (2014)]. Figure 1 shows that the ratios of the mean squared errors of BLUPind and SEBLUP for all domains and different values of λ^(t) and λ^(sp) are from 1.004 to 1.131. It means that the maximum gain in accuracy due to the inclusion of spatio-temporal correlation is 13.1%. Because we compare the MSE of BLUPind with the MSE of SEBLUP (not SBLUP), the decrease of accuracy due to the estimation of model parameters is taken into account.
What is important, the decrease of accuracy due to the estimation of model parameters presented in Fig. 2 is very small: from 0.1 to 1.7%. It means that its influence on the results presented in Fig. 1 is not large.
Fig. 2 Effect of estimation of model parameters for different values of λ^(t) and λ^(sp)
Approximate unbiasedness of the MSE estimator (10) is not proven, but the biases presented in Fig. 3 are not high: for D = 20 domains and for different values of λ^(sp) and λ^(t) they range from ca. -8.8% to ca. 16.8% (with a mean of ca. 1.9%). In Fig. 4 the biases of the two MSE estimators (10) and (12) are compared for λ^(t) = -0.5 and λ^(sp) = -0.9, where (see Fig. 3) the highest bias of the proposed MSE estimator based on the Taylor expansion was observed. Fig. 4 shows that the jackknife estimator may give significantly better results, although this is not the rule (compare with Fig. 7 for the real data).
Monte Carlo simulation study: real data
The second simulation study was also conducted using the R package (R Development Core Team 2013), and model parameters are estimated in R as described in the previous section. The number of iterations in the Monte Carlo simulation study is L = 2000. We consider real data on investments of Polish companies (in million PLN) in N = 378 regions called poviats (NUTS 4) in M = 3 years, 2009-2011. We consider a balanced panel sample: in the first period a sample of size n = 38 is selected using the (arbitrarily chosen) Midzuno (1952) sampling scheme, and the same elements are in the sample in all M = 3 periods. The population is divided into D = 28 domains according to larger regions called voivodships (NUTS 3) and types of poviats (city poviats and land poviats) within voivodships. In 7 out of D = 28 domains the sample size equals 0. The spatial weight matrix is the row-standardized neighborhood matrix. The neighborhood matrix is constructed based on the 2-nearest-neighbors rule using an auxiliary variable, the number of new companies registered in the poviat (a sketch of this construction is given after the list below). Data are generated based on the model (1), where the values of all of the model parameters are obtained based on the whole population data using REML and assuming that ∀d β_d = β, because for the considered case we have no observations from some of the domains in all of the periods (which implies that it is not possible to estimate some of the β_d's). What is important, the spatial and temporal correlations for the real data are weak: λ^(t) = 0.352 and λ^(sp) = -0.396. In the model-based simulation study we compare the accuracies of the following predictors and estimators of the domain total in the last period:
- the spatial BLUP (SBLUP), given by (5), where the variance parameters are assumed to be known,
- the spatial EBLUP (SEBLUP), given by (5), where the variance parameters are replaced by REML estimates,
- the BLUP under the assumption that λ^(sp) = 0 and λ^(t) = 0 (BLUPind), which under the model and for the balanced panel sample does not depend on unknown model parameters,
- the Count Synthetic Estimator (C-SYN), see Rao (2003, p. 46),
- the Ratio Synthetic Estimator (R-SYN), see Rao (2003, p. 47), where the auxiliary variable is the number of new companies registered in the poviat in 2011,
- the Generalized Regression Estimator (GREG), see Rao (2003, p. 17), and its longitudinal version (GREG-L),
- a predictor (SP) derived under the model Y_idj = x_idj β + u_1,d + u_2,dj + e_idj, where e_idj ∼ (0, σ²_0), the domain-specific u_1,d are independent with u_1,d ∼ (0, σ²_1), and the time-varying area effects u_2,dj for d = 1, 2, . . . , D are independent, but inside domains, for j = 1, 2, . . . , M, they follow an AR(1) process with parameters denoted by σ²_2 and ρ^(t). This predictor does not take the spatial correlation into account. The temporal autocorrelation is included, but at a higher aggregation level: within domains instead of within profiles as in (1). To compute the values of the predictor, the function in the R language presented in Molina et al. (2010b, pp. 123-126) is used.
SEBLUP, SBLUP, BLUPind and SP use information on the variable of interest from all periods, while C-SYN, R-SYN, GREG and GREG-L use information on the variable of interest only from the last period. GREG and GREG-L are direct estimators, which means that it is possible to compute their values only for domains with sample sizes greater than zero in the period of interest (in 21 out of D = 28 domains in the simulation study).
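The construction of the neighbourhood matrix from the 2-nearest-neighbours rule on an auxiliary variable can be sketched as follows; the variable name and the use of absolute differences as the distance are illustrative assumptions.

```r
# Sketch: row-standardized weight matrix from the 2-nearest-neighbours rule, with the
# "distance" between poviats measured on a numeric auxiliary variable x
# (e.g. the number of newly registered companies).
knn_W <- function(x, k = 2) {
  n <- length(x)
  W <- matrix(0, n, n)
  for (i in 1:n) {
    d    <- abs(x - x[i])
    d[i] <- Inf                       # an element is not its own neighbour
    W[i, order(d)[1:k]] <- 1          # mark the k nearest neighbours
  }
  W / rowSums(W)                      # row standardization
}
```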
In Fig. 5 the accuracy of the SEBLUP is compared with that of the other predictors and estimators. The estimators and predictors R-SYN, C-SYN, GREG, GREG-L and SP are several times less accurate than the SEBLUP. What is interesting, in 22 out of D = 28 domains the SEBLUP is less accurate than BLUPind. The situation is explained in Fig. 6 (the results for the same domains are matched by lines). The reason is that the gain in accuracy due to including the spatio-temporal correlation (assuming that model parameters are known), measured by the ratios MSE(BLUPind)/MSE(SBLUP), is in 22 domains smaller than the increase of the MSE due to the estimation of model parameters, measured by the ratios MSE(SEBLUP)/MSE(SBLUP). It supports the suggestion presented in the previous section that the comparison of the SEBLUP and its simplified version (assuming the lack of spatio-temporal correlation) is very important or even crucial.
Fig. 6 Effect of including spatio-temporal correlation (assuming that model parameters are known) and effect of estimation of model parameters
In Fig. 7 the biases of the two MSE estimators (10) and (12) are compared. For the studied case the means of the absolute biases are similar (see the right part of Fig. 7): for the jackknife MSE estimator it equals 5.1%, while for the MSE estimator based on the Taylor expansion it equals 4.8%.
Case study: real data
In the previous section we studied the problem of prediction of the total value of investments of Polish companies (in million PLN) in D = 28 regions in 2011 in a simulation study. Because we were interested in the gain in accuracy which resulted only from incorporating the spatio-temporal correlation, we did not use auxiliary information. In this section we will use the same data to show how to choose an appropriate model based on the real data. We will use data on investments of Polish companies in 2009-2011 (the same as in the previous section) and additionally two auxiliary variables for 2008-2010. The same sample as in the previous section is studied. Firstly, we would like to find an appropriate model for the real data. It is possible to use the likelihood ratio test to compare two models, but only if the models are nested (see e.g. Pinheiro and Bates 2000, pp. 83-84). Hence, at the significance level 0.05, we compare our model with two auxiliary variables against its special cases (also with two auxiliary variables) under simplified assumptions on the spatio-temporal correlation, obtaining the following p values:
- assuming the independence of random effects and the independence of random components (p value of the likelihood ratio test: 1.1 × 10⁻⁸),
- assuming the independence of random effects and MA(1) random components (p value of the likelihood ratio test: 2.8 × 10⁻⁹),
- assuming the spatial moving average model for random effects and the independence of random components (p value of the likelihood ratio test: 0.0306).
Hence, our model should be preferred over its special cases. Pinheiro and Bates (2000), in chapter 5, suggest using e.g. the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC) if we would like to compare non-nested models. Moreover, the authors present different models available in R, which will be compared in this section with the proposed model (1). It is possible to include other models as well, but in this case the computations must be conducted using original functions (as in the case of the proposed model). Pinheiro and Bates (2000), in chapter 5, present special cases of linear mixed models where different assumptions on the correlation structure of random components can be made, but assuming the independence of random components within groups defined by the grouping variable used for the random effects. Hence, if we assume profile-specific random effects we can define different temporal models for random components within profiles, and if we define time-specific random effects we can define different spatial models for random components within domains. Below we use different correlation structures described by Pinheiro and Bates (2000) in chapter 5, including different spatial correlation structures defined in Pinheiro and Bates (2000, p. 232).
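As an illustration only (the data set, variable and object names below are placeholders, not those used in the computations for the paper), two of the competing specifications could be fitted and compared in R with the nlme package roughly as follows:

```r
# Sketch: fitting two competing correlation structures with nlme and comparing AIC/BIC.
library(nlme)

# profile-specific random effects with AR(1) random components within profiles
m_i_AR1 <- lme(invest ~ sold + assets, random = ~ 1 | profile,
               correlation = corAR1(form = ~ year | profile),
               data = sample_data, method = "REML")

# time-specific random effects with exponential spatial correlation within periods
m_t_exp <- lme(invest ~ sold + assets, random = ~ 1 | year,
               correlation = corExp(form = ~ lon + lat | year),
               data = sample_data, method = "REML")

AIC(m_i_AR1, m_t_exp)
BIC(m_i_AR1, m_t_exp)
```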
In Table 1 we present the values of the AIC and BIC criteria of the proposed model and other non-nested models:
- with independent profile-specific random effects and MA(2) random components (model_i_MA2),
- with independent profile-specific random effects and AR(1) random components (model_i_AR1),
- with independent profile-specific random effects and AR(2) random components (model_i_AR2),
- with independent profile-specific random effects and ARMA(1,1) random components (model_i_ARMA),
- with independent profile-specific random effects and compound symmetry temporal correlation of random components (model_i_compound_symmetry),
- with independent time-specific random effects and independent random components (model_t),
- with independent time-specific random effects and exponential spatial correlation of random components (model_t_exponential),
- with independent time-specific random effects and Gaussian spatial correlation of random components (model_t_gaussian),
- with independent time-specific random effects and linear spatial correlation of random components (model_t_linear),
- with independent time-specific random effects and rational quadratic spatial correlation of random components (model_t_rational_quadratic),
- with independent time-specific random effects and spherical spatial correlation of random components (model_t_spherical),
- with independent time-specific random effects and compound symmetry spatial correlation of random components (model_t_compound_symmetry),
- with independent domain-specific random effects and independent random components (model_d).
The proposed model has the smallest values of the AIC and BIC criteria compared with the other analyzed models. It is worth noting that the values of the criteria for some models are the same, which is not unusual; see e.g. Pinheiro and Bates (2000, p. 249), where 4 out of 5 models with different spatial correlation structures have the same values of the AIC and BIC criteria.
We have also compared our model with models with the same variance-covariance matrices as the models presented in Table 1 but using only one out of the two auxiliary variables. These models also have higher values of the AIC and BIC criteria than the proposed model. Although the assumed model with only one out of the two auxiliary variables has higher values of the AIC and BIC criteria, a formal test of the significance of the fixed effects will be conducted as well. In this section we use permutation tests of fixed effects. The algorithm for testing the j-th fixed effect is as follows (Pesarin and Salmaso 2010, p. 45):
1. Based on the original data a test statistic, denoted by T_0 = T(X), is computed; e.g. the test statistic can be defined as the log-likelihood (as in this paper).
2. We take a random permutation of the j-th column of the matrix X and we obtain a new matrix of auxiliary variables denoted by X*.
3. The value of the test statistic T* = T(X*) is computed.
4. Steps 2 and 3 are repeated B times and B values of T*_b = T(X*_b) are computed, where b = 1, 2, . . . , B.
5. We estimate the p value as B⁻¹ ∑_{1≤b≤B} I(T*_b ≥ T_0), i.e. the fraction of the permutation values not smaller than the value of the test statistic computed based on the real data.
If it is not possible to make computations for all possible permutations, the estimated p value still converges strongly to its respective true value as the number of permutations grows (Pesarin and Salmaso 2010, p. 45). In the case study the number of all possible permutations is (n × M)! = (38 × 3)! ≈ 2.5 × 10¹⁸⁶. Hence, p values will be computed based on B = 1000 independent permutations. Let us consider the tests of fixed effects for the two auxiliary variables (production sold and fixed assets). In both cases the p values of the permutation test equal 0, which means that the variables have a significant influence on the variable of interest.
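A minimal R sketch of this permutation test is given below; fit_model() stands for whatever routine fits the assumed model and returns an object with a log-likelihood, so its name and interface are assumptions.

```r
# Sketch of the permutation test of the j-th fixed effect with the log-likelihood
# as the test statistic.
perm_test_fixed <- function(X, y, j, fit_model, B = 1000) {
  T0 <- as.numeric(logLik(fit_model(X, y)))      # statistic on the original data
  Tb <- replicate(B, {
    X_star      <- X
    X_star[, j] <- sample(X_star[, j])           # permute only the j-th column
    as.numeric(logLik(fit_model(X_star, y)))
  })
  mean(Tb >= T0)                                 # estimated p value
}
```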
Finally, in Fig. 8 we present the real values of the domain totals of investments and the predicted values, i.e. the values of the empirical version of the proposed predictor given by (5), based on the sample data considered in this section. It should be noted that the sample sizes in the domains equal:
- zero for 7 out of D = 28 domains,
- one for 11 out of D = 28 domains,
- two for 5 out of D = 28 domains,
- three for 3 out of D = 28 domains,
- four for 2 out of D = 28 domains.
Conclusions
In this paper a special case of the LMM for longitudinal data is proposed. The BLUP of the subpopulation total under the model is derived and MSE estimators of its empirical version are proposed. The accuracy of the proposed predictor and the biases of the proposed MSE estimators are analyzed in two Monte Carlo simulation studies based on artificial and real data. In the first simulation study, based on the artificial data, the accuracy of the empirical version of the proposed predictor was better for all domains compared with the predictor derived under the assumption of a lack of spatio-temporal correlation. In the second simulation study, based on the real data, the empirical version of the proposed predictor was even several times more accurate than other predictors and estimators, but it was better than the predictor derived under the assumption of a lack of spatio-temporal correlation only in 6 out of 28 domains. This resulted from the decrease of accuracy due to the estimation of model parameters.
In both simulation studies the biases of the proposed MSE estimator were small. The considerations are also supported by the case study.
| 8,190 | 2015-08-01T00:00:00.000 | ["Mathematics"] |
An Analysis of Usage of a Multi-Criteria Approach in an Athlete Evaluation: An Evidence of NHL Attackers
The presented research focuses on the commonly used Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), which is applied to an evaluation of a basic set of 581 National Hockey League (NHL) players in the 2018/2019 season. This is used in combination with a number of objective methods for weighting indicators for identifying differences in their usage. A total of 11 indicators with their own testimonial values, including points, hits, blocked shots and more, are selected for this purpose. The selection of a method for weighting indicators has a major influence on the results obtained and the differences between them, and maintains the internal links within the ranked set of players. Of the evaluated methods, we prefer the Mean Weight method, and we recommend that the input indicators be considered equivalent when evaluating athletes.
Introduction
Sports are among the largest sources of entertainment and, therefore, revenue in America. The top five most popular team sports are American football (the National Football League-NFL), baseball (Major League Baseball-MLB), basketball (the National Basketball Association-NBA), ice hockey (the National Hockey League-NHL) and football (Major League Soccer-MLS). They have been researched and examined by a range of authors focusing on marketing [1], television ratings [2], estimates of spending by persons attending sporting events [3], referees [4], marginal revenue product [5], the "superstar" effect [6], the effects of weather on attendance [7] and many other factors [7].
This paper covers the NHL, the premier hockey league in the world, and specifically the 2018/2019 season, which featured 31 teams: 24 from the United States and 7 from Canada [8]. Each of the teams plays 82 regular-season games, with the top 16 teams then advancing to the playoffs. Thanks to this number of games and the number of teams involved, there is an ample dataset available with detailed information about every team and the individual players. These data include commonly referenced statistics such as goals and assists, shots, games played, plus-minus, game-winning goals, hits, blocked shots, power play time on ice and many others that quantify the skills of individuals in minute detail and that are freely accessible (nhl.com, accessed on 5 December 2020; tsn.ca).
While such a large volume of data is aggregated for individual players and teams, the most popular statistics in the media remain goals and assists and, in the case of goalkeepers, save percentage [9,10]. This fact is confirmed by the NHL's awards, given to the best of the best. A total of 20 individual awards are handed out annually to players, coaches and general managers [11]. The most important, which are typically awarded to offensive players, include awards for the most outstanding player as judged by members of the Professional Hockey Writers' Association (Hart Trophy) and the most outstanding player in the regular season as judged by the members of the NHL Players' Association (Ted Lindsay Award), along with the Art Ross Trophy for the league scoring champion (goals and assists combined). The Maurice Richard Trophy is awarded to the regular-season goal-scoring champion, the Calder Memorial Trophy to the best rookie player under 26 and the Conn Smythe Trophy to the most valuable player during the playoffs [11]. These awards are typically given on the basis of total points and, therefore, do not, in our opinion, fully capture the overall complexity of the players themselves, which is often very important. This myopic focus on the most visible indicators may be to the detriment of a large group of players. These are largely referred to as team players, who are willing to do the less visible work to help their more productive teammates succeed and excel.
From our point of view, it is not sufficient to choose the best players based on the total number of points achieved. It is necessary to consider other important factors/criteria (like plus-minus, hits, blocked shots and others). Therefore, the purpose of this paper was to introduce Multi-Criteria Decision-Making (MCDM) methods and their possible usage into the area of sports (as a possible advantage in managerial decision-making). This new perspective on sports can be widely applied in the selection of players (drafts, trades) or player ratings (contracts). The objective of this research was to use the selected MCDM method, TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution), to comprehensively evaluate the performance of NHL offensive players and map their performance using multiple attributes. This research is divided into multiple sections for this purpose. The NHL, which is a highly contemporaneous topic that represents a large source of information for various types of research, is the subject of the second section. The third section presents TOPSIS as the primary tool for comprehensive assessment of a selected group of NHL players. Attention is focused in this section on five selected methods for weighting input parameters, whose selection and influence on the overall results are quantified in the fourth section. This section is preceded by the research methodology, which describes in detail the procedure for selecting the parameters, the research sample and the apparatus of mathematical-statistical methods used in this process. The fifth section presents the results of the completed analysis devoted to the application of the individual methods for weighting the monitored attributes combined with the TOPSIS technique. The final sections, the discussion and conclusion, summarize the results within the context of the restrictions of the completed research and potential opportunities for its continuation.
National Hockey League from Different Points of View
Currently, a large group of authors are devoted to studying hockey and the NHL from different points of view. Booth et al. [12], Farah et al. [13] and Madsen et al. [14] explored the application of a mathematical programming approach to the expansion of NHL draft optimization and to the factors contributing to elite hockey players' decisions, exploring variations in the production of NHL draftees. Nandakumar and Jensen [15] analyzed the unique challenges of quantitatively summarizing the game of hockey and highlighted how deficiencies in existing methods of evaluation shaped major avenues of research and the creation of new metrics. Chiarlitti et al. [16] evaluated draft-eligible players based on body composition, speed, power and strength. Farah et al. [13] explored whether population density and proximity to Canadian Hockey League teams were associated with the number of draftees produced. Depken et al. [17] analyzed the determinants of career length in the league.
Much attention is also paid to the field of medicine (e.g., incidence of traumatic brain injuries during contact sports, including in the NHL; see [18]). Navarro et al. [19] examined the effects of concussions on individual players in the National Hockey League (NHL) by assessing career length, performance and salary. Other authors used positional comparisons to assess the impact of fatigue on movement patterns in hockey [20], the utility of using visible signs (VS) of concussion in predicting a subsequent diagnosis of concussion in players [21] and other aspects of the NHL, especially in terms of health impacts on players [22,23].
From our point of view, there is another interesting group of authors who are looking at the NHL from an economic perspective and analyzing its microeconomic and macroeconomic impacts. An example would be the study of Treber et al. [24], which considered that labor-related work stoppages in professional sports could have the potential to alienate fans; however, whether they generate sustained reductions in demand remains an open question (an evaluation of lockouts that took place during 1994-1995, 2004-2005 and 2012-2013). Ge and Lopez [25] found limited evidence of enhanced productivity among European players and no evidence of a benefit or drawback for North American players. Using the impact of an NHL lockout on a county with an NHL team relative to trends in the surrounding counties, Jasina and Rotthoff [26] found no general impact on employment; however, they did find a decrease in payroll in some sectors. Marketing aspects of sponsorship were evaluated by O'Reilly et al. [27] and Bragg et al. [28]. The research of O'Reilly et al. [29] explored the drivers of merchandise sales in professional sports and provided direction on key antecedents. Brander and Egan [30] showed that NHL player salaries exhibit a strong seniority-based wage structure, as performance-adjusted salaries rise significantly with age for most of the relevant range, peaking at about age 32 and onward. NHL players commonly miss time due to injury, which creates a substantial burden in lost salary costs [31].
As can be seen from the above literature review, the NHL is a current topic that has been addressed by several studies in the fields of medicine, economics, marketing, psychology, etc. The game itself, as a source of information for multi-criteria evaluation, is discussed in the following section.
National Hockey League as a Big Data Source
The result and course of a game are influenced by several factors. Franjkovic and Matkovic [32] aimed to determine which situational variables affected the final outcome the most. One of the conclusions of this study is that save percentage contributes the most to the final result. Good teams usually have a better defensive setup to eliminate shots from in front of the net and from the slot position, helping goaltenders to face only open shots from a distance. Cyrenne [33] examined the relationship between a team's salary distribution and its winning percentage and found evidence of a superstar effect, in that teams with a higher maximum player salary have higher winning percentages. According to Schulte et al. [34], Markov Game Model validation showed that total team action and state value provide a strong predictor of team success, as measured by the team's average goal ratio. An evaluation of other aspects of the NHL was presented by Friesl et al. [35]. Bowman et al. [36] indicated that competitive balance in the National Hockey League increased rather substantially during this period, and that overtime rules and shootouts have had a much larger positive impact on the competitive balance in the NHL than overtime approaches have had on the competitive balance of any of the other sports examined. Hoffmann et al. [37] investigated the magnitude of the home advantage as games proceeded from regulation, to overtime, to the shootout, while adjusting for team quality. The shootout may affect the psychological and behavioral states of home-team players, generally resulting in a decrease in the home team's odds of winning in the shootout relative to overtime. Beaudoin et al. [38] showed that there are various situational effects associated with the next penalty call, related to the accumulated penalty calls, the goal differential, the stage of the match and the relative strengths of the two teams. They also investigated individual referee effects across the NHL. Camire [39] examined the benefits, pressures and challenges of leadership and captaincy in the NHL. Different aspects were analyzed by Rockerbie [40], who estimated the effect of fighting in hockey games on attendance in the NHL from the 1997-1998 season through to the 2009-2010 season. Lopez [41] found that in the current points system, several teams are playing a significantly higher proportion of overtime games against non-conference opponents than in-conference ones, and that overtime games are also significantly more likely to occur in the months leading up to post-season play.
Gu et al. [42] considered how to use all available data and described an expert system for predicting NHL game outcomes (with 77.5% accuracy). In each system, the essential component is the system element, which, in this case, is the players (see [43,44]). Our research is focused on evaluating the performance of NHL players as well as on their comprehensive evaluation using MCDM methods in this field.
Technique for Order of Preference by Similarity to Ideal Solution
TOPSIS is defined by Zavadskas et al. [45] as being the second most widely used MCDM method. Others, the use of which is noted, for instance, by Tramarico et al. [46], include the Analytical Hierarchy Process (AHP), Analytic Network Process (ANP), Multi-Attribute Utility Theory (MAUT), Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) and Elimination and Choice Expressing Reality (ELECTRE). Its origin may be traced back to Hwang and Yoon [47] and Yoon [48], who developed this technique as an alternative to the ELECTRE method mentioned above (see Figure 1). Streimikine et al. [49] described the result of this technique as the solution with the shortest distance to the positive ideal solution (PIS), calculated using the Euclidean distance. This opinion was elaborated by other groups of authors (e.g., [45]), according to whom this method offers a solution that is, under the given conditions, closest to the above-mentioned PIS, while at the same time being farthest from the negative ideal solution (NIS). In Figure 1, to which Vavrek [50] refers in describing this method, each yellow ball represents one of the alternatives, while the red ball represents the NIS alternative and the green ball the PIS alternative. The best-ranked alternative (ball) is farthest from the grey ball (NIS) and closest to the black ball. The TOPSIS technique is calculated as per Vavrek and Bečica [51] and Vavrek et al. [52], but this research is not concerned with its more in-depth characteristics or an analysis of its calculation. These are readily available in the works of other authors, including Pavic and Novoselac [53], Seyedmohammadi et al. [54] and many others.
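Since the calculation itself is not reproduced in the text, the following R sketch illustrates the standard TOPSIS steps (normalization, weighting, ideal and anti-ideal solutions, relative closeness). Treating all criteria as benefit-type criteria is a simplifying assumption made for brevity; the data in the example are invented.

```r
# Sketch of the standard TOPSIS procedure for a decision matrix X (alternatives in rows,
# criteria in columns) and a weight vector w; all criteria are assumed to be "the more,
# the better" (cost-type criteria would require swapping the ideal and anti-ideal choice).
topsis <- function(X, w) {
  R <- sweep(X, 2, sqrt(colSums(X^2)), "/")     # vector normalization
  V <- sweep(R, 2, w, "*")                      # weighted normalized matrix
  v_pos <- apply(V, 2, max)                     # positive ideal solution (PIS)
  v_neg <- apply(V, 2, min)                     # negative ideal solution (NIS)
  d_pos <- sqrt(rowSums(sweep(V, 2, v_pos)^2))  # Euclidean distance to PIS
  d_neg <- sqrt(rowSums(sweep(V, 2, v_neg)^2))  # Euclidean distance to NIS
  d_neg / (d_pos + d_neg)                       # relative closeness; higher is better
}

# Example: 4 players, 3 criteria, equal weights
X <- matrix(c(30, 20, 25, 10,  40, 35, 20, 15,  50, 80, 60, 90), ncol = 3)
round(topsis(X, w = rep(1/3, 3)), 3)
```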
In every MCDM method, the first and one of the most important steps is the weighting of the individual input indicators, and the TOPSIS technique is no exception. Keršuliene et al. [55] differentiate approaches to weighting into four basic groups: subjective, expert, objective and integrated (which represents a combination of the preceding approaches). Subjective methods reflect the personality of the decision-makers and their individual preferences (indicator weight is defined based on a subjective opinion). Expert evaluation, meaning evaluation by a group made up of a small number of experts in a given field, is covered by Kendall [56] and Fisher and Yates [57], and the Fuller Method or the Fuller Triangle is typically used in this case. The final group, the group of objective methods, weights individual indicators based on a predefined mathematical model unique to each method, without any influence from the decision-maker on the result (the weight is given by the nature of the input data).
The focus of this research is to provide results to the professional community that are influenced by the decision-maker (an individual's subjectivity) to the lowest possible extent. Therefore, a total of five objective methods with various calculation processes were selected for weighting the input indicators, which should help accomplish this aim. The objective methods that are used together with the TOPSIS technique, and that are covered in more detail further on in this text, include: the Coefficient of Variance (CV), the CRITIC method, the Mean Weight (MW) method, the Standard Deviation (SD) method and the Statistical Variance Procedure (SVP).
Selected Methods for Weighting Indicators
As mentioned above, several methods were selected for weighting the input indicators for the needs of the calculation using the TOPSIS technique. This section presents the five methods selected (CV, CRITIC, MW, SD and SVP), which are classified as objective methods. We consider the identified methods to be the root cause of the varied results produced within the completed research.
There are numerous uses for the coefficient of variance in the academic environment. The most frequent include its use in the form of moment characteristics [58,59], CV control charts [60,61] and as a weighting method [62]. This research was completed using the calculation employed by Singla et al. [63]. An interesting fact is that the first step is the same as that specified by Yalcin and Unlu [64] for calculation using another of the employed methods, the CRITIC method. The CRITIC method is one of the most commonly used ones. It has applications in the environmental [65], medical [66], industrial and services fields [67,68]. The approach used for the CRITIC method calculation is based on the research of Yalcin and Unlu [64], who focused on the evaluation of an initial public offering (IPO) and divided this approach into three steps (data normalization, correlation calculation and weighting). For its extension and application to an offshore wind turbine technology selection process, see Narayanamoorthy et al. [69]. The MW method is the simplest in terms of its approach, given that the weight assigned to each indicator is the same. This method can be used when "no method" for weighting the individual indicators is being used, i.e., in situations where the monitored indicators are mutually equivalent [70]. The SD method involves weighting based on the variability of the individual indicators, i.e., basic moment characteristics, the use of which is quite common in the academic world [71]. The highest weight is assigned to the indicator within which the greatest differences are found between the evaluated variants, i.e., the indicator with the greatest standard deviation. Ouerghi et al. [72] provide uses for this method. The SVP method operates in a manner similar to the SD method and weights each indicator based on variance. Its applications are covered, for instance, by Nasser et al. [73] and Tayali and Timor [74]. The application of other methods can be found in the research of Geetha et al. [75], Narayanamoorthy et al. [76] and Ramya [77].
As can be seen, the principle behind the calculation of each of these methods is different, and they all follow different data aspects or characteristics.
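As an illustration of how differently these principles translate into weights, the following R sketch computes MW, SD, SVP, CV and CRITIC weights for a decision matrix. The exact normalization conventions used in the cited papers may differ slightly, so this is a generic sketch rather than a reproduction of their formulas.

```r
# Sketch: five objective weighting schemes for a decision matrix X (alternatives in rows,
# criteria in columns); each returns weights that sum to one.
mw_weights  <- function(X) rep(1 / ncol(X), ncol(X))                  # Mean Weight
sd_weights  <- function(X) { s <- apply(X, 2, sd);  s / sum(s) }      # Standard Deviation
svp_weights <- function(X) { v <- apply(X, 2, var); v / sum(v) }      # Statistical Variance
cv_weights  <- function(X) {                                          # Coefficient of Variance
  cv <- apply(X, 2, sd) / colMeans(X)    # assumes strictly positive column means
  cv / sum(cv)
}
critic_weights <- function(X) {                                       # CRITIC
  Z <- apply(X, 2, function(x) (x - min(x)) / (max(x) - min(x)))      # min-max normalization
  C <- apply(Z, 2, sd) * colSums(1 - cor(Z))                          # information content
  C / sum(C)
}
```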
Methodology
The objective of this research was to perform a comprehensive evaluation of NHL players' performance regardless of their salary, marketability or any aspects other than those that are directly related to the game itself. The process is illustrated in the following figure (Figure 2). A total of 11 attributes were selected (Section 4.1) to accomplish such an evaluation; we consider them to be the most important performance indicators that are monitored in practice. These indicators were monitored in a group of 581 offensive NHL players who were included in the statistics provided for individual players on the league's official website (Section 4.2). The results from the application of the TOPSIS technique and the five methods for weighting the individual attributes are statistically verified and described in Section 4.3.
The objective was not to identify the best player in the NHL. The result of our analysis represents an assessment of the real application of multi-criteria evaluation to this group of NHL players and an identification of differences based on the use of various methods to determine the importance of the attributes applied.
Attribute Selection
The first phase involved work with the 26 individual data criteria published on the website www.nhl.com, with the goal of comprehensively depicting the performance of NHL offensive players in terms of offensive and defensive characteristics. All of the monitored attributes are absolute in nature and were recorded for the entire 2018/2019 regular season (see Table 1). As noted above, the idea behind this research was to apply the TOPSIS technique to assess the performance of NHL players, while the aim was to apply attributes with a true testimonial value, meaning the lowest multi-collinearity of the input data. For this purpose, a rank order correlation (used due to failure to meet the normal distribution condition) was calculated between the individual pairs of attributes, with the results captured in Figure 3. Given the accepted level of a linear relationship (≤0.7), the majority of attributes whose structure and variability duplicated other monitored attributes were excluded from this group of attributes. The FOW or FOL attribute, which is dominant especially in the case of centers, given the logic of the game, was also excluded, as it could not be applied to the entire group of players (i.e., wings). The result of this selection was a group of 11 attributes that represent the input data for analysis using the TOPSIS technique (see Table 2). The resulting structure of the monitored indicators sufficiently depicts, in our opinion, the accomplishment of both the offensive and defensive tasks of centers (C), as well as left (L) and right (R) wings.
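A simple sketch of this screening step is shown below; the 0.7 threshold follows the text, while the greedy "drop the later attribute of the pair" rule and the variable names are illustrative simplifications.

```r
# Sketch: screening attributes by pairwise rank correlation, dropping one attribute
# from every pair whose absolute Kendall correlation exceeds 0.7; X must have column names.
screen_attributes <- function(X, threshold = 0.7) {
  R    <- cor(X, method = "kendall")
  keep <- colnames(X)
  for (a in colnames(X)) {
    for (b in colnames(X)) {
      if (a != b && a %in% keep && b %in% keep && abs(R[a, b]) > threshold) {
        keep <- setdiff(keep, b)      # greedily drop the second attribute of the pair
      }
    }
  }
  X[, keep, drop = FALSE]
}
```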
Evaluated Set of Players
The set of players examined as specified in the evaluation methodology above includes all offensive players (C, L and R positions) who played in the 2018/2019 regular season and whose statistics are recorded on the NHL's official website. There were 581 players in total, and their structure is specified below (see Figure 4). To verify the testimonial value of the generated results, selected players who were nominated for certain awards for the 2018/2019 season were identified in the overall rankings (see Table 3). These are players who were nominated for at least 2 awards, specifically Nikita Kucherov, Connor McDavid and Patrick Kane, who were the top three players in terms of points at the end of the 2018/2019 regular season (assuming that none of them would be evaluated as the best by any of the combinations). The performance of these players should be above average in the evaluated dataset, and they should be found near the top of the overall evaluation using the TOPSIS technique, regardless of the method applied for weighting the monitored attributes.
Methods of Processing and Statistical Verification of the Results Obtained
Analysis using the TOPSIS technique was completed a total of five times using the following methods for weighting the monitored attributes: CV, CRITIC, MW, SD and SVP (see Section 3). These results were then subjected to detailed statistical analysis, which included the following:
• The Kendall rank correlation coefficient, r_K = (n_c − n_d) / (n(n − 1)/2), where n is the number of observations of a pair of variables, n_c the number of concordant pairs and n_d the number of discordant pairs. This was used for the initial monitoring of multi-collinearity between the individual attributes under consideration, as well as the relationship of these attributes with the overall results.
• The Kolmogorov-Smirnov test (K-S), based on the statistic D = sup_x |F_{1,n_1}(x) − F_{2,n_2}(x)|, where F_{1,n_1}(x) and F_{2,n_2}(x) are the empirical distribution functions of the first and second sample. The K-S test was used for verification of the conformity of the distribution functions of results obtained using the TOPSIS technique and the individual methods for weighting the attributes.
• The Kruskal-Wallis test (Q), Q = [12/(n(n + 1))] Σ_i T_i²/n_i − 3(n + 1), used to verify the conformity of the mean of the results obtained, where n is the total number of observations, n_i the number of observations in the i-th group and T_i the total sum of ranks in the i-th group.
• The Shapiro-Wilk test (S-W), used to verify the normal distribution of the results, where n is the number of observations, n_i the empirical frequency and p_i the theoretical probability that the values of a random variable lie in the i-th interval.
Multi-criteria evaluation using the TOPSIS technique and the weighting of the individual attributes was completed in MS Excel. Statistica 13.4 and Statgraphics XVIII software were used for statistical verification in the scope defined above and for graphic illustration purposes.
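Although the study was carried out in MS Excel, Statistica and Statgraphics, the same battery of tests can be reproduced in a scripted environment. The hedged sketch below uses scipy.stats with synthetic placeholder result vectors; it is not the workflow actually employed in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Placeholder closeness coefficients for two weighting variants (581 players each).
c_cv = rng.beta(2, 5, 581)
c_sd = rng.beta(2, 4, 581)

tau, p_tau = stats.kendalltau(c_cv, c_sd)          # rank agreement between result sets
ks, p_ks = stats.ks_2samp(c_cv, c_sd)              # equality of distribution functions
q, p_q = stats.kruskal(c_cv, c_sd)                 # equality of mean ranks
w, p_w = stats.shapiro(c_cv)                       # normality of one result set
le, p_le = stats.levene(c_cv, c_sd)                # homogeneity of variances

print(f"Kendall tau={tau:.3f} (p={p_tau:.3g}), K-S={ks:.3f} (p={p_ks:.3g})")
print(f"Kruskal-Wallis Q={q:.2f} (p={p_q:.3g}), Shapiro-Wilk W={w:.3f} (p={p_w:.3g})")
print(f"Levene LE={le:.2f} (p={p_le:.3g})")
```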
Results of Multi-Criteria Evaluation
The first section describes the different weights obtained through the application of the CV, CRITIC, MW, SD and SVP methods (Section 5.1). The results of the completed analysis are then divided into five separate sub-sections devoted to the application of the individual methods for weighting the monitored attributes combined with the TOPSIS technique (Section 5.2). The results are compared in the final section using the above-specified statistical apparatus (Section 5.3).
Comparison of the Importance of Weights by the Individual Methods
Weighting a monitored parameter is one of the steps in every multi-criteria method, including the TOPSIS technique, in which the weights are applied in the third step using a normalized criteria matrix. Their importance is covered, for example, by Vavrek [52], who assessed the impact of the selection of a suitable method for weighting monitored parameters on the overall results.
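For readers unfamiliar with the technique, a compact TOPSIS sketch is shown below. The vector normalization, the benefit/cost flags and the synthetic data are illustrative assumptions, while the weights enter exactly in the third step mentioned above.

```python
import numpy as np

def topsis(X, w, benefit):
    """Closeness coefficient c_i in [0, 1] for each row of the decision matrix X.
    w: criteria weights summing to 1; benefit: True where larger values are better."""
    R = X / np.sqrt((X ** 2).sum(axis=0))          # vector-normalized criteria matrix
    V = R * w                                      # weighted normalized matrix (third step)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.gamma(2.0, 3.0, size=(581, 11))        # mock data: 581 players, 11 indicators
    w = np.full(11, 1 / 11)                        # MW weighting: all indicators equivalent
    benefit = np.array([True] * 10 + [False])      # e.g. one cost-type criterion such as PIM
    c = topsis(X, w, benefit)
    print("best alternative:", int(c.argmax()), "c_i =", round(float(c.max()), 4))
```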
This research involved five objective methods for weighting the 11 monitored parameters, the result of which is depicted in Figure 5. This graphical illustration reveals significant and clear differences between the assigned weights, which are documented by the values of the coefficient of variance for every parameter at a level greater than 24% (v_x ≥ 0.24) and a minimum standard deviation of s_x ≥ 0.01963. The most significant differences in terms of the monitored parameters were observed in the case of short-handed time on ice (SHTOI) and HITS, for which the variance range was significantly different from that of the others (R_HITS = 0.3239; R_SHTOI = 0.3183). These results also contributed to rejecting the null hypothesis of the Levene test, i.e., confirming the heteroscedasticity of the weights of individual parameters (LE = 8.065; p < 0.01). Significant differences also appear in terms of the individual methods and the distribution of 100% weight among the 11 parameters. Only the MW method shows an even distribution, which is of course given by its very name and especially the calculation itself. In such a case, all the parameters are equivalent and the weighting is thus insignificant in terms of the TOPSIS technique (since multiplication by a constant does not change the structure of the data). The CV method came the closest to such an even distribution of weights, followed by the SVP method (note that the order of the specific parameters is not considered). In the former, the cumulative weight of the four most important parameters is 47.09% (36.3% in the case of MW), while the least important parameter has a weight of 2.32% (PM). In the SVP method, 79% of the importance is distributed between the SHTOI and HITS parameters, while the total weight of the last eight parameters is 6.89%. A higher number of overtime goals (the OTG parameter) almost disappears from the results obtained using this method, as it is assigned a weight of only 0.005% (see Figure 6). Given the above, it may be said that the selection of a weighting method has a significant and clear impact on the results of the MCDM method, be it the TOPSIS technique or any other available method used in practice. The differences are so significant that they may substantially negate any attempt on the part of the decision-maker to select parameters with a clear testimonial value (without multi-collinearity), as was the case in the research provided (see Section 4.1). This conclusion was then verified in the evaluation of the overall results of the TOPSIS technique, combined with the CV, CRITIC, MW, SD and SVP methods for weighting.
Results Obtained Using the Individual Weighting Combinations
This section briefly describes the results obtained using the TOPSIS technique combined with the five methods employed to define the importance of the input indicators, specifically the CV, CRITIC, MW, SD and SVP methods. Within each of them, attention is paid to identifying the best-rated players as well as differences compared to three award-winning players: Nikita Kucherov, Connor McDavid and Patrick Kane.
Results Obtained Based on the CV-TOPSIS Combination
Artemi Panarin (Columbus Blue Jackets) was identified as the best player based on the evaluation using the TOPSIS technique and the CV method for weighting the attributes (see Table 4). Overall, the rating of players is heterogeneous, using 60.06% of the potential variance range c_i ∈ [0; 1]. A positive skew (γ_1 = 1.189) indicates a higher number of below-average players, i.e., players with a result of c_i < 0.1348, and rejection of the potential normal distribution of results (S-W = 0.890; p < 0.01). The best player as evaluated by the CV-TOPSIS combination, in comparison to the three specified award-winning players (Kucherov, McDavid and Kane), exhibited better and more stable results in terms of individual parameters, meaning that he is among the best in most indicators. A significantly lower number of penalty minutes (PIM) ranks him higher than Kucherov, and his better plus/minus rating (PM) puts him ahead of McDavid and Kane. These differences ultimately influenced the overall ranking of the players (see Figure 7).
Results Obtained Based on the CRITIC-TOPSIS Combination
The second CRITIC-TOPSIS combination identified Aleksander Barkov (Florida Panthers) as the best player among the sample of 581 evaluated players, a ranking of whom is provided in Table 5. Differences in the evaluations of these players decreased (R = 0.5293), and there was a decrease in the skew of the results (γ_1 = 0.314), which also indicates minimal differences in player rating and equalization of the overall score. Analysis of these results, given the results of the Shapiro-Wilk test, permits rejection of the hypothesis of their normal distribution (S-W = 0.954; p < 0.01). Within the monitored attributes showing Aleksander Barkov as the highest-ranked player using the CRITIC-TOPSIS combination, there is a balance between a set of four indicators (SHP, GWG, OTG and SG), among which differences are minimal. Short-handed time on ice (SHTOI) is a clear differentiating factor compared to Kucherov, McDavid and Kane. These results underline the fact that the selected indicators are comprehensive and offer their own testimonial value (see Figure 8).
Results Obtained Based on the MW-TOPSIS Combination
Evaluation using the MW-TOPSIS combination provided the same result as the evaluation in the previous section and identified Aleksander Barkov (Florida Panthers) as the best player (see Table 6). Other parameters, specifically the variance range and skew, show very similar values (γ_1 = 0.266; R = 0.5151), which led to a rejection of the hypothesis of the normal distribution of the MW-TOPSIS results (S-W = 0.961; p < 0.01). Identifying the same player as the de facto best means that the evaluation is similar to that in the previous Section 5.2.2, as the composition of the players remains unchanged.
Results Obtained Based on the SD-TOPSIS Combination
The most significant difference was the identification of Brandon Tanev (Winnipeg Jets) as the best or most effective player based on an evaluation of the 11 indicators (see Table 7). The differences between the players increased, which resulted in an increase in the overall variance range (R = 0.7176). A majority of the players delivered below-average results (c_i < 0.2192), while the structure of these results did not have a normal distribution, as in the previous instances (S-W = 0.952; p < 0.01). In the case of Brandon Tanev, the HITS and SHTOI indicators can be identified as the reason for his ranking. He is a fundamentally different kind of player (compared to Kucherov, McDavid and Kane), whose deficiencies on the offensive side are compensated by his defensive play, meaning his on-ice tasks are of a different nature (see Figure 9).
Results Obtained Based on the SVP-TOPSIS Combination
The trend identified while using the SD-TOPSIS combination concurred with the SVP-TOPSIS combination result. Brandon Tanev (Winnipeg Jets) was once again ranked first, his defensive efforts once again being the driving force (HITS, SHTOI). Significant differences were also identified between individual players (R = 0.800), while the evaluations of the three best players (see Table 8) can be described as outliers. Identifying the same player as the de facto best means that the evaluation is similar to that in the previous Section 5.2.4, as the composition of the players remains unchanged. Once again, this result may be characterized as a rejection of the normal distribution of the results obtained (S-W = 0.925; p < 0.01).
Statistical Verification of the Results Obtained
A comparison of the results obtained was completed from numerous perspectives, with the goal of ascertaining the feasibility of using the TOPSIS technique to evaluate NHL players and the most suitable combination for its actual implementation. In the first step, the agreement between the individual sets of results was assessed using the Kendall coefficient, with the following results (see Figure 10). Figure 10 shows a high correlation between the pairs of results, namely the TOPSIS results obtained with CV-CRITIC, CV-MW, CRITIC-MW and SVP-SD. The interchangeability of these pairs may be considered, and therefore, the use of both members of a pair would be superfluous from a methodological perspective. In the second step, the differences between selected moment characteristics were compared and tested, specifically in terms of the mean value (Kruskal-Wallis test) and variance (Levene test), with the following results (see Figure 11). Differences at the level of the variance range and the positions of the mean values (based on skew) are described in Section 5.2. In Figure 11, there are differences in variance (LE = 39.42; p < 0.01) and mean value (Q = 111.512; p < 0.01). Differences in the median are primarily observed between the CV-TOPSIS combination and the other combinations, and were significant in all instances. In the next (third) step, the focus was on comparing the distribution functions of the results obtained using the Kolmogorov-Smirnov test, which confirmed the consistency of the distribution functions of the results obtained in two cases, namely CRITIC-MW (K-S = 0.645; p = 0.799) and SD-SVP (K-S = 1.32; p = 0.061). These results confirm the characteristics identified in Section 5.2 and the high correlation confirmed by the Kendall coefficient.
Figure 12 provides a closer examination of the overall results in terms of the individual positions (L, C and R). From a statistical perspective, there is no significant difference in the mean value (Q = 1.14; p = 0.565) or variance (LE = 0.139; p = 0.871) between the rankings for these positions, and their testimonial value is identical (r_K = 1; p < 0.01). Differences can be observed only between the individual methods, for instance in the best-rated left wings when using the SD method for weighting the individual input indicators, etc. However, differences identified in this way are the same across all monitored positions, and therefore, it is not possible to assume either a positive or negative impact of the selected method on only certain subsets of the players analyzed.
The ability to reflect a professional view of the quality of individual players was verified by the ranking of the three players (Kucherov, McDavid and Kane) in the evaluation of individual combinations, which is illustrated in Figure 13 and is, at first glance, markedly different.
Differences in the results produced by the individual methods appeared here as well. While the selected players were among the best in the TOPSIS combinations with CV, CRITIC and MW, they fell back to the average among the 581 NHL players in the SD and SVP combinations. The difference in results is underlined by a comparison of their evaluation with the ranking of the best player for the given combination, which once again emphasizes the differences in the results obtained (see Figure 14).
Discussion
Any evaluation of the quality of a player in any sport is subject to various criteria and differences in perspectives among experts and the general public. How do we choose the best, and which criteria should be used to evaluate them? When considering only one league (in this case, the NHL), is the best player the one who scores the most goals (Maurice Richard Trophy) or the one who has the most points (Art Ross Trophy)? Or is the best player the one selected by the NHL Players' Association (Ted Lindsay Award)? Today's open world provides numerous potential answers to this question. Some try to identify the 10, 25, 50 or 100 best players [78-81], while others even select the 250 they consider the best [82]. There is also the possibility of evaluating players based on their attributes in virtual reality or in video games (see [83]). Furthermore, there is an opinion that players should be evaluated not together, but instead by individual positions [84], etc. The only thing that these approaches have in common is the absence of any evaluation methodology. An evaluation method, a set of evaluation attributes and a method for calculating an overall evaluation, and therefore a final ranking, are completely lacking.
Of course, there have been attempts to take a quantitative approach to answering this question. One example is the research of Tarter et al. [85], who ranked players based on a set of 12 attributes, including goals per game, penalty minutes and physical parameters, such as height and weight. Macdonald [86] used four indicators, including goals and shots, to evaluate players. Von Allem et al. [87] used games played, goals and five other indicators. Other attempts can be found in the research of Chen et al. [88] and Qader et al. [89]. In composing a set of monitored indicators, our approach was based on this research, using 26 freely available indicators as the starting point. These were subjected to multi-collinearity testing to identify indicators with an independent testimonial value. The result was 11 evaluated attributes, which include points, penalty minutes, hits and others (see Table 2 for a complete list of the evaluated indicators), based on which the NHL players were evaluated within the provided research.
To evaluate the performance of players using this group of monitored indicators, the TOPSIS technique was selected, the application of which has been proven in numerous fields, including tourism [90], transportation [91], agriculture [54], risk assessment [92], the evaluation of cloud services providers [93] and the evaluation of local government entities [50]. One of the most important steps in every MCDM method, including TOPSIS, is weighting, i.e., determining the importance of the evaluated indicators. To ensure that the analysis was not affected by our subjectivity, we selected five objective methods for weighting the indicators: the CV, CRITIC, MW, SD and SVP methods. The selected methods are proven in practice and are unique and specific thanks to their individual approaches to calculation (see Section 3.1).
A difference in the approaches towards determining the importance of the selected indicators appeared in the first step of the analysis. Some methods, despite the individual testimonial value of the selected indicators, designated some of them as nearly unnecessary (such as OTG when the SVP method was used for weighting). Importance was most evenly distributed among the monitored indicators using the CV method (see Figure 6), which most closely approximated the MW method. The latter considers the indicators to be equivalent (and there is no need to consider the importance of the indicators in this case).
Similar results were obtained using the MW and CRITIC methods and using the SD and SVP methods. These pairs showed a high correlation of results (r_K > 0.9) or a statistically significant match between their distribution functions. The generated results may be labeled as significantly different, especially in terms of two groups:
•
The CV, CRITIC and MW methods, which primarily considered offensively skilled players to be the best when used within the TOPSIS technique, and which specifically identified Aleksander Barkov and Artemi Panarin as the best players;
•
The SD and SVP methods, which primarily considered defensively skilled players to be the best when used within the TOPSIS technique, both of which identified Brandon Tanev as the best player.
Conclusions
A total of 581 NHL (offensive) players were evaluated in this research using a set of 11 indicators for the 2018/2019 regular season. These data were used for multi-criteria assessment using the TOPSIS technique and five objective methods to determine the importance of the input indicators (CV, CRITIC, MW, SD and SVP). These combinations produced significantly different results, which highlights the need for greater diligence when selecting a suitable method for weighting input indicators. This selection does not have an impact on the internal connections between the subjects of evaluation, which was shown in a comparison of the results by the players' positions (see Figure 12). Based on the results obtained, we would favor one of the CV, CRITIC and MW methods for the purposes of evaluating athletes (as the subjects of evaluation). In this specific case, we have the greatest preference for the MW method and would consider the input indicators as equivalent for the purpose of multi-criteria evaluation. Therefore, we can recommend its usage in many problems requiring multiple criteria to be taken into account (not only in sports).
Further research can be carried out in three ways: (1) the results achieved can be processed via different methods (e.g., sensitivity analysis, factor analysis); (2) the group of objective weighting methods could be extended or compared with the results achieved using any subjective methods (e.g., the Fuller method); (3) the group of objective weight-
Figure 3. Kendall rank correlation plot of potential attributes.
Figure 4. Frequency of individual positions among evaluated NHL players.
Figure 5. Comparison of the assigned weights based on the individual methods (CV, CRITIC, MW, SD and SVP).
Figure 6. Cumulative frequency of weights within the individual methods (CV, CRITIC, MW, SD and SVP).
Figure 7. Comparison with the best player as evaluated by the CV-TOPSIS combination (Artemi Panarin).
Figure 8. Comparison with the best player as evaluated by the CRITIC-TOPSIS combination (Aleksander Barkov).
Figure 9. Comparison with the best player as evaluated by the SD-TOPSIS combination (Brandon Tanev).
Figure 10. Rank correlation between the results obtained (TOPSIS combined with CV, CRITIC, MW, SD and SVP).
Figure 12. Box plot of results obtained by player position (TOPSIS combined with CV, CRITIC, MW, SD and SVP).
Figure 13. Ranking of selected players within the completed analysis (TOPSIS combined with CV, CRITIC, MW, SD and SVP).
Figure 14. Share of the rankings for selected players and the best player within the completed analysis (TOPSIS combined with CV, CRITIC, MW, SD and SVP).
Table 1. Structure of potential attributes.
Table 2. Resulting structure of monitored indicators.
Table 3. Players nominated for selected awards for the 2018/2019 season.
Table 4. Ten best players as evaluated by the CV-TOPSIS combination.
Table 5. Ten best players as evaluated by the CRITIC-TOPSIS combination.
Table 6. Ten best players as evaluated by the MW-TOPSIS combination.
Table 7. Ten best players as evaluated by the SD-TOPSIS combination.
Table 8. Ten best players as evaluated by the SVP-TOPSIS combination. | 9,434.6 | 2021-06-16T00:00:00.000 | [
"Computer Science"
] |
Rapid Real Time PCR Based Detection of Cell Count in Case of Urinary Tract Infection
Microbial identification and antimicrobial susceptibility testing methods currently used in clinical microbiology laboratories require at least two to three days because they rely on the growth and isolation of micro-organisms. This long, but necessary, delay has enormous consequences for the prophylactic usage of antimicrobial drugs. This study was an attempt to reduce this detection time span. TaqMan Real Time PCR has been used as an important tool in differentiating the Gram nature of bacteria present in UTI patients; it allows detection of spiked bacterial 16S rDNA from urine samples within a short span of 5 h and also gives the corresponding cell count of both/either Gram positive and Gram negative organisms present. A standard curve was generated and used to determine the cell count of control as well as patient samples. Detection could be done in the range of 10³ to 10⁶ cells/mL. Patient samples screened clustered on either the allele 1 or allele 2 axis, depending on the majority concentration of the Gram nature of the micro-organisms. The cell counts for control individuals were scattered within 0 to 10², with very few in the range of 10⁴. The case was just the reverse for the patient group, where most of the points were scattered within 10⁴ to 10⁸. Thus the optimal selection of appropriate antimicrobials (depending on the Gram nature) by clinicians will gradually improve as an increasing number of rapid molecular diagnostic tools for the detection, identification and characterization of infectious agents become commercially available.
INTRODUCTION
The urinary tract is one of the most common sites of bacterial infection in women (Foxman, 2003; Ishaq et al., 2011). These infections also carry the risk of possible progression to bacteremia. The empirical choice of an effective treatment is becoming more difficult as urinary pathogens are increasingly becoming resistant to commonly used antibiotics (Nicolle et al., 1996; Barret et al., 1999; Mathai et al., 2001; Karlowsky et al., 2001; Dharmadhikari and Peshwe, 2009; Butcu et al., 2011; Ultley et al., 1988; Leclerq et al., 1988; Ishaq et al., 2011). Any infection left untreated, such as a UTI or kidney infection, is extremely dangerous and can lead to life-threatening conditions such as bacteremia. This is usually a very serious condition that results in death unless prompt appropriate treatment is provided.
One of the major drawbacks of the routine diagnostic methods for pathogen identification in UTI is the long period for detection (48 to 72 h) required in culture-based methods (Ramlakhan et al., 2011). Besides, uncultivable microbes are numerically dominant in biological samples, urine being no exception, and therefore have to be detected by culture-independent methods (Carroll et al., 2000; Belgrader et al., 1998; Bittar et al., 2008; Bergeron and Ouellette, 1995; Picard and Bergeron, 1999; Tang et al., 1997; Ishaq et al., 2011). The total count (cultivable as well as non-cultivable bacteria that are alive, but do not give rise to visible growth under non-selective growth conditions) needs to be detected rapidly for prompt medical intervention.
The delay of the microbiology laboratory contrasts with the time required (less than one hour) to get the results from other hospital laboratories or departments, such as biochemistry, hematology and radiology. Indeed, clinical microbiology procedures are still based on the use of a variety of growth-dependent biochemical tests developed by Pasteur and others during the 19th and 20th centuries. Consequently, physicians rarely consult the microbiology results unless the patient is not responding to the initial antimicrobial therapy, which is based on key information obtained during the first hour after patient admission, thereby excluding any diagnosis based on microbiology results. Clearly, there is a need for rapid and accurate diagnostic tests for use in clinical microbiology laboratories to enable optimal patient management and treatment. Rapid detection and identification of microbial pathogens and their antimicrobial resistance profiles would have a tremendous impact on the practice of medicine by providing physicians with key microbiology results when needed.
The use of rapid molecular diagnostics may provide a solution for treating this disease, which has a high morbidity and mortality rate. Molecular biology techniques for the correct detection and identification of bacteria are now widely used in clinical microbiology, namely 16S rRNA-based identification, terminal Restriction Fragment Length Polymorphism (tRFLP), Random Amplification of Polymorphic DNA (RAPD) and Real Time PCR (Picard and Bergeron, 2002; Ishaq et al., 2011). There are numerous patents (USPTO 20090239248, USPC 4356, US Patent 7205111, US Patent 7662562, US Patent 4693972) stating methodologies for rapid identification of microbes from clinical samples, but none of them mention the sensitivity of detection. Keeping this fact in mind, we tried to fine-tune already existing methods to develop a more sophisticated system of detection. Real Time PCR has been used as an important tool in the differentiation of the Gram nature of bacteria present in UTI patients, using a consensus real-time PCR protocol with a TaqMan probe that allows detection of spiked bacterial 16S rDNA from urine samples within a short span of 5 h and also gives us the corresponding cell count of both/either Gram positive and Gram negative organisms present.
Genomic DNA Isolation
Bacterial genomic DNA was extracted using the Fit Amp Urine DNA Isolation Kit (Epigentek, P-1017-050) from various dilutions (10⁸ to 10¹ cells/mL) of urine samples. A cartridge-based DNA extraction kit was used for isolation of DNA as per the manufacturer's protocol, with minor modifications. 900 µL of sterile urine (urine samples were filtered by passing them through a 0.22 µm syringe filter) was taken and seeded with cultures of Gram negative (E. coli) and Gram positive (S. aureus) bacteria separately at concentrations of 10¹, 10², 10³, 10⁴, 10⁵, 10⁶, 10⁷ and 10⁸ cells/mL. The suspension was centrifuged at 10,000 rpm for 10 min (Eppendorf Centrifuge 5418, rotor Eppendorf FA-45-18-11, aerosol tight) at room temperature to pellet the cells. The supernatant was discarded and the pellet was resuspended in 200 µL of suspension buffer and mixed by pipetting. Then 4 µL of DNA digestion buffer containing enzyme was added and mixed by vortexing. The mixture was incubated at 65°C for 1 h. To it, 300 µL of DNA capture buffer was added and mixed by pipetting. The mixture was transferred to a spin column placed inside a 2 mL collection tube. It was centrifuged at 12,000 rpm for 1 min (Eppendorf Centrifuge 5418, rotor Eppendorf FA-45-18-11, aerosol tight). The flow-through was discarded and the spin column was replaced in the collection tube. The centrifugation step was repeated and the supernatant was discarded. Then 300 µL of 70% ethanol was added to the spin column and centrifuged at 12,000 rpm for 30 s (Eppendorf Centrifuge 5418, rotor Eppendorf FA-45-18-11, aerosol tight). The flow-through was discarded. Two more washes with 200 µL of 90% ethanol were applied in the same way as stated above. This was to remove salts as well as to wash away impurities. The spin column was placed into a fresh 1.5 mL centrifuge tube and the DNA was eluted using 10 µL of DNA elution buffer.
Real Time PCR
The DNA obtained was directly used for Gram nature detection using the TaqMan PCR protocol (Genotypic assay) as reported by Shigemura et al. (2005) and Ishaq et al. (2011), to check the sensitivity of the assay. The only modification was that, for Probe 2, FAM was used in place of TET in order to maintain compatibility with our Step One Real Time PCR system. ROX was used as the internal control. The allelic discrimination assay (Genotyping) was set up in a 48-well reaction plate. Each experiment was repeated at least 6 times.
For determination of the standard curve, sterile urine samples (after filtration through 0.22 µm syringe filter units by Whatman) were seeded with a 1E7 concentration of Gram positive (S. aureus) bacteria. Genomic DNA was extracted using the Fit Amp Urine DNA Isolation Kit as mentioned above. The DNA was serially diluted (corresponding to 1E1 to 1E7 cells) and the real time experiment was set up in exactly the same way as mentioned above, under the Standard Curve option. Patient urine samples were also analyzed based on the standard curve equation to determine the cell count. A total of 70 non-infected and 89 patient samples were analyzed and their corresponding C_T values for both Gram positive and Gram negative bacteria were noted down.
RESULTS
Seeded sterile urine was used to check the efficiency of detection/sensitivity of this assay. Detection could be done in the range of 10³ to 10⁶ cells/mL, as evident from the scatter plot diagrams in Fig. 1a. A distinct allelic discrimination plot was obtained that clustered Gram positive and Gram negative seeded samples on different axes. Patient samples screened clustered on either the allele 1 or allele 2 axis, depending on the majority concentration of the Gram nature of the micro-organisms present in the infected urine samples.
In order to determine the sensitivity of the detection limit, a standard curve was generated using three replicates of each dilution (1E1 to 1E7), by plotting the cell concentration or quantity on the X axis and the C_T value on the Y axis (Fig. 2a). The C_T values were also plotted in an Excel worksheet to determine the straight-line equation of the curve (y = −3.143x + 38.44). The best-fit line (Fig. 2b) had an R² value of 0.994, which is well within the optimal limit. Figure 2c is the screen print of the experimental page setup for the standard curve experiment in the ABI Step One Real Time PCR instrument, software version 2, showing the amplification, multicomponent and raw data plots, besides the standard curve. The plate layout with the respective concentrations and the C_T values for both alleles are also depicted in the diagram. From the standard curve equation, by substituting the measured C_T value, the unknown cell concentration was determined. The amplifications obtained for the normal and infected samples are shown in Fig. 2d and 2e, respectively. The Gram positive and Gram negative C_T values were separately plotted for both the normal and patient groups (Fig. 2f to 2i). For control individuals, most values are scattered within 0 to 10², with very few in the range of 10⁴, but not above that. The case is just the reverse for the patient group, where most of the points are scattered within 10⁴ to 10⁸.
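The arithmetic of converting a measured C_T value into a cell count via the reported standard curve can be summarized in a few lines of code. The sketch below uses the published slope and intercept (y = −3.143x + 38.44, with x taken as log10 of the cell count per mL) purely for illustration and is not part of the original analysis.

```python
import numpy as np

# Reported standard-curve fit: C_T = -3.143 * log10(cells/mL) + 38.44 (R^2 = 0.994).
slope, intercept = -3.143, 38.44

def cells_from_ct(ct):
    """Invert the standard curve to estimate the cell count from a measured C_T."""
    return 10 ** ((ct - intercept) / slope)

def fit_standard_curve(log10_cells, ct_values):
    """Re-fit slope, intercept and R^2 from a fresh dilution series, if desired."""
    x = np.asarray(log10_cells, float)
    y = np.asarray(ct_values, float)
    m, b = np.polyfit(x, y, 1)
    pred = m * x + b
    r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    return m, b, r2

if __name__ == "__main__":
    for ct in (15.0, 22.0, 29.0):
        print(f"C_T = {ct:4.1f}  ->  ~{cells_from_ct(ct):.2e} cells/mL")
```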
DISCUSSION
The passive reference dye signal ROX was absolutely constant during the entire experiment. Gram positive samples clustered on the allele 1 axis of the scatter plot, while Gram negative ones clustered on the allele 2 axis. The probe VIC (allele 1) was responsible for amplification of Gram positive specimens, while FAM (allele 2) was responsible for Gram negative ones. A sterile urine sample seeded with a known concentration of Gram positive bacteria shows amplification of only the VIC probe (Fig. 1b), whether single or multiple samples are seeded with the same Gram nature at different concentrations (Fig. 1c). The case is exactly the same for Gram negative seeded samples, which show only amplification of the FAM probe (Fig. 1d and 1e), while VIC and ROX are relatively constant. Thus we can say that there was no experimental error.
Without even looking at the C_T values in the case of control and patient samples (Fig. 2f-2i), one can easily infer from the graph that there is considerably less amplification in the case of normal samples as compared to the infected ones. In the case of infected samples, amplification starts after only 12 cycles, while in control individuals there is no amplification until almost the 26th cycle.
Another notable feature was that the few points which lie towards 0 to 10¹ in the Gram negative plot of the patient group have a higher cell concentration value in the Gram positive plot, and vice versa; i.e., those samples are not infected by high concentrations of organisms of both Gram natures, but by only one of the two types. For example, in patients 1 and 2, the Gram positive cell count was 0 in both cases, while their respective Gram negative counts were 10⁸ and 10⁵. Similarly, in the case of the last two samples, numbers 88 and 89, the Gram negative cell count was 0, but the corresponding Gram positive count was 10⁷ for both. The raw data with cell count values are provided in Table 1, where sample numbers exhibiting this phenomenon have been highlighted (showing 0 in one Gram nature plot and a higher cell count value in the other). Thus, infections caused by non-cultivable bacteria can also be detected by using this culture-independent assay. The detection time is drastically brought down from 72 h to less than 5 h, thus allowing quick administration of antibiotics. The exact cell number of micro-organisms causing the infection can also be determined from the standard curve equation, without the hassle of cultivating them.
CONCLUSION
A major goal in the diagnosis and treatment of patients, especially female patients, suffering from UTI is the ability to rapidly detect the characteristics of the infecting microbes. We have used Real Time PCR in the differentiation of the Gram nature of bacteria present in UTI patients, using a consensus real-time PCR protocol that allows detection of spiked bacterial 16S rDNA from urine samples within 5 h, along with the corresponding cell count of both/either Gram positive and Gram negative organisms present. A similar technique could be used for pathogen detection in cases of septicemia. | 3,136 | 2013-06-22T00:00:00.000 | [
"Biology"
] |
Collapsing Domain Wall Networks: Impact on Pulsar Timing Arrays and Primordial Black Holes
Unstable domain wall (DW) networks in the early universe are cosmologically viable and can emit a large amount of gravitational waves (GW) before annihilating. As such, they provide an interpretation for the recent signal reported by Pulsar Timing Array (PTA) collaborations. A related important question is whether such a scenario also leads to significant production of Primordial Black Holes (PBH). We investigate both GW and PBH production using 3D numerical simulations in an expanding background, with box sizes up to $N=3240$, including the annihilation phase. We find that: i) the network decays exponentially, i.e. the false vacuum volume drops as $\sim \exp(-\eta^3)$, with $\eta$ the conformal time; ii) the GW spectrum is larger than traditional estimates by more than one order of magnitude, due to a delay between DW annihilation and the sourcing of GWs. We then present a novel semi-analytical method to estimate the PBH abundances: rare false vacuum pockets of super-Hubble size collapse to PBHs if their energy density becomes comparable to the background when they cross the Hubble scale. Smaller (but more abundant) pockets will instead collapse only if they are close to spherical. This introduces very large uncertainties in the final PBH abundance. The first phenomenological implication is that the DW interpretation of the PTA signal is compatible with observational constraints on PBHs, within the uncertainties. Second, in a different parameter region, the dark matter can be entirely in the form of asteroid-mass PBHs from the DW collapse. Remarkably, this would also lead to a GW background in the observable range of LIGO-Virgo-KAGRA and future interferometers, such as LISA and Einstein Telescope.
I. Introduction
Physical models that feature the formation of cosmic domain wall (DW) networks have typically been seen as problematic due to the so-called domain wall problem, the fact that the network tends to dominate the energy density of the universe [1]. However, if domain walls are biased and annihilate, this problem turns into a virtue, as the network naturally tends to be an abundant component in the universe before its collapse, and is thus easier to probe.
Gravitational waves (GWs) are one of the potential signatures [2][3][4]. The spectrum is stochastic, and analyses with LIGO-Virgo O3 data already place constraints on the parameters of the network [5]. Recently, the evidence for nano-Hz GWs at Pulsar Timing Arrays (PTAs) [6][7][8][9] brought renewed interest in this possibility: DW networks that annihilate around the QCD phase transition provide a good explanation of the signal and outperform several other models [10,11]. In case of detection of GWs, it is also crucial to find additional signatures that can help in selecting DWs over other early Universe sources. Dark radiation and collider signals may be some such signatures [10], as well as the production of Primordial Black Holes (PBHs) from the collapsing network [12,13] (see also [14][15][16][17]), although the resulting PBH abundance is subject to large uncertainties.
Overall, these aspects motivate a detailed investigation of the evolution of a DW network during its collapse phase (see [18][19][20][21][22][23] for previous work) and of its gravitational relics, i.e. GWs (see [24][25][26][27][28][29]) and PBHs, which we aim to perform in this work. We consider the simple case of DWs with a double-well potential, with an energy per unit area (tension) σ, and where the potential is slightly tilted by a term of size ∆V, i.e. a bias such that one of the Z_2-symmetric minima becomes a false vacuum [30] (see e.g. [31][32][33][34] for other mechanisms to have viable long-lived walls).
We simulate the corresponding DW network throughout the formation, scaling and annihilation regimes, using field theory numerical simulations in an expanding radiation-dominated background, with box sizes up to 3240³, while computing the GWs radiated throughout the evolution.
Until recently, most analyses assumed that GWs in this scenario are radiated until the pressure on the walls caused by the bias overcomes the self-acceleration due to the wall's tension, i.e. when ∆V = σH, where H is the Hubble rate. Before this epoch, the network is in the so-called scaling regime, with most of its energy density in a fixed O(1) number of walls per Hubble patch. However, the collapsing network consists of large DWs of various shapes, which contain False Vacuum (FV) pockets that shrink to small sizes after the scaling epoch. These last stages of evolution can certainly source GWs in addition to the ones from the so-called scaling epoch, as has been found in recent work [28]. An order-one change in the estimate of the time scale for GW production can lead to an order-of-magnitude enhancement of the final GW signal. Our simulations improve on the determination of such a time scale, while also providing new insights into the properties of the network at the onset of annihilation and highlighting the role of the kinetic energy of the scalar field in the production of gravitational waves in the final stages of the network annihilation.
Our numerical results will also allow us to establish the time evolution of the decaying network, in particular of its FV pockets. Indeed, the collapse of the network takes some time: the abundance of Hubble-sized FV pockets at some point drops very quickly, but a small fraction of rare super-Hubble-sized pockets survives for a longer time, as they must cross the Hubble radius to annihilate. Their abundance dramatically decreases in time, but their likelihood to collapse into a BH grows instead, simply because the Schwarzschild radius associated with the FV pocket grows faster in time than the Hubble length. This may result in a tiny population of BHs at formation, but potentially a large one at present, if the network collapses in the early universe (this formation mechanism is similar to the one in [36,37] for isolated DWs, see also [38], with the crucial difference that for a network the collapse is in general far from spherical).
We provide an analytical understanding of this picture, which complements our numerical findings to provide an important step forward in the estimate of the PBH abundance.Nonetheless, large uncertainties in the final PBH abundance persist, due to an exponential sensitivity to parameters and the departure from spherical collapse, which at this point is still difficult to quantify.
With these new estimates of both GW and PBH relics, we then analyze the phenomenological consequences of a generic DW network that annihilates at different epochs in the early Universe.First, we assess the viability of the DW interpretation of the PTA signal in light of PBH production.Second, we discuss PBHs from DWs as a candidate for the observed dark matter, with a possible correlated GW signal at interferometers such as LIGO-Virgo-KAGRA (LVK) [39], LISA [40], Einstein Telescope (ET) [41] and Cosmic Explorer (CE) [42].
The paper is organized as follows. We summarize the evolution of annihilating DW networks in Sec. II, highlighting the important time scales in the problem. We present the results from our numerical simulations in Sec. III. We present an analytical model to account for the late FV pockets in Sec. IV. We conclude in Sec. V with a discussion on the phenomenological impact.
II. Domain Wall Networks: Scaling and Annihilation regimes
In this work, we focus on a simple model exhibiting DWs: a real scalar ϕ with a Z_2 symmetry ϕ → −ϕ and a double-well potential, V(ϕ) = (λ/4)(ϕ² − v²)², keeping in mind that several aspects of our discussion may also apply to other models (e.g. with more minima or from axion potentials). In this model the DW tension (the energy per unit area of the walls) is σ = (2√(2λ)/3) v³ and the scalar mass squared in the minima is m² = 2λv². In addition, we will assume a small bias term in the potential that breaks this symmetry, of size ∆V, which will lead to the annihilation of the walls.
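As a quick numerical cross-check of these relations (assuming the standard quartic double-well written above, which is our reconstruction consistent with the quoted m² and σ), the short sketch below evaluates the tension, mass and wall width, reproducing σ = 2/3 for the simulation choice λ = 1/2, v = 1 used later in the text.

```python
import numpy as np

def wall_parameters(lam, v):
    """Tension, scalar mass and wall width for V = (lam/4) * (phi^2 - v^2)^2."""
    sigma = (2.0 * np.sqrt(2.0 * lam) / 3.0) * v ** 3   # energy per unit area of the kink
    m = np.sqrt(2.0 * lam) * v                          # mass about either minimum
    delta_w = 1.0 / m                                   # wall width ~ 1/m
    return sigma, m, delta_w

sigma, m, delta_w = wall_parameters(lam=0.5, v=1.0)
print(f"sigma = {sigma:.4f} (expected 2/3), m = {m:.1f}, delta_w = {delta_w:.1f}")
```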
We assume that a network of walls is formed by a phase transition in the early universe during radiation domination, i.e. we start with zero initial field plus small random fluctuations and the walls are formed via the so-called Kibble mechanism [1,43]. The subsequent evolution follows a sequence of events, schematically represented in Fig. 1, and explained in detail below: (i) the network reaches the so-called scaling regime, (ii) the network starts annihilating when the bias term becomes relevant, (iii) a peak of gravitational waves is produced, (iv) some very rare domain walls that survive for a longer time may collapse and give rise to PBHs.
A. Scaling regime
Soon after formation, the DW network reaches a self-similar 'scaling' regime characterized by, on average, about one Hubble-sized DW per Hubble volume. If we consider a generic network of total comoving area A in a box of comoving volume V, its total energy is σAa² and so its total energy density is given by

ρ_dw = σ A a² / (a³ V) = 2𝒜 σH,     (2)

where a is the scale factor, H is the Hubble parameter and we introduced the so-called area parameter

𝒜 ≡ A / (2 a V H),     (3)

which is a dimensionless number related to the area density of the network. During scaling, 𝒜 is expected to be of order unity [26].
One of the remarkable signatures of the DW network is the stochastic spectrum of GWs that it creates. In the scaling regime, the spectrum Ω_gw(k, t) ≡ ρ_c⁻¹ dρ_gw/d log k, where ρ_gw is the energy density in GWs and ρ_c is the critical background density, as a function of the wave number k and cosmic time t, peaks at the Hubble scale, and previous studies have found that in scaling its amplitude at the peak is given by [44]

Ω_gw^(scaling)(t) ≃ (3/(32π)) ϵ α(t)²,     (4)

where ϵ ≃ O(1) is an efficiency factor extracted from the numerical simulations and α(t) ≡ ρ_dw/ρ_c is the fraction of the total energy density stored in the walls. The fraction α increases over time and thus one expects the GW spectrum to have a peak around the annihilation time of the network.
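A rough numerical illustration of this scaling estimate is given below. The O(3/(32π)) prefactor reflects our reconstruction of eq. (4) and common conventions in the literature, so the absolute numbers should be taken as indicative only.

```python
import numpy as np

def omega_gw_peak(alpha, epsilon=1.0, prefactor=3.0 / (32.0 * np.pi)):
    """Scaling-regime peak amplitude, Omega_gw ~ prefactor * epsilon * alpha^2."""
    return prefactor * epsilon * alpha ** 2

for alpha in (1e-3, 1e-2, 1e-1):
    print(f"alpha = {alpha:.0e}  ->  Omega_gw(peak) ~ {omega_gw_peak(alpha):.2e}")
```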
B. Annihilation Phase
The annihilation phase occurs roughly once the Hubble rate becomes smaller than the pressure acceleration, that is, for conformal time η ≳ η_∆V, defined by

σH(η_∆V) = ∆V;     (5)

one then expects the network to start collapsing. So far, most of the literature has assumed that the peak of GW production occurs exactly at the time given by eq. (5), identified with the annihilation time of the network. As we will see, it is important to determine precisely both the annihilation time of the network and the time of emission of the peak of GWs, since the final GW spectrum scales as the fourth power of the (conformal) emission time, as one can see by extrapolating the scaling properties in eqs. (2) and (4).
A useful quantity to monitor the degree of annihilation of the network is the fraction of volume occupied by the false vacuum, F_fv(η). In a Z_2 model, F_fv = 1/2 in scaling. There is some literature on how F_fv decays during the annihilation phase [18][19][20][21][32] (see also [22]), with no current consensus. In the following sections, we will see that the network decays exponentially in the annihilation phase according to

F_fv(η) ≃ (1/2) exp[−(η/η_ann)^p],     (6)

with η_ann possibly differing from η_∆V by O(1) factors, and p ≃ 3.
III. Numerical results
We now present the results of our lattice simulations of DW networks, obtained by means of the CosmoLattice code [45,46]. We set initial conditions at the initial conformal time η_i in radiation domination such that m = H(η_i = 0), and a(η_i = 0) = 1. We assign a white-noise spectrum of small Gaussian fluctuations in Fourier space to the scalar field, while setting its homogeneous component at ϕ = 0. The number of grid points N³ and the comoving box size L determine the comoving lattice spacing ∆x = L/N. We set ∆x and the duration η_f of our simulations such that the physical lattice spacing a(η_f)∆x at the end of the simulation is smaller than (or equal to) the domain wall width δ_w ∼ m⁻¹, and impose that the simulation box contains at least one Hubble patch at the final time. From here on we set v = m = 1, which corresponds to the choice λ = 1/2 for the quartic coupling. With these choices, a(η) = 1 + η, H(η) = (1 + η)⁻², and the domain wall tension is simply σ = 2/3. The maximal dynamical range that can be probed with the simulation is then η_f,max = √N − 1, obtained by choosing L = √N, such that at η_f there is only one Hubble patch in the box and a(η_f)∆x = δ_w. The field evolution is performed using the leapfrog algorithm with a time step ∆η ≲ ∆x/2, such that the Courant criterion is well satisfied. More details about the numerical setup can be found in Appendix A.
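The parameter bookkeeping behind these choices can be summarized in a short script. This is not the CosmoLattice code itself, only the arithmetic relating N, L, the resolution condition a(η)∆x ≤ δ_w, the requirement that the box still contain a Hubble patch, and the Courant-limited time step.

```python
import numpy as np

def lattice_setup(N, L, m=1.0):
    """Parameter bookkeeping for a radiation box with a(eta) = 1 + eta, H = (1+eta)^-2
    and wall width delta_w ~ 1/m (units v = m = 1 as in the text)."""
    dx = L / N                                   # comoving lattice spacing
    delta_w = 1.0 / m
    eta_res = delta_w * N / L - 1.0              # resolution: a(eta) * dx <= delta_w
    eta_box = L - 1.0                            # comoving Hubble radius (1 + eta) still fits in L
    eta_f = min(eta_res, eta_box)                # usable dynamical range
    d_eta = 0.5 * dx                             # Courant-safe leapfrog step
    return dx, eta_f, d_eta

for N, L in [(3000, np.sqrt(3000)), (3240, 90.0)]:
    dx, eta_f, d_eta = lattice_setup(N, L)
    print(f"N={N}, L={L:.1f}: dx={dx:.4f}, eta_f<={eta_f:.1f}, d_eta<={d_eta:.4f}")
```

For L = √N both constraints coincide and give η_f,max = √N − 1, while the second configuration reproduces the η_f ≃ 35, L ≃ 90, N = 3240 choice discussed below.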
Here we shall consider a cubic bias, i.e. V_bias = qϕ³, rather than a linear term. The reason for this choice is only technical: a linear bias shifts the location of the maximum of the potential, thereby introducing a bias in the population of the two minima already at the early times of the simulation. Since the limited dynamical range that can be simulated requires a sizable ∆V, such a population bias would prevent the formation of a network and/or alter its evolution. The cubic bias allows us to overcome this problem, as the maximum is not displaced, and for sufficiently small initial field fluctuations the system does not notice the asymmetry introduced by V_bias in the initial steps. (An alternative strategy is to use a time-dependent linear bias [28], which is initially negligible and becomes important only after a certain activation time. However, we have found that this technique introduces additional uncertainties in the final results, due to different possible choices of the activation time of the bias.)
The scenario relevant to our analysis is that of a domain wall network that achieves the scaling regime sufficiently before it starts collapsing as a consequence of a pressure bias between the two vacua. For the potential adopted in this work, this occurs for ∆V/V(0) ≲ 0.007, corresponding to η_∆V ≳ 18, and thus we focus on this range of bias sizes. For larger bias, the network does not fully achieve scaling for a sufficient time before collapse. While this scenario can certainly occur, it is characterized by some residual dependence on initial conditions, which makes it difficult to extract general conclusions.
We show the evolution of the average field value together with its standard deviation as a function of conformal time in Fig. 2, as obtained in simulations with a box of N³ = 3000³ grid points and maximal time η_f = 55.
Here we fixed the initial conditions and varied the size of the bias potential, such that η_∆V = {19, 22, 25}. The following features can be clearly distinguished: first, the field is initially localized very close to the maximum of the potential, until around η ≃ 6, when it relaxes to the two minima. Field oscillations are sizeable until η ≃ 10, when they are significantly diluted by Hubble friction. The standard deviation then begins to shrink after η ∼ 20, and for η ≳ 30 only the leftmost minimum is populated, signaling that the network is rapidly dissolving under the action of the bias. For comparison, the behavior for vanishing bias is also shown (dot-dashed curve). In this case, a tendency towards the rightmost minimum occurs at around η ∼ 15, which is to be attributed to the relatively small (and decreasing) number of Hubble patches at those times, i.e. ∼ (L/η)³ ≲ 50 at η ≳ 15.
The deviations in the network evolution in the presence of a bias are best understood by focusing on the following quantities.
A. False Vacuum Fraction
First, we look at the fraction of volume in the false vacuum, F_fv, which is numerically obtained as the fraction of the simulation grid points where ϕ > 0, shown in Fig. 3 (blue curves). As expected, initially F_fv = 0.5 in all simulations. This remains approximately true for the simulation without a bias, although a slight deviation to larger values is observed at late times, corresponding to the aforementioned small number of Hubble patches near the end of the simulation. In simulations where a bias is included, F_fv decreases rapidly after a time which depends on the size of the bias, obviously the later the smaller ∆V. In this work, we are mostly interested in false vacuum regions that are at least Hubble-sized, since their radius becomes equal to their Schwarzschild radius if their energy density becomes comparable to the background at Hubble crossing, as we will explain in more detail in Section IV. When the false vacuum fraction drops below the inverse number of Hubble volumes in the box, given by n_H ≃ (L/(1 + η))³ and shown by the black dot-dashed curve in Fig. 3, such regions no longer exist in our simulation. For the bias sizes of interest, this occurs around η ≃ 30. At later times, the remaining false vacuum regions are in sub-Hubble structures. Correspondingly, a much steeper decrease of F_fv is observed at late times than at early times, when super-Hubble false vacuum regions can still be present. Thus we attempt fitting the numerical results up to the time at which F_fv ≃ n_H⁻¹ with eq. (6), where η_ann and p are fitting parameters. The result of this procedure is shown by the orange curves in Fig. 3. The fitting function above provides an excellent fit to the early-time data, and we find p ≃ 3.3 − 3.5.
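A minimal sketch of this fitting procedure, using the functional form of eq. (6) as reconstructed above (the 1/2 normalization is our reading of the text) on mock data, is shown below; in the actual analysis the fit is of course performed on the measured F_fv(η) curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def f_fv(eta, eta_ann, p):
    """Eq. (6): false-vacuum volume fraction during the annihilation phase."""
    return 0.5 * np.exp(-(eta / eta_ann) ** p)

# Mock "measured" fractions mimicking a network with eta_ann = 33, p = 3.
eta = np.linspace(15, 40, 26)
rng = np.random.default_rng(4)
data = f_fv(eta, 33.0, 3.0) * rng.normal(1.0, 0.02, eta.size)

popt, pcov = curve_fit(f_fv, eta, data, p0=(30.0, 3.0))
print(f"eta_ann = {popt[0]:.1f} +/- {np.sqrt(pcov[0, 0]):.1f}, "
      f"p = {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f}")
```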
To investigate the dependence of these results on the simulation box, we increase the number of Hubble patches in our box by increasing L (and N to a smaller extent), at the price of slightly worsening the spatial resolution and thus limiting the dynamical range of our simulations. Guided by the previous discussion, we choose L and η_f such that δ_w/(a∆x) ≃ 1 at time η_f ≃ 35, corresponding to L ≃ 90, with N = 3240. These choices increase the number of Hubble patches by a factor of ≈ 4.5 with respect to the results in Fig. 3. The new results are reported in Fig. 4, together with the fitting curves. It can be appreciated that F_fv now remains almost constant for the entire simulation time in the absence of a bias, thereby confirming that the previously observed deviation is due to the limited number of Hubble volumes. The most relevant result of this improved analysis is a decrease of the inferred value of p (which is closer to the analytical expectations discussed in Sec. IV).
We then perform several realizations with an increased number of Hubble patches, for several values of the bias size and changing the random seed that sets the initial conditions, to estimate numerical uncertainties in our results. We report the results in Table I. Averaging over all realizations, we infer best-fit values of η_ann and p consistent with the exponential decay law of eq. (6) with p ≃ 3 (the individual values are listed in Table I). When comparing η_ann to the rough expectation η_∆V (defined by σH = ∆V), we find a slight delay, η_ann ≈ 1.5 η_∆V. (In this regard, we disagree with [23] on the dependence of η_ann on ∆V.)
[Figure caption fragment: "..., as a function of conformal time. Note that at early times, η ≲ 10, there is a transient which does not carry physically relevant information."]
B. Area parameter
In the absence of a bias, a domain wall network achieves a scaling regime where the area parameter 𝒜 remains approximately constant with time. The energy density in the domain walls during the scaling regime is determined by eq. (2), which is generally smaller than the total energy density stored by the scalar field. We show in Fig. 5 the area parameter extracted from our high-resolution simulations in the absence of a bias (solid magenta curve). As expected, it remains approximately constant for η ≳ 25, saturating for this particular realization to a value 𝒜 ≃ 0.9. These findings agree roughly with those of [26].
The corresponding evolution in the presence of the bias is shown by the blue dashed, dotted and dot-dashed curves, for several bias sizes. One can appreciate that the biased network follows the unbiased one until η ≃ 14−17, depending on the size of the bias.
C. Energy density
The evolution of the energy density of the scalar field is shown in Fig. 6 (left) for the unbiased (solid) and biased (dashed, dotted, dot-dashed) potential. These results are obtained in our longest simulations, corresponding to the choice L = √N. As expected, in the unbiased case the total energy density in the scalar field remains approximately constant after η ≳ 30, with a final value ρ_tot ≈ 3.5 σH. On the other hand, in the biased case the total energy density decreases sharply, together with the decrease of the vacuum contribution from the bias potential (orange curves), due to the annihilation of the network. The evolution of the three components of the energy density, i.e. kinetic, gradient and potential energy, is shown in Fig. 6 (right) for the unbiased (solid) potential as well as for an example case of the biased potential (dashed) with the choice η_∆V = 22.
The following observations can be made: in the unbiased case, the gradient energy rapidly saturates to a constant value ρ_grad ≃ 1.4 σH, while the potential energy density ρ_pot decreases slowly until the end of our simulations (here by potential we denote only the Z_2-symmetric term, whose behavior is shown by the purple curve). The latter decrease may, however, be partially a numerical effect, since it occurs almost at the end of the simulation, where the domain wall width becomes comparable to the lattice spacing. Overall, these two components make up most of the energy density of the scalar field, in agreement with the intuition that most of the energy density is in domain walls; in particular, ρ_grad + ρ_pot ≃ 2.6 σH at the end of the simulation. On the other hand, the kinetic energy decreases rapidly until it saturates to an approximately constant value ρ_kin ≃ 0.9 σH, thereby making a subleading contribution to the energy density.
The situation in the biased case is strikingly different, where deviations from the unbiased scenario occur around η ≳ 20: most importantly, the kinetic energy stops decreasing and quickly begins to increase, while the potential energy decreases sharply.The former overcomes the latter around η ≳ 34.This is very close to the value of the annihilation time η ann inferred from our fit of the false vacuum fraction, nicely confirming that this time scale indeed characterizes the annihilation of the network when the vacuum pressure from the bias term (shown by the dashed brown curve) accelerates the walls, thereby increasing the kinetic energy.The gradient energy initially remains constant, before sharply decreasing at η ≳ 40.This is easy to interpret, as the existence of the network is the source of gradient energy, which is thus quickly dissipated away once the walls annihilate.Eventually, at η ≳ 45 also the kinetic energy starts decreasing.We have checked that for η ≳ 45 both the potential and kinetic components decrease approximately like non-relativistic matter, as expected since we are working with a massive scalar field with m ≫ H at the end of the simulation.Overall, the kinetic energy dominates the energy density of the biased domain wall network at the end of our simulations.Notice also that the bias term very rapidly vanishes after η ≳ 40, corresponding to the exponential decay of false vacuum regions.
Our findings on the behavior of the energy density in the unbiased case may seem different from the common lore in the literature, which mostly adopts ρ_dw ≃ 2AσH. In our simulations, we find that the total energy density in the scalar field, including all of the simulation box, is roughly twice as large as the estimate above, ρ_tot ≈ 3.5 σH. One should nonetheless notice that: 1) the commonly adopted estimate is expected to apply only to static domain walls, i.e. it neglects the contribution of the kinetic energy, which in our simulations accounts for ρ_kin ≈ 0.9 σH; 2) the simulation box includes regions where |ϕ| > 1, which cannot be attributed to domain walls, but rather to scalar waves. The energy density in this region of field space is reported in Fig. 7, for a simulation with a large number of Hubble patches. Its size at the end of the simulation is ρ_scal ≈ 0.8 σH, mostly coming from kinetic and potential energy. Ignoring the kinetic part, it contributes ≈ 0.5 σH. Therefore, our larger total energy density in the scalar field is easily explained in terms of the two contributions above (plus a small unavoidable contribution from scalar waves in the region |ϕ| ≤ 1). We conclude that, according to our simulations, domain walls carry an energy density ρ_dw ≃ 2.4 σH. If interpreted in terms of a relativistic correction to the standard formula, i.e. ρ_dw ≃ 2Aγ²σH, it implies γ ≈ 1.2.
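For concreteness, the following minimal sketch shows how the three components discussed above can be extracted from a lattice snapshot. The conformal-time conventions (ρ_kin = φ'^2/(2a^2), ρ_grad = |∇φ|^2/(2a^2)), the forward-difference gradient and the example double-well potential are illustrative assumptions and need not match the exact normalizations of the simulation code.

```python
# Minimal sketch: volume-averaged kinetic, gradient and potential energy
# densities of a canonically normalized scalar in an FRW background written in
# conformal time. V(phi) denotes the Z2-symmetric potential only.
import numpy as np

def energy_components(phi, dphi_deta, a, dx, V):
    rho_kin = 0.5 * np.mean(dphi_deta ** 2) / a ** 2
    grad_sq = sum(((np.roll(phi, -1, axis=ax) - phi) / dx) ** 2
                  for ax in range(phi.ndim))
    rho_grad = 0.5 * np.mean(grad_sq) / a ** 2
    rho_pot = np.mean(V(phi))
    return rho_kin, rho_grad, rho_pot

# Example with a quartic double well V = lam/4 (phi^2 - v^2)^2:
lam, v = 1.0, 1.0
V = lambda phi: 0.25 * lam * (phi ** 2 - v ** 2) ** 2
rng = np.random.default_rng(1)
phi = rng.standard_normal((32, 32, 32))
print(energy_components(phi, np.zeros_like(phi), a=1.0, dx=1.0, V=V))
```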
D. Gravitational Waves
It has long been appreciated that a domain wall network acts as an efficient source of GWs during its evolution.Previous numerical calculations have mostly focused on the contribution from the scaling regime.In our simulations, we are able to extract the energy density radiated in GWs throughout the annihilating phase as well.Due to higher memory consumption, our simulations including GWs are limited to N = 2040, with a maximal simulation time η f ≲ 45.Additionally, to speed up the calculation, we only start the numerical computation of the GWs at the latest times (η > 35).This is justified since we find that, unlike previous estimates, most of the GWs are emitted during the annihilation epoch rather than in the scaling regime (see also [28]).
A simple estimate of the maximal energy density fraction (i.e. the energy density in GWs compared to that of the radiation background) is provided by the quadrupole formula, which gives Ω_gw(η) ∼ (3/(32π)) α_tot²(η), where α_tot ≡ ρ_ϕ/(3H²M_p²). This would correspond to the case in which all the energy density in the scalar field sources GWs. We compare our results with this simple estimate in Fig. 8, for three different choices of bias size for which we are able to capture the full GW production (with the same initial conditions as in all previous figures). For all our choices, we find that the GW energy density fraction reaches a peak at η ≳ 40. The efficiency factor with respect to the simple quadrupole estimate at peak production is ϵ ≡ Ω_gw/[(3/(32π)) α_tot²] ≃ 0.5-0.6. Fig. 9 shows that the GWs are compatible with being mostly sourced by the kinetic component of the energy density in the scalar field, with a subdominant contribution from the gradient component, for one example value of the bias size. The GW energy density fraction is computed by integrating the numerically obtained GW spectrum, although the final result is dominated by the region around the peak of the spectrum.
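The comparison just described amounts to the following minimal sketch, in which both the spectrum array and the value of α_tot are hypothetical placeholders.

```python
# Minimal sketch of the two quantities compared in Fig. 8: the total GW energy
# fraction obtained by integrating dOmega_gw/dln k over ln k, and the
# quadrupole estimate (3/(32 pi)) alpha_tot^2.
import numpy as np

def omega_gw_total(k, dOmega_dlnk):
    return np.trapz(dOmega_dlnk, np.log(k))

def quadrupole_estimate(alpha_tot):
    return 3.0 / (32.0 * np.pi) * alpha_tot ** 2

k = np.logspace(-1, 2, 200)
dOmega_dlnk = 1e-9 * (k / 3.0) ** 3 / (1.0 + (k / 3.0) ** 4)  # toy peaked spectrum
alpha_tot = 1e-3

epsilon = omega_gw_total(k, dOmega_dlnk) / quadrupole_estimate(alpha_tot)
print("efficiency epsilon =", epsilon)
```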
Our results importantly point to a mild delay between the characteristic time of the annihilation epoch, η_ann, and the time at which most of the GW production occurs, η_gw. In our simulations, this delay is estimated to be η_gw ≈ 1.4 η_ann. Physically, our findings suggest that efficient GW production continues throughout the annihilation epoch, and so beyond the production during scaling studied in previous works, and that most of the relic abundance of GWs is determined by the late stages of the DW network collapse. Compared to the previous literature, the delay amounts to a factor of 1.3 × 1.4 ≈ 1.8 in the time of GW emission, which was previously estimated to be η_∆V, and so, following eq. (4), an enhancement of 1.8⁴ ∼ 10 in the total GW abundance.
IV. Semi-Analytical approach
Let us now turn to a different way to analyze the problem.The network annihilation process can be viewed as a transition from DWs in the scaling regime to, eventually, a collection of rare FV pockets.During scaling, in each Hubble patch, there is typically one Hubble-sized DW, some scalar radiation and, to a good approximation, no closed DWs.Most of the energy is stored in the form of large DWs and any DW is typically separated from neighbour DWs by the correlation length, which in scaling is set by the Hubble length.
Annihilation starts when the force per unit area from ∆V is larger than σH, pushing the DWs to reduce the volume in the FV. Let us assume that this effect turns on instantaneously at η_∆V. Since the typical separation between walls is also of order η_∆V, a Hubble-sized FV pocket takes a time of about η_∆V to shrink to zero, and so one expects that the fraction of the volume in FV becomes tiny after a delay of order η_∆V, that is, at η ≃ 2η_∆V, since after that time only very rare structures survive, i.e., the ones that started with super-Hubble size at η_∆V. As an additional confirmation of the numerical results shown in Section III A, this delay can be qualitatively reproduced by solving the equations of motion for a DW enclosing the FV pocket, in the so-called Nambu-Goto approximation, for some simple DW shapes (see Appendix B for details), as shown in Fig. 10.

FIG. 6: Left: Energy density in the scalar field as a function of conformal time (blue), normalized to the scaling behavior σH, without (solid) and with several choices of bias sizes (dashed, dotted, dot-dashed). In orange, the evolution of the average of V_bias in the simulation box is shown. Right: Components of the energy densities of the scalar field as a function of conformal time, for unbiased (solid) and biased (dashed) potential, the latter for η_∆V = 22. The vertical gray line corresponds to the value of η_ann inferred from fitting the false vacuum fraction according to eq. (6). In both figures, N = 3000 and L = √N.
This simple observation has two interesting consequences. First, we identify η ∼ 2η_∆V as a 'maximal shrinking time', when most of the FV has shrunk to zero, and so it is natural to expect an additional contribution to GW production on top of the GWs that the DWs have sourced during scaling. Qualitatively, this confirms the result η_gw ∼ 2η_∆V presented in the previous section.
Second, this also sets the time when we can start to picture the 'remainders' of the network in a simple and useful way: as an ensemble of FV pockets of different sizes, which are placed far apart so that we can treat them independently.One can call this a dilute gas of FV pockets.
A. False Vacuum fraction
This approximation allows us to compute the FV volume fraction F_fv and extrapolate it to times that are inaccessible with numerical methods. The basic idea is simple: FV pockets shrink in time (see Fig. 10), so that the lifespan of each pocket is determined by its initial size R_0 at η = η_∆V. Once the network is sufficiently fragmented, we can approximate F_fv(η) as a sum over an ensemble of pockets of different initial radii R_0. Moreover, the relative weights of the different R_0 in the sum are known, because they are inherited from the scaling regime.
Indeed, since there are only 2 vacua (for a Z_2 model) that are essentially distributed randomly during scaling, the probability of finding a super-Hubble region of radius R_0 where the field is in one vacuum is expected to be

P_0(R_0) ∼ 2^{-(R_0/L_ann)^3},   (8)

with L_ann the correlation length at the onset of annihilation, which is identified as L_ann ≃ η_∆V. The distribution of eq. (8) satisfies the wanted normalization: initially, a Hubble-sized region with R_0 = L_ann has a 50% chance to be in either vacuum.
The FV volume fraction can then be obtained by adding up the (shrinking) volumes of all pockets with weights given by eq. (8) over R_0 > L_ann,

F_fv(η) ≃ N ∫_{R_0 > L_ann} d log R_0 P_0(R_0) R^3(η; R_0).   (9)

We assume here a flat integration measure in log R_0 for simplicity, but the results do not depend very dramatically on the measure choice. Even if this formula holds only for η > 2η_∆V, the overall normalization constant N can be fixed to have F_fv = 1/2 when extrapolated to the initial time η_∆V.

FIG. 10: Evolution of FV pockets with spherical (solid), cylindrical (dashed) and planar (dotted) shapes as extracted from the Nambu-Goto approximation (see Appendix B). Blue lines show the comoving radius R(η) ≡ R(η; R_0) as a function of conformal time for various initial radii R_0, starting at η_∆V from rest. Lower R_0 pockets are more numerous. The dashed vertical gray line denotes when most of the pockets shrink to small sizes. This is expected to coincide with the peak of GW production η_gw. The black solid line is the comoving Hubble length at each time. The inset shows the initial values of R_0 of (spherical or cylindrical) pockets that reach zero (purple) or the Hubble length (black) at a given time.
Generically, pockets shrink and vanish after some time. The 'trajectories' R(η; R_0) that we obtain by solving the dynamics in the Nambu-Goto approximation (see Appendix B) are basically triangular: they decrease to zero and remain zero. As shown in Fig. 10, the shrinking time is quite shape independent too. At any time η, one can track back the initial radius of the pocket that reaches R = 0 at that moment. We call this R_0^min(η) and show it in the inset of Fig. 10 (purple curves); since there is a slight shape dependence, we take two values of R_0^min(η), from the spherical and the cylindrical pockets, to give a sense of 'theoretical' errors. By construction, then, the lower integration limit in eq. (9) can be replaced by R_0^min(η). Since the size distribution in eq. (8) is exponentially biased towards small R_0, it is clear that F_fv must be suppressed by 2^{-(R_0^min/η_∆V)^3}.

The Nambu-Goto approximation also provides some useful information to narrow down the asymptotic behaviour at large η. Figures 10 and 13 suggest that the mock trajectory R(η; R_0) → R_0 − w(η − η_∆V), with w an O(1) constant, gives a reasonable approximation. With this, the asymptotic form follows. Ignoring the power-law term, this allows one to recognize the FV decay time introduced in eq. (6) as η_ann ∝ η_∆V/w (up to an O(1) numerical factor). The value of 1/w can be read off from Fig. 13, which in the end results in η_ann/η_∆V being around 1.3. This is in quite remarkable agreement with the numerical simulations (see Table I), representing an important nontrivial validation of the FV pocket picture. This is also manifest in Fig. 11, where we compare the fits from the numerical simulations (orange curves) to the analytic expression of eq. (9) (blue curves).
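The dilute-gas estimate can be evaluated numerically with a few lines of code. The sketch below uses the weight of eq. (8), a flat measure in log R_0 and the linear mock trajectory; the chosen values of w and η_∆V and the integration ranges are illustrative assumptions, not the exact settings behind Fig. 11.

```python
# Minimal numerical sketch of the dilute-gas estimate of eqs. (8)-(9), using the
# linear mock trajectory R(eta; R0) = R0 - w (eta - eta_dV) and the weight
# P0(R0) = 2^{-(R0/eta_dV)^3}; the normalization is fixed so that F_fv = 1/2
# when extrapolated back to eta_dV.
import numpy as np

eta_dV, w = 22.0, 0.85

def P0(R0):
    return 2.0 ** (-(R0 / eta_dV) ** 3)

def F_fv_unnormalized(eta, n=4000):
    lnR0 = np.linspace(np.log(1e-2 * eta_dV), np.log(10 * eta_dV), n)
    R0 = np.exp(lnR0)
    R = np.clip(R0 - w * (eta - eta_dV), 0.0, None)   # shrinking pocket radius
    return np.trapz(P0(R0) * R ** 3, lnR0)

norm = 0.5 / F_fv_unnormalized(eta_dV)
for eta in (22.0, 30.0, 40.0, 50.0):
    print(eta, norm * F_fv_unnormalized(eta))
```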
In this picture, it is also possible to isolate the part of this FV fraction, F_fv^hor, given by Hubble-sized pockets (or larger) at any time. It reduces to integrating from a higher value of R_0, the one corresponding to pockets that enter the Hubble radius (rather than vanishing) at that time. We therefore introduce R_0^hor(η), the initial radius of the pocket that crosses the Hubble radius at time η, which is also shown in the inset of Fig. 10 (black line). Notice how this radius grows significantly faster than R_0^min(η); hence the FV fraction in Hubble-sized (or bigger) pockets is much smaller than the total F_fv.
In this case we obtain F_fv^hor(η) ∼ P_0(R_0^hor(η)). Since R_0^hor grows linearly in η (see Fig. 10), we conclude that the FV fraction contained in super-Hubble pockets behaves asymptotically like eq. (6) with p = 3.
Note also that a very simple relation holds if we use the approximate mock trajectory linear in η given above, i.e. R_0^hor(η) = R_0^min(η) + η, which immediately implies that the fraction in Hubble-sized patches is F_fv^hor ∼ P_0(R_0^hor) ∼ 2^{−((1+w)η/η_∆V)^3} with w ≈ 0.8, while the total fraction in FV pockets is roughly F_fv ∼ P_0(R_0^min) ∼ 2^{−(wη/η_∆V)^3}; the former is much more suppressed than the latter, as can be appreciated in Fig. 11. These results are reminiscent of the scaling exp(−η^d) in d + 1 dimensions suggested by [18]. However, let us emphasize that the similarity is accidental. The analysis of [18] actually refers to an annihilation mechanism based on population bias, with ∆V = 0. In 2 + 1 dimensions, it was found in [21] that the analog of [18] actually did not apply for the population bias case, whereas it did work for pressure bias, with ∆V ≠ 0. Our work extends the analysis of [21] to 3 + 1 dimensions (see also [23]) and clarifies the physical reasons behind this decay law for the pressure bias (∆V ≠ 0) mechanism.

FIG. 11: FV fractions from numerical simulations and from analytic estimates as a function of conformal time. Orange lines correspond to eq. (6) with η_ann and p extracted from the fits to the simulations shown in Table I (only the first and third row, giving the lowest and highest fractions respectively). The remaining curves arise from the analytic FV pocket 'gas' approximation. Blue lines are the estimate of the full FV fraction from eq. (9) with w = 0.9 (solid) and 0.8 (dashed), which represents a justified margin of uncertainty (see Fig. 13). The red curves are two estimates of the FV fraction contained in Hubble-sized (or larger) pockets, for a spherical (dashed) and a cylindrical (dotted) pocket. For α_c = 1 these correspond to the intersection of the red curves with the dotted vertical line (a tiny number well outside the plot range). The FV fractions for other choices of the effective collapse criterion α_c are given by the intersections of the vertical lines and the red curves.
B. PBH formation
We are now ready to give some rough estimates of the abundance of PBHs produced during the network decay. As in [13], the natural strategy is to follow the various FV pockets as they shrink. Part of this evolution takes place while the pocket is super-Hubble sized. It is useful then to look at the 'figure of merit' defined by the ratio of the Schwarzschild radius of the pocket to the Hubble radius when the pocket size crosses the Hubble radius. This quantity actually coincides with the local overdensity of a FV pocket with energy density ρ_pocket, α_loc ≡ ρ_pocket/(3H²M_p²). For α_loc ≪ 1 the pocket needs to contract significantly after entering the Hubble radius in order to form a PBH. This is less likely to happen if it is non-spherical and, since FV pockets descend from a DW network, asphericities can actually be large. For larger α_loc, this is not so challenging, and so one can expect PBHs to form in this way. Moreover, α_loc grows in time, so some PBHs are certainly produced. Notice that the limit α_loc → 1 is special: the pocket collapses to a BH as soon as it enters the cosmological Hubble radius. These BHs are actually expected to carry a baby universe. For spherical symmetry, they have been considered in [36,37,47,48]. We will consider both types of BHs, but we anticipate that baby-universe BHs should be much rarer than the ordinary ones.
The rest of the argument to estimate the PBH abundances is as follows. First, the overdensity produced by FV pockets that enter the Hubble radius at η scales as in eq. (16), with α_gw the average fraction of energy density in DWs at η_gw and the prefactor fixed by numerical simulations (see Appendix B). Second, as argued before, the collapsing network can only be acceptably approximated as an ensemble of pockets after η ≳ 2η_∆V.
Third, in principle one should do an analysis of how many of the different pockets actually manage to shrink enough to form BHs. This will depend on their degree of asphericity and angular momentum and it can be model dependent.Its estimate deserves a dedicated analysis of data from numerical simulations that is outside the scope of this work.The result of such an analysis should effectively result in a threshold of collapse, α c , such that (on average) pockets which reach α loc ≥ α c collapse into a BH.
At present we are unable to obtain a reliable estimate of α c .We thus consider a range of benchmark values that could be reasonable for this quantity.Since α c = 1 is the threshold for baby-universe BHs, α c < 1 corresponds to sub-Hubble BHs.
Fourth, we will identify the PBH abundance (of either type) with the FV fraction F_fv^hor evaluated at the time η_PBH when α_loc = α_c is met.
We show in Fig. 11 the FV fraction expected to collapse into BHs according to this simplified criterion, with α c = 0.15 and 0.2.Clearly, since the collapse criterion is satisfied first with smaller α c , the sub-Hubble PBHs are much more abundant than the ones carrying baby universes.
As expected, the abundance is exponentially sensitive to α c .The abundance of baby-universe BHs (α c ≃ 1) is extremely suppressed (and given by the extrapolation of the red curves to the vertical dotted line) even for quite large α gw .On the other hand, the sub-Hubble PBHs that can be formed 'soon' could be much more abundant.The basic reason why sub-Hubble PBHs are more abundant is simply that the FV pockets they descend from are (extremely) more common.Unfortunately, with the current analysis, it is difficult to make any further quantitative statements due to the large uncertainty in α c .A more detailed study is left for the future.
V. Phenomenological Implications
We now proceed to discuss the phenomenological impact of our estimates of the spectrum of stochastic GWs and of the abundance of PBHs from the DW network. We base our results on the numerical output described in Sec. III and on the analytical understanding of the evolution of the false vacuum fraction described in Sec. IV. We start by estimating the minimal PBH abundance, formed from Hubble-sized FV pockets that reach α_loc = 1, using eq. (11). The total abundance may be expressed as a function of the energy fraction of the network at the time of GW emission, α_gw, and the background temperature at that time, T_gw. The mass of these black holes is set roughly by the total energy in the Hubble volume, and more precisely by eq. (B3). The abundance at a given mass is experimentally constrained by a wide variety of probes. In Fig. 12 we translate the bounds on the PBH abundance from [49] and [50] into bounds on the α_gw−T_gw plane (thick blue curve). We relate α_loc with α_gw using eq. (B4), which is a minor refinement of eq. (16). Constraints on α_gw are obviously stronger for larger T_gw, as PBHs redshift as matter for a longer time. As an interesting example of the typical mass of these PBHs, we show in green the boundaries of the asteroid mass range 10^{-16} M_⊙ ≲ M_PBH ≲ 10^{-11} M_⊙, where the PBHs can account for the whole of the dark matter [51].
The remaining blue lines in Fig. 12 refer to tentative estimates of the abundance of PBHs, by assuming some benchmark values, α c = 0.3 and α c = 0.1, at Hubble crossing, which indicates how much the structure has to further shrink (without dissipation) to enter its Schwarzschild radius.
In the same plot, we also show the range of parameters where the GW signal from the DW network could be observed by different GW detectors, from SKA [52] at the lowest frequencies to LIGO-Virgo-KAGRA (LVK) [39], ET and CE at the highest ones. We have fixed the frequency of the GW spectrum at the peak to be dictated by the Hubble radius at T_gw, and the peak amplitude to be given by eq. (4) with efficiency ϵ = 0.6, as obtained from the numerical results, and we assumed the spectrum to decrease as 1/ω for frequencies ω larger than the peak and to grow as ω³ for smaller frequencies. This behavior corresponds to that observed in the scaling regime [26], and has recently been roughly confirmed also during the annihilation phase by [28]. The GW spectra computed in our simulations roughly agree with those works, although a dedicated study is necessary to firmly establish the high-frequency slope.
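For reference, the assumed spectral shape can be written as a simple broken power law; the smooth interpolating function below (with its maximum at the peak frequency) and the numbers used are illustrative choices, not the exact parametrization adopted for Fig. 12.

```python
# Minimal sketch of the assumed GW spectral shape: rising as omega^3 below the
# peak and falling as 1/omega above it, normalized so that the maximum equals
# omega_peak at f = f_peak. Frequencies and amplitudes are placeholders.
import numpy as np

def omega_gw_spectrum(f, f_peak, omega_peak):
    x = f / f_peak
    return omega_peak * 4.0 * x ** 3 / (1.0 + 3.0 * x ** 4)

f = np.logspace(-10, 3, 500)                    # Hz
spec = omega_gw_spectrum(f, f_peak=1e-8, omega_peak=1e-9)
print(spec.max(), f[np.argmax(spec)])
```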
We then plotted the regions of parameters where the spectrum overlaps with the power-law sensitivity curves derived in [53]. In the same figure, we also show the current bounds obtained in [5] with LIGO-Virgo O3 data (LV), which already indirectly constrain the maximal PBH abundance for α_c = 1, and the region of parameter space which could provide an interpretation of the PTA signal (red contours) [11].
Let us focus on the PTA region first, i.e. at T gw ≈ 1 − 10 GeV.We find the DW interpretation of the PTA signal to be overall compatible with constraints on PBHs for collapse thresholds α c ≳ 0.1.Interestingly, if α c ≲ 1, a fraction of PBH dark matter from DW collapse is expected, which is compatible with what is inferred from the BH merger rate measured by LIGO/Virgo.
Significant tension between PTA observations and astrophysical bounds on PBHs would instead arise only if the collapse threshold were even smaller, i.e. α_c ≲ 0.1. The likelihood of such low thresholds remains to be assessed by future work. Our results differ drastically both from [17] (for example regarding the time dependence of the DW decay) and from the earlier, contradictory claims in the first version of [54].
As mentioned above, a particularly interesting region of parameter space is the asteroid mass range, for 10^6 GeV ≲ T_gw ≲ 10^9 GeV, where the totality of dark matter could be explained by PBHs from the network. This mass range is typically very hard to probe, given the particle-like size of their Schwarzschild radius. Crucially, however, if PBHs originate from the DW network, a complementary GW signature of their existence is expected, which can be probed partially by LVK at design sensitivity and fully by ET and CE.
Besides these two interesting regions, the plot shows that in a large range of parameters the annihilation of the network can be "heard" by different GW observatories, and that a non-negligible abundance of PBHs might also be expected if α_gw > O(10^{-2}).
The interplay of the GW and PBH signatures described in this work is similar to the more studied scenario of PBH formation from the collapse of large adiabatic perturbations from inflation.
However, GWs from density perturbations as an interpretation of the PTA data have been shown to be in tension with constraints from PBH overproduction [55,56]. In contrast, the mechanism presented here remains viable according to our current understanding. Additionally, asteroid-mass PBH DM from inflationary perturbations has been argued to give a GW signal peaked in the LISA frequency band [57], whereas ground-based interferometers (LVK, ET, CE) are best suited to indirectly probe PBH DM from DW collapse.
Appendix B

The thin-wall approximation provides additional insight into the DW network behaviour, especially in the annihilation phase, where the remains of the network are a collection of separate FV pockets. This is a good approximation when the DW worldsheet curvature radius, R, is large compared to its width, which is set by the inverse scalar mass. (This is satisfied in most of the network during scaling, except for a small fraction of the total volume where collisions, interconnections or pinch-off events occur.) It is possible to include the gravitational effect from the DWs themselves (see e.g. [36,37,60,61]) but we shall ignore this here.
In this approximation, the evolution of a FV pocket follows from the 'equation of motion' for the DW at its boundary, which reduces to the Nambu-Goto (NG) equation σK = ∆V. Here K is the (trace of the) extrinsic curvature of the DW worldsheet, and we have included a pressure term given by the bias ∆V, see e.g. [62]. It is not easy to solve this equation in general. However, the equation simplifies for walls with higher symmetry. We are interested in the motion of a DW network where inevitably the DW shapes are random. However, once the network annihilation starts, the DW motion, in a way, simplifies. Indeed, the DWs are simply the boundaries of FV pockets, which shrink quite quickly and quite independently of their initial shape.
This can be illustrated by comparing three extreme cases that can be easily computed, where the shape of the DW is: i) spherical, ii) cylindrical and iii) planar. The NG equation then reduces to an ordinary differential equation for the comoving radius R(η), with n = 2, 1 for a spherical or cylindrical DW respectively; the case n = 0 corresponds to a planar wall placed at, say, z = R(η). Primes denote derivatives with respect to conformal time. It is straightforward to integrate this equation and thus follow the evolution of a structure of a certain initial comoving radius R_0. Some representative examples are shown in Fig. 10. It is clear from the figure that the structures reach arbitrarily small size after a finite (conformal) time ∆η, that is, the time lapse until R approaches 0. Of course, eventually a small enough structure transfers its energy into scalar waves (which are not captured in the NG approximation). Note that if one prefers to define the collapse time as the time when R(η) reaches a small radius r_c, the result would be the same so long as r_c ≪ R_0.
The trajectories shown in Fig. 10 are readily understood: closed DWs shrink under the combined effect of the tension and of the pressure ∆V, reaching relativistic speeds quite quickly.
Interestingly, the collapse time ∆η depends mostly on the initial size.As shown in Fig. 13, for large enough initial FV pockets, the collapse time approaches C R 0 , with a constant C in the range 1.15 − 1.2, quite independently of the pocket shape.
Of course, in a network the DW shapes are not symmetric.However, Fig. 13 signals a quite clear time scale in the network decay: the first stage of annihilation, (that is still far from the FV pocket gas picture) takes about one Hubble time.
One expects a 'burst' of GWs from the collapsing FV regions as they shrink, because they are significantly nonspherical. This GW production time η_gw is then expected to be near the collapse time, because it is dominated by the most numerous pockets, with sizes of order η_∆V. In summary, the fact that Hubble-sized structures have to reach small sizes naturally leads to the expectation η_gw ∼ 2η_∆V.

It is also easy to keep track of the energy of a given FV pocket and how it evolves in time. Focusing on spherical symmetry for simplicity, the energy is

E(η) = ∆V R^3(η) a^3(η) + 4πσ R^2(η) a^2(η) γ(η),   (B3)

and because of the expansion it is not conserved. For super-Hubble pockets, E first grows (because both the FV region and the DW gain volume/area from the expansion). Only when they enter the Hubble radius does the energy stabilize.
We can keep track of the energy carried by the FV pocket that enters the Hubble radius at each time by evaluating eq. (B3). The γ factor grows in time, but not enough to compete with the volume contribution. Thus, after about one Hubble time, E scales like the physical Hubble volume, ∼ η^6. Solving numerically the NG equation (B1), we find that the expression E/E_0 ≃ (τ^6 + τ^5 + 2τ^4)/4, with τ = η/η_∆V and E_0 the initial energy of the pocket, provides a good fit (within 5%) of the actual time dependence (the τ^5 term can be identified as the DW gamma factor). This leads to the improved relation (B4) replacing eq. (16), where the factor of 1.5 comes from rewriting α_loc(η_gw) = ∆V/(3H(η_gw)²M_p²) in terms of the fraction of the energy density in the DW network at the same time, α_dw = ρ_dw/ρ_c, with ρ_dw(η_gw) = 2.6 σH(η_gw) and τ_gw ≃ 2. This is smaller than eq. (16) for τ > τ_gw, so eq. (16) provides a conservative bound.
FIG. 1: Timeline of DW network evolution. After spending some time in the scaling regime, the bias (vacuum energy difference ∆V) becomes effective at (conformal) time η_∆V. GW and PBH production occur somewhat later, at η_gw and η_pbh respectively. The decay of the false vacuum volume fraction, F_fv, is parameterized by a decay time η_ann, slightly larger than η_∆V, and an exponent p. Both our numerical simulations and analytical model point to p = 3.
FIG. 2: Evolution of the average of the field in the simulation box, φ, plus (minus) its standard deviation σ_ϕ, for several sizes of the bias ∆V, as a function of conformal time.
FIG. 3 and FIG. 4: (Log of) Volume fraction in the simulation box in the false vacuum, as a function of conformal time. The blue curves show the numerical results from our simulations. We fitted such results for small η, before the curves cross the black dot-dashed curve, which shows the inverse of the number of Hubble volumes in the simulation box. The orange curves are the resulting fits.
FIG. 5: Area fraction, eq. (3), as a function of conformal time. Note that at early times, η ≲ 10, there is a transient which does not carry physically relevant information.
FIG. 8: Energy density fraction in GWs compared to the quadrupole-formula estimate from the total energy density available in the scalar field. Note that the initial Ω_gw is very small because we start the computation of GWs only at η = 35 for computational reasons.
FIG. 9: Energy density fraction in GWs compared to the quadrupole-formula estimate from the total energy density available in the scalar field.
FIG. 12: Constraints on the PBH abundance from the collapse of the DW network for different values of the collapse threshold α_c (blue lines), in terms of the fraction of the energy density in the DW network at the time of GW emission (α_gw) and the background temperature at that point (T_gw). The region above the solid blue curve is conservatively excluded. The red ellipse shows the region of parameters where the DW network interpretation explains the recent GW signals detected at PTA [11] at one and two standard deviations. The shaded regions are the constraints on the GW spectrum from LIGO-Virgo O3 data (LV) and the prospects for detection with LIGO-Virgo-KAGRA design sensitivity (LVK), Einstein Telescope, Cosmic Explorer, LISA and SKA, using the power-law integrated sensitivity curves from [53]. Finally, the green band corresponds to the asteroid mass range (10^{-16}-10^{-11} M_⊙) [58,59], and the black dashed lines correspond to PBHs between 1 and 100 solar masses, for α_c = 1. For the other values of α_c the band moves slightly to the left.
FIG. 13: Ratio of the initial radius R_0 and the (conformal) time ∆η needed to reach R(η) = 0, for super-Hubble FV pockets. It depends mildly on the shape and asymptotes to a constant value (dotted black curve).
TABLE I: Mean and standard deviation of the parameters of the fitting function eq. (6), over several realizations of our lattice simulations (about 3-4 per row), with different random seeds.
"Physics"
] |
Evolutionary relationships, hybridization and diversification under domestication of the locoto chile (Capsicum pubescens) and its wild relatives
Patterns of genetic variation in crops are the result of multiple processes that have occurred during their domestication and improvement, and are influenced by their wild progenitors, which often remain understudied. The locoto chile, Capsicum pubescens, is a crop grown mainly in the mid- and highlands of South-Central America. This species is not known from the wild and exists only as a cultigen. The evolutionary affinities and exact origin of C. pubescens have still not been elucidated, with hypotheses suggesting its genetic relatedness and origin linked to two wild putative ancestral Capsicum species from the Central Andes, C. eximium and C. cardenasii. In the current study, RAD-sequencing was applied to obtain genome-wide data for 48 individuals of C. pubescens and its wild allies representing different geographical areas. Bayesian, Maximum Likelihood and coalescent-based analytical approaches were used to reconstruct population genetic patterns and phylogenetic relationships of the studied species. The results revealed that C. pubescens forms a well-defined monotypic lineage closely related to wild C. cardenasii and C. eximium, and also to C. eshbaughii. The primary lineages associated with the diversification under domestication of C. pubescens were also identified. Although a direct ancestor-descendant relationship could not be inferred within this group of taxa, hybridization events were detected between C. pubescens and both C. cardenasii and C. eximium. Therefore, although a hybrid origin of C. pubescens could not be inferred, gene flow involving its wild siblings was shown to be an important factor contributing to its contemporary genetic diversity. The data allowed for the inference of the center of origin of C. pubescens in the central-western Bolivian highlands and for a better understanding of the dynamics of its gene pool. The results of this study are essential for germplasm conservation and breeding purposes, and provide an excellent basis for further research on the locoto chile and its wild relatives.
Introduction
Patterns of genetic variation in cultivated plants result from multiple evolutionary processes.To understand these patterns and processes, phylogenetic reconstructions within the context of the putative ancestral wild relatives are essential.Such species-wide assessments often allow for identification of the ancestral lineages that gave rise to early domesticates and modern cultivars, and may provide insights into the factors contributing to the observed distribution of genetic diversity across gene pools (Glaszmann et al., 2010;Meyer et al., 2012).This also applies to domesticated species with unknown wild forms, as the related wild species may represent an extended gene pool (Brozynska et al., 2016;Bohra et al., 2022).Hybridization and introgression between cultivated forms and wild relatives contribute significantly to the formation and evolution of domesticated species.Ongoing natural and artificial introgression is a major factor shaping the current genetic diversity of modern crops (Jarvis and Hodgkin, 1999;Ellstrand et al., 2013).Therefore, better understanding of the origin of crops in the context of their wild relatives is crucial for developing strategies for the conservation and sustainable use of their diversity (Mastretta-Yanes et al., 2018;Pironon et al., 2020), especially considering crop genetic erosion due to global climate change and biodiversity loss (Khoury et al., 2022).
Capsicum pubescens Ruiz & Pav.(Solanaceae), commonly known as 'locoto' or 'rocoto', is a chile pepper species with a major cultural and economic importance in the Central Andes (i.e., Bolivia, Peru and Ecuador).The species is cultivated mainly in mid-and highlands from north-western Argentina to central Mexico (Heiser and Smith, 1953;Barboza et al., 2022).It is morphologically distinctive with conspicuous pubescence, primarily purple flowers and fruits with large blackish-brown seeds.The fruits are hot fleshy berries of variable shapes, sizes, and colors (Barboza et al., 2022).Capsicum pubescens is the least studied and exploited among domesticated chile species, most likely because of its specific environmental requirements and the high fruit fleshiness, which makes them prone to fast rotting (Eshbaugh, 1993).Its cultivation outside the Americas is infrequent, although it has been introduced and grown as far away as Indonesia (Yamamoto et al., 2013).In recent years, locoto chile market demands have grown due to increased gastronomic and phytochemical interest (Meckelmann et al., 2015;Leyva-Ovalle et al., 2018).
In contrast to the other four domesticated Capsicum species (i.e., C. annuum L., C. baccatum L., C. chinense Jacq., C. frutescens L.), C. pubescens is known only as a cultigen and no ancestral wild population has been found so far (Barboza et al., 2022).Its domestication has been hypothesized to have taken place around 6,000 years ago in Bolivia and/or Peru (DeWitt and Bosland, 2009), followed by a human-assisted range expansion to other areas of the continent, including Central America and Mexico (Heiser and Smith, 1953;Barboza et al., 2022).Despite various attempts to unravel its evolutionary history (e.g., Eshbaugh, 1979;Moscone et al., 2007;Perry et al., 2007;Carrizo Garcıá et al., 2016), the origin of C. pubescens remains unknown and its evolutionary affinities are controversial.The locoto chile was traditionally placed in the so-called purple-flowered group of Capsicum together with two wild chile species, C. cardenasii Heiser & P.G.Sm.and C. eximium Hunz.(including C. eshbaughii Barboza, formerly C. eximium var.tomentosum Eshbaugh & P.G.Sm.), hypothesis supported by morphological, chemical and cross-breeding data (Heiser and Smith, 1958;Ballard et al., 1970;Eshbaugh and Smith, 1971;Eshbaugh, 1979).These wild chile species, popularly known as 'ulupicas', are used locally as hot spices, either cultivated on small farms or harvested directly from the wild (van Zonneveld et al., 2015;Barboza et al., 2022).Capsicum eximium and C. cardenasii are native to the Central Andes, from center-western Bolivia to northwestern Argentina (Barboza et al., 2022).Geographically, part of the cultivation range of C. pubescens, and one of the areas proposed as its hypothetical center of origin, overlap with the distribution ranges of these two wild species.Based on the combined evidence, C. eximium and C. cardenasii have long been suggested as the putative wild progenitors of C. pubescens (Pickersgill, 1971;Eshbaugh, 1975, Eshbaugh, 1979), an early hypothesis that was generally accepted but never rigorously tested.
Previous phylogenetic analyses using a wide array of molecular markers have attempted to resolve the relationships of all five domesticated chile species and to identify their closest wild relatives (Carrizo García et al., 2022, and references therein). In comparison to the other four cultivated chile species, the phylogenetic position and affinities of C. pubescens have not been fully resolved, with the evidence suggesting that C. pubescens was either closely related/sister to C. eximium and C. cardenasii (McLeod et al., 1979, 1983; Choong, 1998; Ince et al., 2010; Ibiza et al., 2012) or, more frequently, recovered as an isolated lineage (Walsh and Hoot, 2001; Ryzhova and Kochieva, 2004; Carrizo García et al., 2016, 2020; Silvar and García-González, 2016; Barboza et al., 2019, 2020). Thus, the informal purple-flowered group s.l. has repeatedly been inferred as paraphyletic. The phylogenetic affinities of C. pubescens were most recently addressed in a phylogenetic study of relationships within the genus Capsicum based on genome-wide SNP data of 1-3 accessions of each of 36 of its 43 currently recognized species (Carrizo García et al., 2022). This analysis placed Capsicum pubescens as a sister species to a small clade encompassing C. eximium, C. eshbaughii and C. cardenasii, with all four species forming the so-called clade Pubescens (Carrizo García et al., 2022). This evidence allowed the closest relatives of C. pubescens to be narrowed down, but the sampling of genetic diversity of the clade Pubescens was insufficient to conclusively infer the nature of the relationships among these four species.
Although ancestor-descendant relationships have been proposed within the clade Pubescens, they have never been resolved, whereas the occurrence of natural hybrids has been repeatedly reported in the group (Eshbaugh, 1975(Eshbaugh, , 1979;;Onus and Pickersgill, 2004;Scaldaferro, 2019;Barboza et al., 2022).The impact of recent hybridization and introgression, versus noncontemporary processes like ancient introgression or incomplete lineage sorting (deep coalescence; Twyford and Ennos, 2012), are therefore still unclear.Similarly, the extent (if any) of genetic contribution from wild species C. cardenasii, C. eximium and C. eshbaughii to the genetic variation of cultivated C. pubescens, as well as the extent of the C. pubescens gene pool, remains largely unknown.Previous phylogenetic studies included a low number of samples per species, thus limiting the power of the phylogenetic inferences.Analyses based on a broader sampling, addressing both genetic variation and geographic distribution of the target species, are thus necessary to shed light on the origin and evolutionary affinities of C. pubescens.Over the past decade, the availability of extensive genomic data and development of computational analytical approaches has allowed for the detection of independent lineages with high level of objectivity and statistical rigor.Restriction site-associated DNA sequencing (RADseq; Baird et al., 2008) is a reduced representation sequencing approach that covers a subset of noncoding and coding regions across the entire genome.Frequently used for genomic diversity scans of closely related groups within species or genera (Davey and Blaxter, 2011;Andrews et al., 2016), RADseq has been valuable for delimiting species, reconstructing phylogenies, and inferring evolutionary histories of various plant and animal groups, including the genus Capsicum (Carrizo Garcıá et al., 2022).In this study, comprehensive population genetic and phylogenetic analyses of RADseq genomewide data of multiple accessions of C. pubescens and its sister species representing genetic and geographical variation of the group were performed to: (1) test existing hypotheses on the phylogenetic relationships of C. pubescens and its closest wild relatives, (2) identify patterns of genetic relatedness and structure across this species group, and (3) gain novel insights into the origin and evolutionary history of C. pubescens.
Sampling
A total of 48 samples of C. pubescens, C. cardenasii, C. eximium and C. eshbaughii (Figure 1), the four species representing the clade Pubescens [Carrizo García et al. (2022); Supplementary Table 1], were included in the analyses. One individual of C. tovarii Eshbaugh, P.G.Sm. & Nickrent was used as an outgroup. The samples were collected across the known distribution ranges of the species (Figure 1), except for the cultivated C. pubescens, for which 26 accessions were sampled representing the main genetic clusters described in Palombo and Carrizo García (2022). This approach aimed to cover the whole cultivation range and the genetic variation present in the species. Three individuals identified as artificial hybrids (Barboza et al., 2022) were also included in the analyses (Supplementary Table 1). Plant material was collected either from the wild or from plants grown at the Instituto Multidisciplinario de Biología Vegetal (IMBIV, Córdoba, Argentina) and the Botanical Garden of the University of Vienna (HBV, Vienna, Austria).
RADseq loci from all samples were filtered and assembled de novo in ipyrad v.0.9.87 (Eaton and Overcast, 2020) using default parameters for diploids, except for parameters number 14 (clust_threshold) and number 21 (min_sample_locus). The threshold for clustering reads within and between individuals was set to 0.88 based on previous results (Carrizo García et al., 2022; Palombo and Carrizo García, 2022). Assemblies with different minimum numbers of samples per locus (min_sample_locus 12, 24 and 37) were generated to assess the effects of the number of loci and missing data on the genomic analyses, i.e., allowing up to 75% (designated as min25), 50% (min50) and 25% (min75) missing data, respectively. Given that some analyses are more sensitive to missing data (e.g., population structure analysis, SNAPP; see below), additional filtering was performed, including only biallelic sites with a minor allele frequency above 0.05 using VCFtools v.0.1.16 (Danecek et al., 2011) and pruning to one SNP per locus with the vcf_parser.py script (https://github.com/CoBiG2/RAD_Tools/blob/master/vcf_parser.py). These assemblies were referred to as filtered.
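As an illustration of this filtering step, the following minimal pure-Python sketch keeps biallelic SNPs with a minor allele frequency above 0.05 and one SNP per RAD locus. It assumes an ipyrad-style VCF in which the locus identifier is stored in the CHROM column; it is meant only to clarify the logic of the step and is not a substitute for VCFtools or the vcf_parser.py script actually used.

```python
# Minimal sketch of the SNP filtering step: biallelic sites, MAF > 0.05 and one
# SNP per locus (locus identifier assumed to be the CHROM column of a de novo
# ipyrad VCF). Illustrative only.
import gzip

def filter_vcf(path_in, path_out, min_maf=0.05):
    opener = gzip.open if path_in.endswith(".gz") else open
    seen_loci = set()
    with opener(path_in, "rt") as fin, open(path_out, "w") as fout:
        for line in fin:
            if line.startswith("#"):
                fout.write(line)
                continue
            fields = line.rstrip("\n").split("\t")
            chrom, ref, alt = fields[0], fields[3], fields[4]
            if len(ref) != 1 or len(alt) != 1:
                continue                      # keep biallelic SNPs only
            alleles = []
            for sample in fields[9:]:
                gt = sample.split(":")[0].replace("|", "/")
                alleles += [a for a in gt.split("/") if a in ("0", "1")]
            if not alleles:
                continue
            maf = min(alleles.count("0"), alleles.count("1")) / len(alleles)
            if maf <= min_maf or chrom in seen_loci:
                continue                      # MAF filter and one SNP per locus
            seen_loci.add(chrom)
            fout.write(line)

# Example call (hypothetical file names):
# filter_vcf("min75.vcf", "min75_filtered.vcf")
```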
Genetic structure analyses
Genetic structure was first analyzed with fineRADstructure v.0.3.3 (Malinsky et al., 2018) using the *.alleles file from the ipyrad output (all SNPs per locus). The allele data were converted using the finerad_input.py script (http://github.com/edgardomortiz/fineRADstructure-tools). fineRADstructure was run following the software pipeline with the default settings, and the associated R script was used to plot the heatmaps in R v.4.0.3 (R Core Team, 2022). The min50 dataset (minimum 24 samples per locus, 37,970 loci, 50% missing data) was selected for further analyses because it yielded the best resolution.
The genetic structure was also inferred by applying the sNMF function within the R package LEA v.2.0 (Frichot and Francois, 2015) to better visualize genomic variation and admixture among individuals.This function calculates ancestry proportions of K ancestral populations with least-square estimates.Because these analyses are less tolerant to missing data and rare alleles, the min75_filtered dataset (minimal 37 samples per locus, 12,414 unlinked SNPs, 25% missing data) was selected to execute the analysis.sNMF was run for K= 1-10, with 100 repetitions, regularization parameter set to 250 and 25% of the genotypes masked to compute the cross-entropy criterion.Bar plots showing ancestry coefficients were obtained using the software R script.
Phylogenetic and species tree inferences
A maximum likelihood tree of all individuals was inferred using IQ-TREE 1.6.12 (Nguyen et al., 2015) on the program web server (Trifinopoulos et al., 2016). The analyses were run with and without the seven putative hybrid individuals detected in the population analyses. The min50 dataset (minimum 24 samples per locus, 37,970 loci, 50% missing data) was selected for the analysis because it balanced the number of loci and missing data. Variable sites were extracted from the *.usnps.phy ipyrad files, producing output alignments that could then be used with the ascertainment bias correction (ASC) model. The best nucleotide substitution model was chosen a priori using ModelFinder (Kalyaanamoorthy et al., 2017). Node supports were calculated with 1,000 iterations of UltraFast-Bootstrap (UFBoot; Hoang et al., 2018).
A species tree under the multi-species coalescent model was inferred using the SVDquartets algorithm (Chifman and Kubatko, 2014) implemented in PAUP* v4.0a (Swofford, 2003).The min75_filtered dataset (9,243 unlinked biallelic SNPs) was used to run the analyses both with and without the seven putative hybrid samples identified in the population analysis.All specimens were treated as independent samples and the "distribute" option for heterozygous sites was applied.All possible quartets were analyzed using the QFM algorithm and node support was assessed by performing 1,000 bootstrap replicates (BS).The IQ-TREE and SVDquartets consensus trees were calculated and node support values were annotated and visualized in FigTree v1.4.3 (Rambaut, 2012).UFBoot ≥ 95% and BS ≥ 80% were considered as strong support.All inferred trees were rooted with C. tovarii as outgroup.
The Bayesian coalescent-based approach implemented in SNAPP (Bryant et al., 2012) was applied to infer a species/population tree within the clade Pubescens. A subset of the taxonomic sampling that maximized the number of available SNPs was selected because SNAPP does not incorporate missing data. To that end, the min75_filtered dataset was pruned to 19 individuals that represented the main lineages inferred by the clustering and phylogenetic analyses, mostly corresponding to species, except for C. pubescens, for which three main groups/lineages were resolved and treated independently in the population tree inference. Only sites with no missing data were allowed using VCFtools (1,059 unlinked SNPs in total). The vcf2phylip.py script (https://github.com/edgardomortiz/vcf2phylip) was then used to generate the nexus input file for SNAPP. BEAUti 2 (Bouckaert et al., 2019) was used to create .xml files in which the samples were clustered into species/populations. Two chains of 2 million generations each, logging every 1,000 generations with the first 10% discarded as burn-in, were run in BEAST 2.6 (Bouckaert et al., 2019) on the CIPRES platform (Miller et al., 2010). The mutation rates (u and v) were sampled from within the MCMC. Effective chain convergence and effective sample sizes across parameters (ESS ≥ 200) were assessed in Tracer v1.7.1 (Rambaut et al., 2018). The tree files (10% burn-in) were combined and the resulting trees were annotated with LogCombiner v2.6.3 and TreeAnnotator v2.6.3 (Drummond and Rambaut, 2007), respectively. The posterior distribution of all the trees combined was visualized as a cloudogram using DensiTree v.2.2.3 (Bouckaert, 2010), and the maximum clade credibility tree was visualized and annotated in FigTree. Posterior probabilities (PP) ≥ 0.95 were considered as strong support.
Hybridization detection
The extent of hybridization among taxa was first assessed using TreeMix (Pickrell and Pritchard, 2012), a method for inferring the patterns of population splits and reticulation in the history of a set of populations based on genome-wide allele frequency data. Samples were grouped into populations, representing the genetic clusters/clades recovered in the structure and phylogenetic analyses, by generating *.treemix input files with the populations program in Stacks v.2.41 (Catchen et al., 2013) using the min75_filtered dataset. The analysis was run in TreeMix v.1.13 with C. tovarii as the outgroup. The number of migration events (m) was sequentially increased, and the change in likelihood with each added event was examined. The resulting trees were plotted in R. The TreeMix subprogram 'threepop' was used to calculate F3 statistics between populations (Reich et al., 2009) in order to test whether admixture was supported.
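For reference, the quantity computed by 'threepop' is the three-population statistic f3(C; A, B) = E[(c − a)(c − b)], averaged over SNP allele frequencies, which becomes significantly negative when population C is admixed between A and B. The sketch below computes only the point estimate from hypothetical frequencies; the block-jackknife Z-score reported by TreeMix is omitted.

```python
# Minimal sketch of the f3 three-population test used to corroborate admixture.
import numpy as np

def f3(freq_c, freq_a, freq_b):
    return np.mean((freq_c - freq_a) * (freq_c - freq_b))

# Hypothetical allele frequencies at five SNPs for two parental groups (a, b)
# and a candidate admixed group (c):
a = np.array([0.10, 0.80, 0.30, 0.95, 0.20])
b = np.array([0.90, 0.10, 0.70, 0.05, 0.85])
c = 0.5 * (a + b)          # a perfectly intermediate (admixed) population
print(f3(c, a, b))         # negative value, consistent with admixture
```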
To further investigate the incidence of hybridization, the potential hybrids and parental taxa were tested with HyDe v.0.4.3 (Blischak et al., 2018), an approach similar to the ABBA-BABA test that uses phylogenetic invariants arising under a coalescent model with hybridization to detect and assign the probability of hybridization both at the species/population and individual levels. HyDe tests all possible combinations of input taxa as putative hybrids and parents (P1 and P2), and the parameter g estimates the genomic contributions of the parents to the hybrid. For this analysis, the min75 dataset with C. tovarii as outgroup was set as input data, using the *.usnps.phy file from ipyrad. The population map defined all parental individuals as species (C. cardenasii, C. eximium and C. pubescens), and each putative hybrid individual was assigned to its own group (i.e., 'Hybrid'). A file of 21 triplets was created to test the seven putative hybrid individuals as hybrids of the parental species, as suggested by the results of the previous analyses.
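Since HyDe is conceptually related to the ABBA-BABA framework, the classic D-statistic offers a useful point of reference; the minimal sketch below computes it for one quartet from 0/1-coded sites. It only illustrates the related test: HyDe itself uses a larger set of phylogenetic invariants and additionally estimates the parental contribution g, which is not reproduced here.

```python
# Minimal sketch of the ABBA-BABA D-statistic for a quartet (P1, P2, P3, Out),
# with sites coded 0/1. D > 0 indicates an excess of ABBA sites (gene flow
# between P2 and P3 under the standard interpretation).
import numpy as np

def d_statistic(p1, p2, p3, out):
    abba = np.sum((p1 == out) & (p2 == p3) & (p2 != out))
    baba = np.sum((p2 == out) & (p1 == p3) & (p1 != out))
    if abba + baba == 0:
        return 0.0
    return (abba - baba) / (abba + baba)

rng = np.random.default_rng(2)
sites = rng.integers(0, 2, size=(4, 1000))   # hypothetical biallelic sites
print(d_statistic(*sites))
```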
Sequencing and SNP loci assembly
A total of 90,534,731 RADseq reads were generated for the 49 analyzed samples. An average of 1,705,693 (± 976,049 SD) high-quality reads per individual were obtained after demultiplexing and filtering (Supplementary Table 1). The ipyrad pipeline was run separately for the datasets used in downstream analyses: 70,304 loci/316,506 SNPs were retained in the min25 dataset (Supplementary Data Sheet S1), 37,970 loci/184,444 SNPs in the min50 dataset (Supplementary Data Sheet S2), and 13,226 loci/63,355 SNPs in the min75 dataset (Supplementary Data Sheet S3) (75%, 50%, and 25% missing data, respectively). The results obtained from the different assemblies were largely consistent; thus, the min50 and min75 datasets were selected for further analysis.
Genetic structure
Two to three main supported clusters were recovered in the fineRADstructure analysis, indicating a clear separation of C. pubescens from the three wild species (C. cardenasii, C. eximium, and C. eshbaughii) and a weak structuring within C. pubescens (Figure 2A). The cluster formed by the wild species was genetically more heterogeneous and divided into two subclusters: C. cardenasii, and C. eximium together with C. eshbaughii. Capsicum eshbaughii accessions were intermingled with C. eximium accessions. The individuals labelled as artificial hybrids (ex_136, hib_212, hib_213) were recovered in an intermediate position between the two main clusters of species, together with four samples previously labelled as pure C. eximium (ex_95, ex_138, ex_140) and C. cardenasii (ca_208). These seven samples showed intermediate co-ancestry values in the heatmap (Figure 2A) compared to individuals of the two main clusters, indicating reticulation events. The sNMF structure analysis also revealed the presence of two to three main clusters (Figure 2B; Supplementary Figure 1), indicative of the genetic differentiation of the species, and confirmed the admixed composition of the seven samples detected as hybrids in the fineRADstructure analysis. At K = 2, the groupings corresponded to C. cardenasii + C. eximium + C. eshbaughii, and C. pubescens. The most likely model, i.e., K = 3, supported three species-structured clusters, with the seven hybrids partially assigned to each group. The first cluster corresponded to C. cardenasii, the second was composed of C. eximium + C. eshbaughii, and the third cluster comprised the C. pubescens accessions. At K = 4, two individuals previously labelled as C. pubescens × C. eximium hybrids (hib_212, hib_213) were recognized as a separate group. Results of the K = 5-8 sub-optimal models were also informative, showing a sub-structure within the C. pubescens (K = 5, 7, 8) and C. eximium + C. eshbaughii (K = 6) clusters. At K = 5, two groups of C. pubescens samples were recovered, corresponding to individuals from Bolivia and Argentina, and individuals from Peru to Mexico, respectively. Similarly, at K = 6, a sub-structure was observed within C. eximium, in which the Argentinian samples and the Bolivian samples (plus C. eshbaughii) formed two separate groups. For K = 7-8, two new subgroups were recognized within the C. pubescens accessions, corresponding to samples collected/marketed in central-western Bolivia (in the surroundings of La Paz city and the town of Villa Serrano). In these models, three samples marketed in La Paz (pu_197, pu_243) and Cusco (pu_255) showed high levels of genetic admixture. The other hybrid samples (ex_95, ex_136, ex_140, ca_208) showed high levels of mixed ancestry.

FIGURE 2: Genetic structure of C. pubescens and its sister species, with sample names as in Supplementary Table 1.
Phylogenetic relationships and species/ population tree estimation
Phylogenetic reconstructions using both the IQ-TREE and SVDquartets approaches were highly congruent (Figures 3A, B).Two major clades were resolved with high support: (1) clade of the wild species C. cardenasii, C. eximium and C. eshbaughii, and (2) clade of the domesticated C. pubescens.The three artificial hybrid individuals (ex_136, hib_212, hib_213) as well as the four samples identified as putative hybrids in the genetic structure analyses (ex_95, ex_138, ex_140, ca_208) were recovered in intermediate positions between the main clades, except for the sample ex_138 in the IQ-TREE outcome (Figure 3A).Excluding artificial and putative hybrids, C. pubescens was resolved as sister to a clade consisting of two well-defined subclades of the three other species (Supplementary Figure 2).One subclade was represented by C. cardenasii, and the other included C. eximium and C. eshbaughii.As the two samples of C. eshbaughii were nested within C. eximium, these two species were treated as a single group in the SNAPP and TreeMix analyses (see below).Internal branch supports were mostly moderate-strong (UFBoot= 93-100) throughout the C. eximium-C.eshbaughii assemblage, and moderate-weak (BS= 32-74) in some branches in the SVDquartets analysis, linked to alternative topologies (Figures 3A, B; Supplementary Figure 2).In the SVDquartets outcome, the C. eximium samples were consistently recovered into two distinct groups, one representing the accessions from Argentina, and the other the accessions from Bolivia plus C. eshbaughii, with internal relationships weakly supported.In the IQ-TREE topology, the C. eximium accessions from Bolivia were placed in three different groups, with the C. eshbaughii samples sister to the same C. eximium accessions as in SVDquartets.All 26 C. pubescens accessions formed a well-supported monophyletic group with internal branching structure that was moderately resolved and mostly congruent across both phylogenetic reconstructions (Figures 3A, B; Supplementary Figure 2).The first splitting branches within this C. pubescens clade corresponded to samples collected/marketed in central-western Bolivia (in the surroundings of La Paz city and the town of Villa Serrano).The remaining accessions were recovered as two well-supported main clades, one comprising the Argentinian and other central-Bolivian samples (from Cochabamba and Santa Cruz de la Sierra cities to the south), while the other comprised Peruvian, Ecuadorian, and Central American accessions.The internal branch supports of these clades varied from weak to moderate-strong in the IQ-TREE and SVDquartets trees (UFBoot= 54-100; BS= 49-100).Only minor incongruences in the relationships between some of the accessions (i.e., pu_197, pu_243, pu_255) were detected between both approaches.
Based on the outcome of the sNMF, IQ-TREE and SVDquartets analyses, 19 individuals were selected and grouped by species/populations to perform the SNAPP analyses (Figure 3C). The taxon partitioning followed the results of the genetic structure and phylogenetic analyses. For species tree inference, C. eximium and C. eshbaughii were merged, and C. pubescens was treated as a single partition (Supplementary Figure 3). The C. pubescens samples were then split into populations corresponding to the three previously inferred main lineages/groups (identified as C. pubescens 1, 2 and 3; Figure 3C). Finally, each individual was considered as a single entry to assess the consistency of the relationships found (Supplementary Figure 3). Species/population trees inferred with SNAPP were highly congruent and showed the same well-supported relationships (PP = 1) as those inferred from the concatenated RADseq-SNPs (Figures 3A, B). Relationships congruent with the IQ-TREE and SVDquartets trees were also recovered when individuals were considered as single entries, with the cloudogram and the superimposed consensus tree graphically depicting some level of uncertainty in the nodes within C. pubescens (PP 0.89-0.25; Supplementary Figure 3).
Hybridization detection
The TreeMix analysis inferred a branching topology consistent with the phylogenetic and species/population tree reconstructions (Supplementary Figure 4). The putative hybrids were consistently recovered as sister to C. cardenasii-C. eximium and C. pubescens, with one migration event from C. pubescens towards the hybrids' group. Removing the hybrids did not affect the branching pattern. When the hybrids were considered as two separate groups, following the topology of the SVDquartets outcome, one group was found to be sister to C. cardenasii-C. eximium and the other group to C. pubescens. The tree models with no migration events (m = 0) explained 94-96% of the variation in relatedness between the populations; however, the addition of two migration events (m = 2) explained 99.9% of the variation. Admixture events were supported by F3 statistics (Supplementary Table 2).
The HyDe analysis (Table 1) inferred a hybrid origin for six of the seven putative hybrid accessions identified in the other analyses. Significant hybridization between the 'parental' species (the pairs C. cardenasii-C. pubescens and C. eximium-C. pubescens) was detected at the species level (g ≈ 0.5; Table 1A). Six of the seven putative hybrid individuals (all except for sample ex_138) showed significant levels of hybridization, with g-values ranging from 0.26 to 0.61, indicating different levels of hybridization across the individuals tested (Table 1B). Most of the admixture occurred at intermediate levels (i.e., g close to 0.4-0.6), indicating recent hybridization events (samples ca_208, ex_95, hib_212 and hib_213). Significant admixture at more skewed levels (i.e., g close to either 0.1 or 0.9) was detected for samples ex_136 and ex_140, suggesting introgression or older hybridization events.
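To make the reading of the HyDe output concrete, the sketch below classifies a putative hybrid from its estimated g value, following the interpretation given above (intermediate g suggests recent hybridization, strongly skewed g suggests introgression or an older event). This is only an illustrative reading of the statistic; the function name, the significance cut-off and the example values are hypothetical and are not taken from Table 1.

```python
def classify_hyde_gamma(gamma, z_score, z_crit=3.0):
    """Rough interpretation of one HyDe triplet test.

    gamma   : estimated fraction of the hybrid genome inherited from parent P1
              (1 - gamma is inherited from P2).
    z_score : HyDe test statistic; only significant tests are interpreted.
    Thresholds mirror the ranges discussed in the text and are illustrative.
    """
    if z_score < z_crit:
        return "no significant hybridization signal"
    if 0.4 <= gamma <= 0.6:
        return "balanced admixture: consistent with a recent hybridization event"
    if gamma <= 0.15 or gamma >= 0.85:
        return "skewed admixture: consistent with introgression or an older event"
    return "intermediate admixture"

# Hypothetical example values, not the published Table 1 numbers
for sample, (g, z) in {"hib_212": (0.52, 8.1), "ex_136": (0.12, 5.4)}.items():
    print(sample, "->", classify_hyde_gamma(g, z))
```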
Discussion
This study tested existing hypotheses and proposes new ones on the evolution of the domesticated chile Capsicum pubescens and its closest wild allies, based on population-level and phylogenetic analyses of genome-wide SNP data. The analyses of the extended-sampling datasets using different analytical approaches allowed us to (1) evaluate existing hypotheses on the genetic relationships between these species, (2) identify the main lineages associated with diversification under domestication within C. pubescens, and (3) detect hybridization events within the clade Pubescens. All these data allowed us to gain a better understanding of the extent of the primary gene pool of C. pubescens. This study therefore provides new insights into the evolutionary history of C. pubescens and its human-assisted geographic dispersal in the Americas.
The evolutionary affinities of domesticated Capsicum species have previously been studied using molecular data (cf. Liu et al., 2023). Such approaches allowed for tracing the origin of domestication and the subsequent differentiation of most domesticated Capsicum species (e.g., Kraft et al., 2014; Scaldaferro et al., 2018; Taitano et al., 2019; Tripodi et al., 2021; Liu et al., 2023). Capsicum pubescens is a chile pepper widely cultivated in South America for which, however, no wild ancestor is known. Although the most comprehensive and complete phylogenetic study of the genus Capsicum has recently been carried out (Carrizo García et al., 2022), the affinities between C. pubescens and its closest wild relatives or putative ancestors, i.e., C. cardenasii, C. eximium and C. eshbaughii, have remained ambiguously resolved over time due to the small number of samples of these species, leading to inconclusive inferences. The current study is the first to specifically address the evolutionary relationships of C. pubescens and its allies. Highly resolved and strongly supported phylogenetic relationships at the population level were inferred through the extensive use of genome-wide SNP data, encompassing a significantly larger number of markers than previous studies. This comprehensive dataset included multiple samples representing different geographical areas of distribution and a broad spectrum of the genetic diversity of the target species. The results supported C. pubescens as a distinct lineage, with C. cardenasii, C. eximium and C. eshbaughii as its closest wild relatives. However, none of these three species could be inferred as a direct ancestor of the locoto chile. The same patterns of phylogenetic relationships were recovered in all analyses, consistent with the delimitation of the four species within the clade Pubescens proposed recently (Carrizo García et al., 2022). The proposed Pubescens clade circumscription is also in agreement with the traditional informal placement of the four species, known as the purple-flowered group of chiles (Pickersgill, 1971; Eshbaugh, 1975).
RADseq data, originally developed for intraspecific phylogeographic studies (Baird et al., 2008; McCormack et al., 2013), allowed the analysis of a large number of phylogenetically informative loci/SNPs. The use of more loci and a higher number of closely related individuals enabled a better characterization of phylogeographic variation, particularly within C. eximium and C. pubescens, the species with the widest distribution ranges. Capsicum eximium is the most phenotypically variable 'ulupica' species, with a distribution range spanning contrasting ecogeographical areas, from the Yungas rainforest of Bolivia and northern Argentina to the dry valleys of central Bolivia, and a wide altitudinal gradient (ca. 1000-3000 m). The species exhibits a high level of phenotypic variation (e.g., corolla pigmentation), although it has not been examined exhaustively to date (Eshbaugh, 1982; Barboza et al., 2022; pers. obs.). The current results suggest that geographic factors, such as climate and topography, may have played a role in shaping the structure of genetic variation within the species, as Argentinian and Bolivian accessions were found to represent different genetic groups/lineages. Additionally, C. eshbaughii, an endangered species that can only be found in a very restricted area of south-central Bolivia (Barboza et al., 2022), was recovered within the (Bolivian) C. eximium clade. Initially described as a variety, i.e., C. eximium var. tomentosum (Eshbaugh and Smith, 1971), it was later recognized as a distinct species under the name C. eshbaughii (Barboza, 2011), but its species status has recently been questioned (Carrizo García et al., 2020). The current results do not support recognition of C. eshbaughii at the species level without rendering C. eximium paraphyletic. Thus, the taxonomic status and evolution of the C. eximium-C. eshbaughii assemblage need to be addressed with extended sampling of both taxa. Few studies have analyzed the intraspecific genetic variation of other wild Capsicum species across their geographical distributions to understand their evolutionary history and diversity (but see Scaldaferro et al., 2023), and this study demonstrates the benefits of extended sampling and genome-wide analyses.
Capsicum pubescens accessions were recovered in two main geographically structured groups representing southern (Argentina and Bolivia) and northern (Peru northwards to Mexico) populations. A substructuring was revealed within the Bolivian accessions, with a group from central-western Bolivia (La Paz surroundings). Phylogenetically, C. pubescens formed a single clade in which three primary lineages were recognized, distributed mostly in (1) central-western Bolivia, (2) central-southern Bolivia to Argentina, and (3) the northern part of the continent from Peru to Mexico. The central-western Bolivian sample set was found to have diverged earlier than the other two, but did not consistently form a monophyletic group. The diversification of this lineage within C. pubescens is consistent with the geographical patterns of genetic variation reported earlier (Palombo and Carrizo García, 2022), revisited here in an evolutionary framework that adds a temporal dimension. The central-western Bolivian accessions, characterized by unique genetic variation congruent with distinct plant morphology, including characters such as near-absence of pubescence, small mostly 5-merous flowers, and the smallest and fleshiest fruits (Supplementary Figure 5), hint at a minor/incomplete domestication syndrome or a de-domesticated phenotype (Palombo and Carrizo García, 2022). These plants, collected in situ from a home garden and disturbed sites, exhibited higher genetic diversity than other C. pubescens genetic groups (Palombo and Carrizo García, 2022), suggesting that they may more closely resemble the ancestral gene pool of the species. Accessions from the other two sister lineages (i.e., central-southern Bolivia to Argentina, and Peru northwards to Mexico) displayed typical characteristics of the C. pubescens cultigen (Supplementary Figure 5). The current results suggest that the locoto chile has diversified from central-western Bolivia towards the south and north of the continent, possibly due to human-assisted germplasm dispersal. Moreover, measures of genetic diversity revealed that the northern sample group exhibits the least diversity (Palombo and Carrizo García, 2022), suggesting the occurrence of a bottleneck or founder effect during the introduction of the locoto to the north of the continent. This is in agreement with previous reports of the species being introduced to Central America and Mexico in the 20th century, rather than representing historical cultivars (Heiser and Smith, 1953; Barboza et al., 2022). No research so far has explored the locoto chile domestication and dispersal process in depth; thus, the current data provide a good basis for such studies. Future analyses might profit from more detailed information about the cultivation and sale locations of the studied samples for a better understanding of intraspecific genetic dynamics.
The Central Andes (i.e., Bolivia, Peru and Ecuador) have been inferred as the ancestral area of origin of the entire clade Pubescens (Carrizo García et al., 2022). The lineage of C. pubescens was hypothesized to have diverged in the upper Pliocene, earlier than the clade formed by C. cardenasii, C. eximium and C. eshbaughii, which diversified from the mid-Pleistocene (Carrizo García et al., 2022). These inferences imply a long period of evolutionary divergence of the different lineages within the clade Pubescens. Despite the early origin of the C. pubescens lineage, the archaeological record indicates that its domestication, leading to the known extant form of the species, might have only taken place a few millennia ago [6,500 cal. BP (Perry et al., 2007; Chiou et al., 2014)]. The central Bolivian mid-highlands have been proposed as the hypothetical center of origin for C. pubescens, which is consistent with the greater morphological variation of the locoto chile in this region as well as with the presence of its wild sister species (Eshbaugh, 1979; McLeod et al., 1983; DeWitt and Bosland, 2009). Previous studies have also shown that plants with smaller fruits (a more ancestral character) are found in Bolivia, supporting the hypothesis that the Bolivian region harbors the plants most resembling the ancestral gene pool of C. pubescens (Eshbaugh, 1979; Palombo and Carrizo García, 2022). However, no archaeological evidence exists to support this hypothesis. It is possible that the wild ancestor of the extant C. pubescens has become extinct. There are a few examples of crop plant species for which no wild ancestral populations have been identified yet, such as the greater yam (D. alata L.; Chaïr et al., 2016). Capsicum pubescens cultivation is mainly restricted to particular environmental conditions, which led to the hypothesis that perhaps the only sites in which the wild forms could have grown have been occupied by humans and their cultigens (Rick, 1950). Subsequent competition and/or hybridization of wild ancestral forms with the "improved" domesticated forms might have led to the loss of the original genetic and morphological diversity among the wild forms, rendering their identification difficult or impossible (Rick, 1950). The earliest C. pubescens domesticates could also have been extracted from the wild prior to the domestication bottleneck, and their parental population(s) may have disappeared either as a result of early human activities or spontaneously over time. The current data suggest central-western Bolivia as a potential region in which to search for the origin of C. pubescens, specifically in the inter-Andean valleys from the south-east of La Paz to the north-west. Future work in this geographic area will allow for a better understanding of the variability of the cultigen and its geographical distribution, and will in turn allow for a better understanding of its origin and diversification.
The combined evidence strongly supports the hypothesis that none of the three closest allied species of C. pubescens can unequivocally be identified as its wild progenitor (Carrizo García et al., 2022). No direct ancestor-descendant relationships could be recovered within this group, but, since the genetic diversity of a species can exceed its taxonomic limits, the suggestion that the primary (extended) gene pool of C. pubescens includes the wild C. eximium and C. cardenasii (van Zonneveld et al., 2015) is reinforced. The existence of such a gene pool, also known as the "Pubescens complex", has received support from cross-breeding experiments between C. pubescens and the wild C. eximium and C. cardenasii (Eshbaugh, 1979; Tong and Bosland, 1999; Onus and Pickersgill, 2004) and is now statistically supported by the current results. Successful reciprocal crosses of C. eshbaughii with both C. eximium and C. cardenasii (Eshbaugh and Smith, 1971; pers. obs.) suggest that C. eshbaughii is also part of this gene pool. Moreover, the current study revealed the presence of natural hybrids between either C. pubescens and C. eximium or C. pubescens and C. cardenasii. In many traditional communities across South-Central America, wild Capsicum species are often found growing close to domesticated chiles, in home gardens or at the edges of cultivated fields, where they can readily hybridize with the cultigens (van Zonneveld et al., 2015; Pérez-Martínez et al., 2022). Disregarding the two hybrids previously described as experimental crosses between C. pubescens and C. eximium (Barboza et al., 2022), the remaining putative hybrids were inferred to represent hybridization events with a genetic contribution of the C. pubescens cluster from central-western Bolivia (surroundings of La Paz). Some of these hybrids were collected from or near home gardens (Barboza et al., 2022; unpublished notes), indicating that cross-breeding between cultigens and wild relatives does occur. Two hybrid samples were collected in the area of the dry valleys of Luribay (ca. 150 km south-east of La Paz, Bolivia), where C. cardenasii and C. eximium grow (Barboza et al., 2022), further suggesting that the species may freely hybridize in sympatry, as was also observed earlier (Eshbaugh, 1975, 1979). The Luribay valley is therefore a potential natural laboratory site for the study of hybridization and introgression across the Pubescens clade and their potential evolutionary and/or ecological impact (Twyford and Ennos, 2012; Taylor and Larson, 2019). Although there is no direct evidence of a wild-to-domesticated species transition between the current members of the clade Pubescens, gene flow between its species would impact the extent and maintenance of the present genetic variation of the C. pubescens gene pool. Understanding gene flow between the locoto chile and its wild relatives in their native range is crucial for in situ conservation of genetic diversity, a globally recognized approach to safeguard plant genetic resources alongside ex situ conservation strategies (Hammer et al., 2003; Wambugu and Henry, 2022). This knowledge would allow the design of measures to prevent genetic homogenization and effectively conserve the diversity of the C. pubescens gene pool.
Conclusion
Analyses of genome-wide SNP data using population genetic and phylogenetic approaches shed new light on the evolutionary history of the cultivated locoto chile, C. pubescens. The results clearly demonstrated that this species forms a monotypic lineage that is sister to a group of three other wild Capsicum species from the Central Andes. The analysis of a high number of samples representing the genetic and geographical variation of the four target species allowed for the detection of hybridization events between these taxa and also for the identification of the primary lineages associated with the diversification under domestication of C. pubescens. The highlands of central-western Bolivia, from the south-east to the north-west of La Paz, were hypothesized to represent the center of origin of C. pubescens. More extensive sampling of the populations from this region will allow for more rigorous testing of this hypothesis. The new inferences of evolutionary and phylogenetic relationships under domestication form a new basis for germplasm conservation and breeding strategies, and will also be fundamental for guiding further research on the locoto chile and its wild relatives.
FIGURE 1 Geographic distribution and morphological characters of the analyzed Capsicum species. (A) Area of C. pubescens cultivation in Central-South America (blue), with details of the native distribution ranges of C. eximium (green), C. cardenasii (red), and C. eshbaughii (fuchsia) in the inset. Circles on the map indicate the provenance of the C. pubescens accessions analyzed. The star points to the city of La Paz, Bolivia. (B-F) Flowers of C. pubescens (B), C. cardenasii (D), C. eximium (E), C. eshbaughii (F) and fruits of C. pubescens (C). Photos by NEP and CCG.
FIGURE 2 Genetic structure of C. pubescens and its sister species. Sample names as in Supplementary Table 1; tip names indicate sample ID and geographic provenance indicated by the two-letter country code. (A) Heatmap plot obtained with fineRADstructure, showing the variation in pairwise co-ancestry among individuals according to the scale shown on the left. Solid squares represent samples from the same species and the dotted squares samples identified as putative hybrids. (B) Result of the sNMF analysis for K = 2-8. Each bar represents a sample and the colors represent the partitioning of the sample genotype in each group. The samples are sorted by species, except for the hybrid individuals.
FIGURE 3 Phylogenetic affinities of C. pubescens and its sister species. Sample names as in Supplementary Table 1; tip names indicate sample ID and geographic provenance indicated by the two-letter country code; tip colors represent the main genetic clusters resolved by sNMF (Figure 2B). (A) Best-scoring Maximum Likelihood phylogenetic tree inferred in IQ-TREE, with support values next to the branches indicating ultrafast bootstrap (UFBoot). The bar indicates substitutions/site (note that the branch for the outgroup is truncated for graphical reasons). (B) The coalescent-based species tree inferred in SVDquartets, with bootstrap support (BS) values next to the branches. Samples marked with stars represent putative hybrids and samples marked with asterisks were also included in the SNAPP analysis. The dotted square indicates the C. eshbaughii samples. (C) Species trees after SNAPP analysis, depicted as a cloudogram and consensus tree. Clades and colors are marked as in (A, B). Nodal support values are provided as posterior probabilities (PP).
The HyDe results for three groups of accessions (P1, P2, Hybrid). The two parents (P1, P2) and the Hybrid are shown for each rooted triplet comparison. The Z-score, p-value, and gamma (g) values for each test are shown. Significant values are indicated in bold. (A) P1 and P2 (C. cardenasii, C. eximium or C. pubescens) and Hybrid (putative hybrid accessions), and (B) P1 and P2 (C. cardenasii, C. eximium or C. pubescens) and each of the putative hybrid accessions treated as a single individual.
TABLE 1
Hybridization among analyzed Capsicum species. | 9,543.8 | 2024-02-23T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Inversion Boundary Annihilation in GaAs Monolithically Grown on On‐Axis Silicon (001)
Monolithic integration of III–V materials and devices on CMOS-compatible on-axis Si (001) substrates enables a route to low-cost and high-density Si-based photonic integrated circuits. Inversion boundaries (IBs) are defects that arise at the interface between III–V materials and Si, which makes it almost impossible to produce high-quality III–V devices on Si. In this paper, a novel technique to achieve IB-free GaAs monolithically grown on on-axis Si (001) substrates, by realizing alternating straight and meandering single atomic steps on the Si surface, has been demonstrated without the use of double Si atomic steps, which were previously believed to be the key to IB-free III–V growth on Si. The periodic straight and meandering single atomic steps on the Si surface result from high-temperature annealing of a Si buffer layer. Furthermore, an electrically pumped quantum-dot laser has been demonstrated on this IB-free GaAs/Si platform with a maximum operating temperature of 120 °C. These results can be a major step towards monolithic integration of III–V materials and devices with mature CMOS technology.
Introduction
Driven by the rapid development of smartphones, cloud computing and the Internet of Things, the unprecedented growth of worldwide data traffic significantly increases the demand for ever-higher data transmission speeds in data centers. CMOS-process-compatible, Si-based photonic integrated circuits (PICs) have attracted extensive scientific and industrial interest as a route to low-cost and high-density integration. A major obstacle, however, is inversion boundaries (IBs), defects arising from the epitaxial growth of polar III-V compound semiconductor materials on non-polar Si substrates. The IBs nucleate at the edges of single-atomic-height (S) Si steps, while their nucleation is prevented on double-atomic-height (D) Si steps. [9,13] To avoid the formation of IBs, the conventional strategy employs Si (001) substrates with a 4-6° offcut towards [110] or [111], which promotes stable D steps after high-temperature annealing. [16,17] However, this strategy is incompatible with CMOS technology, which strictly requires nominally on-axis Si (001) substrates. [18] Until now, great efforts have been made to develop epitaxial growth techniques for IB-free III-V materials on on-axis Si (001) substrates, including the implementation of V-groove patterned Si substrates, [19][20][21] template-assisted selective epitaxy (TASE), [22,23] III-V nano-ridge engineering [24,25] and high-temperature annealing of the Si substrate under a hydrogen ambient using metal organic chemical vapor deposition (MOCVD) systems. [26][27][28] With the aid of high temperature and high-pressure hydrogen, the epitaxial growth of IB-free GaP and GaAs on D-dominated Si has achieved great success. However, these methods require hydrogen gas and Si substrates with intentionally selected offcut angles (0.15° and 0.12° for the growth of GaAs/Si and GaP/Si, respectively) to promote the formation of dominant D steps while only a few S islands remain at the step edges. [26,[28][29][30] The IBs that arise only from these few S islands remain at low density and intersect pairwise during the subsequent high-temperature layer growth, leading to sufficient IB self-annihilation. Those methods are incompatible with solid-source molecular beam epitaxy (MBE) growth due to the lack of a hydrogen source. On the other hand, the MBE system is superior in developing high-quality InAs/GaAs quantum dot (QD) lasers, which have proved to be an important laser source for Si photonics due to their robustness and high-quality performance. A fully MBE-grown, IB-free III-V buffer layer on on-axis Si (001) is thus highly desirable. Recently, researchers have successfully grown III-V lasers on on-axis Si (001) by MBE using high-temperature annealing and an Al 0.3 Ga 0.7 As nucleation layer. [31] However, the mechanism of IB annihilation during growth is not clear, and the critical growth parameters remain uncertain.
In this work, we demonstrate a growth method for IB-free GaAs layers on on-axis Si (001) with periodic S steps, obtained by high-temperature annealing of a Si buffer within the MBE system. The impact of the annealed Si buffer layer on the propagation of IBs is extensively studied. Sufficient self-annihilation of IBs during GaAs growth was demonstrated, achieving IB-free GaAs within a 1 µm thickness grown on on-axis Si (001) substrates. Furthermore, a 1.3 µm InAs/GaAs QD laser was grown on the IB-free GaAs/Si substrate, with a low threshold current density of 83.3 A cm −2 and a high operating temperature of 120 °C.
Epitaxial Growth, Surface Morphology of III-V Materials on On-Axis Si Substrates
Three samples (A-C) were first studied, grown on microelectronic-standard on-axis Si (001) substrates with random miscut angles within 0.15° ± 0.1° toward the [110] orientation by a solid-source MBE system. None of the Si (001) substrates used in this paper were intentionally selected before epitaxy. For Sample A, a pre-growth heat treatment within the MBE growth chamber was performed before growing a 1 µm thick GaAs layer, and the (Al)GaAs growth follows the method demonstrated by Kwoen et al. [31,32] The deoxidized Si (001) substrate was heated up to ≈1200 °C for 30 min to enable the formation of D steps on the Si surface, which were believed to be the key to suppressing IB nucleation at the GaAs/Si interface. [9,33] However, no 2 × 1 RHEED pattern was observed during the heating process, and a high density of IBs was observed on the surface of the subsequently grown GaAs layer, as shown in the top-view scanning electron microscope (SEM) image of Figure 1a. The visible deep trenches in the SEM image indicate the locations of IBs, since material evaporates more easily at IBs, due to weaker bonding than in the normal III-V crystal. [30,34] Therefore, we further optimized the growth method by employing a three-step 1 µm GaAs growth for Sample B: a 250 nm GaAs nucleation layer was first grown at a low temperature (LT) of 350 °C on the deoxidized Si (001) substrate, followed by the deposition of another 250 nm GaAs layer at a mid-temperature (MT) of 420 °C. Finally, a 500 nm GaAs layer was grown at a high temperature (HT) of 580 °C to finish the growth. A notable reduction of the IB density is observed in Figure 1b, and most of the IBs are closed loops. Even though the material quality was significantly improved, IBs are still visible after the 1 µm GaAs growth. To improve the quality of the Si epi-surface before GaAs growth, a 200 nm Si buffer layer was grown and then annealed inside the MBE chamber at 1200 °C for Sample C, [33] followed by a GaAs growth procedure identical to that of Sample B. Figure 1c shows an IB-free GaAs surface for Sample C. These results clearly indicate that the annealed Si buffer plays a key role in IB annihilation. To understand the mechanism by which the annealed Si buffer causes IB annihilation, the surface morphology of Si substrates without and with the annealed Si buffer layer was compared through atomic force microscopy (AFM). AFM images of the Si surface after deoxidation and after the annealed Si buffer layer are shown in Figure 2a,b, respectively. For the deoxidized Si surface, a random atomic-step distribution is obtained without a clear step order, as presented in Figure 2a. The formation of these wavy steps is a result of the interaction between different stress domains, which helps to reduce the net elastic energy of the Si surface at small offcut angles. [35,36] In contrast, clearly ordered Si steps are visible in Figure 2b, and a zoomed-in measurement of those ordered Si steps is presented in Figure 2c, showing a combination of alternating straight and meandering Si atomic steps. [16,37] The height of each step was measured to be around 0.13 nm, as shown in Figure 2d, revealing the existence of only S steps, rather than D steps, after the high-temperature annealing of the Si buffer. [38][39][40][41] It is well established that on-axis Si (001) surfaces, which have a small offcut angle, exhibit terraces of alternating 2 × 1 and 1 × 2 dimerization separated by two types of S steps. [16,37,42] Based on Chadi's nomenclature, these two step types are denoted as S a and S b . [43]
S b steps are relatively rough while S a steps are straight, as shown in the schematic diagram in the inset of Figure 2c. Each meandering S b step, which arises from thermal fluctuation, is sandwiched between two neighboring S a steps, as shown in Figure 2c. The offcut angle of the on-axis Si (001) substrate used can thus be determined from the step height and terrace width, where θ is the surface misorientation of the Si substrate, a represents the theoretical height of an S step, which is 0.136 nm, and L is the half terrace width between neighboring S a steps, which is around 80 nm as obtained from Figure 2d. The offcut angle corresponding to this terrace width is thus calculated as <0.1°, which is clearly within the typical misorientation range of on-axis Si (001) substrates. The offcut angle of the on-axis Si (001) substrates was not intentionally selected before growth. Considering the unavoidable offcut introduced during the cutting of the Si (001) ingot, parallel S steps are achievable for on-axis Si (001) substrates with random offcut angles within 0.15° ± 0.1° towards the [110] orientation. To investigate the nucleation and propagation of IBs within GaAs grown on (S a + S b ) arrays, a cross-sectional annular dark field scanning transmission electron microscopy (ADF-STEM) measurement was performed. As shown in Figure 3a, IBs nucleate at the edges of the S steps and propagate at low temperature along the energy-favoured (110) plane. [44,45] The measured line profiles obtained from the ADF-STEM image indicate the swapping of the Ga and As sublattices across the boundary. Therefore, periodic IBs can be generated on the periodic S steps, where the distance between neighbouring IBs directly relates to the terrace width, i.e., the distance between S a and S b , as shown in the schematic illustration in Figure 3b and confirmed later by the results shown in Figure 4. In addition, a relatively high GaAs/Si interface roughness, resulting from Ga melt-back etching, is observed in Figure 3a. Further improvement of the GaAs/Si interface quality can be achieved by carefully controlling the III-V-on-Si nucleation process and compensating any excess Ga droplets before they etch the Si buffer layer. [46,47]
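As a quick numerical sanity check of the quoted offcut value, the sketch below evaluates the misorientation from the step height and terrace width given above. The original equation is not reproduced in this copy of the text, so the relation tan θ = a/L used here is an assumption based on the stated definitions; it reproduces the reported result of <0.1°.

```python
import math

a_step = 0.136   # nm, theoretical height of a single-atomic (S) step
L_half = 80.0    # nm, measured half terrace width between neighbouring S_a steps

# Assumed relation (not quoted from the paper): tan(theta) = a / L
theta_deg = math.degrees(math.atan(a_step / L_half))
print(f"estimated offcut angle: {theta_deg:.3f} deg")   # ~0.097 deg, i.e. < 0.1 deg
```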
Characteristic Measurements of Inversion Boundaries
To further understand the mechanism of IB annihilation on the periodic (S a + S b ) Si atomic steps during GaAs growth, the growth of Samples B and C, without and with the annealed Si buffer layer respectively, was studied in detail, layer by layer. The surface morphologies of the LT, MT and HT GaAs layers of Samples B and C are presented in Figure 4a-c and Figure 4d-f, respectively. A considerable number of nucleated, curved IBs appear randomly after the LT GaAs layer is grown for Sample B, as illustrated in Figure 4a, which is consistent with the wavy Si atomic steps after deoxidation shown in Figure 2a. An increase in the growth temperature for the further 250 nm MT GaAs layer enlarges the boundaries, despite a reduction of the IB density, as observed in Figure 4b. Although the density of IBs is visibly lower after the growth of the 500 nm HT GaAs, as illustrated in Figure 4c, the size of some IBs is significantly larger than that of the ones nucleated in the LT GaAs, which severely lowers the crystal quality of the subsequently grown materials. These results indicate that full annihilation of the IBs is difficult to achieve for Sample B without the annealed Si buffer layer. In stark contrast, as shown in Figure 4d, well-organized periodic boundaries are observed in Sample C after the deposition of the first 250 nm of LT GaAs. The formation of these periodic boundaries is the result of IBs nucleating at the edges of the periodic S steps during the deposition of the LT GaAs layer. The distribution of IBs reproduces the structure of the periodic S steps, indicating that the low temperature used for the nucleation layer growth is insufficient to kink IBs from {110} into higher-index planes. This GaAs surface pattern is distinctive compared with the previously reported Al 0.3 Ga 0.7 As nucleation layer. [31] The gaps between the separated IBs are visible as dips. During the further growth of 250 nm of MT GaAs, the dips are reduced in size and gradually annihilated, and the density of IBs also becomes visibly lower, as shown in Figure 4e. Finally, Figure 4f shows a single phase of GaAs on the surface after the 500 nm HT GaAs is deposited. These results suggest that a fully IB-free GaAs surface was obtained after growth of 1 µm of GaAs by utilizing the periodic (S a + S b ) arrays on Si to promote IB annihilation, with a relatively low root mean square (RMS) roughness of 4.9 nm.
Cross-sectional TEM measurements were performed on Samples B and C to further study the mechanism of IB annihilation by investigating the cross-sectional structural properties of the GaAs-on-Si heteroepitaxy. The images were taken along two viewing directions, [110] (Figure 5a-d) and [1-10] (Figure 5e,f). As shown in the dark-field TEM image of Figure 5a, the IBs nucleate on the (110) plane in Sample B in the LT GaAs layer. Subsequently, the IBs start to propagate along higher-index planes, such as the {111}, {112} and {113} planes, [30] through the MT and HT GaAs layers. This enhances the probability of IBs intersecting and annihilating with each other. The twisted patterns observed in Sample B are due to the randomly distributed IB nucleation. In contrast, periodic arrays of IBs are visible when GaAs was deposited on the annealed Si buffer layer in Sample C, as shown in Figure 5b. The distance between IB loops corresponds to the half-width of each Si terrace, i.e., the distance between S a and S b , which is approximately 80 nm in this case. Kinking of the IBs is observed in the higher-growth-temperature region, which leads to annihilation when the IBs meet. Stacking faults appear occasionally in the GaAs nucleation layer, as shown in the inset image of Figure 5b, without a visible impact on the IB propagation. Figure 5c and Figure 5d present larger-scale bright-field cross-sectional TEM measurements of Samples B and C, respectively. Despite most of the IBs self-annihilating during the growth of Sample B, there are still some IBs that penetrate through the whole structure, as seen in Figure 5c. Those remaining IBs propagate freely in three dimensions, making annihilation extremely unlikely once the density of IBs becomes low. However, the IBs that nucleate on the (S a + S b ) arrays follow the shapes of both steps and annihilate within approximately 500 nm of GaAs growth. In addition, the IBs that penetrate through the whole structure can be observed from the [1-10] direction in Sample B, as shown in Figure 5e. In contrast, due to the formation of straight S steps parallel to the [110] orientation, the periodic IB nucleation resembles the distribution of the (S a + S b ) arrays, which lie along the (110) plane, leaving no IBs observable from the [1-10] direction for Sample C, as shown in Figure 5f. This phenomenon differs from the observation in the GaP/Si system, in which triangle-shaped S islands appear between the engineered D steps; the IBs formed on those remaining triangle-shaped S islands reflect the Si surface structure and can be observed from both the [110] and [1-10] directions. [48] X-ray diffraction reciprocal space mapping (XRD-RSM) was used to examine the residual strain inside the IB-free GaAs buffer layers, as shown in the inset of Figure 5f. A full-relaxation line passes directly through the center of the patterns representing GaAs and Si, implying that no residual strain is present in the GaAs layers. The compact pattern of GaAs indicates the good crystal quality of the IB-free GaAs layer.
Performance Characterization of QD Laser on Si
To explore the feasibility of using this IB-free GaAs layer as a platform for the integration of polar III-V optoelectronic devices on non-polar group-IV substrates, a 1.3 µm InAs QD laser structure was monolithically grown on this GaAs/Si (001) platform. The bright-field TEM image in Figure 6a demonstrates the high quality of the InAs QD gain medium, in which no apparent threading dislocations (TDs) or IBs are observed. Comparing the room-temperature photoluminescence (PL) of the InAs QD material grown on our IB-free GaAs/Si (001) platform with that grown on a GaAs/Si (001) virtual substrate without the annealed Si buffer, the sample with the annealed Si buffer shows a four-fold improvement in PL intensity with a similar peak wavelength of ≈1288 nm, as shown in Figure 6b. The full width at half maximum of the InAs QDs on IB-free GaAs/Si (001) is as low as ≈27.8 meV. An AFM image of uncapped InAs QDs grown on the fully relaxed IB-free GaAs/Si (001) under the same growth conditions is shown in the inset of Figure 6b, where InAs QDs with a high density of 5.4 × 10 10 cm −2 are present. A broad-area InAs QD laser was fabricated in order to assess the quality of our IB-free GaAs/Si (001) platform. Figure 6c shows the light-current (L-I) curves of the InAs QD laser at different operating temperatures. The room-temperature threshold current density (J th ) is as low as 83.3 A cm −2 , which is better than previously reported J th values for 1.3 µm InAs QD lasers grown entirely by MBE on exact Si (001) substrates. [14,32,49] Since robust temperature stability is necessary for a Si-based laser operating in a high-temperature environment, the laser was tested over a range of operating temperatures. Lasing was observed under pulsed mode at operating temperatures up to 120 °C. Moreover, the slope efficiency of the single-facet emission, 0.13 W A −1 at 20 °C, remained stable as the temperature increased, showing good temperature reliability for the InAs QD laser on our IB-free GaAs/Si (001) platform. The room-temperature electroluminescence (EL) spectra under different injection current densities are given in Figure 6d. Amplified spontaneous emission was observed below an injection current density of 80 A cm −2 . When the injection current is increased above the threshold, the ground-state lasing spectrum is clearly observed, with a peak wavelength at 1303.9 nm. The inset of Figure 6d shows a characteristic temperature (T 0 ) of ≈55 K between 20 and 100 °C. Based on these results, the 1.3 µm InAs QD laser grown directly on an on-axis Si (001) substrate using our IB-free GaAs virtual substrate demonstrated promising performance in terms of J th and temperature stability.
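The characteristic temperature T 0 quoted above is conventionally extracted from the exponential dependence of the threshold current density on temperature, J th (T) = J 0 exp(T/T 0 ). The sketch below shows this standard fit applied to a set of (temperature, J th ) pairs; only the 20 °C value of 83.3 A cm −2 is taken from the text, while the remaining points are fabricated for illustration and simply follow a T 0 ≈ 55 K trend.

```python
import numpy as np

def characteristic_temperature(temps_c, j_th):
    """Fit T0 from J_th(T) = J0 * exp(T / T0) via a linear fit of ln(J_th) vs T."""
    slope, _intercept = np.polyfit(np.asarray(temps_c, dtype=float), np.log(j_th), 1)
    return 1.0 / slope

# Illustrative data: only the 20 degC point comes from the text.
temps = [20, 40, 60, 80, 100]        # degC
jth = [83.3, 120, 172, 248, 356]     # A/cm^2, roughly exp(T/55) scaling
print(f"T0 is approximately {characteristic_temperature(temps, jth):.0f} K")
```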
Conclusion
In this paper, we demonstrated IB-free GaAs epilayers monolithically grown on CMOS-compatible on-axis Si (001) substrates with periodic single-atomic (S) Si steps only, instead of the double (D) Si steps conventionally used in MOCVD systems. The detailed mechanism of IB annihilation within the GaAs buffer layer grown on periodic S Si steps has been studied using AFM and TEM. After the deoxidation of the Si substrates, a random atomic-step distribution without a clear step order is observed on the Si epitaxial surface. During the growth of GaAs on Si, IBs within the GaAs buffer are generated at the S steps on the Si surface. Curved IBs are thus formed randomly for GaAs grown on Si substrates without the annealed Si buffer layer and do not effectively annihilate within the 1 µm GaAs buffer layer. On the other hand, a periodic surface morphology of alternating straight S a and meandering S b single atomic steps on the Si surface was obtained for the sample with the annealed Si buffer layer. During the deposition of the GaAs layers, the IBs that nucleate on the (S a + S b ) arrays follow the shapes of both steps and annihilate within approximately 500 nm of GaAs. This approach simplifies the growth requirements for a high-quality IB-free III-V platform on CMOS-compatible Si (001). Using this GaAs buffer layer as a platform for the monolithic integration of III-V optoelectronics on CMOS-compatible Si (001), a 1.3 µm InAs QD laser device with a low J th of 83.3 A cm −2 at room temperature and a maximum operating temperature of 120 °C was successfully demonstrated. These results indicate that IBs will no longer be a fundamental issue for the monolithic integration of polar III-V materials on on-axis Si (001) substrates and form a basis for combining monolithically integrated Si photonics with mature CMOS technology.
Experimental Section
Material Growth: The epitaxial materials were grown in a twin-chamber MBE system, consisting of a group-IV and a III-V growth chamber. The deoxidation of the Si substrates and the growth and annealing of the Si buffer layer were performed in the group-IV chamber before transferring to the III-V chamber for III-V epitaxy. An ultra-high-vacuum transfer chamber between the two growth chambers was used to keep the Si epi-surface pure and smooth before GaAs growth, avoiding potential contamination during the wafer transfer process. Phosphorus-doped on-axis Si (001) wafers with a 0.15° ± 0.1° offcut towards [110] were used. The in situ deoxidation of the substrates within the group-IV chamber was performed at 1200 °C for 30 min. For Sample A, a 30 nm Al 0.3 Ga 0.7 As nucleation layer with a growth rate of 0.7 monolayers per second (ML s −1 ) was grown on the deoxidized Si (001) substrate at 500 °C, followed by a 970 nm GaAs layer at 580 °C. In Sample B, a 250 nm GaAs nucleation layer was first grown at a low temperature of around 350 °C, followed by the deposition of another 250 nm GaAs layer at a mid-temperature of around 420 °C. Finally, 500 nm of GaAs was grown at a high temperature of around 580 °C to complete the growth. For Sample C, a 200 nm thick Si buffer layer was grown on the deoxidized Si (001) substrate by an e-beam Si source, consisting of a 100 nm Si layer annealed at 900 °C followed by five periods of 20 nm Si layers annealed at 1200 °C. The GaAs growth sequence was the same as for Sample B. The InAs/GaAs QD lasers were grown on the virtual substrate produced using the procedure of Sample C. Si-doped InGaAs/GaAs defect filter layers (DFLs) were grown after the GaAs layer to reduce the threading dislocation density; each DFL consists of five repeats of an InGaAs/GaAs superlattice and a 300 nm GaAs spacer layer, with an in situ thermal anneal after each repeat. [50] After three repeats of DFLs, a five-layer dot-in-well structure was grown as the active region, sandwiched between two 1.5 µm n-type and p-type Al 0.4 Ga 0.6 As cladding layers. Each layer of InAs QDs was grown on a 2 nm In 0.18 Ga 0.82 As layer and capped by a 6 nm In 0.18 Ga 0.82 As layer, followed by a 50 nm GaAs spacing layer. Finally, a 300 nm p-type GaAs contact layer was grown.
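For readability, the three growth sequences just described can be collected into a small data structure, as in the sketch below. The layer names, thicknesses and temperatures are copied from the text; the dictionary layout itself is only an illustrative summary, not a real MBE control format.

```python
# Layer stacks for Samples A-C as described above: (layer, thickness_nm, temp_C).
# Purely illustrative; values come from the text, the structure is not an MBE recipe file.
growth_recipes = {
    "A": [("Al0.3Ga0.7As nucleation", 30, 500), ("GaAs", 970, 580)],
    "B": [("GaAs LT", 250, 350), ("GaAs MT", 250, 420), ("GaAs HT", 500, 580)],
    "C": [("Si buffer (annealed at 900/1200 C)", 200, None),
          ("GaAs LT", 250, 350), ("GaAs MT", 250, 420), ("GaAs HT", 500, 580)],
}

for sample, layers in growth_recipes.items():
    total = sum(thickness for _, thickness, _ in layers)
    print(f"Sample {sample}: {len(layers)} layers, {total} nm total")
```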
Device Fabrication: The broad-area lasers with 50 µm wide stripes were fabricated by standard lithography and wet chemical etching techniques. Ti/Pt/Au and Ni/GeAu/Ni/Au were deposited on the p+ GaAs contact layer and the exposed n+ GaAs layer to form the p- and n-contacts, respectively. After lapping the silicon substrate to 150 µm, the lasers were cleaved to 3 mm lengths and mounted (as-cleaved) onto the heat-sink and wire-bonded.
Measurements: AFM measurements were performed with a Veeco Nanoscope Dimension 3100 under tapping mode. The PL measurements were performed with an RPM2000 PL system at room temperature, excited by a 635 nm red laser. The TEM and STEM measurements were performed on JEOL 2100 and doubly corrected ARM200F microscopes, respectively, both operating at 200 kV. The fabricated laser was characterized under pulsed conditions with 1 µs pulses and a 1% duty cycle. The output power of the laser was collected with a photodetector normal to the laser facet.
Figure 6. Characteristic measurements of QDs and the laser device. a) Bright-field cross-sectional scanning TEM image of 5 stacks of the dot-in-well structure grown on on-axis Si (001). b) Room-temperature PL spectra of QD samples grown on deoxidized Si substrates with and without the Si buffer layer; inset: 1 µm × 1 µm AFM image of uncapped InAs QDs grown on on-axis Si (001) with high dot density. c) Temperature-dependent L-I curves up to 120 °C of the 1300 nm InAs QD laser on on-axis Si (001). d) EL spectra of the InAs QD laser on an on-axis Si (001) substrate at different injection current densities under pulsed mode; inset: temperature dependence of the J th revealing the characteristic temperature T 0 of our laser sample. | 5,670 | 2020-09-21T00:00:00.000 | [
"Engineering",
"Physics"
] |
Automatic Image Annotation based on Dense Weighted Regional Graph
Automatic image annotation refers to automatically creating text labels that reflect an image's content. Although numerous studies have been conducted in this area over the past decade, the existence of multiple labels and the semantic gap between these labels and low-level visual features reduce annotation accuracy. In this paper, we propose an annotation method based on a dense weighted regional graph. In this method, image regions are clustered with high precision by forming a dense regional graph built on strong fuzzy feature vectors; by weighting the edges of the graph, less important regions are removed over time, further reducing the semantic gap between low-level image features and the human interpretation of high-level concepts. To evaluate the proposed method, the Corel database with 5,000 samples was used. The results on this database show acceptable performance of the proposed method in comparison with other methods. Keywords—automatic annotation; dense weighted regional graph; segmentation; feature vector
INTRODUCTION
Due to the growing use of digital technologies, image data are generated and stored every day in large numbers, and using these data has become as commonplace as using text data. Hence, the need to search visual data according to different demands has increased. One of the traditional methods for image retrieval is content-based image retrieval [1,2,3]. However, these systems are not able to understand meaning. In addition, the user must express their wishes in terms of the visual properties of an image, which is difficult for users [1]. This formidable challenge in content-based image retrieval is called the semantic gap: the gap between the low-level visual content of an image and the human interpretation of it as a high-level concept [2]. Methods such as automatic image annotation have been proposed in recent decades to reduce the semantic gap [4]. In automatic image annotation, the computer is used to produce words that are suitable for describing images. In this case, images can be retrieved from a set of annotated images using a text query, which is far easier than using a sample image or image features [1]. A great deal of research has been done in the field of image annotation; it can be grouped into three types of models: probabilistic models, category-based models, and nearest-neighborhood models [5]. Most probabilistic models [6,7,8,9] estimate the joint probability of image content and keywords. Category-based models [10,11] treat image annotation as a supervised classification problem. Among the nearest-neighborhood models, k-nearest neighbor is one of the oldest, simplest, and yet most efficient; it remains effective even when training samples are scarce. One of the methods in this area is described in [12]. However, all of the methods presented in the field of automatic annotation face two challenging problems. First, available annotation techniques generally use either regional or global features alone to describe an image; but global and regional features focus on different, complementary aspects of an image, so combining them to describe images is beneficial. Second, in all region-based methods, only the regions obtained directly through region- or object-based image segmentation are used for annotation, and the relationships between regions are not taken into account. Attending to the links between regions, each of which represents a word or concept, can help improve the final annotation words. To address these problems, in this article we propose a dense weighted regional graph. The proposed method uses rough set theory and fuzzy feature vectors for the regions resulting from segmentation to classify images effectively, and it builds a dense regional graph so that both global and regional characteristics are used together for annotation, significantly enhancing accuracy. It also weights the edges between vertices of the proposed regional graph so that relationships between image regions are captured by the system. This article is organized as follows: Section 2 describes the automatic image annotation method based on a dense weighted regional graph; Section 3 provides simulation results for the proposed algorithm and a comparison with other methods in this area; Section 4 offers the overall conclusion.
II. IMAGE ANNOTATION BASED ON DENSE WEIGHTED REGIONAL GRAPH
In this section of the paper, the proposed method for annotating images is explained.
The construction of the dense regional graph is as follows: 1) First, collect a freely available large dataset of annotated images and classify the images into different categories according to their annotation keywords. 2) The images related to each particular class are divided into separate regions with an accurate and effective rough-set-based segmentation method.
3) To group the regions resulting from the segmentation of the images of each particular class, we first use k-means, so that similar regions are classified into the same group. Because of some shortcomings of this method, and to achieve more accurate grouping, we then reclassify the low-density region groups by considering their fuzzy feature vectors, comparing them with the feature vectors of the main region groups of other classes (which contain large numbers of regions from each class of images), and reassigning them accordingly.
4) At this point, we create the dense regional graph: high-density regions that lie near the center of an image are placed in a category, their fuzzy feature vectors are obtained, and the images are annotated with the label of their respective classes.
For groups with low density and outlying regions, the fuzzy feature vector is determined and its similarity to region groups from other image classes is computed; each such region is then assigned to the group of regions most similar to it and tagged with the annotations of that group's class.
5) Then each vertex of the graph, which represents a dense cluster of highly similar image regions, is connected by weighted edges to the other clusters that contain regions from the same collection of images of the respective class. The joint probability of co-occurrence is used to obtain more accurate weight values, so that, over the images of each class, the weaker object groups are removed from that class.
After creating the dense regional graph from the images of the training data, a new image is annotated as follows: the image is first segmented; the dense regions closer to the center of the image and their fuzzy feature vectors are used to determine the main class of the image; then, based on the weighted dense regional graph, the remaining image regions are annotated by taking into account their similarity to the region groups connected by more heavily weighted edges to the main group of the image.
In the dense regional graph, the region groups are annotated with the class names of the collected images. Thus, the number of categories in the training data should be large enough to cover the meanings of these groups and of the image regions.
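The construction just described can be summarized in a short sketch: each vertex is a region cluster of one class, and vertices are linked when their regions appear together in the same images. This is only a minimal illustration with hypothetical data structures; the edge attribute stores a raw co-occurrence count here, while the normalized weighting described later around Eq. (2) is shown in a separate sketch. The networkx library is used purely for convenience.

```python
from collections import defaultdict
from itertools import combinations

import networkx as nx

def build_region_graph(image_region_clusters):
    """Build the regional graph for one image class.

    image_region_clusters: dict image_id -> set of region-cluster labels
                           present in that image (e.g. {"dog", "woodland"}).
    Each edge stores how many images of the class contain both clusters.
    """
    co_occurrence = defaultdict(int)
    for clusters in image_region_clusters.values():
        for u, v in combinations(sorted(clusters), 2):
            co_occurrence[(u, v)] += 1

    graph = nx.Graph()
    for (u, v), count in co_occurrence.items():
        graph.add_edge(u, v, count=count)
    return graph

# Toy example in the spirit of Figure 1: "dog" co-occurs more with "woodland" than "sky".
images = {1: {"dog", "woodland"}, 2: {"dog", "woodland", "sky"}, 3: {"dog", "woodland"}}
g = build_region_graph(images)
print(sorted(g.edges(data="count")))
```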
A. Image Segmentation in Each Particular Class
Image regions can be extracted in various ways, such as image segmentation [13], dense sampling [14], and detection of specified areas [15]. However, most of these methods are very expensive in terms of time and computation. Among the proposed methods, rough set theory is able to cluster the data arising from image analysis effectively [16] and to identify edges efficiently, which makes it one of the effective approaches to segmentation with fast convergence, so we use this theory for image segmentation. A more detailed description of rough-set-based image segmentation can be found in [17]. After extracting the image regions, we form dense region groups by applying k-means clustering to the regions. However, in k-means clustering the number of clusters is set manually, and freely distributed regions such as noisy parts or unstable, cluttered background are also produced, so the results are not ideal. Therefore, after the initial clustering used to produce dense groups, we use an effective feature vector to split low-density parts into smaller, higher-density pieces and assign them, using similarity criteria, to groups of high similarity among our categories. In this way the semantic relationships between groups are clearly identified, and the results show the high accuracy of the proposed method in categorizing the different regions of images.
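As a minimal illustration of the initial grouping step, the sketch below clusters per-region feature vectors with k-means, using scikit-learn for convenience. The feature vectors are random placeholders standing in for the colour/edge descriptors of the segmented regions, and the number of clusters is set manually, as the text notes is required for k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_regions(region_features, n_clusters):
    """Initial grouping of segmented image regions, before the fuzzy re-classification.
    region_features: array of shape (n_regions, n_features)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(region_features)
    return labels, km.cluster_centers_

# Placeholder feature vectors (e.g. mean colour + edge density per region).
rng = np.random.default_rng(0)
features = rng.random((40, 6))
labels, centers = cluster_regions(features, n_clusters=4)
print(np.bincount(labels))   # number of regions assigned to each cluster
```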
B. Producing Dense Regions and Annotation
Many annotation errors arise in part from the failure to identify low-density regions. Since low-density regions may be similar to densely populated regions in other image groups, we again use the effective feature vector to split low-density parts into smaller, higher-density pieces and assign them, using similarity criteria, to groups of high similarity among our categories. For this purpose, a mask of size 60 × 60 is applied to the low-density areas.
To identify groups, we use two characteristics of the image, color and edge, which are fuzzified for greater accuracy.
More details on how the feature vectors are determined can be found in [18] and [19]. Since the fuzzy feature vector of each image region includes three properties, namely color, location, and edge, the proposed fuzzy similarity measure uses all three fuzzified characteristics to determine the similarity or dissimilarity of different images more accurately.
The following formula shows the proposed fuzzy similarity measure: (1) The numerical value of the fuzzy weight for each component was determined in [18] and [19] as follows: very large = 1, large = 0.8, medium = 0.55, small = 0.3, very small = 0.1. This method is largely resistant to the problems faced by the majority of edge detection methods, such as sensitivity to noise and thick lines. Thus, low-density areas are divided into several regions of higher density and tagged with the most similar annotations. In this way, the dense regional graph is created for the different images in the database.
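Because Eq. (1) itself is not legible in this copy of the text, the sketch below is only a plausible stand-in for the fuzzy similarity measure: it combines per-feature similarities for colour, location and edge using the linguistic weight table quoted above. The function name, the choice of weighted average, and the example feature values are all assumptions.

```python
# Linguistic fuzzy weights quoted in the text (after refs. [18], [19]).
FUZZY_WEIGHT = {"very large": 1.0, "large": 0.8, "medium": 0.55,
                "small": 0.3, "very small": 0.1}

def fuzzy_region_similarity(region_a, region_b, importance=("large", "medium", "large")):
    """Illustrative similarity of two regions described by (colour, location, edge)
    features normalized to [0, 1]. This is not the paper's Eq. (1), only a
    weighted-average sketch using the linguistic weights in FUZZY_WEIGHT."""
    weights = [FUZZY_WEIGHT[term] for term in importance]
    feature_similarities = [1.0 - abs(a - b) for a, b in zip(region_a, region_b)]
    return sum(w * s for w, s in zip(weights, feature_similarities)) / sum(weights)

print(round(fuzzy_region_similarity((0.8, 0.2, 0.6), (0.7, 0.25, 0.55)), 3))
```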
C. Annotation
After removing the background regions from the dense region groups, we annotate the groups. To do this, we first collect the class names as labels. Since a large number of regions in each large-scale collection belong to the same class, we assume that most of the extracted groups can be annotated with the name of their class. We annotate the groups based on two criteria: 1) Visually similar groups must be annotated with similar tags.
2) Groups that are distinct and belong to a particular class are likely to be annotated with that class. This idea is illustrated in Figure 1.
Fig. 1. Illustration of the proposed group-annotation method. For example, in Figure 1, after the condensing step, regions containing "dog", "woods", "sky", and so on are discovered for the dog image; the dog region group, owing to its high density and its proximity to the center of the image, is annotated with the class label "dog". The remaining groups, "sky" and "woodlands", are compared through their feature vectors with the main concepts of the other image classes, and each new region group is categorized with the groups it most closely resembles; an edge is then created between the two corresponding vertices in the graph, indicating that similar regions commonly co-occur in these groups.
For example, in Figure 1 the edge between the dog and woodlands vertices is thicker than the edge between the dog and sky vertices, because these two regions co-occur frequently. The edge weight between vertices is determined by formula (2), where W is the weight between two vertices of the graph (each vertex corresponding to a condensed region group), K is the number of labels shared by the annotations of the two related groups within a class, and n is the number of images of the respective class. The more labels the two groups share, the larger the weight and the thicker the corresponding edge.
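A minimal sketch of this edge-weighting step is given below. Since formula (2) is not reproduced in the text, the exact form used here (shared labels divided by the class size) is an assumption that follows the verbal description above.

```python
def edge_weight(labels_group_a, labels_group_b, n_images_in_class):
    """Weight of the graph edge between two condensed region groups:
    number of shared labels K over the number of images n in the class (assumed form)."""
    k_shared = len(set(labels_group_a) & set(labels_group_b))
    return k_shared / float(n_images_in_class)

# Example: two groups sharing the labels {"dog", "grass"} within a 100-image class.
w = edge_weight({"dog", "grass", "sky"}, {"dog", "grass", "fence"}, 100)  # -> 0.02
```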
D. Experimental results and evaluation
The proposed approach is evaluated on a data set of 5,000 images from the Corel database. This collection includes 50 main groups, each consisting of 100 images. Each image is accompanied by a set of 1-5 text labels; overall, the image set contains 360 distinct keywords. To perform the test, the collection is divided into two parts: a training set of 4,500 images and a test set of 500 images. Figure 2 shows a few examples of Corel images annotated by the proposed method. Tags are attributed to each image, and false labels are marked in red.
Based on these images, it is clear that some of the labels determined by the proposed approach are correct but are nevertheless counted as inappropriate because of the poor ground-truth annotations in the database.
For example, in the third image (eagle), the sea is also part of the picture, but this label was not assigned by the human annotator; for that reason the tag identified by the proposed method is counted as false. The per-word evaluation measures are

Precision(v_i) = N_C / N_S (3)
Recall(v_i) = N_C / N_R (4)
F1 = 2 × Precision × Recall / (Precision + Recall) (5)

where, for each word v_i, N_C is the number of images correctly annotated with v_i in the test phase, N_S is the number of test images that the system annotated with v_i, and N_R is the number of images in the database carrying the word v_i.
To assess the annotation quality, precision and recall are calculated for each and every word v_i in the database.
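The per-word bookkeeping behind Eqs. (3)-(5) can be sketched as follows; the dictionary-based data layout is an assumption made for illustration.

```python
def per_word_scores(predicted, ground_truth, vocabulary):
    """predicted / ground_truth: dict image_id -> set of labels (test set only)."""
    scores = {}
    for v in vocabulary:
        n_s = sum(v in labels for labels in predicted.values())                    # system uses v
        n_r = sum(v in labels for labels in ground_truth.values())                 # ground truth uses v
        n_c = sum(v in predicted[i] and v in ground_truth[i] for i in predicted)   # correct uses of v
        precision = n_c / n_s if n_s else 0.0
        recall = n_c / n_r if n_r else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores[v] = (precision, recall, f1)
    return scores
```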
Table 1 shows the mean precision, recall, and F1 of the proposed method and of three recently presented annotation methods (IAGA-2014, Feature fusion and semantic similarity-2014 [5], and MLRank-2013 [21]). By weighting the edges of the graph, the areas with higher density are emphasized over the other areas of each image, and less important areas are removed, which brings the proposed method's annotations closer to human annotation than the other methods. The results also show that the accurate rough-set-based segmentation and the strong feature vectors, which lead to the formation of the dense graph and highlight the densely populated areas of each image, allow concepts such as stairs, flowers, lawns, and aircraft to be identified more precisely than with IAGA.
In fact, one of the main problems of annotation methods such as IAGA is that the important areas of each image are not designated; we address this with a dense-region graph that classifies the areas of each image into their classes using appropriate and accurate feature-vector weights. Therefore, the proposed method achieves more accurate and efficient annotation than the compared methods, by categorizing images more precisely through the dense-region graph of their different areas and by removing the less important areas of each class through edge weighting.
III. CONCLUSION
In this paper, a graph-based method for automatic image annotation using dense regions has been presented.
Most annotation methods in the literature face two basic challenges: the lack of integration of global and regional characteristics of each image, and the lack of attention to the relationships between different areas within an image.
In this article we formed a region graph in which the relationships between different areas of the image are considered, weighted the edges of the graph, and condensed the areas so that prominent areas of each image class are emphasized while low-density, less important areas are removed. Also, by using strong fuzzy feature vectors based on color and edge features for the considered areas, we combined global and regional features in a lightweight way and improved the annotation considerably.
Finally, we implemented the proposed approach on the Corel database. The results obtained on this database show acceptable performance of the proposed method compared with other methods in this field.
Fig. 2. The words predicted by the proposed method for images from the Corel annotation database.
Fig. 3. Percentage of correct identification of 12 sample words for the proposed model and the IAGA model.
Fig. 4. Results of semantic image retrieval on the Corel database; each row shows the five top results for the query semantic concept given in the left-most column.
TABLE I. COMPARISON OF PRECISION, RECALL AND F1 OF THE PROPOSED METHOD AND OTHER METHODS | 3,932.6 | 2017-01-01T00:00:00.000 | ["Computer Science"] |
Two-baryon systems from HAL QCD method and the mirage in the temporal correlation of the direct method
Both direct and HAL QCD methods are currently used to study hadron interactions in lattice QCD. In the direct method, the eigen-energy of the two-particle system is measured from the temporal correlation. Due to the contamination of excited states, however, the direct method suffers from the fake eigen-energy problem, which we call the "mirage problem," while the HAL QCD method can extract information from all elastic states by using the spatial correlation. In this work, we further investigate systematic uncertainties of the HAL QCD method such as the quark source operator dependence, the convergence of the derivative expansion of the non-local interaction kernel, and the single baryon saturation, which are found to be well controlled. We also confirm the consistency between the HAL QCD method and the Lüscher finite volume formula. Based on the HAL QCD potential, we quantitatively confirm that the mirage plateau in the direct method is indeed caused by the contamination of excited states.
In this work, we investigate the reliability of the HAL QCD method, and show that systematic uncertainties are under control. We also reveal the origin of the fake plateau in the temporal correlator quantitatively, and demonstrate that correct plateaux emerge for both the ground and the 1st excited states if temporal correlation functions are projected to eigenstates of the HAL QCD potential.
In the time-dependent HAL QCD method [13], one measures the Nambu-Bethe-Salpeter (NBS) correlation function R(r, t), defined with a two-baryon source operator J, the n-th energy eigenvalue W_n, the inelastic threshold W_th, the single-baryon correlator G_B(t), and the baryon mass m_B. With elastic-state saturation, R(r, t) satisfies an integro-differential equation whose kernel U(r, r') is the non-local interaction kernel. Using the velocity expansion in the spin-singlet S-wave channel, U(r, r') ≈ V_eff(r) δ(r − r'), the effective leading-order (central) potential V_eff(r) is defined directly from R(r, t). Including the higher-order term, U(r, r') ≈ {V_LO(r) + V_NLO(r) ∇²} δ(r − r'), the leading-order potential V_LO(r) and the next-to-leading-order potential V_NLO(r) are obtained by solving linear equations built from several R(r, t).
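For reference, the relations just described are often quoted in the following schematic form (standard conventions for equal-mass baryons, neglecting higher-order relativistic corrections; this is not copied from the present text):

```latex
R(\vec r, t) \equiv \frac{\langle 0 | B(\vec x + \vec r, t)\, B(\vec x, t)\, \overline{\mathcal{J}}(0) | 0 \rangle}{G_B(t)^2}
  = \sum_{n} A_n \psi_n(\vec r)\, e^{-(W_n - 2 m_B) t} + O\!\left(e^{-(W_{\rm th} - 2 m_B) t}\right),
\qquad
\left( \frac{1}{4 m_B} \frac{\partial^2}{\partial t^2} - \frac{\partial}{\partial t} + \frac{\nabla^2}{m_B} \right) R(\vec r, t)
  = \int d^3 r'\, U(\vec r, \vec r')\, R(\vec r', t),
\qquad
V_{\rm eff}(r) = \frac{1}{R(\vec r, t)} \left( \frac{1}{4 m_B} \frac{\partial^2}{\partial t^2} - \frac{\partial}{\partial t} + \frac{\nabla^2}{m_B} \right) R(\vec r, t).
```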
Source dependence of HAL QCD method and the next leading order potential
First, we discuss the quark source dependence of the HAL QCD method. We use the 2+1 flavor QCD configurations of Ref. [4], generated with the Iwasaki gauge action and the O(a)-improved Wilson quark action at a = 0.08995(40) fm, where m_π = 0.51 GeV, m_N = 1.32 GeV, and m_Ξ = 1.46 GeV. We employ both the wall source, q_wall(t) = Σ_y q(y, t), and a smeared source q_smear, whose smearing parameters are the same as those in Ref. [4]. The number of configurations and the simulation parameters are summarized in Table 1. In this work, we focus on the ΞΞ(1S0) channel, which has smaller statistical errors and belongs to the same representation as NN(1S0) in the flavor SU(3) limit. The upper panels in Fig. 1 show the effective leading-order potential V_eff(r) from the wall and smeared sources at L = 64, respectively. For the wall source, the potentials are almost unchanged from t = 10 to t = 18, while the results from the smeared source show a significant t dependence. The lower panels in Fig. 1 compare the two sources at t = 11 and 14. The results imply that V_eff^smear(r) tends to approach V_eff^wall(r) as t increases, while small discrepancies remain even at t = 14.
The small difference between V_eff^wall(r) and V_eff^smear(r) indicates the existence of the next-to-leading-order correction in the derivative expansion of the non-local kernel U(r, r'). Fig. 2 shows the leading-order potential V_LO(r) and the next-to-leading-order potential V_NLO(r), which are obtained by using R_wall(r, t) and R_smear(r, t). The effective leading-order potential from the wall source is almost identical to the leading-order potential, as shown in Fig. 2 (Left), while for the smeared source the next-to-leading-order correction to the potential, [V_NLO(r)∇²R(r, t)]/R(r, t), cannot be neglected. Fig. 3 shows the scattering phase shifts obtained from V_eff^wall(r), V_LO(r), and V_LO(r) + V_NLO(r)∇². These phase shifts suggest that ΞΞ(1S0) is an attractive but unbound channel at m_π = 0.51 GeV. As shown in Fig. 3 (Left), these potentials give consistent results at lower energies within statistical errors; the NLO correction appears only at higher energies (see Fig. 3 (Right)). These results show that (i) the derivative expansion of the non-local kernel converges well and the corresponding systematic uncertainty can be controlled, and (ii) the effective leading-order potential from the wall source is reliable at low energies in this system.
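As a numerical illustration of how V_LO and V_NLO are extracted from two independent R-correlators, the sketch below solves the corresponding 2×2 linear system at each radial point. The input arrays are placeholders for quantities that, on the lattice, come from finite differences of the measured R(r, t); the function name and argument layout are assumptions made for illustration only.

```python
import numpy as np

def lo_nlo_potentials(src_over_R_1, lapR_over_R_1, src_over_R_2, lapR_over_R_2):
    """For each r, solve
        src_over_R_i(r) = V_LO(r) + V_NLO(r) * lapR_over_R_i(r),   i = 1, 2
    where src_over_R_i = [(d_t^2/4m_B - d_t + lap/m_B) R_i] / R_i and
    lapR_over_R_i = (lap R_i) / R_i, for two sources (e.g. wall and smeared)."""
    v_lo, v_nlo = [], []
    for s1, l1, s2, l2 in zip(src_over_R_1, lapR_over_R_1, src_over_R_2, lapR_over_R_2):
        A = np.array([[1.0, l1], [1.0, l2]])
        b = np.array([s1, s2])
        x = np.linalg.solve(A, b)        # x = [V_LO(r), V_NLO(r)]
        v_lo.append(x[0]); v_nlo.append(x[1])
    return np.array(v_lo), np.array(v_nlo)
```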
The smeared source is tuned to have a large overlap with the single-baryon ground state, while the saturation of the single-baryon state for the wall source is known to be relatively slower than that of the smeared source [15]. Recently, some concerns have been expressed about the wall source for the study of two-baryon systems [20, 21]. Fig. 4 (Left) shows the effective masses of the single baryon from the smeared and wall sources. Although the ground-state saturation of the wall source is slower than that of the smeared source, the results from both sources converge around t ≈ 16. (Even at a much earlier time, t = 10, the difference of the effective mass between the two is as small as 2%.) As shown in Fig. 4 (Right) (the same figure as in Fig. 1), we confirm that the wall-source potentials at different t are consistent with each other, including the time slices at t ≈ 16, and thus the systematic errors from the single-baryon saturation are well under control. This also indicates that the contaminations from the single-baryon excited states are almost canceled in the potential at early time slices.
Consistency between the Lüscher's finite volume method and the HAL QCD method
Next, we discuss the consistency between the HAL QCD potential and the Lüscher finite volume formula [18], which extracts the scattering phase shift from the energy shift in the finite box. Fig. 5 (Left) shows the volume dependence of the lowest eigenvalue of the finite-volume Hamiltonian H = H_0 + V_eff^wall(r). These spectra are proportional to 1/L³ and converge to zero within error in the infinite-volume limit. This volume dependence of the lowest energy strongly supports an absence of the bound state, which is consistent with the phase shift analysis by the HAL QCD method in Fig. 3.
We next calculate the scattering phase shift using Lüscher's finite volume formula, with the relative momentum k determined from the measured two-baryon energy. Fig. 5 (Right) shows k cot δ_0(k) as a function of k², using the ground-state energy on three volumes and the 1st excited-state energy on L = 64, compared with k cot δ_0(k) in the infinite volume calculated from the HAL QCD potential (pink band). We confirm here not only the consistency between the two methods but also a smooth behavior of the finite-volume energies: k cot δ_0(k) for k² > 0 from the finite-volume energies agrees with the pink band from the potential, and k cot δ_0(k) for k² < 0 from Lüscher's formula smoothly converges to the positive intercept at k² = 0, consistent with the pink band.
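For reference, the S-wave Lüscher formula and the definition of k used in this kind of analysis are commonly written as follows (standard textbook form, not copied from the present text):

```latex
k \cot \delta_0(k) \;=\; \frac{2}{\sqrt{\pi}\, L}\, Z_{00}\!\left(1; q^2\right),
\qquad q \equiv \frac{k L}{2\pi},
\qquad Z_{00}(s; q^2) = \frac{1}{\sqrt{4\pi}} \sum_{\vec n \in \mathbb{Z}^3} \left(\vec n^{\,2} - q^2\right)^{-s},
```

where the zeta function Z_00 is defined by analytic continuation to s = 1, and k is obtained from the measured two-baryon energy W = 2(m_B² + k²)^{1/2}, i.e. k² = W²/4 − m_B².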
Diagnosis of the direct method from the HAL QCD potential
Finally, we reveal the origin of the fake plateau in the direct method. Using the low-lying eigenfunctions Ψ_n(r) and eigenvalues ΔE_n, obtained by solving the Hamiltonian H = H_0 + V_eff^wall(r) in the finite box, the R-correlator in Eq. (1) can be decomposed into these eigenmodes, with coefficients a_n^{wall/smear} determined by the orthogonality of Ψ_n(r). Fig. 6 (Left) shows the magnitude of the ratio |b_n/b_0| for both wall and smeared sources as a function of ΔE_n, where the filled (open) symbols represent positive (negative) values. This quantity represents the magnitude of the contamination of the excited states in the R-correlator. For example, the contamination of the 1st excited state is smaller than 1% for the wall source, while it is as large as 10% (with a negative sign) for the smeared source.
In Fig. 6 (Right), we show the effective energy shift reconstructed from the three low-lying modes for both wall and smeared sources. It well reproduces the fake, plateau-like behavior of the direct method around t = 15. We estimate that about t ∼ 100a ∼ 10 fm would be required for the smeared source to reach the correct ground state. Finally, we demonstrate the reliability of the eigenstates of the HAL QCD potential by using the projected effective energy shift defined from R^(n)(t) ≡ Σ_r Ψ_n(r) R(r, t) with the eigenfunction Ψ_n(r). Fig. 7 shows the projected effective energy shifts for the ground and 1st excited states, which give source-independent plateaux consistent with the eigenenergies ΔE_{0,1} within statistical errors. This demonstration establishes the correctness of the HAL QCD potential in this case, since its (finite-volume) eigenenergies are faithful to the finite-volume energies. Moreover, once the correct eigenstates are obtained from the potential, we can construct correlation functions projected onto these eigenstates, whose plateaux agree with the correct eigenenergies even at rather small t.
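The mechanism behind the fake plateau can be illustrated with a toy correlator built from a few elastic modes. The energy shifts and overlap coefficients below are hypothetical round numbers chosen to mimic the <1% (wall-like) and ~10% negative (smeared-like) contaminations quoted above; they are not the lattice values.

```python
import numpy as np

a = 1.0                                    # temporal spacing in lattice units
dE = np.array([0.000, 0.012, 0.030])       # hypothetical elastic energy shifts
b_wall = np.array([1.0, 0.008, 0.002])     # small positive contaminations
b_smear = np.array([1.0, -0.10, 0.03])     # ~10% negative 1st-excited contamination

def R(t, b):
    """Toy R-correlator: sum of elastic exponentials."""
    return float(np.sum(b * np.exp(-dE * t)))

def dE_eff(t, b):
    """Effective energy shift (1/a) * log[R(t)/R(t+a)]."""
    return np.log(R(t, b) / R(t + a, b)) / a

for t in range(10, 21):
    print(t, f"{dE_eff(t, b_wall):+.5f}", f"{dE_eff(t, b_smear):+.5f}")
# The smeared-like column flattens out near a wrong (even negative) value at
# moderate t, drifting toward the true dE_0 = 0 only at much larger t.
```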
Summary
In this paper, we have established the reliability of the HAL QCD method by checking its systematic uncertainties. Unlike the direct method, the time-dependent HAL QCD method is free from the elastic-state contamination in two-baryon systems, and the source dependence can be controlled within the derivative expansion. We have also shown that the derivative expansion of the non-local kernel converges, with the next-to-leading-order correction negligible at low energies. We have revealed that the fake plateau in the direct method is caused by the contaminations from low-lying elastic excited states, and established that the finite-volume eigenenergies from the HAL QCD potential agree with the effective energies of the projected correlation functions. | 2,471 | 2017-10-17T00:00:00.000 | ["Physics"] |
Development of Real-Time Hand Gesture Recognition for Tabletop Holographic Display Interaction Using Azure Kinect
The use of human gesturing to interact with devices such as computers or smartphones has presented several problems. This form of interaction relies on gesture interaction technology such as Leap Motion from Leap Motion, Inc, which enables humans to use hand gestures to interact with a computer. The technology has excellent hand detection performance, and even allows simple games to be played using gestures. Another example is the contactless use of a smartphone to take a photograph by simply folding and opening the palm. Research on interaction with other devices via hand gestures is in progress. Similarly, studies on the creation of a hologram display from objects that actually exist are also underway. We propose a hand gesture recognition system that can control the Tabletop holographic display based on an actual object. The depth image obtained using the latest Time-of-Flight based depth camera Azure Kinect is used to obtain information about the hand and hand joints by using the deep-learning model CrossInfoNet. Using this information, we developed a real time system that defines and recognizes gestures indicating left, right, up, and down basic rotation, and zoom in, zoom out, and continuous rotation to the left and right.
Introduction
Gesture interaction technology that measures and analyzes the movement of the user's body to control information devices or to link with content has been the topic of many studies [1][2][3][4][5][6][7]. Among them, the hand is the most easily used and is a medium capable of various formations owing to its high degree of freedom. Therefore, most gesture interactions include gestures that involve the use of the hands. Human interaction with a computer by way of gesturing relies on gesture interaction technology, representative examples of which are Azure Kinect and Leap Motion from Leap Motion, Inc. This technology enables gesture interaction to be used to easily control a variety of devices.
Recently, gesture recognition using a sensor such as Azure Kinect has been applied to a wide range of fields, from smart home and medical applications to automotive applications [8][9][10][11][12][13]. In a smart home, gesture interaction makes household appliances controllable without touching them directly. In medical applications, gesture interaction supports remote remedial exercise for the rehabilitation of burn patients. In the automotive field, it is difficult for a driver to use a touch screen while driving, and gesture interaction allows drivers to control touch-screen operations with gestures.
Tabletop holographic display is a device that allows the viewer to observe 3D hologram contents created from various angles with multiple cameras and view them anywhere from 360 degrees [14][15][16]. The system allows multiple viewers to view digital hologram images in the horizontal 360-degree direction by using digital micromirror device (DMD) capable of high-speed operation as a spatial light modulator (SLM). In addition, by applying a lenticular lens, holographic images can be viewed in a range of 15 degrees or more in the vertical direction. The device can display 22,000 binary hologram image data per second, with 1024 and 768 pixels in the horizontal and vertical directions, respectively.
The color hologram display device consists of a total-internal-reflection prism that illuminates the DMDs, a prism capable of separating and recombining the three primary colors of light across three DMDs, and a fiber-combined laser source emitting red, green, and blue light at wavelengths of 660, 532, and 473 nm. Because the holographic image is created by the interference of light, the hologram produced by the tabletop holographic display must be observed in a dark environment from which all light is excluded. Therefore, in this dark environment without light, the image shown on the tabletop holographic display needs to be controlled by a computer.
Since light is not present in the experimental environment, only depth information is used to develop gesture interaction using hand gestures. Thus, high accuracy hand detection using color information is not applicable. In this experiment, we use only the depth image with the high quality depth camera Azure Kinect. In addition, depth images are disturbed by structures such as optical components and eye-tracking cameras attached to tabletop holographic displays. To interact with the tabletop hologram display, we designed a new pipeline as shown in Figure 1. This pipeline is described in full in the next sections, beginning with the physical setup, to our proposed method for gesture interaction.
Azure Kinect
3D depth sensing technologies include the stereo, structured-light, and Time-of-Flight (ToF) approaches. The stereo approach uses the viewpoint disparity between two 2D cameras, similar to the way a person judges distance with both eyes. The structured-light method recognizes depth by projecting a specific light pattern onto an object and calculating how the pattern is deformed. The ToF method recognizes depth by calculating the travel time of light reflected from an object, and it can acquire better depth quality than the other methods in an indoor environment. Azure Kinect is a Microsoft ToF-based depth camera released in 2019. Figure 2a shows the configuration of Azure Kinect, which provides high-quality depth information. Figure 2b shows the Azure Kinect field of view: Depth Narrow Field of View (NFOV) is a mode that provides depth information for a narrow area, and Depth Wide Field of View (WFOV) is a mode that provides depth information for a wide area.
Gesture Interaction
Gesture interaction takes place when a user makes a gesture to command a device such as a computer; in other words, it is noncontact interaction between the user and the computer. Among the human limbs, the hand is the most easily used and is capable of conveying various expressions, so gesture interaction is mostly performed with hand gestures. Hand gesture interaction necessitates accurate hand detection and hand joint information [17,18].
CrossInfoNet [19] is a deep learning model that detects hands using depth information. Figure 3 shows the structure of CrossInfoNet. Unlike other existing models that process the entire hand image to find joint information [20][21][22][23][24][25], CrossInfoNet first detects the entire hand and then re-detects the palm and the fingers separately. First, the entire hand is detected to obtain approximate finger and palm joint information. The acquired palm and finger joint information is then re-extracted in two different branches, and the joint information found in each branch is transferred to the other branch: the information about the palm is delivered to the branch that finds the fingers, and the information about the fingers is transferred to the branch that finds the palm. The branch containing the palm information also receives rough palm joint information via a skip connection, together with the fine finger information shared from the other branch. When these types of information are subtracted, the coarse palm information and the fine palm information disappear and only the fine finger information remains; in this way, finger joint information is obtained. Likewise, in the branch in which the fingers are found precisely, the approximate finger joint information received via the skip connection, the finely found finger joint information, and the shared palm information are subtracted, and the result is elaborate palm information. The finger and palm information obtained in this way is continuously shared between the branches as learning progresses, resulting in more accurate finger and palm joint information. Finally, the finger and palm joint information is combined to obtain the joint information of the entire hand. Once hand detection and hand joint information are obtained, gestures are defined using the joint information of the user's hand.
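The cross-branch information sharing described above can be sketched roughly as follows. This is not the published CrossInfoNet architecture or its training code; the layer sizes, joint counts, and the simple feature subtraction used to isolate palm and finger information are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class CrossInfoSketch(nn.Module):
    """Toy two-branch network: a shared trunk extracts whole-hand features,
    palm/finger branches refine them, and each branch subtracts the features
    shared with the other branch to keep only its own detail."""
    def __init__(self, feat_dim=64, n_palm_joints=2, n_finger_joints=12):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU())
        self.palm_branch = nn.Conv2d(feat_dim, feat_dim, 3, padding=1)
        self.finger_branch = nn.Conv2d(feat_dim, feat_dim, 3, padding=1)
        self.palm_head = nn.Linear(feat_dim, n_palm_joints * 3)      # (x, y, z) per joint
        self.finger_head = nn.Linear(feat_dim, n_finger_joints * 3)  # (x, y, z) per joint

    def forward(self, depth):                      # depth: (B, 1, H, W) depth image
        shared = self.trunk(depth)                 # coarse whole-hand features
        palm_f = self.palm_branch(shared)          # palm-oriented features
        finger_f = self.finger_branch(shared)      # finger-oriented features
        palm_only = palm_f - finger_f              # remove finger share -> palm detail
        finger_only = finger_f - palm_f            # remove palm share -> finger detail
        palm_vec = palm_only.mean(dim=(2, 3))
        finger_vec = finger_only.mean(dim=(2, 3))
        return self.palm_head(palm_vec), self.finger_head(finger_vec)
```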
It is important that each gesture should be defined such that it is intuitive and easy for the user to learn [26]. As a good example, operating a smartphone is similar to the operations performed when handling a book or paper in real life, such as switching the screen by swiping to the next page or pushing and lowering it, thus users are easily able to use the phone without the need to learn. Gesture interaction allows users to control the device they intend using without having to touch the mouse, keyboard, remote control, or screen. In addition, people with disabilities can control the device with simple hand gestures, thereby improving usability and convenience. In addition, gesture interaction is useful in situations in which it is difficult to operate other devices, such as a doctor working in an operating room while wearing work gloves.
System Configuration
When the gesture determined in the video is input to the server using UDP/IP socket communication, the server transmits the corresponding hologram to the sending end. The hologram reads the received signal and shows the hologram display corresponding to the signal. The 3D hologram data created via computer-generated holography (CGH) are stored in the hologram data storage section. The information of the user's hand gesture obtained from the Azure Kinect video is transferred to the Interaction Controller section using UDP/IP socket communication. In the Interaction Controller section, the hologram sending unit requests new hologram contents that have undergone appropriate actions for the corresponding gesture. The hologram transmitter that receives the request displays the hologram image with the gesture on the 360-degree Viewable Tabletop Holographic Display. The user can observe the new hologram display by viewing the gesture action taken in real time. Figure 4 shows the overall structure of the 3D holographic display system capable of interacting with hand gestures.
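The hand-off between the gesture recognizer and the interaction controller described above can be sketched with a few lines of UDP socket code. The host, port, and plain-string message format are placeholders, not values taken from the paper.

```python
import socket

CONTROLLER_ADDR = ("127.0.0.1", 9000)  # placeholder address of the Interaction Controller

def send_gesture(gesture_name: str) -> None:
    """Recognizer side: send the recognized gesture name over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(gesture_name.encode("utf-8"), CONTROLLER_ADDR)

def run_controller() -> None:
    """Controller side: receive gesture names and request the matching hologram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(CONTROLLER_ADDR)
        while True:
            data, _ = sock.recvfrom(1024)
            gesture = data.decode("utf-8")
            print("received gesture:", gesture)  # here: request new hologram content
```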
Proposed Gesture Interaction
Color information is used for most high performance hand detection models [27]. However, color information cannot be used in the tabletop holographic display environment. Therefore, using only depth information, the characteristics of the tabletop holographic display installed in many other structures are taken into account. Then, the necessary depth information is retrieved using background subtraction and Region of Interest (ROI) to detect the hand. The ROI is the area limited to processing only the region of interest in the image. Furthermore, gestures are defined using the joint information of the detected hand. Then, when a gesture corresponding to the subject's motion is detected, the gesture prediction is output in real time. The system was designed to operate with a delay of one second between gestures. Figure 5 shows the framework of the hand gesture recognition system based on depth frames.
Background Subtraction
We use the depth difference between the first frame captured when the camera is turned on and each subsequent frame, and we keep only the depth information of pixels whose difference exceeds a threshold. Figure 6 shows the background subtraction method. When the camera is turned on, only the background and structures other than the user are present in the first received frame. Thereafter, the user appears in the incoming frames, and the depth difference from the first frame is calculated continuously. By keeping only the pixels whose depth difference exceeds a predetermined threshold, the background and surrounding structures are erased, and the resulting image contains only the depth information of the user and the user's hand. In a tabletop holographic display using Azure Kinect, the bottom part of the tabletop is closer to the camera than the hand. In addition, several cameras for gaze tracking are attached to the tabletop holographic display, which also produce depth information and interfere with accurate hand detection. Therefore, the depth information of everything except the moving person has to be erased, and the depth information of the background and the structures is removed using background subtraction [28].
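A minimal sketch of this step is shown below. The frames are assumed to be depth arrays in millimetres, and the threshold value is a placeholder, not the one used in the paper.

```python
import numpy as np

DEPTH_DIFF_THRESHOLD_MM = 50.0  # placeholder threshold

def subtract_background(first_frame: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Keep only pixels whose depth changed by more than the threshold relative
    to the first (user-free) frame; all other pixels are zeroed out."""
    diff = np.abs(frame.astype(np.float32) - first_frame.astype(np.float32))
    mask = diff > DEPTH_DIFF_THRESHOLD_MM
    return np.where(mask, frame, 0)
```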
Set ROI
The background and structure were erased with background subtraction. However, a certain amount of noise remains, thus the ROI is set such that the hand can be detected only within the ROI. In the holographic display environment, the position of the camera and the position at which the gesture is made is always constant, thus the ROI is set statically. The mouse pointer is dragged to where the person actually stands in the image to find the coordinates. The ROI is drawn based on these coordinates.
Hand Detection and Bounding Box
Background subtraction erases the depth of the surrounding structures, so that only the depth information of the user remains for finding the user's hand within the ROI using CrossInfoNet. We trained CrossInfoNet with the NYU hand dataset. As a result, information on 14 joints of the hand, including the center of the palm, can be obtained, and each of the 14 joints carries x, y, and z coordinates. After the hand is detected, a bounding box is drawn based on the center of the hand. The bounding box is needed to visualize the thresholds used for outputting the up, down, left, right, and continuous rotation left/right gestures. It was drawn as large as the threshold in each of the up, down, left, and right directions, centered on the hand, and it extends 15 cm in front of and behind the hand. Thus, the bounding box reflects the thresholds used for gesture recognition, and when a finger passes beyond it, the corresponding gesture is output.
Gesture Definition and Gesture Recognition
Eight possible gestures can be made: basic rotation up, down, left, right, and zoom in, zoom out, and continuous rotation to the left and right. Figure 7 shows 8 hand gestures. First, the basic rotation is divided into the motion of swiping the index finger and middle finger all the way up, down, left, and right. Intuitively, when swiping in the up, down, left, and right directions, the same gestures as swiping directions are output. Second, the thumb and index finger are used to zoom in and out. The distance between the thumb and forefinger is collected, and the two fingers are spread apart to output the gesture of zooming in. Spreading the thumb and index finger and pinching the two fingers together outputs a zoom out action. Lastly, continuous rotation to the left and right is recognized as a gesture of spreading all fingers and swiping to the left and right.
Each gesture was defined using the relative positional relationships of the 14 joints obtained from the hand detection result, the inner product for each finger, and the distances between fingers. The inner product is the vector inner product between each finger and the center of the hand, normalized to lie between −1 and 1. When a finger is bent, the inner product takes a negative value close to −1 by the nature of the vector inner product, and when the finger is extended, the inner product approaches 1. This method is used to decide, for each of the five fingers, whether it is bent or stretched.
Up, down, left, and right gestures are output when the position of the middle-finger joints relative to the center of the hand exceeds the threshold in the corresponding up, down, left, or right direction and the inner product between the ring finger and the palm is negative. The zoom-in and zoom-out gestures use a heap-like array that continuously stores the distance between the thumb and the index finger: when the stored distances show an increasing trend that exceeds the threshold, a zoom-in gesture is output, and when the distance decreases beyond the threshold, a zoom-out gesture is output. For the continuous rotation left and right gestures, the inner product with the palm must exceed the threshold for all five fingers and, unlike the basic left and right rotations, the ring finger and the little finger must also exceed their positional thresholds relative to the palm center; when each threshold is exceeded, the gesture is output. Because the rotation gestures overlap with the basic left and right gestures and the continuous left and right gestures, they are distinguished by the inner product of the vector between each finger and the palm.
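The two ingredients just described, finger-extension tests via normalized inner products and a short distance history for zoom detection, can be sketched as follows. The joint layout, window length, and threshold values are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np
from collections import deque

def extension_score(finger_tip, finger_base, palm_center):
    """Normalized inner product between the finger direction and the
    palm-to-finger direction: close to +1 for an extended finger, negative when bent."""
    v1 = np.asarray(finger_tip, float) - np.asarray(finger_base, float)
    v2 = np.asarray(finger_base, float) - np.asarray(palm_center, float)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9))

class ZoomDetector:
    """Stores recent thumb-index distances and reports zoom in/out when the
    distance shows a consistent increasing or decreasing trend."""
    def __init__(self, window=10, trend_threshold_mm=30.0):
        self.history = deque(maxlen=window)
        self.trend_threshold_mm = trend_threshold_mm

    def update(self, thumb_tip, index_tip):
        d = float(np.linalg.norm(np.asarray(thumb_tip, float) - np.asarray(index_tip, float)))
        self.history.append(d)
        if len(self.history) < self.history.maxlen:
            return None
        trend = self.history[-1] - self.history[0]
        if trend > self.trend_threshold_mm:
            return "zoom in"
        if trend < -self.trend_threshold_mm:
            return "zoom out"
        return None
```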
Experiment Method
In the experiment, a hologram using only a green laser was used, and experiments were conducted at various distances. First, we experimented with 10 people on the eight gestures we defined. Figure 8a shows the environment of the tabletop holographic display. All the subjects were aware of the gesture operation method and of the 1-second delay, and performed 10 trials per gesture. All subjects were tested in the same environment. As shown in Figure 8b, the distance between the camera and the subject's hand was kept constant at 35-50 cm, and the subject stood in line with the Azure Kinect installed on the tabletop holographic display. Each subject thus performed 80 gesture trials, so that 100 experiments were performed for each gesture in total. Second, the distance between the camera and the subject's hand was kept at 50-60 cm. Finally, the distance was changed to 60-70 cm.

Figure 9 shows the result of background subtraction. Figure 9a shows an image that includes not only the depth information of the user but also the structures and background. As shown in Figure 9b, the depth information of the structures and background is removed by the background subtraction, and only the depth information of the user remains. Figure 10a shows the result of setting the area in which to find the user's hand as the ROI. Figure 10b shows a 3D bounding box visualizing the detected hand joint information and the thresholds within the ROI. Figure 11 shows the result for the basic rotation gestures: in the default state, the up gesture rotates the hologram image upward, the down gesture rotates it downward, and gesturing to the right and left rotates the hologram image to the right and left, respectively. Figure 12 shows the results of the zoom in and zoom out gestures: in the default state, the hologram image becomes larger when the zoom in gesture is detected and smaller when the zoom out gesture is performed.

The precision, recall, and F1 score [29,30] of each gesture, tested 100 times, are reported below. TP denotes the case in which a gesture is performed and the correct gesture is output; FP denotes the case in which a gesture is performed but another gesture is output; FN denotes the case in which a gesture is performed but no gesture is output. Table 1 shows the results at a distance of 35-50 cm. For the up gesture, two FN results were obtained, and for the zoom in and zoom out gestures, one FP result each was obtained. For the continuous rotation right gesture, seven FN results were obtained. The precision, recall, and F1 scores were calculated from these results. All the gestures had a precision value of 100, except for the zoom in and zoom out gestures, which yielded a precision of 99 each. A recall value of 100 was obtained for all gestures except the up gesture and the continuous rotation right gesture.
Results
The recall values of the up and continuous rotation right gestures were 98 and 93, respectively. The F1 score calculated from the precision and recall values was 98.98 for the up gesture, 99.49 for the zoom in and zoom out gestures, and 96.37 for the continuous rotation right gesture; the F1 score of the remaining gestures was 100.

Table 1. Experiment results of precision, recall, and F1 score for each gesture of 10 subjects at 35-50 cm (100 attempts per gesture; F1 values as quoted in the text).

Gesture                     TP    FP    FN    Precision    Recall    F1
Left                       100     0     0       100        100     100
Right                      100     0     0       100        100     100
Up                          98     0     2       100         98     98.98
Down                       100     0     0       100        100     100
Zoom in                     99     1     0        99        100     99.49
Zoom out                    99     1     0        99        100     99.49
Continuous rotation left   100     0     0       100        100     100
Continuous rotation right   93     0     7       100         93     96.37

Table 2 shows the results at a distance of 50-60 cm. For the zoom in and zoom out gestures, Table 2 also contains 1 false positive result each. False negative results were obtained 6 times for the left gesture, 9 times for the right gesture, 2 times for the down gesture, 7 times for the continuous rotation left gesture, and 12 times for the continuous rotation right gesture. All the gestures had a precision value of 100, except for the zoom in and zoom out gestures, which yielded a precision of 99 each. A recall value of 100 was obtained for the up, zoom in, and zoom out gestures. The recall value of the left gesture was 94, the right gesture 91, the down gesture 98, the continuous rotation left gesture 93, and the continuous rotation right gesture 88. The F1 score was 96.90 for the left gesture, 95.28 for the right gesture, 100 for the up gesture, 98.98 for the down gesture, 99.49 for the zoom in and zoom out gestures, 96.37 for the continuous rotation left gesture, and 93.61 for the continuous rotation right gesture.

Table 2. Experiment results of precision, recall, and F1 score for each gesture of 10 subjects at 50-60 cm (100 attempts per gesture; F1 values as quoted in the text).

Gesture                     TP    FP    FN    Precision    Recall    F1
Left                        94     0     6       100         94     96.90
Right                       91     0     9       100         91     95.28
Up                         100     0     0       100        100     100
Down                        98     0     2       100         98     98.98
Zoom in                     99     1     0        99        100     99.49
Zoom out                    99     1     0        99        100     99.49
Continuous rotation left    93     0     7       100         93     96.37
Continuous rotation right   88     0    12       100         88     93.61

Table 3 shows the results at a distance of 60-70 cm. For the zoom in and zoom out gestures, Table 3 also contains 1 false positive result each. False negative results were obtained 10 times for the left gesture, 10 times for the right gesture, 9 times for the up gesture, 6 times for the down gesture, 17 times for the continuous rotation left gesture, and 21 times for the continuous rotation right gesture. All the gestures had a precision value of 100, except for the zoom in and zoom out gestures, which yielded a precision of 99 each. A recall value of 100 was obtained for the zoom in and zoom out gestures. The recall value of the left gesture was 90, the right gesture 90, the up gesture 91, the down gesture 94, the continuous rotation left gesture 83, and the continuous rotation right gesture 79. The F1 score was 94.73 for the left gesture, 94.73 for the right gesture, 95.28 for the up gesture, 96.90 for the down gesture, 99.49 for the zoom in and zoom out gestures, 90.71 for the continuous rotation left gesture, and 88.26 for the continuous rotation right gesture. Table 4 combines the results of all gestures of the 10 subjects at 35-50 cm, 50-60 cm, and 60-70 cm. The combined results of all gestures at 35-50 cm were: precision 0.99747, recall 0.98872, and F1 score 0.99307.
The combined results of all gestures at 50-60 cm were: precision 0.99747, recall 0.95500, and F1 score 0.97577. The combined results of all gestures at 60-70 cm were: precision 0.99747, recall 0.90875, and F1 score 0.95104. Tables 1-3 show one false positive result for each of the zoom in and zoom out gestures. The reason for these false positives is the 1-second delay applied each time a gesture is output: the duration of the gesture is shorter than the delay. In Table 1, nine gestures remained undetected, including two up gestures and seven continuous rotation right gestures. These false negative results for the up and continuous rotation right gestures occurred because the threshold was not exceeded. The hand of one inexperienced subject was smaller than that of the other subjects, and the hand movement for the continuous rotation right gesture is more unnatural than for the other gestures, so the subject with small hands was not always able to exceed the specified threshold. The experiments in Tables 2 and 3 were conducted at greater distances than in Table 1, and the number of undetected gestures increases as the distance increases. As shown in Table 4, the best results were obtained at 35-50 cm. In particular, the number of undetected gestures of the person with small hands increased significantly with distance. For most subjects with ordinary hand sizes, the number of undetected gestures did not increase significantly until the distance between the camera and the user reached 70 cm. Experiments were also conducted at 70 cm or more, but the hand was not accurately detected and almost no gestures were output. In addition, for a subject whose wrist range of motion was limited, it was difficult to perform the right gesture and the continuous rotation right gesture, and false negative results therefore occurred frequently.
Discussions
If the user and the camera are not in a straight line or are turned more than 30 degrees, the system cannot properly detect the hand. Furthermore, when the camera is rotated more than 30 degrees, the shape of the hand visible on the camera no longer looks like a hand. In addition, only 10 subjects participated in the experiment, thus personal characteristics can influence the outcome. This can be solved by increasing the number of subjects to obtain more objective results.
The field of gesture interaction using Azure Kinect is expanding, such as gesture interaction for drivers in the automotive field and gesture interaction in home appliances by using Azure Kinect. Unlike the approach of these studies, this study developed gesture interaction using only depth information in situations where no light is available. This is a gesture interaction that is more robust to environmental changes than other approaches. If the above problems are solved, gesture interaction using only depth information can be applied to the smart home and automobile fields.
Conclusions
In this study, we designed a gesture interaction system that uses Azure Kinect to enable the hologram displayed on the tabletop holographic display to be controlled in real time without any additional equipment. Because the tabletop holographic display requires complete darkness, only depth information is available; we therefore used Azure Kinect, which provides high-performance depth information, and defined intuitive and easy gestures to make the system user friendly. As a result, precision and recall values of 0.99747 and 0.98872 were obtained, respectively, and an excellent F1 score of 0.99307 was achieved. However, people with small hands could encounter the problem of their gestures going undetected. The resulting false negative rate could be reduced by allowing the thresholds to adapt to the size of the user's hand, and in the future we plan to improve the system so that users with small hands can use it without any prior adjustment. In addition, the experiments showed that bending the wrist to the right was more difficult for the subjects than the other movements, so the thresholds of the gestures that twist the wrist to the right should be lowered compared with the other gestures. This system enables the user to control the hologram of the tabletop holographic display without using other equipment. As the demand for holograms and the amount of research in this field increase, implementing a larger number of gestures in the future would enable the user to control the hologram even more freely with their own hands, without requiring additional equipment. | 6,532.4 | 2020-08-01T00:00:00.000 | ["Computer Science", "Art"] |
Structural and Optical Characteristics of γ-In 2 Se 3 Nanorods Grown on Si Substrates
This study attempted to grow single-phase γ-In2Se3 nanorods on Si (111) substrates by metal-organic chemical vapor deposition (MOCVD). High-resolution transmission electron microscopy (HRTEM) and selected area electron diffraction (SAED) confirmed that the In2Se3 nanorods are singularly crystallized in the γ phase. The photoluminescence of γ-In2Se3 nanorods at 15 K was referred to as free and bound exciton emissions. The bandgap energy of γ-In2Se3 nanorods at room temperature was determined to be ∼1.99 eV, obtained from optical absorption.
The III-VI semiconductors have been the subject of many investigations due to their peculiar electrical and optical properties, and their potential applications in electronic and optoelectronic devices, such as phase-change random access memories (PRAMs), solid-state batteries, and solar cells [1][2][3][4].Among these III-VI semiconductors, In 2 Se 3 is a defective structure of tetrahedral bonding, where one-third of the sites is vacant and forms a screw array along the c axis.Due to many different crystalline phases existing in In 2 Se 3 , growth of high-quality In 2 Se 3 with a single phase is a challenging task.Several different methods have been demonstrated to grow In 2 Se 3 epilayers, such as evaporation [5,6], the Bridgman-Stockbarger Method [7,8], and metal-organic chemical vapor deposition (MOCVD) [9][10][11].Recently, one-dimensional III-VI semiconductor nanostructures, such as nanowires and nanotubes, exhibited novel and device applicable physical properties, which can be used in a wide variety of applications in nanoelectronic and nanooptoelectronic devices [12][13][14][15][16][17].For example, α-phase layerstructured In 2 Se 3 nanowires have been grown and have shown a large anisotropy in both structure and conductivity [12].These III-VI semiconductor nanostructures can afford an efficient charge carrier transfer while maintaining a small cross-section for the applications.However, so far, little attention has been given to γ-phase In 2 Se 3 (γ-In 2 Se 3 ) nanorods.Bulk γ-In 2 Se 3 has been of particular interest for photovoltaic applications because it can be an absorbing layer in a solar cell.The one-dimensional γ-In 2 Se 3 nanostructures may be more interesting materials since they exhibit excellent light absorption owing to their high surfaceto-volume ratio.To be an absorbing layer in solar cells, γ-In 2 Se 3 requires deposition on different substrates with a high crystalline quality.It is well known that Si can be a good substrate to grow nanostructures because it offers many attractive advantages, such as good doping properties and thermal conductivity.If Si substrate can be utilized in growing γ-In 2 Se 3 nanostructures, various devices on Sibased integrated circuits could be developed in the future.
In our previous work, the energy relaxation of hot electrons in γ-In2Se3 nanorods was investigated [16], and it was found that the main path of energy relaxation for the hot electrons is LO-phonon emission. In this study, the detailed structures of the γ-In2Se3 nanorods on Si substrates were investigated by scanning electron microscopy (SEM), transmission electron microscopy (TEM), and selected area electron diffraction (SAED). Also, the optical properties were studied by photoluminescence (PL), cathodoluminescence (CL), and optical absorption measurements.

The γ-In2Se3 nanorods were directly grown on Si (111) substrates without any buffer layers using an MOCVD system with a vertical reactor [16]. The nanorods were grown using the liquid metal-organic compound trimethyl-indium (TMIn) at atmospheric pressure. Gaseous H2Se was employed as the reactant source material, and gaseous N2 was used as the carrier gas. Before growth, the Si substrates were baked at 1100 °C for 10 min in gaseous HCl and H2 in order to remove the native oxide. After this thermal etching process, the reactor was cooled down to 425 °C and the growth of the γ-In2Se3 nanorods was started; the total growth time was 50 min. The gaseous flow rate was kept at 3 μmol/min for TMIn and 40 μmol/min for H2Se, with the H2Se gas supplied as a mixture of 85% hydrogen and 15% H2Se. The gaseous flow rate and temperature play an essential role in growing nanorod structures of γ-In2Se3. The TEM lattice image and the SAED pattern of an individual In2Se3 nanorod were taken with a JSM-2100F (JEOL) transmission electron microscope. The room-temperature CL measurement and the morphology of the nanorods were obtained using the JSM-7001F (JEOL). The PL measurements were performed using a 532 nm semiconductor laser as the excitation source, and the temperature-dependent PL spectra were measured in a closed-cycle helium cryostat and analyzed by means of a 0.75 m monochromator and a silicon detector.
A cross-section image of the SEM for the grown In 2 Se 3 nanorods is shown in Figure 1(a).The SEM image was taken with 10 keV of electron energy to present a magnification of 30,000.As shown in Figure 1(a), the In 2 Se 3 nanorods are straight and not tapered.The average diameter and the average height of the In 2 Se 3 nanorods are about 64 and 460 nm, respectively.To understand the structural and morphological characteristics of nanorods, TEM investigations were carried out.Figure 1(b) shows a low magnification TEM image of In 2 Se 3 nanorods.The diameter and height of In 2 Se 3 nanorods are in good agreement with the SEM image shown in Figure 1(a).Figure 1(c) is a high-resolution TEM (HRTEM) image recorded from a segment of an In 2 Se 3 nanorod.The image exhibits the ordering feature across its entire width, with a uniform periodicity of ∼1.7 nm.This superlattice structure within the nanorods is a structural characteristic due to the effect of the vacancy ordering [12].A similar behavior was also reported for the vacancy ordering in α-In 2 Se 3 nanowires and CuInSe 2 -CdS Core-Shell nanowires [12,13].The SAED pattern taken along the [006] zone axis of In 2 Se 3 nanorods is displayed in the inset of Figure 1(c).The SAED pattern shows a rectangular array with characteristic distances of d 1 = 28.9×10−1 nm and d 2 = 7.93 × 10 −1 nm, respectively.These regular spots in SAED suggest an epitaxial orientation relationship between the In 2 Se 3 nanorods and substrates; that is, the In 2 Se 3 nanorods are single crystalline.The SAED pattern is consistent with the previous established pattern for γ-In 2 Se 3 with basis vectors of (−1,1,0) and (0,0,6) [17].It is noted that the growth direction of the nanowire in Figure 1(c) is not along the [006] direction, but it makes an angle of 13.7 • with respect to the [006] direction.Anyhow, the HRTEM image allows us to confirm that the grown In 2 Se 3 nanorods are well crystallized in the γ phase.
The PL spectrum of the γ-In2Se3 nanorods on the Si (111) substrate at 15 K is shown in Figure 2. Three Gaussian components, peaked at 2.126, 2.147, and 2.155 eV, are resolved in Figure 2. The full width at half maximum (FWHM) of the PL peak at 2.155 eV is 8 meV, indicating good crystal quality for the γ-In2Se3 nanorods. In previous reports, the PL peak of γ-In2Se3 epilayers at low temperatures was referred to as the exciton-related emission [9]. Therefore, the main PL peak positioned at 2.155 eV is suggested to be the free exciton emission, and the peaks on the lower-energy side are suggested to be bound exciton emissions. Figure 3 shows the temperature-dependent PL spectra from 15 to 180 K. The PL peak energy of the γ-In2Se3 nanorods is red-shifted with increasing temperature. The open circles in the inset of Figure 3 show the temperature-induced bandgap shrinkage extracted from the PL spectra, which was fitted by the Varshni empirical formula

E(T) = E_0 − αT² / (T + β), (1)

where E_0 is the bandgap at 0 K, α is the average temperature coefficient, and β is the Debye temperature of the material. The fit to the experimental data is shown by the solid line in the inset of Figure 3; the experimental results are well described by Varshni's equation with E_0 = 2.14 eV, α = 1.1 × 10⁻³ eV/K, and β = 173 K. By analyzing the variation of the PL peak energy as a function of temperature with the Varshni equation, the room-temperature PL peak energy of the γ-In2Se3 nanorods was evaluated to be ∼1.95 eV.
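A numerical sketch of this Varshni fit is given below. The temperature and energy arrays are placeholders standing in for the peak positions read off the PL spectra; only the quoted best-fit parameters (E_0 = 2.14 eV, α = 1.1 × 10⁻³ eV/K, β = 173 K) are taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def varshni(T, E0, alpha, beta):
    """Varshni empirical formula E(T) = E0 - alpha*T^2/(T + beta)."""
    return E0 - alpha * T**2 / (T + beta)

# Placeholder "measured" peak energies generated around the quoted parameters.
T_data = np.array([15.0, 40.0, 80.0, 120.0, 160.0, 180.0])            # K
E_data = varshni(T_data, 2.14, 1.1e-3, 173.0) + np.random.normal(0, 1e-3, T_data.size)

popt, _ = curve_fit(varshni, T_data, E_data, p0=(2.1, 1e-3, 150.0))
print("fit parameters:", popt)
print("extrapolated 300 K peak energy:", varshni(300.0, *popt))        # ~1.95 eV
```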
To explore the bandgap energy of the γ-In2Se3 nanorods at room temperature, the CL and optical absorption spectra were investigated. The room-temperature CL spectrum of the γ-In2Se3 nanorods is shown in Figure 4(a). The peak energy of the CL is 1.95 eV, in good agreement with the value predicted by Varshni's relation, as displayed in the inset of Figure 3. The optical absorption spectrum taken at room temperature is displayed in Figure 4(b). It is known that γ-In2Se3 is a direct bandgap semiconductor; thus, the absorption coefficient α near the band edge follows the relation for a direct bandgap transition [18],

αhν = A(hν − E_g)^(1/2), (2)

where A is a constant, hν is the photon energy, and E_g is the energy gap between the valence band and the conduction band. The bandgap can be derived by extrapolating the linear part of the curve to zero absorption. The straight line in Figure 4(b) shows this extrapolation, and the bandgap energy was estimated to be ∼1.99 eV. Obtaining the bandgap energy and the absorption coefficient is essential for developing γ-In2Se3 nanorods as absorber layers in photovoltaic applications.

Figure 5(a) shows the PL spectrum of the γ-In2Se3 nanorods at 15 K in the infrared spectral range. A broad PL peak located at 1.24 eV was observed; the sharp peak at 1.16 eV is an emission related to the excitation laser. To find the origin of the 1.24 eV PL, the dependence of the PL intensity on the excitation intensity was studied. The PL spectra with the excitation power density varied from 17 to 270 W/cm² are shown in Figure 5. The open circles in Figure 6 show the PL intensity as a function of the laser excitation density, indicating a linear increase of the PL intensity with excitation density. The dependence of the PL intensity I on the excitation density P can be fitted by the relation [19]

I = C P^m, (3)

where C and m are constants. The exponent m depends on the mechanism of recombination: for excitonic recombination m = 1, while for free-carrier recombination m = 2; when m < 1, it may indicate a donor-acceptor pair transition or a free-to-bound transition [20,21]. The solid line in Figure 6 displays the fit from (3). The value of m was determined to be around 0.6, which corresponds to emission from a donor-acceptor pair transition or a free-to-bound transition. In Figure 6, the PL peak at 1.24 eV shifts to the high-energy side with increasing excitation density. This blue shift, originating from the increased interaction between more closely spaced donor-acceptor pairs, is a characteristic of the donor-acceptor pair transition. Therefore, according to these observations, the observed PL peak at 1.24 eV can be ascribed to the donor-acceptor pair transition in the γ-In2Se3 nanorods.

In summary, γ-In2Se3 nanorods deposited on Si (111) substrates were grown by MOCVD using dual-source precursors. The crystal structure and morphology of the In2Se3 nanorods were characterized by SEM and HRTEM. The SAED analysis taken along [006] reveals a rectangular spot pattern, confirming single-crystalline growth in the γ phase. The optical absorption, CL, and temperature-dependent PL have been investigated. The PL at 15 K contains three peaks, which are identified with recombination of free excitons and bound excitons. The energy of the direct bandgap at room temperature was found to be ∼1.99 eV. An infrared PL peak at 1.24 eV was observed at 15 K and assigned to the donor-acceptor pair transition.
Figure 3 :
Figure 3: The temperature dependence of PL spectra in the γ-In 2 Se 3 nanorods.The inset shows the temperature dependence of peak position in PL (open circles).The solid line in the inset shows the fit according to (1).
2 )Figure 4 :
Figure 4: (a) CL and (b) optical absorption spectra of γ-In 2 Se 3 nanorods at room temperature.The red solid line shows the fit according to (2).
Figure 6 :
Figure 6: PL intensity of the 1.24 eV peak as a function of excitation density.The solid line shows the fit according to (3).
Figure 2: PL spectrum of γ-In2Se3 nanorods at 15 K. Three peaks are fitted with a Gaussian line shape (solid line) to the experimental data (open circles).
"Materials Science",
"Physics"
] |
An Assessment of the Impact of International Aid on Basic Education in Ghana
In Ghana and many other developing countries, substantial investment in and provision of quality education have been identified as the surest path out of persistent poverty. The hope of accelerated development is now hinged on the provision of quality education for its citizenry. However, the government's inability to raise enough revenue is the result of varied factors including, but not limited to, macroeconomic and growth instability, high debt ratios, weak tax administration and large informal (non-taxable) sectors. The state's intent and desire to provide quality, accessible education to its citizens, combined with the constraint of inadequate financial resources, has compelled Ghana to seek external assistance to fill the resource gaps. Bilateral and multilateral donors have responded in diverse ways to this call and, over the last two decades, aid has increased in quantity and prominence in Ghana's education sector. Several reasons could be assigned to the quick response of these bilateral and multilateral donors to Ghana's call for aid. This paper seeks to comparatively assess the impact international aid has had on Ghana's educational sector over the last two decades in terms of access to "quality" education, educational financing and infrastructure expansion at the basic level. It argues that, notwithstanding the challenges the educational sector in Ghana is facing, the impact of international stakeholders on educational policy making and practice, especially at the basic level, has been positive in terms of access, financing and infrastructural expansion.
Introduction
In Africa and many other developing countries, substantial investment in and provision of quality education have been identified as the surest path out of persistent poverty. As suggested by UNESCO [1], the hope of accelerated development is now hinged on the provision of quality education for a country's citizenry. However, quality education requires substantial resource investment if the aim is to improve the human capital formation of the country to drive its developmental goals. The provision of adequate resources by government to the educational sector has always been constrained by the availability of public resources. The inability of government to raise enough revenue is the result of varied factors including, but not limited to, macroeconomic and growth instability, high debt ratios, weak tax administration and large informal (non-taxable) sectors. The state's intent and desire to provide quality, accessible education to its citizens, combined with the constraint of inadequate financial resources, has compelled Ghana to seek external assistance to fill the resource gaps.
Bilateral and multilateral donors have responded in diverse ways to this call and, over the last two decades, aid has increased in quantity and prominence in Ghana's education sector. Several reasons could be assigned to the quick response of these bilateral and multilateral donors to Ghana's call for aid. One obvious reason is offered by Held [2], cited in Arnove [3], who indicated that the continuous and increasing socialization of the world, that is, the linkage of distant localities, has the tendency to shape local happenings. Aside from other possible motives behind the giving of aid, these international stakeholders responded to the call because the effects of neglecting it would have been obviously negative.
This paper seeks to comparatively assess the impact international aid has had on Ghana's educational sector over the last two decades in terms of access to "quality" education, educational financing and infrastructure expansion at the basic level. For the sake of time and space, the paper focuses its comparison on the pre-independence period to 1970 and on 1980 to 2000. The rationale behind the selection of these periods is that, whilst the educational sector was funded with internally generated resources by the state from pre-independence to the late 1970s, the early 1980s saw a dramatic change and international influence in the educational sector, especially at the basic level, with the involvement of international stakeholders when the country adopted the structural adjustment policies. This shift in paradigm was largely due to the "universal" acceptance of the attainment of universal basic primary education (UPE) by 2000 at the World Conference on Education for All, jointly convened by the World Bank, UNICEF, UNESCO and UNDP and held at Jomtien, Thailand, in 1990 [4]. This paper argues that, notwithstanding the challenges the educational sector in Ghana is facing, the impact of international stakeholders on educational policy making and practice, especially at the basic level, has been positive in terms of access, financing and infrastructural expansion. The paper is organized into three sections. Firstly, it uses Martin Carnoy's analytical framework to explain the rationale behind the educational reforms Ghana pursued in the periods under consideration. Secondly, educational financing during these periods is discussed. Finally, the paper discusses the impact the support of international stakeholders had on basic education in terms of access to "quality" education and infrastructural expansion.
Analytical Framework
In order to better understand and appreciate why external educational policies and ideas are selected and retained in particular places, "we need to look more closely at contextual contingencies of a different nature, especially at those of a political and institutional nature" [5]. It is in this regard that Martin Carnoy's framework is used to explain the educational policies pursued in Ghana in the periods under consideration. This framework explains the impact globalization has on the development and implementation of policies, and it was developed from observation of international educational reform movements in the 1990s.
The main tenet of this framework is that globalization influences educational policy making at the various levels of the sector through three main rationales. National educational reforms or policy making will receive much support from the international donor community if such policies are competitiveness-, finance- or equity-based. Competitiveness-based means that countries adopt certain educational policies to strive for quality not only because they want their educational system to be competitive but also to produce competitive human capital to drive their developmental needs. This thinking is directly linked with human capital theory, which views investment in education as the surest way of improving the economic viability of individuals and of states at large.
This paradigm, which has shaped the economy of most developed countries, is supported by both bilateral and multilateral donors. Ghana's educational reforms under the structural adjustment policies adopted in the 1980s were largely competition driven and as such received much support from the international community. According to this framework, competition-driven policy reforms have the following features embedded in them: decentralization, centralization of standards and management of educational resources, teacher recruitment and training, school choice, as well as privatization of education. These features have been prominent in the educational sector of Ghana since the structural adjustment programmes of the 1980s.
The second tenet of Carnoy's framework, which is closely related to the competition-driven policy reforms discussed above, is finance. Educational policies that are finance driven are mainly championed by international financial institutions. Though the ultimate objective of such reforms is to improve labour productivity, they do so by ensuring that parents contribute to the cost of educating their children. This is a more prominent impact of globalization on the educational sector in most developing countries. There are three main types of reforms promoted under the finance-driven motive of policy making: a shift in public funding from higher to lower levels of education, privatization of secondary and higher education, and reducing cost per student at all levels. Ghana's educational reforms have been largely influenced by this finance-driven thinking: the country called on the International Monetary Fund (IMF) in the 1980s as a result of the financial difficulties it faced at the time, thereby adopting a "global" finance-driven policy in the educational sector.
The final reason this framework assigns to the adoption of certain educational policies is equity. The aim is to use education as a means of social mobility and social justice in order to protect certain groups from the negative impact of globalization. As a result, the focus of equity-driven educational policies is on bringing education to the disadvantaged in society. According to the framework, this is done in many ways, for instance through introducing educational policies that bring quality education to the lowest-income groups and to particular segments of the population such as women, rural people and special needs students.
Though many reasons are assigned to the adoption and retention of certain educational policies at the various levels, it is clear from the above that most of them have been influenced by financial motives. Though all the motives for adopting educational reforms are plausible and can be considered independently, Ghana's educational policy making has, directly or indirectly, been an embedded one. This suggests that most reforms in Ghana have the competition and equity motives implicitly or explicitly embedded in educational policies that are mostly finance driven, especially from the 1980s.
Basic Education Funding in Ghana
As was the case at the international level, Ghana's educational policies in the 1960s and 1970s were geared more towards higher education. This was because higher education was viewed at the time as the only means through which the skilled manpower needed to drive the developmental agenda of nations could be produced [4]. The situation changed, however, as much more attention was given to basic education in the 1980s, because basic education was viewed at the international level as the most effective way of eradicating poverty, especially in developing countries. This section addresses the financial commitment of the donor community towards basic education in Ghana during this period, juxtaposing it with the support the educational sector received from government prior to the 1980s, when educational funding was the sole responsibility of the state.
Pre-independence to 1970
Initiatives and policies to expand basic education are not new to the Ghanaian educational sector; they go back to the pre-independence days. The first of such policies came in 1908, when Governor Rodger sought to expand educational opportunities across the country, especially to the northern part of the country. This was a result of the concentration of formal education in the coastal areas where the Europeans settled. The initiative was given a boost by Sir Gordon Guggisberg when he introduced the Ten-Year Educational Development Plan of 1920 [6,7].
Notwithstanding, the quest for large-scale state investment in education started with the fee-free compulsory basic education policy introduced by Dr. Kwame Nkrumah in his first educational reform of 1951 under limited self-government (the Accelerated Development Plan for Education). This reform provided for fee-free compulsory basic education for all children aged five to sixteen [8]. The aim was to expand access across the country, to narrow the gap between the north and the south, and to reduce the urban-rural divide in access to quality education. This shows that universal primary education was not a new policy in the Ghanaian education sector prior to its international endorsement.
The universal primary education policy became the Education Act of 1961 (Act 87) after Ghana attained independence. This education policy was to be the driver of the broader desire to modernize Ghana through industrialization. The reforms under Nkrumah increased the expansion of educational access across the country, as primary and middle school enrolments increased by 211.9% and 141.8% respectively [9]. However, inadequate funding resulting from the economic downturn of the mid 1960s hampered the realization of the dream of providing fee-free basic education to every Ghanaian child, and by the end of Nkrumah's administration in 1966 the quality of education remained poor for many [10]. The period after the overthrow of the first president of Ghana until the early 1980s, when Ghana accepted the economic reforms imposed by the Bretton Woods institutions, represents what scholars have termed the "dark ages" in Ghana's educational history. The political and economic instability led to severe deterioration in education delivery. The 1967 and 1974 education reforms by the National Liberation Council and the Acheampong administration respectively, which sought to improve standards and expand access, failed because of inadequate funding and political instability [6,7,10]. Government commitment to financing the educational sector declined sharply from 6.4% to 1.4% of GDP between 1976 and 1983. This resulted in a decline in standards and quality in education. It is argued that by the early 1980s the education system in Ghana had reached crisis level, severely constrained by administrative, performance and resource problems [6,7].
Many challenges characterised the educational sector during this period. There was a lack of trained teachers and of teaching and learning materials (books and teaching aids), and inadequate infrastructure coupled with poor pay for teachers. As a result, there were high drop-out rates and low enrolment rates, as well as poor management and administration. The main problem of the educational system at the time was a general lack of financing. This was largely due to the combined impact of repeated coups, the economic crisis and long strikes by both teachers and students, culminating in a fall in public spending on education and, consequently, dwindling quality of education.
This inadequate investment in the sector resulted not only in the deterioration of educational infrastructure and a shortage of teaching and learning materials, but also made the education sector unattractive to its key stakeholders, leading to an exodus of trained teachers to Nigeria. It is estimated that about 50% of trained teachers had left the country by 1983 [11,12]. Untrained teachers became the option, as they were employed by the government in an attempt to prevent the complete disintegration and collapse of the education system [13]. This well-intended objective of maintaining the educational system was, however, attained at the expense of the quality of education. According to Ahadzie [11], by 1983 the quality of education in Ghana had reached crisis levels and a serious attempt to salvage the situation became necessary.
It must be noted that during this period (pre-independence to 1986) of Ghana's education development, the main source of funding was the government. There was no, or only limited, involvement of external stakeholders and resources.
Financing Education from 1980 to 2000
The introduction of the much-needed education reform programme in 1987, which was part of the economic recovery and structural adjustment programme of the Bretton Woods institutions, brought a paradigm shift to education financing in Ghana. The main objectives of the reform programme were to increase access to basic education, make education more cost effective and improve the quality of education by making it more responsive to the needs and conditions of the country. This reform is perhaps the most comprehensive of all the education reforms Ghana has ever introduced.
At the basic education level, non-selective basic education across the primary and junior secondary stages was introduced. It aimed at providing children with literacy skills in their own language and in English, as well as at creating a positive attitude to hard work towards national development. Pupils completing basic education were expected to become productive skilled workers. This was in line with the government's efforts to provide jobs for the teeming youth in an attempt to revive the economy and bring about development after many years of economic challenges, which is exactly what the framework used in this paper espouses. The government of the day concentrated resources on basic education and on technical and vocational education [6]. It must, however, be noted that this idea was pushed by the donor community. The reform received tremendous support from donors, and considerable external resources were harnessed to support its implementation.
Evaluation reports on the implementation of the reform programme suggested that the basic education sector received considerable donor support in a variety of forms, including loans, grants, credits and technical assistance [7,12,14]. The World Bank was the leading institution providing this support.
The Impact of Donor Support on Basic Education in Ghana
The educational reform did not only succeed in channelling considerable local resources into basic education; it also increased donor community support for basic education. There was massive expansion of infrastructure at the basic school level and an increase in enrolment. The roles played by different stakeholders in education contributed differently and significantly to education policy making and practice. This section discusses the impact of the donor support basic education in Ghana received during the implementation of the structural adjustment reforms, in terms of access and the provision of quality education and infrastructure.
Impact on Infrastructure/Quality of Education
The impact of the massive donor support the country received in the education sector is reflected in many ways. There was massive improvement in infrastructure and an increase in the number of teachers and textbooks available to children. The donor resources received were mainly devoted to school infrastructure due to the huge infrastructure deficit that existed before the adoption of the educational reform. The World Bank (2010) observed that in 1988, a year after the launch of the 1987 education reform programme in Ghana, fewer than half of basic schools could use all their classrooms when it was raining; since 2003, however, over two thirds of these buildings can be used under any weather condition, thanks to the support of the donor community.
For instance, the World Bank provided finance for the construction of more than 8,000 classroom blocks and has procured over 35 million textbooks for basic schools over the last 19 years. This changed the educational landscape, as the percentage of primary schools with at least one English textbook per pupil rose from 21% in 1988 to more than 72% in 2007. During the same period, the number of mathematics books per student in Junior Secondary Schools also improved (World Bank 2010). With the World Bank providing US$78 million in funding for the Education Sector Support Programme and the relevant components of the four Poverty Reduction Support Credits, Ghana was able to increase resources to 53 deprived districts, remove mandatory school fees and introduce capitation grants for all pupils in public basic schools throughout the country. Through its budgetary support, DFID also assisted the government in increasing its support to the education sector to 10% of Government of Ghana expenditure. Other donors such as the Netherlands and the World Food Programme also supported the Ghana School Feeding Programme, which seeks to reduce hunger and malnutrition among children and thereby improve access to and the quality of education in Ghana [9,15].
Impact on Access to Education
The 1992 Constitution of Ghana included the Education Act of 1961's idea of providing fee free education for all Ghanaian children. Article 38(2) of the 1992 Constitution indicates that, the Government shall, within two years of parliament first meeting after coming into force of this Constitution draw up a programme for implementation within the following ten years, for the provision of free, compulsory and universal basic education.
Nonetheless, this provision only came into force in 1996, when the Free Compulsory Universal Basic Education (FCUBE) programme was launched. It had the objective of providing the opportunity for every school-going child to receive quality basic education. Emphasis was put on enhancing quality, efficiency in management and expanding access by empowering all partners to participate in the provision of education to all children. This meant that, after the introduction of the educational reforms, the provision of access to quality education was no longer the responsibility of the government alone but of parents as well. This is what the framework used in this paper indicates: most policies are made for financial reasons, cost sharing in the case of Ghana.
The 1996 FCUBE introduced by the government, which still maintained the ideals of the educational reforms, was seen as a strategic plan aimed at continuing the implementation of the 1987 reform to ensure quality and access. It had the objectives of addressing inequality in access to education, especially among girls and those in disadvantaged areas; ensuring efficiency by reducing repetition and dropout rates; improving quality; and making education more relevant to the demands of a modern economy [9]. This human capital idea of investing in education to increase economic productivity and development is also captured by the competitive aspect of Carnoy's framework of educational policy making and practice; that is, most educational reforms are engineered to provide the manpower needed to drive the developmental agenda of countries, especially developing ones.
The reform achieved its goals, however modest, as the number of primary school and Junior Secondary School pupils increased by 60% and 66% respectively between 1987 and 2006. The number of pupils enrolled in primary school increased from 1.7 million to 3.1 million within the same period. However, the Gross Enrolment Ratio (GER) showed little growth during most of the reform period following some initial improvements; after an increase from 74.5% in 1987 to 79.3% in 1990, it declined, and after 10 years of implementation the GER fell to 72.5%, lower than it was at the time the reforms started. The decline in the GER in 1990 coincided with increases in book fees as a result of World Bank loan conditionality and this, coupled with subsequent increases in fees, dampened the expansion of the access aspect of the reform [9]. This might be because cost sharing, which was part of the conditionalities from the multilateral organizations, was new to Ghanaians, and also because of the economic hardship they faced during the period.
Following the Education for All (EFA) Conference held in Senegal, more efforts were made by the donor community to make basic education in Ghana more responsive to the needs of the state. Major donors at this time included the World Bank, USAID and DFID, but other donors were also entering the education sector. For instance, USAID initiated a US$35 million Primary Education Project (PREP) spanning 1990 to 1995, while the World Bank implemented the Primary School Development Project (PSD) (1993-1998). In addition, the Bank funded the Literacy and Functional Skills Project (1991-1995) and the National Functional Literacy Project (1992-1998). Donor funding was mostly in project mode.
The support received from the donor community led to significant progress in access expansion and gender equity. Ghana is commended for making tremendous strides towards achieving Millennium Development Goal 2: universal access to basic education and gender equality at the basic level. There have been increases in both Gross Enrolment Rates (GER) and Net Enrolment Rates (NER) over the past two decades, especially at the basic education level. There has been great improvement in the gender parity index in primary and Junior High School (JHS), with primary gender parity close to one, though it is lower at JHS. Primary NER increased from 59% to 83.6% between 2001/02 and 2009/10 [16].
This is also reflected in Junior High School (JHS) enrolment, where the NER increased from 30% to 47.5% over the same period. The gender parity index improved from 0.90 to 0.96 for primary and from 0.84 to 0.92 for JHS between 2001/02 and 2009/10. However, there are still regional, poverty-related and rural-urban variations in enrolment and in the gender parity index. The three northern regions have the lowest NER in basic and second-cycle education. Rural areas also tend to have lower enrolment and wider gender disparity than urban areas, and the poor have fewer girls in school compared to the rich [16,17].
From the above, there is no doubt about the role that different stakeholders, including international stakeholders, can have on education. The increase in donor support has improved access to quality education. Notwithstanding the conditionalities attached to the support the country received from the donor community, the impact has been a positive one. Comparing educational standards prior to and after the adoption of the structural adjustment policies leaves no doubt in the minds of many that the educational sector in general has seen tremendous improvement.
Conclusion, Implications and Contributions of the Study
The educational sector has received much investment from donors and the government of Ghana. However, the sector still faces monumental challenges. Notwithstanding the numerous challenges the educational sector in Ghana is facing, the involvement of both bilateral and multilateral donors in the policy making process and practice has been positive. More access opportunities have been created for citizens who would not otherwise have had the opportunity to experience education, because access to education prior to the involvement of the donor partners was limited and favoured the rich and elite in society. The involvement of the donor community has brought about improvement in education infrastructure and in quality inputs such as teachers, textbooks and furniture, to mention a few. It has also helped improve the policy environment, for instance education planning and its annual assessment. Notwithstanding the gains made in the sector with the involvement of the donor community, poverty has not been eradicated, and the provision of access to quality, gender-equitable education remains among the challenges the sector is still facing.
Though access to education has generally been impressive and has improved, there is a huge gap in making it more a matter of "quality". Most often, policy makers are preoccupied with policies that create more access rather than with the quality aspect. If education is to achieve its aim of producing the skilled manpower needed to drive the developmental agenda of the country, serious attention must also be given to the quality aspect of policies, since access and quality go hand in hand in achieving that goal. Although donor assistance has contributed tremendously to improvements within the education sector in Ghana, it cannot be relied upon as a source of funding due to its unpredictability. Economic meltdown in a number of the donors' home countries is expected to affect the quantum of aid inflow. The country's migration to middle-income status as well as becoming an oil-producing country also implies that Ghana will not qualify for substantial donor assistance to support the education sector.
Though there are several reasons why education policy makers accept, adopt and retain certain educational policies (economic, social, political and cultural), Carnoy's framework was used to explain the rationale that has largely influenced educational policy adoption in Ghana. Policies are made not only because the state wants its educational system to be competitive and relevant at the international level, but also to create a system that can aid in the production of the human resources needed to drive its developmental agenda. It was also shown that policies are made to ensure equity and financial responsibility on the part of parents in the education of their children.
There are growing concerns, however, that donor resources are accompanied by direct and indirect controls, leading to the argument that where there is growth in donor resources, there is also increasingly less control of the development agenda by governments. Critics argue that the donor community, particularly the World Bank and the International Monetary Fund, has been undermining national "sovereignty" by directly or indirectly directing the development agenda of recipient countries [18]. This position, however, contrasts with what happened in India: though the government of India received huge sums of aid earmarked for basic education, it stood its ground and implemented its own educational policies and programmes [4].
With increasing and consistent evidence of a negative correlation between aid and development in developing countries, doubt has been cast on the real impact of donor support on the progress of the education sector [19]. The fundamental question about the impact of aid on the development of the education sector in Ghana is yet to be answered. With Ghana discovering oil in commercial quantities and its migration to 'middle income' status, some donors have already drawn up their exit plans [20].
Although government financial support to education has been increasing, donor resources still remain very important. This is because about 90% of the financial resources available to the sector is spent on salaries and administration, and as such the government will still count on aid to fill the financial gap. The questions, however, are: for how long and to what extent will Ghana continue to rely on foreign aid to support its educational agenda? What will be the impact of donors withdrawing or playing a less significant role in an education sector which has received immense financial contributions from such stakeholders? Has the Ghanaian education sector now become effective and efficient enough to manage its own educational system well?
Implications for Practice
The contribution of the donor community towards Ghana's education has implications for the different actors in the sector: policy makers, learners and teachers. For policy makers, it gives an idea of the importance of smart investment in the sector and the need to draft policies that reflect not only the needs of the nation but are also in tune with global standards, to ensure students are competitive globally. The reason is that education contributes to national development in many ways; it contributes to economic development through increased productivity and earnings.
As indicated by the International Institute for Applied Systems Analysis [21], the provision of better education leads not only to higher individual income but is also a necessary (although not always sufficient) precondition for long-term economic growth and development. Educational planning, for this reason, has always been an integral part of the total economic and social planning that a nation undertakes periodically in order to improve the well-being and living conditions of its people. National educational systems are, in this regard, not static. They keep changing in an attempt to respond to national development plans and will continue to do so, so long as governments continue to search for new ways and initiate policies that will improve the living conditions of their people. This helps policy makers develop policies that can stand the test of time and that would not be affected by regime change.
Educational policies, however well intentioned, and official curricula, however well crafted, cannot succeed without the teacher, whose professional management of the teaching-learning process ensures that education really takes place (Health & Education Advice Resource Team [22]). This suggests that every policy revolves around the teacher, who is the final implementer of educational policies. Education delivery has, comparatively, improved tremendously; this helps teachers to deliver quality teaching as a result of improved infrastructure and the provision of adequate teaching materials.
For learners, it is the opportunity to be educated that is being offered. Investment in education translates not only into the delivery of quality education but also into giving more children who would ordinarily have been out of the classroom the opportunity to be educated. Having more of the young population in school has implications for general development, as it improves the human capital base, a requirement for development.
Contribution of the Study
The article is important in many ways. Aside from adding to the existing knowledge on education funding, it also puts into perspective how education, especially basic education, has received support from international donors. It also shows how and why the international donor community shifted its focus from tertiary to basic education.
Again, the study indicates the rationale behind the initiation and implementation of educational policies which are likely to receive international support. It gives an idea of how education was financed from the pre-independence era onwards and its correlation with the "quality" of education outcomes. In conclusion, the article puts into perspective the path Ghana's education has travelled and the reasons behind those paths. Again, it informs readers about the impact investment has on education and its outcomes. Additionally, the study shows how significantly globalization has impacted national educational policies.
"Economics"
] |
Experimental and theoretical approaches to Congo red dye adsorption on a novel kaolinite-alga nano-composite
ABSTRACT A comprehensive study combining experimental, computational, and field experiments was conducted to find the most suitable adsorbent system to assist industries using Congo red dye in removing this waste from industrial wastewater in the Beni-Suef area. The adsorption potential of kaolinite, Liagora farinose (an Egyptian marine macroalga) and kaolinite modified by the Liagora farinose macroalga was assessed for the removal of Congo red dye from aqueous solutions. The kaolinite/alga nano-composite, with a crystallite size of 40 nm, was fabricated using a wet impregnation technique. Our results indicate that surface modification of kaolin with Liagora farinose produces an obvious increase in adsorption of the toxic dye for the nano-composite compared with the individual components. Batch experiments were performed, and both the kinetics and isotherms of Congo red dye adsorption were explored in order to determine the influence of different experimental factors. The Congo red removal percentage is strongly affected by the adsorbent dose, working temperature, and pH value. The best temperature for Congo red adsorption onto the kaolinite/alga nano-composite is 40°C at pH > 7. The maximum adsorption capacities were found to be 5.0, 7.0 and 10 mg/g for kaolinite, alga and the kaolinite/alga nano-composite, respectively. Computational simulation studies showed that adsorption of the Congo red molecule on kaolinite surfaces is exothermic, energetically favourable and spontaneous. Congo red adsorption on the kaolinite/alga nano-composite is well described by the first-order diffusion model, while kaolin and Liagora farinose follow two different kinetic adsorption models depending on the Congo red dye concentration. Finally, the field tests showed optimistic results, with nearly 94% efficiency for the kaolinite/alga nano-composite in removing mixed dyes from industrial wastewater, which in turn supports the development of new eco-friendly nano-adsorbents to help reuse industrial wastewater.
Introduction
Water is considered an essential resource for the subsistence of life on earth. Industrialisation and innovation have, in a negative way, contributed to the contamination of clean water resources [1-3]. Many industrial sectors rely on synthetic dyes in their processes, which results in huge effluents of wastewater being released to the environment daily [4-6]. These toxic effluents contain dyes that are reported to be harmful organic materials of low biodegradability, and they are a major cause of environmental problems such as aesthetic pollution, eutrophication and perturbations of aquatic systems [7-9]. Colour is the first contaminant to be recognised in wastewater, even when present in very small amounts, as it inhibits re-oxygenation of the water. Moreover, it also inhibits sunlight penetration and hence disrupts the biological activity of aquatic organisms. In addition, the discharge of dye-containing effluents into natural waters has highly hazardous effects on living systems owing to the toxic, carcinogenic, mutagenic and allergic nature of dyes [10]. For example, exposure to Congo red (CR) dye causes serious eye and skin irritation within a few minutes, and ingestion of Congo red causes stomach irritation, nausea, vomiting, and diarrhoea [2,11]. Benzidine, a carcinogenic product obtained during the metabolisation of CR, can cause definite allergic reactions. Although CR, a human carcinogen, has been banned in many countries due to these health hazards, it is still widely consumed in several countries [10,12].

Many technologies have been developed for the large-scale removal of dyes from wastewaters to decrease their environmental effect [13-15]. Physical, chemical or biological techniques are widely used. Adsorption, advanced oxidation, filtration, coagulation, flocculation, and microbial degradation have all been applied to remove dyes from wastewater [16-22]. The most effective and convenient method is adsorption, because it is a very simple process to use and it can remove pollutants at very low concentrations [8,17,23-25]. Naturally occurring clay materials such as bentonite, fly ash and kaolinite, and also algae, are preferred as adsorbents; they are considered very good adsorbents because of their high cation exchange capacity, abundance, availability at low cost and high surface area [25-28]. Another advantage is the presence of active functional groups that encourage contaminants to attach to the wall of the biomaterial; these functional groups may be carboxylic, hydroxyl, amino, carbonyl, phosphate or sulphonic [29,30]. Kaolinite is an important industrial raw material with major applications such as paper coating and filling, ceramics, paints, cracking catalysts, cements, wastewater treatment, and pharmaceutical products [31,32]. Kaolinite, Al2Si2O5(OH)4, has a 1:1 stacking layer structure in which each layer contains a sheet of Si2O5(2-) tetrahedra and a sheet of alumina [Al2(OH)4](2+) octahedra; these sheets are bonded together through shared oxygen atoms and maintain their lamellar structure through hydrogen bonds [33,34]. The OH functional groups in the kaolinite structure are the most important, because they are involved in a wide variety of chemical reactions [35,36].

In this work, a comprehensive study including computational, experimental, and field experiments was conducted to find the most appropriate adsorbent system for the effective removal of waste dyes, especially Congo red dye, from industrial wastewater. The adsorption performance of kaolinite (K), alga (LF) and the kaolinite/alga nano-composite (KLF) was studied for Congo red dye removal from wastewater under different experimental conditions to explore the effect of the addition of LF on the adsorption capacity of K. Such adsorbents are not novel in themselves; they were first reported a long time ago and many times since. The innovation of this paper lies in the effect of introducing natural algae on the performance of the kaolinite adsorbent. LF and K have several attributes, such as their natural abundance, low cost, reusability, and recyclability, which qualify them for dye removal and make them economical in the application and popularisation of this low-cost technology. Batch experiments were performed, including a study of the effects of the starting CR concentration, reaction time, nano-adsorbent dose, reaction temperature, and pH value on the CR dye removal %. Adsorption isotherms and kinetics were also studied.
Raw materials, dyes, and reagents
Kaolinite ore was supplied by the El-Nassr company for mining and used without any further modification. The LF macroalga was collected from the inter-tidal area of the Egyptian Red Sea shore, between the cities of Quoseir and Marsa-Alam. Congo red dye was purchased from Sigma-Aldrich and dissolved in distilled water. Sodium hydroxide granules of 99.99% purity and 36% hydrochloric acid were supplied by Sigma-Aldrich and used for pH adjustment.
Preparation of kaolinite/alga (KLF) nano-composite
Wet impregnation was selected as the technique of choice for the fabrication of the kaolinite/alga nano-composite (KLF) [37,38]. The kaolinite/alga nano-composite was prepared in several steps: 1 g of kaolinite, 1 g of alga and 20 ml of deionised (DI) water were mixed, stirred magnetically at 500 rpm for 60 min and then ultrasonicated for 60 min, with this cycle repeated 3 times. The resulting kaolinite/alga nano-composite was then filtered, washed with DI water several times, and finally dried in a vacuum oven at 60°C for 24 h. The K, LF, and KLF samples were characterised using an X-ray diffractometer (XRD), a scanning electron microscope (SEM) and a Fourier transform infrared (FT-IR) spectrometer. The pH at the point of zero charge (pHzpc) and the effect of pH on the zeta potential were followed using a Zetasizer (Malvern Panalytical, Nano series ZS90, UK).
Preparation of adsorbate
The regular and well-known anionic dye Congo red (CR) was chosen as the adsorbate in this study. CR dye is a sodium salt of a benzidine-based diazo compound; Figure S1 (Supplementary data) illustrates the structure of the CR dye. A 1000 mg/l stock solution was prepared by dissolving an adequate amount (1000 mg) of CR in 1000 ml of DI water. The freshly prepared stock was diluted with DI water to obtain the required concentrations of the working solutions. The pH of all prepared solutions was tuned to 3, 5, 7 and 10 using either a 0.1 M solution of HCl or NaOH.
Samples characterisations
The XRD characterisation was carried out with a PANalytical Empyrean diffractometer using CuKα radiation of wavelength λ = 0.154045 nm, operating at 40 kV and 35 mA with a scan step of 0.02° within the range 20-70°. The average crystallite size, D_s, of the prepared nanoparticles was obtained from the Scherrer formula, D_s = 0.94λ/(β_w cos φ), where β_w and φ are the corrected full width at half maximum and the diffraction angle, respectively [39]. SEM micrographs were recorded using a Quanta FEG 250 microscope (Switzerland). FT-IR spectra were measured using a Bruker VERTEX 70 FT-IR spectrophotometer with the dry KBr pellet technique.
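As a quick illustration of the Scherrer estimate described above, the following minimal Python sketch converts a peak position and corrected FWHM into a crystallite size. The numerical inputs are placeholders, not the reported measurements, and instrumental broadening is assumed to have been corrected beforehand.

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.154045, k=0.94):
    """Crystallite size D_s = k * lambda / (beta_w * cos(phi)).

    two_theta_deg : peak position 2-theta in degrees
    fwhm_deg      : corrected FWHM beta_w in degrees
    Returns the size in nm.
    """
    beta = np.radians(fwhm_deg)           # FWHM in radians
    phi = np.radians(two_theta_deg) / 2   # Bragg angle (half of 2-theta)
    return k * wavelength_nm / (beta * np.cos(phi))

# Placeholder example: a reflection near 2-theta = 24.9 deg with 0.21 deg FWHM
print(f"D_s = {scherrer_size(24.9, 0.21):.1f} nm")
```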
Adsorption studies
Batch-mode experiments were conducted for all CR adsorption studies under various conditions, including initial CR concentration (5-25 mg/l), contact time (up to 480 min), adsorbent dosage (10-50 mg), pH (3-10), and temperature (25-90°C), with continuous shaking. Four series of adsorption experiments were implemented on the K, LF, and KLF adsorbents under diverse adsorption circumstances, including initial dye concentration, adsorption temperature, adsorbent dose, and initial pH of the solution, as displayed in Table S1 (Supplementary data). The experiment time was set at 480 minutes with 25 ml solution volumes in all experiments. The variation in CR concentration was determined from the absorption peak measured with a UV/Vis spectrophotometer. The limits of detection and quantification (LOD/LOQ) for the instrument used were found to be 0.0066 and 0.02 mg/l, respectively. The reusability of the K, LF, and KLF adsorbents was examined 5 times using 0.02 g of each adsorbent and 25 ml of a 10 mg/l initial CR concentration for 480 minutes' contact time at 25°C and pH 7. The three adsorbents K, LF, and KLF were collected from the solution after each run, cleaned of dye residues with distilled water and set for the next run. The quantity of CR taken up by the synthesised nano-composite at equilibrium, q_e (mg/g), and at time t, q_t, and the CR dye removal % were obtained using equations (1) and (2), respectively [40,41]:

q_e = (C_o − C_e)·V/m, and likewise q_t = (C_o − C_t)·V/m (1)

Removal % = [(C_o − C_t)/C_o] × 100 (2)

where C_o, C_t, and C_e are the concentrations of CR in mg/l at the start, after time t, and at equilibrium, respectively, V is the CR solution volume in ml and m is the K, LF, and KLF mass in mg.
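A minimal sketch of how equations (1) and (2) are applied to a single batch data point is given below; the concentration, volume and mass values are illustrative placeholders, not the measured data.

```python
def uptake_and_removal(c0, ct, volume_ml, mass_mg):
    """Return (q_t in mg/g, removal in %) for one batch adsorption point.

    c0, ct    : initial and time-t CR concentrations (mg/l)
    volume_ml : solution volume (ml)
    mass_mg   : adsorbent mass (mg)
    """
    q_t = (c0 - ct) * volume_ml / mass_mg    # (mg/l * ml) / mg -> mg/g
    removal = (c0 - ct) / c0 * 100.0
    return q_t, removal

# Placeholder example: 10 mg/l initial CR, 1 mg/l remaining, 25 ml, 20 mg adsorbent
q, r = uptake_and_removal(10.0, 1.0, 25.0, 20.0)
print(f"q_t = {q:.2f} mg/g, removal = {r:.1f} %")
```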
Adsorption isotherms
The Langmuir, Freundlich, and Temkin isotherms were used to describe the adsorption isotherms of the fabricated K, LF, and KLF adsorbents for the tested CR [42][43][44][45]. All linear isotherm equations and their parameters are given in the supplementary data.
The tendency towards and favourability of the Langmuir isotherm for the equilibrium data can be predicted from the value of the dimensionless separation factor (R_L), based on equation (3) [46]:

R_L = 1/(1 + K_L·C_max) (3)

where C_max represents the maximum initial CR concentration.
Adsorption kinetics and mechanism
Different adsorption mechanism and kinetics models, such as intra-particle diffusion, pseudo-first-order, pseudo-second-order and the simple Elovich kinetic model, were used to identify the adsorption mechanisms and kinetics that best match the adsorption of CR onto the K, LF, and KLF adsorbents [30,47-49]. All linear kinetics equations and their parameters are given in the supplementary data.
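For illustration, the linear forms of the pseudo-first-order (ln(q_e − q_t) versus t) and pseudo-second-order (t/q_t versus t) models can be fitted by ordinary linear regression, as in the minimal Python sketch below. The time and uptake values are placeholders, not the measured kinetic data.

```python
import numpy as np

# Placeholder kinetic data: time (min) and uptake q_t (mg/g); replace with
# the measured values for K, LF or KLF.
t  = np.array([15, 30, 60, 120, 240, 480], dtype=float)
qt = np.array([2.0, 3.5, 5.5, 7.5, 8.6, 9.0], dtype=float)
qe_exp = 9.0  # experimental equilibrium uptake (mg/g)

# Pseudo-first-order (Lagergren): ln(qe - qt) = ln(qe) - k1 * t
mask = qt < qe_exp                      # keep points where qe - qt > 0
slope1, intercept1 = np.polyfit(t[mask], np.log(qe_exp - qt[mask]), 1)
k1, qe_pfo = -slope1, np.exp(intercept1)

# Pseudo-second-order: t/qt = 1/(k2*qe**2) + t/qe
slope2, intercept2 = np.polyfit(t, t / qt, 1)
qe_pso = 1.0 / slope2
k2 = 1.0 / (intercept2 * qe_pso**2)

print(f"PFO: k1 = {k1:.4f} 1/min, qe = {qe_pfo:.2f} mg/g")
print(f"PSO: k2 = {k2:.4f} g/(mg*min), qe = {qe_pso:.2f} mg/g")
```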
Statistical analyses
All adsorption measurements were performed in triplicate and the average values are presented. The values of the regression coefficients (R²) for the different kinetic and isotherm models were obtained using the statistical functions of Origin Pro 2016.
Computational calculations
The kaolinite and Congo red structures were optimised by density functional theory (DFT) using the GGA-PBE (Generalised Gradient Approximation-Perdew-Burke-Ernzerhof) functional. The double numerical polarised (DNP) basis set was assigned. No spin-polarisation effects were included in the exchange-correlation functional. The core electrons of the kaolinite and Congo red structures were treated with the effective core potential and with all electrons, respectively. The calculations were performed with the DMol3 module [50,51] in Biovia Materials Studio. The energy of the bulk kaolinite unit cell was minimised, and the unit cell was then cleaved along the (001) and (002) planes. We constructed three supercells, (4×4×1), (6×6×1), and (8×8×1), for each plane with a vacuum thickness of 20 Å and optimised them under the same conditions. To identify the effect of the different kaolinite surface sizes on the adsorption energy and to find the adsorption sites of Congo red on the kaolinite surface, a Monte Carlo (MC) simulation was performed. The MC simulation was carried out with the Adsorption Locator module in Materials Studio using the COMPASS force field (Condensed-phase Optimised Molecular Potentials for Atomistic Simulation Studies) and the current charges. The basic principles of MC simulation have been described by Frenkel and Smit [52].
Field experiments
The newly synthesised adsorbent system was tested as an effective, eco-friendly adsorbent that could be applied on a large scale to remove waste dye from industrial wastewater. For this purpose, wastewater containing waste dye was supplied by a clothes-dyeing plant in Beni-Suef city and was used as received, without any further treatment or dilution. The appropriate adsorbent system was selected on the basis of our computational and experimental results.
FT-IR analyses
The FT-IR characteristic peaks for the K, LF, and KLF adsorbents are displayed in Figure 2(a). The FT-IR spectrum of K (Figure 2(a), red) shows a broad OH-group region. The modes at 3691 and 3621 cm−1 refer to the inner OH stretching [53][54][55]. The peaks at 1109 and 1023 cm−1 are related to Si-O vibration modes [56]. The peaks at 469, 543, and 919 cm−1 are related to Si-O-Si bending, Si-O-Al, and octahedral aluminium (Al-OH), respectively [57]. All peaks in the region from 400 to 800 cm−1 are related to the metal oxides [58].
The peaks related to the LF alga (Figure 2(a), blue) exhibit a well-known characteristic band at 3410 cm−1 for the hydroxyl function (-OH) of phenolic groups. The peak at 2930 cm−1 was assigned to the stretching mode of alkyl -CH groups, whereas the mode at 1618 cm−1 corresponds to -C=O. The band at 1477 cm−1 was attributed to the C-H vibration [59,60]. The bands located around 1110, 1120, and 1140 cm−1 are attributed to the C-O bond or may refer to the sulphate group [61]. The bands at 3300-3500 cm−1 and 2500-3000 cm−1 are assigned to amine N-H stretching and carboxylic acid O-H stretching, respectively [62]. Finally, the FT-IR peaks of the newly synthesised adsorbent KLF are shown in Figure 2(a) (green). The presence of the characteristic peaks representing the two phases (kaolin and alga) confirms the presence of a new compound. The disappearance of some characteristic peaks, especially those representing the amino group of the alga, confirms the interlocking between the alga and the pores and surface molecules of kaolin. Not only peak disappearance was noticed; peak shifts of characteristic peaks of both kaolin and alga also occurred. Both the peak shifts and the peak disappearance are in line with the data obtained from the other characterisation techniques, which confirms the formation of a new compound. Table S2 (supplementary data) lists the positions of the characteristic FT-IR bands for the K, LF, and KLF adsorbents.
XRD characterisation
The XRD patterns of K, LF and KLF are presented in Figure 2(b). The main XRD peaks of the kaolinite mineral were observed at 2θ = 12.44° and 24.9°, due to crystallographic growth along the (001) and (002) planes [63,64][59,65]. The XRD pattern of KLF displays a small shift in the position of the main kaolinite peaks, with the 24.9° reflection shifting to 24.98°, together with peaks at 26.2°, 26.9° and 45.8°. The average crystallite size calculated using the Scherrer equation was 40.3 nm, which confirms the nanoscale nature of the newly synthesised composite.
Effect of initial dye concentration
The variations in the removal % and the amount of CR adsorbed with time using the K, LF, and KLF nano-adsorbents at different initial CR concentrations are shown in Figure 3(a-c) and (d-f), respectively. It can be observed from these figures that, during the first stage of the adsorption process, the adsorption capacity and the dye removal % rose quickly, after which their rates of increase slowed until the equilibrium state was reached. It was also observed that contact time has no marked effect on the adsorption process using the new sorbents after equilibrium is reached. The rapid removal rates at the early stage of the reaction are attributed to the existence of a large surface density of unoccupied active sites on the nano-adsorbent surfaces. With increasing contact time between adsorbent and adsorbate, these sites become fully occupied by CR molecules. As a result, repulsion forces are established between the CR molecules adsorbed on the surface of the adsorbents and the CR molecules in the bulk liquid phase [41].
The clay nano-composite, KLF, showed higher efficiency for Congo red adsorption at lower concentrations in comparison with the other adsorbents, K and LF. At 5 and 10 ppm initial dye concentration, the CR removal % reached 98% and 90% for KLF, 49% and 43% for LF, and 37.7% and 28.7% for K, respectively. This behaviour matches previously reported composites, which nonetheless perform less well than ours [32,66,67]. On increasing the concentration to 15 ppm, the dye removal % reaches 77% and 36% for KLF and LF, respectively, and 23.3% for K. At relatively high concentrations, 20 and 25 ppm, the dye removal % was in the order KLF > LF > K.
The quantities of adsorbed CR increase with the starting CR concentration, as shown in Figure 3(d-f). This can be attributed to the growth of the concentration gradient with increasing initial CR concentration; the driving forces grow correspondingly to overcome the mass-transfer resistance between the CR adsorbate and the K, LF, and KLF adsorbents [68,69]. The maximum adsorption capacities of K were found to be 1.88, 2.8, 3.5, 4.4, and 5.0 mg/g, while the adsorption capacities of LF were found to be 2.45, 4.3, 5.4, 6.0, and 7.0 mg/g for CR with initial concentrations of 5, 10, 15, 20 and 25 mg/l, respectively, at pH 7 and 25°C. The maximum adsorption capacities of the KLF adsorbent were found to be 4.9, 9.0, 11.55, 11.0 and 10 mg/g at these starting concentrations. The results show that the modification of K with LF is a feasible approach to enhance the CR removal performance of KLF.
Influence of nano-adsorbent dosage
To determine the adsorption cost, the influence of the nano-adsorbent dose on the CR removal % was assessed in order to establish the optimum nano-adsorbent dosage that offers the maximum performance. This is graphically depicted in Figure 4(a). The adsorbent doses were varied from 0.01 to 0.05 g. It was found that 0.02 g of nano-adsorbent per 20 ml of CR solution of an initial concentration of 10 mg/l was the best nano-adsorbent dosage.
From Figure 4(a), the CR removal % for all adsorbents rises as the nano-adsorbent dosage is increased from 0.01 to 0.02 g. The removal % increases from 26% to 28.7% in the case of the K adsorbent, from 27% to 43% in the case of the LF adsorbent and from 79% to 91% in the case of the KLF adsorbent, which reflects the increase in the number of active sites with increasing nano-adsorbent dosage [41]. For nano-adsorbent dosages above 0.02 g, the removal % decreases again. For the K adsorbent, the CR removal % decreases to 25, 22, and 15% on increasing the adsorbent dose to 0.03, 0.04, and 0.05 g, respectively. For the LF adsorbent, the CR removal % decreases to 35, 18, and 10% on increasing the adsorbent dose to 0.03, 0.04, and 0.05 g, respectively. Likewise, for the KLF adsorbent, the CR removal % decreases to 70, 37, and 36% on increasing the adsorbent dose to 0.03, 0.04, and 0.05 g, respectively. This phenomenon may be attributed to the screening effect that occurs at elevated nano-adsorbent dosage, owing to the accumulation of nano-adsorbent particles and the decrease in the distance between them. The condensed layer at the surface of the adsorbent blocks the binding sites from CR molecules. In addition, overlapping of K, LF, and KLF particles results in competition between CR molecules for the restricted number of available binding sites. Aggregation or agglomeration at higher K, LF, and KLF doses increases the diffusion path length for CR adsorption, causing a decrease in the adsorption % [30,70-72].
Effect of pH
Because it influences the dissociation/ionisation of the K, LF, and KLF nano-adsorbents and the charge on their surfaces, the starting pH value of the CR solution can be a crucial factor in controlling nano-adsorbent performance [72]. The electrostatic charges on the K, LF, and KLF adsorbents and the CR sorbate are therefore greatly affected by the pH of the solution. The effect of pH on the CR removal efficiency of the adsorbents was studied between pH 2 and pH 10, as shown in Figure 4(b), at an initial CR concentration of 10 mg/l and a sorbent dosage of 0.02 g. The K adsorbent shows removal percentages of 16.6%, 17.0%, 28.7%, and 33% for CR solutions of pH 2, 5, 7, and 10, in that order. The LF adsorbent shows removal percentages of 62%, 34.5%, 43% and 36%, while the KLF adsorbent shows removal percentages of 59%, 37%, 90% and 90% at pH values of 2, 5, 7, and 10, respectively, under the same conditions mentioned above. As such, a significant role for pH in controlling the surface charge of the adsorbents was apparent. To investigate these effects on K, LF and KLF, we determined the zeta potential of the composites in solution. The effect of pH on the zeta potential of K, LF and KLF in aqueous solution is shown in Figure 4(e). The surface charge of K, LF and KLF shifted from higher to lower values with increasing pH from 2 to 5, which resulted in a gradual decrease in electrostatic attraction between the composites and the negatively charged CR. Such a decrease should lower the adsorption capacity. A larger fluctuation of the adsorption capacity was observed at pH values ranging from 5 to 7, reflecting a large change in zeta potential. Generally, the more the zeta potential shifts towards positive values, the higher the removal %. The low zeta potential indicates that the adsorbent surfaces were partially negatively charged at pH 2 to 5 and that the electrostatic force between K, LF and KLF and CR, acting through the sulphonic acid group (SO3−), was mainly repulsive during the experiment [73,74]. The pH at which the adsorbent has zero point charge (pHzpc) was 5.8 in the case of the KLF adsorbent; above this pH the adsorbent surface became positively charged and, consequently, a large increase in the adsorption capacity took place on the KLF surface. The pHzpc values were not detected for K and LF within the investigated pH range. The gradual increase in the adsorption capacity at pH 7 and 10 in the case of the K adsorbent could be related to the shift of the zeta potential to less negative values.
Effect of temperature
The influence of temperature on the uptake % of CR onto K, LF, and KLF was examined at adsorption temperatures of 25, 40, 50, 60, 80, and 90°C; the results are presented in Figure 4(c). For both the LF and KLF adsorbents, the CR removal % increases from 73% to 100% and from 92% to 100%, respectively, as the temperature rises from 25 to 40°C. This behaviour can be attributed to the growth of the CR diffusion rate with rising temperature as the viscosity of the solution decreases [75]. With further heating, the CR removal% remains constant at 100% up to 60°C for the LF and KLF adsorbents, which can be attributed to the nano-adsorbents having reached their maximum limit of CR adsorption. At still higher temperatures the dye removal% drops again, owing to the collapse of the adsorptive forces responsible for binding the CR dye molecules on the LF and KLF surfaces; this may reflect damage to the active sites and a reduction of the adsorptive force between the nano-adsorbent's active sites and the CR molecules [8,76]. Therefore, the optimum temperature range for CR adsorption on the LF and KLF adsorbents is 40 to 60°C. For the K adsorbent, the CR elimination % increases from 28.7% to 53.3% as the temperature is raised from 25 to 40°C, again because of the growth in the CR diffusion rate, and then decreases to 33% as the temperature is increased from 40 to 60°C. This decrease can be attributed to desorption of CR molecules caused by the destruction of the adsorptive forces responsible for CR adsorption on the K nano-adsorbent surface [77]. With a further increase in temperature from 60 to 90°C, the removal% rises slightly from 33% to 36%. The best temperature for CR adsorption onto K is therefore 40°C.
Reusability of adsorbents
The reusability of K, LF, and KLF for the elimination of CR was tested over four cycles with the same adsorbent and the same adsorbent dosage (Figure 4(d)). The results show that the removal strength of all adsorbents varied greatly across the four adsorption cycles. For the K adsorbent, the recorded dye removal % was 28.7% in the 1st cycle, 12.5% in the 2nd, 10% in the 3rd, and ~10% in the 4th. For the LF adsorbent, the dye removal % decreased from 73% in the 1st cycle to 15% in the 4th. For the KLF adsorbent, the CR removal% decreased from 92% in the 1st cycle to 20% in the 4th.
The reduction in the CR removal% can be ascribed to agglomeration of CR molecules on the surface of the K, LF, and KLF adsorbents, which blocks the adsorbent surface and pores from the dissolved CR molecules and thus reduces the adsorption capacity [78].
Linear regression analysis
The statistical significance of R2 (the correlation coefficient) for the linear fits of Ce/qe versus Ce, log(qe) versus log(Ce), and qe versus ln(Ce) was the criterion by which the data were fitted to the Langmuir, Freundlich, and Temkin isotherms, respectively. The values of Qo, KL, KF, 1/n, KT, B, and R2 were determined from the linear plots in Figure 5(a-c) and are recorded in Table 1. The results in Table 1 demonstrate that CR adsorption on the K and KLF adsorbents follows the Langmuir isotherm model, for which the R2 value is highest [79-81]. Accordingly, elimination of the dye occurs at the active sites of the nano-adsorbents in a single surface layer, and the adsorbed CR molecules do not interact with each other. At 25°C, the R2 values obtained from the Langmuir isotherms were 0.9911 and 0.9893 for the K and KLF adsorbents, respectively. The value of RL is < 1, which means that CR adsorption is favourable in the studied system [82]. CR adsorption on the LF adsorbent follows the Temkin isotherm model, for which the R2 value is highest.
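For readers who wish to reproduce the linear fits, the short sketch below shows how the three linearised isotherms could be fitted; the concentration and uptake arrays are illustrative placeholders, not the measured values behind Table 1.

```python
import numpy as np
from scipy.stats import linregress

Ce = np.array([1.2, 2.5, 4.8, 7.1, 9.3])   # equilibrium CR concentration, mg/L (hypothetical)
qe = np.array([2.1, 3.4, 4.3, 4.7, 4.9])   # equilibrium uptake, mg/g (hypothetical)

# Langmuir: Ce/qe = Ce/Qo + 1/(KL*Qo)  ->  slope = 1/Qo, intercept = 1/(KL*Qo)
lang = linregress(Ce, Ce / qe)
Qo, KL = 1.0 / lang.slope, lang.slope / lang.intercept
print(f"Langmuir  : Qo={Qo:.2f} mg/g, KL={KL:.3f} L/mg, R2={lang.rvalue**2:.4f}")

# Freundlich: log(qe) = log(KF) + (1/n)*log(Ce)
freu = linregress(np.log10(Ce), np.log10(qe))
print(f"Freundlich: KF={10**freu.intercept:.2f}, 1/n={freu.slope:.3f}, R2={freu.rvalue**2:.4f}")

# Temkin: qe = B*ln(KT) + B*ln(Ce)
temk = linregress(np.log(Ce), qe)
B, KT = temk.slope, np.exp(temk.intercept / temk.slope)
print(f"Temkin    : B={B:.2f}, KT={KT:.3f} L/g, R2={temk.rvalue**2:.4f}")
```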
Nonlinear regression analysis
Redlich and Peterson proposed a nonlinear empirical isotherm model [83]. Adsorption that does not follow ideal monolayer adsorption can be described by this hybrid mechanism. The model combines elements of the Freundlich and Langmuir models and can describe sorption equilibrium over a wide range of adsorbate concentrations. The nonlinear form of this empirical model can be written as qe = P1·Ce / (1 + P3·Ce^g), where Ce is the adsorbate concentration in solution at equilibrium (mg/L), P1 (L/g) and P3 ((mg/L)^-g) are the Redlich-Peterson constants (with the exponent of the numerator, P2, set to 1), and g is an exponent with a value between 0 and 1. The model becomes a linear isotherm when g = 0, reduces to the Langmuir isotherm when g = 1, and converts into the Freundlich isotherm when P1, P3 >>> 1 and g ≤ 1. The ratio P1/P3 indicates the adsorption capacity [84,85].
The parameters of the sorption isotherm model can be determined using a nonlinear regression approach in OriginPro 2018 by minimising the sum of squared differences between the experimental data and the model output. Theoretical qe versus Ce values are first calculated using initial estimates of the unknown model parameters, and the residual sum of squares (RSS) between the experimental data and the theoretical model output is obtained. Iterations then follow in which the estimated parameter values are adjusted by small amounts and the RSS is recalculated repeatedly until the parameter values yield the lowest attainable RSS. As shown in Table 1 and Figure 5(d), nonlinear regression provides a more appropriate and precise determination of the model parameters than linear regression [86]. For the K nano-adsorbent, P1 and P3 >>> 1, g < 1, and P1/P3 = 0.99. For the LF nano-adsorbent, P1 > 1, P3 < 1, g < 1, and P1/P3 = 6.46. For KLF, the P1/P3 ratio increases to 9.55, indicating an enhanced adsorption capability. Moreover, because the g value for KLF is very close to unity, the nonlinear Redlich-Peterson model approaches the Langmuir model, which correlates well with the linear regression data in Table 1.
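A minimal sketch of this nonlinear fitting step is given below, assuming the common three-parameter Redlich-Peterson form qe = P1·Ce/(1 + P3·Ce^g) and using scipy's curve_fit (least-squares RSS minimisation) in place of OriginPro; the data and initial guesses are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def redlich_peterson(Ce, P1, P3, g):
    """qe = P1*Ce / (1 + P3*Ce**g) with 0 <= g <= 1."""
    return P1 * Ce / (1.0 + P3 * Ce**g)

Ce = np.array([1.2, 2.5, 4.8, 7.1, 9.3])   # hypothetical equilibrium concentrations, mg/L
qe = np.array([2.1, 3.4, 4.3, 4.7, 4.9])   # hypothetical uptakes, mg/g

popt, _ = curve_fit(redlich_peterson, Ce, qe,
                    p0=[1.0, 0.1, 0.9],                        # initial parameter estimates
                    bounds=([0, 0, 0], [np.inf, np.inf, 1.0]))  # keep g within [0, 1]
P1, P3, g = popt
rss = np.sum((qe - redlich_peterson(Ce, *popt))**2)             # residual sum of squares
print(f"P1={P1:.3f}, P3={P3:.3f}, g={g:.3f}, P1/P3={P1/P3:.2f}, RSS={rss:.4f}")
```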
Error function analysis
An error function assessment is frequently necessary to evaluate how well a model equation describes experimental results. Error functions are statistical expressions used to quantify the difference between theoretically predicted values and the actual experimental data. The best-fitting model was validated with three statistical error functions: the coefficient of determination (R2), the reduced chi-square (χ2) test, and the sum of squared errors (SSE). The best-fitting model is the one with the lowest SSE and χ2 (close to zero) and the highest R2 (close to unity). The values of these error functions are presented in Table 1. The R2 values obtained with the Redlich-Peterson model were 0.9928, 0.9902, and 0.9981 for the K, LF, and KLF adsorbents, respectively. The χ2 values are also very close to zero (0.02, 0.05, 0.03), which implies that the experimental data are better fitted by the nonlinear regression.
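The three error functions can be computed as in the short helper below; the reduced chi-square shown uses one common definition (squared residual divided by the model value, then by the degrees of freedom), which is an assumption rather than the exact formula used in this study.

```python
import numpy as np

def error_functions(q_exp, q_model, n_params):
    """Return R2, reduced chi-square and SSE for a fitted isotherm/kinetic model."""
    residuals = q_exp - q_model
    sse = float(np.sum(residuals**2))                          # sum of squared errors
    r2 = 1.0 - sse / float(np.sum((q_exp - q_exp.mean())**2))  # coefficient of determination
    dof = max(len(q_exp) - n_params, 1)                        # degrees of freedom
    chi2_red = float(np.sum(residuals**2 / np.maximum(q_model, 1e-12))) / dof
    return r2, chi2_red, sse

# e.g. error_functions(qe, redlich_peterson(Ce, *popt), n_params=3) for the fit above
```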
Adsorption kinetic models
To identify the most appropriate adsorption kinetics model, the adsorption of CR on K, LF, and KLF at various initial CR concentrations was examined. The linear first-order, second-order, and Elovich kinetic plots are presented in Figure 6, obtained by plotting ln(qe - qt) versus t, t/qt versus t, and qt versus ln(t), respectively. The kinetic parameters k1, k2, qe, β, and α of each model, together with R2, were obtained from the linear plots and are listed in Table 2. The linear fits and regression coefficients in Table 2 for all the studied kinetic models confirm that CR adsorption onto K is well described by the second-order model up to a dye concentration of 20 ppm; above this concentration the adsorption process follows the first-order rate law, which is also confirmed by the good agreement between the calculated and experimental qe values [87,88]. CR adsorption onto KLF is well described by the Elovich model, as reflected in the higher R2 values [87]. CR adsorption onto LF follows two different kinetic models depending on the CR concentration: it follows second-order kinetics up to 15 ppm and the Elovich model above this concentration.
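The sketch below illustrates how the three linearised kinetic models could be fitted to a time series; the time and uptake values are hypothetical and only demonstrate the fitting relations named above.

```python
import numpy as np
from scipy.stats import linregress

t  = np.array([5, 10, 20, 40, 80, 160], dtype=float)   # contact time, min (hypothetical)
qt = np.array([1.1, 1.8, 2.6, 3.4, 4.1, 4.5])          # uptake at time t, mg/g (hypothetical)
qe_exp = 4.8                                            # experimental equilibrium uptake, mg/g

# Pseudo-first-order: ln(qe - qt) = ln(qe) - k1*t
pfo = linregress(t, np.log(qe_exp - qt))
k1, qe_pfo = -pfo.slope, np.exp(pfo.intercept)

# Pseudo-second-order: t/qt = 1/(k2*qe^2) + t/qe
pso = linregress(t, t / qt)
qe_pso = 1.0 / pso.slope
k2 = pso.slope**2 / pso.intercept

# Elovich: qt = (1/beta)*ln(alpha*beta) + (1/beta)*ln(t)
elo = linregress(np.log(t), qt)
beta = 1.0 / elo.slope
alpha = np.exp(elo.intercept * beta) / beta

print(f"PFO: k1={k1:.3f} 1/min, qe={qe_pfo:.2f} | PSO: k2={k2:.4f}, qe={qe_pso:.2f} | "
      f"Elovich: alpha={alpha:.3f}, beta={beta:.3f} "
      f"(R2: {pfo.rvalue**2:.3f}, {pso.rvalue**2:.3f}, {elo.rvalue**2:.3f})")
```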
Sorption mechanisms
To understand the adsorption kinetics and the rate-controlling steps, the experimental data were fitted to Weber's intra-particle diffusion model. A straight line in the plot of qt versus t^1/2, Figure S2 (Supplementary data), indicates the applicability of the intra-particle diffusion model. The values of K3 and I, Table 3, were obtained from the slope and intercept of the linear fit, respectively. The intercept I ≠ 0, demonstrating that intra-particle diffusion may not be the sole rate-controlling step in the adsorption kinetics [89]. The intercept in Figure S2 reflects the boundary-layer effect: the larger the intercept, the greater the contribution of surface adsorption to the rate-controlling stage [89].
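A corresponding Weber-Morris fit might look like the following, with K3 and I taken from the slope and intercept of qt versus t^1/2 (illustrative data, not the values of Table 3).

```python
import numpy as np
from scipy.stats import linregress

t  = np.array([5, 10, 20, 40, 80, 160], dtype=float)   # contact time, min (hypothetical)
qt = np.array([1.1, 1.8, 2.6, 3.4, 4.1, 4.5])          # uptake, mg/g (hypothetical)

fit = linregress(np.sqrt(t), qt)        # qt = K3*t^0.5 + I
K3, I = fit.slope, fit.intercept
print(f"K3={K3:.3f} mg/(g*min^0.5), I={I:.3f} mg/g, R2={fit.rvalue**2:.4f}")
```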
MC simulation
The lowest-energy configurations obtained for the adsorption of Congo red on the kaolinite (001) and (002) facets for three different supercells are summarised in Figure 7. The MC simulation aims to elucidate the influence of the different planes and sizes of kaolinite on the adsorption of CR. The adsorption energies of each kaolinite-Congo red system are listed in Table 4. Figure 7(a-c) shows the CR adsorption on the kaolinite (001) plane in a dry system without any solvent. The CR molecule possesses various hydrogen bond (HB) donor/acceptor sites, and its oxygen and nitrogen atoms therefore form a number of HBs and intra-molecular HBs with the hydroxyl hydrogen atoms of the kaolinite surface. Figure 7(d-f) shows the corresponding formation of HBs and intra-molecular HBs between CR and the hydroxyl hydrogen atoms of kaolinite along the (002) plane. The adsorption energy (ΔE ads), interaction energy (E int), deformation energy (E def), and the substrate/adsorbate configuration energies (dE ads/dN i), in which one of the adsorbate constituents is missing, are reported in Table 4. ΔE ads is negative for all configurations in this study, i.e. the adsorption of CR molecules on the kaolinite surfaces is exothermic, energetically preferred, and spontaneous, owing to the intermolecular interactions involved. Increasing the kaolinite surface size does not significantly affect the adsorption energies for any configuration. However, the HBs and intra-molecular HBs with the hydroxyl hydrogen atoms of the kaolinite (001) surface are weaker than those on (002), which may lower the ΔE ads values; nevertheless, the adsorption energies for all configurations on kaolinite (001) are larger than those on kaolinite (002), as shown in Table 4. The complete supercells for the adsorption configurations of CR on the kaolinite (001) and (002) facets are displayed in Figure S3 for clarity.
Field experiments
The optimised parameters for the newly synthesised KLF adsorbent were an adsorbent mass of 0.02 g, near-room temperature, and a contact time of 480 minutes, while the pH of the dye-containing wastewater was left unchanged. Optical scanning of the as-received industrial wastewater detected absorbance at several wavelengths corresponding to different dyes. The absorbance at these wavelengths was recorded at the end of the contact period to measure the dye removal efficiency from the industrial wastewater. Across the various wavelengths, the data revealed a removal efficiency of about 94%.
Comparison of adsorption capability of K, LF, and KLF with other adsorbents
Table 5 compares the qm (adsorption capacity) values of various adsorbents reported in the literature with those of K, LF, and KLF for CR dye adsorption. The comparison of qm values shows that K, LF, and KLF exhibit a fair adsorption potential for CR dye from aqueous solutions [10,14,32,90-93]. Our optimised composite showed a qm greater than those of previously reported kaolin-based and Liagora farinose adsorbents [10,14,32,90-93].
Conclusion
A unique kaolinite/alga (KLF) nanocomposite was produced using the wet impregnation approach. The performance of the KLF nanocomposite as a nano-adsorbent for CR dye in aqueous solutions was investigated and compared with that of kaolin (K) and the alga Liagora Farinose (LF). Morphological and structural characterisation of K, LF, and KLF indicated aggregation of LF nanoparticles within the kaolinite nanopores to produce nanocomposite crystallites of 40.3 nm. The adsorption tests revealed removal efficiencies of ~98%, 49%, and 37.7% for a 5 ppm CR dye solution using KLF, LF, and K, respectively. The highest adsorption capacities of K, LF, and the KLF nanocomposite were 5.0, 7.0, and 10 mg/g, respectively, giving the performance order KLF > LF > K for all CR concentrations. Furthermore, the adsorbent dose, working temperature, and pH value all have a significant impact on the CR removal percentage, with 40°C and pH > 7 being the best working conditions for CR adsorption onto KLF. The reusability tests for K, LF, and KLF revealed that none of the adsorbents retained their full CR removal capacity on reuse, although the novel KLF composite demonstrated improved stability. The CR adsorption isotherms and kinetics on K, LF, and KLF indicate that the Langmuir isotherm model is followed by the K and KLF adsorbents, while LF is better described by the Temkin isotherm. CR adsorption on KLF is well described by the Elovich model, while K and LF follow two separate kinetic models depending on the CR concentration. Increasing the kaolinite surface size also does not greatly affect the adsorption energy for any configuration. Finally, field tests showed a remarkable dye removal efficiency of 94% from industrial wastewater, confirming the basis of a modern eco-friendly adsorbent that could assist in the reuse of industrial wastewater. Future research should focus on improving the stability of the developed nanocomposite by incorporating plasmonic or metal oxide nanostructures.
Figure 1
Figure 1 shows SEM images of the K, LF, and KLF adsorbents. For kaolinite, Figure 1(a) shows agglomerated, rounded, regularly shaped particles with a rough surface, various particle sizes, and porous cavities on the surface. The SEM image of LF, Figure 1(b), reveals a less porous surface, which consequently reduces the surface area of LF and in turn its adsorption capacity. When kaolinite is treated with the alga LF, the SEM image of the nanocomposite, Figure 1(c), shows the pores of the kaolinite surface covered by LF particles and converted into agglomerated particles. The formation of the KLF nanocomposite is thus established from the changes in the morphology of the nanocomposite relative to those observed for K and LF.
Figure 3 .
Figure 3. Effect of CR dye concentration and contact time on the removal% and the amount of CR dye adsorbed at 25°C and pH 7 by 20 mg of (a, d) K, (b, e) LF, and (c, f) KLF.
Figure 4 .
Figure 4. Effect of (a) adsorbent weight, (b) initial pH of the solution, (c) adsorption temperature, and (d) reusability test on the removal% of 20 ml of CR solution of 10 mg/l by K, LF and KLF; (e) effect of pH on zeta potential.
Figure 5 .
Figure 5. Plots of (a) Langmuir, (b) Freundlich and (c) Temkin adsorption isotherms for the adsorption of CR dye by 50 mg of K, LF and KLF at 25°C and pH 7; and (d) nonlinear regression fitting for Redlich-Peterson isotherm.
Figure 7 .
Figure 7. Snapshots of the adsorption configurations of Congo red adsorbed on the (a-c) kaolinite (001) facet and (d-f) kaolinite (002) facet.
Table 1 .
Isotherm parameters for CR adsorption on K, LF and KLF.
Table 2 .
Parameters of the kinetic models for CR dye adsorption on K, LF and KLF.
Table 3 .
Intra-particle diffusion constants at different initial CR concentrations at 25°C.
Table 4 .
Adsorption energies for the adsorption configurations of Congo red adsorbed on kaolinite (001) and (002) facets.
Table 5 .
Comparison of the optimised conditions, removal%, and adsorption capacity of different CR adsorbents relative to our K, LF, and KLF nanoadsorbents. | 8,891.4 | 2021-08-31T00:00:00.000 | [
"Environmental Science",
"Chemistry",
"Materials Science"
] |
Emotion AWARE: an artificial intelligence framework for adaptable, robust, explainable, and multi-granular emotion analysis
Emotions are fundamental to human behaviour. How we feel, individually and collectively, determines how humanity evolves and advances into our shared future. The rapid digitalisation of our personal, social and professional lives means we are frequently using digital media to express, understand and respond to emotions. Although recent developments in Artificial Intelligence (AI) are able to analyse sentiment and detect emotions, they are not effective at comprehending the complexity and ambiguity of digital emotion expressions in knowledge-focused activities of customers, people, and organizations. In this paper, we address this challenge by proposing a novel AI framework for the adaptable, robust, and explainable detection of multi-granular assembles of emotions. This framework consolidates lexicon generation and finetuned Large Language Model (LLM) approaches to formulate multi-granular assembles of two, eight and fourteen emotions. The framework is robust to ambiguous emotion expressions that are implied in conversation, adaptable to domain-specific emotion semantics, and the assembles are explainable using constituent terms and intensity. We conducted nine empirical studies using datasets representing diverse human emotion behaviours. The results of these studies comprehensively demonstrate and evaluate the core capabilities of the framework, which consistently outperforms state-of-the-art approaches in adaptable, robust, and explainable multi-granular emotion detection.
Introduction
The rapid digitalisation of society has empowered knowledge-focussed human activities and communication to transpire on hyper-connected, digital platforms. This spectrum of intrapersonal, interpersonal, and group activities has led to the generation and management of high volumes of big social data that represent patterns of behaviour of individuals and organizations, and how they leverage insights drawn from that information for further engagement and collaborative activities [1]. Expressions of emotion are encapsulated in these digital platforms, which is highly useful for accurately modelling human behaviour [2]. The persistence of this textual digital record enables the use of computational approaches to process, analyse and synthesise emotion expressions. Computational approaches for emotion detection have been classified using several schemes in the existing literature. Acheampong et al. [3] proposed three categories: rule-based, machine learning and hybrid methods. Alswaidan et al. [4] proposed a scheme of five categories: keyword-based, rule-based, classical learning, deep learning and hybrid. In reviewing these schemes, we have summarised them into three technical categories: (1) heuristics (which includes keyword, rule-based, probabilistic and statistical methods), (2) Artificial Intelligence (AI) (consisting of classical learning, machine reasoning and deep learning) and (3) hybrids of the two. Despite the maturity of this topic in terms of classification schemes and the prevalence of many approaches across these three classes, the complexity and ambiguity of emotion expressions on digital platforms have not been fully addressed. We substantiate this challenge of complexity and ambiguity in terms of four capabilities: (1) output (granularity of emotion detection output), (2) domain specificity, (3) adaptability, and (4) explainability.
We conducted a systematic literature review of the state-of-the-art in emotion analysis and detection research published in the last five years, from 2018 to 2022. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) flow diagram for this review is reported in Supplementary Fig. 1 (Filename: emotionaware supp Fig. 1.docx). The review produced 83 articles that aligned with the selection criteria, which we then evaluated in terms of the four capabilities noted above. Supplementary Table 1 (Filename: EmotionAwareSuppTable1.xlsx) presents the results of this evaluation.
Based on the findings of the literature review and the subsequent evaluation against capabilities, we propose a novel framework for Emotion Assembles With Adaptability, Robustness and Explainability (AWARE). This Emotion AWARE framework intervolves heuristics and AI techniques with lexicon generation and finetuned Large Language Models (LLM) into a hetero-hierarchical structure that receives text containing emotion expressions as input and produces as output an assemble of emotions with corresponding intensity values. Emotion assembles can be created at three levels of granularity: two, eight and fourteen. The framework is adaptable as the hetero-hierarchical structure can be revised and reintroduced to reflect a domain or topic of interest. The framework is robust in its ability to detect implied emotion expressions through the context of surrounding terms, as well as to scale the intensity values based on negations, intensifiers, and inhibitors. The framework is explainable in its identification of terms and phrases for each emotion expression, leading up to a collection of terms that can be used to profile and compare multiple assembles.
In comparison to related work on emotion detection, the Emotion AWARE framework is novel in its construction of emotion assembles with intensity values, and in the explainability, adaptability and robustness of these emotion assembles. In approach, AWARE leverages the prior knowledge of lexicons and the learned knowledge of the finetuned language models, in contrast to the singular approaches adopted in related work, and it is the only approach evaluated on eight datasets (across studies). In terms of output, it produces multi-granular emotion assembles of 2, 8, and 14 emotions with intensity scores, in contrast to the class-based output produced by other methods. In terms of valence and arousal, the proposed framework detects valence across a broad spectrum of 14 emotion categories, and each category is assigned a score from 0 to 1. This scoring reflects arousal levels and is determined while taking modifiers and negations into consideration. All related methods in the recent literature are limited to a specific domain or to general application, whereas AWARE is intrinsically generic but can be adapted to a domain of interest. This feature is aptly demonstrated in the experimental results (Studies 5 and 6). Explainability, adaptability and modifier resolution are similarly more advanced than those reported in existing literature, mainly due to the effectiveness of the hybrid approach of prior knowledge from lexicons and learned knowledge from finetuned language models.
Literature review
As noted above, we conducted a systematic literature review of state-of-the-art research on emotion analysis published in the last 5 years, from 2018 to 2022. The PRISMA flow diagram and the evaluation of the selected work against the four capabilities are reported in Supplementary Fig. 1 (Filename: EmotionAwareSuppFig1.docx) and Supplementary Table 1 (filename: EmotionAwareSuppTable1.xlsx), respectively. Here, we delineate key findings in terms of the three categories: heuristics, AI and hybrids.
Heuristic approaches include keyword recognition, rule-based logical/grammatical affinities, and statistical and probabilistic methods. These methods are grounded in emotion lexicons, corpora and dictionaries that represent prior knowledge of how emotion is expressed in a domain or discipline. An emotion lexicon is typically a list of synonyms and related words used for each emotion category, where each word may also be assigned a fixed intensity value. Besides a flat list, the lexicon can also be organised hierarchically in a tree structure or interlinked as a graph or map structure. Emotion lexicons reported in the literature include Plutchik's emotional terms [5], the WordNet-Affect [6], EmoSenticNet [7], DepecheMood [8], and SentiWordNet [9] dictionaries. Keyword recognition methods [10] rely on locating keywords representing emotions in a given text and assigning an emotion label based on these keyword counts and other statistics. These methods can be used for explicit emotion detection. For example, "their arrival made me happy" explicitly expresses the emotion happiness/joy with the keyword "happy". But often emotions are not explicitly mentioned, and they can be negated or modified to give different or opposing interpretations than a keyword search would suggest. In such cases more advanced heuristics are required. Rule-based approaches incorporate text processing methods such as tokenization, part-of-speech tagging, and dependency parsing along with corpora and lexicons to find the most effective rule sets for emotion detection [11,12]. Several other approaches use lexical affinity with the support of lexicons to capture contextual and semantic relatedness and generate probabilistic values for each emotion category [13]. Furthermore, some approaches utilize dimensionality reduction and categorical feature extraction methods such as Latent Semantic Analysis (LSA) [14] and Probabilistic LSA [15] for improved emotion detection [16]. The use of lexicons enables domain adaptation in emotion detection, as lexicons can be easily extended or altered to suit the target domain. Furthermore, these methods can be extended to emotion intensity calculation and negation and modifier detection, as they can locate the keywords and evaluate the corresponding neighbourhood. However, a major drawback of all heuristic methods is that emotion expressions that are not specified in the lexicon, and those that are implied or ambiguous, are not detected. For these reasons, methods based purely on lexicons do not reach the benchmark performance of AI-based methods [3]. AI-based methods can be subdivided into two groups: conventional supervised learning methods situated in annotated datasets, and contemporary transfer learning methods that leverage pre-trained contextual language models. The conventional methods require large, labelled datasets where each sentence, paragraph or segment in the corpus is pre-assigned an emotion category (or label), typically by a human expert. This annotated dataset is used to train a multiclass classification model using supervised learning algorithms. Emotion classification and intensity calculation using XGBoost [17,18], Support Vector Machines (SVM) [19,20], Naïve Bayes (NB) [21,22], k-Nearest Neighbour (kNN) [22] and Decision Trees [23,24] are some prominent techniques reported in the related literature. More recently, deep learning algorithms such as Long Short-Term Memory (LSTM) networks [25,26], Gated Recurrent Units (GRU) [27,28] and Deep Neural Networks (DNN) [29,30] have also been used in
the same supervised learning context, but with increased performance. Collectively, all supervised learning methods have reported accuracies in the range of 65-80% on benchmark datasets [3]. However, supervised learning methods are impeded by two major limitations: the scarcity of large, domain-independent labelled datasets, and the challenge of ambiguous and implicit emotion expressions. More recent AI methods address these limitations by leveraging the semantic context of emotion expressions embedded in pre-trained language models. Unlike supervised methods, these methods can be fine-tuned with smaller labelled datasets using transfer learning. Emotion extraction using variations of BERT [31-33], GPT [34,35] and XLNet [36,37] are such methods that leverage the contextual knowledge embedded in language models. These approaches report state-of-the-art accuracies for emotion detection on benchmark datasets in the range of 75-99% [38]. However, this strength is also a weakness due to the limited generalisability across new, unforeseen emotion expressions, as well as intensifiers, inhibitors, and negations of emotion expressions, lack of explainability and constrained domain adaptation. Collectively, these limitations question the practical value of the high accuracies reported in empirical evaluation [39].
Several hybrid methods have also been proposed in the recent literature, combining heuristics with AI methods to improve accuracy and refine the emotion categories. Tzacheva et al. (2019) [20] proposed lexicon-based emotion annotation to train SVM classifiers for emotion extraction in tweets. Wu and Chuang [40] utilized a rule-based approach to extract semantics related to emotions and combined it with a lexicon ontology to extract emotions. Salim et al. [41] presented a self-supervised hybrid methodology for sentiment classification from unlabelled data that combines a machine learning classifier with a lexicon-based strategy. Li et al. [42] proposed a hybrid emotion detection system combining hand-crafted rules and a lexicon with a machine learning based classifier to extract emotional levels in online blogs.
Collectively across all three categories, the practical value of these methods in the management of information and the extraction of patterns of behaviour of individuals and organizations is vast. Large-scale analyses of social media during elections [43,44], patient-centred care for chronic illnesses such as Alzheimer's disease, cancer, and diabetes [45-47], real-time depression detection on social networks [48,49], and expressions of emotion and sentiment during the COVID-19 global pandemic [50-52] highlight the practical value in social and individual settings. In organisational settings, financial sentiment analysis [53], understanding consumer satisfaction [54], the role of social media in stock price movements [55], and the influence of review credibility and review usefulness [56] are pivotal studies that signify the continuing and incremental value of emotion analysis of digitalised content for all stakeholders.
In concluding the literature review, we elaborate on the four capabilities and their potency in addressing the challenges of the complexity and ambiguity of digital emotion expressions in knowledge-focused activities. The first capability is the output of the emotion detection approach. In most cases, this is limited to an emotion label without an intensity score for that emotion. This emotion label is also limited to a single granularity which cannot be further analysed in terms of its constituents. Most approaches assign a single emotion per atomic unit of text (sentence, paragraph or document) and overlook the presence of multiple emotions. The second capability, domain specificity, relates to the generalisability of the approach across diverse domains. Most approaches are highly specific to the syntax or semantics of a given domain, such as emotions in short text like tweets [57,58], emotions in poetry [59,60], emotions in code-switched text [61,62], and consumer reviews [63,64]. These are developed using supervised learning and then evaluated using labelled custom datasets, which further limits generalisability and application in diverse domains. Despite the custom datasets, some methods can be adapted (or retrained) for a new application, which is the third capability of adaptability. In recent work based on language models and annotated datasets, this capability is limited due to the large number of parameters and the opacity of transformer-based learning. They cannot be adapted without a significant volume of work on configuration and finetuning, which is equivalent to developing an entirely new approach. The fourth capability is the explainability of the detected emotion, which is becoming more important given our increasing dependence on AI and automation. Explainability has been overlooked in most approaches, mainly due to design limitations that have focused on producing emotion labels of singular granularity. We do not consider accuracy a core capability as it can be configured (or tweaked) in the design phase as an offset between the availability of annotated datasets for supervised learning and the need for generalisability across multiple domains. A high-quality human-annotated dataset can be leveraged by a supervised learning approach to produce highly accurate emotion classifications. In summary, the granularity of emotion detection output, domain specificity, adaptability and explainability are the formative capabilities of the proposed method for addressing the complexity and ambiguity of emotion expressions.
Methods
As illustrated in Fig. 1, the Emotion AWARE framework consists of three modules: Module 1 (Emotion Language Model Finetuning), Module 2 (Emotion Lexicon Generation) and Module 3 (AWARE Core). The components depicted in grey are external sources feeding into the Emotion AWARE framework, where the general instances we have used in this study can be replaced with specialised instances depending on the domain of application (this is demonstrated in Studies 5 and 6 for the financial and technology sectors).
Module 1 begins with a state-of-the-art language model, such as BERT [65], which has been effectively applied to diverse NLP tasks such as Reading Comprehension [66,67] and Natural Language Inference [68,69]. State-of-the-art language models are pretrained on large volumes of unlabelled data to generate deep contextualised word representations by considering syntax and semantics [70]. In application, these pre-trained models are finetuned on labelled datasets through transfer learning techniques. For this framework, we selected the DistilBERT [71] base cased model with the Huggingface [72] PyTorch implementation for the finetuning. As the finetuning dataset we selected the Emotion dataset [73] due to its substantial size, granularity of emotions, and widespread acceptance in the research community. It contains 20,000 tweets covering six emotions: joy (33.5%), sadness (29.2%), anger (13.5%), fear (12.1%), love (8.2%), and surprise (3.6%). For the finetuning, we combined the train and validation sets, randomised them, and selected a subset of 5653 points comprising 1000 samples per emotion, except surprise which had 653 points. The finetuning settings were a default token length of 128 enabled by both padding and truncation, and a batch size of 64 with 8 epochs. At a learning rate of 0.00002 and a weight decay of 0.01, the finetuning completed with an F1 score of 0.9394 on the test segment of the dataset. The finetuned language model is utilised by Module 2 for the expansion of a curated list of emotion seed words and by Module 3 for emotion embedding space generation. As noted earlier, DistilBERT can be replaced with any other language model that is closely aligned with the domain of interest.
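A hedged sketch of this finetuning step is shown below, using the Hugging Face Trainer API with the hyperparameters quoted above; the checkpoint name, dataset identifier and the omitted 5653-sample subsetting are assumptions for illustration rather than the exact training script used in this work.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

# Emotion dataset with six labels: joy, sadness, anger, fear, love, surprise
dataset = load_dataset("dair-ai/emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")

def tokenize(batch):
    # Default token length of 128 with both padding and truncation, as described above
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-cased",
                                                           num_labels=6)
args = TrainingArguments(output_dir="emotion-distilbert",
                         per_device_train_batch_size=64,
                         num_train_epochs=8,
                         learning_rate=2e-5,
                         weight_decay=0.01)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["test"])
trainer.train()
```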
Module 2 initiates with an emotion seed word list constructed and curated using a combination of automated and manual methods. In developing our emotion lexicon, we referenced Plutchik's model [74], which identifies eight primary emotion classes, each further divided into three subcategories, resulting in a comprehensive 24-class system. Initially, seed keywords for each of these classes were manually curated from an online thesaurus [75]. However, we encountered a scarcity of unique terms for certain emotions, which necessitated merging closely related categories: joy and ecstasy, amazement and surprise, disgust and loathing, interest and vigilance, anger (rage, anger, annoyance), and fear (terror, fear, apprehension). As a result, we consolidated the model into 14 broader emotion classes, each supported by 15-20 thesaurus-derived terms.
While manually curating seed terms yielded high-quality initial seeds, the number of words was insufficient for comprehensive lexicon construction. Therefore, we utilized the vocabulary of the finetuned DistilBERT model itself, extracted embeddings for each of our seed words, and compared them with the raw embeddings of the model's vocabulary terms to find contextually and emotionally similar words. However, due to the ambiguity of individual term embeddings, the relevance of these expanded terms was not highly consistent. To address this, we first clustered the seed words into four subgroups (with k = 4 set via the elbow method [76]) using the constrained k-means algorithm [77] and then used the average embedding of each subgroup for the expansion. This process extended each subgroup with the 25 most similar terms from the model's vocabulary, aiming for a total of 100 terms per each of the 14 emotion classes. Subsequent refinement involved removing duplicates and terms conflicting with Plutchik's polar opposites to improve the lexicon coherence.
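The sketch below illustrates the clustering-and-expansion idea on a tiny hypothetical seed list for a single emotion class; plain k-means stands in for the constrained k-means used here, and the base DistilBERT checkpoint stands in for the finetuned model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModel.from_pretrained("distilbert-base-cased")
vocab_emb = model.get_input_embeddings().weight.detach().numpy()   # raw vocabulary embeddings

# Hypothetical seeds for one class (e.g. joy); the paper curates 15-20 per class
seeds = ["happy", "cheerful", "delighted", "joyful", "glad", "elated", "content", "merry"]
seed_ids = [tokenizer.convert_tokens_to_ids(w) for w in seeds]
seed_emb = vocab_emb[seed_ids]

# Cluster the seeds into 4 subgroups (k = 4, as chosen via the elbow method above)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(seed_emb)

expanded = set(seeds)
for c in range(4):
    centroid = seed_emb[labels == c].mean(axis=0, keepdims=True)
    sims = cosine_similarity(centroid, vocab_emb)[0]
    top = np.argsort(-sims)[:25]                     # 25 most similar vocabulary terms
    expanded.update(tokenizer.convert_ids_to_tokens(top.tolist()))

print(sorted(expanded)[:20])   # candidate terms for subsequent manual refinement
```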
The resulting vocabulary for each emotion class contained between 80 and 100 terms. To standardize the lexicon, we pruned it by considering the centrality of term embeddings: we compared each term's embedding to the average category embedding and retained the 80 most pertinent terms per class. The final emotion lexicon comprised 1120 terms across the 14 classes. Table 1 depicts the alignment of the 2, 8 and 14 emotion classification schemes. The 8 classes of emotion contained 80 words per class, with a total of 640 terms. The version with two classes contained 480 words per category, with a total of 960 terms. Module 2 also contains externally sourced lexicons for modifiers (inhibitors and intensifiers) and negations, which are based on the valence detection work described in VADER [78]. VADER employs an advanced process that integrates human annotations, heuristic rules, and statistical modelling to determine the valence and polarity of the modifiers. Module 2 provides these two lexicons and the expanded emotion terms as output to Module 3.
Module 3 receives the expanded emotion terms and their corresponding embeddings and generates an emotion embedding space. If the lexicon is constructed from scratch, this step is skipped, as the words are already tagged with embeddings during the expansion; for external lexicons, each word is passed through the embedding extractor and tagged with the corresponding embedding. The high-dimensional vectors of this emotion embedding space can be visualised using the t-SNE algorithm on a 2-D grid, as shown in Fig. 2. Each point in Fig. 2 corresponds to an emotion term, with a clear separation between green and red, where green denotes positive emotions and red denotes negative emotions in the 14-emotion categorisation. Next, the sample input text or an entire text corpus is received by Module 3. This input is pushed through the embedding generator and then projected onto the emotion embedding space. The n nearest neighbour extraction process identifies the closest emotion terms based on this projection. This process is depicted in Module 3, where the nearest neighbours are shown as green dots and all other emotion embeddings as blue dots. Based on these nearest neighbours, the Intensity Quantification component calculates the intensities of each of the relevant emotion classes. Each neighbour receives a score based on its proximity to the sample input. The terms are sorted and ranked based on similarity, then grouped by emotion category, and the summed scores for each category are normalised to create the emotion assemble of two, eight and fourteen emotions per input text. See Eq. 1.
Equation 1 - Calculating Emotional Intensity

θ_e = ( Σ_{x ∈ A} S_x ) / ( Σ_{x=1..n} S_x )    (1)

where θ_e is the intensity of emotion e, n is the number of nearest neighbours, A is the subset of nearest neighbours carrying emotion e, and S_x is the distance score of neighbour x.

Fig. 2 The emotion embedding space generated by Module 3 of the Emotion AWARE framework

The next phase in Module 3 is the Explainability component. Explainability in AI aims to understand and interpret the output made by the model. In the context of Emotion AWARE, this is achieved by identifying and extracting the words that have contributed significantly towards forming the emotion profile. Here, term embeddings extracted from the input text vector representation are compared with the mean embedding of the entire text. The terms are ranked based on similarity, and the top N terms are recorded for explainability and also sent to the intensity rectification component.
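As a concrete illustration of Eq. 1, the following sketch sums the proximity scores of hypothetical nearest-neighbour lexicon terms per emotion category and normalises them into an assemble; the neighbour list and scores are invented for the example.

```python
from collections import defaultdict

# (emotion_category, proximity score S_x) for the n nearest lexicon terms to the input
neighbours = [("joy", 0.82), ("joy", 0.77), ("disgust", 0.74),
              ("disgust", 0.70), ("sadness", 0.41)]

sums = defaultdict(float)
for emotion, score in neighbours:
    sums[emotion] += score            # sum of S_x over the subset A for each emotion

total = sum(sums.values())            # sum of S_x over all n neighbours
assemble = {e: round(s / total, 3) for e, s in sums.items()}
print(assemble)                       # {'joy': 0.462, 'disgust': 0.419, 'sadness': 0.119}
```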
The intensity rectification component consists of two resolution processors, one for modifiers (intensifiers and inhibitors) and one for negations. The terms adjacent to the top N terms are passed through the corresponding lexicons to check for negating, intensifying or inhibiting terms. Modifier resolution is completed prior to negation resolution in order to detect intensified or inhibited negations. For detected intensifiers and inhibitors, the score of the top emotion in the profile is revised depending on the intensity of the modifier. The emotion profile is then normalised so that the increment or decrement of the top emotion affects the other emotions in the profile. In the case of negations, the emotion categories are revised based on Plutchik's polar opposites. See Eqs. 2 and 3.
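A minimal sketch of this rectification step is given below; the multiplicative scaling in the first step is our reading of Eq. 2, and the Plutchik opposites table is abbreviated, so both should be treated as assumptions rather than the exact implementation.

```python
# Plutchik polar opposites (abbreviated to the eight primary emotions)
OPPOSITES = {"joy": "sadness", "sadness": "joy",
             "trust": "disgust", "disgust": "trust",
             "anger": "fear", "fear": "anger",
             "surprise": "anticipation", "anticipation": "surprise"}

def rectify(profile, top_emotion, modifier_valence=0.0, polarity=+1, negated=False):
    """Rectify an emotion profile for a detected modifier and/or negation."""
    profile = dict(profile)
    # Eq. 2 (assumed multiplicative form): scale the top emotion by the modifier valence,
    # with polarity +1 for an intensifier and -1 for an inhibitor
    profile[top_emotion] = profile[top_emotion] * (1 + polarity * modifier_valence)
    if negated:                                   # swap to the polar-opposite category
        opposite = OPPOSITES[top_emotion]
        profile[opposite] = profile.get(opposite, 0.0) + profile.pop(top_emotion)
    # Eq. 3: re-normalise so the intensities sum to one
    total = sum(profile.values())
    return {e: round(v / total, 3) for e, v in profile.items()}

# Example: an intensifier of valence 0.3 on the top emotion 'joy'
print(rectify({"joy": 0.6, "sadness": 0.4}, "joy", modifier_valence=0.3, polarity=+1))
# -> {'joy': 0.661, 'sadness': 0.339}
```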
Algorithm 1 EDGstar_Pathfinding
Algorithm 1 further describes the explainability component and intensity rectification. This algorithm takes the nearest neighbours list and the current emotion profile as inputs and generates as output a rectified emotion profile together with emotion keywords for explainability.
Figure 3 illustrates an instance of how AWARE constructs an emotion assemble for a given input text; each row of Fig. 3 depicts the input text and the relevant components of the output. The neighbourhood size is 50 and the input text is "The movie had a great start, but the ending was awful". Given the emotional ambiguity of this input, the 'Emotion Assemble' presents similar intensity scores for the polar emotions 'disgust' and 'joy'. This is also visible in the neighbour count vector. The explainable emotion terms are 'awful' and 'great', which provides a rationale for the polarity of the emotion assemble.
Results
We designed nine studies that demonstrate the capabilities of the framework for the elicitation of multi-granular, adaptable, robust, and explainable emotion assembles (Table 2). Each study is composed of a set of experiments where the datasets are drawn from a state-of-the-art collection that represents realistic conversations and content on digital media (Table 3). The results generated from this combination of nine studies across eight datasets confirm and validate the effectiveness of the proposed framework in the detection and analysis of emotions expressed in digital media. The same configurations were used for all experiments, such as the finetuned language model, the modifier and negation lexicons, and the scoring and explainability modules. Emotion lexicons/embedding spaces were based on the corresponding 2, 8 and 14 classes.
Study 1: Elicitation of two-emotion assembles (positive and negative) using ISEAR and twitter sentiment datasets
This study demonstrates the generation of two-emotion assembles of positive and negative emotions, the accuracy of which is then validated against existing methods for the same binary classification. We used two datasets, Twitter Sentiment and ISEAR, in which we aggregated sad, anger, fear and disgust as negative and joy as positive. The two-emotion assembles were evaluated against three other methods reported in the literature: (1) linear keyword matching using Plutchik's emotion terms list [], (2) stemmed keyword matching [10] with negation, inhibitor and intensifier detection components, and (3) SentiWordNet 3.0 [87]. The evaluation was conducted across four metrics: accuracy, precision, recall and F1-score.
("Affective Text"), ISEAR and fairy tales
As noted prior, the proposed framework is capable of detecting all emotions in Plutchik's wheel of emotions [88]. However, only a handful of related works have proposed techniques to detect all eight emotions. Therefore, we split the eight emotions into two subsets (common and rare) in order to ensure that Emotion AWARE can be evaluated against state-of-the-art approaches in the extant literature. Study 2 evaluates the common subset (anger, fear, sadness, joy), while study 3 evaluates the rare subset (disgust, surprise, anticipation, and trust). In study 2, we compared AWARE with rule-based, hybrid as well as machine learning techniques. The rule-based techniques include emotional linear keyword matching and stemmed keyword matching, as well as the more advanced rule-based methods that consider contextuality, and the affinity-based methods CLSA, CPLSA and DIM. Here, CLSA and CPLSA are categorical classifications based on LSA and PLSA. Additionally, we also compared with context-based emotion vector construction methods [89], namely context-based Wiki, context-based Guten and context-based W-G. For the machine learning methods, we finetuned a DistilBERT [71] model on the Emotion [90] dataset. Collectively, study 2 compares Emotion AWARE with ten similar techniques proposed in recent literature, using the SemEval 2007, ISEAR and Fairy Tales datasets. For this, we incorporated the experiments included in previous work [16,89]. As presented in Table 5, AWARE outperforms all methods for most combinations of dataset and emotions.
Study 5: Emotion AWARE adapted for the finance sector using the PhraseBank dataset
Domain adaptability is a core capability of Emotion AWARE. In studies 5 and 6, we demonstrate this capability for the financial and technology sectors. For the financial sector, we used the PhraseBank dataset, which contains financial statements classified for positive and negative emotions. Emotion AWARE was adapted to this domain by simply expanding the vocabulary with 20 words each for the positive and negative classes using the L&M financial emotion lexicon [91]. Following the domain adaptation, two-emotion assembles were generated and compared with the stemmed keyword matching technique, DistilBERT finetuned on the Emotion dataset, and SentiWordNet. Emotion AWARE was used with both the default vocabulary and the vocabulary extended using L&M. Table 8 summarises the results; notably, AWARE surpasses all methods across all metrics.
Study 6: Emotion AWARE adapted for the technology sector using Senti4SD8 dataset
Study 6 is the domain adaptation for the technology sector, where we used the Senti4SD dataset, which contains conversations from the Stack Overflow community classified by emotion. Similar to study 5, we evaluated the proposed approach with the default vocabulary as well as the extended vocabulary, alongside stemmed keyword matching, SentiWordNet, and finetuned DistilBERT. Here both the positive and negative classes were extended with 20 words extracted by running Emotion AWARE on the training set. As shown in Table 9, Emotion AWARE outperforms all other methods in this adaptability task.
Study 7: Robustness of Emotion AWARE across intensifiers and inhibitors
Intensifiers and inhibitors are used subjectively in emotion expressions, which means an emotion detection method must be robust to them, particularly in digitalised emotion expressions where physical cues are unavailable. To demonstrate this robustness property of Emotion AWARE, we created a new dataset, because the state-of-the-art datasets used in related work are limited in their inclusion of varying intensifiers and inhibitors. To construct this manually curated dataset, we selected a random subset of 80 sentences from the Fairy Tales dataset and introduced intensifiers and inhibitors to each sentence to generate an additional 160 sentences. Table 10 demonstrates the evaluation of a single sentence using known intensifiers and inhibitors and their corresponding impact on the emotion score and emotion category. The valence and intensity of the modifiers are derived from the prior work of VADER [78]. For an incrementing or decrementing modifier, the current top emotion's score is increased or decreased by a factor corresponding to the modifier intensity, as explained in Eq. 2; the emotion profile is then normalised according to Eq. 3. For this experiment we used a sample sentence from the SemEval-2018 dataset. As depicted in Table 10, the base sentence "work was good for the first half" is classified as joy_ecstasy with an intensity score of 0.339 and admire with a score of 0.229. In the subsequent rows, we added intensifiers and inhibitors of varying valence that modify the emotion expressed in the sentence. In descending order of Table 10, the intensity score of the top emotion of the base sentence (joy_ecstasy) decreases, illustrating that AWARE correctly identified all modifiers and attributed emotion labels and varied intensity scores accordingly. The manually curated dataset was used to evaluate Emotion AWARE, SentiWordNet, and stemmed keyword matching. Even though these approaches construct multi-facet emotion profiles, for this experiment we only considered the most significant emotion, as it is the most affected by such modifications. For instance, if the most significant emotion in the original sentence is joy with a score of x, the intensified sentence's joy score is expected to be > x, while the inhibited sentence's joy score is expected to be < x (Table 11). Thus, we considered the most significant emotion score of the original sentence in the inhibited and intensified cases to determine whether each approach correctly identified the modifiers. As the dataset consisted of 80 sentences, we calculated the mean of the most significant emotion score as the evaluation metric. Here DistilBERT (Emotion) is not included as it provides only labels (Table 12). As seen in the mean emotion scores, Emotion AWARE increased the score in the intensified case (0.346) and decreased it in the inhibited case (0.161) relative to the original sentences, showing that AWARE correctly modified the emotions compared to the corresponding original sentences. Stemmed keyword matching incorporated the modifiers to some extent but is bottlenecked by its limited modifier capture. SentiWordNet detected none of the modifiers and even lowered the scores for intensified sentences.
Study 8: Robustness of emotion AWARE in negation detection
Similar to Study 7, we randomly selected 80 sentences from the Fairy Tales dataset and manually negated them to create a new dataset of 80 negated sentences. Here we used negation terms such as 'no', 'not', and 'never' to reverse the emotions. We used this dataset to evaluate the robustness of Emotion AWARE against stemmed keyword matching, SentiWordNet, and DistilBERT finetuned on the Emotion dataset. Table 13 presents the mean F1 scores of emotion detection for the original and negated sentences in this dataset. It is interesting to note that although SentiWordNet and DistilBERT show accuracies comparable to AWARE for the original sentences, they perform poorly on the negated sentences, unlike Emotion AWARE, which scores an F1 of 0.841. We hypothesize that this behaviour results from these models' tendency to prioritize emotion-specific terms while disregarding the presence of negating words in the sentences. The datasets used in Studies 7 and 8 will be made publicly available as a secondary outcome of this work; the combined dataset consists of 320 sentences (80 each of original, negated, intensified, and inhibited) and is well suited to modifier evaluation.

Study 9: Explainability of emotion assembles

Study 9 evaluates the explainability of the emotion assembles generated by the Emotion AWARE framework, using both intensity scores and the terms that contribute to the detection of an emotion. Figure 4 illustrates this capability for a sample sentence randomly selected from the Fairy Tales dataset, "How fortunate I am; it makes me so happy, it is such a pleasant thing to know that something can be made of me". The framework generates intensity peaks for the terms "fortunate", "happy" and "pleasant", which distinguishes the contributing terms and their significance in the emotion assemble. These intensities are based on the w_dist scores explained in Algorithm 1. Table 14 presents a further demonstration of explainability with emotion keyword extraction. Here the positive and negative samples are randomly selected from the Fairy Tales dataset, and some samples are combined to create a mixed sample. The colour scheme depicts emotion significance, where shades of green are for positive emotions and shades of red are for negative emotions. The intensity scores are depicted on the right side of the image, which further improves the explainability of the emotion assemble.
Table 15 summarizes the emotion keyword results for the entire Fairy Tales dataset. For each sample in the dataset, the top emotion and top keyword are extracted. The table lists each of the emotion categories fear, anger, joy, surprise and sadness along with the 10 most frequent keywords per category. These keywords reflect the corresponding emotions, which further validates AWARE.
Discussion
The study of emotion has a vibrant history, beginning with the evolutionary context where Charles Darwin [92] posited that emotions are an expressive behaviour that has evolved to increase our chances of survival, right up to Barrett [93] constructivist view where an emotion is constructed by cognitively classifying an affect based on past knowledge of that emotion.A multitude of studies have been conducted on the types of emotions, using methods such as philosophical postulations, factor analytic studies, similarity scaling studies, child development studies, cross cultural studies and facial expression studies.Based on studies of facial expression, Ekman [94,95] proposed six basic emotions; anger, disgust, fear, happiness, sadness and surprise.This was followed by Plutchik's [74] eight primary emotions interlinked by polarity; joy and sadness, trust and disgust, surprise and anticipation, anger and fear.Plutchik also proposed the wheel of emotions, a three-dimensional circumplex that illustrates degrees of similarity/polarity between emotions [74].The wheel is split into eight sectors for eight primary emotions, layers within each sector signify varying intensities (for instance with joy, intense joy being ecstasy and less intense being serenity) and gaps between sectors represent the mix of two primary emotions.The more recent digitalisation of emotion expressions has led to new challenges in complexity and ambiguity due to the absence of physical cues and observer inference Table 15.Emotion AWARE addresses this complexity and ambiguity of emotion detection through its four capabilities of multi-granular emotion assembles, adaptability, robustness and explainability.Unlike related work in emotion detection, the proposed framework generates emotion assembles based on prior knowledge of heuristics and learned knowledge of the finetuned language models.Drawing upon the literature review, we conducted a capability comparison of Emotion AWARE against the most effective and relevant studies as tabulated in Table 16.Following this capability comparison, we developed empirical evidence through the experimental evaluation of Emotion AWARE across nine studies that are based on state-of-the-art datasets containing diverse human emotion expressions.Studies 1-4 evaluate the detection of a spectrum of emotion assembles, starting with binary (or sentiment), the four common emotions from Plutchik's wheel of emotion (anger, fear, sadness, joy), the four rare emotions (disgust, surprise, trust, anticipation), and the increasing granularity of emotions from 2, 4 to 14 categories.2, Emotion AWARE outperforms a finetuned DistilBERT, highlighting the importance of prior knowledge contained in lexicons.Adaptability of the framework is demonstrated in Study 5 and 6 where AWARE was adapted for the finance and technology domains.In Study 5, AWARE demonstrates a 6% improvement in F1-score with an extended vocabulary compared to finetuned DistilBERT.Most related work in recent literature forego domain adaptability, where the challenges include frequency and scarcity as well as changing emotion polarity across domains.For example, "unpredictable" is frequently used as a positive emotion expression in film reviews (e.g., "The plot of this movie is fun and unpredictable"), whereas it is a negative expression in financial markets or human resource management (e.g., "the impact on share market indices is unpredictable" or "the employee response to governance in unpredictable") [96].Language model-based approaches have 
limited adaptability across domains due to the scale of training data required for finetuning, while lexicon-based approaches require large hand-crafted, domain-specific lexicons [97]. Emotion AWARE overcomes both limitations by leveraging a short list of domain-specific terms together with embeddings, which introduce context through meaning and emotion instead of exact matching. Robustness of the framework is demonstrated in Studies 7 and 8, where implied emotions and the presence of intensifiers, inhibitors and negations are detected and assigned intensity values relative to other emotions expressed in the same text. Also, in Study 8, which demonstrates robustness of negation detection, DistilBERT and SentiWordNet perform poorly in comparison to Emotion AWARE due to their exclusive focus on learned knowledge of emotion expressions. For instance, DistilBERT can accurately identify the emotions of the sentences "I am truly glad to hear it!" (joy) and "I am truly sad to hear it!" (sadness) but incorrectly detects the emotion as joy in the negated version "I am truly not glad to hear it!". This highlights the significance of incorporating a heuristic approach to manage negations in Emotion AWARE, enhancing the accuracy of emotion detection. Finally, Study 9 demonstrates the explainability capability, where contributing terms and corresponding intensity scores of emotion assembles effectively unpack and rationalise the detected emotions.
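The negation and modifier handling described above can be illustrated with a small sketch. The function name and the tiny lexicons below are hypothetical placeholders rather than the framework's actual implementation; they only show how a heuristic layer can flip or dampen a lexicon-derived emotion before a score is assigned.

```python
# Minimal sketch of heuristic negation/modifier handling in front of an emotion lexicon.
# The lexicon entries, valences and window size are illustrative assumptions.
EMOTION_LEXICON = {"glad": ("joy", 0.8), "sad": ("sadness", 0.7)}   # hypothetical entries
NEGATIONS = {"not", "never", "no"}
INTENSIFIERS = {"truly": 0.3, "very": 0.2}                           # hypothetical valences

def detect_emotion(sentence: str):
    tokens = sentence.lower().strip("!.?").split()
    for i, tok in enumerate(tokens):
        if tok in EMOTION_LEXICON:
            emotion, intensity = EMOTION_LEXICON[tok]
            window = tokens[max(0, i - 3):i]          # look back a few tokens
            for prev in window:
                if prev in INTENSIFIERS:
                    intensity = min(1.0, intensity + INTENSIFIERS[prev])
            if any(prev in NEGATIONS for prev in window):
                # negation flips joy/sadness polarity rather than just scaling it
                emotion = "sadness" if emotion == "joy" else "joy"
                intensity *= 0.5
            return emotion, round(intensity, 2)
    return "neutral", 0.0

print(detect_emotion("I am truly glad to hear it!"))       # ('joy', 1.0)
print(detect_emotion("I am truly not glad to hear it!"))   # ('sadness', 0.5)
```

Even this crude look-back window reproduces the behaviour discussed above: the negated sentence no longer surfaces as joy, which a purely learned model may miss.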
The practical implications of this framework are broad. The robust, domain-adaptable and explainable detection of emotion expressions has wide application value as we increasingly express emotions using digital media. For instance, in a long-term healthcare setting with multiple stakeholders (such as cancer care involving a clinician, patient and social worker), the framework can be adapted to the vocabulary of each stakeholder, and the generated emotion assembles can be explained using their constituent terms. This, in turn, makes it possible to compare the emotion profiles of all stakeholders for convergence or divergence, supporting decision-making and consensus building in such complex settings.
Conclusion
The exponential transition of knowledge-focussed human activities and communication into digital spaces and physical hybrids has necessitated the manifestation, communication and persistence of our expressions of emotions on digital media. The proposed Emotion AWARE framework enables the objective and unambiguous detection of such emotions, with adaptability, robustness and explainability, for the subsequent generation and management of information that represents patterns of behaviour of individuals and organizations. The results from the nine experimental studies confirm its practical value and its contribution towards the comprehension of such expressions and behaviours of individuals and organizations. As future work, we intend to address the limitations of Emotion AWARE in complex settings where emotion is implied using highly technical, jargonistic or informal emoji-based expressions, and figurative expressions of emotion such as metaphors and similes. We will also work on integrating the detected emotions, along with other dimensions and modalities of information, into the decision-making activities of individuals and organizations.
Fig. 1
Fig. 1 The modular composition of the emotion AWARE framework
Equation 2: Rectifying Emotional Intensity. Equation 3: Normalizing Emotional Intensity. The variables are as follows: θ*_{e_k} is the updated intensity of the top keyword's emotion; θ_{e_k} is the current intensity of the top keyword's emotion; b is the modifier polarity (intensifier (+1) or inhibitor (−1)); a is the modifier valence; θ_e^{normalized} is the normalized intensity of emotion e; θ_e is the intensity of emotion e; and E is the set of all intensities in the emotion profile. Both the modifier and negation lexicons, as well as the polarities and valences, are based on the prior work of VADER [78].
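The variable definitions above suggest one plausible reading of these two steps: an additive modifier adjustment followed by a sum-to-one normalization. The sketch below implements that reading; the exact functional forms of Equations 2 and 3 are assumptions here, as are the example values.

```python
# Sketch of the two intensity operations implied by the variable list above.
# The additive form theta* = theta + b*a and the sum normalization are assumptions,
# not necessarily the exact Equations 2 and 3 of the paper.
def rectify_intensity(theta_ek: float, b: int, a: float) -> float:
    """Adjust the top keyword's emotion intensity by a modifier.
    b = +1 for an intensifier, -1 for an inhibitor; a is the modifier valence."""
    return max(0.0, theta_ek + b * a)

def normalize_profile(profile: dict) -> dict:
    """Scale all intensities in the emotion profile so they sum to one."""
    total = sum(profile.values())
    if total == 0:
        return profile
    return {emotion: theta / total for emotion, theta in profile.items()}

profile = {"joy": 0.9, "trust": 0.3}                                # hypothetical emotion profile
profile["joy"] = rectify_intensity(profile["joy"], b=+1, a=0.293)   # e.g. "truly" as an intensifier
print(normalize_profile(profile))
```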
Fig. 3
Fig. 3 An emotion assemble generated by the emotion AWARE framework for mixed polarity sample text
Table 1
Alignment of the 2, 8 and 14 emotion classification schemes
Table 4 presents the results, where Emotion AWARE surpasses all three methods.
Table 2
Nine studies evaluating and demonstrating capabilities of the proposed Emotion AWARE framework. Dataset: manually curated dataset based on fairy tales [81]. Study 8, Objective: robustness of Emotion AWARE in negation detection; Dataset: manually curated dataset based on fairy tales. Explainability. Study 9, Objective: explaining emotion assembles using constituent intensity scores and terms of emotional significance; Dataset: demonstrated on the fairy tales dataset.
Study 3: Elicitation of four emotion assembles (disgust, surprise, trust, anticipation) using GoEmotions and SemEval-2018
For the rare emotions of disgust, trust, anticipation, and surprise, we used the GoEmotions and SemEval-2018 datasets and compared Emotion AWARE with stemmed keyword matching and a DistilBERT model finetuned with the Emotions dataset. Table 6 presents the results, where AWARE outperforms all other methods across the four emotions.
Elicitation of 2, 8 and 14 emotion assembles in increasing granularity
This study demonstrates Emotion AWARE's ability to generate emotion assembles at diverse levels of granularity. Table 7 presents these granular emotion assembles for the same text. Only the emotions with non-zero scores are shown in this table. For instance, row 2 depicts a positive score in the two-emotion assemble, anticipation and trust as the detected emotions in the eight-emotion assemble, and, in the 14-emotion assemble, trust is further split into trust, acceptance, and admiration alongside the corresponding intensity scores.
Table 3
Description of datasets used in the experiments, with percentage distribution of each emotion
Table 4
Comparison of results with 95% CI for two-emotion assembles using ISEAR and Twitter
Table 5
Comparison of F1 score with 95% CI for four emotion assembles (anger, fear, sadness, joy)
Table 7
Demonstrating the Elicitation of 2, 8 and 14 emotion assembles in increasing granularity
Table 8
Comparison of results with 95% CI adapted for the finance sector using the PhraseBank dataset
Table 9
Comparison of results with 95% CI when adapted for the technology sector using Senti4SD8 dataset
Table 10
Demonstrating the variation of emotion intensity score based on intensifiers and inhibitors
Table 11
Demonstrating robustness of the detected emotion of a sentence across intensifiers and inhibitors
Table 12
Performance of inhibitor and intensifier detection
Table 13
Results for robustness of emotion AWARE in negation detection
Table 14
Contributing terms and corresponding intensity scores for emotion explainability. Sample passages from the fairy tales dataset: "Most gracious father, I will show her to you in the form of a beautiful flower," and he thrust his hand into his pocket and brought forth the pink, and placed it on the royal table, and it was so beautiful that the king had never seen one to equal it. "You are so beautiful, I like you very much." "Tweet, tweet," sang the bird, as he flew out into the green woods, and Tiny felt very sad. The little prince was at first quite frightened at the bird. It was like a giant, compared to such a delicate little creature as himself. But when he saw Tiny, he was delighted, and thought her the prettiest little maiden he had ever seen.
Table 15
10 most frequent keywords per emotion category in fairy tales dataset
Table 16
Comparison of Emotion AWARE with related work in emotion detection | 9,538.2 | 2024-07-10T00:00:00.000 | [
"Computer Science",
"Psychology"
] |
Boer-Mulders effect in the unpolarized pion induced Drell-Yan process at COMPASS within TMD factorization
We investigate the theoretical framework of the $\cos 2\phi$ azimuthal asymmetry contributed by the coupling of two Boer-Mulders functions in dilepton production in the unpolarized $\pi p$ Drell-Yan process, applying transverse momentum dependent (TMD) factorization at leading order. We adopt the model calculation results of the unpolarized distribution function $f_1$ and the Boer-Mulders function $h_1^\perp$ of the pion meson from light-cone wave functions. We take into account the transverse momentum evolution effects for the distribution functions of both the pion and the proton by adopting the existing extractions of the nonperturbative Sudakov form factors for the pion and proton distribution functions. An approximate kernel is included to deal with the energy dependence of the twist-3 correlation function $T_{q,F}^{(\sigma)}(x,x)$, related to the Boer-Mulders function, which is needed in the calculation. We numerically estimate the Boer-Mulders asymmetry $\nu_{BM}$ as a function of $x_p$, $x_\pi$, $x_F$ and $q_T$ considering the kinematics of the COMPASS Collaboration.
I. INTRODUCTION
The Boer-Mulders function is a transverse momentum dependent (TMD) parton distribution function (PDF) that describes the transverse-polarization asymmetry of quarks inside an unpolarized hadron [1,2]. Arising from the correlation between the quark transverse spin and the quark transverse momentum, the Boer-Mulders function manifests a novel spin structure of hadrons [3]. For some time the very existence of the Boer-Mulders function was in doubt. This is because, similar to its counterpart, the Sivers function, the Boer-Mulders function was thought to be forbidden by the time-reversal invariance of QCD [4]. For this reason, these functions are classified as T-odd distributions. However, model calculations incorporating gluon exchange between the struck quark and the spectator [5,6], together with a re-examination [7] of the time-reversal argument, show that T-odd distributions actually do not vanish. It was found that the gauge links [7][8][9][10] in the operator definition of TMD distributions play an essential role in generating a nonzero Boer-Mulders function.
As a chiral-odd distribution, the Boer-Mulders function has to be coupled with another chiral-odd distribution/fragmentation function to survive in a high energy scattering process. Two promising processes for accessing the Boer-Mulders function are the Drell-Yan process and semi-inclusive deep inelastic scattering (SIDIS). In the former case, the corresponding observable is the cos 2φ azimuthal angular dependence of the final-state dilepton, which originates from the convolution of two Boer-Mulders functions, one from each hadron. This effect was originally proposed by Boer [1] to explain the violation of the Lam-Tung relation observed in the πN Drell-Yan process [11], a phenomenon which cannot be understood from purely perturbative QCD effects [12][13][14]. A similar asymmetry was also observed in the pd and pp Drell-Yan processes, and the corresponding data were applied to extract the proton Boer-Mulders function [15][16][17][18]. Besides the parametrizations, the Boer-Mulders function of the proton has also been studied extensively in the literature by several QCD-inspired quark models, such as the spectator model [19][20][21][22][23][24][25], the large N_c model [26], the bag model [27,28] and the light-front constituent quark model [28,29]. The study of the Boer-Mulders function has been extended to the case of the pion meson in the spectator model [30][31][32], the light-front constituent quark model [33,34] and the bag model [35].
A suitable theoretical framework for studying the cos 2φ asymmetry at low transverse momentum is TMD factorization. As the TMD evolution of the Boer-Mulders function is difficult to solve, early phenomenological studies of the Boer-Mulders effect in the cos 2φ asymmetry in Drell-Yan [19,22,[36][37][38][39][40][41][42][43][44][45][46][47][48][49][50][51] usually employed tree-level factorization, in which the full TMD evolution of the Boer-Mulders function was not considered. In Ref. [33], the authors applied a Gaussian ansatz to estimate the k_T-evolution effect of the Boer-Mulders function and the cos 2φ asymmetry in the πp Drell-Yan process, following the effective description of the energy-dependent broadening of transverse momentum in Ref. [52]. In Ref. [34], the cos 2φ asymmetry in the πp Drell-Yan process was studied in a transverse momentum weighted approach, in which the weighted asymmetry was expressed in terms of the first k_T-moment of the Boer-Mulders function. In Ref. [53], the Collins-Soper-Sterman formalism [54][55][56] was applied to study the azimuthal spin asymmetries in electron-positron annihilation, which is similar to the case of the Drell-Yan process, and a Sudakov suppression of the asymmetries in the region q_T ≪ Q was found.
The purpose of this work is to apply TMD factorization to estimate the cos 2φ azimuthal asymmetry in the pion-induced Drell-Yan process contributed by the Boer-Mulders effect. From the viewpoint of TMD factorization [54][55][56][57], physical observables can be written as the convolution of factors related to the hard scattering and well-defined TMD distribution functions or fragmentation functions. The evolution of TMD functions is usually performed in b space, which is conjugate to the transverse momentum k_T [55,56] through a Fourier transformation. In the large-b region, the b dependence of the TMD distributions and of the evolution kernel is nonperturbative. In the small-b (perturbative) region, perturbative methods can be employed and the TMD distributions at a fixed energy scale can be expressed as the convolution of perturbatively calculable coefficients C and their collinear counterparts, order by order in α_s. The collinear counterparts can be the corresponding collinear parton distribution functions, fragmentation functions or multiparton correlation functions. In particular, in the case of the Boer-Mulders function, it can be written as the convolution of the perturbatively calculable coefficients and the twist-3 chiral-odd correlation function T_{q,F}^{(σ)}. The energy dependence of T_{q,F}^{(σ)}(x,x) needed in this work can be obtained by considering an approximate evolution kernel.
After solving the evolution equations, the TMD evolution from one energy scale to another is implemented through an exponential factor containing the so-called Sudakov-like form factor [55,56,58]. The Sudakov-like form factor can be separated into a perturbatively calculable part S_P and a nonperturbative part S_NP, which cannot be calculated perturbatively and can only be extracted from experimental data. In Ref. [59], the nonperturbative Sudakov form factor for the pion distribution functions was extracted from the unpolarized πN Drell-Yan data measured by the E615 experiment [60] at Fermilab. As for the S_NP related to the proton distribution functions, several parameterizations exist [61][62][63][64]. In this work, we will use the extracted S_NP of the pion for both the evolution of the unpolarized distribution function and that of the Boer-Mulders function. As for the evolution of the proton TMD distributions, we will apply the S_NP extracted in Ref. [62] and the parametrization of the Boer-Mulders function in Ref. [16].
Since the pion meson can serve as a beam colliding with a nucleon target in experiments, the Drell-Yan process [65,66] is an ideal way to study the parton structure of unstable particles like pions. The idea was brought up decades ago and was exploited by the NA10 Collaboration [67] and the E615 Collaboration [68], which measured the azimuthal angular asymmetries in the process π⁻N → µ⁺µ⁻X, with N denoting a nucleon in a deuterium or tungsten target. Recently, the COMPASS Collaboration at CERN [69][70][71] started a new Drell-Yan program by colliding a π⁻ beam with energy E_π = 190 GeV on an NH₃ target, which provides a great opportunity to explore the Boer-Mulders function of the pion meson as well as that of the nucleon, provided that an unpolarized target is used or the polarized data are averaged.
The rest of the paper is organized as follows. In Sec. II, we investigate the TMD evolution of the unpolarized distribution function and the Boer-Mulders function of proton and pion meson. In Sec. III, we present the theoretical framework of the cos 2φ azimuthal asymmetry ν BM contributed by the coupling of two Boer-Mulders functions in the pion induced unpolarized Drell-Yan process under the TMD factorization framework. We make the numerical estimate of the cos 2φ azimuthal asymmetry in Sec. IV and summarize this work in Sec. V.
II. THE TMD EVOLUTION OF THE DISTRIBUTION FUNCTIONS
In this section, we present the TMD evolution formalism for both the unpolarized distribution function f_1 and the Boer-Mulders function h⊥_1 of the pion, as well as those of the proton, within TMD factorization. In general, it is more convenient to solve the evolution equations for the TMD distributions in coordinate (b) space rather than in transverse momentum (k_T) space, where b is conjugate to k_T via a Fourier transformation [55,56]. The TMD distribution functions F̃(x, b; µ, ζ_F) in b space have two energy dependencies: µ is the renormalization scale related to the corresponding collinear PDFs, and ζ_F is the energy scale serving as a cutoff to regularize the light-cone singularity in the operator definition of the TMD distributions. Here, F is a shorthand for any TMD distribution function and the tilde denotes that the distribution is the one in b space. The energy evolution for the ζ_F dependence of the TMD distributions is encoded in the Collins-Soper (CS) equation [56], while the µ dependence is governed by the renormalization group equations, with K̃ the CS evolution kernel, and γ_K and γ_F the anomalous dimensions. The overall structure of the solution for F̃(x, b; µ, ζ_F) is the same as that for the Sudakov form factor. More specifically, the energy evolution of the TMD distributions from an initial scale µ to another scale Q is encoded in a Sudakov-like form factor S through the exponential factor exp(−S), multiplied by a factor F related to the hard scattering. Hereafter, we will set µ = √ζ_F = Q. Studying the b dependence of the TMD distributions can provide useful information on the transverse momentum dependence of the hadronic 3D structure through the Fourier transformation, which makes understanding the b dependence quite important. In the small-b region, the b dependence is perturbatively calculable, while in the large-b region it becomes nonperturbative and has to be obtained from experimental data. To combine the perturbative information at small b with the nonperturbative part at large b, a matching procedure is introduced with a parameter b_max serving as the boundary between the two regions. A b-dependent function b_* is defined with the property b_* ≈ b at low values of b and b_* ≈ b_max at large values of b. The typical value of b_max is chosen around 1 GeV⁻¹ to guarantee that b_* always stays in the perturbative region. There are several different b_* prescriptions in the literature [72,73]; in this work we adopt the original prescription introduced in Ref. [55]. In the small-b region 1/Q ≪ b ≪ 1/Λ, the TMD distributions at a fixed scale µ can be expressed as the convolution of perturbatively calculable hard coefficients and the corresponding collinear counterparts, which can be collinear PDFs or multiparton correlation functions [54,74], where ⊗ stands for the convolution in the momentum fraction x and F_{i/H}(ξ, µ) is the corresponding collinear counterpart of flavor i in hadron H at the energy scale µ. The scale µ can be chosen as a dynamic scale related to b_* through µ_b = c_0/b_*, with c_0 = 2e^{−γ_E} and the Euler constant γ_E ≈ 0.577 [54]. The Sudakov-like form factor S in Eq. (4) can be separated into a perturbatively calculable part S_P and a nonperturbative part S_NP. According to the studies in Refs. [64,[75][76][77][78], the perturbative part of the Sudakov form factor S_P, given in Eq. (8), is the same for different kinds of distribution functions, i.e., S_P is spin-independent.
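For reference, the evolution equations, the solution structure and the b_* prescription referred to in this passage take the following standard forms in the CSS/TMD literature; the precise conventions and the correspondence with the paper's own equation numbering are assumptions here.

```latex
% Collins-Soper equation and renormalization-group equations (standard forms)
\frac{\partial \ln \tilde{F}(x,b;\mu,\zeta_F)}{\partial \ln\sqrt{\zeta_F}} = \tilde{K}(b;\mu), \qquad
\frac{\partial \tilde{K}}{\partial \ln\mu} = -\gamma_K\big(\alpha_s(\mu)\big), \qquad
\frac{\partial \ln \tilde{F}(x,b;\mu,\zeta_F)}{\partial \ln\mu}
  = \gamma_F\!\left(\alpha_s(\mu);\tfrac{\zeta_F}{\mu^2}\right).

% Schematic solution: evolution from \mu to Q through a Sudakov-like exponential
\tilde{F}(x,b;Q) = \mathcal{F}\, e^{-S(Q,b)}\, \tilde{F}(x,b;\mu).

% b_* prescription of Ref. [55] and the associated matching scale
b_* = \frac{b}{\sqrt{1 + b^2/b_{\max}^2}}, \qquad
\mu_b = \frac{c_0}{b_*}, \qquad c_0 = 2e^{-\gamma_E}.

% Small-b operator product expansion onto collinear functions, and the split of S
\tilde{F}_{q/H}(x,b;\mu) = \sum_i C_{q\leftarrow i}\otimes F_{i/H}(x,\mu), \qquad
S = S_P + S_{\rm NP}.
```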
The perturbative part S_P has the standard form given in Eq. (8). The coefficients A and B in Eq. (8) can be expanded as series in α_s/π. In this work, we take A^(n) up to A^(2) and B^(n) up to B^(1), corresponding to next-to-leading-logarithmic (NLL) accuracy [55,61,75,77,79,80]. The values of the strong coupling α_s(µ) are obtained at 2-loop order as an approximation, with fixed n_f = 5 and Λ_QCD = 0.225 GeV. We note that the running coupling in Eq. (14) satisfies α_s(M_Z²) = 0.118. The quark and the antiquark contribute equally to the perturbative part S_P [81]. For the nonperturbative form factor S_NP associated with the unpolarized distribution of the proton, a general parameterization has been proposed in Ref. [62]. In Ref. [62] the parameters g_1, g_2, g_3 are fitted to nucleon-nucleon Drell-Yan process data at the initial scale, and are extracted as g_1 = 0.212, g_2 = 0.84, g_3 = 0. Since the nonperturbative form factor S_NP for quarks and antiquarks satisfies a simple relation [81], the S_NP associated with the TMD distribution functions of the proton can be expressed accordingly. In this work, we apply this result to calculate the spin-independent cross section. In Ref. [59], a parameterization of the nonperturbative Sudakov form factor for the unpolarized TMD distribution of the pion was proposed, which has the same form as that for the proton. The parameters g_1^π and g_2^π were fitted at the initial energy scale Q_0² = 2.4 GeV² with b_max = 1.5 GeV⁻¹ as g_1^π = 0.082 and g_2^π = 0.394. We note that a form of S_NP^{f_1,q/π} motivated by the NJL model was given in Ref. [82].
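The perturbative Sudakov factor, the NLL coefficients and the approximate two-loop running coupling mentioned here are standard objects; under the usual CSS conventions they read as follows (the exact normalization used in the paper's Eqs. (8)-(14) is assumed).

```latex
% Perturbative Sudakov form factor between mu_b and Q (standard CSS form)
S_P(Q,b) = \int_{\mu_b^2}^{Q^2} \frac{d\bar{\mu}^2}{\bar{\mu}^2}
  \left[ A\big(\alpha_s(\bar{\mu})\big)\,\ln\frac{Q^2}{\bar{\mu}^2}
       + B\big(\alpha_s(\bar{\mu})\big) \right], \qquad
A = \sum_{n} A^{(n)} \Big(\frac{\alpha_s}{\pi}\Big)^{n}, \quad
B = \sum_{n} B^{(n)} \Big(\frac{\alpha_s}{\pi}\Big)^{n}.

% NLL coefficients for a quark
A^{(1)} = C_F, \qquad
A^{(2)} = \frac{C_F}{2}\left[ C_A\Big(\frac{67}{18}-\frac{\pi^2}{6}\Big)
          - \frac{10}{9}T_R\, n_f \right], \qquad
B^{(1)} = -\frac{3}{2}C_F.

% Approximate two-loop running coupling
\alpha_s(\mu^2) = \frac{12\pi}{(33-2n_f)\ln(\mu^2/\Lambda_{\rm QCD}^2)}
  \left\{ 1 - \frac{6(153-19n_f)}{(33-2n_f)^2}
  \frac{\ln\ln(\mu^2/\Lambda_{\rm QCD}^2)}{\ln(\mu^2/\Lambda_{\rm QCD}^2)} \right\}.
```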
Thus we can rewrite the scale-dependent TMD distribution function F̃ of the proton and of the pion in b space. The hard coefficients C_i and the hard factor F for f_1 have been calculated up to next-to-leading order (NLO), while those for the Boer-Mulders function are still only known at leading order (LO). For consistency, in this work we adopt the LO results of the C coefficients for f_{1,q/H} and h⊥_{1,q/H}, i.e., we take C_{q←i} = δ_{qi} δ(1 − x), and we take the hard factor F = 1. With these choices we obtain the unpolarized distribution function of the proton and of the pion in b space. Performing a Fourier transformation on f̃_{1,q/H}(x, b; Q) gives the distribution function in transverse momentum space, where J_0 is the Bessel function of the first kind and k_T = |k_T|. According to Eq. (5), in the small-b region we can also express the Boer-Mulders function at a fixed energy scale in terms of the perturbatively calculable coefficients and the corresponding collinear correlation function, where the hard coefficients are only calculated up to LO; here T_{q,F}^{(σ)}(x, x) is the chiral-odd twist-3 quark-gluon-quark correlation function. As for the nonperturbative part of the Sudakov form factor associated with the Boer-Mulders function, the information is still unknown; in the practical calculation we assume that it is the same as S_NP for f_1. After performing the Fourier transformation back to transverse momentum space, one obtains the Boer-Mulders function in k_T space.
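The Fourier transformation from b space back to transverse momentum space described here is an azimuthally symmetric (Hankel-type) integral with a J_0 kernel. The sketch below shows a minimal numerical version using a toy Gaussian b-space distribution; the Gaussian width and the function names are illustrative assumptions, not the paper's actual evolved inputs.

```python
# Minimal numerical b -> k_T Fourier (Hankel) transform,
#   f_1(x, k_T; Q) = (1/2pi) * Integral_0^inf db  b J_0(k_T b) f~_1(x, b; Q),
# using a toy Gaussian b-space distribution as a stand-in for the evolved TMD.
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def f1_bspace_toy(b, width=0.5):
    """Toy b-space distribution; in the real calculation this would be the
    collinear PDF convolved with C coefficients times exp(-S_P - S_NP) at scale Q."""
    return np.exp(-width * b**2)

def f1_ktspace(kT, b_cutoff=20.0):
    """Transform to transverse-momentum space with a J_0 Bessel kernel."""
    integrand = lambda b: b * j0(kT * b) * f1_bspace_toy(b) / (2.0 * np.pi)
    value, _ = quad(integrand, 0.0, b_cutoff, limit=200)
    return value

for kT in (0.1, 0.5, 1.0, 2.0):   # GeV
    print(f"kT = {kT:4.1f} GeV  ->  f1(kT) ~ {f1_ktspace(kT):.4e}")
```

For the Gaussian toy input the numerical result can be checked against the analytic Gaussian transform, which is a convenient sanity test before inserting the full evolved b-space distributions.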
III. THE cos 2φ AZIMUTHAL ASYMMETRY CONTRIBUTED BY THE BOER-MULDERS FUNCTIONS IN DRELL-YAN PROCESS
In this section, by applying TMD factorization with evolution effects, we set up the framework for the cos 2φ azimuthal angular asymmetry contributed by the Boer-Mulders functions in the pion-induced unpolarized Drell-Yan process. In the studied process, the π⁻ beam is scattered off the unpolarized proton target; a quark and an antiquark from the beam and target annihilate into a virtual photon, which then produces a lepton pair in the final state. The process can be written as π⁻ + p → γ* + X → µ⁺µ⁻ + X, where P_π, P_p and q denote the momenta of the π⁻ meson, the proton and the virtual photon, respectively. Here, q is a timelike vector in the Drell-Yan process, namely Q² = q² > 0, which can be interpreted as the invariant mass squared of the lepton pair. In order to express the experimental observables, we adopt the following kinematical variables [55,69]: s = (P_π + P_p)² is the total center-of-mass (c.m.) energy squared; x_π = Q²/(2P_π·q) and x_p = Q²/(2P_p·q) are the Bjorken variables; q_L is the longitudinal momentum of the virtual photon in the c.m. frame of the incident hadrons; x_F is the Feynman-x variable, which corresponds to the longitudinal momentum fraction carried by the lepton pair; and y is the rapidity of the lepton pair. In the leading-twist approximation, x_π and x_p can be interpreted as the momentum fractions carried by the annihilating quark/antiquark inside the π⁻ and the proton, respectively. Alternatively, x_π and x_p can be expressed as functions of x_F and τ, or of y and τ [59]. The angular differential cross section for the unpolarized Drell-Yan process has the general form given in Eq. (34) [3], where θ is the polar angle and φ is the azimuthal angle of the hadron plane with respect to the dilepton plane in the Collins-Soper (CS) frame [83]. The coefficients λ, µ, ν in Eq. (34) describe the sizes of the different angular dependencies. In particular, ν stands for the asymmetry of the cos 2φ azimuthal angular distribution of the dilepton. The coefficients λ, µ, ν have been measured in the process π⁻N → µ⁺µ⁻X by the NA10 Collaboration [67] and the E615 Collaboration [68] for π⁻ beams with energies of 140, 194, 286 GeV [67] and 252 GeV [68], with N denoting a nucleon in the deuterium or tungsten target. The experimental data showed a large value of ν, near 30% in the region Q_T ∼ 3 GeV. This demonstrates a clear violation of the Lam-Tung relation [11]. In the last decade λ, µ, ν were also measured in the pd and pp Drell-Yan processes [84,85]. The origin of the large cos 2φ asymmetry, or the violation of the Lam-Tung relation, observed in the Drell-Yan process has been studied extensively in the literature [1,[86][87][88][89][90][91][92][93]. Here we will only consider the contribution from the coupling of the Boer-Mulders functions, denoted by ν_BM. It might be measured through the combination 2ν_BM ≈ 2ν + λ − 1, in which the perturbative contribution is largely subtracted.
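The kinematic relations and the general angular distribution referred to in this passage take the following conventional forms; the variable definitions match those in the text, while the exact equation numbering and sign conventions are assumptions.

```latex
% Drell-Yan kinematics in the hadronic c.m. frame
s = (P_\pi + P_p)^2, \qquad
x_\pi = \frac{Q^2}{2P_\pi\cdot q}, \qquad
x_p = \frac{Q^2}{2P_p\cdot q}, \qquad
x_F = x_\pi - x_p = \frac{2q_L}{\sqrt{s}}, \qquad
\tau = \frac{Q^2}{s}.

% Bjorken variables expressed through (x_F, tau) or (y, tau)
x_{\pi/p} = \frac{\pm x_F + \sqrt{x_F^2 + 4\tau}}{2}, \qquad
x_{\pi/p} = \sqrt{\tau}\; e^{\pm y}.

% General unpolarized Drell-Yan angular distribution (Collins-Soper frame)
\frac{1}{\sigma}\frac{d\sigma}{d\Omega} = \frac{3}{4\pi}\,\frac{1}{\lambda+3}
  \left( 1 + \lambda\cos^2\theta + \mu\sin 2\theta\cos\phi
           + \frac{\nu}{2}\sin^2\theta\cos 2\phi \right).
```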
According to the TMD framework, in the Collins-Soper frame [83] the unpolarized Drell-Yan cross section at leading twist can be written as in Eq. (35) [1], where we adopt the notation of Eq. (36) to express the convolution over transverse momenta. Here q_T, k_T and p_T are the transverse momenta of the lepton pair, the quark and the antiquark in the initial hadrons, respectively, and ĥ is a unit vector defined as ĥ = q_T/|q_T| = q_T/q_T. The second term in Eq. (35) has a cos 2φ modulation and can contribute to the ν asymmetry. The two coefficients A(y) and B(y) in Eq. (35) can be written as functions of θ in the c.m. frame of the lepton pair. Combining Eqs. (34) and (35), we obtain the expression of the cos 2φ asymmetry coefficient ν_BM contributed by the Boer-Mulders functions, Eq. (37). Adopting the notation in Eq. (36) and performing the Fourier transformation of the delta function from q_T space to b space, we obtain the denominator of Eq. (37), in which the unpolarized distribution function in b space is given in Eq. (22). Similarly to the treatment of the denominator, we can write the numerator using the expression of the Boer-Mulders function in Eqs. (27) and (28), with T^{(σ)}_{q/π,F}(x_π, x_π; µ_b) and T^{(σ)}_{q/p,F}(x_p, x_p; µ_b) the chiral-odd quark-gluon-quark correlation functions of the pion and the proton defined in Eq. (26).
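At leading twist the asymmetry assembled from Eqs. (35)-(37) has the following schematic structure, with the transverse-momentum convolution written out explicitly; the precise prefactors, charge weighting and mass conventions are assumptions based on the original Boer analysis [1] rather than the paper's exact expressions.

```latex
% Transverse-momentum convolution of two distributions (schematic)
\mathcal{C}[w\, f\, \bar{f}] = \frac{1}{N_c}\sum_q e_q^2
  \int d^2\boldsymbol{k}_T\, d^2\boldsymbol{p}_T\,
  \delta^2(\boldsymbol{q}_T - \boldsymbol{k}_T - \boldsymbol{p}_T)\,
  w(\boldsymbol{k}_T,\boldsymbol{p}_T)\,
  f^{q/\pi}(x_\pi,\boldsymbol{k}_T^2)\, \bar{f}^{q/p}(x_p,\boldsymbol{p}_T^2).

% Leading-twist double Boer-Mulders contribution to the cos(2 phi) asymmetry
\nu_{BM} \;\propto\;
\frac{\mathcal{C}\!\left[\dfrac{2(\hat{\boldsymbol h}\cdot\boldsymbol{k}_T)
      (\hat{\boldsymbol h}\cdot\boldsymbol{p}_T) - \boldsymbol{k}_T\cdot\boldsymbol{p}_T}
      {M_\pi M_p}\; h_1^{\perp}\,\bar{h}_1^{\perp}\right]}
     {\mathcal{C}\!\left[f_1\,\bar{f}_1\right]}.
```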
IV. NUMERICAL ESTIMATE
In this section, using the framework set up above, we present the numerical prediction of the cos 2φ azimuthal asymmetry ν BM in the pion induced unpolarized Drell-Yan process at the kinematics of the COMPASS Collaboration. To do this, we need to know the Boer-Mulders functions of the pion and the proton.
In Ref. [34], the integrated unpolarized distribution function and the Boer-Mulders function of the pion meson were calculated in a model in which the pion wave functions are derived from a light-cone approach. In this work we adopt those results at the model scale µ_0² = 0.25 GeV².
The values of the model parameters are given in Refs. [34,94]. To evolve f_{1,q/π} numerically from the model scale µ_0 to other energy scales, we apply the QCDNUM evolution package [95]. As for the energy evolution of T^{(σ)}_{q,F}, the exact evolution effect has been studied in Ref. [96]. For our purpose, we only keep the homogeneous term of the evolution kernel, which coincides with the evolution kernel of the transversity distribution function h_1(x). We customize the original QCDNUM code to include this approximate kernel, Eq. (42).
Applying Eqs. (28) and (30), we calculate the Boer-Mulders function of the up quark inside the pion at different scales. The results for the b-dependent and k_T-dependent Boer-Mulders function at x = 0.1 are plotted in the left and right panels of Fig. 1, respectively. In calculating h̃⊥_{1,q/π}(x, b; Q) in Fig. 1, we have rewritten the Boer-Mulders function in b space accordingly. The three curves in each panel correspond to three different energy scales: Q² = 0.25 GeV² (solid lines), Q² = 10 GeV² (dashed lines) and Q² = 1000 GeV² (dotted lines). From the curves, we find that the TMD evolution effect on the Boer-Mulders function is significant and should be considered in phenomenological analyses. The result also indicates that the perturbative Sudakov form factor dominates in the low-b region at higher energy scales, while the nonperturbative part of the TMD evolution becomes more important at lower energy scales. For the Boer-Mulders function of the proton needed in the calculation, we adopt the parametrization of Ref. [16] at the initial energy Q_0² = 1 GeV². As for the unpolarized distribution function f_{1,q/p}(x) of the proton, we adopt the leading-order set of the MSTW2008 parametrization [97].
Using the expression of ν_BM in Eq. (37), with the denominator in Eq. (38) and the numerator in Eq. (39), we calculate the cos 2φ Boer-Mulders asymmetry ν_BM as a function of x_p, x_π, x_F and q_T. In calculating the x_p-, x_π- and x_F-dependent asymmetries, the integration over the transverse momentum q_T is performed over the region 0 < q_T < 2 GeV to keep the TMD factorization valid. The same choice has been made in Refs. [98,99].
We plot the results for ν_BM in Fig. 2, in which the upper panels show the asymmetries as functions of x_p (left panel) and x_π (right panel), and the lower panels depict the x_F-dependent (left panel) and q_T-dependent (right panel) asymmetries, respectively. The bands correspond to the uncertainty of the parametrization of the Boer-Mulders function of the proton [16]. We find from the plots that, in the TMD formalism, the cos 2φ azimuthal asymmetry in the unpolarized π⁻p Drell-Yan process contributed by the Boer-Mulders functions is around several percent. Although the uncertainty is rather large, the asymmetry is firmly positive in the entire kinematical region. The asymmetries as functions of x_p, x_π and x_F show only a mild dependence on these variables, while the q_T-dependent asymmetry shows an increasing tendency with increasing q_T in the small-q_T range where the TMD formalism is valid. Our results show that precise measurements of the Boer-Mulders asymmetry ν_BM as functions of x_p, x_π, x_F and q_T can provide an opportunity to access the Boer-Mulders function of the pion, as well as to constrain the Boer-Mulders function of the proton.
V. CONCLUSION
In this work, we have applied the formalism of the TMD factorization to study the cos 2φ azimuthal asymmetry contributed by the coupling of two Boer-Mulders functions, in the pion induced unpolarized Drell-Yan process that is accessible at COMPASS. To do this, we have adopted the model results of the unpolarized distribution function f 1 and Boer-Mulders function of the pion meson calculated from the light-cone wavefunctions. For the distribution functions of the proton target needed in the calculation, we have applied available parametrizations.
We have also taken into account the TMD evolution of the pion and proton distribution functions. Specifically, we have utilized the nonperturbative Sudakov-like form factor of the pion TMD distributions extracted from the unpolarized πN Drell-Yan data, while for the proton target we have adopted a parametrization of the nonperturbative Sudakov form factor that can describe the experimental data of SIDIS, Drell-Yan dilepton and W/Z boson production in pp collisions. We have also assumed that the Sudakov form factors for the Boer-Mulders function are the same as those for the unpolarized distribution f_1.
We have calculated the contribution of the Boer-Mulders functions to the cos 2φ azimuthal asymmetry in the unpolarized π⁻p Drell-Yan process at the kinematics of COMPASS. The predictions are presented as functions of the kinematical variables x_p, x_π, x_F and q_T. We find that the double Boer-Mulders asymmetry in the π⁻p Drell-Yan process calculated from the TMD evolution formalism is positive and sizable, at the level of several percent. This shows that there is a great opportunity to access the cos 2φ azimuthal asymmetry in the unpolarized π⁻p Drell-Yan process at COMPASS and to obtain information on the Boer-Mulders function of the pion meson. Furthermore, the calculation in this work will also shed light on the proton Boer-Mulders function, since previous extractions of it were mostly performed without TMD evolution. | 6,115.2 | 2018-05-08T00:00:00.000 | [
"Physics"
] |
Prevalence of peripheral artery disease (PAD) and factors associated: An epidemiological analysis from the population-based Screening PRE-diabetes and type 2 DIAbetes (SPREDIA-2) study
Aim To describe the prevalence of Peripheral Artery Disease (PAD) in a random population sample and to evaluate its relationship with Mediterranean diet and with other potential cardiovascular risk factors, such as serum uric acid and pulse pressure, in individuals aged 45 to 74 years. Methods Cross-sectional analysis of 1568 subjects (mean age 61.5 years, 43% males), randomly selected from the population. A fasting blood sample was obtained to determine glucose, lipids, and HbA1C levels. An oral glucose tolerance test was performed in non-diabetic subjects. PAD was evaluated by ankle–brachial index and/or having a prior diagnosis. Results PAD prevalence was 3.81% (95% CI, 2.97–4.87) for all participants. In men, PAD prevalence was significantly higher than in women [5.17% (95% CI, 3.74–7.11) vs. 2.78% (95% CI, 1.89–4.07); p = 0.014]. Serum uric acid in the upper quartile was associated with the highest odds ratio (OR) of PAD (for uric acid > 6.1 mg/dl, OR = 4.31; 95% CI, 1.49–12.44). The remaining variables more strongly associated with PAD were: heart rate >90 bpm (OR = 4.16; 95% CI, 1.62–10.65), pulse pressure in the upper quartile (≥ 54 mmHg) (OR = 3.82; 95% CI, 1.50–9.71), adherence to Mediterranean diet (OR = 2.73; 95% CI, 1.48–5.04), and former smoker status (OR = 2.04; 95% CI, 1.00–4.16). Conclusions Our results show the existence of a low prevalence of peripheral artery disease in a population aged 45–74 years. Serum uric acid, pulse pressure and heart rate >90 bpm were strongly associated with peripheral artery disease. The direct association between Mediterranean diet and peripheral artery disease that we have found should be evaluated through a follow-up study under clinical practice conditions.
Introduction
Peripheral arterial disease (PAD) is an important marker of cardiovascular risk [1], and it is an indicator of widespread atherosclerosis in other vascular territories such as the coronary, carotid, and cerebrovascular arteries [2]. The annual mortality rate derived from epidemiological studies of patients with lower extremity PAD is high [3], with a combined event rate for myocardial infarction, stroke, and vascular death of 4% to 5% per year [4].
The ankle-brachial index (ABI) is the ratio of the ankle to brachial systolic blood pressure, and a value of <0.90 indicates the presence of a flow-limiting arterial disease affecting the limb. The accuracy of the ABI for detecting ≥50% stenosis in the leg arteries is high (75% sensitivity and 86% specificity) [5]. The American Heart Association (AHA) Prevention Conference V highlighted that a low ABI is a consistent independent risk factor for cardiovascular events and mortality and recommended its use to detect subclinical PAD [2,6], in order to offer early therapeutic interventions that lower the risk of cardiovascular events and mortality. The prevalence of PAD ranges between 1.8% and 25% according to the population studied and the cutoff value of the ABI. In advanced countries it has been reported to be 3-10% among those aged 40-70 years, and 10-20% among those over 70 years of age [7]. Data from the Multi-Ethnic Study of Atherosclerosis (MESA) showed that the prevalence of PAD was the same in men and women at 3.7%, but borderline values of the ABI were significantly more frequent in women (10.6% vs. 4.3%) [8]. Likewise, the prevalence is higher in certain population subgroups such as diabetic patients [9] and smokers [10].
The Mediterranean Diet (MeDiet) is characterized by daily consumption of fruits, vegetables, legumes, grains, moderate alcohol intake (1-2 glasses/d of wine), a moderate-to-low consumption of red meat, and a high monounsaturated-to-saturated fat ratio [11].
The PREvención con DIeta MEDiterránea (PREDIMED) study [12] showed for the first time under a randomized controlled trial design that a MeDiet supplemented with either extra-virgin olive oil or nuts is useful in the primary prevention of cardiovascular disease (CVD), PAD, atrial fibrillation, and type 2 diabetes mellitus in individuals at high risk. However, few studies carried out under clinical practice conditions have studied the role of MeDiet on PAD, with unselected patients (with and without CVD) and with usual MeDiet consumption.
To date, five population-based studies [13][14][15][16] have been conducted in Spain, showing discordant results in PAD prevalence and associated factors, and none of them reported the influence of MeDiet. Furthermore, these studies were carried out in areas where the compliance to MeDiet is higher than in Madrid [17].
Moreover, serum uric acid is an independent risk factor for cardiovascular events [18], but few studies have explored the possible relationship between serum uric acid levels and PAD [19]. This association is plausible given the previous evidence that serum uric acid may affect vascular endothelial function [20], although the association remains controversial [21].
Lastly, pulse pressure (PP; the difference between systolic and diastolic pressures) has been included as a predictor of ABI <0.9 in the Spain REASON risk score, and a recent study using NHANES data [22] has confirmed this finding. Adding pulse pressure to the periodic evaluation of high-risk patients might be a promising PAD surveillance instrument for the community-based population.
The objectives of the present study are to describe the prevalence of PAD in a random population sample and to evaluate its relationship with MeDiet, and with other potential cardiovascular risk factors such as serum uric acid and pulse pressure in individuals older than 45 years.
Material and methods Design
This study was conducted as part of a broader project, the Screening PRE-diabetes and type 2 DIAbetes (SPREDIA-2) study, which has been described in detail elsewhere [23]. SPREDIA-2 is a population-based prospective cohort study in which baseline screening was performed from July 2010 to March 2014.
Subjects
A total of 2,553 subjects were contacted. Potential participants were selected randomly from the electronic health records of all patients with health care coverage from two districts in the north metropolitan area of Madrid (Spain), namely, Fuencarral-El Pardo and Tetuán, which include three and seven primary health care centers, respectively. Of the 1,592 subjects (62.4%) who agreed to participate, 166 had been diagnosed with DM.
Those subjects not interested in participating were asked to report voluntary sociodemographic and clinical data, which revealed no significant differences in age, sex, or BMI. However, subjects in the participants group had a significantly greater family history of DM, hypertension and dyslipidemia compared with the non-participants group (S1 Table). The study procedure has been described in detail elsewhere [23]. Briefly, recruitment was divided into three phases. First, the potential participants were sent a letter signed by their general practitioner explaining the objectives of the study and inviting them to participate. Second, subjects were contacted by phone to resolve doubts, and, if they were interested in participating, were given an appointment for the assessment. To minimize the losses attributable to failure to locate the patient, up to four telephone calls were made at different times and on different days. Third, the patient attended the assessment in the outpatient clinic of Carlos III Hospital after an overnight fast. Upon arrival, a fasting blood analysis was obtained by measuring blood levels of glucose, creatinine, serum uric acid, HbA1c, serum insulin, and lipids and lipoproteins. Immediately after blood sampling, all subjects with no previous diagnosis of diabetes underwent an oral glucose tolerance test (OGTT) with 75 g of anhydrous glucose in a total fluid volume of 300 ml. A second blood sample was obtained 2 hours later.
Variables
The ABI values were measured by nurses trained at the Section of Internal Medicine of the Carlos III Hospital, according to current recommendations [24]. The ABI measurements were performed with a bidirectional portable 8 MHz echo-Doppler (Minidoppler HADECO ES-100, Kawasaki, Japan) and a calibrated sphygmomanometer. The systolic blood pressure (SBP) was measured in the posterior tibial and pedal arteries of both lower limbs and in the brachial artery of both upper limbs. The ABI for each limb was calculated by dividing the higher SBP obtained in that limb by the higher of the two brachial SBP values. The lowest value obtained was taken as the ABI for that individual.
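The ABI rule described above reduces to a few lines of arithmetic. The sketch below is a minimal illustration of that rule; the function name and the example pressures are hypothetical, not study data.

```python
# Minimal sketch of the ABI rule: for each leg, take the higher of the posterior tibial
# and pedal systolic pressures, divide by the higher brachial pressure, and keep the
# lowest per-limb ratio as the subject's ABI.
def ankle_brachial_index(right_leg, left_leg, right_arm, left_arm):
    """Each *_leg argument is a (posterior_tibial_sbp, pedal_sbp) pair in mmHg;
    the *_arm arguments are brachial SBPs in mmHg."""
    brachial = max(right_arm, left_arm)
    per_limb = [max(leg) / brachial for leg in (right_leg, left_leg)]
    return min(per_limb)

# Hypothetical example: a subject whose left-leg pressures are reduced
abi = ankle_brachial_index(right_leg=(132, 128), left_leg=(105, 98),
                           right_arm=130, left_arm=126)
print(f"ABI = {abi:.2f}  ->  PAD suspected: {abi < 0.90}")
```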
Sociodemographic variables, family history of prevalent diseases, cardiovascular risk factors (smoking habit, hypertension, diabetes, and hypercholesterolemia), clinical history of cardiovascular disease (CVD), comorbidities, and current treatments were recorded for all individuals. Participants were considered hypertensive when the arterial pressure was ≥140/90 mmHg. Hypercholesterolemia was defined as having LDL-cholesterol ≥100 mg/dl (2.57 mmol/l) and/or receiving hypolipidemic medication. The smoking habit included all who had consumed tobacco over the previous month. A diabetes diagnosis was established when baseline glucose was ≥7 mmol/l (126 mg/dl) on two different occasions, or if the patient was receiving oral hypoglycemic drugs or insulin. CVD included documented history of coronary heart disease (acute myocardial infarction, angina, coronary revascularization procedure), ischemic or hemorrhagic stroke, and PAD. All participants had a physical examination with determination of height, weight, waist circumference (midway between the lowest rib and the iliac crest), and blood pressure. PP was calculated as the difference between systolic and diastolic pressures.
A previously validated 14-item MeDiet Assessment Tool was the method for assessing adherence to the MeDiet [25] where subjects were asked for their consumption of the most common Mediterranean foods (S2 Table). The total score ranges from 0 to 14. Each item was scored 0 (non-compliant) or 1 (compliant) [26]. Higher scores reflected better adherence. High adherence was defined as meeting at least 11 of the 14 items [27].
Cholesterol and triglycerides were determined by enzymatic assays. Low-density lipoprotein cholesterol (LDL-cholesterol) was calculated according to the Friedewald formula (LDL-cholesterol = total cholesterol − [HDL-cholesterol + triglycerides/5]) in subjects with triglycerides below 400 mg/dl. HDL-cholesterol was measured after precipitation of apo-B lipoproteins. Glucose was measured by the glucose oxidase method. HbA1c was measured by a high-performance liquid chromatography (HPLC) method. Uric acid was measured by the uricase method.
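As a small worked example of the Friedewald calculation used here (all values in mg/dl; the function name and the sample values are illustrative assumptions):

```python
# Friedewald estimate of LDL-cholesterol in mg/dl, applied only when triglycerides < 400 mg/dl
def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float) -> float:
    if triglycerides >= 400:
        raise ValueError("Friedewald formula is not applied when triglycerides >= 400 mg/dl")
    return total_chol - (hdl + triglycerides / 5.0)

# Hypothetical lipid panel: TC 210, HDL 50, TG 150 mg/dl -> LDL = 210 - (50 + 30) = 130
print(friedewald_ldl(210, 50, 150))
```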
Statistical analysis
The quantitative variables are presented as means with standard deviation, and the qualitative variables are presented as percentages. The Kolmogorov-Smirnov test was applied to check the normality of the distribution of quantitative variables. Comparison of categorical variables was performed using chi-squared tests, and for continuous variables the ANOVA test was used. The chi-square test for linear trend was used for ordinal variables. To explore associations across the range of ABI levels, ABI was categorized into 4 levels, with <0.9 or known PAD as the lowest level, and tertiles of ABI (0.90-1.09, 1.10-1.19 and ≥1.20, respectively). Logistic regression analyses were performed to evaluate the independent association of PAD with those variables that showed significance levels of P<0.10 in the univariate analysis, as well as those considered clinically important or potentially confounding, such as gender and age. In the fully adjusted analysis, the interaction between gender and age was not significant. The magnitude of association was expressed as the odds ratio (OR). In all cases, the accepted level of significance was 0.05 or less, with 95% confidence intervals (95% CI).
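The odds ratios with 95% confidence intervals reported below come from this kind of multivariable logistic regression, which the study ran in SPSS. A minimal sketch of the same computation in Python, assuming a pandas DataFrame `df` with a binary `pad` outcome and illustrative covariate names (not the study's actual variable coding), could look like this:

```python
# Sketch: odds ratios and 95% CIs from a logistic regression, using statsmodels.
# The DataFrame and column names are hypothetical placeholders for the study variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def odds_ratios(df: pd.DataFrame, outcome: str, covariates: list) -> pd.DataFrame:
    X = sm.add_constant(df[covariates])          # add intercept term
    model = sm.Logit(df[outcome], X).fit(disp=0)
    or_table = pd.DataFrame({
        "OR": np.exp(model.params),
        "CI_low": np.exp(model.conf_int()[0]),   # 95% CI by default
        "CI_high": np.exp(model.conf_int()[1]),
        "p": model.pvalues,
    })
    return or_table.drop(index="const")

# Example call (df would hold one row per participant):
# print(odds_ratios(df, outcome="pad",
#                   covariates=["age", "male", "uric_acid_q4", "pulse_pressure_q4", "hr_gt90"]))
```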
Statistical processing of the data was performed with SPSS v.19 software (IBM Inc, Armonk, NY, USA).
Ethical considerations
The study protocol had been approved by the Research Ethics Committee of the Carlos III Hospital in Madrid. The study complied with the International Guidelines for Ethical Review of Epidemiological Studies (Geneva, 1991). All patients signed an informed consent form.
Results
A total of 1,592 subjects agreed to participate in the study, 684 (43%) of whom were male. We excluded 6 participants who did not complete the ABI. Table 1 describes the characteristics of the study sample. From a total of 1,586 patients (mean age 61.5 years), 20 (1.3%) had been previously diagnosed with PAD (eight patients showed an ankle-brachial index [ABI] <0.9 and twelve an ABI ≥ 0.9). Out of the 1,566 patients without a previous diagnosis of PAD, 40 (2.5%) showed an ABI < 0.9. The patients previously diagnosed with PAD who had an ABI ≥0.9 (n = 12) did not show differences in cardiovascular risk factors (hypertension, diabetes mellitus, dyslipidemia) or cardiovascular events (coronary artery disease, stroke) compared to individuals with an ABI <0.90, who were defined as having newly diagnosed PAD (n = 48).
PAD prevalence was 3.81% (95% CI, 2.97-4.87) for all participants. In men the PAD prevalence was significantly higher than in women [5.17% (95% CI, 3.74-7.11) vs. 2.78% (95% CI, 1.89-4.07); p = 0.014]. The prevalence of PAD increased with age in men, from 3.8% in subjects aged <60 years to 9% in those aged ≥70 years. In women, however, the prevalence decreased from 3.7% in subjects aged <60 years to 1.2% in those aged ≥70 years (Fig 1A). Furthermore, the prevalence of PAD was lower in never-smokers than in current or former smokers (Fig 1B). Also, for former smokers the PAD prevalence in men was 2.25-fold greater than in women (6.3% vs. 2.8%), but we did not find large differences between genders in the PAD prevalence among current smokers. We found positive associations between PAD and both PP and serum uric acid values grouped in quartiles (Fig 2).
Patients in lower ABI categories were more likely to be older and male and to have a higher number of traditional cardiovascular disease risk factors (Table 2). There were statistically significant differences between ABI categories in relation to smoking status, level of studies, coronary artery disease, hypertension, diabetes, hypercholesterolemia, systolic blood pressure, PP, heart rate, waist circumference, metabolic syndrome, HbA1c, serum uric acid, creatinine, and use of diuretics, beta-blockers, antiplatelet agents, renin-angiotensin system blockers and statins.
In the univariate analysis, the OR of PAD for male gender was 1.91 (95% CI, 1.13-3.22; p = 0.014). However, after fully adjusting for all covariates used in our analyses, the OR changed to a non-significant 1.30 (95% CI, 0.69-2.57; p = 0.40) for the female gender. Table 3 shows the results of the individual analysis of the risk factors associated with the presence of PAD according to the multivariate logistic regression model adjusted for age, gender, and those variables with a p-value of less than 0.10 in the univariate analysis. Serum uric acid in the upper quartile was associated with the highest OR of PAD (for uric acid > 6.1 mg/dl, OR = 4.31; 95% CI, 1.49-12.44).
Discussion
The results of our study show that the prevalence of PAD is low in comparison with other international population-based studies [28], but similar to that found in the Hermex Study carried out in Badajoz (Spain) [29]. A recent systematic review for the Peripheral Arterial Disease Research Coalition, including 34 community-based studies [30], showed that the prevalence of PAD ranged between 7.3-11.8% for women aged 50-74 years and between 6.4-12.1% for men aged 50-74 years in high-income countries. Spain belongs to the high-income countries category, and for this reason one might expect a higher prevalence of PAD. However, a "Spanish paradox" has been described as a phenomenon by which cardiovascular morbidity (myocardial infarction, stroke and PAD) and mortality levels are dissociated from the corresponding cardiovascular risk factors. The existence of protective factors such as the MeDiet and its interaction with different genetic patterns [31] has been argued as a plausible explanation of this phenomenon. Furthermore, other Spanish population-based studies have shown PAD prevalences ranging from 4.5% [13] to 10.5% [16]. The prevalence was higher in men than in women in all Spanish studies. Nevertheless, some community-based studies have shown a higher prevalence of PAD in women compared with men [32][33][34][35], even in each decade of life [33]. These findings raise concerns about whether there should be differences in the definition of normal ABI values between men and women, and therefore whether the diagnostic criteria of PAD should be based on a cut-off ABI value different from the currently accepted standard.
It is commonly accepted that men have a higher prevalence of PAD than women until the seventh decade of life [16,29,36]. However, in our study, the PAD prevalence was lower in women in comparison with men for each age group. Also, we found an inverse relationship between PAD and advancing age among women. We have no strong explanation for this finding, in which chance could indeed be playing a role.
The direct association between adherence to MeDiet and PAD is an unexpected finding, given the strong evidence that daily consumption of a MeDiet reduces the risk of PAD [37]. Several aspects may explain our results. Firstly, the PREDIMED randomized trial compared two groups on a MeDiet supplemented with extra-virgin olive oil and nuts, respectively, with a group who received counseling on a low-fat diet (control group), and all groups received a comprehensive dietary educational program based on individual and group sessions with a dietitian every 3 months, designed to increase adherence to the MeDiet or the low-fat diet. The use of the 14-point MeDiet questionnaire proved very useful because the results formed the basis for personalized advice on the changes the participant should make to acquire a traditional MeDiet or low-fat diet pattern. However, in the present study, no person received dietary counseling and/or supplements. Quite simply, we merely asked participants about their consumption of the MeDiet using the 14-point MeDiet questionnaire. These differences might help explain our findings. Secondly, as is well known, a cross-sectional study like ours does not allow the establishment of a causal relationship between MeDiet and PAD. Thus, patients with known PAD at baseline might have initiated a MeDiet as part of their treatment, given that these individuals with known PAD had high vascular morbidity and showed a trend towards better adherence to the MeDiet compared with those without PAD. Thirdly, it is plausible that patients with known PAD and those with symptoms compatible with PAD (i.e. intermittent claudication) would tend to keep healthier diets like the MeDiet, whereas the population with a good perception of their health would progressively abandon the MeDiet due to the economic crisis. This phenomenon has been detected in Italy, with a dramatic fall in adherence to the MeDiet, from over 30% to 18% of the whole population, during the global economic crisis [38]. This hypothesis might partially explain the results obtained here.
The positive association between smoking status and PAD is well established, and it is habitually found in the vast majority of studies [14,16,30,34]. Our study is consistent with these findings, but shows slight differences in the PAD prevalence between current and former smokers in the case of men. These findings are concordant with a recent systematic review of 34 studies [30]. However, other studies in our country showed greater differences between both levels of smoking status [14].
Our finding of a strong, independent association between serum uric acid >6.1 mg/dl and PAD is congruent with previous studies in adults at high cardiovascular risk [39,40] and in the general population [41]. Serum uric acid has been found to be associated with several inflammatory markers, including C-reactive protein and interleukin-6 [42]. Furthermore, hyperuricemia has been reported as a factor responsible for cardiovascular diseases through endothelial dysfunction caused by inactivation of nitric oxide, which is a potent vasodilator [43], and by arresting the proliferation of endothelial cells [44].
However, it is well known that there is a high correlation between serum uric acid and glomerular filtration rate (GFR), strengthening the possibility that Chronic Kidney Disease (CKD) status may be a statistical confounder in the relationship between uric acid and cardiovascular disease, rather than the mediator [45]. In recent years, a better understanding of uric acid metabolism suggests that CKD may be an intermediate step between hyperuricemia and cardiovascular disease, and increased levels of uric acid are, at once, a dependent and independent risk factor of cardiovascular disease and kidney disease progression [46].
As large-artery stiffness increases in middle-aged and elderly subjects, SBP rises and DBP falls, with a resulting increase in PP [47]. A series of prospective and cross-sectional studies have shown that PP is associated with cardiovascular events [47][48][49] and mortality [47,50,51]. Interest has been increasing in the association between PP and PAD. In this line, previous studies have shown an association between PP and PAD [52][53][54]. Our results are in accordance with these findings, but our study was carried out in a general population rather than in a population at high cardiovascular risk, as in other studies [55,56]. The Multi-Ethnic Study of Atherosclerosis, based on subjects free of cardiovascular disease, showed a tendency, though not statistically significant, towards a higher proportion of patients with PAD for each 10 mmHg increase in PP [57].
To our knowledge, this is the second study that has shown an association between high heart rate and PAD. In the MERITO Study [58], for each increase in heart rate of one beat per minute, the OR for PAD was 1.02 (95% CI, 1.01-1.03). Resting heart rate has been associated with all-cause and cardiovascular mortality [59,60]. Some studies have found indirect associations between heart rate and PAD. Thus, a resting heart rate ≥77 beats/min has been associated with frailty in older men (age-adjusted OR = 1.90; 95% CI, 1.30-2.48) [61], and frailty is strongly associated with subclinical PAD (ABI<0.8) (OR = 3.56; 95% CI, 2.03-6.24) [62].
A practical application of our findings is that a heart rate >90 beats/min and a serum uric acid above 6.1 mg/dl could be two factors to consider when selecting patients to screen for PAD, regardless of gender, age and diabetes status. Nevertheless, this approach requires caution because the pathophysiological basis of the relationship between resting heart rate and PAD is still unknown.
Other known factors firmly established to increase the risk of PAD, such as diabetes mellitus, hypertension, or hypercholesterolemia were not associated in the multivariate analysis with the presence of a low ABI. This is a phenomenon already observed in other studies [58,63,64], and may be due to the known limitations of cross-sectional studies, the lack of a sufficient number of cases of the disease or the exclusion of persons with an ABI >1.5, who are more likely to have diabetes mellitus.
Concerning the limitations of the study, the cross-sectional design did not allow us to determine causal relationships between the variables studied and PAD. Also, women were more likely to participate than men, as usually occurs in population-based studies, which limits the inference of the results to the entire population. Finally, our findings on the magnitude of the association of serum uric acid and high heart rate with PAD could have been affected by residual confounding due to unknown or misspecified confounding variables.
The strengths of the study include the collection of epidemiological information on the prevalence of PAD in a sample covering a wide range of ages, representative of the general population, and drawn from a region of our country with a high prevalence of risk factors. Having used the same methodology as other published population-based studies allows comparability with them.
In conclusion, our results demonstrate a low prevalence of PAD in a population aged 45-74 years. Serum uric acid, pulse pressure, and heart rate >90 bpm were strongly associated with PAD. The direct association between MeDiet and PAD that we have found should be further evaluated through a follow-up study under clinical practice conditions. Supporting information S1. The MADIABETES Research Group (https://www.madiabetes.com) is a multidisciplinary group composed of general practitioners and nurses from 55 Health Centers of Madrid (Spain) who attend patients with type 2 diabetes mellitus, together with researchers from public centers, with a strong interest in identifying the factors associated with an optimal evolution of patients with this chronic disease.
Funding: This work was funded by the Agencia Laín Entralgo (Consejería de Sanidad de la Comunidad de Madrid) Grant 'RS_AP10/6' and by FIS (Fondo de Investigaciones Sanitarias, Instituto de Salud Carlos III) grants no. PI12/01806 and PI15/00259, and co-financed by the
"Medicine",
"Biology"
] |
Anomaly Detection Based on Tree Topology for Hyperspectral Images
As one of the most important research and application directions in hyperspectral remote sensing, anomaly detection (AD) aims to locate objects of interest within a specific scene by exploiting spectral feature differences between different types of land cover without any prior information. Most traditional AD algorithms are model-driven and describe hyperspectral data with specific assumptions, which cannot combat the distributional complexity of land covers in real scenes, resulting in a decrease in detection performance. To overcome the limitations of traditional algorithms, a novel tree topology based anomaly detection (TTAD) method for hyperspectral images (HSIs) is proposed in this article. TTAD departs from the single analytical mode based on specific assumptions and instead directly parses the HSI data itself. It makes full use of the "few and different" characteristics of anomalous data points that are sparsely distributed and far away from high-density populations. On this basis, topology, a powerful tool in mathematics that successfully handles multiple types of data mining tasks, is applied to AD to ensure sufficient feature extraction of land covers. First, the redistribution of HSI data is realized by constructing a tree-type topological space to improve the separability between anomalies and backgrounds. Then, topologically related subsets in this space are utilized to evaluate the abnormality degree of each sample in a dataset, and detection results for the HSI are output accordingly. Abandoning traditional modeling and focusing instead on mining the data characteristics of the HSI itself enables TTAD to better adapt to different complex scenes and locate anomalies with high precision. Experimental results on a large number of benchmark datasets demonstrate that TTAD achieves excellent detection results with considerable computational efficiency. The proposed method exhibits superior comprehensive performance and shows promise for practical applications.
I. INTRODUCTION
HYPERSPECTRAL remote sensing utilizes hyperspectral sensors (i.e., imaging spectrometers) mounted on different space platforms to image specific scenes in continuous and finely subdivided spectral bands spanning visible light, near-infrared, and short-wave infrared (0.4-2.5 μm) [1], [2], [3]. Compared with traditional images, hyperspectral images (HSIs) contain both image information and spectral information [4], [5], [6]. The abundant information provided by HSIs makes hyperspectral remote sensing a valuable technology with strong comprehensiveness and broad application prospects [7], [8], [9]. The research on target detection is one of the most important directions of hyperspectral remote sensing [10], [11], [12], exhibiting excellent performance and unique advantages in many civil and military fields [13], [14], [15]. With the rapid development of remote sensing, the quality of captured observational data has substantially improved [16], [17]. For target detection in real scenes covering multiple types of land covers, the abundant and detailed information contained in images raises higher requirements for data mining and information extraction techniques. Therefore, it is of great practical significance to develop hyperspectral target detection to meet the broad demands of this technology in various fields.
According to whether prior spectral information about the target is available [18], [19], [20], target detection can be divided into two categories: supervised matching detection and unsupervised anomaly detection (AD) [21], [22]. In practical applications, fully informative spectral databases and accurate reflectance inversion algorithms are often lacking [23], [24]. Moreover, the subpixel problem and the constraints of measurement conditions also lead to certain limitations in matching detection [25]. In contrast, the operators used in AD methods do not require any prior spectral information about the target or background and are therefore widely used in these cases [10]. The research on unsupervised AD is thus highly practical [26]. Traditional hyperspectral AD algorithms are derived based on signal processing theory [1]. Such methods have been proposed in large numbers since the 1990s, providing a solid foundation for the discipline [27]. The general design process for traditional methods is to obtain statistics from the HSI first and to derive the decision function through specific model assumptions and decision criteria [4], [28]. Then, the test pixel, represented by a high-dimensional vector, is substituted into the decision function, and the output value is compared with a given threshold to determine whether the anomaly exists [29], [30]. The primitive space model [8], the subspace projection model, and the probability distribution statistics (whitening space) model are the most classic model assumptions in signal processing [10], [31]. The Reed-Xiaoli detector (RXD), the low-probability target detector, and the uniform target detector are based on probability distribution statistical models and process HSI data to achieve AD [10]. Among them, a series of derivative versions developed from RXD have been widely used and have shown stable performance [32], [33], [34].
The development of hyperspectral AD based on signal processing has established a mature theoretical system and a relatively complete set of algorithms [1], [35]. Such traditional detectors can achieve effective separation between target and background and show good detection performance under reasonable model assumptions [27]. However, most hyperspectral remote sensing data in practical applications are captured by imaging real scenes covering multiple types of land covers. Traditional detectors rely heavily on specific assumptions and are limited in their analytical capabilities for complex models, making them inapplicable to real data with distributional complexity of land covers [36], [37]. As a result, the detection effect of traditional methods is suboptimal due to the inability to fully exploit the abundant detailed information provided by HSIs [38], [39].
In view of the bottleneck encountered in the development of traditional methods, more and more scholars have turned to machine learning (ML) to seek breakthroughs for hyperspectral AD. With the rapid development of ML theories [40], [41], algorithms designed on their basis have performed brilliantly in various fields including hyperspectral remote sensing [42], [43]. In recent years, ML-based methods for hyperspectral AD have emerged continuously [44], [45]. Kernel methods [46], sparse representation models [47], discriminative subspace analysis, spectral data self-learning, and deep learning represent several major research directions [23]. The widespread application of ML-based methods in hyperspectral AD demonstrates strong analytical ability for complex models, showing that the unique advantages of ML enable it to parse complex HSI data and extract information sufficiently [48], [49]. However, the aforementioned popular ML-based methods specialize in different application problems and satisfy different usage conditions, meaning that their effectiveness is usually realized only when certain conditions are met [50], [51]. Moreover, in addition to consuming a huge amount of time and space resources, some data-driven methods place high demands on the quantity and quality of training datasets [52], which are quite difficult to collect in practical applications. Such strict usage conditions still restrict these methods to a certain extent, whether in implementation, application, or popularization. Different from traditional signal processing based methods and several popular ML-based methods, this article treats AD as a data mining task, focusing on the data features of anomaly and background in an HSI rather than prioritizing, and being constrained by, specific model assumptions. An HSI is mathematically modeled as a data cloud, in which data points corresponding to the background are densely distributed, while anomalous data points are sparsely distributed and far away from the high-density populations. Based on this fact, topology, a powerful mathematical tool, is adopted to solve the data mining task of AD. Topology has proved capable of solving various types of ML tasks involving point set analysis, including AD, information retrieval, and classification [53], [54]. The general idea of its implementation in these tasks is to map the research object into a certain number of point sets, whose relations are represented by a geometric space, to achieve an evaluation purpose associated with the task requirements. In a topological space, cardinality can be simply defined as the number of elements in a specific point set. Basener et al. proposed a topology-based algorithm for AD in dimensionally large datasets, demonstrating the superiority of topology over RXD in separating anomalies from the background [55]. Topology is essentially the mathematical formalization of sets and of the intuitive properties of very simple, basic geometric figures. It is well suited to problems involving point set analysis and can provide a new and feasible solution for the data mining task of hyperspectral AD.
The requirements of a solution for AD are thus simple and direct. The design of the proposed method is accomplished by achieving two phased goals: 1) separation of the target and background; and 2) highlighting the target and suppressing the background. Given that the core idea of applying topology to point set analysis is to achieve the evaluation purpose through the geometric deformation of space according to the requirements of various tasks, it is crucial to choose the form of mapping used to construct the corresponding topological space for AD. A binary tree is a hierarchical structure defined by branching relationships in ML [56], [57], exhibiting immense potential in data mining [58]. It can exploit the numerical differences between the anomaly and the background in different dimensions to separate the two [59], which makes it highly compatible with high-dimensional datasets such as HSIs. This article takes full advantage of such compatibility: a tree-structured mapping is chosen to construct a topological space that significantly improves the separability between different types of data points, thereby achieving the aforementioned first phased goal of designing a detection method. On this basis, the detection output is designed to meet the critical requirements of highlighting the anomaly and suppressing the background, so as to achieve the second phased goal. Taken together, this article proposes a tree topology based anomaly detection (TTAD) method for HSIs. The design of TTAD overcomes the limitations encountered by traditional detectors by not relying on any specific model assumptions. The proposed method fully utilizes the simple, direct, and effective measurement strategy provided by point set topology for detection output to meet the essential requirements of the data mining task of AD. Briefly, the contributions of the research work on TTAD are summarized as follows:
1) This article successfully applies point set topology to the data mining task of hyperspectral AD. To address the incompatibility of traditional detectors with HSIs in changeable real scenes, the analytical approach that relies on specific model assumptions is abandoned entirely. Instead, the data characteristics of the HSI itself are deeply parsed to guide the algorithm design. The abstract differences between the anomaly and the background in data features are emphasized and highlighted in an intuitive way through the geometric deformation of space, thereby making the anomaly easier to separate.
2) With the construction of the topological space, a novel anomaly measurement called "topological cardinality" is developed for high-precision detection output. In the process of geometric deformation, anomalies are sparsely distributed and far away from high-density populations, so that the formation order and cardinality of the final subset in which they reside are clearly distinguishable. The topological cardinality, which combines these two terms, is well suited to quantifying the abnormality of a sample. Moreover, the simple and basic intuitive properties of the topological space enable the detection output to be obtained without expensive computations that consume a lot of time and memory.
3) The proposed method performs AD tasks in parallel topological spaces to improve robustness. Given the distributional complexity of land covers in real HSIs, redistribution of HSI data in a single topological space is unavoidably affected by randomness.
Hence, data redistribution in multiple parallel spaces is employed to more thoroughly reveal the characteristics of various land covers. The topological cardinality of the test pixel in the parallel spaces is averaged to achieve a detection output with accuracy and stability. TTAD is equipped with strong adaptability to various complex imaging scenes, providing a reliable solution for locating anomalies in practical applications. The remainder of this article is organized as follows. Section II elaborates the methodology of the proposed TTAD. Section III presents the experimental results and relevant analysis and discussion. Section IV summarizes the research work in this article and gives the conclusions.
II. ANOMALY DETECTION BASED ON TREE TOPOLOGY (TTAD)
The implementation flow of the proposed TTAD is shown in Fig. 1. The methodology is divided into the following three parts, explained in Sections II-A, II-B, and II-C, respectively: 1) a topological space based on tree-structured mapping is constructed to realize the redistribution of HSI data, making anomalies easier to separate by improving the separability between different land covers; 2) in the tree-shaped topological space, an exclusive measurement for quantifying the abnormality of test pixels, called "topological cardinality," is developed; and 3) the AD task is accomplished by parallel measurements of topological cardinality in parallel topological spaces.
A. Tree-Structured Mapping for Topological Space
Each pixel in an HSI corresponds to a nearly continuous spectrum, and the spectral differences between various types of land covers establish the basis for target detection, segmentation, and classification. Although a variety of land covers are contained in a real scene, the only two categories of concern in the design of unsupervised AD methods are anomaly and background. In the absence of prior spectral information, it is crucial to find and fully utilize the data features of these two types of samples. Constructing the topological space through the tree-structured mapping can extract the features of the HSI itself and realize the redistribution of the entire dataset. Specific subsets are divided accordingly, assigning distinct cardinalities to the subsets in which the anomaly and the background are located.
Specifically, the redistribution of HSI data is realized by constructing parallel topological spaces through a series of mappings, each of which adopts a dedicated and unique tree-shaped frame. This stage is based entirely on random subsampling, so no prior information is required. Assume X = [x_1, ..., x_N], X ∈ R^{L×N} represents an HSI dataset, where the numbers of bands and pixels are denoted by L and N, respectively. For the construction of each tree-shaped frame, N_sub pixels are randomly selected from the original HSI X ∈ R^{L×N} to form a hyperspectral data subset X_sub ∈ R^{L×N_sub}. Given that various imaging scenes correspond to different dataset sizes in practical applications, each subsampling in this stage uses a percentage of the total number of pixels to set the size of the subset:

N_sub = N · sub_percent    (1)

where sub_percent represents the ratio of the number of subsampled pixels in X_sub to the total number of pixels in X. The feature dimension hierarchy_use, corresponding to the band of the root node, is selected at random, and then a value is randomly selected as the newly added node between the maximum and minimum values of the dataset NodeSet_current on this band. After a new node is added to the tree, the remaining subset X_remain can be divided into two parts, X_left and X_right, according to their numerical relationship with the node. The current hierarchy hierarchy_current of the tree is incremented by 1, the remaining data subset X_remain is updated through X_left and X_right, and the updated X_remain is used to continue constructing the left and right subtrees. For each subsampling, constructing a tree-shaped frame is a recursive process, and the termination condition of the recursion depends on two variables: the current hierarchy hierarchy_current of the tree and the remaining subset X_remain. The iteration terminates when the hierarchy of the tree hierarchy_current reaches the limit hierarchy_limit or the remaining subset X_remain is indivisible; specifically, X_remain is indivisible when the pixel values of the current dimension NodeSet_current in X_remain are all equal or only one pixel remains. In addition, to ensure that the sparse points corresponding to the anomaly are stretched away from the dense points corresponding to the background in the tree topology, the hierarchy of the tree should be as close as possible to the dimension of the high-dimensional data. Hence, the number of bands L in the original HSI is used to limit the hierarchies of a single tree-shaped frame. Algorithm 1 details the implementation steps for the first stage. It is worth noting that the construction of a tree-shaped frame in Algorithm 1 utilizes a randomly selected subset, while the mapping process utilizes the complete set of an HSI. After the HSI X to be processed is put into the root node, according to the band indexes corresponding to the root-to-leaf path in the tree topology recorded by hierarchy_order, the single-band image corresponding to the current hierarchy of the tree is found in X. This single-band image is divided into left and right subsets through the magnitude relationship with the current node value. The left and right subsets are further divided according to the band index corresponding to the next hierarchy recorded by hierarchy_order, as described in the steps above.
It proceeds sequentially from root to leaf until all pixels in an HSI fall into leaf nodes, that is, disjoint leaf sets are generated. As the mapping is implemented, the dataset is redistributed so that a separable anomaly pixel falls earlier into its leaf set with a smaller cardinality, whereas a background pixel falls later into its leaf set with a larger cardinality. As a result, intuitive differences between the anomaly and background emerge in the tree topology.
Algorithm 1: TopoFrame(X, Frame_size, sub_percent).
Input: X ∈ R^{L×N}, the original HSI; Frame_size, the number of parallel spaces; sub_percent, the percentage for a single subsampling.
Output: TopoFrame, the frame for parallel topological spaces.
Initialization: hierarchy_current = 0, hierarchy_limit = L, N_sub = N · sub_percent.
1: Start looping to build each frame according to the number of parallel spaces; for i = 1 : Frame_size do
2: Randomly select N_sub pixels to obtain a subset X_sub ∈ R^{L×N_sub}, X_remain = X_sub;
3: Randomly arrange the band indexes of the original HSI to obtain hierarchy_order, which records the corresponding bands from root to leaf in a tree topology;
4: if hierarchy_current < hierarchy_limit and X_remain is divisible then
5: hierarchy_use = hierarchy_order(hierarchy_current + 1), NodeSet_current = X_remain(hierarchy_use, :);
6: Randomly sample a value value_node from the uniform distribution on the continuous interval between the minimum and maximum of NodeSet_current as the node of the current hierarchy, tree.node = value_node;
7: X_left = X_remain(:, find(NodeSet_current <= value_node));
8: X_right = X_remain(:, find(NodeSet_current > value_node));
9: hierarchy_current = hierarchy_current + 1;
10: tree.LeftNode ← X_remain = X_left, re-execute from step 4;
11: tree.RightNode ← X_remain = X_right, re-execute from step 4;
12: end if (step 4)
13: return tree
14: TopoFrame = TopoFrame ∪ tree;
15: end for (step 1)
16: return TopoFrame

Accordingly, after completing the construction of the framework, the HSI can be mapped into the topological spaces to realize data redistribution. The first phased goal of designing detection methods mentioned in the Introduction, the separation of the anomaly and background, has thus been achieved. Moreover, the abstract differences in the data features of anomaly and background are fully extracted and emphasized in an intuitive way through the geometric deformation of space. So far, the formation of the tree topology is fully prepared for the subsequent AD tasks. To achieve the second phased goal, the most critical issue is how to design the detection output to highlight the anomaly and suppress the background.
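The recursion of Algorithm 1 can be summarized compactly in code. The following is a minimal sketch, not the authors' implementation: it assumes the HSI is held as a NumPy array of shape bands × pixels, and all function and variable names are illustrative.

```python
import numpy as np

def build_tree_frame(X, sub_percent=0.01, rng=None):
    """Sketch of Algorithm 1: grow one random tree-shaped frame from a
    subsample of the HSI X (bands x pixels). Names are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    L, N = X.shape
    n_sub = max(2, int(N * sub_percent))              # Eq. (1): N_sub = N * sub_percent
    X_sub = X[:, rng.choice(N, n_sub, replace=False)]
    band_order = rng.permutation(L)                   # bands used from root to leaf

    def grow(X_remain, depth):
        # Stop at the hierarchy limit (number of bands) or when the remaining
        # subset is indivisible (one pixel left, or constant on this band).
        if depth >= L or X_remain.shape[1] <= 1:
            return None
        band = band_order[depth]
        values = X_remain[band, :]
        lo, hi = values.min(), values.max()
        if lo == hi:
            return None
        split = rng.uniform(lo, hi)                   # random node value on this band
        return {"band": band, "split": split,
                "left": grow(X_remain[:, values <= split], depth + 1),
                "right": grow(X_remain[:, values > split], depth + 1)}

    return grow(X_sub, 0)
```

The split value is drawn uniformly between the band's minimum and maximum, mirroring step 6, and the recursion stops at the band-count limit or when the remaining subset is indivisible, mirroring step 4.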
B. Topological Cardinality for Detection Output
Performing AD tasks based on tree topology is essentially a binary classification of anomalies and backgrounds in HSIs containing a large number of pixels, which needs to provide a reliable basis for subsequent judgments on whether the test pixels are anomalous or not. However, the absence of prior information imposes strict requirements on the design of detectors: how to extract data features of anomalies and backgrounds and exploit them to design detection outputs for test pixels in a tree-shaped topological space? This directly determines whether the AD task could achieve high-precision performance in real scenes. To address this problem, this article proposes a novel anomaly measurement called "topological cardinality" for detection output.
The HSI dataset is mapped into a topological space using a tree structure, where each node from the root to the leaves corresponds to a specific point set. The root node holds the entire set of an image, and the leaf nodes correspond to its subsets. Because the anomalous points are sparsely distributed and far away from the high-density populations, they are easier to separate than the background, so their leaf sets are formed earlier. Moreover, the extremely small proportion of anomalies leads to a relatively small cardinality of the leaf set in which they are located. In the tree-shaped topological space, data features that are hidden and latent in the original space are presented more intuitively and simply. As a result, the anomaly and background differ significantly in quantity, spatial distribution, and spectral characteristics, and these differences are emphasized and highlighted in an intuitive way through the geometric deformation of space. For the obtained tree topology, the formation order of a leaf set and its cardinality are both suitable for measuring the abnormality degree of a test sample, so the two are combined to develop a novel measurement: topological cardinality. For a test pixel, the earlier the formation order of the leaf set covering it and the smaller that set's cardinality, the smaller the topological cardinality, indicating a higher abnormality degree and a greater possibility of being judged as anomalous. On this basis, the anomaly and background are assigned distinct scores to achieve high-precision detection results. It is worth mentioning that the aforementioned formation order and cardinality are simple and basic intuitive properties of the topological space, which enables the abnormality degree of test pixels to be measured effectively without expensive computations that consume a lot of time and memory.
After the mapping is completed, each pixel of the original HSI is located in a leaf node, and the tree topology assigns each leaf node a unique root-to-leaf trace, which records its formation order. Correspondingly, the trace on the tree for each pixel is unique. Each node of the tree topology is a specific point set, and each trace starting from the root and ending at a leaf connects a series of point sets whose cardinality decreases. Considering the distributional characteristics of data points belonging to various land covers in the topological space, the formation order and cardinality of leaf sets reflect the abnormality degree of the data points in them. In other words, for a test pixel, the topological cardinality for detection output can be calculated by observing the formation order and cardinality of the corresponding leaf set. As shown in Fig. 2, X ∈ R^{L×N} is an HSI dataset. It is assumed that the image to be processed contains 20 pixels, that is, N = 20, and x_i, x_j ∈ R^{L×1} are two test pixels in X. In the tree topology, x_i and x_j are located in different leaf subsets, corresponding to two different root-to-leaf traces. Their leaf sets and traces are marked with red circles and arrows, respectively, and the cardinalities of all leaf sets are given in Fig. 2. For the leaf subset where x_i is located, its formation order is represented by hierarchy_leaf(x_i) and its cardinality by cardinality_leaf(x_i). The topological cardinality of x_i is obtained as the product of these two:

TopoCard(x_i) = hierarchy_leaf(x_i) · cardinality_leaf(x_i).

Similarly, using hierarchy_leaf(x_j) and cardinality_leaf(x_j) to represent the formation order and cardinality of the leaf subset where x_j is located, the topological cardinality of x_j is

TopoCard(x_j) = hierarchy_leaf(x_j) · cardinality_leaf(x_j).

It can be observed from the tree topology shown in Fig. 2 that, since the spectral characteristics of x_i differ significantly from those of most pixels, there are few pixels similar to it and it is easier to separate. As for x_j, like most pixels, it is divided into a leaf node at a deeper hierarchy of the tree and has more similar pixels. Obviously, x_i is more in line with the characteristic of "few and different" and is more likely to be judged as the anomaly, while x_j is more likely to be judged as the background. According to the previous calculations, TopoCard(x_i) is smaller than TopoCard(x_j); the smaller the topological cardinality, the higher the abnormality of the sample and the greater the possibility of it being judged as the anomaly. The above example shows that, as an anomaly measurement, the topological cardinality can effectively quantify the abnormality of a test pixel and provide a reasonable basis for the subsequent judgment on whether it is anomalous. For a test pixel x ∈ R^{L×1}, TopoCard(x) denotes its topological cardinality, and the decision criterion compares the detection output D(x) with a comparison threshold η: if D(x) > η, x is judged as the anomaly; conversely, if D(x) < η, x is judged as the background.
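The routing of pixels to leaf sets and the product form of the topological cardinality can be sketched as follows, building on the tree representation from the previous sketch. This is an illustrative reading of the text rather than the authors' code; the array layout (bands × pixels) and the helper names are assumptions.

```python
import numpy as np

def topological_cardinality(tree, X):
    """Route every pixel of X (bands x pixels) through a built tree frame and
    return its topological cardinality = formation order * leaf cardinality."""
    N = X.shape[1]
    depths = np.zeros(N, dtype=int)
    keys = [""] * N
    for i in range(N):
        node, depth, path = tree, 0, []
        while node is not None:
            go_left = X[node["band"], i] <= node["split"]
            path.append("L" if go_left else "R")
            node = node["left"] if go_left else node["right"]
            depth += 1
        depths[i] = depth              # hierarchy_leaf(x_i): how early the leaf set formed
        keys[i] = "".join(path)        # identifies the leaf set that x_i falls into
    counts = {}
    for k in keys:                     # cardinality_leaf(x_i): pixels sharing the leaf
        counts[k] = counts.get(k, 0) + 1
    cardinalities = np.array([counts[k] for k in keys])
    return depths * cardinalities      # product form of TopoCard, as stated in the text
```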
C. Hyperspectral Anomaly Detection in Parallel Topological Spaces
During the construction of the topological space, the data subsets used to form a tree topology are randomly selected from the HSI. Real scenes are diverse and complex, and since this stage does not utilize any prior information, it is difficult to guarantee effective extraction of anomaly and background features if the selected subset does not contain any anomalous points. This is the least desirable situation that can be encountered during the formation of a tree topology, as it would lead to serious missed detections. According to ensemble learning theory [60], to reduce the influence of this worst case on subsequent detection results and to further improve the robustness of the proposed TTAD algorithm, the AD task is performed in parallel spaces containing multiple tree topologies. In the second stage of the AD task, parallel measurements of topological cardinality are implemented in multiple tree topologies, and the detection results are output accordingly. In the traversal of the HSI, for the test pixel x_i, the leaf set in which it is located is first found in the single tree topology corresponding to one space, and the reciprocal of its topological cardinality, 1/TopoCard(x_i), is calculated. Parallel measurements of 1/TopoCard(x_i) are then implemented in all tree topologies, and the values are accumulated and averaged to obtain result_i, which is utilized as the detection output for the current test pixel. The larger result_i is, the higher the abnormality degree of x_i and the more likely it is to be judged as the anomaly. The implementation steps of the second stage are shown in Algorithm 2.
Algorithm 2: TTAD(X, TopoFrame).
Input: X ∈ R^{L×N}, the original HSI; TopoFrame, the frame for parallel topological spaces.
Output: result ∈ R^{1×N}, the detection results.
1: Start looping to process each pixel of the HSI; for i = 1 : N do
2: x_i ∈ R^{L×1} ← i-th test pixel of X;
3: result_i_sum = 0;
4: for j = 1 : Frame_size do
5: TreeTopo_j ← j-th tree topology of TopoFrame;
6: Find the leaf set where x_i is located in TreeTopo_j, and get its formation order hierarchy_leaf(x_i) and cardinality cardinality_leaf(x_i);
7: Compute the topological cardinality for x_i and take the reciprocal, result_i_sum = result_i_sum + 1/TopoCard(x_i);
8: end for (step 4)
9: result_i = result_i_sum / Frame_size;
10: end for (step 1)
11: return result

In general, the complete processing flow of TTAD includes two stages, corresponding to Algorithms 1 and 2, respectively. In addition to the HSI to be processed, two key parameters are included in the input: the number of parallel spaces Frame_size and the subsampling percentage sub_percent used to form the subset for a single topological frame. Since the proposed AD method does not adopt a specific detection mode, such as a sliding dual window, the settings of these two parameters affect both the subsequent detection effect and the computational efficiency to a great extent. The experimental part will discuss the parameter settings and find the optimal combination for datasets in different real scenes. Then, TTAD is compared with other widely used and advanced algorithms in terms of detection effect and execution time to fully demonstrate the superiority of the proposed method in comprehensive performance.
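The second stage then reduces to averaging the reciprocal topological cardinality over the parallel tree topologies. A minimal sketch, reusing the two hypothetical helpers from the earlier sketches (not the authors' implementation):

```python
import numpy as np

def ttad_detect(X, frame_size=50, sub_percent=0.01, seed=0):
    """Sketch of Algorithm 2: average the reciprocal topological cardinality
    of every pixel over `frame_size` parallel tree topologies."""
    rng = np.random.default_rng(seed)
    N = X.shape[1]
    result = np.zeros(N)
    for _ in range(frame_size):
        tree = build_tree_frame(X, sub_percent, rng)    # sketch of Algorithm 1
        topo_card = topological_cardinality(tree, X)    # per-pixel TopoCard values
        result += 1.0 / np.maximum(topo_card, 1)        # reciprocal: larger => more anomalous
    return result / frame_size                          # averaged detection output
```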
III. EXPERIMENTAL RESULTS AND ANALYSIS
This article attaches great importance to both the detection effect and practicability of the proposed method. Therefore, benchmark HSI datasets in both natural and artificial environments are selected for experiments to fully evaluate the performance of TTAD in various scenes. Section III-A describes the evaluation indicators used for detection results. Section III-B introduces HSIs in multiple scenes including natural and artificial environments, which are adopted as experimental datasets. Section III-C shows and discusses the detection results of TTAD under different settings for key parameters. Section III-D makes a comparison of TTAD with other classical and advanced detectors to demonstrate the effectiveness and efficiency of the proposed method. All experiments are carried out with MATLAB R2014a on a Windows 10 computer with an Intel Celeron CPU N3350 @ 1.10 GHz and 4.00 GB RAM.
A. Evaluation Indicators
The well-recognized and widely used indicators in this research field are employed for qualitative and quantitative analysis of experimental detection results as follows:
1) Receiver Operating Characteristic (ROC) Curve: The ROC curve visualizes the correspondence between the probability of detection (PD) and the false alarm rate (FAR), based on which the detection performance of algorithms can be scientifically analyzed and compared [61]. Ideal detection performance exhibits a high PD at a low FAR, and the more the ROC curve shifts to the upper left, the better the detection performance of the related algorithm. 2) Area Under the Curve (AUC): The AUC provides a quantitative evaluation indicator obtained by integrating the ROC curve [11]. As mentioned above, the trend of the ROC curve reflects the detection performance, which corresponds to the value of the AUC; the larger the AUC, the better the performance. The magnitude of the AUC helps to evaluate the performance of algorithms more accurately. 3) Separability Map: For the detection results at anomaly and background locations, the separability map visualizes the separability between these two groups of values [49]. Specifically, two boxes distinguished by color represent the statistical ranges of the two groups of values. The horizontal bar in each box indicates the median value, the two edges of the box from bottom to top represent the 25th and 75th percentiles, respectively, and the whiskers extend from the 0.5th to the 99.5th percentile. This indicator reflects the detector's ability to separate out the anomaly.
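Assuming per-pixel detection scores and a binary ground-truth map, the ROC and AUC indicators can be computed, for example, with scikit-learn. This is a generic sketch, not the evaluation code used in the article (the experiments were run in MATLAB):

```python
from sklearn.metrics import roc_curve, auc

def evaluate_detection(scores, ground_truth):
    """ROC/AUC sketch: `scores` are per-pixel detection values and
    `ground_truth` is the binary anomaly map, both flattened."""
    fpr, tpr, _ = roc_curve(ground_truth.ravel(), scores.ravel())  # FAR vs PD
    return fpr, tpr, auc(fpr, tpr)
```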
B. Experimental Datasets
There are five publicly available benchmark hyperspectral experimental datasets used to verify the comprehensive performance of the proposed method, including Hyperspectral Digital Imagery Collection Experiment (HYDICE), Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-1), AVIRIS-2, AVIRIS-World Trade Center (WTC), and Airport-Beach-Urban (ABU). The experimental datasets selected in this article cover HSIs in various complex scenes, such as airports, urban areas, beaches, and towns, each of which contains multiple types of land covers in natural or artificial environments. Both the anomaly and the background meet the verification conditions for AD on HSI data with complexity, which could fully demonstrate both the detection effect and practicability for the proposed TTAD.
1) HYDICE: This dataset was acquired by the HYDICE sensor imaging an area in Michigan, USA, with a spatial resolution of about 2 m. The original image contains 210 bands from the visible to the near-infrared spectrum. After removing low-quality bands, such as those with a low signal-to-noise ratio (SNR) or water absorption, 175 bands are reserved for experiments. There are 307 × 307 pixels in the full image covering the urban scene, as shown in Fig. 3(a).
In the experiment, a subimage of size 80 × 100 containing 21 vehicle pixels as anomalies is cropped [62], as shown in Fig. 3(b). Among them, the spatial size of the anomalies, distributed in 10 locations, is in the range of 1-4 pixels. Fig. 4(a) and (b) shows the true-color and ground-truth maps, respectively. 2) AVIRIS-1: This dataset was acquired by the AVIRIS sensor covering an area of San Diego in the spectral range of 370-2510 nm with a spatial resolution of 3.5 m. The water absorption and low SNR bands are removed from the original 224 spectral channels, and 189 bands are reserved for experiments [7]. As shown in Fig. 5(a), the full image contains 400 × 400 pixels. AVIRIS-1 is a cropped subimage containing 100 × 100 pixels, as shown in Fig. 6(a). Three planes occupying 20, 22, and 22 pixels, respectively, are regarded as anomalies, and the corresponding ground-truth map is shown in Fig. 6(b). 3) AVIRIS-2: As shown in Fig. 5(b), this dataset is also cropped from the abovementioned full AVIRIS image. Its spatial size is 128 × 128 and it contains 189 bands [18]. In this subimage, three planes occupying 34, 42, and 44 pixels, respectively, are regarded as anomalies. The true-color and ground-truth maps of AVIRIS-2 are illustrated in Fig. 7. 4) AVIRIS-WTC: The fire sources in this scene are the anomalies to be detected, occupying 83 pixels in a total of 10 locations [63]. Fig. 8(a), (b), and (c) illustrate the three-dimensional (3-D) data cube of AVIRIS-WTC, the true-color map, and the ground-truth map, respectively. 5) ABU: This dataset was manually cropped after being downloaded from the AVIRIS website [26]. There are 13 images in total, and the corresponding scenes include airports, beaches, and urban areas. For the images used in the experiments, the bands heavily disturbed by noise have been removed. Detailed information, such as data size, spatial resolution, and capture location, is listed in Table I. Figs. 9, 10, and 11 show the distribution of land covers and anomaly locations in three categories of scenes: airports, beaches, and urban areas, respectively.
C. Discussion on Parameters
It can be seen from Algorithms 1 and 2 that the input of the proposed TTAD involves two key parameters, Frame_size and sub_percent, which indicate the number of parallel topological spaces and the subsampling percentage for the framework of a single topology, respectively. Since no prior information is available during the formation of the topological frameworks, improper parameter settings would make it difficult to ensure effective extraction of anomaly and background features, resulting in unsatisfactory detection performance. In addition, considering that Frame_size and sub_percent also affect the computational efficiency, the experiments need to ensure that the search range for the two parameters is large enough to find a suitable combination. Based on the above considerations, in the early stage of the experiments, the ranges of Frame_size and sub_percent are set to [10, 20, 30, 40, 50, 60, 70, 80, 90, 100] and [0.1%, 0.2%, 0.3%, 0.4%, 0.5%, 0.6%, 0.7%, 0.8%, 0.9%, 1.0%], respectively. After traversing the permutations, there are 100 parameter combinations in total. To compare the detection results under different parameter combinations in a fair, reasonable, and convenient way, the AUC is adopted as a quantitative indicator to evaluate the performance of the proposed method.
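The traversal of the 10 × 10 parameter grid can be expressed as a simple search loop. The sketch below reuses the hypothetical ttad_detect and evaluate_detection helpers from the earlier sketches and keeps the combination with the largest AUC:

```python
import numpy as np

FRAME_SIZES = range(10, 101, 10)                  # 10, 20, ..., 100 parallel spaces
SUB_PERCENTS = [p / 1000 for p in range(1, 11)]   # 0.1%, 0.2%, ..., 1.0%

def grid_search(X, ground_truth):
    """Evaluate all 100 (Frame_size, sub_percent) combinations by AUC."""
    best = (None, None, -np.inf)
    for fs in FRAME_SIZES:
        for sp in SUB_PERCENTS:
            scores = ttad_detect(X, frame_size=fs, sub_percent=sp)
            _, _, area = evaluate_detection(scores, ground_truth)
            if area > best[2]:
                best = (fs, sp, area)
    return best   # optimal (Frame_size, sub_percent, AUC)
```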
For the HYDICE dataset, Table II gives the AUC values corresponding to the detection results of TTAD under 100 parameter combinations. The surface in Fig. 12(a) is drawn according to the results in Table II, which more intuitively illustrates the relationship between different parameter combinations and detection performance. The maximum and minimum values of AUC are 0.9888 and 0.4071, respectively. Correspondingly, the optimal parameter combination is Frame_size = 80, sub_percent = 1.0%. From the surface colored according to the amplitude of AUC in Fig. 12(a), it can be seen that when Frame_size and sub_percent are small, the fluctuation amplitude of AUC is large. With the increase of these two parameters, the AUC gradually increases and the fluctuation becomes more gentle. It shows that when the subsampling percentage corresponding to the framework for a single topology and the number of parallel topological spaces are large, the proposed TTAD could achieve excellent detection effects and exhibit robustness.
For the AVIRIS-1 dataset, Table III and the corresponding colored surfaces in Fig. 12(b) demonstrate the detection performance of the proposed method within a given range of parameter settings. Under 100 parameter combinations, the minimum AUC value is 0.6636 and the maximum is 0.9890, and the corresponding optimal parameter combination is Frame_size = 50, sub_percent = 0.6%. Among the four surfaces corresponding to different experimental datasets shown in Fig. 12, the fluctuation of AUC on AVIRIS-1 is the most obvious. However, such drastic fluctuation only occurs when the number of topological spaces and the subsampling percentage are small. With the increase of Frame_size and sub_percent, the AUC value gradually increases and tends to converge.
For the AVIRIS-2 dataset, the AUC values obtained from the detection results of the proposed method under the variation of Frame_size and sub_percent are given in Table IV, which corresponds to the surface in Fig. 12(c). The variation range of AUC is 0.7913 to 0.9343, and the AUC reaches its maximum when Frame_size = 30 and sub_percent = 0.5%. From the observation of the colored surface shown in Fig. 12(c), the overall variation range of AUC is relatively small, indicating that the detection effect of TTAD on AVIRIS-2 is not particularly sensitive to different parameter settings.
For the AVIRIS-WTC dataset, Table V provides the experimental results of TTAD under different Frame_size and sub_percent; the AUC varies from 0.8448 to 0.9942, and the optimal parameter combination is Frame_size = 40, sub_percent = 0.4%. In the colored surface of Fig. 12(d), it can be observed that under a wide range of parameter combinations, the variation range of the AUC evaluation results is small, and the proposed method performs excellently and stably on AVIRIS-WTC.
For the ABU dataset, there are a total of 13 images in the airport (4), beach (4), and urban (5) scenes. Experiments are run on each image with 100 combinations of the parameters Frame_size and sub_percent, so the proposed TTAD is run a total of 1300 times on this dataset. Figs. 13, 14, and 15 illustrate the detection evaluation results of TTAD on the experimental datasets of the three major categories of scenes: airports, beaches, and urban areas, respectively. Each of the 13 HSIs corresponds to its own maximum AUC value and optimal parameter combination. Preliminary conclusions can be drawn from the analysis of the overall variation trend of all colored surfaces. When the number of topological spaces Frame_size and the subsampling percentage sub_percent are small, the AUC evaluation results are small and the fluctuation range is large, representing poor detection effects.
As the values of the two parameters increase gradually, the AUC increases and its variation range decreases, and the surface becomes smoother, indicating that the detection performance of TTAD tends to be stable.
According to the above experimental results, Table VI provides the optimal settings of the parameters Frame_size and sub_percent at which the AUC reaches its maximum on the different experimental datasets. In the proposed method, these two parameters are the key factors in the implementation of tree topology based AD tasks: Frame_size determines the number of parallel topological spaces, and sub_percent determines the number of subsampled pixels required to build the framework for a single topology. Hyperspectral data obtained by imaging various real complex scenes differ in the number of pixels and the proportion of anomalous points. Therefore, the settings of the two parameters are crucial for the extraction of data features, which in turn affects the subsequent detection results. Initially, to avoid erroneous conclusions drawn from only a few experimental results, a range of values was set for each parameter; this both expands the search range for the optimal combination and allows the sensitivity and variation trend of TTAD performance under different parameter settings to be reasonably analyzed from the evaluation results within the given range. On the whole, when the values of the two parameters are small, the total utilization rate of pixels in an image during the redistribution of HSI data in stage 1 is too low to ensure effective extraction of data features, resulting in unsatisfactory detection effects. Observation of all 17 colored surfaces in Figs. 12-15 shows that, as the parameter values increase, the detection effect of the proposed method improves steadily and the corresponding fluctuation range of the AUC decreases. This fully demonstrates that TTAD exhibits excellent detection effects and robustness within reasonable ranges of parameter settings. In practical applications, we recommend that Frame_size and sub_percent be set to at most 100 and 1.0%, respectively, to achieve the desired detection effect. It is worth noting that although the search range of parameter combinations in the experiments is large, the upper limits of Frame_size and sub_percent are 100 and 1.0%, respectively; such a subsampling scale is quite small relative to the whole image. This further demonstrates that the proposed method, with its low computation and memory consumption, is suitable for HSIs in various real complex scenes, as expected.
D. Comparison of Detection Performance
In this article, TTAD is compared with other classical and advanced hyperspectral AD methods to further demonstrate the effectiveness and superiority of the proposed method. The algorithms selected for comparison in the experiments include 1) RXD (Global-RXD) [32]; 2) local RXD (Local-RXD) [64]; 3) segmented RXD (Segmented-RXD) [33]; 4) robust principal component analysis RXD (RPCA-RXD) [11]; 5) collaborative-representation-based detector (CRD) [47]; and 6) relative-mass-based detector (RMD) [65]. For algorithms using the sliding dual-window detection mode, the inner and outer windows are set according to the spatial size of the anomaly in the different experimental datasets to achieve excellent detection results. According to the detailed information on the experimental datasets provided in Section III-B, for HYDICE, the outer and inner window sizes are set to 13 × 13 and 3 × 3, respectively; for AVIRIS-1, the two windows are set to 15 × 15 and 5 × 5; for AVIRIS-2, the dual window sizes are 17 × 17 and 7 × 7; and for AVIRIS-WTC, they are 15 × 15 and 5 × 5, respectively. For the ABU dataset, there are 13 images in the scenes of airports, beaches, and urban areas, among which different images have their respective settings for the dual-window sizes, including 13 × 13 and 3 × 3, 15 × 15 and 5 × 5, and 17 × 17 and 7 × 7. On all benchmark hyperspectral datasets, the comparison experiments are carried out under the same conditions. Moreover, qualitative and quantitative indicators are utilized to make fair and scientific evaluations of the detection results, so as to fully demonstrate and analyze the comprehensive performance of all algorithms.
For the HYDICE dataset, the processing results of all algorithms are linearly stretched to the range of 0-255. Fig. 16 shows 2-D maps of the detection results for all algorithms involved in the comparison, which are colored according to the magnitude of detection values. Due to the small spatial size of anomaly in this scene, in the results of different algorithms, in addition to pixels corresponding to anomaly locations in the ground-truth being highlighted, other high-brightness pixels may cause false alarms. In the map of Local-RXD, the values of background locations are lower on the overall level, indicating a strong ability to suppress background interference. For the other six algorithms, it is difficult to accurately distinguish the difference in performance only by visual effects. Fig. 17 provides both ROC curves and separability map for comparison. The abscissa display range of the ROC curves is 0-0.1, showing the PDs under low FARs. The curves of Global-RXD, RPCA-RXD, CRD, RMD, and TTAD in Fig. 17(a) are closer to the upper left, Local-RXD is inferior when FAR > 0.004, while Segmented-RXD is outstanding when FAR > 0.048. From the observation of Fig. 17(b), it is obvious that the position of the blue box representing the background of Local-RXD is the lowest, indicating that it could suppress the values of background part to a lower level, which corresponds to the visual effect shown in Fig. 16(d). However, compared with the other six algorithms, the position of the red box representing the anomaly of Local-RXD is too low, indicating a poor ability to highlight the anomaly. Among the other six algorithms, except that the blue box of RPCA-RXD is slightly lower, the blue boxes of Global-RXD, Segmented-RXD, CRD, RMD, and TTAD are located at similar positions, indicating that the suppression effects on the background are also relatively close. While the positions of the red boxes of CRD, RMD, and TTAD are generally higher, which shows that the separability of anomaly and background is stronger in the detection results of these three.
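The linear stretch of detection values to the 0-255 display range mentioned above is a simple min-max rescaling; a one-function sketch (illustrative, not the authors' plotting code):

```python
import numpy as np

def stretch_to_uint8(scores):
    """Linearly stretch detection values to 0-255 for visualization."""
    s = np.asarray(scores, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # guard against a constant map
    return (255 * s).astype(np.uint8)
```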
For the AVIRIS-1 dataset, Fig. 18 presents the visual detection results colored according to the magnitude of values. Except for Local-RXD, the shapes of three planes are basically preserved in the detection results of the other six methods. Compared with Global-RXD, Segmented-RXD, and RPCA-RXD, the brightness of CRD, RMD, and TTAD at anomaly locations is generally higher, and the ability to highlight anomalous pixels is stronger. Moreover, the proposed method preserves the spatial information of anomaly to the greatest extent, so that the detection map has a superior visual effect. The ROC curves and separability plots in Fig. 19 further evaluate and compare the detection results between the algorithms. Obviously, the red curve corresponding to TTAD in Fig. 19(a) is closest to the upper left, and the PD of TTAD is higher than that of other comparison algorithms, indicating that the proposed method could obtain detection results with high confidence. The curve of RMD is second only to TTAD. And the positions of the curves for Global-RXD, Segmented-RXD, RPCA-RXD, and CRD are similar. The ROC evaluation results of CRD and RPCA-RXD are overall better than Global-RXD and Segmented-RXD, while Local-RXD performs worse. The separability map of Fig. 19(b) visually demonstrates the statistical range of values for anomaly and background locations in all detection results. Compared with the other six algorithms, TTAD has a higher position of the red box associated with the anomaly, illustrating the superior ability to separate out the anomaly.
For the AVIRIS-2 dataset, Fig. 20 gives the 2-D colored maps for comparison, and the corresponding ROC curves and separability map are shown in Fig. 21. The visual detection results of all the algorithms involved in the comparison could highlight the anomaly to varying degrees. Global-RXD, RPCA-RXD, CRD, and TTAD are slightly inferior to Local-RXD, Segmented-RXD, and RMD in the suppression of background interference. While in the colored map of TTAD, pixels at the anomaly locations are very conspicuous, hence the overall visual effect is ahead of other algorithms. The ROC evaluation results in Fig. 21(a) further demonstrate the superior performance for the proposed method, with the PDs of RMD and TTAD being higher under the same FAR. The ROC curves of Global-RXD, Segmented-RXD, RPCA-RXD, and CRD are roughly close and interleaved. The AUC evaluation results are provided subsequently to more accurately distinguish the differences in the detection effects of algorithms. Fig. 21(b) visualizes the separability between anomaly and background in detection results. The intersection area between the red and blue boxes of Local-RXD is quite large, which is prone to false alarms. In contrast, the two boxes of Global-RXD, Segmented-RXD, RPCA-RXD, CRD, RMD, and TTAD are farther apart. Among them, the range of values in the anomaly locations of TTAD is at a higher level, which highlights the anomaly to the greatest extent, so its overall detection effect wins.
For the AVIRIS-WTC dataset, the colored detection maps are shown in Fig. 22. The values of Local-RXD and CRD in the background part are well suppressed, but the highlighting effect for anomaly is not obvious. In the maps of Global-RXD, Segmented-RXD, RPCA-RXD, RMD, and TTAD, pixels at the anomaly locations are brighter, and their detection results are all affected by background interference to a certain extent. The ROC evaluation results are shown in Fig. 23(a). It can be seen that the curves of RMD and TTAD are maximally skewed to the upper left, leading other algorithms by a prominent advantage. When FAR > 0.008, the performance of RPCA-RXD is second only to RMD and TTAD. The overall trends of the curves of Global-RXD and Segmented-RXD are similar, and the trends of Local-RXD and CRD are similar. In the separability map of Fig. 23(b), although the blue boxes of Local-RXD and CRD have lower positions, the intersection of boxes in two colors is large in both algorithms. This indicates that a large part of the statistical ranges of detection results at the anomaly and background locations overlap, reflecting a poor separation effect between these two land covers. The detection results of the remaining Global-RXD, Segmented-RXD, RPCA-RXD, and RMD in the background part are roughly at a similar level. The separability degree of anomaly and background in RMD is the highest. On the other hand, TTAD exhibits a rather low overall intersection degree of two colored boxes despite the slightly higher position of the blue one, which proves that the proposed method with excellent detection ability could effectively separate the anomaly from the background.
For the ABU dataset, there are a total of 13 images in scenes such as airports, beaches, and urban areas. Figs. 24, 26, and 28 illustrate the colored maps of detection results for all algorithms in these three categories of scenes, respectively. In Fig. 24, the first to fourth rows correspond to Airport-1 to Airport-4, respectively; in Fig. 26, the first to fourth rows correspond to Beach-1 to Beach-4 respectively; in Fig. 28, the first to fifth rows correspond to Urban-1 to Urban-5, respectively. In addition, Figs. 25, 27, and 29 give the ROC curves and separability maps obtained from the detection performance comparison experiments on datasets of three scenes, respectively. It is worth mentioning that the ABU dataset is large in scale and contains multiple types of real complex scenes. There are considerations for the selection of this dataset: first, the robustness of the proposed method could be analyzed more comprehensively through the discussion on parameters in the previous stage; second, it can be further verified whether the proposed method is suitable for various scenes and exerts its unique advantages through the comparison of detection performance at this stage. The diversity of experimental datasets provides sufficient basis for the trial and promotion of the research in this article in practical applications. As described above, AUC, as a widely used quantitative evaluation indicator, simultaneously examines the PD and the FAR, which could scientifically and reasonably evaluate the detection performance for an algorithm. Therefore, in order to compare all detection methods more concisely and intuitively, Table VIII summarizes the AUC evaluation results on all experimental datasets including ABU.
In addition to evaluating the detection ability of the algorithms using indicators such as the colored map, ROC curve, and separability map, Table VII records the execution time of the algorithms involved in the comparison on all experimental datasets, adding an examination of computational efficiency to the analysis of comprehensive performance. Table VIII summarizes the AUC evaluation results of the algorithms on all experimental datasets, and the maximum AUC on each HSI is marked in bold. Given the large scale of the datasets and the many evaluation results obtained in the experiments, and in order to compare the performance of the detection algorithms on a total of 17 HSIs fairly and scientifically, this article adopts the Wilcoxon rank sum test to evaluate all participating algorithms [66]. The Wilcoxon SCORE table is drawn based on Table VIII. In Table IX, the 7 detection algorithms used for comparison are set as the column labels, and the row labels are the 17 experimental HSIs. Each AUC value is compared with every other AUC in its row, and one point is scored for each value it exceeds. Finally, the points in each column are summed to obtain the Wilcoxon SCORE of each detection algorithm; the larger the Wilcoxon SCORE, the better the detection performance. In the last row of Table IX, the Wilcoxon SCORE achieved by TTAD is the highest among all the detection algorithms participating in the comparison. It can be concluded that the proposed method exhibits the best overall detection effect on all experimental datasets, demonstrating its strong adaptability to various complex real scenes. Moreover, comparing the total execution time of all algorithms in Table VII shows that, although the processing time of Global-RXD, Segmented-RXD, RPCA-RXD, and RMD is shorter than that of TTAD, the computational efficiency of TTAD is much higher than that of Local-RXD and CRD. In summary, TTAD achieves the best overall detection effect with acceptable time consumption. The comparative experiments with other algorithms verify the superior comprehensive performance of the proposed method.
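The Wilcoxon SCORE construction described above can be reproduced directly from an AUC table. The sketch below assumes a datasets × algorithms array and counts, for each entry, how many algorithms it beats on the same dataset:

```python
import numpy as np

def wilcoxon_score(auc_table):
    """Score each algorithm (column) by the number of pairwise wins:
    one point for every other AUC in the same row that it strictly exceeds."""
    auc = np.asarray(auc_table, dtype=float)          # shape: (n_datasets, n_algorithms)
    scores = np.zeros(auc.shape[1], dtype=int)
    for row in auc:
        for j, value in enumerate(row):
            scores[j] += int(np.sum(value > row))     # self-comparison contributes nothing
    return scores
```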
IV. CONCLUSION
In this article, an unsupervised AD method based on tree topology is proposed to address the lack of prior spectral information in practical applications. Point set topology is applied to HSI processing, which, through specific model assumptions, avoids explicit feature extraction for land covers with complex distributions in real scenes. The proposed method fully exploits the "few and different" characteristics of anomalous data points, which are sparsely distributed and far away from high-density populations, and constructs a topological space through tree-structured mapping to realize the redistribution of the HSI data. Through the geometric deformation of this space, the hidden and potential differences in data features between the anomaly and the background in the original space are presented in an intuitive and simple form. On this basis, a measure called "topological cardinality" is developed to quantify the degree of abnormality of every sample in a dataset for the detection output. Finally, the AD task is performed in parallel topological spaces, and parallel measurements of topological cardinality are carried out in multiple tree topologies according to ensemble learning theory to boost the detection performance. Extensive experimental results on HSIs of various scenes in both natural and artificial environments demonstrate that the proposed TTAD exhibits superior detection effects and robustness within a reasonable range of parameter settings. Compared with other classical and advanced AD algorithms on all experimental datasets, TTAD stands out with clear advantages in overall detection performance. The proposed method is capable of adapting to various complex scenes without excessive time consumption, and its considerable comprehensive performance makes it promising for practical applications.
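The tree-structured mapping and ensemble scoring summarized above are reminiscent of isolation-style anomaly scoring. The sketch below is not the authors' TTAD algorithm; it only illustrates, using scikit-learn's IsolationForest as an assumed stand-in, how an HSI cube can be flattened into pixel spectra and scored by an ensemble of trees.

import numpy as np
from sklearn.ensemble import IsolationForest   # assumed stand-in for the tree-topology model, not TTAD itself

def detect_anomalies(hsi_cube, n_trees=100, seed=0):
    # hsi_cube: (rows, cols, bands) hyperspectral image.
    # Returns an anomaly map scaled to [0, 1]; larger values indicate a higher degree of abnormality.
    rows, cols, bands = hsi_cube.shape
    pixels = hsi_cube.reshape(-1, bands)                    # one spectrum per pixel
    forest = IsolationForest(n_estimators=n_trees, random_state=seed)
    forest.fit(pixels)
    score = -forest.score_samples(pixels)                   # higher = more anomalous
    score = (score - score.min()) / (score.max() - score.min() + 1e-12)
    return score.reshape(rows, cols)

# toy example: a nearly flat background with one injected anomalous pixel
cube = np.random.default_rng(0).normal(0.2, 0.01, size=(10, 10, 20))
cube[5, 5, :] += 1.0
amap = detect_anomalies(cube)
print(amap[5, 5], amap[0, 0])   # the injected pixel should receive the largest score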
"Environmental Science",
"Computer Science",
"Engineering"
] |
Titi monkey neophobia and visual abilities allow for fast responses to novel stimuli
The Snake Detection Theory implicates constricting snakes in the origin of primates, and venomous snakes for differences between catarrhine and platyrrhine primate visual systems. Although many studies using different methods have found very rapid snake detection in catarrhines, including humans, to date no studies have examined how quickly platyrrhine primates can detect snakes. We therefore tested in captive coppery titi monkeys (Plecturocebus cupreus) the latency to detect a small portion of visible snake skin. Because titi monkeys are neophobic, we designed a crossover experiment to compare their latency to look and their duration of looking at a snake skin and synthetic feather of two lengths (2.5 cm and uncovered). To test our predictions that the latency to look would be shorter and the duration of looking would be longer for the snake skin, we used survival/event time models for latency to look and negative binomial mixed models for duration of looking. While titi monkeys looked more quickly and for longer at both the snake skin and feather compared to a control, they also looked more quickly and for longer at larger compared to smaller stimuli. This suggests titi monkeys’ neophobia may augment their visual abilities to help them avoid dangerous stimuli.
While there is now extensive evidence that primate visual systems hold snakes in a privileged position, the majority of studies have been conducted on humans and other catarrhine primates, such as macaques (Macaca spp.) and vervet monkeys (Chlorocebus pygerythrus). As catarrhine and platyrrhine primates were exposed to venomous snakes for different amounts of evolutionary time 5 , it is important to broaden the investigation to include more platyrrhines. While several platyrrhine species are reactive toward snakes 26,[31][32][33][34] , to date, there have been no studies on their latency to detect snakes. Here we investigate the ability of coppery titi monkeys (Plecturocebus cupreus) to detect quickly a small portion (2.5 cm) of a snake skin, which provides only the visual cue of scale pattern, and the entire body of a snake, which, in addition to scale pattern, provides the visual cue of a curvilinear shape. Titi monkeys are small-bodied, pair-bonded, platyrrhine primates 35,36 that are vulnerable to snakes 33,34 and, like many other primate species, give alarm calls and mob snakes when they are detected 34 .
Methods
Subject housing and recruitment. Sixteen titi monkey families underwent testing. Each family had an average of 2.6 animals, with a minimum of 2 animals (N = 10, pair-bonded male and female) and a maximum of 5 animals (N = 1, pair-bonded male and female with 3 offspring). All titi monkeys lived at the California National Primate Research Center (CNPRC) 37 . All study subjects were captive-born and naïve to snakes. The animals were housed in cages measuring 1.2 m × 1.2 m × 2.1 m or 1.2 m × 1.2 m × 1.8 m. The rooms where they were housed were maintained at 21 °C on a 12-h light cycle with lights on at 06:00 h and lights off at 18:00 h. Subjects were fed twice daily on a diet of monkey chow, carrots, bananas, apples, and rice cereal. Water was available ad libitum and additional enrichment was provided twice a day. This setup was identical to housing situations described in previous experiments on this titi monkey colony 37,38 .
Animals were recruited for the study based on availability. Groups with infants younger than 4 months old were excluded from our study. All animals were tested in their family groups.
Testing design. This study was loosely modeled after a field experiment on latency to detection in which vervet monkeys were exposed to 2.5 cm of a gopher snake skin (Pituophis catenifer) stuffed with cotton to give the snake skin a rounded, life-like shape 29 . We used the same gopher snake skin in our study and presented the titi monkeys with the same amount of snake skin, but since titi monkeys are known to be strongly neophobic [39][40][41] , we also exposed them to 2.5 cm of a blue synthetic feather to determine if their response to the snake skin was a response to a perceived potential danger or simply to a new stimulus in their environment. We used a blue feather because their ability to see blue hues is unaffected by their dichromatic color vision 42 . We predicted that the monkeys' latency to look would be shorter, and the duration of looking, longer, for the snake skin than for the feather.
Because the responses to the 2.5 cm snake skin were weak, we later added to the experimental design the entire snake skin but without the head, and the entire feather, to test if a larger snake skin would elicit a stronger response than the partial snake skin. Thus, our final experimental design included four stimuli: 2.5 cm feather, 2.5 cm snake skin, entire feather, and entire snake skin.
Transport and behavioral testing. All animals were caught in their home cage and transported to the testing room in familiar transport boxes (31 × 31 × 33 cm). We tested them in a separate room from where the animals were housed to ensure other animals in the colony remained naïve to the novel stimuli. We released the animals into a testing cage (Fig. 1) that was baited with Spanish peanuts (1 per monkey) to encourage exploration of the testing cage. Upon transfer of the last monkey, we moved a stimulus platform into view of the animals (approximately 45 cm from the front of the cage). The platform was covered in a tan-colored towel (Fig. 1). Two researchers sat in chairs to the side of the testing setup to score behavior and facilitate the test. The towels covering the test platform and stimuli (described below) were also tan to minimize contrast with the floor and walls. We used a within- and between-subjects crossover design for this experiment. The order in which subjects saw the test stimuli was counterbalanced across families. Each 20-min testing session consisted of three trials: acclimation, sham control, and stimulus. In the acclimation trial, animals were given ten minutes to acclimate to the new cage, room, and testing platform. No behaviors were scored during this trial. All Spanish peanuts were eaten during this period, and so did not affect subsequent trials.
Once the acclimation trial was over, a research assistant walked between the testing cage and stimulus platform, blocking the stimulus platform with their body. The top towel was lifted and removed, revealing an identical towel below it; then the 5-min sham trial began. This sham trial was used to control for the novelty of manipulating the stimulus platform. This trial will be referred to as "control" in analyses below; all control data were pooled and assessed as one condition.
Once the sham control trial ended, the researcher again walked between the testing cage and stimulus platform. The next towel was removed, revealing the test stimulus (Fig. 2). Depending on the test condition, the animals were shown for 5 min either two hand towels covering both ends of the stimulus, leaving 2.5 cm of the stimulus showing, or a fully uncovered stimulus.
On the first day of a 2-day testing period, animals were either shown 2.5 cm of the snake skin or feather during the test trial. Half of our subjects were shown the 2.5 cm feather on the first day, while the other half were shown the 2.5 cm snake. On the second day, animals were shown 2.5 cm of the alternative stimulus type. After a waiting period of at least two weeks, animals participated in another 2-day testing period in which they were either shown the entire feather or entire snake skin during the test trial. Half of our subjects were shown the entire feather on the first day, while the other half were shown the entire snake. On the second day, animals were shown the entire extent of the alternative stimulus type. At the end of testing each day, families were returned to their home cage and monitored for any signs of distress, none of which were observed.
The data from three families that were exposed to the 2.5 cm feather and 2.5 cm snake were not included in the analyses because we collected behavioral data on them via video as part of our pilot testing instead of live-scoring as we did for the other families. These families contributed data only from the entire snake and entire feather test conditions. One family lost a family member partway through the study (unrelated to this study) and thus the surviving family members did not participate in the entire feather or entire snake test conditions.
Behavior scoring and focal recruitment. During all trials, the latency to look (in seconds) at the sham control platform or stimulus was recorded for every family member, including offspring (N = 16 families, N = 40 individuals) by the second observer (MED or TJF). The number of observations for each trial varied based on the number of animals in each family and which stimulus they saw.
Since we did not know which individual would detect the snake skin first, on the first day of testing, one adult from each family was randomly chosen as the focal animal. We observed this animal for the acclimation and sham control trials on that day. The first adult to detect the stimulus during the test trial on the first day then became the focal animal for the test trial and all subsequent trials. One observer (ARL) live-scored latency to look and duration of looking (also in seconds) for the focal animal using Behavior Tracker 1.5 software (www.behaviortracker.com) on a Dell laptop. The second observer (MED or TJF) was responsible for removing towels between trials, operating a stopwatch, and recording the latency to look for non-focal animals. We operationally defined a "look" as both head and eye orientation toward the stimulus that lasted for more than one second in duration. Look duration was scored by one observer to ensure consistent scoring. The focal animal's latency to look was scored by both the primary and secondary observer as a test of the primary observer's reliability. Observers agreed > 95% of the time in scoring latency to look for the same animals. Observers were not blind to condition since the stimulus platform was visible to all subjects and observers.
Data analysis. We used a survival/event time model for the response variable latency to look because a few animals did not respond to the stimulus during the 5-min test trial and therefore had censored observations of latency to look (N = 15 observations across 11 unique individuals from 10 different families). We fitted two Cox Proportional Hazards regression models using the coxph function of the survival library 43 invoked from the R statistical computing language 44 . The first was a null model including "cluster robust" standard errors to accommodate repeated measures on each subject. The second model added main effects of experimental condition and stimulus order, along with their interaction, to capture the structure of the crossover experiment. The interaction of experimental condition and stimulus order allowed the effect of the stimulus to depend on the order in which the stimuli were shown. Model comparison of the second model to the first, using Akaike's Information Criterion (AIC), assessed the extent to which animals responded to the experiment. To check the fit of the models, we examined a graph of the cumulative hazard function of the Cox-Snell residuals, e.g., [ 45 :356]. The response variable duration of looking (seconds) was integer-valued and highly variable across subjects, suggesting that a negative binomial model would be appropriate. We fitted two generalized linear mixed-effects models using the glmmadmb function of the glmmADMB library 46 . The first was a null model incorporating random intercepts to accommodate repeated measures on each subject. The second model added effects of experimental condition and stimulus order as above for latency to look. We examined a quantile-quantile plot of the Pearson residuals compared to a chi-squared distribution with one degree of freedom to assess goodness of fit, and compared the first and second models using AIC.
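As a rough, language-shifted sketch of only the latency-to-look analysis (the original models were fitted in R with survival::coxph and glmmADMB), the following Python code uses the lifelines package to fit a Cox proportional hazards model with cluster-robust standard errors; the file name, column names, and numeric coding of condition and order are assumptions, and the condition-by-order interaction is omitted for brevity.

import pandas as pd
from lifelines import CoxPHFitter

# assumed long-format table, one row per animal x trial:
#   latency   - seconds until the first look (300 s if the animal never looked, i.e., censored)
#   looked    - 1 if a look occurred during the 5-min trial, 0 if censored
#   subject   - animal ID, used for cluster-robust ("sandwich") standard errors
#   condition, order - experimental condition and presentation order, coded numerically
df = pd.read_csv("titi_latency.csv")   # hypothetical file name

cph = CoxPHFitter()
cph.fit(
    df[["latency", "looked", "subject", "condition", "order"]],
    duration_col="latency",
    event_col="looked",
    cluster_col="subject",   # repeated measures on each subject
)
print(cph.summary)           # hazard ratios with robust standard errors
print(cph.AIC_partial_)      # partial-likelihood AIC, to compare against a simpler (null-like) model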
We tested eight planned contrasts to infer the effects of stimulus length and stimulus type on our animals' latency to look and duration of looking. Under the crossover design, stimuli were necessarily presented to each family in a given order. Although presentation order was counterbalanced across families, we nonetheless wished to contrast stimuli in a manner that was, broadly speaking, indifferent to order. Marginalizing model estimates with respect to order, described in detail in supplemental material 1, meets this need. We calculated marginal contrasts and confidence intervals for the following: entire snake vs. entire feather, 2.5 cm snake versus 2.5 cm feather, entire snake versus 2.5 cm snake, and entire feather versus 2.5 cm feather for both of the response variables, latency to look and duration of looking. We used the Bonferroni correction for eight comparisons to preserve a 5% study-wide false positive rate. We estimated marginal contrasts and standard errors using the log scale of the Cox Proportional Hazards and negative binomial models.
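As a quick numerical check of the ±2.73 SE intervals referred to here, the Bonferroni-adjusted two-sided critical value for eight comparisons at a 5% study-wide level follows directly from the normal quantile function:

from scipy.stats import norm

m = 8                                      # number of planned contrasts
alpha = 0.05                               # study-wide false positive rate
z_crit = norm.ppf(1 - alpha / (2 * m))     # two-sided Bonferroni critical value
print(round(z_crit, 2))                    # ~2.73, matching the +/- 2.73 SE intervals used for the contrasts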
Ethical note. This study was approved by the IACUC of the University of California, Davis. This study met all legal requirements of the United States as well as guidelines set by the American Society of Primatologists for the ethical treatment of non-human primates. This study was carried out in compliance with the ARRIVE guidelines.
Results
Latency to look. Subjects' latency to look (seconds) was recorded for each condition, including the entire feather [median (25th, 75th percentiles): 14 (6, 25)] and the entire snake [9 (7, 21)]. Based on model comparisons with the AIC, the model for latency to look incorporating experimental effects was preferred over the null model [AIC(null) = 1842, AIC(experimental) = 1784]. Broadly speaking, the experiment had strong consequences for latency to look. Based on the survival/event time curves (Fig. 3), animals typically attended more quickly to the 2.5 cm feather, followed by the 2.5 cm snake and the control. The entire snake and the entire feather were both looked at earlier in time than the other three stimuli. Although the smooth, fitted curves in Fig. 3 do not clearly distinguish the entire snake and entire feather, the full model fits well overall.
Duration of looking. We included duration of looking for 94 observations from 18 individuals across 16
families. We had a total of 18 individuals because for two families, our focal individual changed partway through the experiment due to the focal recruitment criteria described above. Subjects' duration of looking (seconds) was recorded for the sham control [median (25th, 75th percentiles): 2 (1, 3)], 2.5 cm feather [7 (3, 9)], 2.5 cm snake [10 (9, 18)], entire feather [25 (16, 42)], and entire snake [45 (36, 57)]. Based on model comparisons with the AIC, the model for duration of looking incorporating experimental effects was preferred over the null model [AIC(null) = 683, AIC(experimental) = 576]. As with latency to look, the experiment had strong consequences for duration of looking. Durations were shortest for the sham control, followed by the 2.5 cm feather, the 2.5 cm snake, the entire feather, and finally the entire snake (Fig. 4). Some animals never looked at the control (N = 11 observations from 10 individuals) and one animal never looked at the 2.5 cm feather. All animals spent some amount of time looking at the 2.5 cm snake, the entire feather, and the entire snake.
Planned contrasts. In order to address our predictions more quantitatively, we planned eight contrasts.
Namely, we assessed if animals had a shorter latency to look and/or a longer duration of looking for potentially dangerous compared to non-dangerous stimuli (four contrasts: 2.5 cm snake vs. 2.5 cm feather, entire snake vs. entire feather for latency to look and duration of looking). We also assessed if animals had a shorter latency to look and/or a longer duration of looking for larger compared to smaller stimulus types (four contrasts: entire snake vs. 2.5 cm snake, entire feather vs. 2.5 cm feather for latency to look and duration of looking). For eight contrasts, the Bonferroni correction required that we construct ± 2.73 SE intervals instead of the usual ± 2 SE interval used to test a single hypothesis at a 5% level. Thus, each interval was relatively conservative. For five of the eight contrasts, the confidence interval contained the null value zero, indicating that the responses to the two stimuli compared were not different enough to be considered statistically significant (Fig. 5). For latency to look, the estimated hazard for the full snake was more than three times greater than the hazard for the 2.5 cm snake and this comparison was statistically supported, with a Bonferroni confidence interval that did not intersect the 1:1 comparison line. Thus, the monkeys' latency to look was shorter for the entire snake than the 2.5 cm snake. Similarly, for latency to look, the estimated hazard for the full feather was more than three times greater than the hazard for the 2.5 cm feather and this comparison was statistically supported, with a Bonferroni confidence interval that did not intersect the 1:1 comparison line. The ratio of duration of looking for the entire feather as compared to the 2.5 cm feather was approximately 3:1 on average, and this comparison was also statistically supported, although the lower confidence limit was only slightly above the 1:1 comparison line (Fig. 5).
Discussion
Unexpectedly, titi monkeys were relatively unresponsive in the presence of the partial snake skin. Vervets looking at the same snake skin simultaneously engaged in other responses, including bending down and peering, remaining still and staring, and standing bipedally to look at the snake skin 29 . We saw no such responses from the titi monkeys. Moreover, the first titi monkeys to look at the partial snake skin were slower (median: 41 s) than the first (free-ranging) vervets (median: 10 s) with similarly unobstructed views 29 . These muted behavioral responses to the partial snake skin, and the lack of a differential response from titi monkeys toward the snake skin and feather with only 2.5 cm showing might be interpreted as non-recognition of potential danger driven by poorer visual ability to detect fine detail such as scale lines. Alternatively, as the titi monkeys used in this study were three to eight generations removed from the original wild founder population, the lack of a differential response could have been driven by a history of captive rearing without exposure to snakes. This possibility prompted us to test them with the entire snake skin and the entire feather. More attention directed to the snake skin than the feather would suggest captivity had minimal effects on their ability to perceive the snake as a potential threat.
While titi monkeys tended to look more quickly (9 vs. 14 s) and for a longer duration (41 vs. 25 s) at the entire snake than the entire feather, suggesting that captivity has not extinguished their visual attraction to snakes, the most obvious differences, as identified in planned comparisons that did not overlap the 1:1 line, were in the latency to look at the entire snake (9 s) versus the partial snake (41 s), the latency to look at the entire feather (14 s) versus the partial feather (29 s), and the duration of looking at the entire feather (25 s) versus the partial feather (7 s). These results reveal that larger stimuli generated more attention than stimuli reflecting a potential threat. This is in line with the neophobic nature of adult titi monkeys in that they respond with caution and visual orientation to novel objects and conditions much more so than other platyrrhines [39][40][41] . Coupled with vision, their neophobia should be beneficial in detecting and avoiding dangerous animals. A question to consider for the future is whether their neophobia might be part of an independently evolved strategy in response to the threat from venomous snakes, as a form of compensation for poorer vision. Placing our study in a broader theoretical context, one of the hypotheses of the SDT is that catarrhine primates are uniformly capable of detecting snakes quickly because their common ancestor evolved in the presence of venomous snakes, a selective pressure that has persisted to the present day. Platyrrhine primates, in contrast, are hypothesized to vary more in their ability to detect snakes because they began diversifying prior to the arrival of venomous snakes in South America. Selection thus would have operated on platyrrhine lineages more independently as venomous snakes became established there 4,5 . The lineage leading to titi monkeys, for example, is estimated to have diverged 16-19.5 million years ago 47,48 , before bothropoid snakes began diverging in South America 49,50 . In addition to studies of comparative brain morphology and neural connectivity (discussed in [ 5 :103-106]), three lines of behavioral evidence appear to support the hypothesis of greater variability in platyrrhine visual systems:
1. Head-cocking is a behavior that is generated more by novel visual stimuli than by recognition of threat, and may help in discriminating the object of attention [51][52][53][54] . No catarrhines head-cock to novel stimuli, whereas platyrrhines are more variable 51 .
2. Catarrhines react to two-dimensional images as they do to three-dimensional images, including snakes 55-67 , whereas platyrrhines are more variable 26,54,55,68-71 .
3. Of all the platyrrhines, capuchin monkey visual systems appear to be most convergent with those of catarrhines 5 . This may also be reflected behaviorally. Capuchins do not head-cock to models of snakes or novel stimuli 26 , they react similarly to two- and three-dimensional images, and, like macaques and humans, they are able to distinguish between dangerous and non-dangerous snakes 26,32,72,73 , an ability that undoubtedly requires excellent visual discrimination. Capuchins are the most terrestrial of platyrrhines, and the risk from terrestrial as well as arboreal venomous snakes may have put a premium on excellent vision for objects that are close by and in front of oneself.
Testing the hypothesis that platyrrhines have greater variability than catarrhines in snake detection requires replicable studies, across many taxa, of the latency to detect snakes, ideally with primates that have likely had experience with snakes, i.e., under field conditions. We have one other recommendation for future comparative studies. The planned contrasts in our study limited our ability to detect a signal among the noise of our study design. We designed eight planned contrasts to examine the sensitivity of titi monkeys to partial and entire snakes and used the Bonferroni correction for eight comparisons to preserve a 5% study-wide false positive rate. This correction produced relatively conservative confidence intervals, likely contributing to several null findings, and may have further muted the already weak response of titi monkeys to snake skins. Future studies should aim for larger sample sizes to minimize such statistical limitations.
Figure 5. Results from eight planned contrasts assessing latency to look and duration of looking in response to feather and snake stimuli. Center dots represent the average marginal contrast. For latency to look, a center dot to the right of the 1:1 line indicates that the hazard for the first stimulus in the comparison is greater, and therefore that the latency to look is shorter. For duration of looking, a center dot to the right of the 1:1 line indicates that the duration for the first stimulus in the comparison is greater. The error bars represent ± 2.73 SE of the marginal contrast, as appropriate for eight planned comparisons. The horizontal axes reflect ratio comparisons of the paired stimuli: hazard ratio for latency to look and ratio of durations for duration of looking. For example, the hazard for the entire feather was more than three times the hazard for the 2.5 cm feather, on average.
Data availability
All data generated or analyzed during this study are included in this published article (and its Supplementary Information files).
"Biology",
"Psychology"
] |
Atomic fluctuations lifting the energy degeneracy in Si/SiGe quantum dots
Electron spins in Si/SiGe quantum wells suffer from nearly degenerate conduction band valleys, which compete with the spin degree of freedom in the formation of qubits. Despite attempts to enhance the valley energy splitting deterministically, by engineering a sharp interface, valley splitting fluctuations remain a serious problem for qubit uniformity, needed to scale up to large quantum processors. Here, we elucidate and statistically predict the valley splitting by the holistic integration of 3D atomic-level properties, theory and transport. We find that the concentration fluctuations of Si and Ge atoms within the 3D landscape of Si/SiGe interfaces can explain the observed large spread of valley splitting from measurements on many quantum dot devices. Against the prevailing belief, we propose to boost these random alloy composition fluctuations by incorporating Ge atoms in the Si quantum well to statistically enhance valley splitting.
the spin coherence 24 . Furthermore, small valley splitting may affect Pauli spin blockade readout 25 , which is considered in large-scale quantum computing proposals 5,6 . Therefore, scaling up to larger systems of single-electron spin qubits requires that the valley splitting of all qubits in the system should be much larger than the typical operation temperatures (20−100 mK).
It has been known for some time that valley splitting depends sensitively on the interface between the quantum well and the SiGe barrier 26 . Past theoretical studies have considered disorder arising from the quantum well miscut angle 27 and steps in the interface [28][29][30][31][32] , demonstrating that disorder of this kind can greatly decrease valley splitting in quantum dots. However, a definitive connection to experiments has proven challenging for a number of reasons. At the device level, a systematic characterisation of valley splitting in Si/SiGe quantum dots has been limited because of poor device yield associated with heterostructure quality and/or device processing. At the materials level, atomic-scale disorder in buried interfaces 33 may be revealed by atom-probe tomography (APT) in three dimensions (3D) over nanoscale dimensions comparable to electrically defined quantum dots. However, the current models employed to reconstruct the APT data in 3D can be fraught with large uncertainties due to the assumptions made to generate the three-dimensional representation of the tomographic data 34 . This results in limited accuracy when mapping heterointerfaces 35 and quantum wells [36][37][38] . These limitations prevent linking the valley splitting in quantum dots to the relevant atomic-scale material properties and hinder the development of accurate and predictive theoretical models.
Herein we solve this outstanding challenge and establish comprehensive insights into the atomic-level origin of valley splitting in realistic silicon quantum dots. Firstly, we measure valley splitting systematically across many quantum dots, enabled by high-quality heterostructures with a low disorder potential landscape and by improved fabrication processes. Secondly, we establish a new method to analyse APT data leading to accurate 3D evaluation of the atomiclevel properties of the Si/SiGe buried interfaces. Thirdly, incorporating the 3D atomic-level details obtained from APT, we simulate valley splitting distributions that consider the role of random fluctuations in the concentration of Si and Ge atoms at each layer of the Si/SiGe interfaces. By comparing theory with experiments, we find that the measured random distribution of Si and Ge atoms at the Si/SiGe interface is enough to account for the measured valley splitting spread in real quantum dots. Based on these atomistic insights, we conclude by proposing a practical strategy to statistically enhance valley splitting above a specified threshold as a route to making spin-qubit quantum processors more reliable-and consequently-more scalable.
Results
Material stacks and devices
Figure 1 overviews the material stack, quantum dot devices, and measurements of valley splitting. To increase statistics, we consider two isotopically purified 28 Si/Si 0.7 Ge 0.3 heterostructures (quantum wells A and B) designed with the same quantum well width and top-interface sharpness (Methods), which are important parameters determining valley splitting 23,26 . Quantum well A (Fig. 1a) has a sharp 28 Si → Si-Ge heterointerface at the top and a diffused Si-Ge → 28 Si heterointerface at the bottom, whereas in quantum well B (Fig. 1b) the growth process was optimized to achieve sharp interfaces at both ends of the quantum well. These heterostructures support a two-dimensional electron gas with high mobility and low percolation density (Supplementary Figs. 1 and 2), indicating a low disorder potential landscape, and high-performance qubits 10,39 with single- and two-qubit gate fidelities above 99% 10 . We define double quantum dots electrostatically using gate layers insulated by dielectrics (Methods). A positive gate voltage applied to plunger gates P1 and P2 (Fig. 1c) accumulates electrons in the buried quantum well, while a negative bias applied to other gates tunes the confinement and the tunnel coupling between the quantum dots Q1 and Q2. All quantum dots in this work have plunger gate diameters in the range of 40-50 nm ( Fig. 1d and Supplementary Table 1), setting the relevant lateral length scale for atomic-scale disorder probed by the electron wave function.
Valley splitting measurements
We perform magnetospectroscopy measurements of valley splitting E v in dilution refrigerators with electron temperatures of about 100 mK (Methods). Figure 1e shows a typical charge stability diagram of a double quantum dot with DC gate voltages tuned to achieve the few-electron regime, highlighted in Fig. 1f. We determine the 2-electron singlet-triplet energy splitting (E ST ) by measuring the gate-voltage dependence as a function of parallel magnetic field B along the (0,1) → (0,2) transition ( Fig. 1g) and along the (1,1) → (0,2) transition ( Supplementary Fig. 4). In Fig. 1g, the transition line (black line) slopes upward because a spin ↑ electron is added to form the singlet ground state S 0 . Alternatively, a spin ↓ electron can be added to form a T − state, with a downward slope. A kink occurs when the S 0 state is energetically degenerate with the T − state, which becomes the new ground state of the two-electron system. From the position of the kink (B ST = 1.57 T) along the theoretical fit (red line) and the relation E ST = gμ B B ST , where g = 2 is the electron gyromagnetic ratio and μ B is the Bohr magneton, we determine E ST = 182.3 μeV for this quantum dot. E ST sets a lower bound on the valley splitting, E v ≥ E ST 21,40 . Due to their small size, our dots are strongly confined, with the lowest orbital energy much larger than E ST ( Supplementary Fig. 3), similar to other Si/SiGe quantum dots 14,18,22 . Therefore, we expect exchange corrections to have negligible effects 40 and here take E v ≈ E ST .
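For reference, the conversion E ST = gμ B B ST quoted above can be reproduced numerically; the rounded kink field of 1.57 T is taken from the text, and the small difference from the quoted 182.3 μeV reflects that rounding.

from scipy.constants import physical_constants

mu_B = physical_constants["Bohr magneton in eV/T"][0]   # ~5.788e-5 eV/T
g = 2.0          # electron gyromagnetic ratio used in the text
B_ST = 1.57      # kink position (T), rounded value from the text

E_ST_ueV = g * mu_B * B_ST * 1e6                        # eV -> micro-eV
print(f"E_ST ~ {E_ST_ueV:.1f} ueV")                     # ~181.8 ueV, consistent with the quoted 182.3 ueV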
Here we report measurements of E v in 10 quantum dots in quantum well A and 12 quantum dots in quantum well B (Supplementary Figs. 5 and 6) and compare the measured values in Fig. 1h. We observe a rather large spread in valley splittings; however, we obtain remarkably similar mean values and two-standard-deviation error bars E v ± 2σ of 108 ± 55 μeV and 106 ± 58 μeV for quantum wells A and B, respectively. The quantum dots all have a similar design and hence are expected to have similar electric fields across the devices, with a small influence on valley splitting under our experimental conditions. We argue that quantum wells A and B have similar E v ± 2σ because the electronic ground state is confined against the top interface, which is very similar in the two quantum wells.
Atom probe tomography
We now characterise the atomic-scale concentration fluctuations at the quantum well interfaces to explain the wide range of measured valley splittings with informed theoretical and statistical models. To probe the concentrations over the dimensions relevant for quantum dots across the wafer, we perform APT on five samples each from quantum wells A and B, with a field of view of approximately 50 nm at the location of the quantum well (Methods). First, we show how to reliably reconstruct the buried quantum well interfaces, then we use this methodology to characterise their broadening and roughness. Figure 2a shows a typical point-cloud reconstruction of an APT specimen from quantum well B. Each point represents the estimated position of an ionized atom detected during the experiment 34 . Qualitatively, we observe an isotopically enriched 28 Si quantum well, essentially free of 29 Si, cladded in a SiGe alloy. To probe the interface properties with the highest possible resolution allowed by APT and differently from previous APT studies on Si/SiGe 38 , we represent the atom positions in the acquired data sets in the form of a Voronoi tessellation 41,42 and generate profiles on an x − y grid of the tessellated data, as described in Supplementary Note 2c. A sigmoid function [1 + exp((z − z 0 )/τ)] −1 is used to fit the profiles of each tile in the x − y grid 38 . Here, z 0 is the inflection point of the interface and 4τ is the interface width. As the Voronoi tessellation of the data set does not sacrifice any spatial information, the tiling in the x − y plane represents the smallest lateral length scale over which we characterise the measured disorder at the interface. Note that we do not average at all over the z axis and hence maintain the inherent depth resolution of APT. We find that for tiles as small as 3 nm × 3 nm the numerical fitting of sigmoid functions to the profiles converges reliably. Although each tile contains many atoms, their size is still much smaller than the quantum dot diameter, and may therefore be considered to be microscopic. We use the sigmoid fits for each tile stack to visualise and further characterise the interfaces (Supplementary Figs. 8-10). Importantly, Ge concentration isosurfaces as shown in Fig. 2b, c are constructed by determining the vertical position for which each of the sigmoids reaches a specific concentration. Note that we oversample the interface to improve the lateral resolution by making the 3 nm × 3 nm tiles partially overlap (Supplementary Note 2c).
Fig. 2: b, c Voronoi tessellation of the APT reconstructions for quantum wells A and B, respectively, and extracted isosurfaces corresponding to 8% Ge concentration. z is the average position of the 8% Ge concentration across these particular samples. We limit the lateral size of the analysis to ≈ 30 nm × 30 nm, reflecting the typical lateral size of a quantum dot (Fig. 1d). d Average germanium concentration depth profiles across quantum wells A (magenta) and B (green). Shaded areas mark the 95% confidence interval over each of the sets of five APT samples. e Statistical analysis of the top interface width 4τ determined by fitting the data for quantum wells A (magenta) and B (green) to sigmoid functions. Thick and thin horizontal black lines denote the mean and two-standard-deviation error bars for the different APT samples. Dotted black lines show 4τ results from the HAADF-STEM measurements (Supplementary Fig. 13).
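A minimal sketch of the per-tile sigmoid fit described above, using SciPy's curve_fit on a synthetic depth profile (the noise level, barrier Ge fraction, and interface width are assumed toy values, not fitted APT data):

import numpy as np
from scipy.optimize import curve_fit

def ge_sigmoid(z, z0, tau, ge_max):
    # Ge fraction across the interface: ~ge_max on the barrier side, ~0 in the well
    return ge_max / (1.0 + np.exp((z - z0) / tau))

# synthetic tile profile: 30% Ge barrier, interface width 4*tau = 0.8 nm, plus small noise
z = np.linspace(-3.0, 3.0, 61)                            # depth (nm) relative to the nominal interface
true = ge_sigmoid(z, z0=0.0, tau=0.2, ge_max=0.30)
noisy = true + np.random.default_rng(1).normal(0.0, 0.01, z.size)

(z0_fit, tau_fit, ge_fit), _ = curve_fit(ge_sigmoid, z, noisy, p0=[0.0, 0.3, 0.3])
print(f"interface width 4*tau = {4 * tau_fit:.2f} nm")    # ~0.8 nm for this synthetic tile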
In Fig. 2d, we show the average Ge concentration profile and measurement-to-measurement variations from the tessellated volumes (Supplementary Note 2b, c) of all samples for both quantum wells A and B. APT confirms the HAADF-STEM results in Fig. 1a, b: quantum wells A and B have an identical sharp top interface and quantum well A has a broader bottom interface. Furthermore, the shaded colored areas in Fig. 2d reveal narrow 95% confidence intervals, pointing to highly uniform concentration profiles when averaged across the wafer. Strong disorder fluctuations emerge at the much smaller tile length scale. In Fig. 2e we show for all samples of a given quantum well the interface width mean value with two standard deviations, 4τ ± 2σ, obtained by averaging over all the tiles in a given sample. The results indicate uniformity of 4τ, and further averaging across all samples of a given heterostructure (μ 4τ , black crosses) yields similar values of μ 4τ = 0.85 ± 0.32 nm and 0.79 ± 0.31 nm for quantum wells A and B, consistent with our 4τ analysis from HAADF-STEM measurements (black dotted lines). However, the two-standard-deviation errors (2σ) of each data point can be up to 30% of the mean value 4τ.
To pinpoint the root cause of atomic-scale fluctuations at the interface, in Fig. 2f, g we utilize the 3D nature of the APT data sets and compare the root mean square (RMS) roughness of the interfaces measured by APT on quantum well B (solid green lines) to two 3D models mimicking the dimensions of an APT data set. Both models are generated with random distributions of Si and Ge in each atomic plane (Supplementary Note 2d). The first model (solid black lines) corresponds to an atomically abrupt interface where the Ge concentration drops from ~33.5% to 0% in a single atomic layer. It hence represents the minimum roughness achievable at each isoconcentration surface given the in-plane randomness of SiGe and the method used to construct the interface. The second model (dashed black lines) is generated with the experimentally determined Ge concentration profile along the depth axis ( Supplementary Fig. 11). As shown in Fig. 2f, g, the roughness extracted from the second model fits the measured data well, suggesting that the RMS roughness measured by APT is fully explained by the interface width and shape along the depth axis. Furthermore, as the deviation of each isosurface tile position from the isosurface's average position also matches that of the measured interfaces from the second model (Supplementary Movie 1), the APT data are consistent with a random in-plane distribution of Ge perpendicular to the interface in all data sets of quantum well B. For 2 out of 5 samples of quantum well A that we analyzed, we observe features that are compatible with correlated disorder from atomic steps (Supplementary Fig. 13). In the following, the alloy disorder observed in the APT concentration interfaces is incorporated into a theoretical model. As shown below, the calculations of valley splitting distributions associated with the 3D landscape of Si/SiGe interfaces can be further simplified into a 1D model that incorporates the in-plane random distribution of Si and Ge atoms.
Valley splitting simulations
We begin by considering an ideal laterally infinite heterostructure with no concentration fluctuations, and we denote the average Si concentration at layer l by x l . Due to the finite size of a quantum dot and the randomness in atomic deposition, there will be dot-to-dot concentration fluctuations. We therefore model the actual Si concentration at layer l by averaging the random alloy distribution weighted by the lateral charge density in the quantum dot, giving x d l = x l + δ x l , as described in Supplementary Note 3c. Here, the random variation δ x l is computed assuming a binomial distribution of Si and Ge atoms. We find that these fluctuations can have a significant impact on the valley splitting.
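The layer-resolved fluctuation δx l can be illustrated with a simple binomial draw; the effective number of lattice sites sampled by the dot in each layer is an assumed round number here, not the weighting actually used in the paper's simulations.

import numpy as np

rng = np.random.default_rng(0)

def sample_profiles(x_mean, n_eff, n_samples):
    # x_mean: average Si concentration per layer; n_eff: effective number of sites
    # weighted by the dot's lateral charge density. Returns randomized profiles x_l^d.
    counts = rng.binomial(n_eff, x_mean, size=(n_samples, x_mean.size))
    return counts / n_eff

# toy profile: pure-Si well (x = 1) with a sigmoidal top interface into Si0.7Ge0.3 (x = 0.7)
layers = np.arange(40)
x_mean = 0.7 + 0.3 / (1.0 + np.exp((layers - 25) / 1.5))
profiles = sample_profiles(x_mean, n_eff=1000, n_samples=10000)
print(profiles.std(axis=0))   # fluctuations grow with Ge content, largest toward the barrier side of the interface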
We explore these effects numerically using 1D tight-binding simulations. We begin with the averaged fitted concentration profiles obtained from the APT analysis in Fig. 2d, which enable us to directly measure the average Ge concentration in a given layer x l (Fig. 3a). The variance of the concentration fluctuations is determined by the size of the quantum dot, which we assume has an orbital excitation energy of ℏω = 4.18 meV and corresponding radius √(ℏ/m * ω), as well as the average Si concentration x l . Here, m * is the effective mass of Si. Together, x l and the variance determine the probability distribution of weighted Si and Ge concentrations. Concentration profiles are sampled repeatedly from this distribution, with a typical example shown in Fig. 3b. The valley splitting is then determined from a 1D tight-binding model 43 . The envelope of the effective mass wavefunction ψ env (z) is shown in Fig. 3c (grey curve) for an electron confined in the quantum well of Fig. 3b. The procedure is repeated for 10,000 profile samples, obtaining the histogram of valley splittings shown in Fig. 3e. These results agree very well with calculations obtained using a more sophisticated three-dimensional 20-band sp 3 d 5 s* NEMO tight-binding model 44 (Supplementary Note 3b) and confirm that concentration fluctuations can produce a wide range of valley splittings. For comparison, at the top of Fig. 3e, we also plot the same experimental valley splittings shown in Fig. 1h, demonstrating good agreement in both the average value and the statistical spread. These observations support our claim that the valley splitting is strongly affected by composition fluctuations due to random distributions of Si and Ge atoms near the quantum well interfaces, even though the experiments cannot exclude the presence of correlated disorder from atomic steps in quantum dots.
Analytical methods using effective mass theory may also be used to characterise the distribution of valley splittings. First, we model the intervalley coupling matrix element 26 as Δ = ∫ e −2ik 0 z U(z) |ψ env (z)| 2 dz, where k 0 = 0.82 × 2π/a 0 is the position of the valley minimum in the Si Brillouin zone, a 0 = 0.543 nm is the length of the Si cubic unit cell, ψ env (z) is a 1D envelope function, and U(z) is the quantum well confinement potential. The intervalley coupling Δ describes how sharp features in the confinement potential couple the two valley states, which would otherwise be degenerate. In general, Δ is a complex number that can be viewed as the sum of two distinct components: a deterministic piece Δ 0 , arising from the average interface concentration profile, and a random piece δΔ, arising from concentration fluctuations. The latter can be expressed as a sum of contributions from individual atomic layers: δΔ = ∑ l δΔ l , where δΔ l is proportional to δ x l |ψ env (z l )| 2 (see Methods). To visualize the effects of concentration fluctuations in Fig. 3c, we compute δΔ l using the randomized density profile of Fig. 3b (blue curve). We see that the most significant fluctuations occur near the top interface, where |ψ env (z l )| and the Ge content of the quantum well are both large. In Fig. 3d we plot Δ values obtained for 10,000 quantum-well realizations using this effective mass approach. The deterministic contribution to the valley splitting Δ 0 (black dot) is seen to be located near the center of the distribution in the complex plane, as expected. However, the vast majority of Δ values are much larger than Δ 0 , demonstrating that concentration fluctuations typically provide the dominant contribution to intervalley coupling.
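The intervalley coupling can be evaluated directly as a discrete sum over atomic layers once a profile and an envelope are specified. The sketch below uses toy inputs (an assumed ~0.5 eV band offset at 30% Ge and a Gaussian envelope pressed against the top interface), so the printed number is illustrative rather than a prediction for these devices.

import numpy as np

a0 = 0.543                          # Si cubic lattice constant (nm)
k0 = 0.82 * 2 * np.pi / a0          # valley minimum position (1/nm), as in the text
dz = a0 / 4                         # atomic layer spacing along the growth axis (nm)
z = np.arange(240) * dz             # layer positions (nm)

ge = 0.3 / (1.0 + np.exp(-(z - 25.0) / 0.25))     # toy Ge profile: Si well below z = 25 nm, SiGe barrier above
U = (0.5 / 0.3) * ge                              # toy confinement potential (eV), assuming ~0.5 eV offset at 30% Ge

psi = np.exp(-0.5 * ((z - 23.5) / 2.0) ** 2)      # toy envelope pressed against the top interface
psi /= np.sqrt(np.sum(psi ** 2) * dz)             # normalize so that sum |psi|^2 dz = 1

delta = np.sum(np.exp(-2j * k0 * z) * U * psi ** 2 * dz)     # intervalley coupling Delta (eV)
print(f"E_v = 2|Delta| = {2 * abs(delta) * 1e6:.1f} ueV (toy numbers)")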
The total valley splitting is closely related to the intervalley coupling via E v = 2|Δ|, and therefore exhibits the same statistical behavior. In Fig. 3e, the orange curve shows the Rice distribution whose parameters are derived from effective-mass calculations of the valley splitting (see Methods), using the same concentration profiles as the histogram data. The excellent agreement between these different approaches confirms the accuracy of our theoretical techniques (Supplementary Note 3d).
Discussion
Based on the results obtained above, we now propose two related methods for achieving large valley splittings (on average), with high yields. Both methods are derived from the key insight of Fig. 3c: due to random-alloy fluctuations, the valley splitting is almost always enhanced when the electronic wavefunction overlaps with more Ge atoms. In the first method, we therefore propose to increase the width of the interface (4τ) as shown in Fig. 3f, since this enhances the wavefunction overlap with Ge atoms at the top of the quantum well. This approach is nonintuitive because it conflicts with the conventional deterministic approach of engineering sharp interfaces. The second method, also shown in Fig. 3f, involves intentionally introducing a low concentration of Ge inside the quantum well. The latter method is likely more robust because it can incorporate both deterministic enhancement of the valley splitting from a sharp interface, and fluctuation-enhanced valley splitting.
We test these predictions using simulations, as reported in Fig. 3g, where different colors represent different interface widths and the horizontal axis describes the addition of Ge to the quantum well. For no intentional Ge in the quantum well, as consistent with the heterostructure growth profile of our measured quantum dots, the calculations show a significant increase in the valley splitting with increasing interface width. Here, the narrowest interface appears most consistent with our experimental results (magenta marker), attesting to the sharp interfaces achieved in our devices. As the Ge concentration increases in the quantum well, this advantage is largely overwhelmed by concentration fluctuations throughout the well. A very substantial increase in valley splitting is observed for all concentration enhancements, even at the low, 5% level. Here, the light error bars represent 5-95 percentiles while dark bars represent 25-75 percentiles. At the 5% concentration level, our simulations indicate that >95% of devices should achieve valley splittings > 100 μeV. This value is more than an order of magnitude larger than the typical operation temperature of spin qubits and is predicted to yield a 99% readout fidelity 25 . This would represent a significant improvement in qubit yield for Si quantum dots. A recent report of SiGe quantum wells with oscillating Ge concentrations provides the first experimental evidence that intentionally placing Ge in the quantum well leads to significant variability and some of the highest recorded values of valley splitting 45 .
Fig. 3: b Typical, randomized Ge concentration profile, derived from a. c Envelope function ψ env (z), obtained for the randomized profile in b (grey curve), and the corresponding concentration fluctuations weighted by the envelope function squared: δ x l |ψ env (z l )| 2 (blue). Here, the wavefunction is concentrated near the top interface where the concentration fluctuations are also large; the weighted fluctuations are therefore the largest in this regime. d Distribution of the intervalley matrix element Δ in the complex plane, as computed using an effective-mass approach, for 10,000 randomized concentration profiles. The black marker indicates the deterministic value of the matrix element Δ 0 , obtained for the experimental profile in a. e Histogram of the valley splittings from tight-binding simulations with 10,000 randomized profiles. The same profiles may be used to compute valley splittings using effective-mass methods; the orange curve shows a Rice distribution whose parameters are obtained from such effective-mass calculations (see Methods). f Schematic Si/SiGe quantum well with Ge concentrations ρ W (in the well) and ρ b = ρ W + Δρ (in the barriers), with a fixed concentration difference of Δρ = 25%. g Distribution of valley splittings obtained from simulations with variable Ge concentrations, corresponding to ρ W ranging from 0 to 10%, and interface widths 4τ = 5 ML (red circles), 10 ML (blue triangles), or 20 ML (orange squares), where ML refers to atomic monolayers. Here, the marker describes the mean valley splitting, while the darker bars represent the 25-75 percentile range and the lighter bars represent the 5-95 percentile range. Each bar reflects 2000 randomized tight-binding simulations of a quantum well of width W = 120 ML. The magenta diamond at zero Ge concentration shows the average measured valley splitting of quantum well A. In all simulations reported here, we assume an electric field of E = 0.0075 V/nm and a parabolic single-electron quantum-dot confinement potential with orbital excitation energy ℏω = 4.18 meV and corresponding dot radius √(ℏ/m * ω).
In conclusion, we argue for the atomic-level origin of valley splitting distributions in realistic Si/SiGe quantum dots, providing key insights on the inherent variability of Si/SiGe qubits and thereby solving a longstanding problem facing their scaling. We relate 3D atom-by-atom measurements of the heterointerfaces to the statistical electrical characterisation of devices, and ultimately to underlying theoretical models. We observe qualitative and quantitative agreement between simulated valley splitting distributions and measurements from several quantum dots, supporting our theoretical framework. Crucially, we learn that atomic concentration fluctuations of the 28 Si → Si-Ge heterointerface are enough to account for the valley splitting spread and that these fluctuations are largest when the envelope of the wavefunction overlaps with more Ge atoms. Moreover, while we have only incorporated random alloy disorder into our theoretical framework so far, we foresee that APT datasets including correlated disorder, such as steps, will be used to further refine our theoretical understanding of valley splitting statistics. Since atomic concentration fluctuations are always present in Si/SiGe devices due to the intrinsic random nature of the SiGe alloy, we propose to boost these fluctuations to achieve on average large valley splittings in realistic silicon quantum dots, as required for scaling the size of quantum processors.
Our proposed approaches are counter-intuitive yet very pragmatic. The interface broadening approach seems viable for hybrid qubits, which require valley splitting to be large enough to be usable but not so large as to be inaccessible. For single-electron spin qubits, which don't use the valley degree of freedom, the direct introduction of Ge in the quantum well appears better suited for targeting the largest possible valley splitting. By adding Ge to the Si quantum well in small concentrations we expect to achieve on average valley splitting in excess of 100 μeV. Early calculations from scattering theories 46 suggest that the added scattering from random alloy disorder will not be the limiting factor for mobility in current 28 Si/SiGe heterostructures. However, an approximate two-fold reduction in electron mobility was recently reported when an oscillating Ge concentration of about 5% on average is incorporated in the Si quantum well 45 . We speculate that fine-tuning of the Ge concentration in the quantum well will be required for enhancing the average valley splitting while not compromising the low-disorder potential environment, which is important for scaling to large qubit systems. We believe that our results will inspire a new generation of Si/SiGe material stacks that rely on atomicscale randomness of the SiGe as a new dimension for the heterostructure design.
Si/SiGe heterostructure growth
The 28 Si/SiGe heterostructures are grown on a 100-mm n-type Si(001) substrate using an Epsilon 2000 (ASMI) reduced pressure chemical vapor deposition reactor equipped with a 28 SiH 4 gas cylinder (1% dilution in H 2 ) for the growth of isotopically enriched 28 Si. The 28 SiH 4 gas was obtained by reducing 28 SiF 4 with a residual 29 Si concentration of 0.08% 47 . Starting from the Si substrate, the layer sequence for quantum well A comprises a 900 nm layer of Si 1−x Ge x graded linearly from x = 0 to 0.3, followed by a 300 nm Si 0.7 Ge 0.3 strain-relaxed buffer, an 8 nm tensile-strained 28 Si quantum well, a 30 nm Si 0.7 Ge 0.3 barrier, and a sacrificial Si cap. The layer sequence for quantum well B comprises a 1.4 μm step-graded Si (1−x) Ge x layer with a final Ge concentration of x = 0.3 achieved in four grading steps (x = 0.07, 0.14, 0.21, and 0.3), followed by a 0.45 μm Si 0.7 Ge 0.3 strain-relaxed buffer, an 8 nm tensile-strained 28 Si quantum well, a 30 nm Si 0.7 Ge 0.3 barrier, and a sacrificial Si cap. In quantum well A, the Si 0.7 Ge 0.3 strain-relaxed buffer and the Si quantum well are grown at 750°C without growth interruption. In quantum well B the Si 0.7 Ge 0.3 strain-relaxed buffer below the quantum well is grown at a temperature of 625°C, followed by growth interruption and quantum well growth at 750°C. This modified temperature profile yields a sharper bottom interface for quantum well B as compared to quantum well A.
Atom probe tomography
Samples for APT were prepared in a FEI Helios Nanolab 660 dual-beam scanning electron microscope using a gallium focused ion beam at 30, 16, and 5 kV and using a procedure described in detail in ref. 48. Before preparation, a 150-200 nm thick chromium capping layer was deposited on the sample via thermal evaporation to minimize the implantation of gallium ions into the sample. All APT analyses were started inside this chromium cap with the stack fully intact underneath. APT was carried out using a LEAP 5000XS tool from Cameca. The system is equipped with a laser to generate picosecond pulses at a wavelength of 355 nm. For the analysis, all samples were cooled to a temperature of 25 K. The experimental data are collected at a laser pulse rate of 200-500 kHz at a laser power of 8-10 pJ. APT data are reconstructed using IVAS 3.8.5a34 software and visualized using the AtomBlend addon to Blender 2.79b and Blender 2.92 software. For the Voronoi tessellation the reconstructed data sets were exported to Python 3.9.2 and then tessellated using the scipy.spatial.Voronoi class of SciPy 1.6.2. Note that in these analyses the interfaces are represented as an array of sigmoid functions generated perpendicular to the respective interface on 3 nm × 3 nm tiles that are 1 nm apart. This sacrifices lateral resolution to allow for statistical sampling of the elemental concentrations but preserves the atomic resolution along the depth axis that APT is known to provide upon constructing the interface as shown in Fig. 2a.
Device fabrication
The fabrication process for Hall-bar shaped heterostructure field effect transistors (H-FETs) involves: reactive ion etching of mesa-trench to isolate the two-dimensional electron gas (2DEG); P-ion implantation and activation by rapid thermal annealing at 700°C; atomic layer deposition of a 10-nm-thick Al 2 O 3 gate oxide; deposition of thick dielectric pads to protect gate oxide during subsequent wire bonding step; sputtering of Al gates; electron beam evaporation of Ti:Pt to create ohmic contacts to the 2DEG via doped areas. All patterning is done by optical lithography. Quantum dot devices are fabricated on wafer coupons from the same H-FET fabrication run and share the process steps listed above. Double-quantum dot devices feature a single layer gate metallization and further require electron beam lithography, evaporation of Al (27 nm)
Electrical characterisation of devices
Hall-bar measurements are performed in a Leiden cryogenic dilution refrigerator with a mixing chamber base temperature TMC = 50 mK (ref. 50). We apply a source-drain bias of 100 μV and measure the source-drain current ISD, the longitudinal voltage Vxx, and the transverse Hall voltage Vxy as a function of the top-gate voltage Vg and the external perpendicular magnetic field B. From these we calculate the longitudinal resistivity ρxx and transverse Hall resistivity ρxy. The Hall electron density n is obtained from the linear relationship ρxy = B/(en) at low magnetic fields. The carrier mobility μ is extracted from the relationship σxx = neμ, where e is the electron charge. The percolation density np is extracted by fitting the longitudinal conductivity σxx to the relation σxx ∝ (n − np)^1.31. Here σxx is obtained via tensor inversion of ρxx at B = 0. Quantum dot measurements are performed in Oxford and Leiden cryogenic refrigerators with base temperatures ranging from 10 to 50 mK. Quantum dot devices are operated in the few-electron regime. Further details of the 2DEG and quantum dot measurements are provided in the Supplementary Note 1.
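The density, mobility, and percolation-density extraction described above can be illustrated with a short Python sketch. Array names, units, and the initial guesses passed to the fit are illustrative assumptions; the relations themselves (ρxy = B/(en), σxx = neμ, σxx ∝ (n − np)^1.31) follow the text.

```python
# Minimal sketch (assumed post-processing, not the authors' code) of extracting
# the Hall density, mobility, and percolation density from Hall-bar data.
import numpy as np
from scipy.constants import e  # elementary charge in C
from scipy.optimize import curve_fit

def hall_density(B, rho_xy):
    """Electron density n (m^-2) from the low-field linear relation rho_xy = B/(e n)."""
    slope = np.polyfit(B, rho_xy, 1)[0]  # d(rho_xy)/dB in ohm/T
    return 1.0 / (e * slope)

def hall_mobility(n, sigma_xx_B0):
    """Mobility mu (m^2/Vs) from sigma_xx = n e mu at B = 0."""
    return sigma_xx_B0 / (n * e)

def percolation_density(n, sigma_xx):
    """Percolation density n_p from a fit of sigma_xx = A (n - n_p)^1.31."""
    n = np.asarray(n, float)
    sigma_xx = np.asarray(sigma_xx, float)

    def model(n, A, n_p):
        return A * np.clip(n - n_p, 0.0, None) ** 1.31

    p0 = [sigma_xx.max() / (n.max() - 0.5 * n.min()) ** 1.31, 0.5 * n.min()]
    (_, n_p), _ = curve_fit(model, n, sigma_xx, p0=p0)
    return n_p
```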
"Physics"
] |
Design and Implementation of Mobile Application for Results Dissemination System
A mobile application is an application system delivered via mobile phone; it is basically a network-enabled means of conveying skills and knowledge. Due to the advanced technology of smartphones, such applications have become important systems that allow a large number of people to access information efficiently. The aim of this research is to design and implement a mobile application that can disseminate students' examination results. We developed this application with the Java programming language, the Phased model as the software development methodology, and Android technology [1]. This research used the following methods for collecting data: documentation, interview, and observation techniques. The researcher concluded that the system was successfully implemented using the Phased Model methodology.
Introduction
A mobile application for results dissemination is a system needed in higher learning institutions in order to deliver students' marks in an easy way. The purpose of the mobile application is to strengthen the quality of services delivered by the higher learning institution. The software was designed using Unified Modeling Language diagrams and implemented using a software development methodology, specifically the phased model, through the following phases: planning and selection, requirements analysis, system design, implementation, system testing, and operations and maintenance [2] [3]. The purpose of the research is to design and implement a mobile application system that will help the Catholic University of Rwanda to disseminate results, facilitating students' access to their results using their mobile phones.
Planning and Selection
This is the first stage of the software development methodology, which focuses on the feasibility study [4].
Feasibility Study
The feasibility study covers the following aspects: economic, financial, operational, technical, legal, and political.
Economic Feasibility
The design and implementation of the Mobile Application for Results Dissemination System brings economic benefits due to the improved quality of services.
Financial Feasibility
The Catholic University of Rwanda has contributed to the design and implementation of this application.
Operational Feasibility
By observing and evaluating the tickets spent by students when they come to pick up their transcripts, it can be seen that this system resolves that issue.
Technical Feasibility
The system has achieved its goals and was implemented successfully through the knowledge we have in information technology.
Legal and Political
The Mobile Application for Results Dissemination has been implemented based on the ICT policy and on ethical and gender considerations.
System Requirements Analysis
The system requirements analysis is the second phase of the software development methodology [4] [5]. Requirements analysis is very important because it describes the needs of the system and shows the relationships between entities. The requirements are categorized into functional and technical requirements.
Functional Requirements Analysis
Mobile Application for Results Dissemination System has three sets of functions.
The first set comprises the Mobile Application for Results Dissemination functions that allow students to log in, access their results using a mobile phone, submit a claim when there is a mistake, and log out, and that also allow staff to log in, upload marks, and log out.
The second set of functions is system administration, which allows the system administrator to manage, update, and maintain the system database, create users, and grant their privileges for accessing the system.
Operation Flow Chart
The figure describes the second set of functions. It shows in detail what the administrator does on the system.
Technical Requirements Analysis
The network architecture is client/server based, with Windows clients running Windows 8, a web server, and a web browser. The development environment uses the Java programming language, NetBeans as the development platform and tool, and SQL Server as the database management system.
System Design
The system is divided into three major components. The first is the registration program, which presents interfaces to the user; the second is the student program, which also presents interfaces to the user; and the third is the application server, which processes the user's request before sending a response. This application server handles all requests made by the users [5].
Use Case Diagram
The use case diagram describes the users of the system and their roles in the system.
In this application there are three users: Admin, Staff, and Student. The use case technique is used to capture a system's behavioral requirements by detailing scenario-driven threads through the functional requirements. Below is the use case diagram that shows the functionalities of the system.
Class Diagram
In this system, the class diagram illustrates entities and the relationships between those entities [6] [7].
Conclusion
This research is based on the design and implementation of a mobile application for results dissemination. The system aims to give students access to their marks through mobile phones.
[Use case diagram labels: Setting Student Marks, Generate Course Report, Create System Users, Staff.]
The class diagram comprises entities, relationships, and attributes, and the entity-relationship diagram is shown in the physical data model of the access global diagram below.
The system has been successfully implemented, delivered services in a timely manner, and kept data efficiently. | 1,115.8 | 2017-08-11T00:00:00.000 | [
"Computer Science"
] |
Study on Using Fly Ash for Fly Ash - Soil Piles in Reinforcing Soft Ground
Currently, the construction technology on soft ground reinforcement is very developed, including the technology of constructing soil-cement piles for soft soil reinforcement, which is technically and economically effective and widely used. Another technology is using fly ash waste from thermal power plants to make fly ash-soil piles for soft ground reinforcement, which not only takes advantage of local materials but also reduces environmental pollution from operating thermal power plants. This paper introduces some research results on fly ash content and pile diameter when reinforcing soft ground. The authors modeled the calculation diagram of the soft ground reinforcement under the roadbed with hypothetical pile diameters D = 40 cm, 50 cm, and 60 cm corresponding to fly ash contents of 35%, 40%, and 45%, and a pile length L = 8 m to treat all soft ground layers. The results show that with a pile length L = 8 m and a pile diameter D = 60 cm corresponding to a fly ash content of 45%, the stability coefficient K = 1.992 is larger than the allowable stability coefficient [K] = 1.4. In this case, the largest settlement strain S = 0.17 m meets the permissible settlement strain of the ground [S] = 0.3 m. These results provide a basis for design, construction, and operation management units to propose solutions to maximize the working ability of the materials and enhance the stability of the roadbed during exploitation.
Introduction
Currently, the construction technology of soft ground reinforcement is very developed, including the technology of constructing soil-cement piles for soft soil reinforcement, which is technically and economically effective and widely used [1,4,26]. Fly ash waste (blast furnace ash) from thermal power plants can be used to produce fly ash-soil piles instead of soil-cement piles to reinforce soft ground. Therefore, studying the fly ash content and fly ash-soil pile diameter for reinforcement is necessary and practical [3,5,27]. According to the World of Coal Ash Conference (WOCA), 42.1% of fly ash is reused in the US, 90.9% in Europe, 96.4% in Japan, 67.1% in China, 13.8% in India, and 66.5% in other Asian countries [14].
In Europe, fly ash from thermal power industry is used as an additive in concrete mixes (29.5%), raw materials for Portland cement production (26.9%), and materials for roads construction and leveling (19%), etc. [14].
In 2013, Japan recorded that 12.5 million tons of ash and slag were discharged, of which the majority (65.6%) was used for cement production, 5.6% as leveling materials, and 4% as reinforcement materials. China generated 440 million tons of ash, of which about 67% was reused. In India, the amount of fly ash discharged was 165 million tons, of which about 62.5% was reused. India used 41.2% of the fly ash as raw material for cement production, 11.83% for leveling, and 6% in road construction, and the rest was used as an additive in concrete, unburnt brick, etc. [17]. Davidovits (1988) [6] found that an alkali-activated fly ash mixture could harden within a few hours at normal temperature (about 30°C), within several minutes when heated to 85°C, and within seconds if subjected to microwaves. The compressive strength of this material increases over time up to about 28 days, similar to that of Portland cement. Compressive strength can reach 20 MPa after four hours at 20°C, while the compressive strength after 28 days is in the range of 70-100 MPa.
Davidovits [7,8] published a study on the oxide ratios that make up fly ash cement material required to obtain products of high strength and durability: the M2O/SiO2 ratio is 0.2-0.48; the SiO2/Al2O3 ratio is 3.3-4.5; the water/M2O ratio is 10-25; and the M2O/Al2O3 ratio is 0.8-1.6 (where M is the alkali metal).
Balo [2] conducted experimental studies of the mechanical properties and thermal conductivity of materials created by mixing waste fly ash, clay, epoxide palm oil, and renewable materials. These studies show that the higher the ratio of both fly ash and epoxide palm oil, the lower the coefficient of thermal conductivity, the weight, and the tensile-compressive strength of the experimental samples.
Lin Wuu [11,12] conducted a 1:1 scale experiment and numerical analysis on a high-speed rail substrate to evaluate the effectiveness of the piles underneath this ground, which are made of gravel, fly ash, and cement. The research results show that this pile works effectively and is suitable for completely decomposed granite soil. Na Li et al. [13] studied the consolidation properties of coastal cement soil when an appropriate amount of fly ash is added. The analysis results show that when the cement content is 20%, the fly ash content is 0%, 5%, 10%, 20%, or 30%, and the water content is 80%, the compressive capacity of this material increases significantly.
Haibin Wei [16,17] studied a solution for treating fly ash and oil shale ash by combining them with mud clay. When this solution is applied, the elastic modulus and stress state of the soil are significantly affected, leading to destruction of the initial structure and increased soil porosity.
Wang [15] conducted several studies on the swelling and stress properties of 16 types of mixtures made of cement, fly ash and lime. These studies show that the type of adhesive and the adhesive content have a great influence on the strain modulus, compressive strength, destructive strain and destructive mode, shape and position of stress curves.
Kuan [10] and Xiao [18] studied the application of fly ash as an additive for soil improvement in construction works. When soft coastal clay is mixed with fly ash, its strength is significantly improved, and the plasticity index and the compression index decrease by 69% and 23%, respectively. This paper analyzes the performance of fly ash-soil piles at different diameters and different fly ash contents in order to propose the most reasonable solution for soft ground treatment and reinforcement with fly ash-soil piles.
Physical-Mechanical criteria of fly ash
To determine the physical and mechanical properties of fly ash, 2.5 tons of fly ash samples were taken. Samples were taken at random, discontinuously, from the silo storage of the Duyen Hai thermal power plant. The authors then selected 3 random sample groups for testing the mechanical, physical, and chemical parameters of the fly ash.
The experimental results were analyzed at Quatest 2 laboratory by the method of infrared spectroscopy analysis. Two control samples were carried out at the laboratory of Road Technical Centre No. 3 by chemical and calcination methods. The average results are shown in Table 1.
According to TCVN 10302:2014, base ash is ash with a CaO content greater than 10% (symbol: C). The technical criteria of the soil layers are determined according to the report on the results of the engineering geological survey of the new urban area located to the east of Mau Than street, Tra Vinh city, Vietnam.
From the current ground surface to the survey depth (HK1: 20 m, HK2: 40 m), there are 06 soil layers. The distribution depth of each layer in the boreholes is shown in Table 2. Experimental samples were made at fly ash contents of 35%, 40%, and 45% to test the following parameters: compressive strength, splitting tensile strength, modulus from the mono-axial confined compression test, shear resistance, and elastic modulus (Fig. 1 and Fig. 2).
Results
Test results of the soil-fly ash mixture with reinforcement contents of 35%, 40%, and 45% fly ash are synthesized in Tables 3, 4, and 5, respectively. Based on these test results, we can graph the relationship between the fly ash content of the reinforcement and the growth of sample strength over time, as shown in Fig. 3.
Calculation Cases
The reasonable distance between piles according to TCVN 10304:2014 is (1.5÷6)D, normally (1÷3)D. The authors chose the distance between the piles as follows: 3.75D for d400 piles, 3D for d500 piles, and 2.5D for d600 piles. The purpose of the study is to find the relationship between the reinforcement ratio and the stability and settlement of the construction; therefore, only changes resulting from the pile diameter are recorded. Studies by Khoi and Linh (2013) [9] and by Zygmunt Meyer and Piotr Cichocki (2020) [28] show that the diameter has a significant impact on the load capacity of the pile. The fully treated pile length is 8 m, calculated based on the calculated settlement area and the fully treated depth of the soft ground.
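As a quick numerical check of the layouts above, the following short Python sketch converts each chosen spacing multiple into a centre-to-centre distance and an area replacement ratio; the square-grid assumption behind the replacement-ratio formula is ours and is not stated in the paper.

```python
# Minimal numerical check of the pile layouts chosen above. The centre-to-centre
# spacing follows directly from the stated multiples of D; the area replacement
# ratio a_s = pi D^2 / (4 s^2) assumes a square grid of piles (our assumption).
import math

cases = {            # diameter D (m): spacing expressed as a multiple of D
    0.40: 3.75,      # d400 piles
    0.50: 3.00,      # d500 piles
    0.60: 2.50,      # d600 piles
}

for D, k in cases.items():
    s = k * D                                # centre-to-centre spacing (m)
    a_s = math.pi * D ** 2 / (4.0 * s ** 2)  # area replacement ratio (square grid)
    print(f"D = {D:.2f} m: spacing = {s:.2f} m, replacement ratio = {a_s:.3f}")
```

For all three cases the spacing evaluates to 1.5 m, consistent with the 1.5 m distance between piles quoted in the results below.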
Results
A. In the case of D600 piles with 35% fly ash, the distance between two piles is 1.5 m; when reinforcing with D600 piles at 35% fly ash, the results obtained are shown in Figs. 4-6. B. In the case of D600 piles with 40% fly ash, the distance between two piles is 1.5 m; when reinforcing with D600 piles at 40% fly ash, the results obtained are shown in Figs. 7-9. C. In the case of D600 piles with 45% fly ash, the distance between two piles is 1.5 m; when reinforcing with D600 piles at 45% fly ash, the results obtained are shown in Figs. 10-12. The diagram of the relationship between the stability coefficient, the pile diameter, and the reinforced fly ash content is shown in Fig. 13. The chart of the relationship between the stability coefficient K, the pile diameter, and the fly ash content shows that the larger the pile diameter and the higher the fly ash content, the higher the stability coefficient K.
In terms of technical aspects, the authors propose choosing piles with a diameter D = 60 cm and a length L = 8 m with a fly ash content of 45% for the most optimal coefficient K. Comparing with the research results of [9] on the stability coefficient K, this research shows that when using a fly ash-soil pile with the same diameter (D600) and a fly ash content of 45%, the stability coefficient K is higher than that of a soil-cement pile (Fig. 14).
B. Roadbed settlement
Calculation results for the roadbed settlement are synthesized in Table 7. The diagram of the relationship between settlement, pile diameter, and reinforced fly ash content is illustrated in Fig. 15. The chart of the relationship between the settlement S, the pile diameter, and the fly ash content shows that the larger the pile diameter and the higher the fly ash content, the lower the settlement.
For D400 and D500 piles reinforced with a fly ash content of 35%, the settlement does not meet the permissible limit settlement; when the fly ash content increases to 40% or 45%, the construction settlement is smaller than the permissible limit settlement. These results prove that the settlement of the structure decreases gradually as the pile diameter and fly ash content increase.
In terms of technical aspects, the authors propose choosing piles with a diameter D = 60 cm and a length L = 8 m with a fly ash content of 45%, resulting in the most optimal settlement S.
Conclusions
The mechanical and physical properties and material characteristics used in this study are taken directly from the experiments.
When the project is designed without reinforcement by fly ash-soil piles, the displacement at the bottom of the roadbed is too large; thus, it is clear that the roadbed needs treatment. The authors modeled the calculation diagram of the soft ground reinforcement under the Mau Than roadbed with assumed pile diameters D = 40 cm, 50 cm, and 60 cm corresponding to fly ash contents of 35%, 40%, and 45%, and a pile length L = 8 m to treat all the soft soil layers.
Thereby, it is possible to analyze the performance of fly ash-soil piles at different diameters and fly ash contents. With a pile length L = 8 m and a pile diameter D = 60 cm corresponding to a fly ash content of 45%, the stability coefficient is K = 1.992, which is greater than the allowable stability coefficient [K] = 1.4. The largest settlement strain in this case, S = 0.17 m, satisfies the allowable settlement deformation of the ground [S] = 0.3 m.
Through the conversion of stress, displacement, and deformation values from the reduced model to the actual model, design and construction consultants and operation managers can use this model as a basis to propose solutions that ensure the stability of the construction during its operation.
"Engineering"
] |
Preparation of Nanopaper for Colorimetric Food Spoilage Indication
In this study, we are reporting the fabrication of a nanocellulose (NFC) paper-based food indicator for chicken breast spoilage detection by both visual color change observation and smartphone image analysis. The indicator consists of a nanocellulose paper (nanopaper) substrate and a pH-responsive dye, bromocresol green (BCG), that adsorbs on the nanopaper. The nanopaper is prepared through vacuum filtration and high-pressure compression. The nanopaper exhibits good optical transparency and strong mechanical strength. The color change from yellow to blue in the nanopaper indicator corresponding to an increase in the solution pH and chicken breast meat storage data were observed and analyzed, respectively. Further, we were able to use color differences determined by the RGB values from smartphone images to analyze the results, which indicates a simple, sensitive, and readily deployable approach toward the development of future smartphone-based food spoilage tests.
Introduction
Food packaging is crucial for the vigilant maintenance of food quality and safety during the storage, transportation, and sale of food. Due to the rising demand for food and industrial packaging materials, the world packaging industry is among the largest and fastest-growing commercial sectors. The global food packaging market size was USD 358.3 billion in 2022. It is estimated that the global food packaging market will reach around USD 478.18 billion in 2028, despite the COVID impact in the past three years, at a compound yearly growth rate of 5.1%, according to the report of "Fortune Business Insights" [1]. Despite innovative strides over the past decades, petroleum-based plastics are still the dominant food packaging materials, such as polyethylene terephthalate, polyethylene, polyvinyl chloride, polypropylene, and polystyrene [2]. Both food production and waste streams regarding these industrial plastic polymers leverage considerable environmental burdens, primarily due to the poor degradability of plastic polymers [3]. There has been an increasing interest in the design and fabrication of food packaging materials based on sustainable bio-based polymers, composed of polysaccharides, edible proteins, and natural polymers [4].
On the other hand, the design of traditional food packaging primarily focuses on providing protection of the food product from mechanical damage, light irradiation, undesirable chemical reactions due to exposure to gases (e.g., oxygen), and colonization by pathogenic microorganisms that can produce microbial toxins [5]. However, there is increasing concern about delivering fresh and safe foods to consumers. This has led to the emergence of intelligent food packaging that utilizes smart indicators to monitor food quality across the logistical chain. Intelligent food packaging is the design of monitoring systems that can detect and inform on the condition of food and/or the surrounding environment during transportation and storage of food, providing real-time feedback to producers, retailers, and end consumers [6]. Various technological approaches have been used for intelligent food packaging, such as radio frequency identification (RFID) tags [7], near-field communication tags [8,9], as well as a suite of chemical or biological approaches including integrity indicators, freshness indicators, and time-temperature indicators (TTI) [10,11]. However, one critical limit factor to new food packaging designs is the cost, i.e., the packaging price should be less than 10% of the overall product [12]. Therefore, low-cost freshness indicators that can monitor food spoilage by detecting metabolites from microorganisms through visual color change are particularly attractive.
Freshness indicators rely on pH-sensitive dyes that can change color by reacting with metabolite gases, such as total volatile basic nitrogen (TVBN, e.g., ammonia, dimethylamine, and trimethylamine), CO 2 , and H 2 S [13]. Often, the monitoring of TVBN is critical for meat spoilage. The generation of TVBN is typically caused by the degradation of proteins in meat by microbial metabolic pathways. TVBN has hence been used as an important biochemical marker to monitor chicken meat spoilage [14]. Several groups reported the fabrication of package materials by incorporating one or more pH-responsive dyes, such as bromocresol green (BCG), bromothymol blue, and methyl red [15]. The color change of the pH dyes requires the dissolution of TVBN in water. Therefore, a moderately humid chamber is required inside the food package to maintain the food indicator detection sensitivity [16]. To maintain moisture, highly hydrophilic packaging materials are often proposed for use in indicator preparation to improve the color response of pH-sensitive dye during food spoilage [17]. However, most of the current plastic films such as PE and PET are considerably hydrophobic. Thus, investigating food packaging materials utilizing sustainable biopolymers composed of hydrophilic surfaces may be an overall solution to these problems.
In this study, we are reporting the development of a food freshness indicator that monitors chicken breast meat spoilage using a nanocellulose paper (nanopaper). Nanocellulose is a derivative of cellulose that originates from wood pulp treatments or bacterial synthesis. Nanofibrillated cellulose (NFC) is a material with dimensions of several nm in diameter but 1 to 2 µm in length [17]. The nano-size property and surface chemical functionality make NFC a hydrogel with good transparency. In our previous work, we reported the use of NFC to prepare a film-based NFC paper (nanopaper) [17,18]. Due to its transparency and superior mechanical properties, nanopaper has been demonstrated to be an excellent platform for biosensing applications, which is particularly suitable for colorimetric assay [17]. On the other side, NFC has also been reported to be an excellent food package material due to its good gas and water vapor barrier, hydrophilicity, and excellent thermal-mechanical stability [19]. In this work, we reported a simple way to prepare a nanopaper food indicator by coating BCG dye on nanopaper. We demonstrated that the nanopaper food indicator could be used for monitoring chicken breast spoilage by both naked-eye-based observation and cellphone image-based analysis.
Chemicals and Materials
(2,2,6,6-Tetramethylpiperidin-1-yl)oxidanyl (TEMPO)-oxidized NFC (slurry, 0.8 wt% solid) was purchased from the Process Development Center at the University of Maine. Bromocresol green was purchased from Sigma-Aldrich (Milwaukee, WI, USA). Teflon® film made from Teflon® polytetrafluoroethylene (PTFE) discs and polyethylene terephthalate (PET) film were ordered from McMaster-Carr (Elmhurst, IL, USA). Unless otherwise specified, all the other chemicals were purchased from Sigma-Aldrich. Solutions with different pHs were prepared by adjusting the pH of distilled water using a NaOH solution (1 N) or an HCl solution (1 M).
Nanopaper and Nanopaper Food Indicator Preparation
In a typical experiment, a slurry of TEMPO-cellulose nanofibrils (NFC) was dispersed in distilled water at a nanofiber content of 0.1 wt%, and the suspension was stirred extensively for 2 h at 800 rpm to disperse the NFC hydrogel. Fifty grams of the above suspension were then subjected to vacuum filtration using a hydrophilic PVDF filter membrane (EMD Millipore Corporation, Burlington, MA, USA; pore size: 0.45 µm) mounted on a Büchner funnel. After filtration, a wet transparent NFC hydrogel was formed on top of the filter membrane. The gel "cake" (6 cm in diameter) was further baked in an oven operating at 60 °C for 30 min to remove the remaining surface water. To prevent curling of the gel cake during the baking process, the gel on the filter paper was fixed to the Teflon plate using tape. The gel cake with the filter was carefully sandwiched between PET film and Whatman® No. 1 filter paper and then two Teflon boards. Next, the package was placed and dried under pressure (2.6 MPa) at 70 °C or room temperature for 10 min to form a nanopaper. The nascent nanopaper was then peeled off from the filter membrane and kept inside a thick book to prevent surface curling until further use.
Prior to the preparation of the nanopaper indicators, the nanopaper was punched into small circle discs (6 mm in diameter). Then the discs were immersed in a pH-sensitive dye bromocresol green ethanol solution (1%) for 15 min or 30 min. Afterwards, the dyed nanopaper indicator was air-dried for 1 h prior to use.
Food Storage Test
A package of fresh, boneless chicken breast slices of normal pH (5.9-6.0) was purchased from a local Walmart meat department (Erie, PA, USA). The expiry date on the meat package indicated a two-week window for storage at 4 °C. The chicken breast was placed in a polypropylene (PP) tray. The as-prepared nanopaper food indicator discs were placed in a plastic weigh cup. The cup with the discs was placed in the PP tray together with the chicken breast, and the whole package was wrapped and sealed using plastic Glad® Cling Wrap (Clorox Canada, Brampton, ON, Canada). Some space was left between the cup and the plastic wrap to ensure that air inside the package could pass over the food indicator discs. The sealed chicken breast package was stored at room temperature (20 °C) for a week. Color changes in the nanopaper indicators were monitored daily using a smartphone (iPhone XR; Apple Inc., Cupertino, CA, USA).
Characterization and Data Analysis
The morphology of the nanopaper was characterized with a scanning electron microscope (SEM, Hitachi S-3000N, Hitachi Ltd., Tokyo, Japan, operating under 12 kV) after being sputter-coated with gold.
Transmittance of the nanopaper was obtained using a UV-vis spectrometer (SpectraMax M5; Molecular Devices, Sunnyvale, CA, USA). A tensile test was conducted to determine the mechanical strength of the nanopaper with a universal testing machine (MTEST-Quattro; ADMET, Norwood, MA, USA; load cell: 1 kN). A rectangular strip (5 mm × 50 mm) was prepared for the tensile test. The strain rate was fixed at 20% min−1. Tensile strain (ε) was defined as the length change (∆l) divided by the original length (l0) of the sample.
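The stress-strain conversion behind the tensile results can be illustrated with a short Python sketch, assuming the strip geometry given above (5 mm wide, 50 mm gauge length, roughly 85 µm thick) and a linear-fit region below 0.5% strain; these post-processing choices are our assumptions, not the authors' procedure.

```python
# Minimal sketch (assumed post-processing, not the authors' procedure) converting
# raw tensile data into engineering stress/strain and a Young's modulus estimate.
import numpy as np

def stress_strain(force_N, elongation_mm, width_mm=5.0, thickness_um=85.0, gauge_mm=50.0):
    """Return engineering stress (MPa) and strain (dimensionless)."""
    area_m2 = (width_mm * 1e-3) * (thickness_um * 1e-6)    # cross-sectional area
    stress_MPa = np.asarray(force_N, float) / area_m2 / 1e6
    strain = np.asarray(elongation_mm, float) / gauge_mm    # strain = dl / l0
    return stress_MPa, strain

def youngs_modulus_GPa(stress_MPa, strain, strain_limit=0.005):
    """Slope of the initial linear region (strain below strain_limit), in GPa."""
    stress_MPa = np.asarray(stress_MPa, float)
    strain = np.asarray(strain, float)
    mask = strain <= strain_limit
    slope_MPa = np.polyfit(strain[mask], stress_MPa[mask], 1)[0]
    return slope_MPa / 1e3
```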
Digital color images of the indicators incubated in the solutions of different pHs or exposed to the packaged chicken breasts were captured using a smartphone (iPhone XR; Apple Inc., Cupertino, CA, USA). The images were analyzed using ImageJ software Version 1.53t (National Institutes of Health, Bethesda, MD, USA) and the average RGB pixel intensities were collected. All RGB values are the mean values from three captured images of the same indicator. The RGB values of the nanopaper food indicators were normalized against those of white printing paper using the following equation to reduce potential errors caused by lighting, position, and camera angle [20], where x is the number of storage days; R′x, G′x, and B′x are the normalized values; Rx, Gx, and Bx are the original values from the images; and Rwb, Gwb, and Bwb represent the white background values. Lastly, the color difference in the nanopaper indicator during the chicken breast storage period was calculated through the following formula [21], where R′0, G′0, and B′0 are the normalized RGB values from day 0. All the above data were further analyzed and plotted with Microsoft Excel version 16.69.1.
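A minimal Python sketch of this image analysis is given below, assuming a simple white-reference normalization and a Euclidean RGB distance for the colour difference; the exact forms of the equations used in the paper may differ, and the numerical values in the example are purely illustrative.

```python
# Minimal sketch of the smartphone-image analysis described above, under the
# assumptions stated in the lead-in (not necessarily the paper's exact equations).
import numpy as np

def normalize_rgb(rgb, rgb_white):
    """Normalize mean indicator RGB values against the white printing-paper background."""
    return 255.0 * np.asarray(rgb, float) / np.asarray(rgb_white, float)

def color_difference(rgb_day, rgb_day0, rgb_white):
    """Colour change of the indicator relative to day 0 (both white-normalized)."""
    d = normalize_rgb(rgb_day, rgb_white) - normalize_rgb(rgb_day0, rgb_white)
    return float(np.sqrt(np.sum(d ** 2)))

# Example with made-up values: mean RGB of the indicator on day 4 versus day 0.
print(color_difference(rgb_day=[90, 110, 180], rgb_day0=[200, 190, 60], rgb_white=[250, 250, 248]))
```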
Nanopaper Preparation
Figure 1 shows a scheme of the preparation of nanocellulose paper (nanopaper) through a vacuum filtration procedure followed by a heating compression step, with slight modification from our previous report [18]. TEMPO-oxidized nanocellulose hydrogel was used to improve the transparency of the nanopaper [22] and to provide a high amount of carboxylate groups (about 1.5 mmol/g carboxylate content) to bind with ammonium gas released from food during the spoilage process. Figure 2A shows that the nascent nanopaper is a transparent plastic-like film. As transparency is important for the later colorimetric-based food indicator, the temperature for heat compression was kept low to prevent decarbonization of anhydroglucuronate units. Following the discussion below on optimizing the indicator preparation, we performed the compression at room temperature. Figure 2B shows that the transmittance of the as-made nanopaper exceeds 70% from 400 nm to 800 nm, slightly lower than in our previous study with compression at 85 °C [17].
Nanopaper Characterization
We further characterized the nanopaper using a scanning electron microscope (SEM). As shown in Figure 3A, the nanopaper exhibited a very flat surface. It should be noted that the thickness can be further adjusted by changing the amount of nanocellulose used in the filtration step; however, it may take a long time to filter the NFC hydrogel if the nanocellulose content is increased. In our study, we prepared nanopaper with a thickness of around 80 to 90 µm (Figure 2B), which is comparable to the thickness of commercial food packaging film. Previous studies indicated that nanopaper has strong mechanical properties [23]. Figure 4 shows the mechanical testing results for the nanopaper synthesized in this study. The mechanical strength was measured at 200 MPa and Young's modulus was determined to be 7.69 GPa, despite the low-temperature pressing we used in the study. The mechanical result is comparable to and even stronger than some common plastic-based food packaging films such as PET films [24]. Owing to its transparency and excellent mechanical properties, it is expected that the nanopaper formulated in this study would be an ideal component for food packaging. Nonetheless, in this work, we focused primarily on the utility of our nanopaper as a food indicator matrix. As the nanopaper material is useful for both food packaging and food safety monitoring, it is worth noting that the application of spoilage testing dyes (i.e., using ink-jet printing technology) in strategic spots on the inner side of a nanopaper-based packaging film would be an ideal solution.
Nanopaper Food Indicator Preparation and Optimization
Prior to preparation of the nanopaper food indicator, the nanopaper was punched into small discs (about 6 mm in diameter) using a paper puncher. The nanopaper food indicator was then prepared by immersing the nanopaper discs into an ethanol solution of bromocresol green (BCG) at a concentration of 1%, followed by drying at room temperature. BCG is a pH-sensitive dye that belongs to the triphenylmethane family. It changes color from red/yellow at pH 3.8 to blue at pH 5.4. It has been used to titrate growth mediums to monitor the release of ammonium gas during the growth of microorganisms [25].
We initially used nanopaper prepared through compression at 70 °C. The dyed nanopaper indicator disc exhibits a yellow color. The predominance of yellow rather than the red-orange color of pure BCG could be due to the surface carboxylate groups of the cellulose fibers. Figure 5A shows a yellow-colored sheet at pH 3.8. However, a ring of leaked dye was also observed when adding a drop of solution with pH 10.1 (Figure 5A(ii)). This leakage indicates that the adsorption of the dye on the nanopaper was not very strong, possibly because pressing at a higher temperature compacts the nanopaper so tightly that only the surface adsorbs the dye. To increase the dye content in the nanopaper, we tested oven drying to remove some of the water from the NFC filtration "cake" prior to pressing, and then pressed the hydrogel "cake" at room temperature. Under these conditions the nanopaper was expected to retain more water. From Figure 5A(ii), we can see that the intensity of the yellow color increased and no obvious blue dye leaked at pH 10.1. However, the blue color at pH 10.1 appears very faint. To improve this, we extended the immersion time of the nanopaper in the dye ethanol solution. Extended incubation was expected to help the ethanol penetrate the nanopaper surface with dye and drive more water molecules out of the nanofibril network; as a result, more dye could be trapped inside the nanofibril network, reducing potential dye leakage. Figure 5A(iii) shows that after 30 min immersion in the dye solution, the nanopaper indicator did not exhibit dye leakage when exposed to the pH solutions.
We further evaluated the dye under different pH ranges. As shown in Figure 5B, a visual color difference of the nanopaper indicator at various pH values was observed, and the corresponding color-difference-based RGB values are shown in Figure 5C. As we can see, the color changes gradually from yellow to blue with a concomitant increase in pH. It should be noted that the surface of the nanopaper may have gained some impurities during the nanopaper disc manipulation which may affect the pH at local spots. It would be more accurate to observe pH-dependence-based RGB values. In particular, computational techniques now allow a combination of a smartphone digital camera as an acquisition tool and digital-image processing apps to develop a low-cost and more readily available method to measure color.
Evaluation of Nanopaper Food Indicator Response to Chicken Breast Freshness
To evaluate the nanopaper food indicator response to food freshness, we used fresh chicken breast in our test, as chicken accounts for one third of the world's meat consumption [26]. Chicken breast is highly perishable, and freshness can decrease dramatically over time even when stored in a fridge (at 4 °C). In our study, we kept the food storage temperature at 20 °C for an accelerated spoilage experiment. Figure 6A shows the nanopaper indicator discs that were co-packaged together with the chicken breast without direct contact, and the whole food package was wrapped with commercial Glad® Cling Wrap. The color of the nanopaper indicator was initially yellow (i.e., day 0). The yellow color merely turned slightly deeper within the first three days. However, on the fourth day, the color turned blue, suggesting a significantly high amount of TVBN compounds produced by the growth of microorganisms. It should be noted that previous studies showed that chicken breast may spoil within seven days at 4 °C, four days at 10 °C, and one day at 20 °C, respectively [27,28]. From the visual color observation, we were not able to see a significant difference in the first three days. This could be due to the good preservation conditions of the fresh meat we received from the grocery store. The package was accidentally moved during the incubation time, which caused the disc positions from day 1 to 3 to be different from the rest; however, no leaking was observed. We also confirm that the change in color occurred only for the food indicators in the meat package. Figure S1 shows that there is no significant color change (nor change in RGB values) in the discs kept for a week without meat.
From the color difference obtained using RGB values, we could see that after day 1, there was a significant RGB intensity increase compared to freshly purchased chicken breast. A large RGB increase occurred between days 3 and 4, which correlates with the visual blue color change on the nanopaper indicator. It is difficult to tell whether the chicken breast meat spoiled within the first three days in our study, as a threshold is lacking at this point; we expect that a systematic study will be required to identify this in future work. However, the RGB intensity change may indicate that the digital method would be a more sensitive and applicable mode to monitor food spoilage. The current RGB value evaluation is based on manual analysis through ImageJ software. We expect that more theoretical and advanced studies should be undertaken to analyze packaged meat quality change and change in nanopaper indicator RGB intensity, as well as the development of cellphone apps for easy and readily available food spoilage testing.
Discussion
In this study, we developed a nanopaper-based food indicator for chicken breast spoilage detection. Spoilage in the meat is due to the degradation of nitrogen-containing compounds, such as proteins, by microbes, causing the accumulation of volatile amines that are termed TVBNs. The detection mechanism thus relies on the determination of TVBN gas using a pH-sensitive dye, BCG. Characterization of microbial species could identify the sources of spoilage and help with food indicator validation, although that is out of the scope of the current study. The dynamics of microbial growth and the dominant species can vary according to the availability of preferred substrate, oxygen, moisture, and the pH of the meat product [29]. For chicken breast, microbes such as Pseudomonas spp., Shewanella putrefaciens, and yeast are commonly present in low quantities during food production and packaging [30][31][32]. Under limited oxygen environments, these microbes change their preferential energy source toward amino acids [33]. Lu et al. showed that in beef, Pseudomonas spp., Photobacterium spp., and Vibrionaceae spp. contributed to the increase in TVBN levels, resulting in the production of ammonia (NH3) and methylamines (MA) [34]. In chicken meat, Lee et al. found that Pseudomonas spp. continuously increased with storage time, which closely correlates with the amount of TVBN [28]. In a recent study, Saenz-Garcia et al. showed that Pseudomonas spp. made the highest contribution to TVBN formation, compared with Brochothrix spp., Hafnia spp., and Acinetobacter spp., during storage at 4 °C [35]. Therefore, we would expect Pseudomonas spp. to dominate the contribution to TVBN in our study. In a future study, microbial analysis will be performed to identify the microbial species.
We utilized nanocellulose paper as the substrate to adsorb pH-sensitive BCG dye for a food spoilage indicator. Nanocellulose is a derivative of natural cellulose materials. Cellulose and its derivative forms have shown remarkable properties, including wide availability, inexpensiveness, and degradability. They are also capable of effectively transporting and storing various chemicals. Previous studies have reported the development of filterpaper-based colorimetric food indicators for fish freshness via visual observation [36,37]. In those food indicators, porous filter papers are often used as color-changing layers to absorb dye, while other binding polymers are used to laminate or sandwich to prevent softening during the soaking of moisture inside the filter paper, which subsequently induces color-change distortions [28]. In this study, we used nanocellulose paper (nanopaper). Nanopaper exhibits good mechanical stability (also shown in Figure 4), reducing the risk of surface distortions. Nanopaper also displays a flat 2-D-based surface and has excellent transmittance in visible light, as shown in Figure 2. Our previous study showed that a flat surface provided better signal homogeneity in Raman spectrometry analysis compared to that of porous filter paper, and hence enhanced assay reproducibility [18]. Owing to its strong mechanical properties and being a good gas barrier, as well as being lightweight, nanopaper or nanocellulose film has been reported as an excellent food packaging material [38]. Nanopaper is adapted to printing technology, and thus it is possible that we will be able to print pH-sensitive dye-based (e.g., BCG) patterns on nanopaper. Transparency of nanopaper could enable the visualization of color change in real time.
The cost to produce nanopaper is currently much higher than that of cellulose paper (which is about USD 500 to USD 1500 ton −1 ), which is because of energy-and time-consuming nanocellulose pulp preparation procedures such as chemical treatment of cellulose (USD 2700 ton −1 for TEMPO oxidation). However, it is expected that it is possible to reduce the cost by recycling expensive catalyst TEMPO from spent liquid [39]. In our experiment, we used vacuum filtration for small-scale nanopaper preparation, which is also an expensive process. However, large-scale production of nanopaper using a roll-to-roll process has been developed by the VTT Technical Research Centre in Finland, with more efficiency and lower cost [40]. By using a roll-to-roll process, meter-long nanocellulose crystal film was also reported recently [41]. Therefore, nanopaper will potentially become more affordable with technological development and large-scale industrial production.
Conclusions
In this work, a nanocellulose-based nanopaper food indicator was developed for real-time colorimetric monitoring of chicken breast spoilage, analyzed by both bare-eye observation and using computational RGB analysis from images captured with a smartphone. The nanopaper food indicator consists of a nanocellulose film prepared by vacuum filtration and high-pressure compression with a pH-sensitive dye adsorbed onto the nanocellulose surface. The nanopaper food indicator displayed an optical color change from yellow to blue when the packed chicken was stored for three days at 20 • C. The change of color indicates the growth of microorganisms and release of volatile basic metabolic components. Overall, the nanopaper indicator displayed a distinct color change according to the freshness of food, suggesting that the nanopaper could be a potential platform for intelligent food packaging applications.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/polym15143098/s1, Figure S1: Comparisons of RGB values between discs kept at room temperature (without meat) between Day 0 and Day 7. Institutional Review Board Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding authors. The data are not publicly available due to terms and policy. | 7,921.2 | 2023-07-01T00:00:00.000 | [
"Physics"
] |
High-Efficiency FCME-Based Noise Power Estimation for Long-Term and Wide-Band Spectrum Measurements
Statistics in terms of spectrum occupancy are useful for efficient and smart dynamic spectrum sharing, and the statistics can be obtained by long-term and wide-band spectrum measurements. In this paper, we investigate noise floor (NF) estimation for energy detection (ED)-based long-term and wide-band spectrum measurements since the NF estimation heavily affects the ED performance and eventually the accuracy of the statistics in terms of spectrum occupancy. Specifically, we address the following NF estimation problems simultaneously for the first time in the spectrum measurement field: (1) slow time-varying property of the NF, (2) frequency dependency of the NF, (3) the NF estimation in the presence of the signal, and (4) the computational cost of the NF estimation. Firstly, we apply Forward consecutive mean excision (FCME) algorithm-based NF estimation to deal with the above three problems ((1), (2) and (3)) successfully. Second, we propose and apply an NF level change detection on top of the FCME algorithm-based NF estimation to deal with the fourth problem. The proposed NF level change detection exploits the slow time-varying property of the NF. Specifically, only if the significant NF level change is detected, the FCME algorithm-based NF estimation is performed to reduce the redundant NF estimations. In numerical evaluations, we show the efficiency and the validity of the NF level change detection for the NF estimation problems, and compare the NF estimation performance with the method without the NF level change detection.
I. INTRODUCTION
During the past decades, the demand for radio spectrum has been increasing to support bandwidth-hungry applications such as high-definition video streaming and emerging applications (e.g., Internet of Things (IoT) and device-to-device (D2D) communications), while there is little room to accommodate new wireless systems, mainly due to the fixed spectrum assignment policy. However, spectrum measurement campaigns around the world have shown that almost all of the spectrum is under-utilized in the time and/or space domains [2], [3], which means there is a large amount of unused spectrum, called white space (WS). To address this issue, several types of dynamic spectrum sharing (DSS) frameworks have been investigated, such as opportunistic spectrum access (OSA) [4], licensed shared access (LSA) in Europe [5], television white space (TVWS) [6], and the citizens broadband radio service (CBRS) based on the spectrum access system in the U.S. [7].
In typical OSA, there are primary users (PUs), which have priority regarding spectrum usage, and secondary users (SUs), which can opportunistically access the WS as long as the spectrum utilization by SUs does not cause any harmful interference to PUs. OSA includes two important techniques: spectrum sensing and wireless resource management (e.g., of bandwidth, power, etc.). Spectrum sensing is a spectrum awareness technique in terms of instantaneous spectrum occupancy, either vacant or occupied [8]. The requirements of spectrum sensing, such as accuracy, latency, and implementation cost, are substantially demanding [9]. On the other hand, wireless resource management in the context of DSS must allocate wireless resources to enhance the spectrum utilization efficiency while ensuring that SUs do not cause any harmful interference to PUs. In order to resolve the issues of spectrum sensing and provide efficient wireless resource management, an advanced dynamic spectrum access approach known as smart spectrum access (SSA) has been investigated [10], [11]. In SSA, information on spectrum usage by PUs, such as statistics of the channel occupancy rate (COR), which is the fraction of time that a channel is occupied, i.e., contains signal(s) in addition to noise [12], can be made available based on long-term, wide-band, and wide-area spectrum measurements [13]. In fact, COR statistics can enhance not only spectrum sensing performance [14], [15], but also the efficiency of spectrum management, channel selection, and MAC protocols [16], [17].
In this paper, we focus on the spectrum measurement part of realizing SSA. In general, spectrum measurement consists of acquiring data associated with spectrum usage (e.g., I/Q data, power data) and processing the obtained data, for example by spectrum analysis, spectrum usage detection, and estimation of statistical information such as the COR. There have been many spectrum measurement campaigns (see [2], [3] and references therein), and most of them use an energy detector (ED) as the spectrum usage detection technique. We also focus on ED-based spectrum measurements.
One key challenge for the ED is setting the detection threshold to achieve target detection performance, such as a target false alarm rate. Several threshold-setting criteria exist, including the m-dB criterion and the constant false alarm rate (CFAR) criterion [18]. Regardless of which criterion is adopted, accurate noise floor (NF) information is needed to set a threshold that satisfies it. The importance of accurate NF estimation has been pointed out previously [19], since inaccurate NF information leads to detection performance (probabilities of detection and false alarm) that deviates from the target, i.e., non-guaranteed detection performance. In addition, inaccurate NF information also leads to the signal-to-noise ratio (SNR) wall phenomenon in ED [20].
Through our long-term and wide-band NF measurements and existing works [21]-[23], we have identified the following challenging problems for NF estimation: (1) the slow time-varying property of the NF, (2) the frequency dependency of the NF, and (3) NF estimation in the presence of the signal. In addition, (4) the computational cost of the NF estimation needs to be considered, since many low-cost spectrum sensors are expected to be deployed for long-term, wide-band, and wide-area measurements.
Most of the previous spectrum measurements utilizing ED have assumed a static (time-invariant) NF, obtained by switching the receiver input to a matched load or by measuring in an anechoic chamber before starting the measurements [24]-[26]. Thus, these estimations do not take problem (1) into account, while they can partly address problem (2). We note that problem (3) is not an issue for these estimations since they can exclude the target wireless system signal. They also have low computational complexity since the NF estimation is done only once. In this paper, we refer to the NF estimation that estimates the NF once before starting the measurements as the static estimation method.
On the other hand, some existing works take problems (1) and (2) into account. The typical related works in the spectrum usage measurement and cognitive radio fields include [27]-[33], as far as we know. These works address problem (1) by successive NF estimations between consecutive measurements (say, at 1-second intervals). In addition, they can address problem (3) by morphological image processing operations [27], rank-order filtering (ROF) [28], [29], a Gaussian mixture model [30], auto-correlation estimation [31], or the forward consecutive mean excision (FCME) algorithm [32], [33]. Reference [27] also addresses the second problem in addition to the third.
In particular, the FCME algorithm-based NF estimation is based on signal detection theory [34]. Specifically, it classifies the measured samples into two groups (a signal group and a noise group) and calculates the arithmetic mean of the noise group, which yields the NF estimate. However, the original FCME algorithm-based NF estimation does not take the third problem directly into account. In response to this disadvantage, a two-dimensional FCME algorithm-based NF estimation was presented in a conference paper [23]. That method can take problems (1)-(3) into account at the same time and, to the best of our knowledge, it is the state-of-the-art NF estimation method. However, problem (4) has not been addressed (i.e., the computational complexity is heavy), since the method must perform the NF estimation frequently even though the actual NF varies slowly with time [21]. Namely, frequent NF estimations lead to excessive computational cost (estimation run-time).
Therefore, this paper proposes and applies an NF level change detection for efficient NF estimation based on the FCME algorithm. Specifically, the NF estimation is performed only when a significant NF level change is detected. Our aim is to reduce the run-time of the NF estimation process as much as possible while achieving an NF estimation performance comparable to the state-of-the-art method and keeping the obtained false alarm rate as close as possible to the target false alarm rate via the NF level change detection. The main contributions of this paper are as follows: • We propose an NF level change detection method to decide whether the NF estimation can be skipped or not. The method is based on the ED result with a detection threshold derived from the previous NF estimate. Thus, the NF estimation process is skipped when the previous detection threshold is judged to be adequate.
• The proposed method has a lower computational cost (run-time) for the NF estimation while offering an NF estimation performance comparable to the existing state-of-the-art two-dimensional FCME algorithm-based NF estimation method. In addition, the proposed method has a better NF estimation performance than the static estimation method.
We verify these claims numerically. The rest of the paper is organized as follows. Section II describes the spectrum usage measurement methodology, the time variation model of the NF level, and the significance of our work. Section III introduces the proposed NF estimation process with the NF level change detection. The numerical evaluation and the corresponding discussion are provided in Section IV. Finally, Section V concludes the paper.
II. SYSTEM MODEL
A configuration of time frames for a spectrum measurement is shown in Fig. 1. The measurement period is long term, such as dozens of days. A spectrum sensor continuously acquires N complex baseband samples during one data acquisition period with the specified measurement bandwidth and accumulates them in its local storage before the next data acquisition starts. Data acquisition periods are indexed by t, t ∈ {0, 1, · · · , T − 1}, where T is the total number of measurements. The general signal processing used for the spectrum measurement is shown in Fig. 2. The first step is power spectrum estimation with the Welch FFT [35] using the baseband samples of the t-th period, y_t ∈ C^N. Then, the NF estimation is performed, and we denote the estimated NF by Û_t ∈ R^{N_FFT}, where N_FFT is the FFT size used in the Welch FFT and typically N_FFT < N. In the conventional approach, the NF estimation is performed in every acquisition period. In our approach, the NF estimation is performed in the t-th acquisition period only if a significant NF level change is detected; otherwise, the NF estimate from the (t−1)-th acquisition period is reused. The proposed NF estimation with the NF level change detection is described in Sect. III. After the NF estimation, the threshold for ED is set using Û_t based on the CFAR criterion. Finally, ED with the set threshold τ_{P_FA}(t) is performed to obtain the spectrum usage decisions D_t. A more detailed explanation of the process is given below.
In the t-th acquisition period, we model the baseband signal y_t ∈ C^N as a complex Gaussian random signal, since we focus on broadband wireless systems such as WLAN and LTE, and almost all modern broadband wireless systems apply OFDM technology, where the OFDM signal can be approximated as a Gaussian random signal [36], [37]. However, the idea in this paper can be applied to any broadband wireless system, since the detection method (ED) and the proposed NF estimation method exploit power information that can be calculated for any radio signal. Thus, the sampled baseband signal bandlimited to the measurement bandwidth is given by

y_t[n] = s_t[n] + z_t[n] under H_1,    y_t[n] = z_t[n] under H_0,

where s_t[n] and z_t[n] are the n-th observation target signal sample and the noise signal sample, which are complex Gaussian random signals with zero mean and variances σ_s^2[t] and σ_z^2[t], respectively. Moreover, the noise signal has a specific power spectrum shape (e.g., Fig. 3(b)), whereas we assume that the target signal has a flat power spectrum over the measurement bandwidth. σ_s^2[t] and σ_z^2[t] are the signal power and the time-varying noise power in the t-th acquisition period, respectively, and we assume that these parameters are constant at least over one acquisition period. For the evaluation in the numerical evaluation section, we define the constant signal-to-noise ratio SNR = σ_s^2[t]/σ_z^2[t]; the total signal power is adjusted according to the given SNR value and the given total noise power in the t-th acquisition period, σ_z^2[t]. The status H_1 indicates that a PU signal exists in the measurement bandwidth partially or completely, and the status H_0 indicates otherwise (no signal present).
At first, the baseband signal y_t is divided into K Welch FFT blocks of N_s samples each; y_k^(t) denotes the k-th Welch FFT block. The power spectrum estimation with the Welch FFT consists of three steps: segmentation of y_k^(t) with a specific FFT size and overlap ratio, calculation of multiple power spectra, and averaging of the power spectra [35]. The segment y_{k,l}^(t), l ∈ {0, 1, · · · , L − 1}, is the l-th overlapping segment of N_FFT samples within the k-th Welch FFT block, where N_FFT is the FFT size and the overlap ratio ρ is set to 0.5 [38]. Moreover, N_s and N_FFT are assumed to be powers of two; in this case, the number of segments is L = 2N_s/N_FFT − 1. After the segmentation, an ordinary FFT is performed for each segment,

Y_{k,l}^(t)[f] = Σ_{n=0}^{N_FFT−1} w[n] · y_{k,l}^(t)[n] · e^{−j2πfn/N_FFT},

where w[n] is the Hamming window used in this process [33].
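As a concrete illustration of the Welch procedure just described (segmentation with 50% overlap, Hamming windowing, FFT, and averaging of squared magnitudes), the following Python sketch computes one averaged power spectrum; the parameter values and the window normalization are illustrative and not taken from the measurement setup of this paper.

```python
# Sketch of the Welch power-spectrum step described above: split a block into
# 50%-overlapping segments, apply a Hamming window, FFT each segment, and
# average the squared magnitudes. Parameter values are illustrative.
import numpy as np

def welch_psd(y, n_fft=1024, overlap=0.5):
    step = int(n_fft * (1.0 - overlap))
    window = np.hamming(n_fft)
    scale = np.sum(window**2)                 # window energy normalisation
    segments = [y[i:i + n_fft] for i in range(0, len(y) - n_fft + 1, step)]
    spectra = [np.abs(np.fft.fft(seg * window))**2 / scale for seg in segments]
    return np.mean(spectra, axis=0)           # one averaged value per frequency bin

rng = np.random.default_rng(0)
y = (rng.standard_normal(8192) + 1j * rng.standard_normal(8192)) / np.sqrt(2)  # complex noise
psd = welch_psd(y)
print(psd.shape, psd.mean())
```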
The power spectrum calculated with the Welch FFT in the k-th Welch FFT block is given by

P_k^(t)[f] = (1/L) Σ_{l=0}^{L−1} |Y_{k,l}^(t)[f]|^2,

where f ∈ {0, 1, · · · , N_FFT − 1} is the index of the frequency bin. We define the matrix P_t = [P_0^(t), · · · , P_{K−1}^(t)] collecting the power spectra of all K blocks. The ED result, indicating the presence of a signal component in the k-th Welch FFT block and the f-th frequency bin, is

D_t[k, f] = 1 if P_k^(t)[f] > τ_{P_FA}[f](t), and D_t[k, f] = 0 otherwise,

where 1 and 0 respectively indicate the presence of a signal component (H_1) and the absence of a signal component (H_0).

A. TIME VARIATION MODEL OF NOISE FLOOR LEVEL
Fig. 3 shows that the NF has a frequency dependency. Based on our two-dimensional NF measurement campaign, we model the NF level variation in the time and frequency domains as [23]

U[j, f] = γ_j · µ_ref[f],

where γ_j and µ_ref[f] indicate the NF level variation factor and the NF at a reference time instant j = t_ref (denoted the reference NF level), respectively. The coefficient γ_j is a gain that gives the NF level at time j and is frequency-independent. The reference NF level can be obtained from a measurement in an anechoic chamber or by using a radio frequency (RF) terminator at the spectrum sensor input to avoid the presence of signal components. The reference NF µ_ref[f] is then calculated by time averaging of noise power spectra,

µ_ref[f] = (1/M) Σ_{m=1}^{M} P_{m,ref}[f],

where M and P_{m,ref}[f] indicate the number of averaged spectra and the m-th noise power spectrum, respectively. Furthermore, we assume that the NF does not change during one data acquisition period.
B. SIGNIFICANCE OF NF LEVEL CHANGE
According to the result in Fig. 3(a), the NF level change is at most about 0.4 dB. This NF level change affects the obtained false alarm rate, since the estimated NF level is used to set the threshold τ_{P_FA}[f](t). The obtained false alarm rate can be calculated as [23]

P_{FA,o} = Γ̃(L, L · τ_{P_FA}[f](t) / U[t, f]),    (8)

where Γ̃(α, θ) indicates the normalized incomplete Gamma function. The threshold τ_{P_FA}[f](t) is set based on the CFAR criterion and is given by [23]

τ_{P_FA}[f](t) = (Û[t, f]/L) · Γ̃^{−1}(L, Ṗ_FA),    (9)

where Ṗ_FA is a given target false alarm rate and Γ̃^{−1} indicates the inverse of the normalized incomplete Gamma function. Fig. 4 shows the obtained false alarm rate for the target false alarm rate Ṗ_FA = 0.01 as a function of the NF level variation factor γ, where γ represents the NF estimation error and γ = 0 dB indicates no NF estimation error. The result indicates that the false alarm rate can become very large, e.g., about 0.45 in the case of L = 1000 and γ = 0.3 dB. This means the COR value may be estimated as 0.95 even though the real COR value is 0.5 when the measurement is in a high-SNR environment. Therefore, an NF level change such as γ = 0.3 dB is not negligible for the false alarm rate. In addition, it is well known that this also leads to the SNR wall behaviour of the energy detector [20].
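The relationship between the CFAR threshold, an NF estimation error, and the obtained false alarm rate can be illustrated with the short sketch below. It assumes that the L-segment averaged periodogram of Gaussian noise with true power U follows a Gamma distribution with shape L and mean U; the specific function names and numbers are illustrative, not the authors' implementation.

```python
# Sketch of CFAR threshold setting and the resulting false alarm rate when the
# noise-floor estimate is biased, assuming the L-segment averaged periodogram of
# Gaussian noise with true power U follows a Gamma(shape=L, mean=U) distribution.
import numpy as np
from scipy.stats import gamma

def cfar_threshold(nf_estimate, L, target_pfa):
    # Threshold such that P(P > tau) = target_pfa when the true NF equals nf_estimate.
    return gamma.isf(target_pfa, a=L, scale=nf_estimate / L)

def obtained_pfa(tau, true_nf, L):
    # False alarm rate actually obtained when the true NF differs from the estimate.
    return gamma.sf(tau, a=L, scale=true_nf / L)

L, target = 1000, 0.01
tau = cfar_threshold(nf_estimate=1.0, L=L, target_pfa=target)
print(obtained_pfa(tau, true_nf=1.0, L=L))                 # ~0.01 when the estimate is correct
print(obtained_pfa(tau, true_nf=10**(0.3 / 10), L=L))      # much larger with a 0.3 dB NF error
```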
III. PROPOSED NF ESTIMATION PROCESS
The block diagram of the proposed NF estimation process is shown in Fig. 5. It consists of two blocks: Block 1 (B1), the NF level change detection, and Block 2 (B2), the NF estimation based on the FCME algorithm. The NF level change detection in B1 is executed at every data acquisition time t except for the first measurement, t = 0. On the other hand, the NF estimation in B2 is executed only when an NF level change is detected or at t = 0, since the spectrum sensor does not know the NF at first. Therefore, the proposed method can reduce the computational cost of the NF estimation if the computational cost of B1 is smaller than that of B2 and the NF level changes slowly. In B2, we exploit the two-dimensional FCME algorithm-based NF estimation, as it achieves highly accurate NF estimation while considering the frequency dependency of the NF [23]. Briefly, the two-dimensional FCME-based NF estimation estimates the NF level variation factor γ_t at time instant t by exploiting the reference NF µ_ref[f] and the estimated power spectrum in the time-frequency plane P_t (the reference NF is described in Subsect. II-A) [23]. More specifically, it locates the noise-only power samples in the power spectrum samples P_t based on the FCME algorithm, flattens (normalizes) the located noise-only power spectra in frequency using µ_ref[f], and estimates γ_t by applying the FCME algorithm again. The resulting NF estimate is Û[t, f] = γ̂_t · µ_ref[f], where γ̂_t indicates the estimate of γ_t. After estimating the NF in B2, the ED is performed with the threshold set from the estimated NF.
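To make the FCME idea concrete, the following sketch implements a simple one-dimensional FCME-style noise-floor estimate: the clean (noise-only) set is grown from the smallest power samples as long as new samples stay below a scaled mean of the current set. The threshold factor, the 10% initial set, and the exponential toy data are assumptions for illustration; they are not the exact parameters of the two-dimensional method in [23].

```python
# Sketch of a forward consecutive mean excision (FCME) style noise-floor estimate:
# start from the smallest samples, grow the "noise-only" set while samples stay
# below a scaled mean of the current set, and return the mean of the final set.
# t_cme = -ln(0.01) ~= 4.6 would exclude about 1 % of clean exponential samples;
# the value and the 10 % initial set are illustrative, not the cited works' exact choice.
import numpy as np

def fcme_noise_floor(power_samples, t_cme=4.6, initial_fraction=0.1):
    x = np.sort(np.asarray(power_samples, dtype=float))
    n = max(1, int(initial_fraction * len(x)))       # initial clean set: smallest samples
    while True:
        threshold = t_cme * x[:n].mean()
        n_new = int(np.searchsorted(x, threshold))   # all samples below the threshold
        if n_new <= n:                               # set stopped growing -> done
            return x[:max(n, 1)].mean()
        n = n_new

rng = np.random.default_rng(1)
noise = rng.exponential(scale=1.0, size=900)          # noise-only bins
signal = rng.exponential(scale=100.0, size=100)       # occupied bins
print(fcme_noise_floor(np.concatenate([noise, signal])))  # close to (slightly below) the true noise power 1.0
```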
On the other hand, for the other data acquisition times, i.e., t ∈ {1, 2, · · · , T − 1}, the processes in B1 are performed first. These include a tentative ED using the threshold from the previous data acquisition, τ_{P_FA}(t − 1), and the NF level change detection. The frequency bins whose detection decision was ''noise-only'' are used for the noise level change detection: they are normalized using the reference NF and then the minimum is taken. The minimum of ED decision values has been used for NF estimation before [39]; here it is used for a different purpose, namely change detection. The normalized minimum is compared with two thresholds derived from the NF level variation factor of the previous round. The purpose is to notice when the NF is increasing or decreasing. If a change of the NF is detected, the processes in B2 are executed and their ED result is the final ED result, i.e., D_t = D_{t,final}. Otherwise, the ED result equals the tentative ED result (D_t = D_{t,ten.}). A more detailed description of the NF level change detection is given below.
The NF level change detection exploits the result of the tentative ED, D_{t,ten.}, and the power spectrum P_t. For each frequency bin f, the indices of zeros (''noise-only'' decisions) in D_{t,ten.}[·, f] are collected, the corresponding power samples P_k^(t)[f] are normalized by the reference NF µ_ref[f], and their minimum defines δ_t[f]. We can detect the NF level change by a thresholding process against δ_t[f], since δ_t[f] can serve as an estimate of γ_t. Specifically, we decide that the NF level has changed if min(δ_t) > η_H or min(δ_t) < η_L; otherwise, i.e., if min(δ_t) lies between η_L and η_H, we decide that the NF level has not changed. We apply two thresholds, η_L and η_H, since the NF level may either increase or decrease. Both thresholds are set based on γ̂_{t−1} and two hyperparameters (ε_L, ε_H), which are fixed in advance, before the spectrum measurements, by solving the optimization problem (14) via an exhaustive search.
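A minimal sketch of the NF level change detection described above is given below. It normalizes the power samples that the tentative ED declared noise-only by the reference NF, takes the minimum, and compares it against a band around the previous NF level variation factor; the exact scaling of the two thresholds by the hyperparameters is an assumption made here for illustration.

```python
# Sketch of the NF level change detection: normalise the noise-only power samples by
# the reference NF, take the minimum, and compare it with a band around the previous
# NF level variation factor gamma_prev. Scaling the band by eps_lo/eps_hi is an
# illustrative assumption; the default values mirror the COR = 0.5 case in the text.
import numpy as np

def nf_change_detected(P, decisions, mu_ref, gamma_prev, eps_lo=0.83, eps_hi=0.91):
    noise_only = decisions == 0
    if not noise_only.any():                     # nothing to test against -> re-estimate
        return True
    delta = (P / mu_ref[None, :])[noise_only]    # noise samples flattened in frequency
    stat = delta.min()
    return stat < eps_lo * gamma_prev or stat > eps_hi * gamma_prev

rng = np.random.default_rng(2)
K, F, L = 100, 128, 1000
mu_ref = np.ones(F)
decisions = np.zeros((K, F), dtype=int)                      # tentative ED: all noise-only
same = rng.gamma(L, scale=1.0 / L, size=(K, F))              # NF unchanged
up = rng.gamma(L, scale=10**(0.05) / L, size=(K, F))         # NF ~0.5 dB higher
print(nf_change_detected(same, decisions, mu_ref, 1.0))      # expected: False (skip re-estimation)
print(nf_change_detected(up, decisions, mu_ref, 1.0))        # expected: True  (re-estimate the NF)
```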
To set the hyperparameters properly, we introduce a min-max optimization problem. It minimizes the maximum computational cost (run-time) of the NF estimation process over the whole target SNR region, subject to the mean absolute error (MAE) between the obtained false alarm rate P_{FA,o} and the target false alarm rate Ṗ_FA being lower than an allowable value. The MAE of the false alarm rate reflects the NF estimation performance, since good NF estimation leads to a smaller MAE; it is given by

MAE_{P_FA} = E[ |P_{FA,o} − Ṗ_FA| ],

where E[·] denotes expectation. Mathematically, the criterion is

(ε_L, ε_H) = arg min_{ε_L, ε_H} max_{SNR_min ≤ SNR ≤ SNR_max} C(ε_L, ε_H; SNR)   subject to   MAE_{P_FA}(ε_L, ε_H) ≤ ε_{P_FA},    (14)

where C(ε_L, ε_H; SNR), MAE_{P_FA}(ε_L, ε_H), and ε_{P_FA} indicate the run-time for a given SNR and hyperparameters, the MAE of the obtained false alarm rate for a given SNR and hyperparameters, and the allowable MAE of the false alarm rate, respectively. The target SNR region lies between SNR_min and SNR_max, the minimum and maximum target SNR values.
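The structure of the min-max hyperparameter search in (14) can be sketched as follows; the run-time and MAE functions are illustrative stubs standing in for the simulation results, so only the optimization pattern (exhaustive search, feasibility by the MAE budget, minimization of the worst-case run time) should be read from it.

```python
# Sketch of the exhaustive min-max search over (eps_lo, eps_hi): keep the pair whose
# worst-case run time over the SNR grid is smallest among pairs meeting the MAE budget.
# run_time() and mae_pfa() are illustrative stubs, not the simulation of the paper.
import itertools
import numpy as np

def run_time(eps_lo, eps_hi, snr):       # stub: pretend a higher upper bound triggers more re-estimations
    return 1.0 + 5.0 * eps_hi + 0.01 * max(snr, 0.0)

def mae_pfa(eps_lo, eps_hi):             # stub: pretend a wider band costs false-alarm accuracy
    return 0.004 * (eps_hi - eps_lo)

snr_grid = np.arange(-10, 11, 1)         # target SNR region in dB
budget = 2e-3                            # allowable MAE of the false alarm rate
candidates = [(lo, hi) for lo, hi in itertools.product(np.arange(0.70, 0.96, 0.01), repeat=2) if lo < hi]
feasible = [(lo, hi) for lo, hi in candidates if mae_pfa(lo, hi) <= budget]
best = min(feasible, key=lambda p: max(run_time(*p, s) for s in snr_grid))
print(best)
```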
IV. NUMERICAL EVALUATIONS
In this section, we evaluate the NF estimation performance and the spectrum occupancy detection performance of the proposed method based on computer simulations. For comparison, we evaluate the performance of the widely used static estimation method and of the two-dimensional FCME algorithm-based NF estimation [23], which is the current state-of-the-art method. We assume spectrum measurements of one wireless local area network (WLAN) channel with 20 MHz bandwidth in the 2.4 GHz band, and there is no signal in the channels adjacent to the target channel. Common parameters are summarized in Table 1. Fig. 6 shows (a) the assumed NF variation in time and (b) the assumed reference NF. These correspond to the approximation of the NF obtained from the noise measurements mentioned in Subsec. II-A (Fig. 3). Specifically, we calculated the NF level and the power spectrum of the NF (the reference NF) from the experimental result of Fig. 3 by polynomial approximation. All results in this section are evaluated using the time variation pattern and the power spectrum of the NF shown in Fig. 6.
A. HYPERPARAMETERS OPTIMIZATION
In this subsection, the hyperparameters ε_L and ε_H are optimized based on (14). We show the optimization result for COR = 0.5 as an example. The target SNR region is set to SNR ∈ [−10, 10] dB, ε_{P_FA} = 2 × 10^−3, and the target false alarm rate is set to 0.01. An exhaustive search is applied, and the obtained false alarm rate is shown in Fig. 7. However, the average run time shown in Fig. 8 is the longest in the target SNR region. Thus, this result indicates a trade-off between run-time performance and NF estimation accuracy for the proposed method, which is due to the NF level change detection: the more often the NF estimation process is skipped, the shorter the run-time but the worse the NF estimation performance, and vice versa. However, we note that the MAE of the obtained false alarm rate for the proposed method satisfies the allowable MAE, ε_{P_FA}.
The result in the latter non-optimal case (ε_L = 0.87 and ε_H = 0.93) indicates another trade-off, between the MAE of the false alarm rate and the run-time performance, as a function of SNR. Specifically, in the low SNR region (e.g., less than 1 dB) the MAE of the false alarm rate is relatively high but the shortest run time is achieved, whereas in the high SNR region the MAE is low but the run time is longer. This implies that the hyperparameters for the NF level change detection must be set properly. The thresholds (or ε_L and ε_H) in the latter non-optimal case are almost proper from the perspective of the optimization problem (14), since the MAE of the obtained false alarm rate and the average run time in this case are almost the same as those of the optimal solution in the low SNR region. However, these thresholds are not proper from the perspective of the computational cost (i.e., they give a longer run time than the optimal solution) in the high SNR region, since the upper hyperparameter (ε_H = 0.93) is slightly higher than the optimal one (ε_H = 0.91). In fact, the decision statistic of the NF level change detection (the minimum of δ_t, min(δ_t)) tends to become larger as the SNR increases. As a result, the higher upper threshold in the non-optimal case increases the number of times the two-dimensional FCME algorithm-based NF estimation is executed, i.e., it leads to a longer run time in the high SNR region, because NF level changes are detected more often.
B. COMPARATIVE EVALUATION
In this subsection, we compare the performance of the proposed NF estimation and the two-dimensional FCME algorithm-based NF estimation for COR values of 0.2, 0.5, and 0.8 in the time-varying NF scenario. We use the time variation pattern of the NF shown in Fig. 6(a). The different COR values represent low, moderate, and high channel occupancy environments, respectively. For the proposed NF estimation, we use the optimal hyperparameters for each COR value found by the exhaustive search described in the previous subsection: ε_L = 0.75, ε_H = 0.94; ε_L = 0.83, ε_H = 0.91; and ε_L = 0.82, ε_H = 0.90 for COR = 0.2, 0.5, and 0.8, respectively. Fig. 9 evaluates the NF estimation performance. Specifically, the figure shows the MAE of the NF estimation, defined as

MAE_NF = (1/(T · N_FFT)) Σ_{t=0}^{T−1} Σ_{f=0}^{N_FFT−1} |Û[t, f] − U[t, f]|,

where T, N_FFT, Û[t, f], and U[t, f] are the number of measurements (number of simulation trials), the number of frequency bins, the estimated NF in linear scale at the t-th measurement and f-th frequency bin, and the true NF in linear scale at the t-th measurement and f-th frequency bin, respectively. For reference, the result of the static estimation method is also shown. From this figure, the two-dimensional FCME algorithm-based NF estimation has a better NF estimation performance than the proposed method in the target SNR region, since the proposed method skips the two-dimensional FCME-based NF estimation by applying the NF level change detection to reduce the computational cost (run-time) at the cost of slight NF estimation accuracy. This again indicates the trade-off between computational cost and NF estimation accuracy for the proposed method, as mentioned in the previous subsection.
Moreover, for the two-dimensional FCME algorithm-based NF estimation, we can see a different NF estimation performance for each COR value. This is due to the smaller number of noise samples in the case of higher COR values; the method in principle estimates the NF by averaging the noise samples.
On the other hand, both the proposed method and the two-dimensional FCME algorithm-based NF estimation have a much better NF estimation performance than the static estimation method. This is because the static estimation method estimates the NF only once, when starting the measurements, and cannot track the time variation of the NF, whereas both the proposed method and the two-dimensional FCME algorithm-based NF estimation can. In terms of detection performance, both methods also achieve a detection probability nearly equal to the ideal and target detection performance, owing to the highly accurate NF estimation. Here, the ideal and target detection probability indicates the probability of detection with the detection threshold (9) applying the 0.01 target false alarm rate.
On the other hand, Fig. 11 shows only a small difference in the obtained false alarm rate between the proposed method and the two-dimensional FCME algorithm-based NF estimation. This result is related to the NF estimation performance shown in Fig. 9: the lower the MAE of the NF estimation, the closer the false alarm rate is to the 0.01 target, and vice versa. Fig. 12 shows the MAE of the obtained false alarm rate for the proposed method and the two-dimensional FCME algorithm-based NF estimation. The figure indicates that both methods satisfy the allowable MAE, ε_{P_FA} = 0.002. However, the two-dimensional FCME algorithm-based NF estimation has a better MAE performance because it has a better NF estimation performance, as shown in Fig. 9; as follows from (8), the lower the MAE of the NF estimation, the lower the MAE of the false alarm rate, and vice versa. Finally, we evaluate the average run-time of the proposed method and the two-dimensional FCME algorithm-based NF estimation. Fig. 13 shows that the run-time of the proposed method is up to 10 times faster than that of the two-dimensional FCME algorithm-based NF estimation in the case of COR = 0.2, and up to 2 times faster in the case of COR = 0.8. Comparing this with Fig. 12, we can see that the two figures are inter-related: the lower the MAE of the obtained false alarm rate, the longer the average run time, due to the trade-off between the NF estimation performance and the computational cost (run-time) of the proposed method. On the other hand, the average run time of the two-dimensional FCME algorithm-based NF estimation is shorter for high COR values, due to the smaller number of noise samples averaged in the NF estimation. Therefore, the proposed approach can reduce the computational cost (run-time) significantly while maintaining the accuracy of the NF estimation.
V. CONCLUSION
In this paper, we have proposed an efficient NF estimation process (NF level change detection plus FCME algorithm-based NF estimation) for ED-based long-term and wide-band spectrum measurements. The proposed NF estimation process can deal with the slow time-varying property and the frequency dependency of the NF, NF estimation in the presence of the signal, and the computational cost at the same time. In particular, the proposed process reduces the computational cost by exploiting the slowly time-varying NF via the proposed NF level change detection. Numerical evaluations have shown that the proposed method enables accurate spectrum occupancy detection, considering the frequency dependency and slowly time-varying property of the NF, while achieving a run-time up to 10 times faster than that of the FCME algorithm-based NF estimation without the NF level change detection in the low-COR environment. We have also numerically evaluated the effect of the thresholds on the NF estimation performance and the computational cost and shown the trade-off between them. As future work, we will analyze this trade-off theoretically and investigate a threshold (hyperparameter) setting method for the proposed NF estimation that satisfies the optimization problem (14).
"Computer Science"
] |
Adsorption Behavior of Surfactant on Lignite Surface: A Comparative Experimental and Molecular Dynamics Simulation Study
Experimental and computational simulation methods are used to investigate the adsorption behavior of the surfactant nonylphenol ethoxylate (NPEO10), which contains 10 ethylene oxide groups, on the lignite surface. The adsorption of NPEO10 on lignite follows a Langmuir-type isotherm, and the thermodynamic parameters of the adsorption process show that the whole process is spontaneous. X-ray photoelectron spectroscopic (XPS) analysis indicates that a significant fraction of the oxygen-containing functional groups on the lignite surface were covered by NPEO10. Molecular dynamics (MD) simulations show that the NPEO10 molecules adsorb at the water-coal interface and that polar interactions are the main effect in the adsorption process. The density distributions of coal, NPEO10, and water molecules along the Z axis show that the remaining hydrophobic portions of the surfactant extend into the solution, creating a more hydrophobic coal surface that repels water molecules. The negative interaction energy between the surfactant and the lignite surface, together with the density profiles of the head and tail groups along the three spatial directions, suggests that the adsorption process is spontaneous. The self-diffusion coefficients show that the presence of NPEO10 causes higher water mobility by improving the hydrophobicity of lignite.
Introduction
Coal plays an important role in fulfilling the energy needs of our society. Lignite is a typical low-rank coal with very large deposits; it burns easily and has a high content of volatile components [1,2]. However, lignite has low heating values, primarily because of its high oxygen and moisture contents, and it forms a great deal of coal slime during mining because it is weathered and fragmented easily. Drying and flotation are the main technologies used to improve the utilization ratio of lignite. Flotation is one of the most important methods for concentrating lignitic slimes, but low-rank coals, especially lignite, are generally known to be the most difficult coals to float [3][4][5][6][7]. Their poor floatability has mainly been attributed to the high oxygen content and the abundance of hydrophilic functional groups at their surface [8]. Many researchers have tried to improve the hydrophobicity of difficult-to-float lignite by introducing appropriate agents. Kadim et al. [9] carried out a series of flotation tests with lignite from three mines. Vamvuka et al. [10] explained the electric charge condition on the surface of the particles in the lignite flotation process using the zeta potential. Yakup [11] proposed a regime involving treatments with kerosene + emulsifier and kerosene + emulsifier + surfactant and conducted a flotation study. However, a microscopic understanding is less frequently brought to bear on the adsorption of surfactants on the coal surface. Zhang et al. [12] used sorbitan monooleate to pretreat lignite, enhancing the flotation of the pretreated lignite. Xia et al. [13] found that a mixture of dodecane and 4-dodecylphenol was an effective collector for lignite flotation. Ni et al. [14] found that the combustible matter recovery of lignite increased when the lignite was preconditioned with Tween 80 before addition of the collector.
In recent years, molecular dynamics simulation has become a valuable tool for investigating the interactions of water/surfactant (collector)/mineral surfaces. Compared with experimental methods, computer simulations can directly provide microscopic details and fundamental understanding [15]. Chen et al. [16] used the ReaxFF reactive force field for molecular dynamics simulations of the spontaneous combustion of lignite. Zhang [17] used molecular dynamics (MD) simulations to study the structure and dynamics of a brown coal matrix during the moisture removal process. Zhang et al. [18] investigated the wettability modification of Wender lignite by adsorption of dodecyl polyethoxylated surfactants with different degrees of ethoxylation by molecular dynamics simulation. Rai et al. [19] studied the adsorption of oleate and dodecylammonium chloride molecules on spodumene and jadeite surfaces. Xu et al. [20] computed the interaction energies between water molecules/ammonium ions and the muscovite (001) surface. Wang et al. [21] used MD simulations to describe the co-adsorption of a mixed surfactant (dodecylamine hydrochloride and sodium oleate) on the muscovite surface in an aqueous solution. Although there have been several molecular dynamics studies on minerals, they have mainly focused on minerals with a single structural and chemical component. Lignite, in contrast, is a system with structural complexity and chemical diversity that is mainly composed of 85-95% organic material; a single, uniform chemical structure representing coal therefore does not exist, and some assumptions must be made to investigate the coal structure. Lignite can be regarded as a highly cross-linked polymer consisting of lignite macromolecules connected through bridge bonds, hydrogen bonds, van der Waals forces, and so on [22][23][24]. Consequently, few studies illustrating the adsorption of surfactants on a lignite surface have been reported, because of the structural complexity and chemical diversity of lignite.
We investigated the fundamental properties and mechanism of the adsorption of nonylphenol ethoxylate (NPEO 10 ), which contained 10 ethylene oxide groups, on the surface of lignite using MD and X-ray photoelectron spectroscopy (XPS). The adsorption of NPEO 10 on the model surface of lignite was investigated using MD simulations. Quantifying the molecular-scale structural and dynamic behavior of the water/surfactant/coal system is helpful to improve understanding of the interactions between NPEO 10 and lignite. In addition, adsorption experiments between lignite and NPEO 10 were performed to verify the simulation results.
Adsorption Isotherms
Adsorption isotherms of NPEO10 on the coal sample at 308, 318, and 328 K are illustrated in Figure 1, which shows that the adsorption of NPEO10 on coal is sensitive to temperature. The linearized Langmuir Equation (1) and linearized Freundlich Equation (2) can be written as [25][26][27][28]

C_e/q_e = C_e/Q_m + 1/(b·Q_m),    (1)

lg q_e = lg k + (1/n)·lg C_e,    (2)

where b and Q_m refer to the Langmuir constant, which is related to the affinity of the adsorption sites, and to the maximum amount of NPEO10 per unit weight of adsorbent; Q_m and b were calculated from the slope and intercept, respectively, of the straight line in the plot of C_e/q_e vs. C_e. According to Equation (2), k and n are the Freundlich constants and can be determined from the linear plot of lg q_e vs. lg C_e. k is correlated with the adsorption capacity when the equilibrium concentration of the adsorbate is equal to 1, and n represents the degree of dependence of the adsorption process on the equilibrium concentration. The values of Q_m, b, and n are summarized in Table 1. The isotherm data were fitted using the least squares method, and the related correlation coefficients (r^2 values) are given in the same table. As shown in Table 1, the Langmuir equation represents the adsorption process very well; the r^2 values of the Langmuir equation are 0.9990, 0.9996, and 0.9992 at 308, 318, and 328 K, respectively, all higher than the r^2 values of the Freundlich equation, suggesting that the adsorption of NPEO10 onto the coal sample closely follows the Langmuir model.
The values of Q m , b, and n are summarized in Table 1. The isotherm data were calculated using the least squares method, and the related correlation coefficients (r 2 values) are given in the same table. As shown in Table 1, the Langmuir equation represents the adsorption process very well; the r 2 values of the Langmuir equation are 0.9990, 0.9996, and 0.9992 at 308, 318, and 328 K, respectively, all higher than the r 2 values of the Freundlich equation, suggesting that the adsorption of NPEO 10 onto the coal sample closely follows the Langmuir model. The values of Qm, b, and n are summarized in Table 1. The isotherm data were calculated using the least squares method, and the related correlation coefficients (r 2 values) are given in the same table. As shown in Table 1, the Langmuir equation represents the adsorption process very well; the r 2 values of the Langmuir equation are 0.9990, 0.9996, and 0.9992 at 308, 318, and 328 K, respectively, all higher than the r 2 values of the Freundlich equation, suggesting that the adsorption of NPEO10 onto the coal sample closely follows the Langmuir model. The adsorption free energy (ΔG°) can be calculated using the following equation [29]: where b, R, and T are the Langmuir constant, ideal gas constant, and the absolute temperature, respectively. The adsorption free energy calculated on a molar basis was −44.80, −45.28, and −45.88 kJ/mol at 308, 318, and 328 K, respectively (as shown in Table 1). The value of ΔG° is negative, indicating a spontaneous process under the experimental conditions. The observed decrease in the ΔG° values with increasing temperature indicates that adsorption occurs more efficiently at higher temperatures.
XPS Analysis
A wide-scan spectrum in the binding energy range 0-1400 eV was obtained to identify and quantitatively analyze the surface elements present [30]. A typical XPS wide-scan spectrum of the demineralized lignite coal is presented in Figure 2, which shows that oxygen and carbon peaks represent the major constituents of the coal surface both before and after the adsorption of NPEO10. Clear changes in the O 1s, N 1s, C 1s, S 2p, Si 2p, and Al 2p peaks were found after adsorption of NPEO10. Compared with the case before adsorption, the C 1s peak is greatly enhanced, the O 1s peak shows a clear reduction, and the N 1s, S 2p, Si 2p, and Al 2p peaks are slightly weakened. Quantitative peak analysis was carried out to determine the surface element concentrations, and the results are shown in Table 2: the contents of C 1s, O 1s, N 1s, S 2p, Si 2p, and Al 2p on the coal surface were, respectively, 76.58, 19.00, 1.12, 0.11, 1.82, and 1.37 at% before adsorption and 80.57, 17.33, 0.48, 0.25, 0.76, and 0.61 at% after adsorption. The content of C 1s increased from 76.58% to 80.57% after adsorption, whereas the content of O 1s decreased from 19.00% to 17.33%. These results indicate that the oxygen functional groups on the lignite surface were significantly covered by NPEO10.

Table 2. Contents of C 1s, O 1s, N 1s, S 2p, Si 2p, and Al 2p on the coal surface before and after adsorption of NPEO10 (at%).
Types    Before    After
C 1s     76.58     80.57
O 1s     19.00     17.33
N 1s      1.12      0.48
S 2p      0.11      0.25
Si 2p     1.82      0.76
Al 2p     1.37      0.61

The aggregated structures of NPEO10 on the lignite surface at different simulation times are shown in Figure 3. The Z-dependent density profiles (Z being normal to the coal surface) for the equilibrated configuration (1 ns) were calculated, and the density distributions of coal, NPEO10, and water molecules along the Z axis are shown in Figure 4. The original configuration of the NPEO molecules had the polar head groups facing the lignite surface, as can be seen in Figure 3a. As expected, after a short period of simulation (Figure 3b), owing to the abundant hydrophilic oxygen functional groups on the lignite surface, the surfactant molecules reorient themselves so that their ethylene oxide groups adsorb lying on the coal surface through hydrogen bonds. In addition, the alkyl chains clearly intertwine with each other as a result of hydrophobic interaction. As time evolves, a hemimicelle structure forms on the lignite surface, as observed in Figure 3b,d. Figure 4 shows that the NPEO10 peak appears at ~20 Å, close to the first peak corresponding to water. This water layer may consist of the near-surface water film controlled by hydrogen bonding between adsorbed water molecules and the coal surface. The intensity of the first water peak is obviously weaker, meaning there are fewer water molecules in this region. Most water molecules appear at distances along the Z axis exceeding 40 Å. The NPEO10 molecules exist at the interface between water and coal, and water molecules are repelled from the coal surface because of the stronger hydrophobicity of the lignite surface after adsorbing NPEO10.
The results also show that the densities of NPEO 10 and the coal surface overlap, which does not necessarily mean that NPEO 10 molecules penetrate the lignite. Instead, the overlap is partly caused by the roughness of the surface, whose microscopic valleys are filled with the surfactant molecules.
The Z-dependent mass density profiles for the head (ethoxylate) and tail (nonylphenol) groups of NPEO10 were also calculated to survey the configuration of the adsorbed surfactant molecules on the coal surface, as shown in Figure 5. The peak of the head group was closer to the lignite surface than that of the tail group. Therefore, the non-ionic hydrophilic head groups are located next to the coal surface. As is well-known, the surface of coal is hydrophobic with hydrophilic sites, which means that there is an excess of head groups attached to the lignite surface. The polar interaction between the ethoxylated group of NPEO10 and the hydrophilic sites on the coal surface is the main force affecting the adsorption process. In other words, the ethylene oxide groups of NPEO10 preferentially adsorb on the hydrophilic sites of lignite and leave the hydrophobic portion of the molecule exposed to the solution. This result indicates that there is a high oxygen content and an abundance of hydrophilic surface functional groups on the lignite surface.
Figure 5. Density profiles of the NPEO10 head group and tail group along the Z direction.
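For readers reproducing profiles like Figure 5, the following sketch bins the z coordinates of one atom group from an MD frame into slabs and converts the histogram to a mass density; the coordinates, masses, and box dimensions are made up for illustration and do not come from the simulations reported here.

```python
# Sketch of a Z-direction mass-density profile like Figure 5: bin the z-coordinates of
# one group of atoms (e.g. the EO head groups) and convert counts to mass density.
# Coordinates, masses and the box size below are made up for illustration.
import numpy as np

def z_density_profile(z, masses, box_xy_area, z_max, n_bins=100):
    """Mass density (g/cm^3) of an atom group along Z, from coordinates in angstrom."""
    edges = np.linspace(0.0, z_max, n_bins + 1)
    hist, _ = np.histogram(z, bins=edges, weights=masses)          # amu per slab
    slab_volume = box_xy_area * (edges[1] - edges[0])              # A^3 per slab
    amu_per_A3_to_g_per_cm3 = 1.66054
    return 0.5 * (edges[:-1] + edges[1:]), hist / slab_volume * amu_per_A3_to_g_per_cm3

rng = np.random.default_rng(3)
z_head = rng.normal(loc=20.0, scale=3.0, size=500)   # head-group atoms clustered near 20 A
masses = np.full(500, 14.0)                          # treat every atom as ~CH2 for simplicity
centers, rho = z_density_profile(z_head, masses, box_xy_area=40.0 * 40.0, z_max=170.0)
print(centers[np.argmax(rho)])                       # peak position along Z, ~20 A
```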
Interaction Energies between Surfactant and Coal
The relative intensity and efficiency of the interaction between the surfactant and the coal surface is indicated by the interaction energy E_inter(NPEO & coal). The value of E_inter(NPEO & coal) between NPEO10 and the lignite surface is calculated with Equation (4) from the energies of the individual components and of their pairwise combinations, where E_total is the total energy of the system; E_coal, E_NPEO, and E_water refer to the energies of the coal surface, NPEO10, and water, respectively; and E_coal+water, E_NPEO+water, and E_coal+NPEO are the total energies of coal and water, of NPEO10 and water, and of coal and NPEO10, respectively. The value of E_inter(NPEO & coal) obtained from the simulation is −174.16 kJ/mol. The negative value of the interaction energy between the surfactant and coal means that the system becomes more stable after adsorption.
Mobility of Water Molecules
The mean square displacement (MSD) is a statistical average over particle trajectories and measures the average distance traveled by the particles. The dynamic properties of the water molecules can be obtained from the MSD, which can be expressed as [31]

MSD(t) = (1/N) Σ_{i=1}^{N} ⟨|r_i(t) − r_i(0)|^2⟩,

where N is the number of atoms, r_i(0) is the position vector at the initial time, r_i(t) is the position vector after time t, and the angular brackets denote the ensemble average. The MSD curves of water molecules in the absence and presence of NPEO10 are shown in Figure 6. It is evident that the mobility of the water molecules is affected by the presence of NPEO10; the increase in diffusion is clearly greater in the coal-water-NPEO10 system.
The self-diffusion coefficient (D) reflects the intensity of the mobility of the water molecules and was calculated for the mixed systems both with and without NPEO10. According to Einstein's equation, D is obtained from the long-time slope of the MSD [32]:

D = (1/6) lim_{t→∞} dMSD(t)/dt.

The self-diffusion coefficients were calculated to be 5.67 × 10^−5 and 4.79 × 10^−5 cm^2/s in the mixed system with and without NPEO10, respectively. This result means that the mobility of water over the modified coal surface caused by the adsorption of NPEO10 is enhanced compared with that of water over the surface of the original coal. The high mobility of the water molecules should contribute to their displacement from the modified coal surface and to the attachment of air bubbles. These simulated results are consistent with those obtained from the XPS analysis.
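The MSD and Einstein-relation estimates used above can be sketched in a few lines; the synthetic random-walk trajectory and the fitting window below are illustrative assumptions, not the trajectory analysis actually performed in this study.

```python
# Sketch of the MSD and Einstein-relation estimate described above: average the squared
# displacement over atoms, then take D from the long-time slope divided by 6.
# The random-walk trajectory here is synthetic; units are arbitrary.
import numpy as np

def msd(positions):
    """positions: (frames, atoms, 3) unwrapped coordinates -> MSD(t) averaged over atoms."""
    disp = positions - positions[0]                       # r_i(t) - r_i(0)
    return (disp**2).sum(axis=2).mean(axis=1)

def diffusion_coefficient(msd_curve, dt, fit_from=0.5):
    t = np.arange(len(msd_curve)) * dt
    start = int(fit_from * len(msd_curve))                # fit only the late, linear part
    slope, _ = np.polyfit(t[start:], msd_curve[start:], 1)
    return slope / 6.0                                    # Einstein relation in 3D

rng = np.random.default_rng(4)
steps = rng.normal(scale=0.1, size=(2000, 100, 3))        # Brownian-like displacements
traj = np.cumsum(steps, axis=0)
curve = msd(traj)
print(diffusion_coefficient(curve, dt=1.0))               # ~ (0.1**2 * 3) / 6 = 0.005
```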
Materials
The lignite used in this study was provided by a colliery in China and was crushed to −0.5 mm. Analysis of the coal samples showed that its moisture content (Mad), ash content on a dry basis (Aad), volatile content (Vad), and fixed carbon content (FCad) were 9.10%, 19.82%, 62.75%, and FCdaf = 37.25%, respectively.
NPEO10 was obtained from Union Carbide Chemicals (Danbury, Connecticut, USA) with no further purification. The chemical structure of NPEO10 is shown in Figure 7.
The coal-water-NPEO system, which included 20 lignite macromolecules, nine NPEO10 molecules, and 2000 water molecules, was packed in a rectangular simulation cell of 40 × 40 × 170 Å (X × Y × Z) with three-dimensional periodic boundary conditions. The simple point charge (SPC) water model [37] was used, and the water potential parameters are listed in Table 3. The molecular dynamics simulations were run in the NVT ensemble at 298 K using a Nose thermostat, and the time step was set to 1.0 fs. A van der Waals interaction cutoff of 12.5 Å was employed, and the Ewald summation method with an accuracy of 10−3 kcal/mol was used to account for the long-range electrostatic interactions. The coal surface was frozen during the simulation to save computational effort, while the surfactant and water were allowed to relax. Each simulation was performed for 1 ns, and the final results were calculated from a 500 ps period after equilibration. Curves showing the fluctuation of energy during the processes of energy minimization and annealing are shown in Figure 9; the potential energy, non-bond energy, kinetic energy, and total energy rapidly decreased to a minimum state and remained stable.
Conclusions
The adsorption behavior of nonylphenol ethoxylate with 10 ethylene oxide groups (NPEO10) on the surface of lignite was investigated by experimental and computational simulation methods.
The adsorption of NPEO10 on lignite follows a Langmuir-type isotherm at different temperatures. The thermodynamic analysis of the adsorption process shows that the whole process is spontaneous and driven synergistically by both enthalpy and entropy. The X-ray photoelectron spectroscopy (XPS) analyses show that most of the oxygen-containing functional groups on the lignitic coal surface were covered by NPEO10.
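For illustration, a Langmuir-type isotherm of the form Γ = Γ_max·K·Ce/(1 + K·Ce) can be fitted to measured (Ce, Γ) pairs as sketched below; the data points and starting values are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, gamma_max, K):
    """Langmuir isotherm: adsorption density as a function of equilibrium concentration."""
    return gamma_max * K * ce / (1.0 + K * ce)

# Synthetic (Ce, Gamma) points standing in for a measured isotherm at one temperature
ce = np.array([5, 10, 20, 40, 80, 160.0])          # mg/L
gamma = np.array([1.8, 3.1, 4.6, 6.0, 6.9, 7.4])   # mg/g

(gamma_max, K), _ = curve_fit(langmuir, ce, gamma, p0=[8.0, 0.05])
print(f"Gamma_max = {gamma_max:.2f} mg/g, K = {K:.4f} L/mg")
```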
Molecular dynamics (MD) simulations were used to investigate the adsorption behavior of NPEO10 on a model lignite surface. The NPEO10 molecules were found to adsorb at the water-coal interface. Moreover, the polar interactions between the ethoxylate group of NPEO10 and the hydrophilic sites on the lignitic coal surface were the main factor in the adsorption process. The density distributions of coal, NPEO10, and water molecules along the Z-axis direction showed that the remaining hydrophobic portions of surfactant, which extend into the solution, create a more hydrophobic coal surface to repel the water molecules.
The aggregated structure of adsorbed NPEO 10 molecules was studied through the density profiles of the head and tail groups in the three spatial directions. The negative interaction energy between the surfactant and the lignite surface suggests that the adsorption process is spontaneous, which is consistent with the experimental thermodynamics. The self-diffusion coefficients show that the presence of NPEO 10 causes higher water mobility, improving the hydrophobicity of lignite. | 7,868 | 2018-02-01T00:00:00.000 | [
"Chemistry",
"Physics"
] |
Optimization of Quality of AI Service in 6G Native AI Wireless Networks
: To comply with the trend of ubiquitous intelligence in 6G, native AI wireless networks are proposed to orchestrate and control communication, computing, data, and AI model resources according to network status, and efficiently provide users with quality-guaranteed AI services. In addition to the quality of communication services, the quality of AI services (QoAISs) includes multiple dimensions, such as AI model accuracy, overhead, and data privacy. This paper proposes a QoAIS optimization method for AI training services in 6G native AI wireless networks. To improve the accuracy and reduce the delay of AI services, we formulate an integer programming problem to obtain proper task scheduling and resource allocation decisions. To quickly obtain decisions that meet the requirements of each dimension of QoAIS, we further transform the single-objective optimization problem into a multi-objective format to facilitate the QoAIS configuration of network protocols. Considering the computational complexity, we propose G-TSRA and NSG-TSRA heuristic algorithms to solve the proposed problems. Finally, the feasibility and performance of QoAIS optimization are verified by simulation.
Introduction
After decades of research and development, communication networks have become critical information infrastructures for economic growth and social progress in today's world [1][2][3].In recent years, along with the rapid advancement of artificial intelligence (AI) technology in new communication networks, intelligent applications of the Internet of Everything have been integrated into our lives and continue to drive and deepen a series of application scenarios, such as intelligent vehicle networking, smart industrial networking, smart cities, and smart healthcare [4][5][6][7].The development of intelligent applications brings a great demand for network connection, computing, sharing data, and AI capability, and intelligence permeates every corner of the network, from the end user to the network edge and the remote cloud.However, computing, business data, and AI model resources in 5G are usually in mobile edge computing and cloud computing infrastructure [8][9][10].It is difficult for the network to perceive and control the resources of the cloud AI platform in real-time to provide high-quality AI services with strict delay limitations according to changes in the wireless environment and user attributes.Therefore, the 6G network must consider deep integration with AI in the architecture design stage to natively provide AI capabilities.
The native AI design of 6G needs to consider two aspects of requirements: (1) AI can support high-level autonomy of the network.AI can improve the efficiency of data measurements and decision optimization in the network, then realize fast automated operation, maintenance, detection, and network self-healing [11].(2) AI can support intelligent applications in vertical industries.The 6G network should directly provide vertical industry users with quality-guaranteed AI services to create new market value.According to the above requirements, the 6G native AI wireless network is a unified architecture that deeply integrates communication and AI.It should have the ability to process the AI service logic, manage the full life cycle of the AI service, and provide AI services to the network itself as well as vertical industries [12].In addition, native AI wireless networks should orchestrate and control the communication, computing, data, and AI model resources in the network, including the core network, radio access network (RAN), and terminals.In collaboration with edge and remote clouds, the native AI wireless networks can quickly adapt to the customization needs of diverse scenarios [13].Hence, the 6G network will become the fundamental infrastructure for realizing ubiquitous intelligence to support various AI applications, such as real-time AI inference, distributed learning, and intelligent group collaboration.
One essential advantage of natively providing AI services in 6G networks is that resources can be controlled flexibly and on demand to ensure the quality of AI services (QoAISs) [14].Current networks can already guarantee the quality of service (QoS) for communications.Moreover, 3GPP defines QoS-related standards and sets the communication index dimensions corresponding to the QoS, such as bandwidth, delay, jitter, and bit error rate.RAN protocols (such as service data adaptation protocol) will provide users with differentiated network quality assurance services according to preset QoS parameters.However, the 6G native AI wireless network introduces intelligent capabilities, so in addition to the communication performance, the AI service delay, model performance, data redundancy, overhead, privacy, and other aspects need to be considered [11].
Various studies have investigated how native AI wireless networks can optimize the network itself or provide AI services to third parties [15,16].For these AI services, the accuracy of AI model training is a critical indicator of the QoAIS.Using high-quality data for training can significantly improve the accuracy of the AI model [11,17].However, wireless and computing resources are limited, and more data will lead to more transmission and computing delays.Therefore, the QoAIS needs to include at least two indicators: the accuracy of the AI model and the delay of the AI service.To provide better QoAIS services, a reasonable task scheduling and resource allocation scheme should be designed to optimize the QoAIS.One way is to weigh the above two indicators and propose a single-objective optimization problem.However, when the network protocol configures the QoAIS, each of its indicators may have a threshold value.However, the weights in the single-objective optimization problem are fixed in advance, so it is challenging to select the optimization scheme precisely according to the QoAIS.
On the other hand, the AI models required by AI services are specific.For example, target recognition services for autonomous driving requires models such as the region-based convolutional neural network (R-CNN) and you only look once (YOLO).The operation of these models is based on the corresponding AI development framework (e.g., PyTorch, TensorFlow) and will be equipped with related dedicated AI acceleration hardware [18].Before the AI service is provided, the corresponding environment and hardware need to be pre-configured and installed in the network.Limited by space and cost, it is difficult for a single network node to be equipped with the AI models required by all AI services.Therefore, when designing the task scheduling and resource allocation scheme for AI service, it is necessary to consider the collaboration between network nodes.
To this end, this paper considers a task scheduling and resource allocation scheme for AI training services in 6G native AI wireless networks to optimize the QoAIS, including the accuracy of training AI models and the delay of AI services.According to the wireless channel conditions of the network, the computing resources, and the type of AI model stored by each node, an effective mechanism is needed to select the appropriate data quality, bandwidth allocation, and node to complete the task of the AI service.Because of the conflict between the two indicators of the QoAIS, a single-objective integer programming problem is proposed to optimize the QoAIS.Further, considering the QoAIS configura-tion of network protocols, we transform this problem into a multi-objective optimization problem.Considering the computational complexity, we use the genetic task scheduling and resource allocation (G-TSRA) algorithm and the non-dominated sorting genetic task scheduling and resource allocation (NSG-TSRA) algorithm to solve the proposed problems.The main contributions of this paper are as follows.
•	We formulate the task scheduling and resource allocation of AI training services as a single-objective integer programming problem that jointly optimizes the accuracy of the trained AI models and the delay of the AI services, and we further transform it into a multi-objective problem to facilitate the QoAIS configuration of network protocols.
•	We propose the G-TSRA and NSG-TSRA heuristic algorithms to solve the two problems with acceptable computational complexity, and we verify the feasibility and performance of QoAIS optimization by simulation.
The remainder of this paper is organized as follows. We first present the related work in Section 2. Then, we describe the model of the native AI wireless network for AI training services in Section 3. The single-objective QoAIS optimization problem and the G-TSRA algorithm proposed to solve it are presented in Section 4. Further, we present the multi-objective optimization problem and develop the NSG-TSRA algorithm in Section 5. Finally, we demonstrate the numerical results in Section 6 and conclude this paper in Section 7.
Related Work
Building native AI capabilities in a 6G network can improve operation efficiency, reduce maintenance costs, and enhance user experience.On the other hand, 6G networks can utilize native AI to provide ubiquitous and easily accessible AI services for various industries and users.Driven by such benefits, native AI has recently attracted significant attention from the industry and academia.In [15], Wu et al. proposed the AI-native network slicing architecture, through the synergy of artificial intelligence and network slicing, to promote intelligent network management and support AI services in 6G networks.In [19], Hoydis et al. presented a 6G AI-native air interface designed in part by AI to enable optimized communication schemes for any hardware, radio environments, and applications.In [20], Soldati et al. identified two critical factors for the effective integration and systematization of AI in the future RAN system: the design of AI algorithms must aim to promote the entire RAN environment, and the RAN system must be equipped with an advanced and scalable learning architecture.Due to the current network slicing architecture not being native AI, the heterogeneity of the slicing arrangement is difficult to adapt to the machine learning paradigm.Therefore, Moreira et al. in [21] proposed and evaluated a distributed AI-native slice orchestration architecture that can provide machine learning capabilities in all life cycles of network slices.In [12], the 6G Alliance of Network AI (6GANA) offers the essential technical features needed for the native AI architecture of the 6G network, including the self-generation of use cases, QoAIS, task-oriented scheme, etc.A unified architecture is expected to provide quality-guaranteed AI services for the network and third-party users.
Compared with cloud AI providers, AI services provided by 6G native AI wireless networks have the advantage of guaranteed service quality.In 5G networks, 5QI (5G QoS identifier) is a parameter used to identify different service quality requirements [22].The value range of 5QI is 1-255, and each value corresponds to a set of preset performance values, including default priority level, packet delay budget, packet error rate, etc. Network operators configure QoS according to user requirements and network resource conditions.
According to the combination of different performance values represented by 5QI, the wireless network protocol provides communication services of different qualities, such as low-latency services, high-reliability services, and high-speed broadband services.For AI services provided by 6G networks, the service quality dimensions will be further expanded, such as the delay of AI services, the accuracy of AI models, communication overhead, computing overhead, data privacy, etc.Therefore, studying the available QoAIS optimization methods for wireless network protocols is necessary.
There have been some studies focusing on the training accuracy of AI models.In [23], Liu et al. proposed an improved particle swarm optimization algorithm (LK-PSO), aimed at the scheduling problem of AI data-intensive computing tasks in the Internet of Things, to effectively improve the scheduling performance of AI data-intensive computing tasks in the edge environment From the perspective of edge intelligence systems, Wang et al. in [24] proposed a deep neural network (DNN) layer-partitioning-based fine-grained cloudedge collaborative dynamic task scheduling mechanism to greatly reduce the average task response time and deploy more complex DNN models in cloud-edge systems with limited resources.
Based on the above discussion, although there are currently some studies on AI task scheduling, most focus on optimizing model training in resource-constrained networks and only consider AI task delay.Therefore, this work, driven by native AI, proposes an efficient task scheduling and resource allocation scheme for AI training services.Considering the dynamic changes of wireless networks, heterogeneous resources, and data distribution, this work optimizes the accuracy and completion delay of AI training simultaneously and provides QoAIS multi-dimensional indicators that 6G wireless network protocols can use.
System Description
The architecture of native AI includes the import of user requirements, the analysis of requirements to QoAISs, the full life cycle management and scheduling of the multiple resources of AI tasks, and the final delivery results.In this paper, we focus on scheduling the multiple resources of AI.In this section, we describe the native AI wireless network model for AI training services, including the communication model, the computation model, and the AI training model.
System Model
As shown in Figure 1, we consider a 6G native AI wireless network to provide various AI training services.A set of APs (access points) is distributed in an interested area.The APs are connected through wireless channels.Each AP can cover multiple areas, such as roads, parks, factories, etc. Users in these areas will have different service requirements for AI model training, such as pedestrian detection and fire monitoring.Due to limited local resources, users expect the AP to provide AI training services.
Each AP is equipped with hardware to provide communication, computing, data, and AI model resources for AI training tasks, including antennas, computing servers, and AI model caches.After the AP receives the task request, it can obtain the data required for AI training from the user.For each type of AI training, multiple data quality levels can be selected.As the performances of AI training results, such as DNNs, are closely related to the quality of data used for training, APs can request the highest quality data possible according to the remaining resources to obtain better training results.On the other hand, the type of AI training task corresponds to the type of AI model.Due to the resource capacity limitations, the AI model required for an AI task may only be stored in a few APs.When the AP stores the required AI model resources, the AI model training will be completed locally.When an AP does not store the AI model that matches the task, the AP will transmit the data through its antenna to another qualified AP for processing.In order to obtain global information and output better decisions, a software-defined network (SDN)-enabled controller is installed at a base station (BS) to centrally make the task scheduling and resource allocation decisions over its coverage area.At each time slot, the AP receives AI service requests containing type information and reports them to the BS along with channel conditions.The BS makes task scheduling and resource allocation decisions based on the collected information, and sends the decisions to the corresponding AP.The AP selects the data quality of users' AI tasks and sends them to the corresponding AP for execution based on the decision.
Denote the set of APs as N = {1, 2, . . ., N}. To clearly describe the connection relationship between different APs, we use i and j as the indices of different APs, i, j ∈ N. In time slot t, AP i receives M kinds of AI training tasks. According to the storage of AI model resources, tasks can be processed locally or at a corresponding AP j. For each AP, the set of tasks is denoted by M = {1, 2, . . ., M}. The set of AI models stored by AP j is denoted by π_j. To ensure the successful execution of a task, AP j must store the AI model required by task m, i.e., f_{i,m} ∈ π_j, where f_{i,m} is the type of task m.
x i,m,j ∈ {0, 1} is a binary decision variable that denotes whether the task m of AP i is transmitted to AP j.Each task can only choose one AP to be processed at time slot t, given by ∑ j∈N x i,m,j (t) = 1.
Communication Model
The task scheduling between AP i and j is facilitated through wireless communications.
According to [25], the data transmission rate between APs i and j can be calculated as

R_{i,j}(t) = W_{i,m} \log_2(1 + \mathrm{SNR}_{i,j}(t)),

where W_{i,m} is the bandwidth of AP i allocated to task m and SNR_{i,j}(t) is the signal-to-noise ratio (SNR) between the two APs,

\mathrm{SNR}_{i,j}(t) = p_i h_{i,j}(t) / \sigma^2,

where p_i is the transmission power of each link, h_{i,j}(t) is the channel gain, and σ² is the noise power. Each type of task has different data quality levels b, denoted by B = {1, 2, . . ., B}. The data size of task m in AP i is Z_{i,m}(t) = a_{f_{i,m}} b_{i,m}(t), where a_{f_{i,m}} is the amount of data per unit quality level related to the task type. Hence, the transmission time is given by

T^{\mathrm{trans}}_{i,m}(t) = Z_{i,m}(t) / R_{i,j}(t).
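A small numerical sketch of the rate and transmission-delay expressions above is given below; the bandwidth, transmission power, channel gain, noise power, and per-level data volume are illustrative values only.

```python
import math

def transmission_delay(bandwidth_hz, tx_power_w, channel_gain, noise_power_w,
                       bits_per_level, quality_level):
    """Shannon-rate transmission delay for one task, following the model sketched above."""
    snr = tx_power_w * channel_gain / noise_power_w
    rate_bps = bandwidth_hz * math.log2(1.0 + snr)   # R = W * log2(1 + SNR)
    data_bits = bits_per_level * quality_level       # Z = a * b
    return data_bits / rate_bps                      # T_trans = Z / R

# Illustrative numbers only (50 MHz sub-band; SNR parameters chosen to give roughly 30 dB)
print(f"{transmission_delay(50e6, 1.0, 1e-7, 1e-10, 1e9, 2):.3f} s")
```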
Computation Model
After receiving tasks from other APs, AP j will allocate computing resources to each task according to its requested CPU cycles C_{i,m}(t), where

C_{i,m}(t) = c_{f_{i,m}} Z_{i,m}(t) \tau_{i,m}(t),

c_{f_{i,m}} is the number of CPU cycles required per bit of data, and τ_{i,m}(t) denotes the number of training iterations determined by AP i at time slot t.
The total computing resource of each AP j is Φ_j, which is shared among the tasks scheduled to it; Φ_{j,i,m}(t) denotes the computing resource allocated to task m of AP i. Consequently, the computing delay is calculated as

T^{\mathrm{comp}}_{i,m}(t) = C_{i,m}(t) / \Phi_{j,i,m}(t).
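Similarly, the requested CPU cycles and the resulting computing delay can be evaluated as in the following sketch, again with purely illustrative parameter values.

```python
def computing_delay(cycles_per_bit, data_bits, iterations, allocated_cycles_per_s):
    """Computing delay T_comp = C / Phi with C = c * Z * tau, as described above."""
    required_cycles = cycles_per_bit * data_bits * iterations
    return required_cycles / allocated_cycles_per_s

# Illustrative values: 10 cycles/bit, 2 Gbit of data, 5 local iterations, 1 Gcycle/s allocated
print(f"{computing_delay(10, 2e9, 5, 1e9):.1f} s")
```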
AI Training Model
After allocating computing resources, the AP will use the task data for AI training and output the trained AI model. The training of an AI model is carried out over a large number of data samples. In typical AI training, for a data sample {x_n, y_n} with a multi-dimensional input feature x_n, the goal is to find a model parameter vector ω that represents the labeled output y_n with a loss function loss_n(ω). The loss function of a local dataset with D data samples can be defined as

F_i(ω) = (1/D) Σ_{n=1}^{D} loss_n(ω) + g(ω),

where g(·) is a regularizer function. Denoting ω*_i as the optimal model parameter for AP i, AP i trains its local AI model in an iterative manner [26]. The performance of an AI model can be evaluated using its accuracy, denoted by ϕ ∈ [0, 1]. The accuracy of an AI model is related to the allocated computing resources, the quality/size of the data, the number of training iterations, the learning rate, the algorithm used for training, and so on. Similar to [11], the accuracy ϕ_{i,m,j}(t) of AI model m of AP i processed by AP j depends on the allocated computing resource Φ_{j,i,m}(t), the data quality, and the number of iterations, with weight factors ς_lc and v and a learning factor α that reflects the marginal revenue of iterations and depends on the selected learning algorithm.
To assess the quality of the solution, a local accuracy ϕ is defined: achieving ϕ = 1 requires finding the exact optimum, while ϕ = 0 means that no improvement is achieved at the AP.
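To make the iterative local-training step concrete, the sketch below minimises a regularised least-squares loss by plain gradient descent; the specific loss, learning rate, and synthetic data are stand-ins for the unspecified loss_n(ω) and training setup of the model.

```python
import numpy as np

def local_training(X, y, lam=0.01, lr=0.05, iterations=200):
    """Iterative minimisation of an L2-regularised least-squares loss
    F(w) = (1/D) * sum_n (x_n.w - y_n)^2 + lam * ||w||^2, by plain gradient descent."""
    D, d = X.shape
    w = np.zeros(d)
    for _ in range(iterations):
        grad = 2.0 / D * X.T @ (X @ w - y) + 2.0 * lam * w
        w -= lr * grad                      # w^(k+1) = w^(k) - lr * grad F(w^(k))
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=500)
print(local_training(X, y).round(2))
```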
Problem Formulation
We formulate the AI task scheduling and wireless resource allocation problem in the 6G native AI wireless network.The objective is to optimize the quality of AI training services, including delay and accuracy.The BS maintains resource information for all APs, including the available bandwidth, computing power, and the types of AI models.At each time slot, the BS makes task scheduling and resource allocation decisions and sends the decisions to APs.
To maximize the total accuracy of the trained AI models and minimize the total delay of AI training tasks simultaneously, the optimization problem can be transformed into a single-objective problem by assigning different weights to each objective.
Subject to constraints (12)–(16), where α is a weight parameter to balance the trade-off between the delay and the accuracy. Constraint (13) indicates that one task can only be transmitted to one AP; (14) is the bandwidth constraint; and (15) is the data quality constraint. Moreover, (16) indicates that an AP must cache the corresponding AI model when processing a task.
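Since the original objective (11) and constraints (12)–(16) are not reproduced in this text, the following LaTeX fragment sketches one plausible weighted-sum formulation consistent with the surrounding description; the notation T^trans, T^comp and the exact form of each constraint are reconstructions, not the paper's expressions.

```latex
% Illustrative weighted-sum objective (a reconstruction, not the paper's exact Eq. (11))
\min_{x,\,b,\,W}\;
  \alpha \sum_{i\in\mathcal{N}}\sum_{m\in\mathcal{M}}\sum_{j\in\mathcal{N}}
    x_{i,m,j}(t)\,\bigl(T^{\mathrm{trans}}_{i,m}(t)+T^{\mathrm{comp}}_{i,m}(t)\bigr)
  \;-\;(1-\alpha)\sum_{i\in\mathcal{N}}\sum_{m\in\mathcal{M}}\sum_{j\in\mathcal{N}}
    x_{i,m,j}(t)\,\phi_{i,m,j}(t)
\quad\text{s.t.}\quad
  \sum_{j\in\mathcal{N}} x_{i,m,j}(t)=1,\;\;
  \sum_{m\in\mathcal{M}} W_{i,m}\le W_i,\;\;
  b_{i,m}(t)\in\mathcal{B},\;\;
  x_{i,m,j}(t)=1 \Rightarrow f_{i,m}\in\pi_j,\;\;
  x_{i,m,j}(t)\in\{0,1\}.
```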
Genetic Task Scheduling and Resource Allocation Algorithm
A genetic algorithm [27] is a heuristic algorithm based on natural selection and natural genetics that can find an optimal solution in a limited time.Therefore, we propose the genetic task scheduling and resource allocation algorithm (G-TSRA) to solve the single objective QoAIS optimization problem.
In the proposed problem, each solution is encoded as an individual that specifies, for each task, the index of the AP that processes it, the selected data quality level, and the allocated bandwidth. The quality of each individual is evaluated by the objective function (11), which represents its degree of fitness to the environment. Multiple individuals form a population and evolve according to the principle of "survival of the fittest".
During evolution, the size of the population remains constant.Individuals are selected according to their quality, then crossover and mutation operations are performed to form a new population.Through continuous iterations, the optimized solution is finally obtained.
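As a concrete and deliberately simplified illustration of the G-TSRA encoding and loop described above, the sketch below evolves individuals that assign an AP, a data quality level, and a bandwidth share to each task; the fitness function, problem sizes, and genetic-operator settings are placeholders rather than the paper's.

```python
import random

# Minimal G-TSRA-style genetic loop (sketch). Each individual encodes, per task,
# (target AP index, data quality level, bandwidth share). Fitness is a toy proxy.
N_APS, N_TASKS, N_QUALITY, N_SUBCH = 5, 10, 3, 4

def random_individual():
    return [(random.randrange(N_APS), random.randrange(1, N_QUALITY + 1),
             random.randrange(1, N_SUBCH + 1)) for _ in range(N_TASKS)]

def fitness(ind):
    # Placeholder objective: prefer high quality (proxy for accuracy) and low "delay"
    delay = sum(q / w for _, q, w in ind)
    accuracy = sum(q for _, q, _ in ind) / (N_TASKS * N_QUALITY)
    return -0.5 * delay + 0.5 * accuracy              # higher is fitter

def crossover(a, b):
    cut = random.randrange(1, N_TASKS)
    return a[:cut] + b[cut:]

def mutate(ind, p=0.1):
    return [gene if random.random() > p else random_individual()[0] for gene in ind]

pop = [random_individual() for _ in range(100)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:50]                                 # truncation selection keeps the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(50)]
    pop = parents + children
print("best fitness:", round(fitness(max(pop, key=fitness)), 3))
```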
Problem Formulation
In the 6G native AI wireless network, to guarantee the QoAIS, each type of indicator needs to meet its requirements.However, in the single objective QoAIS optimization problem, the weight value is pre-configured, and it can only be judged whether each indicator meets the requirements after the algorithm is executed.Multiple tunings will be required, and the above operations must be repeated when the optimization requirements change.
To solve this challenge, using a multi-objective evolutionary algorithm to obtain the Pareto-optimal solution set is an effective method.When receiving QoAIS requirements, it can directly select a solution that meets the requirements according to the Pareto-optimal solution set.Even when QoAIS requirements change, the system does not need to re-run the algorithm.
In the following, we formulate the task scheduling and resource allocation problem as a multi-objective integer programming problem (20), in which the total delay of the AI tasks and the total relative model performance are minimized simultaneously, subject to constraints (12)–(16).
Non-Dominated Sorting Genetic Task Scheduling and Resource Allocation Algorithm
To solve the proposed multi-objective optimization problem of the 6G native AI wireless network with relatively low computational complexity, we designed an NSG-TSRA algorithm based on the idea of the non-dominated sorting genetic algorithm II (NSGA-II) [28].Before presenting the details of the NSG-TSRA algorithm, we first introduce two key approaches: a fast non-dominated sorting approach and a crowding-comparison approach.
Multiple optimization goals are often in conflict with each other.Therefore, a multiobjective optimization algorithm will involve a collection of optimal solutions.Hence, without additional conditions, there is no significant difference between the solutions in the set.
Fast Non-Dominated Sorting Approach
In the NSG-TSRA algorithm, fast non-dominated sorting involves dividing the population O = {1, 2, . . ., o} into several layers according to the dominance relationship.The function of this approach is to guide the search toward the Pareto-optimal solution set.Each individual o is a solution, which consists of an AP index, data quality, and bandwidth allocation for each task.
To facilitate uniform optimization, the objective of maximizing model accuracy needs to be translated into minimizing the relative model performance.Since 0 ϕ i,m,j 1, we have When an individual does not satisfy the constraints, a penalty value can be added to the objective function.The first layer is the set of non-dominated individuals in the population; the second layer is the set of non-dominated individuals obtained after removing individuals in the first layer, and so on.As shown in Figure 2 Then, for each individual in F 1 , the domination counters of its dominant solutions are subtracted by one.If the domination counter is 0, this domination will be added to F 2 .After multiple iterations, the individuals in S o are iteratively divided into different layers according to their rank.Eventually, sorted layers (F 1 , F 2 , ...) are obtained.The total complexity of finding all members of the different non-dominated levels in the population is O(K(2O) 2 ), where K is the number of objectives.Hence, the worst-case complexity of fast non-dominated sorting is O(K(2O) 2 ).
Crowding-Comparison Approach
In the NSG-TSRA algorithm, the crowding-comparison approach is adopted to maintain the diversity of the population, which can reduce the time complexity compared to the sharing function approach in NSGA.This approach consists of the density estimation and the crowding-comparison operator.
Crowded-comparison operator: After completing the fast non-dominated sorting and density estimation, we obtain the non-dominated rank rank o and the crowding distance I(o).To achieve a wider distribution of the Pareto-optimal set, the crowding-comparison operator is used to select individuals according to two conditions, as follows.Let ≺ * denote an order of comparison.The details of the crowding-comparison approach are shown in Algorithm 2.
Condition 1: A smaller rank means that the individual is closer to the Pareto front.Therefore, individuals with lower ranks will be selected.
Condition 2: When two individuals have the same rank, a larger crowding distance means that the individual is more dispersed from other individuals.Therefore, individuals with larger crowding distances will be selected.
The worst-case complexity of the crowding-distance assignment is O(K(2O) log(2O)). The NSG-TSRA algorithm adopts the ideas of elitism and tournament selection. The detailed procedure of the NSG-TSRA algorithm is shown in Algorithm 2.
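For completeness, a generic implementation of the crowding-distance (density estimation) step used by the crowding-comparison operator is sketched below; the objective values are illustrative.

```python
def crowding_distance(objectives, front):
    """Crowding distance of the individuals in one front (NSGA-II density estimation).
    objectives: list of tuples (all minimised); front: list of indices into objectives."""
    n_obj = len(objectives[0])
    dist = {o: 0.0 for o in front}
    for k in range(n_obj):
        ordered = sorted(front, key=lambda o: objectives[o][k])
        obj_min, obj_max = objectives[ordered[0]][k], objectives[ordered[-1]][k]
        dist[ordered[0]] = dist[ordered[-1]] = float("inf")    # boundary points are always kept
        if obj_max == obj_min:
            continue
        for i in range(1, len(ordered) - 1):
            gap = objectives[ordered[i + 1]][k] - objectives[ordered[i - 1]][k]
            dist[ordered[i]] += gap / (obj_max - obj_min)      # normalised crowding degree
    return dist

objs = [(3, 0.2), (2, 0.4), (4, 0.1), (2, 0.3)]
print(crowding_distance(objs, front=[0, 1, 2, 3]))
```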
When the algorithm is executed, the initial population O(t) t=0 is randomly generated and sorted by the fast non-dominated sorting approach.The elitism strategy involves retaining the best individuals in the current population to the next generation population without additional genetic operations.To implement this strategy, an offspring population Q(t) t=0 is created by selection, crossover, mutation, and other operations.Then, O(t) and Q(t) are combined to generate the expanded population, R(t).R(t) is sorted by the fast non-dominated sorting approach shown in Algorithm 1 and the divided layers (F 1 , F 2 , ...) are obtained.
After the division, individuals will be sequentially selected, starting from the first layer F 1 until the entire population O(t + 1) is filled.However, the size of R(t) is 2 times that of O(t + 1).Assume that it is not possible to put all the individuals of the vth layer F v into O(t + 1) during the filling process.The crowding-comparison approach is used to sort F v .Individuals will be sequentially added to the next population O(t + 1) according to the crowding distance until the number of individuals in the population reaches O.The remaining solutions are deleted.The worst-case complexity of sorting on ≺ * is O(2O log(2O)).
Finally, for O(t + 1), the tournament selection and crossover and mutation operations are used to create the new population, Q(t + 1).Here, the tournament selection operation is according to the crowded-comparison operator.Through continuous iteration, the algorithm finally outputs the approximate solution of the Pareto-optimal set.Considering the time complexity of fast non-dominated sorting, crowding comparison, and sorting on ≺ * , the overall complexity of the NSG-TSRA algorithm is O(E • K(2O) 2 ), where E is the number of iterations.After the approximate set is obtained, the solution can be flexibly selected according to the need.
Numerical Results and Discussion
In this section, we simulate the performance of the proposed single and multi-objective QoAIS optimization scheme for AI training services in the 6G native AI wireless network.Specifically, the number of APs is 5, and each AP can accept 2 different types of AI tasks.There are three quality data levels, and the corresponding data size is [1,2,3] Gbit.The computing capacity of each AP is randomly distributed in [0.5, 3] Gcycle/s.Each AP cache has two or three types of AI models, and the set of AI models in all AP caches meets the needs of all tasks.The SNR between two APs is randomly distributed in [20,40] dB.The bandwidth of each AP is 200 MHz and the number of sub-channels is 4.
For the parameter settings of the G-TSRA and NSG-TSRA algorithms, the population size is 100.The number of iterations is 1000 for G-TSRA and 200 for NSG-TSRA.The number of genes in each individual and the value range of each gene are set according to the above network parameters.The crossover probability parameter is 2. The probability of mutation is 0.1 for G-TSRA and 0.08 for NSG-TSRA.
Figure 3 shows the performance of G-TSRA as the α weight changes.α is randomly distributed in [0.01, 1].The algorithm converges around 250 iterations, and the delay and accuracy performances are obtained.The delay gradually increases as the weights decrease while the relative model performance decreases.This is because the weight belongs to the delay, so the weight reduction means that the delay's importance is gradually reduced compared to the relative model performance.The algorithm will tend to optimize the relative model performance.The figure shows that the performance changes drastically when the weight value drops from 0.08 to 0.07, while the change between 1 and 0.6 is relatively stable.Therefore, the performance does not change continuously and smoothly with the weight value, so it is impossible to determine the required QoAIS by presetting the weight value before the algorithm is executed.Figure 4 shows the convergence of the proposed NSG-TSRA algorithm.Since the proposed optimization problem is multi-objective, the output of each iteration is a set of solutions.Thus, the convergence trend is shown by calculating the average delay and the average relative model performance of the population, but each value is the sum of the delay and relative model performance of the 10 tasks.At the initial population, both latency and relative model performance are high.As the number of iterations increases, the performances of the two optimization objectives gradually decrease in the fluctuations.Due to exploration, the latency is minimized at 110 iterations at the cost of training accuracy.Finally, the performance reaches convergence at 190 iterations.In Figure 5, we investigate the performances of G-TSRA, NSG-TSRA, the multiobjective evolutionary algorithm based on decomposition (MOEA/D) [29], and the greedybased scheme.The greedy-based scheme selects the training node for the AI task based on the product of the computing performance of the node and the channel conditions from this node to all other nodes.Moreover, the maximum selection numbers are set for each training node, which can prevent a decline in performance due to the large number of tasks selected for the same node.Figure 5 shows that the performance of the Pareto solution of the NSG-TSRA-based scheme is better than that of the greedy-based scheme.The performances of G-TSRA and MOEA/D are slightly worse compared to NSG-TSRA.
In Figure 6, we investigate the performances under different data sizes, which are [0.5, 2.5], [1,3], and [1.5, 3.5] Gbit.The delay in task completion increases with the data size, but the accuracy of the trained model also increases.Since the value ranges of the data sizes partially overlap, the data sizes are similar under some specific data quality selections.Therefore, the different curves will partially overlap.In Figure 7, we investigate the performances under different bandwidths, which are 150, 200, and 300 MHz.With the same relative model performance, the task completion time decreases as the bandwidth of the AP increases.Since the bandwidth mainly affects the transmission rate, even if the transmission delay has an impact on the selection of APs, the impact on the performance of the final relative model is still limited.Under the parameter settings of different bandwidths, there is no overlapping part of the Pareto solution sets.In Figure 8, we investigate the performances under different numbers of APs, which are 4, 5, and 6.The computing resources and channel conditions of APs are generated randomly, with the average AP computing resources and channel gain gradually decreasing.With the same relative model performance, the task completion time increases with the number of APs.The analysis is as follows: as the number of APs increases, the resources in the network increase.However, the number of requests also increases, and the decline in average resources caused by random generation in the simulation leads to a decrease in overall performance.Based on the above analysis, the multi-objective QoAIS optimization scheme performs better than the single-objective optimization scheme.It can output an unbiased solution set that is more suitable for the QoAIS guarantee in 6G native AI wireless networks.
Figure 1. Native AI wireless network architecture for AI training services.
As shown in Figure 2, the 12 individuals are assigned to the corresponding fronts, and individuals with the same color are on the same front. The details are shown in Algorithm 1. Each individual has two parameters: the domination count κ_o and the domination set S_o. κ_o represents the number of individuals that dominate individual o, and S_o represents the set of individuals dominated by individual o. First, the algorithm calculates κ_o and S_o for each individual according to the dominance relationship and obtains the first non-dominated front F_1. Specifically, if o dominates l, l is added to the set of solutions dominated by o; otherwise, the domination counter of o is increased. If the domination counter of o is zero, o belongs to the first front.
Algorithm 1. Fast non-dominated sorting algorithm (input: population O).
Density estimation: The crowding distance estimates the density of an individual surrounded by others in the population. First, the non-dominated individuals of each layer are arranged according to the value of each objective function in ascending order. Then, the crowding degree of individual o under objective k can be quantified as the difference in objective value between its two adjacent individuals (o + 1 and o − 1) in the same layer. The crowding distance I of each individual is the sum of its crowding degrees under each objective function. Moreover, the distance values need to be normalized before summing, where Obj_k^max and Obj_k^min are the maximum and minimum values of objective k in the population.
Figure 3. Impact of the weight on the G-TSRA performance.
Figure 4. The convergence performances of different objectives.
Figure 7. The performances under different bandwidths.
Figure 8. The performances under different AP numbers.
Algorithm 2. Non-dominated sorting genetic task scheduling and resource allocation algorithm.
Input: task m of each AP, model storage π_j, channel gain h_{i,j}, computing resource Φ_j
1: Initialize the parameters
2: Generate an initial population by random means
3: Obtain R(t) = O(t) ∪ Q(t)
4: Rank the population R(t) by the fast non-dominated sorting approach
5: Set O(t + 1) = ∅ and s = 1
6: while |O(t + 1)| + |F_s| < O do | 7,634.2 | 2023-08-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Bayesian and machine learning-based fault detection and diagnostics for marine applications
ABSTRACT Marine maintenance can improve ship performance by leveraging predictive maintenance, Machine Learning and Data Analytics. This paper aims to enrich the literature, by developing a novel framework for ship diagnostics based on operational data and the probability of faults. Moreover, the framework can identify the root cause of developing faults avoiding black-box Neural Networks, and complex physics-based models. This research integrates Machine Learning-based Fault Detection, Exponentially Weighted Moving Average control charts, and Bayesian diagnostic networks which allow the examination of the rate of development (fault profile) of faults and failure modes. For validation, the case study of a marine Main Engine is used to examine faults in the engine’s Air Cooler and Air and Gas Handling System. It is concluded that any simultaneous abnormal deviations in the Main Engine’s Exhaust Gas Temperature are more likely to be caused by a fault in the Air and Gas Handling System.
Introduction
Machinery Fault Detection (FD) and diagnostics are integral parts of modern predictive maintenance and they are used to provide accurate predictions for targeted maintenance (Kobbacy 2008;Mohanty 2015;Karatug and Arslanoglu 2020;Karvelis et al. 2021). As a result, FD and diagnostics have a positive impact on safety as discussed by Turan et al. (2009), Lazakis et al. (2010) and Dikis et al. (2015) and can be viewed as hazard mitigation tools (EMSA 2018).
There is substantial literature from many different sectors (e.g. offshore, nuclear) demonstrating the benefits (i.e. fast and accurate models) of regression-based Expected Behaviour (EB) modelling for FD. Zaher et al. (2009) used Artificial Neural Networks (ANN) in EB models for FD in the gearbox of a wind turbine based on operational data. Moreover, Schlechtingen and Santos (2014) compared the effectiveness of polynomial regression models and ANN in EB models for FD. Both types of models exhibited good performance in identifying faults in the stator and gearbox of a wind turbine. Likewise, Schlechtingen et al. (2013) used ANN for the detection of structural and mechanical faults in wind turbines. Bangalore and Patriksson (2018) used an ANNbased EB model for FD in critical components of wind turbines and for optimal maintenance planning. Similarly, the Exponential Weighted Moving Average (EWMA) has been deployed in a variety of cases for the detection of sensitive faults, filtering of noise and trend-checking (Isermann 2006;Garoudja et al. 2017). Harrou et al. (2015) used EWMA in an FD methodology for distillation columns. Moreover, Badodkar and Dwarakanath (2017) detected broken teeth in mechanical gearboxes, from their early onset, by smoothing acceleration signals with the EWMA. Also, Nounou et al. (2018) proposed an EWMA-based predictive maintenance scheme for photovoltaic panels using performance parameters (voltage, current, etc.). Lastly, Adegoke et al. (2019) developed an EWMA-based FD methodology for the manufacturing sector. Diagnostic models can be categorised into (a) physics-based models, (b) data-driven models and (c) knowledge-based models (Jardine et al. 2006;McKee et al. 2014;Coraddu et al. 2021), based on their underlying algorithms. Under the scope of this work, knowledge-based models are examined due to their advantages. In detail, they have the distinct benefit of mimicking specialists' reasoning while effectively handling uncertainties, and not resorting to time-consuming physics-based, or black-box ANN models (McKee et al. 2014). In general, knowledge-based models have many implementations, but the more prominent approaches are based on Bayesian Networks (BN) (Chojnacki et al. 2019;Wang et al. 2019).
Knowledge-based BN are extremely popular in diagnostic tasks due to their compact nature, consistency, and modularity (Cai et al. 2017;Babaleye and Kurt 2020). For instance, Riascos et al. (2007) used BN for the diagnosis of faults in a proton exchange membrane fuel cell. Also, Diakaki et al. (2015) developed a model for route optimisation and fault localisation based on BNs. Atoui et al. (2015) developed a BN-based classifier for the detection and diagnosis of faults in chemical process plants. Moreover, Zhao et al. (2017) created a BN for the diagnosis of more than 27 faults in industrial air handling units, demonstrating the versatility of BN, as they make use of data fusion. The versatility and accuracy of BNs are demonstrated in the work of Wang et al. (2017), through the development of a diagnostic network for chiller units. Ami et al (2018), looked at the development of dynamic BN for FD and root-cause analysis for chemical process plants, generating evidence for collected data under the assumption of a Gaussian distribution. The effectiveness of BNs for diagnostic tasks, and consequently improved safety, is demonstrated in numerous research efforts applied in different industries, but with limited applicability in the maritime (Riascos et al. 2007;Tantele and Onoufriou 2009;Atoui et al. 2015;Cai et al. 2016;Zhao et al. 2017).
The maritime industry is taking quick steps to benefit from the applications of machine learning and data analytics. There are many examples of machine learning applications tackling a plethora of issues ranging from fault detection to route optimisation Raptodimos and Lazakis 2018;Iraklis Lazakis et al. 2019;Cheliotis et al. 2020;Tan et al. 2020). However, the application of modern diagnostic tools for shipboard systems is very limited and underdeveloped. For example, Silva et al. (2018) used two-dimensional wavelet transforms for the diagnosis of faults in the electrical system of a ship. Similarly, Campora et al. (2018) developed an ANN and a thermodynamic model for fault diagnostics of naval gas turbines. Moreover, Korczewski (2016) investigated the use of the Main Engine (ME) Exhaust Gas Temperature (EGT) in thermodynamic models for the diagnosis of engine internal faults. Homik (2010) used vibration signals in an FD and diagnostics methodology for torsional vibration dampers and marine ME crankshafts. Finally, Ranachowski and Bejger (2005) studied the use of wavelet analysis from acoustic signals for the diagnosis of common faults in the fuel injection systems of a marine diesel engine.
Comparison and gaps
From the previously cited literature, the following inferences regarding maritime diagnostics can be made. Overall, the size and quality of available datasets are not uniform as only sparse datasets are available for each ship application. Consequently, the algorithms included in the developed frameworks should be streamlined with these issues in mind.
It is observed that ANNs form the core of most EB models for FD. Despite their good performance, they require large datasets for training which are not readily available in the maritime industry. Also, ANNs are black-box approaches making it challenging to impart domain knowledge. The core approach for the EB model should serve the application and address the characteristics of the available data. In cases with sparse data and requirements for precise results, regression-based EB models should be considered.
Also, the use of EB models for FD is advantageous compared to the alternative classification approaches. With EB models there is greater flexibility in the selection of the underlying algorithms. In classification, in the absence of recorded faulty data one-class Support Vector Machines (SVM) are the standard choice, with few alternatives. Moreover, EB models are easier to integrate with diagnostic tasks.
In addition, most diagnostic models are physics-based which even though perform well they are time-consuming to develop. Likewise, data-driven diagnostic models also exhibit good behaviour, but they depend on large training datasets which are scarce in the maritime industry. On the contrary, knowledge-based diagnostics, including BNs, offer accurate performance without requiring lengthy set-up times or large amounts of training data. Furthermore, knowledge-based diagnostics are modular which improves their compatibility with FD modules and makes it easier to expand in other engineering systems.
From the above, it is inferred that the area of marine systems diagnostics is still under active development. Diagnostic efforts in the maritime field are very limited and cover only a few of the available systems. There is a gap for the development of a novel diagnostic framework that tackles the previously mentioned issues while benefiting from regression-based EB modelling, EWMA control chart, and BNs diagnostics and addressing the particular needs of maritime predictive maintenance. Also, there is a distinctive gap in the application of such a framework for ship propulsion plants.
Novelty and impact
The innovation of this work lies in the development of a novel framework that combines a machine learning based FD module with a BN-based diagnostic module. The FD module includes the pre-processing of data, multiple Polynomial Ridge Regression (PRR) for the development of an EB model, and EWMA control charts for the analysis of the residuals and the detection of faults. The FD module is combined in a novel way with the diagnostic module which includes the mapping of faults and the construction of a BN. Evidence of detected faults are propagated in a BN network. Then, the BN outputs quantified probabilities of the mapped faults together with the fault profiles of different failure modes.
By applying the novel diagnostic framework the maritime industry will benefit from (a) the ability to capture previously unseen faults based on the EB model, (b) the use of EWMA control charts to accurately detect developing faults (c) the ability to monitor realtime the development of faults and assess the condition of the selected system and (d) the practical and interoperable integration of FD with diagnostic tasks in a holistic novel framework catering to the needs of the maritime industry. Overall, the developed framework will allow the real-time detection and diagnosis of developing faults. This has a tangible impact on the effectiveness of maintenance planning and operational efficiency of ships. This will also enhance safety and reduce unavailability by creating time for pre- emptive actions. As seen, this novel framework can be used by ship operators to contribute to the safe and efficient operations of vessels.
In the rest of the paper, Section 2 discusses the details and novelty of the developed methodology, Section 3 presents the case study together with the results, and Section 4 provides the overall conclusions and future work.
Proposed methodology
The goal of this study is to establish a novel Bayesian and machine learning based diagnostic framework for practical ship system applications. To that end, machine learning tools are used for data pre-processing and the creation of an EB model. Subsequently, these are combined with EWMA control charts in a novel integration with fault mapping and BNs.
To fulfil the above, four interconnected phases are developed ( Figure 1). These phases are: (1) Data collection: comprising of the data gathering efforts from a data acquisition system and other sources.
(2) Data preparation: comprising of the pre-processing tasks, such as outlier detection, data filtering and correction to reference conditions (3) Fault detection: comprising of the development and application of the PRR model and EWMA control chart (4) Diagnostic Set-up: comprising of the pairing between monitored variables and corresponding faults and the creation of BN structure.
Data collection
Data collection is the initiating phase of the methodology that outputs a database with historic data (Figure 1), which is used throughout the methodology. The gathered data originate from a commercial data acquisition system installed onboard a merchant vessel. Normally, data acquisition signals for FD and diagnostic tasks of engineering systems include performance parameters (power outputs, speeds, etc.). Moreover, the sampling frequency can range from one recording per second to one recording per five minutes.
Data preparation and pre-processing
Data preparation is the essential second phase of the methodology, which includes the pre-processing tasks necessary to prepare and enhance the gathered data for the next steps of the analysis, as discussed by, Tanasa and Trousse (2004), Kotsiantis et al. (2006) and Cheliotis et al. (2019). The processes of this phase are performed using the Python programming language, and its output is the transformation of the collected historic data into 'cleaned' processed inputs. The data preparation phase initiates with form handling and units checking of the data and then proceeds with the use of the Density-Based Spatial Clustering of Application with Noise (DBSCAN) algorithm for outlier and transient-state detection and removal. Outliers are sparse data points with considerably dissimilar values caused by sensor errors and instrumentation faults. It should be clarified that outliers are not part of fault suggestive patterns. The DBSCAN algorithm is an effective tool for the identification of outliers and other ill-recorded information, as demonstrated by many studies (Chen and Li 2011;Çelik et al. 2011;Thang and Kim 2011). DBSCAN is a density-based spatial algorithm that examines each point in the dataset to identify dense areas of points (clusters). The clusters are defined by the minimum number of points required to form a cluster (minP) and the maximum distance between points for them to be considered in the same cluster (1) (Chen and Li 2011;Çelik et al. 2011;Thang and Kim 2011). Finally, additional value-based data filtering takes place, which ensures that the data represent only operational periods and the remaining data are corrected to account for the environmental influences, as per standard practices (ISO 2008; MAN B&W 2014).
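A minimal sketch of the DBSCAN-based outlier-removal step is shown below using scikit-learn; the two features, the eps/min_samples settings, and the injected outliers are hypothetical and not the values used in the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Minimal outlier-removal pass with DBSCAN, as used in the data-preparation phase.
# Feature choices and eps/min_samples values are illustrative, not the study's settings.
rng = np.random.default_rng(42)
normal = rng.normal(loc=[70.0, 12000.0], scale=[2.0, 300.0], size=(500, 2))   # e.g. load %, power kW
outliers = np.array([[40.0, 20000.0], [95.0, 2000.0]])                         # sensor spikes
data = np.vstack([normal, outliers])

scaled = StandardScaler().fit_transform(data)                # DBSCAN is distance-based, so scale first
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(scaled)
cleaned = data[labels != -1]                                 # label -1 marks noise/outlier points
print(f"removed {len(data) - len(cleaned)} outliers out of {len(data)} records")
```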
Fault detection
The FD phase follows the data preparation phase and includes the EB model and the EWMA control chart. An EB model is developed to predict the expected behaviour of a specified variable, based on certain inputs. Then, the output of the EB model is compared with incoming data of the variable to obtain the residuals. The residuals are finally analysed in the EWMA to detect any faults. Once the results from the fault detection phase are obtained, they are aggregated and used as input in the diagnostic set-up phase of the methodology.
EB model development
For the EB model, multiple PRR is used due to its accurate and reliable performance, as reviewed by Olive (2005), Bowerman et al. (2015) and Cheliotis et al. (2020). The training, validation and testing performance of the model is assessed using the R² metric. In general, the developed polynomial regression model for the specified variable has the form shown in Equation (1). That equation represents the general form of a zth-order polynomial using two predictor variables (x_1, x_2), having w_0 to w_p as the regression coefficients, b as the axis intercept and ŷ as the prediction for the target (specified) variable.
From the available data, a segment is used for training and validation to fit and fine-tune the algorithm. During training, sets of known predictor variables (x_1, x_2) together with target variables (y) are used as input in Equation (1) to obtain estimates for the regression coefficients and the axis intercept. The training proceeds with the minimisation of the objective function shown in Equation (2). The objective function of the ridge regression includes the term α‖w‖²₂, which limits the magnitude of the regression coefficients to avoid overfitting. The user-specified hyperparameter, α, controls the amount of L2 regularisation. For the determination of the optimum α value, k-fold cross-validation was used, as detailed in Müller and Guido (2015) and Cheliotis et al. (2020).
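Equation (2) itself is not reproduced here; for orientation, a standard form of a ridge-regularised least-squares objective (an assumed, generic form whose normalisation may differ from the paper's Equation (2)) is

\[
\min_{\mathbf{w},\,b}\;\sum_{k=1}^{N}\Bigl(y_k - b - \sum_{j=1}^{p} w_j\, z_{kj}\Bigr)^{2} \;+\; \alpha\,\lVert \mathbf{w}\rVert_2^{2},
\]

where the z_{kj} denote the polynomial features built from (x_1, x_2) and α is the hyperparameter tuned by k-fold cross-validation.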
The general working process that is followed for the development of the EB model is shown in the following pseudocode (Algorithm 2.3.1). This algorithm requires as inputs the predictor and target variables. It also requires the number of folds for the k-fold cross-validation and the size of the test set. Lastly, the set of the considered values for the model's hyperparameters is given. Algorithm 2.3.1 represents the generalised process for the development of a supervised model, including the optimisation of one of its hyperparameters.
Algorithm 2.3.1 (Cheliotis et al. 2020)
Require: X, Y, TrainingSet, k and a list of hyperparameters h_i, for i between 1 and n
1: X_polynomial ⃪ derive the polynomial features of X
2: Augment X with X_polynomial
3: X_TrainValidate, X_Test, Y_TrainValidate, Y_Test ⃪ split and normalise X and Y based on TrainingSet
The performance of the resulting model is quantified with the R² metric of Equation (3). In Equation (3), ŷ_k represents the predicted value of the kth instance of the target variable and y_k is the actual value of the kth instance of the target variable.
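The following is a compact Python/scikit-learn sketch of the workflow that Algorithm 2.3.1 describes (polynomial feature construction, train/test split, k-fold tuning of α, test-set R²); the fold count, hyperparameter grid and pipeline details are illustrative assumptions rather than the authors' code.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split, GridSearchCV

def fit_eb_model(X, y, degree=5, alphas=(0.1, 0.3, 0.6, 1.0), k=5, test_size=0.2):
    """Fit a polynomial ridge regression expected-behaviour model.
    X: predictors (e.g. scavenging air pressure, shaft speed); y: target (e.g. EGT)."""
    X_trainval, X_test, y_trainval, y_test = train_test_split(
        X, y, test_size=test_size, shuffle=False)
    pipe = make_pipeline(PolynomialFeatures(degree), StandardScaler(), Ridge())
    search = GridSearchCV(pipe, {"ridge__alpha": list(alphas)}, cv=k, scoring="r2")
    search.fit(X_trainval, y_trainval)
    best = search.best_estimator_
    return best, search.best_params_, best.score(X_test, y_test)  # test-set R^2

# Hypothetical usage: eb_model, params, test_r2 = fit_eb_model(X, y_cyl1)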
Using residuals for FD is a proven and effective strategy. By comparing the ideal value of a variable with the incoming data we can quantify any variations and classify them, given a set of operating conditions (Martinez-Guerra and Luis Mata-Machuca 2013; Harrou et al. 2015).
The residuals are then used to construct the EWMA control chart. Equation (4) calculates the EWMA statistic (q) for all of the k instances of the target variable, with q_0 representing the mean value of the specified variable in the incoming data. The calculation of the EWMA statistic also requires r_k, together with the user-defined smoothing parameter, λ. In this paper, the smoothing parameter was given the value of 0.4, according to common practice and following the suggestions of Badodkar and Dwarakanath (2017) and Cheliotis et al. (2020).
Lastly, the Upper Control Limit (UCL) and Lower Control Limit (LCL) are calculated. These limits provide the basis for the detection of faults, as any point that exceeds them signifies a fault. In other words, the UCL and LCL form the envelope of normal operations for the selected variable. In Equations (5) and (6), the mean value of the specified variable in the incoming data (m_0) and the standard deviation (s) are used. Lastly, L signifies the width of the control chart and is empirically given a value of 3. Since the value of L affects the envelope of normal operations, it must be appropriately selected so that it correctly classifies normal operating points. As the recorded data represent 'healthy' operating points, the resulting envelope must fully enclose all the data points.
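A minimal numpy sketch of the EWMA statistic and control limits is shown below; because Equations (4)-(6) are not reproduced here, the limits use the common asymptotic form L·s·sqrt(λ/(2−λ)) around m_0, and computing m_0 and s from the residual series (rather than from the raw variable) is likewise an assumption made for this sketch.

import numpy as np

def ewma_chart(residuals, lam=0.4, L=3.0):
    """EWMA statistic plus upper/lower control limits for a residual series."""
    r = np.asarray(residuals, dtype=float)
    mu0, sigma = r.mean(), r.std(ddof=1)      # assumed estimates of m_0 and s
    q = np.empty_like(r)
    prev = mu0                                # q_0: mean of the incoming data
    for k, rk in enumerate(r):
        prev = lam * rk + (1.0 - lam) * prev  # q_k = lambda*r_k + (1-lambda)*q_{k-1}
        q[k] = prev
    half_width = L * sigma * np.sqrt(lam / (2.0 - lam))   # assumed asymptotic limits
    ucl, lcl = mu0 + half_width, mu0 - half_width
    alarms = (q > ucl) | (q < lcl)            # points outside the normal-operation envelope
    return q, ucl, lcl, alarms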
Diagnostic set-up
Diagnostic set-up is the next phase of the methodology and uses as input aggregated results from the fault detection. The goal is to use real-time information to produce accurate probabilities of different faults occurring in the system. For that purpose, this phase initiates with the fault mapping, which includes the pairing between monitored variables and corresponding faults; then, the structure of the diagnostic BN is determined.
Fault mapping
Fault mapping is a very important task as it identifies the potential faults that can be diagnosed in a selected system, and the variables required for their diagnosis. Alongside the required variables, the acceptable range and behaviour of each variable are specified. Lastly, any additional tests for the diagnosis of specific faults are determined. Fault mapping is based on domain knowledge and by taking into consideration the operating manuals of the selected systems, provided by the equipment's manufacturer or operator.
Bayesian network
Once the pairing between monitored variables and corresponding faults is completed, the structure of the diagnostic BN is determined. The identified faults are represented in the primary and secondary fault nodes of the network, while the variables required for the monitoring are used in the observable nodes. Any additional tests required for the investigation of a fault are inserted in the test nodes. Lastly, any other nodes concerned with the inner workings of the diagnostic tool can be inserted in the control nodes section. BNs represent a joint probability distribution of a set of random variables and consist of a qualitative part and a quantitative part. The qualitative part is defined by a probabilistic Directed Acyclic Graphical (DAG) model where each variable is depicted as a node and links between them define causal relationships. The quantitative part is defined by the conditional probability distribution in the Conditional Probability Table (CPT) of each node (Ruggeri et al. 2007). BNs are based on Bayes' theorem and calculate the posterior conditional probability distribution of the occurrence of a fault given some observable evidence, as shown in Equation (7) (Cai et al. 2017). In detail, the posterior conditional probability distribution of the occurrence of a fault is calculated using the clustering inference algorithm, selected based on the size of the network and its documented positive characteristics (Yu et al. 2004; Zheng et al. 2019).
Assuming a set (U) of n random variables U = (X_1, X_2, . . ., X_j, . . ., X_n), a BN with n nodes can be constructed, where X_j represents the jth variable. The BN for the n variables can be represented by Equation (8), where pa(X_j) denotes all the parent nodes of X_j: P(X_1, . . ., X_j, . . ., X_n) = ∏_{j=1}^{n} P(X_j | pa(X_j)) (8). For example, the case of a simple network is considered in Figure 2, assuming that each variable has only two states, True (t) and False (f). Consequently, Equation (8) takes the form of Equation (9), by using the chain rule of probabilities and a conditional independence assumption. The conditional independence assumption dictates that a child node (X_j) is statistically dependent only on its parents (pa(X_j)).
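As a worked instance of Equation (8) (the network of Figure 2 is not reproduced here, so the three-node structure below, with X_1 and X_2 as parents of X_3, is an assumed example rather than the paper's figure):

\[
P(X_1, X_2, X_3) \;=\; P(X_1)\,P(X_2)\,P(X_3 \mid X_1, X_2),
\]

so with binary (t/f) states the network is fully specified by two prior probabilities and the four-entry CPT P(X_3 | X_1, X_2), in line with the chain-rule and conditional-independence argument leading to Equation (9).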
Equation (11) is also referred to as the posterior probability; the first part of the product is due to Equation (7), while the second term is due to the joint probability distribution. For this study, two types of evidence were used, namely hard evidence and virtual evidence. The former represents the traditional type of evidence used in BNs, which dictates the value or state of a variable (e.g. X_1 = True). In this paper, hard evidence was used for strict diagnostic tasks, to obtain the probabilities of examined faults based on monitored variables. However, hard evidence can introduce troublesome assumptions, especially when the value of a variable is very close to a state's decision boundary. To counter this issue, and to extend the capabilities of the diagnostic network, virtual evidence was used. Virtual evidence represents evidence with uncertainty and was also used to obtain the fault profiles of the examined faults (Bilmes 2004; Korb and Nicholson 2010; Mrad et al. 2012). For instance, X_1 = 0.7 True is considered virtual evidence and represents that X_1 is almost in its true state.
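To make the distinction concrete, the toy two-node example below (a hypothetical Fault → Observation pair with made-up probabilities, not the paper's CPTs) computes the posterior under hard evidence and under virtual evidence; the virtual-evidence vector acts as a likelihood weighting over the observation states.

import numpy as np

# Toy two-node network: Fault -> Observation (both binary). Numbers are illustrative only.
p_fault = np.array([0.1, 0.9])                 # P(Fault = [abnormal, normal])
p_obs_given_fault = np.array([[0.80, 0.20],    # P(Obs = [failed, ok] | abnormal)
                              [0.05, 0.95]])   # P(Obs = [failed, ok] | normal)

def posterior(virtual_evidence):
    """P(Fault | evidence). virtual_evidence is a likelihood vector over Obs states;
    hard evidence 'Obs = failed' is the special case [1.0, 0.0]."""
    unnorm = p_fault * (p_obs_given_fault @ virtual_evidence)
    return unnorm / unnorm.sum()

print(posterior(np.array([1.0, 0.0])))   # hard evidence: Obs observed as failed
print(posterior(np.array([0.7, 0.3])))   # virtual evidence: Obs 'almost' failed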
Data collection
This section of the paper demonstrates the application of the developed methodology in a case study. The used data originate from a 65,000 DWT dry bulk carrier and were collected from the installed data acquisition system. The system has a 5-minute sampling rate, and recordings during the first three months of 2017 were collected, resulting in 25,627 instances per variable.
For this paper, the ME of the ship and its supporting systems are studied (Figure 3). The ME EGT is selected for monitoring, as it can help uncover developing faults, both within the engine and in the supporting systems. In detail, faults in the air cooler, turbocharger and gas passages of the ME can manifest through changes in the ME EGT (MAN B&W 2017; Zhan et al. 2007;Woodyard 2009). Lastly, Table 1 shows the collected variables together with a description and their diagnostic purpose.
Data preparation and pre-processing
Data preparation begins with form handling and units checking and continues with the application of the DBSCAN algorithm for outlier and transient detection. Lastly, the data are filtered, using a value-based approach, to ensure that they represent only operational periods. During this study, the user-defined hyperparameters ε = 0.23 and minP = 11 are used for the application of the DBSCAN algorithm. The selected values of the ε and minP hyperparameters were reached after iterative attempts and according to Chen and Li (2011), Gaonkar and Sawant (2013), Rahmah and Sitanggang (2016) and Schubert et al. (2017). Finally, the remaining data are filtered according to Equation (12).
Power_Main Engine ≥ 5 kW (12)
The application of the data preparation phase is demonstrated in Figure 4, showing the ME shaft power before (bottom chart) and after (top chart) the identification and filtering of the outliers and transient states. As can be observed, the data preparation phase can identify and remove sudden 'spikes', 'dips' and 'flat-lines' in the data.
EB model specification
From the data, 80% are used for training and validation, while the rest are kept for testing. For the process described in Algorithm 2.3.1, a five-fold cross-validation procedure is used. As a result, a fifth-order multiple PRR model with α = 0.6 is obtained. The ME scavenging air pressure and the ME shaft speed are used as predictor variables to obtain the ME EGT for each cylinder (target variable). The selection of the variables was based on domain knowledge and experts' advice. In detail, the resulting model exhibits good training, validation and testing performance, with R²_training = 0.92, R²_validation = 0.90 and R²_testing = 0.90, respectively. In addition, the test set is used to ensure the multiple PRR model has good generalisation capabilities, as it examines its performance on previously unseen data.
EWMA and verification
The FD phase concludes with the analysis of the residuals in an EWMA control chart. To evaluate the detection capabilities of the FD module, and considering the fault-free nature of the used data, a sensitivity analysis is performed (Saltelli 2004; Law 2009). Since the collected data represent healthy operating conditions, as confirmed by the vessel's operator, the EWMA control chart must not detect any faults. Figure 5 shows the performance of the EWMA-based FD chart, in an example for Cylinder 1. As can be seen, the FD module correctly avoids the detection of any faults, as the envelope of normal operations, defined by the UCL and LCL, is not exceeded.
To further evaluate the FD module, an artificial fault is introduced, and the FD module is tested on its capability to detect it. The use of an artificial fault is necessary due to the fault-free nature of the used data, which is a very common problem in applications from merchant vessels (Cheliotis et al. 2020). The maritime industry can be reluctant to share performance and condition datasets, even more so when they contain faulty data. The artificial fault is introduced in terms of increased residuals, according to domain knowledge and by examining the publications of Hountalas (2000) and Theotokatos and Tzelepis (2015); this verifies that the simulated failure is amongst the predominant failure modes of the examined system. The increased residuals are caused by a gradual increase in the ME scavenging air pressure (predictor variable) until the alarm limit (3.30 bar) for the variable is exceeded. The alarm limit is specified by the ME manufacturer and is obtained from the ME's operating manual (MAN B&W 2017). Figure 6 shows the performance of the FD chart in detecting the artificial fault, in an indicative example for Cylinder 1 of the ME. This figure includes two additional control limits (orange dotted lines), which demonstrate the transition between normal, degraded and failed points. Lastly, it is observed that the artificial fault introduced on 18 January 2017 is successfully detected, as the EWMA of the residuals for Cylinder 1 exceeds the UCL.
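A sketch of how such an artificial fault could be injected for verification purposes is given below; the ramp shape, the start index and the capping at the 3.30 bar alarm limit are illustrative assumptions, and eb_model/ewma_chart refer to the hypothetical helpers sketched earlier rather than to the authors' code.

import numpy as np

def inject_pressure_ramp(scav_air_pressure, start_idx, alarm_limit=3.30):
    """Return a copy of the scavenging-air-pressure signal with a linear ramp
    added from start_idx onwards, capped at the assumed 3.30 bar alarm limit."""
    faulty = np.asarray(scav_air_pressure, dtype=float).copy()
    n_ramp = len(faulty) - start_idx
    rise = max(alarm_limit - faulty[start_idx], 0.0)      # assumed ramp amplitude
    faulty[start_idx:] = np.minimum(
        faulty[start_idx:] + np.linspace(0.0, rise, n_ramp), alarm_limit)
    return faulty

# Hypothetical verification loop, reusing the earlier sketches:
# X_faulty = X.copy(); X_faulty[:, 0] = inject_pressure_ramp(X[:, 0], start_idx=5000)
# residuals = y_measured - eb_model.predict(X_faulty)
# q, ucl, lcl, alarms = ewma_chart(residuals)   # the injected fault should raise alarms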
Diagnostic set-up
3.4.1. Fault mapping
During fault mapping, faults are paired with monitored variables (ME EGT) and with additional diagnostic tests, as shown in Table 2. The monitored variables are selected so that any deviations indicate the development of a specific fault. The behaviour of the variable is assessed in terms of any abnormal increments in all of the cylinders simultaneously, indicating faults in the supporting systems of the ME (i.e. AC and air and gas handling system). Conversely, isolated increments in the ME EGT of only one cylinder indicate faults in the internal components of that cylinder (e.g. exhaust valve). However, to diagnose such a fault, parameters that are not recorded in the used dataset are required. Based on these two criteria, the examined faults are divided into primary and secondary. The primary faults refer to the system in which a fault is developing. The secondary faults refer to the specific components of a system in which the fault is developing. In this paper, two primary and five secondary faults are mapped (Table 2). Moreover, each fault is given a specific diagnostic test, to assist with the identification of the fault. For example, to verify whether fouling on the air-side of the AC is causing the increase in the ME EGT, the value of DPC is examined in relation to SCAV AIR PRESS in the pressure drop test. Figure 7 shows a sample of the diagnostic test charts, in this case for fouling on the air-side of the AC. The DPC is examined in terms of the SCAV AIR PRESS and, if its value is beyond the highlighted envelope, the test is failed. A failed diagnostic test occurring when all the cylinders have increased EGT signifies the presence of the respective fault.
Bayesian network and verification
Once the fault mapping is concluded, the results from Table 2 are used to specify the structure of the BN. For the period of interest, the CPTs of the network are populated by aggregating the results from the FD module. After this, virtual and hard evidence regarding the states of the observable and test nodes are used as input. The probabilities of the primary and secondary faults, together with the profile of each fault, can be obtained using the clustering inference algorithm (Bayes Fusion LLC 2019).
In Figure 8 the resulting layout of the diagnostic BN is shown. The top nodes (i.e. Cyl 1-Cyl 5) represent the state of each cylinder (Failed, Degraded, Normal), based on the residuals' location in the EWMA control chart for that cylinder. The next layer represents the control nodes, which are tasked with assessing whether a simultaneous increase in the ME EGT of all the cylinders takes place. The state of these nodes is binary (True or False). The next two layers represent the primary and secondary fault nodes, each of which has a Normal or Abnormal state. Lastly, the lowest layer represents the test nodes, which help to quantify the probability of the occurrence of a specific fault and have Pass or Fail states.
Indicatively, on 23 January 2017, the EWMA control charts suggest that the ME cylinders are in the 'Failed' state (hard evidence), as presented in Figure 6. By aggregating the results in the EWMA control charts of each cylinder, between the beginning of the recordings and 23 January 2017, the following percentages are used as inputs in the CPT of each cylinder (Figure 8):
- Cylinder 1: 36% Failed, 29% Degraded and 36% Normal
- Cylinder 2: 37% Failed, 44% Degraded and 19% Normal
- Cylinder 3: 29% Failed, 22% Degraded and 49% Normal
- Cylinder 4: 34% Failed, 22% Degraded and 44% Normal
- Cylinder 5: 28% Failed, 14% Degraded and 59% Normal
The combination of the CPTs and the 'Failed' state in the observable nodes increases the probabilities of the primary and secondary faults. Assuming that the AC pressure-drop test fails, as described in Table 2, Figure 9 is obtained. In this case, the air-side of the AC moves to a 100% abnormal state. Also, the level of AC function moves to an 86% abnormal state.
The CPTs of the observable nodes are populated by summarising the amount of data in the different regions of the EWMA control chart of each cylinder. Also, the CPTs of the test and control nodes are populated to depict functional dependencies in the network. Lastly, the CPTs of the primary and secondary nodes are populated by obtaining failure statistics from the Offshore and Onshore Reliability Data (OREDA) data-bank and using logical rules, due to the absence of the relevant parameters from the used data acquisition system.
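A small sketch of how the observable-node CPT entries could be obtained by summarising an EWMA chart is given below; the width of the inner 'degraded' band (the orange limits in Figure 6) is not specified numerically in the text, so the degraded_fraction parameter is an assumption.

import numpy as np

def cylinder_state_shares(q, ucl, lcl, degraded_fraction=0.6):
    """Fraction of points in the Normal / Degraded / Failed regions of an EWMA chart."""
    centre = 0.5 * (ucl + lcl)
    inner_hw = degraded_fraction * 0.5 * (ucl - lcl)   # assumed inner (degraded) band
    failed = (q > ucl) | (q < lcl)
    degraded = ~failed & ((q > centre + inner_hw) | (q < centre - inner_hw))
    normal = ~failed & ~degraded
    return {"Failed": failed.mean(), "Degraded": degraded.mean(), "Normal": normal.mean()}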
The artificial fault described in Section 3.3.1 is used to assess the ability of the BN to find the root cause of developing faults. Increased residuals are obtained for each cylinder, and the results of the EWMA charts are summarised to populate the corresponding nodes. Subsequently, gradual transitions between the states of the observable nodes are applied. Hard evidence is used to demonstrate the BN's ability to calculate the probability of observing each fault; therefore, the observable nodes are set to the Fail state, due to the artificial fault, and the appropriate test nodes are set to the Fail state. The application of the virtual evidence follows the same principle, but its use allows the profile of each fault (i.e. the cause of the artificial fault) to be captured. Figures 10 and 11 show the fault profiles for the primary and secondary faults, respectively, as verified by the data provider. Regarding the primary faults shown in Figure 10, the possible failure modes are examined and summarised in Table 3. Figure 10 shows the fault profiles for the primary faults based on Table 3; in this figure, the lower three lines represent the failure modes of the AC, whereas the remaining lines represent the failure modes of the air and gas handling system. As observed, the air and gas handling system has a faster fault development profile than the AC. Consequently, simultaneous deviations in the ME EGT are more likely to be caused by faults in the air and gas handling system. In particular, the fastest developing failure mode corresponds to the simultaneous failure of all the components of the air and gas handling system. Figure 11 shows the fault profiles for the secondary faults, where it is observed that the fastest developing fault profile belongs to the corroded TC mechanical components, followed by AF fouling. Therefore, corrosion in the mechanical components of the TC is the fault that can develop the fastest in the system, and during the period of interest any faults manifested through the simultaneous increase of the ME EGT are most likely attributed to the corrosion of the mechanical components of the TC.
Conclusions
The focus of this paper is the development of a novel diagnostic framework for practical applications in ship systems, combining BNs and machine learning. In detail, the novelty of this paper lies in the combination of ML-based FD with BNs for practical ship system diagnostics. Another novel part is the use of both hard and virtual evidence for capturing the profiles of different faults. The aim is to improve ship safety and obtain an enhanced understanding of the operational condition of marine engineering systems in a practical manner. For this, DBSCAN and data filtering are used for pre-processing, multiple PRR and EWMA are used for FD, while fault mapping and a BN are used for the diagnostic task and for obtaining the fault profiles of the different faults and failure modes. In detail, the key outcomes of this study are the following:
- The development of a novel diagnostic framework for practical applications in ship systems, enriching the lacking literature.
- The creation of a practical diagnostic network that allows the real-time assessment of operational data to compute accurate probabilities of different faults.
- The use of the fault probabilities to better understand the operational state of the examined system and to improve ship safety.
- The novel integration of machine learning applications for pre-processing and FD with BNs for diagnostic tasks.
- The development of robust machine learning based FD and pre-processing modules.
- The integration of domain knowledge with data from data-banks and results from the machine learning based FD for the population of the CPTs of the BN.
- The ability to obtain the fault profiles for different faults and failure modes, which allows the comparison of different failure modes based on the rate at which they develop.
Future work can include the expansion of the BN for the modelling of other systems (e.g. diesel generator system), keeping in mind that only variables whose behaviour can be predicted through EB modelling can be used. Additionally, another limitation is the use of data banks in the absence of available data from the data acquisition system. From the point of view of practicality, the developed framework can be used to monitor the degradation of systems over time while capturing trends caused by the ship's operating profile. Additionally, the real-time detection of developing faults and the identification of the root cause can help operators evaluate the condition of vessels and better plan and manage the required maintenance. Consequently, FD and diagnostics can trickle down and improve the operational efficiency, and ultimately the profitability of vessels. | 8,391.6 | 2022-01-09T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
On the occupation measure of super-Brownian motion
We derive the asymptotic behavior of the occupation measure of the unit ball, for super-Brownian motion started from the Dirac measure at a distant point x and conditioned to hit the unit ball. In the critical dimension d=4, we obtain a limiting exponential distribution for the ratio of the occupation measure over log(|x|).
Introduction
The results of the present work are motivated by the following simple problem about branching random walk in Z d . Consider a population of branching particles in Z d , such that individuals move independently in discrete time according to a random walk with zero mean and finite second moments, and at each integer time individuals die and give rise independently to a random number of offspring according to a critical offspring distribution. Suppose that the population starts with a single individual sitting at a point x ∈ Z d located far away from the origin, and condition on the event that the population will eventually hit the origin. Then what will be the typical number of individuals that visit the origin, and is there a limiting distribution for this number?
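To build intuition for this question, a naive Monte-Carlo sketch in Python is given below; the geometric offspring law, the nearest-neighbour step and the rejection-based conditioning are illustrative choices and not part of the paper, and the simulation is only practical for moderate starting distances.

import random

def visits_to_origin(x0, d=4, max_gen=10_000):
    """One run of a critical branching random walk started from a single particle at x0:
    nearest-neighbour steps, geometric(1/2) offspring (mean 1). Returns the number of
    particle-visits to the origin before extinction (or before max_gen generations)."""
    population = [tuple(x0)]
    visits = 0
    for _ in range(max_gen):
        if not population:
            break
        nxt = []
        for pos in population:
            axis, step = random.randrange(d), random.choice((-1, 1))
            new = list(pos)
            new[axis] += step
            new = tuple(new)
            if all(c == 0 for c in new):
                visits += 1
            k = 0                               # P(k children) = 2**-(k+1), mean 1 (critical)
            while random.random() < 0.5:
                k += 1
            nxt.extend([new] * k)
        population = nxt
    return visits

# Conditioning on hitting the origin can be approximated by rejection sampling, e.g.
# repeatedly calling visits_to_origin((5, 0, 0, 0)) and keeping runs with visits >= 1.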
In the present work, we address a continuous version of the previous problem, and so we deal with super-Brownian motion in R d . We denote by M F (R d ) the space of all finite measures in R d . We also denote by X = (X t ) t≥0 a d-dimensional super-Brownian motion with branching rate γ, that starts from µ under the probability measure P µ , for every µ ∈ M F (R d ). We refer to Perkins [Per] for a detailed presentation of super-Brownian motion. For every x ∈ R d , we also denote by N x the excursion measure of super-Brownian motion from x. We may and will assume that both P µ and N x are defined on the canonical space C(R + , M F (R d )) of continuous functions from R + into M F (R d ) and that (X t ) t≥0 is the canonical process on this space. Recall from Theorem II.7.3 in Perkins [Per] that X started at the Dirac measure δ x can be constructed from the atoms of a Poisson measure with intensity N x .
The total occupation measure of X is the finite random measure on R^d defined by Z(A) = ∫_0^∞ X_t(A) dt for every Borel subset A of R^d. We denote by R the topological support of Z. In dimension d ≥ 4, points are polar, meaning that N_x(0 ∈ R) = 0 if x ≠ 0, or equivalently P_µ(0 ∈ R) = 0 if 0 does not belong to the closed support of µ. In dimension d ≤ 3, on the other hand, points are not polar (see Theorem 1.3 in [DIP] or Chapter VI in [LG1]). It follows from the results in Sugitani [Sug] that, again in dimension d ≤ 3, the measure Z has a continuous density under P_δx or under N_x, for any x ∈ R^d. We write (ℓ^y, y ∈ R^d) for this continuous density.
For every x ∈ R d and r > 0, B(x, r) denotes the open ball centered at x with radius r. To simplify notation, we write B r = B(0, r) for the ball centered at 0 with radius r. By analogy with the discrete problem mentioned above, we are interested in the conditional distribution of Z(B 1 ) under P δx (· | Z(B 1 ) > 0) when |x| is large. As a simple consequence of (1) and scaling, we have when d ≤ 3, Here and later the notation f (x) ∼ g(x) as |x| → ∞ means that the ratio f (x)/g(x) tends to 1 as |x| → ∞. On the other hand, when d ≥ 4, it is proved in [DIP] that, as |x| → ∞, where κ d > 0 is a constant depending only on d.
For d ≥ 3, the Green function of d-dimensional Brownian motion is G(x, y) = Γ(d/2 − 1) (2π^{d/2})^{−1} |y − x|^{2−d}. If µ is a finite measure on R^d and ϕ is a nonnegative measurable function on R^d, we use the notation ⟨µ, ϕ⟩ = ∫ ϕ dµ. We can now state our main result.
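As a quick arithmetic check of the normalisation (this check is an addition, not part of the original argument), setting d = 4 recovers the constant used later in the critical-dimension section:

\[
G(x,y) \;=\; \frac{\Gamma(d/2-1)}{2\pi^{d/2}}\,|y-x|^{2-d}
\;\;\Longrightarrow\;\;
G(x,y)\big|_{d=4} \;=\; \frac{\Gamma(1)}{2\pi^{2}}\,|y-x|^{-2} \;=\; \frac{1}{2\pi^{2}}\,|y-x|^{-2}.
\]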
Theorem 1. Let ϕ be a bounded nonnegative measurable function supported on B 1 , and set ϕ = ϕ(y)dy.
(iii) If d ≥ 5, the law of ⟨Z, ϕ⟩ under P_δx(· | Z(B_1) > 0) converges as |x| → ∞ to the probability measure µ_ϕ on R_+ with moments m_{p,ϕ} = ∫ r^p µ_ϕ(dr) given by and for every p ≥ 2, The scaling invariance properties of super-Brownian motion allow us to restate Theorem 1 in terms of super-Brownian motion started with a fixed initial value and the occupation measure of a small ball with radius ε tending to 0. Part (i) of Theorem 1 then becomes a straightforward consequence of the fact that the measure Z has a continuous density in dimension d ≤ 3; see Lee [Lee] and Merle [Mer] for more precise results along these lines. On the other hand, the proof of part (iii) is relatively easy from the method of moments and known recursive formulas for the moments of the random measure Z under N_x. For the sake of completeness, we include proofs of the three cases in Theorem 1, but the most interesting part is really the critical dimension d = 4, where it is remarkable that an explicit limiting distribution can be obtained.
Notice that dimension 4 is critical with respect to the polarity of points for super-Brownian motion. Part (ii) of the theorem should therefore be compared with classical limit theorems for additive functionals of planar Brownian motion (note that d = 2 is the critical dimension for polarity of points for ordinary Brownian motion). The celebrated Kallianpur-Robbins law states that the time spent by planar Brownian motion in a bounded set before time t behaves as t → ∞ like log t times an exponential variable (see e.g. section 7.17 in Itô and McKean [IM]). The Kallianpur-Robbins law can be derived by "conceptual proofs" which explain the occurrence of the exponential distribution. Our initial approach to part (ii) was based on a similar conceptual argument relying on the Brownian snake approach to super-Brownian motion. Since it seems delicate to make this argument completely rigorous, we rely below on a careful analysis of the moments of ⟨Z, ϕ⟩.
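For orientation, the Kallianpur-Robbins law can be written schematically as follows (the normalising constant 1/(2π) corresponds to standard planar Brownian motion and should be treated as indicative, since conventions vary):

\[
\frac{1}{\log t}\int_0^t \mathbf{1}_A(B_s)\,ds \;\xrightarrow[t\to\infty]{\ (d)\ }\; \frac{|A|}{2\pi}\,\mathcal{E},
\]

where B is a planar Brownian motion, |A| is the Lebesgue measure of the bounded set A, and 𝓔 is an exponential variable with mean 1.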
Let us finally comment on the branching random walk problem discussed at the beginning of this introduction. Although we do not consider this problem here, it is very likely that a result analogous to Theorem 1 holds in this discrete setting, just replacing ⟨Z, ϕ⟩ with the number of particles that hit the origin. In particular, the limiting distributions obtained in (i) and (ii) of Theorem 1 should also appear in the discrete setting.
Preliminary remarks
Let us briefly recall some basic facts about super-Brownian motion and its excursion measures. As recalled in the Introduction, X started from the Dirac measure δ_x can be constructed from the atoms of a Poisson measure with intensity N_x, and then has law P_δx (see Theorem II.7.3 in [Per]). We can use this Poisson decomposition to observe that it is enough to prove Theorem 1 with the conditional measure P_δx(· | Z(B_1) > 0) replaced by N_x(· | Z(B_1) > 0). Here, M denotes the number of Poisson atoms whose range hits B_1, and {M ≥ 1} is the event that the range of Y hits B_1. Furthermore, the preceding Poisson decomposition shows that the law of ⟨Z, ϕ⟩ under P_δx coincides with the law of Z_1 + · · · + Z_M, where, conditionally given M, the variables Z_1, Z_2, . . . are independent and distributed according to the law of ⟨Z, ϕ⟩ under N_x(· | Z(B_1) > 0). Since P(M = 1 | M ≥ 1) tends to 1 as |x| → ∞ (by the estimates (2) and (3)), we see that the law of ⟨Z, ϕ⟩ (or the law of f(x)⟨Z, ϕ⟩ for any deterministic function f) under P_δx(· | Z(B_1) > 0) will be arbitrarily close to the law of the same variable under N_x(· | Z(B_1) > 0) when |x| is large, which is what we wanted. Note that this argument is valid in any dimension.
Let us also discuss the dependence of our results on the branching rate γ. If (Y_t)_{t≥0} is a super-Brownian motion with branching rate γ started at µ, and λ > 0, then (λY_t)_{t≥0} is a super-Brownian motion with branching rate λγ started at λµ. A similar property then holds for excursion measures. Write N_x^{(γ)} instead of N_x to emphasize the dependence on γ. Then the "law" of (λX_t)_{t≥0} under N_x^{(γ)} can be expressed in terms of N_x^{(λγ)}. Thanks to these observations, it will be enough to prove Theorem 1 for one particular value of γ.
In what follows, we take γ = 2, as this will simplify certain formulas. For any nonnegative measurable function ϕ on R^d, the moments of ⟨Z, ϕ⟩ are determined by induction by formula (4) and, for every p ≥ 2, by formula (5). See e.g. formula (16.2.3) in [LG2], and note that the extra factor 2 there is due to the fact that the Brownian snake approach gives γ = 4.
However, as |x| → ∞, On the other hand, since |x|^d ⟨Z, ϕ_{|x|^{-1}}⟩ = |x|^d ∫ dy ℓ^y ϕ_{|x|^{-1}}(y) = ∫ dy ℓ^{y/|x|} ϕ(y), the continuity of the local times ℓ^y implies that, for every δ > 0, as |x| → ∞. By rotational invariance, the law of ℓ^0 under N_{x/|x|} coincides with the law of the same variable under N_{x_0}. Part (i) of Theorem 1 now follows from the preceding observations.
High dimensions
We now turn to part (iii) of Theorem 1 and so we suppose that d ≥ 5. As noticed earlier, we may replace P δx (· | Z(B 1 ) > 0) by N x (· | Z(B 1 ) > 0).
Without loss of generality, we assume in this part that ϕ ≤ 1.
Lemma 1. There exists a finite constant K_d depending only on d, such that, for every x ∈ R^d and p ≥ 1, Obviously, it is enough to consider the case when ϕ = 1_{B_1}. From (4), one immediately verifies that for some constant C_{1,d} depending only on d. Straightforward estimates give the existence of a constant a_d such that, for every x ∈ R^d, We then claim that for every integer p ≥ 1, where the constants C_{p,d}, p ≥ 2, are determined by induction by Indeed, let k ≥ 2 and suppose that (6) holds for every p ∈ {1, . . . , k − 1}. From (5), we get and our choice of a_d shows that (6) also holds for p = k. We have thus proved our claim (6) for every p ≥ 1. From (7) it is an elementary exercise to verify that C_{p,d} ≤ K_d^p for some constant K_d depending only on d. This completes the proof.
Let us now prove that for every p ≥ 1, N_x(⟨Z, ϕ⟩^p | Z(B_1) > 0) converges as |x| → ∞ to m_{p,ϕ}. If p = 1, this is an immediate consequence of (3) and (4). If p ≥ 2, we write and we can use the bounds of Lemma 1, and the property ∫ (|z|^{2−d} ∧ 1)^2 dz < ∞, in order to get The convergence of N_x(⟨Z, ϕ⟩^p | Z(B_1) > 0) towards m_{p,ϕ} now follows from (3). Finally, Lemma 1 and (3) also imply that any limit distribution of the laws of ⟨Z, ϕ⟩ under N_x(· | Z(B_1) > 0) is characterized by its moments. Part (iii) of Theorem 1 now follows as a standard application of the method of moments.
The critical dimension
In this section, we consider the critical dimension d = 4. Recall that in that case G(x, y) = (2π 2 ) −1 |y − x| −2 . As in the previous sections, we take γ = 2. We start by stating two lemmas.
Let us explain how part (ii) of Theorem 1 follows from these two lemmas. Notice that the estimate of Lemma 2 also holds uniformly when x varies over a compact subset of R^4 \ {0}, by scaling and rotational invariance. Combining the results of the lemmas gives uniformly when x varies over a compact subset of R^4 \ {0}. By scaling, for any x ∈ R^4 with |x| > 1, the law of ⟨Z, ϕ⟩ under N_x(· | Z(B_1) > 0) coincides with the law of |x|^4 ⟨Z, ϕ_{1/|x|}⟩ under N_{x/|x|}(· | Z(B_{1/|x|}) > 0). Hence, we deduce from the preceding display that we also have The statement in part (ii) of Theorem 1 now follows from an application of the method of moments. It remains to prove Lemma 2 and Lemma 3.
Claim. For every integer p ≥ 1, for every f ∈ F, for every β > 0, there exists ε_0 > 0 such that, for every ε ∈ (0, ε_0), We prove the claim by induction on p. Let us first consider the case p = 1. We fix f ∈ F. Using (4), for ε > 0 and y ∈ R^4 such that |y| > f(ε), we have Since lim_{ε→0} (f(ε)/ε) = ∞, we see that We thus deduce from (9) that for ε small enough, which gives our claim for p = 1. Let p ≥ 2 and suppose that the claim holds up to order p − 1. Fix f ∈ F and β ∈ (0, 1). Let β′ ∈ (0, 1) be such that (1 − β′)^4 = 1 − β, and let C > 0 be such that (1 + C^{−1})^{−2} = 1 − β′. Introduce the function f̃ defined by Clearly, f̃ ∈ F. Furthermore, we have Using (5), we obtain, for any y ∉ Using the induction hypothesis, we get, provided ε is small enough, From the definition of C, for any z ∈ B_{|y|/C}, we have G(y, z) ≥ (1 − β′)G(0, y). It follows that Moreover, using the property f ∈ F and (11), we see that, if ε is sufficiently small, for any y ∉ From the preceding bounds, we get that, if ε is sufficiently small, which is our claim at order p.
Upper bound.
Without loss of generality we assume that ϕ ≤ 1. We need to get upper bounds on N_y[⟨Z, ϕ_ε⟩^p] for y belonging to different subsets of R^4. We will prove that, for every p ≥ 1, for every f ∈ F and every β ∈ (0, 1), the following bounds hold for ε > 0 sufficiently small: Only (H^3_p) is needed in our proof of Lemma 3. However, we will proceed by induction on p to get (H^3_p), and we will use (H^1_p) and (H^2_p) in our induction argument. The bounds (H^1_p) and (H^2_p) are not sharp, but they will be sufficient for our purposes. Notice that (ϕ + β)/(2π^2) < 1/3 because ϕ ≤ 1 and β < 1. | 3,679.2 | 2006-05-17T00:00:00.000 | [
"Mathematics",
"Physics"
] |